zhengpeng7 / GCoNet_plus
[TPAMI'23] GCoNet+: A Stronger Group Collaborative Co-Salient Object Detector.
Home Page: https://huggingface.co/spaces/ZhengPeng7/GCoNet_plus_demo
License: MIT License
Why does an epoch stop early and go straight to the next epoch?
I didn't notice this behavior when I trained before. It first appeared after I switched to multi-GPU training, so I assumed my multi-GPU changes were the problem, but I re-downloaded the code and trained on a single GPU and the problem persists:
each epoch ends early and a new epoch begins.
Is this correct, or has it always behaved this way?
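One common reason an epoch appears to "end early" is that the iterations per epoch equal len(dataset) // batch_size when the DataLoader drops the last incomplete batch, and multi-GPU training further splits the dataset across processes, so each process runs proportionally fewer steps. As a hedged illustration (the sample and batch counts below are made up, not GCoNet+'s defaults), the expected step count can be checked like this:

```python
def steps_per_epoch(num_samples, batch_size, num_gpus=1, drop_last=True):
    """Expected iterations per epoch for one training process.

    With DistributedDataParallel, each of num_gpus processes sees
    roughly num_samples / num_gpus samples, so each per-process
    progress bar shows proportionally fewer steps.
    """
    per_process = num_samples // num_gpus
    if drop_last:
        return per_process // batch_size        # last partial batch skipped
    return -(-per_process // batch_size)        # ceiling division

print(steps_per_epoch(8250, 16))               # single GPU
print(steps_per_epoch(8250, 16, num_gpus=2))   # half as many steps per GPU
```

If the step count you observe matches this formula, the epoch is not actually stopping early; the progress bar is simply sized for a different (e.g. non-distributed, non-drop_last) count.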
I'd like to know what the RefUnet(nn.Module) module does. Is there a corresponding part in the paper?
Many thanks if you can answer!
Best regards~
Hello! Could you provide the code for class activation map (CAM) visualization?
Hello, thanks for the nice project; you have done solid work.
I have a question about the inference time measurement.
I also wrote a script to check the inference time, but my result is 10 ms per image (100 FPS).
This is inference time only; data loading and image resizing are not included.
I am wondering how to reach 250 FPS.
My hardware is a 2080 Ti GPU and an i7 CPU.
Thanks in advance!
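A common pitfall when benchmarking GPU inference is timing without warm-up iterations, or reading the clock before the device has finished, since CUDA kernels launch asynchronously. As a hedged sketch (the timed function and iteration counts are stand-ins, not the repo's actual benchmark), the measurement loop can look like this:

```python
import time

def benchmark(fn, warmup=10, iters=100):
    """Time fn() with warm-up runs; returns mean seconds per call.

    For a CUDA model, call torch.cuda.synchronize() right before each
    time.perf_counter() read, otherwise only kernel-launch time is
    measured, not the actual compute.
    """
    for _ in range(warmup):   # warm-up: first calls include one-time setup costs
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# toy CPU stand-in for a model forward pass
mean_s = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"{mean_s * 1e3:.3f} ms per call, {1.0 / mean_s:.1f} calls/s")
```

Another frequent source of discrepancy is batching: if a paper reports FPS per image with a batch of N, the per-image time is the batch time divided by N, which alone can explain a 100-vs-250 FPS gap.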
I want to run simple inference on some images I have, but the results are bad.
from pathlib import Path

import torch
from torchvision import transforms
from torchvision.io import read_image
from torchvision.utils import make_grid, save_image

from models.GCoNet import GCoNet

dataset_path = Path("/path/to/data")
device = torch.device("cuda")

model = GCoNet().to(device)
gconet_dict = torch.load("/path/to/weights", map_location=device)
model.load_state_dict(gconet_dict)
model.eval()

# read_image returns uint8 tensors in [0, 255]; Normalize expects floats
# in [0, 1], so divide by 255 first (ToTensor would normally do this).
trans = transforms.Compose([
    transforms.Resize((320, 320)),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

images = [read_image(str(img_path))[None, ...] for img_path in dataset_path.glob("11/*")]
torch_images = trans(torch.cat(images).float() / 255.0)
print(torch_images.shape)

with torch.no_grad():
    result = model(torch_images.to(device))[-1]
result = result.sigmoid()
# binarize; keep values in [0, 1] since save_image expects that range
result = (result > result.mean()).float()
print(result.shape)
print(result.max(), result.min())
save_image(make_grid(result), "salient_example.png")
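If the /255 scaling is omitted (easy to miss when read_image is used in place of ToTensor), Normalize receives values roughly 255x too large, pushing every input far outside the distribution the network was trained on; this is a frequent cause of all-black or noisy saliency maps. A toy calculation (ImageNet mean/std for the red channel, a hypothetical pixel value) shows the scale of the error:

```python
mean, std = 0.485, 0.229   # ImageNet statistics for the red channel

pixel = 128                          # raw uint8 value from read_image
bad = (pixel - mean) / std           # forgot to scale to [0, 1]
good = (pixel / 255.0 - mean) / std  # correct preprocessing

print(f"without /255: {bad:.1f}")   # hundreds of standard deviations off
print(f"with /255:    {good:.3f}")  # within the expected ~[-2, 2] range
```

A network fed inputs hundreds of standard deviations from its training distribution will produce essentially arbitrary activations, so the saved masks look broken even when the weights and model code are correct.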
Hello, I evaluated with the weights (ultimate_duts_cocoseg (The best one).pth) downloaded from your Google Drive, but the results fall slightly short of those in the paper. Also, this file appears to be identical to (ultimate_duts_cocoseg.pth); was it uploaded by mistake?