
dann's People

Contributors

fungtion


dann's Issues

Loss of domain classification part

Hi, I am trying to understand the effect of the GRL. During training, should I expect the domain classification loss to decrease to some extent, or should it stay around the level of random guessing (so that domain generalization becomes possible)?

Thanks!

Question about updating the domain classifier parameters

Hi @fungtion ,

I am trying to implement a similar architecture for speech processing and I am using your code for reference.

According to my understanding, DANN must minimize the class-label loss and maximize the domain loss with respect to the features. It must also optimize just the parameters of the domain classifier, to make it a good "adversary", by minimizing the domain loss without affecting the feature-extractor parameters.

How do you achieve the optimization of just the domain classifier parameters without propagating the gradients back to the feature extraction layer? Could you please point me towards the code that does it?
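For reference, the repository achieves this with a gradient reversal layer rather than by freezing parameters: the domain loss is back-propagated through a custom autograd Function that passes features through unchanged in the forward pass and negates (and scales) the gradient in the backward pass. A minimal sketch along the lines of the repo's ReverseLayerF (treat the exact details as an approximation of the actual file):

```python
import torch
from torch.autograd import Function

class ReverseLayerF(Function):
    """Identity in the forward pass; multiplies the gradient by -alpha in
    the backward pass, so everything *before* this layer is trained to
    maximize whatever loss is computed *after* it."""

    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient w.r.t. x is negated and scaled; alpha gets no gradient.
        return grad_output.neg() * ctx.alpha, None
```

The domain classifier's own weights sit after this layer in the graph, so they receive unmodified gradients and are still trained to minimize the domain loss; only the feature extractor, upstream of the layer, sees the reversed gradient.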

Thanks a lot for providing your helpful code.

Brij

Data access

What's the verification code of the Baidu Cloud?

alpha

Hello, I would like to ask: is the alpha parameter in the code the lambda of the gradient reversal layer shown in the DANN figure?
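For context, the DANN paper ramps its reversal coefficient λ from 0 to 1 over training rather than holding it fixed, and implementations often call the same quantity alpha. A sketch of the paper's schedule (whether this repo's alpha follows exactly this form is an assumption on my part):

```python
import math

def grl_coefficient(p: float) -> float:
    """Gradient-reversal coefficient schedule from the DANN paper
    (lambda there, often alpha in code): 2 / (1 + exp(-10 p)) - 1,
    ramping smoothly from 0 toward 1 as training progress p goes 0 -> 1."""
    return 2.0 / (1.0 + math.exp(-10.0 * p)) - 1.0
```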

Number of classes of domain classifier

Hi all, I have a question: why does the output layer of the domain classifier have 2 output features (this line)? Since the domain labels in the paper are either 0 or 1, I think it could be a binary classifier, as below:

self.domain_classifier.add_module('d_fc2', nn.Linear(100, 1))
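For what it's worth, the two formulations are interchangeable for a binary problem: a 2-logit head trained with cross-entropy and a 1-logit head trained with binary cross-entropy express the same decision. A small sketch contrasting them (shapes and variable names here are illustrative, not the repo's exact modules):

```python
import torch
import torch.nn as nn

feats = torch.randn(8, 100)          # batch of 100-dim features
domain = torch.randint(0, 2, (8,))   # domain labels in {0, 1}

# Two-logit head + cross-entropy (what the 2-output d_fc2 amounts to):
two_logit = nn.Linear(100, 2)
loss_ce = nn.CrossEntropyLoss()(two_logit(feats), domain)

# Single-logit head + binary cross-entropy, as proposed above:
one_logit = nn.Linear(100, 1)
loss_bce = nn.BCEWithLogitsLoss()(one_logit(feats).squeeze(1), domain.float())
```

Either loss trains the same decision boundary; the 2-logit version just parameterizes it with one redundant degree of freedom.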

Cooperation Proposal: fungtion/DANN & PaddlePaddle

Dear fungtion:

Hello, I am Zhang Chunyu from the team behind the open-source deep learning platform PaddlePaddle. Please forgive the unsolicited contact, and thank you for taking the time to read our message. Built on Baidu's many years of deep learning research and business applications, PaddlePaddle is **'s first open and open-source industrial-grade deep learning platform (open-sourced since 2016), comprising a core framework, foundational model libraries, end-to-end development kits, and tool components.
We have noticed that, as an outstanding leader in the open-source community, you have continually given back to it, creating works much loved by developers such as fungtion/DANN. The PaddlePaddle community likewise upholds the philosophy of open source and hopes to embrace more excellent developers and works. If you are interested, we sincerely invite you to work with us to add PaddlePaddle framework support to your project, so that more developers in the PaddlePaddle community can learn from and use it. We will gladly provide the necessary technical support (dedicated service) and migration support (to reduce migration cost).

We very much look forward to cooperating with you, and sincerely invite you to attend the "WAVE SUMMIT+ 2020" developer festival (December 20, 2020, in Beijing), jointly hosted by the National Engineering Laboratory for Deep Learning Technology and Applications and Baidu, to share your open-source work with the developer community. At WAVE SUMMIT+ 2020, AI experts will share the latest technical developments and industrial deployment experience of the deep learning era. PaddlePaddle will also present a number of upgrades to better bring deep learning technology to industry, advance industrial intelligence together with ecosystem partners, and grow the open-source technology community together with developers.

We also look forward to jointly running technical sharing sessions so that more developers can learn about your work. PaddlePaddle currently has a fairly complete curriculum, and we would be delighted to include your excellent work in it.

In addition, we sincerely invite you to join PPDE (PaddlePaddle Developers Experts) to spread the philosophy of open source to more developers and promote the development of the ** open-source community! We have prepared a series of benefits for PPDEs, including personal-brand building, global study tours, and exchanges at top conferences, as well as opportunities to enter the Baidu incubator and to join top-level projects, such as the Star Program backed by a 10-million fund, 10 million in compute, and 10 billion in traffic support.

We look forward to cooperating with you!
If you are interested, please feel free to contact us at any time.
Phone: +86 13311535619
WeChat: same as the phone number
Email: [email protected]

PaddlePaddle official website: https://www.paddlepaddle.org.cn
PaddlePaddle open-source repository: https://github.com/PaddlePaddle/Paddle

Question about ReverseLayerF

Greetings! Could you give a quick explanation of why ReverseLayerF() makes the gradients of the domain classifier's parameters negative? I'm confused, since I thought it would instead affect the parameters before the domain classifier (in the feature layer). Could you please correct me? Thanks in advance!
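One way to see it: the reversal layer only changes gradients that flow *through* it, i.e. back toward the feature extractor; parameters after it (the domain classifier) receive ordinary gradients. A tiny numerical demonstration (a self-contained sketch, not the repo's code):

```python
import torch
from torch.autograd import Function

class ReverseLayerF(Function):
    """Identity forward; gradient scaled by -alpha on the way back."""
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.alpha, None

x = torch.tensor([1.0, 2.0], requires_grad=True)  # stands in for features
w = torch.tensor([3.0, 4.0], requires_grad=True)  # stands in for a domain-classifier weight

y = ReverseLayerF.apply(x, 1.0)   # reversal happens between x and w
loss = (w * y).sum()
loss.backward()

# w, which sits after the layer, gets the ordinary gradient d(loss)/dw = y;
# x, which sits before it, gets the negated gradient -d(loss)/dy = -w.
print(w.grad)  # tensor([1., 2.])
print(x.grad)  # tensor([-3., -4.])
```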

loss is nan

When I run main.py, the output shows that three losses, err_s_label, err_s_domain, and err_t_domain, become nan after several iterations. Is anything wrong?
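NaN losses after a few iterations are often an exploding-gradient or too-high-learning-rate problem. One common mitigation, offered here as a general suggestion rather than something this repo does, is to clip gradients before each optimizer step (lowering the learning rate is the other usual first step):

```python
import torch

# Simulate a parameter whose gradient has exploded (the typical trigger
# for nan losses a few iterations later), then clip before stepping.
param = torch.nn.Parameter(torch.randn(10))
param.grad = torch.randn(10) * 1e6

# Rescale the gradient so its norm is at most 1.0.
torch.nn.utils.clip_grad_norm_([param], max_norm=1.0)
```

In a real training loop this call goes between loss.backward() and optimizer.step().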

RuntimeError: output with shape [1, 28, 28] doesn't match the broadcast shape [3, 28, 28]

When I run your code, I hit an error when executing data_source = data_source_iter.next():

RuntimeError: output with shape [1, 28, 28] doesn't match the broadcast shape [3, 28, 28].

I have no idea why this error occurs. Could you please give some suggestions?
Besides, I'm using Python 3.6 and PyTorch 1.0 on Ubuntu 16.04. Thanks.

Accuracy error unstable

Hi,
I ran the code and found results
epoch: 98, [iter: 461 / all 461], err_s_label: 0.039522, err_s_domain: 0.636988, err_t_domain: 0.617111
epoch: 98, accuracy of the mnist dataset: 0.989100
epoch: 98, accuracy of the mnist_m dataset: 0.893790
This is even higher than the paper, where the authors report ~0.76 on mnist_m. What could be the issue? It would be great if you could share the accuracy you obtained with this code.
Could it be a version issue? It would also help to know the package versions (Python, PyTorch, etc.), or perhaps the environment.yml file itself.

ImportError: No module named dataset.data_loader

Traceback (most recent call last):
File "main.py", line 8, in
from dataset.data_loader import GetLoader
ImportError: No module named dataset.data_loader

I'm sorry, I ran into this problem and don't know how to solve it.
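This error usually means Python cannot see the repository root on its import path, e.g. when main.py is run from a subdirectory. A small helper sketch, assuming dataset/ sits at the repo root one level above the running script (the layout is my assumption; alternatively, just run the script from the repo root):

```python
import os
import sys

def add_repo_root_to_path(script_path: str) -> str:
    """Prepend the directory one level above the given script to sys.path,
    so that top-level packages like `dataset` become importable."""
    root = os.path.dirname(os.path.dirname(os.path.abspath(script_path)))
    if root not in sys.path:
        sys.path.insert(0, root)
    return root

# In main.py this would be called as: add_repo_root_to_path(__file__)
```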

Why is the achieved classification accuracy on MNIST-M much higher than that of the paper?

Hi, Shicheng. I downloaded your code yesterday and found that a DANN trained on labeled MNIST and unlabeled MNIST-M achieved the accuracy below on the test sets:

epoch: 99, accuracy of the mnist dataset: 0.987900
epoch: 99, accuracy of the mnist_m dataset: 0.907121

As far as we know, the original paper reports 0.7666 accuracy when transferring MNIST to MNIST-M, while here I got 0.907121. Do you know the reason behind this? I checked that the network architecture is almost the same as in the paper.

Thanks for your help in advance.

err_t_domain and err_s_domain

Hi. I used your network design and losses to train on my own dataset in this experiment. During training, I logged the sum of err_t_domain and err_s_domain; the total decreases a little at first and then remains unchanged. Thanks.

Does domain adaptation really help training?

I am confused about using DANN for speech emotion style transfer, and I have two questions:
1. For the domain classifier, what level of accuracy should we expect? Should the domain-classifier loss decline as usual?
2. The question in the title.
