yihuacheng / puregaze
Code of PureGaze: Purifying gaze feature for generalizable gaze estimation, AAAI 2022.
Hi, thank you for sharing this excellent work. I am curious how the attention map is generated. I found that you used the same attention map throughout training. Which image do you choose to generate the attention map? Could you share the relevant code? Thanks.
Yong
Thank you for your research.
However, there is a problem with your code.
When I ran data_processing_core.py, I got the error "ModuleNotFoundError: No module named 'im_plot'".
How can I fix it?
Thank you
Hi, Yihua!
Thanks for your impressive work!
I have a question about the generation of the mask in the LP-Loss.
Specifically, before training PureGaze on my own data, I tried to generate the attention map from the eye centers with sigma^2 = 20, but the resulting heat point is very small, quite different from what you released in the Mask folder.
Could you explain the details, or release the key code here so I can study it?
Looking forward to your reply, thanks!
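For reference, a 2D Gaussian mask rendered from eye centers can be sketched as below. This is a minimal sketch, not the authors' released code; the function name, the 224x224 crop size, and the example eye-center coordinates are assumptions. It does illustrate the question above: with a variance of sigma^2 = 20 (standard deviation of about 4.5 px), each peak is only a few pixels wide.

```python
import numpy as np

def gaussian_heatmap(h, w, centers, sigma2=20.0):
    """Render a heatmap with one 2D Gaussian per (x, y) center.

    sigma2 is the variance (sigma^2); each Gaussian peaks at 1 on its
    center, and overlapping Gaussians are merged with a pixelwise max.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros((h, w), dtype=np.float32)
    for cx, cy in centers:
        d2 = (xs - cx) ** 2 + (ys - cy) ** 2
        heat = np.maximum(heat, np.exp(-d2 / (2.0 * sigma2)))
    return heat

# Hypothetical eye centers on a 224x224 face crop; with sigma^2 = 20 the
# blobs are tiny, while a much larger variance spreads the mask out.
mask = gaussian_heatmap(224, 224, [(80, 100), (144, 100)], sigma2=20.0)
```

A much larger variance (or an extra normalization/blur step) would be needed to reproduce the broad masks in the released Mask folder, which may be the source of the discrepancy described above.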
I did not change the code or any config settings other than the dataset and mask paths; I only replaced the three files in the resnet-18 folder with the original resnet-50 files. In my experiments I carefully checked the ETH-Gaze pitch/yaw flipping issue (i.e., lines 97-98 of tester/total.py, whether gtools.gazeto3d is used). However, I could not reproduce the results in the paper. With ETH-Gaze as the source domain, my results on the target domains are Gaze360: 26.17, MPIIGaze: 11.42, EyeDiap: 23.35. Could you advise which part might be causing my reproduction to fail?
@yihuacheng Hello, I currently have only an RGB camera, which I can use to capture face images. How can I label the gaze vector (pitch, yaw) for the images I capture, so that I can train the model on my own dataset? Is there a method for labeling the data, or a paper I can reference?
Alternatively, are there methods to obtain the gaze vector (pitch, yaw) from an RGB image alone, whether by deep learning or non-deep-learning approaches? Are there papers to reference? Thank you!
Using the code and pre-trained model you provided to train on ETH, I never get results similar to those in the paper (the error is large, avg = 23). What was your configuration during training (e.g., batch size and number of epochs)? Thanks a lot!
Hello, why is the attention map fixed? This does not match the description in the paper.
Thank you for your research.
However, there is a problem with your code.
When I ran total.py, I got the error "ModuleNotFoundError: No module named 'gtools'".
Is gtools.py referenced anywhere? Where can I find it?
Thank you
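Until gtools.py is uploaded, a minimal stand-in for the helpers that tester/total.py appears to need can be sketched as below. gazeto3d is named in the traceback above; the angular helper and the exact sign/axis convention are assumptions (conventions differ across datasets, which is exactly the pitch/yaw flipping issue raised in another thread), so verify against your labels before trusting the numbers.

```python
import numpy as np

def gazeto3d(gaze):
    """Map a (pitch, yaw) pair in radians to a 3D unit gaze vector.

    NOTE: the sign/axis convention here is an assumption; datasets such as
    ETH-Gaze, MPIIGaze, and Gaze360 do not all agree on it.
    """
    pitch, yaw = gaze[0], gaze[1]
    return np.array([
        -np.cos(pitch) * np.sin(yaw),
        -np.sin(pitch),
        -np.cos(pitch) * np.cos(yaw),
    ])

def angular(gaze, label):
    """Angular error in degrees between two 3D gaze vectors."""
    cos = np.dot(gaze, label) / (np.linalg.norm(gaze) * np.linalg.norm(label))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

With this convention, (pitch, yaw) = (0, 0) maps to the unit vector (0, 0, -1), i.e. looking straight into the camera, and angular() of a vector with itself is 0 degrees.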