dougsouza / face-frontalization
This is a port of the Face Frontalization code provided by Hassner et al. at http://www.openu.ac.il/home/hassner/projects/frontalize
Hello,
Thank you very much for this implementation!
While reviewing the code, I noticed an error in the landmark coordinates. I plotted the landmarks stored in the 3D model:
model3D.model_TD
Here is what I got:
For the frontal view, the landmarks look correct.
However, when the model is rotated slightly toward a bottom view, there are some obvious defects in the landmarks on the right jaw of the model.
I think these defects would lead to a drop in performance, since they result in a wrongly estimated 3D-to-2D projection matrix.
Could you please correct this error? Thank you in advance!
When I ran the code, it gave me an error because the archive seemed corrupt. Thanks to this SO answer: https://stackoverflow.com/questions/45179033/unable-to-extract-shape-predictor-68-face-landmarks-dat-for-bz/45179235#45179235 I manually downloaded the archive from http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2 and it worked. I don't know if this is specific to my setup (Python 3 through Anaconda, Windows 10) or if the URL has changed.
Hi
Can you please give a direct link to download the MATLAB source?
face-frontalization/camera_calibration.py
Line 118 in 6cf2f09
Hi, thank you very much for your code. I think this function is not implemented correctly.
First of all, when we check whether a point lies inside a frustum, we should test all 6 faces rather than 3.
So
for p in range(0, 3):
should be changed to
for p in range(0, 6):
Second, there is a typo in the code:
if(frustum[p, 0] * x + frustum[p, 1] * y + frustum[p, 2] + z + frustum[p, 3] <= 0):
the frustum[p, 2] + z
should be changed to frustum[p, 2] * z
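Putting both fixes together, the corrected routine might look like this sketch (assumptions: the frustum is stored as a 6x4 array of plane coefficients (a, b, c, d), and a point counts as inside when a*x + b*y + c*z + d > 0 for every plane):

```python
import numpy as np

def point_in_frustum(frustum, x, y, z):
    """Corrected check: iterate over all 6 planes and multiply (not add) z.

    Assumes each row of `frustum` holds plane coefficients (a, b, c, d),
    with a point inside when a*x + b*y + c*z + d > 0 for every plane.
    """
    for p in range(0, 6):
        if frustum[p, 0] * x + frustum[p, 1] * y + frustum[p, 2] * z + frustum[p, 3] <= 0:
            return False
    return True

# Axis-aligned unit cube as an illustrative "frustum": |x|, |y|, |z| < 1
cube = np.array([
    [ 1, 0, 0, 1], [-1, 0, 0, 1],   # x > -1, x < 1
    [ 0, 1, 0, 1], [ 0, -1, 0, 1],  # y > -1, y < 1
    [ 0, 0, 1, 1], [ 0, 0, -1, 1],  # z > -1, z < 1
], dtype=float)
```

With only 3 planes tested, a point behind the far plane (e.g. (0, 0, 2) for the cube above) would wrongly pass the check.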
How can I generate my own "model3Ddlib.mat" if I want to use my own 3D template model?
Is there a tool or procedure I can use?
Thank you!
When I try to run demo.py, I get this error:
File "demo.py", line 19, in demo
check.check_dlib_landmark_weights()
File "/data/yun/speech2video/face-frontalization-master/check_resources.py", line 45, in check_dlib_landmark_weights
download_file(dlib_facial_landmark_model_url, dlib_models_folder)
File "/data/yun/speech2video/face-frontalization-master/check_resources.py", line 8, in download_file
u = urllib2.urlopen(url)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 404, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 422, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1214, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib/python2.7/urllib2.py", line 1184, in do_open
raise URLError(err)
urllib2.URLError: <urlopen error [Errno -2] Name or service not known>
I believe the error comes from the line dlib_facial_landmark_model_url = "http://ufpr.dl.sourceforge.net/project/dclib/dlib/v18.10/shape_predictor_68_face_landmarks.dat.bz2" in check_resources.py.
I then tried to download "shape_predictor_68_face_landmarks.dat.bz2" from the internet myself, but I can't open that site either.
Can you tell me how to solve this, or can you provide the file "shape_predictor_68_face_landmarks.dat.bz2"?
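For reference, one possible fix is to point check_resources.py at dlib's own server instead of the dead SourceForge mirror; the dlib.net URL below is the one another user in this thread downloaded successfully:

```python
# Possible fix (assumption that this mirror stays available): replace the
# dead SourceForge URL in check_resources.py with dlib's own file server.
dlib_facial_landmark_model_url = (
    "http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2"
)
```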
Hi,
First of all, thanks a lot for this awesome python port. This is helping me a lot in my project which is age classification from facial images. I am using your code to frontalize the images before giving them as input to a Conv Net.
The dataset I am using was collected by my teammates here at NTU, Singapore. Some of its images contain non-facial occlusions such as hands. According to the original paper, "Effective Face Frontalization in Unconstrained Images" (Sec. 3.3), these are taken care of by conditional soft-symmetry.
However, I find that this is not implemented in this Python port. As a matter of fact, it is not implemented in the original MATLAB code either.
Can you please let me know how to implement this for my project? It would be very helpful.
Hi,
I got the following error when running the code under the following setup:
Ubuntu 16.04 (4.4.0-97)
Python 2.7.12
OpenCV 3.3.0.10
DLib 19.7.0
SciPy 0.19.0
Traceback (most recent call last):
File "demo.py", line 49, in
demo()
File "demo.py", line 28, in demo
lmarks = feature_detection.get_landmarks(img)
File "~/face-frontalization/facial_feature_detector.py", line 22, in get_landmarks
predictor = dlib.shape_predictor(predictor_path)
Boost.Python.ArgumentError: Python argument types in
shape_predictor.__init__(shape_predictor, str)
did not match C++ signature:
__init__(boost::python::api::object, std::string)
__init__(_object*)
This is the line 22 of facial_feature_detector.py file:
predictor = dlib.shape_predictor(predictor_path)
Any ideas? What should I change to make the code work?
Thank you.
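For what it's worth, this Boost.Python signature mismatch is often caused by the path reaching dlib as a Python 2 unicode object rather than a plain str. A possible workaround (the cause here is an assumption) is to cast the path explicitly before constructing the predictor:

```python
# Workaround sketch. Assumption: this dlib build's Boost.Python binding
# only accepts a plain byte-string path under Python 2, so cast explicitly.
predictor_path = str("dlib_models/shape_predictor_68_face_landmarks.dat")
# predictor = dlib.shape_predictor(predictor_path)  # requires dlib installed
```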
Hello, do you have any idea why I am getting such different results for the same image depending on its dimensions? If the image is too big, the program doesn't seem to do the right thing after detecting the facial landmarks.
Any help would be very much appreciated, I'll attach screenshots of two versions of the same image.
Thanks!
Here is the large version:
And the small one:
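If the failure really is tied to image size, one workaround (an assumption, not a confirmed fix) is to cap the input's dimensions before running the pipeline. A minimal numpy-only sketch; a real pipeline would prefer cv2.resize with proper interpolation:

```python
import numpy as np

def cap_longer_side(img, max_side=500):
    """Crude integer-stride downscale so the longer side is at most max_side.

    Numpy-only illustration of bounding the input dimensions; cv2.resize
    with interpolation would give better quality in practice.
    """
    stride = -(-max(img.shape[:2]) // max_side)  # ceiling division
    return img[::stride, ::stride]
```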
Hi and thank you for this work.
I tried demo.py on my computer and it works well.
Then I tried demo.py with a picture of myself (taken with my webcam) and the result is completely wrong.
Any ideas? :)
When I run the code, everything is OK. But I am confused about how to extract the face area (according to the standard 3D model) from the frontalized face image.
In another issue, the frontalized face can be clipped to a rectangular region with: cropped_face = frontal_image[(d.left()):(d.right()), (d.top()):(d.bottom()), :] but how can I obtain the overall facial region instead?
For example, the red outline:
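One possible way to get the overall facial region rather than the detector's rectangle is to build a mask from the reference frame, assuming (as appears to hold for ref_U in this port) that background pixels are exactly zero in every channel; a hedged numpy sketch:

```python
import numpy as np

def facial_region_mask(ref_img):
    """Boolean mask of pixels belonging to the face region.

    Assumption: in the frontalized reference frame the background is
    exactly zero in every channel, so any nonzero channel marks the face.
    """
    return np.any(ref_img != 0, axis=2)
```

The mask can then be applied to the frontalized image (e.g. frontal_sym[~mask] = 0) to keep only the facial region.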
Hi. I'm using your code for my research.
But I found a bug.
When a frontal face image is given as input, the demo script shows some strange images.
The error occurs when np.abs(sum_diff) is smaller than ACC_CONST.
I think it is because, in the function frontalize(), frontal_raw is converted to an image array (an int array with values from 0 to 255) only when one side is occluded.
So I think the result arrays should be converted to image arrays in both cases.
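The conversion suggested above, applied in both branches, might look like this sketch (assuming frontal_raw is a float array roughly in the 0-255 range):

```python
import numpy as np

def to_uint8_image(arr):
    """Clip a float image to [0, 255], round, and convert to uint8.

    Sketch of the conversion the issue suggests applying in both branches
    of frontalize(), not only in the occluded-side branch.
    """
    return np.clip(np.round(arr), 0, 255).astype(np.uint8)
```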
Thank you for this work.
I tried demo.py with a different facial feature detector and got the landmarks.
I wonder whether model3Ddlib.mat needs to be modified accordingly?
Thanks,
Xie
Hi, is it possible to get the corresponding 68 landmarks after frontalization?
Hi,
Is it possible to generate just the face without the background, i.e., just the frontalized face with black pixels surrounding it? I have been trying to modify lines 60-63 in frontalize.py, but without success. Any help would be appreciated.
Thank you,
MS
I have seen that other people are facing the same issue I am: when you use an image different from the provided test.jpg, the frontalization doesn't work at all.
@dougsouza I was wondering whether you think the problem may lie somewhere in your repo, e.g. in parameters hard-coded for that specific image, or whether it is instead a problem in the original MATLAB code/model.
Thanks,
Dani
Hello, I got the error:
frontal_raw[ind_frontal, :] = cv2.remap(img, temp_proj2[0, :].astype('float32'), temp_proj2[1, :].astype('float32'), cv2.INTER_CUBIC)
ValueError: array is not broadcastable to correct shape
What does this error mean, and how can I solve it?
Hi and thanks for your work!
I ran demo.py, but there was a RuntimeError: Unable to open /dlib_models/shape_predictor_68_face_landmarks.dat
I installed your dependencies; is something else wrong?
Hi, I have just downloaded the code and installed all the dependencies. Then I ran demo.py, but I got the error below. Do you have any suggestions?
Number of faces detected: 1
('query image shape:', (250, 250, 3))
OpenCV Error: Assertion failed (dst.cols < SHRT_MAX && dst.rows < SHRT_MAX && src.cols < SHRT_MAX && src.rows < SHRT_MAX) in remap, file /io/opencv/modules/imgproc/src/imgwarp.cpp, line 4956
Traceback (most recent call last):
File "demo.py", line 49, in
demo()
File "demo.py", line 38, in demo
frontal_raw, frontal_sym = frontalize.frontalize(img, proj_matrix, model3D.ref_U, eyemask)
File "/home/emanuele/Desktop/test_hiding/face-frontalization-master/frontalize.py", line 57, in frontalize
frontal_raw[ind_frontal, :] = cv2.remap(img, temp_proj2[0, :].astype('float32'), temp_proj2[1, :].astype('float32'), cv2.INTER_CUBIC)
cv2.error: /io/opencv/modules/imgproc/src/imgwarp.cpp:4956: error: (-215) dst.cols < SHRT_MAX && dst.rows < SHRT_MAX && src.cols < SHRT_MAX && src.rows < SHRT_MAX in function remap
I'm using OpenCV '3.2.0'
When I perform face frontalization, I get the following result. (By the way, I used the MATLAB code, but I found that the black areas appear in this Python code too, so I'm asking here.)
What drives me crazy is the black areas that appear in the frontalized output.
I want to crop around the face so that the black areas disappear. I was thinking about cropping the frontalized image by manually setting an ROI box around the face so that the black areas are gone. Does this make sense?
Looking at the Adience3D.0.1.1.zip database provided on the author's webpage, the black areas seem to be pretty much gone, as shown below.
The question is: how do I deal with the black areas that appear in the frontalized image?
I want to get only the frontalized face, without black blank pixels, so that the face fills the whole image just like the original input shown above.
That's all!
Thanks
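As an alternative to hand-picking an ROI box, the crop can be computed automatically from the bounding box of the non-black pixels, assuming the background is exactly zero; a minimal numpy sketch:

```python
import numpy as np

def crop_to_content(img):
    """Crop to the tight bounding box of non-black pixels.

    Assumption: background pixels are exactly zero in every channel.
    """
    mask = np.any(img != 0, axis=2)
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    if rows.size == 0:  # fully black image: nothing to crop to
        return img
    return img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```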
Hi,
I ran demo.py for a single image and it worked well.
But when I run more than 300 images in a loop on 32-bit Windows, it shows the following error:
Traceback (most recent call last):
File "demo.py", line 75, in
demo()
File "demo.py", line 57, in demo
frontal_raw, frontal_sym = frontalize.frontalize(img, proj_matrix, model3D.ref_U, eyemask)
File "C:\Users\Hubino\Downloads\face-frontalization-master\face-frontalization-master\frontalize.py", line 88, in frontalize
denominator = weights + weight_take_from_org + weight_take_from_sym
MemoryError
How can I avoid the memory error?
Thanks in advance for your valuable help
First, I'd like to thank you for sharing the code.
I have been reviewing it, and when I tested a near-frontal image I got a very ugly result. After checking the code I realized that if the occlusion difference between the two sides does not exceed 800, the returned images are the same. When it is bigger than 800, the if branch (line 93 of frontalize.py) checks for values bigger than 255, but the else branch does not. I don't know whether somebody has already mentioned this.
Thanks.