
facelib's Introduction

FaceLib: Face Analysis

Used for face detection, facial expression recognition, age & gender estimation, and face recognition with PyTorch.

  • Installation: pip install git+https://github.com/sajjjadayobi/FaceLib.git

How to use:

Check this example_notebook or take a look at the following sections

1. Face Detection: RetinaFace

You can use either of these backbone networks: Resnet50 or mobilenet. The default model is mobilenet, and it will be downloaded automatically.

  • The following example illustrates the ease of use of this package on your webcam:
    from facelib import WebcamFaceDetector
    detector = WebcamFaceDetector()
    detector.run()
  • Low-level access to bounding boxes and face landmarks (see also the sketch after this list):
    from facelib import FaceDetector
    detector = FaceDetector()
    boxes, scores, landmarks = detector.detect_faces(image)
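
A hedged, self-contained sketch of the low-level API above (the image path and the drawing step are illustrative additions, not part of FaceLib):

import cv2
from facelib import FaceDetector

detector = FaceDetector()          # default mobilenet backbone
image = cv2.imread('people.jpg')   # illustrative path

boxes, scores, landmarks = detector.detect_faces(image)
for box in boxes:
    x1, y1, x2, y2 = (int(v) for v in box)  # cv2 expects integer pixel coordinates
    cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite('people_detected.jpg', image)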

2. Face Alignment: Using face landmarks

For face alignment, always use the detect_align function; it gives you better performance.

  • Face detection and alignment using the detect_align function:
    from facelib import FaceDetector
    detector = FaceDetector()
    faces, boxes, scores, landmarks = detector.detect_align(image)
(Example images: original faces alongside their aligned and resized crops.)
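
A small follow-up sketch for saving the aligned crops to disk, under the assumption that detect_align returns the crops as an (N, H, W, 3) tensor (file names and paths are illustrative):

import cv2
from facelib import FaceDetector

detector = FaceDetector()
image = cv2.imread('people.jpg')  # illustrative path
faces, boxes, scores, landmarks = detector.detect_align(image)

for i, face in enumerate(faces):
    # assumption: each crop is an HxWx3 uint8 tensor, so it can be written directly
    cv2.imwrite(f'aligned_face_{i}.jpg', face.cpu().numpy())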

3. Age & Gender Estimation:

ShufflenetFull is the default model, and it will be automatically downloaded.

  • Age and gender estimation live on your webcam (or any camera):
    from facelib import WebcamAgeGenderEstimator
    estimator = WebcamAgeGenderEstimator()
    estimator.run()
  • Low-level access to ages and genders (see also the sketch after this list):
    from facelib import FaceDetector, AgeGenderEstimator
    face_detector = FaceDetector()
    age_gender_detector = AgeGenderEstimator()

    faces, boxes, scores, landmarks = face_detector.detect_align(image)
    genders, ages = age_gender_detector.detect(faces)
    print(genders, ages)
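
To visualize the low-level output, a sketch that writes each prediction above its bounding box (the label format and output path are illustrative):

import cv2

# continues from the snippet above: image, boxes, genders and ages are already defined
for box, gender, age in zip(boxes, genders, ages):
    x1, y1 = int(box[0]), int(box[1])
    cv2.putText(image, f'{gender}, {age}', (x1, max(y1 - 5, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
cv2.imwrite('age_gender.jpg', image)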

4. Facial Expression Recognition:

The default model is DenseNet-121, and it will be automatically downloaded. Note that the face size must be (224, 224).

  • Emotion detection live on your webcam:
    from facelib import WebcamEmotionDetector
    detector = WebcamEmotionDetector()
    detector.run()
  • Emotions as an array with their probabilities (see also the sketch after this list):
    from facelib import FaceDetector, EmotionDetector
    face_detector = FaceDetector(face_size=(224, 224))
    emotion_detector = EmotionDetector()

    faces, boxes, scores, landmarks = face_detector.detect_align(image)
    emotions, probab = emotion_detector.detect_emotion(faces)
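
A short sketch for reading out the result per face, assuming emotions holds one label per face and probab the matching probability (adjust if your version returns full distributions):

# continues from the snippet above
for i, (label, p) in enumerate(zip(emotions, probab)):
    print(f'face {i}: {label} ({float(p):.2f})')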
  • Demo on my webcam 🙂

5. Face Recognition: InsightFace

  • This module is a PyTorch reimplementation of ArcFace (paper) / InsightFace (GitHub).

Pretrained Models & Performance

Model          LFW(%)  CFP-FF(%)  CFP-FP(%)  AgeDB-30(%)  CALFW(%)  CPLFW(%)  VGG2-FP(%)
IR-SE50        0.9952  0.9962     0.9504     0.9622       0.9557    0.9107    0.9386
Mobilefacenet  0.9918  0.9891     0.8986     0.9347       0.9402    0.8660    0.9100

Prepare the Facebank

Save the images of the faces you want to recognize in this folder:

Insightface/models/data/facebank/
  ---> person_1/
      ---> img_1.jpg
      ---> img_2.jpg
  ---> person_2/
      ---> img_1.jpg
      ---> img_2.jpg

You can add a new person to the facebank in two ways:

  • Use add_from_webcam: it captures 4 images from your webcam and saves them to the facebank.
    from facelib import add_from_webcam
    add_from_webcam(person_name='sajjad')
  • Use add_from_folder: it takes the path of a folder containing images of a single person (a sketch for building the folder layout yourself follows below).
    from facelib import add_from_folder
    add_from_folder(folder_path='./', person_name='sajjad')
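
If your photos already live elsewhere on disk, a plain-Python sketch for copying them into the expected layout (all paths and names below are illustrative):

from pathlib import Path
import shutil

facebank = Path('Insightface/models/data/facebank')  # layout shown above
source_dir = Path('~/photos/sajjad').expanduser()    # illustrative source folder
person_dir = facebank / 'sajjad'                     # illustrative person name

person_dir.mkdir(parents=True, exist_ok=True)
for i, img in enumerate(sorted(source_dir.glob('*.jpg')), start=1):
    shutil.copy(img, person_dir / f'img_{i}.jpg')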

Recognizer

The default model is mobilenet, and it will be automatically downloaded.

  • Face recognition live on your webcam:
    from facelib import WebcamVerify
    verifier = WebcamVerify(update=True)
    verifier.run()
  • Low-level access to your images:
import cv2
from facelib import FaceRecognizer, FaceDetector
from facelib import update_facebank, load_facebank, special_draw, get_config

conf = get_config()
# conf.use_mobilenet=False # if you want to use the bigger model
detector = FaceDetector(device=conf.device)
face_rec = FaceRecognizer(conf)

# set True when you add someone new to the facebank
update_facebank_for_add_new_person = False
if update_facebank_for_add_new_person:
    targets, names = update_facebank(conf, face_rec.model, detector)
else:
    targets, names = load_facebank(conf)

image = cv2.imread(your_path)
faces, boxes, scores, landmarks = detector.detect_align(image)
results, score = face_rec.infer(faces, targets)
print(names[results.cpu()])
for idx, bbox in enumerate(boxes):
    special_draw(image, bbox, landmarks[idx], names[results[idx]+1], score[idx])
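
A follow-up sketch for keeping the annotated result (the output path is illustrative). Note the assumption, borrowed from the InsightFace convention this module reimplements, that an index of -1 in results marks a face with no match in the facebank, which names[results[idx] + 1] then maps to the first ("Unknown") entry.

# continues from the snippet above: `image` has been annotated by special_draw
cv2.imwrite('recognized.jpg', image)  # illustrative output path
cv2.imshow('recognition', image)      # press any key to close the window
cv2.waitKey(0)
cv2.destroyAllWindows()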


Reference: InsightFace

facelib's People

Contributors

badjano, sajjjadayobi


facelib's Issues

ShufflenetTiny model for Age-Gender estimation

Could you provide the ShufflenetTiny model for age & gender estimation, or a guide on how to prepare it?
I looked at TreB1eN/InsightFace_Pytorch and its model for mobile devices, Mobilefacenet, but as I understand it, that is not the same?

How to train Facial Expression Recognition?

Hi, handsome boy, thanks for your nice work. Could you share how to train the facial expression recognition model? I couldn't find anything about it (dataset and training code).
Hoping for your reply, thanks.

could not run resnet model

Thanks for your great work. It runs mobilenet well, but when I changed it to resnet the following error occurred.
Please kindly help.

RuntimeError: Error(s) in loading state_dict for RetinaFace:
	Missing key(s) in state_dict: "body.conv1.weight", "body.bn1.weight", "body.bn1.bias", "body.bn1.running_mean", "body.bn1.running_var", "body.layer1.0.conv1.weight", "body.layer1.0.bn1.weight", "body.layer1.0.bn1.bias", "body.layer1.0.bn1.running_mean", "body.layer1.0.bn1.running_var", "body.layer1.0.conv2.weight", "body.layer1.0.bn2.weight", "body.layer1.0.bn2.bias", "body.layer1.0.bn2.running_mean", "body.layer1.0.bn2.running_var", "body.layer1.0.conv3.weight", "body.layer1.0.bn3.weight", "body.layer1.0.bn3.bias", "body.layer1.0.bn3.running_mean", "body.layer1.0.bn3.running_var", "body.layer1.0.downsample.0.weight", "body.layer1.0.downsample.1.weight", "body.layer1.0.downsample.1.bias", "body.layer1.0.downsample.1.running_mean", "body.layer1.0.downsample.1.running_var", "body.layer1.1.conv1.weight", "body.layer1.1.bn1.weight", "body.layer1.1.bn1.bias", "body.layer1.1.bn1.running_mean", "body.layer1.1.bn1.running_var", "body.layer1.1.conv2.weight", "body.layer1.1.bn2.weight", "body.layer1.1.bn2.bias", "body.layer1.1.bn2.running_mean", "body.layer1.1.bn2.running_var", "body.layer1.1.conv3.weight", "body.layer1.1.bn3.weight", "body.layer1.1.bn3.bias", "body.layer1.1.bn3.running_mean", "body.layer1.1.bn3.running_var", "body.layer1.2.conv1.weight", "body.layer1.2.bn1.weight", "body.layer1.2.bn1.bias", "body.layer1.2.bn1.running_mean", "body.layer1.2.bn1.running_var", "body.layer1.2.conv2.weight", "body.layer1.2.bn2.weight", "body.layer1.2.bn2.bias", "body.layer1.2.bn2.runnin...
	Unexpected key(s) in state_dict: "module.body.conv1.weight", "module.body.bn1.weight", "module.body.bn1.bias", "module.body.bn1.running_mean", "module.body.bn1.running_var", "module.body.bn1.num_batches_tracked", "module.body.layer1.0.conv1.weight", "module.body.layer1.0.bn1.weight", "module.body.layer1.0.bn1.bias", "module.body.layer1.0.bn1.running_mean", "module.body.layer1.0.bn1.running_var", "module.body.layer1.0.bn1.num_batches_tracked", "module.body.layer1.0.conv2.weight", "module.body.layer1.0.bn2.weight", "module.body.layer1.0.bn2.bias", "module.body.layer1.0.bn2.running_mean", "module.body.layer1.0.bn2.running_var", "module.body.layer1.0.bn2.num_batches_tracked", "module.body.layer1.0.conv3.weight", "module.body.layer1.0.bn3.weight", "module.body.layer1.0.bn3.bias", "module.body.layer1.0.bn3.running_mean", "module.body.layer1.0.bn3.running_var", "module.body.layer1.0.bn3.num_batches_tracked", "module.body.layer1.0.downsample.0.weight", "module.body.layer1.0.downsample.1.weight", "module.body.layer1.0.downsample.1.bias", "module.body.layer1.0.downsample.1.running_mean", "module.body.layer1.0.downsample.1.running_var", "module.body.layer1.0.downsample.1.num_batches_tracked", "module.body.layer1.1.conv1.weight", "module.body.layer1.1.bn1.weight", "module.body.layer1.1.bn1.bias", "module.body.layer1.1.bn1.running_mean", "module.body.layer1.1.bn1.running_var", "module.body.layer1.1.bn1.num_batches_tracked", "module.body.layer1.1.conv2.weight", "module.body.layer1.1.bn...
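
The pattern of this mismatch (every checkpoint key carrying a module. prefix) is typical of weights saved from a DataParallel-wrapped model. A generic, non-FaceLib-specific workaround sketch (the checkpoint path and model variable are illustrative):

import torch

state_dict = torch.load('resnet50_checkpoint.pth', map_location='cpu')  # illustrative path
# strip the DataParallel "module." prefix so the keys match the bare RetinaFace model
state_dict = {k[len('module.'):] if k.startswith('module.') else k: v
              for k, v in state_dict.items()}
# model.load_state_dict(state_dict)  # `model` is your RetinaFace instance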

pip install?

I tried:
pip install git+https://github.com/sajjjadayobi/FaceLib.git
and also:
python -m pip install git+https://github.com/sajjjadayobi/FaceLib.git
Neither worked... it is kinda vague in the readme...
I did install this before but I can't remember how I did it.
Thanks

Age & Gender Estimation

How did you set the training parameters for your age and gender model?
batch_size, num_epochs, and did you use a pre-trained model?

About ShuffleNet

May I ask you something?
Would you let me know why you used ShuffleNet rather than other models?

CPU error

Whenever I try to run the code on CPU, it shows an error like this:
RuntimeError Traceback (most recent call last)
in
----> 1 detector = FaceDetector(name='mobilenet', weight_path='mobilenet.pth', device='cpu')

~\FRS\FaceLib-master\facelib\Retinaface\Retinaface.py in init(self, name, weight_path, device, confidence_threshold, top_k, nms_threshold, keep_top_k, face_size)
42
43 # setting for model
---> 44 model.load_state_dict(torch.load(weight_path))
45 model.to(device).eval()
46 self.model = model

~\Anaconda3\envs\cpu_env\lib\site-packages\torch\serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
591 return torch.jit.load(f)
592 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
--> 593 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
594
595

~\Anaconda3\envs\cpu_env\lib\site-packages\torch\serialization.py in _legacy_load(f, map_location, pickle_module, **pickle_load_args)
771 unpickler = pickle_module.Unpickler(f, **pickle_load_args)
772 unpickler.persistent_load = persistent_load
--> 773 result = unpickler.load()
774
775 deserialized_storage_keys = pickle_module.load(f, **pickle_load_args)

~\Anaconda3\envs\cpu_env\lib\site-packages\torch\serialization.py in persistent_load(saved_id)
727 obj = data_type(size)
728 obj._torch_load_uninitialized = True
--> 729 deserialized_objects[root_key] = restore_location(obj, location)
730 storage = deserialized_objects[root_key]
731 if view_metadata is not None:

~\Anaconda3\envs\cpu_env\lib\site-packages\torch\serialization.py in default_restore_location(storage, location)
176 def default_restore_location(storage, location):
177 for _, _, fn in _package_registry:
--> 178 result = fn(storage, location)
179 if result is not None:
180 return result

~\Anaconda3\envs\cpu_env\lib\site-packages\torch\serialization.py in _cuda_deserialize(obj, location)
152 def _cuda_deserialize(obj, location):
153 if location.startswith('cuda'):
--> 154 device = validate_cuda_device(location)
155 if getattr(obj, "_torch_load_uninitialized", False):
156 storage_type = getattr(torch.cuda, type(obj).name)

~\Anaconda3\envs\cpu_env\lib\site-packages\torch\serialization.py in validate_cuda_device(location)
136
137 if not torch.cuda.is_available():
--> 138 raise RuntimeError('Attempting to deserialize object on a CUDA '
139 'device but torch.cuda.is_available() is False. '
140 'If you are running on a CPU-only machine, '

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
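
The message itself points at the usual workaround: pass map_location when loading the checkpoint. A generic sketch, not a patch shipped with FaceLib (the path is illustrative):

import torch

state_dict = torch.load('mobilenet.pth', map_location=torch.device('cpu'))  # illustrative path
# model.load_state_dict(state_dict)  # then keep the model on the CPU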

TypeError: infer() got multiple values for argument 'tta'

I captured an image using the add_from_webcam function and stored it in a facebank folder with the specified person_name as a directory.


# from facelib import add_from_webcam

# add_from_webcam(person_name='chuks')

Now when I try to verify the image in the folder using the WebcamVerify function, it results in a TypeError. How can I fix this?

from facelib import WebcamVerify
verifier = WebcamVerify(update=True)
verifier.run()

Error Traceback

"C:\Users\Desktop\FaceRec\Project\FaceLib\facelib\InsightFace\verifier.py", line 49, in run
    results, score = self.recognizer.infer(self.conf, faces, self.targets, tta=self.tta)
TypeError: infer() got multiple values for argument 'tta'

How to Compare two faces?

Hello, thank you for your great work.
Is there any way to compare two detected faces and return a percentage of similarity?
In other words, is there any way to get face encodings?
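
One hedged way to think about this: if you can obtain one embedding vector per face from the recognizer, cosine similarity between two embeddings gives a match score. A minimal sketch with placeholder embeddings (how to extract them from FaceLib's recognizer is not shown here and would need checking against the source):

import torch
import torch.nn.functional as F

# placeholder 512-d embeddings for illustration; in practice they would come
# from the recognition model applied to two aligned faces
emb1 = torch.randn(512)
emb2 = torch.randn(512)
similarity = F.cosine_similarity(emb1, emb2, dim=0).item()  # in [-1, 1]
print(f'cosine similarity: {similarity:.3f}')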

Metrics

Hi great repo,

I can't seem to find any metrics on the performance of your gender/age model. Do you know where I can find that?

cheers,
Augeust

pred = torch.argmax(input[:, :2], dim=1)

In model.py:

def accuracy_gender(input, targs):
    pred = torch.argmax(input[:, :2], dim=1)
    y = targs[:, 0]
    return torch.sum(pred == y)

I want to know why it isn't pred = input[:, :1]?
I think the first two columns of input are gender and race.

Expected emotions to detect

Hi! First I want to say thank you for your work in face detection and facial expression.
About the second topic, I have a question that I would appreciate your answer to:

What range of emotions is your implementation able to detect? The basic ones from Paul Ekman (happy, sad, anger, disgust, surprise, fear, and contempt), or another selection of emotions?

If you could explain as much as you can about the facial expression implementation, it would be really helpful for me.

Incorrect box coordinates

OS Win 10 / Python 3.8

env

six               1.15.0
sklearn           0.0
threadpoolctl     2.1.0
tifffile          2021.4.8
toml              0.10.2
torch             1.8.1
torchvision       0.9.1
tqdm              4.60.0
typing-extensions 3.7.4.3
urllib3           1.26.4

(A screenshot of the incorrect box coordinates was attached to the original issue.)

Error in insightface

I have created a dataset as data/facebank and kept the images inside individual person-name folders. I am facing an error with the block of code mentioned below:

if update_facebank_for_add_new_person:
    targets, names = update_facebank(conf, face_rec.model, detector)
else:
    targets, names = load_facebank(conf)

NameError Traceback (most recent call last)
in ()
----> 1 if update_facebank_for_add_new_person:
2 targets,names = update_facebank(conf, face_rec.model, detector)
3 else:
4 targets, names = load_facebank(conf)

NameError: name 'update_facebank_for_add_new_person' is not defined

RuntimeError: number of dims don't match in permute

When I try to recognize some images, I get the following error:

Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 396, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/usr/local/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in call
return await self.app(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/fastapi/applications.py", line 199, in call
await super().call(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/starlette/applications.py", line 111, in call
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in call
raise exc from None
File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in call
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.8/site-packages/starlette/middleware/cors.py", line 86, in call
await self.simple_response(scope, receive, send, request_headers=headers)
File "/usr/local/lib/python3.8/site-packages/starlette/middleware/cors.py", line 142, in simple_response
await self.app(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in call
raise exc from None
File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in call
await self.app(scope, receive, sender)
File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 566, in call
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 227, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 41, in app
response = await func(request)
File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 201, in app
raw_response = await run_endpoint_function(
File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 148, in run_endpoint_function
return await dependant.call(**values)
File "/app/./main.py", line 519, in get_age
genders, ages = age_gender_detector.detect(faces)
File "/usr/local/lib/python3.8/site-packages/facelib/AgeGender/Detector.py", line 46, in detect
faces = faces.permute(0, 3, 1, 2)
RuntimeError: number of dims don't match in permute

Image sample: https://drive.google.com/open?id=14xAAW5a74dpNynF3xji5eMQO7yWjPN2c
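
A hedged guard sketch: if no face is detected in an image, the returned faces may be empty rather than a 4-D batch, which would explain the permute failure; skipping such inputs avoids the crash. Names follow the age & gender snippet earlier on this page, and the empty-result assumption should be verified against your version.

faces, boxes, scores, landmarks = face_detector.detect_align(image)
if len(faces) == 0:  # assumption: an image with no detectable face yields an empty result
    print('no face found, skipping this image')
else:
    genders, ages = age_gender_detector.detect(faces)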

Coordinates for cv2.rectangle() need to be casted to int.

Dear developers,

I've got a problem running the test notebook with the package below (installed via pip install -r requirements.txt):

opencv-python 4.5.5.62

and was able to fix it with a small patch to facelib/InsightFace/models/utils.py, casting the coordinates to int explicitly:

diff --git a/facelib/InsightFace/models/utils.py b/facelib/InsightFace/models/utils.py
index ec929a0..764ae63 100644
--- a/facelib/InsightFace/models/utils.py
+++ b/facelib/InsightFace/models/utils.py
@@ -112,7 +112,7 @@ def special_draw(img, box, landmarsk, name, score=100):
     """draw a bounding box on image"""
     color = (148, 133, 0)
     tl = round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1  # line thickness
-    c1, c2 = (box[0], box[1]), (box[2], box[3])
+    c1, c2 = (int(box[0]), int(box[1])), (int(box[2]), int(box[3]))
     # draw bounding box
     cv2.rectangle(img, c1, c2, color, thickness=tl)
     # draw landmark
@@ -123,7 +123,7 @@ def special_draw(img, box, landmarsk, name, score=100):
     score = 0 if score < 0 else score
     bar = (box[3] + 2) - (box[1] - 2)
     score_final = bar - (score*bar/100)
-    cv2.rectangle(img, (box[2] + 1, box[1] - 2 + score_final), (box[2] + (tl+5), box[3] + 2), color, -1)
+    cv2.rectangle(img, (int(box[2] + 1), int(box[1] - 2 + score_final)), (int(box[2] + (tl+5)), int(box[3] + 2)), color, -1)
     # draw label
     tf = max(tl - 1, 1)  # font thickness
     t_size = cv2.getTextSize(name, 0, fontScale=tl / 3, thickness=tf)[0]

This has been tested on Windows 10 and Ubuntu 18.04. Please let me know if a PR is welcome for this fix.

Thanks!

OpenCV gives an error after a few seconds of opening the window

I wanted to try FaceLib with the emotion detector from my webcam, but I got an error after a few seconds when a frame opens!

Code (from the README file):

from facelib import WebcamEmotionDetector

detector = WebcamEmotionDetector()
detector.run()

Error message:

loading ...
from EmotionDetector: weights loaded
type q for exit
Traceback (most recent call last):
  File "/home/sheikhartin/w/.tmp/facelib_demo.py", line 15, in <module>
    detector.run()
  File "/home/sheikhartin/.local/lib/python3.10/site-packages/facelib/FacialExpression/from_camera.py", line 27, in run
    special_draw(frame, b, landmarks[i], name=emotions[i], score=emo_probs[i])
  File "/home/sheikhartin/.local/lib/python3.10/site-packages/facelib/InsightFace/models/utils.py", line 117, in special_draw
    cv2.rectangle(img, c1, c2, color, thickness=tl)
cv2.error: OpenCV(4.6.0) :-1: error: (-5:Bad argument) in function 'rectangle'
> Overload resolution failed:
>  - Can't parse 'pt1'. Sequence item with index 0 has a wrong type
>  - Can't parse 'pt1'. Sequence item with index 0 has a wrong type
>  - argument for rectangle() given by name ('thickness') and position (4)
>  - argument for rectangle() given by name ('thickness') and position (4)

Face Landmarks

Hello,
How many landmarks does it detect and compare?

FaceLib on Nvidia Jetson Nano

I have tried to install FaceLib on an Nvidia Jetson Nano. It supports bcolz 1.2.0 and matplotlib 2.1.1. Could you share the oldest version of FaceLib that supports these older package versions (bcolz 1.2.0 and matplotlib 2.1.1)?

Missing dependencies

I installed this repo in a project using venv, and it complained about scikit-image and scikit-learn.

I can't make a PR right now, but whenever I get the time I'll do it, just a heads up!

Issue with EasyDict

Hi, your requirements say EasyDict > 1.7.0 is required. I have 1.10.0 installed and get the following error:

File "C:\Users\admin\anaconda3\envs\myproj\lib\site-packages\facelib\InsightFace\models\utils.py", line 11, in faces_preprocessing
faces = faces.permute(0, 3, 1, 2).float()
AttributeError: 'EasyDict' object has no attribute 'permute'

Do you have any idea if the function has been renamed?

some questions

Hi, thanks for your great work on this open project. There are some questions I'm confused about; please give me some clues:

  1. Which sub-dataset is used for training, the original in-the-wild images or the cropped ones? Is either viable?
  2. Why did you use ShuffleNet rather than MobileNet for the age and gender model?

Thanks again for your effort!
