
serengil / tensorflow-101

1.0K stars · 48 watchers · 632 forks · 55.12 MB

TensorFlow 101: Introduction to Deep Learning

Home Page: https://www.youtube.com/watch?v=YjYIMs5ZOfc&list=PLsS_1RYmYQQGxpKV44jsxXNgjEpRoW61w&index=2

License: MIT License

Languages: Java 0.01%, Python 0.34%, Jupyter Notebook 99.65%

Topics: tensorflow, python, neural-networks, deep-learning, machine-learning, face-recognition, facial-expression-recognition, style-transfer, autoencoders, transfer-learning

tensorflow-101's People

Contributors

jack-cruz, ma7555, serengil


tensorflow-101's Issues

ValueError: bad marshal data (unknown type code)

Hi, the notebook code throws an error:
ValueError: bad marshal data (unknown type code)

Traceback:

```
  File "facenet.py", line 16, in <module>
    model = model_from_json(open("facenet_model.json", "r").read())
  File "/home/fran/anaconda/lib/python2.7/site-packages/keras/models.py", line 349, in model_from_json
    return layer_module.deserialize(config, custom_objects=custom_objects)
  File "/home/fran/anaconda/lib/python2.7/site-packages/keras/layers/__init__.py", line 55, in deserialize
    printable_module_name='layer')
  File "/home/fran/anaconda/lib/python2.7/site-packages/keras/utils/generic_utils.py", line 143, in deserialize_keras_object
    list(custom_objects.items())))
  File "/home/fran/anaconda/lib/python2.7/site-packages/keras/engine/topology.py", line 2507, in from_config
    process_layer(layer_data)
  File "/home/fran/anaconda/lib/python2.7/site-packages/keras/engine/topology.py", line 2493, in process_layer
    custom_objects=custom_objects)
  File "/home/fran/anaconda/lib/python2.7/site-packages/keras/layers/__init__.py", line 55, in deserialize
    printable_module_name='layer')
  File "/home/fran/anaconda/lib/python2.7/site-packages/keras/utils/generic_utils.py", line 143, in deserialize_keras_object
    list(custom_objects.items())))
  File "/home/fran/anaconda/lib/python2.7/site-packages/keras/layers/core.py", line 711, in from_config
    function = func_load(config['function'], globs=globs)
  File "/home/fran/anaconda/lib/python2.7/site-packages/keras/utils/generic_utils.py", line 232, in func_load
    code = marshal.loads(raw_code)
ValueError: bad marshal data (unknown type code)
```

Any ideas on how to solve this would be appreciated! Thank you!

Adding a threshold to the Elasticsearch query

Hi,

I am a little confused about how the l2norm function is being used for face recognition, since we never apply a less-than-threshold condition in the query. Can you please explain where that is done?

```python
query = {
    "size": 5,
    "query": {
        "script_score": {
            "query": {
                "match_all": {}
            },
            "script": {
                # "source": "cosineSimilarity(params.queryVector, 'title_vector') + 1.0",
                "source": "1 / (1 + l2norm(params.queryVector, 'title_vector'))",  # euclidean distance
                "params": {
                    "queryVector": list(target_embedding)
                }
            }
        }
    }
}
```
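
For what it is worth, the query above only ranks results by 1 / (1 + distance); it never drops matches. A minimal sketch of one way to enforce a distance threshold, using Elasticsearch's standard min_score search parameter (the 0.68 cut-off here is an assumption, not a value from this repository): since distance <= threshold is equivalent to score >= 1 / (1 + threshold), the filter can be expressed on the score.

```python
threshold = 0.68  # hypothetical euclidean-distance cut-off; tune for your model

query = {
    "size": 5,
    # distance <= threshold  <=>  1 / (1 + distance) >= 1 / (1 + threshold)
    "min_score": 1 / (1 + threshold),
    "query": {
        "script_score": {
            "query": {"match_all": {}},
            "script": {
                "source": "1 / (1 + l2norm(params.queryVector, 'title_vector'))",
                "params": {"queryVector": list(target_embedding)}
            }
        }
    }
}
```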

Does your python file named "openface-real-time" use a single image for each person?

I am not an expert, but from the following code I have concluded that you can only use one image per employee, since you cannot put multiple images of the same person under the same name.
I am trying to build a facial recognition system, and I can train each person beforehand with 50 images.
Kindly advise me on this so I can change the code according to my needs.

```python
# put your employee pictures in this path as name_of_employee.jpg
employee_pictures = "database/"

employees = dict()

for file in listdir(employee_pictures):
    employee, extension = file.split(".")
    img = preprocess_image('database/%s.jpg' % (employee))
    representation = model.predict(img)[0, :]

    employees[employee] = representation

print("employee representations retrieved successfully")
```
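
One common workaround, not part of the original code: store several images per person under distinct file names (e.g. alice_1.jpg, alice_2.jpg) and average their embeddings into a single representation. A minimal sketch, assuming the same preprocess_image and model as in the snippet above:

```python
import numpy as np
from os import listdir

employee_pictures = "database/"  # e.g. alice_1.jpg, alice_2.jpg, bob_1.jpg ...
employee_vectors = {}

for file in listdir(employee_pictures):
    if not file.lower().endswith(".jpg"):
        continue
    # everything before the last underscore is taken as the person's name
    employee = file.rsplit(".", 1)[0].rsplit("_", 1)[0]
    img = preprocess_image(employee_pictures + file)
    representation = model.predict(img)[0, :]
    employee_vectors.setdefault(employee, []).append(representation)

# average the per-image embeddings into one vector per employee
employees = {name: np.mean(vectors, axis=0) for name, vectors in employee_vectors.items()}
```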

age-gender-prediction-real-time.py

When I run the above program, I encounter the messages below.
[ WARN:0] global C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-vi271kac\opencv\modules\videoio\src\cap_msmf.cpp (438) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback
Thank you

Process multiple images instead of one

I am trying to figure out how to process multiple images to extract the faces. The images are a JPG sequence from a video of the same person. I found your code very helpful, but I am unable to process multiple images at once. Any suggestions?
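
A minimal sketch of batch-processing a folder of frames with OpenCV's haarcascade detector (the input and output folder names are assumptions):

```python
import glob
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

for idx, path in enumerate(sorted(glob.glob("frames/*.jpg"))):  # hypothetical input folder
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for i, (x, y, w, h) in enumerate(faces):
        face = img[y:y + h, x:x + w]
        cv2.imwrite(f"faces/frame{idx}_face{i}.jpg", face)  # hypothetical output folder
```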

python/Face-Normalization-with-MediaPipe.ipynb

Replace haarcascade with DNN

Hi again Serengil, Thanks for the help.

I'm trying to replace the haarcascade with the DNN module from OpenCV. I've got the DNN detecting the faces, but the bounding boxes it returns are rectangles, not squares.

The perfect ratio seems to be 1.35 * dnn_rect.width; here's the original code:

```python
    # compute the (x, y)-coordinates of the bounding box for the
    # object
    box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
    (startX, startY, endX, endY) = box.astype("int")

    # draw the bounding box of the face along with the associated
    # probability
    text = "{:.2f}%".format(confidence * 100)
    y = startY - 10 if startY - 10 > 10 else startY + 10
    cv2.rectangle(frame, (startX, startY), (endX, endY), (255, 255, 255), 1)
    cv2.putText(frame, text, (startX, y), cv2.FONT_HERSHEY_SIMPLEX, 0.45, (255, 255, 255), 1)
```

What is the best way to change the above code so that it takes the bounding rectangle width from the DNN, applies the 1.35 * dnn_rect.width ratio, and still runs the face emotion detector on the resulting cropped square?

Cheers J
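
A minimal sketch of one way to do it (the 1.35 ratio and the variable names follow the snippet above; this is an assumption, not the repository's code): take the DNN rectangle, scale its width by 1.35, and build a square of that side around the rectangle's centre before cropping.

```python
import numpy as np

box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
(startX, startY, endX, endY) = box.astype("int")

# centre of the DNN rectangle
cx, cy = (startX + endX) // 2, (startY + endY) // 2

# square side derived from the rectangle width, as suggested above
side = int(1.35 * (endX - startX))

# clamp the square to the frame so the crop never goes out of bounds
x1 = max(cx - side // 2, 0)
y1 = max(cy - side // 2, 0)
x2 = min(x1 + side, frame.shape[1])
y2 = min(y1 + side, frame.shape[0])

face_square = frame[y1:y2, x1:x2]  # feed this crop to the emotion model
```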

Insufficient resources

I am trying to run this code for a school project, and am using AWS SageMaker as my laptop does not have enough resources. However, even with 8 cores and 64 GB of RAM, each epoch is estimated to take about an hour to run. Any suggestions to speed this up?

model_from_json(open("facial_expression_model_structure.json", "r").read())

```python
# face expression recognizer initialization
model = model_from_json(open("facial_expression_model_structure.json", "r").read())
```

This code generates the following issue:
```
Traceback (most recent call last):
  File "C:/Users/palitabhishek/Documents/Analysis/Face_Recognition/test.py", line 4, in <module>
    model = model_from_json(open("facial_expression_model_structure.json", "r").read())
  File "C:\Users\palitabhishek\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\saving.py", line 490, in model_from_json
    config = json.loads(json_string)
  File "C:\Users\palitabhishek\AppData\Local\Programs\Python\Python36\lib\json\__init__.py", line 354, in loads
    return _default_decoder.decode(s)
  File "C:\Users\palitabhishek\AppData\Local\Programs\Python\Python36\lib\json\decoder.py", line 339, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\palitabhishek\AppData\Local\Programs\Python\Python36\lib\json\decoder.py", line 357, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 7 column 1 (char 6)
```

Please help me fix this.
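
The traceback says the file's content stops being valid JSON at line 7, which usually means the download is incomplete or the file is something else entirely (an HTML error page or a Git LFS pointer) rather than the model structure. A minimal sketch for checking this before calling model_from_json:

```python
import json

path = "facial_expression_model_structure.json"
with open(path, "r") as f:
    raw = f.read()

print(raw[:200])  # eyeball the beginning: a Keras model JSON typically starts with {"class_name": ...

try:
    json.loads(raw)
    print("File is valid JSON")
except json.JSONDecodeError as err:
    print("Not valid JSON, re-download the file:", err)
```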

vgg-face.ipynb: how to evaluate on a dataset

Could you share the script for evaluating the model on a dataset? My doubt: to find the accuracy on a dataset, do we take every pair of embeddings and check whether the model correctly decides that they belong to the same person? Or is there some other standard approach to evaluating face recognition models that are designed in a one-shot learning fashion?
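
The usual approach for verification models is exactly that: build positive and negative pairs, compute the distance between the two embeddings of each pair, and count a pair as "same person" when the distance is below a threshold. A minimal sketch, assuming you already have a list of (embedding1, embedding2, is_same) tuples and a threshold chosen on a validation split (the 0.40 value is an assumption):

```python
import numpy as np

def cosine_distance(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def evaluate(pairs, threshold):
    """pairs: iterable of (embedding1, embedding2, is_same) tuples."""
    correct = 0
    for emb1, emb2, is_same in pairs:
        predicted_same = cosine_distance(emb1, emb2) < threshold
        correct += int(predicted_same == is_same)
    return correct / len(pairs)

# accuracy = evaluate(validation_pairs, threshold=0.40)
```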

Voyager Face embedding storing

What exactly is this piece of code doing?

```python
for i in range(len(embeddings), target_size):
    embedding = np.random.uniform(-5, +5, num_dimensions)
    embeddings.append(embedding)
    img_names.append(f'synthetic_{i}.jpg')

print(f'There are {len(embeddings)} embeddings available')
```

And can I just add my own face embeddings without creating synthetic data? If so, how can I do that?
Thank you.
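
The synthetic loop just pads the index with random vectors so there is something to search; you can skip it and append real embeddings instead. A minimal sketch, assuming a recent deepface release where represent returns a list of dicts with an "embedding" key (the folder name and model choice are assumptions):

```python
import glob
from deepface import DeepFace

for path in glob.glob("my_faces/*.jpg"):  # hypothetical folder with your own images
    result = DeepFace.represent(img_path=path, model_name="Facenet")
    embeddings.append(result[0]["embedding"])
    img_names.append(path)

print(f"There are {len(embeddings)} embeddings available")
```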

SystemError: unknown opcode while running model_from_json(open("facenet_model.json", "r").read())

Hi Sefik,

first of all, thanks for your work.
Unfortunately, I get this error when I try to run your model, in particular on the model_from_json command:

```
~/miniconda3/lib/python3.6/site-packages/keras/layers/core.py in scaling(x, scale)
     24 from ..utils.generic_utils import has_arg
     25 from ..utils import conv_utils
---> 26 from ..legacy import interfaces
     27
     28

SystemError: unknown opcode
```

I have Python 3.6, TensorFlow 1.12.0 and Keras 2.2.4.

What could be the issue?

P.S. I tried to leave a comment on the related post on your site but I keep getting mistaken for a Bot.

XML not found

Where is C:/ProgramData/Anaconda3/envs/tensorflow/Library/etc/haarcascades/haarcascade_frontalface_default.xml?
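
That absolute Anaconda path only exists on the author's machine. If you installed OpenCV via pip (opencv-python), the cascade files ship with the package and cv2.data.haarcascades points at them, so a portable alternative is:

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
```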

Wrong cosine similarity results on face recognition

First of all, I would like to thank you for the code; it is very well done and written.

I used your code for face recognition (with the model weights you provided, thank you), and in my tests everything went as planned.
But when I used it with my IP camera (640x480 resolution), the results for my face were confusing. Where could I be going wrong? Can the results be wrong because the image I want to compare is in grayscale?

The face I'd like to compare (face 1): Me

The face of someone else (face 2): Other person

My face (which I'd like to match w/ the face 1): Me in ip camera

Other person "cosine similarity": 0.4480170012
Me "cosine similarity": 0.6674099863

I wish you would answer me even if you could not help me. I do not know what to do anymore.

Thank youuu!!
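
One thing worth checking: face recognition models such as VGG-Face expect a three-channel input, so a grayscale frame may shift the embeddings and should at least be replicated across channels before preprocessing. A minimal sketch (the file name is hypothetical):

```python
import cv2

frame = cv2.imread("ip_camera_frame.jpg")            # hypothetical saved frame
if frame.ndim == 2 or frame.shape[-1] == 1:
    frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)   # replicate the single channel
```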

How to train network for custom dataset?

I've used your pre-trained weights for face recognition and it seems to work well. Thank you.
Now, I am interested in using a similar network to verify whether two images match or not (not faces). Do you have any idea how I could train my network?

In your code for face recognition, an image is used as input to the network and a vector is generated as output. Both images (faces) are fed to the network, and the two generated vectors are then compared with cosine or Euclidean similarity.

But in my case I just have two images that may or may not match.
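
For non-face images, the same recipe applies if you swap the face model for a generic feature extractor: embed both images and threshold the distance. A minimal sketch using an off-the-shelf Keras VGG16 as the embedding network (the model choice and threshold are assumptions; for best results you would fine-tune with a siamese/triplet setup on your own pairs):

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing import image

# global-average-pooled convolutional features as a 512-d embedding
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def embed(path):
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return extractor.predict(x)[0]

def cosine_distance(a, b):
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# same_object = cosine_distance(embed("img1.jpg"), embed("img2.jpg")) < threshold
```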

Memory usage too high

I tried Find-Look-Alike-Celebrities.ipynb on Colab, but the line
df['pixels'] = df['full_path'].apply(getImagePixels)
gives a memory error. Colab has 25 GB of memory.
How can I avoid that?
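
The crash happens because that apply call keeps every decoded image in the dataframe at once. One workaround, not from the notebook, is to keep only file paths in the dataframe and decode images lazily in batches; the getImagePixels loader, the path column format, and the batch size below follow the notebook's naming and are assumptions.

```python
def pixel_batches(paths, batch_size=128):
    """Yield decoded images a batch at a time instead of storing them all."""
    batch = []
    for path in paths:
        batch.append(getImagePixels(path))  # notebook's own loader, assumed available
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

for batch in pixel_batches(df['full_path']):
    pass  # run detection / embedding on this batch, then let it be garbage-collected
```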

Getting Error at 6th Cell of LFW.ipynb

I'm getting a value error at the 6th cell of LFW.ipynb, both in Google Colab and on my local machine. Please help me solve this issue.

The error is:

```
ValueError                                Traceback (most recent call last)
in <module>
     11
     12 #obj = DeepFace.verify(img1, img2, model_name = 'VGG-Face', model = vgg_model)
---> 13 obj = DeepFace.verify(img1, img2, model_name = 'Dlib', model = dlib_model, distance_metric = 'euclidean')
     14 prediction = obj["verified"]
     15 predictions.append(prediction)

~/.local/lib/python3.8/site-packages/deepface/DeepFace.py in verify(img1_path, img2_path, model_name, distance_metric, model, enforce_detection, detector_backend)
    152             , detector_backend = detector_backend)
    153
--> 154     img2 = functions.preprocess_face(img = img2_path
    155             , target_size = (input_shape_y, input_shape_x)
    156             , enforce_detection = enforce_detection

~/.local/lib/python3.8/site-packages/deepface/commons/functions.py in preprocess_face(img, target_size, grayscale, enforce_detection, detector_backend)
    454
    455     if enforce_detection == True:
--> 456         raise ValueError("Detected face shape is ", img.shape, ". Consider to set enforce_detection argument to False.")
    457     else: #restore base image
    458         img = base_img.copy()

ValueError: ('Detected face shape is ', (0, 92, 3), '. Consider to set enforce_detection argument to False.')
```
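
The exception itself suggests the fix: the detector found a degenerate face region (height 0) in one of the LFW crops, so either skip that pair or pass enforce_detection=False so deepface falls back to the whole image. A minimal sketch using the same call as in the notebook:

```python
obj = DeepFace.verify(img1, img2,
                      model_name='Dlib',
                      model=dlib_model,
                      distance_metric='euclidean',
                      enforce_detection=False)  # do not raise when no clean face is found
prediction = obj["verified"]
```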

Cannot reproduce accuracy with the LFW dataset

I tried to reproduce the accuracy on the LFW dataset, but an error occurs.

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/PIL/Image.py", line 2772, in fromarray
    mode, rawmode = _fromarray_typemap[typekey]
KeyError: ((1, 1, 3), '<f4')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "deepface_test.py", line 33, in <module>
    obj = DeepFace.verify(img1, img2, model_name = 'Dlib', model = dlib_model, enforce_detection=False, distance_metric = 'euclidean')
  File "/usr/local/lib/python3.6/dist-packages/deepface/DeepFace.py", line 152, in verify
    , detector_backend = detector_backend)
  File "/usr/local/lib/python3.6/dist-packages/deepface/commons/functions.py", line 454, in preprocess_face
    img = align_face(img = img, detector_backend = detector_backend)
  File "/usr/local/lib/python3.6/dist-packages/deepface/commons/functions.py", line 437, in align_face
    img = alignment_procedure(img, left_eye, right_eye)
  File "/usr/local/lib/python3.6/dist-packages/deepface/commons/functions.py", line 360, in alignment_procedure
    img = Image.fromarray(img)
  File "/usr/local/lib/python3.6/dist-packages/PIL/Image.py", line 2774, in fromarray
    raise TypeError("Cannot handle this data type: %s, %s" % typekey) from e
TypeError: Cannot handle this data type: (1, 1, 3), <f4
```

What should I do?
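
PIL cannot build an image from a float32 array, and the LFW pairs loaded from scikit-learn are float arrays. Casting to uint8 before calling verify usually clears this. A minimal sketch (it assumes the pixel values are already in the 0-255 range; if they are 0-1, multiply by 255 first):

```python
import numpy as np

img1 = np.asarray(img1).astype(np.uint8)
img2 = np.asarray(img2).astype(np.uint8)

obj = DeepFace.verify(img1, img2, model_name='Dlib', model=dlib_model,
                      enforce_detection=False, distance_metric='euclidean')
```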

img_path2 missing error

In the line resp_obj = DeepFace.verify(instances, model_name = model, distance_metric = metric), what should the second image path (img_path2) be? This is the code from the ensemble method for face recognition.
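
As far as I can tell from older deepface releases, the bulk/ensemble interface takes instances as a list of [img1_path, img2_path] pairs, so no separate img_path2 argument is needed. This is a sketch of that assumption rather than a confirmed API; the file paths are hypothetical.

```python
instances = [
    ["dataset/img1.jpg", "dataset/img2.jpg"],   # hypothetical pair 1
    ["dataset/img3.jpg", "dataset/img4.jpg"],   # hypothetical pair 2
]

resp_obj = DeepFace.verify(instances, model_name=model, distance_metric=metric)
```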

Why does it only detect the faces and not recognize them?

Brother, I used your real-time face recognition from deepface. It's working fine, but it only detects the faces and does not recognize them against my database like in your video. Please give me a solution.

Adding custom GUI for emotion labels

Hi Serengil, I've loved your tutorials and they have been very helpful in my learning. I was wondering if you could share an example of how you added the custom grey label background that shows all the emotion detection states with percentages on the Zuckerberg example image. In the demo code you just get the emotion written above the bounding box. This would really help me out, as I can't find any info on custom GUI outputs with OpenCV and AI platforms. Cheers J
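
A minimal sketch of drawing a grey panel listing every emotion with its percentage next to the detected face, using plain OpenCV calls (the coordinates, colours, and the predictions variable are assumptions, not the repository's code):

```python
import cv2

def draw_emotion_panel(frame, x, y, predictions, emotions):
    """Draw a filled grey panel listing each emotion with its percentage."""
    panel_w, row_h = 170, 22
    panel_h = row_h * len(emotions) + 10

    overlay = frame.copy()
    cv2.rectangle(overlay, (x, y), (x + panel_w, y + panel_h), (64, 64, 64), -1)
    # blend the overlay so the grey background is semi-transparent
    cv2.addWeighted(overlay, 0.6, frame, 0.4, 0, frame)

    for i, (emotion, score) in enumerate(zip(emotions, predictions)):
        text = f"{emotion}: {100 * score:.1f}%"
        cv2.putText(frame, text, (x + 8, y + (i + 1) * row_h),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)

# usage (hypothetical coordinates next to the face box):
# draw_emotion_panel(frame, endX + 10, startY, predictions[0],
#                    ['angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral'])
```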

Help me improve accuracy

I followed your ideas to train a model. After training, when I test with my own data, my predictions are not up to the mark.
Please share your accuracy level.
print('Test accuracy:', 100*score[1])
