jeeliz / jeelizweboji

JavaScript/WebGL real-time face tracking and expression detection library. Build your own emoticons animated in real time in the browser! SVG and THREE.js integration demos are provided.

Home Page: https://jeeliz.com

License: Apache License 2.0

javascript webgl threejs weboji webcam deep-learning face face-expression computer-vision augmented-reality

jeelizweboji's Introduction

NOTICE: Apple©'s lawyers threatened on the 21st of August 2019 to file a complaint against us for infringing their intellectual property, so we have replaced the 3D animated fox with a raccoon.

Indeed, Apple© owns the intellectual property of 3D animated foxes (but not of raccoons yet). Thank you for your understanding.

JavaScript/WebGL library to detect and reproduce facial expressions

You can build your own animated emoticon embedded in your web application thanks to this library. The video is processed exclusively on the client side.

The computing power of your GPU matters: the more powerful the GPU, the more detections are processed per second and the smoother and more accurate the result.

Face detection should work even if the lighting is not great. However, the better the input image, the better the expression detection. Here are some tips to get a good experience:

  • The face should be well lit: the nose and the eyes should be distinguishable,
  • Avoid backlighting: the background should be a wall, not a window,
  • The face should be neither too far from nor too close to the camera: it should ideally cover 1/3 of the camera height and be fully visible,
  • The camera should be placed in front of the user; a side view is not recommended,
  • Beards and mustaches can make mouth movement detection harder, and glasses can disturb eye detection.

Table of contents

Features

  • face detection and tracking,
  • detection of 11 facial expressions,
  • face rotation around the 3 axes,
  • robust to lighting conditions,
  • mobile friendly,
  • examples provided using SVG and THREE.js.

Architecture

  • /assets/: assets for both the 3D and 2D demonstrations (3D meshes, images),
  • /demos/: the most interesting part: the demos!,
  • /dist/: the heart of the library:
    • jeelizFaceExpressions.js: the main minified script. It gets the camera video feed, runs the neural network to detect the face and the expressions, and stabilizes the result,
    • jeelizFaceExpressionsNNC.json: the neural network model loaded by the main script,
  • /doc/: some additional documentation,
  • /helpers/: the outputs of the main script are very raw, so it is convenient to use these helpers to animate a 3D model with the THREE.js helper or an SVG file with the SVG helper. All demos use these helpers,
  • /libs/: some JavaScript libraries,
  • /meshConverter/: only for THREE.js use. A tool to build the 3D model file, including the morphs, from separate .OBJ files.

Demonstrations

All the demos are included in this repository, in the /demos path, and you can try them from there.

If you have made an application or a fun demonstration using this library, we would love to check it out and add a link here! Just contact us on Twitter @Jeeliz_AR or LinkedIn.

Run locally

You just have to serve the content of this directory with an HTTPS server. Depending on the web browser, camera access may not be authorized if the application is hosted on an unsecured HTTP server. You can use Docker, for example, to run an HTTPS server:

  1. Run docker-compose:
docker-compose up
  2. Open a browser and go to localhost:8888

If you have not bought a camera yet, a screenshot video of the Cartman Demo is available here:

Using module

/dist/jeelizFaceExpressions.module.js is exactly the same as /dist/jeelizFaceExpressions.js, except that it works as a JavaScript module, so you can import it directly using:

import 'dist/jeelizFaceExpressions.module.js'

or using require:

const faceExpressions = require('./lib/jeelizFaceExpressions.module.js')
//...

There is no demo using the module version yet.
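
For reference, here is a minimal initialization sketch using the module version. It assumes the module has a default export and that the init options are named canvasId, NNCPath and callbackReady; check the exact names against /doc/jeelizFaceExpressions.pdf before relying on them:

import JEELIZFACEEXPRESSIONS from './dist/jeelizFaceExpressions.module.js'

JEELIZFACEEXPRESSIONS.init({
  canvasId: 'jeelizCanvas', // the <canvas> element used for WebGL processing
  NNCPath: './dist/', // folder containing jeelizFaceExpressionsNNC.json (some Jeeliz libraries expect the full JSON path instead)
  callbackReady: function(errCode){
    if (errCode){
      console.error('Initialization failed:', errCode)
      return
    }
    console.log('Expression detection is ready')
  }
})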

Integration

With a bundler

If you use this library with a bundler (typically Webpack or Parcel), you should first use the module version.

With the standard library, the neural network model (specified by the NNCPath initialization parameter) is loaded using AJAX, for the following reasons:

  • If the user does not allow camera access, or if WebGL is not enabled, we don't have to load the neural network model,
  • We suppose that the library is deployed using a static HTTPS server.

With a bundler, this is a bit more complicated. It is easier to load the neural network model with a classical import or require call and to provide it through the NNC init parameter:

const faceExpressions = require('./lib/jeelizFaceExpressions.module.js')
const neuralNetworkModel = require('./dist/jeelizFaceExpressionsNNC.json')

faceExpressions.init({
  NNC: neuralNetworkModel, //instead of NNCPath
  //... other init parameters
});

With JavaScript frontend frameworks

We don't cover here the integration with mainstream JavaScript frontend frameworks (React, Vue, Angular). If you submit a pull request adding boilerplate or a demo integrated with a specific framework, it will of course be welcome and accepted. We can also provide this kind of integration as a specific development service (please contact us here). But it is not so hard to do it yourself. Here is a bunch of submitted issues dealing with React integration. Most of them concern Jeeliz FaceFilter, but the problem is similar:

You can also take a look at these Github code repositories:

Native

It is possible to execute a JavaScript application using this library in a WebView for native app integration. But with iOS < 14.3, camera access is disabled inside webviews. If you want your application to run on devices with iOS versions older than 14.3, you have to implement a hack to stream the camera video into the WKWebView using websockets.

This hack has been implemented in this repository:

But it is still a dirty hack that introduces a bottleneck. It runs pretty well on a high-end device (tested on an iPhone XR), but it is better to stick to a full web environment.

Hosting

This library requires the user's camera feed through the MediaStream API. Your application should therefore be hosted on an HTTPS server (the certificate can be self-signed). It won't work at all over insecure HTTP, even locally with some web browsers.

Be careful to enable gzip HTTP/HTTPS compression for JSON and JS files. Indeed, the neural network JSON file in /dist/ is quite heavy, but compresses very well with gzip. You can check the gzip compression of your server here.

The neural network JSON file is loaded using an AJAX XMLHttpRequest after the user has accepted to share their camera. We proceed this way to avoid loading this quite heavy file if the user refuses to share their camera or if there is no camera available. Loading will be faster if you systematically preload the JSON file, using a service worker or a simple raw XMLHttpRequest, just after the HTML page loads. The file will then be in the browser cache and fast to request.
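
For example, here is a minimal preloading sketch using a raw XMLHttpRequest (the path is the default location of the model in this repository):

// Warm the browser HTTP cache right after page load. The response is
// discarded: the goal is only that the later request made by the library
// resolves from the cache.
window.addEventListener('load', function(){
  const xhr = new XMLHttpRequest()
  xhr.open('GET', './dist/jeelizFaceExpressionsNNC.json')
  xhr.send()
})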

About the tech

Under the hood

The heart of the lib is JEELIZFACEEXPRESSIONS, implemented by the /dist/jeelizFaceExpressions.js script. It relies on Jeeliz WebGL Deep Learning technology to detect and track the user's face using a deep learning network, and to simultaneously evaluate the expression factors. The accuracy is adaptive: the better the hardware, the more detections are processed per second. Everything is done client-side.

The documentation of JEELIZFACEEXPRESSIONS is included in this repository as a PDF file, /doc/jeelizFaceExpressions.pdf. In the main scripts of the demonstrations, we never call these methods directly, but always through the helpers. Here are the indices of the morphs returned by this library (a polling sketch follows the list):

  • 0: smileRight → closed mouth smile right
  • 1: smileLeft → closed mouth smile left
  • 2: eyeBrowLeftDown → left eyebrow frowned
  • 3: eyeBrowRightDown → right eyebrow frowned
  • 4: eyeBrowLeftUp → raise left eyebrow (surprise)
  • 5: eyeBrowRightUp → raise right eyebrow (surprise)
  • 6: mouthOpen → open mouth
  • 7: mouthRound → o shaped mouth
  • 8: eyeRightClose → close right eye
  • 9: eyeLeftClose → close left eye
  • 10: mouthNasty → nasty mouth (show teeth)
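
As an illustration, here is a sketch that polls the raw stabilized outputs every frame. The getters get_morphTargetInfluencesStabilized() and get_rotationStabilized() appear in the demo code (on the older JEEFACETRANSFERAPI global); verify the exact names against the PDF documentation:

// Poll the stabilized morph influences and head rotation each frame.
function pollExpressions(){
  const influences = JEELIZFACEEXPRESSIONS.get_morphTargetInfluencesStabilized()
  const rotation = JEELIZFACEEXPRESSIONS.get_rotationStabilized()
  // influences[6] is mouthOpen according to the table above:
  if (influences[6] > 0.5){
    console.log('mouth open, head rotation =', rotation)
  }
  requestAnimationFrame(pollExpressions)
}
requestAnimationFrame(pollExpressions)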

Compatibility

  • If WebGL2 is available, the library uses WebGL2 and no specific extension is required,
  • If WebGL2 is not available but WebGL1 is, either the OES_texture_float or the OES_texture_half_float extension is required,
  • If WebGL2 is not available, and if WebGL1 is not available or neither OES_texture_float nor OES_texture_half_float is implemented, the device is not compatible.

In all cases, you need WebRTC implemented in the web browser, otherwise this library will not be able to get the camera video feed. The compatibility tables are on caniuse.com: WebGL1, WebGL2, WebRTC.
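
A rough capability pre-check following these rules might look like this (a sketch only; the library performs its own check at initialization and reports an error code):

// Check WebGL capabilities before loading the heavy neural network model.
function isLikelyCompatible(){
  const canvas = document.createElement('canvas')
  if (canvas.getContext('webgl2')){
    return true // WebGL2: no extension required
  }
  const gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl')
  if (gl && (gl.getExtension('OES_texture_float') || gl.getExtension('OES_texture_half_float'))){
    return true // WebGL1 with a float texture extension
  }
  return false
}

// WebRTC is also required to get the camera video feed:
const hasWebRTC = !!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia)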

If a compatibility error is triggered, please post an issue on this repository. If it is a camera access error, please first retry after closing all applications which could be using your device's camera (Skype, Messenger, other browser tabs and windows, ...). Please include:

This library works almost everywhere, and it works very well on a high-end device like an iPhone X. But if your device is too cheap or too old, it will perform too few evaluations per second and the application will be slow.

Documentation

License

Apache 2.0. This application is free for both commercial and non-commercial use.

References

jeelizweboji's People

Contributors

bjlaa, compscikai, davemee, timsun28, xavierjs


jeelizweboji's Issues

Hardware requirements

Hello, thank you for the work you do on this library.
I would like to know the minimum hardware requirements.
Can it work on a PC with 4 GB of RAM, an Intel Atom x5-Z8350 processor and Windows 10?
Thanks

Separate texture files and new emotions

I would first like to thank you for this library! I have been trying to implement different kinds of models, but I'm not sure how I would go about adding multiple texture files to a model.

In my case these would be, for example, a skin.png, a teeth.png and an eye.png.

Would this be possible to add to the ThreeJeelizHelper.js?

I was also wondering if it would be possible to add different face 'emotions', for example sticking out your tongue or moving your eyeballs left/right/up/down. I'm not very experienced with training a model for this, but I was wondering if this could be added to the library, or if you could suggest some way for me to add this functionality to my app / your library?

Thank you in advance!

Is it possible to control the webojiCanvas (weboji) externally?

Hi,

We are trying to control the webojiCanvas (weboji) externally by separating the JEEFACETRANSFERAPI face-detection logic from the control of the weboji.

Our idea is to send the face-detection data remotely over the network and control the weboji on the other side.

However, we found that the control of the weboji is buried deep inside JEEFACETRANSFERAPI.

We want to ask if there is any way we can achieve what we want through this library, please?

If not, can you suggest what kind of tools, libraries or tutorials we should look for, please?

Thanks for your help!

Can I use an image as a source?

Hi, first of all, thanks for this amazing library!

I'm trying to make an app that tracks the face in an image source. What I did is draw the image to an HTMLCanvasElement, turn it into a MediaStream using .captureStream(), and update the source of the <video> linked to your lib with it.
It barely works: it only works for a very limited set of images, and it mostly fails even to begin tracking.
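
For reference, the approach described above looks roughly like this (image and videoElement stand for hypothetical, already-available elements):

// Draw a still image onto a canvas, then expose the canvas as a MediaStream.
const canvas = document.createElement('canvas')
canvas.width = image.naturalWidth
canvas.height = image.naturalHeight
canvas.getContext('2d').drawImage(image, 0, 0)

// captureStream() yields a MediaStream that can replace the camera feed:
videoElement.srcObject = canvas.captureStream(30) // 30 fps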

Is there a better way to track the face from a single image, which you would recommend? Please let me know.

Thanks!

Noise in AU's movement

Good day!

I successfully transferred jeeliz weboji to my project and my 3D models. But the movement of the morph targets in the model shows strong twitching (for example, the mouth and eyelids go wild), which means that the signal I get from the AUs is heavily noisy. I tried to fix it on my side by writing some filters. But maybe there is a way, or some configs on Jeeliz's side, to tune the AU noise?

Can I make a bounding box a circle?

Hello Jeeliz Team,

Thank you for the great job.

I want to make the shape of the bounding box of the canvas a circle. How to do so?

Regards

Camera box not appearing

Totally cool app!

I am running Chrome Version 67.0.3396.99 (Official Build) (64-bit) and have allowed access to the cam.

Nothing happens on the screen with either of your example demos. The cute fox: nothing but a background image with rounded lines on screen. The SVG appears in the other one, but nothing else. No camera box in the lower corner as shown in your example screenshot.

Is this for Mac/Apple only, or will it run on Windows?

Thanks.

Can I save this 3D animation video locally?

Hi, this looks cool and interesting. I can use any external video I'm interested in and convert it into a 3D animation. I want to know if I can save the converted animation locally instead of just opening the browser to watch it.

Have a question about how you handle changing emotion in real time.

I have a question related to this demo (jeelizWeboji/demos/threejs/raccoon).

I think that the 2 important roles of JEEFACETRANSFERAPI in this demo are:

  1. Calculate direction of the head in real-time just like
    const rotation = JEEFACETRANSFERAPI.get_rotationStabilized(); // in animate()
  2. Also calculate morph coefficients in real-time for morph blending.

I tried to find where those morph coefficients are calculated, but I could only find const morphTargetInfluencesDst = JEEFACETRANSFERAPI.get_morphTargetInfluencesStabilized(); // in successCallBack() of ThreeMorphAnimGeomBuilder().
And this, in contrast to the rotation computed in animate(), does not seem to calculate the coefficients in real time.
If you don't calculate those morph coefficients in real time, how do you track the client's face expression as it changes in real time?

I guess most of the calculations for the animation are done on the GPU using GLSL, so I want to know where your API extracts the information (morph coefficients?) required for morph animation from the video stream.

This demo works pretty well, so I must misunderstand something about morph animation or your API.

If you could help me untangle this problem, I really appreciate it!

Thank you for reading and sharing this awesome project!

Typo in Readme.md

Hello Jeeliz,

Just wanted to inform you that there is a typo in readme.md file
It should be indeed not indead.

Indead, Apple© owns the intellectual property of 3D animated foxes (but not on raccoons yet). Thank you for your understanding.

Nothing major.

Regards,
Abhilash

How to reduce CPU-GPU consumption

Hello @xavierjs ,
Thanks for the great lib. I'm using it for a personal project, but it consumes quite a lot of CPU and GPU. Are there any options to optimize this, even at the cost of frame rate or accuracy?

detach and reattach camera/canvas

Hello @xavierjs ,
hope you can help us again.
We need to be able to use the camera in other parts of our project. Is there a way to detach and reattach the camera to the JFT?
We are hoping you would be able to tell us how we could do this.

Many Thanks ,
Kei

A question about turtle example

Hello, first of all thank you very much for sharing this amazing project, it's incredible to see what you did with JavaScript 😊
I was looking at the other examples and I saw a turtle with a body, and of course I tried to see what a turtle with a body looks like, but I couldn't make it work. Sorry for my dumb question, but do you have instructions to convert corps_tortue.obj and its reference into JavaScript?
Thank you very much for your attention.

Jeeliz Exposure controller in Jeeliz Weboji for improving lighting

Hi,

Sorry for bothering you again.

I have checked your repo https://github.com/jeeliz/jeelizExposureController and I think it is really good. I have a suggestion: since Weboji needs good lighting conditions to perform well, have you tried adding the exposure controller to Weboji to improve expression results?

This may not work for the face filter, because it requires a webcam stream canvas/video behind the filter. But for Weboji it would improve the results under bad lighting conditions (backlight, low or uneven light). However, I believe the tracking effectiveness for the face would remain constant.

Have you already tried to integrate them? If not, is there a reason for not doing so?

I would love to try the same and share my experiences with you. Please guide.

Thanks and Regards,
Andy

Detecting a live person rather than a mobile video / static image

Is there any possibility that the face detection could reject detections from mobile videos / static printed pictures?

I am trying to use this awesome lib for liveness detection, and these are the hurdles I am facing.

One idea is to look for rectangular shapes around images through an edge-detection algorithm.

Thanks in advance

Is jeelizFaceTransfer open source ?

Hi,

First, congrats on this project, it's amazing!
I'd like to learn more about how jeelizFaceTransfer works, but the file is minified. Is this an open source project?
I am trying to make weboji work smoothly on mobile devices. The SVG demo is smooth on a high-end laptop but lags on mobile devices (really slow on a 2-year-old Samsung A5). It seems that the face detection is the main cause of those lags. Maybe something like Dlib (landmark detection) + FACS (facial action coding system) could improve expression detection?

Thanks

How to set the dimension and position of bounding box?

Hello Jeeliz Team,
Thanks again for creating this wonderful library!

I am building a project where the user generally sits in the middle with very little side rotation. I want to restrict the bounding box to a limited area rather than the full canvas. This would speed up the search for the face if tracking fails.
How can I accomplish this?

Regards,
Andy

Change position after change model

Hello,

thanks for sharing this project. I'm developing a function to let the user change the model (fox and girl).

I use load_weboji:

'load_weboji': function(modelURL, matParams, callback){
  return load_model(modelURL, false, function(){
    that.set_materialParameters(matParams);
    if (callback) callback();
  });
},

The issue is that I'm setting the init position to [0, -80, 0] and it works well, but when I change the model, the _ThreeMorphAnimMesh is removed from the scene and the position is reset to [0, 0, 0].

I tried set_positionScale([0, -80, 0], 1) but I get "Cannot read property 'fromArray' of undefined" on this line:

set_positionScale: function(vector3Array, scale){
  if (vector3Array){
    _ThreeMorphAnimMesh.position.fromArray(vector3Array);
  }
  if (typeof(scale) !== 'undefined'){
    _ThreeMorphAnimMesh.scale.set(-scale, scale, scale);
  }
},

Am I doing anything wrong? Did I miss something?

How to destroy?

jeelizFaceFilter has a 'native' destroy method; what is the correct way to do this for jeelizWeboji?

Intellectual claim on Webojis.com

Hey,

First of all, I am a big fan of your work.

I have some questions related to Apple's claim of intellectual property rights on webojis.com

I want to know if Apple has claimed full intellectual property rights on the implementation, i.e. no similar application (irrespective of the technology used) can be developed, or just on a few characters (e.g. foxes, raccoons, etc.). I wanted to develop an open source application but with different emojis, so I thought it best to consult you guys first.

I am really pissed off at the fact that they have claimed intellectual property rights on something that was a concept before their "implementation". I really can't believe that.

Looking forward to your reply.

Thank you in advance!

Capture frame

Is there a way to capture a frame? For example, if a button were on the page, clicking it would capture a frame as a JPEG and display it on the page so that it can be shared.

How to control springiness effect for custom 3d models?

In the ThreeJSHelper file, my assumption is that the rotationSpringCoeff and rotationAmortizationCoeff parameters are responsible for the "springiness" effect of the 3D model.

It works great for the fox model, but it doesn't look good for my custom 3D models, which I created using this tutorial. For my model I have set these parameters as follows to lower the springiness:
rotationSpringCoeff = 0.0005 and rotationAmortizationCoeff = 0.5

However, I want to use this effect on my 3D model while controlling the location and intensity of the effect, for example for the trunk and ears of an elephant. Are there any specific requirements, or code I have to change, to achieve this? I have found no information to continue from. Please guide me to the correct path.

Regards,
Andy

Are there plans for NPM package?

Hi! Are there any plans to set this up as an NPM package, like how you have it for FaceFilter? If not I'd be happy to help with a Pull Request and also help maintain it, as I intend to use this for a few different side projects (by the way, you did an amazing job with these libraries and keeping file size small!)

Is there any way to set a fixed position while _isDetected == false?

Hello,

Is there a way to set the values while the face is not yet detected? (e.g. morph_X = -0.5486651)

I have been able to modify the values and freeze the rotation of the waiting stance, but I would like the model to make random faces (smiling, getting angry, etc.) while no face is detected. However, the values are reset 1 ms after I change them...

How can I set these values and avoid the default values?

thanks in advance

Possible to get two emoji on the screen?

I'm playing around with an idea and trying to get two different raccoons on the screen (from the THREE.js demo), parroting expressions back to each other. I've been looking through the code but haven't found a good spot to instantiate the second mesh. There always seems to be some code referring to the first mesh in a global way.

Any suggestions? Places you can recommend I look?

Get coordinates of the face

Hi, guys, me again
In the app I am working on, I need to get a picture of the face for each gesture the user makes. For that I need the x, y, width and height coordinates, like jeelizFaceFilter provides with detectState.
I saw that the method get_positionScale() was added (#5), but I cannot get the data correctly and I do not know if it will give me what I need. Could you tell me if that is the way to get that data (x, y, w, h) with JEEFACETRANSFERAPI?

Thank you!

Error in device with WebGL2

Hi @xavierjs, me again :)
I am doing tests and I have encountered a special case. The Samsung A5 from 2016 ... which has WebGL2, throws the following error:
ERROR in ContextFeedForward: Not enough capabilities
Why does this fail? Is there any way to catch that error with a browser or device test?
Thanks!!

Error when not running on localhost

When I try to run a basic script on a server, I get this error.

Screenshot_20201005_202620

But it works fine on localhost. I don't see what the difference is.

Tested on Firefox 81.0 (Linux) and Chrome 83.0.4103.116 (Linux)

Only head tracking (orientation)?

So far, only the jeelizWeboji demos have worked for me (the other demos were not able to detect my head). All I need is head tracking, with only its orientation (no face expression, emotion, etc.). Could a simpler (and maybe more robust) model be developed that would require even less CPU resources?

Safari Version 13.0 (15608.1.24.40.4) Error

Hi @xavierjs!

I'm experimenting with JEELIZFACETRANSFER today and received this error when previewing in Safari:
[Log] ERROR in ContextFeedForward : – "Your configuration cannot process color buffer float" (jeelizFaceTransfer.js, line 45)

[Log] ERROR in JeelizSVGHelper - CANNOT INITIALIZE JEEFACETRANSFERAPI : errCode = – "GL_INCOMPATIBLE" (JeelizSVGHelper.js, line 34)

I tested Safari on both https://webglreport.com/?v=2 and https://webglreport.com/?v=1.

Any ideas what the issue might be? Let me know if I can send you anything.

Thanks!

running a function if face detected

Hi, how can I run a function when a face is detected / not detected? I can't seem to find any existing functions in the source file controlling the detection. Is there anything I should be targeting?

Hide blue box in JEEFACETRANSFERAPI

Hi guys,

I congratulate you on the project, it's amazing!

I have a question: is it possible to remove the blue box that appears on the face in jeelizFaceTransfer.js?
I need to take a screenshot (canvasScreen.getContext('2d').drawImage()...) but I do not want to show that frame.

The quick solution I came up with is to create another canvas context with the video and hide the instance with JEEFACETRANSFERAPI.switch_displayVideo(false);
But I do not know if it is the best solution.
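
For reference, a sketch of the workaround described above (videoElement and screenshotCanvas stand for hypothetical, already-available elements):

// Hide the library's video rendering (which includes the blue box):
JEEFACETRANSFERAPI.switch_displayVideo(false);

// Draw the raw camera <video> onto a separate canvas for the screenshot:
const ctx = screenshotCanvas.getContext('2d');
ctx.drawImage(videoElement, 0, 0, screenshotCanvas.width, screenshotCanvas.height);
const jpegDataURL = screenshotCanvas.toDataURL('image/jpeg');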

Thanks!!

Can I control source media stream?

Hi. It looks like very useful app.

I think that when I use it, I will want to control the internals of jeelizFaceTransfer.js: for example, controlling the timing of getUserMedia(), using a remote media stream, or using a recorded video of my face.
But I can't judge whether these are possible because its source code isn't public.
How should I control it? Or, as another solution, could I see the source code of jeelizFaceTransfer.js?

thanks

How to get translation values?

Sir,
I want to get the value of how much the user has moved from the center of the screen (i.e. the translation (XYZ) of the face). I want to move the model just as the user moves. Even the relative change of the blue square would work.

I feel that the jeelizFaceTransfer.js file is responsible for this, but it is too complex for me to understand; it looks machine-generated. I have found no functions that may help me.
Please tell me how to get the translation values.

Expected Performance on Mobile?

Reading Issue #26 - #26

"It should work nicely with a Samsung S7..."

When I run the Raccoon demo on a Samsung Note 10, I am only getting around 5-12 FPS with Chrome, and there aren't any errors in the console. I'm assuming I should be getting better performance if it works nicely on a phone three generations older?

Getting error while using file jeelizFaceTransfer.js

I am using the file jeelizFaceTransfer.js in my project to call a function, but I am getting the error below in the console:

(error screenshot)

Steps to reproduce:

  1. Download the project from https://github.com/abhilash26/sit-straight
  2. Run index.html in Chrome

It will ask for camera permission; click OK.
The camera then does not open: a black screen with the loading logo is displayed continuously, and the console displays an error.

I also tried to run this project in Internet Explorer; there the camera opens and works fine.

NOTE: The issue is with browser compatibility; it is not working with Chrome/Firefox.

WebGL1 screenshot: (screenshot)

WebGL2 screenshot: (screenshot)

Console output: (screenshot)

New demos from abhilash26

First of all thank you Jeeliz for this amazing piece of code.

I have created a minified reusable version of this repository, Face Primer, for my own use, since I want to use your technology in various applications. I hope this will help others who just want to use your AI model for face detection.
I have previously contacted you with my work account and got great feedback from you on many issues. I wanted to repay you by sending you a demo of what I was building, but it was a non-disclosed client project. This is my home account. I will surely make something wonderful and share a demo with you.

I have a few questions.

  1. Did you use tensorflow to make the model?
  2. How can we improve the model with more sample images?

Some suggestions for improving the animation, which I have used while working on the client project (sketched after this list):

  1. I have noticed that the influence (expression) values have very high precision, which works against the animation because the model seems to move more than we desire. I solved the issue with the .toFixed() function.
  2. Since the eyes generally close and open at the same time (except for winking), we can average the values and use the same one for both eyes; the same goes for the eyebrows. So a function that computes a mean value for them would be great.
  3. I created a mean-displacement function to calculate the change between the last frame and the current frame using the change in rotation and influence values. If the mean displacement is below a certain threshold, we do not update the weboji. This reduces stutter.
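
For reference, suggestions 1 and 2 might look roughly like this (influences stands for the morph influence array, with indices as in the README table):

// 1. Limit precision to damp micro-movements:
const rounded = influences.map(v => Number(v.toFixed(2)))

// 2. Average both eyes (indices 8 and 9 are eyeRightClose / eyeLeftClose):
const eyesClose = (rounded[8] + rounded[9]) / 2
rounded[8] = eyesClose
rounded[9] = eyesClose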

Error on iOS / Cordova builds

Hi,

We're using this library with great success on Android and macOS Safari, but when we bundle it as part of a Cordova application on iOS we get the following error:

WebGL: INVALID_OPERATION: texImage2D: type HALF_FLOAT_OES but ArrayBufferView is not NULL (k - jeelizFaceTransfer.js:86:267)

This is when testing on an iPad 3 running iOS 9.3.5 (13G36).

Does this error ring a bell?

Get the capacities of devices

Hi @xavierjs,
A question: do you have some helpers with which I can check whether a device has the capabilities to work with the libraries?
The goal is to avoid loading Jeeliz if the device lacks the capabilities, and to look for an alternative instead.
Thanks!
