victordibia / handtrack.js
A library for prototyping realtime hand detection (bounding box), directly in the browser.
Home Page: https://victordibia.com/handtrack.js/
License: MIT License
Chrome Version 81.0.4044.138 (Official Build) (64-bit)
OSX Catalina 10.15.2 (19C57)
pong.js:112 766 1440
planck-with-testbed.js:13564 On load.
planck-with-testbed.js:13193 Starting...
planck-with-testbed.js:13205 Loading apps...
planck-with-testbed.js:13156 Creating app...
planck-with-testbed.js:13592 Creating Canvas...
planck-with-testbed.js:13617 Creating stage...
planck-with-testbed.js:13159 Initing app...
planck-with-testbed.js:13165 Starting app...
planck-with-testbed.js:13662 Resize: 2880 x 1532 / 2
pong.js:205 ready!
handtrack.min.js:25 Uncaught (in promise) TypeError: Cannot read property 'getUserMedia' of undefined
at handtrack.min.js:25
at new Promise ()
at Object.startVideo (handtrack.min.js:25)
at startVideo (pong.js:30)
at toggleVideo (pong.js:45)
at HTMLButtonElement. (pong.js:57)
planck-with-testbed.js:13662 Resize: 1362 x 1532 / 2
I tried to train a smaller ssdmobilenetv1 model and convert it into tfjs_graph_model format, but how can I use it in the project? Is it necessary to modify the handtrack.min.js code? I ask because I found basePath and defaultParams variables in it. Any help would be greatly appreciated!
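For anyone with the same question, here is a sketch of what pointing the library at a locally hosted converted model might look like. Note: `basePath` is the variable the reporter found in the minified source, not a documented option, so treat this as an assumption to verify against handtrack.min.js before relying on it:

```javascript
// Hypothetical sketch: params for pointing handtrack.js at a locally hosted
// converted model. `basePath` comes from the reporter's reading of the minified
// code and is NOT a documented option; verify against your installed version.
const modelParams = {
  basePath: "/models/my_ssd_mobilenet/", // folder containing model.json + shard files
  scoreThreshold: 0.6,                   // keep the usual detection params
};

// In the page you would then load as usual (browser-only, shown as a comment):
// handTrack.load(modelParams).then((model) => console.log("custom model loaded"));

console.log(modelParams.basePath);
```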
Thought I would add the following codepen demo as a reference:
Hi! This is such an awesome project!
I was wondering how hard would it be to add hand rotation tracking?
My use case is turning virtual knobs in the air for live music/video applications.
Thanks!
I tried cloning the repo just to check out the demo - both index and pong break with seemingly the same issue:
Uncaught ReferenceError: require is not defined handtrack.min.js:2464:294
<anonymous> https://cdn.jsdelivr.net/npm/handtrackjs@latest/dist/handtrack.min.js:2464
<anonymous> https://cdn.jsdelivr.net/npm/handtrackjs@latest/dist/handtrack.min.js:1
<anonymous> https://cdn.jsdelivr.net/npm/handtrackjs@latest/dist/handtrack.min.js:1
and
Uncaught TypeError: handTrack.load is not a function
<anonymous> http://localhost:3005/js/index.js:86
If I switch to loading from <script src="lib/handtrack.js"> </script>
instead of JSDelivr everything runs fine. But loading from <script src="handtrack.min.js"> </script>
produces the same error. (seems like the libs folder still has the older version from 2018).
The same thing happens when trying to run this example https://github.com/victordibia/handtrack.js#import-via-script-tag :
Uncaught ReferenceError: require is not defined handtrack.min.js:2464:294
<anonymous> https://cdn.jsdelivr.net/npm/handtrackjs@latest/dist/handtrack.min.js:2464
<anonymous> https://cdn.jsdelivr.net/npm/handtrackjs@latest/dist/handtrack.min.js:1
<anonymous> https://cdn.jsdelivr.net/npm/handtrackjs@latest/dist/handtrack.min.js:1
and
Uncaught SyntaxError: await is only valid in async functions and async generators index.html:21:22
The offending line in index.html, from the example, is const predictions = await model.detect(img);
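The SyntaxError above is just JavaScript's rule that await is only legal inside an async function; in a classic script tag there is no top-level await. A minimal sketch of the fix, where `model` is a stand-in for the object handTrack.load() resolves with:

```javascript
// Sketch: wrap the `await` call in an async IIFE. `model` here is a fake
// stand-in for the loaded handtrack model; its detect() just resolves with
// a predictions array, like the real one does.
const model = {
  detect: (img) => Promise.resolve([{ class: "hand", score: 0.9 }]),
};

(async () => {
  const predictions = await model.detect("img"); // legal: inside an async function
  console.log(predictions.length); // 1
})();
```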
In the prediction, I got:
{bbox: Array(4), class: 0, score: 0.985114336013794}
Per documentation, I would expect "hand", correct?
Thanks for your time.
$ npm start
[email protected] start /private/tmp/handtrack.js
cross-env NODE_ENV=development parcel demo/index.html --open
events.js:167
throw er; // Unhandled 'error' event
^
Error: spawn parcel ENOENT
at Process.ChildProcess._handle.onexit (internal/child_process.js:232:19)
at onErrorNT (internal/child_process.js:407:16)
at process._tickCallback (internal/process/next_tick.js:63:19)
at Function.Module.runMain (internal/modules/cjs/loader.js:744:11)
at startup (internal/bootstrap/node.js:285:19)
at bootstrapNodeJSCore (internal/bootstrap/node.js:739:3)
Emitted 'error' event at:
at Process.ChildProcess._handle.onexit (internal/child_process.js:238:12)
at onErrorNT (internal/child_process.js:407:16)
[... lines matching original stack trace ...]
at bootstrapNodeJSCore (internal/bootstrap/node.js:739:3)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] start: cross-env NODE_ENV=development parcel demo/index.html --open
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /Users/abib/.npm/_logs/2019-03-13T10_17_17_633Z-debug.log
I get the error:
Uncaught (in promise) TypeError: navigator.mediaDevices is undefined
startVideo https://cdn.jsdelivr.net/npm/handtrackjs/dist/handtrack.min.js:25
startVideo https://cdn.jsdelivr.net/npm/handtrackjs/dist/handtrack.min.js:25
Using the following code:
<div id="main">
  <div class="knob selected">0</div>
</div>
<video class="" autoplay="autoplay" id="video_tag" style="width: 300px; height: 300px; background-color: rgb(155,255,122);"></video>
<script src="https://cdn.jsdelivr.net/npm/handtrackjs/dist/handtrack.min.js"> </script>
<script>
  let tracking_webcam = false;
  const video_tag = document.getElementById('video_tag')

  const run_detection = (model, video) => {
    model.detect(video).then(predictions => {
      console.log('Predictions: ', predictions);
    });
  }

  const track_webcam = (model, video) => {
    handTrack.startVideo(video).then(status => {
      console.log('status trying to start video: ', status)
      if (status) {
        tracking_webcam = true;
        run_detection(model, video)
      }
    })
  }

  // Load the model.
  console.log('Loading handtrack model')
  handTrack.load().then(model => {
    console.log('Handtrack model')
    track_webcam(model, video_tag)
  });
</script>
The error is thrown at the following line:
handTrack.startVideo(video).then(statu...
I'm unsure what I'm doing wrong here.
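A likely cause (an assumption, since the reporter's serving setup isn't shown): navigator.mediaDevices is only exposed in secure contexts, so a page served over plain http (anything other than https or localhost) sees it as undefined, which matches the error above. A small guard, written as a pure function over a navigator-like object so the logic can be exercised outside a browser:

```javascript
// Sketch: navigator.mediaDevices only exists in secure contexts (https or
// localhost). Check before calling handTrack.startVideo so the failure mode
// is a clear message rather than a TypeError inside the library.
function canStartVideo(nav) {
  return Boolean(nav && nav.mediaDevices && typeof nav.mediaDevices.getUserMedia === "function");
}

// In the page you would guard like this (browser-only, shown as a comment):
// if (!canStartVideo(navigator)) console.warn("serve the page over https or localhost");

console.log(canStartVideo({}));                                           // false
console.log(canStartVideo({ mediaDevices: { getUserMedia: () => {} } })); // true
```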
I have 2 different Vue projects which both use handtrack.js. The components that use handtrack.js are identical. In the newer project I get the following error:
Uncaught (in promise) TypeError: Cannot read property 'node' of undefined
at e.evaluateFeature (webpack-internal:///./node_modules/@tensorflow/tfjs-core/dist/tf-core.esm.js:250)
at e.get (webpack-internal:///./node_modules/@tensorflow/tfjs-core/dist/tf-core.esm.js:250)
at e.register (webpack-internal:///./node_modules/@tensorflow/tfjs-core/dist/tf-core.esm.js:250)
at e.registerTensor (webpack-internal:///./node_modules/@tensorflow/tfjs-core/dist/tf-core.esm.js:250)
at new e (webpack-internal:///./node_modules/@tensorflow/tfjs-core/dist/tf-core.esm.js:250)
at Function.e.make (webpack-internal:///./node_modules/@tensorflow/tfjs-core/dist/tf-core.esm.js:250)
at tensor (webpack-internal:///./node_modules/@tensorflow/tfjs-core/dist/tf-core.esm.js:250)
at Module.tensor2d (webpack-internal:///./node_modules/@tensorflow/tfjs-core/dist/tf-core.esm.js:250)
at eval (webpack-internal:///./node_modules/handtrackjs/src/index.js:131)
at eval (webpack-internal:///./node_modules/@tensorflow/tfjs-core/dist/tf-core.esm.js:250)
Whereas in the old project everything works as expected. Both projects use the same handtrackjs version + same Vue version. Is there any hint what might cause this behaviour?
Hey. I am facing the following issue during conversion of a frozen graph model to the .js format.
In your (awesome) blog you mentioned that
I followed the suggestion by authors of the Tensorflow coco-ssd example [2] in removing the post-processing part of the object detection model graph during conversion.
So I went to that link and executed the command in the readme file :
tensorflowjs_converter --input_format=tf_frozen_model \
--output_format=tfjs_graph_model \
--output_node_names='Postprocessor/ExpandDims_1,Postprocessor/Slice' \
./frozen_inference_graph.pb \
./web_model
But with my Faster R-CNN Inception v2 model I am getting this error:
KeyError: "The name 'Postprocessor/Slice' refers to an Operation not in the graph."
And when I am trying to convert SSD_MobileNet_V2, I am getting this error:
ValueError: Unsupported Ops in the model before optimization
NonMaxSuppressionV5
tensorflowjs version 1.3.2
tensorflow version 1.15.0
Any help would be much, much appreciated.
Thank you.
Hi, is there a method to detect only one hand, or to get a unique id per hand? Thanks.
Hi,
Could you please change the main property in the package.json to point to the file in the dist directory rather than src, so that this project compiles properly when used from npm?
Thanks
It seems like I can't use it with import * as handTrack from
I am getting this error: handTrack.load is not a function
When I use traditional script loading it works, but as soon as I load via <script type="module"> I get the above error.
Hi,
at https://victordibia.github.io/handtrack.js/#/doodle on Chrome, I get the JS error below when clicking on Start Video doodle:
react-dom.production.min.js:232 Uncaught TypeError: Cannot read property 'then' of undefined
at t.value (Doodle.jsx:164)
at Object.<anonymous> (react-dom.production.min.js:49)
at p (react-dom.production.min.js:69)
at react-dom.production.min.js:73
at S (react-dom.production.min.js:140)
at T (react-dom.production.min.js:169)
at N (react-dom.production.min.js:158)
at D (react-dom.production.min.js:232)
at En (react-dom.production.min.js:1718)
at Is (react-dom.production.min.js:5990)
Same thing for https://victordibia.github.io/handtrack.js/#/ when clicking on the Start Video button.
I am trying to integrate A-Frame with handtrack.js but have not been successful.
Here's the code: https://woolen-windy-legal.glitch.me
When I check the console, I get this error:
Uncaught (in promise) TypeError: Cannot read property 'height' of null
at getInputTensorDimensions (handtrack.min.js:25)
at ObjectDetection.detect (handtrack.min.js:42)
at runDetection (aframe.html:49)
at aframe.html:60
Please help me fix this!
I'm trying to play around with the library and would like to draw some objects on it.
Hi,
in the source: https://github.com/victordibia/handtrack.js/blob/master/src/index.js
you put this on line 84:
video.style.height = "20px";
Why is that?
This makes using the webcam impossible, no?
could you explain why this is there, please?
thanks!
Regards
Mario
I'm having some issues properly setting the dimensions of the video stream for live detection.
I can correctly set the width and height of the video element, though its aspect ratio always stays at 3:4.
I saw that there is a commit for this to support the Safari browser, but I'm not using Safari and would actually prefer a 16:9 resolution. Is that expected behaviour, or am I missing something?
First, thank you for creating this library.
I was wondering about something while reading the README.
For maxNumBoxes: does this parameter mean the number of boxes detected at one time, or a cumulative maximum? (If I put 20 in maxNumBoxes, will the model be deleted after loading 20 boxes?)
Thank you!
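My reading, for what it's worth (an assumption based on how detection thresholds usually work, not confirmed from the source): maxNumBoxes caps how many boxes a single detect() call returns, i.e. a per-frame limit; nothing is unloaded after 20 detections. As a config sketch:

```javascript
// Sketch: maxNumBoxes caps detections PER detect() call (per frame),
// not a lifetime total; the model is not deleted after N boxes.
// This interpretation is an assumption to verify against the README.
const modelParams = {
  maxNumBoxes: 20,     // at most 20 boxes returned from each detect() call
  iouThreshold: 0.5,   // non-max-suppression overlap threshold
  scoreThreshold: 0.6, // minimum confidence for a returned box
};
console.log(modelParams.maxNumBoxes); // 20
```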
Hello. Whenever I install the handtrackjs package in Node.js, and run my project, I get the following error:
(node:2780) ExperimentalWarning: The ESM module loader is experimental.
/Users/me/Documents/Projects/MCH/node_modules/handtrackjs/src/index.js:11
import * as tf from '@tensorflow/tfjs';
^^^^^^
SyntaxError: Cannot use import statement outside a module
at wrapSafe (internal/modules/cjs/loader.js:1053:16)
at Module._compile (internal/modules/cjs/loader.js:1101:27)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1157:10)
at Module.load (internal/modules/cjs/loader.js:985:32)
at Function.Module._load (internal/modules/cjs/loader.js:878:14)
at ModuleWrap.<anonymous> (internal/modules/esm/translators.js:155:15)
at ModuleJob.run (internal/modules/esm/module_job.js:139:37)
at async Loader.import (internal/modules/esm/loader.js:179:24)
I think the package could use a transition to CommonJS (or maybe it's something on my end)
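Since the package's src entry point uses import syntax, one workaround from CommonJS is dynamic import(), which can load ES modules where require() cannot. In this sketch "node:os" stands in for the package so the snippet is self-contained; using it for handtrackjs assumes the package ships valid ESM, which is exactly what's in question here:

```javascript
// Sketch: CommonJS cannot require() an ES module, but dynamic import() works.
// "node:os" is a stand-in module here; in the real project you would call
// loadEsmPackage("handtrackjs") (assuming the package resolves as valid ESM).
async function loadEsmPackage(name) {
  const mod = await import(name);
  return mod;
}

loadEsmPackage("node:os").then((os) => {
  console.log(typeof os.platform()); // "string"
});
```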
Hello everyone,
we are creating a simple gesture app that will understand when hands move from left to right and vice versa.
The problem we are finding is that if the user's camera is pointed at their face, the default config always detects the face, and a hand held over the face is never recognized.
Is there any way to configure handtrack.js to ignore faces and only report hands?
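One approach that doesn't depend on any config flag (I'm not certain the library exposes one, so this is the safe sketch): filter the predictions after detect() by label. The label strings below follow the handtrack.js prediction format, which is an assumption to verify against your installed version:

```javascript
// Sketch: drop face predictions after detection, keeping only hand labels.
// Label strings ("face", "open", "closed", ...) are assumed from the
// handtrack.js prediction format; check them against your version's output.
function withoutFaces(predictions) {
  return predictions.filter((p) => p.label !== "face");
}

const sample = [
  { label: "face", score: 0.97 },
  { label: "open", score: 0.91 },
];
console.log(withoutFaces(sample).length); // 1
```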
Regards
Hi, I was trying to convert your frozen model (SSD hand tracking) to JSON format.
I'm using the tensorflowjs_converter tool; I tried converting and got model.json and group1-shard files as output. But in handtrack.js you have generated tensorflow_model.pb, weights_manifest.json and shard files.
Can you guide me on what you did during conversion?
Hi author,
I'm trying to detect hands from two cameras in my conference app, as in the image below, but the result always returns null. Could you help me? Thanks.
Here is my code:
const largeVideo = ((document.getElementById('largeVideo'): any): HTMLVideoElement);
const localVideo = ((document.getElementById('localVideo_container'): any): HTMLVideoElement);

// Load the model.
const defaultParams = {
  flipHorizontal: false,
  modelType: "ssd640fpnlite",
  modelSize: "medium",
};

handTrack.load(defaultParams).then(model => {
  console.log("model loaded");
  setTimeout(() => { clearInterval(checkHighFive); }, 6000);
  checkHighFive = setInterval(() => {
    let localHand = false;
    model.detect(localVideo).then(predictions => {
      console.log('Predictions localVideo: ', predictions);
      if (predictions.length > 0 && predictions[0].label == "open") {
        localHand = true;
      }
    });
    model.detect(largeVideo).then(predictions => {
      console.log('Predictions remoteVideo: ', predictions);
      if (predictions.length > 0 && predictions[0].label == "open" && localHand) {
        this.props.dispatch(highFive(true));
        setTimeout(() => { this.props.dispatch(highFive(false)); }, 4000);
        this.props.conference.sendCommandOnce('HIGH_FIVE', { value: _participant.id });
        clearInterval(checkHighFive);
      }
    });
  }, 500);
})
How can I use the API in a web-based Python application? Do I include the JS source code in the app's HTML and then directly import it with load() in the Python code, or what?
I tried installing a fresh node environment:
twn$ nodeenv --prebuilt nodeenv
* Install node (15.5.1)... done.
twn$ source nodeenv/bin/activate
(nodeenv)twn$ npm install
added 380 packages, and audited 381 packages in 4s
4 vulnerabilities (3 low, 1 high)
To address all issues, run:
npm audit fix
Run `npm audit` for details.
(nodeenv)twn$ npm install --global
added 1 package, and audited 3 packages in 472ms
found 0 vulnerabilities
Then I tried to run build:
(nodeenv)twn$ npm run build
> [email protected] build
> rollup -c
sh: 1: rollup: not found
npm ERR! code 127
npm ERR! path /home/twn/repos/github.com/victordibia/handtrack.js
npm ERR! command failed
npm ERR! command sh -c rollup -c
npm ERR! A complete log of this run can be found in:
npm ERR! /home/twn/.npm/_logs/2021-01-07T20_53_32_648Z-debug.log
Okay, so I tried installing rollup:
(nodeenv)twn$ npm install rollup
npm WARN deprecated [email protected]: "Please update to latest v2.3 or v2.2"
added 2 packages, and audited 383 packages in 1s
4 vulnerabilities (3 low, 1 high)
To address all issues, run:
npm audit fix
Run `npm audit` for details.
(nodeenv)twn$ npm install --global rollup
npm WARN deprecated [email protected]: "Please update to latest v2.3 or v2.2"
added 2 packages, and audited 3 packages in 509ms
found 0 vulnerabilities
Then I tried running build again:
(nodeenv)twn$ npm run build
> [email protected] build
> rollup -c
src/index.js → dist/handtrack.min.js, demo/handtrack.min.js...
(!) Use of eval is strongly discouraged
https://rollupjs.org/guide/en/#avoiding-eval
node_modules/@tensorflow/tfjs-converter/dist/tf-converter.esm.js
15: * =============================================================================
[ ... removed ... ]
18: //# sourceMappingURL=tf-converter.esm.js.map
[!] (plugin babel-minify) TypeError: Banner must be a valid comment.
TypeError: Banner must be a valid comment.
at PluginPass.Program (/home/twn/repos/github.com/victordibia/handtrack.js/node_modules/@comandeer/babel-plugin-banner/dist/babel-plugin-banner.js:2:476)
at newFn (/home/twn/repos/github.com/victordibia/handtrack.js/node_modules/@babel/traverse/lib/visitors.js:193:21)
at NodePath._call (/home/twn/repos/github.com/victordibia/handtrack.js/node_modules/@babel/traverse/lib/path/context.js:53:20)
at NodePath.call (/home/twn/repos/github.com/victordibia/handtrack.js/node_modules/@babel/traverse/lib/path/context.js:40:17)
at NodePath.visit (/home/twn/repos/github.com/victordibia/handtrack.js/node_modules/@babel/traverse/lib/path/context.js:88:12)
at TraversalContext.visitQueue (/home/twn/repos/github.com/victordibia/handtrack.js/node_modules/@babel/traverse/lib/context.js:118:16)
at TraversalContext.visitSingle (/home/twn/repos/github.com/victordibia/handtrack.js/node_modules/@babel/traverse/lib/context.js:90:19)
at TraversalContext.visit (/home/twn/repos/github.com/victordibia/handtrack.js/node_modules/@babel/traverse/lib/context.js:146:19)
at Function.traverse.node (/home/twn/repos/github.com/victordibia/handtrack.js/node_modules/@babel/traverse/lib/index.js:94:17)
at traverse (/home/twn/repos/github.com/victordibia/handtrack.js/node_modules/@babel/traverse/lib/index.js:76:12)
npm ERR! code 1
npm ERR! path /home/twn/repos/github.com/victordibia/handtrack.js
npm ERR! command failed
npm ERR! command sh -c rollup -c
npm ERR! A complete log of this run can be found in:
npm ERR! /home/twn/.npm/_logs/2021-01-07T20_53_59_766Z-debug.log
What am I missing here? I'm sure this is something basic and obvious for those a bit more experienced with node, but I'm a little lost. By the way, I'm running
Thanks for any help! And thanks for the super interesting project!
Hello,
I want to create a hand gesture based texture changing WebAR part using THREE.js
I used handtrack.js, but when I wrote the detection code the phone camera kept getting stuck. Maybe the phone is not responding, or the code is not producing the expected output.
I would be glad to get a response if anyone has worked on this.
Thanks
When I tried to use this demo on my Microsoft Surface computer, the webcam detection was working with the front-facing webcam instead of the rear-facing webcam, so it was facing away from my keyboard. Could this demo be modified to allow switching webcams, so that it will work correctly on computers with more than one webcam?
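A sketch of how camera selection is usually done: pass a facingMode constraint to getUserMedia ("environment" for the rear camera, "user" for the front). Whether handtrack.js's startVideo forwards custom constraints is an assumption to check against the library; the constraint object itself is standard getUserMedia:

```javascript
// Sketch: build getUserMedia constraints that prefer a given camera.
// "environment" = rear camera, "user" = front camera. With plain getUserMedia
// you would call navigator.mediaDevices.getUserMedia(cameraConstraints("environment"));
// whether handtrack.js startVideo accepts such constraints is an assumption.
function cameraConstraints(facing) {
  return { video: { facingMode: { ideal: facing } }, audio: false };
}

const rear = cameraConstraints("environment");
console.log(rear.video.facingMode.ideal); // "environment"
```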
When trying to use the basic example on iOS Safari (iPhone 11 Pro) and toggling video, it either shows all black, or shows the camera's first frame and then freezes.
To rule out detection, I stopped runDetection() from running inside the startVideo() callback when the camera is enabled, and the same behaviour occurred.
I have tried to work with handtrack.js, but unfortunately handTrack.load does not return the model object.
Adding issue templates or forms for bug reports, features, documentation updates, etc. can help to ensure that all the necessary information is collected from contributors and, in some cases, can lead to answers and closed issues.
This guide explains how to do it, and if you're interested in seeing forms in action, you can check out our issue forms.
Currently the handdetector.js module downloads all results and processes per-class scores in JS before going back to tfjs. This causes large data transfers from the tfjs backend (either from the GPU when using the webgl backend, or from within the WebAssembly context when using the wasm backend), and then recreates the required tensor and uploads it back to the tfjs backend. That is a double unnecessary round-trip for the data, plus a CPU-intensive processing loop in JS.
A much more efficient approach is to handle as much processing as possible in tfjs and only download the final results (not to mention the code is much shorter):
async function detectHands(input, outputSize = [1, 1]) {
  // executeAsync is required because the graph contains dynamic ops;
  // tf.tidy cannot wrap async code, so intermediates are disposed manually below
  const [rawScores, rawBoxes] = await model.executeAsync(input, modelOutputNodes);
  const boxes = tf.squeeze(rawBoxes, [0, 2]); // remove zero-dims
  const scores = tf.squeeze(rawScores, [0]); // remove zero-dims
  const classScores = tf.unstack(scores, 1); // split all-scores into individual per-class scores
  const hands = [];
  // now let's process data once per class
  // could also be used to process only some classes, e.g. skip face if desired
  for (let i = 0; i < classScores.length; i++) {
    // get best results for each class
    const nmsT = await tf.image.nonMaxSuppressionAsync(boxes, classScores[i], maxDetected, iouThreshold, minConfidence);
    const nms = await nmsT.data();
    nmsT.dispose();
    for (const res of nms) { // now process only those best results
      const boxSlice = tf.slice(boxes, res, 1); // download just the box we need
      const yxBox = await boxSlice.data();
      boxSlice.dispose();
      // convert [y1,x1,y2,x2] to [x,y,width,height]
      const boxRaw = [yxBox[1], yxBox[0], yxBox[3] - yxBox[1], yxBox[2] - yxBox[0]];
      // scale back to original resolution
      const box = [Math.trunc(boxRaw[0] * outputSize[0]), Math.trunc(boxRaw[1] * outputSize[1]), Math.trunc(boxRaw[2] * outputSize[0]), Math.trunc(boxRaw[3] * outputSize[1])];
      const scoreSlice = tf.slice(classScores[i], res, 1); // download just the score we need
      const score = await scoreSlice.data();
      scoreSlice.dispose();
      hands.push({ score: score[0], box, boxRaw, class: classes[i] });
    }
  }
  tf.dispose([rawScores, rawBoxes, boxes, scores, ...classScores]);
  return hands;
}
I'd post this as a PR, but it's a total rework of the library, so I'm posting here instead. HTH.
The library runs fine on my laptop. However, when I run the online demo (even on the provided images) on my mobile device (Samsung Galaxy S10e), I get a lot of false positives whose confidences are 0.999.
Do you know what is happening ?
As the title asks: is hand detection built in via the coco-ssd model, or should we try to introduce our own model?
Thanks!
Hi!
I am using handtrack.js, which works well; however, after a few minutes of use my browser freezes. I can't do anything with my computer except forcibly turning it off with the power button.
In my browser I have the following warning: WebGL warning: getBufferSubData: Reading from a buffer with usage other than *_READ causes pipeline stalls. Copy through a STREAM_READ buffer.
It refers to the file handtrack.min.js, which I think is the origin of my problem.
Do you know if there's a solution for these repeated freezes? I saw there's already a fixed issue in the TensorFlow.js project. Does handtrack.js ship a recent version of TensorFlow.js?
I'm blocked in my work. Thanks :)
Hey, thanks for making this awesome project opensource.
I tried the demo link from
https://victordibia.com/handtrack.js/#/
I get 12-13 FPS on my laptop
but only 6 FPS on my mobile (Chrome v92, Android v10).
Can anyone please explain why that is, and whether there is any way to improve the FPS on mobile?
Hey, is this project able to detect fingers or just hands? If yes, how?
Thanks!
I am copy-pasting the entire code from your CodePen, but it is not working on my system.
Hi, I want to make an AR demo for myself using hand tracking. The project in my mind is like the Iron Man hand cannon: a 3D object rendered on my hand that tracks it. How can I do that? Can someone explain or help me with it? Thank you.
It would be helpful to know how this software can be used, i.e. whether it is licensed for commercial use or not. Here are some instructions on how to do it:
https://docs.github.com/en/github/building-a-strong-community/adding-a-license-to-a-repository
this is my code:
navigator.getUserMedia =
  navigator.getUserMedia ||
  navigator.webkitGetUserMedia ||
  navigator.mozGetUserMedia ||
  navigator.msGetUserMedia;

const modelParams = {
  flipHorizontal: true,   // flip e.g. for video
  imageScaleFactor: 0.7,  // reduce input image size for gains in speed
  maxNumBoxes: 20,        // maximum number of boxes to detect
  iouThreshold: 0.5,      // IoU threshold for non-max suppression
  scoreThreshold: 0.79,   // confidence threshold for predictions
};

const video = document.querySelector("#video");
const audio = document.querySelector("#audio");
const canvas = document.querySelector("canvas");
const context = canvas.getContext("2d");
let model;

handTrack.startVideo(video)
  .then((status) => {
    if (status) {
      navigator.getUserMedia(
        { video: {} },
        (stream) => {
          video.srcObject = stream;
          setInterval(doTheDetection, 1000);
        },
        (err) => console.log(err)
      );
    }
  });

function doTheDetection() {
  model.detect(video)
    .then(pred => {
      console.log(pred);
    });
}

handTrack.load(modelParams).then((lmodel) => {
  model = lmodel;
});
this is the result:
https://github.com/tensorflow/tfjs/releases
tfjs recently deprecated the methods loadFrozenModel (as well as using .pb models) and fromPixels. The model should be converted to .json format.
Whenever I move my hand up or down, I want to press the up key using the hand tracking method.
Can you please provide sample code to achieve that?
Thanks
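A minimal sketch of one way to do this (my own approach, not code from the library): compare the vertical center of the hand's bounding box between consecutive frames, and when it moves past a threshold, dispatch a synthetic ArrowUp key event. The bbox format [x, y, width, height] matches handtrack.js predictions; the key dispatch is browser-only and shown as a comment:

```javascript
// Sketch: detect vertical hand movement from two consecutive bounding boxes.
// bbox is [x, y, width, height] as returned in handtrack.js predictions.
// threshold is in pixels; tune it for your video resolution.
function verticalMove(prevBbox, currBbox, threshold = 30) {
  const prevCy = prevBbox[1] + prevBbox[3] / 2;
  const currCy = currBbox[1] + currBbox[3] / 2;
  if (prevCy - currCy > threshold) return "up";   // y decreases as the hand moves up
  if (currCy - prevCy > threshold) return "down";
  return null; // no significant movement
}

// In the browser you would react to the move like this (comment only here):
// if (verticalMove(prevBbox, currBbox) === "up")
//   document.dispatchEvent(new KeyboardEvent("keydown", { key: "ArrowUp" }));

console.log(verticalMove([100, 300, 80, 80], [100, 200, 80, 80])); // "up"
console.log(verticalMove([100, 200, 80, 80], [100, 300, 80, 80])); // "down"
```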
I just stumbled on your model and so far I really like it. I do have a few questions:

The notes state that the tfjs graph model is just converted from the TF model, but the model signatures, internal operations and sizes (when manually converted and equally quantized) do not match?

The model input size is marked as a dynamic shape: [1, -1, -1, 3], but the model names indicate 320 for ssd320fpnlite and 640 for ssd640fpnlite? And the notes further confuse the issue, since they state that both models are trained on 450 * 380 inputs???

tensorlistreserve, enter, tensorlistfromtensor, merge, loopcond, switch, exit, tensorliststack, nextiteration, tensorlistsetitem, tensorlistgetitem, reciprocal, shape, split, where
Those ops are not supported by tfjs, but it seems this doesn't impact model execution at all (defined but unused?).

My use case is combining the hand detection model with other models (e.g. detailed finger tracking, gesture analysis, sign language recognition, etc.). The thing is, all those models are trained on vertically oriented hands. If the hand detection model returned an approximate rotation angle, then the image could be rotated and cropped before being used for further analysis.
Hello, I wasn't able to get this to run in my phone's browser. Do you have a version of this code that can?
Thank you.
Please help me to convert the hand detection model to tflite.
Dear author,
I ran into the following problem when trying the example:
const video = document.getElementById('largeVideo');
// const canvas = document.getElementById("canvas");
// const context = canvas.getContext("2d");

// Load the model.
handTrack.load().then(model => {
  // detect objects in the image.
  console.log("model loaded")
  model.detect(video).then(predictions => {
    console.log('Predictions: ', predictions);
    // model.renderPredictions(predictions, canvas, context, video);
    // requestAnimationFrame(runDetection);
  });
});
-->
"UnhandledError: Request to https://cdn.jsdelivr.net/npm/handtrackjs@latest/models/webmodel/centernet512fpn/base/model.json failed with status code 404. Please verify this URL points to the model JSON of the model to load"
Could you help me? Thanks.
Is there a way I can disable face tracking? All I want is open, closed, pinch, point, point tip, and pinch tip.
Actually, I ran into this problem when I tried to write some code using OpenCV.js, so I wanted to find out how others solve it, but your demo seems to have the same problem: the browser's memory usage keeps increasing. The JavaScript doesn't seem to recycle the memory correctly. I'm not sure whether I'm right or wrong, because I'm new to AI and the web.