
tracking.js's Introduction




tracking.js


The tracking.js library brings different computer vision algorithms and techniques into the browser environment. By using modern HTML5 specifications, we enable you to do real-time color tracking, face detection and much more — all that with a lightweight core (~7 KB) and intuitive interface.

Install

Install via Bower, npm, or download as a zip:

bower install tracking
npm install tracking
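
After installing, a minimal page for camera-based color tracking looks roughly like this (a sketch based on the documented ColorTracker API; the script path and element id are placeholders you would adapt):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Placeholder path; point this at your copy of the build -->
  <script src="node_modules/tracking/build/tracking-min.js"></script>
</head>
<body>
  <video id="video" width="320" height="240" preload autoplay loop muted></video>
  <script>
    // Track the built-in magenta/cyan/yellow colors on the camera stream.
    var tracker = new tracking.ColorTracker(['magenta', 'cyan', 'yellow']);
    tracker.on('track', function(event) {
      event.data.forEach(function(rect) {
        console.log(rect.x, rect.y, rect.width, rect.height, rect.color);
      });
    });
    // `camera: true` asks tracking.js to request the webcam via getUserMedia.
    tracking.track('#video', tracker, { camera: true });
  </script>
</body>
</html>
```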

Examples

Demo 1 Demo 2 Demo 3 Demo 4 Demo 5

Features

Browser Support

You can plug tracking.js into some well supported HTML elements such as <canvas>, <video> and <img>.

  • IE 9+ ✔
  • Chrome (latest) ✔
  • Firefox (latest) ✔
  • Opera (latest) ✔
  • Safari (latest) ✔

However, browser support may vary if you request the user's camera (which relies on the getUserMedia API).
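
A hedged way to handle that variance is to feature-detect before starting a camera-based tracker (a sketch; `startTracking` and `showFallback` are hypothetical placeholders for your own code):

```javascript
// Returns true if the environment exposes some form of getUserMedia.
// Takes the navigator object as a parameter so it can be tested with mocks.
function hasGetUserMedia(nav) {
  return !!(
    (nav.mediaDevices && nav.mediaDevices.getUserMedia) ||
    nav.getUserMedia ||
    nav.webkitGetUserMedia ||
    nav.mozGetUserMedia
  );
}

// In the browser you would branch on the result:
// if (hasGetUserMedia(navigator)) { startTracking(); } else { showFallback(); }
```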

Roadmap

  • Optical flow
  • Face recognition
  • Pose estimation
  • Faster keypoint descriptor (BRIEF)
  • More training data (hands, license plates, etc.)

Contributing

  1. Fork it!
  2. Create your feature branch: git checkout -b my-new-feature
  3. Commit your changes: git commit -m 'Add some feature'
  4. Push to the branch: git push origin my-new-feature
  5. Submit a pull request :D

History

For detailed changelog, check Releases.

Team

tracking.js is maintained by these people and a bunch of awesome contributors.

Eduardo Lundgren Thiago Rocha Zeno Rocha Pablo Carvalho Maira Bello Jerome Etienne

License

BSD License © Eduardo Lundgren

tracking.js's People

Contributors

brunocoelho, caroisawesome, cesarpachon, eduardolundgren, henvic, jeromeetienne, kkirsche, lucascmelo, mairatma, pborreli, reypena, sibelius, tomersimis, zenorocha, zzjin


tracking.js's Issues

Possible to track facial expression?

Is it theoretically possible to use the tracking to interpret a person's facial expression? E.g. determining how much a person is smiling by measuring the angle from the corner of the mouth to its center?

Perf Guidelines

I'm trying to use tracking.js with getUserMedia and a face object tracker and am experiencing unusable framerates.

The source video is 640x480, and I'm using the following tracker options:

      tracker.setInitialScale(4)
      tracker.setStepSize(2)
      tracker.setEdgesDensity(0.1)

My question is: can any of the tracker options be used to affect perf vs accuracy, or is the largest factor affecting perf the size of the source video?
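
Both matter, but frame size usually dominates, since a sliding-window detector scans the whole frame at several scales. A rough back-of-the-envelope sketch of the scan-window arithmetic (an illustration of the general technique, not of tracking.js internals):

```javascript
// Approximate number of positions a sliding-window detector visits at one
// scale: one position every `step` pixels in each direction.
function windowPositions(frameWidth, frameHeight, windowSize, step) {
  var cols = Math.max(0, Math.floor((frameWidth - windowSize) / step) + 1);
  var rows = Math.max(0, Math.floor((frameHeight - windowSize) / step) + 1);
  return cols * rows;
}

// Halving the frame's linear resolution, or doubling the step size,
// each cut the number of scanned positions by roughly 4x or more:
var full = windowPositions(640, 480, 80, 2);   // 640x480, step 2
var half = windowPositions(320, 240, 80, 2);   // 320x240, step 2
var coarse = windowPositions(640, 480, 80, 4); // 640x480, step 4
```

So downscaling the source video (or drawing it to a smaller canvas first) is typically the biggest single lever, with larger step/scale settings as a second, accuracy-costing knob.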

Track both face and eyes

Hi, thanks for a great project!
I noticed that when enabling two "human" trackers, such as the "frontal_face" and "eye", only one of them works.
Can you make it so that the two can work simultaneously?
Thanks again.
Yuval

Tracker still running even after calling stop

It seems that when you track a video, after you call run on trackerTask, there is no way to stop it again.
I stripped my code down to this; you can see that the video is very slow after clicking the button, and the Chrome profiler tab shows me that the tracking is still running.
Am I doing anything wrong, or is it a bug?

<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="utf-8">
    </head>
    <body>
        <video></video>
        <button>Take photo</button>
        <script src="tracking/tracking.js"></script>
        <script src="tracking/eye-min.js"></script>
        <script src="script.js"></script>
    </body>
</html>
var video = document.querySelector('video')
var button = document.querySelector('button')

navigator.getUserMedia({ video: true, audio: false },
    function(stream) {
        video.src = URL.createObjectURL(stream)
        video.play()
    }
  , function(err) {
        console.log(err)
    }
)

var tracker = new tracking.ObjectTracker('eye')
var trackerTask = tracking.track(video, tracker)
trackerTask.stop()

var track = function() {
    trackerTask.run()
    tracker.once('track', function(e) {
        trackerTask.stop()
        e.data.forEach(function(rect, i) {
            // do stuff
        })
    })
}

button.addEventListener('click', function(e) {
    track()
})

Decouple from DOM/make it node-compatible

Hi,

I'd love to try and use tracking.js on the server-side with node.js. LearnBoost has an awesome module called node-canvas (https://github.com/learnboost/node-canvas) which implements the canvas API using a Cairo backend.

I don't think it would be too hard to decouple tracking.js from the DOM. A couple of points that are needed or would make this easier to achieve:

  • Being able to pass in a canvas instance instead of creating one in the library; this would also make it clearer how to use it with a standalone canvas, such as drawing a static PNG to a canvas and running the recognition on it
  • Consider separating some of the logic into different modules/files so that the webcam/getUserMedia-related parts don't have to be included in node.js
  • Register tracking.js as a UMD module (https://github.com/umdjs/umd) instead of setting it on window
  • Perhaps implement the matchers as UMD modules as well, so they are easy to use in node.js, in the browser, and with AMD loaders such as RequireJS

I'd be happy to take a look and try to implement some of this stuff, but I want to know if you agree with the suggestions first, and if you have any plans to do this yourselves.

Add ability to define width and height

For example:

videoCamera.track({
    type: 'color',
    color: 'magenta',
    width: 500,
    height: 500,
    onFound: function(track) {
        console.log(track.x + track.y + track.z);
    },
    onNotFound: function() {}
});

Error on line: var videoCamera = new tracking.VideoCamera().render();

I'm following a tutorial lecture by Zeno Rocha on augmented reality, but unfortunately when adding this line I got the following error: "Uncaught TypeError: undefined is not a function", as if the function were not defined. I checked that the tracking.js library and the ColorTrack js were both linked, and everything looks ok :/

Add converter task for Haar cascade data from OpenCV

Add a build task that converts the new OpenCV training data XML to the tracking.js format. That would allow importing all of the available training data (https://github.com/Itseez/opencv/tree/master/data/haarcascades) to run on the web.

After OpenCV changed from the old format to this new XML, we need to figure out how the data is laid out in order to convert it:

New format:
https://github.com/Itseez/opencv/blob/master/data/haarcascades/haarcascade_frontalface_alt2.xml#L55

tracking.js format:

[20,20,0.822689414024353,3,0,2,3,7,14,4...

allow tracking.tracker => tracking#tracker

Both in Maira's talk at FrontinBH (great presentation, btw) and in the documentation examples, I saw this kind of example:

var myTracker = new tracking.MyTracker();

myTracker.on('track', function(event) {
  // Results are inside the event
});

tracking.track('#myVideo', myTracker);

After talking with @eduardolundgren, I gathered that it might be important to keep tracking.track( 'id', <instanced_tracking_obj> ); this is also good for API backwards compatibility.

I wonder if it's possible to extend the tracking prototype with a track method that calls tracking.track with the instanced object, like this (following the examples above):

myTracker.track('#myVideo');

I believe it could be crudely done this way:

tracking.prototype.track = function( elem ) {
    tracking.track.call( elem, this );
};

The benefit would be simpler usage; it looks redundant to call tracking again.

From this issue's perspective, it might even be better to rename tracking to Tracking, to encourage its use only as a class/constructor. I need to go deeper into the code to validate my perception and build a better judgement.

Question- track mouth and change colors

Your project is amazing! I'm looking at it as a way to track a user's mouth (actually teeth) in the webcam and re-color their teeth.

Can I track just the mouth in the webcam or is that just for images?

Any suggestions on how to change the colors in real-time? I know you can manipulate pixels in canvas.

Sorry this is not really an "issue", but I don't see any discussion board for tracking.js.

Thanks,
Don

How to track colours other than defaults?

I really need to be able to track red. I tried to edit color.js and duplicate the magenta function. I renamed it "red" and did it like this:

red: function(r, g, b) {
        var threshold = 50,
            dx = r-255,
            dy = g-0,
            dz = b-0;

        if ((r - g) >= threshold && (b - g) >= threshold) {
            return true;
        }

        return Math.sqrt(dx*dx + dy*dy + dz*dz) < 100;
    }

No use. The move_controller_single_pixels.html still tracks magenta, even though I clearly state in the HTML file:

videoCamera.track({
    type: 'color',
    color: 'blue',

... And so on.

Any ideas how to do this? Even better would be a way to pass any colour (e.g. using hexadecimal) to the tracker as a parameter.
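
For reference, tracking.js exposes `tracking.ColorTracker.registerColor` for exactly this kind of custom color. A sketch (the thresholds below are illustrative guesses you would tune, not canonical values):

```javascript
// A simple matcher for red-ish pixels; thresholds are illustrative guesses.
var isRed = function(r, g, b) {
  return r > 120 && (r - g) > 50 && (r - b) > 50;
};

// Guarded so the matcher itself can also run outside the browser for testing.
if (typeof tracking !== 'undefined') {
  tracking.ColorTracker.registerColor('red', isRed);
  var tracker = new tracking.ColorTracker(['red']);
}
```

Note that the name passed to the tracker ('red') has to match the registered name; registering a color but requesting 'blue' or 'magenta' would silently keep the old behavior.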

Question: Generate new Object Tracker

Hello,
I hope I'm not asking a dumb question, but I'd like to know how to generate training data in order to make a new tracker.
I have no experience with computer vision, but I'm not afraid to get my hands dirty. It's just not clear to me what I have to use to generate the training data (if that's possible at the moment, that is). I need to do something similar to issue #109: my logos are monochromatic images on white paper (the concept is not that different from QR codes, except that they are made of a single color). From what I have understood, I have to generate data for an implementation of the Viola-Jones algorithm (which seems to be the right choice for my task, even though it's specialized in face detection) and then import it with the script contained in opencv_haarcascade_converter.html; is that right?

I understand that it's not your responsibility to provide a guide for this, but if possible, a very brief guide or even just some suggestions on what software to use would be appreciated (I just need to be pointed in the right direction).

Define blind areas or activation areas.

It would be interesting to be able to define a blind-spot area, so that nothing inside those pixels gets interpreted. Or the opposite: being able to define only the area I want to capture. Either works. If you don't quite understand what I mean, talk to Zeno; I discussed this with him, so he can explain it better.

Bug: onFound is called even if not found in human.js

Hi!
I noticed that onFound fires even before I allowed the use of the camera, and when I looked at the code in human.js I think I saw that it fires regardless of the tracker's result (but I might be mistaken, since I don't really know my way around that code).

Improve tracker performance using transferable objects

JavaScript is a single-threaded environment, and some computer vision algorithms require lots of computation. One possibility to address this is to create a ThreadableTracker abstract class that splits computation across multiple Workers.

The Web Workers specification defines an API for spawning background scripts in your web application. Most browsers implement the structured cloning algorithm, which allows you to pass more complex types in/out of Workers, such as File, Blob, ArrayBuffer, and JSON objects. However, when passing these types of data using postMessage(), a copy is still made; if you're passing a large 50MB file (for example), there's a noticeable overhead in getting that file between the worker and the main thread. More information here: http://www.html5rocks.com/en/tutorials/workers/basics/
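
The zero-copy hand-off described above can be sketched like this; `sendFrame` is a hypothetical helper, and taking the port as a parameter lets the same code target a browser `Worker` or (for testing) a Node `MessageChannel` port, both of which accept a transfer list in `postMessage`:

```javascript
// Post a frame's pixel buffer to a worker-like port without copying it.
// Listing the buffer in the transfer list moves it instead of cloning it;
// after the call, the buffer is detached on the sending side.
function sendFrame(port, width, height, buffer /* ArrayBuffer */) {
  port.postMessage({ width: width, height: height, pixels: buffer }, [buffer]);
  return buffer.byteLength; // 0 if the transfer (rather than a copy) happened
}
```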

Mobile iOS

There are no workarounds for getting getUserMedia() streaming video working in Mobile Safari, are there? Love the project; I would really like to get it working on mobile.

How to track green color?

I've duplicated the magenta color and changed the RGB to green (0, 255, 0), but it still matches the magenta color.

Start the global tracking variable with a capital letter: "Tracking"

I've known this pattern for some time; I believe it will be more intuitive for all devs if the global variable name starts with a capital letter. It's a pattern adopted by other projects too, such as Modernizr https://github.com/Modernizr/Modernizr/blob/master/modernizr.js#L26 and Hammer https://github.com/EightMedia/hammer.js/blob/master/hammer.js#L8

In tracking's case it's a single line: https://github.com/eduardolundgren/tracking.js/blob/master/src/tracking.js#L635

Adding image or div as overlay.

Hi Guys,

First off: you've created a really great piece of software!

I have the following problem; I hope you guys can help me out.

I added the yellow colour to my video camera tracker, and now I want to draw an image overlay instead of the border/stroke that is drawn now.

This is my code:

var t1 = videoCamera.track(
        {
            type: 'color',
            color: 'yellow',
            onFound: function(track) {
               
                var size = 60 - track.z;

                videoCamera.canvas.context.strokeStyle = "rgb(255,0,255)";        
                videoCamera.canvas.context.lineWidth = 3;
                videoCamera.canvas.context.strokeRect(track.x - size*0.5, track.y - size*0.5, size, size);
            }
        }
    );

I tried the following code to use an image, but unfortunately it immediately crashes.

 var t1 = videoCamera.track(
        {
            type: 'color',
            color: 'yellow',
            onFound: function(track) {
              

                var size = 60 - track.z;

                videoCamera.canvas.context.strokeStyle = "rgb(255,0,255)";
                base_image = new Image();
                  base_image.src = 'http://www.html5canvastutorials.com/demos/assets/darth-vader.jpg';
                  base_image.onload = function(){
                    videoCamera.canvas.context.drawImage(base_image, 100, 100);
                  }
            
                 //videoCamera.canvas.context.fillStyle = '#000';
                 //videoCamera.canvas.context.fillRect(track.x - size*0.5, track.y - size*0.5, size, size);
                 
                  
                videoCamera.canvas.context.lineWidth = 3;
                videoCamera.canvas.context.strokeRect(track.x - size*0.5, track.y - size*0.5, size, size);
            }
        }
    );

I hope you guys can help me, thanks in advance.

Combined with WebRTC

I'm trying to combine tracking.js with WebRTC, but tracking.js already uses getUserMedia, so I use a socket to emit the local stream to client.js. When adding the local stream, there's an error:
Uncaught TypeError: Type error
createPeerConnection
(anonymous function)
EventEmitter.emit
SocketNamespace.onPacket
Socket.onPacket
Transport.onPacket
Transport.onData
websocket.onmessage

Document build process

Document the process of minifying the JS, or does grunt handle this? If so, is anything required to build the minified versions, or does restarting the node instance minify?

Accept a still image or video stream from a non-webcam

There are certain places, such as on planes and in secure facilities, where turning the webcam on would be frowned upon. This makes it difficult to hack on this project from those places.

If tracking.js accepted a static image or a video file (I think a video file should be easy), it would make testing and development a lot more accessible :)
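
For what it's worth, the README's list of supported elements already includes `<img>`, so a still image can stand in for the camera during development. A sketch (the image path is a placeholder):

```html
<img id="photo" src="assets/faces.jpg" width="640" height="480">
<script>
  // Run the face detector once over a still image instead of a camera stream.
  var tracker = new tracking.ObjectTracker('face');
  tracker.on('track', function(event) {
    event.data.forEach(function(rect) {
      console.log(rect.x, rect.y, rect.width, rect.height);
    });
  });
  tracking.track('#photo', tracker);
</script>
```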

Change ObjectTracker training data

Face detection is currently based on training data that is performing poorly in some cases.
New training data should be tested to improve results.
FDDB [1] data can be used to generate input for the ObjectTracker. Better results can probably be obtained if small transformations are applied on the input.

Including training data for new objects

I can't find any reference or example in the docs for including new objects via trained classifiers from OpenCV (e.g. "a toy car").

In the docs there's the "classifiers" array (which holds the Haar cascade classifiers converted from OpenCV training), but I would like to know how I could do the conversion from the OpenCV data (.xml?) to this array for Viola-Jones.

I'd appreciate if you can point me to some resource where I can find how to do it.

Thanks! Cool project!

Camera is not requested

For the examples that require a camera, tracking.js does not request the camera and the examples never start. Tried in Chrome, Firefox and IE.

Hash for individual identification

From the examples I've seen, mainly "face_tag_friends", when identifying a face, whether in images or in videos, in the track event:

tracker.on('track', function(event) {
        event.data.forEach(function(rect) {

        });
});

the "rect" object contains the following attributes: "x", "y", "width", "height" and "total", which makes it possible to find the face in an image, but without giving it any identity.

I imagine the algorithm that detects a face must check for something like the existence of eyes, nose and mouth and the distance between them. Now, suppose I detected a face in one image and need to verify the existence of the same face in another image (or even in the same image at some future time), something like the Johnny Depp example @zenorocha gave at TDC. From what I understood of the API, it won't be possible to match one face against another using only the "x", "y", "width", "height" and "total" attributes (or will it?). So why not return, along with those attributes, a hash derived from the "existence of eyes, nose and mouth and the distance between them"?

Does the algorithm make something like this possible? Is it reliable to use this to detect the same face across multiple images?

Unexpected Feature Behavior

So I may be barking up the wrong tree here, but I was using the feature detection just to do some basic edge detection of an object. Maybe I took the wrong approach to the problem, but I expected the edges of the object to be "features", not minor details inside the object. I tried different thresholds to no avail.

http://i.imgur.com/k2V3Pz6.png <-- Note that a lot of "edges" are not shown as features despite the threshold being low (5).

Add hand tracking to the list of human elements to support

It would be amazing if you could track fingers individually or even just a whole hand.

My end goal is to be able to measure the width of fingers, using a reference object (the magnetic stripe from a credit card) to measure the size of a specific finger. The idea is that once I know the size of a finger, it helps consumers purchase rings, and it'd make for a really easy way to get your significant other's ring size!

Would this be relatively easy to add the data for? Also, is there anywhere I can read about how this data is generated? I assume it's a neural net or something? It seems kind of like black magic right now... :) Thanks!
