mmbuw / gispl
JavaScript implementation of the GISpL gesture recognizer
Home Page: https://dl.acm.org/citation.cfm?id=2148181
License: GNU Lesser General Public License v3.0
Add a way to notify the user that the screen is not ready to use.
Flags work correctly when a single flag is set, e.g. either sticky, or bubble, or oneshot. Check if they work correctly for different combinations, mostly sticky and oneshot together, and bubble and oneshot together.
Otherwise, bubble overrides sticky. From the spec:
When a gesture has the "bubble" flag set, then the result gesture will be sent to all regions that have been crossed by participating input events, even if the gesture itself has also been flagged as "sticky".
The "duration" parameter specifies how far back the history of input events for the containing region should be considered for this gesture. The first value determines the starting point in the history, counting backwards from the present. For example, a value of [ 5.0 ] means "the last five seconds". The optional second value determines the end point. E.g. a value of [ 5.0, 3.0 ] means "between 3 and 5 seconds earlier". When values are not specified as floating point numbers, but as integers, the unit changes from seconds to "ticks", i.e. sensor readings. The special value of [ -1 ] means "the entire history". If no duration value is given, the default is [ 1 ], meaning "the last sensor reading".
When a gesture has the "bubble" flag set, then the result gesture will be sent to all regions that have been crossed by participating input events, even if the gesture itself has also been flagged as "sticky".
Check how many refresh events get triggered per second by the TUIO client, as mentioned in #20. If it's normally more than 60, reduce it to around 60, since more is not necessary. This should not generally happen because reacTIVision runs at 60fps.
One option is requestAnimationFrame.
Change the units for defining duration and delay length from seconds to milliseconds.
Match range of unique parent IDs (e.g. for fingers belonging to specific user)
This will be difficult to test in actual usage because TUIO v1 does not support it.
Currently, the whole input history is used if no gesture duration is defined, whereas the specification says:
... If no duration value is given, the default is [ 1 ], meaning "the last sensor reading".
Complete implementation of the path feature:
Tuio.js doesn't send the current object list when calling tuioClient.getTuioObjects(); instead, it gives back a list of all previous objects, including removed ones.
Complete implementation of the motion feature
When moving too slowly, motion is not recognized because the relative TUIO position (e.g. x: 0.4 -> x: 0.4 + small increment) gets mapped to the same screen pixel (e.g. screenX: 1000 -> screenX: 1000).
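A minimal sketch of the rounding problem, assuming a hypothetical toScreenX helper and a 2560px wide screen:

const screenWidth = 2560;
function toScreenX(tuioX) {
    // rounding collapses nearby normalized values onto the same pixel
    return Math.round(tuioX * screenWidth);
}
toScreenX(0.4);      // 1024
toScreenX(0.4001);   // 1024 -- the small movement disappears, so no motion is detected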
When a gesture has the "sticky" flag, then once this gesture has been triggered for the first time, all input events with participating IDs will continue to be sent to the original region, even if they subsequently leave the original boundaries.
Add a method to the event object, like a native event would have, to prevent the event from bubbling to parent nodes.
Method chaining is not supported for the on, off, and emit methods.
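A sketch of the intended usage once chaining works, assuming the gispl(element) wrapper and that each method returns the instance:

const element = document.querySelector('img');
function handleMotion(event) {}

// each call returns the gispl instance so calls can be chained
gispl(element)
    .on('motion', handleMotion)
    .off('motion', handleMotion)
    .emit('motion');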
Fires too late if the gesture is recognized only after the pointer has stopped. For instance, the path feature can be recognized but the gesture cannot, because the duration was not yet valid. If the finger stays where it is, it should eventually trigger the gesture once the duration becomes valid. It doesn't, because TuioClient doesn't put any additional points into the path if the pointer parameters don't change (e.g. a pointer that doesn't move). The time difference is therefore currently inaccurate for non-moving pointers.
Fix by using the actual timestamp of the pointer as the last timestamp, instead of checking the last entry in the path.
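A sketch of the proposed fix; the property names on the pointer are assumptions here, not the actual Tuio.js API:

function lastUpdateTime(pointer) {
    // before: the last path entry never changes for a stationary pointer
    // const lastPoint = pointer.path[pointer.path.length - 1];
    // return lastPoint.time;

    // after: the pointer itself is assumed to carry a timestamp that is
    // refreshed on every sensor frame, even when its position is unchanged
    return pointer.currentTime;
}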
Match group of input objects (min. count, max. count, max. radius). Result = count/centroid
Implement rotation feature. Depends on #12 because TUIO 1 cursor input has no rotation information (a possible fallback is sketched after the spec excerpt below).
From http://www.tuio.org/?specification
/tuio/2Dobj set s i x y a X Y A m r
s Session ID (temporary object ID) int32
i Class ID (e.g. marker ID) int32
x,y Position float32, range 0...1
a Angle float32, range 0..2PI
X,Y Velocity vector (motion speed & direction) float32
A Rotation velocity vector (rotation speed & direction) float32
m Motion acceleration float32
r Rotation acceleration float32
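Since TUIO 1 cursors carry no angle, one fallback (my assumption, not part of the spec above) is to derive rotation from the angle between two cursor positions:

function angleBetween(p1, p2) {
    return Math.atan2(p2.y - p1.y, p2.x - p1.x);   // radians, range -PI..PI
}

// example: two fingers rotated by roughly a quarter turn
const start = [{ x: 0.4, y: 0.5 }, { x: 0.6, y: 0.5 }];
const now = [{ x: 0.5, y: 0.4 }, { x: 0.5, y: 0.6 }];
// note: with y growing downwards on screen, the sign is flipped compared to math convention
const rotation = angleBetween(now[0], now[1]) - angleBetween(start[0], start[1]);   // ~PI/2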
Getting 'cannot read screenX of undefined' because the check runs before the screen was calibrated. It should not throw this error; it should simply not work before the screen gets calibrated.
Sometimes, the rotation flips 180 degrees, e.g. when moving counter clockwise it should be -10deg (350deg), but it is 170.
Currently finding a node based on the x, y coordinates of the viewport, and then sending an array of the node and its ancestors. This is too complicated right now because of adding the sticky or bubble flags. Change to an optional bubble parameter: if true, include parent nodes; default false.
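A sketch of how the optional bubble parameter could look; findNodes is a hypothetical name, not the current function:

function findNodes(x, y, bubble = false) {
    const node = document.elementFromPoint(x, y);
    if (!node) return [];
    if (!bubble) return [node];
    // only when bubbling: include the node and all of its ancestors
    const nodes = [];
    for (let current = node; current !== null; current = current.parentElement) {
        nodes.push(current);
    }
    return nodes;
}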
The load method of any gesture object currently gives back either true or false, depending on whether the gesture is recognized. Implementing flags #6 and later #7, there is a need for the gesture to control on which nodes the event is triggered: e.g. for the sticky flag only on the original node, for bubble on all nodes that were crossed by the input path. There is an emitOn method in the gesture at the moment, but change this so that the load method gives back a node array instead of a boolean, and remove the emitOn method.
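A rough sketch of the proposed load return value; the property names (flags, originalNode, crossedNodes, targetNode, featuresMatch) are assumptions for illustration only:

const gesture = {
    flags: ['sticky'],            // assumed representation of the gesture's flags
    originalNode: null,           // assumed: set when the gesture first triggers
    featuresMatch(inputState) {   // assumed: stands in for the existing feature checks
        return inputState.featuresValid;
    },
    load(inputState) {
        if (!this.featuresMatch(inputState)) {
            return [];                         // not recognized: emit nowhere
        }
        if (this.flags.includes('sticky')) {
            return [this.originalNode];        // only the node where the gesture started
        }
        if (this.flags.includes('bubble')) {
            return inputState.crossedNodes;    // every node crossed by the input path
        }
        return [inputState.targetNode];
    },
};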
Currently, the input history is retrieved from the input point itself; Tuio.js keeps track of a moving input as one point with the same session id, and a history of all points starting from the first contact. This works in general, but won't work for the example 'double click/tap' gesture from the spec:
{
    "name":"doubleclick",
    "flags":"oneshot",
    "filters":8192,
    "features":[
        { "type":"Count", "constraints":[0,0], "duration":[150,100], "result":[] },
        { "type":"Count", "constraints":[1,1], "duration":[100, 50], "result":[] },
        { "type":"Count", "constraints":[0,0], "duration":[ 50,  1], "result":[] },
        { "type":"Count", "constraints":[1,1], "duration":[  1],     "result":[] }
    ]
}
It doesn't work at the moment because the input history is lost once the input object (e.g. finger) is lifted from the screen.
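One possible direction, sketched under the assumption that the region keeps its own history (the names Region, onInput, and inputHistory are made up here):

class Region {
    constructor() {
        this.inputHistory = [];  // survives individual inputs being lifted
    }
    onInput(point, timestamp = Date.now()) {
        this.inputHistory.push({ point, timestamp });
        // prune entries older than the longest duration any gesture in the region needs
        const cutoff = timestamp - 10 * 1000;
        this.inputHistory = this.inputHistory.filter(entry => entry.timestamp >= cutoff);
    }
}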
Add a method to the eventobject to prevent other events from firing after the method was called.
Somewhat related to #20. Moving slowly still results in motion not being recognized because pointer.path has two identical points. Interestingly, it is not an issue with the Tuio2Simulator, which implies the problem is on the TUIO 1 side. Not sure anymore if TUIO 1 can just send a list of ALIVE IDs, or if it always sends the current position information in the OSC bundle.
Rotation of tuio objects does not work.
Implemented the duration option for gestures, but individual features can also have their own duration. From the spec:
optionally, a duration value can also be specified which will override the duration value specified for the whole gesture.
Does this mean the feature always overrides the gesture? E.g.
{
duration: [2, 3],
features: [{
duration: [0, 1]
}]
}
If it always overrides, then the valid duration is [0,1]. If it only overrides for this one feature, then this gesture will never be valid, because the overall gesture duration must be between 2 and 3 seconds while its one feature must be between 0 and 1 seconds.
Recognize a feature only if the input matches the given input filter of the feature (e.g. for filter '1', only if the TUIO input has the type value 1, i.e. the right index finger).
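Assuming the filters value is a bitmask of TUIO type IDs (as suggested by the 8192 = 1 << 13 value in the doubleclick example above), the check could look roughly like this:

// true if the filter bitmask has the bit for the given input type set
function matchesFilter(filters, typeId) {
    return (filters & (1 << typeId)) !== 0;
}

matchesFilter(1 << 1, 1);   // true: filter accepts type 1 (right index finger)
matchesFilter(1 << 1, 2);   // false: type 2 is filtered out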
The last known inputState gets passed to the callback once a gesture is recognized. But for most gestures it makes sense to also pass the actual computed value as a parameter; otherwise it needs to be recalculated in the callback. So the event object should also contain the recognized value, e.g. 1.5 or 0.44.
The demo is pretty useless at the moment. Add some pictures and allow them to be moved with the 'motion' gesture.
So, the calibration object currently maps screenX to x, which is effectively clientX.
Rotation with more than two fingers works rather badly. Check.
When a gesture has the "oneshot" flag, then it can only be triggered once by a given set of input IDs. Repeated triggering is only possible when the set of IDs captured by the containing region changes.
Implement delay feature. Feature/gesture is only valid after a delay of x frames.
Number of frames since first object entered, lower and upper limits
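A minimal sketch of such a check, assuming the frame count since the first object entered is tracked elsewhere:

// valid only between `lower` and `upper` frames after the first object entered
function delayValid(framesSinceFirstInput, [lower, upper = Infinity]) {
    return framesSinceFirstInput >= lower && framesSinceFirstInput <= upper;
}

delayValid(12, [10, 20]);   // true
delayValid(3, [10]);        // false: not enough frames have passed yet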
Add some more complex demos:
The demo doesn't work on the latest Firefox on Ubuntu because the browser chokes on the number of WebSocket messages. Not sure why or if it's fixable. Not a problem on Mac.
Match range of unique IDs (e.g. for tangible objects)
e.g.
"features":[
{
"type":"ObjectID",
"constraints":[320,321]
}
]
Currently checking only for the presence of pointers (TUIO 2) and cursors (TUIO 1), i.e. finger input. Check explicitly for other types and use them all as input.
Currently storing rotation values for objects as
{
objects: {
sessionId: value
}
}
It would be better to use the object ID (marker ID) instead of the sessionId to differentiate rotation values.
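Sketch of the proposed change; the property names on the TUIO object (sessionId, classId) are assumptions for illustration, not necessarily the Tuio.js API:

const rotationValues = { objects: {} };

function storeRotation(tuioObject, value) {
    // before: keyed by session ID, which changes every time the marker reappears
    // rotationValues.objects[tuioObject.sessionId] = value;

    // after: keyed by the class/marker ID, which stays the same across sessions
    rotationValues.objects[tuioObject.classId] = value;
}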
Add these properties to the event object instead of referring to this. One conflict with the native touch event objects is that their target property points to the element where the touch first happened. From MDN:
Touch.target Read only
Returns the Element on which the touch point started when it was first placed on the surface, even if the touch point has since moved outside the interactive area of that element or even been removed from the document.
Not sure if I want to implement it like this because it collides with the sticky/non-sticky flag of the gesture.
Consider dropping the this binding.
Create separate demos per feature:
There are cases where it is necessary to know when the input stops. For example, when scaling from 1 to 0.5 and then removing the fingers, the next time we scale by 50% we need to set the value to 0.25. For this, we need to know that at the time of the removal the last scale factor was 0.5.
One idea is to have a 'fixed' gesture like 'touchend' (and even 'touchstart'), similar to native events.
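A sketch of why such an end event is needed; the 'inputend' event name and the event.scale property are hypothetical:

const image = document.querySelector('img');
const instance = gispl(image);   // assumed gispl wrapper
let baseScale = 1;               // committed scale from previous interactions
let currentScale = 1;

instance.on('scale', event => {
    // event.scale assumed to be relative to the start of the current interaction
    currentScale = baseScale * event.scale;
});
instance.on('inputend', () => {
    // without an end event there is no reliable point at which to commit the value
    baseScale = currentScale;
});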
Rename current demo and all future demos according to the features they show.
e.g. photos.html -> 2-3-finger-motion.html
When a gesture has a bubble flag, it should emit events on all nodes that it encounters, but not multiple times.
Complete implementation of the count feature:
Doesn't work quite correctly when rotating counter clockwise around zero (no rotation).
Implement scale feature
relative size change, lower and upper limits
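A rough sketch of how the relative size change could be computed (my assumption: the ratio of the input points' spread around their centroid now vs. at the start):

// average distance of the points from their centroid
function averageSpread(points) {
    const cx = points.reduce((sum, p) => sum + p.x, 0) / points.length;
    const cy = points.reduce((sum, p) => sum + p.y, 0) / points.length;
    return points.reduce((sum, p) => sum + Math.hypot(p.x - cx, p.y - cy), 0) / points.length;
}

// scale > 1: inputs moved apart; scale < 1: inputs moved together
const startPoints = [{ x: 0.40, y: 0.50 }, { x: 0.60, y: 0.50 }];
const currentPoints = [{ x: 0.30, y: 0.50 }, { x: 0.70, y: 0.50 }];
const scaleFactor = averageSpread(currentPoints) / averageSpread(startPoints);   // 2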
Duration is implemented completely wrong. It should be duration: [a, b], where a means e.g. 'starting from 5 seconds ago' and b means e.g. 'until 3 seconds ago'. At the moment it is implemented as a meaning e.g. 'gesture started at least a seconds ago' and b meaning e.g. 'gesture lasting no longer than b seconds'.
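A sketch of the intended semantics, assuming a path of points with millisecond timestamps:

// keep only points whose age falls inside the [a, b] window, counting back from now
function pointsInDuration(path, [a, b = 0], now = Date.now()) {
    const oldest = now - a * 1000;   // e.g. a = 5: five seconds ago
    const newest = now - b * 1000;   // e.g. b = 3: three seconds ago
    return path.filter(point => point.timestamp >= oldest && point.timestamp <= newest);
}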
Currently, the only information about the position of the point in the inputState is {screenX, screenY}. Needed:
relative to the screen: screenX, screenY
relative to the viewport: clientX, clientY
relative to the document: pageX, pageY
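A hedged sketch of deriving the other coordinate pairs from screen coordinates; it assumes the window position is available via window.screenX/screenY and ignores the browser chrome offset:

function positionsFromScreen(screenX, screenY) {
    const clientX = screenX - window.screenX;   // viewport-relative (chrome offset ignored)
    const clientY = screenY - window.screenY;
    return {
        screenX, screenY,
        clientX, clientY,
        pageX: clientX + window.scrollX,        // document-relative
        pageY: clientY + window.scrollY,
    };
}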