prisonerjohn / ofxAzureKinect
An openFrameworks addon for the Azure Kinect.
License: MIT License
I tried this with v1.2 and it's throwing a lot of errors, something to do with conversion, e.g.:
[ error ] ofxAzureKinect::Device::updateDepthInColorFrame: Failed to convert depth map to color camera geometry!
I'm guessing you are using version 1.1. Will this be updated soon?
Thanks!
Hello,
I'm running the body tracking example and it's working OK apart from the latency.
The framerate is fixed at 60, and even so there's a 1 s delay between the image and the movement.
This doesn't seem to be affected by the number of bodies or by computer performance.
That isn't the case in the Body Tracking Viewer application, in which there is absolutely no delay.
Would anyone know what's going on?
Thank you,
Antonio
I would like to be able to play back recordings with the addon.
With the SDK it is already possible to record color and depth to an .mkv file (https://docs.microsoft.com/en-us/azure/kinect-dk/azure-kinect-recorder).
I will be looking into adding this feature, but I am not sure how yet. Please let me know if you have any guidance.
Thank you
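As a rough sketch of what playback could look like, here is a minimal loop using the Sensor SDK's k4arecord C++ wrapper. This is an assumption about how the feature might be built, not part of the addon; "recording.mkv" is a placeholder path and error handling is elided.

```cpp
#include <k4arecord/playback.hpp>

int main()
{
	// Open a recording made with k4arecorder (placeholder path).
	k4a::playback playback = k4a::playback::open("recording.mkv");

	// Step through the captures in the file, much like live captures
	// from k4a::device::get_capture().
	k4a::capture capture;
	while (playback.get_next_capture(&capture))
	{
		k4a::image color = capture.get_color_image();
		k4a::image depth = capture.get_depth_image();
		// ...hand the images to the addon's update path here.
	}

	playback.close();
	return 0;
}
```

Since k4a::playback hands back the same k4a::capture type as a live device, the addon could presumably swap its capture source behind a common interface.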
Hi prisonerjohn,
I am testing example-streams on a PC with an Intel i7 and a GTX 1060, running both Windows 10 + Visual Studio 2017 and Ubuntu 18.04 + Qt Creator. The Kinect grabbing frame rate doesn't reach the 30 fps in the camera's specs, as it does in the Microsoft SDK example.
Windows: app 50 fps, k4a: 15 fps.
The console first gives a list of "[warning] ofGLProgrammableRenderer: draw(): texture is not allocated",
then it gives "[warning] ofxAzureKinect::Stream::updatePixels: No Color capture found (358)!" once per second.
Ubuntu: app 60 fps, k4a: 19 fps.
Do you have any suggestions? It's a strong limitation...
Thanks a lot,
Paolo
The point cloud example uses:
kinectSettings.updateColor = true;
When this is true Device.cpp line 400 is:
if (this->bUpdateColor)
{
	this->updatePointsCache(colorImg, this->colorToWorldImg);
}
else
{
	this->updatePointsCache(depthImg, this->depthToWorldImg);
}
This suggests it will create the points cache by iterating through the points in the colour frame, getting xyz values and texture coordinates from the pixels in the full colour frame.
In the example, however, the full colour frame is not bound to the vbo; the colorInDepth tex is:
if (this->kinectDevice.getColorInDepthTex().isAllocated())
{
	this->kinectDevice.getColorInDepthTex().bind();
}
this->kinectDevice.getPointCloudVbo().draw(GL_POINTS, 0, this->kinectDevice.getPointCloudVbo().getNumVertices());
if (this->kinectDevice.getColorInDepthTex().isAllocated())
{
	this->kinectDevice.getColorInDepthTex().unbind();
}
This would be appropriate if
	this->updatePointsCache(depthImg, this->depthToWorldImg);
were being called, but that branch is never reached when
	kinectSettings.updateColor = true;
Yet to have the colorInDepthTex needed for the texturing, kinectSettings.updateColor must be set to true.
If I manually change Device.cpp to only use
	this->updatePointsCache(depthImg, this->depthToWorldImg);
then the results are correct. However,
	this->updatePointsCache(colorImg, this->colorToWorldImg);
never returns the correct points; it should make a point cloud where the full colour texture can be bound, with the correct positions and texture coordinates.
I am not sure where the error in the larger, interpolated colour-geometry point cloud comes from, though.
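As a sketch of the manual workaround described above (this is a local patch, not the addon's intended behaviour), the conditional in Device.cpp can be collapsed so the cache is always built in depth-camera geometry, matching the getColorInDepthTex() texture the example binds:

```cpp
// Device.cpp, in the update path quoted above.
// Workaround sketch: always cache points in depth-camera geometry,
// since the example textures them with getColorInDepthTex()
// (i.e. colour mapped into depth space), regardless of bUpdateColor.
this->updatePointsCache(depthImg, this->depthToWorldImg);
```

A proper fix would instead make the colour-geometry branch produce positions and texture coordinates that line up with the full colour texture, as the reporter suggests.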
Cheers
Hi @prisonerjohn,
Thanks for the addon! I am trying to use the body tracking example. I have the latest SDKs installed and the environment variables set up as per your description. I have also copied the
dnn_model_2_0_op11.onnx file into the bin folder. I am getting the following errors, and I am not seeing any of the joints being drawn; the shader is drawing the depth image. Do you know why this might be happening?
Thanks a bunch!
I am trying to run example-streams and am unable to draw the texture returned from getDepthTex(). All of the other streams are working, and I have set my Windows environment variables properly. Any idea why that might be?
I'm running the most recent openFrameworks environment on Visual Studio 2017.
I just finished teaching with the addon, and unless you throw away everything but addons.make, the src folder, and the bin folder before you run the Project Generator, it fails to link everything properly. I would recommend stripping the other files out of the examples here.
I am in the process of getting everything to work in a Linux environment, and so far so good, but I'm having difficulty with the turbojpeg part of the build process.
Specifically, I have added the following in the addon_config.mk file:
linux64:
	ADDON_INCLUDES += libs/turbojpeg/include
	ADDON_INCLUDES += /usr/include/k4a
	ADDON_INCLUDES += /usr/include
	ADDON_LIBS += /opt/libjpeg-turbo/lib64/libjpeg.so
	ADDON_LIBS += /usr/lib/x86_64-linux-gnu/libk4a.so
	ADDON_LIBS += /usr/lib/libk4abt.so
And when I try to compile I get linker errors:
/home/random/of_v0.11.0_linux64gcc6_release/addons/obj/linux64/Release/ofxAzureKinect/src/ofxAzureKinect/Device.o:Device.cpp:function ofxAzureKinect::Device::Device(): error: undefined reference to 'tjInitDecompress'
/home/random/of_v0.11.0_linux64gcc6_release/addons/obj/linux64/Release/ofxAzureKinect/src/ofxAzureKinect/Device.o:Device.cpp:function ofxAzureKinect::Device::~Device(): error: undefined reference to 'tjDestroy'
/home/random/of_v0.11.0_linux64gcc6_release/addons/obj/linux64/Release/ofxAzureKinect/src/ofxAzureKinect/Device.o:Device.cpp:function ofxAzureKinect::Device::updateCameras(ofEventArgs&): error: undefined reference to 'tjDecompress2'
Is there a particular version (1.5? 2+?) of libjpeg-turbo I should be installing? I'm also not entirely sure how the compiling and linking of this was done for Windows, so I'm not clear on what needs to be changed.
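One likely cause, offered as a guess: the undefined tjInitDecompress, tjDestroy, and tjDecompress2 symbols belong to the TurboJPEG API, which lives in libturbojpeg.so, not libjpeg.so (libjpeg only provides the classic jpeglib API). Assuming the /opt/libjpeg-turbo install path from the config above, the fix might be as small as:

```
linux64:
	# Link the TurboJPEG library, which exports the tj* functions,
	# instead of libjpeg.so (path is an assumption; adjust to your install).
	ADDON_LIBS += /opt/libjpeg-turbo/lib64/libturbojpeg.so
```

If libjpeg-turbo came from the distro packages instead, the library would typically be at /usr/lib/x86_64-linux-gnu/libturbojpeg.so.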
Hi,
I am having an issue with getting the depth camera to work in the streams example. I have not edited the original code in any way. Upon running the example, I can see the RGB camera capture clearly. However, I can barely see the IR camera feed (it does not appear the same way it does in the Azure Kinect Viewer application), and I cannot see the depth camera at all.
To reproduce, navigate to the example-streams folder and run the example-streams VS solution. The expectation is that the live feed would look identical to the live feed from the Azure Kinect Viewer application, but this is not the case.
Running:
Windows 10, of_v0.11.0, Visual Studio 2019
After compiling the oF project, is it necessary to install the SDK on the deploy machine? Or is the required data built into the executable at compile time?
fatal error C1083: Cannot open include file: 'k4a/k4a.hpp'
I installed the SDK and configured the path in the environment variables.