bitcraze / aideck-gap8-examples
Examples on how to use the GAP8 on the AI-deck
A couple of forum posts about the flipped image:
https://forum.bitcraze.io/viewtopic.php?f=21&t=4581&p=20971#p20971
Reset the GAP8 from the NINA so that standalone use is possible
Hello,
I'm trying to test the AI-deck's camera function. When I try to flash the AIdeck_examples/GAP8/test_functionalities/test_camera
module, my flashing program blocks at "Opening Himax camera". The detailed output is as follows.
Initialising GAP8 JTAG TAP
Info : adv debug unit is configured with option BOOT MODE JTAG
Info : adv debug unit is configured with option ADBG_USE_HISPEED
Info : gdb port disabled
Loading binary through JTAG
Info : tcl server disabled
Info : telnet server disabled
Warn : Burst read timed out
*** PMSIS Camera with LCD Example ***
Entering main controller
Testing normal camera capture
Opening Himax camera
^Z
Furthermore, when I flashed and tested the WiFi stream example, the viewer.py
script blocked at "socket connected"
and no image was displayed.
I don't think it's a hardware problem, because I've tried two different AI-deck boards. I'm wondering if you could give me some idea of how to fix this issue.
Thanks,
David
Hi! I was wondering if there are any examples for the Bluetooth module present on the NINA-W102.
Thank you for the support.
Manuel
Hey,
I'm a bit confused about the memory/RAM on the AI-deck. On the product/GitHub page the AI-deck is advertised as having
512 Mbit HyperFlash and 64 Mbit HyperRAM, but when I look at the GAP8 product page
the numbers are much smaller. Is this a typo? Yet when I look at the schematics of the AI-deck, the larger numbers are mentioned there as well.
So, if this HyperMemory exists, is there any example code showing how to use the extended memory?
Thanks,
-Nick
Hi,
Is object detection possible on board the AI-deck? Have you tested this? Does it support TF2 model zoo SSD models? Or do you suggest off-board detection?
Thank you
to-do:
The Docker image for the GAP8 displays unstable behavior when connecting a programmer through USB. During run or flash, the USB connection seems to drop several times.
If this is happening to you, just install gap_sdk locally on your machine...
Hey guys!
I was trying to work with the AI-deck and the Himax camera without the AEG turned on. We'd like to be able to fly the Crazyflie in very low lighting conditions, and wanted to turn off the AEG and fix the integration time (exposure), analog gain, and digital gain to get more reproducible images.
However, it seems like changing the appropriate parameters does not result in any change in the captured images. I've created an MWE to better visualize what I mean:
https://github.com/bitcraze/AIdeck_examples/compare/master...zeroos:mwe/manual_exposure?expand=1
I would expect running this code to produce images with constantly changing exposure, but all the images are exactly the same.
Am I missing something, or is it a bug in the Himax driver?
This is needed to let the neural network work for incoming camera data
to-do:
The UART1 connection conflicts with the Lighthouse deck. Moreover, the UART connection with the GAP8 is disrupted when the cluster is turned on, which might cause problems for users in the end anyway.
Improve the WiFi demo so that it supports connecting after the AI-deck has started, re-connection, and streaming full JPEG images.
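On the PC side, the re-connection part of that improvement could look like this minimal sketch. The host, port, and retry policy here are assumptions for illustration, not the demo's actual values:

```python
import socket
import time

def connect_with_retry(host, port, retry_s=2.0, attempts=None):
    """Keep trying to connect to the streamer until it accepts,
    so the viewer can be started before or after the AI-deck boots.
    attempts=None retries forever."""
    tries = 0
    while attempts is None or tries < attempts:
        try:
            return socket.create_connection((host, port), timeout=5)
        except OSError:
            tries += 1
            time.sleep(retry_s)
    raise ConnectionError(f"could not reach {host}:{port}")
```

A viewer would call this in a loop, so that losing the WiFi link just means falling back into the retry loop instead of exiting.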
Hello, I see that you showed how to install the requirements and flash the NINA using Docker in the Docker NINA instructions, but I cannot find the Dockerfile in that doc.
Hey @knmcguire ,
it seems like in this commit: 6151a48
you moved the demosaicking function to a separate file (something like common/img_proc.c/h
?), but the file was not committed. Is it possible to commit the demosaicking function, or was it left out on purpose?
This also breaks the build of the test_camera example: https://github.com/bitcraze/AIdeck_examples/blob/0517b2b9c1924c052ac42ec533d69d8f4094e57c/GAP8/test_functionalities/test_camera/test.c#L23
Best,
-Nick
Hello,
I am using Ubuntu 18.04 with gap_sdk version 3.6. My issue is that when I run this example I keep getting this error:
Makefile:48: recipe for target 'GenTile' failed
make: *** [GenTile] Error 1
Could you let me know how to solve this problem?
Thank you
(async) camera capture
Image resize
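For the image-resize item above, a minimal nearest-neighbor sketch of the operation on the PC side (a GAP8 implementation would use the SDK's own resize kernels instead; this function and its name are only an illustration):

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbor resize of a 2D grayscale image.

    For each output pixel, picks the source pixel whose index
    scales proportionally into the original image.
    """
    h, w = img.shape
    rows = (np.arange(new_h) * h) // new_h  # source row per output row
    cols = (np.arange(new_w) * w) // new_w  # source col per output col
    return img[rows[:, None], cols]
```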
Recently got an AI-deck.
Had to do the following before it could flash:
- add an adapter_khz 20000 line to the olimex-arm-usb-tiny-h.cfg
- copy the tcl folder from openocd-esp32 to the NINA/firmware dir
- change program_esp32 in the one-liner to program_esp:
~/esp/openocd-esp32/src/openocd -f interface/ftdi/olimex-arm-usb-tiny-h.cfg -f board/esp-wroom-32.cfg -c "program_esp build/partitions_singleapp.bin 0x8000 verify" -c "program_esp build/bootloader/bootloader.bin 0x1000 verify" -c "program_esp build/ai-deck-jpeg-streamer-demo.bin 0x10000 verify reset exit"
Hello, I first used Docker to flash the NINA under an official Ubuntu virtual machine, but the command didn't work as expected: there is some trouble with the 'cd' command in the Dockerfile.
So I turned to using Docker in my local environment, macOS.
But I met another problem when I wanted to execute the 'docker run' command:
docker run --rm -it -v $PWD:/module/ --device /dev/ttyUSB0 --privileged -P espidf:3.3.1 /bin/bash -c "make menuconfig; make clean; make all; /openocd-esp32/bin/openocd -f interface/ftdi/olimex-arm-usb-tiny-h.cfg -f board/esp-wroom-32.cfg -c 'program_esp32 build/partitions_singleapp.bin 0x8000 verify' -c 'program_esp32 build/bootloader/bootloader.bin 0x1000 verify' -c 'program_esp32 build/ai-deck-jpeg-streamer-demo.bin 0x10000 verify reset exit'"
As there is no such device file on macOS, I can't set the value of --device. Is there any advice to solve this?
Thanks!
I finished the work according to the Docker NINA instructions.
A lot of examples, especially those that work with the streamer, only work in 3.5 and not in 3.6 (it blocks). Moreover, the CNN examples seem to be broken, even if I go back to 3.4. It might be because the AutoTiler has been updated.
The GreenWaves Technologies link to the main website is not correct.
The newest SDK, 3.9, is out, so we should update the Docker files and test out all the examples.
Hello Kimberley,
I installed gap_sdk v3.7, ran the test_camera example, and it worked; I was surprised to get a color picture. Then I installed the face detection example, but when I ran it I couldn't get it to work. I changed the setting in the Makefile to USE_CAMERA, but it still doesn't work. So I tried to run the test_camera example again, and now I keep getting this message:
Info: gdb port disabled
Info: halt timed out, wake up GDB
error: timed out while waiting for target halted
I cannot flash any other example.
I noticed that only the NINA LED comes on.
Could you let me know what I can do to fix this problem?
Regards,
Rodrigo
I tried following the instructions at https://www.bitcraze.io/documentation/repository/AIdeck_examples/master/getting-started/getting-started/ on Ubuntu 20.04. I tried different versions of the SDK (3.8.1, 4.0.0) and usually get a
cc1plus: error: bad value (‘tigerlake’) for ‘-march=’ switch
when making the SDK.
For the steps, I followed the Docker instructions in https://github.com/bitcraze/docker-aideck/blob/master/src/Dockerfile as well as the README for the official SDK.
Is the recommended way here to use Docker, or did anybody succeed at using a native installation? The current Docker image uses SDK 4.0.0 and not 3.8.1 as stated in the documentation, correct?
Write a simple demosaicking algorithm
Stream the color image instead of raw
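For the first to-do item, a minimal sketch of what such a demosaicking step could look like, assuming an RGGB Bayer pattern (the actual Himax color filter layout would need to be checked) and simple per-cell reconstruction:

```python
import numpy as np

def demosaic_rggb_nearest(raw):
    """Nearest-neighbor demosaicking for an RGGB Bayer image.

    raw: 2D uint8 array with even height and width.
    Each 2x2 Bayer cell contributes its R sample, the average of
    its two G samples, and its B sample; the cell is then expanded
    back to 2x2 RGB pixels.
    """
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0, "expected even dimensions"
    r = raw[0::2, 0::2].astype(np.uint16)   # top-left of each cell
    g1 = raw[0::2, 1::2].astype(np.uint16)  # top-right
    g2 = raw[1::2, 0::2].astype(np.uint16)  # bottom-left
    b = raw[1::2, 1::2].astype(np.uint16)   # bottom-right
    g = (g1 + g2) // 2
    rgb_small = np.stack([r, g, b], axis=-1).astype(np.uint8)
    # Upsample each cell back to 2x2 pixels (nearest neighbor).
    return np.repeat(np.repeat(rgb_small, 2, axis=0), 2, axis=1)
```

The same structure translates directly to a C loop on the GAP8, at the cost of one output image buffer three times the raw size.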
Installing the gap_sdk, the Espressif toolchain, and so on can be quite a difficult job, especially since so many libraries rely on them. The idea is to provide Dockerfiles so that people can build their own small environment, with containers that have all the correct libraries installed, and build and flash the examples from there without having to install everything on their local machine (kind of like a small virtual machine).
Hey, the Docker image has an Ubuntu 18.04 distribution.
Here:
https://github.com/bitcraze/AIdeck_examples/blob/master/docs/getting-started/docker-gap8.md
it says "This will install the AutoTiler, which requires you to register your email and get a special URL token to download and install the AutoTiler",
but the email I got says there is only an AutoTiler for Ubuntu 16.04.
Hello!
I have many Crazyflies and I want to make them stream images to my PC using my home WiFi. My idea is to join the images using OpenCV on my computer, but for that I need the Crazyflies to be WiFi clients rather than servers. How can I do that?
The problem with the JPEG streamer in the examples is that I cannot receive the images with OpenCV, nor do I know how to get several images at the same time. Any idea?
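One way to pull individual images out of the raw JPEG byte stream on the PC side, regardless of how the socket chunks the data, is to scan for the JPEG start/end markers and only emit complete frames. A minimal stdlib-only sketch (the idea of then handing each frame to OpenCV via cv2.imdecode is an assumption about your pipeline, not something the examples provide):

```python
SOI = b"\xff\xd8"  # JPEG start-of-image marker
EOI = b"\xff\xd9"  # JPEG end-of-image marker

class JpegFrameSplitter:
    """Accumulates socket chunks and yields complete JPEG frames."""

    def __init__(self):
        self._buf = bytearray()

    def feed(self, chunk):
        """Add one received chunk; return a list of complete frames."""
        self._buf.extend(chunk)
        frames = []
        while True:
            start = self._buf.find(SOI)
            if start < 0:
                self._buf.clear()  # nothing useful buffered
                break
            end = self._buf.find(EOI, start + 2)
            if end < 0:
                del self._buf[:start]  # keep partial frame for next chunk
                break
            frames.append(bytes(self._buf[start:end + 2]))
            del self._buf[:end + 2]
        return frames
```

With one splitter per Crazyflie connection, several streams can be received concurrently and each complete frame decoded independently.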
While trying to set up the GAP8 Docker container I ran into several errors.
The first I was able to work around, and I created a pull request to fix it.
The next one I ran into I haven't been able to sort out yet. After running
sudo docker build --tag gapsdk:${GAP_SDK_VERSION} --build-arg GAP_SDK_VERSION=$GAP_SDK_VERSION .
I eventually get the error
E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/universe/libk/libkml/libkmlbase1_1.3.0-5_amd64.deb Undetermined Error [IP: 91.189.88.152 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
The command '/bin/sh -c apt-get install -y libopencv-dev python3-opencv' returned a non-zero code: 100
Could this perhaps be fixed with the new SDK version?
Hey all,
in release version 3.7
of the gap_sdk there is a hard-coded download from the master branch of tensorflow, see here: https://github.com/GreenWaves-Technologies/gap_sdk/blob/d33d6c90c6a6d1e87572515f61ee8642e09847f0/tools/nntool/Makefile#L56
Unfortunately, this breaks the nntool for version 3.7
when using the Docker image. A quick workaround that worked for me was to replace master
with some older commit like 6be604aaacd9d270de01c37ec6e9a9a077397848
inside the Docker container, then rebuild the nntool and commit my changes. But I guess that is not a proper solution for everybody.
FYI, in release version 3.7.2
of the gap_sdk this was fixed (https://github.com/GreenWaves-Technologies/gap_sdk/blob/a934747441481ea3d9c029719d721780cdff9e46/tools/nntool/Makefile#L58), but building the Docker image with version 3.7.2
results in a different issue, which was already addressed here: GreenWaves-Technologies/gap_sdk#174
Best,
-Nick
Hi,
When I try to run "Testing the Himax camera", it always stops at a certain point, like below:
Therefore, I tried to find which statement the program stops at, and found:
But 3 of my teammates can run this example successfully, and we have the same virtual environment.
So I want to know: could this problem be caused by the computer configuration, or are there other causes?
By the way, my teammates and I all used the official Docker image, and we referred to this page: https://www.bitcraze.io/documentation/repository/AIdeck_examples/master/test-functions/test-camera/
Regards,
Mathilda
I am looking for existing examples of recording the video stream from the AI-deck, alongside the control inputs to the Crazyflie. Does something like this exist? I know how to view the stream, but I want to record it for data collection.
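A minimal sketch of such a recorder, assuming you already receive complete JPEG frames and the current setpoints (the field names roll/pitch/yawrate/thrust and the whole class are illustrative; in a real setup the control inputs would come from cflib logging):

```python
import csv
import os
import time

class FlightRecorder:
    """Saves camera frames to disk plus a CSV row of the control
    inputs that were active when each frame was captured."""

    def __init__(self, out_dir):
        self.out_dir = out_dir
        os.makedirs(out_dir, exist_ok=True)
        self._csv = open(os.path.join(out_dir, "log.csv"), "w", newline="")
        self._writer = csv.writer(self._csv)
        self._writer.writerow(
            ["timestamp", "frame_file", "roll", "pitch", "yawrate", "thrust"])
        self._count = 0

    def record(self, jpeg_bytes, roll, pitch, yawrate, thrust):
        name = f"frame_{self._count:06d}.jpg"
        with open(os.path.join(self.out_dir, name), "wb") as f:
            f.write(jpeg_bytes)
        self._writer.writerow([time.time(), name, roll, pitch, yawrate, thrust])
        self._count += 1
        return name

    def close(self):
        self._csv.close()
```

Matching frames to inputs by wall-clock timestamp is approximate; the image timestamps are taken at receive time on the PC, not at capture time on the GAP8.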
I tested the Himax camera demo and the image classification example; can these two demos be used together? Right now the picture is set via the Makefile; if I want to capture pictures and run inference on them onboard, should I use the L3 memory on the AI-deck? I searched the pulp_dronet code but did not find a camera loop. Thanks for your reply.
It's probably because of a register that is not set correctly in the gap_sdk. We made an issue on their repo for that:
Hello,
I have uploaded the face detection image on the AI-deck; now I need to know how to run this code and receive images on the laptop. Could you explain how I can achieve this? Do I need to build an application to run on the laptop?
Thank you,
Regards,
Rodrigo Calvo
There have been some reported timing issues with sending the image to viewer.py. See here:
From what I understand, the Himax camera should be able to adjust the exposure continuously to adapt to different lighting conditions. In the WiFi streaming example, this is not the case. If you turn on the camera in a dark room and then turn the lights on, the image will be completely washed out, and the camera does not adjust to the new lighting conditions. If you start the camera in a room that is already bright, the exposure will be set correctly. It seems that the exposure is only set at the time the camera is turned on, and then it stays the same.
I have the AI deck v1.1
Hi!
I am working on the AI-deck and I am trying to make the WiFi streaming example work.
I am using the latest Bitcraze commit and the latest gap_sdk version, 3.8.1.
My problem is that when I run the viewer.py script to visualize the streamed images, the bottom part of the image is either out of sync or black.
I debugged the GreenWaves streamer code extensively, but I didn't find the problem.
On the AI-deck side: after the camera acquisition I dump the raw image, and it looks correct. So the camera works.
If I run the test_camera program (which writes the image to the PC via the debug interface), I always get the right image. This again proves the camera is working.
I think the problem is either in the JPEG encoder, in the transmission, or a buffer size problem on the receiving side.
In the viewer.py file I tried many buffer sizes (with respect to the original value of 512). I went down to 100 bytes and up to 10000 bytes, but the same problem occurs.
Does anyone have the same problem that I do?
Refer here for an example image: https://forum.bitcraze.io/viewtopic.php?f=21&t=4643
This problem with the images from the new grayscale camera was noted in this forum post: https://forum.bitcraze.io/viewtopic.php?f=21&t=4740. Maybe this is an issue that can be fixed with the new SDK (#41), or it is related to the latest compression fixes (#40)?
To show how to send characters from the GAP8 to the NINA
So currently we are working on a Docker Hub container, so it should be much easier to retrieve the container (so that people don't need to build it themselves all the time).
This will be done without the AutoTiler, as that one needs user registration, so people will need to install it separately on the downloaded container; but since the first build step alone takes almost an hour, this will save a lot of time!
Hey,
I have a problem when using the camera of the AI-deck that I couldn't fix properly. I tested the test_camera
example for the GAP8 chip and got some weird results (this also seems to apply to other examples).
When I try to capture an image, it gets tiled in a weird way, like so:
What got me thinking is that this only occurs for the first image. If you wrap it in a loop and continuously capture images, like in the WiFi streaming example or the GAP8-IO example, the problem is gone starting from the second image. So I investigated what caused this issue and how to solve it. So far the only solution that worked for me is to start and stop the camera once before starting the actual capture process, but after setting the registers for the orientation or other parameters (for instance around line 95):
pi_camera_control(&camera, PI_CAMERA_CMD_START, 0);
pi_camera_control(&camera, PI_CAMERA_CMD_STOP, 0);
Doing so is exactly what happens when capturing images continuously, but leaving out the capture call.
This resolves the issue and generates a correct image,
but it doesn't seem like the correct or proper solution. This also happened on two different AI-decks, so I don't think my specific board is faulty.
So this results in a couple of open questions:
Cheers,
-Nick
Hi,
I'm working on a project on the AI-deck which only uses the GAP8 chip + NINA module functionalities. I bought an Olimex ARM-USB-TINY-H with an ARM JTAG 20-to-10-pin adapter, as suggested, to program the board.
Sadly, I couldn't make it work, either by building the gap_sdk myself or by using the Docker image provided.
I will post the output I receive from every GAP8 example I tried:
Open On-Chip Debugger 0.10.0+dev-00002-ga9347474 (2020-11-14-17:27)
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
DEPRECATED! use 'adapter speed' not 'adapter_khz'
DEPRECATED! use 'adapter driver' not 'interface'
Warn : Interface already configured, ignoring
TARGET create
Info : core 0 selected
0
Info : gap8_adv_debug_itf tap selected
Info : adv_dbg_unit debug unit selected
Info : Option 7 is passed to adv_dbg_unit debug unit
GAP8 INIT TARGET
Info : clock speed 1500 kHz
Error: JTAG scan chain interrogation failed: all zeroes
Error: Check JTAG interface, timings, target power, etc.
Error: Trying to use configured scan chain anyway...
Error: gap8.cpu: IR capture error; saw 0x00 not 0x01
Warn : Bypassing JTAG setup events due to errors
GAP8 examine target
Init jtag
Initialising GAP8 JTAG TAP
Info : adv debug unit is configured with option BOOT MODE JTAG
Info : adv debug unit is configured with option ADBG_USE_HISPEED
Warn : Burst read timed out
Warn : Burst read timed out
Warn : Burst read timed out
Warn : Burst read timed out
Warn : Burst read timed out
I'm using the latest release of the gap_sdk on an Ubuntu 20.04 machine.
(Note: out of the box, when powered, the AI-deck sets up the WiFi hotspot and streams the camera images as expected.)
Thank you in advance for your time.
Best regards,
Lorenzo Gualniera