bit-bots / bitbots_vision

Vision ROS package of the Hamburg Bit-Bots RoboCup Humanoid Soccer Team

License: MIT License
Currently, the vision debug printer just reimplements the actual ROS named loggers. You can provide a logger name like this: rospy.logdebug('message', logger_name='field_boundary_detector'). All logged messages will be shown and can be filtered in the rqt log console, and the logger level in the actual screen output can be controlled by using the rqt logger level plugin or by calling the set_logger_level service. The logger name in the above example would be /rosout/<node name>/field_boundary_detector.
Using these loggers would also reduce external dependencies of the modules.
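To illustrate the idea without a running ROS system, here is a minimal sketch using Python's standard logging module as an analogy: ROS named loggers behave like hierarchical stdlib loggers, so each vision module can get its own named child logger whose level is filtered independently. The logger names below are illustrative, not the project's actual names.

```python
import logging

# Analogy only: stdlib loggers stand in for ROS named loggers here.
# The parent logger's level filters all of its children, just like
# setting a level via the rqt logger level plugin.
root = logging.getLogger("rosout.vision")
root.setLevel(logging.INFO)

field_boundary_log = logging.getLogger("rosout.vision.field_boundary_detector")

# Messages below the configured level are filtered out:
field_boundary_log.debug("dropped: level is INFO")  # filtered
field_boundary_log.info("field boundary updated")   # passes the filter
```

With rospy, the equivalent would be passing logger_name to the log call as shown above and adjusting levels through the set_logger_level service.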
Do not cache any data in the field boundary detector while the dynamic color space is running, because the color space of the color detector could have changed in the meantime.
Add arguments like basler:=true and logitech:=true to the vision launch file.
As in #43, there should be a proper way to disable or reduce the number of non-line points.
FCNN handler: add a parameter for publishing the debug image, like in the color detector. Also add this parameter to the launch file.
Currently, only the top candidate gets added to the ball candidates message, but the message is designed to contain all candidates.
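A minimal sketch of the intended behavior, using a hypothetical Candidate type rather than the real ROS message: the message is filled with every candidate above a confidence threshold, best first, instead of only the single top candidate.

```python
from dataclasses import dataclass

# Hypothetical stand-in for the ROS ball candidate message; the real
# message type and field names in bitbots_vision may differ.
@dataclass
class Candidate:
    x: int
    y: int
    confidence: float

def build_candidates_msg(candidates, threshold=0.5):
    """Return ALL candidates above the threshold, best first,
    instead of only the single top candidate."""
    kept = [c for c in candidates if c.confidence >= threshold]
    return sorted(kept, key=lambda c: c.confidence, reverse=True)

msg = build_candidates_msg(
    [Candidate(10, 20, 0.9), Candidate(42, 13, 0.6), Candidate(5, 5, 0.2)]
)
```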
The PixelListColorDetector should not be dynamic. Instead, a new detector inheriting from it should be.
In idle mode, the rate at which images are processed should be drastically reduced.
We could use the camera position to estimate a horizon in the picture without inspecting it. This could increase performance.
Currently, print is used in the live_classifier and live_fcnn_03 files. The debug printer should be used instead.
To save some resources during the RoboCup world championship 2019, the parameter line_detector_linepoints_range has been set to 0. This parameter determines the number of created line points. Please reset this afterward to the default of 200.
This issue is related to #64.
As the message format is defined for multiple balls, the topic should be named "balls_in_image".
Currently, we are using the head tilt angle to determine whether or not we use the reversed field boundary search method. It would be nicer to get this angle from the transform from base_footprint to camera_optical_frame. That way we would consider the whole kinematic chain and could drop the config parameter, because the real horizon always occurs at 90 degrees.
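As a sketch of the math involved (not the project's actual code), the tilt could be extracted as the pitch component of the rotation quaternion of such a transform, which a tf2 lookup would provide at runtime. The frame names above are taken from the issue; everything else here is illustrative.

```python
import math

def pitch_from_quaternion(x, y, z, w):
    """Extract the pitch (rotation about the y axis) from a quaternion,
    e.g. the rotation of a base_footprint -> camera_optical_frame
    transform obtained via a tf2 lookup."""
    # Standard quaternion-to-Euler conversion for the pitch component.
    sinp = 2.0 * (w * y - z * x)
    sinp = max(-1.0, min(1.0, sinp))  # clamp for numerical safety
    return math.asin(sinp)

# A quaternion representing a 30 degree rotation about the y axis:
angle = math.radians(30)
q = (0.0, math.sin(angle / 2), 0.0, math.cos(angle / 2))
tilt = pitch_from_quaternion(*q)
```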
Referring to a code comment, we should find a better name for the function "_obstacle_detector_distance".
In case the config value for the ball detector does not match any of the detectors, what should the behavior be? Add a dummy detector?
There should be a parameter set for the "Feldraum" (German for "field space") that can easily be used, for example with feldraum:=true.
Config files need more comments describing every parameter's effect.
For localization, it would be nice to also have a list of points at which no lines were detected.
The horizon detector needs to work more stably and a bit faster. The most dangerous situation is fields next to each other.
The line detector throws a numpy exception if no line points are detected.
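A minimal sketch of the empty-result guard, under the assumption that the exception comes from operating on an empty point set (np.random.choice, for instance, raises a ValueError for an empty range). The function name and structure are hypothetical; the real line detector in bitbots_vision is organized differently.

```python
import numpy as np

def sample_linepoints(points, n=200):
    """Return up to n randomly sampled line points. An empty input
    yields an empty (0, 2) array instead of letting numpy raise."""
    points = np.asarray(points).reshape(-1, 2)
    if len(points) == 0:
        # No line points detected: return an empty result gracefully.
        return np.empty((0, 2), dtype=int)
    idx = np.random.choice(len(points), size=min(n, len(points)), replace=False)
    return points[idx]

no_points = sample_linepoints([])                       # no exception
some = sample_linepoints([(1, 2), (3, 4), (5, 6)], n=2)
```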
We should only reconfigure changed parameters in the dynamic reconfigure callback.
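A sketch of the diffing step, using plain dicts in place of the dynamic_reconfigure config object the real callback receives: compute only the changed parameters, so unchanged components are not reconfigured.

```python
def changed_params(old_config, new_config):
    """Return only the parameters whose values differ, so a dynamic
    reconfigure callback can skip reconfiguring unchanged components."""
    return {
        key: value
        for key, value in new_config.items()
        if old_config.get(key) != value
    }

diff = changed_params(
    {"threshold": 0.5, "blur": 3},
    {"threshold": 0.7, "blur": 3},
)
# Only "threshold" changed, so only the component using it needs
# to be reconfigured.
```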
Some checks have recently been added in pull request #59.
Please rename the field boundary finder to field boundary detector in the config and cfg files.
Create separate classes for the field boundary search methods. These should inherit from a "basic" field boundary detector.
Publish lines as a binary image mask.
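A minimal sketch of building such a mask from detected line points; the shape and point list are illustrative, and the real node would publish the array as a mono8 Image message rather than return it.

```python
import numpy as np

def lines_to_mask(shape, linepoints):
    """Render detected line points into a binary image mask:
    255 where a line point was detected, 0 elsewhere."""
    mask = np.zeros(shape, dtype=np.uint8)
    for row, col in linepoints:
        mask[row, col] = 255
    return mask

mask = lines_to_mask((4, 4), [(0, 1), (2, 3)])
```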
The colorpicker, HSV colorpicker, and colorspace tool could be merged into one tool.
Please notify the user to rebuild the vision node if the colorspaces or models folder has changed.
This is referring to #46.
Instead of handling inside the vision whether it is used for simulation or not, the parameters should be overwritten by an additional simulation parameter file in the launch file.
We should evaluate in which cases we want to use the runtime evaluator in the future. The current solution, with the evaluator removed via code comments, does not seem like a permanent solution. Maybe we enhance it further (with activation via a launch parameter/config), or we use common profiling tools like profilehooks.
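As one alternative to a hand-rolled runtime evaluator, here is a sketch (not the project's actual instrumentation) of profiling a pipeline step with the standard library's cProfile; profilehooks offers a similar decorator-based interface. The process_image function is a placeholder.

```python
import cProfile
import io
import pstats

def process_image(pixels):
    # Placeholder for an expensive per-image computation.
    return sum(p * p for p in pixels)

profiler = cProfile.Profile()
profiler.enable()
process_image(range(10_000))
profiler.disable()

# Format the five most expensive calls into a readable timing table.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```

Activation could still be tied to a launch parameter or config entry so profiling costs nothing in normal operation.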
Evaluate the vision parameters for optimal performance on the Jetson.
The FCNN has to be retrained on a new image set with and without the ball from the robot's point of view.
We should evaluate the ROS log levels. This is related to #18.
This repo needs a slight refactoring to clean up the code, add comments, and merge the bitbots_vision directory with the bitbots_vision_common directory.
Currently, we are not using any information saved from the last image. When we do, we need to be able to reset it all.
I just stumbled upon resource_retriever and it looks pretty useful. Can you utilize it so that, for example, colorspaces don't have to be specified as an absolute path?