
ethzasl_msf's Introduction

ethzasl_msf

Description

Time delay compensated single and multi sensor fusion framework based on an EKF. Please see the wiki for more information: https://github.com/ethz-asl/ethzasl_msf/wiki

Documentation

The API is documented here: http://ethz-asl.github.io/ethzasl_msf

Contributing

You are welcome to contribute to the package by opening a pull request. Please make yourself familiar with the Google C++ style guide: http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml

License

The source code is released under the Apache 2.0 license.

Other frameworks

MaRS is an actively maintained continuation of ethzasl_msf. Among other improvements, it offers online sensor integration, better initialization methods, multi-sensor scalability, and more thorough documentation. Please consider using it for new projects! Visit: https://github.com/aau-cns/mars_lib

ethzasl_msf's People

Contributors

burrimi, clanegge, enricgalceran, ffurrer, flaviofontana, fmina, georgwi, helenol, omaris, otim, patripfr, prclibo, rikba, simonlynen, zacharytaylor


ethzasl_msf's Issues

Difference between ethzasl_msf and ethzasl_sensor_fusion

Hi, I'm a graduate student working on vision-IMU sensor fusion.
Could anyone tell me what the difference is between ethzasl_msf and ethzasl_sensor_fusion?
Some of the folders are similar, like ssf_core and ssf_updates.
If I want to use your EKF framework, which of these repos would you suggest I use?
Thanks

No output in msf_core/state_out and fixed_covariance error

I'm new to the framework but very interested in using it for a UAV with visual odometry, IMU, GPS, etc.

I'm trying to learn the ins and outs of the framework and have gone through the tutorials and read a lot here and there. I've been at this for nearly a week before asking you guys.

At the moment, I'm trying to use the dataset.bag that you guys provide in the tutorial for the old version (ssf), which contains the IMU and the Vicon data, but I'm not getting anywhere.
I was planning on writing a new tutorial if I get this dataset working and sending it to you guys to update the old one.

The bag file publishes on two topics:
-> topic /vicon/auk/auk carries a [geometry_msgs/TransformStamped] with the translation (Vector3) and rotation (Quaternion) of the object.

-> topic /auk/fcu/imu carries the Flight Control Unit's IMU data in a [sensor_msgs/Imu] message:
orientation as a quaternion, angular velocity and linear acceleration as Vector3s, plus the covariances of all these vectors.

This should be suitable for the vicon_sensor launch file, which launches msf_viconpos_sensor, msf_core, and msf_updates.

I then remap the IMU data.

But for the Vicon part, /msf_updates/pose_input and /vicon/auk/auk are different message types: one is PoseStamped, the other TransformStamped. So I created a small node that repackages the data from one message type into the other and publishes on /vicon/auk/translated (a sketch of such a converter follows below). I then remap the Vicon data accordingly.
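For reference, a minimal sketch of such a converter node (hypothetical node and topic names, assuming roscpp and the standard geometry_msgs types):

#include <ros/ros.h>
#include <geometry_msgs/TransformStamped.h>
#include <geometry_msgs/PoseStamped.h>

ros::Publisher pose_pub;

// Repackage a TransformStamped as a PoseStamped, keeping header and timestamp.
void transformCallback(const geometry_msgs::TransformStamped& msg) {
  geometry_msgs::PoseStamped pose;
  pose.header = msg.header;
  pose.pose.position.x = msg.transform.translation.x;
  pose.pose.position.y = msg.transform.translation.y;
  pose.pose.position.z = msg.transform.translation.z;
  pose.pose.orientation = msg.transform.rotation;
  pose_pub.publish(pose);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "transform_to_pose");
  ros::NodeHandle nh;
  pose_pub = nh.advertise<geometry_msgs::PoseStamped>("/vicon/auk/translated", 10);
  ros::Subscriber sub = nh.subscribe("/vicon/auk/auk", 10, transformCallback);
  ros::spin();
  return 0;
}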

I’m not getting any output in msf_core/state_out. And I cannot seem to fix it.

I’m also getting a message as follows in my viconpos launch node:
[ WARN] [1423854183.381641819]: Provided message type without covariance but set fixed_covariance ==false at the same time. Discarding message.

But the thing is that the message type of msf_updates/pose_input is PoseStamped, so it does not carry any covariance fields, and in my viconpos_sensor_fix.yaml I actually have:
/pose_sensor/pose_sensor/pose_use_fixed_covariance: true

So I don't understand why it tells me that fixed_covariance is set to false...

Do you have any insight on the topic? Could you please help me out a little bit?
Thanks a lot!

msf_setup

Using multiple (2) pose sources at once

What would be the procedure if we were to use two pose sources? I have two visual SLAM sources onboard (forward-looking and downward-looking, for mapping and odometry). I want to use the two sources simultaneously for robustness and redundancy.

Thanks,
Kabir

Define / use suitable message types for spherical sensor

Currently it's two geometry_msgs/PointStamped messages, one for azimuth/elevation and the other for the range measurement. Any ideas/suggestions? I'd like to avoid the leica package depending on this one and vice versa. Are there any suitable ROS messages that I overlooked? Ideally, the message would also carry uncertainty.

Set up help with lsd-slam

So I'm trying to get this framework to work with LSD-SLAM and I'm not quite sure whether I set everything up correctly.

I already rotated the output of my vslam so that it aligns with my IMU (screenshot omitted; the small frame is the IMU).

My launch file and config file are in the attached screenshots (omitted here).

So what I want/expect is a metrically scaled position output (I set absolute_measurements: false).

I did not set up a covariance matrix in LSD-SLAM; I think it just sends zeros (fixed_covariance: true).
Is this correct, or do I have to send dummy values?

I'm not quite sure about the q_ic values, but the pose output of the framework seems to be aligned. I shouldn't have to rotate anything because my sensors are already aligned, right?
Also, do I still have to set fixed p_ic and q_ic to true?

I'm asking all this because I see mixed results in the position accuracy (it is really bad and often delayed). It's better if I set absolute_measurements: true, but that is expected since the filter then just locks onto the unscaled LSD position values.

Also, I'm getting this message quite often:
[ WARN] [1424640218.513812822]: msf_core: imu message drop curr seq:263297 expected: 263296
[ WARN] [1424640218.824028297]: msf_core: imu message drop curr seq:263329 expected: 263328
[ WARN] [1424640219.164891690]: msf_core: imu message drop curr seq:263363 expected: 263362
I tried to change the delay value but it did not help.

Adding optical flow module

I am trying to add an optical flow update module to the framework, but have been unable to do so. I'm rather confused about how to integrate the optical flow velocity into the estimate.

I read the older single-sensor-fusion tutorial on adding a custom sensor, but didn't understand much of it.

@simonlynen Would you be able to help with the steps? Some examples for a similar velocity update sensor would be greatly helpful too.
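For what it's worth, a common measurement model for a sensor that measures its own velocity in its own frame looks something like this (a sketch using the framework's notation loosely, not necessarily how MSF structures its updates):

z_v = C(q_ic)^T * ( C(q)^T * v_w + omega x p_ic ) + n_v

where C(q) rotates IMU-frame vectors into the world frame, v_w is the world-frame velocity state, omega the body rates, p_ic the sensor offset w.r.t. the IMU, and n_v the measurement noise. The residual and Jacobians would then follow the same pattern as the existing pose update.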

quaternion residual computation order

Hi,

About this line of code:

q_err = (state.Get<StateQwvIdx>()
    * state.Get<StateDefinition_T::q>()
    * state.Get<StateQicIdx>()).conjugate() * z_q_;

I suspect that q_err should be inverted, according to eq. (8) in the paper. I want to verify this with you: in eq. (8), X denotes the observed value and \hat{X} the predicted value. For the position it computes X - \hat{X}, so I think for the orientation it should compute X * inv(\hat{X}).

Please let me know if I'm misinterpreting anything :-)
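For context, both orderings define a valid small-angle attitude error; they just live in different frames:

q_err  = q_hat^{-1} * z_q   (error expressed in the predicted body frame)
q_err' = z_q * q_hat^{-1}   (error expressed in the world frame)

With Hamilton quaternions the two are related by conjugation, and either can be consistent with eq. (8) as long as the measurement Jacobian H is derived with the same convention. So the ordering in the code is not necessarily a bug on its own; it has to be checked against the corresponding Jacobian.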

Accelerometer aiding

Hi,

I'm currently using MSF with ETHZ PTAM in a master's project and have a few questions.

Are the accelerometers used in the determination of pitch and roll?

  • Currently I cannot see anywhere that this is the case. Is the angle estimate corrected towards the estimate coming from PTAM? I am not 100% sure that my current PTAM setup estimates attitude correctly, especially if the initial coordinate frame of PTAM is not gravity-aligned. If not, based on the MSF framework it doesn't look like accelerometer aiding would be too difficult to implement?
    I believe my inaccurate angle estimates from PTAM might be corrupting the estimated accelerometer and gyro biases, causing incorrect velocity estimates when I pitch/roll my setup.

What corrects the orientation estimate in the case of GNSS only?

  • If PTAM is not available, then there are no angle measurements available to correct the gyro-based estimates (based on my current understanding).

What units is the pressure input in?

  • In pose_pressure_sensor there appears to be a pressure sensor input, but I can't find anything that specifies the scale. Is it already converted into metres?

Thanks!
Nick

No outputs on msf_core/pose topic

I am using your framework to fuse IMU data with visual odometry.

I am sending IMU data at 200Hz into the framework and visual odometry at 30Hz. Yet, there is no output on the pose topic.

I managed to get it working before but somehow cannot get it working anymore.

Kabir

GPS reference frame and position_init_yaw

Hi guys, congratulations on the great job with this framework, and thank you for deciding to share it.
Right now I am trying to implement a basic GPS+IMU configuration (this should be the first step towards a more complex IMU+GPS+compass+vision implementation).
I'm getting a little confused by the reference frame you use for the GPS. Looking at the code, I think you convert the standard lat/lon/alt data into the ECEF frame, but I don't understand how I should set the 'position_yaw_init' parameter.
Should this be set with respect to the ECEF frame or to north (as it would be measured by a compass)?
Also, with respect to which frame/origin is the fused position from '/msf_core/state_out' expressed when using GPS?
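For reference, the standard WGS-84 latitude/longitude/altitude to ECEF conversion looks like this (a sketch; the framework's own implementation may differ in details):

#include <cmath>

// Convert WGS-84 geodetic coordinates (lat/lon in radians, alt in meters)
// to Earth-Centered Earth-Fixed coordinates (meters).
void llaToEcef(double lat, double lon, double alt,
               double* x, double* y, double* z) {
  const double a = 6378137.0;          // WGS-84 semi-major axis [m]
  const double e2 = 6.69437999014e-3;  // first eccentricity squared
  // Prime-vertical radius of curvature at this latitude.
  const double N = a / std::sqrt(1.0 - e2 * std::sin(lat) * std::sin(lat));
  *x = (N + alt) * std::cos(lat) * std::cos(lon);
  *y = (N + alt) * std::cos(lat) * std::sin(lon);
  *z = (N * (1.0 - e2) + alt) * std::sin(lat);
}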

Noise Spectral Densities

Hi guys,
I have a few questions about getting the right noise spectral densities from the datasheets.

Noise Spectral Densities

For my tests I am currently using a Pixhawk and its MPU6000 IMU.
The datasheet for this sensor gives:

accNSD = 0.0392 m/s^2/sqrt(Hz)
gyrNSD = 0.03142 °/s/sqrt(Hz)

Both values refer to a cut-off frequency of the internal low-pass filter of 10 Hz, but in my case the LPF is set to cut at 42 Hz. Assuming the ideal white Gaussian noise case, the NSD is constant across the frequency spectrum; is it therefore safe to assume the updated densities to be:

accNSD' = accNSD * 4.2
gyrNSD' = gyrNSD * 4.2

?
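As a sanity check under that same white-noise assumption: the NSD itself does not change with the filter bandwidth; it is the resulting RMS noise that grows with the square root of the bandwidth,

sigma = NSD * sqrt(f_c),   so   sigma_42 / sigma_10 = sqrt(4.2) ≈ 2.05

(a rule of thumb, not an authoritative answer for this particular IMU).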

Magnetometer Noise Floor parameter

The only noise-related parameter in the datasheet of the magnetometer I am using (HMC5883L) is:

Noise Floor (NF) = 2 mG (standard deviation over 100 samples)

Following the approach described in this video https://www.youtube.com/watch?v=ywChrIRIXWQ I would compute the NSD as

magNSD = NF / sqrt(f)

Would you suggest doing the same?

Experimental Noise Spectral Density characterization

Do you have any references/application notes on the experimental characterization of the NSD?
I tried to apply the approach from the video above to the acc and gyro NSDs (adjusting the sensor's low-pass filter frequency to 10 Hz), but the results were very different from the datasheet values listed in question 1.

High fluctuation after initialization because of old state inputs

When pose_msf is running alongside an externally propagated state, as on the AscTec FCU, a problem arises when we initialize the filter. Messages on the topic hl_state_input (aka ekf_state_out) that still contain an old state keep arriving after the MSF has been initialized. This state can consist of arbitrary values, since it has probably only integrated IMU measurements. As soon as the MSF has been initialized, it tries to correct the arriving states, which leads to high fluctuation of the state estimate.

The following output shows the problem; I simply print the position directly in StateCallback:

p in fcu state out cb = -36.2338  3.88895 -6.06195
p in fcu state out cb = -36.2338  3.88895 -6.06195
p in fcu state out cb = -36.2338  3.88895 -6.06195
p in fcu state out cb = -36.2338  3.88895 -6.06195
p in fcu state out cb = -36.2338  3.88895 -6.06195
[ INFO] [1424870597.619524987]: initial measurement pos:[  0.00162001 -0.000360376   0.00339364] orientation: [0.603, -0.000112, -0.798, 0.00252]
[ WARN] [1424870597.734423432]: Using simulated core plus fixed diag initial error state covariance.
[ WARN] [1424870597.740409060]: imu time is off by 24.794 ms
[ INFO] [1424870597.744663346]: Initializing msf_core (built: Feb 25 2015)
[ INFO] [1424870597.744723675]: Core parameters: 
fixed_bias: 0
fuzzythres: 0.1
noise_acc:  0.083
noise_accbias:  5e-05
noise_gyr:  0.0013
noise_gyrbias:  1.3e-06

[ INFO] [1424870597.744866759]: Core init with state: 
--------- State at time 597.634s: ---------
0 : [0-2]    : Matrix<3, 1>         : [ 0.0984355 0.00263127  0.0588503]
1 : [3-5]    : Matrix<3, 1>         : [0 0 0]
2 : [6-9]    : Quaternion (w,x,y,z) : [0.00593, 0.0152, -0.000825, -1]
3 : [10-12]  : Matrix<3, 1>         : [0 0 0]
4 : [13-15]  : Matrix<3, 1>         : [0 0 0]
5 : [16-16]  : Matrix<1, 1>         : [1]
6 : [17-20]  : Quaternion (w,x,y,z) : [1, 0, 0, 0]
7 : [21-23]  : Matrix<3, 1>         : [0 0 0]
8 : [24-27]  : Quaternion (w,x,y,z) : [0.00171, 0.788, -0.0043, 0.615]
9 : [28-30]  : Matrix<3, 1>         : [ 0.09848 0.001745 -0.05249]
-------------------------------------------------------
p in fcu state out cb = -36.2338  3.88895 -6.06195
p in fcu state out cb = -36.2338  3.88895 -6.06195
p in fcu state out cb = -36.2338  3.88895 -6.06195
p in fcu state out cb = -36.2338  3.88895 -6.06195
p in fcu state out cb =  0.0984355 0.00263127  0.0588503
p in fcu state out cb =  0.0984318 0.00263357  0.0588495
p in fcu state out cb =  0.0984185 0.00264162  0.0588469

The correction sent via the correction topic (aka ekf_state_in) would then contain something like [36.1..., -3.88..., 6.01...].

Compilation Conflict With asctec_mav_framework Package

Hi,

Great work! Thanks for providing this great framework to the community!

I'm having trouble installing the asctec_mav_framework. This is the process I went through:

  1. Installed the catkin_simple package;
  2. Installed the glog_catkin package;
  3. Created a folder ethz_asctec_mav_framework (the name can be arbitrary) containing a src subfolder;
  4. Downloaded the zip file and unzipped it inside the src folder;
  5. Ran catkin_make outside of src.

The compilation system doesn't like this and gives me the error:

CMake Error at /opt/ros/indigo/share/catkin/cmake/catkinConfig.cmake:75 (find_package):
Could not find a package configuration file provided by
"sensor_fusion_comm" with any of the following names:

sensor_fusion_commConfig.cmake
sensor_fusion_comm-config.cmake
From a previous issue I found that this is because I didn't install the sensor_fusion package. I tried to run a similar process with the ethzasl_msf package and got this error:

CMake Error at /opt/ros/indigo/share/catkin/cmake/catkinConfig.cmake:75 (find_package):
Could not find a package configuration file provided by "asctec_hl_comm"
with any of the following names:

asctec_hl_commConfig.cmake
asctec_hl_comm-config.cmake

Obviously this is because that framework hasn't been installed.

Could you please give me some suggestions on this issue?

Thanks!
Xinke
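For what it's worth, both CMake errors point to missing sibling packages rather than a broken installation: catkin resolves find_package() across every package in the workspace, so the usual layout is to clone ethzasl_msf, asctec_mav_framework, and the packages providing sensor_fusion_comm and asctec_hl_comm side by side in the same src folder and run catkin_make once over the whole workspace.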

Integrating onboard MAV with multiple sensors sources

OK, so I've been reading up on whatever I've been able to find on your framework. I would like to thank you guys for the amazing job you've done and for keeping this framework updated.

I'm working on a small quadrotor which should be able to transition seamlessly from indoor to outdoor environments and switch between sensor sources depending on availability/data quality. I have the following sensors onboard : 9DOF IMU (MPU 6000 accel-gyro + HMC5883L magneto), Baro (MS5611), GPS, Forward facing monocular camera (for PTAM), Downward facing optical flow camera (PX4Flow) and downward pointing Sonar/Laser rangefinder.

Now, the PTAM and IMU data can be used anywhere, but the availability of GPS, optical flow, barometer and sonar changes with outdoor <> indoor transitions and with altitude above ground.

My questions might sound a bit noobish, but please bear with me.

  1. Does the present IMU implementation take the magnetometer data into account? Fusing the mag would improve PTAM's yaw drift, I think?
  2. Does this framework output a pose estimate which is then fed into the flight controller's position controller? Or does it output a final attitude + position? Would this mean that absolutely no sensor fusion would be done on the low-level FC?
  3. I am using an Odroid U3 at the moment for running ROS. Would this be enough computational power, or should I try to get hold of an Intel board or similar?
  4. Is the transition between global and local position handled in the framework already? Could we keep an approximate global estimate from the last good GPS fix before transitioning to local sensors?
  5. I read in your IROS 2013 paper that you had flown trajectories at 4 m/s. How were you handling the navigation?
  6. In my FC, any position data (GPS/optical flow) is fused with inertial data before being passed on to the controllers to improve performance. How would this be handled if I use this framework?

I'm sure that I will have more questions in the future about getting this fully integrated onboard so please keep this issue open. THANKS!!

Also, I'd just like you guys to know that I'm not affiliated with any college/university. I'm just a 16-year-old student with a crazy interest in MAVs.

Kabir

unresolved #includes

In msf_simulate/include/mav_planning_utils/uncoupled_motion.h: #include <mav_planning_utils/dynamic_model.hpp>
In msf_simulate/src/test/test_simulated_trajectory.cc: #include <msf_simulate/pose_sensormanager.h>

Quaternion convention

I know it is a highly controversial topic; however, I am not happy with having different quaternion notations around the lab. Aslam/aslam visual inertial uses sm kinematics, which uses the JPL convention. The msf/helis use Eigen, which is Hamilton. Should we unify this at some point?

Configuring for forward-looking camera

@simonlynen @stephanweiss I'm integrating MSF with a forward-looking camera, now that I've had success with a downward-looking camera.

The camera is mounted on the front of a quadrotor, with Z pointing out from the lens (i.e. forward), X right and Y down.
The IMU is in the NED frame, with X pointing forward, Y left and Z up.

What would be the required rotation parameters for this setup?

How to integrate this framework with self-made multirotor?

Hi all,
After reading the tutorial on the ROS ethzasl_sensor_fusion wiki, I have a brief overview. Here is my goal: let my multirotor fly in indoor environments (you can see more information here). In that thread, Stephan said the old ethzasl_sensor_fusion can be integrated without AscTec helicopters. But there are some things I'm still trying to figure out.

1) I want to integrate ethzasl_ptam (visual SLAM, 6 DOF) and a pressure sensor (1 DOF) with my multirotor. Which launch file is closest to my scenario?

2) Related to question 1: I found a launch file (pose_pressure_sensor.launch) that looks very close to my scenario, but I'm not completely sure. I had a look at this launch file and its rosparam file (position_sensor_fix.yaml). In position_sensor_fix.yaml:

core/core_noise_acc: 0.083
core/core_noise_accbias: 0.0083
core/core_noise_gyr: 0.0013
core/core_noise_gyrbias: 0.00013

Do these bias values represent the EKF initial state?

3) This ROS package (ethzasl_msf) needs the dependency asctec_mav_framework. Is there any way to remove it? We do not have any AscTec MAVs.

4) The data rate of the sensors: does this framework place any limits on sensor data rates? Because discrete Kalman filter timing is critical, I assume this framework acts like the old ethzasl_sensor_fusion: as the tutorial mentions, prediction (propagation) runs continuously based on the IMU, and the measurement update is done at a lower rate (PTAM is 20 Hz).

Thanks for reading this long question. I'm glad you opened this framework to those working on sensor fusion research.

FBoris

Forward facing camera, LSD-SLAM

Hi, could someone help me with the proper rotation setup for a forward facing camera running LSD-SLAM on a MAV with a Pixhawk?

The IMU data is coming in at 50 Hz; x is positive forward, y is positive left, z is positive up. For LSD-SLAM I have output on a PoseWithCovarianceStamped topic, though I disregard the covariance and use a fixed one. The camera is mounted at the front facing forward: Z positive is forward, x positive is right and y positive is down.

--> Does q_ic essentially stand for the rotation such that I rotate the LSD frame until its coordinate axes align with those of the IMU? In that case, it would mean either:

pose_sensor/init/q_ic/w: 0.5
pose_sensor/init/q_ic/x: 0.5
pose_sensor/init/q_ic/y: 0.5
pose_sensor/init/q_ic/z: 0.5

or:

pose_sensor/init/q_ic/w: 0.5
pose_sensor/init/q_ic/x: -0.5
pose_sensor/init/q_ic/y: -0.5
pose_sensor/init/q_ic/z: -0.5

based on whether I rotate the IMU into the camera frame, or the camera into the IMU frame... which one is correct?
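One way to narrow this down is to check, for each candidate, where the camera axes land (a sketch using Eigen; whether q_ic maps camera-to-IMU or IMU-to-camera is an assumption to verify against the framework's documentation):

#include <iostream>
#include <Eigen/Geometry>

int main() {
  // The two candidate quaternions (w, x, y, z); q2 is the conjugate of q1.
  Eigen::Quaterniond q1(0.5, 0.5, 0.5, 0.5);
  Eigen::Quaterniond q2(0.5, -0.5, -0.5, -0.5);
  // Camera axes: z = optical axis (forward), x = right, y = down.
  Eigen::Vector3d cam_x(1, 0, 0), cam_z(0, 0, 1);
  // If q_ic maps camera-frame vectors into the IMU frame, the correct
  // candidate should send cam_z to IMU x (forward) and cam_x to IMU -y
  // (right, since IMU y points left).
  std::cout << "q1: z->" << (q1 * cam_z).transpose()
            << "  x->" << (q1 * cam_x).transpose() << std::endl;
  std::cout << "q2: z->" << (q2 * cam_z).transpose()
            << "  x->" << (q2 * cam_x).transpose() << std::endl;
  return 0;
}

If neither candidate satisfies both axis checks, the correct q_ic is not a pure ±(0.5, 0.5, 0.5, 0.5) quaternion for this frame pair and would need to be built from the full rotation matrix instead.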

Also, no matter what rotation I use, I always get this warning (a lot of these warnings):

[ WARN] [1419351729.086496805]: Wanted to insert a value to the sorted container at time 1419351728.914192200 but the map already contained a value at this time. discarding.

and also no output on the pose topic... I know I'm doing something very basic and fundamentally wrong. Maybe someone can help me out. Thanks!

Marc

PS: here's the full sensor config:

data_playback: false
core/core_fixed_bias: false

###IMU PARAMETERS
# PIXHAWK

core/core_noise_acc: 0.013
core/core_noise_accbias: 0.000108
core/core_noise_gyr: 0.00065
core/core_noise_gyrbias: 0.00000212

pose_sensor/pose_fixed_scale: false
pose_sensor/pose_noise_scale: 0.0
pose_sensor/pose_noise_p_wv: 0.0
pose_sensor/pose_noise_q_wv: 0.0
pose_sensor/pose_noise_q_ic: 0.0
pose_sensor/pose_noise_p_ic: 0.0
pose_sensor/pose_delay: 0.02
pose_sensor/pose_noise_meas_p: 0.005
pose_sensor/pose_noise_meas_q: 0.02
pose_sensor/pose_initial_scale: 1

#initialization of camera w.r.t. IMU

pose_sensor/init/q_ic/w: 0.5
pose_sensor/init/q_ic/x: -0.5
pose_sensor/init/q_ic/y: -0.5
pose_sensor/init/q_ic/z: -0.5

pose_sensor/init/p_ic/x: 0.02
pose_sensor/init/p_ic/y: 0.0
pose_sensor/init/p_ic/z: 0.01

pose_sensor/pose_absolute_measurements: true
pose_sensor/pose_use_fixed_covariance: true
pose_sensor/pose_measurement_world_sensor: false

pose_sensor/pose_fixed_scale: false
pose_sensor/pose_fixed_p_ic: true
pose_sensor/pose_fixed_q_ic: true
pose_sensor/pose_fixed_p_wv: false
pose_sensor/pose_fixed_q_wv: false

and the initial output after initializing via rqt_reconfigure:

[ INFO] [1419351728.873066700]: initial measurement pos:[ 0.0134631 -0.0297135 -0.105087] orientation: [0.999, -0.00101, 0.038, 0.0128]
[ WARN] [1419351729.010553487]: Using simulated core plus fixed diag initial error state covariance.
[ INFO] [1419351729.020954716]: Initializing msf_core (built: Dec 23 2014)
[ INFO] [1419351729.021107834]: Core parameters:
fixed_bias: 0
fuzzythres: 0.1
noise_acc: 0.013
noise_accbias: 0.000108
noise_gyr: 0.00065
noise_gyrbias: 2.12e-06

[ INFO] [1419351729.021295650]: Core init with state:
--------- State at time 1728.91s: ---------
0 : [0-2] : Matrix<3, 1> : [0.00400788 -0.0499617 -0.104307]
1 : [3-5] : Matrix<3, 1> : [0 0 0]
2 : [6-9] : Quaternion (w,x,y,z) : [0.475, 0.512, 0.525, 0.487]
3 : [10-12] : Matrix<3, 1> : [0 0 0]
4 : [13-15] : Matrix<3, 1> : [0 0 0]
5 : [16-16] : Matrix<1, 1> : [1]
6 : [17-20] : Quaternion (w,x,y,z) : [1, 0, 0, 0]
7 : [21-23] : Matrix<3, 1> : [0 0 0]
8 : [24-27] : Quaternion (w,x,y,z) : [0.5, -0.5, -0.5, -0.5]

9 : [28-30] : Matrix<3, 1> : [0.02 0 0.01]

Glog library not found

I cloned the repository today and tried to compile it on Ubuntu 12.04 and 13.10, and I got the same error on both, stating:

CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
GLOG_LIBRARY (ADVANCED)
linked by target "test_similaritytransform" in directory [...]
linked by target "test_static_statelist" in directory [...]

How do I get glog to be installed and found properly?

Thanks for your help
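For what it's worth, on Ubuntu glog is typically available as the libgoogle-glog-dev apt package (sudo apt-get install libgoogle-glog-dev); alternatively, building the glog_catkin wrapper mentioned elsewhere in this tracker in the same workspace may also resolve the missing GLOG_LIBRARY.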

Inquiry for gravity handlering in initialization

Hi, I was learning to use MSF recently and have this question:

I assume that a_m is the acceleration of the IMU expressed in its own body frame. This seems consistent with this line:

dv = (state_new->template Get<StateDefinition_T::q>().toRotationMatrix() *
        ea + state_old->template Get<StateDefinition_T::q>().toRotationMatrix() *
        eaold) / 2;

But in line 166 of pose_sensormanager.h, a_m is directly initialized to the gravity vector, which is expressed in the world frame:

a_m = g;    /// Initial acceleration.

So I would suggest postponing line 166 until after q is initialized, and setting:

a_m = q.inverse() * g
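In Eigen terms, the suggestion amounts to something like this (a sketch, assuming q rotates IMU-frame vectors into the world frame and g is the world-frame gravity vector):

#include <Eigen/Geometry>

// Proposed initialization order: set the attitude q first, then express
// gravity in the IMU frame to get the initial acceleration reading.
Eigen::Vector3d initialAcceleration(const Eigen::Quaterniond& q,
                                    const Eigen::Vector3d& g) {
  return q.inverse() * g;  // a_m = q^{-1} * g (sign convention aside)
}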

Sensor configuration and Noise levels

Reading through the issue list I see a lot of questions (1, 2, 3) about how to set the sensor configuration correctly.

core/core_noise_acc
core/core_noise_gyr
pose_sensor/pose_noise_meas_p
pose_sensor/pose_noise_meas_q

There seems to be a lot of confusion about the units of these values. Let me write down some terms and units so that we all talk about the same thing.

Quantity                 Abbrev    Position        Acceleration
Standard deviation       std dev   m               m/s^2
Variance                 var       m^2             m^2/s^4
Noise spectral density   NSD       m/sqrt(Hz)      m/s^2/sqrt(Hz)
Power spectral density   PSD       m^2/Hz          m^2/s^4/Hz
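For completeness, the usual conversions between these quantities (with the common IMU convention of a sampling rate f_s) are:

PSD = NSD^2
var = (std dev)^2
std dev (discrete, per sample) ≈ NSD * sqrt(f_s)

so a continuous-time density from a datasheet and a discrete per-sample standard deviation differ by a factor of sqrt(f_s).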

I think that the configuration throughout the framework is not consistent yet and that we need to adapt some comments.

In the core cfg the comment says std dev; however, I suspect that what you enter here is treated as an NSD. I can trace it as far as calcQCore.h, but beyond that I cannot follow what happens with it. According to multiple comments by the authors, though, this is a continuous-time noise spectral density.

For the pose sensor cfg the comment is again std dev, and here I think it is correct, looking at the code:
if we provide a pose with covariance, the provided covariance is used directly; if fixed_covariance_ is set, we take the std dev from the config and square it to get the covariance.

To sum up: can we please write down the units for the config file? My suggestion is the following:

core/core_noise_acc   # NSD [m/s^2 / sqrt(Hz)] can be taken directly from the datasheet
core/core_noise_gyr    # NSD [rad/s / sqrt(Hz)] can be taken directly from the datasheet
pose_sensor/pose_noise_meas_p             # std dev [m] 
pose_sensor/pose_noise_meas_q              # std dev ??? what is the std dev for quaternions ???

Once the problem is settled, we are happy to help fix the comments in the cfg files.

Regards, Flavio and Antonio

delay and pose sensor setup

hi guys,
When I was using pose_sensor.launch as my launch file, I found I always got these message. I'm using imu(very poor imu MPU9250) at 800hz with svo at 60hz.

[ WARN] [1403628860.088846747]: msf_core: imu message drop curr seq:44574 expected: 44561
[ WARN] [1403628860.089542803]: You tried to give me a measurement which is too far in the past. Are you sure your clocks are synced and delays compensated correctly? [measurement: 8859.94 (s) first state in buffer: 8859.96 (s)]

My question is: how do I set the delay correctly?

Sometimes fuzzy tracking is triggered, and then the filter diverges. I referred to the fuzzy tracking discussion by @simonlynen (ethz-asl/ethzasl_sensor_fusion#20) and checked my quaternion of the camera w.r.t. the IMU. My understanding is that if I had set some rotation wrong, it would trigger fuzzy tracking consistently, but that did not happen. I'm not sure whether these two issues are related.

Last question: are these the two values for configuring the pose sensor translation and rotation noise, in pose_sensor.launch?

pose_sensor/pose_noise_meas_p: 0.02
pose_sensor/pose_noise_meas_q: 0.02

Magnetometer integration

Hi,

I am working on adding full magnetometer support to the MSF framework. I am trying to follow the approach described in Weiss' PhD thesis (chapter 3) and in the vismaggps branch of this repository (https://github.com/ethz-asl/ethzasl_msf/tree/vismaggps/vismaggps_fusion). Therefore I added three states to the state vector:

  • qm_i: magnetometer - imu rotation quaternion
  • alpha: (elevation) angle of the magnetic vector
  • beta: (azimuth) angle of the magnetic vector

Since the coding part is almost over, I would like to start with some testing in the coming days, but I would still like to ask you for some confirmations about the approach:

qm_i, alpha and beta process noises
I think it is correct to model these auxiliary states as noise-free (Weiss, page 91). Is that right?

alpha and beta init
If I've understood correctly I have to initialize the alpha (elevation) angle with the "inclination" value that can be found, for example, here: http://magnetic-declination.com/.
For Zurich the value is 63° 18' (i.e. 1.1 rads) but in the vismaggps code the value is initialized to -1.1 (https://github.com/ethz-asl/ethzasl_msf/blob/vismaggps/vismaggps_fusion/src/vismaggps_measurements.h#L77). I think this is due to some reference system adjustment for your setup. Can you confirm this ?
Also, what about the beta angle? Can I fix it to an arbitrary value (e.g. 0) ?
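For concreteness, one common parameterization of the world-frame magnetic field direction from the elevation alpha and azimuth beta is (conventions vary, so this should be checked against the thesis):

m_w = [ cos(alpha)*cos(beta),  cos(alpha)*sin(beta),  sin(alpha) ]^T

and whether the elevation enters with a plus or minus sign depends on whether the world z-axis points up or down, which could explain the -1.1 initialization.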

Noise settings in the Dynamic Reconfigure tool
I don't understand the convention chosen for the noise representation in the ROS Dynamic Reconfigure tool and in the .yaml files.
From #54 (comment) it seems I should use the continuous-time noise densities, but in the .cfg files it seems I should use the standard deviation directly (e.g. "noise for measurement sensor (std. dev)" in https://github.com/ethz-asl/ethzasl_msf/blob/master/msf_updates/cfg/PositionPoseSensor.cfg#L55).

Thanks

EKF heading output and flight controller's heading

Hey guys,
This came up during a discussion session. Currently, in our systems, we reset the flight controller's yaw to the heading from the EKF, which is a bit hacky.
How do you guys handle this 'properly' in your systems?
Kabir
