tudat's People

Contributors

aleixpinardell, briha, cfortunylombra, deepsourcebot, delfi-c3, dominicdirkx, dominikstiller, elmarputs, filippooggionni, frankhogervorst, fringuels, gaffarelj, geoffreygarrett, ifodde, jo11he, kartikkumar, kimonito98, magnific0, miguelavillez, mvandenbroeck, reneh107, rodyoldenhuis, samfayolle, simon-van-hulle, terminalqz, transferorbit


tudat's Issues

Add interface to change density model inputs

Currently, the density values are retrieved from NRLMSISE-00 based on the date of the simulation. It would be nice to be able to override the inputs of the density model itself (e.g. the solar and geomagnetic activity indices), without being tied to the date.
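
A rough sketch of how such an interface might look in tudatpy is given below; the keyword arguments for the activity indices are hypothetical and do not exist in the current API.

# Illustrative sketch only: the existing factory is date-driven, while the
# requested variant (commented out) would take user-supplied activity indices.
# The keyword arguments f107, f107a and ap are hypothetical, not existing API.
from tudatpy.numerical_simulation import environment_setup

atmosphere_settings = environment_setup.atmosphere.nrlmsise00()
# atmosphere_settings = environment_setup.atmosphere.nrlmsise00(
#     f107=150.0, f107a=150.0, ap=4.0)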

test_aerodynamics_TabulatedAtmosphere failing on OSX

On OSX, the test_aerodynamics_TabulatedAtmosphere test is failing, preventing the tudat conda package from being built.

The full error page can be accessed here. The error message is the following:

        Start   6: test_aerodynamics_TabulatedAtmosphere

6: Test command: $SRC_DIR/build/tests/test_aerodynamics_TabulatedAtmosphere
6: Test timeout computed to be: 10000000
6: Running 10 test cases...
6: unknown location:0: fatal error: in "test_tabulated_atmosphere/testMultiDimensionalTabulatedAtmosphere": std::runtime_error: Data file could not be opened: /Users/runner/.tudat/resource/atmosphere_tables/MCDMeanAtmosphereTimeAverage/density.dat
6: $SRC_DIR/tests/src/astro/aerodynamics/unitTestTabulatedAtmosphere.cpp:176: last checkpoint: "testMultiDimensionalTabulatedAtmosphere" test entry
6: unknown location:0: fatal error: in "test_tabulated_atmosphere/testMultiDimensionalTabulatedAtmosphereWithInterpolationAndShuffledVariables": std::runtime_error: Data file could not be opened: /Users/runner/.tudat/resource/atmosphere_tables/MCDMeanAtmosphereTimeAverage/specificHeatRatio.dat
6: $SRC_DIR/tests/src/astro/aerodynamics/unitTestTabulatedAtmosphere.cpp:210: last checkpoint: "testMultiDimensionalTabulatedAtmosphereWithInterpolationAndShuffledVariables" test entry
6: unknown location:0: fatal error: in "test_tabulated_atmosphere/testMultiDimensionalTabulatedAtmosphereDefaultExtrapolation": std::runtime_error: Data file could not be opened: /Users/runner/.tudat/resource/atmosphere_tables/MCDMeanAtmosphereTimeAverage/pressure.dat
6: $SRC_DIR/tests/src/astro/aerodynamics/unitTestTabulatedAtmosphere.cpp:249: last checkpoint: "testMultiDimensionalTabulatedAtmosphereDefaultExtrapolation" test entry
6: 
6: *** 3 failures are detected in the test module "Master Test Suite"
6: 
  6/220 Test   #6: test_aerodynamics_TabulatedAtmosphere ...................................***Failed    0.13 sec

This test has been disabled with c97c3aa, waiting for a fix.

Incorrect mass propagation setup handling

When the mass of a body is propagated, and the initial_body_masses list is left empty in numerical_simulation.propagation_setup.propagator.mass, an error is printed when the numerical_simulation.SingleArcSimulator() function is called. This error is the following:

python: /home/jerem/miniconda3/envs/tudat-bundle/include/eigen3/Eigen/src/Core/Block.h:146: Eigen::Block<XprType, BlockRows, BlockCols, InnerPanel>::Block(XprType&, Eigen::Index, Eigen::Index, Eigen::Index, Eigen::Index) [with XprType = const Eigen::Matrix<double, -1, 1>; int BlockRows = -1; int BlockCols = 1; bool InnerPanel = false; Eigen::Index = long int]: Assertion `startRow >= 0 && blockRows >= 0 && startRow <= xpr.rows() - blockRows && startCol >= 0 && blockCols >= 0 && startCol <= xpr.cols() - blockCols' failed.
Aborted

This error is not user-friendly, as it doesn't hint that the mass is simply missing from the inputs, and it is thrown only when the propagation is started rather than during the setup.
I suggest adding a check when numerical_simulation.propagation_setup.propagator.mass is called to verify that initial_body_masses has the correct length (and that its elements are of the correct type); a user-side version of such a check is sketched below.
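
A minimal sketch of that validation, done on the user side before building the mass propagator settings (body name and mass value are placeholders):

import numpy as np

bodies_to_propagate = ["Vehicle"]          # placeholder body name
initial_body_masses = np.array([500.0])    # one entry per propagated body

if len(initial_body_masses) != len(bodies_to_propagate):
    raise ValueError(
        f"initial_body_masses has {len(initial_body_masses)} entries, "
        f"but {len(bodies_to_propagate)} bodies are propagated.")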

Also, if the mass_rate_models are incorrect, the propagation will still work, but the mass will not be propagated and the following warning is printed:

Warning when making from-thrust mass-rate model, no thrust model is found; no thust is used

I suggest either editing the warning to tell the user more explicitly that the mass will not be propagated, or being less nice and throwing an error so that the user fixes the mass rate models.

Point mass gravity cannot be saved as dependent variable for non-Cowell propagators

When setting up a dependent variable for the acceleration norm of a point_mass_gravity_type acceleration, I consistently get the following error:

RuntimeError: Error when getting acceleration between bodies MAV and Mars of type 1, no such acceleration found

I get this both when I compile tudat-bundle and when I use the latest conda build.

I am currently investigating; it seems that no acceleration model is available in the corresponding accelerationModelList.
The issue appears specific to the point-mass gravity acceleration: I didn't encounter it with any other acceleration type.
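
For reference, a dependent-variable definition of the kind that triggers this (sketch; the module paths and enum exposure are assumed from tudatpy and may differ):

from tudatpy.numerical_simulation import propagation_setup

dependent_variables = [
    propagation_setup.dependent_variable.single_acceleration_norm(
        propagation_setup.acceleration.point_mass_gravity_type,  # assumed enum exposure
        "MAV",    # body undergoing the acceleration
        "Mars")]  # body exerting the acceleration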

Create the tudat C++ API documentation

A website similar to the tudatpy API documentation (tudatpy.readthedocs.io) should be created for the tudat C++ API.

This will require editing all the docstrings that are present in tudat source code, to convert them to regular comments (jo11he can indicate how to do this).

Quite a bit of C++ API documentation is already present in the main docstrings, but it is expected that some cleaning and improvements will be required.

This task should wait at least a couple of weeks for now.

Create separate function to estimate covariance

Currently, the covariance from an estimation problem can only be determined by doing a single iteration of the estimation. This may also modify the values of the parameters themselves, leading to undesired behaviour. A separate function will be written that only calculates the covariance.

Error in costateBasedThrustGuidance reference frame transformation

I ran into an issue where the costateBasedThrustGuidance would return different thrust angles than expected.

currentForceDirection_ =
    reference_frames::getVelocityBasedLvlhToInertialRotation(
        currentState, Eigen::Vector6d::Zero( ), false ) *
    ( Eigen::Vector3d( ) <<
        cos( thrustAngleAlpha ) * cos( thrustAngleBeta ),
        sin( thrustAngleAlpha ) * cos( thrustAngleBeta ),
        sin( thrustAngleBeta ) ).finished( ).normalized( );

The thrust angles (α, β) in the guidance law are defined in an RSW frame, whereas they are currently transformed to the inertial frame using the LVLH frame. I believe this is wrong and can be fixed either by using the inertial-to-RSW rotation that is available in reference_frames:

Eigen::Vector3d currentRSWForceDirection = ( Eigen::Vector3d() <<
    sin( thrustAngleAlpha ) * cos( thrustAngleBeta ),
    cos( thrustAngleAlpha ) * cos( thrustAngleBeta ),
    sin( thrustAngleBeta ) ).finished().normalized();
Eigen::Matrix3d inertialToRswRotMat =
    reference_frames::getInertialToRswSatelliteCenteredFrameRotationMatrix( currentState );
currentForceDirection_ = inertialToRswRotMat.transpose() * currentRSWForceDirection;

or, technically, also simply by determining the flight-path angle and using it to define α in the LVLH frame. But the former seems the cleaner and more obvious solution.

The reason I only noticed this issue now is that I've been working with near-circular orbits; the error becomes more pronounced with increasing eccentricity. I also discussed this with Marie.

Unit test: termination_condition_bug_fix

Issue with how the dynamics simulator and variational equations handle various post-processing steps, as shown by the integration_completed_successfully checks after each integration of the equations of motion in an estimation application.

Checking the od_manager.variational_solver.dynamics_simulator attribute after having used the od_manager object in the simulation of observations:

estimation.simulate_observations(
    observation_simulation_settings,
    my_od_manager.observation_simulators,
    bodies)

The integration_completed_successfully check fails here, with the following details:
terminated_on_exact_condition = 1
termination_reason = propagation_never_run

Quick-fix in https://github.com/tudat-team/tudat/tree/feature/termination_condition_bug_fix, unit test still open.

Add custom dependent variable

  • Add a "custom" dependent variable to the list of available dependent variables; the user provides a function that computes it (see the sketch after this list).
  • Expose it in tudatpy
  • Document it in API reference and user guide
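
A hypothetical sketch of what the user-facing side could look like; the custom_dependent_variable factory, its signature and argument order are illustrative only, since the feature does not exist yet:

import numpy as np
from tudatpy.numerical_simulation import propagation_setup

def my_dependent_variable():
    # user-defined computation returning the values to be saved
    return np.array([42.0])

dependent_variables = [
    propagation_setup.dependent_variable.custom_dependent_variable(
        my_dependent_variable, 1)]  # compute function and its output size (hypothetical)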

Solar activity not found when starting in monthly predictions

It appears that no solar activity data can be found when a simulation starts more than 7257.5 days after J2000 (the 14th of November 2019). This was discovered using the nrlmsise00 atmosphere model, which uses solar activity as an input.

However, starting before 7257.5 days after J2000 and then propagating for years works fine.

Coincidentally, the space weather resource file switches from daily predictions to monthly predictions on the 14th of November 2019.

Coincidence? No: after manually updating to a more recent space weather file, the switch from daily to monthly predictions falls on the 20th of February 2022, and sure enough the same error now occurs for that new date instead.

In short: starting a simulation at a time for which the space weather file contains monthly predictions makes the solar activity data inaccessible, while starting in the observations or daily-prediction range and later switching to monthly predictions works fine.

Error checking for empty body settings

Hi all,

Apparently it is possible to create a body with empty settings, which eventually causes the code to crash when creating the dynamics simulator... so it should be checked somewhere whether a given body has empty settings, and an error thrown if so.

Code triggering the issue is attached: empty body settings are created in line 78, and the code crashes in line 290.

Assigned the issue to Jérémie... but I'm not completely sure whether you're the responsible person.

test_tudat_bundle.py.zip

Implementing a Warning when the list of estimated parameters is arranged differently from the order defined

Hi all,

I noticed that the list of estimated parameters is arranged differently from the order in which it is defined. For instance, if I use this definition for parameter_settings:

parameter_settings = estimation_setup.parameter.initial_states(propagator_settings,bodies)
parameter_settings.append(estimation_setup.parameter.ground_station_position("Mars", reflector_name))
parameter_settings.append(estimation_setup.parameter.core_factor("Mars"))
parameter_settings.append(estimation_setup.parameter.free_core_nutation_rate("Mars"))
parameter_settings.append(estimation_setup.parameter.periodic_spin_variations("Mars"))
parameter_settings.append(estimation_setup.parameter.polar_motion_amplitudes("Mars"))

and then print the identifiers and indices of the parameters (using the estimation_setup.print_parameter_names(parameters_set) function):

Parameter start index, Parameter definition
0, translational state of (Mars).
6,  core factor of the celestial body of (Mars).
7,  free core nutation rate of the celestial bodyof (Mars).
8, ground station position of (Mars, LaRa).
11,  periodic spin variation for full planetary rotational model of (Mars).
19,  polar motion amplitude for full planetary rotational modelof (Mars).

This shows that the ground station position elements are not placed right after the translational state. Hence, it would be beneficial if a warning were raised when the list of estimated parameters is arranged differently from the order in which it was defined. This matters because one needs to provide an a priori parameter correction and an inverse a priori covariance, and therefore has to understand the order of the estimated parameters beforehand.
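
In the meantime, the printed start indices can be used to build the inverse a priori covariance against the actual ordering; a minimal NumPy sketch (the total size and sigma values are placeholders, not from the issue):

import numpy as np

n_parameters = 21                                    # placeholder total parameter-vector size
inverse_apriori_covariance = np.zeros((n_parameters, n_parameters))

# indices follow the printout above: state at 0, core factor at 6,
# ground station position at 8 (not directly after the state)
position_sigma, velocity_sigma = 1.0e3, 1.0          # placeholder uncertainties
inverse_apriori_covariance[0:3, 0:3] = np.eye(3) / position_sigma**2
inverse_apriori_covariance[3:6, 3:6] = np.eye(3) / velocity_sigma**2
inverse_apriori_covariance[8:11, 8:11] = np.eye(3) / 10.0**2   # station position, placeholder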

Hopefully, with this implementation, students will not fall into the trap of thinking that the list of estimated parameters is always arranged as defined.

Thank you in advance.

Greetings,
Carlos

Bug when viability settings are added to the simulation settings

Hi all,

I found that addViabilityToObservationSimulationSettings( measurementSimulationInput, observationViabilitySettings ); does not work properly: the number of observations is not reduced when viability requirements are defined and added to the simulation settings. This suggests that the simulation is performed without taking the viability settings into account.

I noticed this when I was running this code. Here, you can find an example of how it is defined.

Thank you in advance.

Greetings,
Carlos

Feature Request: addDependentVariableToPropagatorSettings

Requesting a feature addDependentVariableToPropagatorSettings, which allows the user to add a (list of) SingleDependentVariableSaveSettings to an existing PropagatorSettings object.

(Analogous to the existing addDependentVariablesToObservationSimulationSettings function for observation simulation settings.)

Failing tests for Clang build on Windows using MSVC libraries

The tests fail because tolerances in the tests are exceeded. This must be resolved, but for now these tests will be omitted from the feedstock releases in the YOLO/YOLOAddTestCase.cmake file (on the tudat/develop branch), in order to meet the deadlines for Numerical Astrodynamics.

The following tests FAILED:
         12 - test_basic_astro_OrbitalElementConversions (Failed)
         21 - test_basic_astro_TimeConversions (Failed)
         26 - test_basic_astro_SphericalOrbitStateConversions (Failed)
         70 - test_shape_based_HodographicShaping (Failed)
         92 - test_orbit_determination_EstimationInput (Failed)
         95 - test_orbit_determination_HybridArcStateEstimation (Failed)
        142 - test_propagators_VariationalEquations (Failed)
        160 - test_basics_TimeTypes (Failed)
        221 - test_io_MultiArrayReader (Failed)
Errors while running CTest

Either we change the tolerances, or we determine why this occurs with MSVC libraries compiled with Clang.

Long double precision tests failing on some compilers

Dear all,

I have installed tudat-bundle following the instructions given in README.md. However, 7 tudat tests failed; the details of the error messages can be found in the attached txt file. For this I used the develop branch. I would also like to point out that the README.md says there are 224 tests, but there seem to be only 219 in my build directory.

In addition to this, the OS and compiler specifications that I utilized to compile everything can be found below:

C:\WINDOWS\system32\wsl.exe --distribution Ubuntu-20.04 --exec /usr/bin/zsh -c "export CLION_IDE=TRUE && export JETBRAINS_IDE=TRUE && cd /home/cfortunylombra/tudat-bundle/cmake-build-release-wsl && /usr/bin/cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=/home/cfortunylombra/anaconda3/envs/tudat-bundle -DCMAKE_CXX_STANDARD=14 -DBoost_NO_BOOST_CMAKE=ON -G 'CodeBlocks - Unix Makefiles' /home/cfortunylombra/tudat-bundle; exitcode=$?; sleep 0.001; (exit $exitcode)"
-- CMAKE_PREFIX_PATH: /home/cfortunylombra/anaconda3/envs/tudat-bundle
-- CMAKE_BINARY_DIR: /home/cfortunylombra/tudat-bundle/cmake-build-release-wsl
CMake Warning (dev) at CMakeLists.txt:23 (project):
  Policy CMP0048 is not set: project() command manages VERSION variables.
  Run "cmake --help-policy CMP0048" for policy details.  Use the cmake_policy
  command to set the policy and suppress this warning.

  The following variable(s) would be set to empty:

    CMAKE_PROJECT_VERSION
    CMAKE_PROJECT_VERSION_MAJOR
    CMAKE_PROJECT_VERSION_MINOR
    CMAKE_PROJECT_VERSION_PATCH
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Using UNDOCUMENTED tudat
-- System name: Linux
-- ******************** BUILD CONFIGURATION ********************
-- TUDAT_BUILD_TESTS                                     ON
-- TUDAT_BUILD_WITH_PROPAGATION_TESTS                    OFF
-- TUDAT_BUILD_WITH_ESTIMATION_TOOLS                     ON
-- TUDAT_BUILD_TUDAT_TUTORIALS                           ON
-- TUDAT_BUILD_STATIC_LIBRARY                            ON
-- TUDAT_BUILD_WITH_FILTERS                              OFF
-- TUDAT_BUILD_WITH_SOFA_INTERFACE                       ON
-- TUDAT_BUILD_WITH_JSON_INTERFACE                       OFF
-- TUDAT_BUILD_WITH_NRLMSISE00                           OFF
-- TUDAT_BUILD_WITH_EXTENDED_PRECISION_PROPAGATION_TOOLS OFF
-- TUDAT_DOWNLOAD_AND_BUILD_BOOST                        OFF
-- << Tudat (Release - ) >>

*** COMPILER CONFIGURATION ***

-- CMAKE_C_COMPILER:   /usr/bin/cc
-- CMAKE_CXX_COMPILER: /usr/bin/c++
-- CMAKE_CXX_COMPILER_ID: GNU
-- CMAKE_CXX_SIMULATE_ID: 
-- CMAKE_CXX_COMPILER_VERSION: 9.3.0
-- CMAKE_CXX_COMPILER_VERSION 9.3.0
-- Using gnucxx compiler.
-- Building with flags: -Wall -std=c++1z -Wextra -Wno-unused-parameter -Wno-unused-variable -Woverloaded-virtual -Wold-style-cast -Wnon-virtual-dtor -Wunused-but-set-variable -Wsign-compare.
-- Required Boost libraries: filesystem;system;regex;date_time;thread;chrono;atomic;unit_test_framework
-- Found Boost: /home/cfortunylombra/anaconda3/envs/tudat-bundle/include (found suitable version "1.72.0", minimum required is "1.72.0") found components: filesystem system regex date_time thread chrono atomic unit_test_framework 
-- Detected Boost version: 107200
-- Boost include dirs: /home/cfortunylombra/anaconda3/envs/tudat-bundle/include
-- Checking for _GLIBCXX_USE_CXX11_ABI definition...
-- _GLIBCXX_USE_CXX11_ABI was not found.
/home/cfortunylombra/anaconda3/envs/tudat-bundle/lib/cmake/tudatresources/../../../include
-- Extended precision propagation disabled!
Building Tudat tutorials.
-- TUDAT_DATA_DIR_RELATIVE_TO_INSTALL_PREFIX: data/tudat
-- Using UNDOCUMENTED tudatpy
-- System name: Linux
-- tudatpy version: 0.5.23
-- << tudatpy (Release - ) >>
-- Found Boost: /home/cfortunylombra/anaconda3/envs/tudat-bundle/include (found suitable version "1.72.0", minimum required is "1.72.0") found components: thread date_time system unit_test_framework filesystem regex chrono atomic 
-- Tudat:[Tudat_PROPAGATION_LIBRARIES]Tudat::tudat_propagation_setupTudat::tudat_shape_based_methodsTudat::tudat_low_thrust_trajectoriesTudat::tudat_environment_setupTudat::tudat_ground_stationsTudat::tudat_aerodynamicsTudat::tudat_system_modelsTudat::tudat_geometric_shapesTudat::tudat_relativityTudat::tudat_gravitationTudat::tudat_mission_segmentsTudat::tudat_electromagnetismTudat::tudat_propulsionTudat::tudat_ephemeridesTudat::tudat_earth_orientationTudat::tudat_numerical_integratorsTudat::tudat_reference_framesTudat::tudat_statisticsTudat::tudat_propagatorsTudat::tudat_spice_interfaceTudat::tudat_sofa_interfaceTudat::tudat_basic_astrodynamicsTudat::tudat_numerical_quadratureTudat::tudat_interpolatorsTudat::tudat_root_findersTudat::tudat_basic_mathematicsTudat::tudat_input_outputTudat::tudat_basics
-- Tudat:[Tudat_INCLUDE_DIRS]
-- Python modules do NOT require linking to the Python library.
-- Python interpreter: /home/cfortunylombra/anaconda3/envs/tudat-bundle/bin/python
-- Python interpreter version: 3.8.12
-- Python include dir: /home/cfortunylombra/anaconda3/envs/tudat-bundle/include/python3.8
-- Generic UNIX platform detected.
-- Python modules install path: lib/python3.8/site-packages
-- Setting up the compilation of the Python module 'kernel'.
-- Setting up extra compiler flag '-fwrapv' for the Python module 'kernel'.
-- Configuring done
-- Generating done
-- Build files have been written to: /home/cfortunylombra/tudat-bundle/cmake-build-release-wsl

Thank you in advance.

Best,
Carlos

Integrators refactoring

We currently advise users who wish to use high-order RK methods with a fixed step to use the RungeKuttaVariableStepSizeIntegrator with a tolerance of inf. Even then, the RungeKuttaVariableStepSizeIntegrator still tries to compute an optimal step size.
I suggest two improvements.

First (and easiest) is to detect when the tolerance is inf in RungeKuttaVariableStepSizeIntegrator, and to simply return the minimum step size in computeNextStepSizeAndValidateResult and computeNewStepSize in this case. This would bypass useless computations (and avoid potential errors if the step-size computation goes to infinite values; see issue #77 for instance).

Second, we could add a new RungeKuttaIntegrator that does fixed-step integration, much like the RungeKutta4Integrator. The user would then only have to specify the initial epoch, the coefficient set, and the step size. With this in place, we could actually remove the RungeKutta4Integrator, as the user could use the new RungeKuttaIntegrator with the RK4 coefficient set. But removing RungeKutta4Integrator would break backward compatibility (we could simply remove it from the docs and/or the Python exposure instead). A hypothetical sketch of the resulting user-facing settings is given below.
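
Hypothetical tudatpy-style sketch of such fixed-step settings (the factory name, keyword arguments and coefficient-set enum are illustrative, not existing API):

from tudatpy.numerical_simulation import propagation_setup

# illustrative factory name and arguments for the proposed fixed-step integrator
integrator_settings = propagation_setup.integrator.runge_kutta_fixed_step_size(
    initial_time=0.0,    # seconds since J2000 (illustrative)
    time_step=10.0,      # fixed step size [s]
    coefficient_set=propagation_setup.integrator.CoefficientSets.rkf_78)
# with an RK4 coefficient set, this would reproduce the current RungeKutta4Integrator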

@DominicDirkx also mentioned another improvement to make the variable-step-size computation more efficient:
I propose adding new integrator options where, instead of evaluating all stages of (for instance) the RKF7(8), we evaluate only the stages needed for the 7th- or 8th-order solution.

Broken tests after data path changes + install step

In short: the installation step was added to make integration with conda-smithy (toward conda packaging) as headache-free as possible. Around 30 tests have been temporarily disabled, for two separate reasons.

All test data that was stored within the source tree needs to be moved into the tests folder. The conda process fails because the source tree is no longer available (it is deleted or moved) before the build tree is tested, so the test data must remain available until packaging. As far as I can tell, there are a few ways this can be resolved:

  1. Host the test data and use the ExternalData module in cmake, if we wish to continue with CTest.

  2. Add a header file within the tests directory of the source that facilitates the build tree paths (like include/tudat/paths.hpp - which may be merged with the config.hpp - in the build tree).

  3. Only test the installation, moving all the test data from source to the install temporarily and use the files key in the meta.yaml to avoid this data being packaged.

     # Fails without install, as it looks for installed data paths
     test_aerodynamics_AerodynamicMomentAndAerodynamicForce
     test_aerodynamics_TabulatedAtmosphere
     test_aerodynamics_ControlSurfaceIncrements
     test_aerodynamics_WindModel
     test_ephemerides_ApproximatePlanetPositions
     test_ephemerides_TabulatedEphemeris
     test_spice_SpiceInterface
     test_simulation_EnvironmentModelSetup
     test_simulation_AccelerationModelSetup
     test_io_BasicInputOutput
    
     # Fails without test data, as the source tree isn't available after building (conda)
     test_aerodynamics_AerodynamicCoefficientsFromFile
     test_basic_astro_EmpiricalAcceleration
     test_earth_orientation_EarthOrientationCalculator
     test_earth_orientation_EopReader
     test_earth_orientation_PolarMotionCalculator
     test_earth_orientation_TimeScaleConverter
     test_earth_orientation_ShortPeriodEopCorrections
     test_electromagnetism_PanelledRadiationPressure
     test_interpolators_CubicSplineInterpolator
     test_interpolators_LinearInterpolator
     test_interpolators_MultiLinearInterpolator
     test_integrators_EulerIntegrator
     test_integrators_RungeKutta4Integrator
     test_integrators_RungeKuttaFehlberg45Integrator
     test_integrators_RungeKuttaFehlberg78Integrator
     test_integrators_RungeKutta87DormandPrinceIntegrator
     test_quadrature_GaussianQuadrature
     test_io_MapTextFileReader
     test_io_MatrixTextFileReader
     test_io_TwoLineElementsTextFileReader
     test_io_MissileDatcomReader
     test_io_MissileDatcomData
     test_io_DictionaryInputSystem
     test_io_MultiArrayReader
     test_io_MultiArrayWriter
     test_io_AerodynamicCoefficientReader
    

Request: Dedicated, early Error Message for missing bias settings

When adding acceleration model parameters to the parameter set, there is a nice mechanism that checks for the required acceleration model information to be present during the creation of the parameter set.
A similar feature does not seem to exist for observation model parameters (at least not range biases).

As of now it seems as if there is no mechanism to warn the user about missing observation model information when adding observation model parameters to the parameter set.
Instead, the user is confronted with a

RuntimeError: std::exception

during the tss::OrbitDeterminationManager< >::estimateParameters call.

A check similar to the one verifying consistency between the acceleration models and the acceleration model parameters would be appreciated for the observation models and observation model parameters.

JSON tests are not passing correctly on all platforms

The OS-specific error logs should be posted in this issue; I currently don't have them at hand. The JSON manager raises warnings on the macOS Clang build and fails on Linux GNU. These tests are currently being skipped in tudat/compiler_modules/yolo/YOLOAddTestCase, as they are not required to pass for the Numerical Astrodynamics course deadlines.

            test_json_Acceleration
            test_json_Aerodynamics
            test_json_Body
            test_json_Ephemeris
            test_json_GroundStation
            test_json_Interpolation
            test_json_Propagator
            test_json_SimulationSingleSatellite
            test_json_SimulationSinglePerturbedSatellite
            test_json_SimulationInnerSolarSystem
            test_json_SimulationGalileoConstellation
            test_json_SimulationThrustAlongVelocityVector
            test_json_SimulationThrustAccelerationFromFile
            test_json_State
            test_json_Thrust

Improved FlightConditions error output

When trying to define aerodynamic angles without FlightConditions having been created, this error is thrown:

RuntimeError: Error when setting constant aerodynamic angles, body Capsule has no FlightConditions

This does not explain what should be done. Unfortunately, the FlightConditions cannot be created 'on the fly', because this requires specifying a central body. But a clearer error message explaining what to do should be provided.
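
If I recall the tudatpy exposure correctly, the user-side remedy is along these lines (treat the helper name and argument order as an assumption):

from tudatpy.numerical_simulation import environment_setup

# assumed helper: explicitly create FlightConditions for "Capsule" w.r.t. a central body,
# where 'bodies' is the SystemOfBodies already created earlier in the script
environment_setup.add_flight_conditions(bodies, "Capsule", "Earth")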

Implementing a Warning for rotation matrix from body fixed to inertial frame

Hi everyone,

I just noticed that I get an identity matrix when I retrieve the rotation matrix from the body-fixed to the inertial frame using the body_fixed_to_inertial_frame attribute on the feature/estimation_updates branch. I understand that the problem is that I should provide the time in order to obtain the correct rotation matrix. However, it would be beneficial if a warning were raised when the following is written without a specific time:

rotation_from_Earth_body_frame_to_inertial_frame = bodies.get_body("Earth").body_fixed_to_inertial_frame
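
For reference, a time-dependent retrieval would look roughly like this (the attribute and method names are assumed from the tudatpy rotation-model exposure, so treat them as an assumption):

epoch = 0.0  # seconds since J2000; choose the epoch of interest
# 'bodies' is the SystemOfBodies already created earlier in the script
earth_rotation_model = bodies.get_body("Earth").rotation_model
rotation_at_epoch = earth_rotation_model.body_fixed_to_inertial_rotation(epoch)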

Hopefully, with this implementation, students will not fall into the trap of thinking that the rotation matrix is defined correctly.

Thank you in advance.

Greetings,
Carlos

New general fixed-step integrator method

The Euler and fixed-step RK4 integrators should be re-made into a new fixed-step integrator method that takes (amongst others) a Butcher tableau as input.

The Euler and RK4 integrators can then be replaced by this method with the corresponding tableau.

Because this new method would be generic, different tableaus (taken from a new or already implemented enum/switch) would allow a more straightforward implementation of fixed-step RK5, 6, 7, 8, etc. integrators (instead of using variable-step integrators with tolerances of inf). A small sketch of such a tableau-driven step is given below.
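
A minimal NumPy sketch (independent of tudat) of a fixed-step explicit Runge-Kutta step driven by a Butcher tableau, to illustrate the proposed genericity:

import numpy as np

def rk_fixed_step(f, t, y, h, a, b, c):
    # advance y(t) by one step of size h using explicit RK coefficients (a, b, c)
    stages = len(b)
    k = np.zeros((stages, len(y)))
    for i in range(stages):
        y_stage = y + h * sum(a[i][j] * k[j] for j in range(i))
        k[i] = f(t + c[i] * h, y_stage)
    return y + h * sum(b[i] * k[i] for i in range(stages))

# the classical RK4 tableau reproduces the existing RungeKutta4Integrator behaviour
a = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0, 0.5, 0.5, 1]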

Default reference frame for spherical harmonics

Defining a spherical harmonic gravity field manually requires specifying the reference frame in which it is defined. This is the associated_reference_frame input in tudatpy:

https://tudatpy.readthedocs.io/en/latest/gravity_field.html#tudatpy.numerical_simulation.environment_setup.gravity_field.spherical_harmonic

Since this presently has to be the body-fixed frame of the body, I propose making this input optional, with the default being whatever the current identifier of the body-fixed frame is.
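
Roughly, the current usage looks as follows (argument names taken from the linked API page; the coefficient values are placeholders, and making the last argument optional is the proposal, not existing behaviour):

import numpy as np
from tudatpy.numerical_simulation import environment_setup

# placeholder 2x2 normalized coefficient arrays, for illustration only
cosine_coefficients = np.array([[1.0, 0.0], [0.0, 0.0]])
sine_coefficients = np.zeros((2, 2))

gravity_settings = environment_setup.gravity_field.spherical_harmonic(
    gravitational_parameter=3.986004418e14,   # m^3/s^2
    reference_radius=6378136.3,               # m
    normalized_cosine_coefficients=cosine_coefficients,
    normalized_sine_coefficients=sine_coefficients,
    associated_reference_frame="IAU_Earth")   # currently mandatory; proposal: default to the body-fixed frame identifier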

Converting LinkEndID, LinkEnds typedefs to class/struct

Make classes/structs out of the typedefs (follow-up of the 14.01 meeting, API docs of the observation module).
This will have functional benefits (flexibility in the type of LinkEndID, etc.) and a more homogeneous interface for Python users.

Input from the tudat team, especially on the new naming, is welcome here:

  • LinkEnds

  • LinkEndID

  • rename new classes

Tudat estimation tools can't be built

The CMake configuration has not been updated to support some libraries that were not being tested in the initial refactoring, one group of which is the estimation libraries.

Could not find boost while creating Tudat application from CMakeLists.txt example

Hello,

I need to create a C++ Tudat app, using CMake, on Ubuntu 20.04. When I run CMake: Configure in Visual Studio Code on the example Tudat CMakeLists.txt, i.e., https://github.com/Tudat/tudatExampleApplications/blob/master/templateApplication/TemplateApplication/CMakeLists.txt, I receive the following error:

[cmake] CMake Error at /usr/share/cmake-3.16/Modules/FindPackageHandleStandardArgs.cmake:146 (message):
[cmake]   Could NOT find Boost (missing: Boost_INCLUDE_DIR thread date_time system
[cmake]   unit_test_framework filesystem regex) (Required is at least version
[cmake]   "1.45.0")
[cmake] Call Stack (most recent call first):
[cmake]   /usr/share/cmake-3.16/Modules/FindPackageHandleStandardArgs.cmake:393 (_FPHSA_FAILURE_MESSAGE)
[cmake]   /usr/share/cmake-3.16/Modules/FindBoost.cmake:2179 (find_package_handle_standard_args)
[cmake]   CMakeLists.txt:99 (find_package)
[cmake] 
[cmake] 
[cmake] CMake Warning (dev) in /usr/share/cmake-3.16/Modules/FindBoost.cmake:
[cmake]   Policy CMP0011 is not set: Included scripts do automatic cmake_policy PUSH
[cmake]   and POP.  Run "cmake --help-policy CMP0011" for policy details.  Use the
[cmake]   cmake_policy command to set the policy and suppress this warning.
[cmake] 
[cmake]   The included script
[cmake] 
[cmake]     /usr/share/cmake-3.16/Modules/FindBoost.cmake
[cmake] 
[cmake]   affects policy settings.  CMake is implying the NO_POLICY_SCOPE option for
[cmake]   compatibility, so the effects are applied to the including context.
[cmake] Call Stack (most recent call first):
[cmake]   CMakeLists.txt:99 (find_package)
[cmake] This warning is for project developers.  Use -Wno-dev to suppress it.
[cmake] 
[cmake] -- Configuring incomplete, errors occurred!
[cmake] See also "/home/piotr/pg/st/sat/aocs_design_suite_2/build/CMakeFiles/CMakeOutput.log".
[cmake] See also "/home/piotr/pg/st/sat/aocs_design_suite_2/build/CMakeFiles/CMakeError.log".

even though I have installed the Boost components, i.e. thread, date_time, system, test, filesystem and regex, in version 1.71.0.

It may be relevant that I have installed Tudat using a .yaml file:

name: tudat-test
channels:
  - conda-forge
  - tudat-team
dependencies:
  - tudat
  - matplotlib
  - scipy
  - pandas

as recommended in #51

Homogeneous triaxial ellipsoid gravity leads to unrealistic values

Using createHomogeneousTriAxialEllipsoidGravitySettings for the gravity field settings leads to unrealistically high values, as if the gravity field were of abnormally high intensity.

This is also reflected in the function exposed in tudatpy: spherical_harmonic_triaxial_body.

NaN ephemeris time passed to SPICE spkezr_c

When I run a Mars ascent propagation with an RKF7(8) integrator with an increasingly small time step, and the propagation needs (for some reason) to call the spkezr_c routine from SPICE, I get the following error:

Toolkit version: N0066

SPICE(DAFNEGADDR) --

Negative value for BEGIN address: -2147053925

A traceback follows.  The name of the highest level module is first.
spkezr_c --> SPKEZR --> SPKEZ --> SPKGEO --> SPKPVN --> SPKR02 --> DAFGDA

Oh, by the way:  The SPICELIB error handling actions are USER-TAILORABLE.  You
can choose whether the Toolkit aborts or continues when errors occur, which
error messages to output, and where to send the output.  Please read the ERROR
"Required Reading" file, or see the routines ERRACT, ERRDEV, and ERRPRT.

After some testing, I changed the getBodyCartesianStateAtEpoch() function in spiceInterface.cpp to print the ephemerisTime input, and it appears that, after some time into the propagation, the ephemeris time passed to this function is -nan.

Now I still have no idea why tudat decides to change my ephemeris time (from 9.82325e+08) to -nan after some time. I expect that this error is caused by something that breaks in my simulation at small time steps.

Nevertheless, I suggest adding a check to the getBodyCartesianStateAtEpoch() function that triggers an error if the ephemerisTime is negative or NaN. We could also print a warning and return a zero vector as the body Cartesian state, but that could lead to more issues down the line.

Issue with pass-by-reference STL containers in low-thrust module

What function is involved?
void ShapeBasedMethod::getTrajectory(std::vector< double >& epochsVector, std::map< double, Eigen::Vector6d >& propagatedTrajectory ) in the shapeBasedMethod.cpp file.

Expected behaviour
When this function is called with a non-empty epochsVector containing the epochs at which one wants to compute the state, the map propagatedTrajectory (which is still empty at that point) should, after the call, contain key-value pairs with the epoch as key and the state as value for the computed trajectory.

Actual behaviour
After the function call, the propagatedTrajectory map is still empty.

Potential cause
It is known that pybind11 does not allow passing STL containers by reference out of the box, since it converts them to and from native Python types; the contents are copied into a new object, which breaks the reference (see also: https://pybind11.readthedocs.io/en/master/advanced/cast/stl.html#making-opaque-types).

Solution without modifying Tudat
Follow the instructions on the page mentioned above: add explicit declarations for frequently used container types and expose custom Python wrappers for these types.
Drawbacks: a lot of work, since many combinations would have to be made explicit, and it removes the implicit typing in Python, as one would have to declare a variable type explicitly (e.g. epochsvector = double_vector()).

Solution with modifying Tudat
Add and/or change the functions that cause trouble, by making the arguments passable by value or by moving an argument to the return value.
Drawback: probable loss of performance, since the container contents now have to be copied all the time.

Issue with loading multiple Spice kernels in Windows

In Windows, it has long been an issue that loading multiple SPK kernels (e.g. one for planets, one for moons, one for spacecraft) does not work, and we are forced to provide a single merged kernel ourselves.

It is not yet clear if this problem persists in the new build. If it does, it should be resolved somehow. To check if the problem still exists, I have added a test in a new branch (loading an additional kernel for Mars Express).

e34dbf3

For the purposes of this test, I have added an additional kernel to tudat-resources

Add warning when no accelerations set

A printed warning should be added when no accelerations have been defined for a body for which translational dynamics have to be propagated. The same needs to be checked/implemented for torques and rotational dynamics.

Request: easier access to covariance history in TNW

(sorry for python syntax)

Enable easier access to TNW covariance history by implementing two features:

  • transform_covariance_history_to_frame( inertial_covariance_history, state_history, target_frame_id ) function
  • enable retrieval of state_history from Estimator object

such that the user can do the following to obtain TNW (post-fit) covariances along a given tracking arc:

# exists
Estimator = numerical_simulation.Estimator()
pod_output = Estimator.perform_estimation()
initial_covariance = pod_output.covariance

# new
propagation_timeline = list(Estimator.state_history)
states = np.vstack(Estimator.state_history.values())

# exists
covariance_history = estimation.propagate_covariance(
    initial_covariance,
    Estimator.state_transition_interface,
    propagation_timeline)  # <-- use timeline from Estimator object here for same resolution

# new
covariance_history_in_tnw = transform_covariance_history_to_frame(
    covariance_history,  # <-- automatically have covariance_history and
    states,              # <-- state_history at same timeline here
    target_frame_id)
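
For illustration, a plain-NumPy sketch of what transform_covariance_history_to_frame could do for the TNW case (the function itself does not exist; frame conventions and names are assumptions):

import numpy as np

def inertial_to_tnw_rotation(state):
    # rows of the returned matrix are the T, N, W unit vectors expressed in inertial axes
    r, v = state[0:3], state[3:6]
    t_hat = v / np.linalg.norm(v)                              # along-track
    w_hat = np.cross(r, v) / np.linalg.norm(np.cross(r, v))    # cross-track
    n_hat = np.cross(w_hat, t_hat)                             # completes the right-handed set
    return np.vstack((t_hat, n_hat, w_hat))

def transform_covariance_history_to_tnw(covariance_history, state_history):
    # rotate each 6x6 inertial covariance with the block-diagonal inertial-to-TNW rotation
    tnw_history = {}
    for epoch, covariance in covariance_history.items():
        rot3 = inertial_to_tnw_rotation(state_history[epoch])
        rot6 = np.block([[rot3, np.zeros((3, 3))], [np.zeros((3, 3)), rot3]])
        tnw_history[epoch] = rot6 @ covariance @ rot6.T
    return tnw_history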

Add results size check when processing simulation results

When propagating dynamics and/or variational equations, and the output is 'too small' (e.g. fewer than 4 output epochs), the code will crash, as it cannot properly create the interpolators. The only error that is provided is something along the lines of:

IndexError: vector::_M_range_check: __n (which is 2) >= this->size() (which is 2)

The error output should be made more clear, and this situation should not result in a crash.

Request: propagate variational dynamics at the same time as mass state

It is currently not possible to propagate variational dynamics when the mass of the body is propagated at the same time as its translational state. Errors like these are raised instead:

RuntimeError: Warning, dependency of central gravity on body masses not yet implemented
RuntimeError: Warning, dependency of aerodynamic acceleration on body masses not yet implemented

In particular, a (somewhat tricky) link to thrust acceleration must be made.

This feature request is low-priority.

Feature request: dep. vars. based on multiple bodies

A set of new dependent variables could be added that are based on multiple simulation bodies.

This would allow, for instance, saving over time:

  • Amongst a constellation of satellites, which one is closest to a given satellite, and the distance between them.
  • From a satellite constellation, which satellite is closest to a given coordinate on Earth (a ground station, for instance), and the distance between them.

This dependent variable would then save two elements (see the sketch after this list):

  • The name of the body closest to the specified one.

  • The distance between the two bodies.
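
A post-processing sketch (plain NumPy) of the quantity such a dependent variable would save, here computed from position vectors at a single epoch:

import numpy as np

def closest_body(target_position, constellation_positions):
    # constellation_positions: dict mapping body name -> 3-element position array
    names = list(constellation_positions)
    distances = [np.linalg.norm(constellation_positions[n] - target_position) for n in names]
    i_min = int(np.argmin(distances))
    return names[i_min], distances[i_min]

# example with dummy positions (metres)
target = np.array([7.0e6, 0.0, 0.0])
constellation = {"SatA": np.array([7.1e6, 0.0, 0.0]), "SatB": np.array([0.0, 7.5e6, 0.0])}
name, distance = closest_body(target, constellation)   # -> ("SatA", 1.0e5)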

Maximum Elevation Angle Viability

Hi all,

Since tudat does not have a viability setting for a maximum elevation angle, I have made some changes in a couple of files. These files are attached to this GitHub issue (observationViabilityCalculator.cpp; observationViabilityCalculator.h; createObservationViability.cpp; createObservationViability.h; expose_observation_setup.cpp). What I have done is basically copy and paste the definitions of the minimum elevation angle.

The changes could also be shortened by inserting a boolean in the definition of the isObservationViable function (in observationViabilityCalculator.cpp). I tried this approach, but it did not work out for me.

To make sure that the function was written correctly, unitTestObservationViabilityCalculators.cpp has been updated as well (the test did not fail).

Greetings,
Carlos

Attachments:
ElevationAngleViability.zip

Conda install: UnsatisfiableError

On Ubuntu 20.04 (all packages up to date), while installing with conda install -c tudat-team tudat, I encountered:

Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: \
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed

UnsatisfiableError: The following specifications were found to be incompatible with each other:

Output in format: Requested package -> Available versions

Package libstdcxx-ng conflicts for:
python=3.9 -> libstdcxx-ng[version='>=7.3.0|>=7.5.0']
tudat -> libstdcxx-ng[version='>=7.5.0|>=9.3.0|>=9.4.0']

The following specifications were found to be incompatible with your system:

  • feature:/linux-64::__glibc==2.31=0
  • feature:|@/linux-64::__glibc==2.31=0
  • python=3.9 -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17']

Your installed version is: 2.31
