
api-examples's Introduction

CAN bus API examples (Python/MATLAB) | CANedge [LEGACY]

Update: Legacy notice + new Python/MATLAB integration methods

If you need to work with CANedge data in Python/MATLAB, we now recommend using the methods described in the documentation below:

The Python methods/modules shown in this repository under examples/data-processing/ can still be used, but will not be updated going forward (this refers to the submodules of our Python API: mdf-iter, canedge-browser and can-decoder). Instead, we refer to our new integration with python-can and our examples of how to work with DBC decoded Parquet data lakes in Python. The script examples found in examples/other/ remain relevant going forward.
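As a minimal illustration of the new direction, a DBC decoded Parquet data lake can be loaded directly with pandas; the file path and partitioning below are purely hypothetical:

import pandas as pd

# Hypothetical path into a DBC decoded Parquet data lake (device/message/date layout is an assumption)
df = pd.read_parquet("datalake/2F6913DB/CAN2_GnssSpeed/2023/01/26.parquet")
print(df.head())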


Overview

This project includes Python and MATLAB examples of how to process MF4 log files with CAN/LIN data from your CANedge data loggers. Most examples focus on the use of our Python API modules (canedge_browser, mdf_iter, can_decoder) with the CANedge log file formats (MF4, MFC, MFE, MFM). However, you will also find other script examples, including for the asammdf Python API, MATLAB, S3 and more.


Features

For most use cases, we recommend starting with the below examples:
- data-processing: List log files, load them and DBC decode the data (local, S3)

For some use cases the below examples may be useful:
- other/asammdf-basics: Load and concatenate MF4 logs, DBC decode them - and save as new MF4 files
- other/matlab-basics: Examples of how to load and use MF4/MAT CAN bus data 
- other/s3-basics: Examples of how to download, upload or list specific objects on your server (see the listing sketch after this list)
- other/s3-events: Using AWS Lambda or MinIO notifications (for event based data processing)
- other/misc: Example of automating the use of the MDF4 converters and misc tools
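To illustrate the type of task covered by other/s3-basics, below is a minimal sketch of listing objects on an S3 server using boto3. The repository's own scripts may use a different S3 client; the endpoint, credentials, bucket and prefix are placeholders:

import boto3

# Placeholder endpoint/credentials - replace with your own S3 server details
s3 = boto3.client(
    "s3",
    endpoint_url="http://s3.example.com:9000",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# List MF4 log files uploaded by one device (bucket/prefix are examples)
response = s3.list_objects_v2(Bucket="canedge-bucket", Prefix="2F6913DB/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])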


Installation

  • Install Python 3.9.13 for Windows (32 bit/64 bit) or Linux (enable 'Add to PATH')
  • Download this project as a zip via the green button and unzip it
  • Open the folder with the requirements.txt file and enter the below commands in your command prompt:

Windows
python -m venv env & env\Scripts\activate & pip install -r requirements.txt
python script_to_run.py

Linux
python -m venv env && source env/bin/activate && pip install -r requirements.txt
python script_to_run.py

If you later need to re-activate the virtual environment, use env\Scripts\activate (Windows) or source env/bin/activate (Linux).


Sample data (MDF4 & DBC)

The various folders include sample log files and DBC files. Once you've tested a script with the sample data, you can replace it with your own.


Usage info

  • Some example folders contain their own README.md files for extra information
  • These example scripts are designed to be minimal and to help you get started - not for production
  • Some S3 scripts use hardcoded credentials to ease testing - for production see e.g. this guide

Which API modules to use?

There are many ways to work with the data from your CANedge devices. Most automation use cases involve fetching data from a specific device and time period, and DBC decoding it into a dataframe for further processing. Here, we recommend looking at the examples in the data-processing/ folder. These examples use our custom modules designed for the CANedge: mdf_iter (for loading MDF4 data), canedge_browser (for fetching specific data locally or from S3) and can_decoder (for DBC decoding the data). In combination, these modules support most use cases.
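A minimal sketch of this workflow is shown below; the file and DBC paths are examples only, and the canedge_browser call is noted as a comment since the exact filesystem setup is handled in data-processing/utils.py:

import mdf_iter
import can_decoder

# Load a DBC and set up a dataframe decoder (sample DBC from this repository)
db = can_decoder.load_dbc("dbc_files/CSS-Electronics-SAE-J1939-DEMO.dbc")
df_decoder = can_decoder.DataFrameDecoder(db)

# Load the raw CAN data from a local MF4 log file (example path)
with open("LOG/958D2219/00000001/00000001.MF4", "rb") as handle:
    mdf_file = mdf_iter.MdfFile(handle)
    df_raw = mdf_file.get_data_frame()

# DBC decode the raw data into physical values
df_phys = df_decoder.decode_frame(df_raw)
print(df_phys.head())

# To fetch log files for a device/period (locally or from S3), see how
# canedge_browser.get_log_files(...) is used in data-processing/utils.py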

If you have needs that are not covered by these modules, you can check out the other examples using the asammdf API, the AWS/MinIO S3 API and our MDF4 converters.

If in doubt, contact us for sparring.


About the CANedge

For details on installation and how to get started, see the documentation:


Contribution & support

Feature suggestions, pull requests or questions are welcome!

You can contact us at CSS Electronics below:

api-examples's People

Contributors

matinf


api-examples's Issues

Request: Add example for retrieving VIN (UDS)?

The tutorial for UDS uses an example of retrieving VIN: https://www.csselectronics.com/pages/uds-protocol-tutorial-unified-diagnostic-services

but there's no associated DBC file or example for retrieving it. Since retrieving the VIN is a more universal use case than the electric vehicle examples, it would be nice if a DBC file + associated example was provided for it: https://github.com/CSS-Electronics/api-examples/blob/master/examples/data-processing/process_tp_data.py#L33
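For reference, the VIN is typically read via UDS ReadDataByIdentifier (service 0x22) with data identifier 0xF190. A rough sketch of the request payload and the decoding of a reassembled response (the VIN string is purely hypothetical):

# UDS ReadDataByIdentifier (0x22) request for DID 0xF190 (VIN), as an ISO-TP single frame
request = bytes([0x03, 0x22, 0xF1, 0x90, 0x00, 0x00, 0x00, 0x00])

# A reassembled positive response: 0x62, DID 0xF190, followed by 17 ASCII characters
response = bytes([0x62, 0xF1, 0x90]) + b"1HGCM82633A004352"  # hypothetical VIN
vin = response[3:].decode("ascii")
print(vin)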

Support compressed/encrypted log files (.MFC, .MFE, .MFM)

Outline the feature request
Support native handling of compressed/encrypted log files from the CANedge within the canedge_browser module and the mdf_iter module.

What is the use case?
This would enable native support for all types of CANedge log files, removing the need for using the MDF4 converters as part of Python automation scripts.
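As a rough sketch of the current workaround, the MDF4 converters can be scripted from Python via subprocess. The converter filename and accepted arguments below are assumptions, so check the converter documentation for the exact CLI options:

import subprocess
from pathlib import Path

# Assumed converter executable name and argument handling - verify against your converter's docs
converter = Path("mdf2finalized.exe")

for log_file in Path("LOG").rglob("*.MFC"):
    # Passing the file path as an argument mirrors the converter's drag & drop usage (assumption)
    subprocess.run([str(converter), str(log_file)], check=True)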

Please comment if you'd like to see this feature added as well, or if you have any thoughts on it.

requirements installation problem

I tried to install the requirements with "pip install -r requirements.txt". First I had a problem with the numpy version, so I changed it to the latest version. Then I had a problem with wheels: "ERROR: Could not build wheels for multidict, pandas, yarl, which is required to install pyproject.toml-based projects". As suggested online, I upgraded pip.
Here is the error log file:
errors.txt
How can I fix these errors?
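A common first step for this type of wheel build failure (not a confirmed fix for this specific log) is to upgrade the build tooling inside the virtual environment before retrying the install:

python -m pip install --upgrade pip setuptools wheel
pip install -r requirements.txt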

Setting index of CAN channel in python API

Hi,

I am trying to load and decode an MF4 file using the Python API. In the MATLAB API, the index of the CAN channel is passed to the read method:

can_idx = 8;
rawTimeTable = read(m,can_idx,m.ChannelNames{can_idx});

I could not find out how to do the same thing when using the Python API. Currently, my code looks like this:

db = can_decoder.load_dbc(pathToDBC)
df_decoder = can_decoder.DataFrameDecoder(db)

with open(pathToMF4, "rb") as handle:
    mdf_file = mdf_iter.MdfFile(handle)
    df_raw = mdf_file.get_data_frame()

df = df_decoder.decode_frame(df_raw)

This does decode the data, but the dataframe is not ordered in groups as it is when using MATLAB. Is it possible to do this with the Python API?
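One possible approach (not an official API feature) is to split the decoded dataframe by signal using pandas. This assumes the decoded output contains "Signal" and "Physical Value" columns, which may differ between module versions:

# Group the decoded dataframe per signal name (column names are assumptions)
groups = {signal: group["Physical Value"] for signal, group in df.groupby("Signal")}

# Access one signal's time series (signal name is hypothetical)
speed = groups.get("WheelBasedVehicleSpeed")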

CAN ID Source Address should be matched automatically

Hello, CSS Team

I've noticed that in the MultiFrame Decoder class, the 8 bit source address is hardcoded to 254 (0xFE).
See L493 @ utils.py :

can_id = (6 << 26) | (pgn << 8) | 254

This can cause issues, as the address of the original sender of the TP broadcast is lost, which can make use with more than one source difficult.

Is there any way you could change it so the Source Address gets carried over from the BAM message with PGN EC00 to the final "joint" message?
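A rough sketch of the requested behaviour, extracting the source address from the TP.CM/BAM frame's 29-bit CAN ID instead of hardcoding 254 (the example ID and PGN are hypothetical):

bam_can_id = 0x18ECFF17             # hypothetical TP.CM (PGN EC00) frame from source address 0x17
source_address = bam_can_id & 0xFF  # lowest 8 bits of the 29-bit ID hold the sender's address

pgn = 0xFEE5                        # hypothetical PGN of the reassembled message
can_id = (6 << 26) | (pgn << 8) | source_address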

Best regards,

mdf-iter on arm architecture

Hello CSS team,

First of all, let me state that I am an amateur in Python and coding.

But recently I have tried to set up a server and dashboard on a local network (locally hosted) on our Raspberry Pi 4B at our workplace.

I have managed to set up a local MinIO server (the CANedge2 successfully sent data to the server), the InfluxDB writer and a Grafana dashboard, but recently hit an obstacle when I tried to install mdf-iter 2.0.5.

From what I see in the download files, the package is compiled only for the x86 architecture.

Is there any way to install this package on the arm architecture, or to compile it for it?

Thank you in advance

best regards

Tomas

DBC files: Enable utils.py to parse DBC files in a Bus Channel specific manner

Currently, DBC files are applied across the entire log file. An improvement could be to enable parsing a DBC object like the below:

dbc_paths = {"CAN": [("dbc_files/CSS-Electronics-SAE-J1939-DEMO.dbc", 0)]}

Here, 0 would mean the DBC is applied across all channels, 1 would mean only Bus Channel 1, and 2 only Bus Channel 2.

This requires a modification to the logic in load_dbc_files and extract_phys.
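A minimal sketch of how this could look from the user side, assuming the raw dataframe exposes a BusChannel column (the file path is an example):

import mdf_iter
import can_decoder

with open("LOG/958D2219/00000001/00000001.MF4", "rb") as handle:
    df_raw = mdf_iter.MdfFile(handle).get_data_frame()

# Decode Bus Channel 1 only with a channel-specific DBC ("BusChannel" column is an assumption)
db_ch1 = can_decoder.load_dbc("dbc_files/CSS-Electronics-SAE-J1939-DEMO.dbc")
df_phys_ch1 = can_decoder.DataFrameDecoder(db_ch1).decode_frame(df_raw[df_raw["BusChannel"] == 1])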

Problem with mdf-iter for Linux

Hey there,
I'm trying to install the CAN bus Python API on my Raspberry Pi. The problem is that every time I try to install the packages from the requirements.txt list, I get an error for the mdf-iter package. It says:

ERROR: Could not find a version that satisfies the requirement mdf_iter>=2.0.10
ERROR: No matching distribution found for mdf_iter>=2.0.10

I also tried installing mdf-iter separately, as well as older versions, but it doesn't help. I would be very happy if anyone can help.
Thank you in advance!

With kind regards,

Faraz

KeyError: 'HDComment.Device Information.serial number'

Hi Martin,

I followed the instructions you mentioned last time, but now I get this error, and I don't understand why it happens. I'm using the downloaded project with the sample files:

Found a total of 2 log files
Traceback (most recent call last):
  File "C:\Users\hugoa\OneDrive\Escritorio\api-examples-1.1.0\examples\data-processing\process_data.py", line 31, in <module>
    df_raw, device_id = proc.get_raw_data(log_file, passwords=pw)
  File "C:\Users\hugoa\OneDrive\Escritorio\api-examples-1.1.0\examples\data-processing\utils.py", line 221, in get_raw_data
    device_id = self.get_device_id(mdf_file)
  File "C:\Users\hugoa\OneDrive\Escritorio\api-examples-1.1.0\examples\data-processing\utils.py", line 234, in get_device_id
    return mdf_file.get_metadata()["HDComment.Device Information.serial number"]["value_raw"]
KeyError: 'HDComment.Device Information.serial number'

Process finished with exit code 1

Random (?) call ".to_csv" in method restructure_data()

df_phys_join.to_csv("output_joined.csv")

It seems kinda out of place that the method "restructure_data()" outputs a .csv file.

In the example "process_data.py", there is a specific call to .to_csv() after restructure_data(), resulting in a write and then an overwrite of the .csv file:

"process_data.py":
df_phys_join = restructure_data(df_phys=df_phys_all, res="1S") ### Writes .csv file
df_phys_join.to_csv("output_joined.csv") ### Overwrites the above csv file

Same issue in repo "dashboard-writer"

Is Release 1.0.9 compatible with CANedge2 1.7.1?

1.7.1 recorded mdf:

(env) user@vm$ python process_tp_data.py 
Traceback (most recent call last):
  File "/home/user/Downloads/api-examples-1.0.9/examples/data-processing/process_tp_data.py", line 38, in <module>
    process_tp_example(devices, dbc_paths, "uds")
  File "/home/user/Downloads/api-examples-1.0.9/examples/data-processing/process_tp_data.py", line 17, in process_tp_example
    df_raw, device_id = proc.get_raw_data(log_file)
  File "/home/user/Downloads/api-examples-1.0.9/examples/data-processing/utils.py", line 230, in get_raw_data
    df_raw = mdf_file.get_data_frame()
RuntimeError: An unexpected error occurred while obtaining a CAN iterator: Not finalized?

1.6.1 recorded mdf:

(env) user@vm$ python process_tp_data.py 
Finished saving CSV output for devices: ['//LOG/2F6913DB']
(env) user@vm$ 

Value error merge float64 and datetime64[ns, UTC]

When some files are in the conversion process, I get the following error. I have read the documentation of the library, but I can't find the cause:

Device: 4F0BBBD2 | Log file: /LOG/958D2219/00002501/00000001-63D2969F.MF4 [Extracted 127276 decoded frames]
Period: 2023-01-26 14:57:17.962700+00:00 - 2023-01-26 15:04:02.828300+00:00

Traceback (most recent call last):
  File "C:\Users\hugoa\OneDrive\Escritorio\Telemetria\Decoder MF4\Decoder-MF4\data-processing\process_data.py", line 52, in <module>
    df_phys_join = restructure_data(df_phys=df_phys_all, res="1S", full_col_names=True)
  File "C:\Users\hugoa\OneDrive\Escritorio\Telemetria\Decoder MF4\Decoder-MF4\data-processing\utils.py", line 100, in restructure_data
    df_phys_join = pd.merge_ordered(
  File "C:\Users\hugoa\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\reshape\merge.py", line 321, in merge_ordered
    result = _merger(left, right)
  File "C:\Users\hugoa\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\reshape\merge.py", line 290, in _merger
    op = _OrderedMerge(
  File "C:\Users\hugoa\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\reshape\merge.py", line 1623, in __init__
    _MergeOperation.__init__(
  File "C:\Users\hugoa\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\reshape\merge.py", line 703, in __init__
    self._maybe_coerce_merge_keys()
  File "C:\Users\hugoa\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\reshape\merge.py", line 1262, in _maybe_coerce_merge_keys
    raise ValueError(msg)
ValueError: You are trying to merge on float64 and datetime64[ns, UTC] columns. If you wish to proceed you should use pd.concat
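A possible diagnostic (not a confirmed fix) is to check whether one of the dataframes being merged has a float64 timestamp index instead of a UTC datetime index, and coerce it before restructuring; the time unit below is an assumption:

import pandas as pd

print(df_phys_all.index.dtype)  # expect datetime64[ns, UTC]; float64 indicates the mismatch

# Coerce a float epoch index to UTC datetimes before calling restructure_data() (unit is an assumption)
df_phys_all.index = pd.to_datetime(df_phys_all.index, unit="s", utc=True)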
