
travia's People

Contributors

danielskatz, dependabot[bot], olgersiebinga


travia's Issues

Travia visualize.py fails to start on Ubuntu

Following the recommended installation, on Ubuntu with Python 3.8 in a clean environment, I see the same error that @rusu24edward reports in openjournals/joss-reviews#3607 (comment).

Is it possible that this is related to the error reported in this QT forum post on using PyQT5 with OpenCV? Someone in that thread suggests switching to opencv-python-headless.

If I run:

pip uninstall opencv-python
pip install opencv-python-headless
python visualize.py

Then I get at least as far as the initial loading screen (processing data).

I suggest testing this on Windows too, then possibly updating requirements.txt.
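If the headless build also behaves well on Windows, the change to requirements.txt would be a one-line swap (assuming opencv-python is currently listed there):

-opencv-python
+opencv-python-headless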

Error when running visualize.py with Pneuma Data

I tried to run the package with pNEUMA data, but it ran into some errors.
The data file I used was 20181024_d5_1000_1030.csv, where the date is 24/10/2018, the drone number is 5, and the time window is 10:00 to 10:30.
The pNEUMA data is arranged in a single column with a semicolon ( ; ) as the delimiter.
The first 4 fields describe the trajectory: the unique track ID, the type of vehicle, the distance travelled in meters, and the average speed of the vehicle in km/h. The next 6 fields then repeat for every time step. For example, field 5 contains the latitude of the vehicle at the time in field 10, and field 11 contains the latitude of the vehicle at the time in field 16.
An important thing to note is that the data format is not consistent in the CSV file.
In my case, vehicle number 1's data continued into the second row because the first row became full.
But the code for travia is designed to read each entry as
vehicle = [int(as_list[0]), str(as_list[1]), float(as_list[2]), float(as_list[3])] (pneumadataset.py, line 94)
so it tries to read the first entry of row 2, which turns out to be a floating-point number and not an int.

Hence the error I encountered is:

Traceback (most recent call last):
  File "C:\Desktop\Travia\travia-master\visualize.py", line 144, in <module>
    data = PNeumaDataset.load(dataset_id)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Desktop\Travia\travia-master\dataobjects\pneumadataset.py", line 60, in load
    dataset = PNeumaDataset.read_pneuma_csv(dataset_id)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Desktop\Travia\travia-master\dataobjects\pneumadataset.py", line 94, in read_pneuma_csv
    vehicle = [int(as_list[0]), str(as_list[1]), float(as_list[2]), float(as_list[3])]
               ^^^^^^^^^^^^^^^
ValueError: invalid literal for int() with base 10: '3.728195'

The fix would probably include a check for whether the next row starts with a new vehicle; if it does not, its values need to be appended to the previous vehicle's data, since they continue the repeated 6-field groups.
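As a sketch of that check (a simplified stand-in for travia's reader, with my own field names, not the project's actual code): rows whose first field does not parse as an integer track ID are treated as continuations of the previous vehicle's trajectory.

```python
def read_rows(lines):
    """Parse semicolon-delimited pNEUMA-style rows, merging continuation rows."""
    vehicles = []
    for line in lines:
        fields = [f.strip() for f in line.strip().strip(";").split(";") if f.strip()]
        try:
            track_id = int(fields[0])  # a new vehicle row starts with an integer id
        except ValueError:
            # first field is a float -> this row continues the previous vehicle
            vehicles[-1]["trajectory"].extend(float(f) for f in fields)
            continue
        vehicles.append({
            "id": track_id,
            "type": fields[1],
            "traveled_dist": float(fields[2]),
            "avg_speed": float(fields[3]),
            "trajectory": [float(f) for f in fields[4:]],  # repeated 6-field groups
        })
    return vehicles
```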

Attached is a screenshot of the CSV file; you can see that the first vehicle's data continues for a number of rows.
Screenshot 2023-11-14 031056

Windows version: Win 11 Home
Python version: 3.12.0
IDE: PyCharm Community Edition

Provide dataset type and id without editing script

As a user, I would like to avoid the need to edit the script to load different (supported) datasets.

This could be something managed through the GUI, with a pre-loading screen or through a "File > Load dataset" menu.

Alternatively, the choice of dataset could be provided as command-line arguments:

$ python visualize.py --type pneuma --dataset D181030_T0930_1000_DR1
$ python visualize.py -t ngsim -d US101_0805_0820

The command-line version is a fairly small patch.

Add an import at the top of the script:

+import argparse

Add a function to parse arguments:

+def parse_arguments():
+    parser = argparse.ArgumentParser(description="Load and visualize a traffic dataset")
+    parser.add_argument(
+        "-t",
+        "--type",
+        type=str,
+        choices=["highd", "ngsim", "pneuma"],
+        help="The dataset type, one of highd, ngsim, pneuma",
+        required=True,
+    )
+    parser.add_argument(
+        "-d",
+        "--dataset",
+        type=str,
+        help="The dataset id",
+        required=True,
+    )
+    return parser.parse_known_args()

Change the commented-out sections to if/elif/else cases:

 if __name__ == '__main__':
     """
     To visualise data, you need to define the dataset ID. These ID's are predefined in enums and contain information like the file path for the data and images.
     With this ID, it is possible to load the dataset. All projects have their own enum for ID's and class for datasets. Please look at the examples below to see
     how to load a dataset.
     """
-    app = QtWidgets.QApplication(sys.argv)
-
-    "For loading a HighD dataset, uncomment the next two lines: "
-    # dataset_id = HighDDatasetID.DATASET_01
-    # data = HighDDataset.load(dataset_id)
-
-    "For loading a NGSIM dataset, uncomment the next two lines: "
-    dataset_id = NGSimDatasetID.US101_0805_0820
-    data = NGSimDataset.load(dataset_id)
-
-    "For loading a PNeuma dataset, uncomment the next two lines: "
-    # dataset_id = PNeumaDatasetID.D181029_T1000_1030_DR8
-    # data = PNeumaDataset.load(dataset_id)
+    args, qtargs = parse_arguments()
+
+    app = QtWidgets.QApplication(qtargs)
+
+    if args.type == 'highd':
+        # For loading a HighD dataset
+        dataset_id = HighDDatasetID[args.dataset]  # e.g. DATASET_01
+        data = HighDDataset.load(dataset_id)
+    elif args.type == 'ngsim':
+        # For loading a NGSIM dataset
+        dataset_id = NGSimDatasetID[args.dataset] # e.g. US101_0805_0820
+        data = NGSimDataset.load(dataset_id)
+    elif args.type == 'pneuma':
+        # For loading a PNeuma dataset
+        dataset_id = PNeumaDatasetID[args.dataset] # e.g. D181029_T1000_1030_DR8
+        data = PNeumaDataset.load(dataset_id)
+    else:
+        raise ValueError("Dataset type not recognised, expected one of highd, ngsim, pneuma")

     visualize_traffic_data(data, dataset_id, app)
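One note on the patch: the `Enum[name]` lookup used above raises a bare `KeyError` for an unknown dataset id, so wrapping it gives a friendlier message. A minimal sketch with a stand-in enum (the real `NGSimDatasetID` lives in travia):

```python
from enum import Enum

class NGSimDatasetID(Enum):  # stand-in for travia's real dataset-id enum
    US101_0805_0820 = 1

def lookup(name):
    try:
        return NGSimDatasetID[name]  # same name-based lookup as in the patch
    except KeyError:
        valid = ", ".join(m.name for m in NGSimDatasetID)
        raise SystemExit(f"Unknown dataset id '{name}', expected one of: {valid}")
```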

How to get the data (in .csv) after smoothing

Hi!
I used travia to visualize pNEUMA data and it works very well! Now I'm trying to get the dataframe after loading and smoothing, just like in the screenshot below, but I don't know how to access it. Can I ask for some help?
image

Pneuma image data not found

I've downloaded a CSV for dataset D181030_T0930_1000_DR1 from https://open-traffic.epfl.ch/index.php/downloads/

I can see options for the various CSVs on the pneuma site, but no obvious images or aerial photos (unlike NGSIM).

The data loads and processes okay (with a few warnings) but then fails with this error:

Traceback (most recent call last):
  File "visualize.py", line 103, in <module>
    visualize_traffic_data(data, dataset_id, app)
  File "visualize.py", line 33, in visualize_traffic_data
    gui = TrafficVisualizerGui(data)
  File "travia/gui/gui.py", line 53, in __init__
    self.view = WorldView(self, dataset.dataset_id)
  File "travia/gui/worldview.py", line 43, in __init__
    self._load_background(dataset_id)
  File "travia/gui/worldview.py", line 91, in _load_background
    with open(path_to_file + '.tfw', 'r') as map_info_file:
FileNotFoundError: [Errno 2] No such file or directory: 'data/pNeuma/images/pneuma.tfw'
Warnings, in case they are of interest:
WARNING: smoothing for vehicle with ID 28 failed
WARNING: smoothing for vehicle with ID 75 failed
WARNING: smoothing for vehicle with ID 84 failed
WARNING: smoothing for vehicle with ID 86 failed
WARNING: smoothing for vehicle with ID 88 failed
WARNING: smoothing for vehicle with ID 89 failed
WARNING: smoothing for vehicle with ID 90 failed
WARNING: smoothing for vehicle with ID 176 failed
WARNING: smoothing for vehicle with ID 209 failed
WARNING: smoothing for vehicle with ID 338 failed
WARNING: smoothing for vehicle with ID 356 failed
WARNING: smoothing for vehicle with ID 376 failed
WARNING: smoothing for vehicle with ID 380 failed
WARNING: smoothing for vehicle with ID 381 failed
WARNING: smoothing for vehicle with ID 405 failed
WARNING: smoothing for vehicle with ID 475 failed
WARNING: smoothing for vehicle with ID 650 failed
WARNING: smoothing for vehicle with ID 742 failed

Can you point me to an image file to download? Have I missed any instructions in the README or data folder?

Open example datasets

I would find it helpful as a potential travia user to have access to synthetic data for all three datasets. It would then be possible to download them automatically and test the software without jumping through the non-open data hoops.

I'd suggest publishing these in an open research repository such as Zenodo - downloading and setting up example data can then be scripted with something like zenodo_get.

.travia path error when saving pickles

When running python visualize.py for the first time, travia fails to create the ~/.travia folder:

Traceback (most recent call last):
  File "visualize.py", line 71, in <module>
    data = NGSimDataset.load(dataset_id)
  File "travia/dataobjects/ngsimdataset.py", line 50, in load
    dataset.save()
  File "travia/dataobjects/ngsimdataset.py", line 41, in save
    save_encrypted_pickle('data/' + self.dataset_id.data_sub_folder + self.dataset_id.data_file_name + '.pkl', self)
  File "travia/processing/encryptiontools.py", line 41, in save_encrypted_pickle
    key = _get_key()
  File "travia/processing/encryptiontools.py", line 29, in _get_key
    os.mkdir(home_folder + '\\.travia\\')
PermissionError: [Errno 13] Permission denied: '/home/username\\.travia\\'

Suggested fix: use os.path.join to avoid hard-coding OS-dependent path separators.

 def _get_key():
     home_folder = os.path.expanduser("~")
+    travia_folder = os.path.join(home_folder, '.travia')
+    key_file = os.path.join(travia_folder, '.traffic_data_key')

-    if not os.path.isfile(home_folder + '\\.travia\\.traffic_data_key'):
-        os.mkdir(home_folder + '\\.travia\\')
+    if not os.path.isfile(key_file):
+        os.mkdir(travia_folder)
         new_key = Fernet.generate_key()
-        with open(home_folder + '\\.travia\\.traffic_data_key', 'wb') as file:
+        with open(key_file, 'wb') as file:
             file.write(new_key)

Consider travia data specification/format

Travia could consider defining its own data format and specification. For the tracks data, this could be as simple as CSV with a set of required (and optional?) column names. Each example dataset could be transformed into this format, and future datasets could be transformed into the same format to be visualised and annotated using travia.

This would help in part by documenting what the application expects as input. It could simplify the application code if some of the preprocessing is done outside of a more focussed GUI/visualiser. It would also suit constructing small examples to demonstrate features (or reproduce bugs).

A metadata schema or specification for this kind of traffic data might also help this work fit with related approaches to analysing and modelling trajectories: for data formats, it might be worth relating to GPX or other exchange formats. Libraries such as movingpandas or related packages may also be interesting for data exchange and related analysis.

pNeuma image offset

The TIFF provided in the repository is not aligned with the tracks data from pNeuma D181030_T0930_1000_DR1, so the vehicles show up on a grey background outside of the image area.

Should I test with a different pNeuma dataset, or another image?

See screenshot:

offset background
