
opendata's Introduction

GoldenCheetah

About

GoldenCheetah is a desktop application for cyclists, triathletes and coaches

  • Analyse using summary metrics like BikeStress, TRIMP or RPE
  • Extract insight via models like Critical Power and W'bal
  • Track and predict performance using models like Banister and PMC
  • Optimise aerodynamics using Virtual Elevation
  • Train indoors with ANT and BTLE trainers
  • Upload and download with many cloud services including Strava, Withings and Today's Plan
  • Import and export data to and from a wide range of bike computers and file formats
  • Track body measures and equipment use, and set up your own metadata to track

GoldenCheetah provides tools for users to develop their own metrics, models and charts

  • A high-performance and powerful built-in scripting language
  • Local Python runtime, or embedding of a user-installed Python runtime
  • Embedding of a user-installed R runtime

GoldenCheetah supports community sharing via the Cloud

  • Upload and download user developed metrics
  • Upload and download user, Python or R charts
  • Import indoor workouts from the ErgDB
  • Share anonymised data with researchers via the OpenData initiative

GoldenCheetah is free for everyone to use and modify, released under the GPL v2 open source license with pre-built binaries for Mac, Windows and Linux.

Installing

GoldenCheetah install and build instructions are documented for each platform:

INSTALL-WIN32 For building on Microsoft Windows

INSTALL-LINUX For building on Linux

INSTALL-MAC For building on Apple MacOS

Build status badges: macOS and Linux, Windows, and Coverity scan.

Official release builds, snapshots and development builds are all available from http://www.goldencheetah.org

NOTIO Fork

If you are looking for the NOTIO fork of GoldenCheetah it can be found here: https://github.com/notio-technologies/GCNotio

opendata's People

Contributors

aartgoossens, liversedge, sladkovm


opendata's Issues

OSF rate limiting

I'm regularly bumping into 429 RuntimeErrors when downloading data. Apparently access to OSF is rate limited:

Authenticated requests have a rate limit of 10,000/day.
Unauthenticated requests have a rate limit of 100/hour.

How do I make use of authenticated requests? It appears that having an account at osf.io is not enough. Do I need to fork the project, or request access to it, to be able to use the higher authenticated rate limit?
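
For what it's worth, the OSF API accepts a personal access token (created in your osf.io account settings) sent as a bearer token, which should count requests against the authenticated limit. Whether the opendata library can pass such a token through is a separate question; the snippet below is only a minimal sketch of an authenticated request made directly with the requests library.

import requests

OSF_TOKEN = "your-personal-access-token"  # placeholder, not a real token

resp = requests.get(
    "https://api.osf.io/v2/nodes/",  # any OSF API v2 endpoint is authenticated the same way
    headers={"Authorization": f"Bearer {OSF_TOKEN}"},
)
resp.raise_for_status()
print(resp.status_code)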

Resting Heart rate

Hi,

Do you know where I can find a resting heart rate for each athlete?

Thanks.

Power and HR metrics in JSON file

Hi all,

I am a data engineer at Giant Group, and now I am working with sports science team to do some research about cycling.

Before I ask my question, I'd like to thank you for your work and your contribution to this open data project. I also appreciate all of the participants who joined this project and their willingness to share their data. The data is very valuable to me. Thank you. 🙂

Each athlete has a JSON file containing metrics such as the following:

 'average_hr': ['112.11171', '7045.00000'],
 'average_ct': ['37.62149', '7045.00000'],
 'heartbeats': '13163.78335',
 'average_cad': ['46.85570', '4366.00000'],
 'average_temp': ['18.43206', '7043.00000'],
 'max_heartrate': '157.00000',
 'min_heartrate': '82.00000',
 'max_ct': '37.88746',
 'max_speed': '50.78880',
 'max_cadence': '107.00000',
 'max_temp': '21.00000',
 'min_temp': '17.00000',
 'ninety_five_percent_hr': '142.00000',
 'vam': '83.55111',
 'gradient': '0.50495',
 'total_kcalories': '890.76413',
 'activity_crc': '1361582490897.00000',
 'cp_setting': '211.00000',
 'cpsolver_best_r': '-255.00000',
 'time_in_zone_H1': '1347.00000',
 'time_in_zone_H2': '116.00000',
 'percent_in_zone_H1': ['17.74236', '7592.00000'],
 'percent_in_zone_H2': ['1.52792', '7592.00000'],
 'time_in_zone_P1': '254.00000',
 'time_in_zone_P2': '224.00000',
 'time_in_zone_P3': '266.00000',
 'time_in_zone_P4': '375.00000',
 'time_in_zone_P5': '4859.00000',
 'percent_in_zone_P1': ['3.34563', '7592.00000'],
 'percent_in_zone_P2': ['2.95047', '7592.00000'],
 'percent_in_zone_P3': ['3.50369', '7592.00000'],
 'percent_in_zone_P4': ['4.93941', '7592.00000'],
 'percent_in_zone_P5': ['64.00158', '7592.00000'],
 'best_50m': '0.06667',
 'best_100m': '0.13333',
 'best_200m': '0.25000',
 'best_400m': '0.51667',
 'best_500m': '0.66667',
 'best_800m': '1.23333',
 'best_1000m': '1.68333',
 'best_1500m': '2.75000',
 'best_2000m': '4.18333',
 'best_3000m': '7.13333',
 'best_4000m': '9.30000',
 'best_5000m': '11.45000',
 'best_10km': '27.70000',
 'best_15km': '42.18333',
 'best_20km': '57.71667',
 'best_half_marathon': '61.01667',
 'best_30km': '104.20000',
 '1m_critical_power_hr': '105.66102',
 '5m_critical_power_hr': '117.78595',
 '10m_critical_power_hr': '123.15902',
 '20m_critical_power_hr': '121.68988',
 '30m_critical_power_hr': '121.15335',
 '60m_critical_power_hr': '120.85567',
 '1m_peak_hr': '152.52500',
 '2m_peak_hr': '146.14167',
 '3m_peak_hr': '144.13704',
 '5m_peak_hr': '137.04133',
 '8m_peak_hr': '127.97396',
 '10m_peak_hr': '127.73917',
 '20m_peak_hr': '122.53867',
 '30m_peak_hr': '121.54956',
 '60m_peak_hr': '116.19792',
 '90m_peak_hr': '110.95799',

This is from one of the athletes. It's really exciting to see these metrics, but I am wondering if there is any documentation that describes how some of these metrics (critical_power_hr and gradient, for instance) are computed, and what some of the abbreviations mean.

I also noticed that some of the CSV files are incomplete; for example, the power or heart rate values may be missing. I assume this is why the METRICS entries in the JSON differ somewhat from activity to activity. Is that correct?

Thanks again.
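
For reference, this is how I am currently coercing the raw values; my guess that the two-element lists pair a metric value with the number of samples it was computed over is just that, a guess, not something documented by the project.

def parse_metric(value):
    """Coerce a raw metric: scalars -> float, two-element lists -> (value, samples)."""
    if isinstance(value, list):
        return float(value[0]), float(value[1])
    return float(value)

metrics = {
    "average_hr": ["112.11171", "7045.00000"],
    "max_heartrate": "157.00000",
}
parsed = {name: parse_metric(raw) for name, raw in metrics.items()}
print(parsed)  # {'average_hr': (112.11171, 7045.0), 'max_heartrate': 157.0}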

Interpretation of data

I'm having trouble interpreting some data. For example, the 'data' field: "TDSPHC-A-L-E---". I assume there's a description of what each of those letters means; I could guess quite a few, but not all. I've parsed the data, read the README, the OpenData project wiki, and the Jupyter notebook. Those got me pretty far, but for my application I'll have lots of questions about the data that aren't answered there. I haven't gone through all the GoldenCheetah documentation itself... is it in GC, or just not written yet? One way to start reverse-engineering the flags is sketched below.
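
A minimal sketch, under two assumptions of mine: that each letter flags a data series present in the activity, and the hypothetical file name used here.

import pandas as pd

flags = "TDSPHC-A-L-E---"
activity = pd.read_csv("2018_01_01_10_00_00.csv")  # hypothetical activity file name

print("letters set:", [c for c in flags if c != "-"])
print("columns present:", list(activity.columns))
# Comparing the two across several activities should reveal which letter
# corresponds to which data series (e.g. power, heart rate, cadence).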

pipenv install development version

pipenv install -e git+https://github.com/GoldenCheetah/OpenData.git#egg=opendata does not work

Will add the error code seen on Mac OS later...
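
One thing that might be worth trying, purely as a guess: if setup.py does not live at the repository root, pip and pipenv need a subdirectory fragment in the VCS URL. The directory name below is an assumption about this repository's layout, not a verified path.

pipenv install -e "git+https://github.com/GoldenCheetah/OpenData.git#egg=opendata&subdirectory=python"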

changing opendatastorage location

Local storage works fine when using the default location. However, when I change the location, the directory is not created and files are no longer saved locally (in either the desired or the default location), nor are any error messages reported. To change the location I saved a file opendata.ini with the content:

[Storage]
local_storage_path = P:\mpJupyter

I open python from the Anaconda prompt with:
P:\mpJupyter>jupyter notebook

If I change the path specified in the .ini file to a location that doesn't exist, I get an error, as might be expected.

Thanks for your help and work on this library.

mike
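
As a temporary workaround I can create the directory myself before using the library; this assumes (untested) that the library simply expects the directory to already exist rather than creating it on first use.

from pathlib import Path

local_storage_path = Path(r"P:\mpJupyter")
local_storage_path.mkdir(parents=True, exist_ok=True)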

Python library for working with OpenData

Continuing this discussion here.

I am working on some Python code to make working with OpenData easier. It's far from finished (it only sort of works for my use case right now), but I would like to share it, and putting it in this repository makes sense. Before I spend more time polishing it, I'd like some input on what the library should look like.

Features I would like to have in the library:

  1. View metadata of all athletes: currently the metadata lives in the blob for each athlete so you need to download all the data to view it. I propose to create a metadata file in the root of this repo that is updated every once in a while to reflect new/changed files in the OSF directory.
  2. Tool to selectively download data: Only download a specific athlete, or only athletes with specific data types, date ranges, amounts of data, etc. based on the metadata.
  3. Should return the activities in a general purpose data format. I propose to use a pandas.DataFrame for this.
  4. Tool to make running computations on large numbers of activities easier: I'm not sure how to do this yet, but with the amount of data already in OpenData it's impossible to hold it all in memory, so some clever batch processing is needed. I think some tooling for that has its place in this library; a rough sketch of the idea follows below.

Any input is welcome!
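
For point 4, a rough, hypothetical sketch of what a batch-processing helper could look like: stream athletes one at a time so the full data set never has to fit in memory. The names OpenData, get_remote_athletes, activities and to_dataframe are placeholders for discussion, not an existing API.

def iter_activity_frames(opendata_client, max_athletes=None):
    """Yield (athlete_id, pandas.DataFrame) pairs, one activity at a time."""
    for i, athlete in enumerate(opendata_client.get_remote_athletes()):
        if max_athletes is not None and i >= max_athletes:
            break
        for activity in athlete.activities():
            yield athlete.id, activity.to_dataframe()

# Example: compute mean power per athlete without loading everything at once.
# totals = {}
# for athlete_id, df in iter_activity_frames(od, max_athletes=10):
#     if "power" in df.columns:
#         totals.setdefault(athlete_id, []).append(df["power"].mean())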

Errors in data

I came across three types of invalid data:

  1. Filenames that do not match the yyyy_mm_dd_HH_MM_SS.csv format.
  2. Metadata files that are not valid json.
  3. Activity files for which there is no metadata.

I attached a gzip archive (sorry, GitHub did not accept plain CSV files) to this issue with every occurrence of each of these errors.

Although I can (and probably will) add proper error handling to the Python library so it does not stumble over these errors, I think it is worth taking the time to fix them in the data, so that people who try to work with it do not have to handle these errors themselves.

invalid_data.tar.gz
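
For reference, a minimal sketch of the kind of validation I have in mind, assuming activity files are named yyyy_mm_dd_HH_MM_SS.csv and that metadata JSON files sit alongside them in each athlete directory (the layout is assumed, not documented).

import json
import re
from pathlib import Path

FILENAME_RE = re.compile(r"^\d{4}_\d{2}_\d{2}_\d{2}_\d{2}_\d{2}\.csv$")

def find_invalid_files(athlete_dir):
    """Return (badly named CSV files, unparseable JSON files) in one directory."""
    bad_names, bad_json = [], []
    for path in Path(athlete_dir).iterdir():
        if path.suffix == ".csv" and not FILENAME_RE.match(path.name):
            bad_names.append(path.name)
        elif path.suffix == ".json":
            try:
                json.loads(path.read_text())
            except (ValueError, UnicodeDecodeError):
                bad_json.append(path.name)
    return bad_names, bad_json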
