
MoonBoardRNN

This repository features three key components:

(1) BetaMove, a preprocessing pipeline that converts a MoonBoard problem into a move sequence resembling the sequence a climbing expert would predict.

(2) DeepRouteSet, which was trained on the sequence data generated by BetaMove. Much like a music-generation model, DeepRouteSet learns the patterns between adjacent moves in existing MoonBoard problems and can generate new problems.

(3) GradeNet, which was also trained on the sequence data generated by BetaMove, and predicts the grade of a MoonBoard problem.

This is the repository for our CS230 (Spring 2020) course project, jointly developed by Yi-Shiou Duh and Ray Chang. All our experiments are provided as Jupyter notebooks.

Overview of our repository

raw_data

This folder contains the hold difficulty scores rated by climbing experts (\raw_data\HoldFeature2016LeftHand.csv, \raw_data\HoldFeature2016RightHand.csv, \raw_data\HoldFeature2016.xlsx) and the raw data we scraped from the MoonBoard website (\raw_data\moonGen_scrape_2016_final).
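
The hold feature tables are plain CSV files, so they can be inspected directly; a minimal example using pandas (forward-slash paths assumed here, adjust separators to your platform):

    import pandas as pd

    # Inspect the expert-rated hold difficulty tables shipped in raw_data.
    left = pd.read_csv('raw_data/HoldFeature2016LeftHand.csv')
    right = pd.read_csv('raw_data/HoldFeature2016RightHand.csv')
    print(left.head())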

The raw data from the MoonBoard website was scraped using the code in https://github.com/gestalt-howard/moonGen. We thank Howard (Cheng-Hao) Tai for permission to use that code.

preprocessing

The preprocessing folder contains the following Jupyter notebooks:

  • Step1_data_preprocessing_v2.ipynb: separates the problems based on (1) whether the problem is benchmarked and (2) whether a user grading exists.

  • Step2_BetaMove.ipynb: first computes the success score of each move from the relative distance between holds and the difficulty scale of each hold, then finds the best route with a beam search algorithm (see the sketch below).

  • Step3_partition_train_test_set_v2.ipynb: divides the dataset into training/dev/test sets.
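
The exact scoring code lives in Step2_BetaMove.ipynb; purely as an illustration of the beam-search idea described above, a generic sketch (score_move and the hold representation are hypothetical stand-ins, not the authors' implementation) could look like:

    import heapq

    def beam_search(holds, score_move, beam_width=8):
        """Extend partial move sequences one hold at a time, keeping only
        the beam_width highest-scoring candidates at each step."""
        beams = [(0.0, [])]  # (cumulative success score, move sequence)
        for hold in holds:  # holds assumed ordered bottom to top
            candidates = []
            for score, seq in beams:
                for hand in ('L', 'R'):  # try matching the hold with either hand
                    move = (hold, hand)
                    candidates.append((score + score_move(seq, move), seq + [move]))
            # prune to the best beam_width partial sequences
            beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
        return max(beams, key=lambda b: b[0])[1]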

The final files used to train and evaluate GradeNet are training_seq_n_12_rmrp0, dev_seq_n_12_rmrp0, and test_seq_n_12_rmrp0.

The final files used to train DeepRouteSet are nonbenchmarkNoGrade_handString_seq_X, benchmark_handString_seq_X, benchmarkNoGrade_handString_seq_X, and nonbenchmark_handString_seq_X.

model

This folder contains three files needed to run the experiments and reproduce our results.

GradeNet

To run GradeNet, open the Jupyter notebook \model\GradeNet.ipynb and follow the instructions. You can either re-run the experiments or load the pretrained weights \model\GradeNet.h5.
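
Loading the pretrained model in Keras should reduce to something like the following (a sketch; if GradeNet.h5 stores weights only, rebuild the architecture from the notebook and call model.load_weights instead):

    from tensorflow import keras

    # Load the pretrained GradeNet shipped with the repository.
    model = keras.models.load_model('model/GradeNet.h5')
    model.summary()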

DeepRouteSet

To run DeepRouteSet, open the Jupyter notebook \model\DeepRouteSet_v4.ipynb and follow the instructions. You can either re-run the experiments or load the pretrained weights \model\DeepRouteSetMedium_v1.h5. The code in this file is largely modified from the Coursera programming exercise "Improvise a Jazz Solo with an LSTM Network", which is itself adapted from https://github.com/jisungk/deepjazz.

Predict the grade of generated problems

To evaluate the generated problems, open the Jupyter notebook \model\Evaluate_Generated_Output_v3.ipynb and follow the instructions. Please remember to check that raw_path is correct.
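
A quick sanity check before running the evaluation cells (the variable name comes from the notebook; the path shown is only an example):

    import os

    raw_path = 'out/DeepRouteSet_v1'  # example; point this at your generated output
    assert os.path.isdir(raw_path), 'raw_path does not exist: ' + raw_path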

out

This folder contains our generated problems. The folder \out\DeepRouteSet_v1 contains style predictions for our generated data. Those style predictions are very preliminary; please ignore them.

website

This is a static website showing the 65 MoonBoard problems generated by DeepRouteSet. The website is available at https://jrchang612.github.io/MoonBoardRNN/website/.

The layout of this website is modified from https://github.com/andrew-houghton/moon-board-climbing, with the authors' permission.

Potential future items

  • StyleNet


Issues

Step 1 Preprocessing notebook doesn't work

First of all, amazing project; as a climber myself I enjoyed reading your paper. I haven't actually tried a MoonBoard yet, but I would assume that some holds are much better than others. One could deduce which holds are good: if a hold appears primarily in very difficult problems, that implies it is not a good hold. Is information like this incorporated into the neural network architecture?

I also found some issues running the posted code. The notebook linked below doesn't work as-is because some parameters are undefined. Most of these bugs are easy to fix:
https://github.com/jrchang612/MoonBoardRNN/blob/master/preprocessing/Step1_data_preprocessing_v2.ipynb

The Step 3 notebook also doesn't work with newer TensorFlow versions, since a custom loss function spanning two layers can no longer be used the way you did. Could you add the package versions you used in this project?
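
For reference, recent versions of Keras attach a loss to each named output rather than accepting one function that reads two layers at once; a generic sketch of that pattern (all layer names and sizes here are hypothetical placeholders, not the real architecture):

    from tensorflow import keras

    # Hypothetical two-output model; shapes and sizes are placeholders.
    inp = keras.Input(shape=(12, 22))
    x = keras.layers.LSTM(64)(inp)
    grade = keras.layers.Dense(13, activation='softmax', name='grade')(x)
    aux = keras.layers.Dense(1, name='aux')(x)
    model = keras.Model(inp, [grade, aux])

    # One loss per named output, instead of a single custom loss spanning both.
    model.compile(optimizer='adam',
                  loss={'grade': 'categorical_crossentropy', 'aux': 'mse'},
                  loss_weights={'grade': 1.0, 'aux': 0.2})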

TouchEndHold can never be 3

There is code guarded by if TouchEndHold == 3. However, the variable is only incremented when its value is zero, so the maximum possible value is 1.
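
A minimal reconstruction of the reported pattern (the loop and is_end_hold are hypothetical stand-ins for the notebook's logic):

    def is_end_hold(move):
        # Hypothetical stand-in for the notebook's end-hold test.
        return move == 'end'

    TouchEndHold = 0
    for move in ['start', 'mid', 'end', 'end']:  # hypothetical move list
        if TouchEndHold == 0 and is_end_hold(move):
            TouchEndHold += 1  # only incremented from zero, so it never exceeds 1

    if TouchEndHold == 3:  # dead branch: TouchEndHold caps at 1
        print('unreachable')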

Missing successScoreSequence

The processed_data_seq file contains x/y location data, -1/1 flags for left- and right-hand moves, and finally a list called successScoreSequence. The code that generates this array is missing. Is it some combination of the hold difficulty and the Gaussian function?
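
If it is indeed the distance Gaussian combined with the hold difficulty, the computation might look roughly like this (purely a guess at the missing code; the function and its constants are hypothetical):

    import math

    def success_score(distance, hold_difficulty, sigma=1.0):
        """Hypothetical: a Gaussian penalty on the reach distance,
        scaled down by the hold's difficulty rating."""
        return math.exp(-(distance ** 2) / (2 * sigma ** 2)) / hold_difficulty

    print(success_score(1.5, 3.0))  # e.g. a 1.5-unit reach to a difficulty-3 hold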

addNextHand() has superfluous code

The last lines in this function don't appear to do anything: both finalCom and distance are set but never used.
It looks like everything after the comment "# after add a new hold" is unnecessary. Is this a bug, or can it simply be cut?

getleftHandOrder() and getrightHandOrder()

These two functions are identical:

    def getleftHandOrder(self):
        """
        Return the order of the last left-hand hold
        (in processed data from bottom to top)
        """
        lastIndexOfRight = ''.join(self.handOperator).rindex('R') / 2
        return self.handSequence[int(lastIndexOfRight)]

Should rindex('R') actually be rindex('L')?
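
If so, a corrected version would search for 'L' instead (a sketch assuming the reporter's reading is right):

    def getleftHandOrder(self):
        """
        Return the order of the last left-hand hold
        (in processed data from bottom to top)
        """
        # rindex('L') so this actually differs from getrightHandOrder()
        lastIndexOfLeft = ''.join(self.handOperator).rindex('L') / 2
        return self.handSequence[int(lastIndexOfLeft)]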
