Comments (7)

arturjordao commented on June 13, 2024

Dear @mylovecc2020,

Do I have to tune the hyperparameters for every paper's code?
No. Once we had set the hyperparameters (batch size, epochs, learning rate, optimizer, etc.), we used them for all datasets and methods. Unfortunately, due to computational constraints, we were not able to try many combinations of hyperparameters. Thus, different values for batch size, learning rate, and so on can lead to better results (as you mentioned). Finally, I would like to mention that the same convolutional architecture can present different results when running on TensorFlow/Keras or PyTorch (see this link).

How can I quickly reproduce the results from the paper?
For this purpose, I recommend you use the source code in the repository, since it already contains the experimental setup (i.e., hyperparameters) used in the paper. Alternatively, you can check the hyperparameters used in our paper in this file (i.e., cm.loss, cm.bs, cm.n_ep) and use them in your PyTorch implementation.
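
For illustration, here is a minimal sketch of carrying those settings over to a PyTorch script. The concrete values are placeholders (read the real ones from the file mentioned above), and the model is a stand-in, not the architecture from the paper.

```python
import torch
import torch.nn as nn

# Placeholder values -- replace them with the actual cm.loss, cm.bs and
# cm.n_ep defined in the repository file referenced above.
batch_size = 128                         # cm.bs
n_epochs = 200                           # cm.n_ep
loss_name = "categorical_crossentropy"   # cm.loss

# Stand-in model (a single linear layer over a 200-sample window with
# 6 sensor columns and 6 activity classes), not the paper's architecture.
model = nn.Sequential(nn.Flatten(), nn.Linear(200 * 6, 6))

# PyTorch's logit-based equivalent of Keras' categorical cross-entropy.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# The data loader should then yield windows shaped like the Keras setup,
# e.g. (batch_size, channel, time, sensors), so both frameworks see the
# same experimental protocol.
```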

If you have any other questions, please let me know.

Best regards,
Artur Jordão.

Mohammadtvk commented on June 13, 2024

Hi @arturjordao,

I am struggling with the results and their reproducibility. For example, you mention in your paper that the mean accuracy (semi-non-overlapping, leave-one-subject-out) of the ChenXue model on the WISDM dataset is 83.89%, but I got 70.58% running your code and around 25% running my own code.
The WISDM dataset has 51 subjects (link), while your data covers 36 subjects. Also, the different sensors are not synced, so we can only use one of them, which results in a (batch_size, channel, time, 3) shape, whereas yours is (batch_size, channel, time, 6).
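
For reference, here is a rough NumPy sketch of how the two modalities could be stacked along the sensor axis to get the 6-channel shape, assuming the accelerometer and gyroscope windows were already aligned sample-by-sample (the array sizes are made up for illustration):

```python
import numpy as np

# Hypothetical segmented windows, one array per modality,
# each shaped (n_windows, channel, time, 3 axes).
accel = np.random.randn(100, 1, 200, 3)
gyro = np.random.randn(100, 1, 200, 3)

# Concatenating along the last axis gives (n_windows, 1, 200, 6),
# matching the (batch_size, channel, time, 6) shape mentioned above.
windows = np.concatenate([accel, gyro], axis=-1)
print(windows.shape)  # (100, 1, 200, 6)
```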

@mylovecc2020, can I have your PyTorch implementation of these models?

Thanks a lot!

arturjordao commented on June 13, 2024

Hi @Mohammadtvk,

I have no idea why you are getting these poor results, because other papers have employed these implementations and achieved stable results.

Regarding the WISDM dataset, since some subjects do not perform all activities, we cannot consider all subjects in our evaluation. I didn't understand the synchronization issue: we do not synchronize the sensors because the raw data provided by the dataset is already synchronized (i.e., every sample has the same number of columns/sensors).
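
As a quick sanity check on whichever converted file you end up using, one could verify that every sample row really has the same number of columns; this sketch assumes a plain text file with one sample per line and comma-separated values, which may not match the actual converted format.

```python
# Hypothetical check: count the comma-separated columns of each row.
# A single value in the resulting set means all samples have the same
# number of columns (i.e., sensors).
with open("WISDM_converted.txt") as f:   # placeholder path
    widths = {len(line.rstrip("\n").split(",")) for line in f if line.strip()}
print(widths)
```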

If your problems persist, you can wait for other implementations or implement this benchmark yourself, as all the code is simple and easy to follow. Good luck.

Mohammadtvk commented on June 13, 2024

Thanks for the reply, @arturjordao.

I did not change anything in your implementation and just ran it, and the results I get are inconsistent. This is not just the WISDM dataset; the USCHAD dataset has the same issue. With the handcrafted-feature models, however, the results are consistent. I don't know what's going on!
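
For what it's worth, one common source of run-to-run variation in the deep models (as opposed to the handcrafted-feature pipelines) is unseeded random initialization. Here is a rough sketch of pinning the seeds before building the model, assuming a TensorFlow 2.x / Keras stack (the repository's actual version may differ):

```python
import os
import random

import numpy as np
import tensorflow as tf

SEED = 42
os.environ["PYTHONHASHSEED"] = str(SEED)
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)   # TF 1.x instead uses tf.set_random_seed(SEED)

# Build and train the model only after the seeds are fixed; note that
# GPU nondeterminism can still cause small differences between runs.
```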

And about the WISDM dataset, which subjects are removed from it? In the version that I have there are 4 files: accel_phone, accel_watch, gyro_phone and gyro_watch, and each file has a timestamp column whose values do not match across the different sensor modalities.
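
In case it is useful, here is a rough pandas sketch of aligning two of those files on their timestamps with a nearest-match join; the column names and paths are assumptions based on the description above, not the actual WISDM schema.

```python
import pandas as pd

cols = ["subject", "activity", "timestamp", "x", "y", "z"]   # assumed layout
accel = pd.read_csv("accel_phone.txt", names=cols)           # placeholder paths
gyro = pd.read_csv("gyro_phone.txt", names=cols)

# merge_asof requires both frames to be sorted by the join key.
accel = accel.sort_values("timestamp")
gyro = gyro.sort_values("timestamp")

# Nearest-timestamp join per subject/activity; the tolerance (omitted here)
# would need tuning to the sensors' sampling rates.
merged = pd.merge_asof(accel, gyro, on="timestamp",
                       by=["subject", "activity"],
                       suffixes=("_acc", "_gyr"),
                       direction="nearest")
```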

arturjordao commented on June 13, 2024

@Mohammadtvk,

What is the Keras + TensorFlow version you are using?

Regarding the WISDM dataset, I believe we are using different versions. I just checked the data using this link and there have been some modifications; for example, when I converted the files there were no "phone" and "watch" variants. In addition, those files were created around July 2019, whereas I converted the WISDM files around 2017 (for the first version of the paper). Unfortunately, I no longer have the raw data I used. I will check whether the other authors have these files and answer you as soon as possible.

Best regards,
Artur Jordão.

arturjordao commented on June 13, 2024

@Mohammadtvk

I found the raw data I used at this link. I have also attached the converted raw data, so you can check the number of subjects and other information.
converted_data.zip
