
greenedge-energy-aware-classification's Introduction

Greenedge Contest: Energy Aware Image Classification

In this competition, we call on participants to develop a model that excels in both energy efficiency and accuracy when classifying the TinyImageNet test set. Specifically, participants will tackle a multi-class classification problem on this dataset, focusing on optimizing the energy-accuracy trade-off during the inference phase. We are eager to see innovative approaches that effectively balance minimizing average energy consumption with delivering accurate results.
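The README does not spell out the exact scoring formula, so purely as an illustration, one simple way to think about the energy-accuracy trade-off is a weighted score that rewards top-1 accuracy and penalizes average per-image inference energy. The sketch below is a hypothetical example only; the function name, the weight lambda_energy, and the numbers are assumptions, not the official contest metric.

    def tradeoff_score(accuracy: float, avg_energy_j: float, lambda_energy: float = 0.1) -> float:
        # Illustrative only: reward top-1 accuracy, penalize mean inference energy (joules per image).
        return accuracy - lambda_energy * avg_energy_j

    # Example: 62% top-1 accuracy at an average of 0.8 J per image.
    print(tradeoff_score(0.62, 0.8))  # 0.54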

Useful Links

Who Can Participate?

BS-, MS-, and PhD-level students are invited to participate in the challenge. Submitted projects will be evaluated by a team of scientists from the GREENEDGE consortium. The winners will be invited to the final GREENEDGE workshop, where they will present their work and receive a prize. The final workshop will be co-located with an international conference and will take place around September 2024 (detailed instructions will follow). You can participate via this link.

Participation Deadline: May 15, 2024

Webinar

We will hold a webinar on 22 May 2024; more details will be shared soon. During the webinar we will meet the teams, provide guidance and help in defining the objectives, and clarify any questions the teams may have.

Participation Instructions

  • Teams of up to 3 people are allowed.
  • Subscribe to the challenge using the form at this link.
  • Final project submission deadline: June 15, 2024 (a submission link will be sent to participants in due time).
  • Contest winners’ announcement: July 30, 2024.

Evaluation Phase

An evaluation report will be generated for each received solution. The final score will be decided by a review committee. The evaluation will be based on the following criteria:

  • adopted methodology (10 points);
  • achieved results (10 points);
  • code (8 points);
  • presentation (8 points).

Acknowledgements

The present work has received funding from the European Union’s Horizon 2020 Marie Skłodowska-Curie Innovative Training Network GREENEDGE (GA No. 953775).

Contact Information

To ask questions, please feel free to create an issue via the Issues tab.

We look forward to your participation!

greenedge-energy-aware-classification's People

Contributors

marcellobullo, selimfirat


greenedge-energy-aware-classification's Issues

Batch distribution using deeplake dataset.

Below is an image of the label distribution in batches of 256 images. The batches are produced with the following snippet:

    import deeplake

    train_data = deeplake.load("hub://activeloop/tiny-imagenet-train")
    dataloader = train_data.pytorch(
        num_workers=0,
        tensors=['images', 'labels'],
        transform=tiny_image_net_transform(tform),  # preprocessing transform defined elsewhere in our notebook
        batch_size=256,
        shuffle=True,
    )
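For reference, the number of distinct labels per batch can be checked with a short loop like the one below. This is only a sketch; it assumes the Deep Lake dataloader yields dictionaries keyed by tensor name, with the labels as integer tensors.

    import torch

    for i, batch in enumerate(dataloader):
        labels = batch['labels'].flatten()         # assumed shape: (batch_size,) or (batch_size, 1)
        n_distinct = torch.unique(labels).numel()  # number of different classes in this batch
        print(f"batch {i}: {n_distinct} distinct labels out of {labels.numel()} samples")
        if i == 39:                                # only inspect the first 40 batches
            break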

This is the label distribution for the first 40 batches:

(image: per-batch label distribution)

Notice that the distribution is not uniform over the 200 labels: each batch contains only around 17 distinct labels.

The notebook used to obtain these results is available at https://colab.research.google.com/drive/118tVGt7Fqwk5FAqAKBnI_-ONc8LlAUS6?usp=sharing; we hope everything is correct.

Andrea and Luca :)

Mismatch between Tiny-ImageNet and ImageNet Labels

I noticed that the Tiny-ImageNet labels are neither in the range 0-199 nor in the range 0-999. This caused problems when testing the dataset with a model pretrained on ImageNet, such as EfficientNet, since there appears to be no direct relation between the two datasets' labels.
For this reason, I had a conversation with the Deep Lake staff today to ask how to correctly map the Tiny-ImageNet labels to the ImageNet labels.
They informed me that by the end of the week they will update the Deep Lake Tiny-ImageNet dataset so that the labels correspond to the actual ImageNet labels.
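Until the dataset is updated, one possible workaround is to remap labels by matching the WordNet IDs (wnids) shared by the two datasets. The sketch below is an illustration only: build_label_map, tiny_wnids, and imagenet_wnids are hypothetical names, and the real wnid lists would have to come from the datasets' class-name metadata.

    from typing import Dict, List

    def build_label_map(tiny_wnids: List[str], imagenet_wnids: List[str]) -> Dict[int, int]:
        # Map Tiny-ImageNet label indices to ImageNet-1k indices by matching WordNet IDs.
        imagenet_index = {wnid: i for i, wnid in enumerate(imagenet_wnids)}
        return {t: imagenet_index[w] for t, w in enumerate(tiny_wnids) if w in imagenet_index}

    # Toy usage with made-up wnids (the real lists have 200 and 1000 entries, respectively):
    print(build_label_map(["n0000001", "n0000002"], ["n0000002", "n0000009", "n0000001"]))  # {0: 2, 1: 0}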

Label distributions

Hello, I was trying to run the exact code without any modification, but the accuracy is 0.0. I understand that the data has a problem, so I added some logging (see below), and the labels are not in the range 0-199.
As you can see in the screenshot below, these are the data and labels.
(screenshot: data and label values)
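A quick way to inspect the raw label range is to read the labels tensor directly. This is only a sketch; it assumes the Deep Lake dataset loaded in the earlier snippet (train_data) and that the labels tensor exposes a numpy() accessor.

    import numpy as np

    labels = train_data['labels'].numpy().flatten()
    print("min:", labels.min(), "max:", labels.max(), "distinct:", np.unique(labels).size)
    # If the values are not within 0-199, the labels do not match the expected Tiny-ImageNet indexing.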

Also, during training I logged the predicted and true labels; the reason for the 0.0 accuracy is that the label distributions are not correct.

(screenshot: predicted and true labels during training)

Also, the results:
(photo: results)

Thank you
