deepweeds's People

Contributors: alexolsen

deepweeds's Issues

Enhancement / scale / rotation invariance built into network

I was reading your paper, which describes the data augmentation as follows:

"Then, each image was randomly scaled both vertically and horizontally in the range of [0.5, 1]. Each colour channel was randomly shifted within the range of ±25 (i.e. approximately ±10% of the maximum available 8-bit colour encoding range [0, 255]). To account for illumination variance, pixel intensity was randomly shifted within the [−25, +25] range, shifting all colour channels uniformly. In addition, pixel intensity was randomly scaled within the [0.75, 1.25] range."

Are you familiar with this repo: https://github.com/tueimage/SE2CNN? Building rotation and scale equivariance directly into the network might be more fruitful than approximating invariance through augmentation.
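For reference, the photometric part of the augmentation scheme quoted above could be sketched in plain numpy as follows. This is an illustrative sketch, not the repository's actual implementation (which uses a Keras data generator); the function name and signature are made up here, and the geometric scaling in [0.5, 1] is omitted because it requires an interpolation routine.

```python
import numpy as np

def augment(image, rng=None):
    """Sketch of the paper's photometric augmentations.
    `image`: float array in [0, 255] with shape (H, W, 3)."""
    rng = rng if rng is not None else np.random.default_rng()
    # Per-channel colour shift within +/-25 (~10% of the 8-bit range [0, 255])
    image = image + rng.uniform(-25, 25, size=3)
    # Uniform intensity shift in [-25, +25], applied to all channels equally
    image = image + rng.uniform(-25, 25)
    # Random intensity scaling in [0.75, 1.25]
    image = image * rng.uniform(0.75, 1.25)
    # Keep the result inside the valid 8-bit range
    return np.clip(image, 0, 255)
```
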

Inference on single images

Hi Alex, thanks so much for sharing the code! I am new to deep learning and found your commented code very helpful and clear.

I am attempting to run inference on single images from your study using one of your pre-trained models, but I think I'm running into some issues. I downloaded the ResNet-50 model (resnet.hdf5) and loaded it to make predictions on one image at a time (using deepweeds.inference()). However, every prediction is for the Negative class: its probability is always much higher than the remaining classes'.

I also tried running model.predict_generator() (from deepweeds.cross_validate()) on just a test subset of the data ('test_subset0.csv') to see if the predictions turned out differently. This was done in a Google Colab notebook with a GPU, but it seems to hang and never complete, with both ~3500 images and ~10 images (the smaller set was to rule out runtime as the issue).

Do you know what I might be doing wrong?

Thanks!
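A common cause of everything collapsing to one class is a preprocessing mismatch: a single image must be cropped/scaled exactly as the training generator scaled its batches. Below is a minimal numpy sketch of such a preprocessing step, assuming the model expects 224×224 RGB inputs rescaled to [0, 1]; the centre-crop and the scaling convention are assumptions, not confirmed details of this repo, and the Keras call is shown only as a comment.

```python
import numpy as np

def preprocess_single(image):
    """Centre-crop a (256, 256, 3) uint8 image to 224x224, rescale to [0, 1],
    and add the batch dimension that Keras' predict() expects.
    Assumes the model was trained on inputs scaled the same way."""
    h, w = image.shape[:2]
    top, left = (h - 224) // 2, (w - 224) // 2
    crop = image[top:top + 224, left:left + 224].astype(np.float32) / 255.0
    return crop[np.newaxis]  # shape (1, 224, 224, 3)

# model = tensorflow.keras.models.load_model("resnet.hdf5")
# probs = model.predict(preprocess_single(img))
```

If the training pipeline normalised differently (e.g. mean subtraction), the single-image path must match it exactly.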

No labels in the images.zip

Hi, I am interested in working with the data. Thanks for your work.

However, I cannot find the labels in images.zip.

There are no class subfolders in the unzipped archive; all images sit in a single flat directory.

Thanks
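If the labels are distributed as a CSV mapping filenames to species (as the repository's labels file appears to do; the exact column names here are an assumption), the flat image directory can be sorted into per-class subfolders with a short script like this sketch:

```python
import csv
import os
import shutil

def sort_into_class_folders(labels_csv, images_dir, out_dir):
    """Copy images from a flat directory into per-class subfolders,
    assuming a CSV with 'Filename' and 'Species' columns."""
    with open(labels_csv, newline="") as f:
        for row in csv.DictReader(f):
            dest = os.path.join(out_dir, row["Species"])
            os.makedirs(dest, exist_ok=True)
            shutil.copy(os.path.join(images_dir, row["Filename"]), dest)
```

The resulting folder layout is what Keras' flow_from_directory-style loaders expect.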

Can you help me with a TypeError?

Hello, and good luck with the project. I encountered an error while training the model according to your instructions: I get a TypeError in "train_data_generator" in "deepweeds.py". How can I solve this error? Any help would be appreciated. Thank you for the good work.

Request: Original or higher resolution dataset

Hi Alex and Team, thanks for your great work.

Would it be possible to obtain the original dataset of images?

I've found that a model can be trained and tested with high accuracy after replicating your process with ResNet-50 and PyTorch; however, I'm struggling with inference on images from outside the dataset, where results are generally much poorer (in particular, the confusion matrix shows errors between Lantana, Snake Weed, and Rubber Vine). I would like to experiment with different transform techniques, as I believe preserving the aspect ratio of the weed, managing colour/contrast, etc., may help.

Cheers,
Mitch

EDIT: Some samples, confusion matrix, etc.
DeepWeeds_Ten_Samples_OutofdatasetInference_1Oct2019.pdf

Paper details clarification

Hi Alex,

This is really very good work. Congrats!

I would like to ask for your help understanding two points in your work.

  1. I'm a little confused because you are using transfer learning, and in other work I have seen, transfer learning generally trains only a new 'top'. But you make all the layers of the pre-trained ResNet trainable and then train the model for just two epochs. How can just two epochs give good accuracy with only approximately 1k images per class while training the full network? Also, there is just a single Dense layer responsible for the prediction (binary_crossentropy).

  2. You are using a 'negative' class, but the number of negative examples is larger than the total of all the other classes combined. If someone decides to use a negative class as well, how should they calculate (or balance) its weight relative to the other classes?
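On the second point, one standard way to handle such an imbalance (not necessarily what the paper did) is inverse-frequency class weights, which Keras accepts via the class_weight argument to fit(). A minimal sketch:

```python
def class_weights(counts):
    """Inverse-frequency weights: weight = total / (n_classes * count),
    so an over-represented class (like 'negative') contributes less
    per sample to the loss."""
    total, n = sum(counts.values()), len(counts)
    return {c: total / (n * k) for c, k in counts.items()}
```

For example, with 100 Lantana images and 900 negatives, Lantana samples get weight 5.0 and negatives roughly 0.56, equalising each class's total contribution.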

Best Regards.
Kleyson Rios.

Source code and model request

I read your research paper and downloaded your images. It's a great effort compared to other available datasets. Could you please share the source code and model needed to replicate the results presented in the paper? I need it for a class project, and building the system from basic frameworks would be a huge task. Thanks.

Issues with global variables

@AlexOlsen, thank you for putting this dataset together. After playing with your code and reading your publication, I think I see an error in your global variables in deepweeds.py:

  • Line 37: Should MAX_EPOCH = 32?
  • Line 40: Should STOPPING_PATIENCE = 2?
  • Line 43: Shouldn't these be strings? CLASSES = [str(i) for i in range(9)]

Location data for images

Hi,

Thanks for providing this dataset.

Since you've collected your dataset from different named locations (Black River, Charters Towers, etc.), I would like to test how well my model generalises by learning a particular species from one location and testing it at another. Do you have data on the location where each image was taken? Could it perhaps be inferred from the date?
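If the image filenames encode a capture timestamp in a YYYYMMDD-HHMMSS prefix (an assumption about the naming scheme, e.g. a file like "20170207-154924-0.jpg"), images could at least be grouped by capture day as a rough proxy for collection session:

```python
from datetime import datetime

def capture_day(filename):
    """Parse a leading YYYYMMDD-HHMMSS timestamp from a filename
    (assumed naming scheme) and return the capture date."""
    stamp = "-".join(filename.split("-")[:2])  # e.g. "20170207-154924"
    return datetime.strptime(stamp, "%Y%m%d-%H%M%S").date()
```

Grouping by day would not recover the named locations by itself, but sessions on distinct days are plausibly distinct sites.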
