
flower-recognition's People

Contributors

gogul09



flower-recognition's Issues

MemoryError

Traceback (most recent call last):
  File "extract_features.py", line 126, in <module>
    h5f_data.create_dataset('dataset_1', data=np.array(features))
MemoryError
I am getting this error. How can I overcome it?
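A possible workaround (a sketch, not the repo's code): the MemoryError comes from materialising every feature vector in RAM at once via np.array(features). Pre-allocating the HDF5 dataset and writing one row at a time keeps memory flat. The shapes below are illustrative stand-ins.

```python
import h5py
import numpy as np

# Hypothetical shapes: adjust feature_dim to match the CNN's output size.
feature_dim = 4096
num_samples = 8

with h5py.File("features_demo.h5", "w") as h5f:
    # Pre-allocate the dataset and write one row at a time instead of
    # building the full np.array(features) in RAM first.
    dset = h5f.create_dataset("dataset_1", shape=(num_samples, feature_dim),
                              dtype="float32")
    for i in range(num_samples):
        # Stand-in for one extracted feature vector.
        feature = np.random.rand(feature_dim).astype("float32")
        dset[i] = feature

with h5py.File("features_demo.h5", "r") as h5f:
    print(h5f["dataset_1"].shape)  # (8, 4096)
```

Peak memory is then roughly one feature vector instead of the whole dataset.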

which preprocess_input used in extract_features.py

In extract_features.py, the code imports seven different preprocess_input functions:

from keras.applications.vgg16 import VGG16, preprocess_input
from keras.applications.vgg19 import VGG19, preprocess_input
from keras.applications.xception import Xception, preprocess_input
from keras.applications.resnet50 import ResNet50, preprocess_input
from keras.applications.inception_resnet_v2 import InceptionResNetV2, preprocess_input
from keras.applications.mobilenet import MobileNet, preprocess_input
from keras.applications.inception_v3 import InceptionV3, preprocess_input

So when x = preprocess_input(x) is called, which preprocess_input is actually used?
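In Python, each successive "from ... import preprocess_input" rebinds the same module-level name, so only the last import survives; with the import order above, inception_v3's version would be the one called regardless of which model is configured. A minimal sketch with stand-in functions (no Keras required) demonstrates the shadowing; aliasing each import (e.g. "import preprocess_input as vgg16_preprocess") avoids it.

```python
# Each later binding of the same name overwrites the earlier one,
# exactly as repeated `from ... import preprocess_input` lines do.
def preprocess_input(x):
    return "vgg16 version"       # stand-in for the first import

def preprocess_input(x):         # this later binding shadows the one above
    return "inception_v3 version"  # stand-in for the last import

print(preprocess_input(None))    # -> inception_v3 version
```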

need help to change some thing in the result file

I understand that the "support" column in the results file means the number of samples tested per class
(correct me if I am wrong).
But I need to test the same number of samples per class. What should I do?
Thanks
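One way to get an equal number of test samples per class (a sketch, assuming balanced classes like FLOWERS17's 80 images each): pass stratify=labels to scikit-learn's train_test_split, which preserves class proportions in the split.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-ins: 3 classes x 80 samples each, mirroring FLOWERS17's balance.
labels = np.repeat(np.arange(3), 80)
features = np.random.rand(len(labels), 4)

# stratify keeps class proportions; with balanced classes this gives the
# SAME number of test samples per class (here 80 * 0.25 = 20 each).
X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.25, random_state=9, stratify=labels)

print(np.bincount(y_te))  # [20 20 20]
```

The random_state=9 mirrors the "seed" value used in the project's config.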

ValueError: No such layer: custom

Hello,
I get the following error when I run 'extract_features.py'

Using TensorFlow backend.
[STATUS] start time - 2018-01-17 18:41
2018-01-17 18:41:56.202633: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.2 AVX AVX2 FMA
Traceback (most recent call last):
  File "extract_features.py", line 65, in <module>
    model = Model(input=base_model.input, output=base_model.get_layer('custom').output)
  File "/Users/sriram/tensorflow/lib/python2.7/site-packages/keras/engine/topology.py", line 1887, in get_layer
    raise ValueError('No such layer: ' + name)
ValueError: No such layer: custom

After checking on the internet, I found that we can create custom layers. Is there a simpler alternative? I am trying this on a MacBook Pro.
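The error simply means the loaded base model has no layer named 'custom'. One defensive pattern (sketched with a stand-in class so it runs without Keras; the same try/except applies to a real Keras model) is to fall back to the model's final output when the named layer is absent:

```python
class StubModel:
    """Minimal stand-in mimicking Keras's get_layer() error behaviour."""
    def __init__(self, layer_names):
        self._layers = set(layer_names)
        self.output = "final_output"   # stand-in for the model's last tensor

    def get_layer(self, name):
        if name not in self._layers:
            raise ValueError("No such layer: " + name)
        return name

base_model = StubModel(["block1_conv1", "fc2"])  # no 'custom' layer here

try:
    out = base_model.get_layer("custom")
except ValueError:
    # Fall back to the model's final output (or pick an existing layer name).
    out = base_model.output

print(out)  # final_output
```

With a real model, printing [l.name for l in base_model.layers] shows which layer names are actually available to tap.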

Some minor issues in the code

In training, I saw Rank-1 and Rank-5 results, and Rank-5 accuracy was significantly better than Rank-1; but in testing, only the Rank-1 accuracy was returned.
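Note that rank-5 accuracy is always at least as high as rank-1 by construction: a prediction counts as correct if the true class appears anywhere in the top 5 guesses. A sketch of how both are computed from class probabilities (the function name here is illustrative, not the repo's own):

```python
import numpy as np

def rank_k_accuracy(proba, y_true, k):
    # Top-k predicted class indices per sample, highest probability first.
    top_k = np.argsort(proba, axis=1)[:, ::-1][:, :k]
    hits = sum(1 for row, label in zip(top_k, y_true) if label in row)
    return hits / float(len(y_true))

proba = np.array([[0.1, 0.6, 0.3],
                  [0.5, 0.3, 0.2],
                  [0.2, 0.3, 0.5]])
y_true = np.array([2, 1, 2])

print(rank_k_accuracy(proba, y_true, 1))  # 0.333... (only the 3rd top guess is right)
print(rank_k_accuracy(proba, y_true, 3))  # 1.0 (true label always within top-3)
```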

I wanted to write to you, but I don't know your email, so I am reaching out this way.

I have three questions about the code of flower-recognition-deep-learning:
(1) I do not know how to prepare the dataset's labels. I have downloaded the dataset and ran organize-flowers17.py, which created 17 empty folders.
(2) If I want to train the network from ImageNet weights, do I need to download the ImageNet model?
(3) I decided to use the VGG16 architecture for the pre-trained model, but when I run extract-feature.py, I get two .h5 files of only 2 KB each.
I'm just a beginner, so I hope to get your help. I am looking forward to your reply.
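On question (1): in folder-per-class layouts like the one organize-flowers17.py is meant to produce, the labels usually come straight from the folder names. A sketch (paths are illustrative; a temp directory stands in for the dataset folder). Note that empty class folders typically mean the organize script could not find the source images, so the copy step silently did nothing.

```python
import os
import tempfile

# Build a tiny stand-in dataset: one folder per class, images inside.
root = tempfile.mkdtemp()
for cls in ["daisy", "tulip"]:
    os.makedirs(os.path.join(root, cls))
    open(os.path.join(root, cls, "image_0001.jpg"), "w").close()

# Labels are derived from the folder each image lives in.
labels, image_paths = [], []
for cls in sorted(os.listdir(root)):
    for fname in sorted(os.listdir(os.path.join(root, cls))):
        image_paths.append(os.path.join(root, cls, fname))
        labels.append(cls)

print(labels)  # ['daisy', 'tulip']
```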

Very low accuracy results with FLOWERS17

Thanks for putting up this code. I'm just a bit confused as to why I'm getting very low accuracy results from running it. Any idea where it might be going wrong?

rank-1: 4.66%
rank-5: 28.19%

             precision    recall  f1-score   support

          0       0.09      0.13      0.11        23
          1       0.00      0.00      0.00        23
          2       0.00      0.00      0.00        27
          3       0.04      0.03      0.04        31
          4       0.04      0.03      0.04        29
          5       0.10      0.10      0.10        21
          6       0.06      0.04      0.05        25
          7       0.05      0.04      0.04        26
          8       0.00      0.00      0.00        23
          9       0.00      0.00      0.00        24
         10       0.07      0.05      0.06        21
         11       0.00      0.00      0.00        22
         12       0.08      0.10      0.09        20
         13       0.04      0.03      0.04        30
         14       0.09      0.09      0.09        22
         15       0.04      0.06      0.05        18
         16       0.12      0.13      0.12        23

My conf/config.json

{
        "model"                 : "vgg19",
        "weights"               : "imagenet",
        "include_top"           : false,

        "train_path"            : "/home/ubuntu/flower/flowers",
        "features_path"         : "output/flowers_17/vgg19/features.h5",
        "labels_path"           : "output/flowers_17/vgg19/labels.h5",
        "results"               : "output/flowers_17/vgg19/results.txt",
        "classifier_path"       : "output/flowers_17/vgg19/classifier.cpickle",

        "test_size"             : 0.30,
        "seed"                  : 9,
        "num_classes"           : 17
}

I've downloaded the FLOWERS17 dataset, and ran organize_flowers17.py. It all looks good:

ubuntu@ubuntu:~/flower/flowers$ ls
bluebell   coltsfoot  crocus    daisy      fritillary  lilyvalley  snowdrop   tigerlily  windflower
buttercup  cowslip    daffodil  dandelion  iris        pansy       sunflower  tulip
ubuntu@ubuntu:~/flower/flowers$ ls -l windflower/
total 3568
-rwxr-xr-x 1 ubuntu ubuntu  28427 Jul 23 23:32 image_0006.jpg
-rwxr-xr-x 1 ubuntu ubuntu  41580 Jul 23 23:32 image_0029.jpg
-rwxr-xr-x 1 ubuntu ubuntu  37322 Jul 23 23:32 image_0034.jpg
...

they all seem to have 80 images:

ubuntu@ubuntu:~/flower/flowers$ ls -1 windflower/ | wc -l
80
ubuntu@ubuntu:~/flower/flowers$ ls -1 daisy/ | wc -l
80

the h5 files seem to be filled:

ubuntu@ubuntu:~/flower/output/flowers_17/vgg19$ ls -l
total 23448
-rw-rw-r-- 1 ubuntu ubuntu  1700188 Jul 24 02:23 classifier.cpickle
-rw-rw-r-- 1 ubuntu ubuntu 22284384 Jul 24 02:07 features.h5
-rw-rw-r-- 1 ubuntu ubuntu    13024 Jul 24 02:07 labels.h5
-rw-rw-r-- 1 ubuntu ubuntu     1040 Jul 24 02:23 results.txt

Interestingly, if I download your output/flowers_17/vgg19/features.h5 and run train.py using it, I get your results. So the problem seems to be something in extract_features.py.

Since I'm using Linux, I had to change a few parts of the code to use Unix-style paths (e.g., '/' vs. '\\'). But I don't see why that should matter, since it still seems to pick up all the files.
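Path edits like that can be avoided entirely: building paths with os.path.join makes the same code work on both Windows and Linux, since it uses the platform's own separator. A minimal sketch (the folder names are illustrative):

```python
import os

# os.path.join inserts the correct separator for the current platform:
# 'dataset/train/daisy/...' on Linux, 'dataset\\train\\daisy\\...' on Windows.
image_path = os.path.join("dataset", "train", "daisy", "image_0001.jpg")
print(image_path)
```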

Different classifiers

It works well with logistic regression, but what if I want to try different classifiers? How can I do that?
I tried a kernel SVM, but it raises an error saying the predict_proba function doesn't exist.
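On the predict_proba error: scikit-learn's SVC only exposes predict_proba when it is constructed with probability=True (which fits an extra probability-calibration step internally, at some training cost). A minimal sketch with toy data:

```python
import numpy as np
from sklearn.svm import SVC

# Toy data: 40 samples, 4 features, 2 classes.
X = np.random.RandomState(0).rand(40, 4)
y = np.array([0, 1] * 20)

# probability=True enables predict_proba; without it, SVC only has
# predict() and decision_function().
clf = SVC(kernel="rbf", probability=True, random_state=9)
clf.fit(X, y)

proba = clf.predict_proba(X[:2])
print(proba.shape)  # (2, 2): one probability per class, per sample
```

Any classifier that implements predict_proba (RandomForestClassifier, etc.) can then be swapped in where the rank-1/rank-5 code expects probabilities.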

The system cannot find the file specified.

I am getting this error even after giving the correct input and output paths. The folders get created, but there are no images in them, and the error says "The system cannot find the file specified."

Deep learning using GPU of Odroid XU4

I executed this source code on an Odroid XU4 with Ubuntu, Keras, and Theano.
I succeeded in running it on the CPU of the Odroid XU4, but I want to run it on the GPU.
I attempted that but failed.
My questions are:

  1. Did you try using the GPU of the Odroid XU4 when executing this code?
  2. If you used the GPU, can you tell me how?

Testing classifier using my own test data

Hello! I have read your post, but there are still a few things I don't understand.

You created a folder named 'dataset' with two subfolders: 'train' and 'test'. The 'organize_flowers_17.py' script creates 17 folders in output_path (\flower-recognition\dataset\train) and copies images into these labeled folders. In your conf.json file you defined "train_path": "dataset/flowers_17", but there is no dataset/flowers_17 folder in your folder structure. Why do we need the 'dataset/test' folder?

And the most important question for me: how can I use my own test images? How will the algorithm know which class an image belongs to? And do we still need the "test_size" setting in conf.json? I would be grateful for your reply. :)
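On using your own test images: the classifier never "knows" the true class of a new image; it only predicts one, and you compare against the truth yourself. The usual pipeline is to extract the image's features with the same CNN used for training, then feed that vector to the saved classifier. A sketch of the second half (feature extraction is faked with a random vector so it runs without Keras; the label list and shapes are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Class names in the same (sorted) order used when labels were encoded.
train_labels = ["bluebell", "daisy", "tulip"]

# Stand-in training set: 10 feature vectors per class, 8 dims each.
X_train = np.random.RandomState(9).rand(30, 8)
y_train = np.repeat(np.arange(3), 10)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Stand-in for the CNN feature vector of YOUR image.
my_image_feature = np.random.RandomState(1).rand(1, 8)
pred = clf.predict(my_image_feature)[0]
print(train_labels[pred])  # the predicted class name
```

The "test_size" setting only controls the automatic train/test split of the organized dataset; it is not needed when you score your own images this way.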
