
Comments (15)

MarsTechHAN commented on May 18, 2024

Cool, let me have a look.

MarsTechHAN commented on May 18, 2024

Can you provide the model for debugging?

AIGirl10 commented on May 18, 2024

@MarsTechHAN

Model link: https://drive.google.com/drive/folders/1--ZzTFtdDkgRxsezZfW65is5KVBfJh8I?usp=sharing

Full C++ inference program:

#include <algorithm>
#include <iostream>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

#include <opencv2/ml.hpp>
#include <iterator>
#include <fstream>
#include "dirent.h"
#include "net.h"

using namespace cv::ml;
using namespace std;
using namespace cv;

/* Function is taken from one of NCNN examples */
static int print_topk(const std::vector<float>& cls_scores, int topk)
{
    // partial sort topk with index
    int size = cls_scores.size();
    std::vector<std::pair<float, int> > vec;
    vec.resize(size);
    for (int i = 0; i < size; i++)
    {
        vec[i] = std::make_pair(cls_scores[i], i);
    }

    std::partial_sort(vec.begin(), vec.begin() + topk, vec.end(),
                      std::greater<std::pair<float, int> >());

    // print topk and score
    for (int i = 0; i < topk; i++)
    {
        float score = vec[i].first;
        int index = vec[i].second;
        fprintf(stderr, "%d = %f\n", index, score);
    }

    return 0;
}

int main()
{
    // Read the test image into a cv::Mat
    Mat image = imread("eyes.jpg");
    // Convert from BGR to RGB
    cv::cvtColor(image, image, cv::COLOR_BGR2RGB);
    // Resize to 128x128
    Mat img;
    resize(image, img, Size(128, 128));
    // Convert to float and normalize by dividing by 255
    img.convertTo(img, CV_32F);
    img /= 255.0;
    // Load the ncnn model
    ncnn::Net smallCNN;
    smallCNN.load_param("cnn128.param");
    smallCNN.load_model("cnn128.bin");
    // Convert the cv::Mat image to ncnn::Mat
    ncnn::Mat in = ncnn::Mat::from_pixels(img.data, ncnn::Mat::PIXEL_RGB, img.cols, img.rows);
    // Give the input to the ncnn model
    ncnn::Extractor ex = smallCNN.create_extractor();
    ex.input("conv2d_8_input_blob", in);

    ncnn::Mat out;
    ex.extract("dense_5_Softmax_blob", out);

    std::vector<float> cls_scores;
    cls_scores.resize(out.w);
    for (int j = 0; j < out.w; j++)
    {
        cls_scores[j] = out[j];
        std::cout << "score: " << out[j];
    }
    print_topk(cls_scores, 3);
    return 0;
}
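
For comparison, here is a minimal sketch of the Keras-side pipeline I check against (assuming the model expects RGB input scaled to [0, 1]; adjust if the training pre-processing differed):

# Keras-side reference, mirroring the C++ pre-processing above.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

image = cv2.imread("eyes.jpg")                             # BGR uint8
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)             # BGR -> RGB
image = cv2.resize(image, (128, 128)).astype(np.float32) / 255.0
image = np.expand_dims(image, axis=0)                      # batch dim: (1, 128, 128, 3)

model = load_model("cnn128.h5")
pred = model.predict(image)[0]
print(pred, "predicted label:", pred.argmax())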

MarsTechHAN commented on May 18, 2024

Thanks! Can you also provide the original Keras model for comparison?

AIGirl10 commented on May 18, 2024

Uploaded the Keras model to the same folder here: https://drive.google.com/drive/folders/1--ZzTFtdDkgRxsezZfW65is5KVBfJh8I?usp=sharing

MarsTechHAN commented on May 18, 2024

Here is the keras2ncnn debug-mode output (feeding random data). The network forward pass looks completely identical, so the problem is probably in the pre-processing stage. Can you try extracting the conv2d_8_input and conv2d_8 layers and comparing them against the image passed in the line pred = model.predict(image)[0]? (A sketch for the Keras side is after the dump below.)

conv2d_8_input
==================================
Layer Name: conv2d_8, Layer Shape: keras->(1, 43, 43, 16) ncnn->(16, 43, 43)
Max:    keras->0.819 ncnn->0.819        Min: keras->-0.964 ncnn->-0.964
Mean:   keras->-0.083 ncnn->-0.083      Var: keras->0.215 ncnn->0.215
Cosine Similarity: 0.00000
Keras Feature Map:      [ 0.084  0.128 -0.216 -0.194  0.106 -0.197 -0.051  0.18  -0.159 -0.058]
Ncnn Feature Map:       [ 0.084  0.128 -0.216 -0.194  0.106 -0.197 -0.051  0.18  -0.159 -0.058]
==================================
Layer Name: max_pooling2d_8, Layer Shape: keras->(1, 22, 22, 16) ncnn->(16, 22, 22)
Max:    keras->0.819 ncnn->0.819        Min: keras->-0.693 ncnn->-0.693
Mean:   keras->0.097 ncnn->0.097        Var: keras->0.178 ncnn->0.178
Cosine Similarity: 0.00000
Keras Feature Map:      [0.128 0.016 0.106 0.18  0.16  0.01  0.062 0.162 0.175 0.135]
Ncnn Feature Map:       [0.128 0.016 0.106 0.18  0.16  0.01  0.062 0.162 0.175 0.135]
==================================
Layer Name: conv2d_9, Layer Shape: keras->(1, 11, 11, 32) ncnn->(32, 11, 11)
Max:    keras->0.994 ncnn->0.994        Min: keras->-1.331 ncnn->-1.331
Mean:   keras->-0.115 ncnn->-0.115      Var: keras->0.370 ncnn->0.370
Cosine Similarity: -0.00000
Keras Feature Map:      [0.594 0.511 0.582 0.539 0.646 0.273 0.217 0.661 0.864 0.311]
Ncnn Feature Map:       [0.594 0.511 0.582 0.539 0.646 0.273 0.217 0.661 0.864 0.311]
==================================
Layer Name: max_pooling2d_9, Layer Shape: keras->(1, 6, 6, 32) ncnn->(32, 6, 6)
Max:    keras->0.994 ncnn->0.994        Min: keras->-0.900 ncnn->-0.900
Mean:   keras->0.034 ncnn->0.034        Var: keras->0.345 ncnn->0.345
Cosine Similarity: 0.00000
Keras Feature Map:      [0.594 0.721 0.724 0.661 0.864 0.175]
Ncnn Feature Map:       [0.594 0.721 0.724 0.661 0.864 0.175]
==================================
Layer Name: conv2d_10, Layer Shape: keras->(1, 6, 6, 64) ncnn->(64, 6, 6)
Max:    keras->1.508 ncnn->1.508        Min: keras->-2.416 ncnn->-2.416
Mean:   keras->-0.259 ncnn->-0.259      Var: keras->0.535 ncnn->0.535
Cosine Similarity: 0.00000
Keras Feature Map:      [-0.803 -0.622 -0.666 -0.458 -0.862 -0.529]
Ncnn Feature Map:       [-0.803 -0.622 -0.666 -0.458 -0.862 -0.529]
==================================
Layer Name: max_pooling2d_10, Layer Shape: keras->(1, 3, 3, 64) ncnn->(64, 3, 3)
Max:    keras->1.508 ncnn->1.508        Min: keras->-1.635 ncnn->-1.635
Mean:   keras->0.068 ncnn->0.068        Var: keras->0.479 ncnn->0.479
Cosine Similarity: 0.00000
Keras Feature Map:      [-0.622 -0.458 -0.529]
Ncnn Feature Map:       [-0.622 -0.458 -0.529]
==================================
Layer Name: conv2d_11, Layer Shape: keras->(1, 3, 3, 64) ncnn->(64, 3, 3)
Max:    keras->2.364 ncnn->2.364        Min: keras->-3.498 ncnn->-3.498
Mean:   keras->-0.388 ncnn->-0.388      Var: keras->0.852 ncnn->0.852
Cosine Similarity: -0.00000
Keras Feature Map:      [-0.083  0.35   1.077]
Ncnn Feature Map:       [-0.083  0.35   1.077]
==================================
Layer Name: max_pooling2d_11, Layer Shape: keras->(1, 2, 2, 64) ncnn->(64, 2, 2)
Max:    keras->2.364 ncnn->2.364        Min: keras->-2.076 ncnn->-2.076
Mean:   keras->-0.044 ncnn->-0.044      Var: keras->0.713 ncnn->0.713
Cosine Similarity: 0.00000
Keras Feature Map:      [0.35  1.077]
Ncnn Feature Map:       [0.35  1.077]
==================================
Layer Name: flatten_2, Layer Shape: keras->(1, 256) ncnn->(1, 1, 256)
Max:    keras->2.364 ncnn->2.364        Min: keras->-2.076 ncnn->-2.076
Mean:   keras->-0.044 ncnn->-0.044      Var: keras->0.713 ncnn->0.713
Cosine Similarity: 0.00000
Keras Feature Map:      [ 0.35   0.873 -0.533 -0.701 -1.271  0.158 -0.739 -0.816  0.883 -0.247]
Ncnn Feature Map:       [ 0.35   0.873 -0.533 -0.701 -1.271  0.158 -0.739 -0.816  0.883 -0.247]
Top-k:
Keras Top-k:    88:2.364, 174:1.651, 127:1.642, 195:1.427, 25:1.396
ncnn Top-k:     88:2.364, 174:1.651, 127:1.642, 195:1.427, 25:1.396
==================================
Layer Name: dense_4, Layer Shape: keras->(1, 256) ncnn->(1, 1, 256)
Max:    keras->6.627 ncnn->6.627        Min: keras->0.000 ncnn->0.000
Mean:   keras->0.997 ncnn->0.997        Var: keras->1.479 ncnn->1.479
Cosine Similarity: 0.00000
Keras Feature Map:      [0.    0.    1.482 1.711 2.979 5.397 0.    0.248 0.    1.833]
Ncnn Feature Map:       [0.    0.    1.482 1.711 2.979 5.397 0.    0.248 0.    1.833]
Top-k:
Keras Top-k:    82:6.627, 12:6.439, 90:5.565, 129:5.425, 201:5.404
ncnn Top-k:     82:6.627, 12:6.439, 90:5.565, 129:5.425, 201:5.404
==================================
Layer Name: dense_5_Softmax, Layer Shape: keras->(1, 9) ncnn->(1, 1, 9)
Max:    keras->1.000 ncnn->1.000        Min: keras->0.000 ncnn->0.000
Mean:   keras->0.111 ncnn->0.111        Var: keras->0.314 ncnn->0.314
Cosine Similarity: -0.00000
Keras Feature Map:      [0. 0. 0. 0. 0. 0. 0. 0. 1.]
Ncnn Feature Map:       [0. 0. 0. 0. 0. 0. 0. 0. 1.]
Top-k:
Keras Top-k:    8:1.000, 2:0.000, 5:0.000, 1:0.000, 0:0.000
ncnn Top-k:     8:1.000, 2:0.000, 5:0.000, 7:0.000, 6:0.000
Done!
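
For the Keras side, a minimal sketch for dumping the conv2d_8 activations (assuming a TensorFlow-backed Keras model and the layer names shown above):

import numpy as np
from tensorflow.keras.models import Model, load_model

model = load_model("cnn128.h5")
probe = Model(inputs=model.input, outputs=model.get_layer("conv2d_8").output)

# Use your real pre-processed test image here; random data is only enough
# to check that both implementations see the same numbers.
image = np.random.rand(1, 128, 128, 3).astype(np.float32)
feat = probe.predict(image)[0]          # NHWC activation, (43, 43, 16) per the dump above
print(feat.shape, feat.flatten()[:10])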

MarsTechHAN commented on May 18, 2024

Something like this has happened before... but I was just not able to find where the accuracy mismatch came from.

AIGirl10 commented on May 18, 2024

Yes, same for me too. But I will try to print the output of the first layer in both Keras and ncnn; then we might be able to solve it. Will let you know if I get any success! Thanks though.

MarsTechHAN commented on May 18, 2024

Maybe you can also give me a test picture and the expected output.

AIGirl10 commented on May 18, 2024

@MarsTechHAN

The image is uploaded to the same drive link: https://drive.google.com/drive/folders/1--ZzTFtdDkgRxsezZfW65is5KVBfJh8I?usp=sharing

Expected output: 0

where 0 is the label index.

AIGirl10 commented on May 18, 2024

@MarsTechHAN

How did you print these values?

If possible, can you please help me with the program that prints them?

MarsTechHAN commented on May 18, 2024

@MarsTechHAN

How did you print these values?

If possible, can you please help me with the program that prints them?

It's keras2ncnn's built-in debug mode; run:

python3 -mkeras2ncnn -i MODEL_FILE.h5 -d

However, it still contains a lot of bugs. I am thinking of using Docker for it. You can refer to this code snippet for your program:

static int dump_mat(const ncnn::Mat& m, const char *m_name)
{
    char filename[1000] = "";
    sprintf(filename, "%s-%d-%d-%d-layer_dump.dat", m_name, m.c, m.h, m.w);
    FILE* fp = fopen(filename, "w+");
    if (fp == NULL) {
        return -1;
    }
    for (int q = 0; q < m.c; q++)
    {
        const float* ptr = m.channel(q);
        fwrite(ptr, sizeof(float), m.w * m.h, fp);
    }
    fclose(fp);
    return 0;
}

and this for loading:
ncnn_det_out[mat_sp[0]] = np.fromfile(
    os.path.join(self.tmp_dir, 'ncnn', 'build', 'benchmark', mat_file),
    dtype='float32').reshape(*list(map(int, mat_sp[1:4])))
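
Putting the two together, a rough sketch for comparing a dumped ncnn blob against Keras (the file name below is only a hypothetical example of what dump_mat() writes, i.e. "<name>-<c>-<h>-<w>-layer_dump.dat"):

import numpy as np

dump_file = "conv2d_8_blob-16-43-43-layer_dump.dat"      # hypothetical example name
c, h, w = (int(x) for x in dump_file.split("-")[1:4])    # shape encoded in the file name
ncnn_feat = np.fromfile(dump_file, dtype=np.float32).reshape(c, h, w)

# keras_feat: the (h, w, c) activation of the same layer from Keras;
# transpose to (c, h, w) before comparing, e.g.:
# print(np.abs(ncnn_feat - keras_feat.transpose(2, 0, 1)).max())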

AIGirl10 commented on May 18, 2024

@MarsTechHAN

One thing I noticed: in Keras the input dimensions are float32[?,128,128,3], while in ncnn they are float32[128,128,3].

Keras is 4D and ncnn is 3D; do you know how to convert the ncnn image to 4D?

AIGirl10 commented on May 18, 2024

@MarsTechHAN

While using the debugger I am getting this error

python3 -mkeras2ncnn -i cnn128.h5 -d

error

ValueError: ('Unrecognized keyword arguments:', dict_keys(['ragged']))

MarsTechHAN commented on May 18, 2024

@MarsTechHAN

One thing I noticed: in Keras the input dimensions are float32[?,128,128,3], while in ncnn they are float32[128,128,3].

Keras is 4D and ncnn is 3D; do you know how to convert the ncnn image to 4D?

ncnn does not have 4D tensors or a batch dim. The Keras input is 4D only because of the batch dimension; you don't need that in ncnn. So no worries, it's correct.
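
A quick shape sketch of the relationship (nothing ncnn-specific, just numpy):

import numpy as np

hwc = np.zeros((128, 128, 3), dtype=np.float32)   # what from_pixels() consumes: (H, W, C), no batch dim
nhwc = hwc[np.newaxis, ...]                       # what Keras' predict() expects: (1, H, W, C)
print(hwc.shape, nhwc.shape)                      # (128, 128, 3) (1, 128, 128, 3)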

@MarsTechHAN

While using the debugger I am getting this error

python3 -mkeras2ncnn -i cnn128.h5 -d

error

ValueError: ('Unrecognized keyword arguments:', dict_keys(['ragged']))

Can you paste the full output log? It doesn't seem like an issue from the debugger itself.
