
mobile's People

Contributors

cs2be, daming-lu, hedaoyuan, luotao1, nhzlx, nickyfantasy, shanyi15, wangkuiyi, xreki, yeyupiaoling


mobile's Issues

PaddlePaddle New Year gifts! Free merchandise for everyone!

Dear PaddlePaddle users:

2017 has passed. Thank you for accompanying PaddlePaddle as it has grown in the deep learning field, whether through applications, contributions, reviews, or bug reports; every one of you matters to PaddlePaddle. For the new year, PaddlePaddle has prepared a batch of small gifts, to welcome an even better 2018 together with you. PaddlePaddle, let's go!

How to claim your gift:
Please send the following information to [email protected]:
["name" + "GitHub account" + "shipping address for the gift" + "contact phone number" + "custom hoodie size (S: 160, M: 165, L: 170, XL: 175, XXL: 180)"]
We will send a PaddlePaddle gift to the address in your email.
You can also reply to this issue with your wishes for PaddlePaddle, or simply "PaddlePaddle, let's go!", to remind us to check for your email.

Gift contents:

Every user who sends an email will receive a PaddlePaddle gift bag containing a logo hoodie, stickers, and a commemorative badge.

image

Major contributors will additionally receive one of the following items at random:

PaddlePaddle custom edition - Thermos vacuum mug

image

PaddlePaddle custom edition - mechanical keyboard

image

PaddlePaddle custom edition - Raspberry Pi

image

If you have any other questions, just reply to this issue. Thanks for everyone's support!

Failed to load the PaddlePaddle library on Android

I created an Android project using the official FindPaddle.cmake and CMakeLists.txt, and only modified the part below, but the project fails to build with an error:

add_library( # Sets the name of the library.
             paddle_image_recognizer

             # Sets the library as a shared library.
             SHARED

             # Provides a relative path to your source file(s).
             src/main/cpp/image_recognizer_jni.cpp
             src/main/cpp/paddle_image_recognizer.cpp
             src/main/cpp/binary_reader.cpp
             src/main/cpp/image_utils.cpp )

The error is as follows:
image

Output log:
CMakeOutput.log

If I do not use the official CMakeLists.txt, the error above does not occur. But since I have not modified anything in it, the PaddlePaddle library is not loaded, and the following error is reported instead:
image

SSD Demo on Android and iOS

We are planning to build an SSD (Single Shot MultiBox Detector) demo running on Android and iOS. PaddlePaddle has integrated the SSD algorithm and published an example demonstrating how to use an SSD model for object detection: https://github.com/PaddlePaddle/models/tree/develop/ssd.

Goals

To show PaddlePaddle's capability on mobile, we chose to run inference of an SSD model on Android and iOS with the following goals:

  • Build a demo application that uses the phone's camera to capture images and shows the detected objects to users.
  • Run fast enough to show the results in real time.

Tasks

  • Train an SSD model based on MobileNet, with input images of size 224 x 224 (@NHZlX, 2017-11-13)
  • Anything needed on the back end (@Xreki)
  • A mobile demo application to show at Baidu World on 2017-11-16 (@nickyfantasy)
    • iOS has high priority
    • Use the camera to capture images in real time
    • Show the rectangle, category, and score of the detected objects
    • Ready for testing on 2017-11-14

Details

  • Input: pixels of a colored image

    • Shape: 300 x 300 for the current VGG-based model (224 x 224 for the MobileNet-based model).
    • Data type: float
    • Storage format: CHW order, that is [RRRRRR][GGGGGG][BBBBBB]
  • Output

    The output type of inference is paddle_matrix. The height of the matrix is the number of detected objects, and the width is fixed to 7.

    • row[i][0]: the index within the minibatch. In our case the minibatch size is fixed to 1, so row[i][0] is always 0.0.
    • row[i][1]: the label of the object. You can find the label list in https://github.com/PaddlePaddle/models/blob/develop/ssd/data/label_list. If row[i][1] is 15, it means the detected object is a person.
    • row[i][2]: the confidence score of the detected rectangle and object.
    • row[i][3] - row[i][6]: (xmin, ymin, xmax, ymax), the relative coordinate of the rectangle.
    $ ./build/vgg_ssd_demo 
    I1107 06:36:18.600690 16092 Util.cpp:166] commandline:  --use_gpu=False 
    Prob: 7 x 7
    row 0: 0.000000 5.000000 0.010291 0.605270 0.749781 0.668338 0.848811 
    row 1: 0.000000 12.000000 0.530176 0.078279 0.640581 0.721344 0.995839 
    row 2: 0.000000 12.000000 0.017214 0.069217 0.000000 1.000000 0.972674 
    row 3: 0.000000 15.000000 0.998061 0.091996 0.000000 0.995694 1.000000 
    row 4: 0.000000 15.000000 0.040476 0.835338 0.014217 1.000000 0.446740 
    row 5: 0.000000 15.000000 0.010271 0.718238 0.006743 0.993035 0.659929 
    row 6: 0.000000 18.000000 0.012227 0.069217 0.000000 1.000000 0.972674 
  • Show

    The rectangle, category, and score of the detected objects should be shown correctly, like
    image
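The seven fields above can be decoded mechanically. A minimal Python sketch (decode_row is a hypothetical helper, not part of the C API) that turns one output row into a labeled detection:

```python
def decode_row(row, image_width, image_height):
    """Decode one 7-float row of the SSD output paddle_matrix.

    Field layout (from this issue): batch index, label, score,
    xmin, ymin, xmax, ymax (coordinates are relative, in [0, 1]).
    """
    assert len(row) == 7
    return {
        "batch_index": int(row[0]),  # always 0 for a minibatch of 1
        "label": int(row[1]),        # index into ssd/data/label_list
        "score": row[2],
        # Scale the relative coordinates to pixel coordinates.
        "box": (row[3] * image_width, row[4] * image_height,
                row[5] * image_width, row[6] * image_height),
    }

# Row 3 from the sample output above: label 15 is "person".
det = decode_row([0.0, 15.0, 0.998061, 0.091996, 0.0, 0.995694, 1.0], 300, 300)
```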

Reference

  1. tensorflow: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android
  2. caffe2: https://github.com/bwasti/AICamera
  3. caffe2: https://caffe2.ai/docs/mobile-integration.html

The C API fails to load the model: mapGet(type, creatorMap_, &creator) Unknown class type: data

ar_infer.paddle.zip
I used merge_model to generate a .paddle file for the C API to load.

paddle_init reported no error, so it should have succeeded, but paddle_gradient_machine_create_for_inference_with_parameters fails. The buf argument passed in is not empty, and size matches the size of the model file itself. The error log is as follows:

I0109 20:28:47.075923 3058023296 Util.cpp:166] commandline:  --use_gpu=False --pool_limit_size=0 
F0109 20:28:58.992383 3058023296 ClassRegistrar.h:65] Check failed: mapGet(type, creatorMap_, &creator) Unknown class type: data
*** Check failure stack trace: ***
2018-01-09 20:28:59.041616+0800 IphoneCom[3630:2417368] Task <A23BCE12-DE2B-4E27-92CD-C29598C63E49>.<2> finished with error - code: -1001
2018-01-09 20:28:59.047358+0800 IphoneCom[3630:2417138] TIC TCP Conn Failed [2:0x1c0176500]: 3:-9802 Err(-9802)
2018-01-09 20:28:59.048243+0800 IphoneCom[3630:2417138] Task <A23BCE12-DE2B-4E27-92CD-C29598C63E49>.<2> HTTP load failed (error code: -999 [1:89])
    @        0x1059942e0  google::LogMessage::Fail()
    @        0x105993408  google::LogMessage::SendToLog()
    @        0x105993bdc  google::LogMessage::Flush()
    @        0x1059971b4  google::LogMessageFatal::~LogMessageFatal()
    @        0x105994620  google::LogMessageFatal::~LogMessageFatal()
    @        0x105918f10  _ZN6paddle14ClassRegistrarINS_5LayerEJNS_11LayerConfigEEE12createByTypeERKNSt3__112basic_stringIcNS4_11char_traitsIcEENS4_9allocatorIcEEEES2_
    @        0x105918d94  paddle::Layer::create()
    @        0x10591a75c  paddle::NeuralNetwork::init()::$_0::operator()()
    @        0x10591a010  paddle::NeuralNetwork::init()
    @        0x10591f250  paddle::GradientMachine::create()
    @        0x105917410  paddle_gradient_machine_create_for_inference_with_parameters
    @        0x1043e0548  init_machine()
    @        0x1043e0280  -[BNLocationManager init]
    @        0x1043dff10  +[BNLocationManager GetInstance]
    @        0x104520878  +[BNCoreServices LocationService]
    @        0x10451c988  -[BNCoreServices doAfterInitEngine]
    @        0x1045195bc  __40-[BNCoreServices startSericesAsyn:fail:]_block_invoke_2
    @        0x10f87d2cc  _dispatch_call_block_and_release
    @        0x10f87d28c  _dispatch_client_callout
    @        0x10f881ea0  _dispatch_main_queue_callback_4CF
    @        0x1851b2544  <redacted>
    @        0x1851b0120  <redacted>
    @        0x1850cfe58  CFRunLoopRunSpecific
    @        0x186f7cf84  GSEventRunModal
    @        0x18e74f67c  UIApplicationMain
    @        0x102ee0c30  main
    @        0x184bec56c  <redacted>

benchmark failed because of the update of protobuf's version

See the detailed error:

Linking CXX executable inference
/home/work/.jumbo/bin/cmake -E cmake_link_script CMakeFiles/inference.dir/link.txt --verbose=1
/home/work/liuyiqun/install/android/toolchains/arm64-android-21/bin/aarch64-linux-android-g++  --sysroot=/home/work/liuyiqun/install/android/toolchains/arm64-android-21/sysroot  -ffunction-sections -fdata-sections -march=armv8-a  -O3 -DNDEBUG  -pie -fPIE -Wl,--gc-sections  CMakeFiles/inference.dir/inference.cc.o  -o inference  -L/home/work/liuyiqun/PaddlePaddle/Paddle/build_paddle/dist_android/lib/arm64-v8a  -L/home/work/liuyiqun/PaddlePaddle/Paddle/build_paddle/dist_android/third_party/gflags/lib/arm64-v8a  -L/home/work/liuyiqun/PaddlePaddle/Paddle/build_paddle/dist_android/third_party/glog/lib/arm64-v8a  -L/home/work/liuyiqun/PaddlePaddle/Paddle/build_paddle/dist_android/third_party/protobuf/lib/arm64-v8a  -L/home/work/liuyiqun/PaddlePaddle/Paddle/build_paddle/dist_android/third_party/zip/lib/arm64-v8a  -L/home/work/liuyiqun/PaddlePaddle/Paddle/build_paddle/dist_android/third_party/openblas/lib/arm64-v8a  -Wl,--start-group -Wl,--whole-archive -lpaddle_capi_layers -Wl,--no-whole-archive -lpaddle_capi_engine -Wl,--end-group -lglog -lgflags -lprotobuf -lz -lopenblas 
/home/work/liuyiqun/install/android/toolchains/arm64-android-21/bin/../lib/gcc/aarch64-linux-android/4.9.x/../../../../aarch64-linux-android/bin/ld: /lib64/libz.so.1: no version information available (required by /home/work/liuyiqun/install/android/toolchains/arm64-android-21/bin/../lib/gcc/aarch64-linux-android/4.9.x/../../../../aarch64-linux-android/bin/ld)
/home/work/liuyiqun/PaddlePaddle/Paddle/build_paddle/dist_android/third_party/protobuf/lib/arm64-v8a/libprotobuf.a(common.cc.o): In function `google::protobuf::internal::DefaultLogHandler(google::protobuf::LogLevel, char const*, int, std::string const&)':
common.cc:(.text._ZN6google8protobuf8internal17DefaultLogHandlerENS0_8LogLevelEPKciRKSs[_ZN6google8protobuf8internal17DefaultLogHandlerENS0_8LogLevelEPKciRKSs]+0xc0): undefined reference to `__android_log_write'
common.cc:(.text._ZN6google8protobuf8internal17DefaultLogHandlerENS0_8LogLevelEPKciRKSs[_ZN6google8protobuf8internal17DefaultLogHandlerENS0_8LogLevelEPKciRKSs]+0x12c): undefined reference to `__android_log_write'
collect2: error: ld returned 1 exit status
make[2]: *** [inference] Error 1
make[2]: Leaving directory `/home/work/liuyiqun/PaddlePaddle/Mobile/benchmark/tool/C/build_android'
make[1]: *** [CMakeFiles/inference.dir/all] Error 2
make[1]: Leaving directory `/home/work/liuyiqun/PaddlePaddle/Mobile/benchmark/tool/C/build_android'
make: *** [all] Error 2

Need to link Android's log library.

Problem reading images on an Android phone

@Xreki I built an image classification app based on the Android demo, but hit a problem when reading images. In Java the image is converted into a byte array, so I did the same, as in the code below. However, the length of the pixels byte array varies, even though the images are all 3*32*32, so the size should always be 3072. Why is that?

    public String infer(String img_path) {
        // Read the image into a Bitmap object
        Bitmap bitmap = BitmapFactory.decodeFile(img_path);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, baos);
        // Convert the image into a byte array
        byte[] pixels = baos.toByteArray();
        Log.i("pixels length", String.valueOf(pixels.length));
        try {
            baos.close();
        } catch (IOException e) {
            e.printStackTrace();
        }

        if (mRgbBytes == null) {
            mRgbBytes = new byte[3072];
        }
        for (int i = 0; i < pixels.length; i++) {
            mRgbBytes[i] = pixels[i];
            Log.i("ImageRecognition", String.valueOf(mRgbBytes[i]));
        }
        // Get the inference result
        float[] result = infer(mRgbBytes);
        // Pick the class with the highest probability
        float max = 0;
        int number = 0;
        for (int i = 0; i < result.length; i++) {
            if (result[i] > max) {
                max = result[i];
                number = i;
            }
        }
        String msg = "category: " + clasName[number] + ", confidence: " + max;
        Log.i("ImageRecognition", msg);

        return msg;
    }

Because the byte array above does not fill the matrix, many entries of array end up as 0:

    for (size_t c = 0; c < 3; ++c) {
        for (size_t h = 0; h < 32; ++h) {
            for (size_t w = 0; w < 32; ++w) {
                array[index] =
                        static_cast<float>(((pixels[(h * 32 + w) * 3 + c]) - means[c]) / 255.0);
                LOGI("array_src:%f", array[index]);
                index++;
            }
        }
    }

How can I solve this? Looking forward to an answer.
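For reference, the indexing in the C++ loop above reads interleaved HWC bytes and writes Paddle's CHW float layout. The same transform as a small Python sketch (hwc_to_chw is illustrative; the 3x32x32 sizes are the values from this issue):

```python
def hwc_to_chw(pixels, height=32, width=32, channels=3, means=(0, 0, 0)):
    """Convert interleaved HWC pixel bytes into a flat CHW float array,
    subtracting a per-channel mean and scaling to [0, 1]."""
    array = []
    for c in range(channels):
        for h in range(height):
            for w in range(width):
                # Same index math as the C++ loop: pixels[(h*W + w)*C + c]
                array.append(
                    (pixels[(h * width + w) * channels + c] - means[c]) / 255.0)
    return array

pixels = [i % 256 for i in range(3 * 32 * 32)]  # fake interleaved HWC image
array = hwc_to_chw(pixels)                       # len(array) == 3072
```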

add merge batch normalization tools

When training is finished, we can merge the batch normalization layer into the preceding convolution or fully connected layer. Merging batch normalization speeds up the forward pass by around 30%.
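The folding itself is a small piece of algebra: for each output channel with BN parameters gamma, beta, mean, var, scale the layer's weights by gamma / sqrt(var + eps) and fold the shift into the bias. A minimal pure-Python sketch (not the actual Paddle tool):

```python
import math

def merge_batch_norm(weights, bias, gamma, beta, mean, var, eps=1e-5):
    """Fold a per-channel batch norm into the preceding conv/FC layer.

    weights: per-output-channel lists of weights; bias, gamma, beta,
    mean, var are per-output-channel scalars.  Returns
    (merged_weights, merged_bias) such that the merged layer computes
    batch_norm(layer(x)) for every input x.
    """
    merged_w, merged_b = [], []
    for w_c, b, g, bt, m, v in zip(weights, bias, gamma, beta, mean, var):
        scale = g / math.sqrt(v + eps)           # BN scaling factor
        merged_w.append([w * scale for w in w_c])
        merged_b.append((b - m) * scale + bt)    # BN shift folded into bias
    return merged_w, merged_b
```

At inference time the BN layer then disappears entirely, which is where the forward-pass speedup comes from.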

make error when generating inference while building the Paddle Android demo

Following the Android documentation, step two (make) fails with the error below:

root@wyf-virtual-machine:/home/wyf/Paddle-Android/Mobile/benchmark/tool/C/build# make
Scanning dependencies of target inference
[100%] Building CXX object CMakeFiles/inference.dir/inference.cc.o
Linking CXX executable inference
/home/wyf/Paddle-Android/arm64_standalone_toolchain/bin/../lib/gcc/aarch64-linux-android/4.9.x/../../../../aarch64-linux-android/bin/ld: cannot find -lprotobuf
collect2: error: ld returned 1 exit status
make[2]: *** [inference] 错误 1
make[1]: *** [CMakeFiles/inference.dir/all] 错误 2
make: *** [all] 错误 2

I then looked at "Fix linking problem of protobuf-3.2", but did not understand how CMakeLists.txt should be modified. My ANDROID_ABI = arm64-v8a. Could you explain in more detail how to modify it? Thanks.

add rounding evaluating

Given floating-point parameters V, our goal is first to represent V as 8-bit integers V'. We then transform V' back into approximate high-precision values by performing the inverse of the quantization operation. Finally, we gzip the quantized-and-dequantized model. The whole process can reduce the model size by about 70%.
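Assuming a min/max linear quantization scheme (the exact scheme the tool uses is not stated here), the round trip can be sketched as:

```python
def quantize(values):
    """Map float values to 8-bit integers via min/max linear quantization."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0   # avoid division by zero for constants
    q = [int(round((v - lo) / scale)) for v in values]
    return q, lo, scale

def dequantize(q, lo, scale):
    """Inverse operation: recover approximate float values."""
    return [lo + qi * scale for qi in q]

params = [0.0, 0.1, -0.3, 0.7]
q, lo, scale = quantize(params)
restored = dequantize(q, lo, scale)
# Each restored value is within half a quantization step of the original.
```

gzip then compresses the dequantized model well because each tensor contains at most 256 distinct values.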

demo linux

I built demo/linux on a Raspberry Pi and got the following error saying that protobuf cannot be found, but protobuf is clearly there. Why is this?

-- Found PaddlePaddle (include: /usr/include; library: /usr/lib/libpaddle_capi_layers.a, /usr/lib/libpaddle_capi_engine.a)
CMake Warning at FindPaddle.cmake:70 (message):
Cannot find protobuf under /home/paddle/_install/third_party
Call Stack (most recent call first):
FindPaddle.cmake:75 (third_party_library)
CMakeLists.txt:19 (find_package)

-- Found glog: /home/paddle/_install/third_party/glog/lib/libglog.a
-- Found openblas: /home/paddle/_install/third_party/openblas/lib/libopenblas.a
-- Found gflags: /home/paddle/_install/third_party/gflags/lib/libgflags.a
-- Found zlib: /home/paddle/_install/third_party/zlib/lib/libz.a
-- Configuring done
-- Generating done
-- Build files have been written to: /home/paddle/Mobile/Demo/linux/build

Problem initializing PaddlePaddle on Android

Looking at the Android example, the initialization code should be this:

void ImageRecognizer::init_paddle() {
  static bool called = false;
  if (!called) {
    // Initialize Paddle
    char* argv[] = {const_cast<char*>("--use_gpu=False"),
                    const_cast<char*>("--pool_limit_size=0")};
    CHECK(paddle_init(2, (char**)argv));
    called = true;
  }
}

What does --pool_limit_size=0 mean? And can I specify whether to use the GPU when initializing PaddlePaddle?
The code I found that specifies whether to use the GPU is this:

paddle_matrix mat = paddle_matrix_create(
    /* sample_num */ 1,
    /* size */ normed_channel_ * normed_height_ * normed_width_,
    /* useGPU */ false);

Cannot use |= with paddle_error

We use |= on paddle_error values in our benchmark:

inline paddle_error& operator|=(paddle_error& a, paddle_error b) {
  return a =
      static_cast<paddle_error>(static_cast<int>(a) | static_cast<int>(b));
}

However, the definition of paddle_error is:
https://github.com/PaddlePaddle/Paddle/blob/c8d4efb20eecab5a2edd55ccf923dac78afc6d78/paddle/capi/error.h#L23-L30

typedef enum {
  kPD_NO_ERROR = 0,
  kPD_NULLPTR = 1,
  kPD_OUT_OF_RANGE = 2,
  kPD_PROTOBUF_ERROR = 3,
  kPD_NOT_SUPPORTED = 4,
  kPD_UNDEFINED_ERROR = -1,
} paddle_error;

Different error codes can share set bits, so if we use |= on two paddle_error variables we may lose the error information.

Also, we need to print the error information, or it will be difficult for users to debug, like in #48 .
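The hazard is easy to demonstrate. Modeling the enum values from error.h in Python shows how |= manufactures an unrelated error code, while keeping the first non-zero error (the combine helper here is a suggestion, not existing Paddle API) does not:

```python
# Values copied from paddle/capi/error.h.
NO_ERROR, NULLPTR, OUT_OF_RANGE, PROTOBUF_ERROR = 0, 1, 2, 3

# Bitwise OR of two valid errors produces an unrelated third error:
assert (NULLPTR | OUT_OF_RANGE) == PROTOBUF_ERROR   # 1 | 2 == 3

def combine(a, b):
    """Keep the first non-zero error instead of OR-ing bit patterns."""
    return a if a != NO_ERROR else b

assert combine(NULLPTR, OUT_OF_RANGE) == NULLPTR
assert combine(NO_ERROR, OUT_OF_RANGE) == OUT_OF_RANGE
```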

Error when building linux ssd demo

I followed this guide to build the SSD demo on Ubuntu 16.04.
Everything was fine until make; the error is:

/home/zfq/Mobile/Demo/linux/paddle_image_recognizer.h:23:59: error: ‘const char* paddle_error_string(paddle_error)’ was declared ‘extern’ and later ‘static’ [-fpermissive]

I am new at C++, so could anyone help me, please, thanks!

Training error

The error is as follows:

    net = mobile_net(image)
  File "mobilenet_pruning.py", line 70, in mobile_net
    stride=1)
  File "mobilenet_pruning.py", line 43, in depthwise_separable
    pa0 = ParamAttr(update_hooks = Hook('dynamic_pruning', sparsity_upper_bound=0.75))
TypeError: __init__() got an unexpected keyword argument 'sparsity_upper_bound'

"Unknown class type: recurrent_layer_group" when using the C API on iOS

An "Unknown class type: recurrent_layer_group" error occurs when I run inference with an RNN model.
capi: downloaded from the pre-compiled libs in paddle mobile
iOS: iPhone 7

F0322 17:16:18.261178 3028687744 ClassRegistrar.h:65] Check failed: mapGet(type, creatorMap_, &creator) Unknown class type: recurrent_layer_group
*** Check failure stack trace: ***
@ 0x102c7e0bc google::LogMessage::Fail()
@ 0x102c7d1e4 google::LogMessage::SendToLog()
@ 0x102c7d9b8 google::LogMessage::Flush()
@ 0x102c80f90 google::LogMessageFatal::~LogMessageFatal()
@ 0x102c7e3fc google::LogMessageFatal::~LogMessageFatal()
@ 0x102bc0a54 ZN6paddle14ClassRegistrarINS_5LayerEJNS_11LayerConfigEEE12createByTypeERKNSt3__112basic_stringIcNS4_11char_traitsIcEENS4_9allocatorIcEEEES2
@ 0x102bc08d8 paddle::Layer::create()
@ 0x102be650c paddle::NeuralNetwork::init()::$_0::operator()()
@ 0x102be5dc0 paddle::NeuralNetwork::init()
@ 0x102beaae0 paddle::GradientMachine::create()
@ 0x102c0689c paddle_gradient_machine_create_for_inference_with_parameters

Refine `models` directory

The models directory needs two parts. The first is standard model configuration files, such as MobileNet and ResNet, that can be used as benchmark test data. The second is well-trained model parameter files, such as the MobileNet-SSD PASCAL model, which can be directly converted into inference model files for deployment to mobile.

Move tools into deployment

The deployment directory can contain the following two parts.

  1. Build optimizations for the Paddle Mobile inference library.
  2. Compilation of a Paddle training model into a Paddle inference model (see #51).

The num_filters of depthwise_separable seems wrong in mobilenet

The MobileNet paper says "We use depthwise convolutions to apply a single filter per each input channel (input depth)".
But in the PaddlePaddle implementation

    tmp = depthwise_separable(tmp,
                              num_filters1=32,
                              num_filters2=64,
                              num_groups=32,
                              stride=1, scale = scale)

num_filters1 is the same as num_groups and the input channel count is 32, so each input channel is processed by 32 filters. Therefore num_filters1 here should be 1 rather than 32.
I'd be glad to fix it if possible :)

refine the doc of the inference demo

The following command is confusing to users:

./inference --merged_model ./model/mobilenet.paddle --input_size 150528

1. What is input_size?
2. How do we generate mobilenet.paddle?

What do the parameters of this command mean? Did I make a mistake?

In the last step of this tutorial I ran ./inference --merged_model ./mobilenet.paddle --input_size 784, because I am using PaddlePaddle's handwritten digit recognition model, and got the error below. What does the first parameter of that command mean?

generic:/data/local/tmp # ./inference --merged_model ./mobilenet.paddle --input_size 784                            
WARNING: linker: /data/local/tmp/inference: unused DT entry: type 0xf arg 0x826
I1208 12:45:52.141201  2114 Util.cpp:166] commandline:  
Time of init paddle 1085.49 ms.
Time of create from merged model file 300.789 ms.
Time of forward time 0.0035536 ms.
paddle forward error!

Compile training model into inference model

Based on the previous survey, various inference frameworks have a phase that transforms a training model into an inference model. Like Compilation in AndroidNN, Conversion in CoreML, Build in TensorRT, Converter in Tensorflow-Lite. Paddle Mobile also needs a compilation tool that transforms the training model into an inference model.

This compilation tool needs to be able to support the following features:

  • Compile Paddle's training config and parameter files into one inference file.
  • Support rounding-based parameter compression.
  • Support model optimization by merging batch normalization.
  • Support float32 to float16 parameter compression.
  • Support float32 to uint8 parameter compression.
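For the float16 item, the essence is packing each parameter into 2 bytes instead of 4. A Python sketch using the standard struct module's half-float format (Python 3.6+; illustrative only, not the actual compilation tool):

```python
import struct

def compress_to_fp16(values):
    """Pack float values as IEEE 754 half floats (2 bytes each)."""
    return struct.pack('%de' % len(values), *values)

def decompress_from_fp16(data):
    """Unpack half floats back to Python floats (with fp16 precision loss)."""
    return list(struct.unpack('%de' % (len(data) // 2), data))

params = [0.5, -1.25, 0.125]            # all exactly representable in fp16
packed = compress_to_fp16(params)       # 6 bytes instead of 12 for float32
restored = decompress_from_fp16(packed)
```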

cmake error when installing the Android version of Paddle

Following the Inference demo documentation to configure Android, cmake reports the error below:

CMake Error at CMakeLists.txt:32 (project):
The CMAKE_CXX_COMPILER:

/Home/wyf/android-ndk-r14b/build/tools/arm64_standlone_toolchain/bin/aarch64-linux-android-g++

is not a full path to an existing compiler tool.

Tell CMake where to find the compiler by setting either the environment
variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path
to the compiler, or to the compiler name if it is in the PATH.

CMake Error at CMakeLists.txt:32 (project):
The CMAKE_C_COMPILER:

/Home/wyf/android-ndk-r14b/build/tools/arm64_standlone_toolchain/bin/aarch64-linux-android-gcc

is not a full path to an existing compiler tool.

Tell CMake where to find the compiler by setting either the environment
variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path to
the compiler, or to the compiler name if it is in the PATH.

-- Configuring incomplete, errors occurred!

What is the cause? Is a path wrong, or am I missing a step?

Clarifying a few concepts when using Paddle in an Android app

The sample program says that to generate a merged model (i.e. a *.paddle file) you need to prepare a model configuration file (.py) and a parameter file (.tar.gz).
Questions:
1. The parameter file (.tar.gz) is generated by training on a PC, right?
2. What exactly is the model configuration file mobilenet.py? The hyperlink is broken, so I cannot see it.
3. When running inference on a PC, this "model configuration file" is not needed, right?
4. I have developed Android apps with Android Studio on Windows. Do I just put the Paddle library and the merged *.paddle file into the corresponding folders, then call the API from Android Studio and package an apk? Or do I need to follow step 4 of the [sample program]?
Thank you.
