paddlepaddle / mobile
Embedded and Mobile Deployment
License: Apache License 2.0
FlatBuffers is the serialization format used by TensorFlow Lite: https://google.github.io/flatbuffers/
Question: if I only have a single model, can I skip this step of using the model config file (e.g. config.py) and the model parameters (e.g. params_pass_0.tar.gz) to obtain a merged model file?
Dear PaddlePaddle users:
2017 has passed. Thank you for growing with PaddlePaddle in deep learning, whether through applications, contributions, reviews, or bug fixes; everyone's effort matters to PaddlePaddle. For the new year, PaddlePaddle has prepared some small gifts for you, to welcome an even better 2018 together. PaddlePaddle, let's go!
To claim a gift:
Please send the following information to [email protected]:
["name" + "GitHub account" + "shipping address for the gift" + "contact phone number" + "custom hoodie size (S: 160, M: 165, L: 170, XL: 175, XXL: 180)"] → send to [email protected]
We will mail a PaddlePaddle gift to the address in your email.
You can also reply to this issue with "your wishes for PaddlePaddle" or "PaddlePaddle, let's go!" to remind us to check for your email.
Every user who sends an email will receive a PaddlePaddle gift bag containing a logo hoodie, stickers, and a commemorative badge.
Here are the links to my previous survey so far:
Since the arm64 architecture requires Android API level 21 or higher: if only armeabi-v7a is used, what is the minimum supported Android API level?
To make a model work with TensorFlow Lite, we need a converter that can convert any standard model into the tflite model format.
I created an Android project using the official FindPaddle.cmake and CMakeLists.txt, modifying only the part below, but it reports an error and the project cannot be built:
add_library( # Sets the name of the library.
paddle_image_recognizer
# Sets the library as a shared library.
SHARED
# Provides a relative path to your source file(s).
src/main/cpp/image_recognizer_jni.cpp
src/main/cpp/paddle_image_recognizer.cpp
src/main/cpp/binary_reader.cpp
src/main/cpp/image_utils.cpp )
Output log: CMakeOutput.log
If I do not use the official CMakeLists.txt, the error above does not occur — probably because I did not modify its contents, so the PaddlePaddle library is never loaded, and then the following error is reported:
We are planning to build an SSD (Single Shot MultiBox Detector) demo running on Android and iOS. PaddlePaddle has integrated the SSD algorithm and posted an example demonstrating how to use the SSD model for object detection: https://github.com/PaddlePaddle/models/tree/develop/ssd.
To show PaddlePaddle's ability on mobile, we choose to run inference of the SSD model on Android and iOS with the following goals:
Resize the input image to 300 x 300 and subtract the channel means, [104, 117, 124].
Wrap inference in an ImageRecognizer with three interfaces: init(), infer(), release().
Input: pixels of a colored image, stored channel by channel as [RRRRRR][GGGGGG][BBBBBB].
Output
The inference's output type is paddle_matrix. The height of the matrix is the number of detected objects, and the width is fixed to 7: the image id (e.g. 0.0), the category label (e.g. person), the confidence score, and (xmin, ymin, xmax, ymax), the relative coordinates of the rectangle.
$ ./build/vgg_ssd_demo
I1107 06:36:18.600690 16092 Util.cpp:166] commandline: --use_gpu=False
Prob: 7 x 7
row 0: 0.000000 5.000000 0.010291 0.605270 0.749781 0.668338 0.848811
row 1: 0.000000 12.000000 0.530176 0.078279 0.640581 0.721344 0.995839
row 2: 0.000000 12.000000 0.017214 0.069217 0.000000 1.000000 0.972674
row 3: 0.000000 15.000000 0.998061 0.091996 0.000000 0.995694 1.000000
row 4: 0.000000 15.000000 0.040476 0.835338 0.014217 1.000000 0.446740
row 5: 0.000000 15.000000 0.010271 0.718238 0.006743 0.993035 0.659929
row 6: 0.000000 18.000000 0.012227 0.069217 0.000000 1.000000 0.972674
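Based on the column description above (image id, label, score, then the relative box), each row of the output matrix can be decoded with a small sketch like this; the function name is made up for illustration:

```python
# Decode one row of the SSD paddle_matrix output (width fixed to 7).
# Columns, per the description above: image id, label id, score,
# xmin, ymin, xmax, ymax (coordinates relative to the image size).
def decode_row(row):
    image_id, label, score, xmin, ymin, xmax, ymax = row
    return {
        "image_id": int(image_id),
        "label": int(label),
        "score": score,
        "box": (xmin, ymin, xmax, ymax),
    }

# Row 3 from the sample output above: label 15 detected with score ~0.998.
det = decode_row([0.0, 15.0, 0.998061, 0.091996, 0.0, 0.995694, 1.0])
print(det["label"], det["score"])
```

A downstream "Show" step would keep only rows whose score exceeds some threshold before drawing the boxes.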
Show
The rectangle, category, and score of each detected object should be displayed correctly, like
ar_infer.paddle.zip
I used merge_model to generate a .paddle file for the C-API to load. init() reported no error, so it should have succeeded; paddle_gradient_machine_create_for_inference_with_parameters failed. The buf passed in is not null, and its size matches the size of the model file itself. The error log is as follows:
I0109 20:28:47.075923 3058023296 Util.cpp:166] commandline: --use_gpu=False --pool_limit_size=0
F0109 20:28:58.992383 3058023296 ClassRegistrar.h:65] Check failed: mapGet(type, creatorMap_, &creator) Unknown class type: data
*** Check failure stack trace: ***
2018-01-09 20:28:59.041616+0800 IphoneCom[3630:2417368] Task <A23BCE12-DE2B-4E27-92CD-C29598C63E49>.<2> finished with error - code: -1001
2018-01-09 20:28:59.047358+0800 IphoneCom[3630:2417138] TIC TCP Conn Failed [2:0x1c0176500]: 3:-9802 Err(-9802)
2018-01-09 20:28:59.048243+0800 IphoneCom[3630:2417138] Task <A23BCE12-DE2B-4E27-92CD-C29598C63E49>.<2> HTTP load failed (error code: -999 [1:89])
@ 0x1059942e0 google::LogMessage::Fail()
@ 0x105993408 google::LogMessage::SendToLog()
@ 0x105993bdc google::LogMessage::Flush()
@ 0x1059971b4 google::LogMessageFatal::~LogMessageFatal()
@ 0x105994620 google::LogMessageFatal::~LogMessageFatal()
@ 0x105918f10 _ZN6paddle14ClassRegistrarINS_5LayerEJNS_11LayerConfigEEE12createByTypeERKNSt3__112basic_stringIcNS4_11char_traitsIcEENS4_9allocatorIcEEEES2_
@ 0x105918d94 paddle::Layer::create()
@ 0x10591a75c paddle::NeuralNetwork::init()::$_0::operator()()
@ 0x10591a010 paddle::NeuralNetwork::init()
@ 0x10591f250 paddle::GradientMachine::create()
@ 0x105917410 paddle_gradient_machine_create_for_inference_with_parameters
@ 0x1043e0548 init_machine()
@ 0x1043e0280 -[BNLocationManager init]
@ 0x1043dff10 +[BNLocationManager GetInstance]
@ 0x104520878 +[BNCoreServices LocationService]
@ 0x10451c988 -[BNCoreServices doAfterInitEngine]
@ 0x1045195bc __40-[BNCoreServices startSericesAsyn:fail:]_block_invoke_2
@ 0x10f87d2cc _dispatch_call_block_and_release
@ 0x10f87d28c _dispatch_client_callout
@ 0x10f881ea0 _dispatch_main_queue_callback_4CF
@ 0x1851b2544 <redacted>
@ 0x1851b0120 <redacted>
@ 0x1850cfe58 CFRunLoopRunSpecific
@ 0x186f7cf84 GSEventRunModal
@ 0x18e74f67c UIApplicationMain
@ 0x102ee0c30 main
@ 0x184bec56c <redacted>
Mobilenet.py and resnet.py are duplicated in these two directories.
See the detailed error:
Linking CXX executable inference
/home/work/.jumbo/bin/cmake -E cmake_link_script CMakeFiles/inference.dir/link.txt --verbose=1
/home/work/liuyiqun/install/android/toolchains/arm64-android-21/bin/aarch64-linux-android-g++ --sysroot=/home/work/liuyiqun/install/android/toolchains/arm64-android-21/sysroot -ffunction-sections -fdata-sections -march=armv8-a -O3 -DNDEBUG -pie -fPIE -Wl,--gc-sections CMakeFiles/inference.dir/inference.cc.o -o inference -L/home/work/liuyiqun/PaddlePaddle/Paddle/build_paddle/dist_android/lib/arm64-v8a -L/home/work/liuyiqun/PaddlePaddle/Paddle/build_paddle/dist_android/third_party/gflags/lib/arm64-v8a -L/home/work/liuyiqun/PaddlePaddle/Paddle/build_paddle/dist_android/third_party/glog/lib/arm64-v8a -L/home/work/liuyiqun/PaddlePaddle/Paddle/build_paddle/dist_android/third_party/protobuf/lib/arm64-v8a -L/home/work/liuyiqun/PaddlePaddle/Paddle/build_paddle/dist_android/third_party/zip/lib/arm64-v8a -L/home/work/liuyiqun/PaddlePaddle/Paddle/build_paddle/dist_android/third_party/openblas/lib/arm64-v8a -Wl,--start-group -Wl,--whole-archive -lpaddle_capi_layers -Wl,--no-whole-archive -lpaddle_capi_engine -Wl,--end-group -lglog -lgflags -lprotobuf -lz -lopenblas
/home/work/liuyiqun/install/android/toolchains/arm64-android-21/bin/../lib/gcc/aarch64-linux-android/4.9.x/../../../../aarch64-linux-android/bin/ld: /lib64/libz.so.1: no version information available (required by /home/work/liuyiqun/install/android/toolchains/arm64-android-21/bin/../lib/gcc/aarch64-linux-android/4.9.x/../../../../aarch64-linux-android/bin/ld)
/home/work/liuyiqun/PaddlePaddle/Paddle/build_paddle/dist_android/third_party/protobuf/lib/arm64-v8a/libprotobuf.a(common.cc.o): In function `google::protobuf::internal::DefaultLogHandler(google::protobuf::LogLevel, char const*, int, std::string const&)':
common.cc:(.text._ZN6google8protobuf8internal17DefaultLogHandlerENS0_8LogLevelEPKciRKSs[_ZN6google8protobuf8internal17DefaultLogHandlerENS0_8LogLevelEPKciRKSs]+0xc0): undefined reference to `__android_log_write'
common.cc:(.text._ZN6google8protobuf8internal17DefaultLogHandlerENS0_8LogLevelEPKciRKSs[_ZN6google8protobuf8internal17DefaultLogHandlerENS0_8LogLevelEPKciRKSs]+0x12c): undefined reference to `__android_log_write'
collect2: error: ld returned 1 exit status
make[2]: *** [inference] Error 1
make[2]: Leaving directory `/home/work/liuyiqun/PaddlePaddle/Mobile/benchmark/tool/C/build_android'
make[1]: *** [CMakeFiles/inference.dir/all] Error 2
make[1]: Leaving directory `/home/work/liuyiqun/PaddlePaddle/Mobile/benchmark/tool/C/build_android'
make: *** [all] Error 2
We need to link Android's log library (-llog) to resolve the undefined references to __android_log_write.
@Xreki I built an image-classification demo based on the Android demo, but hit a problem reading the image. In Java the image is converted into a byte array, so I did the same, as in the code below. But the length of the pixels byte array varies, even though every image is 3*32*32, so the size should always be 3072. Why is that?
public String infer(String img_path) {
    // Read the image into a Bitmap object
    Bitmap bitmap = BitmapFactory.decodeFile(img_path);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, baos);
    // Turn the image into a byte array
    byte[] pixels = baos.toByteArray();
    Log.i("pixels length", String.valueOf(pixels.length));
    try {
        baos.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    if (mRgbBytes == null) {
        mRgbBytes = new byte[3072];
    }
    for (int i = 0; i < pixels.length; i++) {
        mRgbBytes[i] = pixels[i];
        Log.i("ImageRecognition", String.valueOf(mRgbBytes[i]));
    }
    // Get the inference result
    float[] result = infer(mRgbBytes);
    // Pick out the class with the highest probability
    float max = 0;
    int number = 0;
    for (int i = 0; i < result.length; i++) {
        if (result[i] > max) {
            max = result[i];
            number = i;
        }
    }
    String msg = "Category: " + clasName[number] + ", confidence: " + max;
    Log.i("ImageRecognition", msg);
    return msg;
}
It is because the buffer above does not fill the matrix, so much of the array ends up as zeros:
for (size_t c = 0; c < 3; ++c) {
for (size_t h = 0; h < 32; ++h) {
for (size_t w = 0; w < 32; ++w) {
array[index] =
static_cast<float>(((pixels[(h * 32 + w) * 3 + c]) - means[c]) / 255.0);
LOGI("array_src:%f", array[index]);
index++;
}
}
}
How can this be solved??? Please help.
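For reference, the C++ loop above (interleaved HWC bytes to planar CHW floats with mean subtraction) can be sketched in Python. The 32x32x3 shape comes from the snippet; the per-channel means and the uniform pixel value 128 are made-up placeholders:

```python
H, W, C = 32, 32, 3
means = [125.0, 123.0, 114.0]  # hypothetical per-channel means

# pixels: interleaved RGB bytes of length H*W*C (all 128 for this demo)
pixels = [128] * (H * W * C)

array = [0.0] * (C * H * W)
index = 0
for c in range(C):        # planar output: all R values, then G, then B
    for h in range(H):
        for w in range(W):
            array[index] = (pixels[(h * W + w) * C + c] - means[c]) / 255.0
            index += 1

print(len(array))  # 3072 = 3*32*32, the size the model expects
```

Note that the input here must be raw decoded pixels of fixed length H*W*C; a PNG-compressed byte stream has a variable length and cannot be fed to this loop directly.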
When the training process is finished, we can merge the batch normalization into the preceding convolution or fully connected layer. Merging batch normalization speeds up the forward pass by around 30%.
paddle_merge_model can integrate the model config file and the parameters into one file.
It would be very convenient if there were a script.
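As a sanity check on why the merge is valid, folding batch normalization into the preceding layer's weights can be sketched in the per-channel scalar case; all the numbers below are made up:

```python
import math

# Per-channel layer output y = w*x + b, followed by batch norm:
#   bn(y) = gamma * (y - mean) / sqrt(var + eps) + beta
w, b = 0.8, 0.1
gamma, beta, mean, var, eps = 1.5, -0.2, 0.05, 0.9, 1e-5

# Fold BN into the layer: one multiply-add replaces two ops at inference.
scale = gamma / math.sqrt(var + eps)
w_folded = w * scale
b_folded = (b - mean) * scale + beta

x = 0.37
y_two_ops = gamma * ((w * x + b) - mean) / math.sqrt(var + eps) + beta
y_folded = w_folded * x + b_folded
print(abs(y_two_ops - y_folded) < 1e-12)  # True: the fold is exact
```

Because BN uses fixed statistics at inference time, the folded layer is mathematically identical while skipping the normalization pass, which is where the forward-time saving comes from.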
I downloaded the demo on the Android branch. After building and installing it, I found it behaves differently from the iOS version: the Android camera view does not recognize any objects.
Following the Android documentation, the second step (make) reports the error below:
root@wyf-virtual-machine:/home/wyf/Paddle-Android/Mobile/benchmark/tool/C/build# make
Scanning dependencies of target inference
[100%] Building CXX object CMakeFiles/inference.dir/inference.cc.o
Linking CXX executable inference
/home/wyf/Paddle-Android/arm64_standalone_toolchain/bin/../lib/gcc/aarch64-linux-android/4.9.x/../../../../aarch64-linux-android/bin/ld: cannot find -lprotobuf
collect2: error: ld returned 1 exit status
make[2]: *** [inference] Error 1
make[1]: *** [CMakeFiles/inference.dir/all] Error 2
make: *** [all] Error 2
Then I looked at "Fix linking problem of protobuf-3.2", but I could not figure out how the CMakeLists.txt was modified. My ANDROID_ABI = arm64-v8a. Could you explain in more detail how to modify it? Thanks.
Given floating-point parameters V, our goal is first to represent V as 8-bit integers V'. Then we transform V' back into an approximate high-precision value by performing the inverse of the quantization operation. Finally, we gzip the quantized-and-dequantized model. The whole process reduces our model size by about 70%.
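A generic min/max linear quantization round trip can be sketched as follows; this illustrates the scheme described above and is not PaddlePaddle's actual rounding tool:

```python
# Quantize floats to 8-bit integers via min/max linear mapping,
# then dequantize back to approximate high-precision values.
def quantize(values):
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0   # guard against constant input
    q = [round((v - lo) / scale) for v in values]  # integers in 0..255
    return q, lo, scale

def dequantize(q, lo, scale):
    return [lo + qi * scale for qi in q]

V = [-1.0, -0.1, 0.0, 0.42, 1.0]
q, lo, scale = quantize(V)
V_approx = dequantize(q, lo, scale)
err = max(abs(a - b) for a, b in zip(V, V_approx))
print(err <= scale / 2 + 1e-9)  # True: error bounded by half a step
```

After this round trip the parameters take only 256 distinct values, so a general-purpose compressor such as gzip can shrink the model file substantially even though it still stores floats.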
I am building demo/linux on a Raspberry Pi and get the error below saying protobuf cannot be found, but protobuf is clearly there. Why?
-- Found PaddlePaddle (include: /usr/include; library: /usr/lib/libpaddle_capi_layers.a, /usr/lib/libpaddle_capi_engine.a)
CMake Warning at FindPaddle.cmake:70 (message):
Cannot find protobuf under /home/paddle/_install/third_party
Call Stack (most recent call first):
FindPaddle.cmake:75 (third_party_library)
CMakeLists.txt:19 (find_package)
-- Found glog: /home/paddle/_install/third_party/glog/lib/libglog.a
-- Found openblas: /home/paddle/_install/third_party/openblas/lib/libopenblas.a
-- Found gflags: /home/paddle/_install/third_party/gflags/lib/libgflags.a
-- Found zlib: /home/paddle/_install/third_party/zlib/lib/libz.a
-- Configuring done
-- Generating done
-- Build files have been written to: /home/paddle/Mobile/Demo/linux/build
Looking at the Android example, the initialization code should be this section. What does --pool_limit_size=0 mean? Can I specify whether to use the GPU when initializing PaddlePaddle? The code that decides whether to use the GPU seems to be this section:
Mobile/Demo/Android/AICamera/app/src/main/cpp/paddle_image_recognizer.cpp
Lines 239 to 242 in 4771867
We apply |= to paddle_error values in our benchmark.
Mobile/benchmark/tool/C/inference.cc
Lines 21 to 24 in 6b68be7
However, the definition of paddle_error is:
https://github.com/PaddlePaddle/Paddle/blob/c8d4efb20eecab5a2edd55ccf923dac78afc6d78/paddle/capi/error.h#L23-L30
typedef enum {
kPD_NO_ERROR = 0,
kPD_NULLPTR = 1,
kPD_OUT_OF_RANGE = 2,
kPD_PROTOBUF_ERROR = 3,
kPD_NOT_SUPPORTED = 4,
kPD_UNDEFINED_ERROR = -1,
} paddle_error;
Different error values may share bits, so if we apply |= to two paddle_error values we may lose the error information.
Also, we need to print the error message; otherwise it is difficult for users to debug, as in #48.
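The problem is easy to demonstrate with the enum values copied from the definition above:

```python
# paddle_error codes, from the enum quoted above.
kPD_NO_ERROR = 0
kPD_NULLPTR = 1
kPD_OUT_OF_RANGE = 2
kPD_PROTOBUF_ERROR = 3
kPD_NOT_SUPPORTED = 4
kPD_UNDEFINED_ERROR = -1

status = kPD_NO_ERROR
status |= kPD_NULLPTR        # 0b001
status |= kPD_OUT_OF_RANGE   # 0b010
# 1 | 2 == 3, so two distinct failures masquerade as a protobuf error.
print(status == kPD_PROTOBUF_ERROR)

# -1 has all bits set in two's complement, so OR-ing it in is absorbing:
print((kPD_UNDEFINED_ERROR | kPD_NOT_SUPPORTED) == -1)
```

The codes are sequential integers, not one-hot flags, so ORing them produces valid-looking but wrong codes; checking and reporting each call's return value individually avoids this.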
https://github.com/PaddlePaddle/Mobile/tree/develop/benchmark/tool/C
Refer to this document to compile the Android version of PaddlePaddle.
The link to this document is dead. What does PADDLE_ROOT mean in the cmake arguments?
cmake .. \
-DANDROID_ABI=arm64-v8a \
-DANDROID_STANDALONE_TOOLCHAIN=your/path/to/arm64_standalone_toolchain \
-DPADDLE_ROOT=The output path generated in the first step \
-DCMAKE_BUILD_TYPE=MinSizeRel
Training mobilenet_pruning: compared with plain mobilenet, the model only shrinks from 12 MB to 11 MB, not to the 4.3 MB mentioned there. Why is that?
TensorFlow Lite already supports this; it would be great if Paddle officially supported such a convert tool too.
I followed this guide to build the SSD demo on Ubuntu 16.04. Everything is OK until make, where the error is:
/home/zfq/Mobile/Demo/linux/paddle_image_recognizer.h:23:59: error: ‘const char* paddle_error_string(paddle_error)’ was declared ‘extern’ and later ‘static’ [-fpermissive]
I am new to C++, so could anyone help me, please? Thanks!
There are two links under How to build PaddlePaddle for mobile. They are
And they are both broken right now. This might be related to the recent rearrangement in Paddle.
add batch normalization and rounding tool to merge_model tools
The error is as follows:
net = mobile_net(image)
File "mobilenet_pruning.py", line 70, in mobile_net
stride=1)
File "mobilenet_pruning.py", line 43, in depthwise_separable
pa0 = ParamAttr(update_hooks = Hook('dynamic_pruning', sparsity_upper_bound=0.75))
TypeError: __init__() got an unexpected keyword argument 'sparsity_upper_bound'
" Unknown class type: recurrent_layer_group" error occur when I run inference using RNN model
capi: download from the pre-compiled libs in paddle mobile
ios:iphone7
F0322 17:16:18.261178 3028687744 ClassRegistrar.h:65] Check failed: mapGet(type, creatorMap_, &creator) Unknown class type: recurrent_layer_group
*** Check failure stack trace: ***
@ 0x102c7e0bc google::LogMessage::Fail()
@ 0x102c7d1e4 google::LogMessage::SendToLog()
@ 0x102c7d9b8 google::LogMessage::Flush()
@ 0x102c80f90 google::LogMessageFatal::~LogMessageFatal()
@ 0x102c7e3fc google::LogMessageFatal::~LogMessageFatal()
@ 0x102bc0a54 _ZN6paddle14ClassRegistrarINS_5LayerEJNS_11LayerConfigEEE12createByTypeERKNSt3__112basic_stringIcNS4_11char_traitsIcEENS4_9allocatorIcEEEES2_
@ 0x102bc08d8 paddle::Layer::create()
@ 0x102be650c paddle::NeuralNetwork::init()::$_0::operator()()
@ 0x102be5dc0 paddle::NeuralNetwork::init()
@ 0x102beaae0 paddle::GradientMachine::create()
@ 0x102c0689c paddle_gradient_machine_create_for_inference_with_parameters
The models directory needs to contain two parts. The first part is some standard model configuration files, such as MobileNet, ResNet, etc., that can be used as benchmark test data. The second part contains some well-trained model parameter files, such as the mobilenet ssd pascal model, which can be directly converted to inference model files for deployment to mobile.
The deployment directory can contain the following two parts.
The MobileNet paper says, "We use depthwise convolutions to apply a single filter per each input channel (input depth)." But in the PaddlePaddle implementation:
tmp = depthwise_separable(tmp,
num_filters1=32,
num_filters2=64,
num_groups=32,
stride=1, scale = scale)
num_filters1 is the same as num_groups, and the input channel count is 32, which would mean each input channel is processed by 32 filters. So num_filters1 here should be 1 rather than 32.
I'd be glad to fix it if possible :)
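Whether num_filters1 should be 1 or 32 hinges on whether it counts filters per group or filters in total. Under the common convention that it is the total output-channel count, the grouped-convolution bookkeeping works out as follows (a sketch, not PaddlePaddle's actual implementation):

```python
# Grouped convolution bookkeeping: the input channels are split into
# `groups` groups, and the output filters are split the same way, so
# each input channel is seen by num_filters // groups filters.
def filters_per_input_channel(num_filters, in_channels, groups):
    assert in_channels % groups == 0 and num_filters % groups == 0
    return num_filters // groups

# The configuration quoted above: 32 input channels, num_filters1=32,
# num_groups=32. Total-count convention gives exactly one filter per
# channel, i.e. a depthwise convolution as the paper describes.
print(filters_per_input_channel(32, 32, 32))
```

If num_filters1 were instead interpreted as filters per group, the value would indeed need to be 1; the fix therefore depends on which convention the depthwise_separable helper uses.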
Project: https://github.com/PaddlePaddle/Mobile/tree/develop/Demo/Android/AICamera
Many directories of this project are missing, and I don't know its structure. Could you send me the complete project directory? My email is [email protected].
The following sentence confuses users:
./inference --merged_model ./model/mobilenet.paddle --input_size 150528
1. What is input_size?
2. How do we generate mobilenet.paddle?
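On the first question, input_size is presumably the flattened length of one input image. Assuming the 3-channel 224 x 224 input commonly used by MobileNet, the 150528 on the command line is just:

```python
# Flattened input length for a 3-channel 224 x 224 image
# (an assumption about what --input_size measures).
channels, height, width = 3, 224, 224
print(channels * height * width)  # 150528
```

So for a different model the value should match that model's own input shape, e.g. 1*28*28 = 784 for MNIST-style handwritten digits.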
add the ssd mobilenet to models
In the last step of this tutorial, I ran ./inference --merged_model ./mobilenet.paddle --input_size 784, since I am using PaddlePaddle's handwritten-digit recognition. The error below is reported. What is the first argument of that command?
generic:/data/local/tmp # ./inference --merged_model ./mobilenet.paddle --input_size 784
WARNING: linker: /data/local/tmp/inference: unused DT entry: type 0xf arg 0x826
I1208 12:45:52.141201 2114 Util.cpp:166] commandline:
Time of init paddle 1085.49 ms.
Time of create from merged model file 300.789 ms.
Time of forward time 0.0035536 ms.
paddle forward error!
When creating an Android project with Android Studio, should I select C++ support, or is the default fine?
Based on the previous survey, various inference frameworks have a phase that transforms a training model into an inference model: Compilation in AndroidNN, Conversion in CoreML, Build in TensorRT, and Converter in TensorFlow Lite. Paddle Mobile also needs a compilation tool that transforms the training model into an inference model.
This compilation tool needs to be able to support the following features:
Hi,
The documentation all describes building the libraries from source before use, which is time-consuming and raises the barrier to entry. I suggest publishing official release builds for direct download.
Following the Inference demo documentation to configure Android, cmake reports the error below:
CMake Error at CMakeLists.txt:32 (project):
The CMAKE_CXX_COMPILER:
/Home/wyf/android-ndk-r14b/build/tools/arm64_standlone_toolchain/bin/aarch64-linux-android-g++
is not a full path to an existing compiler tool.
Tell CMake where to find the compiler by setting either the environment
variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path
to the compiler, or to the compiler name if it is in the PATH.
CMake Error at CMakeLists.txt:32 (project):
The CMAKE_C_COMPILER:
/Home/wyf/android-ndk-r14b/build/tools/arm64_standlone_toolchain/bin/aarch64-linux-android-gcc
is not a full path to an existing compiler tool.
Tell CMake where to find the compiler by setting either the environment
variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path to
the compiler, or to the compiler name if it is in the PATH.
-- Configuring incomplete, errors occurred!
What is the reason? Is a path wrong, or am I missing a step?
Why push code with bugs that cannot run normally to GitHub? It only misleads others.
The sample program mentions that generating a merged model (i.e. a *.paddle file) requires preparing the model config file (.py) and the parameter file (.tar.gz).
Questions:
1. The parameter file (.tar.gz) is generated by training on a PC, right?
2. What exactly is the model config file mobilenet.py? The hyperlink doesn't open, so I can't see it.
3. When running inference on a PC, this "model config file" is not needed, right?
4. I have developed Android apps with Android Studio on Windows. Do I just put the paddle libraries and the merged *.paddle file into the corresponding folders, call the API from Android Studio, and package the apk? Or do I need step 4 of the sample program?
Thank you.
I have read the documentation for a long time and still don't know how to use it, for example to recognize handwritten digits on a phone.