
megcc's Introduction

MegEngine

MegEngine is a fast, scalable, and user-friendly deep learning framework with 3 key features.

  • Unified framework for both training and inference
    • Quantization, dynamic shape/image pre-processing, and even derivation with a single model.
    • After training, put everything into your model and run inference on any platform with speed and precision. Check here for a quick guide.
  • The lowest hardware requirements
    • GPU memory usage can be reduced to one-third of the original when the DTR algorithm is enabled (a usage sketch follows this list).
    • Inference models with the lowest memory usage by leveraging our Pushdown memory planner.
  • Inference efficiently on all platforms
    • Inference with speed and high precision on x86, Arm, CUDA, and ROCm.
    • Supports Linux, Windows, iOS, Android, TEE, etc.
    • Optimize performance and memory usage by leveraging our advanced features.
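
For the DTR feature above, here is a minimal sketch of turning it on in a training script. It assumes the megengine.dtr module with eviction_threshold and enable(), as documented in recent MegEngine releases; verify against your version.

import megengine as mge

# Assumed API: set an eviction threshold, then enable DTR before
# building the training graph, so intermediate tensors can be
# evicted and rematerialized on demand.
mge.dtr.eviction_threshold = "5GB"  # start evicting once GPU usage passes this
mge.dtr.enable()

# ... define the model and optimizer and run the training loop as usual ...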

Installation

NOTE: MegEngine now supports Python installation on Linux-64bit/Windows-64bit/MacOS(CPU-Only)-10.14+/Android 7+(CPU-Only) platforms with Python from 3.6 to 3.9. On Windows 10 you can either install the Linux distribution through Windows Subsystem for Linux (WSL) or install the Windows distribution directly. Many other platforms are supported for inference.

Binaries

To install the pre-built binaries via pip wheels:

python3 -m pip install --upgrade pip
python3 -m pip install megengine -f https://megengine.org.cn/whl/mge.html
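
To sanity-check the install, import the package and print its version:

# quick check that MegEngine imports and reports its version
import megengine
print(megengine.__version__)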

Building from Source

How to Contribute

We strive to build an open and friendly community. We aim to power humanity with AI.

How to Contact Us

Resources

License

MegEngine is licensed under the Apache License, Version 2.0.

Citation

If you use MegEngine in your publication, please cite it using the following BibTeX entry.

@Misc{MegEngine,
  institution = {megvii},
  title = {MegEngine: A fast, scalable and easy-to-use deep learning framework},
  howpublished = {\url{https://github.com/MegEngine/MegEngine}},
  year = {2020}
}

Copyright (c) 2014-2021 Megvii Inc. All rights reserved.

megcc's People

Contributors

asthestarsfalll, chen-2569, chenqy4933, cyli-tiger, jsonlee0x01, leikang123, li-ming-xin, lry89757, megvii-mge, qsingle, tpoisonooo, violet73, wanwan1996, xxr3376, yeasoon, zchrissirhcz


megcc's Issues

Support compiling TorchScript models

Currently MegCC only supports compiling MegEngine models, so users can only compile from a MegEngine model, and ONNX models must first be converted to MegEngine. Consider supporting TorchScript models directly.

Fuse elemwise ops with arbitrary modes and counts

Currently MegCC still generates Elemwise kernels by mode: it inspects the broadcast situation and then generates the concrete kernel. Under this scheme the supported elemwise modes are very restricted, and the number of elemwise ops that can be fused is limited as well.

The hope is to add an opt pass that fuses all adjacent elemwise ops, saves the relevant information in the IR's parameters, and then, at kernel-generation time, uses those IR parameters to guide generation of the compute kernels. That should further improve performance; a conceptual sketch follows.
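
As a rough illustration of the proposed pass (not MegCC's actual IR; every name here is hypothetical), a fusion walk over a topologically ordered graph of nodes might look like:

from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                       # "elemwise" or any other op kind
    mode: str = ""                  # elemwise mode, e.g. "ADD", "RELU"
    inputs: list = field(default_factory=list)
    params: dict = field(default_factory=dict)

def fuse_adjacent_elemwise(nodes):
    """Collapse chains of adjacent single-input elemwise nodes into one
    fused node whose params record every original mode in order."""
    fused = []
    for node in nodes:              # nodes assumed in topological order
        prev = fused[-1] if fused else None
        chainable = (prev is not None
                     and prev.kind == "elemwise"
                     and node.kind == "elemwise"
                     and len(node.inputs) == 1
                     and node.inputs[0] is prev)
        if chainable:
            prev.params["modes"].append(node.mode)
        else:
            if node.kind == "elemwise":
                node.params["modes"] = [node.mode]
            fused.append(node)
    return fused

# Usage: an ADD feeding a RELU collapses into one node whose modes are
# ["ADD", "RELU"], which kernel generation could replay in a single fused loop.
a = Node("elemwise", "ADD")
r = Node("elemwise", "RELU", inputs=[a])
print([n.params["modes"] for n in fuse_adjacent_elemwise([a, r])])  # [['ADD', 'RELU']]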

compile error when building megcc

hi,
I pulled the latest source code and built megcc following the user guide, with these steps:

  1. apt-get install cmake ninja-build
  2. ./third_party/prepare.sh
  3. cd megcc/compiler
  4. mkdir build && cd build
  5. cmake .. -G Ninja
  6. ninja

The error log is below:
../tools/mgb-to-tinynn/mgb-to-tinynn.cpp:102:19: error: ‘kernel_exporter’ has not been declared
export_cv_opr(kernel_exporter, dump_info->cv_impl);
^~~~~~~~~~~~~~~
../tools/mgb-to-tinynn/mgb-to-tinynn.cpp:102:36: error: ‘dump_info’ has not been declared
export_cv_opr(kernel_exporter, dump_info->cv_impl);
^~~~~~~~~
../tools/mgb-to-tinynn/mgb-to-tinynn.cpp:102:45: error: expected ‘,’ or ‘...’ before ‘->’ token
export_cv_opr(kernel_exporter, dump_info->cv_impl);
^~
../tools/mgb-to-tinynn/mgb-to-tinynn.cpp:102:54: error: ISO C++ forbids declaration of ‘export_cv_opr’ with no type [-fpermissive]
export_cv_opr(kernel_exporter, dump_info->cv_impl);

Cute; a thought..

I'd like to change your README into an English-language homepage; being a bit more international would be friendlier for stars.

How to support a new operator?

The operator list doesn't contain many oprs. What should I do if I run into an unsupported one (e.g. topk)?

tensor_c.h is missing a function implementation

In tensor_c.h there is:
LITE_API int LITE_make_tensor(const LiteTensorDesc tensor_describe, LiteTensor* tensor);

but no implementation can be found in tensor.c. Is this unsupported in the current version?

Add MegEngine's loader opr to MegCC

Many models nowadays use an NPU for acceleration, but the pre- and post-processing those models need is often unsupported by the NPU and has to run on the CPU. To solve this, MegEngine describes the NPU model as an extern c opr (see https://github.com/MegEngine/MegEngine/blob/master/src/serialization/include/megbrain/serialization/extern_c_opr.h) and treats it as a single Operator inside MegEngine. For such models to compile correctly, MegCC needs to support compiling this Op.

Add a Benchmarker to MegCC

A major advantage of MegCC is that it achieves an extremely small binary size along with some performance gains, but the whole project currently contains only a single yolox example; there is no Benchmarker for measuring how classic models perform when run with MegCC.

TODO: add a tool that benchmarks classic models, including the following (a driver sketch is given after this list):

  • mobilenet, resnet18, efficientnet, etc.
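
As a starting point, a driver could shell out to the tools that already appear in this repo's docs and issues (mgb-to-tinynn, runtime/scripts/runtime_build.py, tinynn_test_lite). This is a hypothetical sketch: the flags are taken from the x86 mobilenetv3 issue below, and the tinynn_test_lite binary path and arguments are assumptions, so check the real CLI first.

import subprocess
from pathlib import Path

MODELS = ["mobilenet.mge", "resnet18.mge", "efficientnet.mge"]

for model in MODELS:
    kernel_dir = f"./kernels_{Path(model).stem}"
    # compile the .mge model to .tiny and emit its kernels
    # (flags as reported in the x86 mobilenetv3 issue)
    subprocess.check_call(["mgb-to-tinynn", model, kernel_dir,
                           "--enable_nchw44", "--mgb_fuse_kernel"])
    # build the runtime against the generated kernels
    subprocess.check_call(["python3", "./runtime/scripts/runtime_build.py",
                           "--kernel_dir", kernel_dir])
    # time the compiled model; binary location and arguments are assumed
    subprocess.check_call([f"{kernel_dir}/tinynn_test_lite"])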

mobilenetv3 is slower on x86

With a mobilenetv3-large model converted directly from ONNX to mge format, running under megengine with a 320x320 input takes about 36 ms. After converting it to .tiny format with the megcc commands

mgb-to-tinynn mobilenetv3.mge ./x86 --enable_nchw44 --mgb_fuse_kernel

./runtime/scripts/runtime_build.py --kernel_dir ./x86

and testing with tinynn_test_lite, it takes about 86 ms. What might be the cause? Thanks!

fix typo

@yeasoon please fix them all in one pass when you have time; "you can edit .pre-commit-config.yaml and drop --skip to see how many typos there actually are .."

Originally posted by @chenqy4933 in #39 (comment)

how to contribute

For PRs to this repo, is the rule the same as mge, i.e. squash into a single commit first?

Hoping not.

yolox_example fails to build for aarch64 Linux

With the build target set to aarch64 Linux and following the instructions in https://github.com/MegEngine/MegCC/blob/main/yolox_example/README.md, running python ../runtime/scripts/runtime_build.py --cross_build --kernel_dir ./kernel_yolox_s_arm/ --remove_old_build --cross_build_target_os LINUX --cross_build_target_arch aarch64 produces the following error:

/home/xxxx/Downloads/megcc_release/release_megcc/runtime/../immigration/include/marm_neon.h:241:62: error: pragma or attribute ‘target("dotprod")’ is not valid
  241 |                                                              int8x16_t b) {
      |                                                              ^~~~~~~~~

The cross toolchain is gcc-aarch64-linux-gnu, and the MegCC version is v0.1.2.

prepare.sh builds with nvcc

My environment has the TRT_HOME / CUDA_HOME / CUDNN_HOME variables set, and prepare.sh apparently starts building things with nvcc.

Judging from the megcc README, nothing NVIDIA-related should be involved.

Should the megbrain build step be trimmed down accordingly?

My env is attached:

CUDA_ROOT=/usr/local/cuda
CUDA_HOME=/usr/local/cuda
GIT_EDITOR=vim
TRT_HOME=/home/khj/trt8431/TensorRT-8.4.3.1
CPLUS_INCLUDE_PATH=:/home/khj/trt8431/TensorRT-8.4.3.1/include:/usr/local/cuda/include:/home/khj/cudnn84/cudnn-linux-x86_64-8.4.1.50_cuda11.6-archive/include
LD_LIBRARY_PATH=:/home/khj/trt8431/TensorRT-8.4.3.1/lib:/home/khj/cudnn84/cudnn-linux-x86_64-8.4.1.50_cuda11.6-archive/lib
CUDNN_HOME=/home/khj/cudnn84/cudnn-linux-x86_64-8.4.1.50_cuda11.6-archive
DOCKER_HOST=unix:///run/user/1000/docker.sock
CONDA_EXE=/home/khj/miniconda3/bin/conda
_CE_M=
_CE_CONDA=
CONDA_PYTHON_EXE=/home/khj/miniconda3/bin/python
CONDA_SHLVL=1
CONDA_PREFIX=/home/khj/miniconda3
CONDA_DEFAULT_ENV=base
CONDA_PROMPT_MODIFIER=(base)

Build from source fails

Building from source reports:
compiler/tools/mgb-to-tinynn/mgb-to-tinynn.cpp:102:19: error: ‘kernel_exporter’ has not been declared
export_cv_opr(kernel_exporter, dump_info->cv_impl);

It does look like kernel_exporter is never declared.
Did I misconfigure something, or does this open-source code simply fail to compile as released?

Building libTinyNN.so fails: missing file version.ld

It reports this error:
2023/08/25 15:43:04 - DEBUG - cmake build: cd /home/work/wanxin/megcc_root_release/release_megcc/yolox_example/kernel_yolox_s_arm_v2/runtime && ninja install/strip
ninja: error: '/home/work/wanxin/megcc_root_release/release_megcc/runtime/version.ld', needed by 'libTinyNN.so', missing and no known rule to make it
Traceback (most recent call last):
File "/home/work/wanxin/megcc_root_release/release_megcc/yolox_example/../runtime/scripts/runtime_build.py", line 424, in
b.build()
File "/home/work/wanxin/megcc_root_release/release_megcc/yolox_example/../runtime/scripts/runtime_build.py", line 415, in build
subprocess.check_call('bash -c "{}"'.format(build_cmd), shell=True)
File "/home/work/miniconda3/envs/wx_mge_env/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'bash -c "cd /home/work/wanxin/megcc_root_release/release_megcc/yolox_example/kernel_yolox_s_arm_v2/runtime && ninja install/strip "' returned non-zero exit status 1.
