Comments (7)
Hi,
Have you used my script to install onnxruntime from source?
Also, in CMakeLists.txt you need to specify onnxruntime_INCLUDE_DIRS, like this:
set(onnxruntime_INSTALL_PREFIX /usr/local)
set(onnxruntime_INCLUDE_DIRS
${onnxruntime_INSTALL_PREFIX}/include/onnxruntime
${onnxruntime_INSTALL_PREFIX}/include/onnxruntime/core/session
)
and add it to your include directories.
from onnx_runtime_cpp.
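For reference, the setup above can be sketched as a minimal CMakeLists.txt fragment. The `app` target name and `main.cpp` are placeholders, and the library path assumes a default install into /usr/local; adjust to your project:

```cmake
cmake_minimum_required(VERSION 3.10)
project(onnx_runtime_cpp_example CXX)

# Paths from the comment above; change the prefix if you installed elsewhere.
set(onnxruntime_INSTALL_PREFIX /usr/local)
set(onnxruntime_INCLUDE_DIRS
    ${onnxruntime_INSTALL_PREFIX}/include/onnxruntime
    ${onnxruntime_INSTALL_PREFIX}/include/onnxruntime/core/session
)

add_executable(app main.cpp)
target_include_directories(app PRIVATE ${onnxruntime_INCLUDE_DIRS})
target_link_libraries(app PRIVATE ${onnxruntime_INSTALL_PREFIX}/lib/libonnxruntime.so)
```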
I have installed it, but the compiler can't find the included header file. It seems to be C++17 only!
Yes, I have used some small C++17 features (std::optional), so currently only C++17 is supported. Those parts of C++17 can actually be rewritten in C++11. Please test with C++17 first; if you need to port it to C++11, I can help.
@xmba15 I've already rewritten it for C++14. I'll test it later. Thank you.
I will close the issue now. Feel free to reopen it if any trouble comes up.
@tcxia Hello, do you mind sharing what you changed to compile it with C++14?
Solved. Just in case anyone needs it:
- std::optional to boost::optional
- std::nullopt to boost::none
- .has_value() to .is_initialized()
Related Issues (20)
- The link of PaddleSeg's bisenetv2 is missing HOT 3
- Running convert_to_onnx.py for SuperGlue reports an error; I don't know how to solve it. Help wanted HOT 1
- Hello, could you share the SuperGlue ONNX model for testing? The model-conversion script did not succeed for me
- Hello, when I run the C++ onnxruntime for SuperGlue, I get some errors.
- Pipeline for SuperPoint HOT 2
- How to build? HOT 3
- You can consider supporting LISRD and LoFTR; I personally think SuperGlue and SuperPoint are not very robust HOT 18
- Do SuperPoint and SuperGlue use CUDA during inference? I can't find the CUDA information in the code. Could you please explain? Thanks! HOT 2
- ONNX Runtime inference time
- ONNX Runtime inference time HOT 1
- About the loftr result HOT 22
- About loftr.onnx HOT 1
- build error HOT 5
- ASpanFormer (LofTR-like) support HOT 7
- Hello, I need a model with fixed-size inputs and outputs. The loftr.onnx file converts successfully, but it is a verification model, and this error occurs.
- Question about TensorRT environment
- Does any Windows-compatible C++ CLI .exe exist that can run ONNX face restoration models?
- docker build -f ./dockerfiles/ubuntu2004.dockerfile -t onnx_runtime . FAIL HOT 3
- A C++ example with multiple inputs and multiple outputs
- Does it support MobileSAM instance segmentation?