A lightweight, portable pure C99 onnx inference engine for embedded devices with hardware acceleration support.
The library's .c and .h files can be dropped into a project and compiled along with it. Before use, a struct onnx_context_t * must be allocated; you can optionally pass an array of struct resolver_t * for hardware acceleration.
The filename argument is the path to an onnx model file.
struct onnx_context_t * ctx = onnx_context_alloc_from_file(filename, NULL, 0);
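If you have a hardware-specific resolver, it can be passed at allocation time instead. A minimal sketch, where my_npu_resolver is a hypothetical, board-specific implementation of struct resolver_t, and which assumes the function returns NULL on failure:

extern struct resolver_t my_npu_resolver; /* hypothetical, board-specific */
struct resolver_t * resolvers[] = { &my_npu_resolver };
struct onnx_context_t * ctx = onnx_context_alloc_from_file(filename, resolvers, 1);
if(!ctx)
	return -1; /* assumed: NULL means the model could not be loaded */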
Then you can get the input and output tensors using the onnx_tensor_search function.
struct onnx_tensor_t * input = onnx_tensor_search(ctx, "input-tensor-name");
struct onnx_tensor_t * output = onnx_tensor_search(ctx, "output-tensor-name");
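Before running inference, copy your preprocessed data into the input tensor. A minimal sketch, assuming a float input with a model-specific shape and using the onnx_tensor_apply helper to copy a raw buffer into a tensor:

static float data[3 * 224 * 224]; /* shape is model-specific; illustrative only */
/* ... fill data with preprocessed input values ... */
onnx_tensor_apply(input, data, sizeof(data));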
Once the input tensor has been set, you can run the inference engine using the onnx_run function; the result will be put into the output tensor.
onnx_run(ctx);
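The results can then be read back from the output tensor's buffer. A sketch, assuming a float output and that struct onnx_tensor_t exposes its data through the datas and ndata fields:

float * result = (float *)output->datas;
for(size_t i = 0; i < output->ndata; i++)
	printf("result[%zu] = %f\n", i, result[i]);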
Finally, you must free the struct onnx_context_t * using the onnx_context_free function.
onnx_context_free(ctx);
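Putting the steps together, a complete minimal program might look like the sketch below; the header name, tensor names, input shape, and the onnx_tensor_apply helper are assumptions to adapt to your model and setup.

#include <stdio.h>
#include <onnx.h>

int main(int argc, char * argv[])
{
	static float data[3 * 224 * 224]; /* model-specific input shape; illustrative only */
	struct onnx_context_t * ctx;
	struct onnx_tensor_t * input, * output;

	ctx = onnx_context_alloc_from_file("model.onnx", NULL, 0);
	if(!ctx)
		return -1;
	input = onnx_tensor_search(ctx, "input-tensor-name");
	output = onnx_tensor_search(ctx, "output-tensor-name");
	if(!input || !output)
	{
		onnx_context_free(ctx);
		return -1;
	}
	/* ... fill data with preprocessed input values ... */
	onnx_tensor_apply(input, data, sizeof(data));
	onnx_run(ctx);
	printf("first output value: %f\n", ((float *)output->datas)[0]);
	onnx_context_free(ctx);
	return 0;
}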
Just type make in the root directory; you will get a static library and some example and test binaries demonstrating usage.
cd libonnx
make
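To link the resulting static library into your own program, something like the following should work; the include and library paths are assumptions, so adjust them to wherever the build places onnx.h and libonnx.a:

gcc -o example example.c -I libonnx/src -L libonnx -lonnx -lm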
TBD.
This library is based on onnx version 1.8.0, with support for the newest operator set 13. The supported operator table is in the documents directory.
This library is free software; you can redistribute it and/or modify it under the terms of the MIT license. See the MIT License for details.