WASI-NN
How to use
Enable WASI-NN in WAMR by specifying it in the CMake build configuration as follows,
set (WAMR_BUILD_WASI_NN 1)
The definitions of the functions provided by WASI-NN are in the header file core/iwasm/libraries/wasi-nn/wasi_nn.h. Simply including this file in your WASM application binds WASI-NN into your module.
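For orientation, here is a minimal sketch of the usual call sequence (load, init_execution_context, set_input, compute, get_output). It assumes the function and type names declared in wasi_nn.h and wasi_nn_types.h; the model buffer, input data, shape and output capacity are hypothetical placeholders, so check those headers and the test app for the exact signatures and conventions.

#include <stdio.h>
#include <stdint.h>
#include "wasi_nn.h"

/* Minimal sketch of one inference pass. model_buf/model_size and
   input_buf/input_elems are hypothetical placeholders provided by the caller. */
int
run_inference(uint8_t *model_buf, uint32_t model_size,
              float *input_buf, uint32_t input_elems)
{
    /* The model bytes (a TFLite flatbuffer) go in as a graph_builder_array. */
    graph_builder builder = { .buf = model_buf, .size = model_size };
    graph_builder_array builder_array = { .buf = &builder, .size = 1 };

    graph g;
    if (load(&builder_array, tensorflowlite, cpu, &g) != success) /* or gpu */
        return -1;

    graph_execution_context ctx;
    if (init_execution_context(g, &ctx) != success)
        return -1;

    /* Inputs are described as fp32 tensors; the shape here is illustrative. */
    uint32_t dims[] = { 1, input_elems };
    tensor_dimensions tensor_dims = { .buf = dims, .size = 2 };
    tensor input = { .dimensions = &tensor_dims,
                     .type = fp32,
                     .data = (tensor_data)input_buf };
    if (set_input(ctx, 0, &input) != success)
        return -1;

    if (compute(ctx) != success)
        return -1;

    /* Capacity in / size out; the exact unit convention is defined by wasi_nn.h. */
    float output[1000];
    uint32_t output_size = sizeof(output) / sizeof(output[0]);
    if (get_output(ctx, 0, (tensor_data)output, &output_size) != success)
        return -1;

    printf("first output value: %f\n", output[0]);
    return 0;
}

The test directory contains a complete application that is built by the wasi-nn-compile image described in the next section.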
Tests
The commands below assume that the current directory is the root of the repository.
Build the runtime
Build the runtime base image,
docker build -t wasi-nn-base -f core/iwasm/libraries/wasi-nn/test/Dockerfile.base .
Build the runtime image for your execution target type. EXECUTION_TYPE can be:
- cpu
- nvidia-gpu
EXECUTION_TYPE=cpu
docker build -t wasi-nn-${EXECUTION_TYPE} -f core/iwasm/libraries/wasi-nn/test/Dockerfile.${EXECUTION_TYPE} .
Build wasm app
docker build -t wasi-nn-compile -f core/iwasm/libraries/wasi-nn/test/Dockerfile.compile .
docker run -v $PWD/core/iwasm/libraries/wasi-nn:/wasi-nn wasi-nn-compile
Run wasm app
If all the tests have run properly, you will see the following message in the terminal,
Tests: passed!
- CPU
docker run \
-v $PWD/core/iwasm/libraries/wasi-nn/test:/assets wasi-nn-cpu \
--dir=/assets \
--env="TARGET=cpu" \
/assets/test_tensorflow.wasm
- (NVIDIA) GPU
docker run \
--runtime=nvidia \
-v $PWD/core/iwasm/libraries/wasi-nn/test:/assets wasi-nn-nvidia-gpu \
--dir=/assets \
--env="TARGET=gpu" \
/assets/test_tensorflow.wasm
Requirements:
- NVIDIA Container Toolkit (nvidia-docker), since the GPU container is started with --runtime=nvidia.
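Both docker run commands pass a TARGET environment variable to the WASM app. Inside the module this choice typically becomes the execution_target argument of load; the sketch below only illustrates that mapping and is not the test app's actual code (the env-var handling and the cpu fallback are assumptions, the enum names come from wasi_nn_types.h).

#include <stdlib.h>
#include <string.h>
#include "wasi_nn.h"

/* Map the TARGET env var (set via --env in the commands above) to an
   execution target. The fallback to cpu is an assumption for illustration. */
static execution_target
target_from_env(void)
{
    const char *t = getenv("TARGET");
    if (t != NULL && strcmp(t, "gpu") == 0)
        return gpu; /* served by the TFLite GPU delegate (OpenCL) in the nvidia-gpu image */
    return cpu;
}

The returned value would then be passed as the third argument of load, as in the earlier sketch.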
What is missing
Supported:
- Only 1 WASM app at a time.
- Only 1 model at a time.
  - graph and graph-execution-context are ignored.
- Graph encoding: tensorflowlite (see the sketch after this list).
- Execution target: cpu and gpu.
- Tensor type: fp32.
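Since only the tensorflowlite encoding is supported, the bytes handed to load must be a TensorFlow Lite flatbuffer. As a rough illustration, a WASM app can read such a file through the preopened /assets directory used by the docker run commands above; the path and error handling below are hypothetical and may differ from what the test app does.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Read a .tflite file into a heap buffer. The file is reachable because the
   runtime preopens /assets via --dir; the path used by the caller is a
   hypothetical example, e.g. read_model("/assets/model.tflite", &model_size). */
static uint8_t *
read_model(const char *path, uint32_t *size_out)
{
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return NULL;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    uint8_t *buf = malloc((size_t)size);
    if (buf != NULL && fread(buf, 1, (size_t)size, f) != (size_t)size) {
        free(buf);
        buf = NULL;
    }
    fclose(f);
    *size_out = (buf != NULL) ? (uint32_t)size : 0;
    return buf;
}

The resulting buffer and size can then be wrapped in a graph_builder and passed to load, as in the first sketch.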