
WASI-NN

How to use

Enable WASI-NN in WAMR by specifying it in the cmake build configuration as follows:

set (WAMR_BUILD_WASI_NN  1)

The definitions of the functions provided by WASI-NN are in the header file core/iwasm/libraries/wasi-nn/wasi_nn.h.

Including this header file in your WASM application is all that is needed to bind WASI-NN into your module.
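The typical call flow through that header can be sketched as below, for a C application compiled to wasm32-wasi. The model path, tensor shape, and buffer sizes are illustrative assumptions; the function and type names are taken from wasi_nn.h and wasi_nn_types.h, so check the headers for the exact signatures before relying on them.

```c
// Sketch of one inference pass through the WASI-NN API (wasi_nn.h).
// Model path, shapes, and sizes below are illustrative assumptions.
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include "wasi_nn.h"

int
main(void)
{
    /* 1. Read the .tflite model into memory (path is an assumption). */
    FILE *f = fopen("/assets/model.tflite", "rb");
    if (!f)
        return 1;
    fseek(f, 0, SEEK_END);
    long model_size = ftell(f);
    fseek(f, 0, SEEK_SET);
    uint8_t *model = malloc(model_size);
    fread(model, 1, model_size, f);
    fclose(f);

    /* 2. Load the graph: a single builder entry holding the model bytes. */
    graph_builder builder = { .buf = model, .size = (uint32_t)model_size };
    graph_builder_array builders = { .buf = &builder, .size = 1 };
    graph g;
    if (load(&builders, tensorflowlite, cpu, &g) != success)
        return 1;

    /* 3. Create an execution context and set the fp32 input tensor. */
    graph_execution_context ctx;
    if (init_execution_context(g, &ctx) != success)
        return 1;

    float input[10] = { 0 };         /* illustrative input data  */
    uint32_t dims[] = { 1, 10 };     /* illustrative input shape */
    tensor_dimensions tensor_dims = { .buf = dims, .size = 2 };
    tensor t = { .dimensions = &tensor_dims,
                 .type = fp32,
                 .data = (uint8_t *)input };
    if (set_input(ctx, 0, &t) != success)
        return 1;

    /* 4. Run inference and fetch the output into a caller-owned buffer. */
    if (compute(ctx) != success)
        return 1;
    float output[10];
    uint32_t output_size = sizeof(output); /* check header for size semantics */
    if (get_output(ctx, 0, (uint8_t *)output, &output_size) != success)
        return 1;

    printf("first output value: %f\n", output[0]);
    return 0;
}
```

A fuller, working version of this pattern is in the test app under core/iwasm/libraries/wasi-nn/test.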

Tests

The following steps assume that the current directory is the root of the repository.

Build the runtime

Build the runtime image for your execution target type.

EXECUTION_TYPE can be:

  • cpu
  • nvidia-gpu

EXECUTION_TYPE=cpu
docker build -t wasi-nn-${EXECUTION_TYPE} -f core/iwasm/libraries/wasi-nn/test/Dockerfile.${EXECUTION_TYPE} .

Build wasm app

docker build -t wasi-nn-compile -f core/iwasm/libraries/wasi-nn/test/Dockerfile.compile .
docker run -v $PWD/core/iwasm/libraries/wasi-nn:/wasi-nn wasi-nn-compile

Run wasm app

If all the tests have run properly, you will see the following message in the terminal:

Tests: passed!

  • CPU
docker run \
    -v $PWD/core/iwasm/libraries/wasi-nn/test:/assets wasi-nn-cpu \
    --dir=/assets \
    --env="TARGET=cpu" \
    /assets/test_tensorflow.wasm
  • (NVIDIA) GPU
docker run \
    --runtime=nvidia \
    -v $PWD/core/iwasm/libraries/wasi-nn/test:/assets wasi-nn-nvidia-gpu \
    --dir=/assets \
    --env="TARGET=gpu" \
    /assets/test_tensorflow.wasm

Requirements: running with --runtime=nvidia requires the NVIDIA Container Toolkit (nvidia-docker) to be installed on the host.

What is missing

Supported:

  • Graph encoding: tensorflowlite.
  • Execution target: cpu and gpu.
  • Tensor type: fp32.