
WASI-NN

How to use

Enable WASI-NN in WAMR by specifying it in the CMake build configuration as follows,

set (WAMR_BUILD_WASI_NN  1)
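
For example, a complete configure-and-build run might look like the sketch below. It assumes the Linux product-mini build of iwasm; the path and any additional build options depend on your target.

cd product-mini/platforms/linux
mkdir build && cd build
cmake -DWAMR_BUILD_WASI_NN=1 ..
make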

The functions provided by WASI-NN are declared in the header file core/iwasm/libraries/wasi-nn/wasi_nn.h.

Simply including this file in your WASM application binds WASI-NN into your module.
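
As an illustration, a minimal inference flow is sketched below. The call sequence (load, init_execution_context, set_input, compute, get_output) and the types follow wasi_nn.h; the read_model helper, the model path, the 1x224x224x3 input shape, and the 1000-element output buffer are placeholders to adapt to your model.

#include <stdio.h>
#include <stdlib.h>

#include "wasi_nn.h"

/* Placeholder helper: reads a serialized model (e.g. a .tflite file) into memory. */
static uint8_t *
read_model(const char *path, uint32_t *size)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return NULL;
    fseek(f, 0, SEEK_END);
    *size = (uint32_t)ftell(f);
    fseek(f, 0, SEEK_SET);
    uint8_t *buf = malloc(*size);
    if (buf && fread(buf, 1, *size, f) != *size) {
        free(buf);
        buf = NULL;
    }
    fclose(f);
    return buf;
}

int
main(void)
{
    uint32_t size;
    /* Placeholder model path: must be visible to the module (see --dir below). */
    uint8_t *model = read_model("/assets/models/model.tflite", &size);
    if (!model)
        return 1;

    /* 1. Load the graph from its serialized form. */
    graph_builder builder = { .buf = model, .size = size };
    graph_builder_array builder_array = { .buf = &builder, .size = 1 };
    graph g;
    if (load(&builder_array, tensorflowlite, cpu, &g) != success)
        return 1;

    /* 2. Create an execution context for the loaded graph. */
    graph_execution_context ctx;
    if (init_execution_context(g, &ctx) != success)
        return 1;

    /* 3. Bind the input tensor. The shape is a placeholder for your model. */
    uint32_t dims[] = { 1, 224, 224, 3 };
    tensor_dimensions dimensions = { .buf = dims, .size = 4 };
    static float input_data[1 * 224 * 224 * 3];
    tensor input = { .dimensions = &dimensions,
                     .type = fp32,
                     .data = (tensor_data)input_data };
    if (set_input(ctx, 0, &input) != success)
        return 1;

    /* 4. Run inference and read back the output (buffer size is a placeholder). */
    if (compute(ctx) != success)
        return 1;
    static float output[1000];
    uint32_t output_size = 1000;
    if (get_output(ctx, 0, (tensor_data)output, &output_size) != success)
        return 1;

    printf("first output value: %f\n", output[0]);
    free(model);
    return 0;
}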

Tests

The following steps assume that the current directory is the root of the repository.

Build the runtime

Build the runtime image for your execution target type.

EXECUTION_TYPE can be:

  • cpu
  • nvidia-gpu
  • vx-delegate
  • tpu

EXECUTION_TYPE=cpu
docker build -t wasi-nn-${EXECUTION_TYPE} -f core/iwasm/libraries/wasi-nn/test/Dockerfile.${EXECUTION_TYPE} .

Build wasm app

docker build -t wasi-nn-compile -f core/iwasm/libraries/wasi-nn/test/Dockerfile.compile .
docker run -v $PWD/core/iwasm/libraries/wasi-nn:/wasi-nn wasi-nn-compile

Run wasm app

If all the tests have run properly, you will see the following message in the terminal,

Tests: passed!

  • CPU

docker run \
    -v $PWD/core/iwasm/libraries/wasi-nn/test:/assets \
    -v $PWD/core/iwasm/libraries/wasi-nn/test/models:/models \
    wasi-nn-cpu \
    --dir=/ \
    --env="TARGET=cpu" \
    /assets/test_tensorflow.wasm

  • NVIDIA GPU

docker run \
    --runtime=nvidia \
    -v $PWD/core/iwasm/libraries/wasi-nn/test:/assets \
    -v $PWD/core/iwasm/libraries/wasi-nn/test/models:/models \
    wasi-nn-nvidia-gpu \
    --dir=/ \
    --env="TARGET=gpu" \
    /assets/test_tensorflow.wasm

  • vx-delegate for NPU (x86 simulator)

docker run \
    -v $PWD/core/iwasm/libraries/wasi-nn/test:/assets \
    wasi-nn-vx-delegate \
    --dir=/ \
    --env="TARGET=gpu" \
    /assets/test_tensorflow_quantized.wasm

  • TPU (Coral USB)

docker run \
    --privileged \
    --device=/dev/bus/usb:/dev/bus/usb \
    -v $PWD/core/iwasm/libraries/wasi-nn/test:/assets \
    wasi-nn-tpu \
    --dir=/ \
    --env="TARGET=tpu" \
    /assets/test_tensorflow_quantized.wasm

What is missing

Currently supported:

  • Graph encoding: tensorflowlite.
  • Execution target: cpu, gpu and tpu.
  • Tensor type: fp32.