wasm-micro-runtime/core/iwasm/libraries/wasi-nn/test
tonibofarull 0b0af1b3df
wasi-nn: Support uint8 quantized networks (#2433)
Adds support for (non-fully) uint8-quantized networks. Inputs and outputs are still required to be `float`; the (de)quantization is performed internally by wasi-nn.

Example generated from `quantized_model.py`:
![Screenshot from 2023-08-07 17-57-05](https://github.com/bytecodealliance/wasm-micro-runtime/assets/80318361/91f12ff6-870c-427a-b1dc-e307f7d1f5ee)

Visualization with [netron](https://netron.app/).
2023-08-11 07:55:40 +08:00
| Name | Last commit | Date |
| --- | --- | --- |
| models | wasi-nn: Support uint8 quantized networks (#2433) | 2023-08-11 07:55:40 +08:00 |
| build.sh | wasi-nn: Support uint8 quantized networks (#2433) | 2023-08-11 07:55:40 +08:00 |
| Dockerfile.compile | Fix dockerfile linter warnings (#2291) | 2023-06-15 16:52:48 +08:00 |
| Dockerfile.cpu | wasi-nn: Support uint8 quantized networks (#2433) | 2023-08-11 07:55:40 +08:00 |
| Dockerfile.nvidia-gpu | wasi-nn: Support uint8 quantized networks (#2433) | 2023-08-11 07:55:40 +08:00 |
| Dockerfile.vx-delegate | Fix dockerfile linter warnings (#2291) | 2023-06-15 16:52:48 +08:00 |
| requirements.txt | Bump tensorflow in /core/iwasm/libraries/wasi-nn/test (#2061) | 2023-03-28 16:36:59 +08:00 |
| test_tensorflow_quantized.c | wasi-nn: Support uint8 quantized networks (#2433) | 2023-08-11 07:55:40 +08:00 |
| test_tensorflow.c | wasi-nn: Improve tests paths for local dev (#2309) | 2023-06-27 08:07:30 +08:00 |
| utils.c | wasi-nn: Support multiple TFLite models (#2002) | 2023-03-08 15:54:06 +08:00 |
| utils.h | wasi-nn: Support multiple TFLite models (#2002) | 2023-03-08 15:54:06 +08:00 |