wasm-micro-runtime/core/iwasm/libraries/wasi-nn/test/models
wasi-nn: Support uint8 quantized networks (#2433)
Commit 0b0af1b3df by tonibofarull, 2023-08-11 07:55:40 +08:00
Support uint8 quantized networks that are not fully quantized.
Inputs and outputs are still required to be `float`; the (de)quantization is performed internally by wasi-nn.
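
For reference, uint8 quantization follows the standard affine scheme, where each tensor carries a `scale` and a `zero_point` in its metadata. The sketch below illustrates the round trip a runtime performs at the graph boundary; the helper names are hypothetical and this is not wasi-nn's actual code:

```python
import numpy as np

def quantize(x, scale, zero_point):
    # Standard affine quantization: q = round(x / scale) + zero_point,
    # clamped to the uint8 range [0, 255].
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q, scale, zero_point):
    # Inverse mapping back to float: x = scale * (q - zero_point).
    return scale * (q.astype(np.float32) - zero_point)

# Round-trip example with illustrative parameters.
x = np.array([0.0, 0.5, 1.0], dtype=np.float32)
q = quantize(x, scale=1.0 / 255.0, zero_point=0)
print(dequantize(q, scale=1.0 / 255.0, zero_point=0))  # ~[0.0, 0.5, 1.0]
```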

Example generated from `quantized_model.py`:
![Screenshot from 2023-08-07 17-57-05](https://github.com/bytecodealliance/wasm-micro-runtime/assets/80318361/91f12ff6-870c-427a-b1dc-e307f7d1f5ee)

Visualization with [netron](https://netron.app/).
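
Below is a minimal sketch of how a script like `quantized_model.py` could produce such a model with TensorFlow Lite's post-training quantization, assuming TensorFlow 2.x. The model, shapes, and calibration data are illustrative, and whether the internals end up int8 or uint8 depends on the converter settings:

```python
import numpy as np
import tensorflow as tf

# Illustrative model; the real test script may build something different.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(4, activation="relu"),
])

def representative_dataset():
    # Calibration samples used to choose per-tensor scales and zero points.
    for _ in range(100):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Input/output types are left at the float32 default, so the runtime
# (here, wasi-nn) handles the (de)quantization at the graph boundary.
tflite_model = converter.convert()

with open("quantized.tflite", "wb") as f:
    f.write(tflite_model)
```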
| File | Last commit | Date |
| --- | --- | --- |
| average.py | Integrate WASI-NN into WAMR (#1521) | 2022-10-12 12:09:29 +08:00 |
| max.py | Integrate WASI-NN into WAMR (#1521) | 2022-10-12 12:09:29 +08:00 |
| mult_dimension.py | Integrate WASI-NN into WAMR (#1521) | 2022-10-12 12:09:29 +08:00 |
| mult_outputs.py | Integrate WASI-NN into WAMR (#1521) | 2022-10-12 12:09:29 +08:00 |
| quantized.py | wasi-nn: Support uint8 quantized networks (#2433) | 2023-08-11 07:55:40 +08:00 |
| sum.py | Integrate WASI-NN into WAMR (#1521) | 2022-10-12 12:09:29 +08:00 |
| utils.py | Integrate WASI-NN into WAMR (#1521) | 2022-10-12 12:09:29 +08:00 |