wasm-micro-runtime/core/iwasm
Latest commit: `0b0af1b3df` by tonibofarull
wasi-nn: Support uint8 quantized networks (#2433)
Adds support for (non-fully) uint8-quantized networks. Inputs and outputs are still required to be `float`; the (de)quantization is performed internally by wasi-nn.

Example generated from `quantized_model.py`:
![Screenshot from 2023-08-07 17-57-05](https://github.com/bytecodealliance/wasm-micro-runtime/assets/80318361/91f12ff6-870c-427a-b1dc-e307f7d1f5ee)

Visualization with [netron](https://netron.app/).
Committed: 2023-08-11 07:55:40 +08:00
| Name | Latest commit | Date |
|---|---|---|
| aot | Introduce WASMModuleInstanceExtraCommon (#2429) | 2023-08-08 09:35:29 +08:00 |
| common | Introduce WASMModuleInstanceExtraCommon (#2429) | 2023-08-08 09:35:29 +08:00 |
| compilation | Fix ExpandMemoryOpPass doesn't work properly (#2399) | 2023-07-29 10:28:09 +08:00 |
| doc | Add architecture diagram for wasm globals and classic-interp stack frame (#2058) | 2023-03-25 09:39:20 +08:00 |
| fast-jit | Fix some check issues on table operations (#2392) | 2023-07-27 21:53:48 +08:00 |
| include | wasm_export.h: Fix struct wasm_val_t (#2435) | 2023-08-09 09:43:20 +08:00 |
| interpreter | wasm_export.h: Fix struct wasm_val_t (#2435) | 2023-08-09 09:43:20 +08:00 |
| libraries | wasi-nn: Support uint8 quantized networks (#2433) | 2023-08-11 07:55:40 +08:00 |
| README.md | Add architecture diagram for wasm globals and classic-interp stack frame (#2058) | 2023-03-25 09:39:20 +08:00 |