
Support (non-full) uint8 quantized networks. Inputs and outputs are still required to be `float`; the (de)quantization is done internally by wasi-nn. The example model is generated by `quantized_model.py` and can be visualized with [netron](https://netron.app/).
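For reference, a minimal sketch of how such a model could be produced with TensorFlow's TFLite converter. This is not necessarily the exact `quantized_model.py` from the repository: the layer shapes and the representative dataset below are placeholders. The key point is that the converter quantizes weights/activations while the graph keeps `float32` inputs and outputs, matching the "non-full" quantization described above.

```python
import numpy as np
import tensorflow as tf

# Small placeholder model; the real quantized_model.py may define a different graph.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(5, 5, 1)),
    tf.keras.layers.Conv2D(2, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1),
])

def representative_dataset():
    # Placeholder calibration data used by the converter to pick quantization ranges.
    for _ in range(100):
        yield [np.random.rand(1, 5, 5, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# inference_input_type / inference_output_type are left at their defaults,
# so the model's inputs and outputs stay float32 and quantize/dequantize
# ops are inserted at the graph boundaries (non-full quantization).
tflite_model = converter.convert()

with open("quantized_model.tflite", "wb") as f:
    f.write(tflite_model)
```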
**/*.wasm
**/*.tflite