Mirror of https://github.com/bytecodealliance/wasm-micro-runtime.git (synced 2025-05-11 20:21:11 +00:00)
Support (non-full) uint8 quantized networks. Inputs and outputs are still required to be `float`; the (de)quantization is done internally by wasi-nn. An example model is generated from `quantized_model.py` (visualization with [netron](https://netron.app/)).
Files in this sample directory:

- models/
- build.sh
- Dockerfile.compile
- Dockerfile.cpu
- Dockerfile.nvidia-gpu
- Dockerfile.vx-delegate
- requirements.txt
- test_tensorflow_quantized.c
- test_tensorflow.c
- utils.c
- utils.h