tonibofarull, 0b0af1b3df
wasi-nn: Support uint8 quantized networks (#2433)
Support non-fully-quantized uint8 networks: inputs and outputs are still required to be `float`, and the (de)quantization is done internally by wasi-nn.
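A minimal sketch of the affine (de)quantization that wasi-nn performs internally, assuming the standard per-tensor scheme (`real = scale * (q - zero_point)`); the function names and example values here are illustrative, not the actual wasi-nn implementation:

```python
def quantize(x: float, scale: float, zero_point: int) -> int:
    """Map a float value to uint8 using the tensor's scale and zero-point."""
    q = round(x / scale) + zero_point
    return max(0, min(255, q))  # clamp to the uint8 range


def dequantize(q: int, scale: float, zero_point: int) -> float:
    """Map a uint8 value back to float."""
    return scale * (q - zero_point)


# Hypothetical tensor parameters: scale=0.5, zero_point=10.
q = quantize(3.0, 0.5, 10)    # round(3.0 / 0.5) + 10 = 16
x = dequantize(q, 0.5, 10)    # 0.5 * (16 - 10) = 3.0
```

Because this conversion happens inside wasi-nn, the caller keeps passing `float` buffers even when the underlying model tensors are uint8.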
Example generated from `quantized_model.py`, visualized with [netron](https://netron.app/).
2023-08-11 07:55:40 +08:00