for wasi_ephemeral_nn,
* implement u8 input
* stop dealing with quantization.
  * wasi-nn doesn't have a concept of quantization or pre/post-processing.
    I can't think of any way to make the backend perform zero-point/scale
    processing (sketched below) without risking breaking other applications.
  * there seem to be applications which just use u8 inputs/outputs for
    a quantized model. (see [1] for an example.) for certain kinds of
    inputs/outputs, it usually just works.
this commit keeps the legacy wasi_nn logic intact for now.
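for reference, the zero-point/scale processing mentioned above is the
TFLite-style affine mapping real = scale * (q - zero_point). a minimal
self-contained sketch of what a backend would have to apply per element
if it did handle quantization (the helper names and any scale/zero_point
values are illustrative, not part of wasi-nn):

```
#include <stddef.h>
#include <stdint.h>

/* dequantize: u8 -> real, real = scale * (q - zero_point) */
static void
dequantize_u8(const uint8_t *q, float *out, size_t n,
              float scale, int32_t zero_point)
{
    for (size_t i = 0; i < n; i++)
        out[i] = scale * ((int32_t)q[i] - zero_point);
}

/* quantize: real -> u8, clamped to [0, 255]
 * (truncating for brevity; real implementations round) */
static void
quantize_f32(const float *in, uint8_t *q, size_t n,
             float scale, int32_t zero_point)
{
    for (size_t i = 0; i < n; i++) {
        int32_t v = (int32_t)(in[i] / scale) + zero_point;
        q[i] = (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
    }
}
```

the catch is that scale and zero_point are per-tensor model metadata;
applying them implicitly in the backend would silently change the data
seen by applications that already pass raw u8, like [1].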
tested with [1], with [2] applied.
WAMR with this patch:
```
Read graph weights, size in bytes: 3561598
[wasi_nn.c:297 WARNING] load_by_name_with_config() not found
[wasi_nn_tensorflowlite.cpp:272 WARNING] Default encoding is CPU.
Loaded graph into wasi-nn with ID: Graph#0
Read input tensor, size in bytes: 150528
1.) [166](198)Aix galericulata
2.) [34](1)Gallus gallus domesticus
3.) [158](1)Coccothraustes coccothraustes
4.) [778](1)Sitta europaea
5.) [819](1)Anas platyrhynchos
```
wasmedge:
```
Read graph weights, size in bytes: 3561598
Loaded graph into wasi-nn with ID: Graph#0
Read input tensor, size in bytes: 150528
1.) [166](198)Aix galericulata
2.) [34](1)Gallus gallus domesticus
3.) [158](1)Coccothraustes coccothraustes
4.) [778](1)Sitta europaea
5.) [819](1)Anas platyrhynchos
```
and "Aix galericulata" seems like a reasonable classification
of the image to my eyes.
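for completeness, the application-side pattern exercised here looks
roughly like the sketch below. the call names (load,
init_execution_context, set_input, compute, get_output) are the
standard wasi-nn ones, but the struct layout and the u8 tensor-type
constant are assumptions, not verified against WAMR's wasi_nn.h:

```
/* sketch only: raw u8 bytes in, raw u8 scores out, no quantization
 * handling anywhere. struct fields and enum constants are assumed. */
uint8_t input_data[150528];             /* 224x224x3, as in the logs */
uint32_t dims[4] = { 1, 224, 224, 3 };
tensor_dimensions td = { .buf = dims, .size = 4 };
tensor input = { .dimensions = &td, .type = u8, .data = input_data };

graph_builder_array builder;            /* assumed to wrap the .tflite bytes */
graph g;
graph_execution_context ctx;
load(&builder, tensorflowlite, cpu, &g);
init_execution_context(g, &ctx);
set_input(ctx, 0, &input);              /* u8 buffer passed through as-is */
compute(ctx);

uint8_t output[1001];                   /* raw u8 scores, e.g. 198 above */
uint32_t output_size = sizeof(output);
get_output(ctx, 0, output, &output_size);
```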
[1]