wasm-micro-runtime/core/iwasm/libraries/wasi-nn/test
Latest commit: liang.he · 0599351262
wasi-nn: Add a new target for llama.cpp as a wasi-nn backend (#3709)
Minimum support:
- [x] Accept (WasmEdge-style) customized model parameters via metadata.
- [x] Target [wasmedge-ggml examples](https://github.com/second-state/WasmEdge-WASINN-examples/tree/master/wasmedge-ggml)
  - [x] basic
  - [x] chatml
  - [x] gemma
  - [x] llama
  - [x] qwen
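The "customized model parameters" above are typically passed to the ggml backend as a small JSON metadata blob alongside the model. A minimal sketch of building such a blob in Python; the option names (`n-gpu-layers`, `ctx-size`, `stream-stdout`) follow the WasmEdge ggml plugin convention and are an assumption here, not confirmed by this commit:

```python
import json

def build_ggml_metadata(n_gpu_layers: int = 0,
                        ctx_size: int = 512,
                        stream_stdout: bool = False) -> bytes:
    """Serialize llama.cpp/ggml options as a JSON metadata blob.

    Key names follow the WasmEdge ggml plugin convention (assumption);
    the guest hands this blob to the backend next to the model itself.
    """
    metadata = {
        "n-gpu-layers": n_gpu_layers,   # layers offloaded to the GPU
        "ctx-size": ctx_size,           # prompt context window size
        "stream-stdout": stream_stdout, # echo tokens as they are produced
    }
    return json.dumps(metadata).encode("utf-8")

blob = build_ggml_metadata(n_gpu_layers=0, ctx_size=1024)
print(blob.decode("utf-8"))
```

The exact keys the backend honors depend on its metadata parser, so treat this as a template rather than a definitive option list.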

---

To be supported in the future, if required:
- [ ] Target [wasmedge-ggml examples](https://github.com/second-state/WasmEdge-WASINN-examples/tree/master/wasmedge-ggml)
  - [ ] command-r. (>70G memory requirement)
  - [ ] embedding. (embedding mode)
  - [ ] grammar. (use the grammar option to constrain the model to generate the JSON output)
- [ ] llama-stream. (new APIs `compute_single`, `get_output_single`, `fini_single`)
  - [ ] llava. (image representation)
  - [ ] llava-base64-stream. (image representation)
  - [ ] multimodel. (image representation)
- [ ] Target [llamaedge](https://github.com/LlamaEdge/LlamaEdge)
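The llama-stream item above depends on new single-token APIs (`compute_single`, `get_output_single`, `fini_single`) that this commit does not implement. A purely illustrative Python stub of the intended call pattern, with a fake context standing in for the real wasi-nn host functions (everything below is a mock; only the three API names come from the list above):

```python
class FakeStreamingContext:
    """Stand-in for a wasi-nn execution context with single-token APIs."""

    def __init__(self, tokens):
        self._tokens = list(tokens)
        self._pos = 0

    def compute_single(self) -> bool:
        # Advance generation by one token; False once the stream is done.
        if self._pos >= len(self._tokens):
            return False
        self._pos += 1
        return True

    def get_output_single(self) -> str:
        # Fetch the most recently generated token.
        return self._tokens[self._pos - 1]

    def fini_single(self) -> None:
        # Release per-stream state.
        self._pos = len(self._tokens)

def stream_tokens(ctx) -> list:
    """Drain a streaming context token by token, then finalize it."""
    out = []
    while ctx.compute_single():
        out.append(ctx.get_output_single())
    ctx.fini_single()
    return out

print("".join(stream_tokens(FakeStreamingContext(["Hel", "lo", "!"]))))
```

The point of the pattern is that each `compute_single` yields one token that can be flushed to the user immediately, instead of one blocking `compute` followed by a single `get_output` of the whole completion.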
Committed: 2024-09-10 08:45:18 +08:00
| Name | Last commit | Date |
| --- | --- | --- |
| models | wasi-nn: Support uint8 quantized networks (#2433) | 2023-08-11 07:55:40 +08:00 |
| build.sh | wasi-nn: Support uint8 quantized networks (#2433) | 2023-08-11 07:55:40 +08:00 |
| bump_wasi_nn_to_0_6_0.patch | [wasi-nn] Add a new wasi-nn backend openvino (#3603) | 2024-07-22 17:16:41 +08:00 |
| Dockerfile.compile | Fix dockerfile linter warnings (#2291) | 2023-06-15 16:52:48 +08:00 |
| Dockerfile.cpu | Make wasi-nn backends as separated shared libraries (#3509) | 2024-06-14 12:06:56 +08:00 |
| Dockerfile.nvidia-gpu | Make wasi-nn backends as separated shared libraries (#3509) | 2024-06-14 12:06:56 +08:00 |
| Dockerfile.tpu | Make wasi-nn backends as separated shared libraries (#3509) | 2024-06-14 12:06:56 +08:00 |
| Dockerfile.vx-delegate | Fix dockerfile linter warnings (#2291) | 2023-06-15 16:52:48 +08:00 |
| Dockerfile.wasi-nn-smoke | wasi-nn: Add a new target for llama.cpp as a wasi-nn backend (#3709) | 2024-09-10 08:45:18 +08:00 |
| requirements.txt | build(deps): bump tensorflow in /core/iwasm/libraries/wasi-nn/test (#3675) | 2024-08-02 09:17:12 +08:00 |
| run_smoke_test.py | wasi-nn: Add a new target for llama.cpp as a wasi-nn backend (#3709) | 2024-09-10 08:45:18 +08:00 |
| test_tensorflow_quantized.c | wasi-nn: Support uint8 quantized networks (#2433) | 2023-08-11 07:55:40 +08:00 |
| test_tensorflow.c | wasi-nn: Improve tests paths for local dev (#2309) | 2023-06-27 08:07:30 +08:00 |
| utils.c | sync up with latest wasi-nn spec (#3530) | 2024-06-17 14:58:09 +08:00 |
| utils.h | Make wasi-nn backends as separated shared libraries (#3509) | 2024-06-14 12:06:56 +08:00 |