wasm-micro-runtime/core/iwasm/libraries/wasi-nn/cmake/Findcjson.cmake
wasi-nn: Add a new target for llama.cpp as a wasi-nn backend (#3709)
Minimum support:
- [x] accept (WasmEdge) customized model parameters via metadata (see the sketch after this list).
- [x] Target [wasmedge-ggml examples](https://github.com/second-state/WasmEdge-WASINN-examples/tree/master/wasmedge-ggml)
  - [x] basic
  - [x] chatml
  - [x] gemma
  - [x] llama
  - [x] qwen
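
For reference, a minimal sketch of such a metadata blob, following the wasmedge-ggml convention; the key names below are illustrative assumptions, not the backend's confirmed set:

```c
/* Hypothetical metadata a guest could hand to the llama.cpp backend.
 * Keys follow the wasmedge-ggml convention; the exact names accepted
 * by this backend are assumptions. */
static const char *metadata =
    "{"
    "  \"enable-log\":   false,"
    "  \"ctx-size\":     512,"
    "  \"n-predict\":    128,"
    "  \"n-gpu-layers\": 0"
    "}";
```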

---

To be supported in the future, if required:
- [ ] Target [wasmedge-ggml examples](https://github.com/second-state/WasmEdge-WASINN-examples/tree/master/wasmedge-ggml)
  - [ ] command-r (>70 GB memory requirement)
  - [ ] embedding (embedding mode)
  - [ ] grammar (uses the grammar option to constrain the model to generate JSON output)
  - [ ] llama-stream (new APIs `compute_single`, `get_output_single`, `fini_single`)
  - [ ] llava (image representation)
  - [ ] llava-base64-stream (image representation)
  - [ ] multimodel (image representation)
- [ ] Target [llamaedge](https://github.com/LlamaEdge/LlamaEdge)

# Copyright (C) 2019 Intel Corporation. All rights reserved.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
include(FetchContent)
set(CJSON_SOURCE_DIR "${WAMR_ROOT_DIR}/core/deps/cjson")
FetchContent_Declare(
  cjson
  GIT_REPOSITORY https://github.com/DaveGamble/cJSON.git
  GIT_TAG        v1.7.18
  SOURCE_DIR     ${CJSON_SOURCE_DIR}
)
set(ENABLE_CJSON_TEST OFF CACHE INTERNAL "Turn off tests")
set(ENABLE_CJSON_UNINSTALL OFF CACHE INTERNAL "Turn off uninstall to avoid targets conflict")
FetchContent_MakeAvailable(cjson)
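
cJSON is pulled in so the backend can parse that metadata blob. Below is a minimal parsing sketch, assuming hypothetical key names (`ctx-size`, `n-gpu-layers`) and a hypothetical helper `parse_metadata`; only the cJSON calls themselves are the library's real API:

```c
/* Sketch of why cJSON is fetched: the llama.cpp backend parses the
 * JSON metadata blob for model parameters. Key names follow the
 * wasmedge-ggml convention and are assumptions, not the backend's
 * confirmed set. */
#include <stdbool.h>
#include <stdint.h>
#include "cJSON.h"

static bool
parse_metadata(const char *json, uint32_t *ctx_size, uint32_t *n_gpu_layers)
{
    cJSON *root = cJSON_Parse(json);
    if (!root)
        return false;

    /* Missing keys yield NULL, which cJSON_IsNumber rejects, so
     * absent options simply keep their defaults. */
    cJSON *item = cJSON_GetObjectItemCaseSensitive(root, "ctx-size");
    if (cJSON_IsNumber(item))
        *ctx_size = (uint32_t)item->valuedouble;

    item = cJSON_GetObjectItemCaseSensitive(root, "n-gpu-layers");
    if (cJSON_IsNumber(item))
        *n_gpu_layers = (uint32_t)item->valuedouble;

    cJSON_Delete(root);
    return true;
}
```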