Compare commits


38 Commits

Author SHA1 Message Date
Huang Qi
cf5b8901fa
Merge 91dd6f0a0e into d598c0d0d3 2025-07-01 12:12:48 -07:00
YAMAMOTO Takashi
d598c0d0d3
CI: add wamr_wasi_extensions to the release assets (#4425)
you can find an example of the release asset at:
https://github.com/yamt/wasm-micro-runtime/releases/download/WAMR-2.3.999/wamr-wasi-extensions-2.3.999.zip

note: this is a static library for wasm32-wasi. no need to provide
per-host-OS (macOS, Ubuntu, etc.) binaries.
2025-07-01 19:32:01 +08:00
YAMAMOTO Takashi
da6019f749
wasi_nn_llamacpp.c: reject invalid graph and execution context (#4422)
* return valid graph and execution context instead of using stack garbage.
  (always 0 for now because we don't implement multiple graph/context
  for this backend.)

* validate user-given graph and execution context values. reject
  invalid ones.
2025-07-01 19:31:00 +08:00
YAMAMOTO Takashi
ebf1404ad1
wasi_nn_openvino.c: avoid self-assignment warning (#4434) 2025-07-01 19:19:36 +08:00
liang.he
c7148a6823
Fix potential integer overflow issues (#4429)
It is reported as "Multiplication result converted to larger type"
and "Multiplication result may overflow 'Type A' before it is
converted to 'Type B'", where Type B is a larger type than Type A.

Since the conversion applies after the multiplication, arithmetic
overflow may still occur.

> The rule flags every multiplication of two non-constant integer expressions
> that is (explicitly or implicitly) converted to a larger integer type. The
> conversion is an indication that the expression would produce a result that
> would be too large to fit in the smaller integer type.
2025-07-01 13:39:30 +08:00
Liu Jia
8949797c84
Improve run.py of regression (#4417)
* Improve run.py of regression
1. Fix script interruption on case failure
2. Improve statistics logic
3. Enable selecting specific issue ids
2025-07-01 10:44:53 +08:00
YAMAMOTO Takashi
38fe056cc6
wasi-nn: reduce code duplication a bit (#4433) 2025-07-01 10:37:12 +08:00
liang.he
430cc5e5ef
Refactor AOTObjectData definition to use a forward declaration (#4428)
> core/iwasm/compilation/aot_emit_aot_file.c:85:3:
    error: redefinition of typedef 'AOTObjectData' is a C11 feature
2025-07-01 10:10:11 +08:00
dependabot[bot]
cb233ec042
build(deps): Bump github/codeql-action from 3.29.0 to 3.29.1 (#4436)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.29.0 to 3.29.1.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Commits](https://github.com/github/codeql-action/compare/v3.29.0...v3.29.1)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.29.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-01 10:07:48 +08:00
YAMAMOTO Takashi
4fbb372f15
CI: revert SGX retry attempts (#4421)
* Revert "Improve spec test execution by adding retry logic for transient errors (#4393)"

This reverts commit 64cafaff1e.

* Revert "Add error handling for sgx ci (#4222)"

This reverts commit 8ad47897d1.
2025-06-30 12:58:20 +08:00
Zhenwei Jin
0127eafbe5
loader: fix a potential overflow issue (#4427) 2025-06-30 12:57:57 +08:00
YAMAMOTO Takashi
7a6a6a39e9
wasi_nn_openvino.c: fix a debug build (#4416)
after the "wasi_nn_openvino.c: implement multiple models per instance" change.
(https://github.com/bytecodealliance/wasm-micro-runtime/pull/4380)
2025-06-30 12:57:44 +08:00
YAMAMOTO Takashi
18d4227ab6
CI: build wamr-wasi-extensions (#4394)
* wamr-wasi-extensions: separate test scripts
also, allow specifying the prefix directory, for the
convenience of the CI.

* CI: build wamr-wasi-extensions
fragments are copied from compilation_on_macos.yml.
(hence the Intel copyright notice)
2025-06-27 12:28:46 +08:00
liang.he
0641dd1506
Fix few shadow warnings (#4409)
- declaration of ‘memidx’ shadows a previous local
- declaration of ‘count’ shadows a previous local
2025-06-27 11:55:32 +08:00
YAMAMOTO Takashi
8ed89e2ab2
wasi_nn_llamacpp.c: remove an unused variable (#4415) 2025-06-27 11:55:08 +08:00
YAMAMOTO Takashi
d6876f1e9f
wasi_nn_llamacpp.c: fix buffer overruns in set_input (#4420)
note: for some reason, wasmedge seems to ignore type/dimensions
for the input of ggml. some user code relies on it.
cf. https://github.com/second-state/WasmEdge-WASINN-examples/issues/196

note: despite the comment in our code, the input doesn't seem
to be nul-terminated.
2025-06-27 11:51:03 +08:00
YAMAMOTO Takashi
2372a472aa
wasi-nn: make the host use the wasi_ephemeral_nn version of tensor_data (#4411)
the motivations:

* make the actual input size available to the backends.
  (currently the backends have to make a guess from shape/type.)

* make the host logic look a bit similar to wasi_ephemeral_nn.

this is a backend api/abi change.
2025-06-27 07:41:42 +08:00
TianlongLiang
23799a2cb6
Collective fix (#4413)
* Fix vector growth check and typos in core (#9)
* Fix resource cleanup in memory and running modes tests (#10)
* Add end of file empty line in wasm_running_modes_test.cc
2025-06-26 10:20:40 +08:00
TianlongLiang
5b32130955
fix bug in bh_vector when extending (#4414) 2025-06-26 10:18:24 +08:00
YAMAMOTO Takashi
a7aae9d2cc
wasi_nn_llamacpp.c: make this compilable (#4403) 2025-06-26 07:05:45 +08:00
Liu Jia
535004dedc
Fix handling of non-nullable global_type during global import (#4408) 2025-06-26 06:59:57 +08:00
Zhenwei Jin
1e41519977
loader: add type index checking (#4402) 2025-06-24 20:38:39 +08:00
liang.he
e414a327a0
Refactor copy callstack feature (#4401)
- Change `WAMR_ENABLE_COPY_CALLSTACK` to `WAMR_BUILD_COPY_CALL_STACK`, as
  `WAMR_BUILD` is the prefix for a command line option.
- Change `WAMR_ENABLE_COPY_CALLSTACK` to `WASM_ENABLE_COPY_CALL_STACK`, as
  `WASM_ENABLE` is the prefix for a macro in the source code.
- Change `CALLSTACK` to `CALL_STACK` to align with the existing
  `DUMP_CALL_STACK` feature.
- Continue using `WASMCApiFrame` instead of `wasm_frame_t` outside of
  *wasm_c_api.xxx* to avoid a typedef redefinition warning, which is
  identified by Clang.
2025-06-24 20:38:30 +08:00
YAMAMOTO Takashi
8289452abb
wasi_nn_tensorflowlite.cpp: fix get_output return size (#4390)
it should be byte size, not the number of (fp32) values.

i'm ambivalent about how to deal with the compatibility for
the legacy wamr-specific "wasi_nn". for now, i avoided changing it
(so that existing tests using the legacy abi, namely test_tensorflow.c
and test_tensorflow_quantized.c, pass as they are).
if we have any users who still want to use the legacy abi,
i suppose they consider the compatibility is more important
than the consistency with other backends.

cf. https://github.com/bytecodealliance/wasm-micro-runtime/issues/4376
2025-06-24 20:38:19 +08:00
YAMAMOTO Takashi
70c39bae77
wasi-nn: fix context lifetime issues (#4396)
* wasi-nn: fix context lifetime issues

use the module instance context api instead of trying to roll
our own with a hashmap. this fixes context lifetime problems mentioned in
https://github.com/bytecodealliance/wasm-micro-runtime/issues/4313.

namely,

* wasi-nn resources will be freed earlier now. before this change,
  they used to be kept until the runtime shutdown. (wasm_runtime_destroy)
  after this change, they will be freed together with the associated
  instances.

* wasm_module_inst_t pointer uniqueness assumption (which is wrong
  after wasm_runtime_deinstantiate) was lifted.

as a side effect, this change also makes a context shared among threads
within a cluster. note that this is a user-visible api/abi breaking change.
before this change, wasi-nn "handles" like wasi_ephemeral_nn_graph were
thread-local. after this change, they are shared among threads within
a cluster, similarly to wasi file descriptors. spec-wise, either behavior
should be ok simply because wasi officially doesn't have threads yet.
although i feel the latter semantics is more intuitive, if your application
depends on the thread-local behavior, this change breaks your application.

tested with wamr-wasi-extensions/samples/nn-cli, modified to
call each wasi-nn operation on a different thread. (if you are
interested, you can find the modification at
https://github.com/yamt/wasm-micro-runtime/tree/yamt-nn-wip-20250619.)

cf.
https://github.com/bytecodealliance/wasm-micro-runtime/issues/4313
https://github.com/bytecodealliance/wasm-micro-runtime/issues/2430

* runtime_lib.cmake: enable WAMR_BUILD_MODULE_INST_CONTEXT for wasi-nn

as we do for wasi (WAMR_BUILD_LIBC_WASI)
2025-06-24 20:37:56 +08:00
YAMAMOTO Takashi
92e5f5f123
CI: fix the description of upload_url (#4407) 2025-06-24 20:35:19 +08:00
YAMAMOTO Takashi
7471d5a5d0
wamr-wasi-extensions/socket: disable reference-types (#4392)
and add a comment to explain why.
2025-06-20 15:50:48 +08:00
YAMAMOTO Takashi
f449b79a31
wasi_nn_openvino.c: implement multiple models per instance (#4380)
tested with two models:
```
--load-graph=id=graph1,file=public/license-plate-recognition-barrier-0007/FP32/license-plate-recognition-barrier-0007.xml,file=public/license-plate-recognition-barrier-0007/FP32/license-plate-recognition-barrier-0007.bin \
--load-graph=id=graph2,file=classify/model.xml,file=classify/model.bin \
--init-execution-context=id=exec1,graph-id=graph1 \
--init-execution-context=id=exec2,graph-id=graph2 \
--set-input=context-id=exec1,dim=1,dim=24,dim=94,dim=3,file=out.bin \
--set-input=context-id=exec2,file=classify/banana-3x224x224-bgr.bin,dim=1,dim=3,dim=224,dim=224 \
--compute=context-id=exec1 \
--compute=context-id=exec2 \
--get-output=context-id=exec1,file=exec1-result.bin \
--get-output=context-id=exec2,file=exec2-result.bin
```

a detailed HOWTO: https://github.com/bytecodealliance/wasm-micro-runtime/pull/4380#issuecomment-2986882718
2025-06-20 15:50:29 +08:00
liang.he
64cafaff1e
Improve spec test execution by adding retry logic for transient errors (#4393) 2025-06-20 15:49:43 +08:00
YAMAMOTO Takashi
ea408ab6c0
wasi-nn: add minimum serialization on WASINNContext (#4387)
currently this is not necessary because context (WASINNContext) is
local to instance. (wasm_module_instance_t)

i plan to make a context shared among instances in a cluster when
fixing https://github.com/bytecodealliance/wasm-micro-runtime/issues/4313.
this is a preparation for that direction.

an obvious alternative is to tweak the module instance context APIs
to allow declaring some kind of contexts instance-local. but i feel,
in this particular case, it's more natural to make "wasi-nn handles"
shared among threads within a "process".

note that, spec-wise, how wasi-nn behaves wrt threads is not defined
at all because wasi officially doesn't have threads yet. i suppose, at
this point, that how wasi-nn interacts with wasi-threads is something
we need to define by ourselves, especially when we are using an outdated
wasi-nn version.

with this change, if a thread attempts to access a context while
another thread is using it, we simply make the operation fail with
the "busy" error. this is intended as the minimum serialization to
avoid problems like crashes/leaks/etc. this is not intended to allow
parallelism or such.

no functional changes are intended at this point yet.

cf.
https://github.com/bytecodealliance/wasm-micro-runtime/issues/4313
https://github.com/bytecodealliance/wasm-micro-runtime/issues/2430
2025-06-20 09:48:55 +08:00
YAMAMOTO Takashi
71c07f3e4e
deprecate legacy WAMR-specific "wasi_nn" module (#4382)
wasi_nn.h: deprecate legacy "wasi_nn"

cf. https://github.com/bytecodealliance/wasm-micro-runtime/issues/4326
2025-06-19 14:32:26 +08:00
YAMAMOTO Takashi
e5091e47ea
enable WAMR_BUILD_WASI_EPHEMERAL_NN by default (#4381)
cf. https://github.com/bytecodealliance/wasm-micro-runtime/issues/4326
2025-06-19 14:30:44 +08:00
YAMAMOTO Takashi
aa53d648fa
wasi-nn: fix tensor_data abi for wasi_ephemeral_nn (#4379)
it's "(list u8)" in the witx definition.

the new definition matches both of our own host definition
(struct tensor_wasm) and wasmtime.

cf. https://github.com/bytecodealliance/wasm-micro-runtime/issues/4352
2025-06-19 14:18:36 +08:00
YAMAMOTO Takashi
a29f3943ef
core/iwasm/libraries/wasi-nn/test: use the correct version of keras (#4383) 2025-06-18 19:24:06 +08:00
liang.he
8414a20dfe
Fix several issues related to night-run CI and test scripts. (#4385)
- remove duplicated options
- fix test script
- change ci to use binary
2025-06-18 19:16:47 +08:00
YAMAMOTO Takashi
db7714f0f5
wasi_nn_tensorflowlite.cpp: reject non-fp32 input earlier (#4388)
this backend assumes fp32 here and there.
it's safer to reject unexpected inputs explicitly.
2025-06-18 19:08:57 +08:00
YAMAMOTO Takashi
4bf799c3af
core/iwasm/libraries/wasi-nn/test/build.sh: add a tip for intel mac (#4389)
i keep forgetting this and had to re-investigate it at least twice.
hopefully this can be helpful for others too.
2025-06-18 19:06:57 +08:00
Huang Qi
91dd6f0a0e Link libc++ statically to reduce runtime dependency of wamrc 2024-04-18 10:12:43 +08:00
60 changed files with 940 additions and 492 deletions

View File

@@ -23,7 +23,7 @@ on:
         type: string
         required: true
       upload_url:
-        description: a semantic version number. it is required when `release` is true.
+        description: upload binary assets to the URL of release
         type: string
         required: false
       ver_num:

View File

@@ -0,0 +1,57 @@
+# Copyright (C) 2019 Intel Corporation. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+
+name: build wamr_wasi_extensions release
+
+on:
+  workflow_call:
+    inputs:
+      upload_url:
+        description: upload binary assets to the URL of release
+        type: string
+        required: false
+      ver_num:
+        description: a semantic version number. it is required when `release` is true.
+        type: string
+        required: false
+
+permissions:
+  contents: read
+
+jobs:
+  build_wamr_wasi_extensions:
+    runs-on: ${{ matrix.os }}
+    permissions:
+      contents: write # for uploading release artifacts
+    strategy:
+      matrix:
+        os: [ubuntu-22.04]
+    steps:
+      - name: checkout
+        uses: actions/checkout@v4
+
+      - name: install-wasi-sdk-wabt
+        uses: ./.github/actions/install-wasi-sdk-wabt
+        with:
+          os: ${{ matrix.os }}
+
+      - name: Build wamr-wasi-extensions
+        run: |
+          mkdir dist
+          ./build_libs.sh $(pwd)/dist/wamr-wasi-extensions
+        working-directory: wamr-wasi-extensions
+
+      - name: Compress the binary
+        run: |
+          zip -r wamr-wasi-extensions-${{ inputs.ver_num }}.zip wamr-wasi-extensions
+        working-directory: wamr-wasi-extensions/dist
+
+      - name: Upload release zip
+        uses: actions/upload-release-asset@v1
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+        with:
+          upload_url: ${{ inputs.upload_url }}
+          asset_path: wamr-wasi-extensions/dist/wamr-wasi-extensions-${{ inputs.ver_num }}.zip
+          asset_name: wamr-wasi-extensions-${{ inputs.ver_num }}.zip
+          asset_content_type: application/zip

View File

@@ -23,7 +23,7 @@ on:
         type: string
         required: true
       upload_url:
-        description: a semantic version number. it is required when `release` is true.
+        description: upload binary assets to the URL of release
         type: string
         required: false
       ver_num:

View File

@@ -53,7 +53,7 @@ jobs:
       # Initializes the CodeQL tools for scanning.
       - name: Initialize CodeQL
-        uses: github/codeql-action/init@v3.29.0
+        uses: github/codeql-action/init@v3.29.1
         with:
           languages: ${{ matrix.language }}
@@ -70,7 +70,7 @@ jobs:
       - run: |
           ./.github/scripts/codeql_buildscript.sh
       - name: Perform CodeQL Analysis
-        uses: github/codeql-action/analyze@v3.29.0
+        uses: github/codeql-action/analyze@v3.29.1
         with:
           category: "/language:${{matrix.language}}"
           upload: false
@@ -99,7 +99,7 @@ jobs:
           output: ${{ steps.step1.outputs.sarif-output }}/cpp.sarif
       - name: Upload CodeQL results to code scanning
-        uses: github/codeql-action/upload-sarif@v3.29.0
+        uses: github/codeql-action/upload-sarif@v3.29.1
         with:
           sarif_file: ${{ steps.step1.outputs.sarif-output }}
           category: "/language:${{matrix.language}}"

View File

@@ -290,28 +290,6 @@ jobs:
       - name: run spec tests
         run: |
-          set +e
           source /opt/intel/sgxsdk/environment
-          attempts=0
-          max_attempts=3
-          while [ $attempts -lt $max_attempts ]; do
-            ./test_wamr.sh ${{ matrix.test_option }} -t ${{ matrix.running_mode }}
-            exitcode="$?"
-            if [ $exitcode -eq 0 ]; then
-              echo "Spec test passed"
-              exit 0
-            elif [ $exitcode -ne 143 ]; then
-              echo "Spec test failed with error code $exitcode"
-              exit 1
-            fi
-            echo "$exitcode is a known GitHub-hosted runner issue"
-            echo "::notice::Re-running the spec test due to error code 143"
-            attempts=$((attempts + 1))
-          done
-          echo "::notice::Report an error with code 143 in SGX CI after $max_attempts attempts"
-          exit 143
+          ./test_wamr.sh ${{ matrix.test_option }} -t ${{ matrix.running_mode }}
         working-directory: ./tests/wamr-test-suites

View File

@@ -36,12 +36,11 @@ env:
   LLVM_EAGER_JIT_BUILD_OPTIONS: "-DWAMR_BUILD_AOT=1 -DWAMR_BUILD_FAST_INTERP=0 -DWAMR_BUILD_INTERP=0 -DWAMR_BUILD_FAST_JIT=0 -DWAMR_BUILD_JIT=1 -DWAMR_BUILD_LAZY_JIT=0"
   MULTI_TIER_JIT_BUILD_OPTIONS: "-DWAMR_BUILD_AOT=1 -DWAMR_BUILD_FAST_INTERP=0 -DWAMR_BUILD_INTERP=1 -DWAMR_BUILD_FAST_JIT=1 -DWAMR_BUILD_JIT=1 -DWAMR_BUILD_LAZY_JIT=1"
   # For Spec Test
-  # FIXME: use binary release(adding -b) instead of building from source after upgrading to 22.04
-  DEFAULT_TEST_OPTIONS: "-s spec -P"
-  MULTI_MODULES_TEST_OPTIONS: "-s spec -M -P"
-  SIMD_TEST_OPTIONS: "-s spec -S -P"
-  THREADS_TEST_OPTIONS: "-s spec -p -P"
-  X86_32_TARGET_TEST_OPTIONS: "-m x86_32 -P"
+  DEFAULT_TEST_OPTIONS: "-s spec -b -P"
+  MULTI_MODULES_TEST_OPTIONS: "-s spec -b -P -M"
+  SIMD_TEST_OPTIONS: "-s spec -b -P -S"
+  THREADS_TEST_OPTIONS: "-s spec -b -P -p"
+  X86_32_TARGET_TEST_OPTIONS: "-m x86_32"
   WASI_TEST_OPTIONS: "-s wasi_certification -w"

 permissions:

View File

@@ -239,3 +239,12 @@ jobs:
       arch: universal
       upload_url: ${{ needs.create_release.outputs.upload_url }}
       ver_num: ${{ needs.create_tag.outputs.new_ver}}
+
+  release_wamr_wasi_extensions:
+    permissions:
+      contents: write # upload release artifact
+    needs: [create_tag, create_release]
+    uses: ./.github/workflows/build_wamr_wasi_extensions.yml
+    with:
+      upload_url: ${{ needs.create_release.outputs.upload_url }}
+      ver_num: ${{ needs.create_tag.outputs.new_ver }}

View File

@@ -60,6 +60,6 @@ jobs:
       # Upload the results to GitHub's code scanning dashboard.
       - name: "Upload to code-scanning"
-        uses: github/codeql-action/upload-sarif@2847b7f7ab9f48fc49eca90a53fff6007285f399
+        uses: github/codeql-action/upload-sarif@4c57370d0304fbff638216539f81d9163f77712a
         with:
           sarif_file: results.sarif

View File

@@ -0,0 +1,57 @@
+# Copyright (C) 2019 Intel Corporation. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+
+name: wamr_wasi_extensions
+
+on:
+  pull_request:
+    types:
+      - opened
+      - synchronize
+    paths:
+      - ".github/workflows/wamr_wasi_extensions.yml"
+      - "wamr_wasi_extensios/**"
+      - "core/iwasm/libraries/wasi-nn/include/**"
+      - "core/iwasm/libraries/lib-socket/**"
+
+  # allow to be triggered manually
+  workflow_dispatch:
+
+# Cancel any in-flight jobs for the same PR/branch so there's only one active
+# at a time
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
+jobs:
+  build_wamr_wasi_extensions:
+    runs-on: ${{ matrix.os }}
+    strategy:
+      matrix:
+        os: [ubuntu-22.04, macos-13, macos-14]
+    steps:
+      - name: checkout
+        uses: actions/checkout@v4
+
+      - name: install-wasi-sdk-wabt
+        uses: ./.github/actions/install-wasi-sdk-wabt
+        with:
+          os: ${{ matrix.os }}
+
+      - name: Build wamr-wasi-extensions
+        run: |
+          mkdir dist
+          ./build_libs.sh $(pwd)/dist/wamr-wasi-extensions
+        working-directory: wamr-wasi-extensions
+
+      - name: Build wamr-wasi-extensions samples
+        run: |
+          ./build_samples.sh $(pwd)/dist/wamr-wasi-extensions
+        working-directory: wamr-wasi-extensions
+
+      - name: Upload artifacts
+        if: matrix.os == 'macos-14'
+        uses: actions/upload-artifact@v4
+        with:
+          name: wamr-wasi-extensions
+          path: wamr-wasi-extensions/dist
+          retention-days: 10

View File

@@ -99,9 +99,9 @@ if (NOT DEFINED WAMR_BUILD_LIB_WASI_THREADS)
   set (WAMR_BUILD_LIB_WASI_THREADS 0)
 endif ()

-if (NOT DEFINED WAMR_ENABLE_COPY_CALLSTACK)
+if (NOT DEFINED WAMR_BUILD_COPY_CALL_STACK)
   # Disable copy callstack by default
-  set (WAMR_ENABLE_COPY_CALLSTACK 0)
+  set (WAMR_BUILD_COPY_CALL_STACK 0)
 endif()

 if (NOT DEFINED WAMR_BUILD_MINI_LOADER)

View File

@@ -66,6 +66,7 @@ def build_llvm(llvm_dir, platform, backends, projects, use_clang=False, extra_fl
         "-DLLVM_INCLUDE_UTILS:BOOL=OFF",
         "-DLLVM_INCLUDE_TESTS:BOOL=OFF",
         "-DLLVM_OPTIMIZED_TABLEGEN:BOOL=ON",
+        "-DLLVM_STATIC_LINK_CXX_STDLIB=ON",
     ]

     # ccache is not available on Windows

View File

@@ -334,15 +334,10 @@ if (WAMR_BUILD_SHARED_HEAP EQUAL 1)
   add_definitions (-DWASM_ENABLE_SHARED_HEAP=1)
   message ("     Shared heap enabled")
 endif()

-if (WAMR_ENABLE_COPY_CALLSTACK EQUAL 1)
-  add_definitions (-DWAMR_ENABLE_COPY_CALLSTACK=1)
+if (WAMR_BUILD_COPY_CALL_STACK EQUAL 1)
+  add_definitions (-DWASM_ENABLE_COPY_CALL_STACK=1)
   message("     Copy callstack enabled")
-else ()
-  add_definitions (-DWAMR_ENABLE_COPY_CALLSTACK=0)
-  message("     Copy callstack disabled")
 endif()

 if (WAMR_BUILD_MEMORY64 EQUAL 1)
   # if native is 32-bit or cross-compiled to 32-bit
   if (NOT WAMR_BUILD_TARGET MATCHES ".*64.*")
@@ -539,6 +534,9 @@ if (WAMR_BUILD_WASI_NN EQUAL 1)
   if (DEFINED WAMR_BUILD_WASI_NN_EXTERNAL_DELEGATE_PATH)
     add_definitions (-DWASM_WASI_NN_EXTERNAL_DELEGATE_PATH="${WAMR_BUILD_WASI_NN_EXTERNAL_DELEGATE_PATH}")
   endif ()
+  if (NOT DEFINED WAMR_BUILD_WASI_EPHEMERAL_NN)
+    set(WAMR_BUILD_WASI_EPHEMERAL_NN 1)
+  endif()
   if (WAMR_BUILD_WASI_EPHEMERAL_NN EQUAL 1)
     message ("     WASI-NN: use 'wasi_ephemeral_nn' instead of 'wasi-nn'")
     add_definitions (-DWASM_ENABLE_WASI_EPHEMERAL_NN=1)

View File

@@ -106,6 +106,7 @@ endif ()

 if (WAMR_BUILD_WASI_NN EQUAL 1)
   include (${IWASM_DIR}/libraries/wasi-nn/cmake/wasi_nn.cmake)
+  set (WAMR_BUILD_MODULE_INST_CONTEXT 1)
 endif ()

 if (WAMR_BUILD_LIB_PTHREAD EQUAL 1)
if (WAMR_BUILD_LIB_PTHREAD EQUAL 1) if (WAMR_BUILD_LIB_PTHREAD EQUAL 1)

View File

@@ -4,7 +4,6 @@
 # SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
 #
 import argparse
-import re
 from pathlib import Path
 import re
 import shlex
@@ -39,7 +38,7 @@ INVALID_FILE_NAME_SEGMENT = r"([a-zA-Z0-9]+\-[a-zA-Z0-9]+)"
 def locate_command(command: str) -> bool:
     if not shutil.which(command):
-        print(f"Command '{command}'' not found")
+        print(f"Command '{command}' not found")
         return False
     return True

View File

@@ -193,8 +193,8 @@
 #error "Heap aux stack allocation must be enabled for WASI threads"
 #endif

-#ifndef WAMR_ENABLE_COPY_CALLSTACK
-#define WAMR_ENABLE_COPY_CALLSTACK 0
+#ifndef WASM_ENABLE_COPY_CALL_STACK
+#define WASM_ENABLE_COPY_CALL_STACK 0
 #endif

 #ifndef WASM_ENABLE_BASE_LIB

View File

@@ -1730,6 +1730,12 @@ load_types(const uint8 **p_buf, const uint8 *buf_end, AOTModule *module,
             (void)u8;
             read_uint32(buf, buf_end, j);
+#if WASM_ENABLE_AOT_VALIDATOR != 0
+            if (j >= module->type_count) {
+                set_error_buf(error_buf, error_buf_size, "invalid type index");
+                goto fail;
+            }
+#endif
             if (module->types[j]->ref_count == UINT16_MAX) {
                 set_error_buf(error_buf, error_buf_size,
                               "wasm type's ref count too large");
@@ -1993,6 +1999,13 @@ load_types(const uint8 **p_buf, const uint8 *buf_end, AOTModule *module,
             AOTType *cur_type = module->types[j];
             parent_type_idx = cur_type->parent_type_idx;
             if (parent_type_idx != (uint32)-1) { /* has parent */
+#if WASM_ENABLE_AOT_VALIDATOR != 0
+                if (parent_type_idx >= module->type_count) {
+                    set_error_buf(error_buf, error_buf_size,
+                                  "invalid parent type index");
+                    goto fail;
+                }
+#endif
                 AOTType *parent_type = module->types[parent_type_idx];

                 module->types[j]->parent_type = parent_type;
@@ -2016,6 +2029,13 @@ load_types(const uint8 **p_buf, const uint8 *buf_end, AOTModule *module,
             AOTType *cur_type = module->types[j];
             parent_type_idx = cur_type->parent_type_idx;
             if (parent_type_idx != (uint32)-1) { /* has parent */
+#if WASM_ENABLE_AOT_VALIDATOR != 0
+                if (parent_type_idx >= module->type_count) {
+                    set_error_buf(error_buf, error_buf_size,
+                                  "invalid parent type index");
+                    goto fail;
+                }
+#endif
                 AOTType *parent_type = module->types[parent_type_idx];
                 /* subtyping has been checked during compilation */
                 bh_assert(wasm_type_is_subtype_of(

View File

@@ -3639,7 +3639,7 @@ aot_get_module_inst_mem_consumption(const AOTModuleInstance *module_inst,
     for (i = 0; i < module_inst->memory_count; i++) {
         AOTMemoryInstance *mem_inst = module_inst->memories[i];
         mem_conspn->memories_size +=
-            mem_inst->num_bytes_per_page * mem_inst->cur_page_count;
+            (uint64)mem_inst->num_bytes_per_page * mem_inst->cur_page_count;
         mem_conspn->app_heap_size =
             mem_inst->heap_data_end - mem_inst->heap_data;
         /* size of app heap structure */
@@ -4137,9 +4137,9 @@ aot_frame_update_profile_info(WASMExecEnv *exec_env, bool alloc_frame)
 }
 #endif /* end of WASM_ENABLE_AOT_STACK_FRAME != 0 */

-#if WAMR_ENABLE_COPY_CALLSTACK != 0
+#if WASM_ENABLE_COPY_CALL_STACK != 0
 uint32
-aot_copy_callstack_tiny_frame(WASMExecEnv *exec_env, wasm_frame_t *buffer,
+aot_copy_callstack_tiny_frame(WASMExecEnv *exec_env, WASMCApiFrame *buffer,
                               const uint32 length, const uint32 skip_n,
                               char *error_buf, uint32 error_buf_size)
 {
@@ -4193,7 +4193,7 @@ aot_copy_callstack_tiny_frame(WASMExecEnv *exec_env, wasm_frame_t *buffer,
 }

 uint32
-aot_copy_callstack_standard_frame(WASMExecEnv *exec_env, wasm_frame_t *buffer,
+aot_copy_callstack_standard_frame(WASMExecEnv *exec_env, WASMCApiFrame *buffer,
                                   const uint32 length, const uint32 skip_n,
                                   char *error_buf, uint32_t error_buf_size)
 {
@@ -4243,7 +4243,7 @@ aot_copy_callstack_standard_frame(WASMExecEnv *exec_env, wasm_frame_t *buffer,
 }

 uint32
-aot_copy_callstack(WASMExecEnv *exec_env, wasm_frame_t *buffer,
+aot_copy_callstack(WASMExecEnv *exec_env, WASMCApiFrame *buffer,
                    const uint32 length, const uint32 skip_n, char *error_buf,
                    uint32_t error_buf_size)
 {
@@ -4265,7 +4265,7 @@ aot_copy_callstack(WASMExecEnv *exec_env, wasm_frame_t *buffer,
                                                   error_buf, error_buf_size);
     }
 }
-#endif // WAMR_ENABLE_COPY_CALLSTACK
+#endif // WASM_ENABLE_COPY_CALL_STACK

 #if WASM_ENABLE_DUMP_CALL_STACK != 0
 bool

View File

@@ -787,12 +787,12 @@ aot_frame_update_profile_info(WASMExecEnv *exec_env, bool alloc_frame);
 bool
 aot_create_call_stack(struct WASMExecEnv *exec_env);

-#if WAMR_ENABLE_COPY_CALLSTACK != 0
+#if WASM_ENABLE_COPY_CALL_STACK != 0
 uint32
-aot_copy_callstack(WASMExecEnv *exec_env, wasm_frame_t *buffer,
+aot_copy_callstack(WASMExecEnv *exec_env, WASMCApiFrame *buffer,
                    const uint32 length, const uint32 skip_n, char *error_buf,
                    uint32_t error_buf_size);
-#endif // WAMR_ENABLE_COPY_CALLSTACK
+#endif // WASM_ENABLE_COPY_CALL_STACK

 /**
  * @brief Dump wasm call stack or get the size

View File

@@ -1743,9 +1743,9 @@ wasm_runtime_destroy_exec_env(WASMExecEnv *exec_env)
     wasm_exec_env_destroy(exec_env);
 }

-#if WAMR_ENABLE_COPY_CALLSTACK != 0
+#if WASM_ENABLE_COPY_CALL_STACK != 0
 uint32
-wasm_copy_callstack(const wasm_exec_env_t exec_env, wasm_frame_t *buffer,
+wasm_copy_callstack(const wasm_exec_env_t exec_env, WASMCApiFrame *buffer,
                     const uint32 length, const uint32 skip_n, char *error_buf,
                     uint32_t error_buf_size)
 {
@@ -1780,7 +1780,7 @@ wasm_copy_callstack(const wasm_exec_env_t exec_env, wasm_frame_t *buffer,
     strncpy(error_buf, err_msg, error_buf_size);
     return 0;
 }
-#endif // WAMR_ENABLE_COPY_CALLSTACK
+#endif // WASM_ENABLE_COPY_CALL_STACK

 bool
 wasm_runtime_init_thread_env(void)

View File

@@ -758,12 +758,12 @@ wasm_runtime_create_exec_env(WASMModuleInstanceCommon *module_inst,
 WASM_RUNTIME_API_EXTERN void
 wasm_runtime_destroy_exec_env(WASMExecEnv *exec_env);

-#if WAMR_ENABLE_COPY_CALLSTACK != 0
+#if WASM_ENABLE_COPY_CALL_STACK != 0
 WASM_RUNTIME_API_EXTERN uint32_t
-wasm_copy_callstack(const wasm_exec_env_t exec_env, wasm_frame_t *buffer,
+wasm_copy_callstack(const wasm_exec_env_t exec_env, WASMCApiFrame *buffer,
                     const uint32 length, const uint32 skip_n, char *error_buf,
                     uint32 error_buf_size);
-#endif // WAMR_ENABLE_COPY_CALLSTACK
+#endif // WASM_ENABLE_COPY_CALL_STACK

 /* See wasm_export.h for description */
 WASM_RUNTIME_API_EXTERN WASMModuleInstanceCommon *


@@ -48,7 +48,7 @@ typedef struct AOTSymbolList {
 } AOTSymbolList;
 /* AOT object data */
-typedef struct AOTObjectData {
+struct AOTObjectData {
     AOTCompContext *comp_ctx;
     LLVMMemoryBufferRef mem_buf;
@@ -82,7 +82,7 @@ typedef struct AOTObjectData {
     const char *stack_sizes_section_name;
     uint32 stack_sizes_offset;
     uint32 *stack_sizes;
-} AOTObjectData;
+};
 #if 0
 static void dump_buf(uint8 *buf, uint32 size, char *title)
@@ -302,8 +302,8 @@ get_init_expr_size(const AOTCompContext *comp_ctx, const AOTCompData *comp_data,
             /* array_elem_type + type_index + len + elems */
             size += sizeof(uint32) * 3
-                    + wasm_value_type_size_internal(array_type->elem_type,
-                                                    comp_ctx->pointer_size)
+                    + (uint64)wasm_value_type_size_internal(
+                          array_type->elem_type, comp_ctx->pointer_size)
                           * value_count;
             break;
         }


@@ -347,7 +347,8 @@ call_aot_invoke_c_api_native(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
     /* Get &c_api_func_imports[func_idx], note size of CApiFuncImport
        is pointer_size * 3 */
-    offset = I32_CONST((comp_ctx->pointer_size * 3) * import_func_idx);
+    offset = I32_CONST((unsigned long long)comp_ctx->pointer_size * 3
+                       * import_func_idx);
     CHECK_LLVM_CONST(offset);
     c_api_func_import =
         LLVMBuildInBoundsGEP2(comp_ctx->builder, INT8_TYPE, c_api_func_imports,


@@ -3999,7 +3999,7 @@ aot_get_func_from_table(const AOTCompContext *comp_ctx, LLVMValueRef base,
     if (!(func =
               LLVMBuildBitCast(comp_ctx->builder, func, func_type, "func"))) {
-        aot_set_last_error("cast function fialed.");
+        aot_set_last_error("cast function failed.");
         goto fail;
     }
@@ -4068,7 +4068,7 @@ aot_load_const_from_table(AOTCompContext *comp_ctx, LLVMValueRef base,
     if (!(const_addr = LLVMBuildBitCast(comp_ctx->builder, const_addr,
                                         const_ptr_type, "const_addr"))) {
-        aot_set_last_error("cast const fialed.");
+        aot_set_last_error("cast const failed.");
         return NULL;
     }


@@ -139,8 +139,6 @@ typedef struct wasm_frame_t {
     uint32_t *lp;
 } WASMCApiFrame;
-typedef WASMCApiFrame wasm_frame_t;
 /* WASM section */
 typedef struct wasm_section_t {
     struct wasm_section_t *next;
@@ -904,7 +902,7 @@ wasm_runtime_destroy_exec_env(wasm_exec_env_t exec_env);
  * @return number of copied frames
  */
 WASM_RUNTIME_API_EXTERN uint32_t
-wasm_copy_callstack(const wasm_exec_env_t exec_env, wasm_frame_t *buffer,
+wasm_copy_callstack(const wasm_exec_env_t exec_env, WASMCApiFrame *buffer,
                     const uint32_t length, const uint32_t skip_n,
                     char *error_buf, uint32_t error_buf_size);


@@ -4088,7 +4088,7 @@ wasm_interp_call_func_bytecode(WASMModuleInstance *module,
             case WASM_OP_STRING_ENCODE_LOSSY_UTF8_ARRAY:
             case WASM_OP_STRING_ENCODE_WTF8_ARRAY:
             {
-                uint32 start, array_len, count;
+                uint32 start, array_len;
                 int32 bytes_written;
                 EncodingFlag flag = WTF8;
                 WASMArrayType *array_type;


@@ -2042,9 +2042,9 @@ load_type_section(const uint8 *buf, const uint8 *buf_end, WASMModule *module,
                               "recursive type count too large");
                 return false;
             }
-            module->type_count += rec_count - 1;
             new_total_size =
-                sizeof(WASMFuncType *) * (uint64)module->type_count;
+                sizeof(WASMFuncType *)
+                * (uint64)(module->type_count + rec_count - 1);
             if (new_total_size > UINT32_MAX) {
                 set_error_buf(error_buf, error_buf_size,
                               "allocate memory failed");
@@ -2052,6 +2052,7 @@ load_type_section(const uint8 *buf, const uint8 *buf_end, WASMModule *module,
             }
             MEM_REALLOC(module->types, (uint32)total_size,
                         (uint32)new_total_size);
+            module->type_count += rec_count - 1;
             total_size = new_total_size;
         }
@@ -3351,7 +3352,8 @@ load_import_section(const uint8 *buf, const uint8 *buf_end, WASMModule *module,
             /* valtype */
             CHECK_BUF(p, p_end, 1);
             global_type = read_uint8(p);
-            if (wasm_is_reftype_htref_nullable(global_type)) {
+            if (wasm_is_reftype_htref_nullable(global_type)
+                || wasm_is_reftype_htref_non_nullable(global_type)) {
                 int32 heap_type;
                 read_leb_int32(p, p_end, heap_type);
                 (void)heap_type;
@@ -15023,8 +15025,6 @@ re_scan:
             case WASM_OP_STRING_NEW_LOSSY_UTF8:
             case WASM_OP_STRING_NEW_WTF8:
             {
-                uint32 memidx;
 #if WASM_ENABLE_JIT != 0 || WASM_ENABLE_WAMR_COMPILER != 0
                 func->has_memory_operations = true;
 #endif
@@ -15036,7 +15036,6 @@ re_scan:
                 POP_I32();
                 POP_I32();
                 PUSH_REF(REF_TYPE_STRINGREF);
-                (void)memidx;
                 break;
             }
             case WASM_OP_STRING_CONST:
@@ -15064,8 +15063,6 @@ re_scan:
             case WASM_OP_STRING_ENCODE_LOSSY_UTF8:
             case WASM_OP_STRING_ENCODE_WTF8:
             {
-                uint32 memidx;
 #if WASM_ENABLE_JIT != 0 || WASM_ENABLE_WAMR_COMPILER != 0
                 func->has_memory_operations = true;
 #endif
@@ -15077,7 +15074,6 @@ re_scan:
                 POP_I32();
                 POP_STRINGREF();
                 PUSH_I32();
-                (void)memidx;
                 break;
             }
             case WASM_OP_STRING_CONCAT:
@@ -15118,8 +15114,6 @@ re_scan:
             case WASM_OP_STRINGVIEW_WTF8_ENCODE_LOSSY_UTF8:
             case WASM_OP_STRINGVIEW_WTF8_ENCODE_WTF8:
             {
-                uint32 memidx;
 #if WASM_ENABLE_JIT != 0 || WASM_ENABLE_WAMR_COMPILER != 0
                 func->has_memory_operations = true;
 #endif
@@ -15134,7 +15128,6 @@ re_scan:
                 POP_REF(REF_TYPE_STRINGVIEWWTF8);
                 PUSH_I32();
                 PUSH_I32();
-                (void)memidx;
                 break;
             }
             case WASM_OP_STRINGVIEW_WTF8_SLICE:
@@ -15166,8 +15159,6 @@ re_scan:
             }
             case WASM_OP_STRINGVIEW_WTF16_ENCODE:
             {
-                uint32 memidx;
 #if WASM_ENABLE_JIT != 0 || WASM_ENABLE_WAMR_COMPILER != 0
                 func->has_memory_operations = true;
 #endif
@@ -15181,7 +15172,6 @@ re_scan:
                 POP_I32();
                 POP_REF(REF_TYPE_STRINGVIEWWTF16);
                 PUSH_I32();
-                (void)memidx;
                 break;
             }
             case WASM_OP_STRINGVIEW_WTF16_SLICE:


@@ -2668,7 +2668,7 @@ wasm_instantiate(WASMModule *module, WASMModuleInstance *parent,
                 }
                 STORE_PTR((void **)global_data, func_obj);
                 global_data += sizeof(void *);
-                /* Also update the inital_value since other globals may
+                /* Also update the initial_value since other globals may
                  * refer to this */
                 global->initial_value.gc_obj = (wasm_obj_t)func_obj;
                 break;
@@ -4161,7 +4161,7 @@ wasm_get_module_inst_mem_consumption(const WASMModuleInstance *module_inst,
         sizeof(WASMMemoryInstance *) * module_inst->memory_count;
     for (i = 0; i < module_inst->memory_count; i++) {
         WASMMemoryInstance *memory = module_inst->memories[i];
-        size = memory->num_bytes_per_page * memory->cur_page_count;
+        size = (uint64)memory->num_bytes_per_page * memory->cur_page_count;
         mem_conspn->memories_size += size;
         mem_conspn->app_heap_size += memory->heap_data_end - memory->heap_data;
         /* size of app heap structure */
@@ -4195,9 +4195,9 @@ wasm_get_module_inst_mem_consumption(const WASMModuleInstance *module_inst,
 #endif /* end of (WASM_ENABLE_MEMORY_PROFILING != 0) \
           || (WASM_ENABLE_MEMORY_TRACING != 0) */
-#if WAMR_ENABLE_COPY_CALLSTACK != 0
+#if WASM_ENABLE_COPY_CALL_STACK != 0
 uint32
-wasm_interp_copy_callstack(WASMExecEnv *exec_env, wasm_frame_t *buffer,
+wasm_interp_copy_callstack(WASMExecEnv *exec_env, WASMCApiFrame *buffer,
                            uint32 length, uint32 skip_n, char *error_buf,
                            uint32_t error_buf_size)
 {
@@ -4242,7 +4242,7 @@ wasm_interp_copy_callstack(WASMExecEnv *exec_env, wasm_frame_t *buffer,
     }
     return count >= skip_n ? count - skip_n : 0;
 }
-#endif // WAMR_ENABLE_COPY_CALLSTACK
+#endif // WASM_ENABLE_COPY_CALL_STACK
 #if WASM_ENABLE_DUMP_CALL_STACK != 0
 bool


@@ -731,12 +731,12 @@ wasm_get_table_inst(const WASMModuleInstance *module_inst, uint32 tbl_idx)
 #if WASM_ENABLE_DUMP_CALL_STACK != 0
-#if WAMR_ENABLE_COPY_CALLSTACK != 0
+#if WASM_ENABLE_COPY_CALL_STACK != 0
 uint32
-wasm_interp_copy_callstack(WASMExecEnv *exec_env, wasm_frame_t *buffer,
+wasm_interp_copy_callstack(WASMExecEnv *exec_env, WASMCApiFrame *buffer,
                            uint32 length, uint32 skip_n, char *error_buf,
                            uint32_t error_buf_size);
-#endif // WAMR_ENABLE_COPY_CALLSTACK
+#endif // WASM_ENABLE_COPY_CALL_STACK
 bool
 wasm_interp_create_call_stack(struct WASMExecEnv *exec_env);


@@ -301,7 +301,8 @@ wasm_cluster_create(WASMExecEnv *exec_env)
         aux_stack_start -= cluster->stack_size;
         for (i = 0; i < cluster_max_thread_num; i++) {
-            cluster->stack_tops[i] = aux_stack_start - cluster->stack_size * i;
+            cluster->stack_tops[i] =
+                aux_stack_start - (uint64)cluster->stack_size * i;
         }
     }
 #endif


@@ -21,6 +21,7 @@
 #else
 #define WASI_NN_IMPORT(name) \
     __attribute__((import_module("wasi_nn"), import_name(name)))
+#warning You are using "wasi_nn", which is a legacy WAMR-specific ABI. It's deperecated and will likely be removed in future versions of WAMR. Please use "wasi_ephemeral_nn" instead. (For a WASM module, use the wasi_ephemeral_nn.h header instead. For the runtime configurations, enable WASM_ENABLE_WASI_EPHEMERAL_NN/WAMR_BUILD_WASI_EPHEMERAL_NN.)
 #endif
 /**
@@ -108,14 +109,13 @@ WASI_NN_NAME(compute)
 WASI_NN_ERROR_TYPE
 WASI_NN_NAME(get_output)
 (WASI_NN_NAME(graph_execution_context) ctx, uint32_t index,
- WASI_NN_NAME(tensor_data) output_tensor, uint32_t output_tensor_max_size,
+ uint8_t *output_tensor, uint32_t output_tensor_max_size,
  uint32_t *output_tensor_size) WASI_NN_IMPORT("get_output");
 #else
 WASI_NN_ERROR_TYPE
 WASI_NN_NAME(get_output)
-(graph_execution_context ctx, uint32_t index,
- WASI_NN_NAME(tensor_data) output_tensor, uint32_t *output_tensor_size)
-    WASI_NN_IMPORT("get_output");
+(graph_execution_context ctx, uint32_t index, uint8_t *output_tensor,
+ uint32_t *output_tensor_size) WASI_NN_IMPORT("get_output");
 #endif
 #endif


@@ -99,7 +99,14 @@ typedef enum {
 // 4-byte f32 elements would have a data array of length 16). Naturally, this
 // representation requires some knowledge of how to lay out data in
 // memory--e.g., using row-major ordering--and could perhaps be improved.
+#if !defined(__wasm__) || WASM_ENABLE_WASI_EPHEMERAL_NN != 0
+typedef struct {
+    uint8_t *buf;
+    uint32_t size;
+} WASI_NN_NAME(tensor_data);
+#else
 typedef uint8_t *WASI_NN_NAME(tensor_data);
+#endif
 // A tensor.
 typedef struct {


@@ -99,7 +99,8 @@ graph_builder_array_app_native(wasm_module_inst_t instance,
 static wasi_nn_error
 tensor_data_app_native(wasm_module_inst_t instance, uint32_t total_elements,
-                       tensor_wasm *input_tensor_wasm, tensor_data *data)
+                       tensor_wasm *input_tensor_wasm, void **data,
+                       uint32_t *size)
 {
 #if WASM_ENABLE_WASI_EPHEMERAL_NN != 0
 #define data_size input_tensor_wasm->data_size
@@ -113,8 +114,9 @@ tensor_data_app_native(wasm_module_inst_t instance, uint32_t total_elements,
         NN_ERR_PRINTF("input_tensor_wasm->data_offset is invalid");
         return invalid_argument;
     }
-    *data = (tensor_data)wasm_runtime_addr_app_to_native(
+    *data = wasm_runtime_addr_app_to_native(
         instance, (uint64)input_tensor_wasm->data_offset);
+    *size = data_size;
     return success;
 #undef data_size
 }
@@ -188,16 +190,19 @@ tensor_app_native(wasm_module_inst_t instance, tensor_wasm *input_tensor_wasm,
     NN_DBG_PRINTF("Tensor type: %d", input_tensor_wasm->type);
     NN_DBG_PRINTF("Total number of elements: %d", total_elements);
-    tensor_data data = NULL;
+    void *data = NULL;
+    uint32_t datasize;
     if (success
-        != (res = tensor_data_app_native(instance, total_elements,
-                                         input_tensor_wasm, &data))) {
+        != (res =
+                tensor_data_app_native(instance, total_elements,
+                                       input_tensor_wasm, &data, &datasize))) {
         wasm_runtime_free(dimensions);
         return res;
     }
     input_tensor->type = input_tensor_wasm->type;
     input_tensor->dimensions = dimensions;
-    input_tensor->data = data;
+    input_tensor->data.buf = data;
+    input_tensor->data.size = datasize;
     return success;
 }


@@ -20,6 +20,10 @@
 #include "wasi_nn_types.h"
 #include "wasm_export.h"
+#if WASM_ENABLE_WASI_EPHEMERAL_NN == 0
+#warning You are using "wasi_nn", which is a legacy WAMR-specific ABI. It's deperecated and will likely be removed in future versions of WAMR. Please use "wasi_ephemeral_nn" instead. (For a WASM module, use the wasi_ephemeral_nn.h header instead. For the runtime configurations, enable WASM_ENABLE_WASI_EPHEMERAL_NN/WAMR_BUILD_WASI_EPHEMERAL_NN.)
+#endif
 #define HASHMAP_INITIAL_SIZE 20
 #if defined(__APPLE__)
 #define LIB_EXTENTION ".dylib"
@@ -51,53 +55,21 @@ struct backends_api_functions {
         NN_ERR_PRINTF("Error %s() -> %d", #func, wasi_error); \
     } while (0)
-/* HashMap utils */
-static HashMap *hashmap;
-
-static uint32
-hash_func(const void *key)
-{
-    // fnv1a_hash
-    const uint32 FNV_PRIME = 16777619;
-    const uint32 FNV_OFFSET_BASIS = 2166136261U;
-    uint32 hash = FNV_OFFSET_BASIS;
-    const unsigned char *bytes = (const unsigned char *)key;
-    for (size_t i = 0; i < sizeof(uintptr_t); ++i) {
-        hash ^= bytes[i];
-        hash *= FNV_PRIME;
-    }
-    return hash;
-}
-
-static bool
-key_equal_func(void *key1, void *key2)
-{
-    return key1 == key2;
-}
-
-static void
-key_destroy_func(void *key1)
-{
-    /* key type is wasm_module_inst_t*. do nothing */
-}
+static void *wasi_nn_key;
 static void
 wasi_nn_ctx_destroy(WASINNContext *wasi_nn_ctx)
 {
-    NN_DBG_PRINTF("[WASI NN] DEINIT...");
     if (wasi_nn_ctx == NULL) {
-        NN_ERR_PRINTF(
-            "Error when deallocating memory. WASI-NN context is NULL");
         return;
     }
+    NN_DBG_PRINTF("[WASI NN] DEINIT...");
     NN_DBG_PRINTF("Freeing wasi-nn");
     NN_DBG_PRINTF("-> is_model_loaded: %d", wasi_nn_ctx->is_model_loaded);
     NN_DBG_PRINTF("-> current_encoding: %d", wasi_nn_ctx->backend);
+    bh_assert(!wasi_nn_ctx->busy);
     /* deinit() the backend */
     if (wasi_nn_ctx->is_backend_ctx_initialized) {
         wasi_nn_error res;
@@ -105,13 +77,14 @@ wasi_nn_ctx_destroy(WASINNContext *wasi_nn_ctx)
                           wasi_nn_ctx->backend_ctx);
     }
+    os_mutex_destroy(&wasi_nn_ctx->lock);
     wasm_runtime_free(wasi_nn_ctx);
 }
 static void
-value_destroy_func(void *value)
+dtor(wasm_module_inst_t inst, void *ctx)
 {
-    wasi_nn_ctx_destroy((WASINNContext *)value);
+    wasi_nn_ctx_destroy(ctx);
 }
 bool
@@ -124,12 +97,9 @@ wasi_nn_initialize()
         return false;
     }
-    // hashmap { instance: wasi_nn_ctx }
-    hashmap = bh_hash_map_create(HASHMAP_INITIAL_SIZE, true, hash_func,
-                                 key_equal_func, key_destroy_func,
-                                 value_destroy_func);
-    if (hashmap == NULL) {
-        NN_ERR_PRINTF("Error while initializing hashmap");
+    wasi_nn_key = wasm_runtime_create_context_key(dtor);
+    if (wasi_nn_key == NULL) {
+        NN_ERR_PRINTF("Failed to create context key");
         os_mutex_destroy(&wasi_nn_lock);
         return false;
     }
@@ -150,6 +120,11 @@ wasi_nn_initialize_context()
     }
     memset(wasi_nn_ctx, 0, sizeof(WASINNContext));
+    if (os_mutex_init(&wasi_nn_ctx->lock)) {
+        NN_ERR_PRINTF("Error when initializing a lock for WASI-NN context");
+        wasm_runtime_free(wasi_nn_ctx);
+        return NULL;
+    }
     return wasi_nn_ctx;
 }
@@ -158,29 +133,59 @@ static WASINNContext *
 wasm_runtime_get_wasi_nn_ctx(wasm_module_inst_t instance)
 {
     WASINNContext *wasi_nn_ctx =
-        (WASINNContext *)bh_hash_map_find(hashmap, (void *)instance);
+        wasm_runtime_get_context(instance, wasi_nn_key);
     if (wasi_nn_ctx == NULL) {
-        wasi_nn_ctx = wasi_nn_initialize_context();
-        if (wasi_nn_ctx == NULL)
-            return NULL;
-        bool ok =
-            bh_hash_map_insert(hashmap, (void *)instance, (void *)wasi_nn_ctx);
-        if (!ok) {
-            NN_ERR_PRINTF("Error while storing context");
-            wasi_nn_ctx_destroy(wasi_nn_ctx);
+        WASINNContext *newctx = wasi_nn_initialize_context();
+        if (newctx == NULL)
             return NULL;
+        os_mutex_lock(&wasi_nn_lock);
+        wasi_nn_ctx = wasm_runtime_get_context(instance, wasi_nn_key);
+        if (wasi_nn_ctx == NULL) {
+            wasm_runtime_set_context_spread(instance, wasi_nn_key, newctx);
+            wasi_nn_ctx = newctx;
+            newctx = NULL;
+        }
+        os_mutex_unlock(&wasi_nn_lock);
+        if (newctx != NULL) {
+            wasi_nn_ctx_destroy(newctx);
         }
     }
     return wasi_nn_ctx;
 }
+static WASINNContext *
+lock_ctx(wasm_module_inst_t instance)
+{
+    WASINNContext *wasi_nn_ctx = wasm_runtime_get_wasi_nn_ctx(instance);
+    if (wasi_nn_ctx == NULL) {
+        return NULL;
+    }
+    os_mutex_lock(&wasi_nn_ctx->lock);
+    if (wasi_nn_ctx->busy) {
+        os_mutex_unlock(&wasi_nn_ctx->lock);
+        return NULL;
+    }
+    wasi_nn_ctx->busy = true;
+    os_mutex_unlock(&wasi_nn_ctx->lock);
+    return wasi_nn_ctx;
+}
+static void
+unlock_ctx(WASINNContext *wasi_nn_ctx)
+{
+    if (wasi_nn_ctx == NULL) {
+        return;
+    }
+    os_mutex_lock(&wasi_nn_ctx->lock);
+    bh_assert(wasi_nn_ctx->busy);
+    wasi_nn_ctx->busy = false;
+    os_mutex_unlock(&wasi_nn_ctx->lock);
+}
 void
 wasi_nn_destroy()
 {
-    // destroy hashmap will destroy keys and values
-    bh_hash_map_destroy(hashmap);
+    wasm_runtime_destroy_context_key(wasi_nn_key);
     // close backends' libraries and registered functions
     for (unsigned i = 0; i < sizeof(lookup) / sizeof(lookup[0]); i++) {
@@ -401,7 +406,7 @@ detect_and_load_backend(graph_encoding backend_hint,
 static wasi_nn_error
 ensure_backend(wasm_module_inst_t instance, graph_encoding encoding,
-               WASINNContext **wasi_nn_ctx_ptr)
+               WASINNContext *wasi_nn_ctx)
 {
     wasi_nn_error res;
@@ -412,7 +417,6 @@ ensure_backend(wasm_module_inst_t instance, graph_encoding encoding,
         goto fail;
     }
-    WASINNContext *wasi_nn_ctx = wasm_runtime_get_wasi_nn_ctx(instance);
     if (wasi_nn_ctx->is_backend_ctx_initialized) {
         if (wasi_nn_ctx->backend != loaded_backend) {
             res = unsupported_operation;
@@ -430,7 +434,6 @@ ensure_backend(wasm_module_inst_t instance, graph_encoding encoding,
         wasi_nn_ctx->is_backend_ctx_initialized = true;
     }
-    *wasi_nn_ctx_ptr = wasi_nn_ctx;
     return success;
 fail:
     return res;
@@ -458,17 +461,23 @@ wasi_nn_load(wasm_exec_env_t exec_env, graph_builder_array_wasm *builder,
     if (!instance)
         return runtime_error;
+    WASINNContext *wasi_nn_ctx = lock_ctx(instance);
+    if (wasi_nn_ctx == NULL) {
+        res = busy;
+        goto fail;
+    }
     graph_builder_array builder_native = { 0 };
 #if WASM_ENABLE_WASI_EPHEMERAL_NN != 0
     if (success
         != (res = graph_builder_array_app_native(
                 instance, builder, builder_wasm_size, &builder_native)))
-        return res;
+        goto fail;
 #else /* WASM_ENABLE_WASI_EPHEMERAL_NN == 0 */
     if (success
         != (res = graph_builder_array_app_native(instance, builder,
                                                  &builder_native)))
-        return res;
+        goto fail;
 #endif /* WASM_ENABLE_WASI_EPHEMERAL_NN != 0 */
     if (!wasm_runtime_validate_native_addr(instance, g,
@@ -478,8 +487,7 @@ wasi_nn_load(wasm_exec_env_t exec_env, graph_builder_array_wasm *builder,
         goto fail;
     }
-    WASINNContext *wasi_nn_ctx;
-    res = ensure_backend(instance, encoding, &wasi_nn_ctx);
+    res = ensure_backend(instance, encoding, wasi_nn_ctx);
     if (res != success)
         goto fail;
@@ -494,6 +502,7 @@ fail:
     // XXX: Free intermediate structure pointers
     if (builder_native.buf)
         wasm_runtime_free(builder_native.buf);
+    unlock_ctx(wasi_nn_ctx);
     return res;
 }
@@ -527,18 +536,26 @@ wasi_nn_load_by_name(wasm_exec_env_t exec_env, char *name, uint32_t name_len,
     NN_DBG_PRINTF("[WASI NN] LOAD_BY_NAME %s...", name);
-    WASINNContext *wasi_nn_ctx;
-    res = ensure_backend(instance, autodetect, &wasi_nn_ctx);
+    WASINNContext *wasi_nn_ctx = lock_ctx(instance);
+    if (wasi_nn_ctx == NULL) {
+        res = busy;
+        goto fail;
+    }
+    res = ensure_backend(instance, autodetect, wasi_nn_ctx);
     if (res != success)
-        return res;
+        goto fail;
     call_wasi_nn_func(wasi_nn_ctx->backend, load_by_name, res,
                       wasi_nn_ctx->backend_ctx, name, name_len, g);
     if (res != success)
-        return res;
+        goto fail;
     wasi_nn_ctx->is_model_loaded = true;
-    return success;
+    res = success;
+fail:
+    unlock_ctx(wasi_nn_ctx);
+    return res;
 }
wasi_nn_error wasi_nn_error
@@ -576,19 +593,28 @@ wasi_nn_load_by_name_with_config(wasm_exec_env_t exec_env, char *name,
     NN_DBG_PRINTF("[WASI NN] LOAD_BY_NAME_WITH_CONFIG %s %s...", name, config);
-    WASINNContext *wasi_nn_ctx;
-    res = ensure_backend(instance, autodetect, &wasi_nn_ctx);
+    WASINNContext *wasi_nn_ctx = lock_ctx(instance);
+    if (wasi_nn_ctx == NULL) {
+        res = busy;
+        goto fail;
+    }
+    res = ensure_backend(instance, autodetect, wasi_nn_ctx);
     if (res != success)
-        return res;
-    ;
+        goto fail;
     call_wasi_nn_func(wasi_nn_ctx->backend, load_by_name_with_config, res,
                       wasi_nn_ctx->backend_ctx, name, name_len, config,
                       config_len, g);
     if (res != success)
-        return res;
+        goto fail;
     wasi_nn_ctx->is_model_loaded = true;
-    return success;
+    res = success;
+fail:
+    unlock_ctx(wasi_nn_ctx);
+    return res;
 }
wasi_nn_error wasi_nn_error
@@ -602,20 +628,27 @@ wasi_nn_init_execution_context(wasm_exec_env_t exec_env, graph g,
         return runtime_error;
     }
-    WASINNContext *wasi_nn_ctx = wasm_runtime_get_wasi_nn_ctx(instance);
     wasi_nn_error res;
+    WASINNContext *wasi_nn_ctx = lock_ctx(instance);
+    if (wasi_nn_ctx == NULL) {
+        res = busy;
+        goto fail;
+    }
     if (success != (res = is_model_initialized(wasi_nn_ctx)))
-        return res;
+        goto fail;
     if (!wasm_runtime_validate_native_addr(
             instance, ctx, (uint64)sizeof(graph_execution_context))) {
         NN_ERR_PRINTF("ctx is invalid");
-        return invalid_argument;
+        res = invalid_argument;
+        goto fail;
     }
     call_wasi_nn_func(wasi_nn_ctx->backend, init_execution_context, res,
                       wasi_nn_ctx->backend_ctx, g, ctx);
+fail:
+    unlock_ctx(wasi_nn_ctx);
     return res;
 }
@@ -630,17 +663,21 @@ wasi_nn_set_input(wasm_exec_env_t exec_env, graph_execution_context ctx,
         return runtime_error;
     }
-    WASINNContext *wasi_nn_ctx = wasm_runtime_get_wasi_nn_ctx(instance);
     wasi_nn_error res;
+    WASINNContext *wasi_nn_ctx = lock_ctx(instance);
+    if (wasi_nn_ctx == NULL) {
+        res = busy;
+        goto fail;
+    }
     if (success != (res = is_model_initialized(wasi_nn_ctx)))
-        return res;
+        goto fail;
     tensor input_tensor_native = { 0 };
     if (success
         != (res = tensor_app_native(instance, input_tensor,
                                     &input_tensor_native)))
-        return res;
+        goto fail;
     call_wasi_nn_func(wasi_nn_ctx->backend, set_input, res,
                       wasi_nn_ctx->backend_ctx, ctx, index,
@@ -648,7 +685,8 @@ wasi_nn_set_input(wasm_exec_env_t exec_env, graph_execution_context ctx,
     // XXX: Free intermediate structure pointers
     if (input_tensor_native.dimensions)
         wasm_runtime_free(input_tensor_native.dimensions);
+fail:
+    unlock_ctx(wasi_nn_ctx);
     return res;
 }
@@ -662,26 +700,32 @@ wasi_nn_compute(wasm_exec_env_t exec_env, graph_execution_context ctx)
         return runtime_error;
     }
-    WASINNContext *wasi_nn_ctx = wasm_runtime_get_wasi_nn_ctx(instance);
     wasi_nn_error res;
+    WASINNContext *wasi_nn_ctx = lock_ctx(instance);
+    if (wasi_nn_ctx == NULL) {
+        res = busy;
+        goto fail;
+    }
     if (success != (res = is_model_initialized(wasi_nn_ctx)))
-        return res;
+        goto fail;
     call_wasi_nn_func(wasi_nn_ctx->backend, compute, res,
                       wasi_nn_ctx->backend_ctx, ctx);
+fail:
+    unlock_ctx(wasi_nn_ctx);
     return res;
 }
 #if WASM_ENABLE_WASI_EPHEMERAL_NN != 0
 wasi_nn_error
 wasi_nn_get_output(wasm_exec_env_t exec_env, graph_execution_context ctx,
-                   uint32_t index, tensor_data output_tensor,
+                   uint32_t index, void *output_tensor,
                    uint32_t output_tensor_len, uint32_t *output_tensor_size)
 #else /* WASM_ENABLE_WASI_EPHEMERAL_NN == 0 */
 wasi_nn_error
 wasi_nn_get_output(wasm_exec_env_t exec_env, graph_execution_context ctx,
-                   uint32_t index, tensor_data output_tensor,
+                   uint32_t index, void *output_tensor,
                    uint32_t *output_tensor_size)
 #endif /* WASM_ENABLE_WASI_EPHEMERAL_NN != 0 */
 {
@ -692,28 +736,36 @@ wasi_nn_get_output(wasm_exec_env_t exec_env, graph_execution_context ctx,
return runtime_error; return runtime_error;
} }
WASINNContext *wasi_nn_ctx = wasm_runtime_get_wasi_nn_ctx(instance);
wasi_nn_error res; wasi_nn_error res;
WASINNContext *wasi_nn_ctx = lock_ctx(instance);
if (wasi_nn_ctx == NULL) {
res = busy;
goto fail;
}
if (success != (res = is_model_initialized(wasi_nn_ctx))) if (success != (res = is_model_initialized(wasi_nn_ctx)))
return res; goto fail;
if (!wasm_runtime_validate_native_addr(instance, output_tensor_size, if (!wasm_runtime_validate_native_addr(instance, output_tensor_size,
(uint64)sizeof(uint32_t))) { (uint64)sizeof(uint32_t))) {
NN_ERR_PRINTF("output_tensor_size is invalid"); NN_ERR_PRINTF("output_tensor_size is invalid");
return invalid_argument; res = invalid_argument;
goto fail;
} }
tensor_data tensor = {
.buf = output_tensor,
#if WASM_ENABLE_WASI_EPHEMERAL_NN != 0 #if WASM_ENABLE_WASI_EPHEMERAL_NN != 0
.size = output_tensor_len,
#else
.size = *output_tensor_size,
#endif
};
call_wasi_nn_func(wasi_nn_ctx->backend, get_output, res, call_wasi_nn_func(wasi_nn_ctx->backend, get_output, res,
wasi_nn_ctx->backend_ctx, ctx, index, output_tensor, wasi_nn_ctx->backend_ctx, ctx, index, &tensor,
&output_tensor_len);
*output_tensor_size = output_tensor_len;
#else /* WASM_ENABLE_WASI_EPHEMERAL_NN == 0 */
call_wasi_nn_func(wasi_nn_ctx->backend, get_output, res,
wasi_nn_ctx->backend_ctx, ctx, index, output_tensor,
output_tensor_size); output_tensor_size);
#endif /* WASM_ENABLE_WASI_EPHEMERAL_NN != 0 */ fail:
unlock_ctx(wasi_nn_ctx);
return res; return res;
} }
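The `lock_ctx()`/`unlock_ctx()` pair used above claims the per-instance wasi-nn context and reports `busy` when another thread already holds it. A minimal single-threaded sketch of that busy-flag pattern (the real code guards the flag with WAMR's `korp_mutex`; locking is omitted here, and the struct name is an assumption):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* per-instance context with a busy flag, as in the patched private header */
typedef struct {
    bool busy;
} NNContext;

/* lock_ctx: claim the context, or return NULL so the caller reports "busy" */
NNContext *
lock_ctx(NNContext *ctx)
{
    if (ctx->busy)
        return NULL;
    ctx->busy = true;
    return ctx;
}

/* unlock_ctx: release; tolerates NULL so a fail: path can call it blindly */
void
unlock_ctx(NNContext *ctx)
{
    if (ctx == NULL)
        return;
    ctx->busy = false;
}
```

This is why every error path in `wasi_nn_get_output` now funnels through `fail:` instead of returning directly: the context must be released exactly once on every exit.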


@@ -3,15 +3,26 @@
  * SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
  */

-#ifndef WASI_NN_OPENVINO_HPP
-#define WASI_NN_OPENVINO_HPP
+#ifndef WASI_NN_BACKEND_H
+#define WASI_NN_BACKEND_H

 #include "wasi_nn_types.h"

+#ifdef __cplusplus
+extern "C" {
+#endif
+
 __attribute__((visibility("default"))) wasi_nn_error
 load(void *ctx, graph_builder_array *builder, graph_encoding encoding,
      execution_target target, graph *g);

+__attribute__((visibility("default"))) wasi_nn_error
+load_by_name(void *tflite_ctx, const char *name, uint32_t namelen, graph *g);
+
+__attribute__((visibility("default"))) wasi_nn_error
+load_by_name_with_config(void *ctx, const char *name, uint32_t namelen,
+                         const char *config, uint32_t config_len, graph *g);
+
 __attribute__((visibility("default"))) wasi_nn_error
 init_execution_context(void *ctx, graph g, graph_execution_context *exec_ctx);
@@ -24,7 +35,7 @@ compute(void *ctx, graph_execution_context exec_ctx);
 __attribute__((visibility("default"))) wasi_nn_error
 get_output(void *ctx, graph_execution_context exec_ctx, uint32_t index,
-           tensor_data output_tensor, uint32_t *output_tensor_size);
+           tensor_data *output_tensor, uint32_t *output_tensor_size);

 __attribute__((visibility("default"))) wasi_nn_error
 init_backend(void **ctx);
@@ -32,4 +43,8 @@ init_backend(void **ctx);
 __attribute__((visibility("default"))) wasi_nn_error
 deinit_backend(void *ctx);

-#endif /* WASI_NN_OPENVINO_HPP */
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* WASI_NN_BACKEND_H */


@@ -2,7 +2,10 @@
  * Copyright (C) 2019 Intel Corporation. All rights reserved.
  * SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
  */

-#include "wasi_nn_types.h"
+#include <stdlib.h>
+
+#include "wasi_nn_backend.h"
 #include "utils/logger.h"
 #include "llama.h"
 #include "ggml.h"
@@ -286,7 +289,7 @@ deinit_backend(void *ctx)
     llama_backend_free();
-    os_free(backend_ctx);
+    free(backend_ctx);

     return success;
 }
@@ -302,6 +305,11 @@ __load_by_name_with_configuration(void *ctx, const char *filename, graph *g)
 {
     struct LlamaContext *backend_ctx = (struct LlamaContext *)ctx;

+    if (backend_ctx->model != NULL) {
+        // we only implement a single graph
+        return unsupported_operation;
+    }
+
     // make sure backend_ctx->config is initialized
     struct llama_model_params model_params =
@@ -320,6 +328,7 @@ __load_by_name_with_configuration(void *ctx, const char *filename, graph *g)
 #endif

     backend_ctx->model = model;
+    *g = 0;

     return success;
 }
@@ -360,6 +369,16 @@ init_execution_context(void *ctx, graph g, graph_execution_context *exec_ctx)
 {
     struct LlamaContext *backend_ctx = (struct LlamaContext *)ctx;

+    if (g != 0 || backend_ctx->model == NULL) {
+        // we only implement a single graph
+        return runtime_error;
+    }
+
+    if (backend_ctx->ctx != NULL) {
+        // we only implement a single context
+        return unsupported_operation;
+    }
+
     struct llama_context_params ctx_params =
         llama_context_params_from_wasi_nn_llama_config(&backend_ctx->config);
     struct llama_context *llama_ctx =
@@ -370,6 +389,7 @@ init_execution_context(void *ctx, graph g, graph_execution_context *exec_ctx)
     }

     backend_ctx->ctx = llama_ctx;
+    *exec_ctx = 0;

     NN_INFO_PRINTF("n_predict = %d, n_ctx = %d", backend_ctx->config.n_predict,
                    llama_n_ctx(backend_ctx->ctx));
@@ -381,18 +401,24 @@ set_input(void *ctx, graph_execution_context exec_ctx, uint32_t index,
           tensor *wasi_nn_tensor)
 {
     struct LlamaContext *backend_ctx = (struct LlamaContext *)ctx;
-    // tensor->data is the prompt string. ends with \0
-    char *prompt_text = (char *)wasi_nn_tensor->data;
+
+    if (exec_ctx != 0 || backend_ctx->ctx == NULL) {
+        // we only implement a single context
+        return runtime_error;
+    }
+
+    // tensor->data is the prompt string.
+    char *prompt_text = (char *)wasi_nn_tensor->data.buf;
+    uint32_t prompt_text_len = wasi_nn_tensor->data.size;

 #ifndef NDEBUG
     NN_DBG_PRINTF("--------------------------------------------------");
-    NN_DBG_PRINTF("prompt_text: %s", prompt_text);
+    NN_DBG_PRINTF("prompt_text: %.*s", (int)prompt_text_len, prompt_text);
     NN_DBG_PRINTF("--------------------------------------------------");
 #endif

     // tokenize the prompt
     uint32_t n_token_max = llama_n_ctx(backend_ctx->ctx);
-    uint32_t prompt_text_len = strlen(prompt_text);

     if (backend_ctx->prompt == NULL) {
         backend_ctx->prompt = calloc(n_token_max, sizeof(llama_token));
@@ -430,6 +456,11 @@ compute(void *ctx, graph_execution_context exec_ctx)
     struct LlamaContext *backend_ctx = (struct LlamaContext *)ctx;
     wasi_nn_error ret = runtime_error;

+    if (exec_ctx != 0 || backend_ctx->ctx == NULL) {
+        // we only implement a single context
+        return runtime_error;
+    }
+
     // reset the generation buffer
     if (backend_ctx->generation == NULL) {
         backend_ctx->generation =
@@ -477,7 +508,6 @@ compute(void *ctx, graph_execution_context exec_ctx)

     // main loop
     int32_t n_cur = batch.n_tokens;
-    int n_decode = 0;
     int32_t n_vocab = llama_n_vocab(backend_ctx->model);

     llama_token_data *candidates = NULL;
@@ -528,7 +558,6 @@ compute(void *ctx, graph_execution_context exec_ctx)
         // push this new token for next evaluation
         llama_batch_add(&batch, new_token_id, n_cur, seq_ids,
                         sizeof(seq_ids) / sizeof(seq_ids[0]), true);
-        n_decode++;
         n_cur++;

         if (llama_decode(backend_ctx->ctx, batch) != 0) {
@@ -549,10 +578,15 @@ fail:
 __attribute__((visibility("default"))) wasi_nn_error
 get_output(void *ctx, graph_execution_context exec_ctx, uint32_t index,
-           tensor_data output_tensor, uint32_t *output_tensor_size)
+           tensor_data *output_tensor, uint32_t *output_tensor_size)
 {
     struct LlamaContext *backend_ctx = (struct LlamaContext *)ctx;

+    if (exec_ctx != 0 || backend_ctx->ctx == NULL) {
+        // we only implement a single context
+        return runtime_error;
+    }
+
     // Compatibility with WasmEdge
     if (index > 1) {
         NN_ERR_PRINTF("Invalid output index %d", index);
@@ -568,7 +602,7 @@ get_output(void *ctx, graph_execution_context exec_ctx, uint32_t index,
             printf("%s\n", output_metadata);
         }

-        memcpy(output_tensor, output_metadata, strlen(output_metadata));
+        memcpy(output_tensor->buf, output_metadata, strlen(output_metadata));
         *output_tensor_size = strlen(output_metadata);

         return success;
@@ -588,7 +622,7 @@ get_output(void *ctx, graph_execution_context exec_ctx, uint32_t index,
             printf("%s", buf);
         }

-        memcpy(output_tensor + end_pos, buf, strlen(buf));
+        memcpy(output_tensor->buf + end_pos, buf, strlen(buf));
         end_pos += strlen(buf);
     }


@@ -3,8 +3,7 @@
  * SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
  */

-#include "wasi_nn_types.h"
-#include "wasi_nn_openvino.h"
+#include "wasi_nn_backend.h"
 #include "utils/logger.h"
 #include "bh_platform.h"
@@ -26,15 +25,25 @@
  * from 4. to 6. is the Inference Loop
  */

+/* these limits are arbitrary. */
+#define MAX_GRAPHS 4
+#define MAX_EXECUTION_CONTEXTS 4
+
 typedef struct {
     ov_core_t *core;
     /* keep input model files */
-    void *weight_data;
-    ov_tensor_t *weights_tensor;
-    ov_model_t *model;
-    ov_compiled_model_t *compiled_model;
-    ov_infer_request_t *infer_request;
-    ov_tensor_t *input_tensor;
+    struct OpenVINOGraph {
+        void *weight_data;
+        ov_tensor_t *weights_tensor;
+        ov_model_t *model;
+        ov_compiled_model_t *compiled_model;
+    } graphs[MAX_GRAPHS];
+    struct OpenVINOExecutionContext {
+        struct OpenVINOGraph *graph;
+        ov_infer_request_t *infer_request;
+    } execution_contexts[MAX_EXECUTION_CONTEXTS];
+    unsigned int n_graphs;
+    unsigned int n_execution_contexts;
 } OpenVINOContext;

 /*
@@ -134,7 +143,7 @@ print_model_input_output_info(ov_model_t *model)
         output_port = NULL;
     }

-    ov_error = ov_error;
+    (void)ov_error;
 fail:
     if (friendly_name)
         ov_free(friendly_name);
@@ -179,6 +188,29 @@ wasi_nn_tensor_type_to_openvino_element_type(tensor_type wasi_nn_type)
     return UNDEFINED;
 }

+static void
+free_graph(struct OpenVINOGraph *graph)
+{
+    if (graph->weight_data)
+        os_free(graph->weight_data);
+
+    if (graph->weights_tensor)
+        ov_tensor_free(graph->weights_tensor);
+
+    if (graph->model)
+        ov_model_free(graph->model);
+
+    if (graph->compiled_model)
+        ov_compiled_model_free(graph->compiled_model);
+}
+
+static void
+free_execution_context(struct OpenVINOExecutionContext *c)
+{
+    if (c->infer_request)
+        ov_infer_request_free(c->infer_request);
+}
+
 static wasi_nn_error
 uint32_array_to_int64_array(uint32_t array_size, uint32_t *src, int64_t **dst)
 {
@@ -198,6 +230,8 @@ load(void *ctx, graph_builder_array *builder, graph_encoding encoding,
      execution_target target, graph *g)
 {
     OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx;
+    struct OpenVINOGraph *graph;
+    unsigned int graph_idx;
     wasi_nn_error ret = unsupported_operation;

     if (encoding != openvino) {
@@ -223,33 +257,47 @@ load(void *ctx, graph_builder_array *builder, graph_encoding encoding,
     graph_builder xml = builder->buf[0];
     graph_builder weight = builder->buf[1];

+    graph_idx = ov_ctx->n_graphs;
+    if (graph_idx >= MAX_GRAPHS) {
+        return runtime_error;
+    }
+    graph = &ov_ctx->graphs[graph_idx];
+    memset(graph, 0, sizeof(*graph));
+
     /* transfer weight to an ov tensor */
     {
-        ov_ctx->weight_data = os_malloc(weight.size);
-        if (!ov_ctx->weight_data)
+        graph->weight_data = os_malloc(weight.size);
+        if (!graph->weight_data)
             goto fail;
-        memcpy(ov_ctx->weight_data, weight.buf, weight.size);
+        memcpy(graph->weight_data, weight.buf, weight.size);

         ov_element_type_e type = U8;
         int64_t dims[1] = { weight.size };
         ov_shape_t shape = { 1, dims };
         CHECK_OV_STATUS(ov_tensor_create_from_host_ptr(type, shape,
-                                                       ov_ctx->weight_data,
-                                                       &ov_ctx->weights_tensor),
+                                                       graph->weight_data,
+                                                       &graph->weights_tensor),
                         ret);
     }

     /* load model from buffer */
     CHECK_OV_STATUS(ov_core_read_model_from_memory_buffer(
                         ov_ctx->core, (char *)xml.buf, xml.size,
-                        ov_ctx->weights_tensor, &ov_ctx->model),
+                        graph->weights_tensor, &graph->model),
                     ret);
 #ifndef NDEBUG
-    print_model_input_output_info(ov_ctx->model);
+    print_model_input_output_info(graph->model);
 #endif

-    ret = success;
+    CHECK_OV_STATUS(ov_core_compile_model(ov_ctx->core, graph->model, "CPU", 0,
+                                          &graph->compiled_model),
+                    ret);
+
+    *g = graph_idx;
+    ov_ctx->n_graphs++;
+    return success;
 fail:
+    free_graph(graph);
     return ret;
 }
@@ -257,20 +305,62 @@ __attribute__((visibility("default"))) wasi_nn_error
 load_by_name(void *ctx, const char *filename, uint32_t filename_len, graph *g)
 {
     OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx;
+    struct OpenVINOGraph *graph;
+    unsigned int graph_idx;
     wasi_nn_error ret = unsupported_operation;

-    CHECK_OV_STATUS(
-        ov_core_read_model(ov_ctx->core, filename, NULL, &ov_ctx->model), ret);
+    graph_idx = ov_ctx->n_graphs;
+    if (graph_idx >= MAX_GRAPHS) {
+        return runtime_error;
+    }
+    graph = &ov_ctx->graphs[graph_idx];
+    memset(graph, 0, sizeof(*graph));

-    ret = success;
+    CHECK_OV_STATUS(
+        ov_core_read_model(ov_ctx->core, filename, NULL, &graph->model), ret);
+
+    CHECK_OV_STATUS(ov_core_compile_model(ov_ctx->core, graph->model, "CPU", 0,
+                                          &graph->compiled_model),
+                    ret);
+
+    *g = graph_idx;
+    ov_ctx->n_graphs++;
+    return success;
 fail:
+    free_graph(graph);
     return ret;
 }

 __attribute__((visibility("default"))) wasi_nn_error
 init_execution_context(void *ctx, graph g, graph_execution_context *exec_ctx)
 {
+    OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx;
+    struct OpenVINOGraph *graph;
+    struct OpenVINOExecutionContext *exec;
+    unsigned int exec_idx;
+    wasi_nn_error ret;
+
+    if (g >= ov_ctx->n_graphs)
+        return runtime_error;
+    graph = &ov_ctx->graphs[g];
+
+    exec_idx = ov_ctx->n_execution_contexts;
+    if (exec_idx >= MAX_EXECUTION_CONTEXTS)
+        return runtime_error;
+    exec = &ov_ctx->execution_contexts[exec_idx];
+    memset(exec, 0, sizeof(*exec));
+
+    exec->graph = graph;
+    CHECK_OV_STATUS(ov_compiled_model_create_infer_request(
+                        graph->compiled_model, &exec->infer_request),
+                    ret);
+
+    *exec_ctx = exec_idx;
+    ov_ctx->n_execution_contexts++;
     return success;
+fail:
+    return ret;
 }

 __attribute__((visibility("default"))) wasi_nn_error
@@ -278,10 +368,16 @@ set_input(void *ctx, graph_execution_context exec_ctx, uint32_t index,
           tensor *wasi_nn_tensor)
 {
     OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx;
+    struct OpenVINOExecutionContext *exec;
     wasi_nn_error ret = unsupported_operation;
     ov_shape_t input_shape = { 0 };
+    ov_tensor_t *input_tensor = NULL;
     int64_t *ov_dims = NULL;

+    if (exec_ctx >= ov_ctx->n_execution_contexts)
+        return runtime_error;
+    exec = &ov_ctx->execution_contexts[exec_ctx];
+
     /* wasi_nn_tensor -> ov_tensor */
     {
         ret = uint32_array_to_int64_array(wasi_nn_tensor->dimensions->size,
@@ -305,28 +401,21 @@ set_input(void *ctx, graph_execution_context exec_ctx, uint32_t index,
                       shape_info);

         CHECK_OV_STATUS(ov_tensor_create_from_host_ptr(input_type, input_shape,
-                                                       wasi_nn_tensor->data,
-                                                       &ov_ctx->input_tensor),
+                                                       wasi_nn_tensor->data.buf,
+                                                       &input_tensor),
                         ret);
     }

-    CHECK_OV_STATUS(ov_core_compile_model(ov_ctx->core, ov_ctx->model, "CPU", 0,
-                                          &ov_ctx->compiled_model),
-                    ret);
-
-    CHECK_OV_STATUS(ov_compiled_model_create_infer_request(
-                        ov_ctx->compiled_model, &ov_ctx->infer_request),
-                    ret);
-
     /* install ov_tensor -> infer_request */
     CHECK_OV_STATUS(ov_infer_request_set_input_tensor_by_index(
-                        ov_ctx->infer_request, index, ov_ctx->input_tensor),
+                        exec->infer_request, index, input_tensor),
                     ret);
     ret = success;

 fail:
     if (ov_dims)
         os_free(ov_dims);
+    if (input_tensor)
+        ov_tensor_free(input_tensor);
     ov_shape_free(&input_shape);

     return ret;
@@ -336,9 +425,14 @@ __attribute__((visibility("default"))) wasi_nn_error
 compute(void *ctx, graph_execution_context exec_ctx)
 {
     OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx;
+    struct OpenVINOExecutionContext *exec;
     wasi_nn_error ret = unsupported_operation;

-    CHECK_OV_STATUS(ov_infer_request_infer(ov_ctx->infer_request), ret);
+    if (exec_ctx >= ov_ctx->n_execution_contexts)
+        return runtime_error;
+    exec = &ov_ctx->execution_contexts[exec_ctx];
+
+    CHECK_OV_STATUS(ov_infer_request_infer(exec->infer_request), ret);
     ret = success;
 fail:
     return ret;
@@ -346,28 +440,33 @@ fail:
 __attribute__((visibility("default"))) wasi_nn_error
 get_output(void *ctx, graph_execution_context exec_ctx, uint32_t index,
-           tensor_data output_tensor, uint32_t *output_tensor_size)
+           tensor_data *output_tensor, uint32_t *output_tensor_size)
 {
     OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx;
+    struct OpenVINOExecutionContext *exec;
     wasi_nn_error ret = unsupported_operation;
     ov_tensor_t *ov_tensor = NULL;
     void *data = NULL;
     size_t byte_size = 0;

+    if (exec_ctx >= ov_ctx->n_execution_contexts)
+        return runtime_error;
+    exec = &ov_ctx->execution_contexts[exec_ctx];
+
     CHECK_OV_STATUS(ov_infer_request_get_output_tensor_by_index(
-                        ov_ctx->infer_request, index, &ov_tensor),
+                        exec->infer_request, index, &ov_tensor),
                     ret);

     CHECK_OV_STATUS(ov_tensor_get_byte_size(ov_tensor, &byte_size), ret);

-    if (byte_size > *output_tensor_size) {
+    if (byte_size > output_tensor->size) {
         ret = too_large;
         goto fail;
     }

     CHECK_OV_STATUS(ov_tensor_data(ov_tensor, &data), ret);
-    memcpy(output_tensor, data, byte_size);
+    memcpy(output_tensor->buf, data, byte_size);
     *output_tensor_size = (uint32_t)byte_size;
@@ -421,27 +520,16 @@ __attribute__((visibility("default"))) wasi_nn_error
 deinit_backend(void *ctx)
 {
     OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx;
+    unsigned int i;

     if (!ov_ctx)
         return invalid_argument;

-    if (ov_ctx->weight_data)
-        os_free(ov_ctx->weight_data);
-
-    if (ov_ctx->weights_tensor)
-        ov_tensor_free(ov_ctx->weights_tensor);
-
-    if (ov_ctx->input_tensor)
-        ov_tensor_free(ov_ctx->input_tensor);
-
-    if (ov_ctx->infer_request)
-        ov_infer_request_free(ov_ctx->infer_request);
-
-    if (ov_ctx->compiled_model)
-        ov_compiled_model_free(ov_ctx->compiled_model);
-
-    if (ov_ctx->model)
-        ov_model_free(ov_ctx->model);
+    for (i = 0; i < ov_ctx->n_execution_contexts; i++)
+        free_execution_context(&ov_ctx->execution_contexts[i]);
+
+    for (i = 0; i < ov_ctx->n_graphs; i++)
+        free_graph(&ov_ctx->graphs[i]);

     if (ov_ctx->core)
         ov_core_free(ov_ctx->core);
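The OpenVINO backend now allocates graphs and execution contexts from fixed-capacity tables and uses the slot index as the wasi-nn handle. That allocation/validation idiom can be sketched with a stand-in struct (the field here is a placeholder for the real OpenVINO objects):

```c
#include <assert.h>
#include <string.h>

#define MAX_GRAPHS 4

/* stand-in for struct OpenVINOGraph */
typedef struct {
    int loaded;
} Graph;

typedef struct {
    Graph graphs[MAX_GRAPHS];
    unsigned int n_graphs;
} Ctx;

/* allocate the next free slot; the wasi-nn handle is simply the index */
int
alloc_graph(Ctx *c, unsigned int *handle)
{
    if (c->n_graphs >= MAX_GRAPHS)
        return -1; /* table full: runtime_error in the backend */
    *handle = c->n_graphs;
    memset(&c->graphs[*handle], 0, sizeof(Graph));
    c->graphs[*handle].loaded = 1;
    c->n_graphs++; /* publish only after the slot is fully initialized */
    return 0;
}

/* validate a user-supplied handle before dereferencing the slot */
Graph *
get_graph(Ctx *c, unsigned int handle)
{
    return handle < c->n_graphs ? &c->graphs[handle] : NULL;
}
```

Bumping `n_graphs` only after the slot is set up means a failed `load` leaves no half-initialized handle visible, which is why the `fail:` paths can simply `free_graph()` the scratch slot.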


@@ -9,7 +9,11 @@
 #include "wasi_nn_types.h"
 #include "wasm_export.h"
+#include "bh_platform.h"

 typedef struct {
+    korp_mutex lock;
+    bool busy;
     bool is_backend_ctx_initialized;
     bool is_model_loaded;
     graph_encoding backend;
@@ -28,7 +32,7 @@ typedef wasi_nn_error (*SET_INPUT)(void *, graph_execution_context, uint32_t,
                                    tensor *);
 typedef wasi_nn_error (*COMPUTE)(void *, graph_execution_context);
 typedef wasi_nn_error (*GET_OUTPUT)(void *, graph_execution_context, uint32_t,
-                                    tensor_data, uint32_t *);
+                                    tensor_data *, uint32_t *);

 /* wasi-nn general APIs */
 typedef wasi_nn_error (*BACKEND_INITIALIZE)(void **);
 typedef wasi_nn_error (*BACKEND_DEINITIALIZE)(void *);
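The function-pointer typedefs above are how the core wasi-nn layer dispatches into whichever backend is loaded. A minimal sketch of that table-based indirection with the new `tensor_data *` signature (the table struct, toy backend, and helper names are assumptions, not the real `call_wasi_nn_func` machinery):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* buffer descriptor passed by pointer, as in the new GET_OUTPUT typedef */
typedef struct {
    uint8_t *buf;
    uint32_t size;
} tensor_data;

typedef int (*GET_OUTPUT)(void *, uint32_t, uint32_t, tensor_data *,
                          uint32_t *);

/* per-backend dispatch table (only one entry shown) */
typedef struct {
    GET_OUTPUT get_output;
} backend_api;

/* toy backend: writes "ok" into the caller-provided buffer */
int
toy_get_output(void *ctx, uint32_t exec_ctx, uint32_t index,
               tensor_data *out, uint32_t *out_size)
{
    (void)ctx;
    (void)exec_ctx;
    (void)index;
    if (out->size < 2)
        return 1; /* too_large, in wasi-nn terms */
    memcpy(out->buf, "ok", 2);
    *out_size = 2;
    return 0;
}

/* the core layer calls through the table, never the backend directly */
int
dispatch_get_output(const backend_api *api, void *ctx, uint32_t exec_ctx,
                    uint32_t index, tensor_data *out, uint32_t *out_size)
{
    return api->get_output(ctx, exec_ctx, index, out, out_size);
}
```

Passing the descriptor by pointer lets every backend see both the buffer and its size, which is what makes the bounds checks in the backends above possible.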


@@ -3,11 +3,10 @@
  * SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
  */

-#include "wasi_nn_tensorflowlite.hpp"
 #include "utils/logger.h"

 #include "bh_platform.h"
-#include "wasi_nn_types.h"
+#include "wasi_nn_backend.h"
 #include "wasm_export.h"

 #include <tensorflow/lite/interpreter.h>
@@ -281,6 +280,11 @@ set_input(void *tflite_ctx, graph_execution_context ctx, uint32_t index,
 {
     TFLiteContext *tfl_ctx = (TFLiteContext *)tflite_ctx;

+    if (input_tensor->type != fp32) {
+        NN_ERR_PRINTF("unsupported input tensor type %u", input_tensor->type);
+        return runtime_error;
+    }
+
     wasi_nn_error res;
     if (success != (res = is_valid_graph_execution_context(tfl_ctx, ctx)))
         return res;
@@ -319,7 +323,7 @@ set_input(void *tflite_ctx, graph_execution_context ctx, uint32_t index,
                 index);

         int size = model_tensor_size * sizeof(float);
-        bh_memcpy_s(it, size, input_tensor->data, size);
+        bh_memcpy_s(it, size, input_tensor->data.buf, size);
     }
     else { // TODO: Assuming uint8 quantized networks.
         TfLiteAffineQuantization *quant_info =
@@ -337,7 +341,7 @@ set_input(void *tflite_ctx, graph_execution_context ctx, uint32_t index,
         NN_DBG_PRINTF("input tensor: (scale, offset) = (%f, %f)", scale,
                       zero_point);

-        float *input_tensor_f = (float *)input_tensor->data;
+        float *input_tensor_f = (float *)input_tensor->data.buf;
         for (uint32_t i = 0; i < model_tensor_size; ++i) {
             it[i] = (uint8_t)(input_tensor_f[i] / scale + zero_point);
         }
@@ -361,7 +365,7 @@ compute(void *tflite_ctx, graph_execution_context ctx)

 __attribute__((visibility("default"))) wasi_nn_error
 get_output(void *tflite_ctx, graph_execution_context ctx, uint32_t index,
-           tensor_data output_tensor, uint32_t *output_tensor_size)
+           tensor_data *output_tensor, uint32_t *output_tensor_size)
 {
     TFLiteContext *tfl_ctx = (TFLiteContext *)tflite_ctx;
@@ -384,23 +388,34 @@ get_output(void *tflite_ctx, graph_execution_context ctx, uint32_t index,
         return too_large;
     }

-    uint32_t model_tensor_size = 1;
-    for (int i = 0; i < (int)tensor->dims->size; ++i)
-        model_tensor_size *= (uint32_t)tensor->dims->data[i];
-
-    if (*output_tensor_size < model_tensor_size) {
-        NN_ERR_PRINTF("Insufficient memory to copy tensor %d", index);
-        return too_large;
-    }
-
     if (tensor->quantization.type == kTfLiteNoQuantization) {
         NN_DBG_PRINTF("No quantization information");
-        float *ot =
-            tfl_ctx->interpreters[ctx].interpreter->typed_output_tensor<float>(
-                index);
-
-        int size = model_tensor_size * sizeof(float);
-        bh_memcpy_s(output_tensor, size, ot, size);
+#if WASM_ENABLE_WASI_EPHEMERAL_NN != 0
+        if (output_tensor->size < tensor->bytes) {
+            NN_ERR_PRINTF("Insufficient memory to copy tensor %d", index);
+            return too_large;
+        }
+#else
+        /*
+         * for now, maintain the bug-to-bug compatibility with the old abi,
+         * where the size here is the number of fp32, not bytes.
+         */
+        if (output_tensor->size < tensor->bytes / sizeof(float)) {
+            NN_ERR_PRINTF("Insufficient memory to copy tensor %d", index);
+            return too_large;
+        }
+#endif
+        bh_memcpy_s(output_tensor->buf, output_tensor->size, tensor->data.data,
+                    tensor->bytes);
+#if WASM_ENABLE_WASI_EPHEMERAL_NN != 0
+        *output_tensor_size = tensor->bytes;
+#else
+        /*
+         * for now, maintain the bug-to-bug compatibility with the old abi,
+         * where the size here is the number of fp32, not bytes.
+         */
+        *output_tensor_size = tensor->bytes / sizeof(float);
+#endif
     }
     else { // TODO: Assuming uint8 quantized networks.
         TfLiteAffineQuantization *quant_info =
@@ -409,6 +424,27 @@ get_output(void *tflite_ctx, graph_execution_context ctx, uint32_t index,
             NN_ERR_PRINTF("Quantization per channel is not supported");
             return runtime_error;
         }

+        uint32_t model_tensor_size = 1;
+        for (int i = 0; i < (int)tensor->dims->size; ++i)
+            model_tensor_size *= (uint32_t)tensor->dims->data[i];
+
+#if WASM_ENABLE_WASI_EPHEMERAL_NN != 0
+        if (output_tensor->size / sizeof(float) < model_tensor_size) {
+            NN_ERR_PRINTF("Insufficient memory to copy tensor %d", index);
+            return too_large;
+        }
+#else
+        /*
+         * for now, maintain the bug-to-bug compatibility with the old abi,
+         * where the size here is the number of fp32, not bytes.
+         */
+        if (output_tensor->size < model_tensor_size) {
+            NN_ERR_PRINTF("Insufficient memory to copy tensor %d", index);
+            return too_large;
+        }
+#endif
+
         uint8_t *ot = tfl_ctx->interpreters[ctx]
                           .interpreter->typed_output_tensor<uint8_t>(index);
@@ -417,13 +453,22 @@ get_output(void *tflite_ctx, graph_execution_context ctx, uint32_t index,
         NN_DBG_PRINTF("output tensor: (scale, offset) = (%f, %f)", scale,
                       zero_point);

-        float *output_tensor_f = (float *)output_tensor;
+        float *output_tensor_f = (float *)output_tensor->buf;
         for (uint32_t i = 0; i < model_tensor_size; ++i) {
             output_tensor_f[i] = (ot[i] - zero_point) * scale;
         }
+
+#if WASM_ENABLE_WASI_EPHEMERAL_NN != 0
+        *output_tensor_size = model_tensor_size * sizeof(float);
+#else
+        /*
+         * for now, maintain the bug-to-bug compatibility with the old abi,
+         * where the size here is the number of fp32, not bytes.
+         */
+        *output_tensor_size = model_tensor_size;
+#endif
     }

-    *output_tensor_size = model_tensor_size;
     return success;
 }
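The `#if` blocks above keep two size conventions apart: the `wasi_ephemeral_nn` ABI counts bytes, while the legacy ABI counts fp32 elements (a bug kept for compatibility). The bounds check and reported size can be isolated into one sketch (a hypothetical helper, not the tflite backend's code):

```c
#include <assert.h>
#include <stdint.h>

/* returns 0 on success, -1 when the caller's buffer is too small
   ("too_large" in wasi-nn terms). ephemeral_abi selects byte (1) vs
   fp32-element (0) accounting for both the check and the reported size. */
int
copy_fp32_output_size(uint32_t caller_size, uint32_t tensor_bytes,
                      int ephemeral_abi, uint32_t *reported_size)
{
    if (ephemeral_abi) {
        if (caller_size < tensor_bytes)
            return -1;
        *reported_size = tensor_bytes; /* size in bytes */
    }
    else {
        /* legacy bug-to-bug compatibility: sizes count fp32 elements */
        uint32_t n_elems = tensor_bytes / sizeof(float);
        if (caller_size < n_elems)
            return -1;
        *reported_size = n_elems;
    }
    return 0;
}
```

Keeping both conventions behind one `WASM_ENABLE_WASI_EPHEMERAL_NN` switch means old guests keep working while new guests get sizes that actually mean bytes.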


@@ -1,47 +0,0 @@
-/*
- * Copyright (C) 2019 Intel Corporation. All rights reserved.
- * SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
- */
-
-#ifndef WASI_NN_TENSORFLOWLITE_HPP
-#define WASI_NN_TENSORFLOWLITE_HPP
-
-#include "wasi_nn_types.h"
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-__attribute__((visibility("default"))) wasi_nn_error
-load(void *tflite_ctx, graph_builder_array *builder, graph_encoding encoding,
-     execution_target target, graph *g);
-
-__attribute__((visibility("default"))) wasi_nn_error
-load_by_name(void *tflite_ctx, const char *filename, uint32_t filename_len,
-             graph *g);
-
-__attribute__((visibility("default"))) wasi_nn_error
-init_execution_context(void *tflite_ctx, graph g, graph_execution_context *ctx);
-
-__attribute__((visibility("default"))) wasi_nn_error
-set_input(void *tflite_ctx, graph_execution_context ctx, uint32_t index,
-          tensor *input_tensor);
-
-__attribute__((visibility("default"))) wasi_nn_error
-compute(void *tflite_ctx, graph_execution_context ctx);
-
-__attribute__((visibility("default"))) wasi_nn_error
-get_output(void *tflite_ctx, graph_execution_context ctx, uint32_t index,
-           tensor_data output_tensor, uint32_t *output_tensor_size);
-
-__attribute__((visibility("default"))) wasi_nn_error
-init_backend(void **tflite_ctx);
-
-__attribute__((visibility("default"))) wasi_nn_error
-deinit_backend(void *tflite_ctx);
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif


@@ -3,6 +3,17 @@
 # Copyright (C) 2019 Intel Corporation. All rights reserved.
 # SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception

+# on intel mac, this ends up with a lot of the following error.
+#
+# AttributeError: 'Sequential' object has no attribute '_get_save_spec'.
+#
+# * "pip install tensorflow" installs tensorflow 2.16.2 on intel mac.
+#   (because it's the last version before tf deprecated the target.)
+# * keras 3 support in the version seems incomplete (thus the error)
+# * a workaround: use keras 2 as mentioned in:
+#   https://github.com/tensorflow/tensorflow/releases/tag/v2.16.1
+#   https://blog.tensorflow.org/2024/03/whats-new-in-tensorflow-216.html
+
 CURR_PATH=$(cd $(dirname $0) && pwd -P)

 # WASM application that uses WASI-NN
View File

@ -3,7 +3,7 @@
import tensorflow as tf import tensorflow as tf
import numpy as np import numpy as np
from keras.layers import AveragePooling2D, Conv2D from tensorflow.keras.layers import AveragePooling2D, Conv2D
from tensorflow.keras import Input, Model from tensorflow.keras import Input, Model


@@ -35,8 +35,8 @@ extend_vector(Vector *vector, size_t length)
     if (length <= vector->max_elems)
         return true;
-    if (length < vector->size_elem * 3 / 2)
-        length = vector->size_elem * 3 / 2;
+    if (length < vector->max_elems * 3 / 2)
+        length = vector->max_elems * 3 / 2;
     if (!(data = alloc_vector_data(length, vector->size_elem))) {
         return false;
@@ -194,12 +194,12 @@ bh_vector_append(Vector *vector, const void *elem_buf)
         goto just_return;
     }
-    /* make sure one more slot is used by the thread who allocas it */
+    /* make sure one more slot is used by the thread who allocates it */
     if (vector->lock)
         os_mutex_lock(vector->lock);
     if (!extend_vector(vector, vector->num_elems + 1)) {
-        LOG_ERROR("Append ector elem failed: extend vector failed.\n");
+        LOG_ERROR("Append vector elem failed: extend vector failed.\n");
         goto unlock_return;
     }
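The hunk above fixes the growth test to use `max_elems` (the current capacity) instead of `size_elem` (the per-element byte size), so the vector grows by 1.5x of its capacity. A minimal Python sketch of the corrected policy (function name hypothetical, not a WAMR API):

```python
def grow_capacity(max_elems: int, required: int) -> int:
    """Corrected extend_vector policy: if the requested length does not fit,
    grow to at least 1.5x the current capacity (max_elems)."""
    if required <= max_elems:
        return max_elems  # already fits, no reallocation needed
    if required < max_elems * 3 // 2:
        required = max_elems * 3 // 2
    return required

# e.g. appending one element to a full vector of capacity 8 grows it to 12
print(grow_capacity(8, 9))  # 12
```

Before the fix, the growth factor was computed from the element size, so a vector of small elements could repeatedly reallocate with barely any headroom.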


@@ -102,6 +102,7 @@ cmake -DWAMR_BUILD_PLATFORM=linux -DWAMR_BUILD_TARGET=ARM
 ### **Enable lib wasi-nn**
 - **WAMR_BUILD_WASI_NN**=1/0, default to disable if not set
+> Note: WAMR_BUILD_WASI_NN without WAMR_BUILD_WASI_EPHEMERAL_NN is deprecated and will likely be removed in future versions of WAMR. Please consider to enable WAMR_BUILD_WASI_EPHEMERAL_NN as well.
 > Note: See [WASI-NN](../core/iwasm/libraries/wasi-nn) for more details.
 ### **Enable lib wasi-nn GPU mode**
@@ -113,7 +114,7 @@ cmake -DWAMR_BUILD_PLATFORM=linux -DWAMR_BUILD_TARGET=ARM
 - **WAMR_BUILD_WASI_NN_EXTERNAL_DELEGATE_PATH**=Path to the external delegate shared library (e.g. `libedgetpu.so.1.0` for Coral USB)
 ### **Enable lib wasi-nn with `wasi_ephemeral_nn` module support**
-- **WAMR_BUILD_WASI_EPHEMERAL_NN**=1/0, default to disable if not set
+- **WAMR_BUILD_WASI_EPHEMERAL_NN**=1/0, default to enable if not set
 ### **Disable boundary check with hardware trap**
 - **WAMR_DISABLE_HW_BOUND_CHECK**=1/0, default to enable if not set and supported by platform


@@ -72,7 +72,7 @@ def to_json(inst, cls):
 class Fuzzing(db.Model):
-    __tablename__ = 'fazzing_task'
+    __tablename__ = 'fuzzing_task'
     id = db.Column(db.Integer, autoincrement=True,
                    primary_key=True, nullable=False)
     repo = db.Column(db.String(200), nullable=False, default='')
@@ -96,7 +96,7 @@ class TaskError(db.Model):
     __tablename__ = 'task_error'
     id = db.Column(db.Integer, autoincrement=True,
                    primary_key=True, nullable=False)
-    fazzing_id = db.Column(db.Integer, db.ForeignKey("fazzing_task.id"))
+    fuzzing_id = db.Column(db.Integer, db.ForeignKey("fuzzing_task.id"))
     name = db.Column(db.String(200), nullable=False, default='')
     std_out = db.Column(db.Text, default='')
     data = db.Column(db.JSON)
@@ -119,9 +119,9 @@ def to_data(data):
 def error_count(data):
     error = len(TaskError.query.filter(
-        TaskError.fazzing_id == data.get('id'), TaskError.status.in_([1, 2])).all())
+        TaskError.fuzzing_id == data.get('id'), TaskError.status.in_([1, 2])).all())
     end_error = len(TaskError.query.filter(
-        TaskError.fazzing_id == data.get('id'), TaskError.status == 0).all())
+        TaskError.fuzzing_id == data.get('id'), TaskError.status == 0).all())
     data['error'] = error
     data['end_error'] = end_error
     return data
@@ -159,11 +159,11 @@ def show_fuzz_list():
     id = data.get('id')
     if id:
         all_error = TaskError.query.filter(
-            TaskError.fazzing_id == id).with_entities(TaskError.id, TaskError.fazzing_id,
+            TaskError.fuzzing_id == id).with_entities(TaskError.id, TaskError.fuzzing_id,
                                                       TaskError.create_time, TaskError.data,
                                                       TaskError.name, TaskError.status,
                                                       TaskError.update_time, TaskError.comment).order_by(TaskError.status.desc(), TaskError.update_time.desc(), TaskError.id.desc()).all()
-        data_message = [{'id': error['id'], "fuzzing_id": error['fazzing_id'],
+        data_message = [{'id': error['id'], "fuzzing_id": error['fuzzing_id'],
                          "name": error['name'], "data": error['data'],
                          'create_time': error['create_time'].strftime('%Y-%m-%d %H:%M:%S'),
                          'update_time': error['update_time'].strftime('%Y-%m-%d %H:%M:%S'),
@@ -204,7 +204,7 @@ def New_fuzzing():
         # curd.set_error_status_to(list(map(lambda x: x.id, error_list)), db)
         # Fuzzing.query.filter_by(id=fuzz.id).delete()
         fuzz.data = {'error': "Clone repo Error"}
-        db.commit()
+        db.session.commit()
         return jsonify({"status": 0, "result": "", "msg": "Clone repo Error"})
     wamr_path_parent = fuzz_dir.parent.parent
@@ -277,7 +277,7 @@ def scheduler_run_task():
     for fuzz in fuzz_query:
         all_error = TaskError.query.filter(
-            TaskError.fazzing_id == fuzz.id).with_entities(TaskError.name).all()
+            TaskError.fuzzing_id == fuzz.id).with_entities(TaskError.name).all()
         fuzz_cmd = wasm_mutator_dir / \
             'workspace' / f'build_{fuzz.id}'
         dir_list = filter(lambda x: x.startswith(
@@ -287,7 +287,7 @@ def scheduler_run_task():
         for dir in dir_list:
            cmd = f'cd {fuzz_cmd} && ./wasm_mutator_fuzz {dir}'
            status, resp = getstatusoutput(cmd)
-           task_error = TaskError(name=dir, std_out=resp, fazzing_id=fuzz.id,
+           task_error = TaskError(name=dir, std_out=resp, fuzzing_id=fuzz.id,
                                   create_time=datetime.utcnow() + timedelta(hours=8))
            db.session.add(task_error)
            db.session.commit()
@@ -312,7 +312,7 @@ def get_error_txt():
         return jsonify({"status": 0, "results": [], 'msg': "Error"})
     error = TaskError.query.get(id)
     fuzz_cmd = wasm_mutator_dir / \
-        'workspace' / f'build_{error.fazzing_id}'
+        'workspace' / f'build_{error.fuzzing_id}'
     file_cmd = fuzz_cmd / error.name
     response = send_file(file_cmd, as_attachment=True,
@@ -351,7 +351,7 @@ def get_cases_zip():
     with ZipFile(memory_file, "w", ZIP_DEFLATED) as zf:
         for task_error in task_query:
             fuzz_cmd = wasm_mutator_dir / \
-                'workspace' / f'build_{task_error.fazzing_id}'
+                'workspace' / f'build_{task_error.fuzzing_id}'
             file_cmd = fuzz_cmd / task_error.name
             zf.write(str(file_cmd), arcname=task_error.name)
     memory_file.seek(0)
@@ -399,7 +399,7 @@ def error_restart():
     if run_status:
         return jsonify({"status": 0, "results": [], 'msg': "There are already tasks in progress"})
     task_query = TaskError.query.filter(TaskError.id.in_(id_list)).all()
-    fuzzing_id = task_query[0].fazzing_id
+    fuzzing_id = task_query[0].fuzzing_id
     fuzz_cmd = wasm_mutator_dir / \
         'workspace' / f'build_{fuzzing_id}'
     restart_cmd = wasm_mutator_dir / \
@@ -412,7 +412,7 @@ def error_restart():
     if not Path(restart_cmd / 'wamr').exists():
         print('------ error: clone repo not folder exists ------')
         # fuzz.data = {'error': "Clone repo Error"}
-        db.commit()
+        db.session.commit()
         return jsonify({"status": 0, "result": "", "msg": "Clone repo Error"})
     wamr_path_parent = fuzz_dir.parent.parent
     wamr_path = wamr_path_parent / 'wamr'


@@ -218,22 +218,57 @@ simply run `run.py`
 ./run.py
 ```
+Specify a specific issue with option `--issues`/`-i`
+```shell
+./run.py --issues 2833           # test 1 issue #2833
+./run.py -i 2833,2834,2835       # test 3 issues #2833 #2834 #2835
+```
 If everything went well, you should see similarly output in your command line output
 ```shell
-Finish testing, 22/22 of test cases passed, no more issues should further test
+==== Test results ====
+Total: 22
+Passed: 22
+Failed: 0
+Left issues in folder: no more
+Cases in JSON but not found in folder: no more
 ```
 If you add the test case under directory `issues` but forget to add the running config in json file, the output can be something like
 ```shell
-Finish testing, 21/21 of test cases passed, {2945} issue(s) should further test
+==== Test results ====
+Total: 21
+Passed: 21
+Failed: 0
+missed: 0
+Left issues in folder: #3022
+Cases in JSON but not found in folder: no more
+```
+If you add the test case in `running_config.json` but used the wrong id or forget to add the test case under directory `issues`, the output can be someting like
+```shell
+==== Test results ====
+Total: 21
+Passed: 21
+Failed: 0
+missed: 0
+Left issues in folder: #2855
+Cases in JSON but not found in folder: #12345
 ```
 If some test case are failing, then it will be something like
 ```shell
-Finish testing, 21/22 of test cases passed, no more issue(s) should further test
+==== Test results ====
+Total: 22
+Passed: 21
+Failed: 1
+Left issues in folder: no more
+Cases in JSON but not found in folder: no more
 ```
 And a log file named `issues_tests.log` will be generated and inside it will display the details of the failing cases, for example:


@@ -10,7 +10,9 @@ import os
 import subprocess
 import glob
 import re
-from typing import Dict
+import argparse
+from typing import Dict, Optional, List
 WORK_DIR = os.getcwd()
 TEST_WASM_COMMAND = (
@@ -45,7 +47,12 @@ def dump_error_log(failing_issue_id, command_lists, exit_code_cmp, stdout_cmp):
 )
-def get_issue_ids_should_test():
+def get_issue_ids_should_test(selected_ids: Optional[List[int]] = None):
+    """Find all issue IDs that should be tested in folder issues."""
+    # If specific issue IDs are provided, return them as a set
+    if selected_ids:
+        return set(selected_ids)
     # Define the path pattern
     path_pattern = "issues/issue-*"
@@ -60,8 +67,8 @@
         # Extract the issue number using regular expression
         match = re.search(pattern, dir_path)
         if match:
-            issue_number = match.group(1)
-            issue_numbers.add(int(issue_number))
+            issue_number = int(match.group(1))
+            issue_numbers.add(issue_number)
     # Print the set of issue numbers
     return issue_numbers
@@ -77,10 +84,10 @@ def get_and_check(d, key, default=None, nullable=False):
 def run_and_compare_results(
-    passed_ids, failed_ids, issue_id, cmd, description, ret_code, stdout_content
-):
+    issue_id, cmd, description, ret_code, stdout_content
+) -> bool:
     print(f"####################################")
-    print(f"test BA issue #{issue_id} `{description}`: {cmd}")
+    print(f"test BA issue #{issue_id} `{description}`...")
     command_list = cmd.split()
     result = subprocess.run(
         command_list,
@@ -95,19 +102,21 @@
     exit_code_cmp = f"exit code (actual, expected) : {actual_exit_code, ret_code}"
     stdout_cmp = f"stdout (actual, expected) : {actual_output, stdout_content}"
-    print(exit_code_cmp)
-    print(stdout_cmp)
     if actual_exit_code == ret_code and (
         actual_output == stdout_content
-        or (stdout_content == "Compile success"
-            and actual_output.find(stdout_content) != -1)
+        or (
+            stdout_content == "Compile success"
+            and actual_output.find(stdout_content) != -1
+        )
         or (len(stdout_content) > 30 and actual_output.find(stdout_content) != -1)
     ):
-        passed_ids.add(issue_id)
         print("== PASS ==")
+        return True
     else:
-        failed_ids.add(issue_id)
+        print(cmd)
+        print(exit_code_cmp)
+        print(stdout_cmp)
         print(f"== FAILED: {issue_id} ==")
         dump_error_log(
             issue_id,
@@ -115,15 +124,11 @@
             exit_code_cmp,
             stdout_cmp,
         )
-    print("")
+        return False
-def run_issue_test_wamrc(
-    passed_ids, failed_ids, issue_id, compile_options, stdout_only_cmp_last_line=False
-):
+def run_issue_test_wamrc(issue_id, compile_options):
     compiler = get_and_check(compile_options, "compiler")
-    only_compile = get_and_check(compile_options, "only compile")
     in_file = get_and_check(compile_options, "in file")
     out_file = get_and_check(compile_options, "out file")
     options = get_and_check(compile_options, "options")
@@ -145,14 +150,10 @@
         compiler=compiler, options=options, out_file=out_file_path, in_file=in_file_path
     )
-    run_and_compare_results(
-        passed_ids, failed_ids, issue_id, cmd, description, ret_code, stdout_content
-    )
-    return only_compile
+    return run_and_compare_results(issue_id, cmd, description, ret_code, stdout_content)
-def run_issue_test_iwasm(passed_ids, failed_ids, issue_id, test_case):
+def run_issue_test_iwasm(issue_id, test_case) -> bool:
     runtime = get_and_check(test_case, "runtime")
     mode = get_and_check(test_case, "mode")
     file = get_and_check(test_case, "file")
@@ -194,17 +195,19 @@
         argument=argument,
     )
-    run_and_compare_results(
-        passed_ids, failed_ids, issue_id, cmd, description, ret_code, stdout_content
-    )
+    return run_and_compare_results(issue_id, cmd, description, ret_code, stdout_content)
-def process_and_run_test_cases(data: Dict[str, Dict]):
-    issue_ids_should_test = get_issue_ids_should_test()
+def process_and_run_test_cases(
+    data: Dict[str, Dict], selected_ids: Optional[List[int]] = None
+):
+    issue_ids_should_test = get_issue_ids_should_test(selected_ids)
     passed_ids = set()
     failed_ids = set()
+    json_only_ids = set()
+    # Iterate through each test case in the json data
     for test_case in data.get("test cases", []):
         is_deprecated = get_and_check(test_case, "deprecated")
         issue_ids = get_and_check(test_case, "ids", default=[])
@@ -214,33 +217,79 @@
             continue
         compile_options = get_and_check(test_case, "compile_options", nullable=True)
         for issue_id in issue_ids:
-            only_compile = False
-            # if this issue needs to test wamrc to compile the test case first
-            if compile_options:
-                only_compile = compile_options["only compile"]
-                run_issue_test_wamrc(passed_ids, failed_ids, issue_id, compile_options)
-            # if this issue requires to test iwasm to run the test case
-            if not only_compile:
-                run_issue_test_iwasm(passed_ids, failed_ids, issue_id, test_case)
+            if issue_id not in issue_ids_should_test:
+                json_only_ids.add(issue_id)
+                continue
             # cross out the this issue_id in the should test set
             issue_ids_should_test.remove(issue_id)
+            only_compile = False
+            # if this issue needs to test wamrc to compile the test case first
+            if compile_options:
+                only_compile = compile_options["only compile"]
+                compile_res = run_issue_test_wamrc(issue_id, compile_options)
+                if only_compile:
+                    if compile_res:
+                        passed_ids.add(issue_id)
+                    else:
+                        failed_ids.add(issue_id)
+                    continue
+                else:
+                    # if compile success, then continue to test iwasm
+                    if not compile_res:
+                        failed_ids.add(issue_id)
+                        continue
+            # if this issue requires to test iwasm to run the test case
+            if not only_compile:
+                if run_issue_test_iwasm(issue_id, test_case):
+                    passed_ids.add(issue_id)
+                else:
+                    failed_ids.add(issue_id)
     total = len(passed_ids) + len(failed_ids)
     passed = len(passed_ids)
     failed = len(failed_ids)
-    issue_ids_should_test = (
-        issue_ids_should_test if issue_ids_should_test else "no more"
+    format_issue_ids_should_test = (
+        " ".join(f"#{x}" for x in issue_ids_should_test)
+        if issue_ids_should_test
+        else "no more"
     )
+    format_json_only_ids = (
+        " ".join(f"#{x}" for x in json_only_ids) if json_only_ids else "no more"
+    )
+    print(f"####################################")
     print(f"==== Test results ====")
     print(f" Total: {total}")
     print(f" Passed: {passed}")
     print(f" Failed: {failed}")
+    if not selected_ids:
+        print(f" Left issues in folder: {format_issue_ids_should_test}")
+        print(f" Cases in JSON but not found in folder: {format_json_only_ids}")
+    else:
+        print(f" Issues not found in folder: {format_issue_ids_should_test}")
 def main():
+    parser = argparse.ArgumentParser(description="Run BA issue tests.")
+    parser.add_argument(
+        "-i",
+        "--issues",
+        type=str,
+        help="Comma separated list of issue ids to run, e.g. 1,2,3. Default: all.",
+    )
+    args = parser.parse_args()
+    selected_ids = None
+    if args.issues:
+        selected_ids = [int(x) for x in args.issues.split(",") if x.strip().isdigit()]
     # Path to the JSON file
     file_path = "running_config.json"
@@ -256,7 +305,7 @@ def main():
     os.remove(LOG_FILE)
     # Process the data
-    process_and_run_test_cases(data)
+    process_and_run_test_cases(data, selected_ids)
 if __name__ == "__main__":
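The new `--issues`/`-i` flag above parses a comma-separated id list and silently drops non-numeric entries before building the set of selected issues. A standalone sketch of that parsing step (function name hypothetical, mirroring the list comprehension in `main()`):

```python
def parse_issue_ids(spec: str) -> list:
    """Split an '--issues 2833,2834,...' value and keep only numeric ids,
    the same filtering run.py applies before selecting tests."""
    return [int(x) for x in spec.split(",") if x.strip().isdigit()]

print(parse_issue_ids("2833,2834,abc,2835"))  # [2833, 2834, 2835]
```

Note that an entirely non-numeric value yields an empty list, which then behaves like "run all issues" since `selected_ids` stays falsy.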


@@ -17,7 +17,7 @@ git apply ../../../wamr-test-suites/spec-test-script/gc_ignore_cases.patch
 # Set OCaml compiler environment
 eval $(opam config env)
-echo "compile the reference intepreter"
+echo "compile the reference interpreter"
 pushd interpreter
 make
 popd


@@ -9,7 +9,7 @@ import os
 from collections import OrderedDict
-def CLI_ARGS_GENREATOR(running_modes_supported: list[str]) -> list[str]:
+def CLI_ARGS_GENERATOR(running_modes_supported: list[str]) -> list[str]:
     res = []
     list_2d = [["--default-running-mode={} --module-running-mode={}".format(i, j)
                 for i in running_modes_supported] for j in running_modes_supported]
@@ -35,16 +35,16 @@ def main():
     ]
     # Python 3.7+: Dictionary iteration order is guaranteed to be in order of insertion.
-    # just to be safe, using orderreddict
+    # just to be safe, using OrderedDict
     # key: value -> compile mode, {"compile_flag": CMake compile flag, "iwasm_cli_args": array of CLI args tested}
     test_options = OrderedDict({
-        "INTERP": {"compile_flag": COMPILE_FLAGS[0], "cli_args": CLI_ARGS_GENREATOR(RUNNING_MODES[:1])},
-        "FAST_JIT": {"compile_flag": COMPILE_FLAGS[1], "cli_args": CLI_ARGS_GENREATOR(RUNNING_MODES[:2])},
+        "INTERP": {"compile_flag": COMPILE_FLAGS[0], "cli_args": CLI_ARGS_GENERATOR(RUNNING_MODES[:1])},
+        "FAST_JIT": {"compile_flag": COMPILE_FLAGS[1], "cli_args": CLI_ARGS_GENERATOR(RUNNING_MODES[:2])},
         "LLVM_JIT": {"compile_flag": COMPILE_FLAGS[2],
-                     "cli_args": CLI_ARGS_GENREATOR([RUNNING_MODES[0], RUNNING_MODES[2]])},
-        "MULTI_TIER_JIT": {"compile_flag": COMPILE_FLAGS[3], "cli_args": CLI_ARGS_GENREATOR(RUNNING_MODES)},
+                     "cli_args": CLI_ARGS_GENERATOR([RUNNING_MODES[0], RUNNING_MODES[2]])},
+        "MULTI_TIER_JIT": {"compile_flag": COMPILE_FLAGS[3], "cli_args": CLI_ARGS_GENERATOR(RUNNING_MODES)},
         "EAGER_JIT_WITH_BOTH_JIT": {"compile_flag": COMPILE_FLAGS[4],
-                                    "cli_args": CLI_ARGS_GENREATOR(RUNNING_MODES[:3])}
+                                    "cli_args": CLI_ARGS_GENERATOR(RUNNING_MODES[:3])}
     })
     build_cmd = "./build_c_embed.sh \"{build_flag}\""


@@ -29,7 +29,7 @@ def main():
     ]
     # Python 3.7+: Dictionary iteration order is guaranteed to be in order of insertion.
-    # just to be safe, using orderreddict
+    # just to be safe, using OrderedDict
     # key: value -> compile mode, {"compile_flag": CMake compile flag, "iwasm_cli_args": array of CLI args tested}
     test_options = OrderedDict({
         "INTERP": {"compile_flag": COMPILE_FLAGS[0], "iwasm_cli_args": IWASM_CLI_ARGS[:1]},


@@ -31,7 +31,7 @@ class memory64_atomic_test_suite : public testing::TestWithParam<RunningMode>
         return true;
     fail:
-        if (!module)
+        if (module)
             wasm_runtime_unload(module);
         return false;
@@ -56,6 +56,8 @@ class memory64_atomic_test_suite : public testing::TestWithParam<RunningMode>
         if (exec_env)
            wasm_runtime_destroy_exec_env(exec_env);
         if (module_inst)
+            wasm_runtime_deinstantiate(module_inst);
+        if (module)
             wasm_runtime_unload(module);
         return false;
     }


@@ -31,7 +31,7 @@ class memory64_test_suite : public testing::TestWithParam<RunningMode>
         return true;
     fail:
-        if (!module)
+        if (module)
             wasm_runtime_unload(module);
         return false;
@@ -56,11 +56,13 @@ class memory64_test_suite : public testing::TestWithParam<RunningMode>
         if (exec_env)
            wasm_runtime_destroy_exec_env(exec_env);
         if (module_inst)
+            wasm_runtime_deinstantiate(module_inst);
+        if (module)
             wasm_runtime_unload(module);
         return false;
     }
-    void destory_exec_env()
+    void destroy_exec_env()
     {
         wasm_runtime_destroy_exec_env(exec_env);
         wasm_runtime_deinstantiate(module_inst);
@@ -201,7 +203,7 @@ TEST_P(memory64_test_suite, memory_8GB)
     i64 = 0xbeefdead;
     ASSERT_EQ(i64, GET_U64_FROM_ADDR(wasm_argv));
-    destory_exec_env();
+    destroy_exec_env();
 }
 TEST_P(memory64_test_suite, mem64_from_clang)
@@ -228,7 +230,7 @@ TEST_P(memory64_test_suite, mem64_from_clang)
     i32 = 0x109;
     ASSERT_EQ(i32, wasm_argv[0]);
-    destory_exec_env();
+    destroy_exec_env();
 }
 INSTANTIATE_TEST_CASE_P(RunningMode, memory64_test_suite,


@@ -21,7 +21,7 @@ std::string TEST_WASM1 = "/hello.wasm";
 std::string TEST_WASM2 = "/mytest.wasm";
 char *WASM_FILE_1;
 char *WASM_FILE_2;
-std::vector<RunningMode> running_mode_supportted = { Mode_Interp,
+std::vector<RunningMode> running_mode_supported = { Mode_Interp,
 #if WASM_ENABLE_FAST_JIT != 0
     Mode_Fast_JIT,
 #endif
@@ -76,7 +76,7 @@ class wasm_running_modes_test_suite : public testing::TestWithParam<RunningMode>
         return true;
     fail:
-        if (!module)
+        if (module)
             wasm_runtime_unload(module);
         return false;
@@ -101,11 +101,13 @@ class wasm_running_modes_test_suite : public testing::TestWithParam<RunningMode>
         if (exec_env)
            wasm_runtime_destroy_exec_env(exec_env);
         if (module_inst)
+            wasm_runtime_deinstantiate(module_inst);
+        if (module)
             wasm_runtime_unload(module);
         return false;
     }
-    void destory_exec_env()
+    void destroy_exec_env()
     {
         wasm_runtime_destroy_exec_env(exec_env);
         wasm_runtime_deinstantiate(module_inst);
@@ -139,7 +141,7 @@ class wasm_running_modes_test_suite : public testing::TestWithParam<RunningMode>
         ASSERT_TRUE(ret);
         ASSERT_EQ(10, wasm_argv[0]);
-        destory_exec_env();
+        destroy_exec_env();
     }
     void run_wasm_complex(char *filename1, char *filename2,
@@ -168,7 +170,7 @@ class wasm_running_modes_test_suite : public testing::TestWithParam<RunningMode>
         ASSERT_TRUE(ret);
         ASSERT_EQ(10, wasm_argv[0]);
-        destory_exec_env();
+        destroy_exec_env();
         /* run wasm file 2 in running_mode */
         ret = load_wasm_file(filename2);
@@ -184,7 +186,7 @@ class wasm_running_modes_test_suite : public testing::TestWithParam<RunningMode>
         ret = wasm_runtime_call_wasm(exec_env, main, 2, wasm_argv);
         ASSERT_TRUE(ret);
-        destory_exec_env();
+        destroy_exec_env();
     }
     public:
@@ -246,7 +248,7 @@ TEST_F(wasm_running_modes_test_suite, wasm_runtime_is_running_mode_supported)
     // normal situation
     ASSERT_EQ(true, wasm_runtime_is_running_mode_supported(
                         static_cast<RunningMode>(Mode_Default)));
-    for (auto running_mode : running_mode_supportted) {
+    for (auto running_mode : running_mode_supported) {
         ASSERT_EQ(true, wasm_runtime_is_running_mode_supported(running_mode));
     }
@@ -264,7 +266,7 @@ TEST_F(wasm_running_modes_test_suite, wasm_runtime_set_default_running_mode)
     // normal situation: only set up
     ASSERT_EQ(true, wasm_runtime_set_default_running_mode(
                         static_cast<RunningMode>(Mode_Default)));
-    for (auto running_mode : running_mode_supportted) {
+    for (auto running_mode : running_mode_supported) {
         ASSERT_EQ(true, wasm_runtime_set_default_running_mode(running_mode));
     }
@@ -296,13 +298,13 @@ TEST_P(wasm_running_modes_test_suite,
        wasm_runtime_set_and_get_running_mode_complex)
 {
     RunningMode default_running_mode = GetParam();
-    for (auto running_mode : running_mode_supportted) {
+    for (auto running_mode : running_mode_supported) {
         run_wasm_complex(WASM_FILE_1, WASM_FILE_2, default_running_mode,
                          running_mode);
     }
 }
 INSTANTIATE_TEST_CASE_P(RunningMode, wasm_running_modes_test_suite,
-                        testing::ValuesIn(running_mode_supportted));
+                        testing::ValuesIn(running_mode_supported));
 }


@@ -362,31 +362,31 @@ function sightglass_test()
function setup_wabt()
{
# please sync with .github/actions/install-wasi-sdk-wabt/action.yml
-case ${PLATFORM} in
-cosmopolitan)
-;;
-linux)
-WABT_URL=https://github.com/WebAssembly/wabt/releases/download/1.0.37/wabt-1.0.37-ubuntu-20.04.tar.gz
-WABT_VERSION=1.0.37
-;;
-darwin)
-WABT_URL=https://github.com/WebAssembly/wabt/releases/download/1.0.36/wabt-1.0.36-macos-12.tar.gz
-WABT_VERSION=1.0.36
-;;
-windows)
-WABT_URL=https://github.com/WebAssembly/wabt/releases/download/1.0.37/wabt-1.0.37-windows.tar.gz
-WABT_VERSION=1.0.37
-;;
-*)
-echo "wabt platform for ${PLATFORM} in unknown"
-exit 1
-;;
-esac
if [ ${WABT_BINARY_RELEASE} == "YES" ]; then
echo "download a binary release and install"
local WAT2WASM=${WORK_DIR}/wabt/out/gcc/Release/wat2wasm
if [ ! -f ${WAT2WASM} ]; then
+case ${PLATFORM} in
+cosmopolitan)
+;;
+linux)
+WABT_URL=https://github.com/WebAssembly/wabt/releases/download/1.0.37/wabt-1.0.37-ubuntu-20.04.tar.gz
+WABT_VERSION=1.0.37
+;;
+darwin)
+WABT_URL=https://github.com/WebAssembly/wabt/releases/download/1.0.36/wabt-1.0.36-macos-12.tar.gz
+WABT_VERSION=1.0.36
+;;
+windows)
+WABT_URL=https://github.com/WebAssembly/wabt/releases/download/1.0.37/wabt-1.0.37-windows.tar.gz
+WABT_VERSION=1.0.37
+;;
+*)
+echo "wabt platform for ${PLATFORM} in unknown"
+exit 1
+;;
+esac
pushd /tmp
wget -O wabt-tar.gz --progress=dot:giga ${WABT_URL}
tar xf wabt-tar.gz
@@ -414,7 +414,7 @@ function setup_wabt()
function compile_reference_interpreter()
{
-echo "compile the reference intepreter"
+echo "compile the reference interpreter"
pushd interpreter
make
if [ $? -ne 0 ]
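The setup_wabt change above moves the platform `case` inside the download branch, so the release URL is only resolved when a binary release is actually fetched. A minimal, self-contained sketch of that pattern (dummy driver values, not the real script):

```shell
#!/bin/sh
# Sketch only: resolve the per-platform artifact URL lazily, on the
# branch that actually downloads, instead of unconditionally up front.
resolve_wabt_url() {
    case ${PLATFORM} in
        linux)
            WABT_URL=https://github.com/WebAssembly/wabt/releases/download/1.0.37/wabt-1.0.37-ubuntu-20.04.tar.gz
            ;;
        darwin)
            WABT_URL=https://github.com/WebAssembly/wabt/releases/download/1.0.36/wabt-1.0.36-macos-12.tar.gz
            ;;
        *)
            echo "wabt platform for ${PLATFORM} is unknown" >&2
            return 1
            ;;
    esac
}

PLATFORM=linux
WABT_BINARY_RELEASE=YES
if [ "${WABT_BINARY_RELEASE}" = "YES" ]; then
    resolve_wabt_url
    echo "would download ${WABT_URL}"
fi
```

With this shape, a platform that never downloads (e.g. `cosmopolitan`) never touches the URL table at all.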


@@ -391,6 +391,12 @@ if (NOT MSVC)
target_link_libraries (wamrc ssp.a ws2_32)
else()
target_link_libraries (wamrc -ldl)
+# Link libc++ statically to reduce the runtime dependency
+target_link_libraries (wamrc -static-libstdc++)
+# If not on macOS, link libgcc statically
+if (NOT APPLE)
+target_link_libraries (wamrc -static-libgcc)
+endif()
endif()
else()
target_link_libraries (wamrc aotclib vmlib ${lib_lldb} ${WAMRC_LINK_LLVM_LIBS} ${lib_ubsan}


@@ -0,0 +1,15 @@
#! /bin/sh
# Copyright (C) 2025 Midokura Japan KK. All rights reserved.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
set -e
PREFIX=${1:-/tmp/wamr}
WASI_SDK=${WASI_SDK:-/opt/wasi-sdk}
cmake -B build-lib \
-DCMAKE_TOOLCHAIN_FILE=${WASI_SDK}/share/cmake/wasi-sdk.cmake \
-DCMAKE_INSTALL_PREFIX=${PREFIX} \
.
cmake --build build-lib -t install
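Both new scripts take the install prefix as an optional first argument via `PREFIX=${1:-/tmp/wamr}`: the `${1:-default}` parameter expansion uses the first positional argument when given and falls back to the default otherwise. A quick demonstration:

```shell
#!/bin/sh
# Demonstrates the ${1:-default} expansion used by the scripts above.
show_prefix() {
    PREFIX=${1:-/tmp/wamr}
    echo "${PREFIX}"
}
show_prefix            # prints /tmp/wamr
show_prefix /opt/wamr  # prints /opt/wamr
```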


@@ -0,0 +1,33 @@
#! /bin/sh
# Copyright (C) 2025 Midokura Japan KK. All rights reserved.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
set -e
PREFIX=${1:-/tmp/wamr}
WASI_SDK=${WASI_SDK:-/opt/wasi-sdk}
cmake -B build-app-nn \
-DCMAKE_TOOLCHAIN_FILE=${WASI_SDK}/share/cmake/wasi-sdk.cmake \
-DCMAKE_PREFIX_PATH=${PREFIX} \
samples/nn
cmake --build build-app-nn
cmake -B build-app-nn-cli \
-DCMAKE_TOOLCHAIN_FILE=${WASI_SDK}/share/cmake/wasi-sdk.cmake \
-DCMAKE_PREFIX_PATH=${PREFIX} \
samples/nn-cli
cmake --build build-app-nn-cli
cmake -B build-app-socket-nslookup \
-DCMAKE_TOOLCHAIN_FILE=${WASI_SDK}/share/cmake/wasi-sdk-pthread.cmake \
-DCMAKE_PREFIX_PATH=${PREFIX} \
samples/socket-nslookup
cmake --build build-app-socket-nslookup
cmake -B build-app-socket-tcp-udp \
-DCMAKE_TOOLCHAIN_FILE=${WASI_SDK}/share/cmake/wasi-sdk-pthread.cmake \
-DCMAKE_PREFIX_PATH=${PREFIX} \
samples/socket-tcp-udp
cmake --build build-app-socket-tcp-udp
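The script above repeats one cmake configure/build pair per sample, varying only the sample name and toolchain file. A hypothetical refactor (not part of the change above) could drive the repetition from a small helper; shown here as an echoed dry run:

```shell
#!/bin/sh
# Hypothetical refactor sketch: parameterize the repeated cmake
# invocations. Commands are echoed rather than executed.
WASI_SDK=${WASI_SDK:-/opt/wasi-sdk}
PREFIX=${1:-/tmp/wamr}
build_sample() {
    sample=$1
    toolchain=$2
    echo "cmake -B build-app-${sample}" \
        "-DCMAKE_TOOLCHAIN_FILE=${WASI_SDK}/share/cmake/${toolchain}" \
        "-DCMAKE_PREFIX_PATH=${PREFIX}" \
        "samples/${sample}"
    echo "cmake --build build-app-${sample}"
}
build_sample nn wasi-sdk.cmake
build_sample nn-cli wasi-sdk.cmake
build_sample socket-nslookup wasi-sdk-pthread.cmake
build_sample socket-tcp-udp wasi-sdk-pthread.cmake
```

Note the socket samples use the `wasi-sdk-pthread.cmake` toolchain because they need threads, which is why the toolchain file is a parameter rather than a constant.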


@@ -266,7 +266,8 @@ set_input(char *options)
wasi_ephemeral_nn_error nnret;
wasi_ephemeral_nn_graph_execution_context c =
map_get(&contexts, context_id);
-tensor.data = buf;
+tensor.data.buf = buf;
+tensor.data.size = sz;
nnret = wasi_ephemeral_nn_set_input(c, idx, &tensor);
unmap_file(buf, sz);
if (nnret != wasi_ephemeral_nn_error_success) {


@@ -147,7 +147,8 @@ main(int argc, char **argv)
wasi_ephemeral_nn_tensor tensor = {
.dimensions = { .buf = (uint32_t[]){1, 3, 224, 224,}, .size = 4, },
.type = wasi_ephemeral_nn_type_fp32,
-.data = tensordata,
+.data.buf = tensordata,
+.data.size = tensordatasz,
};
nnret = wasi_ephemeral_nn_set_input(ctx, 0, &tensor);
unmap_file(tensordata, tensordatasz);


@@ -13,6 +13,12 @@ target_include_directories(wamr-wasi-socket
$<BUILD_INTERFACE:${wasi_socket_header_dir}>
$<INSTALL_INTERFACE:include>)
+# as this is a library, be extra conservative about wasm features
+# to improve compatibilities. as this particular library is just a
+# simple static stub, extra wasm features won't benefit us much anyway.
+# note that LLVM-19 enables reference-types by default.
+target_compile_options(wamr-wasi-socket PRIVATE -mno-reference-types)
install(TARGETS wamr-wasi-socket
EXPORT wamr-wasi-socket-config
PUBLIC_HEADER DESTINATION include)


@@ -5,35 +5,7 @@
set -e
-PREFIX=/tmp/wamr
+PREFIX=${1:-/tmp/wamr}
-WASI_SDK=${WASI_SDK:-/opt/wasi-sdk}
-cmake -B build-lib \
--DCMAKE_TOOLCHAIN_FILE=${WASI_SDK}/share/cmake/wasi-sdk.cmake \
--DCMAKE_INSTALL_PREFIX=${PREFIX} \
-.
-cmake --build build-lib -t install
-cmake -B build-app-nn \
--DCMAKE_TOOLCHAIN_FILE=${WASI_SDK}/share/cmake/wasi-sdk.cmake \
--DCMAKE_PREFIX_PATH=${PREFIX} \
-samples/nn
-cmake --build build-app-nn
-cmake -B build-app-nn-cli \
--DCMAKE_TOOLCHAIN_FILE=${WASI_SDK}/share/cmake/wasi-sdk.cmake \
--DCMAKE_PREFIX_PATH=${PREFIX} \
-samples/nn-cli
-cmake --build build-app-nn-cli
-cmake -B build-app-socket-nslookup \
--DCMAKE_TOOLCHAIN_FILE=${WASI_SDK}/share/cmake/wasi-sdk-pthread.cmake \
--DCMAKE_PREFIX_PATH=${PREFIX} \
-samples/socket-nslookup
-cmake --build build-app-socket-nslookup
-cmake -B build-app-socket-tcp-udp \
--DCMAKE_TOOLCHAIN_FILE=${WASI_SDK}/share/cmake/wasi-sdk-pthread.cmake \
--DCMAKE_PREFIX_PATH=${PREFIX} \
-samples/socket-tcp-udp
-cmake --build build-app-socket-tcp-udp
+./build_libs.sh ${PREFIX}
+./build_samples.sh ${PREFIX}