Compare commits

...

52 Commits

Author SHA1 Message Date
YAMAMOTO Takashi
17be90d8f0
posix os_socket_addr_resolve: return the consistent max_info_size (#4467)
return the same value for max_info_size regardless of addr_info_size.
2025-07-10 13:42:57 +08:00
Zhenwei Jin
9e92f5ebe1
fix a wamrc debug mode compile issue (#4470) 2025-07-10 08:29:31 +08:00
Zhenwei Jin
334b4f8cb5
Add readme for extended const (#4471) 2025-07-10 08:29:03 +08:00
YAMAMOTO Takashi
56f87b7ee9
wasi-nn: do not pretend to support legacy abi in openvino and llamacpp (#4468)
as tested by core/iwasm/libraries/wasi-nn/test/test_tensorflow.c,
the legacy "wasi_nn" abi uses the number of fp32 for get_output.
because these backends don't implement the abi, bail out explicitly
in build time.

cf.
https://github.com/bytecodealliance/wasm-micro-runtime/issues/4376
2025-07-10 08:28:08 +08:00
YAMAMOTO Takashi
933e49df18
appease a few compiler warnings (-Wstrict-prototypes) (#4465) 2025-07-10 08:28:00 +08:00
Zhenwei Jin
d6fc18e197
enable aux stack frame for aot compiler fuzz test (#4462) 2025-07-10 08:27:42 +08:00
dependabot[bot]
cd4712d939
build(deps): Bump github/codeql-action from 3.29.1 to 3.29.2 (#4459)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.29.1 to 3.29.2.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Commits](https://github.com/github/codeql-action/compare/v3.29.1...v3.29.2)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.29.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-10 08:27:32 +08:00
Liu Jia
903a5c1f8c
improve logic of heap_type validation when ref.null (#4372)
* Follow-up to PR #4300: prevent potential overflow

PR #4300 introduced the rationale for validating heap_type.
This patch moves the validation before the computation of
type1 to prevent potential overflow.
2025-07-10 08:27:11 +08:00
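
A minimal sketch of the reordering this describes, with illustrative names and ranges (they are not the loader's actual code): validate the raw heap_type before any arithmetic folds it into another value.

```
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: the loader reads heap_type as a signed LEB value.
   Checking the raw value first means the later folding into an internal
   one-byte type code cannot be fed an out-of-range operand. */
static bool
check_then_fold_heap_type(int32_t heap_type, uint8_t *out_type)
{
    /* hypothetical valid range for abstract heap types */
    if (heap_type < -32 || heap_type >= 0)
        return false; /* reject before doing any arithmetic on it */

    *out_type = (uint8_t)((int32_t)0x80 + heap_type);
    return true;
}
```
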
YAMAMOTO Takashi
fbd27e5e03
wasi_nn_llamacpp.c: explicitly reject unimplemented input index (#4446)
note: wasmedge seems to accept index 1 for metadata. we don't
implement it.
2025-07-09 10:35:46 +08:00
liang.he
d3b0b5c066
Add security issue runbook (#4450)
This runbook provides step-by-step guidance on handling a security advisory
2025-07-08 09:26:45 +08:00
YAMAMOTO Takashi
0eceed2ba9
wasi: avoid user-triggerable 0-sized allocations (#4452)
might fix https://github.com/bytecodealliance/wasm-micro-runtime/issues/4451
2025-07-08 09:25:50 +08:00
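
A sketch of the kind of guard such a fix typically adds (hypothetical helper, not the actual patch): treat a zero-length, user-controlled request as an empty result instead of handing 0 to the allocator, since malloc(0) may return NULL or a unique pointer depending on the platform.

```
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* hypothetical helper: duplicate a guest-supplied buffer of user-controlled
   length; len == 0 becomes an explicit "empty" result rather than malloc(0) */
static bool
dup_guest_buffer(const void *src, size_t len, void **out)
{
    *out = NULL;
    if (len == 0)
        return true; /* nothing to copy; avoids a 0-sized allocation */

    if (!(*out = malloc(len)))
        return false;
    memcpy(*out, src, len);
    return true;
}
```
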
TianlongLiang
7d05dbc988
Support extended constant expressions (#4432)
* implement extended const expr (#4318)
* add a toggle to enable extended const on wamrc (#4412)
2025-07-07 13:34:02 +08:00
TianlongLiang
be33a40ba7
Fix socket shutdown (#12) (#4449) 2025-07-07 02:02:28 +08:00
TianlongLiang
8a55a1e7a1
Shared heap enhancements for Interpreter and AOT (#4400)
Propose two enhancements:

- Shared heap created from a preallocated memory buffer: the user can create a shared heap from a pre-allocated buffer and treat that memory region as one large chunk; there's no need to dynamically manage it (malloc/free). The user needs to make sure the native address and size of that memory region are valid.
- Introduce shared heap chain: the user can create a shared heap chain; from the wasm app's point of view it is still one contiguous memory region, while on the native side it can consist of multiple shared heaps (each of which is itself a contiguous memory region). For example, one 500 MB shared heap 1 and one 500 MB shared heap 2 form a chain that the wasm app sees as a single 1 GB shared heap.

After these enhancements, data sharing between wasm apps, and between a wasm app and the host, can be more efficient and flexible. Admittedly, shared heap management becomes more complex for users, but it follows the zero-overhead principle: no overhead is imposed on users who don't use the shared heap enhancements, or don't use the shared heap at all.
2025-07-04 10:44:51 +08:00
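
A rough embedder-side sketch of the two features described above; the field `pre_allocated_addr` and the function `wasm_runtime_chain_shared_heaps` are assumptions based on this description, so check `wasm_export.h` for the real names.

```
#include <stdbool.h>
#include <stdint.h>
#include "wasm_export.h"

/* sketch: one heap over a preallocated buffer, one runtime-managed heap,
   chained so the wasm app addresses them as a single contiguous region */
static bool
setup_shared_heaps(wasm_module_inst_t module_inst)
{
    static uint8_t prealloc_buf[512 * 1024];
    SharedHeapInitArgs args1 = { 0 }, args2 = { 0 };
    wasm_shared_heap_t heap1, heap2, chain;

    args1.pre_allocated_addr = prealloc_buf; /* assumed field name */
    args1.size = sizeof(prealloc_buf);
    if (!(heap1 = wasm_runtime_create_shared_heap(&args1)))
        return false;

    args2.size = 512 * 1024; /* runtime-managed: supports malloc/free */
    if (!(heap2 = wasm_runtime_create_shared_heap(&args2)))
        return false;

    /* assumed API: heap1 becomes the head of the chain, heap2 the body */
    if (!(chain = wasm_runtime_chain_shared_heaps(heap1, heap2)))
        return false;

    return wasm_runtime_attach_shared_heap(module_inst, chain);
}
```
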
YAMAMOTO Takashi
ee056d8076
wasi_nn_llamacpp.c: validate input tensor type/dimensions (#4442) 2025-07-03 10:17:19 +08:00
Michiel Van Kenhove
68d5ae10d4
docs: fix cmake variable typo (#4441) 2025-07-02 09:37:29 +08:00
YAMAMOTO Takashi
d598c0d0d3
CI: add wamr_wasi_extensions to the release assets (#4425)
you can find an example of the release asset at:
https://github.com/yamt/wasm-micro-runtime/releases/download/WAMR-2.3.999/wamr-wasi-extensions-2.3.999.zip

note: this is a static library for wasm32-wasi. no need to provide
per host OS (macOS, ubuntu, etc) binaries.
2025-07-01 19:32:01 +08:00
YAMAMOTO Takashi
da6019f749
wasi_nn_llamacpp.c: reject invalid graph and execution context (#4422)
* return valid graph and execution context instead of using stack garbage.
  (always 0 for now because we don't implement multiple graph/context
  for this backend.)

* validate user-given graph and execution context values. reject
  invalid ones.
2025-07-01 19:31:00 +08:00
YAMAMOTO Takashi
ebf1404ad1
wasi_nn_openvino.c: avoid self-assignment warning (#4434) 2025-07-01 19:19:36 +08:00
liang.he
c7148a6823
Fix potential integer overflow issues (#4429)
It is reported as "Multiplication result converted to larger type",
and as "Multiplication result may overflow 'Type A' before it is
converted to 'Type B'.", where Type B is a larger type than Type A.

Since the conversion applies after the multiplication, arithmetic
overflow may still occur.

> The rule flags every multiplication of two non-constant integer expressions
> that is (explicitly or implicitly) converted to a larger integer type. The
> conversion is an indication that the expression would produce a result that
> would be too large to fit in the smaller integer type.
2025-07-01 13:39:30 +08:00
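
The pattern behind those alerts, shown with a hypothetical helper rather than the code the PR touches: the multiplication happens in the narrower type and only the (possibly wrapped) result is widened, so the usual fix is to widen an operand first.

```
#include <stdint.h>

/* flagged pattern: uint32_t * uint32_t is evaluated in 32 bits, then the
   already-truncated product is converted to the larger uint64_t */
static uint64_t
total_bytes_buggy(uint32_t count, uint32_t elem_size)
{
    return count * elem_size; /* may wrap before the conversion */
}

/* fix: perform the multiplication in the wider type */
static uint64_t
total_bytes_fixed(uint32_t count, uint32_t elem_size)
{
    return (uint64_t)count * elem_size;
}
```
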
Liu Jia
8949797c84
Improve run.py of regression (#4417)
* Improve run.py of regression
1. Fix script interruption on case failure
2. Improve statistics logic
3. Enable selecting specific issue ids
2025-07-01 10:44:53 +08:00
YAMAMOTO Takashi
38fe056cc6
wasi-nn: reduce code duplication a bit (#4433) 2025-07-01 10:37:12 +08:00
liang.he
430cc5e5ef
Refactor AOTObjectData definition to use a forward declaration (#4428)
> core/iwasm/compilation/aot_emit_aot_file.c:85:3:
    error: redefinition of typedef 'AOTObjectData' is a C11 feature
2025-07-01 10:10:11 +08:00
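
What the error is about, reduced to a schematic (not the actual WAMR headers): repeating a typedef for the same name in one translation unit is only valid from C11 on, so the refactor keeps a single typedef plus a forward-declared struct tag.

```
/* before: the same "typedef struct ... AOTObjectData;" appeared twice,
   which pre-C11 compilers reject as a redefinition */

/* after (schematic): declare the name once, define the struct later */
typedef struct AOTObjectData AOTObjectData; /* forward declaration */

struct AOTObjectData {
    /* ... fields elided ... */
    int placeholder;
};
```
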
dependabot[bot]
cb233ec042
build(deps): Bump github/codeql-action from 3.29.0 to 3.29.1 (#4436)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.29.0 to 3.29.1.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Commits](https://github.com/github/codeql-action/compare/v3.29.0...v3.29.1)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.29.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-01 10:07:48 +08:00
YAMAMOTO Takashi
4fbb372f15
CI: revert SGX retry attempts (#4421)
* Revert "Improve spec test execution by adding retry logic for transient errors (#4393)"

This reverts commit 64cafaff1e.

* Revert "Add error handling for sgx ci (#4222)"

This reverts commit 8ad47897d1.
2025-06-30 12:58:20 +08:00
Zhenwei Jin
0127eafbe5
loader: fix a potential overflow issue (#4427) 2025-06-30 12:57:57 +08:00
YAMAMOTO Takashi
7a6a6a39e9
wasi_nn_openvino.c: fix a debug build (#4416)
after "wasi_nn_openvino.c: implement multiple models per instance" change.
(https://github.com/bytecodealliance/wasm-micro-runtime/pull/4380)
2025-06-30 12:57:44 +08:00
YAMAMOTO Takashi
18d4227ab6
CI: build wamr-wasi-extensions (#4394)
* wamr-wasi-extensions: separate test scripts
also, allow specifying the prefix directory,
for the convenience of the CI.

* CI: build wamr-wasi-extensions
fragments are copied from compilation_on_macos.yml.
(thus intel copyright notice)
2025-06-27 12:28:46 +08:00
liang.he
0641dd1506
Fix few shadow warnings (#4409)
- declaration of ‘memidx’ shadows a previous local
- declaration of ‘count’ shadows a previous local
2025-06-27 11:55:32 +08:00
YAMAMOTO Takashi
8ed89e2ab2
wasi_nn_llamacpp.c: remove an unused variable (#4415) 2025-06-27 11:55:08 +08:00
YAMAMOTO Takashi
d6876f1e9f
wasi_nn_llamacpp.c: fix buffer overruns in set_input (#4420)
note: for some reason, wasmedge seems to ignore type/dimensions
for the input of ggml. some user code relies on it.
cf. https://github.com/second-state/WasmEdge-WASINN-examples/issues/196

note: despite the comment in our code, the input doesn't seem
to be nul-terminated.
2025-06-27 11:51:03 +08:00
YAMAMOTO Takashi
2372a472aa
wasi-nn: make the host use the wasi_ephemeral_nn version of tensor_data (#4411)
the motivations:

* make the actual input size available to the backends.
  (currently the backends have to make a guess from shape/type.)

* make the host logic look a bit similar to wasi_ephemeral_nn.

this is a backend api/abi change.
2025-06-27 07:41:42 +08:00
TianlongLiang
23799a2cb6
Collective fix (#4413)
* Fix vector growth check and typos in core (#9)
* Fix resource cleanup in memory and running modes tests (#10)
* Add end of file empty line in wasm_running_modes_test.cc
2025-06-26 10:20:40 +08:00
TianlongLiang
5b32130955
fix bug in bh_vector when extending (#4414) 2025-06-26 10:18:24 +08:00
YAMAMOTO Takashi
a7aae9d2cc
wasi_nn_llamacpp.c: make this compilable (#4403) 2025-06-26 07:05:45 +08:00
Liu Jia
535004dedc
Fix handling of non-nullable global_type during global import (#4408) 2025-06-26 06:59:57 +08:00
Zhenwei Jin
1e41519977
loader: add type index checking (#4402) 2025-06-24 20:38:39 +08:00
liang.he
e414a327a0
Refactor copy callstack feature (#4401)
- Change `WAMR_ENABLE_COPY_CALLSTACK` to `WAMR_BUILD_COPY_CALL_STACK`, as
  `WAMR_BUILD` is the prefix for a command line option.
- Change `WAMR_ENABLE_COPY_CALLSTACK` to `WASM_ENABLE_COPY_CALL_STACK`, as
  `WASM_ENABLE` is the prefix for a macro in the source code.
- Change `CALLSTACK` to `CALL_STACK` to align with the existing
  `DUMP_CALL_STACK` feature.
- Continue using `WASMCApiFrame` instead of `wasm_frame_t` outside of
  *wasm_c_api.xxx* to avoid a typedef redefinition warning, which is
  identified by Clang.
2025-06-24 20:38:30 +08:00
YAMAMOTO Takashi
8289452abb
wasi_nn_tensorflowlite.cpp: fix get_output return size (#4390)
it should be byte size, not the number of (fp32) values.

i'm ambivalent about how to deal with the compatibility for
the legacy wamr-specific "wasi_nn". for now, i avoided changing it.
(so that existing tests using the legacy abi, namely test_tensorflow.c
and test_tensorflow_quantized.c, pass as they are.)
if we have any users who still want to use the legacy abi,
i suppose they consider compatibility more important
than consistency with other backends.

cf. https://github.com/bytecodealliance/wasm-micro-runtime/issues/4376
2025-06-24 20:38:19 +08:00
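
The two conventions at issue, as a small hypothetical helper (the real backend derives the tensor size from its model): wasi_ephemeral_nn callers get a byte count, while the legacy "wasi_nn" tests keep expecting an fp32 element count.

```
#include <stdint.h>

/* hypothetical helper for a tensor holding n_elems fp32 values */
static uint32_t
get_output_size(uint32_t n_elems, int legacy_wasi_nn_abi)
{
    if (legacy_wasi_nn_abi)
        return n_elems;                       /* element count (legacy tests) */
    return n_elems * (uint32_t)sizeof(float); /* byte size, consistent with other backends */
}
```
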
YAMAMOTO Takashi
70c39bae77
wasi-nn: fix context lifetime issues (#4396)
* wasi-nn: fix context lifetime issues

use the module instance context api instead of trying to roll
our own with a hashmap. this fixes context lifetime problems mentioned in
https://github.com/bytecodealliance/wasm-micro-runtime/issues/4313.

namely,

* wasi-nn resources will be freed earlier now. before this change,
  they used to be kept until the runtime shutdown. (wasm_runtime_destroy)
  after this change, they will be freed together with the associated
  instances.

* wasm_module_inst_t pointer uniqueness assumption (which is wrong
  after wasm_runtime_deinstantiate) was lifted.

as a side effect, this change also makes a context shared among threads
within a cluster. note that this is a user-visible api/abi breaking change.
before this change, wasi-nn "handles" like wasi_ephemeral_nn_graph were
thread-local. after this change, they are shared among threads within
a cluster, similarly to wasi file descriptors. spec-wise, either behavior
should be ok simply because wasi officially doesn't have threads yet.
although i feel the latter semantics is more intuitive, if your application
depends on the thread-local behavior, this change breaks your application.

tested with wamr-wasi-extensions/samples/nn-cli, modified to
call each wasi-nn operations on different threads. (if you are
interested, you can find the modification at
https://github.com/yamt/wasm-micro-runtime/tree/yamt-nn-wip-20250619.)

cf.
https://github.com/bytecodealliance/wasm-micro-runtime/issues/4313
https://github.com/bytecodealliance/wasm-micro-runtime/issues/2430

* runtime_lib.cmake: enable WAMR_BUILD_MODULE_INST_CONTEXT for wasi-nn

as we do for wasi (WAMR_BUILD_LIBC_WASI)
2025-06-24 20:37:56 +08:00
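
A compressed sketch of what "use the module instance context api" means in practice, assuming the context-key functions declared in `wasm_export.h`; the real wasi-nn code is more involved, and `WASINNContext` here is just a stand-in.

```
#include <stdbool.h>
#include <stdlib.h>
#include "wasm_export.h"

typedef struct WASINNContext {
    int placeholder; /* stand-in for graphs/execution contexts */
} WASINNContext;

static void *wasi_nn_key;

/* runs when the owning instance is deinstantiated, so resources are
   released with the instance instead of lingering until runtime shutdown */
static void
wasi_nn_ctx_dtor(wasm_module_inst_t inst, void *ctx)
{
    free(ctx);
}

bool
wasi_nn_initialize(void)
{
    wasi_nn_key = wasm_runtime_create_context_key(wasi_nn_ctx_dtor);
    return wasi_nn_key != NULL;
}

static WASINNContext *
wasi_nn_ctx_get(wasm_module_inst_t inst)
{
    WASINNContext *ctx = wasm_runtime_get_context(inst, wasi_nn_key);
    if (!ctx && (ctx = calloc(1, sizeof(*ctx)))) {
        /* "spread" makes the same context visible to all threads in the
           cluster, matching the sharing behavior described above */
        wasm_runtime_set_context_spread(inst, wasi_nn_key, ctx);
    }
    return ctx;
}
```
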
YAMAMOTO Takashi
92e5f5f123
CI: fix the description of upload_url (#4407) 2025-06-24 20:35:19 +08:00
YAMAMOTO Takashi
7471d5a5d0
wamr-wasi-extensions/socket: disable reference-types (#4392)
and add a comment to explain why.
2025-06-20 15:50:48 +08:00
YAMAMOTO Takashi
f449b79a31
wasi_nn_openvino.c: implement multiple models per instance (#4380)
tested with two models:
```
--load-graph=id=graph1,file=public/license-plate-recognition-barrier-0007/FP32/license-plate-recognition-barrier-0007.xml,file=public/license-plate-recognition-barrier-0007/FP32/license-plate-recognition-barrier-0007.bin \
--load-graph=id=graph2,file=classify/model.xml,file=classify/model.bin \
--init-execution-context=id=exec1,graph-id=graph1 \
--init-execution-context=id=exec2,graph-id=graph2 \
--set-input=context-id=exec1,dim=1,dim=24,dim=94,dim=3,file=out.bin \
--set-input=context-id=exec2,file=classify/banana-3x224x224-bgr.bin,dim=1,dim=3,dim=224,dim=224 \
--compute=context-id=exec1 \
--compute=context-id=exec2 \
--get-output=context-id=exec1,file=exec1-result.bin \
--get-output=context-id=exec2,file=exec2-result.bin
```

a detailed HOWTO: https://github.com/bytecodealliance/wasm-micro-runtime/pull/4380#issuecomment-2986882718
2025-06-20 15:50:29 +08:00
liang.he
64cafaff1e
Improve spec test execution by adding retry logic for transient errors (#4393) 2025-06-20 15:49:43 +08:00
YAMAMOTO Takashi
ea408ab6c0
wasi-nn: add minimum serialization on WASINNContext (#4387)
currently this is not necessary because context (WASINNContext) is
local to instance. (wasm_module_instance_t)

i plan to make a context shared among instances in a cluster when
fixing https://github.com/bytecodealliance/wasm-micro-runtime/issues/4313.
this is a preparation for that direction.

an obvious alternative is to tweak the module instance context APIs
to allow declaring some kind of contexts instance-local. but i feel,
in this particular case, it's more natural to make "wasi-nn handles"
shared among threads within a "process".

note that, spec-wise, how wasi-nn behaves wrt threads is not defined
at all because wasi officially doesn't have threads yet. i suppose, at
this point, that how wasi-nn interacts with wasi-threads is something
we need to define by ourselves, especially when we are using an outdated
wasi-nn version.

with this change, if a thread attempts to access a context while
another thread is using it, we simply make the operation fail with
the "busy" error. this is intended for the mimimum serialization to
avoid problems like crashes/leaks/etc. this is not intended to allow
parallelism or such.

no functional changes are intended at this point yet.

cf.
https://github.com/bytecodealliance/wasm-micro-runtime/issues/4313
https://github.com/bytecodealliance/wasm-micro-runtime/issues/2430
2025-06-20 09:48:55 +08:00
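
The "fail with busy rather than block" idea, as a generic C11 sketch (the struct and field names are illustrative, not WASINNContext's):

```
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_bool in_use;
    /* ... backend state ... */
} nn_ctx_t;

/* claim the context without blocking; a false return maps to the
   wasi-nn "busy" error instead of risking concurrent use */
static bool
nn_ctx_enter(nn_ctx_t *ctx)
{
    bool expected = false;
    return atomic_compare_exchange_strong(&ctx->in_use, &expected, true);
}

static void
nn_ctx_exit(nn_ctx_t *ctx)
{
    atomic_store(&ctx->in_use, false);
}
```
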
YAMAMOTO Takashi
71c07f3e4e
deprecate legacy WAMR-specific "wasi_nn" module (#4382)
wasi_nn.h: deprecate legacy "wasi_nn"

cf. https://github.com/bytecodealliance/wasm-micro-runtime/issues/4326
2025-06-19 14:32:26 +08:00
YAMAMOTO Takashi
e5091e47ea
enable WAMR_BUILD_WASI_EPHEMERAL_NN by default (#4381)
cf. https://github.com/bytecodealliance/wasm-micro-runtime/issues/4326
2025-06-19 14:30:44 +08:00
YAMAMOTO Takashi
aa53d648fa
wasi-nn: fix tensor_data abi for wasi_ephemeral_nn (#4379)
it's "(list u8)" in the witx definition.

the new definition matches both of our own host definition
(struct tensor_wasm) and wasmtime.

cf. https://github.com/bytecodealliance/wasm-micro-runtime/issues/4352
2025-06-19 14:18:36 +08:00
YAMAMOTO Takashi
a29f3943ef
core/iwasm/libraries/wasi-nn/test: use the correct version of keras (#4383) 2025-06-18 19:24:06 +08:00
liang.he
8414a20dfe
Fix several issues related to night-run CI and test scripts. (#4385)
- remove duplicated options
- fix test script
- change ci to use binary
2025-06-18 19:16:47 +08:00
YAMAMOTO Takashi
db7714f0f5
wasi_nn_tensorflowlite.cpp: reject non-fp32 input earlier (#4388)
this backend assumes fp32 here and there.
it's safer to reject unexpected inputs explicitly.
2025-06-18 19:08:57 +08:00
YAMAMOTO Takashi
4bf799c3af
core/iwasm/libraries/wasi-nn/test/build.sh: add a tip for intel mac (#4389)
i keep forgetting this and had to re-investigate it at least twice.
hopefully this can be helpful for others too.
2025-06-18 19:06:57 +08:00
106 changed files with 5602 additions and 1562 deletions

View File

@@ -23,7 +23,7 @@ on:
        type: string
        required: true
      upload_url:
-       description: a semantic version number. it is required when `release` is true.
+       description: upload binary assets to the URL of release
        type: string
        required: false
      ver_num:

View File

@ -0,0 +1,57 @@
# Copyright (C) 2019 Intel Corporation. All rights reserved.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception

name: build wamr_wasi_extensions release

on:
  workflow_call:
    inputs:
      upload_url:
        description: upload binary assets to the URL of release
        type: string
        required: false
      ver_num:
        description: a semantic version number. it is required when `release` is true.
        type: string
        required: false

permissions:
  contents: read

jobs:
  build_wamr_wasi_extensions:
    runs-on: ${{ matrix.os }}
    permissions:
      contents: write # for uploading release artifacts
    strategy:
      matrix:
        os: [ubuntu-22.04]
    steps:
      - name: checkout
        uses: actions/checkout@v4
      - name: install-wasi-sdk-wabt
        uses: ./.github/actions/install-wasi-sdk-wabt
        with:
          os: ${{ matrix.os }}
      - name: Build wamr-wasi-extensions
        run: |
          mkdir dist
          ./build_libs.sh $(pwd)/dist/wamr-wasi-extensions
        working-directory: wamr-wasi-extensions
      - name: Compress the binary
        run: |
          zip -r wamr-wasi-extensions-${{ inputs.ver_num }}.zip wamr-wasi-extensions
        working-directory: wamr-wasi-extensions/dist
      - name: Upload release zip
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ inputs.upload_url }}
          asset_path: wamr-wasi-extensions/dist/wamr-wasi-extensions-${{ inputs.ver_num }}.zip
          asset_name: wamr-wasi-extensions-${{ inputs.ver_num }}.zip
          asset_content_type: application/zip

View File

@@ -23,7 +23,7 @@ on:
        type: string
        required: true
      upload_url:
-       description: a semantic version number. it is required when `release` is true.
+       description: upload binary assets to the URL of release
        type: string
        required: false
      ver_num:

View File

@@ -53,7 +53,7 @@ jobs:
      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
-       uses: github/codeql-action/init@v3.29.0
+       uses: github/codeql-action/init@v3.29.2
        with:
          languages: ${{ matrix.language }}
@@ -70,7 +70,7 @@ jobs:
      - run: |
          ./.github/scripts/codeql_buildscript.sh
      - name: Perform CodeQL Analysis
-       uses: github/codeql-action/analyze@v3.29.0
+       uses: github/codeql-action/analyze@v3.29.2
        with:
          category: "/language:${{matrix.language}}"
          upload: false
@@ -99,7 +99,7 @@ jobs:
          output: ${{ steps.step1.outputs.sarif-output }}/cpp.sarif
      - name: Upload CodeQL results to code scanning
-       uses: github/codeql-action/upload-sarif@v3.29.0
+       uses: github/codeql-action/upload-sarif@v3.29.2
        with:
          sarif_file: ${{ steps.step1.outputs.sarif-output }}
          category: "/language:${{matrix.language}}"

View File

@@ -69,6 +69,7 @@ env:
  GC_TEST_OPTIONS: "-s spec -G -b -P"
  MEMORY64_TEST_OPTIONS: "-s spec -W -b -P"
  MULTI_MEMORY_TEST_OPTIONS: "-s spec -E -b -P"
+ EXTENDED_CONST_EXPR_TEST_OPTIONS: "-s spec -N -b -P"

permissions:
  contents: read
@@ -164,6 +165,7 @@ jobs:
            "-DWAMR_BUILD_MEMORY64=1",
            "-DWAMR_BUILD_MULTI_MEMORY=1",
            "-DWAMR_BUILD_SHARED=1",
+           "-DWAMR_BUILD_EXTENDED_CONST_EXPR=1",
          ]
        os: [ubuntu-22.04]
        platform: [android, linux]
@@ -609,6 +611,7 @@ jobs:
            $GC_TEST_OPTIONS,
            $MEMORY64_TEST_OPTIONS,
            $MULTI_MEMORY_TEST_OPTIONS,
+           $EXTENDED_CONST_EXPR_TEST_OPTIONS,
          ]
        include:
          - os: ubuntu-22.04

View File

@@ -142,6 +142,7 @@ jobs:
            "-DWAMR_BUILD_SIMD=1",
            "-DWAMR_BUILD_TAIL_CALL=1",
            "-DWAMR_DISABLE_HW_BOUND_CHECK=1",
+           "-DWAMR_BUILD_EXTENDED_CONST_EXPR=1",
          ]
        os: [macos-13]
        platform: [darwin]

View File

@@ -100,6 +100,7 @@ jobs:
            "-DWAMR_BUILD_MULTI_MODULE=1",
            "-DWAMR_BUILD_PERF_PROFILING=1",
            "-DWAMR_BUILD_REF_TYPES=1",
+           "-DWAMR_BUILD_EXTENDED_CONST_EXPR=1",
            # doesn't support
            "-DWAMR_BUILD_SIMD=0",
            "-DWAMR_BUILD_TAIL_CALL=1",
@@ -290,28 +291,6 @@ jobs:
      - name: run spec tests
        run: |
-         set +e
          source /opt/intel/sgxsdk/environment
-         attempts=0
-         max_attempts=3
-         while [ $attempts -lt $max_attempts ]; do
-           ./test_wamr.sh ${{ matrix.test_option }} -t ${{ matrix.running_mode }}
-           exitcode="$?"
-           if [ $exitcode -eq 0 ]; then
-             echo "Spec test passed"
-             exit 0
-           elif [ $exitcode -ne 143 ]; then
-             echo "Spec test failed with error code $exitcode"
-             exit 1
-           fi
-           echo "$exitcode is a known GitHub-hosted runner issue"
-           echo "::notice::Re-running the spec test due to error code 143"
-           attempts=$((attempts + 1))
-         done
-         echo "::notice::Report an error with code 143 in SGX CI after $max_attempts attempts"
-         exit 143
+         ./test_wamr.sh ${{ matrix.test_option }} -t ${{ matrix.running_mode }}
        working-directory: ./tests/wamr-test-suites

View File

@@ -36,12 +36,12 @@ env:
  LLVM_EAGER_JIT_BUILD_OPTIONS: "-DWAMR_BUILD_AOT=1 -DWAMR_BUILD_FAST_INTERP=0 -DWAMR_BUILD_INTERP=0 -DWAMR_BUILD_FAST_JIT=0 -DWAMR_BUILD_JIT=1 -DWAMR_BUILD_LAZY_JIT=0"
  MULTI_TIER_JIT_BUILD_OPTIONS: "-DWAMR_BUILD_AOT=1 -DWAMR_BUILD_FAST_INTERP=0 -DWAMR_BUILD_INTERP=1 -DWAMR_BUILD_FAST_JIT=1 -DWAMR_BUILD_JIT=1 -DWAMR_BUILD_LAZY_JIT=1"
  # For Spec Test
- # FIXME: use binary release(adding -b) instead of building from source after upgrading to 22.04
- DEFAULT_TEST_OPTIONS: "-s spec -P"
- MULTI_MODULES_TEST_OPTIONS: "-s spec -M -P"
- SIMD_TEST_OPTIONS: "-s spec -S -P"
- THREADS_TEST_OPTIONS: "-s spec -p -P"
- X86_32_TARGET_TEST_OPTIONS: "-m x86_32 -P"
+ DEFAULT_TEST_OPTIONS: "-s spec -b -P"
+ EXTENDED_CONST_EXPR_TEST_OPTIONS: "-s spec -b -P -N"
+ MULTI_MODULES_TEST_OPTIONS: "-s spec -b -P -M"
+ SIMD_TEST_OPTIONS: "-s spec -b -P -S"
+ THREADS_TEST_OPTIONS: "-s spec -b -P -p"
+ X86_32_TARGET_TEST_OPTIONS: "-m x86_32"
  WASI_TEST_OPTIONS: "-s wasi_certification -w"

permissions:
@@ -129,6 +129,7 @@ jobs:
            "-DWAMR_BUILD_MEMORY64=1",
            "-DWAMR_BUILD_MULTI_MEMORY=1",
            "-DWAMR_BUILD_SHARED=1",
+           "-DWAMR_BUILD_EXTENDED_CONST_EXPR=1",
          ]
        os: [ubuntu-22.04]
        platform: [android, linux]
@@ -589,6 +590,7 @@ jobs:
            $DEFAULT_TEST_OPTIONS,
            $MULTI_MODULES_TEST_OPTIONS,
            $SIMD_TEST_OPTIONS,
+           $EXTENDED_CONST_EXPR_TEST_OPTIONS,
            $THREADS_TEST_OPTIONS,
            $WASI_TEST_OPTIONS,
          ]

View File

@@ -239,3 +239,12 @@ jobs:
      arch: universal
      upload_url: ${{ needs.create_release.outputs.upload_url }}
      ver_num: ${{ needs.create_tag.outputs.new_ver}}
+
+  release_wamr_wasi_extensions:
+    permissions:
+      contents: write # upload release artifact
+    needs: [create_tag, create_release]
+    uses: ./.github/workflows/build_wamr_wasi_extensions.yml
+    with:
+      upload_url: ${{ needs.create_release.outputs.upload_url }}
+      ver_num: ${{ needs.create_tag.outputs.new_ver }}

View File

@@ -60,6 +60,6 @@ jobs:
      # Upload the results to GitHub's code scanning dashboard.
      - name: "Upload to code-scanning"
-       uses: github/codeql-action/upload-sarif@2847b7f7ab9f48fc49eca90a53fff6007285f399
+       uses: github/codeql-action/upload-sarif@b69421388d5449cc5a5e1ca344d71926bda69e07
        with:
          sarif_file: results.sarif

View File

@ -0,0 +1,57 @@
# Copyright (C) 2019 Intel Corporation. All rights reserved.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception

name: wamr_wasi_extensions

on:
  pull_request:
    types:
      - opened
      - synchronize
    paths:
      - ".github/workflows/wamr_wasi_extensions.yml"
      - "wamr_wasi_extensios/**"
      - "core/iwasm/libraries/wasi-nn/include/**"
      - "core/iwasm/libraries/lib-socket/**"
  # allow to be triggered manually
  workflow_dispatch:

# Cancel any in-flight jobs for the same PR/branch so there's only one active
# at a time
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  build_wamr_wasi_extensions:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-22.04, macos-13, macos-14]
    steps:
      - name: checkout
        uses: actions/checkout@v4
      - name: install-wasi-sdk-wabt
        uses: ./.github/actions/install-wasi-sdk-wabt
        with:
          os: ${{ matrix.os }}
      - name: Build wamr-wasi-extensions
        run: |
          mkdir dist
          ./build_libs.sh $(pwd)/dist/wamr-wasi-extensions
        working-directory: wamr-wasi-extensions
      - name: Build wamr-wasi-extensions samples
        run: |
          ./build_samples.sh $(pwd)/dist/wamr-wasi-extensions
        working-directory: wamr-wasi-extensions
      - name: Upload artifacts
        if: matrix.os == 'macos-14'
        uses: actions/upload-artifact@v4
        with:
          name: wamr-wasi-extensions
          path: wamr-wasi-extensions/dist
          retention-days: 10

View File

@@ -99,9 +99,9 @@ if (NOT DEFINED WAMR_BUILD_LIB_WASI_THREADS)
  set (WAMR_BUILD_LIB_WASI_THREADS 0)
endif ()

-if (NOT DEFINED WAMR_ENABLE_COPY_CALLSTACK)
+if (NOT DEFINED WAMR_BUILD_COPY_CALL_STACK)
  # Disable copy callstack by default
- set (WAMR_ENABLE_COPY_CALLSTACK 0)
+ set (WAMR_BUILD_COPY_CALL_STACK 0)
endif()

if (NOT DEFINED WAMR_BUILD_MINI_LOADER)

View File

@@ -48,6 +48,7 @@ WebAssembly Micro Runtime (WAMR) is a lightweight standalone WebAssembly (Wasm)
 - [Reference Types](https://github.com/WebAssembly/reference-types), ref to [document](doc/ref_types.md) and [sample](samples/ref-types)
 - [Bulk memory operations](https://github.com/WebAssembly/bulk-memory-operations), [Shared memory](https://github.com/WebAssembly/threads/blob/main/proposals/threads/Overview.md#shared-linear-memory), [Memory64](https://github.com/WebAssembly/memory64)
 - [Tail-call](https://github.com/WebAssembly/tail-call), [Garbage Collection](https://github.com/WebAssembly/gc), [Exception Handling](https://github.com/WebAssembly/exception-handling)
+- [Extended Constant Expressions](https://github.com/WebAssembly/extended-const)

 ### Supported architectures and platforms
 The WAMR VMcore supports the following architectures:

View File

@@ -211,6 +211,10 @@ if (NOT DEFINED WAMR_BUILD_TAIL_CALL)
  set (WAMR_BUILD_TAIL_CALL 0)
endif ()

+if (NOT DEFINED WAMR_BUILD_EXTENDED_CONST_EXPR)
+  set (WAMR_BUILD_EXTENDED_CONST_EXPR 0)
+endif ()
+
########################################
# Compilation options to marco
########################################
@@ -334,15 +338,10 @@ if (WAMR_BUILD_SHARED_HEAP EQUAL 1)
  add_definitions (-DWASM_ENABLE_SHARED_HEAP=1)
  message (" Shared heap enabled")
endif()

-if (WAMR_ENABLE_COPY_CALLSTACK EQUAL 1)
-  add_definitions (-DWAMR_ENABLE_COPY_CALLSTACK=1)
+if (WAMR_BUILD_COPY_CALL_STACK EQUAL 1)
+  add_definitions (-DWASM_ENABLE_COPY_CALL_STACK=1)
  message(" Copy callstack enabled")
-else ()
-  add_definitions (-DWAMR_ENABLE_COPY_CALLSTACK=0)
-  message(" Copy callstack disabled")
endif()

if (WAMR_BUILD_MEMORY64 EQUAL 1)
  # if native is 32-bit or cross-compiled to 32-bit
  if (NOT WAMR_BUILD_TARGET MATCHES ".*64.*")
@@ -539,6 +538,9 @@ if (WAMR_BUILD_WASI_NN EQUAL 1)
  if (DEFINED WAMR_BUILD_WASI_NN_EXTERNAL_DELEGATE_PATH)
    add_definitions (-DWASM_WASI_NN_EXTERNAL_DELEGATE_PATH="${WAMR_BUILD_WASI_NN_EXTERNAL_DELEGATE_PATH}")
  endif ()
+  if (NOT DEFINED WAMR_BUILD_WASI_EPHEMERAL_NN)
+    set(WAMR_BUILD_WASI_EPHEMERAL_NN 1)
+  endif()
  if (WAMR_BUILD_WASI_EPHEMERAL_NN EQUAL 1)
    message (" WASI-NN: use 'wasi_ephemeral_nn' instead of 'wasi-nn'")
    add_definitions (-DWASM_ENABLE_WASI_EPHEMERAL_NN=1)
@@ -675,7 +677,13 @@ if (WAMR_BUILD_INSTRUCTION_METERING EQUAL 1)
  message (" Instruction metering enabled")
  add_definitions (-DWASM_ENABLE_INSTRUCTION_METERING=1)
endif ()
+if (WAMR_BUILD_EXTENDED_CONST_EXPR EQUAL 1)
+  message (" Extended constant expression enabled")
+  add_definitions(-DWASM_ENABLE_EXTENDED_CONST_EXPR=1)
+else()
+  message (" Extended constant expression disabled")
+  add_definitions(-DWASM_ENABLE_EXTENDED_CONST_EXPR=0)
+endif ()
########################################
# Show Phase4 Wasm proposals status.
########################################
@@ -689,6 +697,7 @@ message (
" \"WebAssembly C and C++ API\"\n"
" Configurable. 0 is OFF. 1 is ON:\n"
" \"Bulk Memory Operation\" via WAMR_BUILD_BULK_MEMORY: ${WAMR_BUILD_BULK_MEMORY}\n"
+" \"Extended Constant Expressions\" via WAMR_BUILD_EXTENDED_CONST_EXPR: ${WAMR_BUILD_EXTENDED_CONST_EXPR}\n"
" \"Fixed-width SIMD\" via WAMR_BUILD_SIMD: ${WAMR_BUILD_SIMD}\n"
" \"Garbage collection\" via WAMR_BUILD_GC: ${WAMR_BUILD_GC}\n"
" \"Legacy Exception handling\" via WAMR_BUILD_EXCE_HANDLING: ${WAMR_BUILD_EXCE_HANDLING}\n"
@@ -703,7 +712,6 @@ message (
" \"Branch Hinting\"\n"
" \"Custom Annotation Syntax in the Text Format\"\n"
" \"Exception handling\"\n"
-" \"Extended Constant Expressions\"\n"
" \"Import/Export of Mutable Globals\"\n"
" \"JS String Builtins\"\n"
" \"Relaxed SIMD\"\n"

View File

@@ -106,6 +106,7 @@ endif ()
if (WAMR_BUILD_WASI_NN EQUAL 1)
  include (${IWASM_DIR}/libraries/wasi-nn/cmake/wasi_nn.cmake)
+ set (WAMR_BUILD_MODULE_INST_CONTEXT 1)
endif ()
if (WAMR_BUILD_LIB_PTHREAD EQUAL 1) if (WAMR_BUILD_LIB_PTHREAD EQUAL 1)

View File

@@ -4,7 +4,6 @@
 # SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
 #
 import argparse
-import re
 from pathlib import Path
 import re
 import shlex
@@ -39,7 +38,7 @@ INVALID_FILE_NAME_SEGMENT = r"([a-zA-Z0-9]+\-[a-zA-Z0-9]+)"
 def locate_command(command: str) -> bool:
     if not shutil.which(command):
-        print(f"Command '{command}'' not found")
+        print(f"Command '{command}' not found")
         return False
     return True

View File

@@ -193,8 +193,8 @@
#error "Heap aux stack allocation must be enabled for WASI threads"
#endif

-#ifndef WAMR_ENABLE_COPY_CALLSTACK
-#define WAMR_ENABLE_COPY_CALLSTACK 0
+#ifndef WASM_ENABLE_COPY_CALL_STACK
+#define WASM_ENABLE_COPY_CALL_STACK 0
#endif

#ifndef WASM_ENABLE_BASE_LIB
@@ -720,4 +720,8 @@ unless used elsewhere */
#define WASM_ENABLE_INSTRUCTION_METERING 0
#endif

+#ifndef WASM_ENABLE_EXTENDED_CONST_EXPR
+#define WASM_ENABLE_EXTENDED_CONST_EXPR 0
+#endif
+
#endif /* end of _CONFIG_H_ */

View File

@ -968,6 +968,35 @@ fail:
return false; return false;
} }
#if WASM_ENABLE_GC != 0 || WASM_ENABLE_EXTENDED_CONST_EXPR != 0
static void
destroy_init_expr(InitializerExpression *expr)
{
#if WASM_ENABLE_GC != 0
if (expr->init_expr_type == INIT_EXPR_TYPE_STRUCT_NEW
|| expr->init_expr_type == INIT_EXPR_TYPE_ARRAY_NEW
|| expr->init_expr_type == INIT_EXPR_TYPE_ARRAY_NEW_FIXED) {
wasm_runtime_free(expr->u.unary.v.data);
}
#endif
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
// free left expr and right expr for binary oprand
if (!is_expr_binary_op(expr->init_expr_type)) {
return;
}
if (expr->u.binary.l_expr) {
destroy_init_expr_recursive(expr->u.binary.l_expr);
}
if (expr->u.binary.r_expr) {
destroy_init_expr_recursive(expr->u.binary.r_expr);
}
expr->u.binary.l_expr = expr->u.binary.r_expr = NULL;
#endif
}
#endif /* end of WASM_ENABLE_GC != 0 || WASM_ENABLE_EXTENDED_CONST_EXPR != 0 \
*/
static void static void
destroy_import_memories(AOTImportMemory *import_memories) destroy_import_memories(AOTImportMemory *import_memories)
{ {
@ -993,6 +1022,10 @@ destroy_mem_init_data_list(AOTModule *module, AOTMemInitData **data_list,
/* If the module owns the binary data, free the bytes buffer */ /* If the module owns the binary data, free the bytes buffer */
if (module->is_binary_freeable && data_list[i]->bytes) if (module->is_binary_freeable && data_list[i]->bytes)
wasm_runtime_free(data_list[i]->bytes); wasm_runtime_free(data_list[i]->bytes);
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
destroy_init_expr(&data_list[i]->offset);
#endif
/* Free the data segment structure itself */ /* Free the data segment structure itself */
wasm_runtime_free(data_list[i]); wasm_runtime_free(data_list[i]);
} }
@ -1043,11 +1076,11 @@ load_mem_init_data_list(const uint8 **p_buf, const uint8 *buf_end,
uint32 byte_count; uint32 byte_count;
uint32 is_passive; uint32 is_passive;
uint32 memory_index; uint32 memory_index;
InitializerExpression init_value; InitializerExpression offset_expr;
read_uint32(buf, buf_end, is_passive); read_uint32(buf, buf_end, is_passive);
read_uint32(buf, buf_end, memory_index); read_uint32(buf, buf_end, memory_index);
if (!load_init_expr(&buf, buf_end, module, &init_value, error_buf, if (!load_init_expr(&buf, buf_end, module, &offset_expr, error_buf,
error_buf_size)) { error_buf_size)) {
return false; return false;
} }
@ -1062,8 +1095,7 @@ load_mem_init_data_list(const uint8 **p_buf, const uint8 *buf_end,
data_list[i]->is_passive = (bool)is_passive; data_list[i]->is_passive = (bool)is_passive;
data_list[i]->memory_index = memory_index; data_list[i]->memory_index = memory_index;
#endif #endif
data_list[i]->offset.init_expr_type = init_value.init_expr_type; data_list[i]->offset = offset_expr;
data_list[i]->offset.u = init_value.u;
data_list[i]->byte_count = byte_count; data_list[i]->byte_count = byte_count;
data_list[i]->bytes = NULL; data_list[i]->bytes = NULL;
/* If the module owns the binary data, clone the bytes buffer */ /* If the module owns the binary data, clone the bytes buffer */
@ -1148,18 +1180,6 @@ fail:
return false; return false;
} }
#if WASM_ENABLE_GC != 0
static void
destroy_init_expr(InitializerExpression *expr)
{
if (expr->init_expr_type == INIT_EXPR_TYPE_STRUCT_NEW
|| expr->init_expr_type == INIT_EXPR_TYPE_ARRAY_NEW
|| expr->init_expr_type == INIT_EXPR_TYPE_ARRAY_NEW_FIXED) {
wasm_runtime_free(expr->u.data);
}
}
#endif /* end of WASM_ENABLE_GC != 0 */
static void static void
destroy_import_tables(AOTImportTable *import_tables) destroy_import_tables(AOTImportTable *import_tables)
{ {
@ -1183,6 +1203,9 @@ destroy_table_init_data_list(AOTTableInitData **data_list, uint32 count)
for (j = 0; j < data_list[i]->value_count; j++) { for (j = 0; j < data_list[i]->value_count; j++) {
destroy_init_expr(&data_list[i]->init_values[j]); destroy_init_expr(&data_list[i]->init_values[j]);
} }
#endif
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
destroy_init_expr(&data_list[i]->offset);
#endif #endif
wasm_runtime_free(data_list[i]); wasm_runtime_free(data_list[i]);
} }
@ -1208,34 +1231,34 @@ load_init_expr(const uint8 **p_buf, const uint8 *buf_end, AOTModule *module,
break; break;
case INIT_EXPR_TYPE_I32_CONST: case INIT_EXPR_TYPE_I32_CONST:
case INIT_EXPR_TYPE_F32_CONST: case INIT_EXPR_TYPE_F32_CONST:
read_uint32(buf, buf_end, expr->u.i32); read_uint32(buf, buf_end, expr->u.unary.v.i32);
break; break;
case INIT_EXPR_TYPE_I64_CONST: case INIT_EXPR_TYPE_I64_CONST:
case INIT_EXPR_TYPE_F64_CONST: case INIT_EXPR_TYPE_F64_CONST:
read_uint64(buf, buf_end, expr->u.i64); read_uint64(buf, buf_end, expr->u.unary.v.i64);
break; break;
case INIT_EXPR_TYPE_V128_CONST: case INIT_EXPR_TYPE_V128_CONST:
i64x2 = (uint64 *)expr->u.v128.i64x2; i64x2 = (uint64 *)expr->u.unary.v.v128.i64x2;
CHECK_BUF(buf, buf_end, sizeof(uint64) * 2); CHECK_BUF(buf, buf_end, sizeof(uint64) * 2);
wasm_runtime_read_v128(buf, &i64x2[0], &i64x2[1]); wasm_runtime_read_v128(buf, &i64x2[0], &i64x2[1]);
buf += sizeof(uint64) * 2; buf += sizeof(uint64) * 2;
break; break;
case INIT_EXPR_TYPE_GET_GLOBAL: case INIT_EXPR_TYPE_GET_GLOBAL:
read_uint32(buf, buf_end, expr->u.global_index); read_uint32(buf, buf_end, expr->u.unary.v.global_index);
break; break;
/* INIT_EXPR_TYPE_FUNCREF_CONST can be used when /* INIT_EXPR_TYPE_FUNCREF_CONST can be used when
both reference types and GC are disabled */ both reference types and GC are disabled */
case INIT_EXPR_TYPE_FUNCREF_CONST: case INIT_EXPR_TYPE_FUNCREF_CONST:
read_uint32(buf, buf_end, expr->u.ref_index); read_uint32(buf, buf_end, expr->u.unary.v.ref_index);
break; break;
#if WASM_ENABLE_GC != 0 || WASM_ENABLE_REF_TYPES != 0 #if WASM_ENABLE_GC != 0 || WASM_ENABLE_REF_TYPES != 0
case INIT_EXPR_TYPE_REFNULL_CONST: case INIT_EXPR_TYPE_REFNULL_CONST:
read_uint32(buf, buf_end, expr->u.ref_index); read_uint32(buf, buf_end, expr->u.unary.v.ref_index);
break; break;
#endif /* end of WASM_ENABLE_GC != 0 || WASM_ENABLE_REF_TYPES != 0 */ #endif /* end of WASM_ENABLE_GC != 0 || WASM_ENABLE_REF_TYPES != 0 */
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
case INIT_EXPR_TYPE_I31_NEW: case INIT_EXPR_TYPE_I31_NEW:
read_uint32(buf, buf_end, expr->u.i32); read_uint32(buf, buf_end, expr->u.unary.v.i32);
break; break;
case INIT_EXPR_TYPE_STRUCT_NEW: case INIT_EXPR_TYPE_STRUCT_NEW:
{ {
@ -1256,7 +1279,7 @@ load_init_expr(const uint8 **p_buf, const uint8 *buf_end, AOTModule *module,
free_if_fail = true; free_if_fail = true;
init_values->count = field_count; init_values->count = field_count;
init_values->type_idx = type_idx; init_values->type_idx = type_idx;
expr->u.data = init_values; expr->u.unary.v.data = init_values;
if (type_idx >= module->type_count) { if (type_idx >= module->type_count) {
set_error_buf(error_buf, error_buf_size, set_error_buf(error_buf, error_buf_size,
@ -1294,7 +1317,7 @@ load_init_expr(const uint8 **p_buf, const uint8 *buf_end, AOTModule *module,
break; break;
} }
case INIT_EXPR_TYPE_STRUCT_NEW_DEFAULT: case INIT_EXPR_TYPE_STRUCT_NEW_DEFAULT:
read_uint32(buf, buf_end, expr->u.type_index); read_uint32(buf, buf_end, expr->u.unary.v.type_index);
break; break;
case INIT_EXPR_TYPE_ARRAY_NEW: case INIT_EXPR_TYPE_ARRAY_NEW:
case INIT_EXPR_TYPE_ARRAY_NEW_DEFAULT: case INIT_EXPR_TYPE_ARRAY_NEW_DEFAULT:
@ -1317,8 +1340,8 @@ load_init_expr(const uint8 **p_buf, const uint8 *buf_end, AOTModule *module,
} }
if (init_expr_type == INIT_EXPR_TYPE_ARRAY_NEW_DEFAULT) { if (init_expr_type == INIT_EXPR_TYPE_ARRAY_NEW_DEFAULT) {
expr->u.array_new_default.type_index = type_idx; expr->u.unary.v.array_new_default.type_index = type_idx;
expr->u.array_new_default.length = length; expr->u.unary.v.array_new_default.length = length;
} }
else { else {
uint32 i, elem_size, elem_data_count; uint32 i, elem_size, elem_data_count;
@ -1329,7 +1352,7 @@ load_init_expr(const uint8 **p_buf, const uint8 *buf_end, AOTModule *module,
return false; return false;
} }
free_if_fail = true; free_if_fail = true;
expr->u.data = init_values; expr->u.unary.v.data = init_values;
init_values->type_idx = type_idx; init_values->type_idx = type_idx;
init_values->length = length; init_values->length = length;
@ -1357,6 +1380,34 @@ load_init_expr(const uint8 **p_buf, const uint8 *buf_end, AOTModule *module,
break; break;
} }
#endif /* end of WASM_ENABLE_GC != 0 */ #endif /* end of WASM_ENABLE_GC != 0 */
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
case INIT_EXPR_TYPE_I32_ADD:
case INIT_EXPR_TYPE_I32_SUB:
case INIT_EXPR_TYPE_I32_MUL:
case INIT_EXPR_TYPE_I64_ADD:
case INIT_EXPR_TYPE_I64_SUB:
case INIT_EXPR_TYPE_I64_MUL:
{
expr->u.binary.l_expr = expr->u.binary.r_expr = NULL;
if (!(expr->u.binary.l_expr =
loader_malloc(sizeof(InitializerExpression), error_buf,
error_buf_size))) {
goto fail;
}
if (!load_init_expr(&buf, buf_end, module, expr->u.binary.l_expr,
error_buf, error_buf_size))
goto fail;
if (!(expr->u.binary.r_expr =
loader_malloc(sizeof(InitializerExpression), error_buf,
error_buf_size))) {
goto fail;
}
if (!load_init_expr(&buf, buf_end, module, expr->u.binary.r_expr,
error_buf, error_buf_size))
goto fail;
break;
}
#endif /* end of WASM_ENABLE_EXTENDED_CONST_EXPR != 0 */
default: default:
set_error_buf(error_buf, error_buf_size, "invalid init expr type."); set_error_buf(error_buf, error_buf_size, "invalid init expr type.");
return false; return false;
@ -1369,10 +1420,13 @@ load_init_expr(const uint8 **p_buf, const uint8 *buf_end, AOTModule *module,
fail: fail:
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
if (free_if_fail) { if (free_if_fail) {
wasm_runtime_free(expr->u.data); wasm_runtime_free(expr->u.unary.v.data);
} }
#else #else
(void)free_if_fail; (void)free_if_fail;
#endif
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
destroy_init_expr(expr);
#endif #endif
return false; return false;
} }
@ -1535,14 +1589,16 @@ load_table_init_data_list(const uint8 **p_buf, const uint8 *buf_end,
/* Create each table data segment */ /* Create each table data segment */
for (i = 0; i < module->table_init_data_count; i++) { for (i = 0; i < module->table_init_data_count; i++) {
uint32 mode, elem_type; uint32 mode, elem_type;
uint32 table_index, init_expr_type, value_count; uint32 table_index, value_count;
uint64 init_expr_value, size1; uint64 size1;
InitializerExpression offset_expr;
read_uint32(buf, buf_end, mode); read_uint32(buf, buf_end, mode);
read_uint32(buf, buf_end, elem_type); read_uint32(buf, buf_end, elem_type);
read_uint32(buf, buf_end, table_index); read_uint32(buf, buf_end, table_index);
read_uint32(buf, buf_end, init_expr_type); if (!load_init_expr(&buf, buf_end, module, &offset_expr, error_buf,
read_uint64(buf, buf_end, init_expr_value); error_buf_size))
return false;
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
if (wasm_is_type_multi_byte_type(elem_type)) { if (wasm_is_type_multi_byte_type(elem_type)) {
uint16 ref_type, nullable; uint16 ref_type, nullable;
@ -1588,8 +1644,7 @@ load_table_init_data_list(const uint8 **p_buf, const uint8 *buf_end,
} }
} }
#endif #endif
data_list[i]->offset.init_expr_type = (uint8)init_expr_type; data_list[i]->offset = offset_expr;
data_list[i]->offset.u.i64 = (int64)init_expr_value;
data_list[i]->value_count = value_count; data_list[i]->value_count = value_count;
for (j = 0; j < data_list[i]->value_count; j++) { for (j = 0; j < data_list[i]->value_count; j++) {
if (!load_init_expr(&buf, buf_end, module, if (!load_init_expr(&buf, buf_end, module,
@ -1730,6 +1785,12 @@ load_types(const uint8 **p_buf, const uint8 *buf_end, AOTModule *module,
(void)u8; (void)u8;
read_uint32(buf, buf_end, j); read_uint32(buf, buf_end, j);
#if WASM_ENABLE_AOT_VALIDATOR != 0
if (j >= module->type_count) {
set_error_buf(error_buf, error_buf_size, "invalid type index");
goto fail;
}
#endif
if (module->types[j]->ref_count == UINT16_MAX) { if (module->types[j]->ref_count == UINT16_MAX) {
set_error_buf(error_buf, error_buf_size, set_error_buf(error_buf, error_buf_size,
"wasm type's ref count too large"); "wasm type's ref count too large");
@ -1993,6 +2054,13 @@ load_types(const uint8 **p_buf, const uint8 *buf_end, AOTModule *module,
AOTType *cur_type = module->types[j]; AOTType *cur_type = module->types[j];
parent_type_idx = cur_type->parent_type_idx; parent_type_idx = cur_type->parent_type_idx;
if (parent_type_idx != (uint32)-1) { /* has parent */ if (parent_type_idx != (uint32)-1) { /* has parent */
#if WASM_ENABLE_AOT_VALIDATOR != 0
if (parent_type_idx >= module->type_count) {
set_error_buf(error_buf, error_buf_size,
"invalid parent type index");
goto fail;
}
#endif
AOTType *parent_type = module->types[parent_type_idx]; AOTType *parent_type = module->types[parent_type_idx];
module->types[j]->parent_type = parent_type; module->types[j]->parent_type = parent_type;
@ -2016,6 +2084,13 @@ load_types(const uint8 **p_buf, const uint8 *buf_end, AOTModule *module,
AOTType *cur_type = module->types[j]; AOTType *cur_type = module->types[j];
parent_type_idx = cur_type->parent_type_idx; parent_type_idx = cur_type->parent_type_idx;
if (parent_type_idx != (uint32)-1) { /* has parent */ if (parent_type_idx != (uint32)-1) { /* has parent */
#if WASM_ENABLE_AOT_VALIDATOR != 0
if (parent_type_idx >= module->type_count) {
set_error_buf(error_buf, error_buf_size,
"invalid parent type index");
goto fail;
}
#endif
AOTType *parent_type = module->types[parent_type_idx]; AOTType *parent_type = module->types[parent_type_idx];
/* subtyping has been checked during compilation */ /* subtyping has been checked during compilation */
bh_assert(wasm_type_is_subtype_of( bh_assert(wasm_type_is_subtype_of(
@ -4480,7 +4555,7 @@ aot_unload(AOTModule *module)
destroy_import_globals(module->import_globals); destroy_import_globals(module->import_globals);
if (module->globals) { if (module->globals) {
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0 || WASM_ENABLE_EXTENDED_CONST_EXPR != 0
uint32 i; uint32 i;
for (i = 0; i < module->global_count; i++) { for (i = 0; i < module->global_count; i++) {
destroy_init_expr(&module->globals[i].init_expr); destroy_init_expr(&module->globals[i].init_expr);
@ -4636,7 +4711,7 @@ aot_unload(AOTModule *module)
} }
uint32 uint32
aot_get_plt_table_size() aot_get_plt_table_size(void)
{ {
return get_plt_table_size(); return get_plt_table_size();
} }

View File

@@ -185,6 +185,13 @@ typedef struct {
#define REG_STRINGREF_SYM()
#endif

+#if WASM_ENABLE_SHARED_HEAP != 0
+#define REG_SHARED_HEAP_SYM() \
+    REG_SYM(wasm_runtime_check_and_update_last_used_shared_heap),
+#else
+#define REG_SHARED_HEAP_SYM()
+#endif
+
#define REG_COMMON_SYMBOLS              \
    REG_SYM(aot_set_exception_with_id), \
    REG_SYM(aot_invoke_native),         \
@@ -218,6 +225,7 @@ typedef struct {
    REG_LLVM_PGO_SYM()                  \
    REG_GC_SYM()                        \
    REG_STRINGREF_SYM()                 \
+   REG_SHARED_HEAP_SYM()               \

#define CHECK_RELOC_OFFSET(data_size) do {       \
    if (!check_reloc_offset(target_section_size, \

View File

@ -60,6 +60,16 @@ bh_static_assert(offsetof(AOTModuleInstanceExtra, stack_sizes) == 0);
bh_static_assert(offsetof(AOTModuleInstanceExtra, shared_heap_base_addr_adj) bh_static_assert(offsetof(AOTModuleInstanceExtra, shared_heap_base_addr_adj)
== 8); == 8);
bh_static_assert(offsetof(AOTModuleInstanceExtra, shared_heap_start_off) == 16); bh_static_assert(offsetof(AOTModuleInstanceExtra, shared_heap_start_off) == 16);
bh_static_assert(offsetof(AOTModuleInstanceExtra, shared_heap_end_off) == 24);
bh_static_assert(offsetof(AOTModuleInstanceExtra, shared_heap) == 32);
bh_static_assert(offsetof(WASMSharedHeap, next) == 0);
bh_static_assert(offsetof(WASMSharedHeap, chain_next) == 8);
bh_static_assert(offsetof(WASMSharedHeap, heap_handle) == 16);
bh_static_assert(offsetof(WASMSharedHeap, base_addr) == 24);
bh_static_assert(offsetof(WASMSharedHeap, size) == 32);
bh_static_assert(offsetof(WASMSharedHeap, start_off_mem64) == 40);
bh_static_assert(offsetof(WASMSharedHeap, start_off_mem32) == 48);
bh_static_assert(sizeof(CApiFuncImport) == sizeof(uintptr_t) * 3); bh_static_assert(sizeof(CApiFuncImport) == sizeof(uintptr_t) * 3);
@ -279,18 +289,21 @@ assign_table_init_value(AOTModuleInstance *module_inst, AOTModule *module,
switch (flag) { switch (flag) {
case INIT_EXPR_TYPE_GET_GLOBAL: case INIT_EXPR_TYPE_GET_GLOBAL:
{ {
if (!check_global_init_expr(module, init_expr->u.global_index, if (!check_global_init_expr(module,
init_expr->u.unary.v.global_index,
error_buf, error_buf_size)) { error_buf, error_buf_size)) {
return false; return false;
} }
if (init_expr->u.global_index < module->import_global_count) { if (init_expr->u.unary.v.global_index
< module->import_global_count) {
PUT_REF_TO_ADDR( PUT_REF_TO_ADDR(
addr, module->import_globals[init_expr->u.global_index] addr,
.global_data_linked.gc_obj); module->import_globals[init_expr->u.unary.v.global_index]
.global_data_linked.gc_obj);
} }
else { else {
uint32 global_idx = uint32 global_idx = init_expr->u.unary.v.global_index
init_expr->u.global_index - module->import_global_count; - module->import_global_count;
return assign_table_init_value( return assign_table_init_value(
module_inst, module, &module->globals[global_idx].init_expr, module_inst, module, &module->globals[global_idx].init_expr,
addr, error_buf, error_buf_size); addr, error_buf, error_buf_size);
@ -306,7 +319,7 @@ assign_table_init_value(AOTModuleInstance *module_inst, AOTModule *module,
case INIT_EXPR_TYPE_FUNCREF_CONST: case INIT_EXPR_TYPE_FUNCREF_CONST:
{ {
WASMFuncObjectRef func_obj = NULL; WASMFuncObjectRef func_obj = NULL;
uint32 func_idx = init_expr->u.u32; uint32 func_idx = init_expr->u.unary.v.u32;
if (func_idx != UINT32_MAX) { if (func_idx != UINT32_MAX) {
if (!(func_obj = if (!(func_obj =
@ -321,7 +334,8 @@ assign_table_init_value(AOTModuleInstance *module_inst, AOTModule *module,
} }
case INIT_EXPR_TYPE_I31_NEW: case INIT_EXPR_TYPE_I31_NEW:
{ {
WASMI31ObjectRef i31_obj = wasm_i31_obj_new(init_expr->u.i32); WASMI31ObjectRef i31_obj =
wasm_i31_obj_new(init_expr->u.unary.v.i32);
PUT_REF_TO_ADDR(addr, i31_obj); PUT_REF_TO_ADDR(addr, i31_obj);
break; break;
} }
@ -335,11 +349,12 @@ assign_table_init_value(AOTModuleInstance *module_inst, AOTModule *module,
uint32 type_idx; uint32 type_idx;
if (flag == INIT_EXPR_TYPE_STRUCT_NEW) { if (flag == INIT_EXPR_TYPE_STRUCT_NEW) {
init_values = (WASMStructNewInitValues *)init_expr->u.data; init_values =
(WASMStructNewInitValues *)init_expr->u.unary.v.data;
type_idx = init_values->type_idx; type_idx = init_values->type_idx;
} }
else { else {
type_idx = init_expr->u.type_index; type_idx = init_expr->u.unary.v.type_index;
} }
struct_type = (WASMStructType *)module->types[type_idx]; struct_type = (WASMStructType *)module->types[type_idx];
@ -388,12 +403,13 @@ assign_table_init_value(AOTModuleInstance *module_inst, AOTModule *module,
uint32 type_idx, len; uint32 type_idx, len;
if (flag == INIT_EXPR_TYPE_ARRAY_NEW_DEFAULT) { if (flag == INIT_EXPR_TYPE_ARRAY_NEW_DEFAULT) {
type_idx = init_expr->u.array_new_default.type_index; type_idx = init_expr->u.unary.v.array_new_default.type_index;
len = init_expr->u.array_new_default.length; len = init_expr->u.unary.v.array_new_default.length;
arr_init_val = &empty_val; arr_init_val = &empty_val;
} }
else { else {
init_values = (WASMArrayNewInitValues *)init_expr->u.data; init_values =
(WASMArrayNewInitValues *)init_expr->u.unary.v.data;
type_idx = init_values->type_idx; type_idx = init_values->type_idx;
len = init_values->length; len = init_values->length;
@ -444,6 +460,90 @@ assign_table_init_value(AOTModuleInstance *module_inst, AOTModule *module,
} }
#endif /* end of WASM_ENABLE_GC != 0 */ #endif /* end of WASM_ENABLE_GC != 0 */
static bool
get_init_value_recursive(AOTModuleInstance *module_inst, AOTModule *module,
InitializerExpression *expr, WASMValue *value,
char *error_buf, uint32 error_buf_size)
{
uint8 flag = expr->init_expr_type;
switch (flag) {
case INIT_EXPR_TYPE_GET_GLOBAL:
{
if (!check_global_init_expr(module, expr->u.unary.v.global_index,
error_buf, error_buf_size)) {
return false;
}
#if WASM_ENABLE_GC == 0
*value = module->import_globals[expr->u.unary.v.global_index]
.global_data_linked;
#else
if (expr->u.unary.v.global_index < module->import_global_count) {
*value = module->import_globals[expr->u.unary.v.global_index]
.global_data_linked;
}
else {
*value = module
->globals[expr->u.unary.v.global_index
- module->import_global_count]
.init_expr.u.unary.v;
}
#endif
break;
}
case INIT_EXPR_TYPE_I32_CONST:
case INIT_EXPR_TYPE_I64_CONST:
{
*value = expr->u.unary.v;
break;
}
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
case INIT_EXPR_TYPE_I32_ADD:
case INIT_EXPR_TYPE_I32_SUB:
case INIT_EXPR_TYPE_I32_MUL:
case INIT_EXPR_TYPE_I64_ADD:
case INIT_EXPR_TYPE_I64_SUB:
case INIT_EXPR_TYPE_I64_MUL:
{
WASMValue l_value, r_value;
if (!get_init_value_recursive(module_inst, module,
expr->u.binary.l_expr, &l_value,
error_buf, error_buf_size)) {
return false;
}
if (!get_init_value_recursive(module_inst, module,
expr->u.binary.r_expr, &r_value,
error_buf, error_buf_size)) {
return false;
}
if (flag == INIT_EXPR_TYPE_I32_ADD) {
value->i32 = l_value.i32 + r_value.i32;
}
else if (flag == INIT_EXPR_TYPE_I32_SUB) {
value->i32 = l_value.i32 - r_value.i32;
}
else if (flag == INIT_EXPR_TYPE_I32_MUL) {
value->i32 = l_value.i32 * r_value.i32;
}
else if (flag == INIT_EXPR_TYPE_I64_ADD) {
value->i64 = l_value.i64 + r_value.i64;
}
else if (flag == INIT_EXPR_TYPE_I64_SUB) {
value->i64 = l_value.i64 - r_value.i64;
}
else if (flag == INIT_EXPR_TYPE_I64_MUL) {
value->i64 = l_value.i64 * r_value.i64;
}
break;
}
#endif
default:
return false;
}
return true;
}
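
As context for the get_init_value_recursive helper above: with extended const expressions enabled, a global or segment offset such as (i32.add (global.get 0) (i32.const 16)) is stored as a small expression tree (u.binary.l_expr / u.binary.r_expr) and folded to a single value at instantiation time. Below is a minimal standalone sketch of that folding; the types are simplified stand-ins, not the real InitializerExpression/WASMValue structures:

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the loader's initializer-expression nodes. */
typedef enum { EXPR_I32_CONST, EXPR_I32_ADD } expr_kind;

typedef struct expr {
    expr_kind kind;
    int32_t i32;            /* valid when kind == EXPR_I32_CONST */
    struct expr *lhs, *rhs; /* valid when kind == EXPR_I32_ADD   */
} expr;

/* Mirrors the recursion above: leaves yield their constant, binary nodes
 * fold both children first and then combine the two results. */
static int32_t
fold(const expr *e)
{
    if (e->kind == EXPR_I32_CONST)
        return e->i32;
    return fold(e->lhs) + fold(e->rhs);
}

int
main(void)
{
    /* (i32.add (i32.const 1024) (i32.const 16)) */
    expr base = { EXPR_I32_CONST, 1024, NULL, NULL };
    expr disp = { EXPR_I32_CONST, 16, NULL, NULL };
    expr add = { EXPR_I32_ADD, 0, &base, &disp };

    printf("folded offset: %d\n", (int)fold(&add)); /* prints 1040 */
    return 0;
}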
static bool static bool
global_instantiate(AOTModuleInstance *module_inst, AOTModule *module, global_instantiate(AOTModuleInstance *module_inst, AOTModule *module,
char *error_buf, uint32 error_buf_size) char *error_buf, uint32 error_buf_size)
@ -472,30 +572,24 @@ global_instantiate(AOTModuleInstance *module_inst, AOTModule *module,
flag = init_expr->init_expr_type; flag = init_expr->init_expr_type;
switch (flag) { switch (flag) {
case INIT_EXPR_TYPE_GET_GLOBAL: case INIT_EXPR_TYPE_GET_GLOBAL:
case INIT_EXPR_TYPE_I32_CONST:
case INIT_EXPR_TYPE_I64_CONST:
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
case INIT_EXPR_TYPE_I32_ADD:
case INIT_EXPR_TYPE_I32_SUB:
case INIT_EXPR_TYPE_I32_MUL:
case INIT_EXPR_TYPE_I64_ADD:
case INIT_EXPR_TYPE_I64_SUB:
case INIT_EXPR_TYPE_I64_MUL:
#endif
{ {
if (!check_global_init_expr(module, init_expr->u.global_index, WASMValue value;
error_buf, error_buf_size)) { if (!get_init_value_recursive(module_inst, module, init_expr,
&value, error_buf,
error_buf_size)) {
return false; return false;
} }
#if WASM_ENABLE_GC == 0 init_global_data(p, global->type.val_type, &value);
init_global_data(
p, global->type.val_type,
&module->import_globals[init_expr->u.global_index]
.global_data_linked);
#else
if (init_expr->u.global_index < module->import_global_count) {
init_global_data(
p, global->type.val_type,
&module->import_globals[init_expr->u.global_index]
.global_data_linked);
}
else {
uint32 global_idx =
init_expr->u.global_index - module->import_global_count;
init_global_data(p, global->type.val_type,
&module->globals[global_idx].init_expr.u);
}
#endif
break; break;
} }
#if WASM_ENABLE_GC == 0 && WASM_ENABLE_REF_TYPES != 0 #if WASM_ENABLE_GC == 0 && WASM_ENABLE_REF_TYPES != 0
@ -516,7 +610,7 @@ global_instantiate(AOTModuleInstance *module_inst, AOTModule *module,
case INIT_EXPR_TYPE_FUNCREF_CONST: case INIT_EXPR_TYPE_FUNCREF_CONST:
{ {
WASMFuncObjectRef func_obj = NULL; WASMFuncObjectRef func_obj = NULL;
uint32 func_idx = init_expr->u.u32; uint32 func_idx = init_expr->u.unary.v.ref_index;
if (func_idx != UINT32_MAX) { if (func_idx != UINT32_MAX) {
if (!(func_obj = if (!(func_obj =
@ -531,7 +625,8 @@ global_instantiate(AOTModuleInstance *module_inst, AOTModule *module,
} }
case INIT_EXPR_TYPE_I31_NEW: case INIT_EXPR_TYPE_I31_NEW:
{ {
WASMI31ObjectRef i31_obj = wasm_i31_obj_new(init_expr->u.i32); WASMI31ObjectRef i31_obj =
wasm_i31_obj_new(init_expr->u.unary.v.i32);
PUT_REF_TO_ADDR(p, i31_obj); PUT_REF_TO_ADDR(p, i31_obj);
break; break;
} }
@ -545,11 +640,12 @@ global_instantiate(AOTModuleInstance *module_inst, AOTModule *module,
uint32 type_idx; uint32 type_idx;
if (flag == INIT_EXPR_TYPE_STRUCT_NEW) { if (flag == INIT_EXPR_TYPE_STRUCT_NEW) {
init_values = (WASMStructNewInitValues *)init_expr->u.data; init_values =
(WASMStructNewInitValues *)init_expr->u.unary.v.data;
type_idx = init_values->type_idx; type_idx = init_values->type_idx;
} }
else { else {
type_idx = init_expr->u.type_index; type_idx = init_expr->u.unary.v.type_index;
} }
struct_type = (WASMStructType *)module->types[type_idx]; struct_type = (WASMStructType *)module->types[type_idx];
@ -599,12 +695,14 @@ global_instantiate(AOTModuleInstance *module_inst, AOTModule *module,
uint32 type_idx, len; uint32 type_idx, len;
if (flag == INIT_EXPR_TYPE_ARRAY_NEW_DEFAULT) { if (flag == INIT_EXPR_TYPE_ARRAY_NEW_DEFAULT) {
type_idx = init_expr->u.array_new_default.type_index; type_idx =
len = init_expr->u.array_new_default.length; init_expr->u.unary.v.array_new_default.type_index;
len = init_expr->u.unary.v.array_new_default.length;
arr_init_val = &empty_val; arr_init_val = &empty_val;
} }
else { else {
init_values = (WASMArrayNewInitValues *)init_expr->u.data; init_values =
(WASMArrayNewInitValues *)init_expr->u.unary.v.data;
type_idx = init_values->type_idx; type_idx = init_values->type_idx;
len = init_values->length; len = init_values->length;
@ -650,7 +748,8 @@ global_instantiate(AOTModuleInstance *module_inst, AOTModule *module,
#endif /* end of WASM_ENABLE_GC != 0 */ #endif /* end of WASM_ENABLE_GC != 0 */
default: default:
{ {
init_global_data(p, global->type.val_type, &init_expr->u); init_global_data(p, global->type.val_type,
&init_expr->u.unary.v);
break; break;
} }
} }
@ -671,6 +770,7 @@ tables_instantiate(AOTModuleInstance *module_inst, AOTModule *module,
uint64 total_size; uint64 total_size;
AOTTableInitData *table_seg; AOTTableInitData *table_seg;
AOTTableInstance *tbl_inst = first_tbl_inst; AOTTableInstance *tbl_inst = first_tbl_inst;
uint8 offset_flag;
total_size = (uint64)sizeof(AOTTableInstance *) * module_inst->table_count; total_size = (uint64)sizeof(AOTTableInstance *) * module_inst->table_count;
if (total_size > 0 if (total_size > 0
@ -743,28 +843,25 @@ tables_instantiate(AOTModuleInstance *module_inst, AOTModule *module,
tbl_inst = module_inst->tables[table_seg->table_index]; tbl_inst = module_inst->tables[table_seg->table_index];
bh_assert(tbl_inst); bh_assert(tbl_inst);
offset_flag = table_seg->offset.init_expr_type;
#if WASM_ENABLE_REF_TYPES != 0 #if WASM_ENABLE_REF_TYPES != 0
bh_assert( bh_assert(offset_flag == INIT_EXPR_TYPE_GET_GLOBAL
table_seg->offset.init_expr_type || offset_flag == INIT_EXPR_TYPE_FUNCREF_CONST
== (tbl_inst->is_table64 ? INIT_EXPR_TYPE_I64_CONST || offset_flag == INIT_EXPR_TYPE_REFNULL_CONST
: INIT_EXPR_TYPE_I32_CONST) || (tbl_inst->is_table64 ? is_valid_i64_offset(offset_flag)
|| table_seg->offset.init_expr_type == INIT_EXPR_TYPE_GET_GLOBAL : is_valid_i32_offset(offset_flag)));
|| table_seg->offset.init_expr_type == INIT_EXPR_TYPE_FUNCREF_CONST
|| table_seg->offset.init_expr_type
== INIT_EXPR_TYPE_REFNULL_CONST);
#else #else
bh_assert(table_seg->offset.init_expr_type bh_assert(offset_flag == INIT_EXPR_TYPE_GET_GLOBAL
== (tbl_inst->is_table64 ? INIT_EXPR_TYPE_I64_CONST || (tbl_inst->is_table64 ? is_valid_i64_offset(offset_flag)
: INIT_EXPR_TYPE_I32_CONST) : is_valid_i32_offset(offset_flag)));
|| table_seg->offset.init_expr_type
== INIT_EXPR_TYPE_GET_GLOBAL);
#endif #endif
/* Resolve table data base offset */ /* Resolve table data base offset */
/* TODO: The table64 current implementation assumes table max size /* TODO: The table64 current implementation assumes table max size
* UINT32_MAX, so the offset conversion here is safe */ * UINT32_MAX, so the offset conversion here is safe */
if (table_seg->offset.init_expr_type == INIT_EXPR_TYPE_GET_GLOBAL) { if (offset_flag == INIT_EXPR_TYPE_GET_GLOBAL) {
global_index = table_seg->offset.u.global_index; global_index = table_seg->offset.u.unary.v.global_index;
if (!check_global_init_expr(module, global_index, error_buf, if (!check_global_init_expr(module, global_index, error_buf,
error_buf_size)) { error_buf_size)) {
@ -782,8 +879,15 @@ tables_instantiate(AOTModuleInstance *module_inst, AOTModule *module,
base_offset = base_offset =
*(uint32 *)(module_inst->global_data + global_data_offset); *(uint32 *)(module_inst->global_data + global_data_offset);
} }
else else {
base_offset = (uint32)table_seg->offset.u.i32; WASMValue offset_value;
if (!get_init_value_recursive(module_inst, module,
&table_seg->offset, &offset_value,
error_buf, error_buf_size)) {
return false;
}
base_offset = (uint32)offset_value.i32;
}
/* Copy table data */ /* Copy table data */
/* base_offset only since length might negative */ /* base_offset only since length might negative */
@ -818,7 +922,7 @@ tables_instantiate(AOTModuleInstance *module_inst, AOTModule *module,
#if WASM_ENABLE_GC == 0 #if WASM_ENABLE_GC == 0
for (j = 0; j < length; j++) { for (j = 0; j < length; j++) {
tbl_inst->elems[base_offset + j] = tbl_inst->elems[base_offset + j] =
table_seg->init_values[j].u.ref_index; table_seg->init_values[j].u.unary.v.ref_index;
} }
#endif #endif
} }
@ -1118,6 +1222,7 @@ memories_instantiate(AOTModuleInstance *module_inst, AOTModuleInstance *parent,
AOTMemInitData *data_seg; AOTMemInitData *data_seg;
uint64 total_size; uint64 total_size;
mem_offset_t base_offset; mem_offset_t base_offset;
uint8 offset_flag;
module_inst->memory_count = memory_count; module_inst->memory_count = memory_count;
total_size = sizeof(AOTMemoryInstance *) * (uint64)memory_count; total_size = sizeof(AOTMemoryInstance *) * (uint64)memory_count;
@ -1156,15 +1261,15 @@ memories_instantiate(AOTModuleInstance *module_inst, AOTModuleInstance *parent,
initialized */ initialized */
continue; continue;
bh_assert(data_seg->offset.init_expr_type offset_flag = data_seg->offset.init_expr_type;
== (memory_inst->is_memory64 ? INIT_EXPR_TYPE_I64_CONST bh_assert(offset_flag == INIT_EXPR_TYPE_GET_GLOBAL
: INIT_EXPR_TYPE_I32_CONST) || (memory_inst->is_memory64
|| data_seg->offset.init_expr_type ? is_valid_i64_offset(offset_flag)
== INIT_EXPR_TYPE_GET_GLOBAL); : is_valid_i32_offset(offset_flag)));
/* Resolve memory data base offset */ /* Resolve memory data base offset */
if (data_seg->offset.init_expr_type == INIT_EXPR_TYPE_GET_GLOBAL) { if (offset_flag == INIT_EXPR_TYPE_GET_GLOBAL) {
global_index = data_seg->offset.u.global_index; global_index = data_seg->offset.u.unary.v.global_index;
if (!check_global_init_expr(module, global_index, error_buf, if (!check_global_init_expr(module, global_index, error_buf,
error_buf_size)) { error_buf_size)) {
@ -1192,14 +1297,20 @@ memories_instantiate(AOTModuleInstance *module_inst, AOTModuleInstance *parent,
} }
} }
else { else {
WASMValue offset_value;
if (!get_init_value_recursive(module_inst, module,
&data_seg->offset, &offset_value,
error_buf, error_buf_size)) {
return false;
}
#if WASM_ENABLE_MEMORY64 != 0 #if WASM_ENABLE_MEMORY64 != 0
if (memory_inst->is_memory64) { if (memory_inst->is_memory64) {
base_offset = data_seg->offset.u.i64; base_offset = offset_value.i64;
} }
else else
#endif #endif
{ {
base_offset = data_seg->offset.u.u32; base_offset = offset_value.u32;
} }
} }
@ -1989,6 +2100,8 @@ aot_instantiate(AOTModule *module, AOTModuleInstance *parent,
#else #else
extra->shared_heap_start_off.u32[0] = UINT32_MAX; extra->shared_heap_start_off.u32[0] = UINT32_MAX;
#endif #endif
/* After shared heap chain, will early stop if shared heap is NULL */
extra->shared_heap = NULL;
#if WASM_ENABLE_PERF_PROFILING != 0 #if WASM_ENABLE_PERF_PROFILING != 0
total_size = sizeof(AOTFuncPerfProfInfo) total_size = sizeof(AOTFuncPerfProfInfo)
@ -2043,6 +2156,7 @@ aot_instantiate(AOTModule *module, AOTModuleInstance *parent,
uint8 tbl_elem_type; uint8 tbl_elem_type;
uint32 tbl_init_size, tbl_max_size, j; uint32 tbl_init_size, tbl_max_size, j;
WASMRefType *tbl_elem_ref_type; WASMRefType *tbl_elem_ref_type;
WASMValue offset_value;
bh_assert(table_init_data); bh_assert(table_init_data);
@ -2074,69 +2188,73 @@ aot_instantiate(AOTModule *module, AOTModuleInstance *parent,
if (!wasm_elem_is_active(table_init_data->mode)) { if (!wasm_elem_is_active(table_init_data->mode)) {
continue; continue;
} }
uint8 offset_flag = table_init_data->offset.init_expr_type;
bh_assert(table_init_data->offset.init_expr_type bh_assert(offset_flag == INIT_EXPR_TYPE_GET_GLOBAL
== INIT_EXPR_TYPE_I32_CONST || offset_flag == INIT_EXPR_TYPE_FUNCREF_CONST
|| table_init_data->offset.init_expr_type || offset_flag == INIT_EXPR_TYPE_REFNULL_CONST
== INIT_EXPR_TYPE_GET_GLOBAL || offset_flag == INIT_EXPR_TYPE_I32_CONST
|| table_init_data->offset.init_expr_type || offset_flag == INIT_EXPR_TYPE_I32_ADD
== INIT_EXPR_TYPE_FUNCREF_CONST || offset_flag == INIT_EXPR_TYPE_I32_SUB
|| table_init_data->offset.init_expr_type || offset_flag == INIT_EXPR_TYPE_I32_MUL);
== INIT_EXPR_TYPE_REFNULL_CONST);
/* init vec(funcidx) or vec(expr) */ /* init vec(funcidx) or vec(expr) */
if (table_init_data->offset.init_expr_type if (offset_flag == INIT_EXPR_TYPE_GET_GLOBAL) {
== INIT_EXPR_TYPE_GET_GLOBAL) {
uint32 data_offset; uint32 data_offset;
if (!check_global_init_expr(module, if (!check_global_init_expr(
table_init_data->offset.u.global_index, module, table_init_data->offset.u.unary.v.global_index,
error_buf, error_buf_size)) { error_buf, error_buf_size)) {
goto fail; goto fail;
} }
if (table_init_data->offset.u.global_index if (table_init_data->offset.u.unary.v.global_index
< module->import_global_count) { < module->import_global_count) {
data_offset = data_offset = module
module ->import_globals[table_init_data->offset.u
->import_globals[table_init_data->offset.u.global_index] .unary.v.global_index]
.data_offset; .data_offset;
} }
else { else {
data_offset = data_offset =
module module
->globals[table_init_data->offset.u.global_index ->globals[table_init_data->offset.u.unary.v.global_index
- module->import_global_count] - module->import_global_count]
.data_offset; .data_offset;
} }
offset_value.i32 =
table_init_data->offset.u.i32 =
*(uint32 *)(module_inst->global_data + data_offset); *(uint32 *)(module_inst->global_data + data_offset);
} }
else {
if (!get_init_value_recursive(
module_inst, module, &table_init_data->offset,
&offset_value, error_buf, error_buf_size)) {
goto fail;
}
}
/* check offset since length might negative */ /* check offset since length might negative */
if ((uint32)table_init_data->offset.u.i32 > table->cur_size) { if ((uint32)offset_value.i32 > table->cur_size) {
LOG_DEBUG("base_offset(%d) > table->cur_size(%d)", LOG_DEBUG("base_offset(%d) > table->cur_size(%d)", offset_value.i32,
table_init_data->offset.u.i32, table->cur_size); table->cur_size);
set_error_buf(error_buf, error_buf_size, set_error_buf(error_buf, error_buf_size,
"out of bounds table access"); "out of bounds table access");
goto fail; goto fail;
} }
if ((uint32)table_init_data->offset.u.i32 + table_init_data->value_count if ((uint32)offset_value.i32 + table_init_data->value_count
> table->cur_size) { > table->cur_size) {
LOG_DEBUG("base_offset(%d) + length(%d) > table->cur_size(%d)", LOG_DEBUG("base_offset(%d) + length(%d) > table->cur_size(%d)",
table_init_data->offset.u.i32, offset_value.i32, table_init_data->value_count,
table_init_data->value_count, table->cur_size); table->cur_size);
set_error_buf(error_buf, error_buf_size, set_error_buf(error_buf, error_buf_size,
"out of bounds table access"); "out of bounds table access");
goto fail; goto fail;
} }
for (j = 0; j < module->table_init_data_list[i]->value_count; j++) { for (j = 0; j < module->table_init_data_list[i]->value_count; j++) {
if (!assign_table_init_value( if (!assign_table_init_value(module_inst, module,
module_inst, module, &table_init_data->init_values[j], &table_init_data->init_values[j],
table_data + table_init_data->offset.u.i32 + j, error_buf, table_data + offset_value.i32 + j,
error_buf_size)) { error_buf, error_buf_size)) {
goto fail; goto fail;
} }
} }
@ -3639,7 +3757,7 @@ aot_get_module_inst_mem_consumption(const AOTModuleInstance *module_inst,
for (i = 0; i < module_inst->memory_count; i++) { for (i = 0; i < module_inst->memory_count; i++) {
AOTMemoryInstance *mem_inst = module_inst->memories[i]; AOTMemoryInstance *mem_inst = module_inst->memories[i];
mem_conspn->memories_size += mem_conspn->memories_size +=
mem_inst->num_bytes_per_page * mem_inst->cur_page_count; (uint64)mem_inst->num_bytes_per_page * mem_inst->cur_page_count;
mem_conspn->app_heap_size = mem_conspn->app_heap_size =
mem_inst->heap_data_end - mem_inst->heap_data; mem_inst->heap_data_end - mem_inst->heap_data;
/* size of app heap structure */ /* size of app heap structure */
@ -3729,10 +3847,10 @@ aot_table_init(AOTModuleInstance *module_inst, uint32 tbl_idx,
for (i = 0; i < length; i++) { for (i = 0; i < length; i++) {
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
/* UINT32_MAX indicates that it is a null ref */ /* UINT32_MAX indicates that it is a null ref */
if (init_values[i].u.ref_index != UINT32_MAX) { if (init_values[i].u.unary.v.ref_index != UINT32_MAX) {
if (!(func_obj = aot_create_func_obj(module_inst, if (!(func_obj = aot_create_func_obj(
init_values[i].u.ref_index, module_inst, init_values[i].u.unary.v.ref_index, true,
true, NULL, 0))) { NULL, 0))) {
aot_set_exception_with_id(module_inst, EXCE_NULL_FUNC_OBJ); aot_set_exception_with_id(module_inst, EXCE_NULL_FUNC_OBJ);
return; return;
} }
@ -3742,7 +3860,7 @@ aot_table_init(AOTModuleInstance *module_inst, uint32 tbl_idx,
table_elems[i] = NULL_REF; table_elems[i] = NULL_REF;
} }
#else #else
table_elems[i] = init_values[i].u.ref_index; table_elems[i] = init_values[i].u.unary.v.ref_index;
#endif #endif
} }
} }
@ -4137,9 +4255,9 @@ aot_frame_update_profile_info(WASMExecEnv *exec_env, bool alloc_frame)
} }
#endif /* end of WASM_ENABLE_AOT_STACK_FRAME != 0 */ #endif /* end of WASM_ENABLE_AOT_STACK_FRAME != 0 */
#if WAMR_ENABLE_COPY_CALLSTACK != 0 #if WASM_ENABLE_COPY_CALL_STACK != 0
uint32 uint32
aot_copy_callstack_tiny_frame(WASMExecEnv *exec_env, wasm_frame_t *buffer, aot_copy_callstack_tiny_frame(WASMExecEnv *exec_env, WASMCApiFrame *buffer,
const uint32 length, const uint32 skip_n, const uint32 length, const uint32 skip_n,
char *error_buf, uint32 error_buf_size) char *error_buf, uint32 error_buf_size)
{ {
@ -4193,7 +4311,7 @@ aot_copy_callstack_tiny_frame(WASMExecEnv *exec_env, wasm_frame_t *buffer,
} }
uint32 uint32
aot_copy_callstack_standard_frame(WASMExecEnv *exec_env, wasm_frame_t *buffer, aot_copy_callstack_standard_frame(WASMExecEnv *exec_env, WASMCApiFrame *buffer,
const uint32 length, const uint32 skip_n, const uint32 length, const uint32 skip_n,
char *error_buf, uint32_t error_buf_size) char *error_buf, uint32_t error_buf_size)
{ {
@ -4243,7 +4361,7 @@ aot_copy_callstack_standard_frame(WASMExecEnv *exec_env, wasm_frame_t *buffer,
} }
uint32 uint32
aot_copy_callstack(WASMExecEnv *exec_env, wasm_frame_t *buffer, aot_copy_callstack(WASMExecEnv *exec_env, WASMCApiFrame *buffer,
const uint32 length, const uint32 skip_n, char *error_buf, const uint32 length, const uint32 skip_n, char *error_buf,
uint32_t error_buf_size) uint32_t error_buf_size)
{ {
@ -4265,7 +4383,7 @@ aot_copy_callstack(WASMExecEnv *exec_env, wasm_frame_t *buffer,
error_buf, error_buf_size); error_buf, error_buf_size);
} }
} }
#endif // WAMR_ENABLE_COPY_CALLSTACK #endif // WASM_ENABLE_COPY_CALL_STACK
#if WASM_ENABLE_DUMP_CALL_STACK != 0 #if WASM_ENABLE_DUMP_CALL_STACK != 0
bool bool

View File

@ -125,6 +125,8 @@ typedef struct AOTModuleInstanceExtra {
*/ */
DefPointer(uint8 *, shared_heap_base_addr_adj); DefPointer(uint8 *, shared_heap_base_addr_adj);
MemBound shared_heap_start_off; MemBound shared_heap_start_off;
MemBound shared_heap_end_off;
DefPointer(WASMSharedHeap *, shared_heap);
WASMModuleInstanceExtraCommon common; WASMModuleInstanceExtraCommon common;
@ -142,9 +144,6 @@ typedef struct AOTModuleInstanceExtra {
WASMModuleInstanceCommon **import_func_module_insts; WASMModuleInstanceCommon **import_func_module_insts;
#endif #endif
#if WASM_ENABLE_SHARED_HEAP != 0
WASMSharedHeap *shared_heap;
#endif
} AOTModuleInstanceExtra; } AOTModuleInstanceExtra;
#if defined(BUILD_TARGET_X86_64) || defined(BUILD_TARGET_AMD_64) #if defined(BUILD_TARGET_X86_64) || defined(BUILD_TARGET_AMD_64)
@ -787,12 +786,12 @@ aot_frame_update_profile_info(WASMExecEnv *exec_env, bool alloc_frame);
bool bool
aot_create_call_stack(struct WASMExecEnv *exec_env); aot_create_call_stack(struct WASMExecEnv *exec_env);
#if WAMR_ENABLE_COPY_CALLSTACK != 0 #if WASM_ENABLE_COPY_CALL_STACK != 0
uint32 uint32
aot_copy_callstack(WASMExecEnv *exec_env, wasm_frame_t *buffer, aot_copy_callstack(WASMExecEnv *exec_env, WASMCApiFrame *buffer,
const uint32 length, const uint32 skip_n, char *error_buf, const uint32 length, const uint32 skip_n, char *error_buf,
uint32_t error_buf_size); uint32_t error_buf_size);
#endif // WAMR_ENABLE_COPY_CALLSTACK #endif // WASM_ENABLE_COPY_CALL_STACK
/** /**
* @brief Dump wasm call stack or get the size * @brief Dump wasm call stack or get the size

View File

@ -225,3 +225,18 @@ read_leb(uint8 **p_buf, const uint8 *buf_end, uint32 maxbits, bool sign,
return false; return false;
} }
} }
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
void
destroy_init_expr_recursive(InitializerExpression *expr)
{
if (expr == NULL) {
return;
}
if (is_expr_binary_op(expr->init_expr_type)) {
destroy_init_expr_recursive(expr->u.binary.l_expr);
destroy_init_expr_recursive(expr->u.binary.r_expr);
}
wasm_runtime_free(expr);
}
#endif /* end of WASM_ENABLE_EXTENDED_CONST_EXPR != 0 */
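
Since a binary initializer node owns its two heap-allocated children, destroy_init_expr_recursive above frees the children before releasing the node itself (a post-order walk). A standalone sketch of the same ownership pattern with simplified stand-in nodes, assuming nothing about the loader beyond what the function above shows:

#include <stdlib.h>

typedef struct node {
    struct node *l, *r; /* children of a binary init-expr node, may be NULL */
} node;

static void
destroy_recursive(node *n)
{
    if (n == NULL)
        return;
    destroy_recursive(n->l); /* free children first ... */
    destroy_recursive(n->r);
    free(n);                 /* ... then the node itself */
}

int
main(void)
{
    /* Build a three-node tree, e.g. (add lhs rhs), then tear it down. */
    node *l = calloc(1, sizeof(node));
    node *r = calloc(1, sizeof(node));
    node *root = calloc(1, sizeof(node));
    if (!l || !r || !root)
        return 1;
    root->l = l;
    root->r = r;
    destroy_recursive(root); /* children are released before the root */
    return 0;
}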

View File

@ -50,6 +50,11 @@ void
wasm_loader_set_error_buf(char *error_buf, uint32 error_buf_size, wasm_loader_set_error_buf(char *error_buf, uint32 error_buf_size,
const char *string, bool is_aot); const char *string, bool is_aot);
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
void
destroy_init_expr_recursive(InitializerExpression *expr);
#endif
#ifdef __cplusplus #ifdef __cplusplus
} }
#endif #endif

View File

@ -143,7 +143,7 @@ is_bounds_checks_enabled(WASMModuleInstanceCommon *module_inst)
#if WASM_ENABLE_SHARED_HEAP != 0 #if WASM_ENABLE_SHARED_HEAP != 0
static void * static void *
wasm_mmap_linear_memory(uint64_t map_size, uint64 commit_size); wasm_mmap_linear_memory(uint64 map_size, uint64 commit_size);
static void static void
wasm_munmap_linear_memory(void *mapped_mem, uint64 commit_size, wasm_munmap_linear_memory(void *mapped_mem, uint64 commit_size,
uint64 map_size); uint64 map_size);
@ -177,39 +177,54 @@ wasm_runtime_create_shared_heap(SharedHeapInitArgs *init_args)
goto fail1; goto fail1;
} }
if (!(heap->heap_handle = size = align_uint(size, os_getpagesize());
runtime_malloc(mem_allocator_get_heap_struct_size()))) { if (size > APP_HEAP_SIZE_MAX || size < APP_HEAP_SIZE_MIN) {
LOG_WARNING("Invalid size of shared heap");
goto fail2; goto fail2;
} }
size = align_uint(size, os_getpagesize());
heap->size = size; heap->size = size;
heap->start_off_mem64 = UINT64_MAX - heap->size + 1; heap->start_off_mem64 = UINT64_MAX - heap->size + 1;
heap->start_off_mem32 = UINT32_MAX - heap->size + 1; heap->start_off_mem32 = UINT32_MAX - heap->size + 1;
heap->attached_count = 0;
if (size > APP_HEAP_SIZE_MAX || size < APP_HEAP_SIZE_MIN) { if (init_args->pre_allocated_addr != NULL) {
LOG_WARNING("Invalid size of shared heap"); /* Create shared heap from a pre allocated buffer, its size need to
goto fail3; * align with system page */
if (size != init_args->size) {
LOG_WARNING("Pre allocated size need to be aligned with system "
"page size to create shared heap");
goto fail2;
}
heap->heap_handle = NULL;
heap->base_addr = init_args->pre_allocated_addr;
} }
else {
if (!(heap->heap_handle =
runtime_malloc(mem_allocator_get_heap_struct_size()))) {
goto fail2;
}
#ifndef OS_ENABLE_HW_BOUND_CHECK #ifndef OS_ENABLE_HW_BOUND_CHECK
map_size = size; map_size = size;
#else #else
/* Totally 8G is mapped, the opcode load/store address range is 0 to 8G: /* Totally 8G is mapped, the opcode load/store address range is 0 to 8G:
* ea = i + memarg.offset * ea = i + memarg.offset
* both i and memarg.offset are u32 in range 0 to 4G * both i and memarg.offset are u32 in range 0 to 4G
* so the range of ea is 0 to 8G * so the range of ea is 0 to 8G
*/ */
map_size = 8 * (uint64)BH_GB; map_size = 8 * (uint64)BH_GB;
#endif #endif
if (!(heap->base_addr = wasm_mmap_linear_memory(map_size, size))) { if (!(heap->base_addr = wasm_mmap_linear_memory(map_size, size))) {
goto fail3; goto fail3;
} }
if (!mem_allocator_create_with_struct_and_pool( if (!mem_allocator_create_with_struct_and_pool(
heap->heap_handle, heap_struct_size, heap->base_addr, size)) { heap->heap_handle, heap_struct_size, heap->base_addr, size)) {
LOG_WARNING("init share heap failed"); LOG_WARNING("init share heap failed");
goto fail4; goto fail4;
}
} }
os_mutex_lock(&shared_heap_list_lock); os_mutex_lock(&shared_heap_list_lock);
@ -233,6 +248,219 @@ fail1:
return NULL; return NULL;
} }
WASMSharedHeap *
wasm_runtime_chain_shared_heaps(WASMSharedHeap *head, WASMSharedHeap *body)
{
WASMSharedHeap *cur;
bool heap_handle_exist = false;
if (!head || !body) {
LOG_WARNING("Invalid shared heap to chain.");
return NULL;
}
heap_handle_exist = head->heap_handle != NULL;
os_mutex_lock(&shared_heap_list_lock);
if (head->attached_count != 0 || body->attached_count != 0) {
LOG_WARNING("To create shared heap chain, all shared heap need to be "
"detached first.");
os_mutex_unlock(&shared_heap_list_lock);
return NULL;
}
for (cur = shared_heap_list; cur; cur = cur->next) {
if (cur->chain_next == body || cur->chain_next == head) {
LOG_WARNING(
"To create shared heap chain, both the 'head' and 'body' "
"shared heap can't already be the 'body' in another a chain");
os_mutex_unlock(&shared_heap_list_lock);
return NULL;
}
if (cur == head && cur->chain_next) {
LOG_WARNING(
"To create shared heap chain, the 'head' shared heap can't "
"already be the 'head' in another a chain");
os_mutex_unlock(&shared_heap_list_lock);
return NULL;
}
}
for (cur = body; cur; cur = cur->chain_next) {
if (cur->heap_handle && heap_handle_exist) {
LOG_WARNING(
"To create shared heap chain, only one of shared heap can "
"dynamically shared_heap_malloc and shared_heap_free, the rest "
"can only be pre-allocated shared heap");
os_mutex_unlock(&shared_heap_list_lock);
return NULL;
}
if (cur->heap_handle)
heap_handle_exist = true;
}
head->start_off_mem64 = body->start_off_mem64 - head->size;
head->start_off_mem32 = body->start_off_mem32 - head->size;
head->chain_next = body;
os_mutex_unlock(&shared_heap_list_lock);
return head;
}
WASMSharedHeap *
wasm_runtime_unchain_shared_heaps(WASMSharedHeap *head, bool entire_chain)
{
WASMSharedHeap *cur, *tmp;
if (!head || !head->chain_next) {
LOG_WARNING("Invalid shared heap chain to disconnect the head from.");
return NULL;
}
os_mutex_lock(&shared_heap_list_lock);
if (head->attached_count != 0) {
LOG_WARNING("To disconnect the shared heap head from the shared heap "
"chain, the shared heap chain needs to be detached first.");
os_mutex_unlock(&shared_heap_list_lock);
return NULL;
}
cur = head;
while (cur && cur->chain_next) {
cur->start_off_mem64 = UINT64_MAX - cur->size + 1;
cur->start_off_mem32 = UINT32_MAX - cur->size + 1;
tmp = cur;
cur = cur->chain_next;
tmp->chain_next = NULL;
if (!entire_chain)
break;
}
os_mutex_unlock(&shared_heap_list_lock);
return cur;
}
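
A sketch of how the chaining API introduced above could be driven by an embedder: one dynamically managed heap is chained in front of one pre-allocated heap, attached to an instance, used for an allocation, then detached and unchained. It assumes an already instantiated module_inst, a page-aligned prealloc_buf/prealloc_size, and that the runtime-internal declarations shown in the header hunk later in this diff are visible to the caller; it is an illustration, not the project's sample code:

#include <string.h>

static bool
chain_demo(WASMModuleInstanceCommon *module_inst, uint8 *prealloc_buf,
           uint32 prealloc_size /* assumed page-aligned */)
{
    SharedHeapInitArgs args;
    WASMSharedHeap *head, *body, *chain;
    void *native = NULL;
    uint64 app_addr;

    /* The dynamically managed heap: the only member of a chain that may
     * back wasm_runtime_shared_heap_malloc/free. */
    memset(&args, 0, sizeof(args));
    args.size = 64 * 1024;
    if (!(head = wasm_runtime_create_shared_heap(&args)))
        return false;

    /* The pre-allocated heap: its size must already match page alignment. */
    memset(&args, 0, sizeof(args));
    args.pre_allocated_addr = prealloc_buf;
    args.size = prealloc_size;
    if (!(body = wasm_runtime_create_shared_heap(&args)))
        return false;

    /* Both heaps must be detached (attached_count == 0) when chained. */
    if (!(chain = wasm_runtime_chain_shared_heaps(head, body)))
        return false;

    if (!wasm_runtime_attach_shared_heap(module_inst, chain))
        return false;

    /* Served by the chain member that owns a heap_handle. */
    app_addr = wasm_runtime_shared_heap_malloc(module_inst, 128, &native);
    if (app_addr == 0)
        return false;
    wasm_runtime_shared_heap_free(module_inst, app_addr);

    wasm_runtime_detach_shared_heap_internal(module_inst);
    /* true unchains every member of the chain, not just the head. */
    wasm_runtime_unchain_shared_heaps(chain, true);
    return true;
}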
static uint8 *
get_last_used_shared_heap_base_addr_adj(WASMModuleInstanceCommon *module_inst)
{
#if WASM_ENABLE_INTERP != 0
if (module_inst->module_type == Wasm_Module_Bytecode) {
WASMModuleInstanceExtra *e =
(WASMModuleInstanceExtra *)((WASMModuleInstance *)module_inst)->e;
return e->shared_heap_base_addr_adj;
}
#endif /* end of WASM_ENABLE_INTERP != 0 */
#if WASM_ENABLE_AOT != 0
if (module_inst->module_type == Wasm_Module_AoT) {
AOTModuleInstanceExtra *e =
(AOTModuleInstanceExtra *)((AOTModuleInstance *)module_inst)->e;
return e->shared_heap_base_addr_adj;
}
#endif /* end of WASM_ENABLE_AOT != 0 */
return 0;
}
static uintptr_t
get_last_used_shared_heap_start_offset(WASMModuleInstanceCommon *module_inst)
{
#if WASM_ENABLE_INTERP != 0
if (module_inst->module_type == Wasm_Module_Bytecode) {
WASMModuleInstanceExtra *e =
(WASMModuleInstanceExtra *)((WASMModuleInstance *)module_inst)->e;
#if UINTPTR_MAX == UINT64_MAX
return e->shared_heap_start_off.u64;
#else
return e->shared_heap_start_off.u32[0];
#endif
}
#endif /* end of WASM_ENABLE_INTERP != 0 */
#if WASM_ENABLE_AOT != 0
if (module_inst->module_type == Wasm_Module_AoT) {
AOTModuleInstanceExtra *e =
(AOTModuleInstanceExtra *)((AOTModuleInstance *)module_inst)->e;
#if UINTPTR_MAX == UINT64_MAX
return e->shared_heap_start_off.u64;
#else
return e->shared_heap_start_off.u32[0];
#endif
}
#endif /* end of WASM_ENABLE_AOT != 0 */
return 0;
}
static uintptr_t
get_last_used_shared_heap_end_offset(WASMModuleInstanceCommon *module_inst)
{
#if WASM_ENABLE_INTERP != 0
if (module_inst->module_type == Wasm_Module_Bytecode) {
WASMModuleInstanceExtra *e =
(WASMModuleInstanceExtra *)((WASMModuleInstance *)module_inst)->e;
#if UINTPTR_MAX == UINT64_MAX
return e->shared_heap_end_off.u64;
#else
return e->shared_heap_end_off.u32[0];
#endif
}
#endif /* end of WASM_ENABLE_INTERP != 0 */
#if WASM_ENABLE_AOT != 0
if (module_inst->module_type == Wasm_Module_AoT) {
AOTModuleInstanceExtra *e =
(AOTModuleInstanceExtra *)((AOTModuleInstance *)module_inst)->e;
#if UINTPTR_MAX == UINT64_MAX
return e->shared_heap_end_off.u64;
#else
return e->shared_heap_end_off.u32[0];
#endif
}
#endif /* end of WASM_ENABLE_AOT != 0 */
return 0;
}
static void
update_last_used_shared_heap(WASMModuleInstanceCommon *module_inst,
WASMSharedHeap *shared_heap, bool is_memory64)
{
#if WASM_ENABLE_INTERP != 0
if (module_inst->module_type == Wasm_Module_Bytecode) {
WASMModuleInstanceExtra *e =
(WASMModuleInstanceExtra *)((WASMModuleInstance *)module_inst)->e;
#if UINTPTR_MAX == UINT64_MAX
if (is_memory64)
e->shared_heap_start_off.u64 = shared_heap->start_off_mem64;
else
e->shared_heap_start_off.u64 = shared_heap->start_off_mem32;
e->shared_heap_end_off.u64 =
e->shared_heap_start_off.u64 - 1 + shared_heap->size;
e->shared_heap_base_addr_adj =
shared_heap->base_addr - e->shared_heap_start_off.u64;
#else
e->shared_heap_start_off.u32[0] = (uint32)shared_heap->start_off_mem32;
e->shared_heap_end_off.u32[0] =
e->shared_heap_start_off.u32[0] - 1 + shared_heap->size;
e->shared_heap_base_addr_adj =
shared_heap->base_addr - e->shared_heap_start_off.u32[0];
#endif
}
#endif /* end of WASM_ENABLE_INTERP != 0 */
#if WASM_ENABLE_AOT != 0
if (module_inst->module_type == Wasm_Module_AoT) {
AOTModuleInstanceExtra *e =
(AOTModuleInstanceExtra *)((AOTModuleInstance *)module_inst)->e;
#if UINTPTR_MAX == UINT64_MAX
if (is_memory64)
e->shared_heap_start_off.u64 = shared_heap->start_off_mem64;
else
e->shared_heap_start_off.u64 = shared_heap->start_off_mem32;
e->shared_heap_end_off.u64 =
e->shared_heap_start_off.u64 - 1 + shared_heap->size;
e->shared_heap_base_addr_adj =
shared_heap->base_addr - e->shared_heap_start_off.u64;
#else
e->shared_heap_start_off.u32[0] = (uint32)shared_heap->start_off_mem32;
e->shared_heap_end_off.u32[0] =
e->shared_heap_start_off.u32[0] - 1 + shared_heap->size;
e->shared_heap_base_addr_adj =
shared_heap->base_addr - e->shared_heap_start_off.u32[0];
#endif
}
#endif /* end of WASM_ENABLE_AOT != 0 */
}
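
The cached shared_heap_base_addr_adj set above is base_addr - start_off, so the fast path can turn an app offset into a native pointer with a single addition (the *_end_off fields give the matching range check). A small standalone illustration of that arithmetic with hypothetical sizes, doing the adjustment in integer space:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    uint32_t size = 0x10000;                    /* 64 KiB heap, hypothetical */
    uint32_t start_off = UINT32_MAX - size + 1; /* 0xFFFF0000 */
    uint8_t *base_addr = malloc(size);          /* stands in for the heap */
    if (!base_addr)
        return 1;

    /* adj = base_addr - start_off, kept as an integer here */
    uintptr_t adj = (uintptr_t)base_addr - start_off;

    uint32_t app_offset = start_off + 0x20;     /* 0x20 bytes into the heap */
    uint8_t *native = (uint8_t *)(adj + app_offset);

    printf("offset into heap: %ld\n", (long)(native - base_addr)); /* 32 */
    free(base_addr);
    return 0;
}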
bool bool
wasm_runtime_attach_shared_heap_internal(WASMModuleInstanceCommon *module_inst, wasm_runtime_attach_shared_heap_internal(WASMModuleInstanceCommon *module_inst,
WASMSharedHeap *shared_heap) WASMSharedHeap *shared_heap)
@ -263,20 +491,6 @@ wasm_runtime_attach_shared_heap_internal(WASMModuleInstanceCommon *module_inst,
return false; return false;
} }
e->shared_heap = shared_heap; e->shared_heap = shared_heap;
#if WASM_ENABLE_JIT != 0
#if UINTPTR_MAX == UINT64_MAX
if (memory->is_memory64)
e->shared_heap_start_off.u64 = shared_heap->start_off_mem64;
else
e->shared_heap_start_off.u64 = shared_heap->start_off_mem32;
e->shared_heap_base_addr_adj =
shared_heap->base_addr - e->shared_heap_start_off.u64;
#else
e->shared_heap_start_off.u32[0] = (uint32)shared_heap->start_off_mem32;
e->shared_heap_base_addr_adj =
shared_heap->base_addr - e->shared_heap_start_off.u32[0];
#endif
#endif /* end of WASM_ENABLE_JIT != 0 */
} }
#endif /* end of WASM_ENABLE_INTERP != 0 */ #endif /* end of WASM_ENABLE_INTERP != 0 */
#if WASM_ENABLE_AOT != 0 #if WASM_ENABLE_AOT != 0
@ -288,21 +502,13 @@ wasm_runtime_attach_shared_heap_internal(WASMModuleInstanceCommon *module_inst,
return false; return false;
} }
e->shared_heap = shared_heap; e->shared_heap = shared_heap;
#if UINTPTR_MAX == UINT64_MAX
if (memory->is_memory64)
e->shared_heap_start_off.u64 = shared_heap->start_off_mem64;
else
e->shared_heap_start_off.u64 = shared_heap->start_off_mem32;
e->shared_heap_base_addr_adj =
shared_heap->base_addr - e->shared_heap_start_off.u64;
#else
e->shared_heap_start_off.u32[0] = (uint32)shared_heap->start_off_mem32;
e->shared_heap_base_addr_adj =
shared_heap->base_addr - e->shared_heap_start_off.u32[0];
#endif
} }
#endif /* end of WASM_ENABLE_AOT != 0 */ #endif /* end of WASM_ENABLE_AOT != 0 */
update_last_used_shared_heap(module_inst, shared_heap, memory->is_memory64);
os_mutex_lock(&shared_heap_list_lock);
shared_heap->attached_count++;
os_mutex_unlock(&shared_heap_list_lock);
return true; return true;
} }
@ -320,30 +526,46 @@ wasm_runtime_attach_shared_heap(WASMModuleInstanceCommon *module_inst,
void void
wasm_runtime_detach_shared_heap_internal(WASMModuleInstanceCommon *module_inst) wasm_runtime_detach_shared_heap_internal(WASMModuleInstanceCommon *module_inst)
{ {
/* Reset shared_heap_end_off = UINT64/32_MAX - 1 to handling a corner case,
app_offset >= shared_heap_start && app_offset <= shared_heap_end-bytes+1
when bytes=1 and both e->shared_heap_start_off and e->shared_heap_end_off
is 0xffffffff */
#if WASM_ENABLE_INTERP != 0 #if WASM_ENABLE_INTERP != 0
if (module_inst->module_type == Wasm_Module_Bytecode) { if (module_inst->module_type == Wasm_Module_Bytecode) {
WASMModuleInstanceExtra *e = WASMModuleInstanceExtra *e =
(WASMModuleInstanceExtra *)((WASMModuleInstance *)module_inst)->e; (WASMModuleInstanceExtra *)((WASMModuleInstance *)module_inst)->e;
if (e->shared_heap != NULL) {
os_mutex_lock(&shared_heap_list_lock);
e->shared_heap->attached_count--;
os_mutex_unlock(&shared_heap_list_lock);
}
e->shared_heap = NULL; e->shared_heap = NULL;
#if WASM_ENABLE_JIT != 0
#if UINTPTR_MAX == UINT64_MAX #if UINTPTR_MAX == UINT64_MAX
e->shared_heap_start_off.u64 = UINT64_MAX; e->shared_heap_start_off.u64 = UINT64_MAX;
e->shared_heap_end_off.u64 = UINT64_MAX - 1;
#else #else
e->shared_heap_start_off.u32[0] = UINT32_MAX; e->shared_heap_start_off.u32[0] = UINT32_MAX;
e->shared_heap_end_off.u32[0] = UINT32_MAX - 1;
#endif #endif
e->shared_heap_base_addr_adj = NULL; e->shared_heap_base_addr_adj = NULL;
#endif
} }
#endif /* end of WASM_ENABLE_INTERP != 0 */ #endif /* end of WASM_ENABLE_INTERP != 0 */
#if WASM_ENABLE_AOT != 0 #if WASM_ENABLE_AOT != 0
if (module_inst->module_type == Wasm_Module_AoT) { if (module_inst->module_type == Wasm_Module_AoT) {
AOTModuleInstanceExtra *e = AOTModuleInstanceExtra *e =
(AOTModuleInstanceExtra *)((AOTModuleInstance *)module_inst)->e; (AOTModuleInstanceExtra *)((AOTModuleInstance *)module_inst)->e;
if (e->shared_heap != NULL) {
os_mutex_lock(&shared_heap_list_lock);
e->shared_heap->attached_count--;
os_mutex_unlock(&shared_heap_list_lock);
}
e->shared_heap = NULL; e->shared_heap = NULL;
#if UINTPTR_MAX == UINT64_MAX #if UINTPTR_MAX == UINT64_MAX
e->shared_heap_start_off.u64 = UINT64_MAX; e->shared_heap_start_off.u64 = UINT64_MAX;
e->shared_heap_end_off.u64 = UINT64_MAX - 1;
#else #else
e->shared_heap_start_off.u32[0] = UINT32_MAX; e->shared_heap_start_off.u32[0] = UINT32_MAX;
e->shared_heap_end_off.u32[0] = UINT32_MAX - 1;
#endif #endif
e->shared_heap_base_addr_adj = NULL; e->shared_heap_base_addr_adj = NULL;
} }
@ -385,71 +607,93 @@ wasm_runtime_get_shared_heap(WASMModuleInstanceCommon *module_inst_comm)
return get_shared_heap(module_inst_comm); return get_shared_heap(module_inst_comm);
} }
static bool bool
is_app_addr_in_shared_heap(WASMModuleInstanceCommon *module_inst, is_app_addr_in_shared_heap(WASMModuleInstanceCommon *module_inst,
bool is_memory64, uint64 app_offset, uint32 bytes) bool is_memory64, uint64 app_offset, uint32 bytes)
{ {
WASMSharedHeap *heap = get_shared_heap(module_inst); WASMSharedHeap *heap = get_shared_heap(module_inst), *cur;
uint64 shared_heap_start, shared_heap_end;
if (!heap) { if (!heap) {
return false; goto fail;
} }
if (bytes == 0) { if (bytes == 0) {
bytes = 1; bytes = 1;
} }
if (!is_memory64) { shared_heap_start =
if (app_offset >= heap->start_off_mem32 (uint64)get_last_used_shared_heap_start_offset(module_inst);
&& app_offset <= UINT32_MAX - bytes + 1) { shared_heap_end = (uint64)get_last_used_shared_heap_end_offset(module_inst);
return true; if (bytes - 1 <= shared_heap_end && app_offset >= shared_heap_start
} && app_offset <= shared_heap_end - bytes + 1) {
return true;
} }
else {
if (app_offset >= heap->start_off_mem64 /* Early stop for app start address not in the shared heap(chain) at all */
&& app_offset <= UINT64_MAX - bytes + 1) { shared_heap_start =
is_memory64 ? heap->start_off_mem64 : heap->start_off_mem32;
shared_heap_end = is_memory64 ? UINT64_MAX : UINT32_MAX;
if (bytes - 1 > shared_heap_end || app_offset < shared_heap_start
|| app_offset > shared_heap_end - bytes + 1) {
goto fail;
}
/* Find the exact shared heap that app addr is in, and update last used
* shared heap info in module inst extra */
for (cur = heap; cur; cur = cur->chain_next) {
shared_heap_start =
is_memory64 ? cur->start_off_mem64 : cur->start_off_mem32;
shared_heap_end = shared_heap_start - 1 + cur->size;
if (bytes - 1 <= shared_heap_end && app_offset >= shared_heap_start
&& app_offset <= shared_heap_end - bytes + 1) {
update_last_used_shared_heap(module_inst, cur, is_memory64);
return true; return true;
} }
} }
fail:
return false; return false;
} }
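
For reference, the offsets this check compares against sit at the very top of the wasm address space: a shared heap of size S covers app offsets [UINT32_MAX - S + 1, UINT32_MAX] for 32-bit memories (UINT64_MAX for memory64), and wasm_runtime_chain_shared_heaps places a chained head immediately below its body, so the whole chain is one contiguous range. A standalone example with hypothetical sizes:

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    uint32_t body_size = 0x100000; /* 1 MiB, hypothetical */
    uint32_t head_size = 0x10000;  /* 64 KiB, hypothetical */

    uint32_t body_start = UINT32_MAX - body_size + 1; /* 0xFFF00000 */
    uint32_t head_start = body_start - head_size;     /* 0xFFEF0000 */
    uint32_t head_end = head_start - 1 + head_size;   /* 0xFFEFFFFF */

    printf("body: [0x%08X, 0x%08X]\n", (unsigned)body_start,
           (unsigned)UINT32_MAX);
    printf("head: [0x%08X, 0x%08X]\n", (unsigned)head_start,
           (unsigned)head_end);

    /* An access of `bytes` bytes starting at app_offset is inside a heap
     * when app_offset >= start && app_offset <= end - bytes + 1 */
    return 0;
}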
static bool static bool
is_native_addr_in_shared_heap(WASMModuleInstanceCommon *module_inst, is_native_addr_in_shared_heap(WASMModuleInstanceCommon *module_inst,
uint8 *addr, uint32 bytes) bool is_memory64, uint8 *addr, uint32 bytes)
{ {
WASMSharedHeap *heap = get_shared_heap(module_inst); WASMSharedHeap *cur, *heap = get_shared_heap(module_inst);
uintptr_t base_addr; uintptr_t base_addr, addr_int, end_addr;
uintptr_t addr_int;
uintptr_t end_addr;
if (!heap) { if (!heap) {
return false; goto fail;
} }
base_addr = (uintptr_t)heap->base_addr; /* Iterate through shared heap chain to find whether native addr in one of
addr_int = (uintptr_t)addr; * shared heap */
if (addr_int < base_addr) { for (cur = heap; cur != NULL; cur = cur->chain_next) {
return false; base_addr = (uintptr_t)cur->base_addr;
addr_int = (uintptr_t)addr;
if (addr_int < base_addr)
continue;
end_addr = addr_int + bytes;
/* Check for overflow */
if (end_addr <= addr_int)
continue;
if (end_addr > base_addr + cur->size)
continue;
update_last_used_shared_heap(module_inst, cur, is_memory64);
return true;
} }
end_addr = addr_int + bytes; fail:
/* Check for overflow */ return false;
if (end_addr <= addr_int) {
return false;
}
if (end_addr > base_addr + heap->size) {
return false;
}
return true;
} }
uint64 uint64
wasm_runtime_shared_heap_malloc(WASMModuleInstanceCommon *module_inst, wasm_runtime_shared_heap_malloc(WASMModuleInstanceCommon *module_inst,
uint64_t size, void **p_native_addr) uint64 size, void **p_native_addr)
{ {
WASMMemoryInstance *memory = WASMMemoryInstance *memory =
wasm_get_default_memory((WASMModuleInstance *)module_inst); wasm_get_default_memory((WASMModuleInstance *)module_inst);
@ -459,6 +703,14 @@ wasm_runtime_shared_heap_malloc(WASMModuleInstanceCommon *module_inst,
if (!memory || !shared_heap) if (!memory || !shared_heap)
return 0; return 0;
while (shared_heap && !shared_heap->heap_handle) {
shared_heap = shared_heap->chain_next;
}
if (!shared_heap) {
LOG_WARNING("Can't allocate from pre allocated shared heap");
return 0;
}
native_addr = mem_allocator_malloc(shared_heap->heap_handle, size); native_addr = mem_allocator_malloc(shared_heap->heap_handle, size);
if (!native_addr) if (!native_addr)
return 0; return 0;
@ -467,12 +719,10 @@ wasm_runtime_shared_heap_malloc(WASMModuleInstanceCommon *module_inst,
*p_native_addr = native_addr; *p_native_addr = native_addr;
} }
if (memory->is_memory64) return memory->is_memory64
return shared_heap->start_off_mem64 ? shared_heap->start_off_mem64
+ ((uint8 *)native_addr - shared_heap->base_addr); : shared_heap->start_off_mem32
else + ((uint8 *)native_addr - shared_heap->base_addr);
return shared_heap->start_off_mem32
+ ((uint8 *)native_addr - shared_heap->base_addr);
} }
void void
@ -487,6 +737,14 @@ wasm_runtime_shared_heap_free(WASMModuleInstanceCommon *module_inst, uint64 ptr)
return; return;
} }
while (shared_heap && !shared_heap->heap_handle) {
shared_heap = shared_heap->chain_next;
}
if (!shared_heap) {
LOG_WARNING("The address to free is from pre allocated shared heap");
return;
}
if (memory->is_memory64) { if (memory->is_memory64) {
if (ptr < shared_heap->start_off_mem64) { /* ptr can not > UINT64_MAX */ if (ptr < shared_heap->start_off_mem64) { /* ptr can not > UINT64_MAX */
LOG_WARNING("The address to free isn't in shared heap"); LOG_WARNING("The address to free isn't in shared heap");
@ -564,14 +822,16 @@ destroy_shared_heaps()
while (heap) { while (heap) {
cur = heap; cur = heap;
heap = heap->next; heap = heap->next;
mem_allocator_destroy(cur->heap_handle); if (cur->heap_handle) {
wasm_runtime_free(cur->heap_handle); mem_allocator_destroy(cur->heap_handle);
wasm_runtime_free(cur->heap_handle);
#ifndef OS_ENABLE_HW_BOUND_CHECK #ifndef OS_ENABLE_HW_BOUND_CHECK
map_size = cur->size; map_size = cur->size;
#else #else
map_size = 8 * (uint64)BH_GB; map_size = 8 * (uint64)BH_GB;
#endif #endif
wasm_munmap_linear_memory(cur->base_addr, cur->size, map_size); wasm_munmap_linear_memory(cur->base_addr, cur->size, map_size);
}
wasm_runtime_free(cur); wasm_runtime_free(cur);
} }
os_mutex_destroy(&shared_heap_list_lock); os_mutex_destroy(&shared_heap_list_lock);
@ -798,6 +1058,10 @@ wasm_runtime_validate_app_str_addr(WASMModuleInstanceCommon *module_inst_comm,
WASMMemoryInstance *memory_inst; WASMMemoryInstance *memory_inst;
uint64 app_end_offset, max_linear_memory_size = MAX_LINEAR_MEMORY_SIZE; uint64 app_end_offset, max_linear_memory_size = MAX_LINEAR_MEMORY_SIZE;
char *str, *str_end; char *str, *str_end;
#if WASM_ENABLE_SHARED_HEAP != 0
uintptr_t shared_heap_end_off;
char *shared_heap_base_addr_adj;
#endif
bh_assert(module_inst_comm->module_type == Wasm_Module_Bytecode bh_assert(module_inst_comm->module_type == Wasm_Module_Bytecode
|| module_inst_comm->module_type == Wasm_Module_AoT); || module_inst_comm->module_type == Wasm_Module_AoT);
@ -814,12 +1078,12 @@ wasm_runtime_validate_app_str_addr(WASMModuleInstanceCommon *module_inst_comm,
#if WASM_ENABLE_SHARED_HEAP != 0 #if WASM_ENABLE_SHARED_HEAP != 0
if (is_app_addr_in_shared_heap(module_inst_comm, memory_inst->is_memory64, if (is_app_addr_in_shared_heap(module_inst_comm, memory_inst->is_memory64,
app_str_offset, 1)) { app_str_offset, 1)) {
WASMSharedHeap *shared_heap = get_shared_heap(module_inst_comm); shared_heap_end_off =
str = (char *)shared_heap->base_addr get_last_used_shared_heap_end_offset(module_inst_comm);
+ (memory_inst->is_memory64 shared_heap_base_addr_adj =
? (app_str_offset - shared_heap->start_off_mem64) (char *)get_last_used_shared_heap_base_addr_adj(module_inst_comm);
: (app_str_offset - shared_heap->start_off_mem32)); str = shared_heap_base_addr_adj + app_str_offset;
str_end = (char *)shared_heap->base_addr + shared_heap->size; str_end = shared_heap_base_addr_adj + shared_heap_end_off + 1;
} }
else else
#endif #endif
@ -884,7 +1148,8 @@ wasm_runtime_validate_native_addr(WASMModuleInstanceCommon *module_inst_comm,
} }
#if WASM_ENABLE_SHARED_HEAP != 0 #if WASM_ENABLE_SHARED_HEAP != 0
if (is_native_addr_in_shared_heap(module_inst_comm, native_ptr, size)) { if (is_native_addr_in_shared_heap(
module_inst_comm, memory_inst->is_memory64, native_ptr, size)) {
return true; return true;
} }
#endif #endif
@ -926,17 +1191,8 @@ wasm_runtime_addr_app_to_native(WASMModuleInstanceCommon *module_inst_comm,
#if WASM_ENABLE_SHARED_HEAP != 0 #if WASM_ENABLE_SHARED_HEAP != 0
if (is_app_addr_in_shared_heap(module_inst_comm, memory_inst->is_memory64, if (is_app_addr_in_shared_heap(module_inst_comm, memory_inst->is_memory64,
app_offset, 1)) { app_offset, 1)) {
WASMSharedHeap *shared_heap = get_shared_heap(module_inst_comm); return get_last_used_shared_heap_base_addr_adj(module_inst_comm)
uint64 shared_heap_start = 0; + app_offset;
if (memory_inst && !memory_inst->is_memory64) {
shared_heap_start = shared_heap->start_off_mem32;
}
else if (memory_inst && memory_inst->is_memory64) {
shared_heap_start = shared_heap->start_off_mem64;
}
return shared_heap->base_addr + app_offset - shared_heap_start;
} }
#endif #endif
@ -974,29 +1230,17 @@ wasm_runtime_addr_native_to_app(WASMModuleInstanceCommon *module_inst_comm,
bounds_checks = is_bounds_checks_enabled(module_inst_comm); bounds_checks = is_bounds_checks_enabled(module_inst_comm);
#if WASM_ENABLE_SHARED_HEAP != 0
/* If shared heap is enabled, bounds check is always needed */
bounds_checks = true;
#endif
memory_inst = wasm_get_default_memory(module_inst); memory_inst = wasm_get_default_memory(module_inst);
if (!memory_inst) { if (!memory_inst) {
return 0; return 0;
} }
#if WASM_ENABLE_SHARED_HEAP != 0 #if WASM_ENABLE_SHARED_HEAP != 0
if (is_native_addr_in_shared_heap(module_inst_comm, addr, 1)) { if (is_native_addr_in_shared_heap(module_inst_comm,
WASMSharedHeap *shared_heap = get_shared_heap(module_inst_comm); memory_inst->is_memory64, addr, 1)) {
uint64 shared_heap_start = 0; return (uint64)(uintptr_t)(addr
- get_last_used_shared_heap_base_addr_adj(
if (memory_inst && !memory_inst->is_memory64) { module_inst_comm));
shared_heap_start = shared_heap->start_off_mem32;
}
else if (memory_inst && memory_inst->is_memory64) {
shared_heap_start = shared_heap->start_off_mem64;
}
return shared_heap_start + (addr - shared_heap->base_addr);
} }
#endif #endif
@ -1098,8 +1342,8 @@ wasm_check_app_addr_and_convert(WASMModuleInstance *module_inst, bool is_str,
uint8 *native_addr; uint8 *native_addr;
bool bounds_checks; bool bounds_checks;
#if WASM_ENABLE_SHARED_HEAP != 0 #if WASM_ENABLE_SHARED_HEAP != 0
WASMSharedHeap *shared_heap; uint8 *shared_heap_base_addr_adj = NULL;
bool is_in_shared_heap = false; uintptr_t shared_heap_end_off = 0;
#endif #endif
bh_assert(app_buf_addr <= UINTPTR_MAX && app_buf_size <= UINTPTR_MAX); bh_assert(app_buf_addr <= UINTPTR_MAX && app_buf_size <= UINTPTR_MAX);
@ -1113,36 +1357,17 @@ wasm_check_app_addr_and_convert(WASMModuleInstance *module_inst, bool is_str,
if (is_app_addr_in_shared_heap((WASMModuleInstanceCommon *)module_inst, if (is_app_addr_in_shared_heap((WASMModuleInstanceCommon *)module_inst,
memory_inst->is_memory64, app_buf_addr, memory_inst->is_memory64, app_buf_addr,
app_buf_size)) { app_buf_size)) {
shared_heap = get_shared_heap((WASMModuleInstanceCommon *)module_inst);
native_addr = shared_heap->base_addr
+ (memory_inst->is_memory64
? (app_buf_addr - shared_heap->start_off_mem64)
: (app_buf_addr - shared_heap->start_off_mem32));
is_in_shared_heap = true;
}
else
#endif
{
native_addr = memory_inst->memory_data + (uintptr_t)app_buf_addr;
}
bounds_checks =
is_bounds_checks_enabled((WASMModuleInstanceCommon *)module_inst);
if (!bounds_checks) {
if (app_buf_addr == 0) {
native_addr = NULL;
}
goto success;
}
#if WASM_ENABLE_SHARED_HEAP != 0
if (is_in_shared_heap) {
const char *str, *str_end; const char *str, *str_end;
shared_heap_base_addr_adj = get_last_used_shared_heap_base_addr_adj(
(WASMModuleInstanceCommon *)module_inst);
shared_heap_end_off = get_last_used_shared_heap_end_offset(
(WASMModuleInstanceCommon *)module_inst);
native_addr = shared_heap_base_addr_adj + (uintptr_t)app_buf_addr;
/* The whole string must be in the linear memory */ /* The whole string must be in the shared heap */
str = (const char *)native_addr; str = (const char *)native_addr;
str_end = (const char *)shared_heap->base_addr + shared_heap->size; str_end =
(const char *)shared_heap_base_addr_adj + shared_heap_end_off + 1;
while (str < str_end && *str != '\0') while (str < str_end && *str != '\0')
str++; str++;
if (str == str_end) { if (str == str_end) {
@ -1154,6 +1379,17 @@ wasm_check_app_addr_and_convert(WASMModuleInstance *module_inst, bool is_str,
} }
#endif #endif
native_addr = memory_inst->memory_data + (uintptr_t)app_buf_addr;
bounds_checks =
is_bounds_checks_enabled((WASMModuleInstanceCommon *)module_inst);
if (!bounds_checks) {
if (app_buf_addr == 0) {
native_addr = NULL;
}
goto success;
}
/* No need to check the app_offset and buf_size if memory access /* No need to check the app_offset and buf_size if memory access
boundary check with hardware trap is enabled */ boundary check with hardware trap is enabled */
#ifndef OS_ENABLE_HW_BOUND_CHECK #ifndef OS_ENABLE_HW_BOUND_CHECK

View File

@ -41,10 +41,60 @@ SET_LINEAR_MEMORY_SIZE(WASMMemoryInstance *memory, uint64 size)
#define SET_LINEAR_MEMORY_SIZE(memory, size) memory->memory_data_size = size #define SET_LINEAR_MEMORY_SIZE(memory, size) memory->memory_data_size = size
#endif #endif
#if WASM_ENABLE_INTERP != 0
#if WASM_ENABLE_SHARED_HEAP != 0 #if WASM_ENABLE_SHARED_HEAP != 0
#if WASM_ENABLE_MULTI_MEMORY != 0
/* Only enable shared heap for the default memory */
#define is_default_memory (memidx == 0)
#else
#define is_default_memory true
#endif
#if UINTPTR_MAX == UINT64_MAX
#define get_shared_heap_end_off() module->e->shared_heap_end_off.u64
#else
#define get_shared_heap_end_off() \
(uint64)(module->e->shared_heap_end_off.u32[0])
#endif
#if WASM_ENABLE_MEMORY64 != 0
#define shared_heap_is_memory64 is_memory64
#else
#define shared_heap_is_memory64 false
#endif
#define app_addr_in_shared_heap(app_addr, bytes) \
(is_default_memory \
&& is_app_addr_in_shared_heap((WASMModuleInstanceCommon *)module, \
shared_heap_is_memory64, (uint64)app_addr, \
bytes))
#define shared_heap_addr_app_to_native(app_addr, native_addr) \
native_addr = module->e->shared_heap_base_addr_adj + app_addr
#define CHECK_SHARED_HEAP_OVERFLOW(app_addr, bytes, native_addr) \
if (app_addr_in_shared_heap(app_addr, bytes)) \
shared_heap_addr_app_to_native(app_addr, native_addr); \
else
#else /* else of WASM_ENABLE_SHARED_HEAP != 0 */
#define CHECK_SHARED_HEAP_OVERFLOW(app_addr, bytes, native_addr)
#endif /* end of WASM_ENABLE_SHARED_HEAP != 0 */
#endif /* end of WASM_ENABLE_INTERP != 0 */
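
The trailing else in CHECK_SHARED_HEAP_OVERFLOW above is deliberate: at a use site, the statement that follows the macro becomes the non-shared-heap fallback, so the ordinary linear-memory translation runs only when the address is not redirected. A standalone sketch of that control-flow shape with simplified stand-in macros (the names, sizes and checks here are illustrative, not the interpreter's real ones):

#include <stdint.h>
#include <stdio.h>

static uint8_t linear_mem[64];
static uint8_t shared_heap_buf[64];
static uint8_t *shared_base = shared_heap_buf;

/* Dangling-else pattern: if the address is in the shared heap, translate it
 * and skip the statement that follows the macro; otherwise fall through. */
#define IN_SHARED_HEAP(addr) ((addr) >= 0xFFFFFFC0u)
#define CHECK_SHARED_HEAP_OVERFLOW(addr, native)            \
    if (IN_SHARED_HEAP(addr))                               \
        (native) = shared_base + ((addr) - 0xFFFFFFC0u);    \
    else

int
main(void)
{
    uint32_t addr = 0xFFFFFFD0u; /* 0x10 bytes into the 64-byte shared heap */
    uint8_t *native = NULL;

    CHECK_SHARED_HEAP_OVERFLOW(addr, native)
    native = linear_mem + (addr & 0x3F); /* fallback: plain linear memory */

    printf("resolved via %s\n",
           native == shared_base + 0x10 ? "shared heap" : "linear memory");
    return 0;
}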
#if WASM_ENABLE_SHARED_HEAP != 0
bool
is_app_addr_in_shared_heap(WASMModuleInstanceCommon *module_inst,
bool is_memory64, uint64 app_offset, uint32 bytes);
WASMSharedHeap * WASMSharedHeap *
wasm_runtime_create_shared_heap(SharedHeapInitArgs *init_args); wasm_runtime_create_shared_heap(SharedHeapInitArgs *init_args);
WASMSharedHeap *
wasm_runtime_chain_shared_heaps(WASMSharedHeap *head, WASMSharedHeap *body);
WASMSharedHeap *
wasm_runtime_unchain_shared_heaps(WASMSharedHeap *head, bool entire_chain);
bool bool
wasm_runtime_attach_shared_heap(WASMModuleInstanceCommon *module_inst, wasm_runtime_attach_shared_heap(WASMModuleInstanceCommon *module_inst,
WASMSharedHeap *shared_heap); WASMSharedHeap *shared_heap);
@ -68,7 +118,7 @@ wasm_runtime_shared_heap_malloc(WASMModuleInstanceCommon *module_inst,
void void
wasm_runtime_shared_heap_free(WASMModuleInstanceCommon *module_inst, wasm_runtime_shared_heap_free(WASMModuleInstanceCommon *module_inst,
uint64 ptr); uint64 ptr);
#endif #endif /* end of WASM_ENABLE_SHARED_HEAP != 0 */
bool bool
wasm_runtime_memory_init(mem_alloc_type_t mem_alloc_type, wasm_runtime_memory_init(mem_alloc_type_t mem_alloc_type,

View File

@ -1743,9 +1743,9 @@ wasm_runtime_destroy_exec_env(WASMExecEnv *exec_env)
wasm_exec_env_destroy(exec_env); wasm_exec_env_destroy(exec_env);
} }
#if WAMR_ENABLE_COPY_CALLSTACK != 0 #if WASM_ENABLE_COPY_CALL_STACK != 0
uint32 uint32
wasm_copy_callstack(const wasm_exec_env_t exec_env, wasm_frame_t *buffer, wasm_copy_callstack(const wasm_exec_env_t exec_env, WASMCApiFrame *buffer,
const uint32 length, const uint32 skip_n, char *error_buf, const uint32 length, const uint32 skip_n, char *error_buf,
uint32_t error_buf_size) uint32_t error_buf_size)
{ {
@ -1780,7 +1780,7 @@ wasm_copy_callstack(const wasm_exec_env_t exec_env, wasm_frame_t *buffer,
strncpy(error_buf, err_msg, error_buf_size); strncpy(error_buf, err_msg, error_buf_size);
return 0; return 0;
} }
#endif // WAMR_ENABLE_COPY_CALLSTACK #endif // WASM_ENABLE_COPY_CALL_STACK
bool bool
wasm_runtime_init_thread_env(void) wasm_runtime_init_thread_env(void)
@ -7883,3 +7883,37 @@ wasm_runtime_is_underlying_binary_freeable(WASMModuleCommon *const module)
return true; return true;
} }
#if WASM_ENABLE_SHARED_HEAP != 0
bool
wasm_runtime_check_and_update_last_used_shared_heap(
WASMModuleInstanceCommon *module_inst, uintptr_t app_offset, size_t bytes,
uintptr_t *shared_heap_start_off_p, uintptr_t *shared_heap_end_off_p,
uint8 **shared_heap_base_addr_adj_p, bool is_memory64)
{
WASMSharedHeap *heap = wasm_runtime_get_shared_heap(module_inst), *cur;
uint64 shared_heap_start, shared_heap_end;
if (bytes == 0) {
bytes = 1;
}
/* Find the exact shared heap that app addr is in, and update last used
* shared heap info in func context */
for (cur = heap; cur; cur = cur->chain_next) {
shared_heap_start =
is_memory64 ? cur->start_off_mem64 : cur->start_off_mem32;
shared_heap_end = shared_heap_start - 1 + cur->size;
if (bytes - 1 <= shared_heap_end && app_offset >= shared_heap_start
&& app_offset <= shared_heap_end - bytes + 1) {
*shared_heap_start_off_p = (uintptr_t)shared_heap_start;
*shared_heap_end_off_p = (uintptr_t)shared_heap_end;
*shared_heap_base_addr_adj_p =
cur->base_addr - (uintptr_t)shared_heap_start;
return true;
}
}
return false;
}
#endif

View File

@ -758,12 +758,12 @@ wasm_runtime_create_exec_env(WASMModuleInstanceCommon *module_inst,
WASM_RUNTIME_API_EXTERN void WASM_RUNTIME_API_EXTERN void
wasm_runtime_destroy_exec_env(WASMExecEnv *exec_env); wasm_runtime_destroy_exec_env(WASMExecEnv *exec_env);
#if WAMR_ENABLE_COPY_CALLSTACK != 0 #if WASM_ENABLE_COPY_CALL_STACK != 0
WASM_RUNTIME_API_EXTERN uint32_t WASM_RUNTIME_API_EXTERN uint32_t
wasm_copy_callstack(const wasm_exec_env_t exec_env, wasm_frame_t *buffer, wasm_copy_callstack(const wasm_exec_env_t exec_env, WASMCApiFrame *buffer,
const uint32 length, const uint32 skip_n, char *error_buf, const uint32 length, const uint32 skip_n, char *error_buf,
uint32 error_buf_size); uint32 error_buf_size);
#endif // WAMR_ENABLE_COPY_CALLSTACK #endif // WASM_ENABLE_COPY_CALL_STACK
/* See wasm_export.h for description */ /* See wasm_export.h for description */
WASM_RUNTIME_API_EXTERN WASMModuleInstanceCommon * WASM_RUNTIME_API_EXTERN WASMModuleInstanceCommon *
@ -1336,6 +1336,14 @@ void
wasm_runtime_set_linux_perf(bool flag); wasm_runtime_set_linux_perf(bool flag);
#endif #endif
#if WASM_ENABLE_SHARED_HEAP != 0
bool
wasm_runtime_check_and_update_last_used_shared_heap(
WASMModuleInstanceCommon *module_inst, uintptr_t app_offset, size_t bytes,
uintptr_t *shared_heap_start_off_p, uintptr_t *shared_heap_end_off_p,
uint8 **shared_heap_base_addr_adj_p, bool is_memory64);
#endif
#ifdef __cplusplus #ifdef __cplusplus
} }
#endif #endif


@ -8,7 +8,7 @@
static char aot_error[128]; static char aot_error[128];
char * char *
aot_get_last_error() aot_get_last_error(void)
{ {
return aot_error[0] == '\0' ? "" : aot_error; return aot_error[0] == '\0' ? "" : aot_error;
} }


@ -48,7 +48,7 @@ typedef struct AOTSymbolList {
} AOTSymbolList; } AOTSymbolList;
/* AOT object data */ /* AOT object data */
typedef struct AOTObjectData { struct AOTObjectData {
AOTCompContext *comp_ctx; AOTCompContext *comp_ctx;
LLVMMemoryBufferRef mem_buf; LLVMMemoryBufferRef mem_buf;
@ -82,7 +82,7 @@ typedef struct AOTObjectData {
const char *stack_sizes_section_name; const char *stack_sizes_section_name;
uint32 stack_sizes_offset; uint32 stack_sizes_offset;
uint32 *stack_sizes; uint32 *stack_sizes;
} AOTObjectData; };
#if 0 #if 0
static void dump_buf(uint8 *buf, uint32 size, char *title) static void dump_buf(uint8 *buf, uint32 size, char *title)
@ -216,7 +216,7 @@ get_init_expr_size(const AOTCompContext *comp_ctx, const AOTCompData *comp_data,
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
WASMModule *module = comp_data->wasm_module; WASMModule *module = comp_data->wasm_module;
#endif #endif
bh_assert(expr != NULL);
/* + init value size */ /* + init value size */
switch (expr->init_expr_type) { switch (expr->init_expr_type) {
case INIT_EXPR_NONE: case INIT_EXPR_NONE:
@ -248,7 +248,7 @@ get_init_expr_size(const AOTCompContext *comp_ctx, const AOTCompData *comp_data,
{ {
uint32 i; uint32 i;
WASMStructNewInitValues *struct_new_init_values = WASMStructNewInitValues *struct_new_init_values =
(WASMStructNewInitValues *)expr->u.data; (WASMStructNewInitValues *)expr->u.unary.v.data;
/* type_index + field_count + fields */ /* type_index + field_count + fields */
size += sizeof(uint32) + sizeof(uint32); size += sizeof(uint32) + sizeof(uint32);
@ -285,7 +285,7 @@ get_init_expr_size(const AOTCompContext *comp_ctx, const AOTCompData *comp_data,
case INIT_EXPR_TYPE_ARRAY_NEW_FIXED: case INIT_EXPR_TYPE_ARRAY_NEW_FIXED:
{ {
WASMArrayNewInitValues *array_new_init_values = WASMArrayNewInitValues *array_new_init_values =
(WASMArrayNewInitValues *)expr->u.data; (WASMArrayNewInitValues *)expr->u.unary.v.data;
WASMArrayType *array_type = NULL; WASMArrayType *array_type = NULL;
uint32 value_count; uint32 value_count;
@ -302,12 +302,27 @@ get_init_expr_size(const AOTCompContext *comp_ctx, const AOTCompData *comp_data,
/* array_elem_type + type_index + len + elems */ /* array_elem_type + type_index + len + elems */
size += sizeof(uint32) * 3 size += sizeof(uint32) * 3
+ wasm_value_type_size_internal(array_type->elem_type, + (uint64)wasm_value_type_size_internal(
comp_ctx->pointer_size) array_type->elem_type, comp_ctx->pointer_size)
* value_count; * value_count;
break; break;
} }
#endif /* end of WASM_ENABLE_GC != 0 */ #endif /* end of WASM_ENABLE_GC != 0 */
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
case INIT_EXPR_TYPE_I32_ADD:
case INIT_EXPR_TYPE_I32_SUB:
case INIT_EXPR_TYPE_I32_MUL:
case INIT_EXPR_TYPE_I64_ADD:
case INIT_EXPR_TYPE_I64_SUB:
case INIT_EXPR_TYPE_I64_MUL:
{
size +=
get_init_expr_size(comp_ctx, comp_data, expr->u.binary.l_expr);
size +=
get_init_expr_size(comp_ctx, comp_data, expr->u.binary.r_expr);
break;
}
#endif
default: default:
bh_assert(0); bh_assert(0);
} }
@ -324,15 +339,16 @@ get_table_init_data_size(AOTCompContext *comp_ctx,
/* /*
* mode (4 bytes), elem_type (4 bytes) * mode (4 bytes), elem_type (4 bytes)
* *
* table_index(4 bytes) + init expr type (4 bytes) + init expr value (8 * table_index(4 bytes)
* bytes)
*/ */
size = (uint32)(sizeof(uint32) * 2 + sizeof(uint32) + sizeof(uint32) size = (uint32)(sizeof(uint32) * 2 + sizeof(uint32))
+ sizeof(uint64))
/* Size of WasmRefType - inner padding (ref type + nullable + /* Size of WasmRefType - inner padding (ref type + nullable +
heap_type) */ heap_type) */
+ 8; + 8;
size += get_init_expr_size(comp_ctx, comp_ctx->comp_data,
&table_init_data->offset);
/* + value count/func index count (4 bytes) + init_values */ /* + value count/func index count (4 bytes) + init_values */
size += sizeof(uint32); size += sizeof(uint32);
for (i = 0; i < table_init_data->value_count; i++) { for (i = 0; i < table_init_data->value_count; i++) {
@ -1811,6 +1827,10 @@ static bool
aot_emit_init_expr(uint8 *buf, uint8 *buf_end, uint32 *p_offset, aot_emit_init_expr(uint8 *buf, uint8 *buf_end, uint32 *p_offset,
AOTCompContext *comp_ctx, InitializerExpression *expr) AOTCompContext *comp_ctx, InitializerExpression *expr)
{ {
if (expr == NULL) {
aot_set_last_error("invalid init expr.");
return false;
}
uint32 offset = *p_offset; uint32 offset = *p_offset;
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
WASMModule *module = comp_ctx->comp_data->wasm_module; WASMModule *module = comp_ctx->comp_data->wasm_module;
@ -1824,31 +1844,31 @@ aot_emit_init_expr(uint8 *buf, uint8 *buf_end, uint32 *p_offset,
break; break;
case INIT_EXPR_TYPE_I32_CONST: case INIT_EXPR_TYPE_I32_CONST:
case INIT_EXPR_TYPE_F32_CONST: case INIT_EXPR_TYPE_F32_CONST:
EMIT_U32(expr->u.i32); EMIT_U32(expr->u.unary.v.i32);
break; break;
case INIT_EXPR_TYPE_I64_CONST: case INIT_EXPR_TYPE_I64_CONST:
case INIT_EXPR_TYPE_F64_CONST: case INIT_EXPR_TYPE_F64_CONST:
EMIT_U64(expr->u.i64); EMIT_U64(expr->u.unary.v.i64);
break; break;
case INIT_EXPR_TYPE_V128_CONST: case INIT_EXPR_TYPE_V128_CONST:
EMIT_V128(expr->u.v128); EMIT_V128(expr->u.unary.v.v128);
break; break;
case INIT_EXPR_TYPE_GET_GLOBAL: case INIT_EXPR_TYPE_GET_GLOBAL:
EMIT_U32(expr->u.global_index); EMIT_U32(expr->u.unary.v.global_index);
break; break;
case INIT_EXPR_TYPE_FUNCREF_CONST: case INIT_EXPR_TYPE_FUNCREF_CONST:
case INIT_EXPR_TYPE_REFNULL_CONST: case INIT_EXPR_TYPE_REFNULL_CONST:
EMIT_U32(expr->u.ref_index); EMIT_U32(expr->u.unary.v.ref_index);
break; break;
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
case INIT_EXPR_TYPE_I31_NEW: case INIT_EXPR_TYPE_I31_NEW:
EMIT_U32(expr->u.i32); EMIT_U32(expr->u.unary.v.i32);
break; break;
case INIT_EXPR_TYPE_STRUCT_NEW: case INIT_EXPR_TYPE_STRUCT_NEW:
{ {
uint32 i; uint32 i;
WASMStructNewInitValues *init_values = WASMStructNewInitValues *init_values =
(WASMStructNewInitValues *)expr->u.data; (WASMStructNewInitValues *)expr->u.unary.v.data;
WASMStructType *struct_type = NULL; WASMStructType *struct_type = NULL;
EMIT_U32(init_values->type_idx); EMIT_U32(init_values->type_idx);
@ -1879,21 +1899,21 @@ aot_emit_init_expr(uint8 *buf, uint8 *buf_end, uint32 *p_offset,
break; break;
} }
case INIT_EXPR_TYPE_STRUCT_NEW_DEFAULT: case INIT_EXPR_TYPE_STRUCT_NEW_DEFAULT:
EMIT_U32(expr->u.type_index); EMIT_U32(expr->u.unary.v.type_index);
break; break;
case INIT_EXPR_TYPE_ARRAY_NEW_DEFAULT: case INIT_EXPR_TYPE_ARRAY_NEW_DEFAULT:
{ {
WASMArrayType *array_type = NULL; WASMArrayType *array_type = NULL;
bh_assert(expr->u.array_new_default.type_index bh_assert(expr->u.unary.v.array_new_default.type_index
< module->type_count); < module->type_count);
array_type = array_type =
(WASMArrayType *) (WASMArrayType *)
module->types[expr->u.array_new_default.type_index]; module->types[expr->u.unary.v.array_new_default.type_index];
EMIT_U32(array_type->elem_type); EMIT_U32(array_type->elem_type);
EMIT_U32(expr->u.array_new_default.type_index); EMIT_U32(expr->u.unary.v.array_new_default.type_index);
EMIT_U32(expr->u.array_new_default.length); EMIT_U32(expr->u.unary.v.array_new_default.length);
break; break;
} }
case INIT_EXPR_TYPE_ARRAY_NEW: case INIT_EXPR_TYPE_ARRAY_NEW:
@ -1901,7 +1921,7 @@ aot_emit_init_expr(uint8 *buf, uint8 *buf_end, uint32 *p_offset,
{ {
uint32 value_count, i, field_size; uint32 value_count, i, field_size;
WASMArrayNewInitValues *init_values = WASMArrayNewInitValues *init_values =
(WASMArrayNewInitValues *)expr->u.data; (WASMArrayNewInitValues *)expr->u.unary.v.data;
WASMArrayType *array_type = NULL; WASMArrayType *array_type = NULL;
bh_assert(init_values->type_idx < module->type_count); bh_assert(init_values->type_idx < module->type_count);
@ -1933,6 +1953,25 @@ aot_emit_init_expr(uint8 *buf, uint8 *buf_end, uint32 *p_offset,
break; break;
} }
#endif /* end of WASM_ENABLE_GC != 0 */ #endif /* end of WASM_ENABLE_GC != 0 */
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
case INIT_EXPR_TYPE_I32_ADD:
case INIT_EXPR_TYPE_I32_SUB:
case INIT_EXPR_TYPE_I32_MUL:
case INIT_EXPR_TYPE_I64_ADD:
case INIT_EXPR_TYPE_I64_SUB:
case INIT_EXPR_TYPE_I64_MUL:
if (comp_ctx->enable_extended_const) {
if (!aot_emit_init_expr(buf, buf_end, &offset, comp_ctx,
expr->u.binary.l_expr)) {
return false;
}
if (!aot_emit_init_expr(buf, buf_end, &offset, comp_ctx,
expr->u.binary.r_expr)) {
return false;
}
}
break;
#endif
default: default:
aot_set_last_error("invalid init expr type."); aot_set_last_error("invalid init expr type.");
return false; return false;
@ -2034,8 +2073,10 @@ aot_emit_table_info(uint8 *buf, uint8 *buf_end, uint32 *p_offset,
EMIT_U32(init_datas[i]->mode); EMIT_U32(init_datas[i]->mode);
EMIT_U32(init_datas[i]->elem_type); EMIT_U32(init_datas[i]->elem_type);
EMIT_U32(init_datas[i]->table_index); EMIT_U32(init_datas[i]->table_index);
EMIT_U32(init_datas[i]->offset.init_expr_type); if (!aot_emit_init_expr(buf, buf_end, &offset, comp_ctx,
EMIT_U64(init_datas[i]->offset.u.i64); &init_datas[i]->offset))
return false;
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
if (comp_ctx->enable_gc && init_datas[i]->elem_ref_type) { if (comp_ctx->enable_gc && init_datas[i]->elem_ref_type) {
EMIT_U16(init_datas[i]->elem_ref_type->ref_ht_common.ref_type); EMIT_U16(init_datas[i]->elem_ref_type->ref_ht_common.ref_type);


@ -347,7 +347,8 @@ call_aot_invoke_c_api_native(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
/* Get &c_api_func_imports[func_idx], note size of CApiFuncImport /* Get &c_api_func_imports[func_idx], note size of CApiFuncImport
is pointer_size * 3 */ is pointer_size * 3 */
offset = I32_CONST((comp_ctx->pointer_size * 3) * import_func_idx); offset = I32_CONST((unsigned long long)comp_ctx->pointer_size * 3
* import_func_idx);
CHECK_LLVM_CONST(offset); CHECK_LLVM_CONST(offset);
c_api_func_import = c_api_func_import =
LLVMBuildInBoundsGEP2(comp_ctx->builder, INT8_TYPE, c_api_func_imports, LLVMBuildInBoundsGEP2(comp_ctx->builder, INT8_TYPE, c_api_func_imports,


@ -10,6 +10,40 @@
#include "aot_intrinsic.h" #include "aot_intrinsic.h"
#include "aot_emit_control.h" #include "aot_emit_control.h"
#define BUILD_IS_NOT_NULL(value, res, name) \
do { \
if (!(res = LLVMBuildIsNotNull(comp_ctx->builder, value, name))) { \
aot_set_last_error("llvm build is not null failed."); \
goto fail; \
} \
} while (0)
#define BUILD_BR(llvm_block) \
do { \
if (!LLVMBuildBr(comp_ctx->builder, llvm_block)) { \
aot_set_last_error("llvm build br failed."); \
goto fail; \
} \
} while (0)
#define BUILD_COND_BR(value_if, block_then, block_else) \
do { \
if (!LLVMBuildCondBr(comp_ctx->builder, value_if, block_then, \
block_else)) { \
aot_set_last_error("llvm build cond br failed."); \
goto fail; \
} \
} while (0)
#define BUILD_TRUNC(value, data_type) \
do { \
if (!(value = LLVMBuildTrunc(comp_ctx->builder, value, data_type, \
"val_trunc"))) { \
aot_set_last_error("llvm build trunc failed."); \
goto fail; \
} \
} while (0)
#define BUILD_ICMP(op, left, right, res, name) \ #define BUILD_ICMP(op, left, right, res, name) \
do { \ do { \
if (!(res = \ if (!(res = \
@ -111,6 +145,418 @@ ffs(int n)
static LLVMValueRef static LLVMValueRef
get_memory_curr_page_count(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx); get_memory_curr_page_count(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx);
#if WASM_ENABLE_SHARED_HEAP != 0
uint32
get_module_inst_extra_offset(AOTCompContext *comp_ctx);
#define BUILD_LOAD_PTR(ptr, data_type, res) \
do { \
if (!(res = LLVMBuildLoad2(comp_ctx->builder, data_type, ptr, \
"load_value"))) { \
aot_set_last_error("llvm build load failed"); \
goto fail; \
} \
} while (0)
/* Update last used shared heap info(alloc ptr) in function ctx:
* 1. shared_heap_start_off 2. shared_heap_end_off 3. shared_heap_base_addr_adj
*/
bool
aot_check_shared_heap_chain_and_update(AOTCompContext *comp_ctx,
AOTFuncContext *func_ctx,
LLVMBasicBlockRef check_succ,
LLVMValueRef start_offset,
LLVMValueRef bytes, bool is_memory64)
{
LLVMValueRef param_values[7], ret_value, func, value, cmp;
LLVMTypeRef param_types[7], ret_type, func_type, func_ptr_type;
param_types[0] = INT8_PTR_TYPE;
param_types[1] = INTPTR_T_TYPE;
param_types[2] = SIZE_T_TYPE;
param_types[3] = INTPTR_T_PTR_TYPE;
param_types[4] = INTPTR_T_PTR_TYPE;
param_types[5] = INT8_PTR_TYPE;
param_types[6] = INT8_TYPE;
ret_type = INT8_TYPE;
GET_AOT_FUNCTION(wasm_runtime_check_and_update_last_used_shared_heap, 7);
/* Call function */
param_values[0] = func_ctx->aot_inst;
param_values[1] = start_offset;
param_values[2] = bytes;
/* pass alloc ptr */
param_values[3] = func_ctx->shared_heap_start_off;
param_values[4] = func_ctx->shared_heap_end_off;
param_values[5] = func_ctx->shared_heap_base_addr_adj;
param_values[6] = is_memory64 ? I8_ONE : I8_ZERO;
if (!(ret_value = LLVMBuildCall2(comp_ctx->builder, func_type, func,
param_values, 7, "call"))) {
aot_set_last_error("llvm build call failed.");
goto fail;
}
BUILD_ICMP(LLVMIntEQ, ret_value, I8_ZERO, cmp, "shared_heap_oob");
if (!aot_emit_exception(comp_ctx, func_ctx,
EXCE_OUT_OF_BOUNDS_MEMORY_ACCESS, true, cmp,
check_succ)) {
goto fail;
}
return true;
fail:
return false;
}
/*
* Setup the basic blocks for shared heap and shared chain memory checks.
*
* Arguments:
* block_curr: The current basic block.
* app_addr_in_cache_shared_heap: Output, block for cache shared heap.
* app_addr_in_linear_mem: Output, block for linear memory.
* app_addr_in_shared_heap_chain: Output, block for shared heap chain
* (only for shared heap chain).
* check_shared_heap_chain: Output, block for checking shared heap chain
* (only for shared heap chain).
*
* Topology:
* If enable_shared_heap:
* block_curr -> app_addr_in_cache_shared_heap
* -> app_addr_in_linear_mem
* If enable_shared_chain:
* block_curr -> app_addr_in_shared_heap_chain
* -> app_addr_in_cache_shared_heap
* -> check_shared_heap_chain
* -> app_addr_in_linear_mem
*/
static bool
setup_shared_heap_blocks(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
LLVMBasicBlockRef block_curr,
LLVMBasicBlockRef *app_addr_in_cache_shared_heap,
LLVMBasicBlockRef *app_addr_in_linear_mem,
LLVMBasicBlockRef *app_addr_in_shared_heap_chain,
LLVMBasicBlockRef *check_shared_heap_chain)
{
ADD_BASIC_BLOCK(*app_addr_in_cache_shared_heap,
"app_addr_in_cache_shared_heap");
ADD_BASIC_BLOCK(*app_addr_in_linear_mem, "app_addr_in_linear_mem");
if (comp_ctx->enable_shared_heap) {
LLVMMoveBasicBlockAfter(*app_addr_in_cache_shared_heap, block_curr);
LLVMMoveBasicBlockAfter(*app_addr_in_linear_mem,
*app_addr_in_cache_shared_heap);
}
else if (comp_ctx->enable_shared_chain) {
ADD_BASIC_BLOCK(*app_addr_in_shared_heap_chain,
"app_addr_in_shared_heap_chain");
ADD_BASIC_BLOCK(*check_shared_heap_chain, "check_shared_heap_chain");
LLVMMoveBasicBlockAfter(*app_addr_in_shared_heap_chain, block_curr);
LLVMMoveBasicBlockAfter(*app_addr_in_cache_shared_heap,
*app_addr_in_shared_heap_chain);
LLVMMoveBasicBlockAfter(*check_shared_heap_chain,
*app_addr_in_cache_shared_heap);
LLVMMoveBasicBlockAfter(*app_addr_in_linear_mem,
*app_addr_in_cache_shared_heap);
}
return true;
fail:
return false;
}
/*
* Build a branch to check if start_offset is in the shared heap chain region.
*
* Arguments:
* start_offset: The offset to check.
* app_addr_in_shared_heap_chain: Block to branch if in shared heap chain.
* app_addr_in_linear_mem: Block to branch if not in shared heap chain.
*/
static bool
build_check_app_addr_in_shared_heap_chain(
AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
LLVMValueRef start_offset, LLVMBasicBlockRef app_addr_in_shared_heap_chain,
LLVMBasicBlockRef app_addr_in_linear_mem)
{
LLVMValueRef is_in_shared_heap = NULL;
/* Use start_offset > func_ctx->shared_heap_head_start_off to test
* start_off falls in shared heap chain memory region. The shared heap
* chain oob will be detected in app_addr_in_shared_heap block or
* aot_check_shared_heap_chain_and_update function
*/
BUILD_ICMP(LLVMIntUGT, start_offset, func_ctx->shared_heap_head_start_off,
is_in_shared_heap, "shared_heap_lb_cmp");
BUILD_COND_BR(is_in_shared_heap, app_addr_in_shared_heap_chain,
app_addr_in_linear_mem);
SET_BUILD_POS(app_addr_in_shared_heap_chain);
return true;
fail:
return false;
}
/*
* Build the conditional branch for cache shared heap or shared heap chain.
*
* Arguments:
* cmp: The condition for being in cache shared heap.
* app_addr_in_cache_shared_heap: Block for cache shared heap.
* app_addr_in_linear_mem: Block for linear memory.
* check_shared_heap_chain: Block for checking shared heap chain.
* bytes: The access size in bytes.
* start_offset: The offset to check.
* is_memory64: Whether memory is 64-bit.
*/
static bool
build_shared_heap_conditional_branching(
AOTCompContext *comp_ctx, AOTFuncContext *func_ctx, LLVMValueRef cmp,
LLVMBasicBlockRef app_addr_in_cache_shared_heap,
LLVMBasicBlockRef app_addr_in_linear_mem,
LLVMBasicBlockRef check_shared_heap_chain, LLVMValueRef bytes,
LLVMValueRef start_offset, bool is_memory64)
{
if (comp_ctx->enable_shared_heap) {
BUILD_COND_BR(cmp, app_addr_in_cache_shared_heap,
app_addr_in_linear_mem);
}
else if (comp_ctx->enable_shared_chain) {
BUILD_COND_BR(cmp, app_addr_in_cache_shared_heap,
check_shared_heap_chain);
SET_BUILD_POS(check_shared_heap_chain);
if (!aot_check_shared_heap_chain_and_update(
comp_ctx, func_ctx, app_addr_in_cache_shared_heap, start_offset,
bytes, is_memory64))
goto fail;
}
return true;
fail:
return false;
}
/*
* Get the native address in the cache shared heap.
*
* Arguments:
* start_offset: The offset to use for address calculation.
* maddr: Output, the native address that in the cache shared heap.
*/
static bool
build_get_maddr_in_cache_shared_heap(AOTCompContext *comp_ctx,
AOTFuncContext *func_ctx,
LLVMValueRef start_offset,
LLVMValueRef *maddr)
{
LLVMValueRef shared_heap_base_addr_adj;
/* load the local variable */
BUILD_LOAD_PTR(func_ctx->shared_heap_base_addr_adj, INT8_PTR_TYPE,
shared_heap_base_addr_adj);
if (!(*maddr = LLVMBuildInBoundsGEP2(
comp_ctx->builder, INT8_TYPE, shared_heap_base_addr_adj,
&start_offset, 1, "maddr_cache_shared_heap"))) {
aot_set_last_error("llvm build inbounds gep failed");
goto fail;
}
return true;
fail:
return false;
}
/*
* Check for memory overflow in shared heap for normal memory access.
*
* Arguments:
* block_curr: The current basic block.
* block_maddr_phi: The phi block for memory address.
* maddr_phi: The phi node for memory address.
* start_offset: The first offset to check.
* mem_base_addr: The base address of memory. Only used with segue.
* bytes_u32: The access size in bytes.
* is_memory64: Whether memory is wasm64 memory.
* is_target_64bit: Whether target is 64-bit.
* enable_segue: Whether to use segment register addressing.
*/
static bool
aot_check_shared_heap_memory_overflow(
AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
LLVMBasicBlockRef block_curr, LLVMBasicBlockRef block_maddr_phi,
LLVMValueRef maddr_phi, LLVMValueRef start_offset,
LLVMValueRef mem_base_addr, uint32 bytes_u32, bool is_memory64,
bool is_target_64bit, bool enable_segue)
{
LLVMBasicBlockRef app_addr_in_cache_shared_heap, app_addr_in_linear_mem;
LLVMBasicBlockRef app_addr_in_shared_heap_chain = NULL,
check_shared_heap_chain = NULL;
LLVMValueRef cmp, cmp1, cmp2, shared_heap_start_off, shared_heap_end_off,
shared_heap_check_bound, maddr = NULL;
/* On 64/32-bit target, the offset is 64/32-bit */
LLVMTypeRef offset_type = is_target_64bit ? I64_TYPE : I32_TYPE;
LLVMValueRef length, bytes;
if (!setup_shared_heap_blocks(
comp_ctx, func_ctx, block_curr, &app_addr_in_cache_shared_heap,
&app_addr_in_linear_mem, &app_addr_in_shared_heap_chain,
&check_shared_heap_chain))
goto fail;
LLVMMoveBasicBlockAfter(block_maddr_phi, app_addr_in_linear_mem);
/* Early branching when it's not in shared heap chain at all */
if (comp_ctx->enable_shared_chain
&& !build_check_app_addr_in_shared_heap_chain(
comp_ctx, func_ctx, start_offset, app_addr_in_shared_heap_chain,
app_addr_in_linear_mem))
goto fail;
/* Load the local variable of the function */
BUILD_LOAD_PTR(func_ctx->shared_heap_start_off, offset_type,
shared_heap_start_off);
BUILD_LOAD_PTR(func_ctx->shared_heap_end_off, offset_type,
shared_heap_end_off);
/* Check if the app address is in the cache shared heap range.
* If yes, branch to the cache branch; if not, check the shared heap chain
*/
BUILD_ICMP(LLVMIntUGE, start_offset, shared_heap_start_off, cmp,
"cmp_cache_shared_heap_start");
length =
is_target_64bit ? I64_CONST(bytes_u32 - 1) : I32_CONST(bytes_u32 - 1);
CHECK_LLVM_CONST(length);
BUILD_OP(Sub, shared_heap_end_off, length, shared_heap_check_bound,
"cache_shared_heap_end_bound");
BUILD_ICMP(LLVMIntULE, start_offset, shared_heap_check_bound, cmp1,
"cmp_cache_shared_heap_end");
BUILD_OP(And, cmp, cmp1, cmp2, "is_in_cache_shared_heap");
/* Conditional branching based on whether in cached shared heap */
bytes = is_target_64bit ? I64_CONST(bytes_u32) : I32_CONST(bytes_u32);
if (!build_shared_heap_conditional_branching(
comp_ctx, func_ctx, cmp2, app_addr_in_cache_shared_heap,
app_addr_in_linear_mem, check_shared_heap_chain, bytes,
start_offset, is_memory64))
goto fail;
SET_BUILD_POS(app_addr_in_cache_shared_heap);
if (!build_get_maddr_in_cache_shared_heap(comp_ctx, func_ctx, start_offset,
&maddr))
goto fail;
if (enable_segue) {
LLVMValueRef mem_base_addr_u64, maddr_u64, offset_to_mem_base;
if (!(maddr_u64 = LLVMBuildPtrToInt(comp_ctx->builder, maddr, I64_TYPE,
"maddr_u64"))
|| !(mem_base_addr_u64 =
LLVMBuildPtrToInt(comp_ctx->builder, mem_base_addr,
I64_TYPE, "mem_base_addr_u64"))) {
aot_set_last_error("llvm build ptr to int failed");
goto fail;
}
if (!(offset_to_mem_base =
LLVMBuildSub(comp_ctx->builder, maddr_u64, mem_base_addr_u64,
"offset_to_mem_base"))) {
aot_set_last_error("llvm build sub failed");
goto fail;
}
if (!(maddr = LLVMBuildIntToPtr(comp_ctx->builder, offset_to_mem_base,
INT8_PTR_TYPE_GS,
"maddr_shared_heap_segue"))) {
aot_set_last_error("llvm build int to ptr failed.");
goto fail;
}
}
LLVMAddIncoming(maddr_phi, &maddr, &app_addr_in_cache_shared_heap, 1);
BUILD_BR(block_maddr_phi);
SET_BUILD_POS(app_addr_in_linear_mem);
return true;
fail:
return false;
}
/*
* Check for memory overflow in shared heap for bulk memory access.
*
* Arguments:
* block_curr: The current basic block.
* block_maddr_phi: The phi block for memory address.
* check_succ: The block to branch to on success.
* maddr_phi: The phi node for memory address.
* start_offset: The offset to check.
* max_addr: The maximum address to check.
* bytes: The access size in bytes (LLVMValueRef).
* is_memory64: Whether memory is wasm64 memory.
* is_target_64bit: Whether target is 64-bit.
*/
static bool
aot_check_bulk_memory_shared_heap_memory_overflow(
AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
LLVMBasicBlockRef block_curr, LLVMBasicBlockRef block_maddr_phi,
LLVMBasicBlockRef check_succ, LLVMValueRef maddr_phi,
LLVMValueRef start_offset, LLVMValueRef max_addr, LLVMValueRef bytes,
bool is_memory64, bool is_target_64bit)
{
LLVMBasicBlockRef app_addr_in_cache_shared_heap, app_addr_in_linear_mem;
LLVMBasicBlockRef app_addr_in_shared_heap_chain = NULL,
check_shared_heap_chain = NULL;
LLVMValueRef cmp, cmp1, cmp2, shared_heap_start_off, shared_heap_end_off,
maddr = NULL, max_offset;
/* On 64/32-bit target, the offset is 64/32-bit */
LLVMTypeRef offset_type = is_target_64bit ? I64_TYPE : I32_TYPE;
if (!setup_shared_heap_blocks(
comp_ctx, func_ctx, block_curr, &app_addr_in_cache_shared_heap,
&app_addr_in_linear_mem, &app_addr_in_shared_heap_chain,
&check_shared_heap_chain))
goto fail;
LLVMMoveBasicBlockAfter(block_maddr_phi, check_succ);
/* Early branching when it's not in shared heap chain at all */
if (comp_ctx->enable_shared_chain
&& !build_check_app_addr_in_shared_heap_chain(
comp_ctx, func_ctx, start_offset, app_addr_in_shared_heap_chain,
app_addr_in_linear_mem))
goto fail;
/* Load the local variable of the function */
BUILD_LOAD_PTR(func_ctx->shared_heap_start_off, offset_type,
shared_heap_start_off);
BUILD_LOAD_PTR(func_ctx->shared_heap_end_off, offset_type,
shared_heap_end_off);
/* Check if the app address is in the cache shared heap range.
* If yes, branch to the cache branch; if not, check the shared heap chain
*/
BUILD_ICMP(LLVMIntUGE, start_offset, shared_heap_start_off, cmp,
"cmp_cache_shared_heap_start");
BUILD_OP(Add, max_addr, is_target_64bit ? I64_NEG_ONE : I32_NEG_ONE,
max_offset, "max_offset");
BUILD_ICMP(LLVMIntULE, max_offset, shared_heap_end_off, cmp1,
"cmp_cache_shared_heap_end");
BUILD_OP(And, cmp, cmp1, cmp2, "is_in_cache_shared_heap");
/* Conditional branching based on whether in cached shared heap */
if (!build_shared_heap_conditional_branching(
comp_ctx, func_ctx, cmp2, app_addr_in_cache_shared_heap,
app_addr_in_linear_mem, check_shared_heap_chain, bytes,
start_offset, is_memory64))
goto fail;
SET_BUILD_POS(app_addr_in_cache_shared_heap);
if (!build_get_maddr_in_cache_shared_heap(comp_ctx, func_ctx, start_offset,
&maddr))
goto fail;
LLVMAddIncoming(maddr_phi, &maddr, &app_addr_in_cache_shared_heap, 1);
BUILD_BR(block_maddr_phi);
SET_BUILD_POS(app_addr_in_linear_mem);
return true;
fail:
return false;
}
#endif
LLVMValueRef LLVMValueRef
aot_check_memory_overflow(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx, aot_check_memory_overflow(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
mem_offset_t offset, uint32 bytes, bool enable_segue, mem_offset_t offset, uint32 bytes, bool enable_segue,
@ -118,10 +564,10 @@ aot_check_memory_overflow(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
{ {
LLVMValueRef offset_const = LLVMValueRef offset_const =
MEMORY64_COND_VALUE(I64_CONST(offset), I32_CONST(offset)); MEMORY64_COND_VALUE(I64_CONST(offset), I32_CONST(offset));
LLVMValueRef addr, maddr, maddr_phi = NULL, offset1, cmp1, cmp2, cmp; LLVMValueRef addr, maddr, offset1, cmp1, cmp;
LLVMValueRef mem_base_addr, mem_check_bound; LLVMValueRef mem_base_addr, mem_check_bound;
LLVMBasicBlockRef block_curr = LLVMGetInsertBlock(comp_ctx->builder); LLVMBasicBlockRef block_curr = LLVMGetInsertBlock(comp_ctx->builder);
LLVMBasicBlockRef check_succ, block_maddr_phi = NULL; LLVMBasicBlockRef check_succ;
AOTValue *aot_value_top; AOTValue *aot_value_top;
uint32 local_idx_of_aot_value = 0; uint32 local_idx_of_aot_value = 0;
uint64 const_value; uint64 const_value;
@ -136,6 +582,10 @@ aot_check_memory_overflow(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
#else #else
bool is_memory64 = IS_MEMORY64; bool is_memory64 = IS_MEMORY64;
#endif #endif
#if WASM_ENABLE_SHARED_HEAP != 0
LLVMValueRef maddr_phi = NULL;
LLVMBasicBlockRef block_maddr_phi = NULL;
#endif
is_target_64bit = (comp_ctx->pointer_size == sizeof(uint64)) ? true : false; is_target_64bit = (comp_ctx->pointer_size == sizeof(uint64)) ? true : false;
@ -262,6 +712,13 @@ aot_check_memory_overflow(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
*alignp = 1; *alignp = 1;
} }
/* The overflow check needs to be done under following conditions:
* 1. In 64-bit target, offset and addr will be extended to 64-bit
* 1.1 offset + addr can overflow when it's memory64
* 1.2 no overflow when it's memory32
* 2. In 32-bit target, offset and addr will be 32-bit
* 2.1 offset + addr can overflow when it's memory32
*/
if (is_target_64bit) { if (is_target_64bit) {
if (!(offset_const = LLVMBuildZExt(comp_ctx->builder, offset_const, if (!(offset_const = LLVMBuildZExt(comp_ctx->builder, offset_const,
I64_TYPE, "offset_i64")) I64_TYPE, "offset_i64"))
@ -275,7 +732,9 @@ aot_check_memory_overflow(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
/* offset1 = offset + addr; */ /* offset1 = offset + addr; */
BUILD_OP(Add, offset_const, addr, offset1, "offset1"); BUILD_OP(Add, offset_const, addr, offset1, "offset1");
if (is_memory64 && comp_ctx->enable_bound_check) { /* 1.1 offset + addr can overflow when it's memory64
* 2.1 Or when it's on 32-bit platform */
if (is_memory64 || !is_target_64bit) {
/* Check whether integer overflow occurs in offset + addr */ /* Check whether integer overflow occurs in offset + addr */
LLVMBasicBlockRef check_integer_overflow_end; LLVMBasicBlockRef check_integer_overflow_end;
ADD_BASIC_BLOCK(check_integer_overflow_end, ADD_BASIC_BLOCK(check_integer_overflow_end,
@ -289,23 +748,14 @@ aot_check_memory_overflow(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
goto fail; goto fail;
} }
SET_BUILD_POS(check_integer_overflow_end); SET_BUILD_POS(check_integer_overflow_end);
block_curr = check_integer_overflow_end;
} }
if (comp_ctx->enable_shared_heap /* TODO: && mem_idx == 0 */) { #if WASM_ENABLE_SHARED_HEAP != 0
LLVMBasicBlockRef app_addr_in_shared_heap, app_addr_in_linear_mem; if (comp_ctx->enable_shared_heap
LLVMValueRef is_in_shared_heap, shared_heap_check_bound = NULL; || comp_ctx->enable_shared_chain /* TODO: && mem_idx == 0 */) {
/* Add basic blocks */
ADD_BASIC_BLOCK(app_addr_in_shared_heap, "app_addr_in_shared_heap");
ADD_BASIC_BLOCK(app_addr_in_linear_mem, "app_addr_in_linear_mem");
ADD_BASIC_BLOCK(block_maddr_phi, "maddr_phi"); ADD_BASIC_BLOCK(block_maddr_phi, "maddr_phi");
SET_BUILD_POS(block_maddr_phi);
LLVMMoveBasicBlockAfter(app_addr_in_shared_heap, block_curr);
LLVMMoveBasicBlockAfter(app_addr_in_linear_mem,
app_addr_in_shared_heap);
LLVMMoveBasicBlockAfter(block_maddr_phi, app_addr_in_linear_mem);
LLVMPositionBuilderAtEnd(comp_ctx->builder, block_maddr_phi);
if (!(maddr_phi = if (!(maddr_phi =
LLVMBuildPhi(comp_ctx->builder, LLVMBuildPhi(comp_ctx->builder,
enable_segue ? INT8_PTR_TYPE_GS : INT8_PTR_TYPE, enable_segue ? INT8_PTR_TYPE_GS : INT8_PTR_TYPE,
@ -313,110 +763,16 @@ aot_check_memory_overflow(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
aot_set_last_error("llvm build phi failed"); aot_set_last_error("llvm build phi failed");
goto fail; goto fail;
} }
SET_BUILD_POS(block_curr);
LLVMPositionBuilderAtEnd(comp_ctx->builder, block_curr); if (!aot_check_shared_heap_memory_overflow(
comp_ctx, func_ctx, block_curr, block_maddr_phi, maddr_phi,
if (!is_target_64bit) { offset1, mem_base_addr, bytes, is_memory64, is_target_64bit,
/* Check whether integer overflow occurs in addr + offset */ enable_segue)) {
LLVMBasicBlockRef check_integer_overflow_end;
ADD_BASIC_BLOCK(check_integer_overflow_end,
"check_integer_overflow_end");
LLVMMoveBasicBlockAfter(check_integer_overflow_end, block_curr);
BUILD_ICMP(LLVMIntULT, offset1, addr, cmp1, "cmp1");
if (!aot_emit_exception(comp_ctx, func_ctx,
EXCE_OUT_OF_BOUNDS_MEMORY_ACCESS, true,
cmp1, check_integer_overflow_end)) {
goto fail;
}
SET_BUILD_POS(check_integer_overflow_end);
}
shared_heap_check_bound =
is_memory64 ? I64_CONST(UINT64_MAX - bytes + 1)
: (comp_ctx->pointer_size == sizeof(uint64)
? I64_CONST(UINT32_MAX - bytes + 1)
: I32_CONST(UINT32_MAX - bytes + 1));
CHECK_LLVM_CONST(shared_heap_check_bound);
/* Check whether the bytes to access are in shared heap */
if (!comp_ctx->enable_bound_check) {
/* Use IntUGT but not IntUGE to compare, since (1) in the ems
memory allocator, the hmu node includes hmu header and hmu
memory, only the latter is returned to the caller as the
allocated memory, the hmu header isn't returned so the
first byte of the shared heap won't be accessed, (2) using
IntUGT gets better performance than IntUGE in some cases */
BUILD_ICMP(LLVMIntUGT, offset1, func_ctx->shared_heap_start_off,
is_in_shared_heap, "is_in_shared_heap");
/* We don't check the shared heap's upper boundary if boundary
check isn't enabled, the runtime may also use the guard pages
of shared heap to check the boundary if hardware boundary
check feature is enabled. */
}
else {
/* Use IntUGT but not IntUGE to compare, same as above */
BUILD_ICMP(LLVMIntUGT, offset1, func_ctx->shared_heap_start_off,
cmp1, "cmp1");
/* Check the shared heap's upper boundary if boundary check is
enabled */
BUILD_ICMP(LLVMIntULE, offset1, shared_heap_check_bound, cmp2,
"cmp2");
BUILD_OP(And, cmp1, cmp2, is_in_shared_heap, "is_in_shared_heap");
}
if (!LLVMBuildCondBr(comp_ctx->builder, is_in_shared_heap,
app_addr_in_shared_heap, app_addr_in_linear_mem)) {
aot_set_last_error("llvm build cond br failed");
goto fail; goto fail;
} }
LLVMPositionBuilderAtEnd(comp_ctx->builder, app_addr_in_shared_heap);
/* Get native address inside shared heap */
if (!(maddr =
LLVMBuildInBoundsGEP2(comp_ctx->builder, INT8_TYPE,
func_ctx->shared_heap_base_addr_adj,
&offset1, 1, "maddr_shared_heap"))) {
aot_set_last_error("llvm build inbounds gep failed");
goto fail;
}
if (enable_segue) {
LLVMValueRef mem_base_addr_u64, maddr_u64, offset_to_mem_base;
if (!(maddr_u64 = LLVMBuildPtrToInt(comp_ctx->builder, maddr,
I64_TYPE, "maddr_u64"))
|| !(mem_base_addr_u64 =
LLVMBuildPtrToInt(comp_ctx->builder, mem_base_addr,
I64_TYPE, "mem_base_addr_u64"))) {
aot_set_last_error("llvm build ptr to int failed");
goto fail;
}
if (!(offset_to_mem_base =
LLVMBuildSub(comp_ctx->builder, maddr_u64,
mem_base_addr_u64, "offset_to_mem_base"))) {
aot_set_last_error("llvm build sub failed");
goto fail;
}
if (!(maddr = LLVMBuildIntToPtr(
comp_ctx->builder, offset_to_mem_base, INT8_PTR_TYPE_GS,
"maddr_shared_heap_segue"))) {
aot_set_last_error("llvm build int to ptr failed.");
goto fail;
}
}
LLVMAddIncoming(maddr_phi, &maddr, &app_addr_in_shared_heap, 1);
if (!LLVMBuildBr(comp_ctx->builder, block_maddr_phi)) {
aot_set_last_error("llvm build br failed");
goto fail;
}
LLVMPositionBuilderAtEnd(comp_ctx->builder, app_addr_in_linear_mem);
block_curr = LLVMGetInsertBlock(comp_ctx->builder);
} }
#endif
if (comp_ctx->enable_bound_check if (comp_ctx->enable_bound_check
&& !(is_local_of_aot_value && !(is_local_of_aot_value
@ -449,21 +805,7 @@ aot_check_memory_overflow(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
goto fail; goto fail;
} }
if (is_target_64bit) { BUILD_ICMP(LLVMIntUGT, offset1, mem_check_bound, cmp, "cmp");
BUILD_ICMP(LLVMIntUGT, offset1, mem_check_bound, cmp, "cmp");
}
else {
if (comp_ctx->enable_shared_heap /* TODO: && mem_idx == 0 */) {
/* Check integer overflow has been checked above */
BUILD_ICMP(LLVMIntUGT, offset1, mem_check_bound, cmp, "cmp");
}
else {
/* Check integer overflow */
BUILD_ICMP(LLVMIntULT, offset1, addr, cmp1, "cmp1");
BUILD_ICMP(LLVMIntUGT, offset1, mem_check_bound, cmp2, "cmp2");
BUILD_OP(Or, cmp1, cmp2, cmp, "cmp");
}
}
/* Add basic blocks */ /* Add basic blocks */
ADD_BASIC_BLOCK(check_succ, "check_succ"); ADD_BASIC_BLOCK(check_succ, "check_succ");
@ -509,17 +851,20 @@ aot_check_memory_overflow(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
} }
} }
if (comp_ctx->enable_shared_heap /* TODO: && mem_idx == 0 */) { #if WASM_ENABLE_SHARED_HEAP != 0
if (comp_ctx->enable_shared_heap
|| comp_ctx->enable_shared_chain /* TODO: && mem_idx == 0 */) {
block_curr = LLVMGetInsertBlock(comp_ctx->builder); block_curr = LLVMGetInsertBlock(comp_ctx->builder);
LLVMAddIncoming(maddr_phi, &maddr, &block_curr, 1); LLVMAddIncoming(maddr_phi, &maddr, &block_curr, 1);
if (!LLVMBuildBr(comp_ctx->builder, block_maddr_phi)) { if (!LLVMBuildBr(comp_ctx->builder, block_maddr_phi)) {
aot_set_last_error("llvm build br failed"); aot_set_last_error("llvm build br failed");
goto fail; goto fail;
} }
LLVMPositionBuilderAtEnd(comp_ctx->builder, block_maddr_phi); SET_BUILD_POS(block_maddr_phi);
return maddr_phi; return maddr_phi;
} }
else else
#endif
return maddr; return maddr;
fail: fail:
return NULL; return NULL;
@ -544,15 +889,6 @@ fail:
LLVMSetAlignment(value, known_align); \ LLVMSetAlignment(value, known_align); \
} while (0) } while (0)
#define BUILD_TRUNC(value, data_type) \
do { \
if (!(value = LLVMBuildTrunc(comp_ctx->builder, value, data_type, \
"val_trunc"))) { \
aot_set_last_error("llvm build trunc failed."); \
goto fail; \
} \
} while (0)
#define BUILD_STORE() \ #define BUILD_STORE() \
do { \ do { \
LLVMValueRef res; \ LLVMValueRef res; \
@ -1150,16 +1486,23 @@ LLVMValueRef
check_bulk_memory_overflow(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx, check_bulk_memory_overflow(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
LLVMValueRef offset, LLVMValueRef bytes) LLVMValueRef offset, LLVMValueRef bytes)
{ {
LLVMValueRef maddr, max_addr, cmp; LLVMValueRef maddr, max_addr, cmp, cmp1;
LLVMValueRef mem_base_addr, maddr_phi = NULL; LLVMValueRef mem_base_addr;
LLVMBasicBlockRef block_curr = LLVMGetInsertBlock(comp_ctx->builder); LLVMBasicBlockRef block_curr = LLVMGetInsertBlock(comp_ctx->builder);
LLVMBasicBlockRef check_succ, block_maddr_phi = NULL; LLVMBasicBlockRef check_succ;
LLVMValueRef mem_size; LLVMValueRef mem_size;
bool is_target_64bit;
#if WASM_ENABLE_MEMORY64 == 0 #if WASM_ENABLE_MEMORY64 == 0
bool is_memory64 = false; bool is_memory64 = false;
#else #else
bool is_memory64 = IS_MEMORY64; bool is_memory64 = IS_MEMORY64;
#endif #endif
#if WASM_ENABLE_SHARED_HEAP != 0
LLVMValueRef maddr_phi = NULL;
LLVMBasicBlockRef block_maddr_phi = NULL;
#endif
is_target_64bit = (comp_ctx->pointer_size == sizeof(uint64)) ? true : false;
/* Get memory base address and memory data size */ /* Get memory base address and memory data size */
#if WASM_ENABLE_SHARED_MEMORY != 0 #if WASM_ENABLE_SHARED_MEMORY != 0
@ -1221,111 +1564,71 @@ check_bulk_memory_overflow(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
ADD_BASIC_BLOCK(check_succ, "check_succ"); ADD_BASIC_BLOCK(check_succ, "check_succ");
LLVMMoveBasicBlockAfter(check_succ, block_curr); LLVMMoveBasicBlockAfter(check_succ, block_curr);
offset = /* Same logic with aot_check_memory_overflow, offset and bytes are 32/64
LLVMBuildZExt(comp_ctx->builder, offset, I64_TYPE, "extend_offset"); * bits on 32/64 bits platform */
bytes = LLVMBuildZExt(comp_ctx->builder, bytes, I64_TYPE, "extend_len"); if (is_target_64bit) {
if (!offset || !bytes) { offset =
aot_set_last_error("llvm build zext failed."); LLVMBuildZExt(comp_ctx->builder, offset, I64_TYPE, "extend_offset");
goto fail; bytes = LLVMBuildZExt(comp_ctx->builder, bytes, I64_TYPE, "extend_len");
if (!offset || !bytes) {
aot_set_last_error("llvm build zext failed.");
goto fail;
}
} }
BUILD_OP(Add, offset, bytes, max_addr, "max_addr"); BUILD_OP(Add, offset, bytes, max_addr, "max_addr");
if (is_memory64 && comp_ctx->enable_bound_check) { /* Check overflow when it's memory64 or it's on 32 bits platform */
/* Check whether integer overflow occurs in offset + addr */ if (is_memory64 || !is_target_64bit) {
/* Check whether integer overflow occurs in offset + bytes */
LLVMBasicBlockRef check_integer_overflow_end; LLVMBasicBlockRef check_integer_overflow_end;
ADD_BASIC_BLOCK(check_integer_overflow_end, ADD_BASIC_BLOCK(check_integer_overflow_end,
"check_integer_overflow_end"); "check_integer_overflow_end");
LLVMMoveBasicBlockAfter(check_integer_overflow_end, block_curr); LLVMMoveBasicBlockAfter(check_integer_overflow_end, block_curr);
/* offset + bytes can overflow yet is valid(for example, 0xffffffff, 1),
* allow it to be 0(either 0, 0 or overflow and valid) */
BUILD_ICMP(LLVMIntULT, max_addr, offset, cmp, "cmp"); BUILD_ICMP(LLVMIntULT, max_addr, offset, cmp, "cmp");
BUILD_ICMP(LLVMIntNE, max_addr, is_target_64bit ? I64_ZERO : I32_ZERO,
cmp1, "cmp1");
BUILD_OP(And, cmp, cmp1, cmp, "overflow");
if (!aot_emit_exception(comp_ctx, func_ctx, if (!aot_emit_exception(comp_ctx, func_ctx,
EXCE_OUT_OF_BOUNDS_MEMORY_ACCESS, true, cmp, EXCE_OUT_OF_BOUNDS_MEMORY_ACCESS, true, cmp,
check_integer_overflow_end)) { check_integer_overflow_end)) {
goto fail; goto fail;
} }
SET_BUILD_POS(check_integer_overflow_end); SET_BUILD_POS(check_integer_overflow_end);
block_curr = check_integer_overflow_end;
} }
if (comp_ctx->enable_shared_heap /* TODO: && mem_idx == 0 */) { #if WASM_ENABLE_SHARED_HEAP != 0
LLVMBasicBlockRef app_addr_in_shared_heap, app_addr_in_linear_mem; if (comp_ctx->enable_shared_heap
LLVMValueRef shared_heap_start_off, shared_heap_check_bound; || comp_ctx->enable_shared_chain /* TODO: && mem_idx == 0 */) {
LLVMValueRef max_offset, cmp1, cmp2, is_in_shared_heap;
/* Add basic blocks */
ADD_BASIC_BLOCK(app_addr_in_shared_heap, "app_addr_in_shared_heap");
ADD_BASIC_BLOCK(app_addr_in_linear_mem, "app_addr_in_linear_mem");
ADD_BASIC_BLOCK(block_maddr_phi, "maddr_phi"); ADD_BASIC_BLOCK(block_maddr_phi, "maddr_phi");
SET_BUILD_POS(block_maddr_phi);
LLVMMoveBasicBlockAfter(app_addr_in_shared_heap, block_curr);
LLVMMoveBasicBlockAfter(app_addr_in_linear_mem,
app_addr_in_shared_heap);
LLVMMoveBasicBlockAfter(block_maddr_phi, check_succ);
LLVMPositionBuilderAtEnd(comp_ctx->builder, block_maddr_phi);
if (!(maddr_phi = LLVMBuildPhi(comp_ctx->builder, INT8_PTR_TYPE, if (!(maddr_phi = LLVMBuildPhi(comp_ctx->builder, INT8_PTR_TYPE,
"maddr_phi"))) { "maddr_phi"))) {
aot_set_last_error("llvm build phi failed"); aot_set_last_error("llvm build phi failed");
goto fail; goto fail;
} }
SET_BUILD_POS(block_curr);
LLVMPositionBuilderAtEnd(comp_ctx->builder, block_curr); if (!aot_check_bulk_memory_shared_heap_memory_overflow(
comp_ctx, func_ctx, block_curr, block_maddr_phi, check_succ,
shared_heap_start_off = func_ctx->shared_heap_start_off; maddr_phi, offset, max_addr, bytes, is_memory64,
if (comp_ctx->pointer_size == sizeof(uint32)) { is_target_64bit)) {
if (!(shared_heap_start_off =
LLVMBuildZExt(comp_ctx->builder, shared_heap_start_off,
I64_TYPE, "shared_heap_start_off_u64"))) {
aot_set_last_error("llvm build zext failed");
goto fail;
}
}
shared_heap_check_bound =
is_memory64 ? I64_CONST(UINT64_MAX) : I64_CONST(UINT32_MAX);
CHECK_LLVM_CONST(shared_heap_check_bound);
/* Check whether the bytes to access are in shared heap */
if (!comp_ctx->enable_bound_check) {
/* Use IntUGT but not IntUGE to compare, same as the check
in aot_check_memory_overflow */
BUILD_ICMP(LLVMIntUGT, offset, func_ctx->shared_heap_start_off,
is_in_shared_heap, "is_in_shared_heap");
}
else {
BUILD_ICMP(LLVMIntUGT, offset, func_ctx->shared_heap_start_off,
cmp1, "cmp1");
BUILD_OP(Add, max_addr, I64_NEG_ONE, max_offset, "max_offset");
BUILD_ICMP(LLVMIntULE, max_offset, shared_heap_check_bound, cmp2,
"cmp2");
BUILD_OP(And, cmp1, cmp2, is_in_shared_heap, "is_in_shared_heap");
}
if (!LLVMBuildCondBr(comp_ctx->builder, is_in_shared_heap,
app_addr_in_shared_heap, app_addr_in_linear_mem)) {
aot_set_last_error("llvm build cond br failed");
goto fail; goto fail;
} }
LLVMPositionBuilderAtEnd(comp_ctx->builder, app_addr_in_shared_heap);
/* Get native address inside shared heap */
if (!(maddr = LLVMBuildInBoundsGEP2(comp_ctx->builder, INT8_TYPE,
func_ctx->shared_heap_base_addr_adj,
&offset, 1, "maddr_shared_heap"))) {
aot_set_last_error("llvm build inbounds gep failed");
goto fail;
}
LLVMAddIncoming(maddr_phi, &maddr, &app_addr_in_shared_heap, 1);
if (!LLVMBuildBr(comp_ctx->builder, block_maddr_phi)) {
aot_set_last_error("llvm build br failed");
goto fail;
}
LLVMPositionBuilderAtEnd(comp_ctx->builder, app_addr_in_linear_mem);
block_curr = LLVMGetInsertBlock(comp_ctx->builder);
} }
#endif
/* mem_size is always 64-bit, extend max_addr on 32 bits platform */
if (!is_target_64bit
&& !(max_addr = LLVMBuildZExt(comp_ctx->builder, max_addr, I64_TYPE,
"extend_max_addr"))) {
aot_set_last_error("llvm build zext failed.");
goto fail;
}
BUILD_ICMP(LLVMIntUGT, max_addr, mem_size, cmp, "cmp_max_mem_addr"); BUILD_ICMP(LLVMIntUGT, max_addr, mem_size, cmp, "cmp_max_mem_addr");
if (!aot_emit_exception(comp_ctx, func_ctx, if (!aot_emit_exception(comp_ctx, func_ctx,
@ -1341,7 +1644,9 @@ check_bulk_memory_overflow(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
goto fail; goto fail;
} }
if (comp_ctx->enable_shared_heap /* TODO: && mem_idx == 0 */) { #if WASM_ENABLE_SHARED_HEAP != 0
if (comp_ctx->enable_shared_heap
|| comp_ctx->enable_shared_chain /* TODO: && mem_idx == 0 */) {
block_curr = LLVMGetInsertBlock(comp_ctx->builder); block_curr = LLVMGetInsertBlock(comp_ctx->builder);
LLVMAddIncoming(maddr_phi, &maddr, &block_curr, 1); LLVMAddIncoming(maddr_phi, &maddr, &block_curr, 1);
if (!LLVMBuildBr(comp_ctx->builder, block_maddr_phi)) { if (!LLVMBuildBr(comp_ctx->builder, block_maddr_phi)) {
@ -1352,6 +1657,7 @@ check_bulk_memory_overflow(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
return maddr_phi; return maddr_phi;
} }
else else
#endif
return maddr; return maddr;
fail: fail:
return NULL; return NULL;


@ -1517,73 +1517,154 @@ create_memory_info(const AOTCompContext *comp_ctx, AOTFuncContext *func_ctx,
return true; return true;
} }
#define BUILD_IS_NOT_NULL(value, res, name) \
do { \
if (!(res = LLVMBuildIsNotNull(comp_ctx->builder, value, name))) { \
aot_set_last_error("llvm build is not null failed."); \
goto fail; \
} \
} while (0)
#define get_module_extra_field_offset(field) \
do { \
offset_u32 = get_module_inst_extra_offset(comp_ctx); \
if (comp_ctx->is_jit_mode) \
offset_u32 += offsetof(WASMModuleInstanceExtra, field); \
else \
offset_u32 += offsetof(AOTModuleInstanceExtra, field); \
} while (0)
#define LOAD_MODULE_EXTRA_FIELD_AND_ALLOCA(field, type) \
do { \
get_module_extra_field_offset(field); \
offset = I32_CONST(offset_u32); \
CHECK_LLVM_CONST(offset); \
if (!(field_p = LLVMBuildInBoundsGEP2(comp_ctx->builder, INT8_TYPE, \
func_ctx->aot_inst, &offset, 1, \
#field "_p"))) { \
aot_set_last_error("llvm build inbounds gep failed"); \
goto fail; \
} \
if (!(load_val = \
LLVMBuildLoad2(comp_ctx->builder, type, field_p, #field))) { \
aot_set_last_error("llvm build load failed"); \
goto fail; \
} \
if (!(func_ctx->field = \
LLVMBuildAlloca(comp_ctx->builder, type, #field))) { \
aot_set_last_error("llvm build alloca failed"); \
goto fail; \
} \
if (!LLVMBuildStore(comp_ctx->builder, load_val, func_ctx->field)) { \
aot_set_last_error("llvm build store failed"); \
goto fail; \
} \
} while (0)
static bool static bool
create_shared_heap_info(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx) create_shared_heap_info(AOTCompContext *comp_ctx, AOTFuncContext *func_ctx)
{ {
LLVMValueRef offset, base_addr_p, start_off_p, cmp; #if WASM_ENABLE_SHARED_HEAP != 0
LLVMValueRef offset, field_p, load_val, shared_heap_head_p,
shared_heap_head, cmp, field_p_or_default, shared_heap_head_start_off,
shared_heap_head_start_off_minus_one;
LLVMTypeRef shared_heap_offset_type;
uint32 offset_u32; uint32 offset_u32;
#if WASM_ENABLE_MEMORY64 == 0
/* Load aot_inst->e->shared_heap_base_addr_adj */ bool is_memory64 = false;
offset_u32 = get_module_inst_extra_offset(comp_ctx); #else
#if WASM_ENABLE_JIT != 0 && WASM_ENABLE_SHARED_HEAP != 0 bool is_memory64 = IS_MEMORY64;
if (comp_ctx->is_jit_mode)
offset_u32 +=
offsetof(WASMModuleInstanceExtra, shared_heap_base_addr_adj);
else
#endif #endif
offset_u32 +=
offsetof(AOTModuleInstanceExtra, shared_heap_base_addr_adj); shared_heap_offset_type =
comp_ctx->pointer_size == sizeof(uint64) ? I64_TYPE : I32_TYPE;
/* shared_heap_base_addr_adj, shared_heap_start_off, and
* shared_heap_end_off can be updated later, use local variable to
* represent them */
LOAD_MODULE_EXTRA_FIELD_AND_ALLOCA(shared_heap_base_addr_adj,
INT8_PTR_TYPE);
LOAD_MODULE_EXTRA_FIELD_AND_ALLOCA(shared_heap_start_off,
shared_heap_offset_type);
LOAD_MODULE_EXTRA_FIELD_AND_ALLOCA(shared_heap_end_off,
shared_heap_offset_type);
/* Shared Heap head start off won't be updated, no need to alloca */
get_module_extra_field_offset(shared_heap);
offset = I32_CONST(offset_u32); offset = I32_CONST(offset_u32);
CHECK_LLVM_CONST(offset); CHECK_LLVM_CONST(offset);
if (!(shared_heap_head_p = LLVMBuildInBoundsGEP2(
if (!(base_addr_p = LLVMBuildInBoundsGEP2(comp_ctx->builder, INT8_TYPE, comp_ctx->builder, INT8_TYPE, func_ctx->aot_inst, &offset, 1,
func_ctx->aot_inst, &offset, 1, "shared_heap_head_p"))) {
"shared_heap_base_addr_adj_p"))) {
aot_set_last_error("llvm build inbounds gep failed"); aot_set_last_error("llvm build inbounds gep failed");
return false; goto fail;
} }
if (!(func_ctx->shared_heap_base_addr_adj = if (!(shared_heap_head =
LLVMBuildLoad2(comp_ctx->builder, INT8_PTR_TYPE, base_addr_p, LLVMBuildLoad2(comp_ctx->builder, INT8_PTR_TYPE,
"shared_heap_base_addr_adj"))) { shared_heap_head_p, "shared_heap_head"))) {
aot_set_last_error("llvm build load failed"); aot_set_last_error("llvm build load failed");
return false; goto fail;
} }
BUILD_IS_NOT_NULL(shared_heap_head, cmp, "has_shared_heap");
/* Load aot_inst->e->shared_heap_start_off */ if (is_memory64) {
offset_u32 = get_module_inst_extra_offset(comp_ctx); offset_u32 = offsetof(WASMSharedHeap, start_off_mem64);
#if WASM_ENABLE_JIT != 0 && WASM_ENABLE_SHARED_HEAP != 0 }
if (comp_ctx->is_jit_mode) else {
offset_u32 += offsetof(WASMModuleInstanceExtra, shared_heap_start_off); offset_u32 = offsetof(WASMSharedHeap, start_off_mem32);
else }
#endif
offset_u32 += offsetof(AOTModuleInstanceExtra, shared_heap_start_off);
offset = I32_CONST(offset_u32); offset = I32_CONST(offset_u32);
CHECK_LLVM_CONST(offset); CHECK_LLVM_CONST(offset);
if (!(field_p = LLVMBuildInBoundsGEP2(comp_ctx->builder, INT8_TYPE,
if (!(start_off_p = LLVMBuildInBoundsGEP2(comp_ctx->builder, INT8_TYPE, shared_heap_head, &offset, 1,
func_ctx->aot_inst, &offset, 1, "head_start_off_p"))) {
"shared_heap_start_off_p"))) {
aot_set_last_error("llvm build inbounds gep failed"); aot_set_last_error("llvm build inbounds gep failed");
return false; goto fail;
} }
if (!(func_ctx->shared_heap_start_off = LLVMBuildLoad2(
comp_ctx->builder, /* Select a valid shared heap head ptr or safe alloca ptr stores
comp_ctx->pointer_size == sizeof(uint64) ? I64_TYPE : I32_TYPE, * shared_heap_start_off(UINT32_MAX/UINT64_MAX) */
start_off_p, "shared_heap_start_off"))) { if (!(field_p_or_default = LLVMBuildSelect(comp_ctx->builder, cmp, field_p,
func_ctx->shared_heap_start_off,
"ptr_or_default"))) {
aot_set_last_error("llvm build select failed");
goto fail;
}
if (!(shared_heap_head_start_off = LLVMBuildLoad2(
comp_ctx->builder, shared_heap_offset_type, field_p_or_default,
"shared_heap_head_start_off"))) {
aot_set_last_error("llvm build load failed"); aot_set_last_error("llvm build load failed");
return false; goto fail;
}
if (!(shared_heap_head_start_off_minus_one = LLVMBuildAdd(
comp_ctx->builder, shared_heap_head_start_off,
comp_ctx->pointer_size == sizeof(uint64) ? I64_NEG_ONE
: I32_NEG_ONE,
"head_start_off_minus_one"))) {
aot_set_last_error("llvm build load failed");
goto fail;
} }
if (!(cmp = LLVMBuildIsNotNull(comp_ctx->builder, /* if there is attached shared heap(s), the value will be valid start_off-1,
func_ctx->shared_heap_base_addr_adj, * otherwise it will be UINT32_MAX/UINT64_MAX, so during the bounds checks,
"has_shared_heap"))) { * when has attached shared heap:
aot_set_last_error("llvm build is not null failed"); * offset > start_off - 1 => offset >= start_off
return false; * when no attached shared heap:
* offset > UINT32_MAX/UINT64_MAX is always false
* */
if (!(func_ctx->shared_heap_head_start_off = LLVMBuildSelect(
comp_ctx->builder, cmp, shared_heap_head_start_off_minus_one,
shared_heap_head_start_off, "head_start_off"))) {
aot_set_last_error("llvm build select failed");
goto fail;
} }
return true; return true;
fail: fail:
return false; return false;
#else /* else of WASM_ENABLE_SHARED_HEAP != 0 */
return true;
#endif /* end of WASM_ENABLE_SHARED_HEAP != 0 */
} }
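As a minimal illustration of the sentinel arithmetic described in the comment above (wasm32 case, values chosen arbitrarily; the generated code performs this same unsigned compare against shared_heap_head_start_off):

#include <assert.h>
#include <stdint.h>

int main(void)
{
    /* attached: storing start_off - 1 turns "offset >= start_off"
       into the single unsigned compare "offset > sentinel" */
    uint32_t start_off = 0xFFFF0000u, sentinel = start_off - 1;
    assert((0xFFFF0000u > sentinel) && !(0xFFFEFFFFu > sentinel));

    /* not attached: the field keeps UINT32_MAX, so no offset can pass
       and the linear-memory path is always taken */
    sentinel = UINT32_MAX;
    assert(!(0xFFFFFFFFu > sentinel));
    return 0;
}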
static bool static bool
@ -1877,7 +1958,7 @@ aot_create_func_context(const AOTCompData *comp_data, AOTCompContext *comp_ctx,
} }
/* Load shared heap, shared heap start off mem32 or mem64 */ /* Load shared heap, shared heap start off mem32 or mem64 */
if (comp_ctx->enable_shared_heap if ((comp_ctx->enable_shared_heap || comp_ctx->enable_shared_chain)
&& !create_shared_heap_info(comp_ctx, func_ctx)) { && !create_shared_heap_info(comp_ctx, func_ctx)) {
goto fail; goto fail;
} }
@ -2703,6 +2784,12 @@ aot_create_comp_context(const AOTCompData *comp_data, aot_comp_option_t option)
if (option->enable_shared_heap) if (option->enable_shared_heap)
comp_ctx->enable_shared_heap = true; comp_ctx->enable_shared_heap = true;
if (option->enable_shared_chain)
comp_ctx->enable_shared_chain = true;
if (option->enable_extended_const)
comp_ctx->enable_extended_const = true;
comp_ctx->opt_level = option->opt_level; comp_ctx->opt_level = option->opt_level;
comp_ctx->size_level = option->size_level; comp_ctx->size_level = option->size_level;
@ -3999,7 +4086,7 @@ aot_get_func_from_table(const AOTCompContext *comp_ctx, LLVMValueRef base,
if (!(func = if (!(func =
LLVMBuildBitCast(comp_ctx->builder, func, func_type, "func"))) { LLVMBuildBitCast(comp_ctx->builder, func, func_type, "func"))) {
aot_set_last_error("cast function fialed."); aot_set_last_error("cast function failed.");
goto fail; goto fail;
} }
@ -4068,7 +4155,7 @@ aot_load_const_from_table(AOTCompContext *comp_ctx, LLVMValueRef base,
if (!(const_addr = LLVMBuildBitCast(comp_ctx->builder, const_addr, if (!(const_addr = LLVMBuildBitCast(comp_ctx->builder, const_addr,
const_ptr_type, "const_addr"))) { const_ptr_type, "const_addr"))) {
aot_set_last_error("cast const fialed."); aot_set_last_error("cast const failed.");
return NULL; return NULL;
} }


@ -254,8 +254,12 @@ typedef struct AOTFuncContext {
bool mem_space_unchanged; bool mem_space_unchanged;
AOTCheckedAddrList checked_addr_list; AOTCheckedAddrList checked_addr_list;
/* The last accessed shared heap info */
LLVMValueRef shared_heap_base_addr_adj; LLVMValueRef shared_heap_base_addr_adj;
LLVMValueRef shared_heap_start_off; LLVMValueRef shared_heap_start_off;
LLVMValueRef shared_heap_end_off;
/* The start offset of the head of shared heap chain */
LLVMValueRef shared_heap_head_start_off;
LLVMBasicBlockRef got_exception_block; LLVMBasicBlockRef got_exception_block;
LLVMBasicBlockRef func_return_block; LLVMBasicBlockRef func_return_block;
@ -457,6 +461,9 @@ typedef struct AOTCompContext {
/* Enable LLVM PGO (Profile-Guided Optimization) */ /* Enable LLVM PGO (Profile-Guided Optimization) */
bool enable_llvm_pgo; bool enable_llvm_pgo;
/* Enable extended constant expression */
bool enable_extended_const;
/* Treat unknown import function as wasm-c-api import function /* Treat unknown import function as wasm-c-api import function
and allow to directly invoke it from AOT/JIT code */ and allow to directly invoke it from AOT/JIT code */
bool quick_invoke_c_api_import; bool quick_invoke_c_api_import;
@ -486,6 +493,7 @@ typedef struct AOTCompContext {
bool enable_gc; bool enable_gc;
bool enable_shared_heap; bool enable_shared_heap;
bool enable_shared_chain;
uint32 opt_level; uint32 opt_level;
uint32 size_level; uint32 size_level;


@ -121,7 +121,8 @@ wasm_init_table(WASMModuleInstance *inst, uint32 tbl_idx, uint32 seg_idx,
+ dst_offset * sizeof(table_elem_type_t)); + dst_offset * sizeof(table_elem_type_t));
init_values = tbl_seg_init_values + src_offset; init_values = tbl_seg_init_values + src_offset;
for (i = 0; i < len; i++) { for (i = 0; i < len; i++) {
addr[i] = (table_elem_type_t)(uintptr_t)init_values[+i].u.ref_index; addr[i] =
(table_elem_type_t)(uintptr_t)init_values[+i].u.unary.v.ref_index;
} }
return 0; return 0;


@ -68,6 +68,7 @@ typedef struct AOTCompOption {
bool enable_ref_types; bool enable_ref_types;
bool enable_gc; bool enable_gc;
bool enable_aux_stack_check; bool enable_aux_stack_check;
bool enable_extended_const;
AOTStackFrameType aux_stack_frame_type; AOTStackFrameType aux_stack_frame_type;
AOTCallStackFeatures call_stack_features; AOTCallStackFeatures call_stack_features;
bool enable_perf_profiling; bool enable_perf_profiling;
@ -79,6 +80,7 @@ typedef struct AOTCompOption {
bool enable_stack_estimation; bool enable_stack_estimation;
bool quick_invoke_c_api_import; bool quick_invoke_c_api_import;
bool enable_shared_heap; bool enable_shared_heap;
bool enable_shared_chain;
char *use_prof_file; char *use_prof_file;
uint32_t opt_level; uint32_t opt_level;
uint32_t size_level; uint32_t size_level;


@ -98,10 +98,10 @@ void
aot_destroy_aot_file(uint8_t *aot_file); aot_destroy_aot_file(uint8_t *aot_file);
char * char *
aot_get_last_error(); aot_get_last_error(void);
uint32_t uint32_t
aot_get_plt_table_size(); aot_get_plt_table_size(void);
#ifdef __cplusplus #ifdef __cplusplus
} }

View File

@ -139,8 +139,6 @@ typedef struct wasm_frame_t {
uint32_t *lp; uint32_t *lp;
} WASMCApiFrame; } WASMCApiFrame;
typedef WASMCApiFrame wasm_frame_t;
/* WASM section */ /* WASM section */
typedef struct wasm_section_t { typedef struct wasm_section_t {
struct wasm_section_t *next; struct wasm_section_t *next;
@ -351,6 +349,7 @@ typedef enum {
typedef struct SharedHeapInitArgs { typedef struct SharedHeapInitArgs {
uint32_t size; uint32_t size;
void *pre_allocated_addr;
} SharedHeapInitArgs; } SharedHeapInitArgs;
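The new `pre_allocated_addr` field lets the embedder hand an existing buffer to the runtime instead of having the shared heap allocated internally. A minimal sketch, assuming the runtime is already initialized and `wasm_export.h` is included; the buffer name and size are illustrative only:

/* Sketch only: wrap an embedder-owned buffer as a shared heap. */
static unsigned char shared_buf[64 * 1024];

static wasm_shared_heap_t
create_preallocated_shared_heap(void)
{
    SharedHeapInitArgs init_args = { 0 };

    init_args.size = sizeof(shared_buf);
    /* reuse existing memory instead of letting the runtime allocate it */
    init_args.pre_allocated_addr = shared_buf;

    /* returns NULL on failure */
    return wasm_runtime_create_shared_heap(&init_args);
}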
/** /**
@ -904,7 +903,7 @@ wasm_runtime_destroy_exec_env(wasm_exec_env_t exec_env);
* @return number of copied frames * @return number of copied frames
*/ */
WASM_RUNTIME_API_EXTERN uint32_t WASM_RUNTIME_API_EXTERN uint32_t
wasm_copy_callstack(const wasm_exec_env_t exec_env, wasm_frame_t *buffer, wasm_copy_callstack(const wasm_exec_env_t exec_env, WASMCApiFrame *buffer,
const uint32_t length, const uint32_t skip_n, const uint32_t length, const uint32_t skip_n,
char *error_buf, uint32_t error_buf_size); char *error_buf, uint32_t error_buf_size);
@ -2316,7 +2315,37 @@ WASM_RUNTIME_API_EXTERN wasm_shared_heap_t
wasm_runtime_create_shared_heap(SharedHeapInitArgs *init_args); wasm_runtime_create_shared_heap(SharedHeapInitArgs *init_args);
/** /**
* Attach a shared heap to a module instance * This function links two shared heaps (or shared heap lists), `head` and
* `body`, into a single shared heap list, where `head` becomes the new list
* head. From the wasm app's point of view, the list remains one continuous
* shared heap. At most one shared heap in the list can be dynamically
* allocated; the rest must be pre-allocated shared heaps.
*
* @param head The head of the shared heap chain.
* @param body The body of the shared heap chain to be appended.
* @return The new head of the shared heap chain. NULL if failed.
*/
WASM_RUNTIME_API_EXTERN wasm_shared_heap_t
wasm_runtime_chain_shared_heaps(wasm_shared_heap_t head,
wasm_shared_heap_t body);
/**
* This function unchains the shared heaps from the given head. If
* `entire_chain` is true, it will unchain the entire chain of shared heaps.
* Otherwise, it will unchain only the first shared heap in the chain.
*
* @param head The head of the shared heap chain.
* @param entire_chain A boolean flag indicating whether to unchain the entire
* chain.
* @return The new head of the shared heap chain, or the last shared heap in
* the chain if `entire_chain` is true.
*/
wasm_shared_heap_t
wasm_runtime_unchain_shared_heaps(wasm_shared_heap_t head, bool entire_chain);
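A hedged usage sketch of the two functions above: chain a dynamically allocated head with a pre-allocated body, then unchain the first heap again. Buffer names and sizes are illustrative, error handling is minimal, and runtime initialization is assumed to have happened elsewhere:

static unsigned char body_buf[4096];

static void
shared_heap_chain_sketch(void)
{
    SharedHeapInitArgs args = { 0 };
    wasm_shared_heap_t head, body, chain;

    /* at most one heap in a chain may be dynamically allocated */
    args.size = 4096;
    head = wasm_runtime_create_shared_heap(&args);

    /* the remaining heaps must be pre-allocated */
    args.size = sizeof(body_buf);
    args.pre_allocated_addr = body_buf;
    body = wasm_runtime_create_shared_heap(&args);

    if (!head || !body)
        return;

    /* link them; `head` stays the head of the resulting chain */
    chain = wasm_runtime_chain_shared_heaps(head, body);
    if (!chain)
        return;

    /* detach only the first heap; pass true to break up the whole chain */
    (void)wasm_runtime_unchain_shared_heaps(chain, false);
}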
/**
* Attach a shared heap to a module instance. The shared heap can be the head
* of a shared heap chain, in which case the whole chain is attached.
* *
* @param module_inst the module instance * @param module_inst the module instance
* @param shared_heap the shared heap * @param shared_heap the shared heap
@ -2335,7 +2364,8 @@ WASM_RUNTIME_API_EXTERN void
wasm_runtime_detach_shared_heap(wasm_module_inst_t module_inst); wasm_runtime_detach_shared_heap(wasm_module_inst_t module_inst);
/** /**
* Allocate memory from a shared heap * Allocate memory from a shared heap, or the non-preallocated shared heap from
* the shared heap chain
* *
* @param module_inst the module instance * @param module_inst the module instance
* @param size required memory size * @param size required memory size
@ -2352,7 +2382,8 @@ wasm_runtime_shared_heap_malloc(wasm_module_inst_t module_inst, uint64_t size,
void **p_native_addr); void **p_native_addr);
/** /**
* Free the memory allocated from shared heap * Free the memory allocated from shared heap, or the non-preallocated shared
* heap from the shared heap chain
* *
* @param module_inst the module instance * @param module_inst the module instance
* @param ptr the offset in wasm app * @param ptr the offset in wasm app

View File

@ -135,6 +135,12 @@ typedef void *table_elem_type_t;
#define INIT_EXPR_TYPE_F64_CONST 0x44 #define INIT_EXPR_TYPE_F64_CONST 0x44
#define INIT_EXPR_TYPE_V128_CONST 0xFD #define INIT_EXPR_TYPE_V128_CONST 0xFD
#define INIT_EXPR_TYPE_GET_GLOBAL 0x23 #define INIT_EXPR_TYPE_GET_GLOBAL 0x23
#define INIT_EXPR_TYPE_I32_ADD 0x6A
#define INIT_EXPR_TYPE_I32_SUB 0x6B
#define INIT_EXPR_TYPE_I32_MUL 0x6C
#define INIT_EXPR_TYPE_I64_ADD 0x7C
#define INIT_EXPR_TYPE_I64_SUB 0x7D
#define INIT_EXPR_TYPE_I64_MUL 0x7E
#define INIT_EXPR_TYPE_REFNULL_CONST 0xD0 #define INIT_EXPR_TYPE_REFNULL_CONST 0xD0
#define INIT_EXPR_TYPE_FUNCREF_CONST 0xD2 #define INIT_EXPR_TYPE_FUNCREF_CONST 0xD2
#define INIT_EXPR_TYPE_STRUCT_NEW 0xD3 #define INIT_EXPR_TYPE_STRUCT_NEW 0xD3
@ -277,9 +283,41 @@ typedef struct InitializerExpression {
/* type of INIT_EXPR_TYPE_XXX, which is an instruction of /* type of INIT_EXPR_TYPE_XXX, which is an instruction of
constant expression */ constant expression */
uint8 init_expr_type; uint8 init_expr_type;
WASMValue u; union {
struct {
WASMValue v;
} unary;
struct {
struct InitializerExpression *l_expr;
struct InitializerExpression *r_expr;
} binary;
} u;
} InitializerExpression; } InitializerExpression;
static inline bool
is_expr_binary_op(uint8 flag)
{
return flag == INIT_EXPR_TYPE_I32_ADD || flag == INIT_EXPR_TYPE_I32_SUB
|| flag == INIT_EXPR_TYPE_I32_MUL || flag == INIT_EXPR_TYPE_I64_ADD
|| flag == INIT_EXPR_TYPE_I64_SUB || flag == INIT_EXPR_TYPE_I64_MUL;
}
/* check if table or data offset is valid for i32 offset */
static inline bool
is_valid_i32_offset(uint8 flag)
{
return flag == INIT_EXPR_TYPE_I32_CONST || flag == INIT_EXPR_TYPE_I32_ADD
|| flag == INIT_EXPR_TYPE_I32_SUB || flag == INIT_EXPR_TYPE_I32_MUL;
}
/* check if table or data offset is valid for i64 offset */
static inline bool
is_valid_i64_offset(uint8 flag)
{
return flag == INIT_EXPR_TYPE_I64_CONST || flag == INIT_EXPR_TYPE_I64_ADD
|| flag == INIT_EXPR_TYPE_I64_SUB || flag == INIT_EXPR_TYPE_I64_MUL;
}
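To make the shape of the new union concrete, here is a hedged sketch (not part of this patch) of how a loaded expression tree could be folded for the i32 operators. The evaluator name is hypothetical, and global.get and other forms that need instantiation context are deliberately left out:

/* Hypothetical helper: constant-fold an i32 extended-const expression tree. */
static bool
eval_i32_init_expr(const InitializerExpression *expr, int32 *result)
{
    int32 l, r;

    switch (expr->init_expr_type) {
        case INIT_EXPR_TYPE_I32_CONST:
            /* leaf node: the value lives in the unary arm of the union */
            *result = expr->u.unary.v.i32;
            return true;
        case INIT_EXPR_TYPE_I32_ADD:
        case INIT_EXPR_TYPE_I32_SUB:
        case INIT_EXPR_TYPE_I32_MUL:
            /* binary node: fold both children first */
            if (!eval_i32_init_expr(expr->u.binary.l_expr, &l)
                || !eval_i32_init_expr(expr->u.binary.r_expr, &r))
                return false;
            if (expr->init_expr_type == INIT_EXPR_TYPE_I32_ADD)
                *result = l + r;
            else if (expr->init_expr_type == INIT_EXPR_TYPE_I32_SUB)
                *result = l - r;
            else
                *result = l * r;
            return true;
        default:
            /* global.get etc. need the instantiated module's globals */
            return false;
    }
}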
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
/** /**
* Reference type of (ref null ht) or (ref ht), * Reference type of (ref null ht) or (ref ht),

View File

@ -46,28 +46,6 @@ typedef float64 CellType_F64;
#define get_linear_mem_size() GET_LINEAR_MEMORY_SIZE(memory) #define get_linear_mem_size() GET_LINEAR_MEMORY_SIZE(memory)
#endif #endif
#if WASM_ENABLE_SHARED_HEAP != 0
#if WASM_ENABLE_MULTI_MEMORY != 0
/* Only enable shared heap for the default memory */
#define is_default_memory (memidx == 0)
#else
#define is_default_memory true
#endif
#define app_addr_in_shared_heap(app_addr, bytes) \
(shared_heap && is_default_memory && (app_addr) >= shared_heap_start_off \
&& (app_addr) <= shared_heap_end_off - bytes + 1)
#define shared_heap_addr_app_to_native(app_addr, native_addr) \
native_addr = shared_heap_base_addr + ((app_addr)-shared_heap_start_off)
#define CHECK_SHARED_HEAP_OVERFLOW(app_addr, bytes, native_addr) \
if (app_addr_in_shared_heap(app_addr, bytes)) \
shared_heap_addr_app_to_native(app_addr, native_addr); \
else
#else
#define CHECK_SHARED_HEAP_OVERFLOW(app_addr, bytes, native_addr)
#endif
#if WASM_ENABLE_MEMORY64 == 0 #if WASM_ENABLE_MEMORY64 == 0
#if (!defined(OS_ENABLE_HW_BOUND_CHECK) \ #if (!defined(OS_ENABLE_HW_BOUND_CHECK) \
@ -1670,22 +1648,6 @@ wasm_interp_call_func_bytecode(WASMModuleInstance *module,
if (memory) if (memory)
is_memory64 = memory->is_memory64; is_memory64 = memory->is_memory64;
#endif #endif
#if WASM_ENABLE_SHARED_HEAP != 0
WASMSharedHeap *shared_heap = module->e->shared_heap;
uint8 *shared_heap_base_addr = shared_heap ? shared_heap->base_addr : NULL;
#if WASM_ENABLE_MEMORY64 != 0
uint64 shared_heap_start_off =
shared_heap ? (is_memory64 ? shared_heap->start_off_mem64
: shared_heap->start_off_mem32)
: 0;
uint64 shared_heap_end_off =
shared_heap ? (is_memory64 ? UINT64_MAX : UINT32_MAX) : 0;
#else
uint64 shared_heap_start_off =
shared_heap ? shared_heap->start_off_mem32 : 0;
uint64 shared_heap_end_off = shared_heap ? UINT32_MAX : 0;
#endif
#endif /* end of WASM_ENABLE_SHARED_HEAP != 0 */
#if WASM_ENABLE_MULTI_MEMORY != 0 #if WASM_ENABLE_MULTI_MEMORY != 0
uint32 memidx = 0; uint32 memidx = 0;
uint32 memidx_cached = (uint32)-1; uint32 memidx_cached = (uint32)-1;
@ -4088,7 +4050,7 @@ wasm_interp_call_func_bytecode(WASMModuleInstance *module,
case WASM_OP_STRING_ENCODE_LOSSY_UTF8_ARRAY: case WASM_OP_STRING_ENCODE_LOSSY_UTF8_ARRAY:
case WASM_OP_STRING_ENCODE_WTF8_ARRAY: case WASM_OP_STRING_ENCODE_WTF8_ARRAY:
{ {
uint32 start, array_len, count; uint32 start, array_len;
int32 bytes_written; int32 bytes_written;
EncodingFlag flag = WTF8; EncodingFlag flag = WTF8;
WASMArrayType *array_type; WASMArrayType *array_type;
@ -5996,12 +5958,14 @@ wasm_interp_call_func_bytecode(WASMModuleInstance *module,
|| init_values[i].init_expr_type || init_values[i].init_expr_type
== INIT_EXPR_TYPE_FUNCREF_CONST); == INIT_EXPR_TYPE_FUNCREF_CONST);
#if WASM_ENABLE_GC == 0 #if WASM_ENABLE_GC == 0
table_elems[i] = table_elems[i] = (table_elem_type_t)init_values[i]
(table_elem_type_t)init_values[i].u.ref_index; .u.unary.v.ref_index;
#else #else
if (init_values[i].u.ref_index != UINT32_MAX) { if (init_values[i].u.unary.v.ref_index
!= UINT32_MAX) {
if (!(func_obj = wasm_create_func_obj( if (!(func_obj = wasm_create_func_obj(
module, init_values[i].u.ref_index, module,
init_values[i].u.unary.v.ref_index,
true, NULL, 0))) { true, NULL, 0))) {
goto got_exception; goto got_exception;
} }

View File

@ -41,22 +41,6 @@ typedef float64 CellType_F64;
#define get_linear_mem_size() GET_LINEAR_MEMORY_SIZE(memory) #define get_linear_mem_size() GET_LINEAR_MEMORY_SIZE(memory)
#endif #endif
#if WASM_ENABLE_SHARED_HEAP != 0
#define app_addr_in_shared_heap(app_addr, bytes) \
(shared_heap && (app_addr) >= shared_heap_start_off \
&& (app_addr) <= shared_heap_end_off - bytes + 1)
#define shared_heap_addr_app_to_native(app_addr, native_addr) \
native_addr = shared_heap_base_addr + ((app_addr)-shared_heap_start_off)
#define CHECK_SHARED_HEAP_OVERFLOW(app_addr, bytes, native_addr) \
if (app_addr_in_shared_heap(app_addr, bytes)) \
shared_heap_addr_app_to_native(app_addr, native_addr); \
else
#else
#define CHECK_SHARED_HEAP_OVERFLOW(app_addr, bytes, native_addr)
#endif
#if !defined(OS_ENABLE_HW_BOUND_CHECK) \ #if !defined(OS_ENABLE_HW_BOUND_CHECK) \
|| WASM_CPU_SUPPORTS_UNALIGNED_ADDR_ACCESS == 0 || WASM_CPU_SUPPORTS_UNALIGNED_ADDR_ACCESS == 0
#define CHECK_MEMORY_OVERFLOW(bytes) \ #define CHECK_MEMORY_OVERFLOW(bytes) \
@ -1590,21 +1574,13 @@ wasm_interp_call_func_bytecode(WASMModuleInstance *module,
bool is_return_call = false; bool is_return_call = false;
#endif #endif
#if WASM_ENABLE_SHARED_HEAP != 0 #if WASM_ENABLE_SHARED_HEAP != 0
WASMSharedHeap *shared_heap = module->e ? module->e->shared_heap : NULL;
uint8 *shared_heap_base_addr = shared_heap ? shared_heap->base_addr : NULL;
/*
#if WASM_ENABLE_MEMORY64 != 0
uint64 shared_heap_start_off =
shared_heap ? (is_memory64 ? shared_heap->start_off_mem64
: shared_heap->start_off_mem32)
: 0;
uint64 shared_heap_end_off =
shared_heap ? (is_memory64 ? UINT64_MAX : UINT32_MAX) : 0;
#else
*/ /* TODO: uncomment the code when memory64 is enabled for fast-interp */
uint64 shared_heap_start_off =
shared_heap ? shared_heap->start_off_mem32 : 0;
uint64 shared_heap_end_off = shared_heap ? UINT32_MAX : 0;
/* TODO: currently the following two variables are only dummies for the
* shared heap boundary check; they need to be updated when the multi-memory
* or memory64 proposals are implemented */
bool is_memory64 = false;
uint32 memidx = 0;
(void)is_memory64;
(void)memidx;
/* #endif */ /* #endif */
#endif /* end of WASM_ENABLE_SHARED_HEAP != 0 */ #endif /* end of WASM_ENABLE_SHARED_HEAP != 0 */
@ -5374,12 +5350,14 @@ wasm_interp_call_func_bytecode(WASMModuleInstance *module,
|| init_values[i].init_expr_type || init_values[i].init_expr_type
== INIT_EXPR_TYPE_FUNCREF_CONST); == INIT_EXPR_TYPE_FUNCREF_CONST);
#if WASM_ENABLE_GC == 0 #if WASM_ENABLE_GC == 0
table_elems[i] = table_elems[i] = (table_elem_type_t)init_values[i]
(table_elem_type_t)init_values[i].u.ref_index; .u.unary.v.ref_index;
#else #else
if (init_values[i].u.ref_index != UINT32_MAX) { if (init_values[i].u.unary.v.ref_index
!= UINT32_MAX) {
if (!(func_obj = wasm_create_func_obj( if (!(func_obj = wasm_create_func_obj(
module, init_values[i].u.ref_index, module,
init_values[i].u.unary.v.ref_index,
true, NULL, 0))) { true, NULL, 0))) {
goto got_exception; goto got_exception;
} }

View File

@ -453,6 +453,9 @@ typedef struct InitValue {
WASMRefType ref_type; WASMRefType ref_type;
#endif #endif
WASMValue value; WASMValue value;
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
InitializerExpression *expr;
#endif
} InitValue; } InitValue;
typedef struct ConstExprContext { typedef struct ConstExprContext {
@ -477,7 +480,11 @@ push_const_expr_stack(ConstExprContext *ctx, uint8 flag, uint8 type,
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
WASMRefType *ref_type, uint8 gc_opcode, WASMRefType *ref_type, uint8 gc_opcode,
#endif #endif
WASMValue *value, char *error_buf, uint32 error_buf_size) WASMValue *value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
InitializerExpression *expr,
#endif
char *error_buf, uint32 error_buf_size)
{ {
InitValue *cur_value; InitValue *cur_value;
@ -503,6 +510,10 @@ push_const_expr_stack(ConstExprContext *ctx, uint8 flag, uint8 type,
cur_value->flag = flag; cur_value->flag = flag;
cur_value->value = *value; cur_value->value = *value;
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
cur_value->expr = expr;
#endif
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
cur_value->gc_opcode = gc_opcode; cur_value->gc_opcode = gc_opcode;
if (wasm_is_type_multi_byte_type(type)) { if (wasm_is_type_multi_byte_type(type)) {
@ -587,7 +598,11 @@ pop_const_expr_stack(ConstExprContext *ctx, uint8 *p_flag, uint8 type,
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
WASMRefType *ref_type, uint8 *p_gc_opcode, WASMRefType *ref_type, uint8 *p_gc_opcode,
#endif #endif
WASMValue *p_value, char *error_buf, uint32 error_buf_size) WASMValue *p_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
InitializerExpression **p_expr,
#endif
char *error_buf, uint32 error_buf_size)
{ {
InitValue *cur_value; InitValue *cur_value;
@ -623,7 +638,10 @@ pop_const_expr_stack(ConstExprContext *ctx, uint8 *p_flag, uint8 type,
if (p_gc_opcode) if (p_gc_opcode)
*p_gc_opcode = cur_value->gc_opcode; *p_gc_opcode = cur_value->gc_opcode;
#endif #endif
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
if (p_expr)
*p_expr = cur_value->expr;
#endif
return true; return true;
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
@ -639,7 +657,7 @@ fail:
} }
static void static void
destroy_const_expr_stack(ConstExprContext *ctx) destroy_const_expr_stack(ConstExprContext *ctx, bool free_exprs)
{ {
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
uint32 i; uint32 i;
@ -654,24 +672,62 @@ destroy_const_expr_stack(ConstExprContext *ctx)
} }
} }
#endif #endif
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
if (free_exprs) {
for (uint32 j = 0; j < ctx->sp; j++) {
if (ctx->stack[j].expr
&& is_expr_binary_op(ctx->stack[j].expr->init_expr_type)) {
destroy_init_expr_recursive(ctx->stack[j].expr);
ctx->stack[j].expr = NULL;
}
}
}
#endif
if (ctx->stack != ctx->data) { if (ctx->stack != ctx->data) {
wasm_runtime_free(ctx->stack); wasm_runtime_free(ctx->stack);
} }
} }
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0 || WASM_ENABLE_EXTENDED_CONST_EXPR != 0
static void static void
destroy_init_expr(WASMModule *module, InitializerExpression *expr) destroy_init_expr(WASMModule *module, InitializerExpression *expr)
{ {
#if WASM_ENABLE_GC != 0
if (expr->init_expr_type == INIT_EXPR_TYPE_STRUCT_NEW if (expr->init_expr_type == INIT_EXPR_TYPE_STRUCT_NEW
|| expr->init_expr_type == INIT_EXPR_TYPE_ARRAY_NEW || expr->init_expr_type == INIT_EXPR_TYPE_ARRAY_NEW
|| expr->init_expr_type == INIT_EXPR_TYPE_ARRAY_NEW_FIXED) { || expr->init_expr_type == INIT_EXPR_TYPE_ARRAY_NEW_FIXED) {
destroy_init_expr_data_recursive(module, expr->u.data); destroy_init_expr_data_recursive(module, expr->u.unary.v.data);
} }
} #endif
#endif /* end of WASM_ENABLE_GC != 0 */
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
// free left and right exprs for binary operands
if (!is_expr_binary_op(expr->init_expr_type)) {
return;
}
if (expr->u.binary.l_expr) {
destroy_init_expr_recursive(expr->u.binary.l_expr);
}
if (expr->u.binary.r_expr) {
destroy_init_expr_recursive(expr->u.binary.r_expr);
}
expr->u.binary.l_expr = expr->u.binary.r_expr = NULL;
#endif
}
#endif
/* for init expr
* (data (i32.add (i32.const 0) (i32.sub (i32.const 1) (i32.const 2)))),
* the binary format is
* 0x11: 41 00 ; i32.const 0
* 0x13: 41 01 ; i32.const 1
* 0x15: 41 02 ; i32.const 2
* 0x17: 6b ; i32.sub
* 0x18: 6a ; i32.add
* for traversal: read opcodes and push them onto the stack. When encountering
* a binary opcode, pop two values from the stack which become the left and
* right child nodes of this binary operation node.
*/
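As an illustration of what the traversal described above produces, here is a hand-built version of the documented example; the helper name is illustrative and not part of the loader:

/* Build the tree for (i32.add (i32.const 0) (i32.sub (i32.const 1)
 * (i32.const 2))) using the InitializerExpression union. */
static void
build_example_tree(InitializerExpression *add, InitializerExpression *sub,
                   InitializerExpression *c0, InitializerExpression *c1,
                   InitializerExpression *c2)
{
    c0->init_expr_type = INIT_EXPR_TYPE_I32_CONST;
    c0->u.unary.v.i32 = 0;
    c1->init_expr_type = INIT_EXPR_TYPE_I32_CONST;
    c1->u.unary.v.i32 = 1;
    c2->init_expr_type = INIT_EXPR_TYPE_I32_CONST;
    c2->u.unary.v.i32 = 2;

    /* i32.sub pops const 2 (right) and const 1 (left) */
    sub->init_expr_type = INIT_EXPR_TYPE_I32_SUB;
    sub->u.binary.l_expr = c1;
    sub->u.binary.r_expr = c2;

    /* i32.add pops the sub node (right) and const 0 (left) */
    add->init_expr_type = INIT_EXPR_TYPE_I32_ADD;
    add->u.binary.l_expr = c0;
    add->u.binary.r_expr = sub;
}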
static bool static bool
load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end, load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
InitializerExpression *init_expr, uint8 type, void *ref_type, InitializerExpression *init_expr, uint8 type, void *ref_type,
@ -687,6 +743,9 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
uint8 opcode; uint8 opcode;
WASMRefType cur_ref_type = { 0 }; WASMRefType cur_ref_type = { 0 };
#endif #endif
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
InitializerExpression *cur_expr = NULL;
#endif
init_const_expr_stack(&const_expr_ctx, module); init_const_expr_stack(&const_expr_ctx, module);
@ -699,24 +758,32 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
case INIT_EXPR_TYPE_I32_CONST: case INIT_EXPR_TYPE_I32_CONST:
read_leb_int32(p, p_end, cur_value.i32); read_leb_int32(p, p_end, cur_value.i32);
if (!push_const_expr_stack( if (!push_const_expr_stack(&const_expr_ctx, flag,
&const_expr_ctx, flag, VALUE_TYPE_I32, VALUE_TYPE_I32,
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
NULL, 0, NULL, 0,
#endif #endif
&cur_value, error_buf, error_buf_size)) &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size))
goto fail; goto fail;
break; break;
/* i64.const */ /* i64.const */
case INIT_EXPR_TYPE_I64_CONST: case INIT_EXPR_TYPE_I64_CONST:
read_leb_int64(p, p_end, cur_value.i64); read_leb_int64(p, p_end, cur_value.i64);
if (!push_const_expr_stack( if (!push_const_expr_stack(&const_expr_ctx, flag,
&const_expr_ctx, flag, VALUE_TYPE_I64, VALUE_TYPE_I64,
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
NULL, 0, NULL, 0,
#endif #endif
&cur_value, error_buf, error_buf_size)) &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size))
goto fail; goto fail;
break; break;
/* f32.const */ /* f32.const */
@ -726,12 +793,16 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
for (i = 0; i < sizeof(float32); i++) for (i = 0; i < sizeof(float32); i++)
*p_float++ = *p++; *p_float++ = *p++;
if (!push_const_expr_stack( if (!push_const_expr_stack(&const_expr_ctx, flag,
&const_expr_ctx, flag, VALUE_TYPE_F32, VALUE_TYPE_F32,
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
NULL, 0, NULL, 0,
#endif #endif
&cur_value, error_buf, error_buf_size)) &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size))
goto fail; goto fail;
break; break;
/* f64.const */ /* f64.const */
@ -741,12 +812,16 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
for (i = 0; i < sizeof(float64); i++) for (i = 0; i < sizeof(float64); i++)
*p_float++ = *p++; *p_float++ = *p++;
if (!push_const_expr_stack( if (!push_const_expr_stack(&const_expr_ctx, flag,
&const_expr_ctx, flag, VALUE_TYPE_F64, VALUE_TYPE_F64,
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
NULL, 0, NULL, 0,
#endif #endif
&cur_value, error_buf, error_buf_size)) &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size))
goto fail; goto fail;
break; break;
#if WASM_ENABLE_SIMD != 0 #if WASM_ENABLE_SIMD != 0
@ -767,12 +842,16 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
cur_value.v128.i64x2[0] = high; cur_value.v128.i64x2[0] = high;
cur_value.v128.i64x2[1] = low; cur_value.v128.i64x2[1] = low;
if (!push_const_expr_stack( if (!push_const_expr_stack(&const_expr_ctx, flag,
&const_expr_ctx, flag, VALUE_TYPE_V128, VALUE_TYPE_V128,
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
NULL, 0, NULL, 0,
#endif #endif
&cur_value, error_buf, error_buf_size)) &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size))
goto fail; goto fail;
#if WASM_ENABLE_WAMR_COMPILER != 0 #if WASM_ENABLE_WAMR_COMPILER != 0
/* If any init_expr is v128.const, mark SIMD used */ /* If any init_expr is v128.const, mark SIMD used */
@ -783,7 +862,92 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
#endif /* end of (WASM_ENABLE_WAMR_COMPILER != 0) || (WASM_ENABLE_JIT != 0) || \ #endif /* end of (WASM_ENABLE_WAMR_COMPILER != 0) || (WASM_ENABLE_JIT != 0) || \
(WASM_ENABLE_FAST_INTERP != 0) */ (WASM_ENABLE_FAST_INTERP != 0) */
#endif /* end of WASM_ENABLE_SIMD */ #endif /* end of WASM_ENABLE_SIMD */
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
case INIT_EXPR_TYPE_I32_ADD:
case INIT_EXPR_TYPE_I32_SUB:
case INIT_EXPR_TYPE_I32_MUL:
case INIT_EXPR_TYPE_I64_ADD:
case INIT_EXPR_TYPE_I64_SUB:
case INIT_EXPR_TYPE_I64_MUL:
{
InitializerExpression *l_expr, *r_expr;
WASMValue l_value, r_value;
uint8 l_flag, r_flag;
uint8 value_type;
if (flag == INIT_EXPR_TYPE_I32_ADD
|| flag == INIT_EXPR_TYPE_I32_SUB
|| flag == INIT_EXPR_TYPE_I32_MUL) {
value_type = VALUE_TYPE_I32;
}
else {
value_type = VALUE_TYPE_I64;
}
/* If right flag indicates a binary operation, right expr will
* be popped from stack. Otherwise, allocate a new expr for
* right expr. Same for left expr.
*/
if (!(pop_const_expr_stack(&const_expr_ctx, &r_flag, value_type,
#if WASM_ENABLE_GC != 0
NULL, NULL,
#endif
&r_value, &r_expr, error_buf,
error_buf_size))) {
goto fail;
}
if (!is_expr_binary_op(r_flag)) {
if (!(r_expr = loader_malloc(sizeof(InitializerExpression),
error_buf, error_buf_size))) {
goto fail;
}
r_expr->init_expr_type = r_flag;
r_expr->u.unary.v = r_value;
}
if (!(pop_const_expr_stack(&const_expr_ctx, &l_flag, value_type,
#if WASM_ENABLE_GC != 0
NULL, NULL,
#endif
&l_value, &l_expr, error_buf,
error_buf_size))) {
destroy_init_expr_recursive(r_expr);
goto fail;
}
if (!is_expr_binary_op(l_flag)) {
if (!(l_expr = loader_malloc(sizeof(InitializerExpression),
error_buf, error_buf_size))) {
destroy_init_expr_recursive(r_expr);
goto fail;
}
l_expr->init_expr_type = l_flag;
l_expr->u.unary.v = l_value;
}
if (!(cur_expr = loader_malloc(sizeof(InitializerExpression),
error_buf, error_buf_size))) {
destroy_init_expr_recursive(l_expr);
destroy_init_expr_recursive(r_expr);
goto fail;
}
cur_expr->init_expr_type = flag;
cur_expr->u.binary.l_expr = l_expr;
cur_expr->u.binary.r_expr = r_expr;
if (!push_const_expr_stack(&const_expr_ctx, flag, value_type,
#if WASM_ENABLE_GC != 0
NULL, 0,
#endif
&cur_value, cur_expr, error_buf,
error_buf_size)) {
destroy_init_expr_recursive(cur_expr);
goto fail;
}
break;
}
#endif /* end of WASM_ENABLE_EXTENDED_CONST_EXPR */
#if WASM_ENABLE_REF_TYPES != 0 || WASM_ENABLE_GC != 0 #if WASM_ENABLE_REF_TYPES != 0 || WASM_ENABLE_GC != 0
/* ref.func */ /* ref.func */
case INIT_EXPR_TYPE_FUNCREF_CONST: case INIT_EXPR_TYPE_FUNCREF_CONST:
@ -799,6 +963,9 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
#if WASM_ENABLE_GC == 0 #if WASM_ENABLE_GC == 0
if (!push_const_expr_stack(&const_expr_ctx, flag, if (!push_const_expr_stack(&const_expr_ctx, flag,
VALUE_TYPE_FUNCREF, &cur_value, VALUE_TYPE_FUNCREF, &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size)) error_buf, error_buf_size))
goto fail; goto fail;
#else #else
@ -816,8 +983,11 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
false, type_idx); false, type_idx);
if (!push_const_expr_stack(&const_expr_ctx, flag, if (!push_const_expr_stack(&const_expr_ctx, flag,
cur_ref_type.ref_type, &cur_ref_type, cur_ref_type.ref_type, &cur_ref_type,
0, &cur_value, error_buf, 0, &cur_value,
error_buf_size)) #if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size))
goto fail; goto fail;
#endif #endif
#if WASM_ENABLE_WAMR_COMPILER != 0 #if WASM_ENABLE_WAMR_COMPILER != 0
@ -829,45 +999,71 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
/* ref.null */ /* ref.null */
case INIT_EXPR_TYPE_REFNULL_CONST: case INIT_EXPR_TYPE_REFNULL_CONST:
{ {
uint8 type1;
#if WASM_ENABLE_GC == 0 #if WASM_ENABLE_GC == 0
uint8 type1;
CHECK_BUF(p, p_end, 1); CHECK_BUF(p, p_end, 1);
type1 = read_uint8(p); type1 = read_uint8(p);
cur_value.ref_index = NULL_REF; cur_value.ref_index = NULL_REF;
if (!push_const_expr_stack(&const_expr_ctx, flag, type1, if (!push_const_expr_stack(&const_expr_ctx, flag, type1,
&cur_value, error_buf, &cur_value,
error_buf_size)) #if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size))
goto fail; goto fail;
#else #else
/*
* According to the current GC SPEC rules, the heap_type must be
* validated when ref.null is used. It can be an absheaptype,
* or the type C.types[type_idx] must be defined in the context.
*/
int32 heap_type; int32 heap_type;
read_leb_int32(p, p_end, heap_type); read_leb_int32(p, p_end, heap_type);
type1 = (uint8)((int32)0x80 + heap_type);
cur_value.gc_obj = NULL_REF; cur_value.gc_obj = NULL_REF;
if (!is_byte_a_type(type1)
|| !wasm_is_valid_heap_type(heap_type)
|| wasm_is_type_multi_byte_type(type1)) {
p--;
read_leb_uint32(p, p_end, type_idx);
if (!check_type_index(module, module->type_count, type_idx,
error_buf, error_buf_size))
goto fail;
/*
* The current check of heap_type can deterministically infer
* the result of the previous condition
* `(!is_byte_a_type(type1) ||
* wasm_is_type_multi_byte_type(type1))`. Therefore, the
* original condition is redundant and has been removed.
*
* This logic is consistent with the implementation of the
* `WASM_OP_REF_NULL` case in the `wasm_loader_prepare_bytecode`
* function.
*/
if (heap_type >= 0) {
if (!check_type_index(module, module->type_count, heap_type,
error_buf, error_buf_size)) {
goto fail;
}
wasm_set_refheaptype_typeidx(&cur_ref_type.ref_ht_typeidx, wasm_set_refheaptype_typeidx(&cur_ref_type.ref_ht_typeidx,
true, type_idx); true, heap_type);
if (!push_const_expr_stack(&const_expr_ctx, flag, if (!push_const_expr_stack(&const_expr_ctx, flag,
cur_ref_type.ref_type, cur_ref_type.ref_type,
&cur_ref_type, 0, &cur_value, &cur_ref_type, 0, &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size)) error_buf, error_buf_size))
goto fail; goto fail;
} }
else { else {
if (!push_const_expr_stack(&const_expr_ctx, flag, type1, if (!wasm_is_valid_heap_type(heap_type)) {
NULL, 0, &cur_value, error_buf, set_error_buf_v(error_buf, error_buf_size,
error_buf_size)) "unknown type %d", heap_type);
goto fail;
}
cur_ref_type.ref_ht_common.ref_type =
(uint8)((int32)0x80 + heap_type);
if (!push_const_expr_stack(&const_expr_ctx, flag,
cur_ref_type.ref_type, NULL, 0,
&cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size))
goto fail; goto fail;
} }
#endif #endif
@ -956,8 +1152,11 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
&cur_ref_type, 0, &cur_ref_type, 0,
#endif #endif
&cur_value, error_buf, &cur_value,
error_buf_size)) #if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size))
goto fail; goto fail;
break; break;
@ -1020,6 +1219,9 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
&const_expr_ctx, NULL, field_type, &const_expr_ctx, NULL, field_type,
field_ref_type, NULL, field_ref_type, NULL,
&struct_init_values->fields[field_idx], &struct_init_values->fields[field_idx],
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size)) { error_buf, error_buf_size)) {
destroy_init_expr_data_recursive( destroy_init_expr_data_recursive(
module, struct_init_values); module, struct_init_values);
@ -1033,6 +1235,9 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
if (!push_const_expr_stack( if (!push_const_expr_stack(
&const_expr_ctx, flag, cur_ref_type.ref_type, &const_expr_ctx, flag, cur_ref_type.ref_type,
&cur_ref_type, (uint8)opcode1, &cur_value, &cur_ref_type, (uint8)opcode1, &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size)) { error_buf, error_buf_size)) {
destroy_init_expr_data_recursive( destroy_init_expr_data_recursive(
module, struct_init_values); module, struct_init_values);
@ -1064,6 +1269,9 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
if (!push_const_expr_stack( if (!push_const_expr_stack(
&const_expr_ctx, flag, cur_ref_type.ref_type, &const_expr_ctx, flag, cur_ref_type.ref_type,
&cur_ref_type, (uint8)opcode1, &cur_value, &cur_ref_type, (uint8)opcode1, &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size)) { error_buf, error_buf_size)) {
goto fail; goto fail;
} }
@ -1112,8 +1320,11 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
if (!pop_const_expr_stack( if (!pop_const_expr_stack(
&const_expr_ctx, NULL, VALUE_TYPE_I32, &const_expr_ctx, NULL, VALUE_TYPE_I32,
NULL, NULL, &len_val, error_buf, NULL, NULL, &len_val,
error_buf_size)) { #if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size)) {
goto fail; goto fail;
} }
@ -1132,6 +1343,9 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
&const_expr_ctx, NULL, elem_type, &const_expr_ctx, NULL, elem_type,
elem_ref_type, NULL, elem_ref_type, NULL,
&array_init_values->elem_data[0], &array_init_values->elem_data[0],
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size)) { error_buf, error_buf_size)) {
destroy_init_expr_data_recursive( destroy_init_expr_data_recursive(
module, array_init_values); module, array_init_values);
@ -1164,6 +1378,9 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
elem_ref_type, NULL, elem_ref_type, NULL,
&array_init_values &array_init_values
->elem_data[i - 1], ->elem_data[i - 1],
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size)) { error_buf, error_buf_size)) {
destroy_init_expr_data_recursive( destroy_init_expr_data_recursive(
module, array_init_values); module, array_init_values);
@ -1180,10 +1397,13 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
uint32 len; uint32 len;
/* POP(i32) */ /* POP(i32) */
if (!pop_const_expr_stack(&const_expr_ctx, NULL, if (!pop_const_expr_stack(
VALUE_TYPE_I32, NULL, &const_expr_ctx, NULL, VALUE_TYPE_I32, NULL,
NULL, &len_val, error_buf, NULL, &len_val,
error_buf_size)) { #if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size)) {
goto fail; goto fail;
} }
len = len_val.i32; len = len_val.i32;
@ -1197,6 +1417,9 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
if (!push_const_expr_stack( if (!push_const_expr_stack(
&const_expr_ctx, flag, cur_ref_type.ref_type, &const_expr_ctx, flag, cur_ref_type.ref_type,
&cur_ref_type, (uint8)opcode1, &cur_value, &cur_ref_type, (uint8)opcode1, &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size)) { error_buf, error_buf_size)) {
if (array_init_values) { if (array_init_values) {
destroy_init_expr_data_recursive( destroy_init_expr_data_recursive(
@ -1223,9 +1446,13 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
case WASM_OP_REF_I31: case WASM_OP_REF_I31:
{ {
/* POP(i32) */ /* POP(i32) */
if (!pop_const_expr_stack( if (!pop_const_expr_stack(&const_expr_ctx, NULL,
&const_expr_ctx, NULL, VALUE_TYPE_I32, NULL, VALUE_TYPE_I32, NULL, NULL,
NULL, &cur_value, error_buf, error_buf_size)) { &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size)) {
goto fail; goto fail;
} }
@ -1234,6 +1461,9 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
if (!push_const_expr_stack( if (!push_const_expr_stack(
&const_expr_ctx, flag, cur_ref_type.ref_type, &const_expr_ctx, flag, cur_ref_type.ref_type,
&cur_ref_type, (uint8)opcode1, &cur_value, &cur_ref_type, (uint8)opcode1, &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size)) { error_buf, error_buf_size)) {
goto fail; goto fail;
} }
@ -1268,7 +1498,11 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
ref_type, &opcode, ref_type, &opcode,
#endif #endif
&cur_value, error_buf, error_buf_size)) { &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
&cur_expr,
#endif
error_buf, error_buf_size)) {
goto fail; goto fail;
} }
@ -1278,8 +1512,21 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
goto fail; goto fail;
} }
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
if (cur_expr != NULL) {
bh_memcpy_s(init_expr, sizeof(InitializerExpression), cur_expr,
sizeof(InitializerExpression));
wasm_runtime_free(cur_expr);
}
else {
init_expr->init_expr_type = flag;
init_expr->u.unary.v = cur_value;
}
#else
init_expr->init_expr_type = flag; init_expr->init_expr_type = flag;
init_expr->u = cur_value; init_expr->u.unary.v = cur_value;
#endif /* end of WASM_ENABLE_EXTENDED_CONST_EXPR != 0 */
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
if (init_expr->init_expr_type == WASM_OP_GC_PREFIX) { if (init_expr->init_expr_type == WASM_OP_GC_PREFIX) {
@ -1310,11 +1557,11 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
#endif /* end of WASM_ENABLE_GC != 0 */ #endif /* end of WASM_ENABLE_GC != 0 */
*p_buf = p; *p_buf = p;
destroy_const_expr_stack(&const_expr_ctx); destroy_const_expr_stack(&const_expr_ctx, false);
return true; return true;
fail: fail:
destroy_const_expr_stack(&const_expr_ctx); destroy_const_expr_stack(&const_expr_ctx, true);
return false; return false;
} }
@ -2042,9 +2289,9 @@ load_type_section(const uint8 *buf, const uint8 *buf_end, WASMModule *module,
"recursive type count too large"); "recursive type count too large");
return false; return false;
} }
module->type_count += rec_count - 1;
new_total_size = new_total_size =
sizeof(WASMFuncType *) * (uint64)module->type_count; sizeof(WASMFuncType *)
* (uint64)(module->type_count + rec_count - 1);
if (new_total_size > UINT32_MAX) { if (new_total_size > UINT32_MAX) {
set_error_buf(error_buf, error_buf_size, set_error_buf(error_buf, error_buf_size,
"allocate memory failed"); "allocate memory failed");
@ -2052,6 +2299,7 @@ load_type_section(const uint8 *buf, const uint8 *buf_end, WASMModule *module,
} }
MEM_REALLOC(module->types, (uint32)total_size, MEM_REALLOC(module->types, (uint32)total_size,
(uint32)new_total_size); (uint32)new_total_size);
module->type_count += rec_count - 1;
total_size = new_total_size; total_size = new_total_size;
} }
@ -3351,7 +3599,8 @@ load_import_section(const uint8 *buf, const uint8 *buf_end, WASMModule *module,
/* valtype */ /* valtype */
CHECK_BUF(p, p_end, 1); CHECK_BUF(p, p_end, 1);
global_type = read_uint8(p); global_type = read_uint8(p);
if (wasm_is_reftype_htref_nullable(global_type)) { if (wasm_is_reftype_htref_nullable(global_type)
|| wasm_is_reftype_htref_non_nullable(global_type)) {
int32 heap_type; int32 heap_type;
read_leb_int32(p, p_end, heap_type); read_leb_int32(p, p_end, heap_type);
(void)heap_type; (void)heap_type;
@ -4070,9 +4319,9 @@ load_global_section(const uint8 *buf, const uint8 *buf_end, WASMModule *module,
if (global->init_expr.init_expr_type == INIT_EXPR_TYPE_GET_GLOBAL) { if (global->init_expr.init_expr_type == INIT_EXPR_TYPE_GET_GLOBAL) {
uint8 global_type; uint8 global_type;
WASMRefType *global_ref_type; WASMRefType *global_ref_type;
uint32 global_idx = global->init_expr.u.global_index; uint32 global_idx = global->init_expr.u.unary.v.global_index;
if (global->init_expr.u.global_index if (global->init_expr.u.unary.v.global_index
>= module->import_global_count + i) { >= module->import_global_count + i) {
set_error_buf(error_buf, error_buf_size, "unknown global"); set_error_buf(error_buf, error_buf_size, "unknown global");
return false; return false;
@ -4469,7 +4718,7 @@ load_func_index_vec(const uint8 **p_buf, const uint8 *buf_end,
} }
init_expr->init_expr_type = INIT_EXPR_TYPE_FUNCREF_CONST; init_expr->init_expr_type = INIT_EXPR_TYPE_FUNCREF_CONST;
init_expr->u.ref_index = function_index; init_expr->u.unary.v.ref_index = function_index;
} }
*p_buf = p; *p_buf = p;
@ -4742,7 +4991,7 @@ load_table_segment_section(const uint8 *buf, const uint8 *buf_end,
#if WASM_ENABLE_MEMORY64 != 0 #if WASM_ENABLE_MEMORY64 != 0
if (table_elem_idx_type == VALUE_TYPE_I64 if (table_elem_idx_type == VALUE_TYPE_I64
&& table_segment->base_offset.u.u64 > UINT32_MAX) { && table_segment->base_offset.u.unary.v.u64 > UINT32_MAX) {
set_error_buf(error_buf, error_buf_size, set_error_buf(error_buf, error_buf_size,
"In table64, table base offset can't be " "In table64, table base offset can't be "
"larger than UINT32_MAX"); "larger than UINT32_MAX");
@ -4902,6 +5151,9 @@ load_data_segment_section(const uint8 *buf, const uint8 *buf_end,
if (!(dataseg = module->data_segments[i] = loader_malloc( if (!(dataseg = module->data_segments[i] = loader_malloc(
sizeof(WASMDataSeg), error_buf, error_buf_size))) { sizeof(WASMDataSeg), error_buf, error_buf_size))) {
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
destroy_init_expr(module, &init_expr);
#endif
return false; return false;
} }
@ -6029,7 +6281,8 @@ load_from_sections(WASMModule *module, WASMSection *sections,
&& global->init_expr.init_expr_type && global->init_expr.init_expr_type
== INIT_EXPR_TYPE_I32_CONST) { == INIT_EXPR_TYPE_I32_CONST) {
aux_heap_base_global = global; aux_heap_base_global = global;
aux_heap_base = (uint64)(uint32)global->init_expr.u.i32; aux_heap_base =
(uint64)(uint32)global->init_expr.u.unary.v.i32;
aux_heap_base_global_index = export->index; aux_heap_base_global_index = export->index;
LOG_VERBOSE("Found aux __heap_base global, value: %" PRIu64, LOG_VERBOSE("Found aux __heap_base global, value: %" PRIu64,
aux_heap_base); aux_heap_base);
@ -6050,7 +6303,8 @@ load_from_sections(WASMModule *module, WASMSection *sections,
&& global->init_expr.init_expr_type && global->init_expr.init_expr_type
== INIT_EXPR_TYPE_I32_CONST) { == INIT_EXPR_TYPE_I32_CONST) {
aux_data_end_global = global; aux_data_end_global = global;
aux_data_end = (uint64)(uint32)global->init_expr.u.i32; aux_data_end =
(uint64)(uint32)global->init_expr.u.unary.v.i32;
aux_data_end_global_index = export->index; aux_data_end_global_index = export->index;
LOG_VERBOSE("Found aux __data_end global, value: %" PRIu64, LOG_VERBOSE("Found aux __data_end global, value: %" PRIu64,
aux_data_end); aux_data_end);
@ -6091,10 +6345,11 @@ load_from_sections(WASMModule *module, WASMSection *sections,
&& global->type.val_type == VALUE_TYPE_I32 && global->type.val_type == VALUE_TYPE_I32
&& global->init_expr.init_expr_type && global->init_expr.init_expr_type
== INIT_EXPR_TYPE_I32_CONST == INIT_EXPR_TYPE_I32_CONST
&& (uint64)(uint32)global->init_expr.u.i32 && (uint64)(uint32)global->init_expr.u.unary.v.i32
<= aux_heap_base) { <= aux_heap_base) {
aux_stack_top_global = global; aux_stack_top_global = global;
aux_stack_top = (uint64)(uint32)global->init_expr.u.i32; aux_stack_top =
(uint64)(uint32)global->init_expr.u.unary.v.i32;
module->aux_stack_top_global_index = module->aux_stack_top_global_index =
module->import_global_count + global_index; module->import_global_count + global_index;
module->aux_stack_bottom = aux_stack_top; module->aux_stack_bottom = aux_stack_top;
@ -6945,7 +7200,7 @@ wasm_loader_unload(WASMModule *module)
wasm_runtime_free(module->memories); wasm_runtime_free(module->memories);
if (module->globals) { if (module->globals) {
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0 || WASM_ENABLE_EXTENDED_CONST_EXPR != 0
for (i = 0; i < module->global_count; i++) { for (i = 0; i < module->global_count; i++) {
destroy_init_expr(module, &module->globals[i].init_expr); destroy_init_expr(module, &module->globals[i].init_expr);
} }
@ -6978,6 +7233,9 @@ wasm_loader_unload(WASMModule *module)
#endif #endif
wasm_runtime_free(module->table_segments[i].init_values); wasm_runtime_free(module->table_segments[i].init_values);
} }
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
destroy_init_expr(module, &module->table_segments[i].base_offset);
#endif
} }
wasm_runtime_free(module->table_segments); wasm_runtime_free(module->table_segments);
} }
@ -6987,6 +7245,10 @@ wasm_loader_unload(WASMModule *module)
if (module->data_segments[i]) { if (module->data_segments[i]) {
if (module->data_segments[i]->is_data_cloned) if (module->data_segments[i]->is_data_cloned)
wasm_runtime_free(module->data_segments[i]->data); wasm_runtime_free(module->data_segments[i]->data);
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
destroy_init_expr(module,
&(module->data_segments[i]->base_offset));
#endif
wasm_runtime_free(module->data_segments[i]); wasm_runtime_free(module->data_segments[i]);
} }
} }
@ -13258,7 +13520,8 @@ re_scan:
== VALUE_TYPE_FUNCREF == VALUE_TYPE_FUNCREF
&& module->globals[i].init_expr.init_expr_type && module->globals[i].init_expr.init_expr_type
== INIT_EXPR_TYPE_FUNCREF_CONST == INIT_EXPR_TYPE_FUNCREF_CONST
&& module->globals[i].init_expr.u.u32 == func_idx) { && module->globals[i].init_expr.u.unary.v.u32
== func_idx) {
func_declared = true; func_declared = true;
break; break;
} }
@ -13287,7 +13550,8 @@ re_scan:
#endif #endif
) { ) {
for (j = 0; j < table_seg->value_count; j++) { for (j = 0; j < table_seg->value_count; j++) {
if (table_seg->init_values[j].u.ref_index if (table_seg->init_values[j]
.u.unary.v.ref_index
== func_idx) { == func_idx) {
func_declared = true; func_declared = true;
break; break;
@ -15023,8 +15287,6 @@ re_scan:
case WASM_OP_STRING_NEW_LOSSY_UTF8: case WASM_OP_STRING_NEW_LOSSY_UTF8:
case WASM_OP_STRING_NEW_WTF8: case WASM_OP_STRING_NEW_WTF8:
{ {
uint32 memidx;
#if WASM_ENABLE_JIT != 0 || WASM_ENABLE_WAMR_COMPILER != 0 #if WASM_ENABLE_JIT != 0 || WASM_ENABLE_WAMR_COMPILER != 0
func->has_memory_operations = true; func->has_memory_operations = true;
#endif #endif
@ -15036,7 +15298,6 @@ re_scan:
POP_I32(); POP_I32();
POP_I32(); POP_I32();
PUSH_REF(REF_TYPE_STRINGREF); PUSH_REF(REF_TYPE_STRINGREF);
(void)memidx;
break; break;
} }
case WASM_OP_STRING_CONST: case WASM_OP_STRING_CONST:
@ -15064,8 +15325,6 @@ re_scan:
case WASM_OP_STRING_ENCODE_LOSSY_UTF8: case WASM_OP_STRING_ENCODE_LOSSY_UTF8:
case WASM_OP_STRING_ENCODE_WTF8: case WASM_OP_STRING_ENCODE_WTF8:
{ {
uint32 memidx;
#if WASM_ENABLE_JIT != 0 || WASM_ENABLE_WAMR_COMPILER != 0 #if WASM_ENABLE_JIT != 0 || WASM_ENABLE_WAMR_COMPILER != 0
func->has_memory_operations = true; func->has_memory_operations = true;
#endif #endif
@ -15077,7 +15336,6 @@ re_scan:
POP_I32(); POP_I32();
POP_STRINGREF(); POP_STRINGREF();
PUSH_I32(); PUSH_I32();
(void)memidx;
break; break;
} }
case WASM_OP_STRING_CONCAT: case WASM_OP_STRING_CONCAT:
@ -15118,8 +15376,6 @@ re_scan:
case WASM_OP_STRINGVIEW_WTF8_ENCODE_LOSSY_UTF8: case WASM_OP_STRINGVIEW_WTF8_ENCODE_LOSSY_UTF8:
case WASM_OP_STRINGVIEW_WTF8_ENCODE_WTF8: case WASM_OP_STRINGVIEW_WTF8_ENCODE_WTF8:
{ {
uint32 memidx;
#if WASM_ENABLE_JIT != 0 || WASM_ENABLE_WAMR_COMPILER != 0 #if WASM_ENABLE_JIT != 0 || WASM_ENABLE_WAMR_COMPILER != 0
func->has_memory_operations = true; func->has_memory_operations = true;
#endif #endif
@ -15134,7 +15390,6 @@ re_scan:
POP_REF(REF_TYPE_STRINGVIEWWTF8); POP_REF(REF_TYPE_STRINGVIEWWTF8);
PUSH_I32(); PUSH_I32();
PUSH_I32(); PUSH_I32();
(void)memidx;
break; break;
} }
case WASM_OP_STRINGVIEW_WTF8_SLICE: case WASM_OP_STRINGVIEW_WTF8_SLICE:
@ -15166,8 +15421,6 @@ re_scan:
} }
case WASM_OP_STRINGVIEW_WTF16_ENCODE: case WASM_OP_STRINGVIEW_WTF16_ENCODE:
{ {
uint32 memidx;
#if WASM_ENABLE_JIT != 0 || WASM_ENABLE_WAMR_COMPILER != 0 #if WASM_ENABLE_JIT != 0 || WASM_ENABLE_WAMR_COMPILER != 0
func->has_memory_operations = true; func->has_memory_operations = true;
#endif #endif
@ -15181,7 +15434,6 @@ re_scan:
POP_I32(); POP_I32();
POP_REF(REF_TYPE_STRINGVIEWWTF16); POP_REF(REF_TYPE_STRINGVIEWWTF16);
PUSH_I32(); PUSH_I32();
(void)memidx;
break; break;
} }
case WASM_OP_STRINGVIEW_WTF16_SLICE: case WASM_OP_STRINGVIEW_WTF16_SLICE:

View File

@ -261,6 +261,9 @@ typedef struct InitValue {
uint8 type; uint8 type;
uint8 flag; uint8 flag;
WASMValue value; WASMValue value;
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
InitializerExpression *expr;
#endif
} InitValue; } InitValue;
typedef struct ConstExprContext { typedef struct ConstExprContext {
@ -282,7 +285,11 @@ init_const_expr_stack(ConstExprContext *ctx, WASMModule *module)
static bool static bool
push_const_expr_stack(ConstExprContext *ctx, uint8 flag, uint8 type, push_const_expr_stack(ConstExprContext *ctx, uint8 flag, uint8 type,
WASMValue *value, char *error_buf, uint32 error_buf_size) WASMValue *value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
InitializerExpression *expr,
#endif
char *error_buf, uint32 error_buf_size)
{ {
InitValue *cur_value; InitValue *cur_value;
@ -305,6 +312,9 @@ push_const_expr_stack(ConstExprContext *ctx, uint8 flag, uint8 type,
cur_value->type = type; cur_value->type = type;
cur_value->flag = flag; cur_value->flag = flag;
cur_value->value = *value; cur_value->value = *value;
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
cur_value->expr = expr;
#endif
return true; return true;
fail: fail:
@ -313,7 +323,11 @@ fail:
static bool static bool
pop_const_expr_stack(ConstExprContext *ctx, uint8 *p_flag, uint8 type, pop_const_expr_stack(ConstExprContext *ctx, uint8 *p_flag, uint8 type,
WASMValue *p_value, char *error_buf, uint32 error_buf_size) WASMValue *p_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
InitializerExpression **p_expr,
#endif
char *error_buf, uint32 error_buf_size)
{ {
InitValue *cur_value; InitValue *cur_value;
@ -331,18 +345,50 @@ pop_const_expr_stack(ConstExprContext *ctx, uint8 *p_flag, uint8 type,
*p_flag = cur_value->flag; *p_flag = cur_value->flag;
if (p_value) if (p_value)
*p_value = cur_value->value; *p_value = cur_value->value;
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
if (p_expr)
*p_expr = cur_value->expr;
#endif
return true; return true;
} }
static void static void
destroy_const_expr_stack(ConstExprContext *ctx) destroy_const_expr_stack(ConstExprContext *ctx, bool free_exprs)
{ {
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
if (free_exprs) {
for (uint32 j = 0; j < ctx->sp; j++) {
if (ctx->stack[j].expr
&& is_expr_binary_op(ctx->stack[j].expr->init_expr_type)) {
destroy_init_expr_recursive(ctx->stack[j].expr);
ctx->stack[j].expr = NULL;
}
}
}
#endif
if (ctx->stack != ctx->data) { if (ctx->stack != ctx->data) {
wasm_runtime_free(ctx->stack); wasm_runtime_free(ctx->stack);
} }
} }
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
static void
destroy_init_expr(InitializerExpression *expr)
{
// free left and right exprs for binary operands
if (!is_expr_binary_op(expr->init_expr_type)) {
return;
}
if (expr->u.binary.l_expr) {
destroy_init_expr_recursive(expr->u.binary.l_expr);
}
if (expr->u.binary.r_expr) {
destroy_init_expr_recursive(expr->u.binary.r_expr);
}
expr->u.binary.l_expr = expr->u.binary.r_expr = NULL;
}
#endif /* end of WASM_ENABLE_EXTENDED_CONST_EXPR != 0 */
static bool static bool
load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end, load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
InitializerExpression *init_expr, uint8 type, char *error_buf, InitializerExpression *init_expr, uint8 type, char *error_buf,
@ -353,6 +399,9 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
uint32 i; uint32 i;
ConstExprContext const_expr_ctx = { 0 }; ConstExprContext const_expr_ctx = { 0 };
WASMValue cur_value = { 0 }; WASMValue cur_value = { 0 };
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
InitializerExpression *cur_expr = NULL;
#endif
init_const_expr_stack(&const_expr_ctx, module); init_const_expr_stack(&const_expr_ctx, module);
@ -367,8 +416,11 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
if (!push_const_expr_stack(&const_expr_ctx, flag, if (!push_const_expr_stack(&const_expr_ctx, flag,
VALUE_TYPE_I32, &cur_value, VALUE_TYPE_I32, &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size)) { error_buf, error_buf_size)) {
bh_assert(0); goto fail;
} }
break; break;
/* i64.const */ /* i64.const */
@ -377,8 +429,11 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
if (!push_const_expr_stack(&const_expr_ctx, flag, if (!push_const_expr_stack(&const_expr_ctx, flag,
VALUE_TYPE_I64, &cur_value, VALUE_TYPE_I64, &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size)) { error_buf, error_buf_size)) {
bh_assert(0); goto fail;
} }
break; break;
/* f32.const */ /* f32.const */
@ -390,8 +445,11 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
if (!push_const_expr_stack(&const_expr_ctx, flag, if (!push_const_expr_stack(&const_expr_ctx, flag,
VALUE_TYPE_F32, &cur_value, VALUE_TYPE_F32, &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size)) { error_buf, error_buf_size)) {
bh_assert(0); goto fail;
} }
break; break;
/* f64.const */ /* f64.const */
@ -403,8 +461,11 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
if (!push_const_expr_stack(&const_expr_ctx, flag, if (!push_const_expr_stack(&const_expr_ctx, flag,
VALUE_TYPE_F64, &cur_value, VALUE_TYPE_F64, &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size)) { error_buf, error_buf_size)) {
bh_assert(0); goto fail;
} }
break; break;
@ -417,13 +478,16 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
cur_value.ref_index = func_idx; cur_value.ref_index = func_idx;
if (!check_function_index(module, func_idx, error_buf, if (!check_function_index(module, func_idx, error_buf,
error_buf_size)) { error_buf_size)) {
bh_assert(0); goto fail;
} }
if (!push_const_expr_stack(&const_expr_ctx, flag, if (!push_const_expr_stack(&const_expr_ctx, flag,
VALUE_TYPE_FUNCREF, &cur_value, VALUE_TYPE_FUNCREF, &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
NULL,
#endif
error_buf, error_buf_size)) { error_buf, error_buf_size)) {
bh_assert(0); goto fail;
} }
break; break;
} }
@ -438,9 +502,12 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
cur_value.ref_index = UINT32_MAX; cur_value.ref_index = UINT32_MAX;
if (!push_const_expr_stack(&const_expr_ctx, flag, type1, if (!push_const_expr_stack(&const_expr_ctx, flag, type1,
&cur_value, error_buf, &cur_value,
error_buf_size)) { #if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
bh_assert(0); NULL,
#endif
error_buf, error_buf_size)) {
goto fail;
} }
break; break;
} }
@ -471,15 +538,93 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
} }
if (!push_const_expr_stack(&const_expr_ctx, flag, global_type, if (!push_const_expr_stack(&const_expr_ctx, flag, global_type,
&cur_value, error_buf, &cur_value,
error_buf_size)) #if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
bh_assert(0); NULL,
#endif
error_buf, error_buf_size))
goto fail;
break; break;
} }
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
case INIT_EXPR_TYPE_I32_ADD:
case INIT_EXPR_TYPE_I64_ADD:
case INIT_EXPR_TYPE_I32_SUB:
case INIT_EXPR_TYPE_I64_SUB:
case INIT_EXPR_TYPE_I32_MUL:
case INIT_EXPR_TYPE_I64_MUL:
{
InitializerExpression *l_expr, *r_expr;
WASMValue l_value, r_value;
uint8 l_flag, r_flag;
uint8 value_type;
if (flag == INIT_EXPR_TYPE_I32_ADD
|| flag == INIT_EXPR_TYPE_I32_SUB
|| flag == INIT_EXPR_TYPE_I32_MUL) {
value_type = VALUE_TYPE_I32;
}
else {
value_type = VALUE_TYPE_I64;
}
/* If right flag indicates a binary operation, right expr will
* be popped from stack. Otherwise, allocate a new expr for
* right expr. Same for left expr.
*/
if (!(pop_const_expr_stack(&const_expr_ctx, &r_flag, value_type,
&r_value, &r_expr, error_buf,
error_buf_size))) {
goto fail;
}
if (!is_expr_binary_op(r_flag)) {
if (!(r_expr = loader_malloc(sizeof(InitializerExpression),
error_buf, error_buf_size))) {
goto fail;
}
r_expr->init_expr_type = r_flag;
r_expr->u.unary.v = r_value;
}
if (!(pop_const_expr_stack(&const_expr_ctx, &l_flag, value_type,
&l_value, &l_expr, error_buf,
error_buf_size))) {
destroy_init_expr_recursive(r_expr);
goto fail;
}
if (!is_expr_binary_op(l_flag)) {
if (!(l_expr = loader_malloc(sizeof(InitializerExpression),
error_buf, error_buf_size))) {
destroy_init_expr_recursive(r_expr);
goto fail;
}
l_expr->init_expr_type = l_flag;
l_expr->u.unary.v = l_value;
}
if (!(cur_expr = loader_malloc(sizeof(InitializerExpression),
error_buf, error_buf_size))) {
destroy_init_expr_recursive(l_expr);
destroy_init_expr_recursive(r_expr);
goto fail;
}
cur_expr->init_expr_type = flag;
cur_expr->u.binary.l_expr = l_expr;
cur_expr->u.binary.r_expr = r_expr;
if (!push_const_expr_stack(&const_expr_ctx, flag, value_type,
&cur_value, cur_expr, error_buf,
error_buf_size)) {
destroy_init_expr_recursive(cur_expr);
goto fail;
}
break;
}
#endif
default: default:
{ {
bh_assert(0); goto fail;
} }
} }
@ -489,18 +634,42 @@ load_init_expr(WASMModule *module, const uint8 **p_buf, const uint8 *buf_end,
/* There should be only one value left on the init value stack */ /* There should be only one value left on the init value stack */
if (!pop_const_expr_stack(&const_expr_ctx, &flag, type, &cur_value, if (!pop_const_expr_stack(&const_expr_ctx, &flag, type, &cur_value,
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
&cur_expr,
#endif
error_buf, error_buf_size)) { error_buf, error_buf_size)) {
bh_assert(0); goto fail;
} }
bh_assert(const_expr_ctx.sp == 0); if (const_expr_ctx.sp != 0) {
set_error_buf(error_buf, error_buf_size,
"type mismatch: illegal constant opcode sequence");
goto fail;
}
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
if (cur_expr != NULL) {
bh_memcpy_s(init_expr, sizeof(InitializerExpression), cur_expr,
sizeof(InitializerExpression));
wasm_runtime_free(cur_expr);
}
else {
init_expr->init_expr_type = flag;
init_expr->u.unary.v = cur_value;
}
#else
init_expr->init_expr_type = flag; init_expr->init_expr_type = flag;
init_expr->u = cur_value; init_expr->u.unary.v = cur_value;
#endif /* end of WASM_ENABLE_EXTENDED_CONST_EXPR != 0 */
*p_buf = p; *p_buf = p;
destroy_const_expr_stack(&const_expr_ctx); destroy_const_expr_stack(&const_expr_ctx, false);
return true; return true;
fail:
destroy_const_expr_stack(&const_expr_ctx, true);
return false;
} }
static bool static bool
@ -1385,13 +1554,14 @@ load_global_section(const uint8 *buf, const uint8 *buf_end, WASMModule *module,
* global.get instructions are * global.get instructions are
* only allowed to refer to imported globals. * only allowed to refer to imported globals.
*/ */
uint32 target_global_index = global->init_expr.u.global_index; uint32 target_global_index =
global->init_expr.u.unary.v.global_index;
bh_assert(target_global_index < module->import_global_count); bh_assert(target_global_index < module->import_global_count);
(void)target_global_index; (void)target_global_index;
} }
else if (INIT_EXPR_TYPE_FUNCREF_CONST else if (INIT_EXPR_TYPE_FUNCREF_CONST
== global->init_expr.init_expr_type) { == global->init_expr.init_expr_type) {
bh_assert(global->init_expr.u.ref_index bh_assert(global->init_expr.u.unary.v.ref_index
< module->import_function_count < module->import_function_count
+ module->function_count); + module->function_count);
} }
@ -1575,7 +1745,7 @@ load_func_index_vec(const uint8 **p_buf, const uint8 *buf_end,
} }
init_expr->init_expr_type = INIT_EXPR_TYPE_FUNCREF_CONST; init_expr->init_expr_type = INIT_EXPR_TYPE_FUNCREF_CONST;
init_expr->u.ref_index = function_index; init_expr->u.unary.v.ref_index = function_index;
} }
*p_buf = p; *p_buf = p;
@ -1890,6 +2060,9 @@ load_data_segment_section(const uint8 *buf, const uint8 *buf_end,
if (!(dataseg = module->data_segments[i] = loader_malloc( if (!(dataseg = module->data_segments[i] = loader_malloc(
sizeof(WASMDataSeg), error_buf, error_buf_size))) { sizeof(WASMDataSeg), error_buf, error_buf_size))) {
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
destroy_init_expr(&init_expr);
#endif
return false; return false;
} }
@ -2778,7 +2951,8 @@ load_from_sections(WASMModule *module, WASMSection *sections,
&& global->init_expr.init_expr_type && global->init_expr.init_expr_type
== INIT_EXPR_TYPE_I32_CONST) { == INIT_EXPR_TYPE_I32_CONST) {
aux_heap_base_global = global; aux_heap_base_global = global;
aux_heap_base = (uint64)(uint32)global->init_expr.u.i32; aux_heap_base =
(uint64)(uint32)global->init_expr.u.unary.v.i32;
aux_heap_base_global_index = export->index; aux_heap_base_global_index = export->index;
LOG_VERBOSE("Found aux __heap_base global, value: %" PRIu64, LOG_VERBOSE("Found aux __heap_base global, value: %" PRIu64,
aux_heap_base); aux_heap_base);
@ -2798,7 +2972,8 @@ load_from_sections(WASMModule *module, WASMSection *sections,
&& global->init_expr.init_expr_type && global->init_expr.init_expr_type
== INIT_EXPR_TYPE_I32_CONST) { == INIT_EXPR_TYPE_I32_CONST) {
aux_data_end_global = global; aux_data_end_global = global;
aux_data_end = (uint64)(uint32)global->init_expr.u.i32; aux_data_end =
(uint64)(uint32)global->init_expr.u.unary.v.i32;
aux_data_end_global_index = export->index; aux_data_end_global_index = export->index;
LOG_VERBOSE("Found aux __data_end global, value: %" PRIu64, LOG_VERBOSE("Found aux __data_end global, value: %" PRIu64,
aux_data_end); aux_data_end);
@ -2838,10 +3013,11 @@ load_from_sections(WASMModule *module, WASMSection *sections,
&& global->type.val_type == VALUE_TYPE_I32 && global->type.val_type == VALUE_TYPE_I32
&& global->init_expr.init_expr_type && global->init_expr.init_expr_type
== INIT_EXPR_TYPE_I32_CONST == INIT_EXPR_TYPE_I32_CONST
&& (uint64)(uint32)global->init_expr.u.i32 && (uint64)(uint32)global->init_expr.u.unary.v.i32
<= aux_heap_base) { <= aux_heap_base) {
aux_stack_top_global = global; aux_stack_top_global = global;
aux_stack_top = (uint64)(uint32)global->init_expr.u.i32; aux_stack_top =
(uint64)(uint32)global->init_expr.u.unary.v.i32;
module->aux_stack_top_global_index = module->aux_stack_top_global_index =
module->import_global_count + global_index; module->import_global_count + global_index;
module->aux_stack_bottom = aux_stack_top; module->aux_stack_bottom = aux_stack_top;
@ -3448,8 +3624,14 @@ wasm_loader_unload(WASMModule *module)
if (module->memories) if (module->memories)
wasm_runtime_free(module->memories); wasm_runtime_free(module->memories);
if (module->globals) if (module->globals) {
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
for (i = 0; i < module->global_count; i++) {
destroy_init_expr(&module->globals[i].init_expr);
}
#endif
wasm_runtime_free(module->globals); wasm_runtime_free(module->globals);
}
if (module->exports) if (module->exports)
wasm_runtime_free(module->exports); wasm_runtime_free(module->exports);
@ -3458,6 +3640,9 @@ wasm_loader_unload(WASMModule *module)
for (i = 0; i < module->table_seg_count; i++) { for (i = 0; i < module->table_seg_count; i++) {
if (module->table_segments[i].init_values) if (module->table_segments[i].init_values)
wasm_runtime_free(module->table_segments[i].init_values); wasm_runtime_free(module->table_segments[i].init_values);
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
destroy_init_expr(&module->table_segments[i].base_offset);
#endif
} }
wasm_runtime_free(module->table_segments); wasm_runtime_free(module->table_segments);
} }
@ -3467,6 +3652,9 @@ wasm_loader_unload(WASMModule *module)
if (module->data_segments[i]) { if (module->data_segments[i]) {
if (module->data_segments[i]->is_data_cloned) if (module->data_segments[i]->is_data_cloned)
wasm_runtime_free(module->data_segments[i]->data); wasm_runtime_free(module->data_segments[i]->data);
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
destroy_init_expr(&module->data_segments[i]->base_offset);
#endif
wasm_runtime_free(module->data_segments[i]); wasm_runtime_free(module->data_segments[i]);
} }
} }
@ -7320,7 +7508,8 @@ re_scan:
== VALUE_TYPE_FUNCREF == VALUE_TYPE_FUNCREF
&& module->globals[i].init_expr.init_expr_type && module->globals[i].init_expr.init_expr_type
== INIT_EXPR_TYPE_FUNCREF_CONST == INIT_EXPR_TYPE_FUNCREF_CONST
&& module->globals[i].init_expr.u.u32 == func_idx) { && module->globals[i].init_expr.u.unary.v.ref_index
== func_idx) {
func_declared = true; func_declared = true;
break; break;
} }
@ -7334,7 +7523,8 @@ re_scan:
i++, table_seg++) { i++, table_seg++) {
if (table_seg->elem_type == VALUE_TYPE_FUNCREF) { if (table_seg->elem_type == VALUE_TYPE_FUNCREF) {
for (j = 0; j < table_seg->value_count; j++) { for (j = 0; j < table_seg->value_count; j++) {
if (table_seg->init_values[j].u.ref_index if (table_seg->init_values[j]
.u.unary.v.ref_index
== func_idx) { == func_idx) {
func_declared = true; func_declared = true;
break; break;


@ -1165,6 +1165,81 @@ instantiate_array_global_recursive(WASMModule *module,
} }
#endif #endif
static bool
get_init_value_recursive(WASMModule *module, InitializerExpression *expr,
WASMGlobalInstance *globals, WASMValue *value,
char *error_buf, uint32 error_buf_size)
{
uint8 flag = expr->init_expr_type;
switch (flag) {
case INIT_EXPR_TYPE_GET_GLOBAL:
{
if (!check_global_init_expr(module, expr->u.unary.v.global_index,
error_buf, error_buf_size)) {
goto fail;
}
*value = globals[expr->u.unary.v.global_index].initial_value;
break;
}
case INIT_EXPR_TYPE_I32_CONST:
case INIT_EXPR_TYPE_I64_CONST:
{
*value = expr->u.unary.v;
break;
}
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
case INIT_EXPR_TYPE_I32_ADD:
case INIT_EXPR_TYPE_I32_SUB:
case INIT_EXPR_TYPE_I32_MUL:
case INIT_EXPR_TYPE_I64_ADD:
case INIT_EXPR_TYPE_I64_SUB:
case INIT_EXPR_TYPE_I64_MUL:
{
WASMValue l_value, r_value;
if (!expr->u.binary.l_expr || !expr->u.binary.r_expr) {
goto fail;
}
if (!get_init_value_recursive(module, expr->u.binary.l_expr,
globals, &l_value, error_buf,
error_buf_size)) {
goto fail;
}
if (!get_init_value_recursive(module, expr->u.binary.r_expr,
globals, &r_value, error_buf,
error_buf_size)) {
goto fail;
}
if (flag == INIT_EXPR_TYPE_I32_ADD) {
value->i32 = l_value.i32 + r_value.i32;
}
else if (flag == INIT_EXPR_TYPE_I32_SUB) {
value->i32 = l_value.i32 - r_value.i32;
}
else if (flag == INIT_EXPR_TYPE_I32_MUL) {
value->i32 = l_value.i32 * r_value.i32;
}
else if (flag == INIT_EXPR_TYPE_I64_ADD) {
value->i64 = l_value.i64 + r_value.i64;
}
else if (flag == INIT_EXPR_TYPE_I64_SUB) {
value->i64 = l_value.i64 - r_value.i64;
}
else if (flag == INIT_EXPR_TYPE_I64_MUL) {
value->i64 = l_value.i64 * r_value.i64;
}
break;
}
#endif /* end of WASM_ENABLE_EXTENDED_CONST_EXPR != 0 */
default:
goto fail;
}
return true;
fail:
return false;
}
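/*
 * Illustrative sketch: how an extended constant expression such as
 * (i32.add (global.get 0) (i32.const 16)) is evaluated by recursing over a
 * small expression tree, mirroring get_init_value_recursive() above.
 * MiniExpr is a simplified stand-in for InitializerExpression, not the real
 * WAMR definition.
 */
#include <stdint.h>
#include <stdio.h>

typedef enum { MINI_I32_CONST, MINI_GET_GLOBAL, MINI_I32_ADD } MiniOp;

typedef struct MiniExpr {
    MiniOp op;
    int32_t i32;            /* MINI_I32_CONST */
    uint32_t global_index;  /* MINI_GET_GLOBAL */
    struct MiniExpr *l, *r; /* MINI_I32_ADD */
} MiniExpr;

static int32_t
mini_eval(const MiniExpr *e, const int32_t *imported_globals)
{
    switch (e->op) {
        case MINI_I32_CONST:
            return e->i32;
        case MINI_GET_GLOBAL:
            /* the loader only allows references to imported globals here */
            return imported_globals[e->global_index];
        default: /* MINI_I32_ADD */
            return mini_eval(e->l, imported_globals)
                   + mini_eval(e->r, imported_globals);
    }
}

int
main(void)
{
    int32_t imported_globals[1] = { 1024 }; /* e.g. an imported __memory_base */
    MiniExpr base = { MINI_GET_GLOBAL, 0, 0, NULL, NULL };
    MiniExpr offset = { MINI_I32_CONST, 16, 0, NULL, NULL };
    MiniExpr sum = { MINI_I32_ADD, 0, 0, &base, &offset };
    printf("init value = %d\n", mini_eval(&sum, imported_globals)); /* 1040 */
    return 0;
}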
/** /**
* Instantiate globals in a module. * Instantiate globals in a module.
*/ */
@ -1209,7 +1284,7 @@ globals_instantiate(WASMModule *module, WASMModuleInstance *module_inst,
/* The linked global instance has been initialized, we /* The linked global instance has been initialized, we
just need to copy the value. */ just need to copy the value. */
global->initial_value = global->initial_value =
global_import->import_global_linked->init_expr.u; global_import->import_global_linked->init_expr.u.unary.v;
} }
else else
#endif #endif
@ -1245,17 +1320,23 @@ globals_instantiate(WASMModule *module, WASMModuleInstance *module_inst,
#endif #endif
switch (flag) { switch (flag) {
case INIT_EXPR_TYPE_I32_CONST:
case INIT_EXPR_TYPE_I64_CONST:
case INIT_EXPR_TYPE_GET_GLOBAL: case INIT_EXPR_TYPE_GET_GLOBAL:
#if WASM_ENABLE_EXTENDED_CONST_EXPR != 0
case INIT_EXPR_TYPE_I32_ADD:
case INIT_EXPR_TYPE_I32_SUB:
case INIT_EXPR_TYPE_I32_MUL:
case INIT_EXPR_TYPE_I64_ADD:
case INIT_EXPR_TYPE_I64_SUB:
case INIT_EXPR_TYPE_I64_MUL:
#endif
{ {
if (!check_global_init_expr(module, init_expr->u.global_index, if (!get_init_value_recursive(module, init_expr, globals,
error_buf, error_buf_size)) { &global->initial_value, error_buf,
error_buf_size)) {
goto fail; goto fail;
} }
bh_memcpy_s(
&(global->initial_value), sizeof(WASMValue),
&(globals[init_expr->u.global_index].initial_value),
sizeof(globals[init_expr->u.global_index].initial_value));
break; break;
} }
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
@ -1267,11 +1348,12 @@ globals_instantiate(WASMModule *module, WASMModuleInstance *module_inst,
uint32 type_idx; uint32 type_idx;
if (flag == INIT_EXPR_TYPE_STRUCT_NEW) { if (flag == INIT_EXPR_TYPE_STRUCT_NEW) {
init_values = (WASMStructNewInitValues *)init_expr->u.data; init_values =
(WASMStructNewInitValues *)init_expr->u.unary.v.data;
type_idx = init_values->type_idx; type_idx = init_values->type_idx;
} }
else { else {
type_idx = init_expr->u.type_index; type_idx = init_expr->u.unary.v.type_index;
} }
struct_obj = instantiate_struct_global_recursive( struct_obj = instantiate_struct_global_recursive(
@ -1294,12 +1376,14 @@ globals_instantiate(WASMModule *module, WASMModuleInstance *module_inst,
uint32 type_idx, len; uint32 type_idx, len;
if (flag == INIT_EXPR_TYPE_ARRAY_NEW_DEFAULT) { if (flag == INIT_EXPR_TYPE_ARRAY_NEW_DEFAULT) {
type_idx = init_expr->u.array_new_default.type_index; type_idx =
len = init_expr->u.array_new_default.length; init_expr->u.unary.v.array_new_default.type_index;
len = init_expr->u.unary.v.array_new_default.length;
array_init_value = &empty_value; array_init_value = &empty_value;
} }
else { else {
init_values = (WASMArrayNewInitValues *)init_expr->u.data; init_values =
(WASMArrayNewInitValues *)init_expr->u.unary.v.data;
type_idx = init_values->type_idx; type_idx = init_values->type_idx;
len = init_values->length; len = init_values->length;
@ -1318,13 +1402,12 @@ globals_instantiate(WASMModule *module, WASMModuleInstance *module_inst,
case INIT_EXPR_TYPE_I31_NEW: case INIT_EXPR_TYPE_I31_NEW:
{ {
global->initial_value.gc_obj = global->initial_value.gc_obj =
(wasm_obj_t)wasm_i31_obj_new(init_expr->u.i32); (wasm_obj_t)wasm_i31_obj_new(init_expr->u.unary.v.i32);
break; break;
} }
#endif /* end of WASM_ENABLE_GC != 0 */ #endif /* end of WASM_ENABLE_GC != 0 */
default: default:
bh_memcpy_s(&(global->initial_value), sizeof(WASMValue), global->initial_value = init_expr->u.unary.v;
&(init_expr->u), sizeof(init_expr->u));
break; break;
} }
@ -2668,7 +2751,7 @@ wasm_instantiate(WASMModule *module, WASMModuleInstance *parent,
} }
STORE_PTR((void **)global_data, func_obj); STORE_PTR((void **)global_data, func_obj);
global_data += sizeof(void *); global_data += sizeof(void *);
/* Also update the inital_value since other globals may /* Also update the initial_value since other globals may
* refer to this */ * refer to this */
global->initial_value.gc_obj = (wasm_obj_t)func_obj; global->initial_value.gc_obj = (wasm_obj_t)func_obj;
break; break;
@ -2698,6 +2781,7 @@ wasm_instantiate(WASMModule *module, WASMModuleInstance *parent,
uint8 *memory_data = NULL; uint8 *memory_data = NULL;
uint64 memory_size = 0; uint64 memory_size = 0;
WASMDataSeg *data_seg = module->data_segments[i]; WASMDataSeg *data_seg = module->data_segments[i];
WASMValue offset_value;
#if WASM_ENABLE_BULK_MEMORY != 0 #if WASM_ENABLE_BULK_MEMORY != 0
if (data_seg->is_passive) if (data_seg->is_passive)
@ -2717,54 +2801,37 @@ wasm_instantiate(WASMModule *module, WASMModuleInstance *parent,
(uint64)memory->num_bytes_per_page * memory->cur_page_count; (uint64)memory->num_bytes_per_page * memory->cur_page_count;
bh_assert(memory_data || memory_size == 0); bh_assert(memory_data || memory_size == 0);
bh_assert(data_seg->base_offset.init_expr_type uint8 offset_flag = data_seg->base_offset.init_expr_type;
== INIT_EXPR_TYPE_GET_GLOBAL bh_assert(offset_flag == INIT_EXPR_TYPE_GET_GLOBAL
|| data_seg->base_offset.init_expr_type || (memory->is_memory64 ? is_valid_i64_offset(offset_flag)
== (memory->is_memory64 ? INIT_EXPR_TYPE_I64_CONST : is_valid_i32_offset(offset_flag)));
: INIT_EXPR_TYPE_I32_CONST));
if (data_seg->base_offset.init_expr_type == INIT_EXPR_TYPE_GET_GLOBAL) { if (!get_init_value_recursive(module, &data_seg->base_offset, globals,
if (!check_global_init_expr(module, &offset_value, error_buf,
data_seg->base_offset.u.global_index, error_buf_size)) {
error_buf, error_buf_size)) { goto fail;
goto fail; }
}
if (offset_flag == INIT_EXPR_TYPE_GET_GLOBAL) {
if (!globals if (!globals
|| globals[data_seg->base_offset.u.global_index].type || globals[data_seg->base_offset.u.unary.v.global_index].type
!= (memory->is_memory64 ? VALUE_TYPE_I64 != (memory->is_memory64 ? VALUE_TYPE_I64
: VALUE_TYPE_I32)) { : VALUE_TYPE_I32)) {
set_error_buf(error_buf, error_buf_size, set_error_buf(error_buf, error_buf_size,
"data segment does not fit"); "data segment does not fit");
goto fail; goto fail;
} }
#if WASM_ENABLE_MEMORY64 != 0
if (memory->is_memory64) {
base_offset =
(uint64)globals[data_seg->base_offset.u.global_index]
.initial_value.i64;
}
else
#endif
{
base_offset =
(uint32)globals[data_seg->base_offset.u.global_index]
.initial_value.i32;
}
}
else {
#if WASM_ENABLE_MEMORY64 != 0
if (memory->is_memory64) {
base_offset = (uint64)data_seg->base_offset.u.i64;
}
else
#endif
{
base_offset = (uint32)data_seg->base_offset.u.i32;
}
} }
#if WASM_ENABLE_MEMORY64 != 0
if (memory->is_memory64) {
base_offset = (uint64)offset_value.i64;
}
else
#endif
{
base_offset = (uint32)offset_value.i32;
}
/* check offset */ /* check offset */
if (base_offset > memory_size) { if (base_offset > memory_size) {
#if WASM_ENABLE_MEMORY64 != 0 #if WASM_ENABLE_MEMORY64 != 0
@ -2818,6 +2885,7 @@ wasm_instantiate(WASMModule *module, WASMModuleInstance *parent,
#else #else
module_inst->e->shared_heap_start_off.u32[0] = UINT32_MAX; module_inst->e->shared_heap_start_off.u32[0] = UINT32_MAX;
#endif #endif
module_inst->e->shared_heap = NULL;
#endif #endif
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
@ -2841,36 +2909,39 @@ wasm_instantiate(WASMModule *module, WASMModuleInstance *parent,
|| table->init_expr.init_expr_type == INIT_EXPR_TYPE_REFNULL_CONST); || table->init_expr.init_expr_type == INIT_EXPR_TYPE_REFNULL_CONST);
if (table->init_expr.init_expr_type == INIT_EXPR_TYPE_GET_GLOBAL) { if (table->init_expr.init_expr_type == INIT_EXPR_TYPE_GET_GLOBAL) {
if (!check_global_init_expr(module, table->init_expr.u.global_index, if (!check_global_init_expr(module,
table->init_expr.u.unary.v.global_index,
error_buf, error_buf_size)) { error_buf, error_buf_size)) {
goto fail; goto fail;
} }
table->init_expr.u.gc_obj = table->init_expr.u.unary.v.gc_obj =
globals[table->init_expr.u.global_index].initial_value.gc_obj; globals[table->init_expr.u.unary.v.global_index]
.initial_value.gc_obj;
} }
else if (table->init_expr.init_expr_type else if (table->init_expr.init_expr_type
== INIT_EXPR_TYPE_FUNCREF_CONST) { == INIT_EXPR_TYPE_FUNCREF_CONST) {
uint32 func_idx = table->init_expr.u.ref_index; uint32 func_idx = table->init_expr.u.unary.v.ref_index;
if (func_idx != UINT32_MAX) { if (func_idx != UINT32_MAX) {
if (!(table->init_expr.u.gc_obj = if (!(table->init_expr.u.unary.v.gc_obj =
wasm_create_func_obj(module_inst, func_idx, false, wasm_create_func_obj(module_inst, func_idx, false,
error_buf, error_buf_size))) error_buf, error_buf_size)))
goto fail; goto fail;
} }
else { else {
table->init_expr.u.gc_obj = NULL_REF; table->init_expr.u.unary.v.gc_obj = NULL_REF;
} }
} }
else if (table->init_expr.init_expr_type else if (table->init_expr.init_expr_type
== INIT_EXPR_TYPE_REFNULL_CONST) { == INIT_EXPR_TYPE_REFNULL_CONST) {
table->init_expr.u.gc_obj = NULL_REF; table->init_expr.u.unary.v.gc_obj = NULL_REF;
} }
LOG_DEBUG("Init table [%d] elements from [%d] to [%d] as: %p", i, 0, LOG_DEBUG("Init table [%d] elements from [%d] to [%d] as: %p", i, 0,
table_inst->cur_size, (void *)table->init_expr.u.gc_obj); table_inst->cur_size,
(void *)table->init_expr.u.unary.v.gc_obj);
for (j = 0; j < table_inst->cur_size; j++) { for (j = 0; j < table_inst->cur_size; j++) {
*(table_data + j) = table->init_expr.u.gc_obj; *(table_data + j) = table->init_expr.u.unary.v.gc_obj;
} }
} }
#endif /* end of WASM_ENABLE_GC != 0 */ #endif /* end of WASM_ENABLE_GC != 0 */
@ -2882,6 +2953,7 @@ wasm_instantiate(WASMModule *module, WASMModuleInstance *parent,
/* has check it in loader */ /* has check it in loader */
WASMTableInstance *table = module_inst->tables[table_seg->table_index]; WASMTableInstance *table = module_inst->tables[table_seg->table_index];
table_elem_type_t *table_data; table_elem_type_t *table_data;
WASMValue offset_value;
uint32 j; uint32 j;
#if WASM_ENABLE_REF_TYPES != 0 || WASM_ENABLE_GC != 0 #if WASM_ENABLE_REF_TYPES != 0 || WASM_ENABLE_GC != 0
uint8 tbl_elem_type; uint8 tbl_elem_type;
@ -2940,48 +3012,37 @@ wasm_instantiate(WASMModule *module, WASMModuleInstance *parent,
continue; continue;
#endif #endif
uint8 offset_flag = table_seg->base_offset.init_expr_type;
#if WASM_ENABLE_REF_TYPES != 0 || WASM_ENABLE_GC != 0 #if WASM_ENABLE_REF_TYPES != 0 || WASM_ENABLE_GC != 0
bh_assert(table_seg->base_offset.init_expr_type bh_assert(offset_flag == INIT_EXPR_TYPE_GET_GLOBAL
== INIT_EXPR_TYPE_I32_CONST || offset_flag == INIT_EXPR_TYPE_FUNCREF_CONST
|| table_seg->base_offset.init_expr_type || offset_flag == INIT_EXPR_TYPE_REFNULL_CONST
== INIT_EXPR_TYPE_GET_GLOBAL || is_valid_i32_offset(offset_flag));
|| table_seg->base_offset.init_expr_type
== INIT_EXPR_TYPE_FUNCREF_CONST
|| table_seg->base_offset.init_expr_type
== INIT_EXPR_TYPE_REFNULL_CONST);
#else #else
bh_assert(table_seg->base_offset.init_expr_type bh_assert(offset_flag == INIT_EXPR_TYPE_GET_GLOBAL
== INIT_EXPR_TYPE_I32_CONST || is_valid_i32_offset(offset_flag));
|| table_seg->base_offset.init_expr_type
== INIT_EXPR_TYPE_GET_GLOBAL);
#endif #endif
/* init vec(funcidx) or vec(expr) */ if (!get_init_value_recursive(module, &table_seg->base_offset, globals,
if (table_seg->base_offset.init_expr_type &offset_value, error_buf,
== INIT_EXPR_TYPE_GET_GLOBAL) { error_buf_size)) {
if (!check_global_init_expr(module, goto fail;
table_seg->base_offset.u.global_index, }
error_buf, error_buf_size)) {
goto fail;
}
if (offset_flag == INIT_EXPR_TYPE_GET_GLOBAL) {
if (!globals if (!globals
|| globals[table_seg->base_offset.u.global_index].type || globals[table_seg->base_offset.u.unary.v.global_index].type
!= VALUE_TYPE_I32) { != VALUE_TYPE_I32) {
set_error_buf(error_buf, error_buf_size, set_error_buf(error_buf, error_buf_size,
"type mismatch: elements segment does not fit"); "type mismatch: elements segment does not fit");
goto fail; goto fail;
} }
table_seg->base_offset.u.i32 =
globals[table_seg->base_offset.u.global_index]
.initial_value.i32;
} }
/* check offset since length might negative */ /* check offset since length might negative */
if ((uint32)table_seg->base_offset.u.i32 > table->cur_size) { if ((uint32)offset_value.i32 > table->cur_size) {
LOG_DEBUG("base_offset(%d) > table->cur_size(%d)", LOG_DEBUG("base_offset(%d) > table->cur_size(%d)", offset_value.i32,
table_seg->base_offset.u.i32, table->cur_size); table->cur_size);
#if WASM_ENABLE_REF_TYPES != 0 || WASM_ENABLE_GC != 0 #if WASM_ENABLE_REF_TYPES != 0 || WASM_ENABLE_GC != 0
set_error_buf(error_buf, error_buf_size, set_error_buf(error_buf, error_buf_size,
"out of bounds table access"); "out of bounds table access");
@ -2994,9 +3055,9 @@ wasm_instantiate(WASMModule *module, WASMModuleInstance *parent,
/* check offset + length(could be zero) */ /* check offset + length(could be zero) */
length = table_seg->value_count; length = table_seg->value_count;
if ((uint32)table_seg->base_offset.u.i32 + length > table->cur_size) { if ((uint32)offset_value.i32 + length > table->cur_size) {
LOG_DEBUG("base_offset(%d) + length(%d)> table->cur_size(%d)", LOG_DEBUG("base_offset(%d) + length(%d)> table->cur_size(%d)",
table_seg->base_offset.u.i32, length, table->cur_size); offset_value.i32, length, table->cur_size);
#if WASM_ENABLE_REF_TYPES != 0 || WASM_ENABLE_GC != 0 #if WASM_ENABLE_REF_TYPES != 0 || WASM_ENABLE_GC != 0
set_error_buf(error_buf, error_buf_size, set_error_buf(error_buf, error_buf_size,
"out of bounds table access"); "out of bounds table access");
@ -3026,10 +3087,10 @@ wasm_instantiate(WASMModule *module, WASMModuleInstance *parent,
case INIT_EXPR_TYPE_FUNCREF_CONST: case INIT_EXPR_TYPE_FUNCREF_CONST:
{ {
#if WASM_ENABLE_GC == 0 #if WASM_ENABLE_GC == 0
ref = (void *)(uintptr_t)init_expr->u.ref_index; ref = (void *)(uintptr_t)init_expr->u.unary.v.ref_index;
#else #else
WASMFuncObjectRef func_obj; WASMFuncObjectRef func_obj;
uint32 func_idx = init_expr->u.ref_index; uint32 func_idx = init_expr->u.unary.v.ref_index;
/* UINT32_MAX indicates that it is a null reference */ /* UINT32_MAX indicates that it is a null reference */
if (func_idx != UINT32_MAX) { if (func_idx != UINT32_MAX) {
if (!(func_obj = wasm_create_func_obj( if (!(func_obj = wasm_create_func_obj(
@ -3048,14 +3109,14 @@ wasm_instantiate(WASMModule *module, WASMModuleInstance *parent,
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
case INIT_EXPR_TYPE_GET_GLOBAL: case INIT_EXPR_TYPE_GET_GLOBAL:
{ {
if (!check_global_init_expr(module, if (!check_global_init_expr(
init_expr->u.global_index, module, init_expr->u.unary.v.global_index,
error_buf, error_buf_size)) { error_buf, error_buf_size)) {
goto fail; goto fail;
} }
ref = ref = globals[init_expr->u.unary.v.global_index]
globals[init_expr->u.global_index].initial_value.gc_obj; .initial_value.gc_obj;
break; break;
} }
case INIT_EXPR_TYPE_STRUCT_NEW: case INIT_EXPR_TYPE_STRUCT_NEW:
@ -3068,12 +3129,12 @@ wasm_instantiate(WASMModule *module, WASMModuleInstance *parent,
uint32 type_idx; uint32 type_idx;
if (flag == INIT_EXPR_TYPE_STRUCT_NEW) { if (flag == INIT_EXPR_TYPE_STRUCT_NEW) {
init_values = init_values = (WASMStructNewInitValues *)
(WASMStructNewInitValues *)init_expr->u.data; init_expr->u.unary.v.data;
type_idx = init_values->type_idx; type_idx = init_values->type_idx;
} }
else { else {
type_idx = init_expr->u.type_index; type_idx = init_expr->u.unary.v.type_index;
} }
struct_type = (WASMStructType *)module->types[type_idx]; struct_type = (WASMStructType *)module->types[type_idx];
@ -3124,13 +3185,14 @@ wasm_instantiate(WASMModule *module, WASMModuleInstance *parent,
uint32 type_idx, len; uint32 type_idx, len;
if (flag == INIT_EXPR_TYPE_ARRAY_NEW_DEFAULT) { if (flag == INIT_EXPR_TYPE_ARRAY_NEW_DEFAULT) {
type_idx = init_expr->u.array_new_default.type_index; type_idx =
len = init_expr->u.array_new_default.length; init_expr->u.unary.v.array_new_default.type_index;
len = init_expr->u.unary.v.array_new_default.length;
arr_init_val = &empty_val; arr_init_val = &empty_val;
} }
else { else {
init_values = init_values =
(WASMArrayNewInitValues *)init_expr->u.data; (WASMArrayNewInitValues *)init_expr->u.unary.v.data;
type_idx = init_values->type_idx; type_idx = init_values->type_idx;
len = init_values->length; len = init_values->length;
@ -3176,14 +3238,14 @@ wasm_instantiate(WASMModule *module, WASMModuleInstance *parent,
} }
case INIT_EXPR_TYPE_I31_NEW: case INIT_EXPR_TYPE_I31_NEW:
{ {
ref = (wasm_obj_t)wasm_i31_obj_new(init_expr->u.i32); ref =
(wasm_obj_t)wasm_i31_obj_new(init_expr->u.unary.v.i32);
break; break;
} }
#endif /* end of WASM_ENABLE_GC != 0 */ #endif /* end of WASM_ENABLE_GC != 0 */
} }
*(table_data + table_seg->base_offset.u.i32 + j) = *(table_data + offset_value.i32 + j) = (table_elem_type_t)ref;
(table_elem_type_t)ref;
} }
} }
@ -4161,7 +4223,7 @@ wasm_get_module_inst_mem_consumption(const WASMModuleInstance *module_inst,
sizeof(WASMMemoryInstance *) * module_inst->memory_count; sizeof(WASMMemoryInstance *) * module_inst->memory_count;
for (i = 0; i < module_inst->memory_count; i++) { for (i = 0; i < module_inst->memory_count; i++) {
WASMMemoryInstance *memory = module_inst->memories[i]; WASMMemoryInstance *memory = module_inst->memories[i];
size = memory->num_bytes_per_page * memory->cur_page_count; size = (uint64)memory->num_bytes_per_page * memory->cur_page_count;
mem_conspn->memories_size += size; mem_conspn->memories_size += size;
mem_conspn->app_heap_size += memory->heap_data_end - memory->heap_data; mem_conspn->app_heap_size += memory->heap_data_end - memory->heap_data;
/* size of app heap structure */ /* size of app heap structure */
@ -4195,9 +4257,9 @@ wasm_get_module_inst_mem_consumption(const WASMModuleInstance *module_inst,
#endif /* end of (WASM_ENABLE_MEMORY_PROFILING != 0) \ #endif /* end of (WASM_ENABLE_MEMORY_PROFILING != 0) \
|| (WASM_ENABLE_MEMORY_TRACING != 0) */ || (WASM_ENABLE_MEMORY_TRACING != 0) */
#if WAMR_ENABLE_COPY_CALLSTACK != 0 #if WASM_ENABLE_COPY_CALL_STACK != 0
uint32 uint32
wasm_interp_copy_callstack(WASMExecEnv *exec_env, wasm_frame_t *buffer, wasm_interp_copy_callstack(WASMExecEnv *exec_env, WASMCApiFrame *buffer,
uint32 length, uint32 skip_n, char *error_buf, uint32 length, uint32 skip_n, char *error_buf,
uint32_t error_buf_size) uint32_t error_buf_size)
{ {
@ -4242,7 +4304,7 @@ wasm_interp_copy_callstack(WASMExecEnv *exec_env, wasm_frame_t *buffer,
} }
return count >= skip_n ? count - skip_n : 0; return count >= skip_n ? count - skip_n : 0;
} }
#endif // WAMR_ENABLE_COPY_CALLSTACK #endif // WASM_ENABLE_COPY_CALL_STACK
#if WASM_ENABLE_DUMP_CALL_STACK != 0 #if WASM_ENABLE_DUMP_CALL_STACK != 0
bool bool
@ -4705,10 +4767,10 @@ llvm_jit_table_init(WASMModuleInstance *module_inst, uint32 tbl_idx,
for (i = 0; i < length; i++) { for (i = 0; i < length; i++) {
#if WASM_ENABLE_GC != 0 #if WASM_ENABLE_GC != 0
/* UINT32_MAX indicates that it is a null ref */ /* UINT32_MAX indicates that it is a null ref */
if (init_values[i].u.ref_index != UINT32_MAX) { if (init_values[i].u.unary.v.ref_index != UINT32_MAX) {
if (!(func_obj = wasm_create_func_obj(module_inst, if (!(func_obj = wasm_create_func_obj(
init_values[i].u.ref_index, module_inst, init_values[i].u.unary.v.ref_index, true,
true, NULL, 0))) { NULL, 0))) {
wasm_set_exception(module_inst, "null function reference"); wasm_set_exception(module_inst, "null function reference");
return; return;
} }
@ -4718,7 +4780,7 @@ llvm_jit_table_init(WASMModuleInstance *module_inst, uint32 tbl_idx,
table_elems[i] = NULL_REF; table_elems[i] = NULL_REF;
} }
#else #else
table_elems[i] = init_values[i].u.ref_index; table_elems[i] = init_values[i].u.unary.v.ref_index;
#endif #endif
} }
} }


@ -93,12 +93,21 @@ typedef union {
} MemBound; } MemBound;
typedef struct WASMSharedHeap { typedef struct WASMSharedHeap {
struct WASMSharedHeap *next; /* The global shared heap list maintained by the runtime, used when the
void *heap_handle; * runtime is destroyed */
uint8 *base_addr; DefPointer(struct WASMSharedHeap *, next);
/* The logical shared heap chain this shared heap is in */
DefPointer(struct WASMSharedHeap *, chain_next);
/* Will be NULL if the shared heap is created from a pre-allocated memory
 * chunk and doesn't need dynamic malloc and free */
DefPointer(void *, heap_handle);
DefPointer(uint8 *, base_addr);
uint64 size; uint64 size;
uint64 start_off_mem64; uint64 start_off_mem64;
uint64 start_off_mem32; uint64 start_off_mem32;
/* The number of wasm apps it is attached to; for a shared heap chain, only
 * the list head needs to maintain a valid attached_count */
uint8 attached_count;
} WASMSharedHeap; } WASMSharedHeap;
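/*
 * Illustrative sketch: one plausible way a runtime can walk a shared heap
 * chain through chain_next to translate a 32-bit app offset into a native
 * address. MiniSharedHeap is a simplified stand-in for WASMSharedHeap; the
 * actual lookup logic in WAMR may differ.
 */
#include <stddef.h>
#include <stdint.h>

typedef struct MiniSharedHeap {
    struct MiniSharedHeap *chain_next; /* next heap in the logical chain */
    uint8_t *base_addr;
    uint64_t size;
    uint64_t start_off_mem32; /* app-side start offset for 32-bit memories */
} MiniSharedHeap;

uint8_t *
chain_addr_app_to_native(MiniSharedHeap *chain_head, uint32_t app_off)
{
    MiniSharedHeap *heap;
    for (heap = chain_head; heap != NULL; heap = heap->chain_next) {
        if (app_off >= heap->start_off_mem32
            && (uint64_t)app_off - heap->start_off_mem32 < heap->size)
            return heap->base_addr + (app_off - heap->start_off_mem32);
    }
    return NULL; /* the offset is not covered by any heap in the chain */
}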
struct WASMMemoryInstance { struct WASMMemoryInstance {
@ -364,8 +373,6 @@ typedef struct WASMModuleInstanceExtra {
#endif #endif
#if WASM_ENABLE_SHARED_HEAP != 0 #if WASM_ENABLE_SHARED_HEAP != 0
WASMSharedHeap *shared_heap;
#if WASM_ENABLE_JIT != 0
/* /*
* Adjusted shared heap based addr to simple the calculation * Adjusted shared heap based addr to simple the calculation
* in the aot code. The value is: * in the aot code. The value is:
@ -373,7 +380,8 @@ typedef struct WASMModuleInstanceExtra {
*/ */
uint8 *shared_heap_base_addr_adj; uint8 *shared_heap_base_addr_adj;
MemBound shared_heap_start_off; MemBound shared_heap_start_off;
#endif MemBound shared_heap_end_off;
WASMSharedHeap *shared_heap;
#endif #endif
#if WASM_ENABLE_DEBUG_INTERP != 0 \ #if WASM_ENABLE_DEBUG_INTERP != 0 \
@ -731,12 +739,12 @@ wasm_get_table_inst(const WASMModuleInstance *module_inst, uint32 tbl_idx)
#if WASM_ENABLE_DUMP_CALL_STACK != 0 #if WASM_ENABLE_DUMP_CALL_STACK != 0
#if WAMR_ENABLE_COPY_CALLSTACK != 0 #if WASM_ENABLE_COPY_CALL_STACK != 0
uint32 uint32
wasm_interp_copy_callstack(WASMExecEnv *exec_env, wasm_frame_t *buffer, wasm_interp_copy_callstack(WASMExecEnv *exec_env, WASMCApiFrame *buffer,
uint32 length, uint32 skip_n, char *error_buf, uint32 length, uint32 skip_n, char *error_buf,
uint32_t error_buf_size); uint32_t error_buf_size);
#endif // WAMR_ENABLE_COPY_CALLSTACK #endif // WASM_ENABLE_COPY_CALL_STACK
bool bool
wasm_interp_create_call_stack(struct WASMExecEnv *exec_env); wasm_interp_create_call_stack(struct WASMExecEnv *exec_env);


@ -375,6 +375,9 @@ wasi_fd_pread(wasm_exec_env_t exec_env, wasi_fd_t fd, iovec_app_t *iovec_app,
return (wasi_errno_t)-1; return (wasi_errno_t)-1;
total_size = sizeof(wasi_iovec_t) * (uint64)iovs_len; total_size = sizeof(wasi_iovec_t) * (uint64)iovs_len;
if (total_size == 0) {
total_size = 1; /* avoid user-triggered 0-sized allocation */
}
if (total_size >= UINT32_MAX if (total_size >= UINT32_MAX
|| !(iovec_begin = wasm_runtime_malloc((uint32)total_size))) || !(iovec_begin = wasm_runtime_malloc((uint32)total_size)))
return (wasi_errno_t)-1; return (wasi_errno_t)-1;
@ -430,6 +433,9 @@ wasi_fd_pwrite(wasm_exec_env_t exec_env, wasi_fd_t fd,
return (wasi_errno_t)-1; return (wasi_errno_t)-1;
total_size = sizeof(wasi_ciovec_t) * (uint64)iovs_len; total_size = sizeof(wasi_ciovec_t) * (uint64)iovs_len;
if (total_size == 0) {
total_size = 1; /* avoid user-triggered 0-sized allocation */
}
if (total_size >= UINT32_MAX if (total_size >= UINT32_MAX
|| !(ciovec_begin = wasm_runtime_malloc((uint32)total_size))) || !(ciovec_begin = wasm_runtime_malloc((uint32)total_size)))
return (wasi_errno_t)-1; return (wasi_errno_t)-1;
@ -484,6 +490,9 @@ wasi_fd_read(wasm_exec_env_t exec_env, wasi_fd_t fd,
return (wasi_errno_t)-1; return (wasi_errno_t)-1;
total_size = sizeof(wasi_iovec_t) * (uint64)iovs_len; total_size = sizeof(wasi_iovec_t) * (uint64)iovs_len;
if (total_size == 0) {
total_size = 1; /* avoid user-triggered 0-sized allocation */
}
if (total_size >= UINT32_MAX if (total_size >= UINT32_MAX
|| !(iovec_begin = wasm_runtime_malloc((uint32)total_size))) || !(iovec_begin = wasm_runtime_malloc((uint32)total_size)))
return (wasi_errno_t)-1; return (wasi_errno_t)-1;
@ -654,6 +663,9 @@ wasi_fd_write(wasm_exec_env_t exec_env, wasi_fd_t fd,
return (wasi_errno_t)-1; return (wasi_errno_t)-1;
total_size = sizeof(wasi_ciovec_t) * (uint64)iovs_len; total_size = sizeof(wasi_ciovec_t) * (uint64)iovs_len;
if (total_size == 0) {
total_size = 1; /* avoid user-triggered 0-sized allocation */
}
if (total_size >= UINT32_MAX if (total_size >= UINT32_MAX
|| !(ciovec_begin = wasm_runtime_malloc((uint32)total_size))) || !(ciovec_begin = wasm_runtime_malloc((uint32)total_size)))
return (wasi_errno_t)-1; return (wasi_errno_t)-1;
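/*
 * Illustrative sketch: the guard added above clamps a zero-byte request to
 * one byte before calling the allocator. malloc(0) is allowed to return
 * NULL, so an iovs_len of 0 supplied by the guest could otherwise be
 * misreported as an allocation failure. alloc_iovec_array() is a made-up
 * helper, not part of the wrappers.
 */
#include <stdint.h>
#include <stdlib.h>

void *
alloc_iovec_array(uint64_t iovs_len, size_t elem_size)
{
    uint64_t total_size = elem_size * iovs_len;
    if (total_size == 0)
        total_size = 1; /* avoid a user-triggered 0-sized allocation */
    if (total_size >= UINT32_MAX)
        return NULL; /* reject oversized requests, as the wrappers do */
    return malloc((size_t)total_size);
}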


@ -301,7 +301,8 @@ wasm_cluster_create(WASMExecEnv *exec_env)
aux_stack_start -= cluster->stack_size; aux_stack_start -= cluster->stack_size;
for (i = 0; i < cluster_max_thread_num; i++) { for (i = 0; i < cluster_max_thread_num; i++) {
cluster->stack_tops[i] = aux_stack_start - cluster->stack_size * i; cluster->stack_tops[i] =
aux_stack_start - (uint64)cluster->stack_size * i;
} }
} }
#endif #endif
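/*
 * Illustrative sketch: why the (uint64) cast above matters. With two 32-bit
 * operands, stack_size * i is computed in 32 bits and can wrap before being
 * subtracted; widening one operand keeps the whole expression in 64 bits.
 * The values below are made up purely to trigger the wrap.
 */
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    uint64_t aux_stack_start = 0x200000000ULL; /* hypothetical base address */
    uint32_t stack_size = 64 * 1024;
    uint32_t i = 70000; /* 64 KiB * 70000 exceeds UINT32_MAX */

    uint64_t wrapped = aux_stack_start - stack_size * i;
    uint64_t correct = aux_stack_start - (uint64_t)stack_size * i;
    printf("wrapped = 0x%llx\ncorrect = 0x%llx\n",
           (unsigned long long)wrapped, (unsigned long long)correct);
    return 0;
}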


@ -21,6 +21,7 @@
#else #else
#define WASI_NN_IMPORT(name) \ #define WASI_NN_IMPORT(name) \
__attribute__((import_module("wasi_nn"), import_name(name))) __attribute__((import_module("wasi_nn"), import_name(name)))
#warning You are using "wasi_nn", which is a legacy WAMR-specific ABI. It's deprecated and will likely be removed in future versions of WAMR. Please use "wasi_ephemeral_nn" instead. (For a WASM module, use the wasi_ephemeral_nn.h header instead. For the runtime configurations, enable WASM_ENABLE_WASI_EPHEMERAL_NN/WAMR_BUILD_WASI_EPHEMERAL_NN.)
#endif #endif
/** /**
@ -108,14 +109,13 @@ WASI_NN_NAME(compute)
WASI_NN_ERROR_TYPE WASI_NN_ERROR_TYPE
WASI_NN_NAME(get_output) WASI_NN_NAME(get_output)
(WASI_NN_NAME(graph_execution_context) ctx, uint32_t index, (WASI_NN_NAME(graph_execution_context) ctx, uint32_t index,
WASI_NN_NAME(tensor_data) output_tensor, uint32_t output_tensor_max_size, uint8_t *output_tensor, uint32_t output_tensor_max_size,
uint32_t *output_tensor_size) WASI_NN_IMPORT("get_output"); uint32_t *output_tensor_size) WASI_NN_IMPORT("get_output");
#else #else
WASI_NN_ERROR_TYPE WASI_NN_ERROR_TYPE
WASI_NN_NAME(get_output) WASI_NN_NAME(get_output)
(graph_execution_context ctx, uint32_t index, (graph_execution_context ctx, uint32_t index, uint8_t *output_tensor,
WASI_NN_NAME(tensor_data) output_tensor, uint32_t *output_tensor_size) uint32_t *output_tensor_size) WASI_NN_IMPORT("get_output");
WASI_NN_IMPORT("get_output");
#endif #endif
#endif #endif


@ -99,7 +99,14 @@ typedef enum {
// 4-byte f32 elements would have a data array of length 16). Naturally, this // 4-byte f32 elements would have a data array of length 16). Naturally, this
// representation requires some knowledge of how to lay out data in // representation requires some knowledge of how to lay out data in
// memory--e.g., using row-major ordering--and could perhaps be improved. // memory--e.g., using row-major ordering--and could perhaps be improved.
#if !defined(__wasm__) || WASM_ENABLE_WASI_EPHEMERAL_NN != 0
typedef struct {
uint8_t *buf;
uint32_t size;
} WASI_NN_NAME(tensor_data);
#else
typedef uint8_t *WASI_NN_NAME(tensor_data); typedef uint8_t *WASI_NN_NAME(tensor_data);
#endif
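/*
 * Illustrative sketch: sizing the flat data buffer from the dimensions, as
 * the comment above describes (a 2x2 tensor of 4-byte f32 elements needs
 * 2 * 2 * 4 = 16 bytes). mini_tensor_data and alloc_tensor_data() are
 * simplified stand-ins, not the actual wasi-nn definitions.
 */
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint8_t *buf;
    uint32_t size;
} mini_tensor_data;

mini_tensor_data
alloc_tensor_data(const uint32_t *dims, uint32_t n_dims, uint32_t elem_size)
{
    mini_tensor_data d = { NULL, 0 };
    uint64_t total = elem_size;
    uint32_t i;

    for (i = 0; i < n_dims; i++)
        total *= dims[i];
    if (total == 0 || total > UINT32_MAX)
        return d; /* empty or oversized tensor */
    d.buf = malloc((size_t)total); /* flat, row-major layout */
    if (d.buf != NULL)
        d.size = (uint32_t)total;
    return d;
}

/* e.g. uint32_t dims[2] = { 2, 2 }; alloc_tensor_data(dims, 2, 4).size == 16 */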
// A tensor. // A tensor.
typedef struct { typedef struct {


@ -99,7 +99,8 @@ graph_builder_array_app_native(wasm_module_inst_t instance,
static wasi_nn_error static wasi_nn_error
tensor_data_app_native(wasm_module_inst_t instance, uint32_t total_elements, tensor_data_app_native(wasm_module_inst_t instance, uint32_t total_elements,
tensor_wasm *input_tensor_wasm, tensor_data *data) tensor_wasm *input_tensor_wasm, void **data,
uint32_t *size)
{ {
#if WASM_ENABLE_WASI_EPHEMERAL_NN != 0 #if WASM_ENABLE_WASI_EPHEMERAL_NN != 0
#define data_size input_tensor_wasm->data_size #define data_size input_tensor_wasm->data_size
@ -113,8 +114,9 @@ tensor_data_app_native(wasm_module_inst_t instance, uint32_t total_elements,
NN_ERR_PRINTF("input_tensor_wasm->data_offset is invalid"); NN_ERR_PRINTF("input_tensor_wasm->data_offset is invalid");
return invalid_argument; return invalid_argument;
} }
*data = (tensor_data)wasm_runtime_addr_app_to_native( *data = wasm_runtime_addr_app_to_native(
instance, (uint64)input_tensor_wasm->data_offset); instance, (uint64)input_tensor_wasm->data_offset);
*size = data_size;
return success; return success;
#undef data_size #undef data_size
} }
@ -188,16 +190,19 @@ tensor_app_native(wasm_module_inst_t instance, tensor_wasm *input_tensor_wasm,
NN_DBG_PRINTF("Tensor type: %d", input_tensor_wasm->type); NN_DBG_PRINTF("Tensor type: %d", input_tensor_wasm->type);
NN_DBG_PRINTF("Total number of elements: %d", total_elements); NN_DBG_PRINTF("Total number of elements: %d", total_elements);
tensor_data data = NULL; void *data = NULL;
uint32_t datasize;
if (success if (success
!= (res = tensor_data_app_native(instance, total_elements, != (res =
input_tensor_wasm, &data))) { tensor_data_app_native(instance, total_elements,
input_tensor_wasm, &data, &datasize))) {
wasm_runtime_free(dimensions); wasm_runtime_free(dimensions);
return res; return res;
} }
input_tensor->type = input_tensor_wasm->type; input_tensor->type = input_tensor_wasm->type;
input_tensor->dimensions = dimensions; input_tensor->dimensions = dimensions;
input_tensor->data = data; input_tensor->data.buf = data;
input_tensor->data.size = datasize;
return success; return success;
} }


@ -20,6 +20,10 @@
#include "wasi_nn_types.h" #include "wasi_nn_types.h"
#include "wasm_export.h" #include "wasm_export.h"
#if WASM_ENABLE_WASI_EPHEMERAL_NN == 0
#warning You are using "wasi_nn", which is a legacy WAMR-specific ABI. It's deprecated and will likely be removed in future versions of WAMR. Please use "wasi_ephemeral_nn" instead. (For a WASM module, use the wasi_ephemeral_nn.h header instead. For the runtime configurations, enable WASM_ENABLE_WASI_EPHEMERAL_NN/WAMR_BUILD_WASI_EPHEMERAL_NN.)
#endif
#define HASHMAP_INITIAL_SIZE 20 #define HASHMAP_INITIAL_SIZE 20
#if defined(__APPLE__) #if defined(__APPLE__)
#define LIB_EXTENTION ".dylib" #define LIB_EXTENTION ".dylib"
@ -51,53 +55,21 @@ struct backends_api_functions {
NN_ERR_PRINTF("Error %s() -> %d", #func, wasi_error); \ NN_ERR_PRINTF("Error %s() -> %d", #func, wasi_error); \
} while (0) } while (0)
/* HashMap utils */ static void *wasi_nn_key;
static HashMap *hashmap;
static uint32
hash_func(const void *key)
{
// fnv1a_hash
const uint32 FNV_PRIME = 16777619;
const uint32 FNV_OFFSET_BASIS = 2166136261U;
uint32 hash = FNV_OFFSET_BASIS;
const unsigned char *bytes = (const unsigned char *)key;
for (size_t i = 0; i < sizeof(uintptr_t); ++i) {
hash ^= bytes[i];
hash *= FNV_PRIME;
}
return hash;
}
static bool
key_equal_func(void *key1, void *key2)
{
return key1 == key2;
}
static void
key_destroy_func(void *key1)
{
/* key type is wasm_module_inst_t*. do nothing */
}
static void static void
wasi_nn_ctx_destroy(WASINNContext *wasi_nn_ctx) wasi_nn_ctx_destroy(WASINNContext *wasi_nn_ctx)
{ {
NN_DBG_PRINTF("[WASI NN] DEINIT...");
if (wasi_nn_ctx == NULL) { if (wasi_nn_ctx == NULL) {
NN_ERR_PRINTF(
"Error when deallocating memory. WASI-NN context is NULL");
return; return;
} }
NN_DBG_PRINTF("[WASI NN] DEINIT...");
NN_DBG_PRINTF("Freeing wasi-nn"); NN_DBG_PRINTF("Freeing wasi-nn");
NN_DBG_PRINTF("-> is_model_loaded: %d", wasi_nn_ctx->is_model_loaded); NN_DBG_PRINTF("-> is_model_loaded: %d", wasi_nn_ctx->is_model_loaded);
NN_DBG_PRINTF("-> current_encoding: %d", wasi_nn_ctx->backend); NN_DBG_PRINTF("-> current_encoding: %d", wasi_nn_ctx->backend);
bh_assert(!wasi_nn_ctx->busy);
/* deinit() the backend */ /* deinit() the backend */
if (wasi_nn_ctx->is_backend_ctx_initialized) { if (wasi_nn_ctx->is_backend_ctx_initialized) {
wasi_nn_error res; wasi_nn_error res;
@ -105,13 +77,14 @@ wasi_nn_ctx_destroy(WASINNContext *wasi_nn_ctx)
wasi_nn_ctx->backend_ctx); wasi_nn_ctx->backend_ctx);
} }
os_mutex_destroy(&wasi_nn_ctx->lock);
wasm_runtime_free(wasi_nn_ctx); wasm_runtime_free(wasi_nn_ctx);
} }
static void static void
value_destroy_func(void *value) dtor(wasm_module_inst_t inst, void *ctx)
{ {
wasi_nn_ctx_destroy((WASINNContext *)value); wasi_nn_ctx_destroy(ctx);
} }
bool bool
@ -124,12 +97,9 @@ wasi_nn_initialize()
return false; return false;
} }
// hashmap { instance: wasi_nn_ctx } wasi_nn_key = wasm_runtime_create_context_key(dtor);
hashmap = bh_hash_map_create(HASHMAP_INITIAL_SIZE, true, hash_func, if (wasi_nn_key == NULL) {
key_equal_func, key_destroy_func, NN_ERR_PRINTF("Failed to create context key");
value_destroy_func);
if (hashmap == NULL) {
NN_ERR_PRINTF("Error while initializing hashmap");
os_mutex_destroy(&wasi_nn_lock); os_mutex_destroy(&wasi_nn_lock);
return false; return false;
} }
@ -150,6 +120,11 @@ wasi_nn_initialize_context()
} }
memset(wasi_nn_ctx, 0, sizeof(WASINNContext)); memset(wasi_nn_ctx, 0, sizeof(WASINNContext));
if (os_mutex_init(&wasi_nn_ctx->lock)) {
NN_ERR_PRINTF("Error when initializing a lock for WASI-NN context");
wasm_runtime_free(wasi_nn_ctx);
return NULL;
}
return wasi_nn_ctx; return wasi_nn_ctx;
} }
@ -158,29 +133,59 @@ static WASINNContext *
wasm_runtime_get_wasi_nn_ctx(wasm_module_inst_t instance) wasm_runtime_get_wasi_nn_ctx(wasm_module_inst_t instance)
{ {
WASINNContext *wasi_nn_ctx = WASINNContext *wasi_nn_ctx =
(WASINNContext *)bh_hash_map_find(hashmap, (void *)instance); wasm_runtime_get_context(instance, wasi_nn_key);
if (wasi_nn_ctx == NULL) { if (wasi_nn_ctx == NULL) {
wasi_nn_ctx = wasi_nn_initialize_context(); WASINNContext *newctx = wasi_nn_initialize_context();
if (wasi_nn_ctx == NULL) if (newctx == NULL)
return NULL;
bool ok =
bh_hash_map_insert(hashmap, (void *)instance, (void *)wasi_nn_ctx);
if (!ok) {
NN_ERR_PRINTF("Error while storing context");
wasi_nn_ctx_destroy(wasi_nn_ctx);
return NULL; return NULL;
os_mutex_lock(&wasi_nn_lock);
wasi_nn_ctx = wasm_runtime_get_context(instance, wasi_nn_key);
if (wasi_nn_ctx == NULL) {
wasm_runtime_set_context_spread(instance, wasi_nn_key, newctx);
wasi_nn_ctx = newctx;
newctx = NULL;
}
os_mutex_unlock(&wasi_nn_lock);
if (newctx != NULL) {
wasi_nn_ctx_destroy(newctx);
} }
} }
return wasi_nn_ctx; return wasi_nn_ctx;
} }
static WASINNContext *
lock_ctx(wasm_module_inst_t instance)
{
WASINNContext *wasi_nn_ctx = wasm_runtime_get_wasi_nn_ctx(instance);
if (wasi_nn_ctx == NULL) {
return NULL;
}
os_mutex_lock(&wasi_nn_ctx->lock);
if (wasi_nn_ctx->busy) {
os_mutex_unlock(&wasi_nn_ctx->lock);
return NULL;
}
wasi_nn_ctx->busy = true;
os_mutex_unlock(&wasi_nn_ctx->lock);
return wasi_nn_ctx;
}
static void
unlock_ctx(WASINNContext *wasi_nn_ctx)
{
if (wasi_nn_ctx == NULL) {
return;
}
os_mutex_lock(&wasi_nn_ctx->lock);
bh_assert(wasi_nn_ctx->busy);
wasi_nn_ctx->busy = false;
os_mutex_unlock(&wasi_nn_ctx->lock);
}
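/*
 * Illustrative sketch: the calling pattern the wrappers below follow.
 * lock_ctx() returns NULL both when no context can be created and when the
 * context is already busy with another call, and every exit path goes
 * through unlock_ctx(). example_wrapper() is a made-up name, not one of the
 * real wasi-nn wrappers.
 */
static wasi_nn_error
example_wrapper(wasm_module_inst_t instance)
{
    wasi_nn_error res;
    WASINNContext *wasi_nn_ctx = lock_ctx(instance);
    if (wasi_nn_ctx == NULL) {
        res = busy;
        goto fail;
    }
    /* ... validate arguments and call into the backend here ... */
    res = success;
fail:
    unlock_ctx(wasi_nn_ctx); /* safe on NULL */
    return res;
}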
void void
wasi_nn_destroy() wasi_nn_destroy()
{ {
// destroy hashmap will destroy keys and values wasm_runtime_destroy_context_key(wasi_nn_key);
bh_hash_map_destroy(hashmap);
// close backends' libraries and registered functions // close backends' libraries and registered functions
for (unsigned i = 0; i < sizeof(lookup) / sizeof(lookup[0]); i++) { for (unsigned i = 0; i < sizeof(lookup) / sizeof(lookup[0]); i++) {
@ -401,7 +406,7 @@ detect_and_load_backend(graph_encoding backend_hint,
static wasi_nn_error static wasi_nn_error
ensure_backend(wasm_module_inst_t instance, graph_encoding encoding, ensure_backend(wasm_module_inst_t instance, graph_encoding encoding,
WASINNContext **wasi_nn_ctx_ptr) WASINNContext *wasi_nn_ctx)
{ {
wasi_nn_error res; wasi_nn_error res;
@ -412,7 +417,6 @@ ensure_backend(wasm_module_inst_t instance, graph_encoding encoding,
goto fail; goto fail;
} }
WASINNContext *wasi_nn_ctx = wasm_runtime_get_wasi_nn_ctx(instance);
if (wasi_nn_ctx->is_backend_ctx_initialized) { if (wasi_nn_ctx->is_backend_ctx_initialized) {
if (wasi_nn_ctx->backend != loaded_backend) { if (wasi_nn_ctx->backend != loaded_backend) {
res = unsupported_operation; res = unsupported_operation;
@ -430,7 +434,6 @@ ensure_backend(wasm_module_inst_t instance, graph_encoding encoding,
wasi_nn_ctx->is_backend_ctx_initialized = true; wasi_nn_ctx->is_backend_ctx_initialized = true;
} }
*wasi_nn_ctx_ptr = wasi_nn_ctx;
return success; return success;
fail: fail:
return res; return res;
@ -458,17 +461,23 @@ wasi_nn_load(wasm_exec_env_t exec_env, graph_builder_array_wasm *builder,
if (!instance) if (!instance)
return runtime_error; return runtime_error;
WASINNContext *wasi_nn_ctx = lock_ctx(instance);
if (wasi_nn_ctx == NULL) {
res = busy;
goto fail;
}
graph_builder_array builder_native = { 0 }; graph_builder_array builder_native = { 0 };
#if WASM_ENABLE_WASI_EPHEMERAL_NN != 0 #if WASM_ENABLE_WASI_EPHEMERAL_NN != 0
if (success if (success
!= (res = graph_builder_array_app_native( != (res = graph_builder_array_app_native(
instance, builder, builder_wasm_size, &builder_native))) instance, builder, builder_wasm_size, &builder_native)))
return res; goto fail;
#else /* WASM_ENABLE_WASI_EPHEMERAL_NN == 0 */ #else /* WASM_ENABLE_WASI_EPHEMERAL_NN == 0 */
if (success if (success
!= (res = graph_builder_array_app_native(instance, builder, != (res = graph_builder_array_app_native(instance, builder,
&builder_native))) &builder_native)))
return res; goto fail;
#endif /* WASM_ENABLE_WASI_EPHEMERAL_NN != 0 */ #endif /* WASM_ENABLE_WASI_EPHEMERAL_NN != 0 */
if (!wasm_runtime_validate_native_addr(instance, g, if (!wasm_runtime_validate_native_addr(instance, g,
@ -478,8 +487,7 @@ wasi_nn_load(wasm_exec_env_t exec_env, graph_builder_array_wasm *builder,
goto fail; goto fail;
} }
WASINNContext *wasi_nn_ctx; res = ensure_backend(instance, encoding, wasi_nn_ctx);
res = ensure_backend(instance, encoding, &wasi_nn_ctx);
if (res != success) if (res != success)
goto fail; goto fail;
@ -494,6 +502,7 @@ fail:
// XXX: Free intermediate structure pointers // XXX: Free intermediate structure pointers
if (builder_native.buf) if (builder_native.buf)
wasm_runtime_free(builder_native.buf); wasm_runtime_free(builder_native.buf);
unlock_ctx(wasi_nn_ctx);
return res; return res;
} }
@ -527,18 +536,26 @@ wasi_nn_load_by_name(wasm_exec_env_t exec_env, char *name, uint32_t name_len,
NN_DBG_PRINTF("[WASI NN] LOAD_BY_NAME %s...", name); NN_DBG_PRINTF("[WASI NN] LOAD_BY_NAME %s...", name);
WASINNContext *wasi_nn_ctx; WASINNContext *wasi_nn_ctx = lock_ctx(instance);
res = ensure_backend(instance, autodetect, &wasi_nn_ctx); if (wasi_nn_ctx == NULL) {
res = busy;
goto fail;
}
res = ensure_backend(instance, autodetect, wasi_nn_ctx);
if (res != success) if (res != success)
return res; goto fail;
call_wasi_nn_func(wasi_nn_ctx->backend, load_by_name, res, call_wasi_nn_func(wasi_nn_ctx->backend, load_by_name, res,
wasi_nn_ctx->backend_ctx, name, name_len, g); wasi_nn_ctx->backend_ctx, name, name_len, g);
if (res != success) if (res != success)
return res; goto fail;
wasi_nn_ctx->is_model_loaded = true; wasi_nn_ctx->is_model_loaded = true;
return success; res = success;
fail:
unlock_ctx(wasi_nn_ctx);
return res;
} }
wasi_nn_error wasi_nn_error
@ -576,19 +593,28 @@ wasi_nn_load_by_name_with_config(wasm_exec_env_t exec_env, char *name,
NN_DBG_PRINTF("[WASI NN] LOAD_BY_NAME_WITH_CONFIG %s %s...", name, config); NN_DBG_PRINTF("[WASI NN] LOAD_BY_NAME_WITH_CONFIG %s %s...", name, config);
WASINNContext *wasi_nn_ctx; WASINNContext *wasi_nn_ctx = lock_ctx(instance);
res = ensure_backend(instance, autodetect, &wasi_nn_ctx); if (wasi_nn_ctx == NULL) {
res = busy;
goto fail;
}
res = ensure_backend(instance, autodetect, wasi_nn_ctx);
if (res != success) if (res != success)
return res; goto fail;
;
call_wasi_nn_func(wasi_nn_ctx->backend, load_by_name_with_config, res, call_wasi_nn_func(wasi_nn_ctx->backend, load_by_name_with_config, res,
wasi_nn_ctx->backend_ctx, name, name_len, config, wasi_nn_ctx->backend_ctx, name, name_len, config,
config_len, g); config_len, g);
if (res != success) if (res != success)
return res; goto fail;
wasi_nn_ctx->is_model_loaded = true; wasi_nn_ctx->is_model_loaded = true;
return success; res = success;
fail:
unlock_ctx(wasi_nn_ctx);
return res;
} }
wasi_nn_error wasi_nn_error
@ -602,20 +628,27 @@ wasi_nn_init_execution_context(wasm_exec_env_t exec_env, graph g,
return runtime_error; return runtime_error;
} }
WASINNContext *wasi_nn_ctx = wasm_runtime_get_wasi_nn_ctx(instance);
wasi_nn_error res; wasi_nn_error res;
WASINNContext *wasi_nn_ctx = lock_ctx(instance);
if (wasi_nn_ctx == NULL) {
res = busy;
goto fail;
}
if (success != (res = is_model_initialized(wasi_nn_ctx))) if (success != (res = is_model_initialized(wasi_nn_ctx)))
return res; goto fail;
if (!wasm_runtime_validate_native_addr( if (!wasm_runtime_validate_native_addr(
instance, ctx, (uint64)sizeof(graph_execution_context))) { instance, ctx, (uint64)sizeof(graph_execution_context))) {
NN_ERR_PRINTF("ctx is invalid"); NN_ERR_PRINTF("ctx is invalid");
return invalid_argument; res = invalid_argument;
goto fail;
} }
call_wasi_nn_func(wasi_nn_ctx->backend, init_execution_context, res, call_wasi_nn_func(wasi_nn_ctx->backend, init_execution_context, res,
wasi_nn_ctx->backend_ctx, g, ctx); wasi_nn_ctx->backend_ctx, g, ctx);
fail:
unlock_ctx(wasi_nn_ctx);
return res; return res;
} }
@ -630,17 +663,21 @@ wasi_nn_set_input(wasm_exec_env_t exec_env, graph_execution_context ctx,
return runtime_error; return runtime_error;
} }
WASINNContext *wasi_nn_ctx = wasm_runtime_get_wasi_nn_ctx(instance);
wasi_nn_error res; wasi_nn_error res;
WASINNContext *wasi_nn_ctx = lock_ctx(instance);
if (wasi_nn_ctx == NULL) {
res = busy;
goto fail;
}
if (success != (res = is_model_initialized(wasi_nn_ctx))) if (success != (res = is_model_initialized(wasi_nn_ctx)))
return res; goto fail;
tensor input_tensor_native = { 0 }; tensor input_tensor_native = { 0 };
if (success if (success
!= (res = tensor_app_native(instance, input_tensor, != (res = tensor_app_native(instance, input_tensor,
&input_tensor_native))) &input_tensor_native)))
return res; goto fail;
call_wasi_nn_func(wasi_nn_ctx->backend, set_input, res, call_wasi_nn_func(wasi_nn_ctx->backend, set_input, res,
wasi_nn_ctx->backend_ctx, ctx, index, wasi_nn_ctx->backend_ctx, ctx, index,
@ -648,7 +685,8 @@ wasi_nn_set_input(wasm_exec_env_t exec_env, graph_execution_context ctx,
// XXX: Free intermediate structure pointers // XXX: Free intermediate structure pointers
if (input_tensor_native.dimensions) if (input_tensor_native.dimensions)
wasm_runtime_free(input_tensor_native.dimensions); wasm_runtime_free(input_tensor_native.dimensions);
fail:
unlock_ctx(wasi_nn_ctx);
return res; return res;
} }
@ -662,26 +700,32 @@ wasi_nn_compute(wasm_exec_env_t exec_env, graph_execution_context ctx)
return runtime_error; return runtime_error;
} }
WASINNContext *wasi_nn_ctx = wasm_runtime_get_wasi_nn_ctx(instance);
wasi_nn_error res; wasi_nn_error res;
WASINNContext *wasi_nn_ctx = lock_ctx(instance);
if (wasi_nn_ctx == NULL) {
res = busy;
goto fail;
}
if (success != (res = is_model_initialized(wasi_nn_ctx))) if (success != (res = is_model_initialized(wasi_nn_ctx)))
return res; goto fail;
call_wasi_nn_func(wasi_nn_ctx->backend, compute, res, call_wasi_nn_func(wasi_nn_ctx->backend, compute, res,
wasi_nn_ctx->backend_ctx, ctx); wasi_nn_ctx->backend_ctx, ctx);
fail:
unlock_ctx(wasi_nn_ctx);
return res; return res;
} }
#if WASM_ENABLE_WASI_EPHEMERAL_NN != 0 #if WASM_ENABLE_WASI_EPHEMERAL_NN != 0
wasi_nn_error wasi_nn_error
wasi_nn_get_output(wasm_exec_env_t exec_env, graph_execution_context ctx, wasi_nn_get_output(wasm_exec_env_t exec_env, graph_execution_context ctx,
uint32_t index, tensor_data output_tensor, uint32_t index, void *output_tensor,
uint32_t output_tensor_len, uint32_t *output_tensor_size) uint32_t output_tensor_len, uint32_t *output_tensor_size)
#else /* WASM_ENABLE_WASI_EPHEMERAL_NN == 0 */ #else /* WASM_ENABLE_WASI_EPHEMERAL_NN == 0 */
wasi_nn_error wasi_nn_error
wasi_nn_get_output(wasm_exec_env_t exec_env, graph_execution_context ctx, wasi_nn_get_output(wasm_exec_env_t exec_env, graph_execution_context ctx,
uint32_t index, tensor_data output_tensor, uint32_t index, void *output_tensor,
uint32_t *output_tensor_size) uint32_t *output_tensor_size)
#endif /* WASM_ENABLE_WASI_EPHEMERAL_NN != 0 */ #endif /* WASM_ENABLE_WASI_EPHEMERAL_NN != 0 */
{ {
@ -692,28 +736,36 @@ wasi_nn_get_output(wasm_exec_env_t exec_env, graph_execution_context ctx,
return runtime_error; return runtime_error;
} }
WASINNContext *wasi_nn_ctx = wasm_runtime_get_wasi_nn_ctx(instance);
wasi_nn_error res; wasi_nn_error res;
WASINNContext *wasi_nn_ctx = lock_ctx(instance);
if (wasi_nn_ctx == NULL) {
res = busy;
goto fail;
}
if (success != (res = is_model_initialized(wasi_nn_ctx))) if (success != (res = is_model_initialized(wasi_nn_ctx)))
return res; goto fail;
if (!wasm_runtime_validate_native_addr(instance, output_tensor_size, if (!wasm_runtime_validate_native_addr(instance, output_tensor_size,
(uint64)sizeof(uint32_t))) { (uint64)sizeof(uint32_t))) {
NN_ERR_PRINTF("output_tensor_size is invalid"); NN_ERR_PRINTF("output_tensor_size is invalid");
return invalid_argument; res = invalid_argument;
goto fail;
} }
tensor_data tensor = {
.buf = output_tensor,
#if WASM_ENABLE_WASI_EPHEMERAL_NN != 0 #if WASM_ENABLE_WASI_EPHEMERAL_NN != 0
.size = output_tensor_len,
#else
.size = *output_tensor_size,
#endif
};
call_wasi_nn_func(wasi_nn_ctx->backend, get_output, res, call_wasi_nn_func(wasi_nn_ctx->backend, get_output, res,
wasi_nn_ctx->backend_ctx, ctx, index, output_tensor, wasi_nn_ctx->backend_ctx, ctx, index, &tensor,
&output_tensor_len);
*output_tensor_size = output_tensor_len;
#else /* WASM_ENABLE_WASI_EPHEMERAL_NN == 0 */
call_wasi_nn_func(wasi_nn_ctx->backend, get_output, res,
wasi_nn_ctx->backend_ctx, ctx, index, output_tensor,
output_tensor_size); output_tensor_size);
#endif /* WASM_ENABLE_WASI_EPHEMERAL_NN != 0 */ fail:
unlock_ctx(wasi_nn_ctx);
return res; return res;
} }


@ -3,15 +3,26 @@
* SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception * SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
*/ */
#ifndef WASI_NN_OPENVINO_HPP #ifndef WASI_NN_BACKEND_H
#define WASI_NN_OPENVINO_HPP #define WASI_NN_BACKEND_H
#include "wasi_nn_types.h" #include "wasi_nn_types.h"
#ifdef __cplusplus
extern "C" {
#endif
__attribute__((visibility("default"))) wasi_nn_error __attribute__((visibility("default"))) wasi_nn_error
load(void *ctx, graph_builder_array *builder, graph_encoding encoding, load(void *ctx, graph_builder_array *builder, graph_encoding encoding,
execution_target target, graph *g); execution_target target, graph *g);
__attribute__((visibility("default"))) wasi_nn_error
load_by_name(void *tflite_ctx, const char *name, uint32_t namelen, graph *g);
__attribute__((visibility("default"))) wasi_nn_error
load_by_name_with_config(void *ctx, const char *name, uint32_t namelen,
const char *config, uint32_t config_len, graph *g);
__attribute__((visibility("default"))) wasi_nn_error __attribute__((visibility("default"))) wasi_nn_error
init_execution_context(void *ctx, graph g, graph_execution_context *exec_ctx); init_execution_context(void *ctx, graph g, graph_execution_context *exec_ctx);
@ -24,7 +35,7 @@ compute(void *ctx, graph_execution_context exec_ctx);
__attribute__((visibility("default"))) wasi_nn_error __attribute__((visibility("default"))) wasi_nn_error
get_output(void *ctx, graph_execution_context exec_ctx, uint32_t index, get_output(void *ctx, graph_execution_context exec_ctx, uint32_t index,
tensor_data output_tensor, uint32_t *output_tensor_size); tensor_data *output_tensor, uint32_t *output_tensor_size);
__attribute__((visibility("default"))) wasi_nn_error __attribute__((visibility("default"))) wasi_nn_error
init_backend(void **ctx); init_backend(void **ctx);
@ -32,4 +43,8 @@ init_backend(void **ctx);
__attribute__((visibility("default"))) wasi_nn_error __attribute__((visibility("default"))) wasi_nn_error
deinit_backend(void *ctx); deinit_backend(void *ctx);
#endif /* WASI_NN_OPENVINO_HPP */ #ifdef __cplusplus
}
#endif
#endif /* WASI_NN_BACKEND_H */


@ -2,7 +2,10 @@
* Copyright (C) 2019 Intel Corporation. All rights reserved. * Copyright (C) 2019 Intel Corporation. All rights reserved.
* SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception * SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
*/ */
#include "wasi_nn_types.h"
#include <stdlib.h>
#include "wasi_nn_backend.h"
#include "utils/logger.h" #include "utils/logger.h"
#include "llama.h" #include "llama.h"
#include "ggml.h" #include "ggml.h"
@ -14,6 +17,10 @@ extern char const *LLAMA_COMMIT;
extern char const *LLAMA_COMPILER; extern char const *LLAMA_COMPILER;
extern char const *LLAMA_BUILD_TARGET; extern char const *LLAMA_BUILD_TARGET;
#if WASM_ENABLE_WASI_EPHEMERAL_NN == 0
#error This backend doesn't support legacy "wasi_nn" abi. Please enable WASM_ENABLE_WASI_EPHEMERAL_NN.
#endif
// compatible with WasmEdge // compatible with WasmEdge
// https://github.com/second-state/WasmEdge-WASINN-examples/blob/master/wasmedge-ggml/README.md#parameters // https://github.com/second-state/WasmEdge-WASINN-examples/blob/master/wasmedge-ggml/README.md#parameters
// https://github.com/WasmEdge/WasmEdge/blob/master/plugins/wasi_nn/ggml.cpp // https://github.com/WasmEdge/WasmEdge/blob/master/plugins/wasi_nn/ggml.cpp
@ -286,7 +293,7 @@ deinit_backend(void *ctx)
llama_backend_free(); llama_backend_free();
os_free(backend_ctx); free(backend_ctx);
return success; return success;
} }
@ -302,6 +309,11 @@ __load_by_name_with_configuration(void *ctx, const char *filename, graph *g)
{ {
struct LlamaContext *backend_ctx = (struct LlamaContext *)ctx; struct LlamaContext *backend_ctx = (struct LlamaContext *)ctx;
if (backend_ctx->model != NULL) {
// we only implement a single graph
return unsupported_operation;
}
// make sure backend_ctx->config is initialized // make sure backend_ctx->config is initialized
struct llama_model_params model_params = struct llama_model_params model_params =
@ -320,6 +332,7 @@ __load_by_name_with_configuration(void *ctx, const char *filename, graph *g)
#endif #endif
backend_ctx->model = model; backend_ctx->model = model;
*g = 0;
return success; return success;
} }
@ -360,6 +373,16 @@ init_execution_context(void *ctx, graph g, graph_execution_context *exec_ctx)
{ {
struct LlamaContext *backend_ctx = (struct LlamaContext *)ctx; struct LlamaContext *backend_ctx = (struct LlamaContext *)ctx;
if (g != 0 || backend_ctx->model == NULL) {
// we only implement a single graph
return runtime_error;
}
if (backend_ctx->ctx != NULL) {
// we only implement a single context
return unsupported_operation;
}
struct llama_context_params ctx_params = struct llama_context_params ctx_params =
llama_context_params_from_wasi_nn_llama_config(&backend_ctx->config); llama_context_params_from_wasi_nn_llama_config(&backend_ctx->config);
struct llama_context *llama_ctx = struct llama_context *llama_ctx =
@ -370,6 +393,7 @@ init_execution_context(void *ctx, graph g, graph_execution_context *exec_ctx)
} }
backend_ctx->ctx = llama_ctx; backend_ctx->ctx = llama_ctx;
*exec_ctx = 0;
NN_INFO_PRINTF("n_predict = %d, n_ctx = %d", backend_ctx->config.n_predict, NN_INFO_PRINTF("n_predict = %d, n_ctx = %d", backend_ctx->config.n_predict,
llama_n_ctx(backend_ctx->ctx)); llama_n_ctx(backend_ctx->ctx));
@ -381,18 +405,41 @@ set_input(void *ctx, graph_execution_context exec_ctx, uint32_t index,
tensor *wasi_nn_tensor) tensor *wasi_nn_tensor)
{ {
struct LlamaContext *backend_ctx = (struct LlamaContext *)ctx; struct LlamaContext *backend_ctx = (struct LlamaContext *)ctx;
// tensor->data is the prompt string. ends with \0
char *prompt_text = (char *)wasi_nn_tensor->data; if (exec_ctx != 0 || backend_ctx->ctx == NULL) {
// we only implement a single context
return runtime_error;
}
if (index != 0) {
NN_ERR_PRINTF("Invalid input index %d", index);
return invalid_argument;
}
// tensor->data is the prompt string.
char *prompt_text = (char *)wasi_nn_tensor->data.buf;
uint32_t prompt_text_len = wasi_nn_tensor->data.size;
// note: buf[0] == 1 is a workaround for
// https://github.com/second-state/WasmEdge-WASINN-examples/issues/196.
// we may remove it in future.
if (wasi_nn_tensor->type != u8 || wasi_nn_tensor->dimensions->size != 1
|| !(wasi_nn_tensor->dimensions->buf[0] == 1
|| wasi_nn_tensor->dimensions->buf[0] == prompt_text_len)) {
return invalid_argument;
}
if (wasi_nn_tensor->dimensions->buf[0] == 1 && prompt_text_len != 1) {
NN_WARN_PRINTF("Ignoring seemingly wrong input tensor dimensions.");
}
#ifndef NDEBUG #ifndef NDEBUG
NN_DBG_PRINTF("--------------------------------------------------"); NN_DBG_PRINTF("--------------------------------------------------");
NN_DBG_PRINTF("prompt_text: %s", prompt_text); NN_DBG_PRINTF("prompt_text: %.*s", (int)prompt_text_len, prompt_text);
NN_DBG_PRINTF("--------------------------------------------------"); NN_DBG_PRINTF("--------------------------------------------------");
#endif #endif
// tokenize the prompt // tokenize the prompt
uint32_t n_token_max = llama_n_ctx(backend_ctx->ctx); uint32_t n_token_max = llama_n_ctx(backend_ctx->ctx);
uint32_t prompt_text_len = strlen(prompt_text);
if (backend_ctx->prompt == NULL) { if (backend_ctx->prompt == NULL) {
backend_ctx->prompt = calloc(n_token_max, sizeof(llama_token)); backend_ctx->prompt = calloc(n_token_max, sizeof(llama_token));
@ -430,6 +477,11 @@ compute(void *ctx, graph_execution_context exec_ctx)
struct LlamaContext *backend_ctx = (struct LlamaContext *)ctx; struct LlamaContext *backend_ctx = (struct LlamaContext *)ctx;
wasi_nn_error ret = runtime_error; wasi_nn_error ret = runtime_error;
if (exec_ctx != 0 || backend_ctx->ctx == NULL) {
// we only implement a single context
return runtime_error;
}
// reset the generation buffer // reset the generation buffer
if (backend_ctx->generation == NULL) { if (backend_ctx->generation == NULL) {
backend_ctx->generation = backend_ctx->generation =
@ -477,7 +529,6 @@ compute(void *ctx, graph_execution_context exec_ctx)
// main loop // main loop
int32_t n_cur = batch.n_tokens; int32_t n_cur = batch.n_tokens;
int n_decode = 0;
int32_t n_vocab = llama_n_vocab(backend_ctx->model); int32_t n_vocab = llama_n_vocab(backend_ctx->model);
llama_token_data *candidates = NULL; llama_token_data *candidates = NULL;
@ -528,7 +579,6 @@ compute(void *ctx, graph_execution_context exec_ctx)
// push this new token for next evaluation // push this new token for next evaluation
llama_batch_add(&batch, new_token_id, n_cur, seq_ids, llama_batch_add(&batch, new_token_id, n_cur, seq_ids,
sizeof(seq_ids) / sizeof(seq_ids[0]), true); sizeof(seq_ids) / sizeof(seq_ids[0]), true);
n_decode++;
n_cur++; n_cur++;
if (llama_decode(backend_ctx->ctx, batch) != 0) { if (llama_decode(backend_ctx->ctx, batch) != 0) {
@ -549,10 +599,15 @@ fail:
__attribute__((visibility("default"))) wasi_nn_error __attribute__((visibility("default"))) wasi_nn_error
get_output(void *ctx, graph_execution_context exec_ctx, uint32_t index, get_output(void *ctx, graph_execution_context exec_ctx, uint32_t index,
tensor_data output_tensor, uint32_t *output_tensor_size) tensor_data *output_tensor, uint32_t *output_tensor_size)
{ {
struct LlamaContext *backend_ctx = (struct LlamaContext *)ctx; struct LlamaContext *backend_ctx = (struct LlamaContext *)ctx;
if (exec_ctx != 0 || backend_ctx->ctx == NULL) {
// we only implement a single context
return runtime_error;
}
// Compatibility with WasmEdge // Compatibility with WasmEdge
if (index > 1) { if (index > 1) {
NN_ERR_PRINTF("Invalid output index %d", index); NN_ERR_PRINTF("Invalid output index %d", index);
@ -568,7 +623,7 @@ get_output(void *ctx, graph_execution_context exec_ctx, uint32_t index,
printf("%s\n", output_metadata); printf("%s\n", output_metadata);
} }
memcpy(output_tensor, output_metadata, strlen(output_metadata)); memcpy(output_tensor->buf, output_metadata, strlen(output_metadata));
*output_tensor_size = strlen(output_metadata); *output_tensor_size = strlen(output_metadata);
return success; return success;
} }
@ -588,7 +643,7 @@ get_output(void *ctx, graph_execution_context exec_ctx, uint32_t index,
printf("%s", buf); printf("%s", buf);
} }
memcpy(output_tensor + end_pos, buf, strlen(buf)); memcpy(output_tensor->buf + end_pos, buf, strlen(buf));
end_pos += strlen(buf); end_pos += strlen(buf);
} }

View File

@ -3,13 +3,16 @@
* SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception * SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
*/ */
#include "wasi_nn_types.h" #include "wasi_nn_backend.h"
#include "wasi_nn_openvino.h"
#include "utils/logger.h" #include "utils/logger.h"
#include "bh_platform.h" #include "bh_platform.h"
#include "openvino/c/openvino.h" #include "openvino/c/openvino.h"
#if WASM_ENABLE_WASI_EPHEMERAL_NN == 0
#error This backend doesn't support legacy "wasi_nn" abi. Please enable WASM_ENABLE_WASI_EPHEMERAL_NN.
#endif
/* /*
* refer to * refer to
* https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application.html * https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application.html
@ -26,15 +29,25 @@
* from 4. to 6. is the Inference Loop * from 4. to 6. is the Inference Loop
*/ */
/* these limits are arbitrary. */
#define MAX_GRAPHS 4
#define MAX_EXECUTION_CONTEXTS 4
typedef struct { typedef struct {
ov_core_t *core; ov_core_t *core;
/* keep input model files */ /* keep input model files */
void *weight_data; struct OpenVINOGraph {
ov_tensor_t *weights_tensor; void *weight_data;
ov_model_t *model; ov_tensor_t *weights_tensor;
ov_compiled_model_t *compiled_model; ov_model_t *model;
ov_infer_request_t *infer_request; ov_compiled_model_t *compiled_model;
ov_tensor_t *input_tensor; } graphs[MAX_GRAPHS];
struct OpenVINOExecutionContext {
struct OpenVINOGraph *graph;
ov_infer_request_t *infer_request;
} execution_contexts[MAX_EXECUTION_CONTEXTS];
unsigned int n_graphs;
unsigned int n_execution_contexts;
} OpenVINOContext; } OpenVINOContext;
/* /*
@ -134,7 +147,7 @@ print_model_input_output_info(ov_model_t *model)
output_port = NULL; output_port = NULL;
} }
ov_error = ov_error; (void)ov_error;
fail: fail:
if (friendly_name) if (friendly_name)
ov_free(friendly_name); ov_free(friendly_name);
@ -179,6 +192,29 @@ wasi_nn_tensor_type_to_openvino_element_type(tensor_type wasi_nn_type)
return UNDEFINED; return UNDEFINED;
} }
static void
free_graph(struct OpenVINOGraph *graph)
{
if (graph->weight_data)
os_free(graph->weight_data);
if (graph->weights_tensor)
ov_tensor_free(graph->weights_tensor);
if (graph->model)
ov_model_free(graph->model);
if (graph->compiled_model)
ov_compiled_model_free(graph->compiled_model);
}
static void
free_execution_context(struct OpenVINOExecutionContext *c)
{
if (c->infer_request)
ov_infer_request_free(c->infer_request);
}
static wasi_nn_error static wasi_nn_error
uint32_array_to_int64_array(uint32_t array_size, uint32_t *src, int64_t **dst) uint32_array_to_int64_array(uint32_t array_size, uint32_t *src, int64_t **dst)
{ {
@ -198,6 +234,8 @@ load(void *ctx, graph_builder_array *builder, graph_encoding encoding,
execution_target target, graph *g) execution_target target, graph *g)
{ {
OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx; OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx;
struct OpenVINOGraph *graph;
unsigned int graph_idx;
wasi_nn_error ret = unsupported_operation; wasi_nn_error ret = unsupported_operation;
if (encoding != openvino) { if (encoding != openvino) {
@ -223,33 +261,47 @@ load(void *ctx, graph_builder_array *builder, graph_encoding encoding,
graph_builder xml = builder->buf[0]; graph_builder xml = builder->buf[0];
graph_builder weight = builder->buf[1]; graph_builder weight = builder->buf[1];
graph_idx = ov_ctx->n_graphs;
if (graph_idx >= MAX_GRAPHS) {
return runtime_error;
}
graph = &ov_ctx->graphs[graph_idx];
memset(graph, 0, sizeof(*graph));
/* transfer weight to an ov tensor */ /* transfer weight to an ov tensor */
{ {
ov_ctx->weight_data = os_malloc(weight.size); graph->weight_data = os_malloc(weight.size);
if (!ov_ctx->weight_data) if (!graph->weight_data)
goto fail; goto fail;
memcpy(ov_ctx->weight_data, weight.buf, weight.size); memcpy(graph->weight_data, weight.buf, weight.size);
ov_element_type_e type = U8; ov_element_type_e type = U8;
int64_t dims[1] = { weight.size }; int64_t dims[1] = { weight.size };
ov_shape_t shape = { 1, dims }; ov_shape_t shape = { 1, dims };
CHECK_OV_STATUS(ov_tensor_create_from_host_ptr(type, shape, CHECK_OV_STATUS(ov_tensor_create_from_host_ptr(type, shape,
ov_ctx->weight_data, graph->weight_data,
&ov_ctx->weights_tensor), &graph->weights_tensor),
ret); ret);
} }
/* load model from buffer */ /* load model from buffer */
CHECK_OV_STATUS(ov_core_read_model_from_memory_buffer( CHECK_OV_STATUS(ov_core_read_model_from_memory_buffer(
ov_ctx->core, (char *)xml.buf, xml.size, ov_ctx->core, (char *)xml.buf, xml.size,
ov_ctx->weights_tensor, &ov_ctx->model), graph->weights_tensor, &graph->model),
ret); ret);
#ifndef NDEBUG #ifndef NDEBUG
print_model_input_output_info(ov_ctx->model); print_model_input_output_info(graph->model);
#endif #endif
ret = success; CHECK_OV_STATUS(ov_core_compile_model(ov_ctx->core, graph->model, "CPU", 0,
&graph->compiled_model),
ret);
*g = graph_idx;
ov_ctx->n_graphs++;
return success;
fail: fail:
free_graph(graph);
return ret; return ret;
} }
@ -257,20 +309,62 @@ __attribute__((visibility("default"))) wasi_nn_error
load_by_name(void *ctx, const char *filename, uint32_t filename_len, graph *g) load_by_name(void *ctx, const char *filename, uint32_t filename_len, graph *g)
{ {
OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx; OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx;
struct OpenVINOGraph *graph;
unsigned int graph_idx;
wasi_nn_error ret = unsupported_operation; wasi_nn_error ret = unsupported_operation;
CHECK_OV_STATUS( graph_idx = ov_ctx->n_graphs;
ov_core_read_model(ov_ctx->core, filename, NULL, &ov_ctx->model), ret); if (graph_idx >= MAX_GRAPHS) {
return runtime_error;
}
graph = &ov_ctx->graphs[graph_idx];
ret = success; memset(graph, 0, sizeof(*graph));
CHECK_OV_STATUS(
ov_core_read_model(ov_ctx->core, filename, NULL, &graph->model), ret);
CHECK_OV_STATUS(ov_core_compile_model(ov_ctx->core, graph->model, "CPU", 0,
&graph->compiled_model),
ret);
*g = graph_idx;
ov_ctx->n_graphs++;
return success;
fail: fail:
free_graph(graph);
return ret; return ret;
} }
__attribute__((visibility("default"))) wasi_nn_error __attribute__((visibility("default"))) wasi_nn_error
init_execution_context(void *ctx, graph g, graph_execution_context *exec_ctx) init_execution_context(void *ctx, graph g, graph_execution_context *exec_ctx)
{ {
OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx;
struct OpenVINOGraph *graph;
struct OpenVINOExecutionContext *exec;
unsigned int exec_idx;
wasi_nn_error ret;
if (g >= ov_ctx->n_graphs)
return runtime_error;
graph = &ov_ctx->graphs[g];
exec_idx = ov_ctx->n_execution_contexts;
if (exec_idx >= MAX_EXECUTION_CONTEXTS)
return runtime_error;
exec = &ov_ctx->execution_contexts[exec_idx];
memset(exec, 0, sizeof(*exec));
exec->graph = graph;
CHECK_OV_STATUS(ov_compiled_model_create_infer_request(
graph->compiled_model, &exec->infer_request),
ret);
*exec_ctx = exec_idx;
ov_ctx->n_execution_contexts++;
return success; return success;
fail:
return ret;
} }
__attribute__((visibility("default"))) wasi_nn_error __attribute__((visibility("default"))) wasi_nn_error
@ -278,10 +372,16 @@ set_input(void *ctx, graph_execution_context exec_ctx, uint32_t index,
tensor *wasi_nn_tensor) tensor *wasi_nn_tensor)
{ {
OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx; OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx;
struct OpenVINOExecutionContext *exec;
wasi_nn_error ret = unsupported_operation; wasi_nn_error ret = unsupported_operation;
ov_shape_t input_shape = { 0 }; ov_shape_t input_shape = { 0 };
ov_tensor_t *input_tensor = NULL;
int64_t *ov_dims = NULL; int64_t *ov_dims = NULL;
if (exec_ctx >= ov_ctx->n_execution_contexts)
return runtime_error;
exec = &ov_ctx->execution_contexts[exec_ctx];
/* wasi_nn_tensor -> ov_tensor */ /* wasi_nn_tensor -> ov_tensor */
{ {
ret = uint32_array_to_int64_array(wasi_nn_tensor->dimensions->size, ret = uint32_array_to_int64_array(wasi_nn_tensor->dimensions->size,
@ -305,28 +405,21 @@ set_input(void *ctx, graph_execution_context exec_ctx, uint32_t index,
shape_info); shape_info);
CHECK_OV_STATUS(ov_tensor_create_from_host_ptr(input_type, input_shape, CHECK_OV_STATUS(ov_tensor_create_from_host_ptr(input_type, input_shape,
wasi_nn_tensor->data, wasi_nn_tensor->data.buf,
&ov_ctx->input_tensor), &input_tensor),
ret); ret);
} }
CHECK_OV_STATUS(ov_core_compile_model(ov_ctx->core, ov_ctx->model, "CPU", 0,
&ov_ctx->compiled_model),
ret);
CHECK_OV_STATUS(ov_compiled_model_create_infer_request(
ov_ctx->compiled_model, &ov_ctx->infer_request),
ret);
/* install ov_tensor -> infer_request */ /* install ov_tensor -> infer_request */
CHECK_OV_STATUS(ov_infer_request_set_input_tensor_by_index( CHECK_OV_STATUS(ov_infer_request_set_input_tensor_by_index(
ov_ctx->infer_request, index, ov_ctx->input_tensor), exec->infer_request, index, input_tensor),
ret); ret);
ret = success; ret = success;
fail: fail:
if (ov_dims) if (ov_dims)
os_free(ov_dims); os_free(ov_dims);
if (input_tensor)
ov_tensor_free(input_tensor);
ov_shape_free(&input_shape); ov_shape_free(&input_shape);
return ret; return ret;
@ -336,9 +429,14 @@ __attribute__((visibility("default"))) wasi_nn_error
compute(void *ctx, graph_execution_context exec_ctx) compute(void *ctx, graph_execution_context exec_ctx)
{ {
OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx; OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx;
struct OpenVINOExecutionContext *exec;
wasi_nn_error ret = unsupported_operation; wasi_nn_error ret = unsupported_operation;
CHECK_OV_STATUS(ov_infer_request_infer(ov_ctx->infer_request), ret); if (exec_ctx >= ov_ctx->n_execution_contexts)
return runtime_error;
exec = &ov_ctx->execution_contexts[exec_ctx];
CHECK_OV_STATUS(ov_infer_request_infer(exec->infer_request), ret);
ret = success; ret = success;
fail: fail:
return ret; return ret;
@ -346,28 +444,33 @@ fail:
__attribute__((visibility("default"))) wasi_nn_error __attribute__((visibility("default"))) wasi_nn_error
get_output(void *ctx, graph_execution_context exec_ctx, uint32_t index, get_output(void *ctx, graph_execution_context exec_ctx, uint32_t index,
tensor_data output_tensor, uint32_t *output_tensor_size) tensor_data *output_tensor, uint32_t *output_tensor_size)
{ {
OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx; OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx;
struct OpenVINOExecutionContext *exec;
wasi_nn_error ret = unsupported_operation; wasi_nn_error ret = unsupported_operation;
ov_tensor_t *ov_tensor = NULL; ov_tensor_t *ov_tensor = NULL;
void *data = NULL; void *data = NULL;
size_t byte_size = 0; size_t byte_size = 0;
if (exec_ctx >= ov_ctx->n_execution_contexts)
return runtime_error;
exec = &ov_ctx->execution_contexts[exec_ctx];
CHECK_OV_STATUS(ov_infer_request_get_output_tensor_by_index( CHECK_OV_STATUS(ov_infer_request_get_output_tensor_by_index(
ov_ctx->infer_request, index, &ov_tensor), exec->infer_request, index, &ov_tensor),
ret); ret);
CHECK_OV_STATUS(ov_tensor_get_byte_size(ov_tensor, &byte_size), ret); CHECK_OV_STATUS(ov_tensor_get_byte_size(ov_tensor, &byte_size), ret);
if (byte_size > *output_tensor_size) { if (byte_size > output_tensor->size) {
ret = too_large; ret = too_large;
goto fail; goto fail;
} }
CHECK_OV_STATUS(ov_tensor_data(ov_tensor, &data), ret); CHECK_OV_STATUS(ov_tensor_data(ov_tensor, &data), ret);
memcpy(output_tensor, data, byte_size); memcpy(output_tensor->buf, data, byte_size);
*output_tensor_size = (uint32_t)byte_size; *output_tensor_size = (uint32_t)byte_size;
@ -421,27 +524,16 @@ __attribute__((visibility("default"))) wasi_nn_error
deinit_backend(void *ctx) deinit_backend(void *ctx)
{ {
OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx; OpenVINOContext *ov_ctx = (OpenVINOContext *)ctx;
unsigned int i;
if (!ov_ctx) if (!ov_ctx)
return invalid_argument; return invalid_argument;
if (ov_ctx->weight_data) for (i = 0; i < ov_ctx->n_execution_contexts; i++)
os_free(ov_ctx->weight_data); free_execution_context(&ov_ctx->execution_contexts[i]);
if (ov_ctx->weights_tensor) for (i = 0; i < ov_ctx->n_graphs; i++)
ov_tensor_free(ov_ctx->weights_tensor); free_graph(&ov_ctx->graphs[i]);
if (ov_ctx->input_tensor)
ov_tensor_free(ov_ctx->input_tensor);
if (ov_ctx->infer_request)
ov_infer_request_free(ov_ctx->infer_request);
if (ov_ctx->compiled_model)
ov_compiled_model_free(ov_ctx->compiled_model);
if (ov_ctx->model)
ov_model_free(ov_ctx->model);
if (ov_ctx->core) if (ov_ctx->core)
ov_core_free(ov_ctx->core); ov_core_free(ov_ctx->core);

View File

@ -9,7 +9,11 @@
#include "wasi_nn_types.h" #include "wasi_nn_types.h"
#include "wasm_export.h" #include "wasm_export.h"
#include "bh_platform.h"
typedef struct { typedef struct {
korp_mutex lock;
bool busy;
bool is_backend_ctx_initialized; bool is_backend_ctx_initialized;
bool is_model_loaded; bool is_model_loaded;
graph_encoding backend; graph_encoding backend;
@ -28,7 +32,7 @@ typedef wasi_nn_error (*SET_INPUT)(void *, graph_execution_context, uint32_t,
tensor *); tensor *);
typedef wasi_nn_error (*COMPUTE)(void *, graph_execution_context); typedef wasi_nn_error (*COMPUTE)(void *, graph_execution_context);
typedef wasi_nn_error (*GET_OUTPUT)(void *, graph_execution_context, uint32_t, typedef wasi_nn_error (*GET_OUTPUT)(void *, graph_execution_context, uint32_t,
tensor_data, uint32_t *); tensor_data *, uint32_t *);
/* wasi-nn general APIs */ /* wasi-nn general APIs */
typedef wasi_nn_error (*BACKEND_INITIALIZE)(void **); typedef wasi_nn_error (*BACKEND_INITIALIZE)(void **);
typedef wasi_nn_error (*BACKEND_DEINITIALIZE)(void *); typedef wasi_nn_error (*BACKEND_DEINITIALIZE)(void *);

View File

@ -3,11 +3,10 @@
* SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception * SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
*/ */
#include "wasi_nn_tensorflowlite.hpp"
#include "utils/logger.h" #include "utils/logger.h"
#include "bh_platform.h" #include "bh_platform.h"
#include "wasi_nn_types.h" #include "wasi_nn_backend.h"
#include "wasm_export.h" #include "wasm_export.h"
#include <tensorflow/lite/interpreter.h> #include <tensorflow/lite/interpreter.h>
@ -281,6 +280,11 @@ set_input(void *tflite_ctx, graph_execution_context ctx, uint32_t index,
{ {
TFLiteContext *tfl_ctx = (TFLiteContext *)tflite_ctx; TFLiteContext *tfl_ctx = (TFLiteContext *)tflite_ctx;
if (input_tensor->type != fp32) {
NN_ERR_PRINTF("unsupported input tensor type %u", input_tensor->type);
return runtime_error;
}
wasi_nn_error res; wasi_nn_error res;
if (success != (res = is_valid_graph_execution_context(tfl_ctx, ctx))) if (success != (res = is_valid_graph_execution_context(tfl_ctx, ctx)))
return res; return res;
@ -319,7 +323,7 @@ set_input(void *tflite_ctx, graph_execution_context ctx, uint32_t index,
index); index);
int size = model_tensor_size * sizeof(float); int size = model_tensor_size * sizeof(float);
bh_memcpy_s(it, size, input_tensor->data, size); bh_memcpy_s(it, size, input_tensor->data.buf, size);
} }
else { // TODO: Assuming uint8 quantized networks. else { // TODO: Assuming uint8 quantized networks.
TfLiteAffineQuantization *quant_info = TfLiteAffineQuantization *quant_info =
@ -337,7 +341,7 @@ set_input(void *tflite_ctx, graph_execution_context ctx, uint32_t index,
NN_DBG_PRINTF("input tensor: (scale, offset) = (%f, %f)", scale, NN_DBG_PRINTF("input tensor: (scale, offset) = (%f, %f)", scale,
zero_point); zero_point);
float *input_tensor_f = (float *)input_tensor->data; float *input_tensor_f = (float *)input_tensor->data.buf;
for (uint32_t i = 0; i < model_tensor_size; ++i) { for (uint32_t i = 0; i < model_tensor_size; ++i) {
it[i] = (uint8_t)(input_tensor_f[i] / scale + zero_point); it[i] = (uint8_t)(input_tensor_f[i] / scale + zero_point);
} }
@ -361,7 +365,7 @@ compute(void *tflite_ctx, graph_execution_context ctx)
__attribute__((visibility("default"))) wasi_nn_error __attribute__((visibility("default"))) wasi_nn_error
get_output(void *tflite_ctx, graph_execution_context ctx, uint32_t index, get_output(void *tflite_ctx, graph_execution_context ctx, uint32_t index,
tensor_data output_tensor, uint32_t *output_tensor_size) tensor_data *output_tensor, uint32_t *output_tensor_size)
{ {
TFLiteContext *tfl_ctx = (TFLiteContext *)tflite_ctx; TFLiteContext *tfl_ctx = (TFLiteContext *)tflite_ctx;
@ -384,23 +388,34 @@ get_output(void *tflite_ctx, graph_execution_context ctx, uint32_t index,
return too_large; return too_large;
} }
uint32_t model_tensor_size = 1;
for (int i = 0; i < (int)tensor->dims->size; ++i)
model_tensor_size *= (uint32_t)tensor->dims->data[i];
if (*output_tensor_size < model_tensor_size) {
NN_ERR_PRINTF("Insufficient memory to copy tensor %d", index);
return too_large;
}
if (tensor->quantization.type == kTfLiteNoQuantization) { if (tensor->quantization.type == kTfLiteNoQuantization) {
NN_DBG_PRINTF("No quantization information"); NN_DBG_PRINTF("No quantization information");
float *ot = #if WASM_ENABLE_WASI_EPHEMERAL_NN != 0
tfl_ctx->interpreters[ctx].interpreter->typed_output_tensor<float>( if (output_tensor->size < tensor->bytes) {
index); NN_ERR_PRINTF("Insufficient memory to copy tensor %d", index);
return too_large;
int size = model_tensor_size * sizeof(float); }
bh_memcpy_s(output_tensor, size, ot, size); #else
/*
* for now, maintain the bug-to-bug compatibility with the old abi,
* where the size here is the number of fp32, not bytes.
*/
if (output_tensor->size < tensor->bytes / sizeof(float)) {
NN_ERR_PRINTF("Insufficient memory to copy tensor %d", index);
return too_large;
}
#endif
bh_memcpy_s(output_tensor->buf, output_tensor->size, tensor->data.data,
tensor->bytes);
#if WASM_ENABLE_WASI_EPHEMERAL_NN != 0
*output_tensor_size = tensor->bytes;
#else
/*
* for now, maintain the bug-to-bug compatibility with the old abi,
* where the size here is the number of fp32, not bytes.
*/
*output_tensor_size = tensor->bytes / sizeof(float);
#endif
} }
else { // TODO: Assuming uint8 quantized networks. else { // TODO: Assuming uint8 quantized networks.
TfLiteAffineQuantization *quant_info = TfLiteAffineQuantization *quant_info =
@ -409,6 +424,27 @@ get_output(void *tflite_ctx, graph_execution_context ctx, uint32_t index,
NN_ERR_PRINTF("Quantization per channel is not supported"); NN_ERR_PRINTF("Quantization per channel is not supported");
return runtime_error; return runtime_error;
} }
uint32_t model_tensor_size = 1;
for (int i = 0; i < (int)tensor->dims->size; ++i)
model_tensor_size *= (uint32_t)tensor->dims->data[i];
#if WASM_ENABLE_WASI_EPHEMERAL_NN != 0
if (output_tensor->size / sizeof(float) < model_tensor_size) {
NN_ERR_PRINTF("Insufficient memory to copy tensor %d", index);
return too_large;
}
#else
/*
* for now, maintain the bug-to-bug compatibility with the old abi,
* where the size here is the number of fp32, not bytes.
*/
if (output_tensor->size < model_tensor_size) {
NN_ERR_PRINTF("Insufficient memory to copy tensor %d", index);
return too_large;
}
#endif
uint8_t *ot = tfl_ctx->interpreters[ctx] uint8_t *ot = tfl_ctx->interpreters[ctx]
.interpreter->typed_output_tensor<uint8_t>(index); .interpreter->typed_output_tensor<uint8_t>(index);
@ -417,13 +453,22 @@ get_output(void *tflite_ctx, graph_execution_context ctx, uint32_t index,
NN_DBG_PRINTF("output tensor: (scale, offset) = (%f, %f)", scale, NN_DBG_PRINTF("output tensor: (scale, offset) = (%f, %f)", scale,
zero_point); zero_point);
float *output_tensor_f = (float *)output_tensor; float *output_tensor_f = (float *)output_tensor->buf;
for (uint32_t i = 0; i < model_tensor_size; ++i) { for (uint32_t i = 0; i < model_tensor_size; ++i) {
output_tensor_f[i] = (ot[i] - zero_point) * scale; output_tensor_f[i] = (ot[i] - zero_point) * scale;
} }
#if WASM_ENABLE_WASI_EPHEMERAL_NN != 0
*output_tensor_size = model_tensor_size * sizeof(float);
#else
/*
* for now, maintain the bug-to-bug compatibility with the old abi,
* where the size here is the number of fp32, not bytes.
*/
*output_tensor_size = model_tensor_size;
#endif
} }
*output_tensor_size = model_tensor_size;
return success; return success;
} }

View File

@ -1,47 +0,0 @@
/*
* Copyright (C) 2019 Intel Corporation. All rights reserved.
* SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
*/
#ifndef WASI_NN_TENSORFLOWLITE_HPP
#define WASI_NN_TENSORFLOWLITE_HPP
#include "wasi_nn_types.h"
#ifdef __cplusplus
extern "C" {
#endif
__attribute__((visibility("default"))) wasi_nn_error
load(void *tflite_ctx, graph_builder_array *builder, graph_encoding encoding,
execution_target target, graph *g);
__attribute__((visibility("default"))) wasi_nn_error
load_by_name(void *tflite_ctx, const char *filename, uint32_t filename_len,
graph *g);
__attribute__((visibility("default"))) wasi_nn_error
init_execution_context(void *tflite_ctx, graph g, graph_execution_context *ctx);
__attribute__((visibility("default"))) wasi_nn_error
set_input(void *tflite_ctx, graph_execution_context ctx, uint32_t index,
tensor *input_tensor);
__attribute__((visibility("default"))) wasi_nn_error
compute(void *tflite_ctx, graph_execution_context ctx);
__attribute__((visibility("default"))) wasi_nn_error
get_output(void *tflite_ctx, graph_execution_context ctx, uint32_t index,
tensor_data output_tensor, uint32_t *output_tensor_size);
__attribute__((visibility("default"))) wasi_nn_error
init_backend(void **tflite_ctx);
__attribute__((visibility("default"))) wasi_nn_error
deinit_backend(void *tflite_ctx);
#ifdef __cplusplus
}
#endif
#endif

View File

@ -3,6 +3,17 @@
# Copyright (C) 2019 Intel Corporation. All rights reserved. # Copyright (C) 2019 Intel Corporation. All rights reserved.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception # SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
# on intel mac, this ends up with a lot of the following error.
#
# AttributeError: 'Sequential' object has no attribute '_get_save_spec'.
#
# * "pip install tensorflow" installs tensorflow 2.16.2 on intel mac.
# (because it's the last version before tf deprecated the target.)
# * keras 3 support in the version seems incomplete (thus the error)
# * a workaround: use keras 2 as mentioned in:
# https://github.com/tensorflow/tensorflow/releases/tag/v2.16.1
# https://blog.tensorflow.org/2024/03/whats-new-in-tensorflow-216.html
CURR_PATH=$(cd $(dirname $0) && pwd -P) CURR_PATH=$(cd $(dirname $0) && pwd -P)
# WASM application that uses WASI-NN # WASM application that uses WASI-NN

View File

@ -3,7 +3,7 @@
import tensorflow as tf import tensorflow as tf
import numpy as np import numpy as np
from keras.layers import AveragePooling2D, Conv2D from tensorflow.keras.layers import AveragePooling2D, Conv2D
from tensorflow.keras import Input, Model from tensorflow.keras import Input, Model

View File

@ -406,12 +406,11 @@ os_socket_addr_resolve(const char *host, const char *service,
res = result; res = result;
while (res) { while (res) {
if (!is_addrinfo_supported(res)) {
res = res->ai_next;
continue;
}
if (addr_info_size > pos) { if (addr_info_size > pos) {
if (!is_addrinfo_supported(res)) {
res = res->ai_next;
continue;
}
ret = ret =
sockaddr_to_bh_sockaddr(res->ai_addr, &addr_info[pos].sockaddr); sockaddr_to_bh_sockaddr(res->ai_addr, &addr_info[pos].sockaddr);

View File

@ -35,8 +35,8 @@ extend_vector(Vector *vector, size_t length)
if (length <= vector->max_elems) if (length <= vector->max_elems)
return true; return true;
if (length < vector->size_elem * 3 / 2) if (length < vector->max_elems * 3 / 2)
length = vector->size_elem * 3 / 2; length = vector->max_elems * 3 / 2;
if (!(data = alloc_vector_data(length, vector->size_elem))) { if (!(data = alloc_vector_data(length, vector->size_elem))) {
return false; return false;
@ -194,12 +194,12 @@ bh_vector_append(Vector *vector, const void *elem_buf)
goto just_return; goto just_return;
} }
/* make sure one more slot is used by the thread who allocas it */ /* make sure one more slot is used by the thread who allocates it */
if (vector->lock) if (vector->lock)
os_mutex_lock(vector->lock); os_mutex_lock(vector->lock);
if (!extend_vector(vector, vector->num_elems + 1)) { if (!extend_vector(vector, vector->num_elems + 1)) {
LOG_ERROR("Append ector elem failed: extend vector failed.\n"); LOG_ERROR("Append vector elem failed: extend vector failed.\n");
goto unlock_return; goto unlock_return;
} }

View File

@ -102,6 +102,7 @@ cmake -DWAMR_BUILD_PLATFORM=linux -DWAMR_BUILD_TARGET=ARM
### **Enable lib wasi-nn** ### **Enable lib wasi-nn**
- **WAMR_BUILD_WASI_NN**=1/0, default to disable if not set - **WAMR_BUILD_WASI_NN**=1/0, default to disable if not set
> Note: WAMR_BUILD_WASI_NN without WAMR_BUILD_WASI_EPHEMERAL_NN is deprecated and will likely be removed in future versions of WAMR. Please consider enabling WAMR_BUILD_WASI_EPHEMERAL_NN as well.
> Note: See [WASI-NN](../core/iwasm/libraries/wasi-nn) for more details. > Note: See [WASI-NN](../core/iwasm/libraries/wasi-nn) for more details.
### **Enable lib wasi-nn GPU mode** ### **Enable lib wasi-nn GPU mode**
@ -113,7 +114,7 @@ cmake -DWAMR_BUILD_PLATFORM=linux -DWAMR_BUILD_TARGET=ARM
- **WAMR_BUILD_WASI_NN_EXTERNAL_DELEGATE_PATH**=Path to the external delegate shared library (e.g. `libedgetpu.so.1.0` for Coral USB) - **WAMR_BUILD_WASI_NN_EXTERNAL_DELEGATE_PATH**=Path to the external delegate shared library (e.g. `libedgetpu.so.1.0` for Coral USB)
### **Enable lib wasi-nn with `wasi_ephemeral_nn` module support** ### **Enable lib wasi-nn with `wasi_ephemeral_nn` module support**
- **WAMR_BUILD_WASI_EPHEMERAL_NN**=1/0, default to disable if not set - **WAMR_BUILD_WASI_EPHEMERAL_NN**=1/0, default to enable if not set
### **Disable boundary check with hardware trap** ### **Disable boundary check with hardware trap**
- **WAMR_DISABLE_HW_BOUND_CHECK**=1/0, default to enable if not set and supported by platform - **WAMR_DISABLE_HW_BOUND_CHECK**=1/0, default to enable if not set and supported by platform
@ -292,6 +293,10 @@ Currently we only profile the memory consumption of module, module_instance and
- **WAMR_BUILD_AOT_INTRINSICS**=1/0, enable the AOT intrinsic functions, default to enable if not set. These functions can be called from the AOT code when `--disable-llvm-intrinsics` flag or `--enable-builtin-intrinsics=<intr1,intr2,...>` flag is used by wamrc to generate the AOT file. - **WAMR_BUILD_AOT_INTRINSICS**=1/0, enable the AOT intrinsic functions, default to enable if not set. These functions can be called from the AOT code when `--disable-llvm-intrinsics` flag or `--enable-builtin-intrinsics=<intr1,intr2,...>` flag is used by wamrc to generate the AOT file.
> Note: See [Tuning the XIP intrinsic functions](./xip.md#tuning-the-xip-intrinsic-functions) for more details. > Note: See [Tuning the XIP intrinsic functions](./xip.md#tuning-the-xip-intrinsic-functions) for more details.
### **Enable extended constant expression**
- **WAMR_BUILD_EXTENDED_CONST_EXPR**=1/0, default to disable if not set.
> Note: See [Extended Constant Expressions](https://github.com/WebAssembly/extended-const/blob/main/proposals/extended-const/Overview.md) for more details.
### **Configurable memory access boundary check** ### **Configurable memory access boundary check**
- **WAMR_CONFIGURABLE_BOUNDS_CHECKS**=1/0, default to disable if not set - **WAMR_CONFIGURABLE_BOUNDS_CHECKS**=1/0, default to disable if not set
> Note: If it is enabled, allow to run `iwasm --disable-bounds-checks` to disable the memory access boundary checks for interpreter mode. > Note: If it is enabled, allow to run `iwasm --disable-bounds-checks` to disable the memory access boundary checks for interpreter mode.
@ -360,4 +365,4 @@ For Valgrind, begin with the following configurations and add additional ones as
-DWAMR_DISABLE_HW_BOUND_CHECK=0 \ -DWAMR_DISABLE_HW_BOUND_CHECK=0 \
-DWAMR_DISABLE_WRITE_GS_BASE=0 -DWAMR_DISABLE_WRITE_GS_BASE=0
#... #...
``` ```

View File

@ -0,0 +1,46 @@
# Security Issue Runbook
This runbook provides step-by-step guidance on handling a security advisory. Typically, it begins with a draft security advisory when we initiate the process outlined in this runbook. The draft security advisory is created by a contributor or a maintainer.
For information on what types of issues are considered security vulnerabilities and require a security advisory for resolution, please refer to [identifying a security issue](./security_need_to_know.md#identifying-a-security-issue).
## Step 1: Initial Response to Security Advisory
- Receive Security Advisory: When a new security advisory is received, the Incident Manager, typically the maintainer who opened the advisory, becomes the first responder. If the advisory was opened by someone else, a maintainer should take on the role of Incident Manager. The Incident Manager can hand off this role to another maintainer if necessary.
- Acknowledge Receipt: The Incident Manager should promptly acknowledge receipt of the advisory and communicate that the investigation will begin immediately. Security issues are the highest priority.
## Step 2: Investigating the Vulnerability
- Identify the Vulnerability: Reproduce the issue to understand the vulnerability. Determine which versions and platforms are affected. Fill out the advisory details with this information.
- Accept the Report: Accept the security report and create a temporary private fork to collaborate on a fix. Invite necessary helpers and stakeholders to this fork, as their input can be valuable.
## Step 3: Communication and Collaboration
- Use Non-Public Channels: Communicate through non-public channels, preferably email, during the resolution process. Avoid filing issues or pull requests on third-party repositories if they are involved.
- Workaround for Third-Party Dependencies: If third-party dependencies are involved, consider a workaround to patch the issue quickly unless the third party can release a fix promptly.
## Step 4: Finalizing and Preparing for Release
- Finalize Details: Once a fix is developed and the vulnerability is fully understood, finalize the advisory details and prepare for public release. Ensure the security issues are resolved in the private fork.
- Request CVE: Use the Big Green Button on the advisory to request a CVE number from GitHub staff.
- Advance Disclosure Email: Decide on a disclosure date, typically within a week, and send an email to sec-announce@bytecodealliance.org about the upcoming security release. Other channels may also be used to communicate the disclosure date.
## Step 5: Preparing and Testing Patch Releases
- Prepare PRs for Patch Releases: Create pull requests in the private fork for each version being patched. Ensure each PR is ready to apply cleanly and includes release notes for each release branch.
- Run Full Test Suite: Run the full test suite locally for the main branch. Attempt to run as much of the CI matrix locally as possible.
## Step 6: Public Release and Communication
- Open Version Bump PRs: Open version bump pull requests on the public repository without including patch notes or release notes for the fix.
- Manually Make PRs from Private Fork: Transfer the necessary pull requests from the private fork to the public repository.
- Merge and Trigger Releases: Merge the version bump PRs and trigger the release process.
- Publish GitHub Advisories: Delete the private forks and use the Big Green Button to publish the advisory.
- Send Security Release Email: Send a follow-up email to sec-announce@bytecodealliance.org describing the security release. Other communication channels can also be used to inform users about the security release.
By following these steps, you can effectively manage and resolve security issues for your open source project, ensuring timely communication and collaboration while maintaining the integrity and security of your software.
## References
- [Vulnerability Response Runbook](https://github.com/bytecodealliance/rfcs/blob/main/accepted/vulnerability-response-runbook.md)
- [Wasmtime Security Vulnerability Runbook](https://docs.wasmtime.dev/security-vulnerability-runbook.html)

View File

@ -30,4 +30,4 @@ Before reporting an issue, particularly one related to crashing, consult [the ch
Upon receiving an issue, thoroughly review [the cheat sheet](https://github.com/bytecodealliance/rfcs/blob/main/accepted/what-is-considered-a-security-bug.md#cheat-sheet-is-this-bug-considered-a-security-vulnerability) to assess and _Report a security vulnerability_ if the issue is indeed a security vulnerability. Upon receiving an issue, thoroughly review [the cheat sheet](https://github.com/bytecodealliance/rfcs/blob/main/accepted/what-is-considered-a-security-bug.md#cheat-sheet-is-this-bug-considered-a-security-vulnerability) to assess and _Report a security vulnerability_ if the issue is indeed a security vulnerability.
Once a security issue is confirmed, please refer to [the runbook](https://github.com/bytecodealliance/rfcs/blob/main/accepted/vulnerability-response-runbook.md) for the subsequent steps to take. Once a security issue is confirmed, please refer to [the runbook](./security_issue_runbook.md) for the subsequent steps to take.

View File

@ -111,7 +111,7 @@ The Fast JIT is a lightweight JIT engine with quick startup, small footprint and
(6) To enable the `Multi-tier JIT` mode: (6) To enable the `Multi-tier JIT` mode:
``` Bash ``` Bash
mkdir build && cd build mkdir build && cd build
cmake .. -DWAMR_BUILD_FAST_JTI=1 -DWAMR_BUILD_JIT=1 cmake .. -DWAMR_BUILD_FAST_JIT=1 -DWAMR_BUILD_JIT=1
make make
``` ```
The Multi-tier JIT is a two level JIT tier-up engine, which launches Fast JIT to run the wasm module as soon as possible and creates backend threads to compile the LLVM JIT functions at the same time, and when the LLVM JIT functions are compiled, the runtime will switch the execution from the Fast JIT jitted code to LLVM JIT jitted code gradually, so as to gain the best performance. The Multi-tier JIT is a two level JIT tier-up engine, which launches Fast JIT to run the wasm module as soon as possible and creates backend threads to compile the LLVM JIT functions at the same time, and when the LLVM JIT functions are compiled, the runtime will switch the execution from the Fast JIT jitted code to LLVM JIT jitted code gradually, so as to gain the best performance.

View File

@ -83,17 +83,21 @@ target_link_libraries(vmlib ${LLVM_AVAILABLE_LIBS} ${UV_A_LIBS} -lm -ldl -lpthre
include_directories(${CMAKE_CURRENT_LIST_DIR}/src) include_directories(${CMAKE_CURRENT_LIST_DIR}/src)
include (${SHARED_DIR}/utils/uncommon/shared_uncommon.cmake) include (${SHARED_DIR}/utils/uncommon/shared_uncommon.cmake)
add_executable (shared_heap_chain_test src/shared_heap_chain.c ${UNCOMMON_SHARED_SOURCE})
add_executable (shared_heap_test src/main.c ${UNCOMMON_SHARED_SOURCE}) add_executable (shared_heap_test src/main.c ${UNCOMMON_SHARED_SOURCE})
check_pie_supported() check_pie_supported()
set_target_properties (shared_heap_test PROPERTIES POSITION_INDEPENDENT_CODE ON) set_target_properties (shared_heap_test PROPERTIES POSITION_INDEPENDENT_CODE ON)
if (APPLE) if (APPLE)
target_link_libraries (shared_heap_test vmlib -lm -ldl -lpthread) set (LIBS vmlib -lm -ldl -lpthread)
else () else ()
target_link_libraries (shared_heap_test vmlib -lm -ldl -lpthread -lrt) set (LIBS vmlib -lm -ldl -lpthread -lrt)
endif () endif ()
target_link_libraries (shared_heap_chain_test ${LIBS})
target_link_libraries (shared_heap_test ${LIBS})
add_subdirectory(wasm-apps) add_subdirectory(wasm-apps)
if (WAMR_BUILD_AOT EQUAL 1) if (WAMR_BUILD_AOT EQUAL 1)
@ -107,21 +111,31 @@ if (WAMR_BUILD_AOT EQUAL 1)
) )
if (WAMR_COMPILER) if (WAMR_COMPILER)
message (CHECK_PASS "found") message (CHECK_PASS "found")
else() else ()
message (CHECK_FAIL "not found") message (CHECK_FAIL "not found")
endif() endif ()
if (NOT EXISTS ${WAMR_COMPILER}) if (NOT EXISTS ${WAMR_COMPILER})
message (FATAL_ERROR "Please build wamrc under ${WAMR_ROOT_DIR}/wamr-compiler") message (FATAL_ERROR "Please build wamrc under ${WAMR_ROOT_DIR}/wamr-compiler")
else() else ()
message (STATUS "WAMR_COMPILER is ${WAMR_COMPILER}") message (STATUS "WAMR_COMPILER is ${WAMR_COMPILER}")
endif() endif ()
if (WAMR_BUILD_TARGET STREQUAL "X86_32")
set (WAMR_COMPILER_FLAGS --enable-shared-heap --target=i386)
set (WAMR_COMPILER_CHAIN_FLAGS --enable-shared-chain --target=i386)
else ()
set (WAMR_COMPILER_FLAGS --enable-shared-heap)
set (WAMR_COMPILER_CHAIN_FLAGS --enable-shared-chain)
endif ()
add_custom_target( add_custom_target(
wasm_to_aot wasm_to_aot
ALL ALL
DEPENDS wasm-apps/test1.wasm wasm-apps/test2.wasm ${WAMR_COMPILER} DEPENDS wasm-apps/test1.wasm wasm-apps/test2.wasm ${WAMR_COMPILER}
COMMAND ${WAMR_COMPILER} --enable-shared-heap -o wasm-apps/test1.aot wasm-apps/test1.wasm COMMAND ${WAMR_COMPILER} ${WAMR_COMPILER_FLAGS} -o wasm-apps/test1.aot wasm-apps/test1.wasm
COMMAND ${WAMR_COMPILER} --enable-shared-heap -o wasm-apps/test2.aot wasm-apps/test2.wasm COMMAND ${WAMR_COMPILER} ${WAMR_COMPILER_FLAGS} -o wasm-apps/test2.aot wasm-apps/test2.wasm
COMMAND ${WAMR_COMPILER} ${WAMR_COMPILER_CHAIN_FLAGS} -o wasm-apps/test1_chain.aot wasm-apps/test1.wasm
COMMAND ${WAMR_COMPILER} ${WAMR_COMPILER_CHAIN_FLAGS} -o wasm-apps/test2_chain.aot wasm-apps/test2.wasm
WORKING_DIRECTORY ${CMAKE_BINARY_DIR} WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
) )
endif() endif()

View File

@ -0,0 +1,50 @@
# Shared heap Sample introduction
This sample shows how to use the shared heap feature in WAMR. The shared heap feature allows multiple WASM instances to share the same memory space, which is useful when you want to run multiple WASM instances in the same process and share data between them. WAMR still maintains the sandbox nature of WASM for the shared heap, but data management and correct data synchronization in the shared heap rely on the user's implementation.
> Note: The shared heap feature is an experimental feature and should be used with caution. It is optional and only available when building WAMR with the CMake cache variable `WAMR_BUILD_SHARED_HEAP` set to 1.
## Build and run the sample
Build the shared heap (multi-thread) sample and the shared heap chain sample with the following commands:
```bash
cmake -S . -B build
cmake --build build
```
The shared heap sample demonstrates how to create a shared heap and use it to share data between two WASM instances, which covers most use cases. Use the following commands to run the sample:
```bash
cd build
./shared_heap_test
```
The shared heap chain sample chains a pre-allocated heap and a normal shared heap into one chain (linked list), attaches/detaches them together as a whole, and passes WASM addresses directly between two WASM instances. Use the following commands to run the sample:
```bash
cd build
./shared_heap_chain_test
```
## How to use shared heap
The shared heap is an advanced feature in WAMR that gives the user the flexibility to share data between multiple WASM instances (the same address mapping is used in each attached instance) or between WebAssembly and the host without incurring any copy overhead. The shared heap can be regarded as an extension of the WebAssembly linear memory, but it also relies heavily on the user's implementation to manage the shared data correctly. The following are some takeaway points to help the user use the shared heap correctly.
### Create and manage shared heap
You can create a shared heap by calling the `wasm_runtime_create_shared_heap(SharedHeapInitArgs *init_args)` API. Depending on `init_args`, you can create a shared heap in two ways (see the sketch after this list):
1. WAMR-managed shared heap: when only `init_args.size` is given and `init_args.pre_allocated_addr` stays NULL, WAMR allocates a shared heap (not from the linear memory) of the given size. The shared heap is managed by WAMR; the wasm app or the host (WAMR users) can dynamically allocate memory from it on demand by calling `wasm_runtime_shared_heap_malloc()` and `wasm_runtime_shared_heap_free()`. Only memory allocated from the shared heap is valid and can be shared, not the unallocated part of the shared heap. It is freed automatically when the runtime is destroyed (when `wasm_runtime_destroy()` is called).
2. Preallocated shared heap: the user can also use pre-allocated memory (it can be allocated from the system heap or be a static global buffer; the user must ensure its accessibility and size are correct) as a shared heap by giving `init_args.pre_allocated_addr` and `init_args.size`. This kind of shared heap serves as an area for data exchange, primarily between the host and WebAssembly. Any data within this area can be directly accessed by both sides (assuming the layout of the data structure is known). For instance, the host can store large structured variables in this space, allowing the WebAssembly application to operate on them without the need for copying. The user is responsible for managing the life cycle of the pre-allocated memory.
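As a minimal sketch of the two creation modes (mirroring the sample code later on this page; the 64 KiB size and the `pre_buf` buffer name are illustrative, and error handling is omitted):

```c
#include <stdbool.h>
#include <string.h>
#include "wasm_export.h"

/* user-managed buffer backing the preallocated shared heap */
static char pre_buf[4096];

static bool
create_two_heaps(wasm_shared_heap_t *managed, wasm_shared_heap_t *prealloc)
{
    SharedHeapInitArgs init_args;

    /* 1. WAMR-managed shared heap: only the size is given */
    memset(&init_args, 0, sizeof(init_args));
    init_args.size = 65536;
    *managed = wasm_runtime_create_shared_heap(&init_args);

    /* 2. Preallocated shared heap: caller supplies the buffer and its size */
    memset(&init_args, 0, sizeof(init_args));
    init_args.pre_allocated_addr = pre_buf;
    init_args.size = sizeof(pre_buf);
    *prealloc = wasm_runtime_create_shared_heap(&init_args);

    return *managed != NULL && *prealloc != NULL;
}
```

The WAMR-managed heap is released automatically at `wasm_runtime_destroy()`, while `pre_buf` stays under the caller's control.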
After creation, the shared heap can be attached to a WASM instance (as an additional segment appended to the end of the linear memory) by calling `wasm_runtime_attach_shared_heap(wasm_module_inst_t module_inst, wasm_shared_heap_t shared_heap)`, and detached by calling `wasm_runtime_detach_shared_heap(wasm_module_inst_t module_inst)`. Data sharing therefore happens only between the WASM instances that have the same shared heap attached, entirely by the user's choice.
#### Shared heap chain
Sometimes you may want to attach multiple shared heaps together as a chain (linked list) to share data more flexibly. You can call `wasm_runtime_chain_shared_heaps(wasm_shared_heap_t head, wasm_shared_heap_t body)` to chain two shared heaps together; the resulting list appears as one continuous shared heap from the wasm app's point of view (see the sketch after the figure below). To create a shared heap chain, the shared heaps must not be currently attached to any WASM instance.
> PS: At most one shared heap in a chain can be a WAMR-managed shared heap; the rest have to be pre-allocated shared heaps.
![shared-heap-chain](./images/shared_heap_chain.png)
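To make the chain usage concrete, here is a minimal host-side sketch (assuming `managed` and `prealloc` were created as in the earlier snippet, `module_inst1`/`module_inst2` are already-instantiated modules, and error handling is trimmed):

```c
#include "wasm_export.h"

static int
share_via_chain(wasm_shared_heap_t managed, wasm_shared_heap_t prealloc,
                wasm_module_inst_t module_inst1, wasm_module_inst_t module_inst2)
{
    /* chain the WAMR-managed heap (head) with the preallocated heap (body);
     * neither heap may be attached to any instance at this point */
    wasm_shared_heap_t chain = wasm_runtime_chain_shared_heaps(managed, prealloc);
    if (!chain)
        return -1;

    /* attach the whole chain to both instances so they share one mapping */
    if (!wasm_runtime_attach_shared_heap(module_inst1, chain)
        || !wasm_runtime_attach_shared_heap(module_inst2, chain))
        return -1;

    /* ... exchange data between the two instances here ... */

    /* detach once the instances no longer need the shared region */
    wasm_runtime_detach_shared_heap(module_inst1);
    wasm_runtime_detach_shared_heap(module_inst2);
    return 0;
}
```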

(New binary file added: the `shared_heap_chain.png` image referenced above, 109 KiB; contents not shown.)

View File

@ -55,7 +55,7 @@ thread1_callback(void *arg)
i + 1); i + 1);
printf("wasm app1 send buf: %s\n\n", buf); printf("wasm app1 send buf: %s\n\n", buf);
if (!bh_post_msg(queue, 1, buf, 1024 * i)) { if (!bh_post_msg(queue, 1, buf, 1024 * (i + 1))) {
printf("Failed to post message to queue\n"); printf("Failed to post message to queue\n");
wasm_runtime_shared_heap_free(module_inst, offset); wasm_runtime_shared_heap_free(module_inst, offset);
break; break;
@ -84,7 +84,7 @@ thread1_callback(void *arg)
buf = wasm_runtime_addr_app_to_native(module_inst, argv[0]); buf = wasm_runtime_addr_app_to_native(module_inst, argv[0]);
printf("wasm app1 send buf: %s\n\n", buf); printf("wasm app1 send buf: %s\n\n", buf);
if (!bh_post_msg(queue, 1, buf, 1024 * i)) { if (!bh_post_msg(queue, 1, buf, 1024 * (i + 1))) {
printf("Failed to post message to queue\n"); printf("Failed to post message to queue\n");
wasm_runtime_shared_heap_free(module_inst, argv[0]); wasm_runtime_shared_heap_free(module_inst, argv[0]);
break; break;
@ -251,7 +251,7 @@ main(int argc, char **argv)
heap_init_args.size = 65536; heap_init_args.size = 65536;
shared_heap = wasm_runtime_create_shared_heap(&heap_init_args); shared_heap = wasm_runtime_create_shared_heap(&heap_init_args);
if (!shared_heap) { if (!shared_heap) {
printf("Create shared heap failed. error: %s\n", error_buf); printf("Create shared heap failed.\n");
goto fail; goto fail;
} }
@ -268,7 +268,7 @@ main(int argc, char **argv)
} }
/* create thread 1 */ /* create thread 1 */
struct thread_arg targ1 = { 0 }; thread_arg targ1 = { 0 };
korp_tid tid1; korp_tid tid1;
targ1.queue = queue; targ1.queue = queue;
targ1.module_inst = module_inst1; targ1.module_inst = module_inst1;
@ -279,7 +279,7 @@ main(int argc, char **argv)
} }
/* create thread 2 */ /* create thread 2 */
struct thread_arg targ2 = { 0 }; thread_arg targ2 = { 0 };
korp_tid tid2; korp_tid tid2;
targ2.queue = queue; targ2.queue = queue;
targ2.module_inst = module_inst2; targ2.module_inst = module_inst2;

View File

@ -0,0 +1,321 @@
/*
* Copyright (C) 2019 Intel Corporation. All rights reserved.
* SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
*/
#include "wasm_export.h"
#include "bh_platform.h"
#include "bh_read_file.h"
#define BUF_SIZE 4096
static char preallocated_buf[BUF_SIZE];
static bool
produce_data(wasm_module_inst_t module_inst, wasm_exec_env_t exec_env,
bh_queue *queue, wasm_function_inst_t func, uint32 *argv,
uint32 buf_size, bool free_on_fail)
{
uint8 *buf;
wasm_runtime_call_wasm(exec_env, func, 2, argv);
if (wasm_runtime_get_exception(module_inst)) {
printf("Failed to call function: %s\n",
wasm_runtime_get_exception(module_inst));
return false;
}
if (argv[0] == 0) {
printf("Failed to allocate memory from shared heap\n");
return false;
}
buf = wasm_runtime_addr_app_to_native(module_inst, argv[0]);
printf("wasm app1 send buf: %s\n\n", buf);
/* Passes wasm address directly between wasm apps since memory in shared
* heap chain is viewed as single address space in wasm's perspective */
buf = (uint8 *)(uintptr_t)argv[0];
if (!bh_post_msg(queue, 1, buf, buf_size)) {
printf("Failed to post message to queue\n");
if (free_on_fail)
wasm_runtime_shared_heap_free(module_inst, argv[0]);
return false;
}
return true;
}
static void *
wasm_producer(wasm_module_inst_t module_inst, bh_queue *queue)
{
wasm_exec_env_t exec_env;
wasm_function_inst_t my_shared_heap_malloc_func, my_shared_heap_free_func,
produce_str_func;
uint32 i, argv[2];
/* lookup wasm functions */
if (!(my_shared_heap_malloc_func = wasm_runtime_lookup_function(
module_inst, "my_shared_heap_malloc"))
|| !(my_shared_heap_free_func = wasm_runtime_lookup_function(
module_inst, "my_shared_heap_free"))
|| !(produce_str_func =
wasm_runtime_lookup_function(module_inst, "produce_str"))) {
printf("Failed to lookup function.\n");
}
/* create exec env */
if (!(exec_env = wasm_runtime_create_exec_env(module_inst, 32768))) {
printf("Failed to create exec env.\n");
return NULL;
}
/* allocate memory by calling my_shared_heap_malloc function and send it
to wasm app2 */
for (i = 0; i < 8; i++) {
argv[0] = 1024 * (i + 1);
argv[1] = i + 1;
if (!produce_data(module_inst, exec_env, queue,
my_shared_heap_malloc_func, argv, 1024 * (i + 1),
true)) {
break;
}
}
/* use pre-allocated shared heap memory by calling produce_str function and
send it to wasm app2, the pre-allocated shared heap is the last one in
chain, so its end address is calculated from UINT32_MAX */
uint32 wasm_start_addr = UINT32_MAX - BUF_SIZE + 1;
for (i = 8; i < 16; i++) {
argv[0] = wasm_start_addr + 512 * (i - 8);
argv[1] = i + 1;
if (!produce_data(module_inst, exec_env, queue, produce_str_func, argv,
512, false)) {
break;
}
}
wasm_runtime_destroy_exec_env(exec_env);
return NULL;
}
static void
wasm_consumer(wasm_module_inst_t module_inst, bh_queue *queue)
{
wasm_function_inst_t print_buf_func, consume_str_func;
wasm_exec_env_t exec_env;
uint32 argv[2], i;
bh_message_t msg;
char *buf;
/* lookup wasm function */
if (!(print_buf_func =
wasm_runtime_lookup_function(module_inst, "print_buf"))
|| !(consume_str_func =
wasm_runtime_lookup_function(module_inst, "consume_str"))) {
printf("Failed to lookup function.\n");
return;
}
/* create exec env */
if (!(exec_env = wasm_runtime_create_exec_env(module_inst, 32768))) {
printf("Failed to create exec env.\n");
return;
}
for (i = 0; i < 16; i++) {
msg = bh_get_msg(queue, BHT_WAIT_FOREVER);
if (!msg)
return;
buf = bh_message_payload(msg);
/* call wasm function */
argv[0] = (uint32)(uintptr_t)buf;
if (i < 8)
wasm_runtime_call_wasm(exec_env, print_buf_func, 1, argv);
else
wasm_runtime_call_wasm(exec_env, consume_str_func, 1, argv);
if (wasm_runtime_get_exception(module_inst)) {
printf(
"Failed to call 'print_buf' or 'consumer_str' function: %s\n",
wasm_runtime_get_exception(module_inst));
}
bh_free_msg(msg);
}
wasm_runtime_destroy_exec_env(exec_env);
}
static char global_heap_buf[512 * 1024];
int
main(int argc, char **argv)
{
char *wasm_file1 = NULL, *wasm_file2 = NULL;
uint8 *wasm_file1_buf = NULL, *wasm_file2_buf = NULL;
uint32 wasm_file1_size, wasm_file2_size;
wasm_module_t wasm_module1 = NULL, wasm_module2 = NULL;
wasm_module_inst_t module_inst1 = NULL;
wasm_module_inst_t module_inst2 = NULL;
wasm_shared_heap_t shared_heap = NULL, shared_heap2 = NULL,
shared_heap_chain = NULL;
bh_queue *queue = NULL;
RuntimeInitArgs init_args;
SharedHeapInitArgs heap_init_args;
char error_buf[128] = { 0 };
bool aot_mode = false;
int ret = -1;
if (argc > 1 && !strcmp(argv[1], "--aot"))
aot_mode = true;
if (!aot_mode)
printf("Test shared heap in interpreter mode\n\n");
else
printf("Test shared heap in AOT mode\n\n");
memset(&init_args, 0, sizeof(RuntimeInitArgs));
init_args.mem_alloc_type = Alloc_With_Pool;
init_args.mem_alloc_option.pool.heap_buf = global_heap_buf;
init_args.mem_alloc_option.pool.heap_size = sizeof(global_heap_buf);
/* init wasm runtime */
if (!wasm_runtime_full_init(&init_args)) {
printf("Init runtime environment failed.\n");
return -1;
}
/* create queue */
if (!(queue = bh_queue_create())) {
printf("Create queue failed.\n");
goto fail;
}
/* read wasm file */
if (!aot_mode)
wasm_file1 = "./wasm-apps/test1.wasm";
else
wasm_file1 = "./wasm-apps/test1_chain.aot";
if (!(wasm_file1_buf =
bh_read_file_to_buffer(wasm_file1, &wasm_file1_size))) {
printf("Open wasm file %s failed.\n", wasm_file1);
goto fail;
}
/* load wasm file */
wasm_module1 = wasm_runtime_load((uint8 *)wasm_file1_buf, wasm_file1_size,
error_buf, sizeof(error_buf));
if (!wasm_module1) {
printf("Load wasm module failed. error: %s\n", error_buf);
goto fail;
}
/* instantiate module */
module_inst1 = wasm_runtime_instantiate(wasm_module1, 65536, 0, error_buf,
sizeof(error_buf));
if (!module_inst1) {
printf("Instantiate wasm module failed. error: %s\n", error_buf);
goto fail;
}
/* read wasm file */
if (!aot_mode)
wasm_file2 = "./wasm-apps/test2.wasm";
else
wasm_file2 = "./wasm-apps/test2_chain.aot";
if (!(wasm_file2_buf =
bh_read_file_to_buffer(wasm_file2, &wasm_file2_size))) {
printf("Open wasm file %s failed.\n", wasm_file1);
goto fail;
}
/* load wasm file */
wasm_module2 = wasm_runtime_load((uint8 *)wasm_file2_buf, wasm_file2_size,
error_buf, sizeof(error_buf));
if (!wasm_module2) {
printf("Load wasm module failed. error: %s\n", error_buf);
goto fail;
}
/* instantiate module */
module_inst2 = wasm_runtime_instantiate(wasm_module2, 65536, 0, error_buf,
sizeof(error_buf));
if (!module_inst2) {
printf("Instantiate wasm module failed. error: %s\n", error_buf);
goto fail;
}
/* create shared heap */
memset(&heap_init_args, 0, sizeof(heap_init_args));
heap_init_args.size = 65536;
shared_heap = wasm_runtime_create_shared_heap(&heap_init_args);
if (!shared_heap) {
printf("Create shared heap failed.\n");
goto fail;
}
/* create a preallocated shared heap */
memset(&heap_init_args, 0, sizeof(heap_init_args));
heap_init_args.pre_allocated_addr = preallocated_buf;
heap_init_args.size = BUF_SIZE;
shared_heap2 = wasm_runtime_create_shared_heap(&heap_init_args);
if (!shared_heap2) {
printf("Create preallocated shared heap failed\n");
goto fail;
}
shared_heap_chain =
wasm_runtime_chain_shared_heaps(shared_heap, shared_heap2);
if (!shared_heap_chain) {
printf("Create shared heap chain failed\n");
goto fail;
}
/* attach module instance 1 to the shared heap */
if (!wasm_runtime_attach_shared_heap(module_inst1, shared_heap_chain)) {
printf("Attach shared heap failed.\n");
goto fail;
}
/* attach module instance 2 to the shared heap */
if (!wasm_runtime_attach_shared_heap(module_inst2, shared_heap_chain)) {
printf("Attach shared heap failed.\n");
goto fail;
}
/* wasm 1 produce shared data */
wasm_producer(module_inst1, queue);
/* wasm 2 consume shared data */
wasm_consumer(module_inst2, queue);
ret = 0;
fail:
if (module_inst2)
wasm_runtime_deinstantiate(module_inst2);
if (module_inst1)
wasm_runtime_deinstantiate(module_inst1);
if (wasm_module2)
wasm_runtime_unload(wasm_module2);
if (wasm_module1)
wasm_runtime_unload(wasm_module1);
if (wasm_file2_buf)
wasm_runtime_free(wasm_file2_buf);
if (wasm_file1_buf)
wasm_runtime_free(wasm_file1_buf);
if (queue)
bh_queue_destroy(queue);
wasm_runtime_destroy();
return ret;
}
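For reference, a minimal host-side sketch (an assumption, not part of this sample) of how the pre-allocated buffer could be handed to the wasm producer once the shared heap chain is attached. It would sit inside main() after the attach calls; wasm_runtime_addr_native_to_app is the public WAMR conversion API, and the exact return width depends on the runtime version:
/* Sketch only: convert the native address of the pre-allocated buffer into
 * a wasm app offset so an exported function such as produce_str() could be
 * called with it. Hypothetical usage, not taken from this diff. */
uint64_t app_offset =
    wasm_runtime_addr_native_to_app(module_inst1, preallocated_buf);
if (app_offset == 0) {
    printf("Pre-allocated buffer is not addressable by module instance 1.\n");
    goto fail;
}
/* app_offset could then be passed as the `addr` argument of produce_str() */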

View File

@ -30,9 +30,7 @@ set (CMAKE_EXE_LINKER_FLAGS
-Wl,--no-entry,--strip-all, \ -Wl,--no-entry,--strip-all, \
-Wl,--export=__heap_base,--export=__data_end \ -Wl,--export=__heap_base,--export=__data_end \
-Wl,--export=__wasm_call_ctors \ -Wl,--export=__wasm_call_ctors \
-Wl,--export=my_shared_heap_malloc \ -Wl,--export-all \
-Wl,--export=my_shared_heap_free \
-Wl,--export=print_buf \
-Wl,--allow-undefined" -Wl,--allow-undefined"
) )

View File

@ -58,3 +58,14 @@ my_shared_heap_free(void *ptr)
{ {
shared_heap_free(ptr); shared_heap_free(ptr);
} }
void *
produce_str(char *addr, uint32_t index)
{
char c;
snprintf(addr, 512, "Data: %u stores to pre-allocated shared heap", index);
/* Actually access it in wasm */
c = addr[0];
printf("In WASM: the first char is %c\n", c);
return addr;
}

View File

@ -4,8 +4,7 @@
*/ */
#include <stdio.h> #include <stdio.h>
#include <stdint.h>
#include <stdio.h>
extern void extern void
shared_heap_free(void *ptr); shared_heap_free(void *ptr);
@ -16,3 +15,14 @@ print_buf(char *buf)
printf("wasm app2's wasm func received buf: %s\n\n", buf); printf("wasm app2's wasm func received buf: %s\n\n", buf);
shared_heap_free(buf); shared_heap_free(buf);
} }
void
consume_str(char *buf)
{
/* Actually access it in wasm */
char c = buf[0];
printf("In WASM: wasm app2's wasm func received buf in pre-allocated "
"shared buf: "
"%s with its first char is %c\n\n",
buf, c);
}

View File

@ -79,7 +79,7 @@ Client is running...
Start receiving. Start receiving.
Start sending. Start sending.
Send 106 bytes successfully! Send 106 bytes successfully!
Receive 106 bytes successlly! Receive 106 bytes successfully!
Data: Data:
The stars shine down The stars shine down
It brings us light It brings us light

View File

@ -25,6 +25,7 @@ static bool server_is_ready = false;
void * void *
run_as_server(void *arg) run_as_server(void *arg)
{ {
(void)arg;
int sock = -1, on = 1; int sock = -1, on = 1;
struct sockaddr_in addr = { 0 }; struct sockaddr_in addr = { 0 };
int addrlen = 0; int addrlen = 0;
@ -109,7 +110,7 @@ run_as_server(void *arg)
fail2: fail2:
close(new_sock); close(new_sock);
fail1: fail1:
shutdown(sock, SHUT_RD); shutdown(sock, SHUT_RDWR);
close(sock); close(sock);
return NULL; return NULL;
} }
@ -117,6 +118,7 @@ fail1:
void * void *
run_as_client(void *arg) run_as_client(void *arg)
{ {
(void)arg;
int sock = -1; int sock = -1;
struct sockaddr_in addr = { 0 }; struct sockaddr_in addr = { 0 };
/* buf of server is 106 bytes */ /* buf of server is 106 bytes */
@ -159,7 +161,7 @@ run_as_client(void *arg)
goto fail; goto fail;
} }
printf("Receive %ld bytes successlly!\n", recv_len); printf("Receive %ld bytes successfully!\n", recv_len);
assert(recv_len == 106); assert(recv_len == 106);
printf("Data:\n"); printf("Data:\n");
@ -170,7 +172,7 @@ run_as_client(void *arg)
} }
fail: fail:
shutdown(sock, SHUT_RD); shutdown(sock, SHUT_RDWR);
close(sock); close(sock);
return NULL; return NULL;
} }
@ -178,6 +180,8 @@ fail:
int int
main(int argc, char *argv[]) main(int argc, char *argv[])
{ {
(void)argc;
(void)argv;
pthread_t cs[2] = { 0 }; pthread_t cs[2] = { 0 };
uint8_t i = 0; uint8_t i = 0;
int ret = EXIT_SUCCESS; int ret = EXIT_SUCCESS;

View File

@ -50,6 +50,7 @@ local_printf(const char *formatter, ...)
void * void *
run_as_server(void *arg) run_as_server(void *arg)
{ {
(void)arg;
int sock = -1, on = 1; int sock = -1, on = 1;
struct sockaddr_in addr = { 0 }; struct sockaddr_in addr = { 0 };
int addrlen = 0; int addrlen = 0;
@ -134,7 +135,7 @@ run_as_server(void *arg)
fail2: fail2:
close(new_sock); close(new_sock);
fail1: fail1:
shutdown(sock, SHUT_RD); shutdown(sock, SHUT_RDWR);
close(sock); close(sock);
return NULL; return NULL;
} }
@ -142,6 +143,7 @@ fail1:
void * void *
run_as_client(void *arg) run_as_client(void *arg)
{ {
(void)arg;
int sock = -1; int sock = -1;
struct sockaddr_in addr = { 0 }; struct sockaddr_in addr = { 0 };
/* buf of server is 106 bytes */ /* buf of server is 106 bytes */
@ -184,7 +186,7 @@ run_as_client(void *arg)
goto fail; goto fail;
} }
local_printf("Receive %ld bytes successlly!\n", recv_len); local_printf("Receive %ld bytes successfully!\n", recv_len);
assert(recv_len == 106); assert(recv_len == 106);
local_printf("Data:\n"); local_printf("Data:\n");
@ -195,7 +197,7 @@ run_as_client(void *arg)
} }
fail: fail:
shutdown(sock, SHUT_RD); shutdown(sock, SHUT_RDWR);
close(sock); close(sock);
return NULL; return NULL;
} }
@ -203,6 +205,8 @@ fail:
int int
main(int argc, char *argv[]) main(int argc, char *argv[])
{ {
(void)argc;
(void)argv;
pthread_t cs[2] = { 0 }; pthread_t cs[2] = { 0 };
uint8_t i = 0; uint8_t i = 0;
int ret = EXIT_SUCCESS; int ret = EXIT_SUCCESS;

View File

@ -58,6 +58,7 @@ LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size)
option.enable_simd = true; option.enable_simd = true;
option.enable_ref_types = true; option.enable_ref_types = true;
option.enable_gc = true; option.enable_gc = true;
option.aux_stack_frame_type = AOT_STACK_FRAME_TYPE_STANDARD;
comp_data = comp_data =
aot_create_comp_data(module, option.target_arch, option.enable_gc); aot_create_comp_data(module, option.target_arch, option.enable_gc);

View File

@ -72,7 +72,7 @@ def to_json(inst, cls):
class Fuzzing(db.Model): class Fuzzing(db.Model):
__tablename__ = 'fazzing_task' __tablename__ = 'fuzzing_task'
id = db.Column(db.Integer, autoincrement=True, id = db.Column(db.Integer, autoincrement=True,
primary_key=True, nullable=False) primary_key=True, nullable=False)
repo = db.Column(db.String(200), nullable=False, default='') repo = db.Column(db.String(200), nullable=False, default='')
@ -96,7 +96,7 @@ class TaskError(db.Model):
__tablename__ = 'task_error' __tablename__ = 'task_error'
id = db.Column(db.Integer, autoincrement=True, id = db.Column(db.Integer, autoincrement=True,
primary_key=True, nullable=False) primary_key=True, nullable=False)
fazzing_id = db.Column(db.Integer, db.ForeignKey("fazzing_task.id")) fuzzing_id = db.Column(db.Integer, db.ForeignKey("fuzzing_task.id"))
name = db.Column(db.String(200), nullable=False, default='') name = db.Column(db.String(200), nullable=False, default='')
std_out = db.Column(db.Text, default='') std_out = db.Column(db.Text, default='')
data = db.Column(db.JSON) data = db.Column(db.JSON)
@ -119,9 +119,9 @@ def to_data(data):
def error_count(data): def error_count(data):
error = len(TaskError.query.filter( error = len(TaskError.query.filter(
TaskError.fazzing_id == data.get('id'), TaskError.status.in_([1, 2])).all()) TaskError.fuzzing_id == data.get('id'), TaskError.status.in_([1, 2])).all())
end_error = len(TaskError.query.filter( end_error = len(TaskError.query.filter(
TaskError.fazzing_id == data.get('id'), TaskError.status == 0).all()) TaskError.fuzzing_id == data.get('id'), TaskError.status == 0).all())
data['error'] = error data['error'] = error
data['end_error'] = end_error data['end_error'] = end_error
return data return data
@ -159,11 +159,11 @@ def show_fuzz_list():
id = data.get('id') id = data.get('id')
if id: if id:
all_error = TaskError.query.filter( all_error = TaskError.query.filter(
TaskError.fazzing_id == id).with_entities(TaskError.id, TaskError.fazzing_id, TaskError.fuzzing_id == id).with_entities(TaskError.id, TaskError.fuzzing_id,
TaskError.create_time, TaskError.data, TaskError.create_time, TaskError.data,
TaskError.name, TaskError.status, TaskError.name, TaskError.status,
TaskError.update_time, TaskError.comment).order_by(TaskError.status.desc(), TaskError.update_time.desc(), TaskError.id.desc()).all() TaskError.update_time, TaskError.comment).order_by(TaskError.status.desc(), TaskError.update_time.desc(), TaskError.id.desc()).all()
data_message = [{'id': error['id'], "fuzzing_id": error['fazzing_id'], data_message = [{'id': error['id'], "fuzzing_id": error['fuzzing_id'],
"name": error['name'], "data": error['data'], "name": error['name'], "data": error['data'],
'create_time': error['create_time'].strftime('%Y-%m-%d %H:%M:%S'), 'create_time': error['create_time'].strftime('%Y-%m-%d %H:%M:%S'),
'update_time': error['update_time'].strftime('%Y-%m-%d %H:%M:%S'), 'update_time': error['update_time'].strftime('%Y-%m-%d %H:%M:%S'),
@ -204,7 +204,7 @@ def New_fuzzing():
# curd.set_error_status_to(list(map(lambda x: x.id, error_list)), db) # curd.set_error_status_to(list(map(lambda x: x.id, error_list)), db)
# Fuzzing.query.filter_by(id=fuzz.id).delete() # Fuzzing.query.filter_by(id=fuzz.id).delete()
fuzz.data = {'error': "Clone repo Error"} fuzz.data = {'error': "Clone repo Error"}
db.commit() db.session.commit()
return jsonify({"status": 0, "result": "", "msg": "Clone repo Error"}) return jsonify({"status": 0, "result": "", "msg": "Clone repo Error"})
wamr_path_parent = fuzz_dir.parent.parent wamr_path_parent = fuzz_dir.parent.parent
@ -277,7 +277,7 @@ def scheduler_run_task():
for fuzz in fuzz_query: for fuzz in fuzz_query:
all_error = TaskError.query.filter( all_error = TaskError.query.filter(
TaskError.fazzing_id == fuzz.id).with_entities(TaskError.name).all() TaskError.fuzzing_id == fuzz.id).with_entities(TaskError.name).all()
fuzz_cmd = wasm_mutator_dir / \ fuzz_cmd = wasm_mutator_dir / \
'workspace' / f'build_{fuzz.id}' 'workspace' / f'build_{fuzz.id}'
dir_list = filter(lambda x: x.startswith( dir_list = filter(lambda x: x.startswith(
@ -287,7 +287,7 @@ def scheduler_run_task():
for dir in dir_list: for dir in dir_list:
cmd = f'cd {fuzz_cmd} && ./wasm_mutator_fuzz {dir}' cmd = f'cd {fuzz_cmd} && ./wasm_mutator_fuzz {dir}'
status, resp = getstatusoutput(cmd) status, resp = getstatusoutput(cmd)
task_error = TaskError(name=dir, std_out=resp, fazzing_id=fuzz.id, task_error = TaskError(name=dir, std_out=resp, fuzzing_id=fuzz.id,
create_time=datetime.utcnow() + timedelta(hours=8)) create_time=datetime.utcnow() + timedelta(hours=8))
db.session.add(task_error) db.session.add(task_error)
db.session.commit() db.session.commit()
@ -312,7 +312,7 @@ def get_error_txt():
return jsonify({"status": 0, "results": [], 'msg': "Error"}) return jsonify({"status": 0, "results": [], 'msg': "Error"})
error = TaskError.query.get(id) error = TaskError.query.get(id)
fuzz_cmd = wasm_mutator_dir / \ fuzz_cmd = wasm_mutator_dir / \
'workspace' / f'build_{error.fazzing_id}' 'workspace' / f'build_{error.fuzzing_id}'
file_cmd = fuzz_cmd / error.name file_cmd = fuzz_cmd / error.name
response = send_file(file_cmd, as_attachment=True, response = send_file(file_cmd, as_attachment=True,
@ -351,7 +351,7 @@ def get_cases_zip():
with ZipFile(memory_file, "w", ZIP_DEFLATED) as zf: with ZipFile(memory_file, "w", ZIP_DEFLATED) as zf:
for task_error in task_query: for task_error in task_query:
fuzz_cmd = wasm_mutator_dir / \ fuzz_cmd = wasm_mutator_dir / \
'workspace' / f'build_{task_error.fazzing_id}' 'workspace' / f'build_{task_error.fuzzing_id}'
file_cmd = fuzz_cmd / task_error.name file_cmd = fuzz_cmd / task_error.name
zf.write(str(file_cmd), arcname=task_error.name) zf.write(str(file_cmd), arcname=task_error.name)
memory_file.seek(0) memory_file.seek(0)
@ -399,7 +399,7 @@ def error_restart():
if run_status: if run_status:
return jsonify({"status": 0, "results": [], 'msg': "There are already tasks in progress"}) return jsonify({"status": 0, "results": [], 'msg': "There are already tasks in progress"})
task_query = TaskError.query.filter(TaskError.id.in_(id_list)).all() task_query = TaskError.query.filter(TaskError.id.in_(id_list)).all()
fuzzing_id = task_query[0].fazzing_id fuzzing_id = task_query[0].fuzzing_id
fuzz_cmd = wasm_mutator_dir / \ fuzz_cmd = wasm_mutator_dir / \
'workspace' / f'build_{fuzzing_id}' 'workspace' / f'build_{fuzzing_id}'
restart_cmd = wasm_mutator_dir / \ restart_cmd = wasm_mutator_dir / \
@ -412,7 +412,7 @@ def error_restart():
if not Path(restart_cmd / 'wamr').exists(): if not Path(restart_cmd / 'wamr').exists():
print('------ error: clone repo not folder exists ------') print('------ error: clone repo not folder exists ------')
# fuzz.data = {'error': "Clone repo Error"} # fuzz.data = {'error': "Clone repo Error"}
db.commit() db.session.commit()
return jsonify({"status": 0, "result": "", "msg": "Clone repo Error"}) return jsonify({"status": 0, "result": "", "msg": "Clone repo Error"})
wamr_path_parent = fuzz_dir.parent.parent wamr_path_parent = fuzz_dir.parent.parent
wamr_path = wamr_path_parent / 'wamr' wamr_path = wamr_path_parent / 'wamr'

View File

@ -218,22 +218,57 @@ simply run `run.py`
./run.py ./run.py
``` ```
Specify a specific issue with option `--issues`/`-i`
```shell
./run.py --issues 2833 # test 1 issue #2833
./run.py -i 2833,2834,2835 # test 3 issues #2833 #2834 #2835
```
If everything went well, you should see similar output in your command line If everything went well, you should see similar output in your command line
```shell ```shell
Finish testing, 22/22 of test cases passed, no more issues should further test ==== Test results ====
Total: 22
Passed: 22
Failed: 0
Left issues in folder: no more
Cases in JSON but not found in folder: no more
``` ```
If you add the test case under directory `issues` but forget to add the running config in the JSON file, the output can be something like If you add the test case under directory `issues` but forget to add the running config in the JSON file, the output can be something like
```shell ```shell
Finish testing, 21/21 of test cases passed, {2945} issue(s) should further test ==== Test results ====
Total: 21
Passed: 21
Failed: 0
missed: 0
Left issues in folder: #3022
Cases in JSON but not found in folder: no more
```
If you add the test case in `running_config.json` but use the wrong id or forget to add the test case under directory `issues`, the output can be something like
```shell
==== Test results ====
Total: 21
Passed: 21
Failed: 0
missed: 0
Left issues in folder: #2855
Cases in JSON but not found in folder: #12345
``` ```
If some test cases are failing, then the output will be something like If some test cases are failing, then the output will be something like
```shell ```shell
Finish testing, 21/22 of test cases passed, no more issue(s) should further test ==== Test results ====
Total: 22
Passed: 21
Failed: 1
Left issues in folder: no more
Cases in JSON but not found in folder: no more
``` ```
And a log file named `issues_tests.log` will be generated, which contains the details of the failing cases, for example: And a log file named `issues_tests.log` will be generated, which contains the details of the failing cases, for example:

View File

@ -10,7 +10,9 @@ import os
import subprocess import subprocess
import glob import glob
import re import re
from typing import Dict import argparse
from typing import Dict, Optional, List
WORK_DIR = os.getcwd() WORK_DIR = os.getcwd()
TEST_WASM_COMMAND = ( TEST_WASM_COMMAND = (
@ -45,7 +47,12 @@ def dump_error_log(failing_issue_id, command_lists, exit_code_cmp, stdout_cmp):
) )
def get_issue_ids_should_test(): def get_issue_ids_should_test(selected_ids: Optional[List[int]] = None):
"""Find all issue IDs that should be tested in folder issues."""
# If specific issue IDs are provided, return them as a set
if selected_ids:
return set(selected_ids)
# Define the path pattern # Define the path pattern
path_pattern = "issues/issue-*" path_pattern = "issues/issue-*"
@ -60,8 +67,8 @@ def get_issue_ids_should_test():
# Extract the issue number using regular expression # Extract the issue number using regular expression
match = re.search(pattern, dir_path) match = re.search(pattern, dir_path)
if match: if match:
issue_number = match.group(1) issue_number = int(match.group(1))
issue_numbers.add(int(issue_number)) issue_numbers.add(issue_number)
# Print the set of issue numbers # Print the set of issue numbers
return issue_numbers return issue_numbers
@ -77,10 +84,10 @@ def get_and_check(d, key, default=None, nullable=False):
def run_and_compare_results( def run_and_compare_results(
passed_ids, failed_ids, issue_id, cmd, description, ret_code, stdout_content issue_id, cmd, description, ret_code, stdout_content
): ) -> bool:
print(f"####################################") print(f"####################################")
print(f"test BA issue #{issue_id} `{description}`: {cmd}") print(f"test BA issue #{issue_id} `{description}`...")
command_list = cmd.split() command_list = cmd.split()
result = subprocess.run( result = subprocess.run(
command_list, command_list,
@ -95,19 +102,21 @@ def run_and_compare_results(
exit_code_cmp = f"exit code (actual, expected) : {actual_exit_code, ret_code}" exit_code_cmp = f"exit code (actual, expected) : {actual_exit_code, ret_code}"
stdout_cmp = f"stdout (actual, expected) : {actual_output, stdout_content}" stdout_cmp = f"stdout (actual, expected) : {actual_output, stdout_content}"
print(exit_code_cmp)
print(stdout_cmp)
if actual_exit_code == ret_code and ( if actual_exit_code == ret_code and (
actual_output == stdout_content actual_output == stdout_content
or (stdout_content == "Compile success" or (
and actual_output.find(stdout_content) != -1) stdout_content == "Compile success"
and actual_output.find(stdout_content) != -1
)
or (len(stdout_content) > 30 and actual_output.find(stdout_content) != -1) or (len(stdout_content) > 30 and actual_output.find(stdout_content) != -1)
): ):
passed_ids.add(issue_id)
print("== PASS ==") print("== PASS ==")
return True
else: else:
failed_ids.add(issue_id) print(cmd)
print(exit_code_cmp)
print(stdout_cmp)
print(f"== FAILED: {issue_id} ==") print(f"== FAILED: {issue_id} ==")
dump_error_log( dump_error_log(
issue_id, issue_id,
@ -115,15 +124,11 @@ def run_and_compare_results(
exit_code_cmp, exit_code_cmp,
stdout_cmp, stdout_cmp,
) )
return False
print("")
def run_issue_test_wamrc( def run_issue_test_wamrc(issue_id, compile_options):
passed_ids, failed_ids, issue_id, compile_options, stdout_only_cmp_last_line=False
):
compiler = get_and_check(compile_options, "compiler") compiler = get_and_check(compile_options, "compiler")
only_compile = get_and_check(compile_options, "only compile")
in_file = get_and_check(compile_options, "in file") in_file = get_and_check(compile_options, "in file")
out_file = get_and_check(compile_options, "out file") out_file = get_and_check(compile_options, "out file")
options = get_and_check(compile_options, "options") options = get_and_check(compile_options, "options")
@ -145,14 +150,10 @@ def run_issue_test_wamrc(
compiler=compiler, options=options, out_file=out_file_path, in_file=in_file_path compiler=compiler, options=options, out_file=out_file_path, in_file=in_file_path
) )
run_and_compare_results( return run_and_compare_results(issue_id, cmd, description, ret_code, stdout_content)
passed_ids, failed_ids, issue_id, cmd, description, ret_code, stdout_content
)
return only_compile
def run_issue_test_iwasm(passed_ids, failed_ids, issue_id, test_case): def run_issue_test_iwasm(issue_id, test_case) -> bool:
runtime = get_and_check(test_case, "runtime") runtime = get_and_check(test_case, "runtime")
mode = get_and_check(test_case, "mode") mode = get_and_check(test_case, "mode")
file = get_and_check(test_case, "file") file = get_and_check(test_case, "file")
@ -194,17 +195,19 @@ def run_issue_test_iwasm(passed_ids, failed_ids, issue_id, test_case):
argument=argument, argument=argument,
) )
run_and_compare_results( return run_and_compare_results(issue_id, cmd, description, ret_code, stdout_content)
passed_ids, failed_ids, issue_id, cmd, description, ret_code, stdout_content
)
def process_and_run_test_cases(data: Dict[str, Dict]): def process_and_run_test_cases(
issue_ids_should_test = get_issue_ids_should_test() data: Dict[str, Dict], selected_ids: Optional[List[int]] = None
):
issue_ids_should_test = get_issue_ids_should_test(selected_ids)
passed_ids = set() passed_ids = set()
failed_ids = set() failed_ids = set()
json_only_ids = set()
# Iterate through each test case in the json data
for test_case in data.get("test cases", []): for test_case in data.get("test cases", []):
is_deprecated = get_and_check(test_case, "deprecated") is_deprecated = get_and_check(test_case, "deprecated")
issue_ids = get_and_check(test_case, "ids", default=[]) issue_ids = get_and_check(test_case, "ids", default=[])
@ -214,33 +217,79 @@ def process_and_run_test_cases(data: Dict[str, Dict]):
continue continue
compile_options = get_and_check(test_case, "compile_options", nullable=True) compile_options = get_and_check(test_case, "compile_options", nullable=True)
for issue_id in issue_ids:
only_compile = False
# if this issue needs to test wamrc to compile the test case first
if compile_options:
only_compile = compile_options["only compile"]
run_issue_test_wamrc(passed_ids, failed_ids, issue_id, compile_options)
# if this issue requires to test iwasm to run the test case for issue_id in issue_ids:
if not only_compile: if issue_id not in issue_ids_should_test:
run_issue_test_iwasm(passed_ids, failed_ids, issue_id, test_case) json_only_ids.add(issue_id)
continue
# cross out the this issue_id in the should test set # cross out the this issue_id in the should test set
issue_ids_should_test.remove(issue_id) issue_ids_should_test.remove(issue_id)
only_compile = False
# if this issue needs to test wamrc to compile the test case first
if compile_options:
only_compile = compile_options["only compile"]
compile_res = run_issue_test_wamrc(issue_id, compile_options)
if only_compile:
if compile_res:
passed_ids.add(issue_id)
else:
failed_ids.add(issue_id)
continue
else:
# if compile success, then continue to test iwasm
if not compile_res:
failed_ids.add(issue_id)
continue
# if this issue requires to test iwasm to run the test case
if not only_compile:
if run_issue_test_iwasm(issue_id, test_case):
passed_ids.add(issue_id)
else:
failed_ids.add(issue_id)
total = len(passed_ids) + len(failed_ids) total = len(passed_ids) + len(failed_ids)
passed = len(passed_ids) passed = len(passed_ids)
failed = len(failed_ids) failed = len(failed_ids)
issue_ids_should_test = (
issue_ids_should_test if issue_ids_should_test else "no more" format_issue_ids_should_test = (
" ".join(f"#{x}" for x in issue_ids_should_test)
if issue_ids_should_test
else "no more"
) )
format_json_only_ids = (
" ".join(f"#{x}" for x in json_only_ids) if json_only_ids else "no more"
)
print(f"####################################")
print(f"==== Test results ====") print(f"==== Test results ====")
print(f" Total: {total}") print(f" Total: {total}")
print(f" Passed: {passed}") print(f" Passed: {passed}")
print(f" Failed: {failed}") print(f" Failed: {failed}")
if not selected_ids:
print(f" Left issues in folder: {format_issue_ids_should_test}")
print(f" Cases in JSON but not found in folder: {format_json_only_ids}")
else:
print(f" Issues not found in folder: {format_issue_ids_should_test}")
def main(): def main():
parser = argparse.ArgumentParser(description="Run BA issue tests.")
parser.add_argument(
"-i",
"--issues",
type=str,
help="Comma separated list of issue ids to run, e.g. 1,2,3. Default: all.",
)
args = parser.parse_args()
selected_ids = None
if args.issues:
selected_ids = [int(x) for x in args.issues.split(",") if x.strip().isdigit()]
# Path to the JSON file # Path to the JSON file
file_path = "running_config.json" file_path = "running_config.json"
@ -256,7 +305,7 @@ def main():
os.remove(LOG_FILE) os.remove(LOG_FILE)
# Process the data # Process the data
process_and_run_test_cases(data) process_and_run_test_cases(data, selected_ids)
if __name__ == "__main__": if __name__ == "__main__":

View File

@ -17,7 +17,7 @@ git apply ../../../wamr-test-suites/spec-test-script/gc_ignore_cases.patch
# Set OCaml compiler environment # Set OCaml compiler environment
eval $(opam config env) eval $(opam config env)
echo "compile the reference intepreter" echo "compile the reference interpreter"
pushd interpreter pushd interpreter
make make
popd popd

View File

@ -9,7 +9,7 @@ import os
from collections import OrderedDict from collections import OrderedDict
def CLI_ARGS_GENREATOR(running_modes_supported: list[str]) -> list[str]: def CLI_ARGS_GENERATOR(running_modes_supported: list[str]) -> list[str]:
res = [] res = []
list_2d = [["--default-running-mode={} --module-running-mode={}".format(i, j) list_2d = [["--default-running-mode={} --module-running-mode={}".format(i, j)
for i in running_modes_supported] for j in running_modes_supported] for i in running_modes_supported] for j in running_modes_supported]
@ -35,16 +35,16 @@ def main():
] ]
# Python 3.7+: Dictionary iteration order is guaranteed to be in order of insertion. # Python 3.7+: Dictionary iteration order is guaranteed to be in order of insertion.
# just to be safe, using orderreddict # just to be safe, using OrderedDict
# key: value -> compile mode, {"compile_flag": CMake compile flag, "iwasm_cli_args": array of CLI args tested} # key: value -> compile mode, {"compile_flag": CMake compile flag, "iwasm_cli_args": array of CLI args tested}
test_options = OrderedDict({ test_options = OrderedDict({
"INTERP": {"compile_flag": COMPILE_FLAGS[0], "cli_args": CLI_ARGS_GENREATOR(RUNNING_MODES[:1])}, "INTERP": {"compile_flag": COMPILE_FLAGS[0], "cli_args": CLI_ARGS_GENERATOR(RUNNING_MODES[:1])},
"FAST_JIT": {"compile_flag": COMPILE_FLAGS[1], "cli_args": CLI_ARGS_GENREATOR(RUNNING_MODES[:2])}, "FAST_JIT": {"compile_flag": COMPILE_FLAGS[1], "cli_args": CLI_ARGS_GENERATOR(RUNNING_MODES[:2])},
"LLVM_JIT": {"compile_flag": COMPILE_FLAGS[2], "LLVM_JIT": {"compile_flag": COMPILE_FLAGS[2],
"cli_args": CLI_ARGS_GENREATOR([RUNNING_MODES[0], RUNNING_MODES[2]])}, "cli_args": CLI_ARGS_GENERATOR([RUNNING_MODES[0], RUNNING_MODES[2]])},
"MULTI_TIER_JIT": {"compile_flag": COMPILE_FLAGS[3], "cli_args": CLI_ARGS_GENREATOR(RUNNING_MODES)}, "MULTI_TIER_JIT": {"compile_flag": COMPILE_FLAGS[3], "cli_args": CLI_ARGS_GENERATOR(RUNNING_MODES)},
"EAGER_JIT_WITH_BOTH_JIT": {"compile_flag": COMPILE_FLAGS[4], "EAGER_JIT_WITH_BOTH_JIT": {"compile_flag": COMPILE_FLAGS[4],
"cli_args": CLI_ARGS_GENREATOR(RUNNING_MODES[:3])} "cli_args": CLI_ARGS_GENERATOR(RUNNING_MODES[:3])}
}) })
build_cmd = "./build_c_embed.sh \"{build_flag}\"" build_cmd = "./build_c_embed.sh \"{build_flag}\""

View File

@ -29,7 +29,7 @@ def main():
] ]
# Python 3.7+: Dictionary iteration order is guaranteed to be in order of insertion. # Python 3.7+: Dictionary iteration order is guaranteed to be in order of insertion.
# just to be safe, using orderreddict # just to be safe, using OrderedDict
# key: value -> compile mode, {"compile_flag": CMake compile flag, "iwasm_cli_args": array of CLI args tested} # key: value -> compile mode, {"compile_flag": CMake compile flag, "iwasm_cli_args": array of CLI args tested}
test_options = OrderedDict({ test_options = OrderedDict({
"INTERP": {"compile_flag": COMPILE_FLAGS[0], "iwasm_cli_args": IWASM_CLI_ARGS[:1]}, "INTERP": {"compile_flag": COMPILE_FLAGS[0], "iwasm_cli_args": IWASM_CLI_ARGS[:1]},

View File

@ -19,8 +19,15 @@ set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS}")
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS}") set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS}")
if(WAMR_BUILD_TARGET STREQUAL "X86_32") if(WAMR_BUILD_TARGET STREQUAL "X86_32")
set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -m32") # 1) Force -m32
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -m32") set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -m32" CACHE STRING "" FORCE)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -m32" CACHE STRING "" FORCE)
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -m32" CACHE STRING "" FORCE)
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -m32" CACHE STRING "" FORCE)
# 2) Make CMake prefer i386 libraries
set(CMAKE_SYSTEM_PROCESSOR i386 CACHE STRING "" FORCE)
set(CMAKE_LIBRARY_ARCHITECTURE "i386-linux-gnu" CACHE STRING "" FORCE)
endif() endif()
# Prevent overriding the parent project's compiler/linker # Prevent overriding the parent project's compiler/linker
@ -29,12 +36,21 @@ set(gtest_force_shared_crt ON CACHE BOOL "" FORCE)
# Fetch Google test # Fetch Google test
include (FetchContent) include (FetchContent)
FetchContent_Declare (
if(${CMAKE_VERSION} VERSION_GREATER_EQUAL "3.24")
FetchContent_Declare (
googletest googletest
URL https://github.com/google/googletest/archive/03597a01ee50ed33e9dfd640b249b4be3799d395.zip URL https://github.com/google/googletest/archive/03597a01ee50ed33e9dfd640b249b4be3799d395.zip
DOWNLOAD_EXTRACT_TIMESTAMP TRUE DOWNLOAD_EXTRACT_TIMESTAMP ON
) )
FetchContent_MakeAvailable (googletest) else()
FetchContent_Declare (
googletest
URL https://github.com/google/googletest/archive/03597a01ee50ed33e9dfd640b249b4be3799d395.zip
)
endif()
FetchContent_MakeAvailable(googletest)
SET(GOOGLETEST_INCLUDED 1) SET(GOOGLETEST_INCLUDED 1)

View File

@ -31,7 +31,7 @@ class memory64_atomic_test_suite : public testing::TestWithParam<RunningMode>
return true; return true;
fail: fail:
if (!module) if (module)
wasm_runtime_unload(module); wasm_runtime_unload(module);
return false; return false;
@ -56,6 +56,8 @@ class memory64_atomic_test_suite : public testing::TestWithParam<RunningMode>
if (exec_env) if (exec_env)
wasm_runtime_destroy_exec_env(exec_env); wasm_runtime_destroy_exec_env(exec_env);
if (module_inst) if (module_inst)
wasm_runtime_deinstantiate(module_inst);
if (module)
wasm_runtime_unload(module); wasm_runtime_unload(module);
return false; return false;
} }

View File

@ -31,7 +31,7 @@ class memory64_test_suite : public testing::TestWithParam<RunningMode>
return true; return true;
fail: fail:
if (!module) if (module)
wasm_runtime_unload(module); wasm_runtime_unload(module);
return false; return false;
@ -56,11 +56,13 @@ class memory64_test_suite : public testing::TestWithParam<RunningMode>
if (exec_env) if (exec_env)
wasm_runtime_destroy_exec_env(exec_env); wasm_runtime_destroy_exec_env(exec_env);
if (module_inst) if (module_inst)
wasm_runtime_deinstantiate(module_inst);
if (module)
wasm_runtime_unload(module); wasm_runtime_unload(module);
return false; return false;
} }
void destory_exec_env() void destroy_exec_env()
{ {
wasm_runtime_destroy_exec_env(exec_env); wasm_runtime_destroy_exec_env(exec_env);
wasm_runtime_deinstantiate(module_inst); wasm_runtime_deinstantiate(module_inst);
@ -201,7 +203,7 @@ TEST_P(memory64_test_suite, memory_8GB)
i64 = 0xbeefdead; i64 = 0xbeefdead;
ASSERT_EQ(i64, GET_U64_FROM_ADDR(wasm_argv)); ASSERT_EQ(i64, GET_U64_FROM_ADDR(wasm_argv));
destory_exec_env(); destroy_exec_env();
} }
TEST_P(memory64_test_suite, mem64_from_clang) TEST_P(memory64_test_suite, mem64_from_clang)
@ -228,7 +230,7 @@ TEST_P(memory64_test_suite, mem64_from_clang)
i32 = 0x109; i32 = 0x109;
ASSERT_EQ(i32, wasm_argv[0]); ASSERT_EQ(i32, wasm_argv[0]);
destory_exec_env(); destroy_exec_env();
} }
INSTANTIATE_TEST_CASE_P(RunningMode, memory64_test_suite, INSTANTIATE_TEST_CASE_P(RunningMode, memory64_test_suite,

View File

@ -21,7 +21,7 @@ std::string TEST_WASM1 = "/hello.wasm";
std::string TEST_WASM2 = "/mytest.wasm"; std::string TEST_WASM2 = "/mytest.wasm";
char *WASM_FILE_1; char *WASM_FILE_1;
char *WASM_FILE_2; char *WASM_FILE_2;
std::vector<RunningMode> running_mode_supportted = { Mode_Interp, std::vector<RunningMode> running_mode_supported = { Mode_Interp,
#if WASM_ENABLE_FAST_JIT != 0 #if WASM_ENABLE_FAST_JIT != 0
Mode_Fast_JIT, Mode_Fast_JIT,
#endif #endif
@ -76,7 +76,7 @@ class wasm_running_modes_test_suite : public testing::TestWithParam<RunningMode>
return true; return true;
fail: fail:
if (!module) if (module)
wasm_runtime_unload(module); wasm_runtime_unload(module);
return false; return false;
@ -101,11 +101,13 @@ class wasm_running_modes_test_suite : public testing::TestWithParam<RunningMode>
if (exec_env) if (exec_env)
wasm_runtime_destroy_exec_env(exec_env); wasm_runtime_destroy_exec_env(exec_env);
if (module_inst) if (module_inst)
wasm_runtime_deinstantiate(module_inst);
if (module)
wasm_runtime_unload(module); wasm_runtime_unload(module);
return false; return false;
} }
void destory_exec_env() void destroy_exec_env()
{ {
wasm_runtime_destroy_exec_env(exec_env); wasm_runtime_destroy_exec_env(exec_env);
wasm_runtime_deinstantiate(module_inst); wasm_runtime_deinstantiate(module_inst);
@ -139,7 +141,7 @@ class wasm_running_modes_test_suite : public testing::TestWithParam<RunningMode>
ASSERT_TRUE(ret); ASSERT_TRUE(ret);
ASSERT_EQ(10, wasm_argv[0]); ASSERT_EQ(10, wasm_argv[0]);
destory_exec_env(); destroy_exec_env();
} }
void run_wasm_complex(char *filename1, char *filename2, void run_wasm_complex(char *filename1, char *filename2,
@ -168,7 +170,7 @@ class wasm_running_modes_test_suite : public testing::TestWithParam<RunningMode>
ASSERT_TRUE(ret); ASSERT_TRUE(ret);
ASSERT_EQ(10, wasm_argv[0]); ASSERT_EQ(10, wasm_argv[0]);
destory_exec_env(); destroy_exec_env();
/* run wasm file 2 in running_mode */ /* run wasm file 2 in running_mode */
ret = load_wasm_file(filename2); ret = load_wasm_file(filename2);
@ -184,7 +186,7 @@ class wasm_running_modes_test_suite : public testing::TestWithParam<RunningMode>
ret = wasm_runtime_call_wasm(exec_env, main, 2, wasm_argv); ret = wasm_runtime_call_wasm(exec_env, main, 2, wasm_argv);
ASSERT_TRUE(ret); ASSERT_TRUE(ret);
destory_exec_env(); destroy_exec_env();
} }
public: public:
@ -246,7 +248,7 @@ TEST_F(wasm_running_modes_test_suite, wasm_runtime_is_running_mode_supported)
// normal situation // normal situation
ASSERT_EQ(true, wasm_runtime_is_running_mode_supported( ASSERT_EQ(true, wasm_runtime_is_running_mode_supported(
static_cast<RunningMode>(Mode_Default))); static_cast<RunningMode>(Mode_Default)));
for (auto running_mode : running_mode_supportted) { for (auto running_mode : running_mode_supported) {
ASSERT_EQ(true, wasm_runtime_is_running_mode_supported(running_mode)); ASSERT_EQ(true, wasm_runtime_is_running_mode_supported(running_mode));
} }
@ -264,7 +266,7 @@ TEST_F(wasm_running_modes_test_suite, wasm_runtime_set_default_running_mode)
// normal situation: only set up // normal situation: only set up
ASSERT_EQ(true, wasm_runtime_set_default_running_mode( ASSERT_EQ(true, wasm_runtime_set_default_running_mode(
static_cast<RunningMode>(Mode_Default))); static_cast<RunningMode>(Mode_Default)));
for (auto running_mode : running_mode_supportted) { for (auto running_mode : running_mode_supported) {
ASSERT_EQ(true, wasm_runtime_set_default_running_mode(running_mode)); ASSERT_EQ(true, wasm_runtime_set_default_running_mode(running_mode));
} }
@ -296,13 +298,13 @@ TEST_P(wasm_running_modes_test_suite,
wasm_runtime_set_and_get_running_mode_complex) wasm_runtime_set_and_get_running_mode_complex)
{ {
RunningMode default_running_mode = GetParam(); RunningMode default_running_mode = GetParam();
for (auto running_mode : running_mode_supportted) { for (auto running_mode : running_mode_supported) {
run_wasm_complex(WASM_FILE_1, WASM_FILE_2, default_running_mode, run_wasm_complex(WASM_FILE_1, WASM_FILE_2, default_running_mode,
running_mode); running_mode);
} }
} }
INSTANTIATE_TEST_CASE_P(RunningMode, wasm_running_modes_test_suite, INSTANTIATE_TEST_CASE_P(RunningMode, wasm_running_modes_test_suite,
testing::ValuesIn(running_mode_supportted)); testing::ValuesIn(running_mode_supported));
} }

View File

@ -12,12 +12,20 @@ set(WAMR_BUILD_AOT 1)
set(WAMR_BUILD_INTERP 1) set(WAMR_BUILD_INTERP 1)
set(WAMR_BUILD_FAST_INTERP 1) set(WAMR_BUILD_FAST_INTERP 1)
set(WAMR_BUILD_JIT 0) set(WAMR_BUILD_JIT 0)
set(WAMR_BUILD_MEMORY64 1) if(WAMR_BUILD_TARGET STREQUAL "X86_32")
set(WAMR_BUILD_MEMORY64 0)
else()
set(WAMR_BUILD_MEMORY64 1)
endif()
set(WAMR_BUILD_SHARED_HEAP 1) set(WAMR_BUILD_SHARED_HEAP 1)
# Compile wasm modules # Compile wasm modules
add_subdirectory(wasm-apps) add_subdirectory(wasm-apps)
if (WAMR_BUILD_MEMORY64 EQUAL 1)
add_subdirectory(wasm-apps/memory64)
endif ()
# if only load this CMake other than load it as subdirectory # if only load this CMake other than load it as subdirectory
include(../unit_common.cmake) include(../unit_common.cmake)
@ -56,4 +64,4 @@ add_executable(shared_heap_test ${unit_test_sources})
target_link_libraries(shared_heap_test ${LLVM_AVAILABLE_LIBS} gtest_main) target_link_libraries(shared_heap_test ${LLVM_AVAILABLE_LIBS} gtest_main)
gtest_discover_tests(shared_heap_test) gtest_discover_tests(shared_heap_test)

File diff suppressed because it is too large

View File

@ -29,44 +29,81 @@ set(CMAKE_EXE_LINKER_FLAGS
-Wl,--allow-undefined" -Wl,--allow-undefined"
) )
if (WAMR_BUILD_TARGET STREQUAL "X86_32")
set (WAMR_COMPILER_FLAGS --opt-level=3 --bounds-checks=1 --enable-shared-heap --target=i386)
set (WAMR_COMPILER_CHAIN_FLAGS --opt-level=3 --bounds-checks=1 --enable-shared-chain --target=i386)
else ()
set (WAMR_COMPILER_FLAGS --opt-level=3 --bounds-checks=1 --enable-shared-heap)
set (WAMR_COMPILER_CHAIN_FLAGS --opt-level=3 --bounds-checks=1 --enable-shared-chain)
endif ()
function(copy_wasm TARGET_NAME)
add_custom_command(TARGET ${TARGET_NAME} POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy
${CMAKE_CURRENT_BINARY_DIR}/${TARGET_NAME}
${CMAKE_CURRENT_BINARY_DIR}/../
COMMENT "Copy ${TARGET_NAME} to the same directory of google test"
)
endfunction()
function(compile_and_copy_aot_from TARGET_NAME)
string(REPLACE ".wasm" ".aot" AOT_TARGET ${TARGET_NAME})
string(REPLACE ".wasm" "_chain.aot" AOT_CHAIN_TARGET ${TARGET_NAME})
add_custom_command(TARGET ${TARGET_NAME} POST_BUILD
COMMAND ${WAMRC_ROOT_DIR}/wamrc ${WAMR_COMPILER_FLAGS}
-o ${AOT_TARGET}
${TARGET_NAME}
COMMAND ${CMAKE_COMMAND} -E copy
${CMAKE_CURRENT_BINARY_DIR}/${AOT_TARGET}
${CMAKE_CURRENT_BINARY_DIR}/../
COMMAND ${WAMRC_ROOT_DIR}/wamrc ${WAMR_COMPILER_CHAIN_FLAGS}
-o ${AOT_CHAIN_TARGET}
${TARGET_NAME}
COMMAND ${CMAKE_COMMAND} -E copy
${CMAKE_CURRENT_BINARY_DIR}/${AOT_CHAIN_TARGET}
${CMAKE_CURRENT_BINARY_DIR}/../
COMMENT "Compile and copy ${AOT_TARGET} to the same directory of google test"
)
endfunction()
add_executable(test.wasm test.c) add_executable(test.wasm test.c)
target_link_libraries(test.wasm) target_link_libraries(test.wasm)
copy_wasm(test.wasm)
add_custom_command(TARGET test.wasm POST_BUILD compile_and_copy_aot_from(test.wasm)
COMMAND ${CMAKE_COMMAND} -E copy
${CMAKE_CURRENT_BINARY_DIR}/test.wasm
${CMAKE_CURRENT_BINARY_DIR}/../
COMMENT "Copy test.wasm to the same directory of google test"
)
add_custom_command(TARGET test.wasm POST_BUILD
COMMAND ${WAMRC_ROOT_DIR}/wamrc --opt-level=0 --enable-shared-heap --bounds-checks=1
-o
test.aot
test.wasm
COMMAND ${CMAKE_COMMAND} -E copy
${CMAKE_CURRENT_BINARY_DIR}/test.aot
${CMAKE_CURRENT_BINARY_DIR}/../
COMMENT "Copy test.aot to the same directory of google test"
)
add_executable(test_addr_conv.wasm test_addr_conv.c) add_executable(test_addr_conv.wasm test_addr_conv.c)
target_link_libraries(test.wasm) target_link_libraries(test_addr_conv.wasm)
copy_wasm(test_addr_conv.wasm)
compile_and_copy_aot_from(test_addr_conv.wasm)
add_custom_command(TARGET test_addr_conv.wasm POST_BUILD # copy and compile aot for bulk memory test
COMMAND ${CMAKE_COMMAND} -E copy set(SOURCE_WASM ${CMAKE_CURRENT_SOURCE_DIR}/bulk-memory/test_bulk_memory.wasm)
${CMAKE_CURRENT_BINARY_DIR}/test_addr_conv.wasm set(BUILD_WASM ${CMAKE_CURRENT_BINARY_DIR}/../test_bulk_memory.wasm)
${CMAKE_CURRENT_BINARY_DIR}/../ set(OUTPUT_AOT ${CMAKE_CURRENT_BINARY_DIR}/../test_bulk_memory.aot)
COMMENT "Copy test_addr_conv.wasm to the same directory of google test" set(OUTPUT_CHAIN_AOT ${CMAKE_CURRENT_BINARY_DIR}/../test_bulk_memory_chain.aot)
)
add_custom_command(TARGET test_addr_conv.wasm POST_BUILD add_custom_command(
COMMAND ${WAMRC_ROOT_DIR}/wamrc --opt-level=0 --enable-shared-heap --bounds-checks=1 OUTPUT ${BUILD_WASM}
-o COMMAND ${CMAKE_COMMAND} -E copy
test_addr_conv.aot ${SOURCE_WASM}
test_addr_conv.wasm ${BUILD_WASM}
COMMAND ${CMAKE_COMMAND} -E copy DEPENDS ${SOURCE_WASM}
${CMAKE_CURRENT_BINARY_DIR}/test_addr_conv.aot COMMENT "Copying bulk memory WASM to build directory"
${CMAKE_CURRENT_BINARY_DIR}/../ )
COMMENT "Copy test_addr_conv.aot to the same directory of google test"
) add_custom_command(
OUTPUT ${OUTPUT_AOT}
COMMAND ${WAMRC_ROOT_DIR}/wamrc ${WAMR_COMPILER_FLAGS}
-o ${OUTPUT_AOT}
${BUILD_WASM}
COMMAND ${WAMRC_ROOT_DIR}/wamrc ${WAMR_COMPILER_CHAIN_FLAGS}
-o ${OUTPUT_CHAIN_AOT}
${BUILD_WASM}
DEPENDS ${BUILD_WASM}
COMMENT "Compiling bulk memory AOT from copied WASM"
)
add_custom_target(compile_bulk_memory_aot ALL
DEPENDS ${OUTPUT_AOT}
)

View File

@ -0,0 +1,12 @@
(module
(memory 1)
(func $memory_fill_test (param $dst i32) (param $val i32) (param $len i32)
local.get $dst
local.get $val
local.get $len
memory.fill
)
(export "memory_fill_test" (func $memory_fill_test))
)
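As a rough illustration (not from the diff), the exported function above could be driven from the gtest suite along these lines; the destination offset variable is hypothetical, and the two-argument wasm_runtime_lookup_function form assumes a recent WAMR API:
/* Sketch only: run memory.fill with a destination inside the attached shared
 * heap so the bulk-memory bounds-check path is exercised.
 * `shared_heap_app_offset` is a hypothetical app offset for the shared heap
 * region; module_inst and exec_env come from the test fixture. */
uint32 wasm_argv[3];
wasm_argv[0] = shared_heap_app_offset; /* dst */
wasm_argv[1] = 0xAB;                   /* val */
wasm_argv[2] = 16;                     /* len */
wasm_function_inst_t func =
    wasm_runtime_lookup_function(module_inst, "memory_fill_test");
ASSERT_TRUE(func != NULL);
ASSERT_TRUE(wasm_runtime_call_wasm(exec_env, func, 3, wasm_argv));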

View File

@ -0,0 +1,68 @@
# Copyright (C) 2019 Intel Corporation. All rights reserved.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
cmake_minimum_required(VERSION 3.14)
project(wasm-apps-wasm64)
set(WAMR_ROOT_DIR ${CMAKE_CURRENT_SOURCE_DIR}/../../../../..)
set(WAMRC_ROOT_DIR ${WAMR_ROOT_DIR}/wamr-compiler/build)
set(CMAKE_SYSTEM_PROCESSOR wasm64)
set(CMAKE_SYSROOT ${WAMR_ROOT_DIR}/wamr-sdk/app/libc-builtin-sysroot)
if (NOT DEFINED WASI_SDK_DIR)
set(WASI_SDK_DIR "/opt/wasi-sdk")
endif ()
set(CMAKE_C_FLAGS "-nostdlib -pthread -Qunused-arguments")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -z stack-size=8192 -nostdlib -O0 --target=wasm64")
set(CMAKE_C_COMPILER_TARGET "wasm64")
set(CMAKE_C_COMPILER "${WASI_SDK_DIR}/bin/clang")
set(DEFINED_SYMBOLS
"${WAMR_ROOT_DIR}/wamr-sdk/app/libc-builtin-sysroot/share/defined-symbols.txt")
set(CMAKE_EXE_LINKER_FLAGS
"-Wl,--no-entry \
-Wl,--initial-memory=65536 \
-Wl,--export-all \
-Wl,--allow-undefined"
)
set (WAMR_COMPILER_FLAGS --opt-level=3 --bounds-checks=1 --enable-shared-heap)
set (WAMR_COMPILER_CHAIN_FLAGS --opt-level=3 --bounds-checks=1 --enable-shared-chain)
function(copy_wasm TARGET_NAME)
add_custom_command(TARGET ${TARGET_NAME} POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy
${CMAKE_CURRENT_BINARY_DIR}/${TARGET_NAME}
${CMAKE_CURRENT_BINARY_DIR}/../../
COMMENT "Copy ${TARGET_NAME} to the same directory of google test"
)
endfunction()
function(compile_and_copy_aot_from TARGET_NAME)
string(REPLACE ".wasm" ".aot" AOT_TARGET ${TARGET_NAME})
string(REPLACE ".wasm" "_chain.aot" AOT_CHAIN_TARGET ${TARGET_NAME})
add_custom_command(TARGET ${TARGET_NAME} POST_BUILD
COMMAND ${WAMRC_ROOT_DIR}/wamrc ${WAMR_COMPILER_FLAGS}
-o ${AOT_TARGET}
${TARGET_NAME}
COMMAND ${CMAKE_COMMAND} -E copy
${CMAKE_CURRENT_BINARY_DIR}/${AOT_TARGET}
${CMAKE_CURRENT_BINARY_DIR}/../../
COMMAND ${WAMRC_ROOT_DIR}/wamrc ${WAMR_COMPILER_CHAIN_FLAGS}
-o ${AOT_CHAIN_TARGET}
${TARGET_NAME}
COMMAND ${CMAKE_COMMAND} -E copy
${CMAKE_CURRENT_BINARY_DIR}/${AOT_CHAIN_TARGET}
${CMAKE_CURRENT_BINARY_DIR}/../../
COMMENT "Compile and copy ${AOT_TARGET} ${AOT_CHAIN_TARGET} to the same directory of google test"
)
endfunction()
add_executable(test64.wasm ../test.c)
target_link_libraries(test64.wasm)
copy_wasm(test64.wasm)
compile_and_copy_aot_from(test64.wasm)

View File

@ -3,7 +3,7 @@
* SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception * SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
*/ */
#include <stdio.h> #define NULL 0
extern void * extern void *
shared_heap_malloc(int size); shared_heap_malloc(int size);
@ -32,3 +32,31 @@ test_malloc_fail()
shared_heap_free(ptr); shared_heap_free(ptr);
return 0; return 0;
} }
void *
my_shared_heap_malloc(int size)
{
return shared_heap_malloc(size);
}
void
my_shared_heap_free(void *addr)
{
shared_heap_free(addr);
}
char
read_modify_write_8(char *addr, char value)
{
char original_value = *addr;
*addr = value;
return original_value;
}
short
read_modify_write_16(short *addr, short value)
{
short original_value = *addr;
*addr = value;
return original_value;
}

View File

@ -30,3 +30,17 @@ test()
shared_heap_free(ptr); shared_heap_free(ptr);
return 1; return 1;
} }
int
test_preallocated(void *app_addr)
{
int *ptr = (int *)app_addr;
int *ptr2 = NULL;
ptr2 = test_addr_conv(ptr);
if (ptr2 != ptr) {
return 0;
}
return 1;
}

View File

@ -172,6 +172,7 @@ def test_case(
clean_up_flag=True, clean_up_flag=True,
verbose_flag=True, verbose_flag=True,
gc_flag=False, gc_flag=False,
extended_const_flag=False,
memory64_flag=False, memory64_flag=False,
multi_memory_flag=False, multi_memory_flag=False,
qemu_flag=False, qemu_flag=False,
@ -229,6 +230,9 @@ def test_case(
if gc_flag: if gc_flag:
CMD.append("--gc") CMD.append("--gc")
if extended_const_flag:
CMD.append("--extended-const")
if memory64_flag: if memory64_flag:
CMD.append("--memory64") CMD.append("--memory64")
@ -304,6 +308,7 @@ def test_suite(
clean_up_flag=True, clean_up_flag=True,
verbose_flag=True, verbose_flag=True,
gc_flag=False, gc_flag=False,
extended_const_flag=False,
memory64_flag=False, memory64_flag=False,
multi_memory_flag=False, multi_memory_flag=False,
parl_flag=False, parl_flag=False,
@ -385,6 +390,7 @@ def test_suite(
clean_up_flag, clean_up_flag,
verbose_flag, verbose_flag,
gc_flag, gc_flag,
extended_const_flag,
memory64_flag, memory64_flag,
multi_memory_flag, multi_memory_flag,
qemu_flag, qemu_flag,
@ -428,6 +434,7 @@ def test_suite(
clean_up_flag, clean_up_flag,
verbose_flag, verbose_flag,
gc_flag, gc_flag,
extended_const_flag,
memory64_flag, memory64_flag,
multi_memory_flag, multi_memory_flag,
qemu_flag, qemu_flag,
@ -561,6 +568,13 @@ def main():
dest="gc_flag", dest="gc_flag",
help="Running with GC feature", help="Running with GC feature",
) )
parser.add_argument(
"--enable-extended-const",
action="store_true",
default=False,
dest="extended_const_flag",
help="Running with extended const expression feature",
)
parser.add_argument( parser.add_argument(
"--memory64", "--memory64",
action="store_true", action="store_true",
@ -619,6 +633,7 @@ def main():
options.clean_up_flag, options.clean_up_flag,
options.verbose_flag, options.verbose_flag,
options.gc_flag, options.gc_flag,
options.extended_const_flag,
options.memory64_flag, options.memory64_flag,
options.multi_memory_flag, options.multi_memory_flag,
options.parl_flag, options.parl_flag,
@ -648,6 +663,7 @@ def main():
options.clean_up_flag, options.clean_up_flag,
options.verbose_flag, options.verbose_flag,
options.gc_flag, options.gc_flag,
options.extended_const_flag,
options.memory64_flag, options.memory64_flag,
options.multi_memory_flag, options.multi_memory_flag,
options.qemu_flag, options.qemu_flag,

View File

@ -0,0 +1,506 @@
diff --git a/test/core/elem.wast b/test/core/elem.wast
index 92dab52..3954bca 100644
--- a/test/core/elem.wast
+++ b/test/core/elem.wast
@@ -571,6 +571,7 @@
;; Element sections across multiple modules change the same table
+(;
(module $module1
(type $out-i32 (func (result i32)))
(table (export "shared-table") 10 funcref)
@@ -620,7 +621,7 @@
(assert_return (invoke $module1 "call-7") (i32.const 67))
(assert_return (invoke $module1 "call-8") (i32.const 69))
(assert_return (invoke $module1 "call-9") (i32.const 70))
-
+;)
;; Element segments must match element type of table
(assert_invalid
@@ -659,24 +660,30 @@
(func (export "set") (param $i i32) (param $x externref)
(table.set $t (local.get $i) (local.get $x))))
-(register "exporter" $m)
+;; (register "exporter" $m)
-(assert_return (invoke $m "get" (i32.const 0)) (ref.null extern))
-(assert_return (invoke $m "get" (i32.const 1)) (ref.null extern))
+;; (assert_return (invoke $m "get" (i32.const 0)) (ref.null extern))
+;; (assert_return (invoke $m "get" (i32.const 1)) (ref.null extern))
+(assert_return (invoke "get" (i32.const 0)) (ref.null extern))
+(assert_return (invoke "get" (i32.const 1)) (ref.null extern))
-(assert_return (invoke $m "set" (i32.const 0) (ref.extern 42)))
-(assert_return (invoke $m "set" (i32.const 1) (ref.extern 137)))
-
-(assert_return (invoke $m "get" (i32.const 0)) (ref.extern 42))
-(assert_return (invoke $m "get" (i32.const 1)) (ref.extern 137))
+;; (assert_return (invoke $m "set" (i32.const 0) (ref.extern 42)))
+;; (assert_return (invoke $m "set" (i32.const 1) (ref.extern 137)))
+(assert_return (invoke "set" (i32.const 0) (ref.extern 42)))
+(assert_return (invoke "set" (i32.const 1) (ref.extern 137)))
+;; (assert_return (invoke $m "get" (i32.const 0)) (ref.extern 42))
+;; (assert_return (invoke $m "get" (i32.const 1)) (ref.extern 137))
+(assert_return (invoke "get" (i32.const 0)) (ref.extern 42))
+(assert_return (invoke "get" (i32.const 1)) (ref.extern 137))
+(;
(module
(import "exporter" "table" (table $t 2 externref))
(elem (i32.const 0) externref (ref.null extern)))
(assert_return (invoke $m "get" (i32.const 0)) (ref.null extern))
(assert_return (invoke $m "get" (i32.const 1)) (ref.extern 137))
-
+;)
;; Initializing a table with imported funcref global
(module $module4
@@ -686,6 +693,7 @@
(global (export "f") funcref (ref.func 0))
)
+(;
(register "module4" $module4)
(module
@@ -699,6 +707,7 @@
)
(assert_return (invoke "call_imported_elem") (i32.const 42))
+;)
;; Extended contant expressions
diff --git a/test/core/ref_func.wast b/test/core/ref_func.wast
index adb5cb7..6396013 100644
--- a/test/core/ref_func.wast
+++ b/test/core/ref_func.wast
@@ -4,7 +4,7 @@
(register "M")
(module
- (func $f (import "M" "f") (param i32) (result i32))
+ (func $f (param $x i32) (result i32) (local.get $x))
(func $g (param $x i32) (result i32)
(i32.add (local.get $x) (i32.const 1))
)
diff --git a/test/core/table_copy.wast b/test/core/table_copy.wast
index 380e84e..59230cf 100644
--- a/test/core/table_copy.wast
+++ b/test/core/table_copy.wast
@@ -14,11 +14,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t0) (i32.const 2) func 3 1 4 1)
@@ -106,11 +106,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t0) (i32.const 2) func 3 1 4 1)
@@ -198,11 +198,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t0) (i32.const 2) func 3 1 4 1)
@@ -290,11 +290,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t0) (i32.const 2) func 3 1 4 1)
@@ -382,11 +382,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t0) (i32.const 2) func 3 1 4 1)
@@ -474,11 +474,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t0) (i32.const 2) func 3 1 4 1)
@@ -566,11 +566,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t0) (i32.const 2) func 3 1 4 1)
@@ -658,11 +658,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t0) (i32.const 2) func 3 1 4 1)
@@ -750,11 +750,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t0) (i32.const 2) func 3 1 4 1)
@@ -842,11 +842,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t1) (i32.const 2) func 3 1 4 1)
@@ -934,11 +934,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t1) (i32.const 2) func 3 1 4 1)
@@ -1026,11 +1026,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t1) (i32.const 2) func 3 1 4 1)
@@ -1118,11 +1118,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t1) (i32.const 2) func 3 1 4 1)
@@ -1210,11 +1210,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t1) (i32.const 2) func 3 1 4 1)
@@ -1302,11 +1302,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t1) (i32.const 2) func 3 1 4 1)
@@ -1394,11 +1394,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t1) (i32.const 2) func 3 1 4 1)
@@ -1486,11 +1486,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t1) (i32.const 2) func 3 1 4 1)
@@ -1578,11 +1578,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t1) (i32.const 2) func 3 1 4 1)
diff --git a/test/core/table_init.wast b/test/core/table_init.wast
index 0b2d26f..3c595e5 100644
--- a/test/core/table_init.wast
+++ b/test/core/table_init.wast
@@ -14,11 +14,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t0) (i32.const 2) func 3 1 4 1)
@@ -72,11 +72,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t0) (i32.const 2) func 3 1 4 1)
@@ -130,11 +130,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t0) (i32.const 2) func 3 1 4 1)
@@ -196,11 +196,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t1) (i32.const 2) func 3 1 4 1)
@@ -254,11 +254,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t1) (i32.const 2) func 3 1 4 1)
@@ -312,11 +312,11 @@
(module
(type (func (result i32))) ;; type #0
- (import "a" "ef0" (func (result i32))) ;; index 0
- (import "a" "ef1" (func (result i32)))
- (import "a" "ef2" (func (result i32)))
- (import "a" "ef3" (func (result i32)))
- (import "a" "ef4" (func (result i32))) ;; index 4
+ (func (result i32) (i32.const 0)) ;; index 0
+ (func (result i32) (i32.const 1))
+ (func (result i32) (i32.const 2))
+ (func (result i32) (i32.const 3))
+ (func (result i32) (i32.const 4)) ;; index 4
(table $t0 30 30 funcref)
(table $t1 30 30 funcref)
(elem (table $t1) (i32.const 2) func 3 1 4 1)

View File

@@ -336,6 +336,9 @@ parser.add_argument('--multi-thread', default=False, action='store_true',
parser.add_argument('--gc', default=False, action='store_true',
help='Test with GC')
+ parser.add_argument('--extended-const', action='store_true',
+ help='Enable extended const expression feature')
parser.add_argument('--memory64', default=False, action='store_true',
help='Test with Memory64')
@@ -1112,6 +1115,8 @@ def compile_wast_to_wasm(form, wast_tempfile, wasm_tempfile, opts):
cmd = [opts.wast2wasm, "--enable-memory64", "--no-check", wast_tempfile, "-o", wasm_tempfile ]
elif opts.multi_memory:
cmd = [opts.wast2wasm, "--enable-multi-memory", "--no-check", wast_tempfile, "-o", wasm_tempfile ]
+ elif opts.extended_const:
+ cmd = [opts.wast2wasm, "--enable-extended-const", "--no-check", wast_tempfile, "-o", wasm_tempfile ]
else:
# `--enable-multi-memory` for a case in memory.wast but doesn't require runtime support
cmd = [opts.wast2wasm, "--enable-multi-memory", "--enable-threads", "--no-check",
@@ -1155,6 +1160,9 @@ def compile_wasm_to_aot(wasm_tempfile, aot_tempfile, runner, opts, r, output = '
cmd.append("--enable-gc")
cmd.append("--enable-tail-call")
+ if opts.extended_const:
+ cmd.append("--enable-extended-const")
if output == 'object':
cmd.append("--format=object")
elif output == 'ir':
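With these runtest.py hunks applied, the new --extended-const option simply forwards the feature to the translation and AOT steps. A minimal sketch of the commands the script ends up building (case.wast/case.wasm/case.aot are placeholder names; the wat2wasm binary is assumed to be the one setup_wabt installs):

    # translate a spec case with extended const expressions enabled
    wat2wasm --enable-extended-const --no-check case.wast -o case.wasm
    # for AOT runs, forward the same feature to wamrc (see the wamrc diff below)
    wamrc --enable-extended-const -o case.aot case.wasm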

View File

@@ -41,6 +41,7 @@ function help()
echo "-j set the platform to test"
echo "-T set sanitizer to use in tests(ubsan|tsan|asan|posan)"
echo "-A use the specified wamrc command instead of building it"
+ echo "-N enable extended const expression feature"
echo "-r [requirement name] [N [N ...]] specify a requirement name followed by one or more"
echo " subrequirement IDs, if no subrequirement is specificed,"
echo " it will run all subrequirements. When this optin is used,"
@@ -59,6 +60,7 @@ ENABLE_MULTI_THREAD=0
COLLECT_CODE_COVERAGE=0
ENABLE_SIMD=0
ENABLE_GC=0
+ ENABLE_EXTENDED_CONST_EXPR=0
ENABLE_MEMORY64=0
ENABLE_MULTI_MEMORY=0
ENABLE_XIP=0
@@ -87,7 +89,7 @@ REQUIREMENT_NAME=""
# Initialize an empty array for subrequirement IDs
SUBREQUIREMENT_IDS=()
- while getopts ":s:cabgvt:m:MCpSXexwWEPGQF:j:T:r:A:" opt
+ while getopts ":s:cabgvt:m:MCpSXexwWEPGQF:j:T:r:A:N" opt
do
OPT_PARSED="TRUE"
case $opt in
@@ -191,6 +193,10 @@ do
echo "enable GC feature"
ENABLE_GC=1
;;
+ N)
+ echo "enable extended const expression feature"
+ ENABLE_EXTENDED_CONST_EXPR=1
+ ;;
P)
PARALLELISM=1
;;
@@ -362,31 +368,31 @@ function sightglass_test()
function setup_wabt()
{
# please sync with .github/actions/install-wasi-sdk-wabt/action.yml
- case ${PLATFORM} in
- cosmopolitan)
- ;;
- linux)
- WABT_URL=https://github.com/WebAssembly/wabt/releases/download/1.0.37/wabt-1.0.37-ubuntu-20.04.tar.gz
- WABT_VERSION=1.0.37
- ;;
- darwin)
- WABT_URL=https://github.com/WebAssembly/wabt/releases/download/1.0.36/wabt-1.0.36-macos-12.tar.gz
- WABT_VERSION=1.0.36
- ;;
- windows)
- WABT_URL=https://github.com/WebAssembly/wabt/releases/download/1.0.37/wabt-1.0.37-windows.tar.gz
- WABT_VERSION=1.0.37
- ;;
- *)
- echo "wabt platform for ${PLATFORM} in unknown"
- exit 1
- ;;
- esac
if [ ${WABT_BINARY_RELEASE} == "YES" ]; then
echo "download a binary release and install"
local WAT2WASM=${WORK_DIR}/wabt/out/gcc/Release/wat2wasm
if [ ! -f ${WAT2WASM} ]; then
+ case ${PLATFORM} in
+ cosmopolitan)
+ ;;
+ linux)
+ WABT_URL=https://github.com/WebAssembly/wabt/releases/download/1.0.37/wabt-1.0.37-ubuntu-20.04.tar.gz
+ WABT_VERSION=1.0.37
+ ;;
+ darwin)
+ WABT_URL=https://github.com/WebAssembly/wabt/releases/download/1.0.36/wabt-1.0.36-macos-12.tar.gz
+ WABT_VERSION=1.0.36
+ ;;
+ windows)
+ WABT_URL=https://github.com/WebAssembly/wabt/releases/download/1.0.37/wabt-1.0.37-windows.tar.gz
+ WABT_VERSION=1.0.37
+ ;;
+ *)
+ echo "wabt platform for ${PLATFORM} in unknown"
+ exit 1
+ ;;
+ esac
pushd /tmp
wget -O wabt-tar.gz --progress=dot:giga ${WABT_URL}
tar xf wabt-tar.gz
@@ -414,7 +420,7 @@ function setup_wabt()
function compile_reference_interpreter()
{
- echo "compile the reference intepreter"
+ echo "compile the reference interpreter"
pushd interpreter
make
if [ $? -ne 0 ]
@@ -485,6 +491,17 @@ function spec_test()
# (func $f (param (ref null $t)) (result funcref) (local.get 0))
#
compile_reference_interpreter
+ elif [[ ${ENABLE_EXTENDED_CONST_EXPR} == 1 ]]; then
+ echo "checkout spec for extended const expression proposal"
+ git clone -b main --single-branch https://github.com/WebAssembly/extended-const.git spec
+ pushd spec
+ # Jan 14, 2025. README.md: Add note that this proposal is done (#20)
+ git reset --hard 8d4f6aa2b00a8e7c0174410028625c6a176db8a1
+ # ignore import table cases
+ git apply --ignore-whitespace ../../spec-test-script/extended_const.patch || exit 1
elif [[ ${ENABLE_MEMORY64} == 1 ]]; then
echo "checkout spec for memory64 proposal"
@@ -587,6 +604,10 @@ function spec_test()
ARGS_FOR_SPEC_TEST+="--gc "
fi
+ if [[ ${ENABLE_EXTENDED_CONST_EXPR} == 1 ]]; then
+ ARGS_FOR_SPEC_TEST+="--enable-extended-const "
+ fi
if [[ 1 == ${ENABLE_MEMORY64} ]]; then
ARGS_FOR_SPEC_TEST+="--memory64 "
fi
@@ -832,6 +853,7 @@ function build_wamrc()
&& cmake .. \
-DCOLLECT_CODE_COVERAGE=${COLLECT_CODE_COVERAGE} \
-DWAMR_BUILD_SHRUNK_MEMORY=0 \
+ -DWAMR_BUILD_EXTENDED_CONST_EXPR=${ENABLE_EXTENDED_CONST_EXPR} \
&& make -j 4
}
@@ -1023,6 +1045,10 @@ function trigger()
EXTRA_COMPILE_FLAGS+=" -DWAMR_BUILD_TAIL_CALL=1"
fi
+ if [[ ${ENABLE_EXTENDED_CONST_EXPR} == 1 ]]; then
+ EXTRA_COMPILE_FLAGS+=" -DWAMR_BUILD_EXTENDED_CONST_EXPR=1"
+ fi
if [[ ${ENABLE_DEBUG_VERSION} == 1 ]]; then
EXTRA_COMPILE_FLAGS+=" -DCMAKE_BUILD_TYPE=Debug"
fi
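Taken together, the new -N switch initializes ENABLE_EXTENDED_CONST_EXPR, forwards it to the spec-test arguments, and adds -DWAMR_BUILD_EXTENDED_CONST_EXPR to both the runtime and wamrc builds. A hypothetical invocation (the -s spec and -t aot arguments are assumptions about the existing script, not part of this diff):

    # run the spec suite in AOT mode with the extended const expression proposal enabled
    ./test_wamr.sh -s spec -t aot -N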

View File

@@ -53,6 +53,7 @@ add_definitions(-DWASM_ENABLE_PERF_PROFILING=1)
add_definitions(-DWASM_ENABLE_LOAD_CUSTOM_SECTION=1)
add_definitions(-DWASM_ENABLE_MODULE_INST_CONTEXT=1)
add_definitions(-DWASM_ENABLE_MEMORY64=1)
+ add_definitions(-DWASM_ENABLE_EXTENDED_CONST_EXPR=1)
add_definitions(-DWASM_ENABLE_GC=1)
@@ -284,6 +285,7 @@ include (${IWASM_DIR}/interpreter/iwasm_interp.cmake)
include (${IWASM_DIR}/aot/iwasm_aot.cmake)
include (${IWASM_DIR}/compilation/iwasm_compl.cmake)
include (${PROJECT_SOURCE_DIR}/../build-scripts/version.cmake)
+ include (${IWASM_DIR}/libraries/shared-heap/shared_heap.cmake)
if (WAMR_BUILD_LIBC_BUILTIN EQUAL 1)
include (${IWASM_DIR}/libraries/libc-builtin/libc_builtin.cmake)
@@ -366,6 +368,7 @@ add_library (vmlib
${LIBC_WASI_SOURCE}
${LIB_PTHREAD_SOURCE}
${LIB_WASI_THREADS_SOURCE}
+ ${LIB_SHARED_HEAP_SOURCE}
${IWASM_COMMON_SOURCE}
${IWASM_INTERP_SOURCE}
${IWASM_AOT_SOURCE}
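This CMake file hard-codes WASM_ENABLE_EXTENDED_CONST_EXPR=1 and pulls the shared-heap library sources into vmlib. For a regular product build, the test_wamr.sh hunks above instead pass a cache option through EXTRA_COMPILE_FLAGS; a sketch of that configuration (build directory layout assumed):

    # configure a build with extended const expressions enabled
    cmake .. -DWAMR_BUILD_EXTENDED_CONST_EXPR=1
    make -j 4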

View File

@@ -213,7 +213,9 @@ print_help()
printf(" --enable-linux-perf Enable linux perf support\n");
#endif
printf(" --mllvm=<option> Add the LLVM command line option\n");
- printf(" --enable-shared-heap Enable shared heap feature\n");
+ printf(" --enable-shared-heap Enable shared heap feature, assuming only one shared heap will be attached\n");
+ printf(" --enable-shared-chain Enable shared heap chain feature, works for more than one shared heap\n");
+ printf(" WARNING: enable this feature will largely increase code size\n");
printf(" -v=n Set log verbose level (0 to 5, default is 2), larger with more log\n");
printf(" --version Show version information\n");
printf("Examples: wamrc -o test.aot test.wasm\n");
@@ -421,6 +423,7 @@ main(int argc, char *argv[])
option.enable_bulk_memory = true;
option.enable_ref_types = true;
option.enable_gc = false;
+ option.enable_extended_const = false;
aot_call_stack_features_init_default(&option.call_stack_features);
/* Process options */
@@ -534,6 +537,9 @@ main(int argc, char *argv[])
else if (!strcmp(argv[0], "--disable-aux-stack-check")) {
option.enable_aux_stack_check = false;
}
+ else if (!strcmp(argv[0], "--enable-extended-const")) {
+ option.enable_extended_const = true;
+ }
else if (!strcmp(argv[0], "--enable-dump-call-stack")) {
option.aux_stack_frame_type = AOT_STACK_FRAME_TYPE_STANDARD;
}
@@ -661,6 +667,9 @@ main(int argc, char *argv[])
else if (!strcmp(argv[0], "--enable-shared-heap")) {
option.enable_shared_heap = true;
}
+ else if (!strcmp(argv[0], "--enable-shared-chain")) {
+ option.enable_shared_chain = true;
+ }
else if (!strcmp(argv[0], "--version")) {
uint32 major, minor, patch;
wasm_runtime_get_version(&major, &minor, &patch);
@@ -723,6 +732,13 @@ main(int argc, char *argv[])
option.enable_ref_types = false;
}
+ if (option.enable_shared_chain) {
+ LOG_VERBOSE("Enable shared chain will overwrite shared heap and sw "
+ "bounds control");
+ option.enable_shared_heap = false;
+ option.bounds_checks = true;
+ }
if (!use_dummy_wasm) {
wasm_file_name = argv[0];
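A usage sketch for the new wamrc options, following the Examples line in the help text above (test.wasm and test.aot are placeholder names):

    # compile with extended const expressions enabled
    wamrc --enable-extended-const -o test.aot test.wasm
    # compile for a chain of shared heaps; as the code above shows, this
    # clears enable_shared_heap and forces software bounds checks
    wamrc --enable-shared-chain -o test.aot test.wasm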

Some files were not shown because too many files have changed in this diff.