Implement multi-memory for the classic interpreter. Core spec (and bulk memory) opcodes are
supported now; atomic opcodes and multi-memory export APIs will be supported in the future.
PS: The multi-memory spec tests were patched a lot for the linking test to adapt to the multi-module implementation.
The table index in the call_indirect/return_call_indirect opcode should be
a single byte 0x00 when ref-types/GC isn't enabled, and should be treated as
a LEB-encoded u32 when ref-types/GC is enabled.
Also make the AOT compiler bail out if ref-types/GC is disabled by a command line
argument while ref-types instructions are used.
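As a rough illustration of that decoding rule (not WAMR's actual loader code; the helper below and its error handling are hypothetical):
```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch: decode the table index that follows the type index of a
 * call_indirect/return_call_indirect opcode. */
static bool
decode_table_index(const uint8_t **p_buf, const uint8_t *buf_end,
                   bool ref_types_or_gc_enabled, uint32_t *out_idx)
{
    const uint8_t *p = *p_buf;

    if (!ref_types_or_gc_enabled) {
        /* MVP encoding: the table index must be the single byte 0x00 */
        if (p >= buf_end || *p != 0x00)
            return false;
        *out_idx = 0;
        *p_buf = p + 1;
        return true;
    }

    /* ref-types/GC enabled: the table index is a LEB128-encoded u32 */
    uint32_t result = 0, shift = 0;
    while (p < buf_end && shift < 35) {
        uint8_t byte = *p++;
        result |= (uint32_t)(byte & 0x7f) << shift;
        if (!(byte & 0x80)) {
            *out_idx = result;
            *p_buf = p;
            return true;
        }
        shift += 7;
    }
    return false; /* truncated or over-long encoding */
}
```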
As reported in #3500, when the debug interpreter is enabled, the classic interpreter
performs a lock operation to read `exec_env->current_status->signal_flag` and
does further handling before fetching the next opcode, which makes the interpreter
run slower.
This PR atomically loads `exec_env->current_status->signal_flag` without the mutex
lock when 32-bit atomic load is supported, and only takes the lock for further
handling when the signal_flag is WAMR_SIG_SINGSTEP, which improves the
performance.
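A minimal sketch of the idea, using C11 atomics directly (WAMR itself uses its own atomic/OS abstraction macros, and the flag value below is hypothetical):
```c
#include <stdatomic.h>
#include <stdint.h>

#define WAMR_SIG_SINGSTEP 0x01 /* illustrative value only */

typedef struct ThreadStatus {
    _Atomic uint32_t signal_flag;
} ThreadStatus;

/* Called before fetching the next opcode in the debug interpreter loop. */
static void
check_signal_flag(ThreadStatus *status)
{
    /* lock-free read instead of taking the mutex on every opcode */
    uint32_t flag =
        atomic_load_explicit(&status->signal_flag, memory_order_acquire);

    if (flag == WAMR_SIG_SINGSTEP) {
        /* only in this (rare) case take the lock and do the heavier
         * single-step handling, e.g. suspending for the debugger */
    }
}
```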
The "handlers" on the interpreter stack is sometimes treated as
host pointers and sometimes treated as i64 values. It's quite broken
for targets where pointers are not 64-bit.
This commit makes them host pointers consistently. (at least for
32-bit and 64-bit pointers. We don't support other pointer
sizes anyway.)
Fixes https://github.com/bytecodealliance/wasm-micro-runtime/issues/3110
In the classic interpreter, fast interpreter and fast-jit running modes, set the local
variables' default value to NULL_REF (0xFFFFFFFF) rather than 0 if their type is
externref or funcref.
The issue was reported in #3390 and #3391.
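A small sketch of the intended behavior (illustrative only; the type codes are the standard wasm binary encodings, the helper name is made up):
```c
#include <stdint.h>

#define NULL_REF ((uint32_t)0xFFFFFFFF)

/* wasm binary type codes for reference types */
#define VALUE_TYPE_FUNCREF 0x70
#define VALUE_TYPE_EXTERNREF 0x6F

/* Initialize one local slot: reference-typed locals start as NULL_REF,
 * numeric locals start as zero. */
static void
init_local_slot(uint8_t value_type, uint32_t *slot)
{
    if (value_type == VALUE_TYPE_FUNCREF || value_type == VALUE_TYPE_EXTERNREF)
        *slot = NULL_REF;
    else
        *slot = 0;
}
```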
- Add a few APIs (https://github.com/bytecodealliance/wasm-micro-runtime/issues/3325); a usage sketch follows this list
```c
wasm_runtime_detect_native_stack_overflow_size
wasm_runtime_detect_native_stack_overflow
```
- Adapt the runtime to use them
- Adapt samples/native-stack-overflow to use them
- Add a few missing overflow checks in the interpreters
- Build and run the sample on the CI
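A hedged usage sketch for a native function: the exact signatures and return semantics of the two APIs should be confirmed against wasm_export.h; the recursion and the helper name here are just for illustration.
```c
#include <stdint.h>
#include "wasm_export.h"

/* Assumed semantics: the check returns false (and sets an exception on the
 * module instance) when the remaining native stack is too small. */
static int32_t
recursive_native_helper(wasm_exec_env_t exec_env, int32_t depth)
{
    if (!wasm_runtime_detect_native_stack_overflow(exec_env))
        return -1; /* native stack exhausted, unwind */

    if (depth <= 0)
        return 0;
    return recursive_native_helper(exec_env, depth - 1);
}
```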
Enhance the GC subtyping checks:
- Fix issues in the type equivalence check
- Enable the recursive type subtyping check
- Add an equivalence-type flag in the defined types of the AOT file: if an
  equivalent type was defined before, set the flag and re-use the previous type
- Normalize the defined types for the interpreter and AOT
- Enable the spec test cases type-equivalence.wast and type-subtyping.wast,
  and enable some previously commented-out cases
- Enable setting WAMR_BUILD_SANITIZER from a cmake variable
The current frame was freed before tail-calling to an import or native function,
and prev_frame was set as exec_env's cur_frame, so after the tail call we should
recover the context from prev_frame rather than the current frame.
Found in https://github.com/bytecodealliance/wasm-micro-runtime/issues/3279.
Add a new cmake flag (cache variable) `WAMR_BUILD_MEMORY64` to enable
the memory64 feature. It can only be enabled on 64-bit platforms/targets and
can only use the software boundary check. When it is enabled, both i32 and
i64 linear memory types are supported. The main modifications are:
- wasm loader & mini-loader: the loading and bytecode validation process
- wasm runtime: the memory instantiation process
- classic interpreter: the wasm code execution process
- Support memory64 memories in related runtime APIs
- Modify the main function type check when it's a memory64 wasm file
- Modify `wasm_runtime_invoke_native` and `wasm_runtime_invoke_native_raw` to
  handle registered native function pointer arguments when memory64 is enabled
- Add the memory64 classic-interpreter spec test in `test_wamr.sh` and in CI
Currently, it supports memory64 wasm files that use core spec
(including bulk memory proposal) opcodes and threads opcodes.
PS:
https://github.com/bytecodealliance/wasm-micro-runtime/issues/3091
https://github.com/bytecodealliance/wasm-micro-runtime/pull/3240
https://github.com/bytecodealliance/wasm-micro-runtime/pull/3260
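For illustration, a sketch of how a loader can tell an i64 (memory64) memory from an i32 one via the limits flags; the flag constants follow the memory64 proposal, while the structure and function names are hypothetical:
```c
#include <stdbool.h>
#include <stdint.h>

#define LIMITS_FLAG_HAS_MAX  0x01
#define LIMITS_FLAG_SHARED   0x02
#define LIMITS_FLAG_MEMORY64 0x04

typedef struct MemoryFlags {
    bool has_max;
    bool is_shared;
    bool is_memory64; /* addresses/offsets are i64 instead of i32 */
} MemoryFlags;

static bool
parse_memory_limits_flags(uint8_t flags, bool memory64_enabled,
                          MemoryFlags *out)
{
    if ((flags & LIMITS_FLAG_MEMORY64) && !memory64_enabled)
        return false; /* reject memory64 memories unless the feature is on */

    out->has_max = (flags & LIMITS_FLAG_HAS_MAX) != 0;
    out->is_shared = (flags & LIMITS_FLAG_SHARED) != 0;
    out->is_memory64 = (flags & LIMITS_FLAG_MEMORY64) != 0;
    return true;
}
```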
Implement the GC (Garbage Collection) feature for interpreter mode,
AOT mode and LLVM-JIT mode, supporting most features of the latest
spec proposal, and also enable the stringref feature.
Use `cmake -DWAMR_BUILD_GC=1/0` to enable/disable the feature,
and `wamrc --enable-gc` to generate an AOT file with GC support.
Also update the AOT file version from 2 to 3 since there are many AOT
ABI breaks, including changes to the AOT file format, changes to the
AOT module/memory instance layouts, the AOT runtime APIs for the
AOT code to invoke, and so on.
Using `CHECK_BULK_MEMORY_OVERFLOW(addr + offset, n, maddr)` to do the
boundary check may encounter integer overflow in `addr + offset`. Change to
use `CHECK_MEMORY_OVERFLOW(n)` instead, which converts `addr` and `offset`
to uint64 first and then adds them, avoiding the integer overflow.
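A minimal sketch of the fixed check (illustrative names, not the actual macros): widening the 32-bit operands to 64-bit before adding makes the sum immune to wrap-around.
```c
#include <stdbool.h>
#include <stdint.h>

/* addr and offset come from the bytecode as u32; bytes is the access size */
static bool
memory_access_in_bounds(uint32_t addr, uint32_t offset, uint32_t bytes,
                        uint64_t memory_data_size)
{
    /* each operand is < 2^32, so the 64-bit sum cannot overflow */
    uint64_t access_end = (uint64_t)addr + (uint64_t)offset + (uint64_t)bytes;
    return access_end <= memory_data_size;
}
```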
This PR adds the initial support for WASM exception handling:
* Inside the classic interpreter only:
  * Initial handling of Tags
  * Initial handling of Exceptions based on the W3C Exception Handling proposal
  * Import and export of Exceptions and Tags
* Add a `cmake -DWAMR_BUILD_EXCE_HANDLING=1/0` option to enable/disable
  the feature; it is disabled by default
* Update the wamr-test-suites scripts to test the feature
* Additional CI/CD changes to validate the exception spec proposal cases
Refer to:
https://github.com/bytecodealliance/wasm-micro-runtime/issues/1884
Signed-off-by: Ricardo Aguilar <ricardoaguilar@siemens.com>
Co-authored-by: Chris Woods <chris.woods@siemens.com>
Co-authored-by: Rene Ermler <rene.ermler@siemens.com>
Co-authored-by: Trenner Thomas <trenner.thomas@siemens.com>
Though SIMD isn't supported by the interpreter, when JIT is enabled, a
developer may run `iwasm --interp <wasm_file>` and hit SIMD opcodes in the
interpreter, which weren't handled before this PR.
- Enable quick aot entry when hw bound check is disabled
- Remove unnecessary ret_type argument in the quick aot entries
- Declare detailed prototype of aot function to call in each quick aot entry
Enhance the statistics of wasm function execution time, i.e. the performance
profiling feature:
- Add os_time_thread_cputime_us() to get the cpu time of a thread,
  and use it to calculate the execution time of a wasm function
- Support the statistics of a function's children execution time,
  and dump it in wasm_runtime_dump_perf_profiling
- Expose two APIs (usage sketch below):
  `wasm_runtime_sum_wasm_exec_time`
  `wasm_runtime_get_wasm_func_exec_time`
And rename os_time_get_boot_microsecond to os_time_get_boot_us.
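A hedged usage sketch of the two APIs together with wasm_runtime_dump_perf_profiling; the exact signatures, return types and time units should be checked in wasm_export.h (the feature requires building with performance profiling enabled), and the function name "fib" is just an example.
```c
#include <stdio.h>
#include "wasm_export.h"

static void
report_exec_time(wasm_module_inst_t module_inst)
{
    /* assumed to return accumulated execution time values */
    double total = wasm_runtime_sum_wasm_exec_time(module_inst);
    double func = wasm_runtime_get_wasm_func_exec_time(module_inst, "fib");

    printf("total wasm exec time: %f\n", total);
    printf("exec time of fib:     %f\n", func);

    /* dumps per-function statistics, including children execution time */
    wasm_runtime_dump_perf_profiling(module_inst);
}
```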
In some scenarios there may be many calls to AOT/JIT functions from the host
embedder, which expects the calling process to be fast. In the current
implementation, the runtime calls wasm_runtime_invoke_native to prepare the
array of registers and stacks for the invokeNative assembly code, which then
moves the elements of that array into physical registers and the native stack
and calls the AOT/JIT function; this involves a lot of data copying and
handling that hurts performance.
This PR registers some quick AOT/JIT entries for some simple wasm signatures,
and lets the runtime call such an entry to invoke the AOT/JIT function directly
instead of calling wasm_runtime_invoke_native, which speeds up the calling process.
We may extend the mechanism next to allow developers to register their own quick
AOT/JIT entries to speed up calls into AOT/JIT functions with other specific
signatures.
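A conceptual sketch of what such a quick entry looks like for one simple signature, e.g. (i32, i32) -> i32; this is not WAMR's real entry table, just the shape of the idea: cast the AOT/JIT function pointer to a concrete prototype and call it directly instead of going through the generic argument marshalling.
```c
#include <stdbool.h>
#include <stdint.h>

typedef struct WASMExecEnv WASMExecEnv; /* opaque here */

/* concrete prototype for the (i32, i32) -> i32 signature */
typedef int32_t (*aot_func_ii_i)(WASMExecEnv *exec_env, int32_t a, int32_t b);

static bool
quick_invoke_ii_i(WASMExecEnv *exec_env, void *func_ptr,
                  uint32_t *argv /* in: 2 args, out: 1 result */)
{
    aot_func_ii_i func = (aot_func_ii_i)func_ptr;
    int32_t ret = func(exec_env, (int32_t)argv[0], (int32_t)argv[1]);
    argv[0] = (uint32_t)ret;
    return true;
}
```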
- Don't allocate the implicit/unused frame when calling the LLVM JIT function
- Don't set exec_env's thread handle and stack boundary in recursive calls
  from the host, since they were already set in the first call
- Fix a frame not being freed in llvm_jit_call_func_bytecode
Currently, the `data.drop` instruction is implemented by directly modifying the
underlying module. That breaks use cases where multiple instances share a
single loaded module. `elem.drop` has the same problem.
This PR fixes the issue by keeping track of which data/elem segments have
been dropped with bitmaps kept in each module instance separately, and
adds a sample to demonstrate the issue and makes the CI run it.
Also add a missing check of dropped elements to the fast-jit `table.init`.
Fixes: https://github.com/bytecodealliance/wasm-micro-runtime/issues/2735
Fixes: https://github.com/bytecodealliance/wasm-micro-runtime/issues/2772
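The bookkeeping idea in miniature (field and function names are illustrative, not the actual ones): each module instance owns a bitmap of dropped segments, so data.drop/elem.drop never touch the shared module.
```c
#include <stdbool.h>
#include <stdint.h>

typedef struct InstanceDropState {
    uint8_t *dropped_data_segments; /* one bit per data segment */
    uint32_t data_seg_count;
} InstanceDropState;

static void
drop_data_segment(InstanceDropState *inst, uint32_t seg_idx)
{
    inst->dropped_data_segments[seg_idx / 8] |= (uint8_t)(1u << (seg_idx % 8));
}

static bool
is_data_segment_dropped(const InstanceDropState *inst, uint32_t seg_idx)
{
    return (inst->dropped_data_segments[seg_idx / 8] >> (seg_idx % 8)) & 1;
}
```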
Fixed a bug in the processing of the br_table_cache opcode that caused out-of-range
references when the label index was greater than the number of labels.
Avoid the stack traces getting mixed up together when multi-threading is enabled
by using exception_lock/unlock in dumping the call stacks.
And remove duplicated call stack dump in wasm_application.c.
Also update coding guideline CI to fix the clang-format-12 not found issue.
- Inherit the shared memory from the parent instance, instead of
  trying to look it up by the underlying module. The old method
  works correctly only when every cluster uses a different module.
- Use reference count in WASMMemoryInstance/AOTMemoryInstance
to mark whether the memory is shared or not
- Retire WASMSharedMemNode
- For atomic opcode implementations in the interpreters, use
a global lock for now
- Update the internal API users
(wasi-threads, lib-pthread, wasm_runtime_spawn_thread)
Fixes https://github.com/bytecodealliance/wasm-micro-runtime/issues/1962
We have observed a significant performance degradation after merging
https://github.com/bytecodealliance/wasm-micro-runtime/pull/1991
Instead of protecting the suspend flags with a mutex, we implement the flags
as an atomic variable and only use a mutex when atomics are not available
on a given platform.
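A greatly simplified illustration of the approach using C11 atomics directly (WAMR wraps this behind its own portability macros and keeps the mutex-based path as a fallback; the flag bit value is hypothetical):
```c
#include <stdatomic.h>
#include <stdint.h>

#define SUSPEND_FLAG_TERMINATE 0x1 /* illustrative bit value */

static _Atomic uint32_t suspend_flags;

/* another thread requests termination without taking any lock */
static void
request_terminate(void)
{
    atomic_fetch_or(&suspend_flags, SUSPEND_FLAG_TERMINATE);
}

/* the executing thread polls the flags lock-free in its hot path */
static int
terminate_requested(void)
{
    return (atomic_load(&suspend_flags) & SUSPEND_FLAG_TERMINATE) != 0;
}
```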
Allow using `cmake -DWAMR_CONFIGURABLE_BOUNDS_CHECKS=1` to
build iwasm, and then running `iwasm --disable-bounds-checks` to disable the
memory access boundary checks.
And add two APIs:
`wasm_runtime_set_bounds_checks` and `wasm_runtime_is_bounds_checks_enabled`
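A hedged usage sketch of the two APIs (the parameter order and the per-instance scope are assumptions to be confirmed against wasm_export.h; the build must enable WAMR_CONFIGURABLE_BOUNDS_CHECKS):
```c
#include <stdbool.h>
#include "wasm_export.h"

static void
maybe_disable_bounds_checks(wasm_module_inst_t module_inst, bool trusted)
{
    /* only skip the checks for wasm files from a trusted source */
    if (trusted)
        wasm_runtime_set_bounds_checks(module_inst, false);

    if (!wasm_runtime_is_bounds_checks_enabled(module_inst)) {
        /* memory accesses of this instance will not be bounds-checked */
    }
}
```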
Segue is an optimization technology which uses an x86 segment register to store
the WebAssembly linear memory base address, so as to remove most of the cost
of SFI (Software-based Fault Isolation) base addition and free up a general
purpose register. In this way it may:
- Improve the performance of JIT/AOT
- Reduce the footprint of JIT/AOT, since the generated JIT/AOT code is smaller
- Reduce the compilation time of JIT/AOT
This PR uses the x86-64 GS segment register to apply the optimization. Currently
it supports the linux and linux-sgx platforms on the x86-64 target. It is disabled
by default; developers can use the options below to enable it for wamrc and iwasm
(with LLVM JIT enabled):
```bash
wamrc --enable-segue=[<flags>] -o output_file wasm_file
iwasm --enable-segue=[<flags>] wasm_file [args...]
```
`flags` can be:
i32.load, i64.load, f32.load, f64.load, v128.load,
i32.store, i64.store, f32.store, f64.store, v128.store
Use commas to separate them, e.g. `--enable-segue=i32.load,i64.store`,
and `--enable-segue` alone means all flags are enabled.
Acknowledgement:
Many thanks to Intel Labs, UC San Diego and UT Austin teams for introducing this
technology and the great support and guidance!
Signed-off-by: Wenyong Huang <wenyong.huang@intel.com>
Co-authored-by: Vahldiek-oberwagner, Anjo Lucas <anjo.lucas.vahldiek-oberwagner@intel.com>
Add nightly (UTC time) checks with asan and ubsan, and also move the gcc-4.8 build
to the nightly run since we don't need to run it with every PR.
Co-authored-by: Maksim Litskevich <makslit@amazon.co.uk>
Load the memory data size on each memory access boundary check in
multi-threading mode, since it may be changed by other threads when the
memory grows.
And use `memory->memory_data_size` instead of
`memory->num_bytes_per_page * memory->cur_page_count` to refine
the code.
Use the shared memory's shared_mem_lock to lock the whole atomic.wait and
atomic.notify processes, and use it for os_cond_reltimedwait and os_cond_notify,
so that the whole process is an actual atomic operation:
the original implementation accessed the wait address with shared_mem_lock
but used wait_node->wait_lock for os_cond_reltimedwait, which is not an atomic
operation.
And remove the now-unnecessary wait_map_lock and wait_lock, since the whole
process is already locked by shared_mem_lock.
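A greatly simplified sketch of the locking scheme (WAMR actually uses the per-memory shared_mem_lock with os_cond_reltimedwait and per-address wait lists; the single condition variable here and the return codes follow the wasm atomic.wait convention, the rest is illustrative):
```c
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t shared_mem_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t wait_cond = PTHREAD_COND_INITIALIZER;

/* atomic.wait: checking the value and going to sleep happen under the same
 * lock that atomic.notify takes, so a wake-up cannot be missed in between. */
static int32_t
atomic_wait_i32(volatile uint32_t *addr, uint32_t expected)
{
    int32_t result;
    pthread_mutex_lock(&shared_mem_lock);
    if (*addr != expected) {
        result = 1; /* "not-equal" */
    }
    else {
        pthread_cond_wait(&wait_cond, &shared_mem_lock); /* or a timed wait */
        result = 0; /* "ok": woken by a notify */
    }
    pthread_mutex_unlock(&shared_mem_lock);
    return result;
}

static void
atomic_notify_all(void)
{
    pthread_mutex_lock(&shared_mem_lock);
    pthread_cond_broadcast(&wait_cond);
    pthread_mutex_unlock(&shared_mem_lock);
}
```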