Fix the errors reported by the sanitizer tests in the nightly-run CI.
When the stack is in a polymorphic state, the stack operands may change
after pop and push operations (e.g. the stack is empty but a pop operation can
still succeed in the polymorphic state, and a push operation can then push a new
operand onto the stack); this may impact the subsequent checks against the other
target blocks of the br_table opcode.
Implement the GC (Garbage Collection) feature for interpreter mode,
AOT mode and LLVM-JIT mode, supporting most features of the latest
spec proposal, and also enable the stringref feature.
Use `cmake -DWAMR_BUILD_GC=1/0` to enable/disable the feature,
and `wamrc --enable-gc` to generate an AOT file with GC support.
Also bump the AOT file version from 2 to 3, since there are many AOT
ABI breaks, including changes to the AOT file format, changes to the
AOT module/memory instance layouts, the AOT runtime APIs invoked by
AOT code, and so on.
This PR adds the initial support for WASM exception handling:
* Inside the classic interpreter only:
  * Initial handling of Tags
  * Initial handling of Exceptions based on the W3C exception handling proposal
  * Import and export of Exceptions and Tags
* Add the `cmake -DWAMR_BUILD_EXCE_HANDLING=1/0` option to enable/disable
  the feature; it is disabled by default
* Update the wamr-test-suites scripts to test the feature
* Additional CI/CD changes to validate the exception spec proposal cases
Refer to:
https://github.com/bytecodealliance/wasm-micro-runtime/issues/1884
Signed-off-by: Ricardo Aguilar <ricardoaguilar@siemens.com>
Co-authored-by: Chris Woods <chris.woods@siemens.com>
Co-authored-by: Rene Ermler <rene.ermler@siemens.com>
Co-authored-by: Trenner Thomas <trenner.thomas@siemens.com>
Allow invoking the quick call entry wasm_runtime_quick_invoke_c_api_import to
call the wasm-c-api import functions, which speeds up the calling process by
reducing data copying.
Use `wamrc --invoke-c-api-import` to generate the optimized AOT code, and set
`jit_options->quick_invoke_c_api_import` to true in wasm_engine_new when LLVM JIT
is enabled.
In some scenarios there may be many calls to AOT/JIT functions from the host
embedder, which requires good performance for the calling process. In the current
implementation, the runtime calls wasm_runtime_invoke_native to prepare the array
of registers and stack slots for the invokeNative assembly code, which then moves
the elements of the array into physical registers and the native stack and calls
the AOT/JIT function; this involves a lot of data copying and handling that hurts
performance.
This PR registers quick AOT/JIT call entries for some simple wasm signatures and
lets the runtime call such an entry to invoke the AOT/JIT function directly instead
of going through wasm_runtime_invoke_native, which speeds up the calling process.
We may extend the mechanism later to allow developers to register their own quick
AOT/JIT entries, to speed up invoking the AOT/JIT functions for other specific
signatures.
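Below is a minimal conceptual sketch of the quick-entry idea; the structures and
function names are illustrative only, not WAMR's actual internal API. It shows a
per-signature entry that calls the target function directly, plus a lookup that
falls back to the generic wasm_runtime_invoke_native path when no quick entry
matches.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical quick-entry type: invokes the target function directly,
   reading arguments from and writing the result back to argv. */
typedef void (*QuickEntry)(void *exec_env, void *func_ptr,
                           uint32_t *argv, uint32_t argc);

/* Quick entry for the simple signature "(i32, i32) -> i32" */
static void
quick_entry_ii_i(void *exec_env, void *func_ptr, uint32_t *argv, uint32_t argc)
{
    typedef int32_t (*FuncII_I)(void *, int32_t, int32_t);
    (void)argc;
    argv[0] = (uint32_t)((FuncII_I)func_ptr)(exec_env, (int32_t)argv[0],
                                             (int32_t)argv[1]);
}

typedef struct QuickEntryRecord {
    const char *signature; /* e.g. "(ii)i": two i32 params, one i32 result */
    QuickEntry entry;
} QuickEntryRecord;

static QuickEntryRecord quick_entries[] = {
    { "(ii)i", quick_entry_ii_i },
};

/* Return a registered quick entry for the signature, or NULL when the
   runtime must fall back to the generic invoke-native path. */
static QuickEntry
lookup_quick_entry(const char *signature)
{
    for (size_t i = 0; i < sizeof(quick_entries) / sizeof(quick_entries[0]); i++)
        if (!strcmp(quick_entries[i].signature, signature))
            return quick_entries[i].entry;
    return NULL;
}
```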
Also refactor the original perf support:
- use WAMR_BUILD_LINUX_PERF as the cmake compilation control
- use WASM_ENABLE_LINUX_PERF as the compiler macro
- use `wamrc --enable-linux-perf` to generate an AOT file which contains fp (frame pointer) operations
- use `iwasm --enable-linux-perf` to create a perf map for `perf record`
- Fix the op_br_table arity type check when the destination block is a loop block
- Fix an op_drop issue when the stack is polymorphic and an ANY type value
  is to be dropped from the stack
When labels-as-values is enabled on a target which doesn't support
unaligned address access, a 16-bit offset is used to store the relative
offset between two opcode labels. But that is a little small, and the loader
may report a "pre-compiled label offset out of range" error.
Emit 32-bit data instead to resolve the issue: emit the label address on
32-bit targets and a 32-bit relative offset on 64-bit targets.
See also:
https://github.com/bytecodealliance/wasm-micro-runtime/issues/2635
`wasm_loader_push_pop_frame_offset` may pop n operands, using
`loader_ctx->stack_cell_num` to check whether an operand can be
popped or not. Since `loader_ctx->stack_cell_num` is only updated in the
later `wasm_loader_push_pop_frame_ref`, the check may fail when the stack
is in a polymorphic state and lead to `ctx->frame_offset` underflow.
Fix issues #2577 and #2586.
Segue is an optimization technique which uses an x86 segment register to store
the WebAssembly linear memory base address, so as to remove most of the cost
of SFI (Software-based Fault Isolation) base addition and free up a general
purpose register. In this way it may:
- Improve the performance of JIT/AOT
- Reduce the footprint of JIT/AOT, as the generated JIT/AOT code is smaller
- Reduce the compilation time of JIT/AOT
This PR uses the x86-64 GS segment register to apply the optimization. Currently
it supports the linux and linux-sgx platforms on the x86-64 target. It is disabled
by default; developers can use the options below to enable it for wamrc and iwasm
(with LLVM JIT enabled):
```bash
wamrc --enable-segue=[<flags>] -o output_file wasm_file
iwasm --enable-segue=[<flags>] wasm_file [args...]
```
`flags` can be:
i32.load, i64.load, f32.load, f64.load, v128.load,
i32.store, i64.store, f32.store, f64.store, v128.store
Use commas to separate them, e.g. `--enable-segue=i32.load,i64.store`,
and `--enable-segue` without flags means all flags are enabled.
Acknowledgement:
Many thanks to Intel Labs, UC San Diego and UT Austin teams for introducing this
technology and the great support and guidance!
Signed-off-by: Wenyong Huang <wenyong.huang@intel.com>
Co-authored-by: Vahldiek-oberwagner, Anjo Lucas <anjo.lucas.vahldiek-oberwagner@intel.com>
When a ref.func opcode refers to a function whose function index is no smaller than
the current function's index, the destination function should be forward-declared:
it is declared in the table element segments, or it is declared in the export list.
Multiple threads generated from the same module should use the same
lock to protect the atomic operations.
Before this PR, each thread used a different lock to protect atomic
operations (e.g. atomic add), making the lock ineffective.
Fix #1958.
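A tiny sketch of the idea behind the fix, using illustrative structures rather
than WAMR's real module/instance layout: the lock lives in the shared module, so
every thread spawned from that module serializes on the same mutex.

```c
#include <pthread.h>
#include <stdint.h>

/* Illustrative structures only, not WAMR's actual internals. */
typedef struct Module {
    pthread_mutex_t shared_mem_lock; /* one lock shared by all instances */
} Module;

typedef struct ModuleInstance {
    Module *module;
} ModuleInstance;

static void
atomic_i32_add(ModuleInstance *inst, int32_t *addr, int32_t value)
{
    /* Every thread spawned from the same module locks the same mutex, so the
       read-modify-write below is serialized. A per-thread lock (the pre-fix
       behavior) would not exclude other threads. */
    pthread_mutex_lock(&inst->module->shared_mem_lock);
    *addr += value;
    pthread_mutex_unlock(&inst->module->shared_mem_lock);
}
```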
Enable setting the running mode when executing a wasm bytecode file:
- Four running modes are supported: interpreter, fast-jit, llvm-jit and multi-tier-jit
- Add APIs to set/get the default running mode of the runtime
- Add APIs to set/get the running mode of a wasm module instance
- Add running mode options for the iwasm command line tool
Also add size/opt level options for LLVM JIT
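A minimal sketch of how an embedder might use the new running-mode APIs, assuming
the declarations in wasm_export.h of a WAMR build with the corresponding modes
enabled (error handling omitted):

```c
#include "wasm_export.h"

static void
configure_running_modes(wasm_module_inst_t module_inst)
{
    /* Default running mode used by module instances created afterwards */
    wasm_runtime_set_default_running_mode(Mode_Fast_JIT);

    /* Override the running mode for this particular module instance */
    wasm_runtime_set_running_mode(module_inst, Mode_LLVM_JIT);

    /* Query the running mode actually in effect for the instance */
    RunningMode mode = wasm_runtime_get_running_mode(module_inst);
    (void)mode;
}
```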
Use import_function_count rather than import_count to calculate
the func_index in handle_name_section when the custom name section
feature is enabled.
Also clear the compile warnings of the mini loader.
Implement a 2-level Multi-tier JIT engine: tier up from Fast JIT to LLVM JIT to
get quick cold startup with Fast JIT and better performance by gradually
switching to LLVM JIT once the LLVM JIT functions are compiled by the
backend threads.
Refer to:
https://github.com/bytecodealliance/wasm-micro-runtime/issues/1302
Limit max_stack_cell_num/max_csp_num to be no larger than UINT16_MAX,
and don't check all_cell_num in the interpreter again.
Also refine some code in the interpreter.
Refine the LLVM IRs generated at the beginning of each LLVM AOT/JIT function
to speed up the LLVM IR optimization:
- Only create argv_buf if there are func calls in this function
- Only create native stack bound if stack bound check is enabled
- Only create aux stack info if there is opcode set_global_aux_stack
- Only create native symbol if indirect_mode is enabled
- Only create memory info if there are memory operations
- Only create func_type_indexes if there is opcode call_indirect
Refactor LLVM JIT for some purposes:
- To simplify the source code of JIT compilation
- To simplify the JIT modes
- To align with LLVM latest changes
- To prepare for the Multi-tier JIT compilation, refer to #1302
The changes mainly include:
- Remove the MCJIT mode, replace it with ORC JIT eager mode
- Remove the LLVM legacy pass manager (only keep the LLVM new pass manager)
- Change the lazy mode's LLVM module/function binding: instead of putting each
  function in an individual LLVM module, put all functions in a single LLVM module
- Upgrade ORC JIT to ORCv2 JIT to enable lazy compilation
Refer to #1468
Refactor the layout of interpreter and AOT module instance:
- Unify the interp/AOT module instance, use the same WASMModuleInstance/
WASMMemoryInstance/WASMTableInstance data structures for both interpreter
and AOT
- Make the offset of most fields the same in module instance for both interpreter
and AOT, append memory instance structure, global data and table instances to
the end of module instance for interpreter mode (like AOT mode)
- For extra fields in WASM module instance, use WASMModuleInstanceExtra to
create a field `e` for interpreter
- Change the LLVM JIT module instance creation process: LLVM JIT uses the same WASM
  module and module instance as the interpreter/Fast JIT mode, so that Fast JIT
  and LLVM JIT can access the same data structures, making it possible to
  implement the Multi-tier JIT (tier-up from Fast JIT to LLVM JIT) in the future
- Unify some APIs: merge some APIs for module instance and memory instance's
related operations (only implement one copy)
Note that the AOT ABI is unchanged: the AOT file format, AOT relocation types, how AOT
code accesses the AOT module instance, and so on are kept the same.
Refer to:
https://github.com/bytecodealliance/wasm-micro-runtime/issues/1384
Also enable the classic interpreter instead of the fast interpreter when LLVM JIT is
enabled, so as to fix the issue that LLVM JIT cannot handle the opcodes drop_64/select_64.
Remove the handling of the opcodes DROP_64/SELECT_64 in the loader stage
prepare_bytecode, as they are modified versions of DROP/SELECT introduced
for optimization purposes, not opcodes defined by the spec.
Normalize wasm types: if two wasm types have the same parameter types
and result types, only one copy is saved, so as to reduce the footprint
and simplify the type comparison in the opcode CALL_INDIRECT (see the
sketch below).
Also fix an issue in the interpreter's globals_instantiate, and remove unused code.
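A short sketch of the normalization idea, using illustrative structures rather
than WAMR's actual type representation: identical signatures are stored once, so
CALL_INDIRECT can compare types by index instead of comparing full signatures.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Illustrative function-type record (not WAMR's real layout). */
typedef struct FuncType {
    uint16_t param_count;
    uint16_t result_count;
    uint8_t types[16]; /* param value types followed by result value types */
} FuncType;

static bool
func_type_equal(const FuncType *a, const FuncType *b)
{
    return a->param_count == b->param_count
           && a->result_count == b->result_count
           && !memcmp(a->types, b->types,
                      (size_t)a->param_count + b->result_count);
}

/* Return the index of an existing identical type, or append the new one
   (assumes the types array has spare capacity). */
static uint32_t
normalize_func_type(FuncType **types, uint32_t *type_count, FuncType *new_type)
{
    for (uint32_t i = 0; i < *type_count; i++) {
        if (func_type_equal(types[i], new_type))
            return i; /* identical signature already saved: reuse it */
    }
    types[*type_count] = new_type; /* first occurrence: keep this single copy */
    return (*type_count)++;
}
```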
Reserve one pointer size for the fast-interp code_compiled_size: if the last opcode of
the current function is to be dropped (e.g. OP_DROP), the peak memory usage will
be larger than the final code_compiled_size; record the peak size to ensure
there is no invalid memory access during the second traversal.
Do not clear the last label's polymorphic state after the current label is popped
Fix invalid func_idx check in opcode REF_FUNC
Add check when there are extra unneeded bytecodes for a wasm function