Allow waiting for a new debugger connection once the previous one
is disconnected:
- when a detach command is received
- when the client socket is closed (for example, when the lldb process is killed)
Currently we initialize and destroy the LLVM environment in aot_create_comp_context
and aot_destroy_comp_context, which are called from wasm_module_load/unload; since
load/unload may be invoked multiple times, the LLVM environment gets initialized and
destroyed repeatedly, which may result in unexpected behavior.
Move the LLVM init/destroy into runtime init/destroy to resolve the issue.
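A minimal sketch of the idea (the wrapper names are illustrative, not WAMR's actual functions): initialize LLVM once when the runtime starts and shut it down once when the runtime is destroyed, regardless of how many modules are loaded and unloaded in between.

```c
#include <stdbool.h>
#include <llvm-c/Core.h>   /* LLVMShutdown */
#include <llvm-c/Target.h> /* LLVMInitializeNative* */

static bool llvm_env_initialized;

void
runtime_llvm_env_init(void)
{
    if (llvm_env_initialized)
        return;
    LLVMInitializeNativeTarget();
    LLVMInitializeNativeAsmPrinter();
    llvm_env_initialized = true;
}

void
runtime_llvm_env_destroy(void)
{
    if (!llvm_env_initialized)
        return;
    LLVMShutdown();
    llvm_env_initialized = false;
}
```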
Allow having multiple stores in an engine and multiple instances
in a store, and let a wasm_function_t carry its wasm_store_t to make
it more efficient.
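A small usage sketch with the standard wasm-c-api calls (the header name is the one WAMR ships; error checking omitted):

```c
#include "wasm_c_api.h"

void
example_multiple_stores(void)
{
    /* One engine, several stores; each store can host multiple instances. */
    wasm_engine_t *engine = wasm_engine_new();

    wasm_store_t *store1 = wasm_store_new(engine);
    wasm_store_t *store2 = wasm_store_new(engine);

    /* ... load modules and create several instances in each store ... */

    wasm_store_delete(store2);
    wasm_store_delete(store1);
    wasm_engine_delete(engine);
}
```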
Add the macro WASM_ENABLE_WORD_ALIGN_READ to enable reading
1/2/4 and n bytes of data from a VRAM buffer that only supports
4-byte-aligned read access.
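An illustrative helper (not WAMR's implementation) showing the kind of access the macro enables: only 4-byte-aligned 32-bit loads touch the buffer, and the requested bytes are extracted from those words.

```c
#include <stdint.h>
#include <string.h>

/* Read `n` bytes starting at `src` from a buffer that only tolerates
 * 4-byte-aligned 32-bit loads (e.g. a VRAM region). */
static void
aligned_read(void *dst, const void *src, uint32_t n)
{
    uintptr_t addr = (uintptr_t)src;
    uintptr_t base = addr & ~(uintptr_t)3; /* round down to 4-byte boundary */
    uint32_t offset = (uint32_t)(addr - base);
    uint8_t *out = dst;

    while (n > 0) {
        uint32_t word = *(const volatile uint32_t *)base; /* aligned load */
        uint32_t avail = 4 - offset;
        uint32_t copy = n < avail ? n : avail;
        memcpy(out, (uint8_t *)&word + offset, copy);
        out += copy;
        n -= copy;
        base += 4;
        offset = 0;
    }
}
```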
Eliminate the XIP AOT relocations related to the following symbols:
i32_div_u, f32_min, f32_max, f32_ceil, f32_floor, f32_trunc, f32_rint
Change the wasm-c-api default log level to output fewer logs by default:
- For debug mode, change the log level from 5 to 4
- For release mode, change the log level from 3 to 2
The host embedder may create/delete the wasm-c-api engine simultaneously
in multiple threads, which requires a lock around those operations. Since
wasm-c-api doesn't provide one-time global init/destroy APIs, we define a
global lock, initialize it with the thread mutex static initializer if the
platform supports one, and use it to guard the engine operations.
If the platform doesn't support a static mutex initializer, the developer
is required to create the lock themselves to ensure the engine operations
are thread-safe.
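A sketch of the guarded engine new/delete described above, assuming a platform with a static mutex initializer (pthread shown; the wasm-c-api calls are the standard ones):

```c
#include <pthread.h>
#include "wasm_c_api.h"

/* Global lock, usable without a one-time global init API. */
static pthread_mutex_t engine_lock = PTHREAD_MUTEX_INITIALIZER;

wasm_engine_t *
engine_new_locked(void)
{
    pthread_mutex_lock(&engine_lock);
    wasm_engine_t *engine = wasm_engine_new();
    pthread_mutex_unlock(&engine_lock);
    return engine;
}

void
engine_delete_locked(wasm_engine_t *engine)
{
    pthread_mutex_lock(&engine_lock);
    wasm_engine_delete(engine);
    pthread_mutex_unlock(&engine_lock);
}
```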
Allow unregistering (or unloading) previously registered native libs,
so that the whole engine no longer needs to be restarted with
`wasm_runtime_destroy/wasm_runtime_init`.
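A hedged sketch of the register/unregister pairing: `wasm_runtime_register_natives` is WAMR's existing API, while the exact signature of the unregister call below is an assumption based on this change (check wasm_export.h):

```c
#include "wasm_export.h"

static int32_t
host_add(wasm_exec_env_t exec_env, int32_t a, int32_t b)
{
    return a + b;
}

static NativeSymbol native_symbols[] = {
    /* symbol, function pointer, signature, attachment */
    { "host_add", host_add, "(ii)i", NULL },
};

void
register_then_unregister(void)
{
    wasm_runtime_register_natives("env", native_symbols,
                                  sizeof(native_symbols) / sizeof(NativeSymbol));

    /* ... instantiate and run modules that import env.host_add ... */

    /* Assumed signature of the newly added unregister API. */
    wasm_runtime_unregister_natives("env", native_symbols);
}
```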
The general optimizations may create intrinsic function calls such as
llvm.memset, so we move the indirect-mode optimization after them so that
these intrinsic calls can still be removed at the end.
Signed-off-by: Huang Qi <huangqi3@xiaomi.com>
Some offsets can be obtained directly at the compilation stage now that the
interp/AOT module instance refactoring PR has been merged, which removes some
unnecessary load instructions and improves Fast JIT performance (see the sketch
after this list):
- Access fields of the wasm memory instance structure
- Access fields of the wasm table instance structure
- Access the global data
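A simplified illustration of why a fixed layout helps (stand-in struct and field names, not WAMR's exact definitions): the field address becomes the instance base plus a compile-time constant, so the JIT can emit a single load instead of chasing pointers at run time.

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in for a memory instance whose layout is fixed at compile time. */
typedef struct MemoryInstanceLayout {
    uint32_t num_bytes_per_page;
    uint32_t cur_page_count;
    uint8_t *memory_data;
} MemoryInstanceLayout;

/* Known when the JIT code is generated, so it can be folded into the
 * emitted load as an immediate offset from the instance base pointer. */
enum { MEMORY_DATA_OFFSET = offsetof(MemoryInstanceLayout, memory_data) };

static uint8_t *
load_memory_data(uint8_t *mem_inst_base)
{
    /* The single load the JIT emits: *(base + constant offset). */
    return *(uint8_t **)(mem_inst_base + MEMORY_DATA_OFFSET);
}
```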
Translate the call_indirect opcode by calling wasm functions with Fast JIT IRs
instead of calling the jit_call_indirect runtime API, so as to improve performance.
Translate the call-native-function process with Fast JIT IRs to validate each
pointer argument and convert it into a native address, and then call the native
function directly instead of calling the jit_invoke_native runtime API, so as to
improve performance.
Refactor the LLVM JIT for several purposes:
- To simplify the source code of JIT compilation
- To simplify the JIT modes
- To align with the latest LLVM changes
- To prepare for Multi-tier JIT compilation, refer to #1302
The changes mainly include:
- Remove the MCJIT mode and replace it with the ORC JIT eager mode (see the sketch below)
- Remove the LLVM legacy pass manager (keep only the LLVM new pass manager)
- Change the lazy mode's LLVM module/function binding: instead of putting each
function in its own LLVM module, put all functions in a single LLVM module
- Upgrade ORC JIT to ORCv2 JIT to enable lazy compilation
Refer to #1468
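A minimal sketch of the ORC (LLJIT) eager flow that replaces MCJIT, using the LLVM-C API; error handling is omitted and this only shows the shape of the API usage, not WAMR's code:

```c
#include <llvm-c/Orc.h>
#include <llvm-c/LLJIT.h>

/* Add one LLVM module (holding all the functions) to an eagerly
 * compiling LLJIT instance. */
void
jit_module_eagerly(LLVMModuleRef module, LLVMOrcThreadSafeContextRef ts_ctx)
{
    LLVMOrcLLJITRef lljit = NULL;

    /* A NULL builder yields the default (eager) LLJIT configuration. */
    LLVMOrcCreateLLJIT(&lljit, NULL);

    LLVMOrcThreadSafeModuleRef tsm =
        LLVMOrcCreateNewThreadSafeModule(module, ts_ctx);
    LLVMOrcLLJITAddLLVMIRModule(lljit, LLVMOrcLLJITGetMainJITDylib(lljit), tsm);

    /* ... look up entry symbols and execute, then ... */
    LLVMOrcDisposeLLJIT(lljit);
}
```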
Refactor the layout of the interpreter and AOT module instance:
- Unify the interp/AOT module instance: use the same WASMModuleInstance/
WASMMemoryInstance/WASMTableInstance data structures for both the interpreter
and AOT
- Make the offsets of most fields in the module instance the same for both the
interpreter and AOT; append the memory instance structures, global data and
table instances to the end of the module instance in interpreter mode (as AOT
mode already does)
- For the extra fields of the WASM module instance, use WASMModuleInstanceExtra
to create a field `e` for the interpreter (see the simplified sketch below)
- Change the LLVM JIT module instance creation process: LLVM JIT uses the same
WASM module and module instance as the interpreter/Fast JIT mode, so that Fast
JIT and LLVM JIT can access the same data structures, which makes it possible
to implement Multi-tier JIT (tier-up from Fast JIT to LLVM JIT) in the future
- Unify some APIs: merge the APIs for module instance and memory instance
related operations (only one copy is implemented)
Note that the AOT ABI stays the same: the AOT file format, AOT relocation types,
how AOT code accesses the AOT module instance and so on are kept unchanged.
Refer to:
https://github.com/bytecodealliance/wasm-micro-runtime/issues/1384
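A heavily simplified sketch of the unified layout idea; the field names and their order below are illustrative stand-ins, not the actual WAMR definitions:

```c
#include <stdint.h>

typedef struct WASMMemoryInstance WASMMemoryInstance;
typedef struct WASMTableInstance WASMTableInstance;
/* Interpreter-only state lives behind a single pointer. */
typedef struct WASMModuleInstanceExtra WASMModuleInstanceExtra;

typedef struct WASMModuleInstance {
    /* Fields shared by interpreter and AOT, kept at the same offsets so
       Fast JIT and LLVM JIT can address them uniformly (illustrative). */
    WASMMemoryInstance **memories;
    WASMTableInstance **tables;
    uint8_t *global_data;

    /* Extra interpreter-only fields; unused for AOT. */
    WASMModuleInstanceExtra *e;

    /* Memory instances, global data and table instances are appended
       after this header in interpreter mode, as AOT mode already did. */
} WASMModuleInstance;
```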
Initial integration of WASI-NN based on #1225:
- Implement the library core/iwasm/libraries/wasi-nn
- Support TensorFlow, CPU, F32 at the first stage
- Add cmake variable `-DWAMR_BUILD_WASI_NN`
- Add a test case based on a Docker image and update the documentation
Refer to #1573
Add a couple of socket examples that can be used with WAMR:
- The `timeout_client` and `timeout_server` examples demonstrate socket
send and receive timeouts using the socket options (see the sketch below)
- The `multicast_client` and `multicast_server` examples demonstrate receiving
multicast packets in WASM
Also add several macro controls for the `socket_opts` example.
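For reference, the kind of receive-timeout setup the `timeout_*` examples demonstrate looks roughly like this (plain POSIX socket options, not the examples' exact code):

```c
#include <sys/socket.h>
#include <sys/time.h>

/* Make blocking recv() calls give up after the given timeout. */
static int
set_recv_timeout(int sockfd, long seconds, long microseconds)
{
    struct timeval tv = { .tv_sec = seconds, .tv_usec = microseconds };
    return setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
}
```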
While compiling the file wasi_socket_ext.c with pedantic options (typically
`-Wimplicit-int-conversion` and `-Wmissing-prototypes`), some warnings are raised.
This PR addresses those warnings by adding the missing `static` qualifiers to
internal functions and by explicitly casting a narrowing conversion.
It also fixes the error handling after calling getpeername.
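An illustration of the two kinds of fixes on a hypothetical helper (not the actual wasi_socket_ext.c code): `static` silences `-Wmissing-prototypes` for an internal function, and the explicit cast makes the narrowing that `-Wimplicit-int-conversion` warns about intentional.

```c
#include <stdint.h>

/* `static`: internal helper, so no prototype in a header is required. */
static uint16_t
port_from_int(int value)
{
    /* Explicit cast documents the intentional int -> uint16_t narrowing. */
    return (uint16_t)value;
}
```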
The function was introduced to WASI about half a year ago, after it had already
existed in WAMR.
This caused problems when compiling `wasi_socket_ext.c` with a wasi-sdk that
already has this hostcall exported (wasi-sdk >= 15).
The approach we take is the following:
- we update the WASI interface to be compatible with wasi_snapshot_preview1 [1]
- compilation with `wasi_socket_ext.c` supports both wasi-sdk >= 15 and wasi-sdk < 15
(although we intend to drop support for < 15 at some point)
- we override `accept()` from wasi-libc; we do that because `accept()` in wasi-libc
doesn't support returning the peer address (as it doesn't have `getpeername()`
implemented), so `wasi_socket_ext.c` offers more functionality right now
Resolves #1167 and #1528.
[1] https://github.com/WebAssembly/WASI/blob/main/phases/snapshot/witx/wasi_snapshot_preview1.witx
Memory num_bytes_per_page was incorrectly set when enlarging a shared memory;
fix it, and don't set memory_data_size again for shared memory.
Implement more socket APIs, refer to #1336 and the PRs below:
- Implement wasi_addr_resolve function (#1319)
- Fix a socket-api byte order issue when the host/network byte orders are the same (#1327)
- Enhance sock_addr_local syscall (#1320)
- Implement sock_addr_remote syscall (#1360)
- Add support for IPv6 in WAMR (#1411)
- Implement ns lookup allowlist (#1420)
- Implement sock_send_to and sock_recv_from system calls (#1457)
- Add http downloader and multicast socket options (#1467)
- Fix `bind()` calls to receive the correct size of `sockaddr` structure (#1490)
- Assert on correct parameters (#1505)
- Copy only received bytes from socket recv buffer into the app buffer (#1497)
Co-authored-by: Marcin Kolny <mkolny@amazon.com>
Co-authored-by: Marcin Kolny <marcin.kolny@gmail.com>
Co-authored-by: Callum Macmillan <callumimacmillan@gmail.com>
Fix two issues when building WAMR on Windows:
- The build_llvm.py script calls itself, spawning instances faster than they exit,
which makes Python3 eat up the entire RAM in a fairly short time.
- The MSVC compiler doesn't support preprocessor statements inside macro expressions;
two such places were found inside bh_assert().
If a WASM app has called pthread_detach() to detach a thread, the thread will be
detached again when it exits. Attempting to detach an already-detached thread may
crash in musl-libc. This patch fixes it.
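A sketch of the guard with hypothetical bookkeeping (not the runtime's actual data structures): record that the app already detached the thread and skip the second detach on exit.

```c
#include <pthread.h>
#include <stdbool.h>

typedef struct thread_info {
    pthread_t handle;
    bool detached; /* set when the app itself calls pthread_detach() */
} thread_info;

/* Called from the pthread_detach() wrapper when the app detaches the thread. */
static void
mark_detached(thread_info *info)
{
    info->detached = true;
    pthread_detach(info->handle);
}

/* Called on thread exit: avoid detaching an already-detached thread. */
static void
detach_on_exit(thread_info *info)
{
    if (!info->detached) {
        pthread_detach(info->handle);
        info->detached = true;
    }
}
```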
Also enable the classic interpreter instead of the fast interpreter when LLVM JIT
is enabled, so as to fix the issue that LLVM JIT cannot handle the opcodes
drop_64/select_64.
Remove the handling of the opcodes DROP_64/SELECT_64 in the loader stage
prepare_bytecode, as they are modified variants of DROP/SELECT introduced for
optimization purposes, not opcodes defined by the spec.