# Samples
- basic: Demonstrating how to use the runtime's exposed APIs to call WASM functions, how to register native functions and call them, and how to call a WASM function from a native function (see the embedding sketch after this list).
- file: Demonstrating the supported file-interaction APIs of WASI. This sample can also demonstrate SGX IPFS (Intel Protected File System), enabling an enclave to seal and unseal data at rest.
- multi-thread: Demonstrating how to run a wasm application that creates multiple threads to execute wasm functions concurrently and uses mutex/cond by calling pthread-related APIs (a guest-side pthread sketch follows this list).
- spawn-thread: Demonstrating how to execute wasm functions of the same wasm application concurrently, in threads created by the host embedder or the runtime rather than by the wasm application itself.
- wasi-threads: Demonstrating how to run a wasm application that creates multiple threads to execute wasm functions concurrently, based on the wasi-threads library.
- multi-module: Demonstrating the multiple-modules-as-dependencies feature, which implements load-time dynamic linking.
- ref-types: Demonstrating how to call wasm functions with arguments of the externref type introduced by the reference types proposal.
- wasm-c-api: Demonstrating how to run some samples from the wasm-c-api proposal and showing the supported APIs.
- socket-api: Demonstrating how to run wasm TCP server and TCP client applications, and how they communicate with each other (see the TCP client sketch after this list).
- native-lib: Demonstrating how to write the required interfaces in a native library, build it into a shared library, and register the shared library with iwasm (see the native-library sketch after this list).
- sgx-ra: Demonstrating how to execute Remote Attestation on SGX with librats, which enables mutual attestation with other runtimes or other entities that support librats to ensure that each is running within the TEE.
- workload: Demonstrating how to build and run some complex workloads, e.g. tensorflow-lite, XNNPACK, wasm-av1, meshoptimizer and bwa.
- debug-tools: Demonstrating how to symbolicate a stack trace.
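
The embedding flow used by the basic sample is roughly: load a `.wasm` binary, instantiate it, look up an exported function, and call it through an execution environment. The sketch below is illustrative rather than the sample's exact code; the buffer, stack/heap sizes, and the exported `add` function are assumptions.

```c
/* Illustrative sketch of the embedding flow used by the "basic" sample.
 * Assumes the wasm module exports add(i32, i32) -> i32; names and sizes
 * are placeholders, not the sample's exact code. */
#include <stdio.h>
#include <stdint.h>
#include "wasm_export.h"

int run_wasm(uint8_t *wasm_buf, uint32_t wasm_size)
{
    char error_buf[128];
    wasm_module_t module = NULL;
    wasm_module_inst_t inst = NULL;
    wasm_exec_env_t exec_env = NULL;
    wasm_function_inst_t func = NULL;
    uint32_t argv[2] = { 10, 32 }; /* arguments in, argv[0] receives the result */
    int ret = -1;

    if (!wasm_runtime_init())
        return -1;

    if (!(module = wasm_runtime_load(wasm_buf, wasm_size,
                                     error_buf, sizeof(error_buf))))
        goto fail;

    if (!(inst = wasm_runtime_instantiate(module, 8 * 1024 /* stack */,
                                          8 * 1024 /* heap */,
                                          error_buf, sizeof(error_buf))))
        goto fail;

    if (!(exec_env = wasm_runtime_create_exec_env(inst, 8 * 1024)))
        goto fail;

    /* Older WAMR versions take an extra "signature" argument here. */
    if (!(func = wasm_runtime_lookup_function(inst, "add")))
        goto fail;

    if (wasm_runtime_call_wasm(exec_env, func, 2, argv)) {
        printf("add(10, 32) = %u\n", argv[0]);
        ret = 0;
    }
    else {
        printf("call failed: %s\n", wasm_runtime_get_exception(inst));
    }

fail:
    if (exec_env) wasm_runtime_destroy_exec_env(exec_env);
    if (inst) wasm_runtime_deinstantiate(inst);
    if (module) wasm_runtime_unload(module);
    wasm_runtime_destroy();
    return ret;
}
```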
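Exposing host functions, as done by the basic and native-lib samples, follows the same pattern: write a native C function that takes the execution environment as its first parameter, describe it in a `NativeSymbol` table, and either register the table from the embedder or export it from a shared library. The function name, the `"env"` module name, and the library name below are placeholders; the sketch assumes the `get_native_lib` entry point and `--native-lib` iwasm option used by the upstream native-lib sample.

```c
/* Sketch of exposing a host function to wasm; names are placeholders.
 * The signature string "(ii)i" means (i32, i32) -> i32. */
#include <stdint.h>
#include <stdbool.h>
#include "wasm_export.h"

/* Every native function takes the exec_env as its first parameter. */
static int32_t host_add(wasm_exec_env_t exec_env, int32_t a, int32_t b)
{
    return a + b;
}

static NativeSymbol native_symbols[] = {
    /* symbol, func_ptr, signature, attachment */
    { "host_add", host_add, "(ii)i", NULL },
};

/* Option 1 (basic sample): register from the embedder after runtime init
 * and before the module is loaded/instantiated. */
static bool register_host_functions(void)
{
    return wasm_runtime_register_natives("env", native_symbols,
                                         sizeof(native_symbols)
                                             / sizeof(NativeSymbol));
}

/* Option 2 (native-lib sample): build this file into a shared library
 * (e.g. libhost.so, name assumed) and pass it to iwasm via --native-lib;
 * iwasm resolves the symbols through this entry point. */
uint32_t get_native_lib(char **p_module_name, NativeSymbol **p_native_symbols)
{
    *p_module_name = "env";
    *p_native_symbols = native_symbols;
    return sizeof(native_symbols) / sizeof(NativeSymbol);
}
```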
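On the guest side, the multi-thread and wasi-threads samples come down to ordinary pthread code compiled to wasm, with WAMR's lib-pthread or the wasi-threads target supplying the implementation. A minimal guest-side sketch; the thread count and shared counter are invented for illustration:

```c
/* Guest-side sketch: plain pthread code compiled to wasm; the counter
 * and thread count are illustrative only. */
#include <stdio.h>
#include <pthread.h>

#define NUM_THREADS 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    counter++; /* shared state protected by the mutex */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);

    printf("counter = %d\n", counter);
    return 0;
}
```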
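The socket-api sample's guest code is close to ordinary BSD socket code compiled for WASI, with the runtime providing the socket implementation (building the upstream sample may additionally use its WASI socket extension headers). A minimal TCP client sketch; the address, port, and message are assumptions:

```c
/* Guest-side sketch of a TCP client; address, port and message are
 * placeholders, error handling is minimal. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr;
    char buf[64] = { 0 };
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) {
        perror("socket");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(1234);                 /* assumed server port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        close(sock);
        return 1;
    }

    send(sock, "hello", 5, 0);
    recv(sock, buf, sizeof(buf) - 1, 0);
    printf("received: %s\n", buf);

    close(sock);
    return 0;
}
```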