Path: blob/main/crates/cranelift/src/translate/code_translator.rs
//! This module contains the bulk of the interesting code performing the translation between
//! WebAssembly and Cranelift IR.
//!
//! The translation is done in one pass, opcode by opcode. Two main data structures are used during
//! code translation: the value stack and the control stack. The value stack mimics the execution
//! of the WebAssembly stack machine: each instruction result is pushed onto the stack and
//! instruction arguments are popped off the stack. Similarly, when encountering a control flow
//! block, it is pushed onto the control stack and popped off when encountering the corresponding
//! `End`.
//!
//! Another data structure, the translation state, records information concerning unreachable code
//! status and whether inserting a return at the end of the function is necessary.
//!
//! Some of the WebAssembly instructions need information about the environment for which they
//! are being translated:
//!
//! - the loads and stores need the memory base address;
//! - the `get_global` and `set_global` instructions depend on how the globals are implemented;
//! - `memory.size` and `memory.grow` are runtime functions;
//! - `call_indirect` has to translate the function index into the address of the function it
//!   refers to;
//!
//! That is why `translate_function_body` takes an object having the `WasmRuntime` trait as
//! argument.
//!
//! There is extra complexity associated with translation of 128-bit SIMD instructions.
//! Wasm only considers there to be a single 128-bit vector type. But CLIF's type system
//! distinguishes different lane configurations, so it considers 8X16, 16X8, 32X4 and 64X2 to be
//! different types. The result is that, in wasm, it's perfectly OK to take the output of (e.g.)
//! an `add.16x8` and use that as an operand of a `sub.32x4`, without using any cast. But when
//! translated into CLIF, that will cause a verifier error due to the apparent type mismatch.
//!
//! This file works around that problem by liberally inserting `bitcast` instructions in many
//! places -- mostly, before the use of vector values, either as arguments to CLIF instructions
//! or as block actual parameters. These are no-op casts which nevertheless have different
//! input and output types, and are used (mostly) to "convert" 16X8, 32X4 and 64X2-typed vectors
//! to the "canonical" type, 8X16. Hence the functions `optionally_bitcast_vector`,
//! `bitcast_arguments`, `pop*_with_bitcast`, `canonicalise_then_jump`,
//! `canonicalise_then_br{z,nz}`, `is_non_canonical_v128` and `canonicalise_v128_values`.
//! Note that the `bitcast*` functions are occasionally used to convert to some type other than
//! 8X16, but the `canonicalise*` functions always convert to type 8X16.
//!
//! Be careful when adding support for new vector instructions, and when adding new jumps, even
//! if they apparently don't have any connection to vectors. Never generate any kind of
//! (inter-block) jump directly. Instead use `canonicalise_then_jump` and
//! `canonicalise_then_br{z,nz}`.
//!
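//! For example, a minimal sketch of the canonicalisation pattern used throughout this
//! file (an illustrative doc sketch, not a compiled doctest; `destination` and the
//! argument count `n` are placeholders, while the helpers are the ones defined later
//! in this file):
//!
//! ```ignore
//! // A v128 value produced by, say, an `i16x8.add`:
//! let mut val = environ.stacks.pop1();
//! // Convert it to the canonical lane configuration, I8X16, before further use.
//! if builder.func.dfg.value_type(val).is_vector() {
//!     val = optionally_bitcast_vector(val, I8X16, builder);
//! }
//! environ.stacks.push1(val);
//! // Inter-block jumps go through the canonicalising wrapper rather than
//! // `builder.ins().jump(..)`, so any v128 block arguments are bitcast to I8X16 first.
//! canonicalise_then_jump(builder, destination, environ.stacks.peekn(n));
//! ```
//!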
//! The use of bitcasts is ugly and inefficient, but currently unavoidable:
//!
//! * they make the logic in this file fragile: miss out a bitcast for any reason, and there is
//!   the risk of the system failing in the verifier. At least for debug builds.
//!
//! * in the new backends, they potentially interfere with pattern matching on CLIF -- the
//!   patterns need to take into account the presence of bitcast nodes.
//!
//! * in the new backends, they get translated into machine-level vector-register-copy
//!   instructions, none of which are actually necessary. We then depend on the register
//!   allocator to coalesce them all out.
//!
//! * they increase the total number of CLIF nodes that have to be processed, hence slowing down
//!   the compilation pipeline. Also, the extra coalescing work generates a slowdown.
//!
//! A better solution which would avoid all four problems would be to remove the 8X16, 16X8,
//! 32X4 and 64X2 types from CLIF and instead have a single V128 type.
//!
//! For further background see also:
//! <https://github.com/bytecodealliance/wasmtime/issues/1147>
//! ("Too many raw_bitcasts in SIMD code")
//! <https://github.com/bytecodealliance/cranelift/pull/1251>
//! ("Add X128 type to represent WebAssembly's V128 type")
//! <https://github.com/bytecodealliance/cranelift/pull/1236>
//! ("Relax verification to allow I8X16 to act as a default vector type")

use crate::Reachability;
use crate::bounds_checks::{BoundsCheck, bounds_check_and_compute_addr};
use crate::func_environ::{Extension, FuncEnvironment};
use crate::translate::TargetEnvironment;
use crate::translate::environ::StructFieldsVec;
use crate::translate::stack::{ControlStackFrame, ElseData};
use crate::translate::translation_utils::{
    block_with_params, blocktype_params_results, f32_translation, f64_translation,
};
use cranelift_codegen::ir::condcodes::{FloatCC, IntCC};
use cranelift_codegen::ir::immediates::Offset32;
use cranelift_codegen::ir::{
    self, AtomicRmwOp, ExceptionTag, InstBuilder, JumpTableData, MemFlags, Value, ValueLabel,
};
use cranelift_codegen::ir::{BlockArg, types::*};
use cranelift_codegen::packed_option::ReservedValue;
use cranelift_frontend::{FunctionBuilder, Variable};
use itertools::Itertools;
use smallvec::{SmallVec, ToSmallVec};
use std::collections::{HashMap, hash_map};
use std::vec::Vec;
use wasmparser::{FuncValidator, MemArg, Operator, WasmModuleResources};
use wasmtime_environ::{
    DataIndex, ElemIndex, FuncIndex, GlobalIndex, MemoryIndex, TableIndex, TagIndex, TypeConvert,
    TypeIndex, WasmHeapType, WasmRefType, WasmResult, WasmValType, wasm_unsupported,
};

/// Given a `Reachability<T>`, unwrap the inner `T` or, when unreachable, set
/// `state.reachable = false` and return.
///
/// Used in combination with calling `prepare_addr` and `prepare_atomic_addr`
/// when we can statically determine that a Wasm access will unconditionally
/// trap.
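///
/// An illustrative sketch (not a compiled doctest) of how this macro is invoked
/// further below in this file, for an operation whose address computation may turn
/// out to be statically unreachable:
///
/// ```ignore
/// unwrap_or_return_unreachable_state!(
///     environ,
///     translate_load(memarg, ir::Opcode::Load, I32, builder, environ)?
/// );
/// ```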
macro_rules! unwrap_or_return_unreachable_state {
    ($environ:ident, $value:expr) => {
        match $value {
            Reachability::Reachable(x) => x,
            Reachability::Unreachable => {
                $environ.stacks.reachable = false;
                return Ok(());
            }
        }
    };
}

/// Translates wasm operators into Cranelift IR instructions.
pub fn translate_operator(
    validator: &mut FuncValidator<impl WasmModuleResources>,
    op: &Operator,
    operand_types: Option<&[WasmValType]>,
    builder: &mut FunctionBuilder,
    environ: &mut FuncEnvironment<'_>,
) -> WasmResult<()> {
    log::trace!("Translating Wasm opcode: {op:?}");

    if !environ.is_reachable() {
        translate_unreachable_operator(validator, &op, builder, environ)?;
        return Ok(());
    }

    // Given that we believe the current block is reachable, the FunctionBuilder ought to agree.
    debug_assert!(!builder.is_unreachable());
    let srcloc = builder.srcloc();

    let operand_types = operand_types.unwrap_or_else(|| {
        panic!("should always have operand types available for valid, reachable ops; op = {op:?}")
    });

    // This big match treats all Wasm code operators.
    match op {
        /********************************** Locals ****************************************
         * `get_local` and `set_local` are treated as non-SSA variables and will completely
         * disappear in the Cranelift Code
         ***********************************************************************************/
        Operator::LocalGet { local_index } => {
            let val = builder.use_var(Variable::from_u32(*local_index));
            environ.stacks.push1(val);
            let label = ValueLabel::from_u32(*local_index);
            builder.set_val_label(val, label);
        }
        Operator::LocalSet { local_index } => {
            let mut val = environ.stacks.pop1();

            // Ensure SIMD values are cast to their default Cranelift type, I8x16.
            let ty = builder.func.dfg.value_type(val);
            if ty.is_vector() {
                val = optionally_bitcast_vector(val, I8X16, builder);
            }

            builder.def_var(Variable::from_u32(*local_index), val);
            let label = ValueLabel::from_u32(*local_index);
            builder.set_val_label(val, label);
            environ.state_slot_local_set(builder, *local_index, val);
        }
        Operator::LocalTee { local_index } => {
            let mut val = environ.stacks.peek1();

            // Ensure SIMD values are cast to their default Cranelift type, I8x16.
            let ty = builder.func.dfg.value_type(val);
            if ty.is_vector() {
                val = optionally_bitcast_vector(val, I8X16, builder);
            }

            builder.def_var(Variable::from_u32(*local_index), val);
            let label = ValueLabel::from_u32(*local_index);
            builder.set_val_label(val, label);
            environ.state_slot_local_set(builder, *local_index, val);
        }
        /********************************** Globals ****************************************
         * `get_global` and `set_global` are handled by the environment.
         ***********************************************************************************/
        Operator::GlobalGet { global_index } => {
            let global_index = GlobalIndex::from_u32(*global_index);
            let val = environ.translate_global_get(builder, global_index)?;
            environ.stacks.push1(val);
        }
        Operator::GlobalSet { global_index } => {
            let global_index = GlobalIndex::from_u32(*global_index);
            let mut val = environ.stacks.pop1();
            // Ensure SIMD values are cast to their default Cranelift type, I8x16.
            if builder.func.dfg.value_type(val).is_vector() {
                val = optionally_bitcast_vector(val, I8X16, builder);
            }
            environ.translate_global_set(builder, global_index, val)?;
        }
        /********************************* Stack misc ***************************************
         * `drop`, `nop`, `unreachable` and `select`.
         ***********************************************************************************/
        Operator::Drop => {
            environ.stacks.pop1();
        }
        Operator::Select => {
            let (mut arg1, mut arg2, cond) = environ.stacks.pop3();
            if builder.func.dfg.value_type(arg1).is_vector() {
                arg1 = optionally_bitcast_vector(arg1, I8X16, builder);
            }
            if builder.func.dfg.value_type(arg2).is_vector() {
                arg2 = optionally_bitcast_vector(arg2, I8X16, builder);
            }
            environ.stacks.push1(builder.ins().select(cond, arg1, arg2));
        }
        Operator::TypedSelect { ty: _ } => {
            // We ignore the explicit type parameter as it is only needed for
            // validation, which we require to have been performed before
            // translation.
            let (mut arg1, mut arg2, cond) = environ.stacks.pop3();
            if builder.func.dfg.value_type(arg1).is_vector() {
                arg1 = optionally_bitcast_vector(arg1, I8X16, builder);
            }
            if builder.func.dfg.value_type(arg2).is_vector() {
                arg2 = optionally_bitcast_vector(arg2, I8X16, builder);
            }
            environ.stacks.push1(builder.ins().select(cond, arg1, arg2));
        }
        Operator::Nop => {
            // We do nothing
        }
        Operator::Unreachable => {
            environ.trap(builder, crate::TRAP_UNREACHABLE);
            environ.stacks.reachable = false;
        }
        /***************************** Control flow blocks **********************************
         * When starting a control flow block, we create a new `Block` that will hold the code
         * after the block, and we push a frame on the control stack. Depending on the type
         * of block, we create a new `Block` for the body of the block with an associated
         * jump instruction.
         *
         * The `End` instruction pops the last control frame from the control stack, seals
         * the destination block (since `br` instructions targeting it only appear inside the
         * block and have already been translated) and modifies the value stack to use the
         * `Block`'s possible argument values.
         ***********************************************************************************/
        Operator::Block { blockty } => {
            let (params, results) = blocktype_params_results(validator, *blockty)?;
            let next = block_with_params(builder, results.clone(), environ)?;
            environ.stacks.push_block(next, params.len(), results.len());
        }
        Operator::Loop { blockty } => {
            let (params, results) = blocktype_params_results(validator, *blockty)?;
            let loop_body = block_with_params(builder, params.clone(), environ)?;
            let next = block_with_params(builder, results.clone(), environ)?;
            canonicalise_then_jump(builder, loop_body, environ.stacks.peekn(params.len()));
            environ
                .stacks
                .push_loop(loop_body, next, params.len(), results.len());

            // Pop the initial `Block` actuals and replace them with the `Block`'s
            // params since control flow joins at the top of the loop.
            environ.stacks.popn(params.len());
            environ
                .stacks
                .stack
                .extend_from_slice(builder.block_params(loop_body));

            builder.switch_to_block(loop_body);
            environ.translate_loop_header(builder)?;
        }
        Operator::If { blockty } => {
            let val = environ.stacks.pop1();

            let next_block = builder.create_block();
            let (params, results) = blocktype_params_results(validator, *blockty)?;
            let (destination, else_data) = if params.clone().eq(results.clone()) {
                // It is possible there is no `else` block, so we will only
                // allocate a block for it if/when we find the `else`. For now,
                // if the condition isn't true, then we jump directly to the
                // destination block following the whole `if...end`.
If we do end280// up discovering an `else`, then we will allocate a block for it281// and go back and patch the jump.282let destination = block_with_params(builder, results.clone(), environ)?;283let branch_inst = canonicalise_brif(284builder,285val,286next_block,287&[],288destination,289environ.stacks.peekn(params.len()),290);291(292destination,293ElseData::NoElse {294branch_inst,295placeholder: destination,296},297)298} else {299// The `if` type signature is not valid without an `else` block,300// so we eagerly allocate the `else` block here.301let destination = block_with_params(builder, results.clone(), environ)?;302let else_block = block_with_params(builder, params.clone(), environ)?;303canonicalise_brif(304builder,305val,306next_block,307&[],308else_block,309environ.stacks.peekn(params.len()),310);311builder.seal_block(else_block);312(destination, ElseData::WithElse { else_block })313};314315builder.seal_block(next_block); // Only predecessor is the current block.316builder.switch_to_block(next_block);317318// Here we append an argument to a Block targeted by an argumentless jump instruction319// But in fact there are two cases:320// - either the If does not have a Else clause, in that case ty = EmptyBlock321// and we add nothing;322// - either the If have an Else clause, in that case the destination of this jump323// instruction will be changed later when we translate the Else operator.324environ.stacks.push_if(325destination,326else_data,327params.len(),328results.len(),329*blockty,330);331}332Operator::Else => {333let i = environ.stacks.control_stack.len() - 1;334let reachable = environ.is_reachable();335match environ.stacks.control_stack[i] {336ControlStackFrame::If {337ref else_data,338head_is_reachable,339ref mut consequent_ends_reachable,340num_return_values,341blocktype,342destination,343..344} => {345// We finished the consequent, so record its final346// reachability state.347debug_assert!(consequent_ends_reachable.is_none());348*consequent_ends_reachable = Some(reachable);349350if head_is_reachable {351// We have a branch from the head of the `if` to the `else`.352environ.stacks.reachable = true;353354// Ensure we have a block for the `else` block (it may have355// already been pre-allocated, see `ElseData` for details).356let else_block = match *else_data {357ElseData::NoElse {358branch_inst,359placeholder,360} => {361let (params, _results) =362blocktype_params_results(validator, blocktype)?;363debug_assert_eq!(params.len(), num_return_values);364let else_block =365block_with_params(builder, params.clone(), environ)?;366canonicalise_then_jump(367builder,368destination,369environ.stacks.peekn(params.len()),370);371environ.stacks.popn(params.len());372373builder.change_jump_destination(374branch_inst,375placeholder,376else_block,377);378builder.seal_block(else_block);379else_block380}381ElseData::WithElse { else_block } => {382canonicalise_then_jump(383builder,384destination,385environ.stacks.peekn(num_return_values),386);387environ.stacks.popn(num_return_values);388else_block389}390};391392// You might be expecting that we push the parameters for this393// `else` block here, something like this:394//395// state.pushn(&control_stack_frame.params);396//397// We don't do that because they are already on the top of the stack398// for us: we pushed the parameters twice when we saw the initial399// `if` so that we wouldn't have to save the parameters in the400// `ControlStackFrame` as another `Vec` allocation.401402builder.switch_to_block(else_block);403404// We don't bother 
updating the control frame's `ElseData`405// to `WithElse` because nothing else will read it.406}407}408_ => unreachable!(),409}410}411Operator::End => {412let frame = environ.stacks.control_stack.pop().unwrap();413let next_block = frame.following_code();414let return_count = frame.num_return_values();415let return_args = environ.stacks.peekn_mut(return_count);416417canonicalise_then_jump(builder, next_block, return_args);418// You might expect that if we just finished an `if` block that419// didn't have a corresponding `else` block, then we would clean420// up our duplicate set of parameters that we pushed earlier421// right here. However, we don't have to explicitly do that,422// since we truncate the stack back to the original height423// below.424425builder.switch_to_block(next_block);426builder.seal_block(next_block);427428// If it is a loop we also have to seal the body loop block429if let ControlStackFrame::Loop { header, .. } = frame {430builder.seal_block(header)431}432433frame.restore_catch_handlers(&mut environ.stacks.handlers, builder);434435frame.truncate_value_stack_to_original_size(436&mut environ.stacks.stack,437&mut environ.stacks.stack_shape,438);439environ440.stacks441.stack442.extend_from_slice(builder.block_params(next_block));443}444/**************************** Branch instructions *********************************445* The branch instructions all have as arguments a target nesting level, which446* corresponds to how many control stack frames do we have to pop to get the447* destination `Block`.448*449* Once the destination `Block` is found, we sometimes have to declare a certain depth450* of the stack unreachable, because some branch instructions are terminator.451*452* The `br_table` case is much more complicated because Cranelift's `br_table` instruction453* does not support jump arguments like all the other branch instructions. 
That is why, in454* the case where we would use jump arguments for every other branch instruction, we455* need to split the critical edges leaving the `br_tables` by creating one `Block` per456* table destination; the `br_table` will point to these newly created `Blocks` and these457* `Block`s contain only a jump instruction pointing to the final destination, this time with458* jump arguments.459*460* This system is also implemented in Cranelift's SSA construction algorithm, because461* `use_var` located in a destination `Block` of a `br_table` might trigger the addition462* of jump arguments in each predecessor branch instruction, one of which might be a463* `br_table`.464***********************************************************************************/465Operator::Br { relative_depth } => {466let i = environ.stacks.control_stack.len() - 1 - (*relative_depth as usize);467let (return_count, br_destination) = {468let frame = &mut environ.stacks.control_stack[i];469// We signal that all the code that follows until the next End is unreachable470frame.set_branched_to_exit();471let return_count = if frame.is_loop() {472frame.num_param_values()473} else {474frame.num_return_values()475};476(return_count, frame.br_destination())477};478let destination_args = environ.stacks.peekn_mut(return_count);479canonicalise_then_jump(builder, br_destination, destination_args);480environ.stacks.popn(return_count);481environ.stacks.reachable = false;482}483Operator::BrIf { relative_depth } => translate_br_if(*relative_depth, builder, environ),484Operator::BrTable { targets } => {485let default = targets.default();486let mut min_depth = default;487for depth in targets.targets() {488let depth = depth?;489if depth < min_depth {490min_depth = depth;491}492}493let jump_args_count = {494let i = environ.stacks.control_stack.len() - 1 - (min_depth as usize);495let min_depth_frame = &environ.stacks.control_stack[i];496if min_depth_frame.is_loop() {497min_depth_frame.num_param_values()498} else {499min_depth_frame.num_return_values()500}501};502let val = environ.stacks.pop1();503let mut data = Vec::with_capacity(targets.len() as usize);504if jump_args_count == 0 {505// No jump arguments506for depth in targets.targets() {507let depth = depth?;508let block = {509let i = environ.stacks.control_stack.len() - 1 - (depth as usize);510let frame = &mut environ.stacks.control_stack[i];511frame.set_branched_to_exit();512frame.br_destination()513};514data.push(builder.func.dfg.block_call(block, &[]));515}516let block = {517let i = environ.stacks.control_stack.len() - 1 - (default as usize);518let frame = &mut environ.stacks.control_stack[i];519frame.set_branched_to_exit();520frame.br_destination()521};522let block = builder.func.dfg.block_call(block, &[]);523let jt = builder.create_jump_table(JumpTableData::new(block, &data));524builder.ins().br_table(val, jt);525} else {526// Here we have jump arguments, but Cranelift's br_table doesn't support them527// We then proceed to split the edges going out of the br_table528let return_count = jump_args_count;529let mut dest_block_sequence = vec![];530let mut dest_block_map = HashMap::new();531for depth in targets.targets() {532let depth = depth?;533let branch_block = match dest_block_map.entry(depth as usize) {534hash_map::Entry::Occupied(entry) => *entry.get(),535hash_map::Entry::Vacant(entry) => {536let block = builder.create_block();537dest_block_sequence.push((depth as usize, block));538*entry.insert(block)539}540};541data.push(builder.func.dfg.block_call(branch_block, 
&[]));542}543let default_branch_block = match dest_block_map.entry(default as usize) {544hash_map::Entry::Occupied(entry) => *entry.get(),545hash_map::Entry::Vacant(entry) => {546let block = builder.create_block();547dest_block_sequence.push((default as usize, block));548*entry.insert(block)549}550};551let default_branch_block = builder.func.dfg.block_call(default_branch_block, &[]);552let jt = builder.create_jump_table(JumpTableData::new(default_branch_block, &data));553builder.ins().br_table(val, jt);554for (depth, dest_block) in dest_block_sequence {555builder.switch_to_block(dest_block);556builder.seal_block(dest_block);557let real_dest_block = {558let i = environ.stacks.control_stack.len() - 1 - depth;559let frame = &mut environ.stacks.control_stack[i];560frame.set_branched_to_exit();561frame.br_destination()562};563let destination_args = environ.stacks.peekn_mut(return_count);564canonicalise_then_jump(builder, real_dest_block, destination_args);565}566environ.stacks.popn(return_count);567}568environ.stacks.reachable = false;569}570Operator::Return => {571let return_count = {572let frame = &mut environ.stacks.control_stack[0];573frame.num_return_values()574};575{576let mut return_args = environ.stacks.peekn(return_count).to_vec();577environ.handle_before_return(&return_args, builder);578bitcast_wasm_returns(&mut return_args, builder);579builder.ins().return_(&return_args);580}581environ.stacks.popn(return_count);582environ.stacks.reachable = false;583}584/********************************** Exception handling **********************************/585Operator::Catch { .. }586| Operator::Rethrow { .. }587| Operator::Delegate { .. }588| Operator::CatchAll => {589return Err(wasm_unsupported!(590"legacy exception handling proposal is not supported"591));592}593594Operator::TryTable { try_table } => {595// First, create a block on the control stack. This also596// updates the handler state that is attached to all calls597// made within this block.598let body = builder.create_block();599let (params, results) = blocktype_params_results(validator, try_table.ty)?;600let next = block_with_params(builder, results.clone(), environ)?;601builder.ins().jump(body, []);602builder.seal_block(body);603604// For each catch clause, create a block with the605// equivalent of `br` to the target (unboxing the exnref606// into stack values or pushing it directly, depending on607// the kind of clause).608let ckpt = environ.stacks.handlers.take_checkpoint();609let mut catch_blocks = vec![];610// Process in *reverse* order: see the comment on611// [`HandlerState`]. 
In brief, this allows us to unify the612// left-to-right matching semantics of a single613// `try_table`'s catch clauses with the inside-out614// (deepest scope first) semantics of nested `try_table`s.615for catch in try_table.catches.iter().rev() {616// This will register the block in `state.handlers`617// under the appropriate tag.618catch_blocks.push(create_catch_block(builder, catch, environ)?);619}620621environ.stacks.push_try_table_block(622next,623catch_blocks,624params.len(),625results.len(),626ckpt,627);628629// Continue codegen into the main body block.630builder.switch_to_block(body);631}632633Operator::Throw { tag_index } => {634let tag_index = TagIndex::from_u32(*tag_index);635let arity = environ.tag_param_arity(tag_index);636let args = environ.stacks.peekn(arity).to_vec();637environ.translate_exn_throw(builder, tag_index, &args)?;638environ.stacks.popn(arity);639environ.stacks.reachable = false;640}641642Operator::ThrowRef => {643let exnref = environ.stacks.pop1();644environ.translate_exn_throw_ref(builder, exnref)?;645environ.stacks.reachable = false;646}647648/************************************ Calls ****************************************649* The call instructions pop off their arguments from the stack and append their650* return values to it. `call_indirect` needs environment support because there is an651* argument referring to an index in the external functions table of the module.652************************************************************************************/653Operator::Call { function_index } => {654let function_index = FuncIndex::from_u32(*function_index);655let ty = environ.module.functions[function_index]656.signature657.unwrap_module_type_index();658let sig_ref = environ.get_or_create_interned_sig_ref(builder.func, ty);659let num_args = environ.num_params_for_func(function_index);660661// Bitcast any vector arguments to their default type, I8X16, before calling.662let mut args = environ.stacks.peekn(num_args).to_vec();663bitcast_wasm_params(environ, sig_ref, &mut args, builder);664665let inst_results =666environ.translate_call(builder, srcloc, function_index, sig_ref, &args)?;667668debug_assert_eq!(669inst_results.len(),670builder.func.dfg.signatures[sig_ref].returns.len(),671"translate_call results should match the call signature"672);673environ.stacks.popn(num_args);674environ.stacks.pushn(&inst_results);675}676Operator::CallIndirect {677type_index,678table_index,679} => {680// `type_index` is the index of the function's signature and681// `table_index` is the index of the table to search the function682// in.683let type_index = TypeIndex::from_u32(*type_index);684let sigref = environ.get_or_create_sig_ref(builder.func, type_index);685let num_args = environ.num_params_for_function_type(type_index);686let callee = environ.stacks.pop1();687688// Bitcast any vector arguments to their default type, I8X16, before calling.689let mut args = environ.stacks.peekn(num_args).to_vec();690bitcast_wasm_params(environ, sigref, &mut args, builder);691692let inst_results = environ.translate_call_indirect(693builder,694srcloc,695validator.features(),696TableIndex::from_u32(*table_index),697type_index,698sigref,699callee,700&args,701)?;702let inst_results = match inst_results {703Some(results) => results,704None => {705environ.stacks.reachable = false;706return Ok(());707}708};709710debug_assert_eq!(711inst_results.len(),712builder.func.dfg.signatures[sigref].returns.len(),713"translate_call_indirect results should match the call 
signature"714);715environ.stacks.popn(num_args);716environ.stacks.pushn(&inst_results);717}718/******************************* Tail Calls ******************************************719* The tail call instructions pop their arguments from the stack and720* then permanently transfer control to their callee. The indirect721* version requires environment support (while the direct version can722* optionally be hooked but doesn't require it) it interacts with the723* VM's runtime state via tables.724************************************************************************************/725Operator::ReturnCall { function_index } => {726let function_index = FuncIndex::from_u32(*function_index);727let ty = environ.module.functions[function_index]728.signature729.unwrap_module_type_index();730let sig_ref = environ.get_or_create_interned_sig_ref(builder.func, ty);731let num_args = environ.num_params_for_func(function_index);732733// Bitcast any vector arguments to their default type, I8X16, before calling.734let mut args = environ.stacks.peekn(num_args).to_vec();735bitcast_wasm_params(environ, sig_ref, &mut args, builder);736737environ.translate_return_call(builder, srcloc, function_index, sig_ref, &args)?;738739environ.stacks.popn(num_args);740environ.stacks.reachable = false;741}742Operator::ReturnCallIndirect {743type_index,744table_index,745} => {746// `type_index` is the index of the function's signature and747// `table_index` is the index of the table to search the function748// in.749let type_index = TypeIndex::from_u32(*type_index);750let sigref = environ.get_or_create_sig_ref(builder.func, type_index);751let num_args = environ.num_params_for_function_type(type_index);752let callee = environ.stacks.pop1();753754// Bitcast any vector arguments to their default type, I8X16, before calling.755let mut args = environ.stacks.peekn(num_args).to_vec();756bitcast_wasm_params(environ, sigref, &mut args, builder);757758environ.translate_return_call_indirect(759builder,760srcloc,761validator.features(),762TableIndex::from_u32(*table_index),763type_index,764sigref,765callee,766&args,767)?;768769environ.stacks.popn(num_args);770environ.stacks.reachable = false;771}772Operator::ReturnCallRef { type_index } => {773// Get function signature774// `index` is the index of the function's signature and `table_index` is the index of775// the table to search the function in.776let type_index = TypeIndex::from_u32(*type_index);777let sigref = environ.get_or_create_sig_ref(builder.func, type_index);778let num_args = environ.num_params_for_function_type(type_index);779let callee = environ.stacks.pop1();780781// Bitcast any vector arguments to their default type, I8X16, before calling.782let mut args = environ.stacks.peekn(num_args).to_vec();783bitcast_wasm_params(environ, sigref, &mut args, builder);784785environ.translate_return_call_ref(builder, srcloc, sigref, callee, &args)?;786787environ.stacks.popn(num_args);788environ.stacks.reachable = false;789}790/******************************* Memory management ***********************************791* Memory management is handled by environment. 
It is usually translated into calls to792* special functions.793************************************************************************************/794Operator::MemoryGrow { mem } => {795// The WebAssembly MVP only supports one linear memory, but we expect the reserved796// argument to be a memory index.797let mem = MemoryIndex::from_u32(*mem);798let _heap = environ.get_or_create_heap(builder.func, mem);799let val = environ.stacks.pop1();800environ.before_memory_grow(builder, val, mem);801let result = environ.translate_memory_grow(builder, mem, val)?;802environ.stacks.push1(result);803}804Operator::MemorySize { mem } => {805let mem = MemoryIndex::from_u32(*mem);806let _heap = environ.get_or_create_heap(builder.func, mem);807let result = environ.translate_memory_size(builder.cursor(), mem)?;808environ.stacks.push1(result);809}810/******************************* Load instructions ***********************************811* Wasm specifies an integer alignment flag but we drop it in Cranelift.812* The memory base address is provided by the environment.813************************************************************************************/814Operator::I32Load8U { memarg } => {815unwrap_or_return_unreachable_state!(816environ,817translate_load(memarg, ir::Opcode::Uload8, I32, builder, environ)?818);819}820Operator::I32Load16U { memarg } => {821unwrap_or_return_unreachable_state!(822environ,823translate_load(memarg, ir::Opcode::Uload16, I32, builder, environ)?824);825}826Operator::I32Load8S { memarg } => {827unwrap_or_return_unreachable_state!(828environ,829translate_load(memarg, ir::Opcode::Sload8, I32, builder, environ)?830);831}832Operator::I32Load16S { memarg } => {833unwrap_or_return_unreachable_state!(834environ,835translate_load(memarg, ir::Opcode::Sload16, I32, builder, environ)?836);837}838Operator::I64Load8U { memarg } => {839unwrap_or_return_unreachable_state!(840environ,841translate_load(memarg, ir::Opcode::Uload8, I64, builder, environ)?842);843}844Operator::I64Load16U { memarg } => {845unwrap_or_return_unreachable_state!(846environ,847translate_load(memarg, ir::Opcode::Uload16, I64, builder, environ)?848);849}850Operator::I64Load8S { memarg } => {851unwrap_or_return_unreachable_state!(852environ,853translate_load(memarg, ir::Opcode::Sload8, I64, builder, environ)?854);855}856Operator::I64Load16S { memarg } => {857unwrap_or_return_unreachable_state!(858environ,859translate_load(memarg, ir::Opcode::Sload16, I64, builder, environ)?860);861}862Operator::I64Load32S { memarg } => {863unwrap_or_return_unreachable_state!(864environ,865translate_load(memarg, ir::Opcode::Sload32, I64, builder, environ)?866);867}868Operator::I64Load32U { memarg } => {869unwrap_or_return_unreachable_state!(870environ,871translate_load(memarg, ir::Opcode::Uload32, I64, builder, environ)?872);873}874Operator::I32Load { memarg } => {875unwrap_or_return_unreachable_state!(876environ,877translate_load(memarg, ir::Opcode::Load, I32, builder, environ)?878);879}880Operator::F32Load { memarg } => {881unwrap_or_return_unreachable_state!(882environ,883translate_load(memarg, ir::Opcode::Load, F32, builder, environ)?884);885}886Operator::I64Load { memarg } => {887unwrap_or_return_unreachable_state!(888environ,889translate_load(memarg, ir::Opcode::Load, I64, builder, environ)?890);891}892Operator::F64Load { memarg } => {893unwrap_or_return_unreachable_state!(894environ,895translate_load(memarg, ir::Opcode::Load, F64, builder, environ)?896);897}898Operator::V128Load { memarg } => 
{899unwrap_or_return_unreachable_state!(900environ,901translate_load(memarg, ir::Opcode::Load, I8X16, builder, environ)?902);903}904Operator::V128Load8x8S { memarg } => {905//TODO(#6829): add before_load() and before_store() hooks for SIMD loads and stores.906let (flags, _, base) = unwrap_or_return_unreachable_state!(907environ,908prepare_addr(memarg, 8, builder, environ)?909);910let loaded = builder.ins().sload8x8(flags, base, 0);911environ.stacks.push1(loaded);912}913Operator::V128Load8x8U { memarg } => {914let (flags, _, base) = unwrap_or_return_unreachable_state!(915environ,916prepare_addr(memarg, 8, builder, environ)?917);918let loaded = builder.ins().uload8x8(flags, base, 0);919environ.stacks.push1(loaded);920}921Operator::V128Load16x4S { memarg } => {922let (flags, _, base) = unwrap_or_return_unreachable_state!(923environ,924prepare_addr(memarg, 8, builder, environ)?925);926let loaded = builder.ins().sload16x4(flags, base, 0);927environ.stacks.push1(loaded);928}929Operator::V128Load16x4U { memarg } => {930let (flags, _, base) = unwrap_or_return_unreachable_state!(931environ,932prepare_addr(memarg, 8, builder, environ)?933);934let loaded = builder.ins().uload16x4(flags, base, 0);935environ.stacks.push1(loaded);936}937Operator::V128Load32x2S { memarg } => {938let (flags, _, base) = unwrap_or_return_unreachable_state!(939environ,940prepare_addr(memarg, 8, builder, environ)?941);942let loaded = builder.ins().sload32x2(flags, base, 0);943environ.stacks.push1(loaded);944}945Operator::V128Load32x2U { memarg } => {946let (flags, _, base) = unwrap_or_return_unreachable_state!(947environ,948prepare_addr(memarg, 8, builder, environ)?949);950let loaded = builder.ins().uload32x2(flags, base, 0);951environ.stacks.push1(loaded);952}953/****************************** Store instructions ***********************************954* Wasm specifies an integer alignment flag but we drop it in Cranelift.955* The memory base address is provided by the environment.956************************************************************************************/957Operator::I32Store { memarg }958| Operator::I64Store { memarg }959| Operator::F32Store { memarg }960| Operator::F64Store { memarg } => {961translate_store(memarg, ir::Opcode::Store, builder, environ)?;962}963Operator::I32Store8 { memarg } | Operator::I64Store8 { memarg } => {964translate_store(memarg, ir::Opcode::Istore8, builder, environ)?;965}966Operator::I32Store16 { memarg } | Operator::I64Store16 { memarg } => {967translate_store(memarg, ir::Opcode::Istore16, builder, environ)?;968}969Operator::I64Store32 { memarg } => {970translate_store(memarg, ir::Opcode::Istore32, builder, environ)?;971}972Operator::V128Store { memarg } => {973translate_store(memarg, ir::Opcode::Store, builder, environ)?;974}975/****************************** Nullary Operators ************************************/976Operator::I32Const { value } => {977environ978.stacks979.push1(builder.ins().iconst(I32, i64::from(value.cast_unsigned())));980}981Operator::I64Const { value } => environ.stacks.push1(builder.ins().iconst(I64, *value)),982Operator::F32Const { value } => {983environ984.stacks985.push1(builder.ins().f32const(f32_translation(*value)));986}987Operator::F64Const { value } => {988environ989.stacks990.push1(builder.ins().f64const(f64_translation(*value)));991}992/******************************* Unary Operators *************************************/993Operator::I32Clz | Operator::I64Clz => {994let arg = 
environ.stacks.pop1();995environ.stacks.push1(builder.ins().clz(arg));996}997Operator::I32Ctz | Operator::I64Ctz => {998let arg = environ.stacks.pop1();999environ.stacks.push1(builder.ins().ctz(arg));1000}1001Operator::I32Popcnt | Operator::I64Popcnt => {1002let arg = environ.stacks.pop1();1003environ.stacks.push1(builder.ins().popcnt(arg));1004}1005Operator::I64ExtendI32S => {1006let val = environ.stacks.pop1();1007environ.stacks.push1(builder.ins().sextend(I64, val));1008}1009Operator::I64ExtendI32U => {1010let val = environ.stacks.pop1();1011environ.stacks.push1(builder.ins().uextend(I64, val));1012}1013Operator::I32WrapI64 => {1014let val = environ.stacks.pop1();1015environ.stacks.push1(builder.ins().ireduce(I32, val));1016}1017Operator::F32Sqrt | Operator::F64Sqrt => {1018let arg = environ.stacks.pop1();1019environ.stacks.push1(builder.ins().sqrt(arg));1020}1021Operator::F32Ceil => {1022let arg = environ.stacks.pop1();1023let result = environ.ceil_f32(builder, arg);1024environ.stacks.push1(result);1025}1026Operator::F64Ceil => {1027let arg = environ.stacks.pop1();1028let result = environ.ceil_f64(builder, arg);1029environ.stacks.push1(result);1030}1031Operator::F32Floor => {1032let arg = environ.stacks.pop1();1033let result = environ.floor_f32(builder, arg);1034environ.stacks.push1(result);1035}1036Operator::F64Floor => {1037let arg = environ.stacks.pop1();1038let result = environ.floor_f64(builder, arg);1039environ.stacks.push1(result);1040}1041Operator::F32Trunc => {1042let arg = environ.stacks.pop1();1043let result = environ.trunc_f32(builder, arg);1044environ.stacks.push1(result);1045}1046Operator::F64Trunc => {1047let arg = environ.stacks.pop1();1048let result = environ.trunc_f64(builder, arg);1049environ.stacks.push1(result);1050}1051Operator::F32Nearest => {1052let arg = environ.stacks.pop1();1053let result = environ.nearest_f32(builder, arg);1054environ.stacks.push1(result);1055}1056Operator::F64Nearest => {1057let arg = environ.stacks.pop1();1058let result = environ.nearest_f64(builder, arg);1059environ.stacks.push1(result);1060}1061Operator::F32Abs | Operator::F64Abs => {1062let val = environ.stacks.pop1();1063environ.stacks.push1(builder.ins().fabs(val));1064}1065Operator::F32Neg | Operator::F64Neg => {1066let arg = environ.stacks.pop1();1067environ.stacks.push1(builder.ins().fneg(arg));1068}1069Operator::F64ConvertI64U | Operator::F64ConvertI32U => {1070let val = environ.stacks.pop1();1071environ.stacks.push1(builder.ins().fcvt_from_uint(F64, val));1072}1073Operator::F64ConvertI64S | Operator::F64ConvertI32S => {1074let val = environ.stacks.pop1();1075environ.stacks.push1(builder.ins().fcvt_from_sint(F64, val));1076}1077Operator::F32ConvertI64S | Operator::F32ConvertI32S => {1078let val = environ.stacks.pop1();1079environ.stacks.push1(builder.ins().fcvt_from_sint(F32, val));1080}1081Operator::F32ConvertI64U | Operator::F32ConvertI32U => {1082let val = environ.stacks.pop1();1083environ.stacks.push1(builder.ins().fcvt_from_uint(F32, val));1084}1085Operator::F64PromoteF32 => {1086let val = environ.stacks.pop1();1087environ.stacks.push1(builder.ins().fpromote(F64, val));1088}1089Operator::F32DemoteF64 => {1090let val = environ.stacks.pop1();1091environ.stacks.push1(builder.ins().fdemote(F32, val));1092}1093Operator::I64TruncF64S | Operator::I64TruncF32S => {1094let val = environ.stacks.pop1();1095let result = environ.translate_fcvt_to_sint(builder, I64, val);1096environ.stacks.push1(result);1097}1098Operator::I32TruncF64S | Operator::I32TruncF32S => {1099let val = 
environ.stacks.pop1();1100let result = environ.translate_fcvt_to_sint(builder, I32, val);1101environ.stacks.push1(result);1102}1103Operator::I64TruncF64U | Operator::I64TruncF32U => {1104let val = environ.stacks.pop1();1105let result = environ.translate_fcvt_to_uint(builder, I64, val);1106environ.stacks.push1(result);1107}1108Operator::I32TruncF64U | Operator::I32TruncF32U => {1109let val = environ.stacks.pop1();1110let result = environ.translate_fcvt_to_uint(builder, I32, val);1111environ.stacks.push1(result);1112}1113Operator::I64TruncSatF64S | Operator::I64TruncSatF32S => {1114let val = environ.stacks.pop1();1115environ1116.stacks1117.push1(builder.ins().fcvt_to_sint_sat(I64, val));1118}1119Operator::I32TruncSatF64S | Operator::I32TruncSatF32S => {1120let val = environ.stacks.pop1();1121environ1122.stacks1123.push1(builder.ins().fcvt_to_sint_sat(I32, val));1124}1125Operator::I64TruncSatF64U | Operator::I64TruncSatF32U => {1126let val = environ.stacks.pop1();1127environ1128.stacks1129.push1(builder.ins().fcvt_to_uint_sat(I64, val));1130}1131Operator::I32TruncSatF64U | Operator::I32TruncSatF32U => {1132let val = environ.stacks.pop1();1133environ1134.stacks1135.push1(builder.ins().fcvt_to_uint_sat(I32, val));1136}1137Operator::F32ReinterpretI32 => {1138let val = environ.stacks.pop1();1139environ1140.stacks1141.push1(builder.ins().bitcast(F32, MemFlags::new(), val));1142}1143Operator::F64ReinterpretI64 => {1144let val = environ.stacks.pop1();1145environ1146.stacks1147.push1(builder.ins().bitcast(F64, MemFlags::new(), val));1148}1149Operator::I32ReinterpretF32 => {1150let val = environ.stacks.pop1();1151environ1152.stacks1153.push1(builder.ins().bitcast(I32, MemFlags::new(), val));1154}1155Operator::I64ReinterpretF64 => {1156let val = environ.stacks.pop1();1157environ1158.stacks1159.push1(builder.ins().bitcast(I64, MemFlags::new(), val));1160}1161Operator::I32Extend8S => {1162let val = environ.stacks.pop1();1163environ.stacks.push1(builder.ins().ireduce(I8, val));1164let val = environ.stacks.pop1();1165environ.stacks.push1(builder.ins().sextend(I32, val));1166}1167Operator::I32Extend16S => {1168let val = environ.stacks.pop1();1169environ.stacks.push1(builder.ins().ireduce(I16, val));1170let val = environ.stacks.pop1();1171environ.stacks.push1(builder.ins().sextend(I32, val));1172}1173Operator::I64Extend8S => {1174let val = environ.stacks.pop1();1175environ.stacks.push1(builder.ins().ireduce(I8, val));1176let val = environ.stacks.pop1();1177environ.stacks.push1(builder.ins().sextend(I64, val));1178}1179Operator::I64Extend16S => {1180let val = environ.stacks.pop1();1181environ.stacks.push1(builder.ins().ireduce(I16, val));1182let val = environ.stacks.pop1();1183environ.stacks.push1(builder.ins().sextend(I64, val));1184}1185Operator::I64Extend32S => {1186let val = environ.stacks.pop1();1187environ.stacks.push1(builder.ins().ireduce(I32, val));1188let val = environ.stacks.pop1();1189environ.stacks.push1(builder.ins().sextend(I64, val));1190}1191/****************************** Binary Operators ************************************/1192Operator::I32Add | Operator::I64Add => {1193let (arg1, arg2) = environ.stacks.pop2();1194environ.stacks.push1(builder.ins().iadd(arg1, arg2));1195}1196Operator::I32And | Operator::I64And => {1197let (arg1, arg2) = environ.stacks.pop2();1198environ.stacks.push1(builder.ins().band(arg1, arg2));1199}1200Operator::I32Or | Operator::I64Or => {1201let (arg1, arg2) = environ.stacks.pop2();1202environ.stacks.push1(builder.ins().bor(arg1, arg2));1203}1204Operator::I32Xor | 
Operator::I64Xor => {1205let (arg1, arg2) = environ.stacks.pop2();1206environ.stacks.push1(builder.ins().bxor(arg1, arg2));1207}1208Operator::I32Shl | Operator::I64Shl => {1209let (arg1, arg2) = environ.stacks.pop2();1210environ.stacks.push1(builder.ins().ishl(arg1, arg2));1211}1212Operator::I32ShrS | Operator::I64ShrS => {1213let (arg1, arg2) = environ.stacks.pop2();1214environ.stacks.push1(builder.ins().sshr(arg1, arg2));1215}1216Operator::I32ShrU | Operator::I64ShrU => {1217let (arg1, arg2) = environ.stacks.pop2();1218environ.stacks.push1(builder.ins().ushr(arg1, arg2));1219}1220Operator::I32Rotl | Operator::I64Rotl => {1221let (arg1, arg2) = environ.stacks.pop2();1222environ.stacks.push1(builder.ins().rotl(arg1, arg2));1223}1224Operator::I32Rotr | Operator::I64Rotr => {1225let (arg1, arg2) = environ.stacks.pop2();1226environ.stacks.push1(builder.ins().rotr(arg1, arg2));1227}1228Operator::F32Add | Operator::F64Add => {1229let (arg1, arg2) = environ.stacks.pop2();1230environ.stacks.push1(builder.ins().fadd(arg1, arg2));1231}1232Operator::I32Sub | Operator::I64Sub => {1233let (arg1, arg2) = environ.stacks.pop2();1234environ.stacks.push1(builder.ins().isub(arg1, arg2));1235}1236Operator::F32Sub | Operator::F64Sub => {1237let (arg1, arg2) = environ.stacks.pop2();1238environ.stacks.push1(builder.ins().fsub(arg1, arg2));1239}1240Operator::I32Mul | Operator::I64Mul => {1241let (arg1, arg2) = environ.stacks.pop2();1242environ.stacks.push1(builder.ins().imul(arg1, arg2));1243}1244Operator::F32Mul | Operator::F64Mul => {1245let (arg1, arg2) = environ.stacks.pop2();1246environ.stacks.push1(builder.ins().fmul(arg1, arg2));1247}1248Operator::F32Div | Operator::F64Div => {1249let (arg1, arg2) = environ.stacks.pop2();1250environ.stacks.push1(builder.ins().fdiv(arg1, arg2));1251}1252Operator::I32DivS | Operator::I64DivS => {1253let (arg1, arg2) = environ.stacks.pop2();1254let result = environ.translate_sdiv(builder, arg1, arg2);1255environ.stacks.push1(result);1256}1257Operator::I32DivU | Operator::I64DivU => {1258let (arg1, arg2) = environ.stacks.pop2();1259let result = environ.translate_udiv(builder, arg1, arg2);1260environ.stacks.push1(result);1261}1262Operator::I32RemS | Operator::I64RemS => {1263let (arg1, arg2) = environ.stacks.pop2();1264let result = environ.translate_srem(builder, arg1, arg2);1265environ.stacks.push1(result);1266}1267Operator::I32RemU | Operator::I64RemU => {1268let (arg1, arg2) = environ.stacks.pop2();1269let result = environ.translate_urem(builder, arg1, arg2);1270environ.stacks.push1(result);1271}1272Operator::F32Min | Operator::F64Min => {1273let (arg1, arg2) = environ.stacks.pop2();1274environ.stacks.push1(builder.ins().fmin(arg1, arg2));1275}1276Operator::F32Max | Operator::F64Max => {1277let (arg1, arg2) = environ.stacks.pop2();1278environ.stacks.push1(builder.ins().fmax(arg1, arg2));1279}1280Operator::F32Copysign | Operator::F64Copysign => {1281let (arg1, arg2) = environ.stacks.pop2();1282environ.stacks.push1(builder.ins().fcopysign(arg1, arg2));1283}1284/**************************** Comparison Operators **********************************/1285Operator::I32LtS | Operator::I64LtS => {1286translate_icmp(IntCC::SignedLessThan, builder, environ)1287}1288Operator::I32LtU | Operator::I64LtU => {1289translate_icmp(IntCC::UnsignedLessThan, builder, environ)1290}1291Operator::I32LeS | Operator::I64LeS => {1292translate_icmp(IntCC::SignedLessThanOrEqual, builder, environ)1293}1294Operator::I32LeU | Operator::I64LeU => {1295translate_icmp(IntCC::UnsignedLessThanOrEqual, builder, 
environ)1296}1297Operator::I32GtS | Operator::I64GtS => {1298translate_icmp(IntCC::SignedGreaterThan, builder, environ)1299}1300Operator::I32GtU | Operator::I64GtU => {1301translate_icmp(IntCC::UnsignedGreaterThan, builder, environ)1302}1303Operator::I32GeS | Operator::I64GeS => {1304translate_icmp(IntCC::SignedGreaterThanOrEqual, builder, environ)1305}1306Operator::I32GeU | Operator::I64GeU => {1307translate_icmp(IntCC::UnsignedGreaterThanOrEqual, builder, environ)1308}1309Operator::I32Eqz | Operator::I64Eqz => {1310let arg = environ.stacks.pop1();1311let val = builder.ins().icmp_imm(IntCC::Equal, arg, 0);1312environ.stacks.push1(builder.ins().uextend(I32, val));1313}1314Operator::I32Eq | Operator::I64Eq => translate_icmp(IntCC::Equal, builder, environ),1315Operator::F32Eq | Operator::F64Eq => translate_fcmp(FloatCC::Equal, builder, environ),1316Operator::I32Ne | Operator::I64Ne => translate_icmp(IntCC::NotEqual, builder, environ),1317Operator::F32Ne | Operator::F64Ne => translate_fcmp(FloatCC::NotEqual, builder, environ),1318Operator::F32Gt | Operator::F64Gt => translate_fcmp(FloatCC::GreaterThan, builder, environ),1319Operator::F32Ge | Operator::F64Ge => {1320translate_fcmp(FloatCC::GreaterThanOrEqual, builder, environ)1321}1322Operator::F32Lt | Operator::F64Lt => translate_fcmp(FloatCC::LessThan, builder, environ),1323Operator::F32Le | Operator::F64Le => {1324translate_fcmp(FloatCC::LessThanOrEqual, builder, environ)1325}1326Operator::RefNull { hty } => {1327let hty = environ.convert_heap_type(*hty)?;1328let result = environ.translate_ref_null(builder.cursor(), hty)?;1329environ.stacks.push1(result);1330}1331Operator::RefIsNull => {1332let value = environ.stacks.pop1();1333let [WasmValType::Ref(ty)] = operand_types else {1334unreachable!("validation")1335};1336let result = environ.translate_ref_is_null(builder.cursor(), value, *ty)?;1337environ.stacks.push1(result);1338}1339Operator::RefFunc { function_index } => {1340let index = FuncIndex::from_u32(*function_index);1341let result = environ.translate_ref_func(builder.cursor(), index)?;1342environ.stacks.push1(result);1343}1344Operator::MemoryAtomicWait32 { memarg } | Operator::MemoryAtomicWait64 { memarg } => {1345// The WebAssembly MVP only supports one linear memory and1346// wasmparser will ensure that the memory indices specified are1347// zero.1348let implied_ty = match op {1349Operator::MemoryAtomicWait64 { .. } => I64,1350Operator::MemoryAtomicWait32 { .. 
} => I32,1351_ => unreachable!(),1352};1353let memory_index = MemoryIndex::from_u32(memarg.memory);1354let heap = environ.get_or_create_heap(builder.func, memory_index);1355let timeout = environ.stacks.pop1(); // 64 (fixed)1356let expected = environ.stacks.pop1(); // 32 or 64 (per the `Ixx` in `IxxAtomicWait`)1357assert!(builder.func.dfg.value_type(expected) == implied_ty);1358let addr = environ.stacks.pop1();1359let effective_addr = if memarg.offset == 0 {1360addr1361} else {1362let index_type = environ.heaps()[heap].index_type();1363let offset = builder.ins().iconst(index_type, memarg.offset as i64);1364environ.uadd_overflow_trap(builder, addr, offset, ir::TrapCode::HEAP_OUT_OF_BOUNDS)1365};1366// `fn translate_atomic_wait` can inspect the type of `expected` to figure out what1367// code it needs to generate, if it wants.1368let res = environ.translate_atomic_wait(1369builder,1370memory_index,1371heap,1372effective_addr,1373expected,1374timeout,1375)?;1376environ.stacks.push1(res);1377}1378Operator::MemoryAtomicNotify { memarg } => {1379let memory_index = MemoryIndex::from_u32(memarg.memory);1380let heap = environ.get_or_create_heap(builder.func, memory_index);1381let count = environ.stacks.pop1(); // 32 (fixed)1382let addr = environ.stacks.pop1();1383let effective_addr = if memarg.offset == 0 {1384addr1385} else {1386let index_type = environ.heaps()[heap].index_type();1387let offset = builder.ins().iconst(index_type, memarg.offset as i64);1388environ.uadd_overflow_trap(builder, addr, offset, ir::TrapCode::HEAP_OUT_OF_BOUNDS)1389};1390let res = environ.translate_atomic_notify(1391builder,1392memory_index,1393heap,1394effective_addr,1395count,1396)?;1397environ.stacks.push1(res);1398}1399Operator::I32AtomicLoad { memarg } => {1400translate_atomic_load(I32, I32, memarg, builder, environ)?1401}1402Operator::I64AtomicLoad { memarg } => {1403translate_atomic_load(I64, I64, memarg, builder, environ)?1404}1405Operator::I32AtomicLoad8U { memarg } => {1406translate_atomic_load(I32, I8, memarg, builder, environ)?1407}1408Operator::I32AtomicLoad16U { memarg } => {1409translate_atomic_load(I32, I16, memarg, builder, environ)?1410}1411Operator::I64AtomicLoad8U { memarg } => {1412translate_atomic_load(I64, I8, memarg, builder, environ)?1413}1414Operator::I64AtomicLoad16U { memarg } => {1415translate_atomic_load(I64, I16, memarg, builder, environ)?1416}1417Operator::I64AtomicLoad32U { memarg } => {1418translate_atomic_load(I64, I32, memarg, builder, environ)?1419}14201421Operator::I32AtomicStore { memarg } => {1422translate_atomic_store(I32, memarg, builder, environ)?1423}1424Operator::I64AtomicStore { memarg } => {1425translate_atomic_store(I64, memarg, builder, environ)?1426}1427Operator::I32AtomicStore8 { memarg } => {1428translate_atomic_store(I8, memarg, builder, environ)?1429}1430Operator::I32AtomicStore16 { memarg } => {1431translate_atomic_store(I16, memarg, builder, environ)?1432}1433Operator::I64AtomicStore8 { memarg } => {1434translate_atomic_store(I8, memarg, builder, environ)?1435}1436Operator::I64AtomicStore16 { memarg } => {1437translate_atomic_store(I16, memarg, builder, environ)?1438}1439Operator::I64AtomicStore32 { memarg } => {1440translate_atomic_store(I32, memarg, builder, environ)?1441}14421443Operator::I32AtomicRmwAdd { memarg } => {1444translate_atomic_rmw(I32, I32, AtomicRmwOp::Add, memarg, builder, environ)?1445}1446Operator::I64AtomicRmwAdd { memarg } => {1447translate_atomic_rmw(I64, I64, AtomicRmwOp::Add, memarg, builder, environ)?1448}1449Operator::I32AtomicRmw8AddU { 
memarg } => {1450translate_atomic_rmw(I32, I8, AtomicRmwOp::Add, memarg, builder, environ)?1451}1452Operator::I32AtomicRmw16AddU { memarg } => {1453translate_atomic_rmw(I32, I16, AtomicRmwOp::Add, memarg, builder, environ)?1454}1455Operator::I64AtomicRmw8AddU { memarg } => {1456translate_atomic_rmw(I64, I8, AtomicRmwOp::Add, memarg, builder, environ)?1457}1458Operator::I64AtomicRmw16AddU { memarg } => {1459translate_atomic_rmw(I64, I16, AtomicRmwOp::Add, memarg, builder, environ)?1460}1461Operator::I64AtomicRmw32AddU { memarg } => {1462translate_atomic_rmw(I64, I32, AtomicRmwOp::Add, memarg, builder, environ)?1463}14641465Operator::I32AtomicRmwSub { memarg } => {1466translate_atomic_rmw(I32, I32, AtomicRmwOp::Sub, memarg, builder, environ)?1467}1468Operator::I64AtomicRmwSub { memarg } => {1469translate_atomic_rmw(I64, I64, AtomicRmwOp::Sub, memarg, builder, environ)?1470}1471Operator::I32AtomicRmw8SubU { memarg } => {1472translate_atomic_rmw(I32, I8, AtomicRmwOp::Sub, memarg, builder, environ)?1473}1474Operator::I32AtomicRmw16SubU { memarg } => {1475translate_atomic_rmw(I32, I16, AtomicRmwOp::Sub, memarg, builder, environ)?1476}1477Operator::I64AtomicRmw8SubU { memarg } => {1478translate_atomic_rmw(I64, I8, AtomicRmwOp::Sub, memarg, builder, environ)?1479}1480Operator::I64AtomicRmw16SubU { memarg } => {1481translate_atomic_rmw(I64, I16, AtomicRmwOp::Sub, memarg, builder, environ)?1482}1483Operator::I64AtomicRmw32SubU { memarg } => {1484translate_atomic_rmw(I64, I32, AtomicRmwOp::Sub, memarg, builder, environ)?1485}14861487Operator::I32AtomicRmwAnd { memarg } => {1488translate_atomic_rmw(I32, I32, AtomicRmwOp::And, memarg, builder, environ)?1489}1490Operator::I64AtomicRmwAnd { memarg } => {1491translate_atomic_rmw(I64, I64, AtomicRmwOp::And, memarg, builder, environ)?1492}1493Operator::I32AtomicRmw8AndU { memarg } => {1494translate_atomic_rmw(I32, I8, AtomicRmwOp::And, memarg, builder, environ)?1495}1496Operator::I32AtomicRmw16AndU { memarg } => {1497translate_atomic_rmw(I32, I16, AtomicRmwOp::And, memarg, builder, environ)?1498}1499Operator::I64AtomicRmw8AndU { memarg } => {1500translate_atomic_rmw(I64, I8, AtomicRmwOp::And, memarg, builder, environ)?1501}1502Operator::I64AtomicRmw16AndU { memarg } => {1503translate_atomic_rmw(I64, I16, AtomicRmwOp::And, memarg, builder, environ)?1504}1505Operator::I64AtomicRmw32AndU { memarg } => {1506translate_atomic_rmw(I64, I32, AtomicRmwOp::And, memarg, builder, environ)?1507}15081509Operator::I32AtomicRmwOr { memarg } => {1510translate_atomic_rmw(I32, I32, AtomicRmwOp::Or, memarg, builder, environ)?1511}1512Operator::I64AtomicRmwOr { memarg } => {1513translate_atomic_rmw(I64, I64, AtomicRmwOp::Or, memarg, builder, environ)?1514}1515Operator::I32AtomicRmw8OrU { memarg } => {1516translate_atomic_rmw(I32, I8, AtomicRmwOp::Or, memarg, builder, environ)?1517}1518Operator::I32AtomicRmw16OrU { memarg } => {1519translate_atomic_rmw(I32, I16, AtomicRmwOp::Or, memarg, builder, environ)?1520}1521Operator::I64AtomicRmw8OrU { memarg } => {1522translate_atomic_rmw(I64, I8, AtomicRmwOp::Or, memarg, builder, environ)?1523}1524Operator::I64AtomicRmw16OrU { memarg } => {1525translate_atomic_rmw(I64, I16, AtomicRmwOp::Or, memarg, builder, environ)?1526}1527Operator::I64AtomicRmw32OrU { memarg } => {1528translate_atomic_rmw(I64, I32, AtomicRmwOp::Or, memarg, builder, environ)?1529}15301531Operator::I32AtomicRmwXor { memarg } => {1532translate_atomic_rmw(I32, I32, AtomicRmwOp::Xor, memarg, builder, environ)?1533}1534Operator::I64AtomicRmwXor { memarg } => 
        Operator::I64AtomicRmwXor { memarg } => {
            translate_atomic_rmw(I64, I64, AtomicRmwOp::Xor, memarg, builder, environ)?
        }
        Operator::I32AtomicRmw8XorU { memarg } => {
            translate_atomic_rmw(I32, I8, AtomicRmwOp::Xor, memarg, builder, environ)?
        }
        Operator::I32AtomicRmw16XorU { memarg } => {
            translate_atomic_rmw(I32, I16, AtomicRmwOp::Xor, memarg, builder, environ)?
        }
        Operator::I64AtomicRmw8XorU { memarg } => {
            translate_atomic_rmw(I64, I8, AtomicRmwOp::Xor, memarg, builder, environ)?
        }
        Operator::I64AtomicRmw16XorU { memarg } => {
            translate_atomic_rmw(I64, I16, AtomicRmwOp::Xor, memarg, builder, environ)?
        }
        Operator::I64AtomicRmw32XorU { memarg } => {
            translate_atomic_rmw(I64, I32, AtomicRmwOp::Xor, memarg, builder, environ)?
        }

        Operator::I32AtomicRmwXchg { memarg } => {
            translate_atomic_rmw(I32, I32, AtomicRmwOp::Xchg, memarg, builder, environ)?
        }
        Operator::I64AtomicRmwXchg { memarg } => {
            translate_atomic_rmw(I64, I64, AtomicRmwOp::Xchg, memarg, builder, environ)?
        }
        Operator::I32AtomicRmw8XchgU { memarg } => {
            translate_atomic_rmw(I32, I8, AtomicRmwOp::Xchg, memarg, builder, environ)?
        }
        Operator::I32AtomicRmw16XchgU { memarg } => {
            translate_atomic_rmw(I32, I16, AtomicRmwOp::Xchg, memarg, builder, environ)?
        }
        Operator::I64AtomicRmw8XchgU { memarg } => {
            translate_atomic_rmw(I64, I8, AtomicRmwOp::Xchg, memarg, builder, environ)?
        }
        Operator::I64AtomicRmw16XchgU { memarg } => {
            translate_atomic_rmw(I64, I16, AtomicRmwOp::Xchg, memarg, builder, environ)?
        }
        Operator::I64AtomicRmw32XchgU { memarg } => {
            translate_atomic_rmw(I64, I32, AtomicRmwOp::Xchg, memarg, builder, environ)?
        }

        Operator::I32AtomicRmwCmpxchg { memarg } => {
            translate_atomic_cas(I32, I32, memarg, builder, environ)?
        }
        Operator::I64AtomicRmwCmpxchg { memarg } => {
            translate_atomic_cas(I64, I64, memarg, builder, environ)?
        }
        Operator::I32AtomicRmw8CmpxchgU { memarg } => {
            translate_atomic_cas(I32, I8, memarg, builder, environ)?
        }
        Operator::I32AtomicRmw16CmpxchgU { memarg } => {
            translate_atomic_cas(I32, I16, memarg, builder, environ)?
        }
        Operator::I64AtomicRmw8CmpxchgU { memarg } => {
            translate_atomic_cas(I64, I8, memarg, builder, environ)?
        }
        Operator::I64AtomicRmw16CmpxchgU { memarg } => {
            translate_atomic_cas(I64, I16, memarg, builder, environ)?
        }
        Operator::I64AtomicRmw32CmpxchgU { memarg } => {
            translate_atomic_cas(I64, I32, memarg, builder, environ)?
        }

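        // `atomic.fence` lowers directly to a CLIF `fence` instruction; it takes no
        // operands and produces no results.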
        Operator::AtomicFence { .. } => {
            builder.ins().fence();
        }
        Operator::MemoryCopy { src_mem, dst_mem } => {
            let src_index = MemoryIndex::from_u32(*src_mem);
            let _src_heap = environ.get_or_create_heap(builder.func, src_index);

            let dst_index = MemoryIndex::from_u32(*dst_mem);
            let _dst_heap = environ.get_or_create_heap(builder.func, dst_index);

            let len = environ.stacks.pop1();
            let src_pos = environ.stacks.pop1();
            let dst_pos = environ.stacks.pop1();
            environ.translate_memory_copy(builder, src_index, dst_index, dst_pos, src_pos, len)?;
        }
        Operator::MemoryFill { mem } => {
            let mem = MemoryIndex::from_u32(*mem);
            let _heap = environ.get_or_create_heap(builder.func, mem);
            let len = environ.stacks.pop1();
            let val = environ.stacks.pop1();
            let dest = environ.stacks.pop1();
            environ.translate_memory_fill(builder, mem, dest, val, len)?;
        }
        Operator::MemoryInit { data_index, mem } => {
            let mem = MemoryIndex::from_u32(*mem);
            let _heap = environ.get_or_create_heap(builder.func, mem);
            let len = environ.stacks.pop1();
            let src = environ.stacks.pop1();
            let dest = environ.stacks.pop1();
            environ.translate_memory_init(builder, mem, *data_index, dest, src, len)?;
        }
        Operator::DataDrop { data_index } => {
            environ.translate_data_drop(builder.cursor(), *data_index)?;
        }
        Operator::TableSize { table: index } => {
            let result =
                environ.translate_table_size(builder.cursor(), TableIndex::from_u32(*index))?;
            environ.stacks.push1(result);
        }
        Operator::TableGrow { table: index } => {
            let table_index = TableIndex::from_u32(*index);
            let delta = environ.stacks.pop1();
            let init_value = environ.stacks.pop1();
            let result = environ.translate_table_grow(builder, table_index, delta, init_value)?;
            environ.stacks.push1(result);
        }
        Operator::TableGet { table: index } => {
            let table_index = TableIndex::from_u32(*index);
            let index = environ.stacks.pop1();
            let result = environ.translate_table_get(builder, table_index, index)?;
            environ.stacks.push1(result);
        }
        Operator::TableSet { table: index } => {
            let table_index = TableIndex::from_u32(*index);
            let value = environ.stacks.pop1();
            let index = environ.stacks.pop1();
            environ.translate_table_set(builder, table_index, value, index)?;
        }
        Operator::TableCopy {
            dst_table: dst_table_index,
            src_table: src_table_index,
        } => {
            let len = environ.stacks.pop1();
            let src = environ.stacks.pop1();
            let dest = environ.stacks.pop1();
            environ.translate_table_copy(
                builder,
                TableIndex::from_u32(*dst_table_index),
                TableIndex::from_u32(*src_table_index),
                dest,
                src,
                len,
            )?;
        }
        Operator::TableFill { table } => {
            let table_index = TableIndex::from_u32(*table);
            let len = environ.stacks.pop1();
            let val = environ.stacks.pop1();
            let dest = environ.stacks.pop1();
            environ.translate_table_fill(builder, table_index, dest, val, len)?;
        }
        Operator::TableInit {
            elem_index,
            table: table_index,
        } => {
            let len = environ.stacks.pop1();
            let src = environ.stacks.pop1();
            let dest = environ.stacks.pop1();
            environ.translate_table_init(
                builder,
                *elem_index,
                TableIndex::from_u32(*table_index),
                dest,
                src,
                len,
            )?;
        }
        Operator::ElemDrop { elem_index } => {
            environ.translate_elem_drop(builder.cursor(), *elem_index)?;
        }
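        // `v128.const` materializes its 16 immediate bytes in the function's constant
        // pool and loads them with `vconst`.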
        Operator::V128Const { value } => {
            let data = value.bytes().to_vec().into();
            let handle = builder.func.dfg.constants.insert(data);
            let value = builder.ins().vconst(I8X16, handle);
            // the v128.const is typed in CLIF as an I8X16 but bitcast to a different type
            // before use
            environ.stacks.push1(value)
        }
        Operator::I8x16Splat | Operator::I16x8Splat => {
            let reduced = builder
                .ins()
                .ireduce(type_of(op).lane_type(), environ.stacks.pop1());
            let splatted = builder.ins().splat(type_of(op), reduced);
            environ.stacks.push1(splatted)
        }
        Operator::I32x4Splat
        | Operator::I64x2Splat
        | Operator::F32x4Splat
        | Operator::F64x2Splat => {
            let splatted = builder.ins().splat(type_of(op), environ.stacks.pop1());
            environ.stacks.push1(splatted)
        }
        Operator::V128Load8Splat { memarg }
        | Operator::V128Load16Splat { memarg }
        | Operator::V128Load32Splat { memarg }
        | Operator::V128Load64Splat { memarg } => {
            unwrap_or_return_unreachable_state!(
                environ,
                translate_load(
                    memarg,
                    ir::Opcode::Load,
                    type_of(op).lane_type(),
                    builder,
                    environ,
                )?
            );
            let splatted = builder.ins().splat(type_of(op), environ.stacks.pop1());
            environ.stacks.push1(splatted)
        }
        Operator::V128Load32Zero { memarg } | Operator::V128Load64Zero { memarg } => {
            unwrap_or_return_unreachable_state!(
                environ,
                translate_load(
                    memarg,
                    ir::Opcode::Load,
                    type_of(op).lane_type(),
                    builder,
                    environ,
                )?
            );
            let as_vector = builder
                .ins()
                .scalar_to_vector(type_of(op), environ.stacks.pop1());
            environ.stacks.push1(as_vector)
        }
        Operator::V128Load8Lane { memarg, lane }
        | Operator::V128Load16Lane { memarg, lane }
        | Operator::V128Load32Lane { memarg, lane }
        | Operator::V128Load64Lane { memarg, lane } => {
            let vector = pop1_with_bitcast(environ, type_of(op), builder);
            unwrap_or_return_unreachable_state!(
                environ,
                translate_load(
                    memarg,
                    ir::Opcode::Load,
                    type_of(op).lane_type(),
                    builder,
                    environ,
                )?
            );
            let replacement = environ.stacks.pop1();
            environ
                .stacks
                .push1(builder.ins().insertlane(vector, replacement, *lane))
        }
        Operator::V128Store8Lane { memarg, lane }
        | Operator::V128Store16Lane { memarg, lane }
        | Operator::V128Store32Lane { memarg, lane }
        | Operator::V128Store64Lane { memarg, lane } => {
            let vector = pop1_with_bitcast(environ, type_of(op), builder);
            environ
                .stacks
                .push1(builder.ins().extractlane(vector, *lane));
            translate_store(memarg, ir::Opcode::Store, builder, environ)?;
        }
        Operator::I8x16ExtractLaneS { lane } | Operator::I16x8ExtractLaneS { lane } => {
            let vector = pop1_with_bitcast(environ, type_of(op), builder);
            let extracted = builder.ins().extractlane(vector, *lane);
            environ.stacks.push1(builder.ins().sextend(I32, extracted))
        }
        Operator::I8x16ExtractLaneU { lane } | Operator::I16x8ExtractLaneU { lane } => {
            let vector = pop1_with_bitcast(environ, type_of(op), builder);
            let extracted = builder.ins().extractlane(vector, *lane);
            environ.stacks.push1(builder.ins().uextend(I32, extracted));
            // On x86, PEXTRB zeroes the upper bits of the destination register of extractlane so
            // uextend could be elided; for now, uextend is needed for Cranelift's type checks to
            // work.
        }
        Operator::I32x4ExtractLane { lane }
        | Operator::I64x2ExtractLane { lane }
        | Operator::F32x4ExtractLane { lane }
        | Operator::F64x2ExtractLane { lane } => {
            let vector = pop1_with_bitcast(environ, type_of(op), builder);
            environ
                .stacks
                .push1(builder.ins().extractlane(vector, *lane))
        }
        Operator::I8x16ReplaceLane { lane } | Operator::I16x8ReplaceLane { lane } => {
            let (vector, replacement) = environ.stacks.pop2();
            let ty = type_of(op);
            let reduced = builder.ins().ireduce(ty.lane_type(), replacement);
            let vector = optionally_bitcast_vector(vector, ty, builder);
            environ
                .stacks
                .push1(builder.ins().insertlane(vector, reduced, *lane))
        }
        Operator::I32x4ReplaceLane { lane }
        | Operator::I64x2ReplaceLane { lane }
        | Operator::F32x4ReplaceLane { lane }
        | Operator::F64x2ReplaceLane { lane } => {
            let (vector, replacement) = environ.stacks.pop2();
            let vector = optionally_bitcast_vector(vector, type_of(op), builder);
            environ
                .stacks
                .push1(builder.ins().insertlane(vector, replacement, *lane))
        }
        Operator::I8x16Shuffle { lanes, .. } => {
            let (a, b) = pop2_with_bitcast(environ, I8X16, builder);
            let result = environ.i8x16_shuffle(builder, a, b, lanes);
            environ.stacks.push1(result);
            // At this point the original types of a and b are lost; users of this value (i.e. this
            // WASM-to-CLIF translator) may need to bitcast for type-correctness. This is due
            // to WASM using the less specific v128 type for certain operations and more specific
            // types (e.g. i8x16) for others.
        }
        Operator::I8x16Swizzle => {
            let (a, b) = pop2_with_bitcast(environ, I8X16, builder);
            let result = environ.swizzle(builder, a, b);
            environ.stacks.push1(result);
        }
        Operator::I8x16Add | Operator::I16x8Add | Operator::I32x4Add | Operator::I64x2Add => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().iadd(a, b))
        }
        Operator::I8x16AddSatS | Operator::I16x8AddSatS => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().sadd_sat(a, b))
        }
        Operator::I8x16AddSatU | Operator::I16x8AddSatU => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().uadd_sat(a, b))
        }
        Operator::I8x16Sub | Operator::I16x8Sub | Operator::I32x4Sub | Operator::I64x2Sub => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().isub(a, b))
        }
        Operator::I8x16SubSatS | Operator::I16x8SubSatS => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().ssub_sat(a, b))
        }
        Operator::I8x16SubSatU | Operator::I16x8SubSatU => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().usub_sat(a, b))
        }
        Operator::I8x16MinS | Operator::I16x8MinS | Operator::I32x4MinS => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().smin(a, b))
        }
        Operator::I8x16MinU | Operator::I16x8MinU | Operator::I32x4MinU => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().umin(a, b))
        }
        Operator::I8x16MaxS | Operator::I16x8MaxS | Operator::I32x4MaxS => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().smax(a, b))
        }
        Operator::I8x16MaxU | Operator::I16x8MaxU | Operator::I32x4MaxU => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().umax(a, b))
        }
        Operator::I8x16AvgrU | Operator::I16x8AvgrU => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().avg_round(a, b))
        }
        Operator::I8x16Neg | Operator::I16x8Neg | Operator::I32x4Neg | Operator::I64x2Neg => {
            let a = pop1_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().ineg(a))
        }
        Operator::I8x16Abs | Operator::I16x8Abs | Operator::I32x4Abs | Operator::I64x2Abs => {
            let a = pop1_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().iabs(a))
        }
        Operator::I16x8Mul | Operator::I32x4Mul | Operator::I64x2Mul => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().imul(a, b))
        }
        Operator::V128Or => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().bor(a, b))
        }
        Operator::V128Xor => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().bxor(a, b))
        }
        Operator::V128And => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().band(a, b))
        }
        Operator::V128AndNot => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().band_not(a, b))
        }
        Operator::V128Not => {
            let a = environ.stacks.pop1();
            environ.stacks.push1(builder.ins().bnot(a));
        }
        Operator::I8x16Shl | Operator::I16x8Shl | Operator::I32x4Shl | Operator::I64x2Shl => {
            let (a, b) = environ.stacks.pop2();
            let bitcast_a = optionally_bitcast_vector(a, type_of(op), builder);
            // The spec expects to shift with `b mod lanewidth`; This is directly compatible
            // with cranelift's instruction.
            environ.stacks.push1(builder.ins().ishl(bitcast_a, b))
        }
        Operator::I8x16ShrU | Operator::I16x8ShrU | Operator::I32x4ShrU | Operator::I64x2ShrU => {
            let (a, b) = environ.stacks.pop2();
            let bitcast_a = optionally_bitcast_vector(a, type_of(op), builder);
            // The spec expects to shift with `b mod lanewidth`; This is directly compatible
            // with cranelift's instruction.
            environ.stacks.push1(builder.ins().ushr(bitcast_a, b))
        }
        Operator::I8x16ShrS | Operator::I16x8ShrS | Operator::I32x4ShrS | Operator::I64x2ShrS => {
            let (a, b) = environ.stacks.pop2();
            let bitcast_a = optionally_bitcast_vector(a, type_of(op), builder);
            // The spec expects to shift with `b mod lanewidth`; This is directly compatible
            // with cranelift's instruction.
            environ.stacks.push1(builder.ins().sshr(bitcast_a, b))
        }
        Operator::V128Bitselect => {
            let (a, b, c) = pop3_with_bitcast(environ, I8X16, builder);
            // The CLIF operand ordering is slightly different and the types of all three
            // operands must match (hence the bitcast).
            environ.stacks.push1(builder.ins().bitselect(c, a, b))
        }
        Operator::V128AnyTrue => {
            let a = pop1_with_bitcast(environ, type_of(op), builder);
            let bool_result = builder.ins().vany_true(a);
            environ
                .stacks
                .push1(builder.ins().uextend(I32, bool_result))
        }
        Operator::I8x16AllTrue
        | Operator::I16x8AllTrue
        | Operator::I32x4AllTrue
        | Operator::I64x2AllTrue => {
            let a = pop1_with_bitcast(environ, type_of(op), builder);
            let bool_result = builder.ins().vall_true(a);
            environ
                .stacks
                .push1(builder.ins().uextend(I32, bool_result))
        }
        Operator::I8x16Bitmask
        | Operator::I16x8Bitmask
        | Operator::I32x4Bitmask
        | Operator::I64x2Bitmask => {
            let a = pop1_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().vhigh_bits(I32, a));
        }
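        // Vector comparisons are funneled through `translate_vector_icmp` /
        // `translate_vector_fcmp`, parameterized only by the condition code and the
        // vector type of the operator.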
        Operator::I8x16Eq | Operator::I16x8Eq | Operator::I32x4Eq | Operator::I64x2Eq => {
            translate_vector_icmp(IntCC::Equal, type_of(op), builder, environ)
        }
        Operator::I8x16Ne | Operator::I16x8Ne | Operator::I32x4Ne | Operator::I64x2Ne => {
            translate_vector_icmp(IntCC::NotEqual, type_of(op), builder, environ)
        }
        Operator::I8x16GtS | Operator::I16x8GtS | Operator::I32x4GtS | Operator::I64x2GtS => {
            translate_vector_icmp(IntCC::SignedGreaterThan, type_of(op), builder, environ)
        }
        Operator::I8x16LtS | Operator::I16x8LtS | Operator::I32x4LtS | Operator::I64x2LtS => {
            translate_vector_icmp(IntCC::SignedLessThan, type_of(op), builder, environ)
        }
        Operator::I8x16GtU | Operator::I16x8GtU | Operator::I32x4GtU => {
            translate_vector_icmp(IntCC::UnsignedGreaterThan, type_of(op), builder, environ)
        }
        Operator::I8x16LtU | Operator::I16x8LtU | Operator::I32x4LtU => {
            translate_vector_icmp(IntCC::UnsignedLessThan, type_of(op), builder, environ)
        }
        Operator::I8x16GeS | Operator::I16x8GeS | Operator::I32x4GeS | Operator::I64x2GeS => {
            translate_vector_icmp(
                IntCC::SignedGreaterThanOrEqual,
                type_of(op),
                builder,
                environ,
            )
        }
        Operator::I8x16LeS | Operator::I16x8LeS | Operator::I32x4LeS | Operator::I64x2LeS => {
            translate_vector_icmp(IntCC::SignedLessThanOrEqual, type_of(op), builder, environ)
        }
        Operator::I8x16GeU | Operator::I16x8GeU | Operator::I32x4GeU => translate_vector_icmp(
            IntCC::UnsignedGreaterThanOrEqual,
            type_of(op),
            builder,
            environ,
        ),
        Operator::I8x16LeU | Operator::I16x8LeU | Operator::I32x4LeU => translate_vector_icmp(
            IntCC::UnsignedLessThanOrEqual,
            type_of(op),
            builder,
            environ,
        ),
        Operator::F32x4Eq | Operator::F64x2Eq => {
            translate_vector_fcmp(FloatCC::Equal, type_of(op), builder, environ)
        }
        Operator::F32x4Ne | Operator::F64x2Ne => {
            translate_vector_fcmp(FloatCC::NotEqual, type_of(op), builder, environ)
        }
        Operator::F32x4Lt | Operator::F64x2Lt => {
            translate_vector_fcmp(FloatCC::LessThan, type_of(op), builder, environ)
        }
        Operator::F32x4Gt | Operator::F64x2Gt => {
            translate_vector_fcmp(FloatCC::GreaterThan, type_of(op), builder, environ)
        }
        Operator::F32x4Le | Operator::F64x2Le => {
            translate_vector_fcmp(FloatCC::LessThanOrEqual, type_of(op), builder, environ)
        }
        Operator::F32x4Ge | Operator::F64x2Ge => {
            translate_vector_fcmp(FloatCC::GreaterThanOrEqual, type_of(op), builder, environ)
        }
        Operator::F32x4Add | Operator::F64x2Add => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().fadd(a, b))
        }
        Operator::F32x4Sub | Operator::F64x2Sub => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().fsub(a, b))
        }
        Operator::F32x4Mul | Operator::F64x2Mul => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().fmul(a, b))
        }
        Operator::F32x4Div | Operator::F64x2Div => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().fdiv(a, b))
        }
        Operator::F32x4Max | Operator::F64x2Max => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().fmax(a, b))
        }
        Operator::F32x4Min | Operator::F64x2Min => {
            let (a, b) = pop2_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().fmin(a, b))
        }
        Operator::F32x4PMax | Operator::F64x2PMax => {
            // Note the careful ordering here with respect to `fcmp` and
            // `bitselect`. This matches the spec definition of:
            //
            // fpmax(z1, z2) =
            //     * If z1 is less than z2 then return z2.
            //     * Else return z1.
            let ty = type_of(op);
            let (a, b) = pop2_with_bitcast(environ, ty, builder);
            let cmp = builder.ins().fcmp(FloatCC::LessThan, a, b);
            let cmp = optionally_bitcast_vector(cmp, ty, builder);
            environ.stacks.push1(builder.ins().bitselect(cmp, b, a))
        }
        Operator::F32x4PMin | Operator::F64x2PMin => {
            // Note the careful ordering here which is similar to `pmax` above:
            //
            // fpmin(z1, z2) =
            //     * If z2 is less than z1 then return z2.
            //     * Else return z1.
            let ty = type_of(op);
            let (a, b) = pop2_with_bitcast(environ, ty, builder);
            let cmp = builder.ins().fcmp(FloatCC::LessThan, b, a);
            let cmp = optionally_bitcast_vector(cmp, ty, builder);
            environ.stacks.push1(builder.ins().bitselect(cmp, b, a))
        }
        Operator::F32x4Sqrt | Operator::F64x2Sqrt => {
            let a = pop1_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().sqrt(a))
        }
        Operator::F32x4Neg | Operator::F64x2Neg => {
            let a = pop1_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().fneg(a))
        }
        Operator::F32x4Abs | Operator::F64x2Abs => {
            let a = pop1_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().fabs(a))
        }
        Operator::F32x4ConvertI32x4S => {
            let a = pop1_with_bitcast(environ, I32X4, builder);
            environ.stacks.push1(builder.ins().fcvt_from_sint(F32X4, a))
        }
        Operator::F32x4ConvertI32x4U => {
            let a = pop1_with_bitcast(environ, I32X4, builder);
            environ.stacks.push1(builder.ins().fcvt_from_uint(F32X4, a))
        }
        Operator::F64x2ConvertLowI32x4S => {
            let a = pop1_with_bitcast(environ, I32X4, builder);
            let widened_a = builder.ins().swiden_low(a);
            environ
                .stacks
                .push1(builder.ins().fcvt_from_sint(F64X2, widened_a));
        }
        Operator::F64x2ConvertLowI32x4U => {
            let a = pop1_with_bitcast(environ, I32X4, builder);
            let widened_a = builder.ins().uwiden_low(a);
            environ
                .stacks
                .push1(builder.ins().fcvt_from_uint(F64X2, widened_a));
        }
        Operator::F64x2PromoteLowF32x4 => {
            let a = pop1_with_bitcast(environ, F32X4, builder);
            environ.stacks.push1(builder.ins().fvpromote_low(a));
        }
        Operator::F32x4DemoteF64x2Zero => {
            let a = pop1_with_bitcast(environ, F64X2, builder);
            environ.stacks.push1(builder.ins().fvdemote(a));
        }
        Operator::I32x4TruncSatF32x4S => {
            let a = pop1_with_bitcast(environ, F32X4, builder);
            environ
                .stacks
                .push1(builder.ins().fcvt_to_sint_sat(I32X4, a))
        }
        Operator::I32x4TruncSatF64x2SZero => {
            let a = pop1_with_bitcast(environ, F64X2, builder);
            let converted_a = builder.ins().fcvt_to_sint_sat(I64X2, a);
            let handle = builder.func.dfg.constants.insert(vec![0u8; 16].into());
            let zero = builder.ins().vconst(I64X2, handle);

            environ
                .stacks
                .push1(builder.ins().snarrow(converted_a, zero));
        }

        // FIXME(#5913): the relaxed instructions here are translated the same
        // as the saturating instructions, even when the code generator
        // configuration allows for different semantics across hosts. On x86,
        // however, it's theoretically possible to have a slightly more optimal
        // lowering which accounts for NaN differently, although the lowering is
        // still not trivial (e.g. one instruction). At this time the
        // more-optimal-but-still-large lowering for x86 is not implemented so
        // the relaxed instructions are listed here instead of down below with
        // the other relaxed instructions. An x86-specific implementation (or
        // perhaps for other backends too) should be added and the codegen for
        // the relaxed instruction should conditionally be different.
        Operator::I32x4RelaxedTruncF32x4U | Operator::I32x4TruncSatF32x4U => {
            let a = pop1_with_bitcast(environ, F32X4, builder);
            environ
                .stacks
                .push1(builder.ins().fcvt_to_uint_sat(I32X4, a))
        }
        Operator::I32x4RelaxedTruncF64x2UZero | Operator::I32x4TruncSatF64x2UZero => {
            let a = pop1_with_bitcast(environ, F64X2, builder);
            let zero_constant = builder.func.dfg.constants.insert(vec![0u8; 16].into());
            let result = if environ.is_x86() && !environ.isa().has_round() {
                // On x86 the vector lowering for `fcvt_to_uint_sat` requires
                // SSE4.1 `round` instructions. If SSE4.1 isn't available it
                // falls back to a libcall which we don't want in Wasmtime.
                // Handle this by falling back to the scalar implementation
                // which does not require SSE4.1 instructions.
                let lane0 = builder.ins().extractlane(a, 0);
                let lane1 = builder.ins().extractlane(a, 1);
                let lane0_rounded = builder.ins().fcvt_to_uint_sat(I32, lane0);
                let lane1_rounded = builder.ins().fcvt_to_uint_sat(I32, lane1);
                let result = builder.ins().vconst(I32X4, zero_constant);
                let result = builder.ins().insertlane(result, lane0_rounded, 0);
                builder.ins().insertlane(result, lane1_rounded, 1)
            } else {
                let converted_a = builder.ins().fcvt_to_uint_sat(I64X2, a);
                let zero = builder.ins().vconst(I64X2, zero_constant);
                builder.ins().uunarrow(converted_a, zero)
            };
            environ.stacks.push1(result);
        }

        Operator::I8x16NarrowI16x8S => {
            let (a, b) = pop2_with_bitcast(environ, I16X8, builder);
            environ.stacks.push1(builder.ins().snarrow(a, b))
        }
        Operator::I16x8NarrowI32x4S => {
            let (a, b) = pop2_with_bitcast(environ, I32X4, builder);
            environ.stacks.push1(builder.ins().snarrow(a, b))
        }
        Operator::I8x16NarrowI16x8U => {
            let (a, b) = pop2_with_bitcast(environ, I16X8, builder);
            environ.stacks.push1(builder.ins().unarrow(a, b))
        }
        Operator::I16x8NarrowI32x4U => {
            let (a, b) = pop2_with_bitcast(environ, I32X4, builder);
            environ.stacks.push1(builder.ins().unarrow(a, b))
        }
        Operator::I16x8ExtendLowI8x16S => {
            let a = pop1_with_bitcast(environ, I8X16, builder);
            environ.stacks.push1(builder.ins().swiden_low(a))
        }
        Operator::I16x8ExtendHighI8x16S => {
            let a = pop1_with_bitcast(environ, I8X16, builder);
            environ.stacks.push1(builder.ins().swiden_high(a))
        }
        Operator::I16x8ExtendLowI8x16U => {
            let a = pop1_with_bitcast(environ, I8X16, builder);
            environ.stacks.push1(builder.ins().uwiden_low(a))
        }
        Operator::I16x8ExtendHighI8x16U => {
            let a = pop1_with_bitcast(environ, I8X16, builder);
            environ.stacks.push1(builder.ins().uwiden_high(a))
        }
        Operator::I32x4ExtendLowI16x8S => {
            let a = pop1_with_bitcast(environ, I16X8, builder);
            environ.stacks.push1(builder.ins().swiden_low(a))
        }
        Operator::I32x4ExtendHighI16x8S => {
            let a = pop1_with_bitcast(environ, I16X8, builder);
            environ.stacks.push1(builder.ins().swiden_high(a))
        }
        Operator::I32x4ExtendLowI16x8U => {
            let a = pop1_with_bitcast(environ, I16X8, builder);
            environ.stacks.push1(builder.ins().uwiden_low(a))
        }
        Operator::I32x4ExtendHighI16x8U => {
            let a = pop1_with_bitcast(environ, I16X8, builder);
            environ.stacks.push1(builder.ins().uwiden_high(a))
        }
        Operator::I64x2ExtendLowI32x4S => {
            let a = pop1_with_bitcast(environ, I32X4, builder);
            environ.stacks.push1(builder.ins().swiden_low(a))
        }
        Operator::I64x2ExtendHighI32x4S => {
            let a = pop1_with_bitcast(environ, I32X4, builder);
            environ.stacks.push1(builder.ins().swiden_high(a))
        }
        Operator::I64x2ExtendLowI32x4U => {
            let a = pop1_with_bitcast(environ, I32X4, builder);
            environ.stacks.push1(builder.ins().uwiden_low(a))
        }
        Operator::I64x2ExtendHighI32x4U => {
            let a = pop1_with_bitcast(environ, I32X4, builder);
            environ.stacks.push1(builder.ins().uwiden_high(a))
        }
        Operator::I16x8ExtAddPairwiseI8x16S => {
            let a = pop1_with_bitcast(environ, I8X16, builder);
            let widen_low = builder.ins().swiden_low(a);
            let widen_high = builder.ins().swiden_high(a);
            environ
                .stacks
                .push1(builder.ins().iadd_pairwise(widen_low, widen_high));
        }
        Operator::I32x4ExtAddPairwiseI16x8S => {
            let a = pop1_with_bitcast(environ, I16X8, builder);
            let widen_low = builder.ins().swiden_low(a);
            let widen_high = builder.ins().swiden_high(a);
            environ
                .stacks
                .push1(builder.ins().iadd_pairwise(widen_low, widen_high));
        }
        Operator::I16x8ExtAddPairwiseI8x16U => {
            let a = pop1_with_bitcast(environ, I8X16, builder);
            let widen_low = builder.ins().uwiden_low(a);
            let widen_high = builder.ins().uwiden_high(a);
            environ
                .stacks
                .push1(builder.ins().iadd_pairwise(widen_low, widen_high));
        }
        Operator::I32x4ExtAddPairwiseI16x8U => {
            let a = pop1_with_bitcast(environ, I16X8, builder);
            let widen_low = builder.ins().uwiden_low(a);
            let widen_high = builder.ins().uwiden_high(a);
            environ
                .stacks
                .push1(builder.ins().iadd_pairwise(widen_low, widen_high));
        }
        Operator::F32x4Ceil => {
            let arg = pop1_with_bitcast(environ, F32X4, builder);
            let result = environ.ceil_f32x4(builder, arg);
            environ.stacks.push1(result);
        }
        Operator::F64x2Ceil => {
            let arg = pop1_with_bitcast(environ, F64X2, builder);
            let result = environ.ceil_f64x2(builder, arg);
            environ.stacks.push1(result);
        }
        Operator::F32x4Floor => {
            let arg = pop1_with_bitcast(environ, F32X4, builder);
            let result = environ.floor_f32x4(builder, arg);
            environ.stacks.push1(result);
        }
        Operator::F64x2Floor => {
            let arg = pop1_with_bitcast(environ, F64X2, builder);
            let result = environ.floor_f64x2(builder, arg);
            environ.stacks.push1(result);
        }
        Operator::F32x4Trunc => {
            let arg = pop1_with_bitcast(environ, F32X4, builder);
            let result = environ.trunc_f32x4(builder, arg);
            environ.stacks.push1(result);
        }
        Operator::F64x2Trunc => {
            let arg = pop1_with_bitcast(environ, F64X2, builder);
            let result = environ.trunc_f64x2(builder, arg);
            environ.stacks.push1(result);
        }
        Operator::F32x4Nearest => {
            let arg = pop1_with_bitcast(environ, F32X4, builder);
            let result = environ.nearest_f32x4(builder, arg);
            environ.stacks.push1(result);
        }
        Operator::F64x2Nearest => {
            let arg = pop1_with_bitcast(environ, F64X2, builder);
            let result = environ.nearest_f64x2(builder, arg);
            environ.stacks.push1(result);
        }
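        // `i32x4.dot_i16x8_s` has no single CLIF equivalent: it is expanded into
        // sign-extending widens, two multiplies, and a pairwise add.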
        Operator::I32x4DotI16x8S => {
            let (a, b) = pop2_with_bitcast(environ, I16X8, builder);
            let alow = builder.ins().swiden_low(a);
            let blow = builder.ins().swiden_low(b);
            let low = builder.ins().imul(alow, blow);
            let ahigh = builder.ins().swiden_high(a);
            let bhigh = builder.ins().swiden_high(b);
            let high = builder.ins().imul(ahigh, bhigh);
            environ.stacks.push1(builder.ins().iadd_pairwise(low, high));
        }
        Operator::I8x16Popcnt => {
            let arg = pop1_with_bitcast(environ, type_of(op), builder);
            environ.stacks.push1(builder.ins().popcnt(arg));
        }
        Operator::I16x8Q15MulrSatS => {
            let (a, b) = pop2_with_bitcast(environ, I16X8, builder);
            environ.stacks.push1(builder.ins().sqmul_round_sat(a, b))
        }
        Operator::I16x8ExtMulLowI8x16S => {
            let (a, b) = pop2_with_bitcast(environ, I8X16, builder);
            let a_low = builder.ins().swiden_low(a);
            let b_low = builder.ins().swiden_low(b);
            environ.stacks.push1(builder.ins().imul(a_low, b_low));
        }
        Operator::I16x8ExtMulHighI8x16S => {
            let (a, b) = pop2_with_bitcast(environ, I8X16, builder);
            let a_high = builder.ins().swiden_high(a);
            let b_high = builder.ins().swiden_high(b);
            environ.stacks.push1(builder.ins().imul(a_high, b_high));
        }
        Operator::I16x8ExtMulLowI8x16U => {
            let (a, b) = pop2_with_bitcast(environ, I8X16, builder);
            let a_low = builder.ins().uwiden_low(a);
            let b_low = builder.ins().uwiden_low(b);
            environ.stacks.push1(builder.ins().imul(a_low, b_low));
        }
        Operator::I16x8ExtMulHighI8x16U => {
            let (a, b) = pop2_with_bitcast(environ, I8X16, builder);
            let a_high = builder.ins().uwiden_high(a);
            let b_high = builder.ins().uwiden_high(b);
            environ.stacks.push1(builder.ins().imul(a_high, b_high));
        }
        Operator::I32x4ExtMulLowI16x8S => {
            let (a, b) = pop2_with_bitcast(environ, I16X8, builder);
            let a_low = builder.ins().swiden_low(a);
            let b_low = builder.ins().swiden_low(b);
            environ.stacks.push1(builder.ins().imul(a_low, b_low));
        }
        Operator::I32x4ExtMulHighI16x8S => {
            let (a, b) = pop2_with_bitcast(environ, I16X8, builder);
            let a_high = builder.ins().swiden_high(a);
            let b_high = builder.ins().swiden_high(b);
            environ.stacks.push1(builder.ins().imul(a_high, b_high));
        }
        Operator::I32x4ExtMulLowI16x8U => {
            let (a, b) = pop2_with_bitcast(environ, I16X8, builder);
            let a_low = builder.ins().uwiden_low(a);
            let b_low = builder.ins().uwiden_low(b);
            environ.stacks.push1(builder.ins().imul(a_low, b_low));
        }
        Operator::I32x4ExtMulHighI16x8U => {
            let (a, b) = pop2_with_bitcast(environ, I16X8, builder);
            let a_high = builder.ins().uwiden_high(a);
            let b_high = builder.ins().uwiden_high(b);
            environ.stacks.push1(builder.ins().imul(a_high, b_high));
        }
        Operator::I64x2ExtMulLowI32x4S => {
            let (a, b) = pop2_with_bitcast(environ, I32X4, builder);
            let a_low = builder.ins().swiden_low(a);
            let b_low = builder.ins().swiden_low(b);
            environ.stacks.push1(builder.ins().imul(a_low, b_low));
        }
        Operator::I64x2ExtMulHighI32x4S => {
            let (a, b) = pop2_with_bitcast(environ, I32X4, builder);
            let a_high = builder.ins().swiden_high(a);
            let b_high = builder.ins().swiden_high(b);
            environ.stacks.push1(builder.ins().imul(a_high, b_high));
        }
        Operator::I64x2ExtMulLowI32x4U => {
            let (a, b) = pop2_with_bitcast(environ, I32X4, builder);
            let a_low = builder.ins().uwiden_low(a);
            let b_low = builder.ins().uwiden_low(b);
            environ.stacks.push1(builder.ins().imul(a_low, b_low));
        }
        Operator::I64x2ExtMulHighI32x4U => {
            let (a, b) = pop2_with_bitcast(environ, I32X4, builder);
            let a_high = builder.ins().uwiden_high(a);
            let b_high = builder.ins().uwiden_high(b);
            environ.stacks.push1(builder.ins().imul(a_high, b_high));
        }
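        // `memory.discard` from the proposed memory-control extension is not
        // implemented; it is reported as an unsupported operator.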
        Operator::MemoryDiscard { .. } => {
            return Err(wasm_unsupported!(
                "proposed memory-control operator {:?}",
                op
            ));
        }

        Operator::F32x4RelaxedMax | Operator::F64x2RelaxedMax => {
            let ty = type_of(op);
            let (a, b) = pop2_with_bitcast(environ, ty, builder);
            environ.stacks.push1(
                if environ.relaxed_simd_deterministic() || !environ.is_x86() {
                    // Deterministic semantics match the `fmax` instruction, or
                    // the `fAAxBB.max` wasm instruction.
                    builder.ins().fmax(a, b)
                } else {
                    // Note that this matches the `pmax` translation which has
                    // careful ordering of its operands to trigger
                    // pattern-matches in the x86 backend.
                    let cmp = builder.ins().fcmp(FloatCC::LessThan, a, b);
                    let cmp = optionally_bitcast_vector(cmp, ty, builder);
                    builder.ins().bitselect(cmp, b, a)
                },
            )
        }

        Operator::F32x4RelaxedMin | Operator::F64x2RelaxedMin => {
            let ty = type_of(op);
            let (a, b) = pop2_with_bitcast(environ, ty, builder);
            environ.stacks.push1(
                if environ.relaxed_simd_deterministic() || !environ.is_x86() {
                    // Deterministic semantics match the `fmin` instruction, or
                    // the `fAAxBB.min` wasm instruction.
                    builder.ins().fmin(a, b)
                } else {
                    // Note that this matches the `pmin` translation which has
                    // careful ordering of its operands to trigger
                    // pattern-matches in the x86 backend.
                    let cmp = builder.ins().fcmp(FloatCC::LessThan, b, a);
                    let cmp = optionally_bitcast_vector(cmp, ty, builder);
                    builder.ins().bitselect(cmp, b, a)
                },
            );
        }

        Operator::I8x16RelaxedSwizzle => {
            let (a, b) = pop2_with_bitcast(environ, I8X16, builder);
            let result = environ.relaxed_swizzle(builder, a, b);
            environ.stacks.push1(result);
        }

        Operator::F32x4RelaxedMadd => {
            let (a, b, c) = pop3_with_bitcast(environ, type_of(op), builder);
            let result = environ.fma_f32x4(builder, a, b, c);
            environ.stacks.push1(result);
        }
        Operator::F64x2RelaxedMadd => {
            let (a, b, c) = pop3_with_bitcast(environ, type_of(op), builder);
            let result = environ.fma_f64x2(builder, a, b, c);
            environ.stacks.push1(result);
        }
        Operator::F32x4RelaxedNmadd => {
            let (a, b, c) = pop3_with_bitcast(environ, type_of(op), builder);
            let a = builder.ins().fneg(a);
            let result = environ.fma_f32x4(builder, a, b, c);
            environ.stacks.push1(result);
        }
        Operator::F64x2RelaxedNmadd => {
            let (a, b, c) = pop3_with_bitcast(environ, type_of(op), builder);
            let a = builder.ins().fneg(a);
            let result = environ.fma_f64x2(builder, a, b, c);
            environ.stacks.push1(result);
        }

        Operator::I8x16RelaxedLaneselect
        | Operator::I16x8RelaxedLaneselect
        | Operator::I32x4RelaxedLaneselect
        | Operator::I64x2RelaxedLaneselect => {
            let ty = type_of(op);
            let (a, b, c) = pop3_with_bitcast(environ, ty, builder);
            // Note that the variable swaps here are intentional due to
            // the difference of the order of the wasm op and the clif
            // op.
            environ.stacks.push1(
                if environ.relaxed_simd_deterministic()
                    || !environ.use_blendv_for_relaxed_laneselect(ty)
                {
                    // Deterministic semantics are a `bitselect` along the lines
                    // of the wasm `v128.bitselect` instruction.
                    builder.ins().bitselect(c, a, b)
                } else {
                    builder.ins().blendv(c, a, b)
                },
            );
        }

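        // Relaxed truncations: in deterministic mode the saturating conversion is
        // reused; otherwise the x86 backend may use its native truncating conversion
        // (`x86_cvtt2dq`).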
        Operator::I32x4RelaxedTruncF32x4S => {
            let a = pop1_with_bitcast(environ, F32X4, builder);
            environ.stacks.push1(
                if environ.relaxed_simd_deterministic() || !environ.is_x86() {
                    // Deterministic semantics are to match the
                    // `i32x4.trunc_sat_f32x4_s` instruction.
                    builder.ins().fcvt_to_sint_sat(I32X4, a)
                } else {
                    builder.ins().x86_cvtt2dq(I32X4, a)
                },
            )
        }
        Operator::I32x4RelaxedTruncF64x2SZero => {
            let a = pop1_with_bitcast(environ, F64X2, builder);
            let converted_a = if environ.relaxed_simd_deterministic() || !environ.is_x86() {
                // Deterministic semantics are to match the
                // `i32x4.trunc_sat_f64x2_s_zero` instruction.
                builder.ins().fcvt_to_sint_sat(I64X2, a)
            } else {
                builder.ins().x86_cvtt2dq(I64X2, a)
            };
            let handle = builder.func.dfg.constants.insert(vec![0u8; 16].into());
            let zero = builder.ins().vconst(I64X2, handle);

            environ
                .stacks
                .push1(builder.ins().snarrow(converted_a, zero));
        }
        Operator::I16x8RelaxedQ15mulrS => {
            let (a, b) = pop2_with_bitcast(environ, I16X8, builder);
            environ.stacks.push1(
                if environ.relaxed_simd_deterministic()
                    || !environ.use_x86_pmulhrsw_for_relaxed_q15mul()
                {
                    // Deterministic semantics are to match the
                    // `i16x8.q15mulr_sat_s` instruction.
                    builder.ins().sqmul_round_sat(a, b)
                } else {
                    builder.ins().x86_pmulhrsw(a, b)
                },
            );
        }
        Operator::I16x8RelaxedDotI8x16I7x16S => {
            let (a, b) = pop2_with_bitcast(environ, I8X16, builder);
            environ.stacks.push1(
                if environ.relaxed_simd_deterministic() || !environ.use_x86_pmaddubsw_for_dot() {
                    // Deterministic semantics are to treat both operands as
                    // signed integers and perform the dot product.
                    let alo = builder.ins().swiden_low(a);
                    let blo = builder.ins().swiden_low(b);
                    let lo = builder.ins().imul(alo, blo);
                    let ahi = builder.ins().swiden_high(a);
                    let bhi = builder.ins().swiden_high(b);
                    let hi = builder.ins().imul(ahi, bhi);
                    builder.ins().iadd_pairwise(lo, hi)
                } else {
                    builder.ins().x86_pmaddubsw(a, b)
                },
            );
        }

        Operator::I32x4RelaxedDotI8x16I7x16AddS => {
            let c = pop1_with_bitcast(environ, I32X4, builder);
            let (a, b) = pop2_with_bitcast(environ, I8X16, builder);
            let dot =
                if environ.relaxed_simd_deterministic() || !environ.use_x86_pmaddubsw_for_dot() {
                    // Deterministic semantics are to treat both operands as
                    // signed integers and perform the dot product.
                    let alo = builder.ins().swiden_low(a);
                    let blo = builder.ins().swiden_low(b);
                    let lo = builder.ins().imul(alo, blo);
                    let ahi = builder.ins().swiden_high(a);
                    let bhi = builder.ins().swiden_high(b);
                    let hi = builder.ins().imul(ahi, bhi);
                    builder.ins().iadd_pairwise(lo, hi)
                } else {
                    builder.ins().x86_pmaddubsw(a, b)
                };
            let dotlo = builder.ins().swiden_low(dot);
            let dothi = builder.ins().swiden_high(dot);
            let dot32 = builder.ins().iadd_pairwise(dotlo, dothi);
            environ.stacks.push1(builder.ins().iadd(dot32, c));
        }

        Operator::BrOnNull { relative_depth } => {
            let r = environ.stacks.pop1();
            let &[.., WasmValType::Ref(r_ty)] = operand_types else {
                unreachable!("validation")
            };
            let is_null = environ.translate_ref_is_null(builder.cursor(), r, r_ty)?;
            let (br_destination, inputs) = translate_br_if_args(*relative_depth, environ);
            let else_block = builder.create_block();
            canonicalise_brif(builder, is_null, br_destination, inputs, else_block, &[]);

            builder.seal_block(else_block); // The only predecessor is the current block.
            builder.switch_to_block(else_block);
            environ.stacks.push1(r);
        }
        Operator::BrOnNonNull { relative_depth } => {
            // We write this a bit differently from the spec to avoid an extra
            // block/branch and the typed accounting thereof. Instead of the
            // spec's formulation, the behavior implemented here is:
            // Peek the value val from the stack.
            // If val is ref.null ht, then: pop the value val from the stack.
            // Else: Execute the instruction (br relative_depth).
            let r = environ.stacks.peek1();
            let [.., WasmValType::Ref(r_ty)] = operand_types else {
                unreachable!("validation")
            };
            let r_ty = *r_ty;
            let (br_destination, inputs) = translate_br_if_args(*relative_depth, environ);
            let inputs = inputs.to_vec();
            let is_null = environ.translate_ref_is_null(builder.cursor(), r, r_ty)?;
            let else_block = builder.create_block();
            canonicalise_brif(builder, is_null, else_block, &[], br_destination, &inputs);

            // In the null case, pop the ref
            environ.stacks.pop1();

            builder.seal_block(else_block); // The only predecessor is the current block.

            // The rest of the translation operates on the is-null case, which is
            // currently an empty block.
            builder.switch_to_block(else_block);
        }
        Operator::CallRef { type_index } => {
            // Get the function signature: `type_index` is the index of the callee
            // function's type; the callee itself is a funcref popped from the stack.
            let type_index = TypeIndex::from_u32(*type_index);
            let sigref = environ.get_or_create_sig_ref(builder.func, type_index);
            let num_args = environ.num_params_for_function_type(type_index);
            let callee = environ.stacks.pop1();

            // Bitcast any vector arguments to their default type, I8X16, before calling.
            let mut args = environ.stacks.peekn(num_args).to_vec();
            bitcast_wasm_params(environ, sigref, &mut args, builder);

            let inst_results =
                environ.translate_call_ref(builder, srcloc, sigref, callee, &args)?;

            debug_assert_eq!(
                inst_results.len(),
                builder.func.dfg.signatures[sigref].returns.len(),
                "translate_call_ref results should match the call signature"
            );
            environ.stacks.popn(num_args);
            environ.stacks.pushn(&inst_results);
        }
        Operator::RefAsNonNull => {
            let r = environ.stacks.pop1();
            let [.., WasmValType::Ref(r_ty)] = operand_types else {
                unreachable!("validation")
            };
            let is_null = environ.translate_ref_is_null(builder.cursor(), r, *r_ty)?;
            environ.trapnz(builder, is_null, crate::TRAP_NULL_REFERENCE);
            environ.stacks.push1(r);
        }

        Operator::RefI31 => {
            let val = environ.stacks.pop1();
            let i31ref = environ.translate_ref_i31(builder.cursor(), val)?;
            environ.stacks.push1(i31ref);
        }
        Operator::I31GetS => {
            let i31ref = environ.stacks.pop1();
            let val = environ.translate_i31_get_s(builder, i31ref)?;
            environ.stacks.push1(val);
        }
        Operator::I31GetU => {
            let i31ref = environ.stacks.pop1();
            let val = environ.translate_i31_get_u(builder, i31ref)?;
            environ.stacks.push1(val);
        }

        Operator::StructNew { struct_type_index } => {
            let struct_type_index = TypeIndex::from_u32(*struct_type_index);
            let arity = environ.struct_fields_len(struct_type_index)?;
            let fields: StructFieldsVec = environ.stacks.peekn(arity).iter().copied().collect();
            environ.stacks.popn(arity);
            let struct_ref = environ.translate_struct_new(builder, struct_type_index, fields)?;
            environ.stacks.push1(struct_ref);
        }

        Operator::StructNewDefault { struct_type_index } => {
            let struct_type_index = TypeIndex::from_u32(*struct_type_index);
            let struct_ref = environ.translate_struct_new_default(builder, struct_type_index)?;
            environ.stacks.push1(struct_ref);
        }

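        // GC struct field accesses are delegated to the environment; the `_s`/`_u`
        // variants pass an explicit `Extension` to select sign- or zero-extension of
        // packed fields.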
        Operator::StructSet {
            struct_type_index,
            field_index,
        } => {
            let struct_type_index = TypeIndex::from_u32(*struct_type_index);
            let val = environ.stacks.pop1();
            let struct_ref = environ.stacks.pop1();
            environ.translate_struct_set(
                builder,
                struct_type_index,
                *field_index,
                struct_ref,
                val,
            )?;
        }

        Operator::StructGetS {
            struct_type_index,
            field_index,
        } => {
            let struct_type_index = TypeIndex::from_u32(*struct_type_index);
            let struct_ref = environ.stacks.pop1();
            let val = environ.translate_struct_get(
                builder,
                struct_type_index,
                *field_index,
                struct_ref,
                Some(Extension::Sign),
            )?;
            environ.stacks.push1(val);
        }

        Operator::StructGetU {
            struct_type_index,
            field_index,
        } => {
            let struct_type_index = TypeIndex::from_u32(*struct_type_index);
            let struct_ref = environ.stacks.pop1();
            let val = environ.translate_struct_get(
                builder,
                struct_type_index,
                *field_index,
                struct_ref,
                Some(Extension::Zero),
            )?;
            environ.stacks.push1(val);
        }

        Operator::StructGet {
            struct_type_index,
            field_index,
        } => {
            let struct_type_index = TypeIndex::from_u32(*struct_type_index);
            let struct_ref = environ.stacks.pop1();
            let val = environ.translate_struct_get(
                builder,
                struct_type_index,
                *field_index,
                struct_ref,
                None,
            )?;
            environ.stacks.push1(val);
        }

        Operator::ArrayNew { array_type_index } => {
            let array_type_index = TypeIndex::from_u32(*array_type_index);
            let (elem, len) = environ.stacks.pop2();
            let array_ref = environ.translate_array_new(builder, array_type_index, elem, len)?;
            environ.stacks.push1(array_ref);
        }
        Operator::ArrayNewDefault { array_type_index } => {
            let array_type_index = TypeIndex::from_u32(*array_type_index);
            let len = environ.stacks.pop1();
            let array_ref = environ.translate_array_new_default(builder, array_type_index, len)?;
            environ.stacks.push1(array_ref);
        }
        Operator::ArrayNewFixed {
            array_type_index,
            array_size,
        } => {
            let array_type_index = TypeIndex::from_u32(*array_type_index);
            let array_size = usize::try_from(*array_size).unwrap();
            let elems = environ.stacks.peekn(array_size).to_vec();
            let array_ref = environ.translate_array_new_fixed(builder, array_type_index, &elems)?;
            environ.stacks.popn(array_size);
            environ.stacks.push1(array_ref);
        }
        Operator::ArrayNewData {
            array_type_index,
            array_data_index,
        } => {
            let array_type_index = TypeIndex::from_u32(*array_type_index);
            let array_data_index = DataIndex::from_u32(*array_data_index);
            let (data_offset, len) = environ.stacks.pop2();
            let array_ref = environ.translate_array_new_data(
                builder,
                array_type_index,
                array_data_index,
                data_offset,
                len,
            )?;
            environ.stacks.push1(array_ref);
        }
        Operator::ArrayNewElem {
            array_type_index,
            array_elem_index,
        } => {
            let array_type_index = TypeIndex::from_u32(*array_type_index);
            let array_elem_index = ElemIndex::from_u32(*array_elem_index);
            let (elem_offset, len) = environ.stacks.pop2();
            let array_ref = environ.translate_array_new_elem(
                builder,
                array_type_index,
                array_elem_index,
                elem_offset,
                len,
            )?;
            environ.stacks.push1(array_ref);
        }
        Operator::ArrayCopy {
            array_type_index_dst,
            array_type_index_src,
        } => {
            let array_type_index_dst = TypeIndex::from_u32(*array_type_index_dst);
            let array_type_index_src = TypeIndex::from_u32(*array_type_index_src);
            let (dst_array, dst_index, src_array, src_index, len) = environ.stacks.pop5();
            environ.translate_array_copy(
                builder,
                array_type_index_dst,
                dst_array,
                dst_index,
                array_type_index_src,
                src_array,
                src_index,
                len,
            )?;
        }
        Operator::ArrayFill { array_type_index } => {
            let array_type_index = TypeIndex::from_u32(*array_type_index);
            let (array, index, val, len) = environ.stacks.pop4();
            environ.translate_array_fill(builder, array_type_index, array, index, val, len)?;
        }
        Operator::ArrayInitData {
            array_type_index,
            array_data_index,
        } => {
            let array_type_index = TypeIndex::from_u32(*array_type_index);
            let array_data_index = DataIndex::from_u32(*array_data_index);
            let (array, dst_index, src_index, len) = environ.stacks.pop4();
            environ.translate_array_init_data(
                builder,
                array_type_index,
                array,
                dst_index,
                array_data_index,
                src_index,
                len,
            )?;
        }
        Operator::ArrayInitElem {
            array_type_index,
            array_elem_index,
        } => {
            let array_type_index = TypeIndex::from_u32(*array_type_index);
            let array_elem_index = ElemIndex::from_u32(*array_elem_index);
            let (array, dst_index, src_index, len) = environ.stacks.pop4();
            environ.translate_array_init_elem(
                builder,
                array_type_index,
                array,
                dst_index,
                array_elem_index,
                src_index,
                len,
            )?;
        }
        Operator::ArrayLen => {
            let array = environ.stacks.pop1();
            let len = environ.translate_array_len(builder, array)?;
            environ.stacks.push1(len);
        }
        Operator::ArrayGet { array_type_index } => {
            let array_type_index = TypeIndex::from_u32(*array_type_index);
            let (array, index) = environ.stacks.pop2();
            let elem =
                environ.translate_array_get(builder, array_type_index, array, index, None)?;
            environ.stacks.push1(elem);
        }
        Operator::ArrayGetS { array_type_index } => {
            let array_type_index = TypeIndex::from_u32(*array_type_index);
            let (array, index) = environ.stacks.pop2();
            let elem = environ.translate_array_get(
                builder,
                array_type_index,
                array,
                index,
                Some(Extension::Sign),
            )?;
            environ.stacks.push1(elem);
        }
        Operator::ArrayGetU { array_type_index } => {
            let array_type_index = TypeIndex::from_u32(*array_type_index);
            let (array, index) = environ.stacks.pop2();
            let elem = environ.translate_array_get(
                builder,
                array_type_index,
                array,
                index,
                Some(Extension::Zero),
            )?;
            environ.stacks.push1(elem);
        }
        Operator::ArraySet { array_type_index } => {
            let array_type_index = TypeIndex::from_u32(*array_type_index);
            let (array, index, elem) = environ.stacks.pop3();
            environ.translate_array_set(builder, array_type_index, array, index, elem)?;
        }
        Operator::RefEq => {
            let (r1, r2) = environ.stacks.pop2();
            let eq = builder.ins().icmp(ir::condcodes::IntCC::Equal, r1, r2);
            let eq = builder.ins().uextend(ir::types::I32, eq);
            environ.stacks.push1(eq);
        }
        Operator::RefTestNonNull { hty } => {
            let r = environ.stacks.pop1();
            let [.., WasmValType::Ref(r_ty)] = operand_types else {
                unreachable!("validation")
            };
            let heap_type = environ.convert_heap_type(*hty)?;
            let result = environ.translate_ref_test(
                builder,
                WasmRefType {
                    heap_type,
                    nullable: false,
                },
                r,
                *r_ty,
            )?;
            environ.stacks.push1(result);
        }
        Operator::RefTestNullable { hty } => {
            let r = environ.stacks.pop1();
            let [.., WasmValType::Ref(r_ty)] = operand_types else {
                unreachable!("validation")
            };
            let heap_type = environ.convert_heap_type(*hty)?;
            let result = environ.translate_ref_test(
                builder,
                WasmRefType {
                    heap_type,
                    nullable: true,
                },
                r,
                *r_ty,
            )?;
            environ.stacks.push1(result);
        }
        Operator::RefCastNonNull { hty } => {
            let r = environ.stacks.pop1();
            let [.., WasmValType::Ref(r_ty)] = operand_types else {
                unreachable!("validation")
            };
            let heap_type = environ.convert_heap_type(*hty)?;
            let cast_okay = environ.translate_ref_test(
                builder,
                WasmRefType {
                    heap_type,
                    nullable: false,
                },
                r,
                *r_ty,
            )?;
            environ.trapz(builder, cast_okay, crate::TRAP_CAST_FAILURE);
            environ.stacks.push1(r);
        }
        Operator::RefCastNullable { hty } => {
            let r = environ.stacks.pop1();
            let [.., WasmValType::Ref(r_ty)] = operand_types else {
                unreachable!("validation")
            };
            let heap_type = environ.convert_heap_type(*hty)?;
            let cast_okay = environ.translate_ref_test(
                builder,
                WasmRefType {
                    heap_type,
                    nullable: true,
                },
                r,
                *r_ty,
            )?;
            environ.trapz(builder, cast_okay, crate::TRAP_CAST_FAILURE);
            environ.stacks.push1(r);
        }
        Operator::BrOnCast {
            relative_depth,
            to_ref_type,
            from_ref_type: _,
        } => {
            let r = environ.stacks.peek1();
            let [.., WasmValType::Ref(r_ty)] = operand_types else {
                unreachable!("validation")
            };

            let to_ref_type = environ.convert_ref_type(*to_ref_type)?;
            let cast_is_okay = environ.translate_ref_test(builder, to_ref_type, r, *r_ty)?;

            let (cast_succeeds_block, inputs) = translate_br_if_args(*relative_depth, environ);
            let cast_fails_block = builder.create_block();
            canonicalise_brif(
                builder,
                cast_is_okay,
                cast_succeeds_block,
                inputs,
                cast_fails_block,
                &[
                    // NB: the `cast_fails_block` is dominated by the current
                    // block, and therefore doesn't need any block params.
                ],
            );

            // The only predecessor is the current block.
            builder.seal_block(cast_fails_block);

            // The next Wasm instruction is executed when the cast failed and we
            // did not branch away.
            builder.switch_to_block(cast_fails_block);
        }
        Operator::BrOnCastFail {
            relative_depth,
            to_ref_type,
            from_ref_type: _,
        } => {
            let r = environ.stacks.peek1();
            let [.., WasmValType::Ref(r_ty)] = operand_types else {
                unreachable!("validation")
            };

            let to_ref_type = environ.convert_ref_type(*to_ref_type)?;
            let cast_is_okay = environ.translate_ref_test(builder, to_ref_type, r, *r_ty)?;

            let (cast_fails_block, inputs) = translate_br_if_args(*relative_depth, environ);
            let cast_succeeds_block = builder.create_block();
            canonicalise_brif(
                builder,
                cast_is_okay,
                cast_succeeds_block,
                &[
                    // NB: the `cast_succeeds_block` is dominated by the current
                    // block, and therefore doesn't need any block params.
                ],
                cast_fails_block,
                inputs,
            );

            // The only predecessor is the current block.
            builder.seal_block(cast_succeeds_block);

            // The next Wasm instruction is executed when the cast succeeded and
            // we did not branch away.
            builder.switch_to_block(cast_succeeds_block);
        }

        Operator::AnyConvertExtern => {
            // Pop an `externref`, push an `anyref`. But they have the same
            // representation, so we don't actually need to do anything.
        }
        Operator::ExternConvertAny => {
            // Pop an `anyref`, push an `externref`. But they have the same
            // representation, so we don't actually need to do anything.
        }

        Operator::ContNew { cont_type_index } => {
            let cont_type_index = TypeIndex::from_u32(*cont_type_index);
            let arg_types: SmallVec<[_; 8]> = environ
                .continuation_arguments(cont_type_index)
                .to_smallvec();
            let result_types: SmallVec<[_; 8]> =
                environ.continuation_returns(cont_type_index).to_smallvec();
            let r = environ.stacks.pop1();
            let contobj = environ.translate_cont_new(builder, r, &arg_types, &result_types)?;
            environ.stacks.push1(contobj);
        }
        Operator::ContBind {
            argument_index,
            result_index,
        } => {
            let src_types = environ.continuation_arguments(TypeIndex::from_u32(*argument_index));
            let dst_arity = environ
                .continuation_arguments(TypeIndex::from_u32(*result_index))
                .len();
            let arg_count = src_types.len() - dst_arity;

            let arg_types = &src_types[0..arg_count];
            for arg_type in arg_types {
                // We can't bind GC objects using cont.bind at the moment: We
                // don't have the necessary infrastructure to traverse the
                // buffers used by cont.bind when looking for GC roots. Thus,
                // this crude check ensures that these buffers can never contain
                // GC roots to begin with.
                if arg_type.is_vmgcref_type_and_not_i31() {
                    return Err(wasmtime_environ::WasmError::Unsupported(
                        "cont.bind does not support GC types at the moment".into(),
                    ));
                }
            }

            let (original_contobj, args) =
                environ.stacks.peekn(arg_count + 1).split_last().unwrap();
            let original_contobj = *original_contobj;
            let args = args.to_vec();

            let new_contobj = environ.translate_cont_bind(builder, original_contobj, &args);

            environ.stacks.popn(arg_count + 1);
            environ.stacks.push1(new_contobj);
        }
        Operator::Suspend { tag_index } => {
            let tag_index = TagIndex::from_u32(*tag_index);
            let param_types = environ.tag_params(tag_index).to_vec();
            let return_types: SmallVec<[_; 8]> = environ
                .tag_returns(tag_index)
                .iter()
                .map(|ty| crate::value_type(environ.isa(), *ty))
                .collect();

            let params = environ.stacks.peekn(param_types.len()).to_vec();
            let param_count = params.len();

            let return_values =
                environ.translate_suspend(builder, tag_index.as_u32(), &params, &return_types);

            environ.stacks.popn(param_count);
            environ.stacks.pushn(&return_values);
        }
        Operator::Resume {
            cont_type_index,
            resume_table: wasm_resume_table,
        } => {
            // We translate the block indices in the wasm resume_table to actual Blocks.
            let mut clif_resume_table = vec![];
            for handle in &wasm_resume_table.handlers {
                match handle {
                    wasmparser::Handle::OnLabel { tag, label } => {
                        let i = environ.stacks.control_stack.len() - 1 - (*label as usize);
                        let frame = &mut environ.stacks.control_stack[i];
                        // This is side-effecting!
                        frame.set_branched_to_exit();
                        clif_resume_table.push((*tag, Some(frame.br_destination())));
                    }
                    wasmparser::Handle::OnSwitch { tag } => {
                        clif_resume_table.push((*tag, None));
                    }
                }
            }

            let cont_type_index = TypeIndex::from_u32(*cont_type_index);
            let arity = environ.continuation_arguments(cont_type_index).len();
            let (contobj, call_args) = environ.stacks.peekn(arity + 1).split_last().unwrap();
            let contobj = *contobj;
            let call_args = call_args.to_vec();

            let cont_return_vals = environ.translate_resume(
                builder,
                cont_type_index.as_u32(),
                contobj,
                &call_args,
                &clif_resume_table,
            )?;

arguments + continuation3154environ.stacks.pushn(&cont_return_vals);3155}3156Operator::ResumeThrow {3157cont_type_index: _,3158tag_index: _,3159resume_table: _,3160} => {3161// TODO(10248) This depends on exception handling3162return Err(wasmtime_environ::WasmError::Unsupported(3163"resume.throw instructions not supported, yet".to_string(),3164));3165}3166Operator::Switch {3167cont_type_index,3168tag_index,3169} => {3170// Arguments of the continuation we are going to switch to3171let continuation_argument_types: SmallVec<[_; 8]> = environ3172.continuation_arguments(TypeIndex::from_u32(*cont_type_index))3173.to_smallvec();3174// Arity includes the continuation argument3175let arity = continuation_argument_types.len();3176let (contobj, switch_args) = environ.stacks.peekn(arity).split_last().unwrap();3177let contobj = *contobj;3178let switch_args = switch_args.to_vec();31793180// Type of the continuation we are going to create by suspending the3181// currently running stack3182let current_continuation_type = continuation_argument_types.last().unwrap();3183let current_continuation_type = current_continuation_type.unwrap_ref_type();31843185// Argument types of current_continuation_type. These will in turn3186// be the types of the arguments we receive when someone switches3187// back to this switch instruction3188let current_continuation_arg_types: SmallVec<[_; 8]> =3189match current_continuation_type.heap_type {3190WasmHeapType::ConcreteCont(index) => {3191let mti = index3192.as_module_type_index()3193.expect("Only supporting module type indices on switch for now");31943195environ3196.continuation_arguments(TypeIndex::from_u32(mti.as_u32()))3197.iter()3198.map(|ty| crate::value_type(environ.isa(), *ty))3199.collect()3200}3201_ => panic!("Invalid type on switch"),3202};32033204let switch_return_values = environ.translate_switch(3205builder,3206*tag_index,3207contobj,3208&switch_args,3209¤t_continuation_arg_types,3210)?;32113212environ.stacks.popn(arity);3213environ.stacks.pushn(&switch_return_values)3214}32153216Operator::GlobalAtomicGet { .. }3217| Operator::GlobalAtomicSet { .. }3218| Operator::GlobalAtomicRmwAdd { .. }3219| Operator::GlobalAtomicRmwSub { .. }3220| Operator::GlobalAtomicRmwOr { .. }3221| Operator::GlobalAtomicRmwXor { .. }3222| Operator::GlobalAtomicRmwAnd { .. }3223| Operator::GlobalAtomicRmwXchg { .. }3224| Operator::GlobalAtomicRmwCmpxchg { .. }3225| Operator::TableAtomicGet { .. }3226| Operator::TableAtomicSet { .. }3227| Operator::TableAtomicRmwXchg { .. }3228| Operator::TableAtomicRmwCmpxchg { .. }3229| Operator::StructAtomicGet { .. }3230| Operator::StructAtomicGetS { .. }3231| Operator::StructAtomicGetU { .. }3232| Operator::StructAtomicSet { .. }3233| Operator::StructAtomicRmwAdd { .. }3234| Operator::StructAtomicRmwSub { .. }3235| Operator::StructAtomicRmwOr { .. }3236| Operator::StructAtomicRmwXor { .. }3237| Operator::StructAtomicRmwAnd { .. }3238| Operator::StructAtomicRmwXchg { .. }3239| Operator::StructAtomicRmwCmpxchg { .. }3240| Operator::ArrayAtomicGet { .. }3241| Operator::ArrayAtomicGetS { .. }3242| Operator::ArrayAtomicGetU { .. }3243| Operator::ArrayAtomicSet { .. }3244| Operator::ArrayAtomicRmwAdd { .. }3245| Operator::ArrayAtomicRmwSub { .. }3246| Operator::ArrayAtomicRmwOr { .. }3247| Operator::ArrayAtomicRmwXor { .. }3248| Operator::ArrayAtomicRmwAnd { .. }3249| Operator::ArrayAtomicRmwXchg { .. }3250| Operator::ArrayAtomicRmwCmpxchg { .. }3251| Operator::RefI31Shared { .. 
} => {3252return Err(wasm_unsupported!(3253"shared-everything-threads operators are not yet implemented"3254));3255}32563257Operator::I64MulWideS => {3258let (arg1, arg2) = environ.stacks.pop2();3259let arg1 = builder.ins().sextend(I128, arg1);3260let arg2 = builder.ins().sextend(I128, arg2);3261let result = builder.ins().imul(arg1, arg2);3262let (lo, hi) = builder.ins().isplit(result);3263environ.stacks.push2(lo, hi);3264}3265Operator::I64MulWideU => {3266let (arg1, arg2) = environ.stacks.pop2();3267let arg1 = builder.ins().uextend(I128, arg1);3268let arg2 = builder.ins().uextend(I128, arg2);3269let result = builder.ins().imul(arg1, arg2);3270let (lo, hi) = builder.ins().isplit(result);3271environ.stacks.push2(lo, hi);3272}3273Operator::I64Add128 => {3274let (arg1, arg2, arg3, arg4) = environ.stacks.pop4();3275let arg1 = builder.ins().iconcat(arg1, arg2);3276let arg2 = builder.ins().iconcat(arg3, arg4);3277let result = builder.ins().iadd(arg1, arg2);3278let (res1, res2) = builder.ins().isplit(result);3279environ.stacks.push2(res1, res2);3280}3281Operator::I64Sub128 => {3282let (arg1, arg2, arg3, arg4) = environ.stacks.pop4();3283let arg1 = builder.ins().iconcat(arg1, arg2);3284let arg2 = builder.ins().iconcat(arg3, arg4);3285let result = builder.ins().isub(arg1, arg2);3286let (res1, res2) = builder.ins().isplit(result);3287environ.stacks.push2(res1, res2);3288}32893290// catch-all as `Operator` is `#[non_exhaustive]`3291op => return Err(wasm_unsupported!("operator {op:?}")),3292};3293Ok(())3294}32953296/// Deals with a Wasm instruction located in an unreachable portion of the code. Most of them3297/// are dropped but special ones like `End` or `Else` signal the potential end of the unreachable3298/// portion so the translation state must be updated accordingly.3299fn translate_unreachable_operator(3300validator: &FuncValidator<impl WasmModuleResources>,3301op: &Operator,3302builder: &mut FunctionBuilder,3303environ: &mut FuncEnvironment<'_>,3304) -> WasmResult<()> {3305debug_assert!(!environ.is_reachable());3306match *op {3307Operator::If { blockty } => {3308// Push a placeholder control stack entry. 
The if isn't reachable,3309// so we don't have any branches anywhere.3310environ.stacks.push_if(3311ir::Block::reserved_value(),3312ElseData::NoElse {3313branch_inst: ir::Inst::reserved_value(),3314placeholder: ir::Block::reserved_value(),3315},33160,33170,3318blockty,3319);3320}3321Operator::Loop { blockty: _ }3322| Operator::Block { blockty: _ }3323| Operator::TryTable { try_table: _ } => {3324environ.stacks.push_block(ir::Block::reserved_value(), 0, 0);3325}3326Operator::Else => {3327let i = environ.stacks.control_stack.len() - 1;3328let reachable = environ.is_reachable();3329match environ.stacks.control_stack[i] {3330ControlStackFrame::If {3331ref else_data,3332head_is_reachable,3333ref mut consequent_ends_reachable,3334blocktype,3335..3336} => {3337debug_assert!(consequent_ends_reachable.is_none());3338*consequent_ends_reachable = Some(reachable);33393340if head_is_reachable {3341// We have a branch from the head of the `if` to the `else`.3342environ.stacks.reachable = true;33433344let else_block = match *else_data {3345ElseData::NoElse {3346branch_inst,3347placeholder,3348} => {3349let (params, _results) =3350blocktype_params_results(validator, blocktype)?;3351let else_block = block_with_params(builder, params, environ)?;3352let frame = environ.stacks.control_stack.last().unwrap();3353frame.truncate_value_stack_to_else_params(3354&mut environ.stacks.stack,3355&mut environ.stacks.stack_shape,3356);33573358// We change the target of the branch instruction.3359builder.change_jump_destination(3360branch_inst,3361placeholder,3362else_block,3363);3364builder.seal_block(else_block);3365else_block3366}3367ElseData::WithElse { else_block } => {3368let frame = environ.stacks.control_stack.last().unwrap();3369frame.truncate_value_stack_to_else_params(3370&mut environ.stacks.stack,3371&mut environ.stacks.stack_shape,3372);3373else_block3374}3375};33763377builder.switch_to_block(else_block);33783379// Again, no need to push the parameters for the `else`,3380// since we already did when we saw the original `if`. See3381// the comment for translating `Operator::Else` in3382// `translate_operator` for details.3383}3384}3385_ => unreachable!(),3386}3387}3388Operator::End => {3389let value_stack = &mut environ.stacks.stack;3390let stack_shape = &mut environ.stacks.stack_shape;3391let control_stack = &mut environ.stacks.control_stack;3392let frame = control_stack.pop().unwrap();33933394frame.restore_catch_handlers(&mut environ.stacks.handlers, builder);33953396// Pop unused parameters from stack.3397frame.truncate_value_stack_to_original_size(value_stack, stack_shape);33983399let reachable_anyway = match frame {3400// If it is a loop we also have to seal the body loop block3401ControlStackFrame::Loop { header, .. } => {3402builder.seal_block(header);3403// And loops can't have branches to the end.3404false3405}3406// If we never set `consequent_ends_reachable` then that means3407// we are finishing the consequent now, and there was no3408// `else`. Whether the following block is reachable depends only3409// on if the head was reachable.3410ControlStackFrame::If {3411head_is_reachable,3412consequent_ends_reachable: None,3413..3414} => head_is_reachable,3415// Since we are only in this function when in unreachable code,3416// we know that the alternative just ended unreachable. 
                // Whether the following block is reachable depends on if the
                // consequent ended reachable or not.
                ControlStackFrame::If {
                    head_is_reachable,
                    consequent_ends_reachable: Some(consequent_ends_reachable),
                    ..
                } => head_is_reachable && consequent_ends_reachable,
                // All other control constructs are already handled.
                _ => false,
            };

            if frame.exit_is_branched_to() || reachable_anyway {
                builder.switch_to_block(frame.following_code());
                builder.seal_block(frame.following_code());

                // And add the return values of the block, but only if the next
                // block is reachable (which corresponds to testing if the
                // stack depth is 1).
                value_stack.extend_from_slice(builder.block_params(frame.following_code()));
                environ.stacks.reachable = true;
            }
        }
        _ => {
            // We don't translate because this is unreachable code
        }
    }

    Ok(())
}

/// This function is a generalized helper for validating that a wasm-supplied
/// heap address is in-bounds.
///
/// This function takes a litany of parameters and requires that the *Wasm*
/// address to be verified is on top of the value stack in `environ`. This will
/// generate the necessary IR to validate that the heap address is correctly
/// in-bounds, and various parameters are returned describing the valid *native*
/// heap address if execution reaches that point.
///
/// Returns `Reachability::Unreachable` when the Wasm access will
/// unconditionally trap.
///
/// Otherwise returns `(flags, wasm_addr, native_addr)`.
fn prepare_addr(
    memarg: &MemArg,
    access_size: u8,
    builder: &mut FunctionBuilder,
    environ: &mut FuncEnvironment<'_>,
) -> WasmResult<Reachability<(MemFlags, Value, Value)>> {
    let index = environ.stacks.pop1();

    let memory_index = MemoryIndex::from_u32(memarg.memory);
    let heap = environ.get_or_create_heap(builder.func, memory_index);

    // How exactly the bounds check is performed here and what it's performed
    // on is a bit tricky. Generally we want to rely on access violations (e.g.
    // segfaults) to generate traps since that means we don't have to bounds
    // check anything explicitly.
    //
    // (1) If we don't have a guard page of unmapped memory, though, then we
    // can't rely on this trapping behavior through segfaults. Instead we need
    // to bounds-check the entire memory access here which is everything from
    // `addr32 + offset` to `addr32 + offset + width` (not inclusive). In this
    // scenario our adjusted offset that we're checking is `memarg.offset +
    // access_size`. Note that we do saturating arithmetic here to avoid
    // overflow. The addition here is in the 64-bit space, which means that
    // we'll never overflow for 32-bit wasm but for 64-bit this is an issue. If
    // our effective offset is u64::MAX though then it's impossible for
    // that to actually be a valid offset because otherwise the wasm linear
    // memory would take all of the host memory!
    //
    // (2) If we have a guard page, however, then we can perform a further
    // optimization of the generated code by only checking multiples of the
    // offset-guard size to be more CSE-friendly. Knowing that we have at least
    // 1 page of a guard page we're then able to disregard the `width` since we
    // know it's always less than one page. Our bounds check will be for the
    // first byte which will either succeed and be guaranteed to fault if it's
    // actually out of bounds, or the bounds check itself will fail. In any case
    // we assert that the width is reasonably small for now so this assumption
    // can be adjusted in the future if we get larger widths.
    //
    // Put another way we can say, where `y < offset_guard_size`:
    //
    //      n * offset_guard_size + y = offset
    //
    // We'll then pass `n * offset_guard_size` as the bounds check value. If
    // this traps then our `offset` would have trapped anyway. If this check
    // passes we know
    //
    //      addr32 + n * offset_guard_size < bound
    //
    // which means
    //
    //      addr32 + n * offset_guard_size + y < bound + offset_guard_size
    //
    // because `y < offset_guard_size`, which then means:
    //
    //      addr32 + offset < bound + offset_guard_size
    //
    // Since we know that the guard-size bytes are all unmapped we're
    // guaranteed that `offset` and the `width` bytes after it are either
    // in-bounds or will hit the guard page, meaning we'll get the semantics
    // we want.
    //
    // ---
    //
    // With all that in mind remember that the goal is to bounds check as few
    // things as possible. To facilitate this the "fast path" is expected to be
    // hit like so:
    //
    // * For wasm32, wasmtime defaults to 4gb "static" memories with 2gb guard
    //   regions. This means that for all offsets <=2gb, we hit the optimized
    //   case for `heap_addr` on static memories 4gb in size in cranelift's
    //   legalization of `heap_addr`, eliding the bounds check entirely.
    //
    // * For wasm64 offsets <=2gb will generate a single `heap_addr`
    //   instruction, but at this time all heaps are "dynamic" which means that
    //   a single bounds check is forced. Ideally we'd do better here, but
    //   that's the current state of affairs.
    //
    // Basically we assume that most configurations have a guard page and most
    // offsets in `memarg` are <=2gb, which means we get the fast path of one
    // `heap_addr` instruction plus a hardcoded i32-offset in memory-related
    // instructions.
    let heap = environ.heaps()[heap].clone();
    let addr = match u32::try_from(memarg.offset) {
        // If our offset fits within a u32, then we can place it into the
        // offset immediate of the `heap_addr` instruction.
        Ok(offset) => bounds_check_and_compute_addr(
            builder,
            environ,
            &heap,
            index,
            BoundsCheck::StaticOffset {
                offset,
                access_size,
            },
            ir::TrapCode::HEAP_OUT_OF_BOUNDS,
        ),

        // If the offset doesn't fit within a u32, then we can't pass it
        // directly into `heap_addr`.
        //
        // One reasonable question you might ask is "why not?". There's no
        // fundamental reason why `heap_addr` *must* take a 32-bit offset. The
        // reason this isn't done, though, is that blindly changing the offset
        // to a 64-bit offset increases the size of the `InstructionData` enum
        // in cranelift by 8 bytes (16 to 24). This can have significant
        // performance implications so the conclusion when this was written was
        // that we shouldn't do that.
        //
        // Without the ability to put the whole offset into the `heap_addr`
        // instruction we need to fold the offset into the address itself with
        // an unsigned addition. In doing so though we need to check for
        // overflow because that would mean the address is out-of-bounds (wasm
        // bounds checks happen on the effective 33 or 65 bit address once the
        // offset is factored in).
        //
        // Once we have the effective address, offset already folded in, then
        // `heap_addr` is used to verify that the address is indeed in-bounds.
        //
        // Note that this is generating what's likely to be at least two
        // branches, one for the overflow and one for the bounds check itself.
        // For now though that should hopefully be ok since 4gb+ offsets are
        // relatively odd/rare. In the future if needed we can look into
        // optimizing this more.
        Err(_) => {
            let offset = builder
                .ins()
                .iconst(heap.index_type(), memarg.offset.cast_signed());
            let adjusted_index = environ.uadd_overflow_trap(
                builder,
                index,
                offset,
                ir::TrapCode::HEAP_OUT_OF_BOUNDS,
            );
            bounds_check_and_compute_addr(
                builder,
                environ,
                &heap,
                adjusted_index,
                BoundsCheck::StaticOffset {
                    offset: 0,
                    access_size,
                },
                ir::TrapCode::HEAP_OUT_OF_BOUNDS,
            )
        }
    };
    let addr = match addr {
        Reachability::Unreachable => return Ok(Reachability::Unreachable),
        Reachability::Reachable(a) => a,
    };

    // Note that we don't set `is_aligned` here, even if the load instruction's
    // alignment immediate may say it's aligned, because WebAssembly's
    // immediate field is just a hint, while Cranelift's aligned flag needs a
    // guarantee. WebAssembly memory accesses are always little-endian.
    let mut flags = MemFlags::new();
    flags.set_endianness(ir::Endianness::Little);

    if heap.pcc_memory_type.is_some() {
        // Proof-carrying code is enabled; check this memory access.
        flags.set_checked();
    }

    // The access occurs to the `heap` disjoint category of abstract
    // state. This may allow alias analysis to merge redundant loads,
    // etc. when heap accesses occur interleaved with other (table,
    // vmctx, stack) accesses.
    flags.set_alias_region(Some(ir::AliasRegion::Heap));

    Ok(Reachability::Reachable((flags, index, addr)))
}

fn align_atomic_addr(
    memarg: &MemArg,
    loaded_bytes: u8,
    builder: &mut FunctionBuilder,
    environ: &mut FuncEnvironment<'_>,
) {
    // Atomic addresses must all be aligned correctly, and for now we check
    // alignment before we check out-of-bounds-ness. The order of this check may
    // need to be updated depending on the outcome of the official threads
    // proposal itself.
    //
    // Note that with an offset>0 we generate an `iadd_imm` where the result is
    // thrown away after the offset check. This may truncate the offset and the
    // result may overflow as well, but those conditions won't affect the
    // alignment check itself.
This can probably be optimized better and we3645// should do so in the future as well.3646if loaded_bytes > 1 {3647let addr = environ.stacks.pop1(); // "peek" via pop then push3648environ.stacks.push1(addr);3649let effective_addr = if memarg.offset == 0 {3650addr3651} else {3652builder.ins().iadd_imm(addr, memarg.offset.cast_signed())3653};3654debug_assert!(loaded_bytes.is_power_of_two());3655let misalignment = builder3656.ins()3657.band_imm(effective_addr, i64::from(loaded_bytes - 1));3658let f = builder.ins().icmp_imm(IntCC::NotEqual, misalignment, 0);3659environ.trapnz(builder, f, crate::TRAP_HEAP_MISALIGNED);3660}3661}36623663/// Like `prepare_addr` but for atomic accesses.3664///3665/// Returns `None` when the Wasm access will unconditionally trap.3666fn prepare_atomic_addr(3667memarg: &MemArg,3668loaded_bytes: u8,3669builder: &mut FunctionBuilder,3670environ: &mut FuncEnvironment<'_>,3671) -> WasmResult<Reachability<(MemFlags, Value, Value)>> {3672align_atomic_addr(memarg, loaded_bytes, builder, environ);3673prepare_addr(memarg, loaded_bytes, builder, environ)3674}36753676/// Translate a load instruction.3677///3678/// Returns the execution state's reachability after the load is translated.3679fn translate_load(3680memarg: &MemArg,3681opcode: ir::Opcode,3682result_ty: Type,3683builder: &mut FunctionBuilder,3684environ: &mut FuncEnvironment<'_>,3685) -> WasmResult<Reachability<()>> {3686let mem_op_size = mem_op_size(opcode, result_ty);3687let (flags, wasm_index, base) = match prepare_addr(memarg, mem_op_size, builder, environ)? {3688Reachability::Unreachable => return Ok(Reachability::Unreachable),3689Reachability::Reachable((f, i, b)) => (f, i, b),3690};36913692environ.before_load(builder, mem_op_size, wasm_index, memarg.offset);36933694let (load, dfg) = builder3695.ins()3696.Load(opcode, result_ty, flags, Offset32::new(0), base);3697environ.stacks.push1(dfg.first_result(load));3698Ok(Reachability::Reachable(()))3699}37003701/// Translate a store instruction.3702fn translate_store(3703memarg: &MemArg,3704opcode: ir::Opcode,3705builder: &mut FunctionBuilder,3706environ: &mut FuncEnvironment<'_>,3707) -> WasmResult<()> {3708let val = environ.stacks.pop1();3709let val_ty = builder.func.dfg.value_type(val);3710let mem_op_size = mem_op_size(opcode, val_ty);37113712let (flags, wasm_index, base) = unwrap_or_return_unreachable_state!(3713environ,3714prepare_addr(memarg, mem_op_size, builder, environ)?3715);37163717environ.before_store(builder, mem_op_size, wasm_index, memarg.offset);37183719builder3720.ins()3721.Store(opcode, val_ty, flags, Offset32::new(0), val, base);3722Ok(())3723}37243725fn mem_op_size(opcode: ir::Opcode, ty: Type) -> u8 {3726match opcode {3727ir::Opcode::Istore8 | ir::Opcode::Sload8 | ir::Opcode::Uload8 => 1,3728ir::Opcode::Istore16 | ir::Opcode::Sload16 | ir::Opcode::Uload16 => 2,3729ir::Opcode::Istore32 | ir::Opcode::Sload32 | ir::Opcode::Uload32 => 4,3730ir::Opcode::Store | ir::Opcode::Load => u8::try_from(ty.bytes()).unwrap(),3731_ => panic!("unknown size of mem op for {opcode:?}"),3732}3733}37343735fn translate_icmp(cc: IntCC, builder: &mut FunctionBuilder, environ: &mut FuncEnvironment<'_>) {3736let (arg0, arg1) = environ.stacks.pop2();3737let val = builder.ins().icmp(cc, arg0, arg1);3738environ.stacks.push1(builder.ins().uextend(I32, val));3739}37403741fn translate_atomic_rmw(3742widened_ty: Type,3743access_ty: Type,3744op: AtomicRmwOp,3745memarg: &MemArg,3746builder: &mut FunctionBuilder,3747environ: &mut FuncEnvironment<'_>,3748) -> WasmResult<()> {3749let mut 
arg2 = environ.stacks.pop1();3750let arg2_ty = builder.func.dfg.value_type(arg2);37513752// The operation is performed at type `access_ty`, and the old value is zero-extended3753// to type `widened_ty`.3754match access_ty {3755I8 | I16 | I32 | I64 => {}3756_ => {3757return Err(wasm_unsupported!(3758"atomic_rmw: unsupported access type {:?}",3759access_ty3760));3761}3762};3763let w_ty_ok = match widened_ty {3764I32 | I64 => true,3765_ => false,3766};3767assert!(w_ty_ok && widened_ty.bytes() >= access_ty.bytes());37683769assert!(arg2_ty.bytes() >= access_ty.bytes());3770if arg2_ty.bytes() > access_ty.bytes() {3771arg2 = builder.ins().ireduce(access_ty, arg2);3772}37733774let (flags, _, addr) = unwrap_or_return_unreachable_state!(3775environ,3776prepare_atomic_addr(3777memarg,3778u8::try_from(access_ty.bytes()).unwrap(),3779builder,3780environ,3781)?3782);37833784let mut res = builder.ins().atomic_rmw(access_ty, flags, op, addr, arg2);3785if access_ty != widened_ty {3786res = builder.ins().uextend(widened_ty, res);3787}3788environ.stacks.push1(res);3789Ok(())3790}37913792fn translate_atomic_cas(3793widened_ty: Type,3794access_ty: Type,3795memarg: &MemArg,3796builder: &mut FunctionBuilder,3797environ: &mut FuncEnvironment<'_>,3798) -> WasmResult<()> {3799let (mut expected, mut replacement) = environ.stacks.pop2();3800let expected_ty = builder.func.dfg.value_type(expected);3801let replacement_ty = builder.func.dfg.value_type(replacement);38023803// The compare-and-swap is performed at type `access_ty`, and the old value is zero-extended3804// to type `widened_ty`.3805match access_ty {3806I8 | I16 | I32 | I64 => {}3807_ => {3808return Err(wasm_unsupported!(3809"atomic_cas: unsupported access type {:?}",3810access_ty3811));3812}3813};3814let w_ty_ok = match widened_ty {3815I32 | I64 => true,3816_ => false,3817};3818assert!(w_ty_ok && widened_ty.bytes() >= access_ty.bytes());38193820assert!(expected_ty.bytes() >= access_ty.bytes());3821if expected_ty.bytes() > access_ty.bytes() {3822expected = builder.ins().ireduce(access_ty, expected);3823}3824assert!(replacement_ty.bytes() >= access_ty.bytes());3825if replacement_ty.bytes() > access_ty.bytes() {3826replacement = builder.ins().ireduce(access_ty, replacement);3827}38283829let (flags, _, addr) = unwrap_or_return_unreachable_state!(3830environ,3831prepare_atomic_addr(3832memarg,3833u8::try_from(access_ty.bytes()).unwrap(),3834builder,3835environ,3836)?3837);3838let mut res = builder.ins().atomic_cas(flags, addr, expected, replacement);3839if access_ty != widened_ty {3840res = builder.ins().uextend(widened_ty, res);3841}3842environ.stacks.push1(res);3843Ok(())3844}38453846fn translate_atomic_load(3847widened_ty: Type,3848access_ty: Type,3849memarg: &MemArg,3850builder: &mut FunctionBuilder,3851environ: &mut FuncEnvironment<'_>,3852) -> WasmResult<()> {3853// The load is performed at type `access_ty`, and the loaded value is zero extended3854// to `widened_ty`.3855match access_ty {3856I8 | I16 | I32 | I64 => {}3857_ => {3858return Err(wasm_unsupported!(3859"atomic_load: unsupported access type {:?}",3860access_ty3861));3862}3863};3864let w_ty_ok = match widened_ty {3865I32 | I64 => true,3866_ => false,3867};3868assert!(w_ty_ok && widened_ty.bytes() >= access_ty.bytes());38693870let (flags, _, addr) = unwrap_or_return_unreachable_state!(3871environ,3872prepare_atomic_addr(3873memarg,3874u8::try_from(access_ty.bytes()).unwrap(),3875builder,3876environ,3877)?3878);3879let mut res = builder.ins().atomic_load(access_ty, flags, addr);3880if access_ty != 
widened_ty {3881res = builder.ins().uextend(widened_ty, res);3882}3883environ.stacks.push1(res);3884Ok(())3885}38863887fn translate_atomic_store(3888access_ty: Type,3889memarg: &MemArg,3890builder: &mut FunctionBuilder,3891environ: &mut FuncEnvironment<'_>,3892) -> WasmResult<()> {3893let mut data = environ.stacks.pop1();3894let data_ty = builder.func.dfg.value_type(data);38953896// The operation is performed at type `access_ty`, and the data to be stored may first3897// need to be narrowed accordingly.3898match access_ty {3899I8 | I16 | I32 | I64 => {}3900_ => {3901return Err(wasm_unsupported!(3902"atomic_store: unsupported access type {:?}",3903access_ty3904));3905}3906};3907let d_ty_ok = match data_ty {3908I32 | I64 => true,3909_ => false,3910};3911assert!(d_ty_ok && data_ty.bytes() >= access_ty.bytes());39123913if data_ty.bytes() > access_ty.bytes() {3914data = builder.ins().ireduce(access_ty, data);3915}39163917let (flags, _, addr) = unwrap_or_return_unreachable_state!(3918environ,3919prepare_atomic_addr(3920memarg,3921u8::try_from(access_ty.bytes()).unwrap(),3922builder,3923environ,3924)?3925);3926builder.ins().atomic_store(flags, data, addr);3927Ok(())3928}39293930fn translate_vector_icmp(3931cc: IntCC,3932needed_type: Type,3933builder: &mut FunctionBuilder,3934env: &mut FuncEnvironment<'_>,3935) {3936let (a, b) = env.stacks.pop2();3937let bitcast_a = optionally_bitcast_vector(a, needed_type, builder);3938let bitcast_b = optionally_bitcast_vector(b, needed_type, builder);3939env.stacks3940.push1(builder.ins().icmp(cc, bitcast_a, bitcast_b))3941}39423943fn translate_fcmp(cc: FloatCC, builder: &mut FunctionBuilder, env: &mut FuncEnvironment<'_>) {3944let (arg0, arg1) = env.stacks.pop2();3945let val = builder.ins().fcmp(cc, arg0, arg1);3946env.stacks.push1(builder.ins().uextend(I32, val));3947}39483949fn translate_vector_fcmp(3950cc: FloatCC,3951needed_type: Type,3952builder: &mut FunctionBuilder,3953env: &mut FuncEnvironment<'_>,3954) {3955let (a, b) = env.stacks.pop2();3956let bitcast_a = optionally_bitcast_vector(a, needed_type, builder);3957let bitcast_b = optionally_bitcast_vector(b, needed_type, builder);3958env.stacks3959.push1(builder.ins().fcmp(cc, bitcast_a, bitcast_b))3960}39613962fn translate_br_if(3963relative_depth: u32,3964builder: &mut FunctionBuilder,3965env: &mut FuncEnvironment<'_>,3966) {3967let val = env.stacks.pop1();3968let (br_destination, inputs) = translate_br_if_args(relative_depth, env);3969let next_block = builder.create_block();3970canonicalise_brif(builder, val, br_destination, inputs, next_block, &[]);39713972builder.seal_block(next_block); // The only predecessor is the current block.3973builder.switch_to_block(next_block);3974}39753976fn translate_br_if_args<'a>(3977relative_depth: u32,3978env: &'a mut FuncEnvironment<'_>,3979) -> (ir::Block, &'a mut [ir::Value]) {3980let i = env.stacks.control_stack.len() - 1 - (relative_depth as usize);3981let (return_count, br_destination) = {3982let frame = &mut env.stacks.control_stack[i];3983// The values returned by the branch are still available for the reachable3984// code that comes after it3985frame.set_branched_to_exit();3986let return_count = if frame.is_loop() {3987frame.num_param_values()3988} else {3989frame.num_return_values()3990};3991(return_count, frame.br_destination())3992};3993let inputs = env.stacks.peekn_mut(return_count);3994(br_destination, inputs)3995}39963997/// Determine the returned value type of a WebAssembly operator3998fn type_of(operator: &Operator) -> Type {3999match operator 
{4000Operator::V128Load { .. }4001| Operator::V128Store { .. }4002| Operator::V128Const { .. }4003| Operator::V128Not4004| Operator::V128And4005| Operator::V128AndNot4006| Operator::V128Or4007| Operator::V128Xor4008| Operator::V128AnyTrue4009| Operator::V128Bitselect => I8X16, // default type representing V12840104011Operator::I8x16Shuffle { .. }4012| Operator::I8x16Splat4013| Operator::V128Load8Splat { .. }4014| Operator::V128Load8Lane { .. }4015| Operator::V128Store8Lane { .. }4016| Operator::I8x16ExtractLaneS { .. }4017| Operator::I8x16ExtractLaneU { .. }4018| Operator::I8x16ReplaceLane { .. }4019| Operator::I8x16Eq4020| Operator::I8x16Ne4021| Operator::I8x16LtS4022| Operator::I8x16LtU4023| Operator::I8x16GtS4024| Operator::I8x16GtU4025| Operator::I8x16LeS4026| Operator::I8x16LeU4027| Operator::I8x16GeS4028| Operator::I8x16GeU4029| Operator::I8x16Neg4030| Operator::I8x16Abs4031| Operator::I8x16AllTrue4032| Operator::I8x16Shl4033| Operator::I8x16ShrS4034| Operator::I8x16ShrU4035| Operator::I8x16Add4036| Operator::I8x16AddSatS4037| Operator::I8x16AddSatU4038| Operator::I8x16Sub4039| Operator::I8x16SubSatS4040| Operator::I8x16SubSatU4041| Operator::I8x16MinS4042| Operator::I8x16MinU4043| Operator::I8x16MaxS4044| Operator::I8x16MaxU4045| Operator::I8x16AvgrU4046| Operator::I8x16Bitmask4047| Operator::I8x16Popcnt4048| Operator::I8x16RelaxedLaneselect => I8X16,40494050Operator::I16x8Splat4051| Operator::V128Load16Splat { .. }4052| Operator::V128Load16Lane { .. }4053| Operator::V128Store16Lane { .. }4054| Operator::I16x8ExtractLaneS { .. }4055| Operator::I16x8ExtractLaneU { .. }4056| Operator::I16x8ReplaceLane { .. }4057| Operator::I16x8Eq4058| Operator::I16x8Ne4059| Operator::I16x8LtS4060| Operator::I16x8LtU4061| Operator::I16x8GtS4062| Operator::I16x8GtU4063| Operator::I16x8LeS4064| Operator::I16x8LeU4065| Operator::I16x8GeS4066| Operator::I16x8GeU4067| Operator::I16x8Neg4068| Operator::I16x8Abs4069| Operator::I16x8AllTrue4070| Operator::I16x8Shl4071| Operator::I16x8ShrS4072| Operator::I16x8ShrU4073| Operator::I16x8Add4074| Operator::I16x8AddSatS4075| Operator::I16x8AddSatU4076| Operator::I16x8Sub4077| Operator::I16x8SubSatS4078| Operator::I16x8SubSatU4079| Operator::I16x8MinS4080| Operator::I16x8MinU4081| Operator::I16x8MaxS4082| Operator::I16x8MaxU4083| Operator::I16x8AvgrU4084| Operator::I16x8Mul4085| Operator::I16x8Bitmask4086| Operator::I16x8RelaxedLaneselect => I16X8,40874088Operator::I32x4Splat4089| Operator::V128Load32Splat { .. }4090| Operator::V128Load32Lane { .. }4091| Operator::V128Store32Lane { .. }4092| Operator::I32x4ExtractLane { .. }4093| Operator::I32x4ReplaceLane { .. }4094| Operator::I32x4Eq4095| Operator::I32x4Ne4096| Operator::I32x4LtS4097| Operator::I32x4LtU4098| Operator::I32x4GtS4099| Operator::I32x4GtU4100| Operator::I32x4LeS4101| Operator::I32x4LeU4102| Operator::I32x4GeS4103| Operator::I32x4GeU4104| Operator::I32x4Neg4105| Operator::I32x4Abs4106| Operator::I32x4AllTrue4107| Operator::I32x4Shl4108| Operator::I32x4ShrS4109| Operator::I32x4ShrU4110| Operator::I32x4Add4111| Operator::I32x4Sub4112| Operator::I32x4Mul4113| Operator::I32x4MinS4114| Operator::I32x4MinU4115| Operator::I32x4MaxS4116| Operator::I32x4MaxU4117| Operator::I32x4Bitmask4118| Operator::I32x4TruncSatF32x4S4119| Operator::I32x4TruncSatF32x4U4120| Operator::I32x4RelaxedLaneselect4121| Operator::V128Load32Zero { .. } => I32X4,41224123Operator::I64x2Splat4124| Operator::V128Load64Splat { .. }4125| Operator::V128Load64Lane { .. }4126| Operator::V128Store64Lane { .. }4127| Operator::I64x2ExtractLane { .. 
}4128| Operator::I64x2ReplaceLane { .. }4129| Operator::I64x2Eq4130| Operator::I64x2Ne4131| Operator::I64x2LtS4132| Operator::I64x2GtS4133| Operator::I64x2LeS4134| Operator::I64x2GeS4135| Operator::I64x2Neg4136| Operator::I64x2Abs4137| Operator::I64x2AllTrue4138| Operator::I64x2Shl4139| Operator::I64x2ShrS4140| Operator::I64x2ShrU4141| Operator::I64x2Add4142| Operator::I64x2Sub4143| Operator::I64x2Mul4144| Operator::I64x2Bitmask4145| Operator::I64x2RelaxedLaneselect4146| Operator::V128Load64Zero { .. } => I64X2,41474148Operator::F32x4Splat4149| Operator::F32x4ExtractLane { .. }4150| Operator::F32x4ReplaceLane { .. }4151| Operator::F32x4Eq4152| Operator::F32x4Ne4153| Operator::F32x4Lt4154| Operator::F32x4Gt4155| Operator::F32x4Le4156| Operator::F32x4Ge4157| Operator::F32x4Abs4158| Operator::F32x4Neg4159| Operator::F32x4Sqrt4160| Operator::F32x4Add4161| Operator::F32x4Sub4162| Operator::F32x4Mul4163| Operator::F32x4Div4164| Operator::F32x4Min4165| Operator::F32x4Max4166| Operator::F32x4PMin4167| Operator::F32x4PMax4168| Operator::F32x4ConvertI32x4S4169| Operator::F32x4ConvertI32x4U4170| Operator::F32x4Ceil4171| Operator::F32x4Floor4172| Operator::F32x4Trunc4173| Operator::F32x4Nearest4174| Operator::F32x4RelaxedMax4175| Operator::F32x4RelaxedMin4176| Operator::F32x4RelaxedMadd4177| Operator::F32x4RelaxedNmadd => F32X4,41784179Operator::F64x2Splat4180| Operator::F64x2ExtractLane { .. }4181| Operator::F64x2ReplaceLane { .. }4182| Operator::F64x2Eq4183| Operator::F64x2Ne4184| Operator::F64x2Lt4185| Operator::F64x2Gt4186| Operator::F64x2Le4187| Operator::F64x2Ge4188| Operator::F64x2Abs4189| Operator::F64x2Neg4190| Operator::F64x2Sqrt4191| Operator::F64x2Add4192| Operator::F64x2Sub4193| Operator::F64x2Mul4194| Operator::F64x2Div4195| Operator::F64x2Min4196| Operator::F64x2Max4197| Operator::F64x2PMin4198| Operator::F64x2PMax4199| Operator::F64x2Ceil4200| Operator::F64x2Floor4201| Operator::F64x2Trunc4202| Operator::F64x2Nearest4203| Operator::F64x2RelaxedMax4204| Operator::F64x2RelaxedMin4205| Operator::F64x2RelaxedMadd4206| Operator::F64x2RelaxedNmadd => F64X2,42074208_ => unimplemented!(4209"Currently only SIMD instructions are mapped to their return type; the \4210following instruction is not mapped: {:?}",4211operator4212),4213}4214}42154216/// Some SIMD operations only operate on I8X16 in CLIF; this will convert them to that type by4217/// adding a bitcast if necessary.4218fn optionally_bitcast_vector(4219value: Value,4220needed_type: Type,4221builder: &mut FunctionBuilder,4222) -> Value {4223if builder.func.dfg.value_type(value) != needed_type {4224let mut flags = MemFlags::new();4225flags.set_endianness(ir::Endianness::Little);4226builder.ins().bitcast(needed_type, flags, value)4227} else {4228value4229}4230}42314232#[inline(always)]4233fn is_non_canonical_v128(ty: ir::Type) -> bool {4234match ty {4235I64X2 | I32X4 | I16X8 | F32X4 | F64X2 => true,4236_ => false,4237}4238}42394240/// Cast to I8X16, any vector values in `values` that are of "non-canonical" type (meaning, not4241/// I8X16), and return them in a slice. A pre-scan is made to determine whether any casts are4242/// actually necessary, and if not, the original slice is returned. 
/// Otherwise the cast values are returned in a slice that belongs to the
/// caller-supplied `SmallVec`.
fn canonicalise_v128_values<'a>(
    tmp_canonicalised: &'a mut SmallVec<[BlockArg; 16]>,
    builder: &mut FunctionBuilder,
    values: &'a [ir::Value],
) -> &'a [BlockArg] {
    debug_assert!(tmp_canonicalised.is_empty());
    // Cast, and push the resulting `Value`s into `tmp_canonicalised`.
    for v in values {
        let value = if is_non_canonical_v128(builder.func.dfg.value_type(*v)) {
            let mut flags = MemFlags::new();
            flags.set_endianness(ir::Endianness::Little);
            builder.ins().bitcast(I8X16, flags, *v)
        } else {
            *v
        };
        tmp_canonicalised.push(BlockArg::from(value));
    }
    tmp_canonicalised.as_slice()
}

/// Generate a `jump` instruction, but first cast all 128-bit vector values to I8X16 if they
/// don't have that type. This is done in a somewhat roundabout way so as to ensure that we
/// almost never have to do any heap allocation.
fn canonicalise_then_jump(
    builder: &mut FunctionBuilder,
    destination: ir::Block,
    params: &[ir::Value],
) -> ir::Inst {
    let mut tmp_canonicalised = SmallVec::<[_; 16]>::new();
    let canonicalised = canonicalise_v128_values(&mut tmp_canonicalised, builder, params);
    builder.ins().jump(destination, canonicalised)
}

/// The same but for a `brif` instruction.
fn canonicalise_brif(
    builder: &mut FunctionBuilder,
    cond: ir::Value,
    block_then: ir::Block,
    params_then: &[ir::Value],
    block_else: ir::Block,
    params_else: &[ir::Value],
) -> ir::Inst {
    let mut tmp_canonicalised_then = SmallVec::<[_; 16]>::new();
    let canonicalised_then =
        canonicalise_v128_values(&mut tmp_canonicalised_then, builder, params_then);
    let mut tmp_canonicalised_else = SmallVec::<[_; 16]>::new();
    let canonicalised_else =
        canonicalise_v128_values(&mut tmp_canonicalised_else, builder, params_else);
    builder.ins().brif(
        cond,
        block_then,
        canonicalised_then,
        block_else,
        canonicalised_else,
    )
}

/// A helper for popping and bitcasting a single value; since SIMD values can lose their type by
/// using v128 (i.e. CLIF's I8x16) we must re-type the values using a bitcast to avoid CLIF
/// typing issues.
fn pop1_with_bitcast(
    env: &mut FuncEnvironment<'_>,
    needed_type: Type,
    builder: &mut FunctionBuilder,
) -> Value {
    optionally_bitcast_vector(env.stacks.pop1(), needed_type, builder)
}

/// A helper for popping and bitcasting two values; since SIMD values can lose their type by
/// using v128 (i.e.
CLIF's I8x16) we must re-type the values using a bitcast to avoid CLIF4314/// typing issues.4315fn pop2_with_bitcast(4316env: &mut FuncEnvironment<'_>,4317needed_type: Type,4318builder: &mut FunctionBuilder,4319) -> (Value, Value) {4320let (a, b) = env.stacks.pop2();4321let bitcast_a = optionally_bitcast_vector(a, needed_type, builder);4322let bitcast_b = optionally_bitcast_vector(b, needed_type, builder);4323(bitcast_a, bitcast_b)4324}43254326fn pop3_with_bitcast(4327env: &mut FuncEnvironment<'_>,4328needed_type: Type,4329builder: &mut FunctionBuilder,4330) -> (Value, Value, Value) {4331let (a, b, c) = env.stacks.pop3();4332let bitcast_a = optionally_bitcast_vector(a, needed_type, builder);4333let bitcast_b = optionally_bitcast_vector(b, needed_type, builder);4334let bitcast_c = optionally_bitcast_vector(c, needed_type, builder);4335(bitcast_a, bitcast_b, bitcast_c)4336}43374338fn bitcast_arguments<'a>(4339builder: &FunctionBuilder,4340arguments: &'a mut [Value],4341params: &[ir::AbiParam],4342param_predicate: impl Fn(usize) -> bool,4343) -> Vec<(Type, &'a mut Value)> {4344let filtered_param_types = params4345.iter()4346.enumerate()4347.filter(|(i, _)| param_predicate(*i))4348.map(|(_, param)| param.value_type);43494350// zip_eq, from the itertools::Itertools trait, is like Iterator::zip but panics if one4351// iterator ends before the other. The `param_predicate` is required to select exactly as many4352// elements of `params` as there are elements in `arguments`.4353let pairs = filtered_param_types.zip_eq(arguments.iter_mut());43544355// The arguments which need to be bitcasted are those which have some vector type but the type4356// expected by the parameter is not the same vector type as that of the provided argument.4357pairs4358.filter(|(param_type, _)| param_type.is_vector())4359.filter(|(param_type, arg)| {4360let arg_type = builder.func.dfg.value_type(**arg);4361assert!(4362arg_type.is_vector(),4363"unexpected type mismatch: expected {}, argument {} was actually of type {}",4364param_type,4365*arg,4366arg_type4367);43684369// This is the same check that would be done by `optionally_bitcast_vector`, except we4370// can't take a mutable borrow of the FunctionBuilder here, so we defer inserting the4371// bitcast instruction to the caller.4372arg_type != *param_type4373})4374.collect()4375}43764377/// A helper for bitcasting a sequence of return values for the function currently being built. If4378/// a value is a vector type that does not match its expected type, this will modify the value in4379/// place to point to the result of a `bitcast`. This conversion is necessary to translate Wasm4380/// code that uses `V128` as function parameters (or implicitly in block parameters) and still use4381/// specific CLIF types (e.g. 
`I32X4`) in the function body.4382pub fn bitcast_wasm_returns(arguments: &mut [Value], builder: &mut FunctionBuilder) {4383let changes = bitcast_arguments(builder, arguments, &builder.func.signature.returns, |i| {4384builder.func.signature.returns[i].purpose == ir::ArgumentPurpose::Normal4385});4386for (t, arg) in changes {4387let mut flags = MemFlags::new();4388flags.set_endianness(ir::Endianness::Little);4389*arg = builder.ins().bitcast(t, flags, *arg);4390}4391}43924393/// Like `bitcast_wasm_returns`, but for the parameters being passed to a specified callee.4394fn bitcast_wasm_params(4395environ: &mut FuncEnvironment<'_>,4396callee_signature: ir::SigRef,4397arguments: &mut [Value],4398builder: &mut FunctionBuilder,4399) {4400let callee_signature = &builder.func.dfg.signatures[callee_signature];4401let changes = bitcast_arguments(builder, arguments, &callee_signature.params, |i| {4402environ.is_wasm_parameter(i)4403});4404for (t, arg) in changes {4405let mut flags = MemFlags::new();4406flags.set_endianness(ir::Endianness::Little);4407*arg = builder.ins().bitcast(t, flags, *arg);4408}4409}44104411fn create_catch_block(4412builder: &mut FunctionBuilder,4413catch: &wasmparser::Catch,4414environ: &mut FuncEnvironment<'_>,4415) -> WasmResult<ir::Block> {4416let (is_ref, tag, label) = match catch {4417wasmparser::Catch::One { tag, label } => (false, Some(*tag), *label),4418wasmparser::Catch::OneRef { tag, label } => (true, Some(*tag), *label),4419wasmparser::Catch::All { label } => (false, None, *label),4420wasmparser::Catch::AllRef { label } => (true, None, *label),4421};44224423// We always create a handler block with one blockparam for the4424// one exception payload value that we use (`exn0` block-call4425// argument). This one payload value is the `exnref`. Note,4426// however, that we carry it in a native host-pointer-sized4427// payload (because this is what the exception ABI in Cranelift4428// requires). We then generate the args for the actual branch to4429// the handler block: we add unboxing code to load each value in4430// the exception signature if a specific tag is expected (hence4431// signature is known), and then append the `exnref` itself if we4432// are compiling a `*Ref` variant.44334434let (exn_ref_ty, needs_stack_map) = environ.reference_type(WasmHeapType::Exn);4435let (exn_payload_wasm_ty, exn_payload_ty) = match environ.pointer_type().bits() {443632 => (wasmparser::ValType::I32, I32),443764 => (wasmparser::ValType::I64, I64),4438_ => panic!("Unsupported pointer width"),4439};4440let block = block_with_params(builder, [exn_payload_wasm_ty], environ)?;4441builder.switch_to_block(block);4442let exn_ref = builder.func.dfg.block_params(block)[0];4443debug_assert!(exn_ref_ty.bits() <= exn_payload_ty.bits());4444let exn_ref = if exn_ref_ty.bits() < exn_payload_ty.bits() {4445builder.ins().ireduce(exn_ref_ty, exn_ref)4446} else {4447exn_ref4448};44494450if needs_stack_map {4451builder.declare_value_needs_stack_map(exn_ref);4452}44534454// We encode tag indices from the module directly as Cranelift4455// `ExceptionTag`s. 
    // We will translate those to (instance, defined-tag-index) pairs during
    // the unwind walk -- necessarily dynamic because tag imports are provided
    // only at instantiation time.
    let clif_tag = tag.map(|t| ExceptionTag::from_u32(t));

    environ.stacks.handlers.add_handler(clif_tag, block);

    let mut params = vec![];

    if let Some(tag) = tag {
        let tag = TagIndex::from_u32(tag);
        params.extend(environ.translate_exn_unbox(builder, tag, exn_ref)?);
    }
    if is_ref {
        params.push(exn_ref);
    }

    // Generate the branch itself.
    let i = environ.stacks.control_stack.len() - 1 - (label as usize);
    let frame = &mut environ.stacks.control_stack[i];
    frame.set_branched_to_exit();
    canonicalise_then_jump(builder, frame.br_destination(), &params);

    Ok(block)
}
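
// The following is a small, self-contained sketch (not used by the translator
// itself) that spot-checks, with plain integer arithmetic, some of the
// reasoning relied upon in comments above: the offset-guard decomposition
// argued for in `prepare_addr`, the power-of-two mask check used by
// `align_atomic_addr`, and the numeric semantics the `I64MulWideS` lowering
// (sextend/imul/isplit) is expected to produce. All names inside this module
// are illustrative only, and the concrete constants (guard size, bounds) are
// assumptions chosen for the example rather than values taken from wasmtime's
// configuration.
#[cfg(test)]
mod translation_arithmetic_sketch {
    /// Decompose `offset` as `n * guard + y` with `y < guard`, as in the
    /// guard-page bounds-check comment in `prepare_addr`.
    fn decompose(offset: u64, guard: u64) -> (u64, u64) {
        (offset / guard * guard, offset % guard)
    }

    #[test]
    fn offset_guard_decomposition() {
        let guard = 0x8000_0000u64; // hypothetical 2 GiB guard region
        for offset in [0u64, 1, 0x7fff_ffff, 0x8000_0000, 0x1_2345_6789] {
            let (n_times_guard, y) = decompose(offset, guard);
            assert!(y < guard);
            assert_eq!(n_times_guard + y, offset);
            // If `addr + n*guard < bound`, then `addr + offset` stays below
            // `bound + guard`, which is exactly what the comment argues.
            let (addr, bound) = (0x1000u64, 0x1_0000_0000u64);
            if addr + n_times_guard < bound {
                assert!(addr + offset < bound + guard);
            }
        }
    }

    #[test]
    fn power_of_two_alignment_mask() {
        // `align_atomic_addr` detects misalignment via
        // `(addr + offset) & (size - 1) != 0`; for power-of-two sizes this is
        // equivalent to a remainder check.
        for size in [1u64, 2, 4, 8] {
            for effective_addr in 0u64..64 {
                let masked = effective_addr & (size - 1);
                assert_eq!(masked != 0, effective_addr % size != 0);
            }
        }
    }

    #[test]
    fn i64_mul_wide_s_reference_semantics() {
        // The `I64MulWideS` translation sign-extends both operands to i128,
        // multiplies, and splits the product into low/high 64-bit halves.
        // This mirrors that computation in plain Rust.
        fn mul_wide_s(a: i64, b: i64) -> (u64, u64) {
            let wide = (a as i128) * (b as i128);
            (wide as u64, (wide >> 64) as u64)
        }
        assert_eq!(
            mul_wide_s(-1, 2),
            (0xffff_ffff_ffff_fffe, 0xffff_ffff_ffff_ffff)
        );
        assert_eq!(mul_wide_s(i64::MAX, 2), (0xffff_ffff_ffff_fffe, 0));
    }
}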