Fix VM reuse_stack lost across stub VM boundary on IO suspension
Root cause: when perform fires inside a VM closure chain (call_closure_reuse), the caller frames are saved to reuse_stack on the ACTIVE VM, but _cek_io_suspend_hook and _cek_eval_lambda_ref create a NEW stub VM for the VmSuspended exception. On resume, resume_vm runs on the STUB VM, whose reuse_stack is empty, so the caller frames are orphaned on the original VM.

Fix: transfer reuse_stack from _active_vm to the stub VM before raising VmSuspended. This ensures resume_vm -> restore_reuse can find and restore the caller's frames after the async resume via _driveAsync/setTimeout. Also restore the step_limit/step_count refs dropped by the bootstrap.py regeneration.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
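A minimal sketch of the hand-off described above. The names VM, reuse_stack, VmSuspended, resume_vm, and _active_vm come from the commit message; the class shapes and the suspend helper are illustrative assumptions, not the real runtime code.

```python
class VmSuspended(Exception):
    """Raised to unwind out of the interpreter when an IO op suspends."""
    def __init__(self, vm):
        self.vm = vm  # the stub VM that the async driver will later resume

class VM:
    def __init__(self):
        # Caller frames saved by call_closure_reuse live here.
        self.reuse_stack = []

    def resume_vm(self, value):
        # restore_reuse can only recover caller frames that are on THIS VM;
        # a stub VM with an empty reuse_stack silently drops them (the bug).
        if self.reuse_stack:
            return self.reuse_stack.pop()
        return value

def suspend(active_vm):
    stub = VM()
    # The fix: move the saved caller frames onto the stub VM *before*
    # raising, so resume_vm on the stub can restore them after the
    # asynchronous hop back into the interpreter.
    stub.reuse_stack = active_vm.reuse_stack
    active_vm.reuse_stack = []
    raise VmSuspended(stub)
```

Without the two transfer lines in suspend, the stub VM resumes with an empty reuse_stack and the caller's frames stay stranded on the original VM, which is exactly the lost-frame behavior this commit fixes.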
@@ -10,7 +10,9 @@ open Sx_runtime
 let trampoline_fn : (value -> value) ref = ref (fun v -> v)
 
 let trampoline v = !trampoline_fn v
 
 (* Step limit for timeout protection *)
+let step_limit : int ref = ref 0
+let step_count : int ref = ref 0
 
 (* === Mutable globals — backing refs for transpiler's !_ref / _ref := === *)
 let _strict_ref = ref (Bool false)