forked from wrenn/wrenn
fix: prevent Go runtime memory corruption and sandbox halt after snapshot restore
Three root causes addressed:

1. Go page allocator corruption: allocations between the pre-snapshot GC and VM freeze leave the summary tree inconsistent. After restore, GC reads corrupted metadata — either panicking (killing PID 1 → kernel panic) or silently failing to collect, causing unbounded heap growth until OOM. Fix: move GC to after all HTTP allocations in PostSnapshotPrepare, then set GOMAXPROCS(1) so any remaining allocations run sequentially with no concurrent page allocator access. GOMAXPROCS is restored on the first health check after restore.

2. PostInit timeout starvation: WaitUntilReady and PostInit shared a single 30s context. If WaitUntilReady consumed most of it, PostInit failed — RestoreAfterSnapshot never ran, leaving envd with keep-alives disabled and zombie connections. Fix: separate timeout contexts.

3. CP HTTP server missing timeouts: no ReadHeaderTimeout or IdleTimeout caused goroutine leaks from hung proxy connections. Fix: add both, matching host agent values.

Also adds UFFD prefetch to proactively load all guest pages after restore, eliminating on-demand page fault latency for subsequent RPC calls.
@@ -5,8 +5,6 @@ package port
 
 import (
 	"context"
-	"runtime"
-	"runtime/debug"
 	"sync"
 	"time"
 
@@ -72,9 +70,12 @@ func (p *PortSubsystem) Start(parentCtx context.Context) {
 	}()
 }
 
-// Stop quiesces the scanner and forwarder goroutines and forces a GC cycle
-// to put the Go runtime's page allocator in a consistent state before snapshot.
+// Stop quiesces the scanner and forwarder goroutines.
 // Blocks until both goroutines have exited. Safe to call if already stopped.
+//
+// GC is NOT run here — it is deferred to PostSnapshotPrepare so that the
+// GC happens after all allocations (connection cleanup, HTTP response) are
+// complete, minimizing the window where page allocator corruption can occur.
 func (p *PortSubsystem) Stop() {
 	p.mu.Lock()
 	if !p.running {
@@ -90,12 +91,6 @@ func (p *PortSubsystem) Stop() {
 
 	cancelFn()
 	wg.Wait()
 
-	// Force two GC cycles to ensure all spans are swept and the page
-	// allocator summary tree is fully consistent before the VM is frozen.
-	runtime.GC()
-	runtime.GC()
-	debug.FreeOSMemory()
 }
 
 // Restart stops the subsystem (if running) and starts it again with a fresh