forked from wrenn/wrenn

294 Commits

SHA1 Message Date
62bede5dae fix: resolve bugs and DRY violations in sandbox manager and API handlers
- Fix createFromSnapshot discarding memoryMB param (balloon optimization was dead)
- Fix double dm-snapshot removal in Pause() cleanupPauseFailure path
- Fix DestroySandbox RPC mapping all errors to CodeNotFound
- Fix handleFailed event consumer missing pausing/resuming → error transitions
- Fix stream resource leak in StreamUpload on early-return paths
- Add envs/cwd fields to ExecRequest proto for foreground exec parity
- Extract createResources rollback helper to eliminate 4x duplicated teardown
- Remove unused chClient.ping method
- Add .mcp.json to gitignore

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-05-17 02:30:32 +06:00
74f85ce4e9 refactor: polish control plane and host agent code
- Decompose executeBuild (318 lines) into provisionBuildSandbox and
  finalizeBuild helpers for readability
- Extract cleanupPauseFailure in sandbox manager to unify 3 inconsistent
  inline teardown paths (also fixes CoW file leak on rename failure)
- Remove unused ctx parameter from startProcess/startProcessForRestore
- Add missing MASQUERADE rollback entry in CreateNetwork for symmetry
- Consolidate duplicate writeJSON for UTF-8/base64 exec response
2026-05-17 02:11:48 +06:00
124e097e23 refactor: eliminate DRY violations across control plane and host agent
Extract shared helpers to consolidate repeated patterns:
- requireRunningSandbox: sandbox lookup + running check (10 call sites)
- upgradeAndAuthenticate: WS upgrade + JWT/API-key auth (3 handlers)
- updateLastActive: last_active_at update with background context (5 sites)
- attachCowAndCreate: cow loop attach + dmsetup create (devicemapper)
- issueRegistrationToken: token gen + Redis + audit (host service)
- ErrNotFound sentinel: replaces string matching in hostagent server

Also merges duplicate wsProcessOut/wsOutMsg types into one.

Net: -208 lines, zero behavior change.
2026-05-17 02:03:06 +06:00
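
For illustration, a lookup-plus-check helper like `requireRunningSandbox` and the `ErrNotFound` sentinel might look like the sketch below; the types, status strings, and HTTP mapping are assumptions, not the repo's actual code.

```go
package sandbox

import (
	"errors"
	"fmt"
	"net/http"
	"sync"
)

// Hypothetical sandbox record and manager, standing in for the real types.
type Sandbox struct {
	ID     string
	Status string
}

type Manager struct {
	mu        sync.RWMutex
	sandboxes map[string]*Sandbox
}

var (
	errNotFound   = errors.New("sandbox not found")    // sentinel replacing string matching
	errNotRunning = errors.New("sandbox not running")
)

// requireRunningSandbox consolidates the lookup-then-check pattern the
// commit says was repeated at 10 call sites.
func (m *Manager) requireRunningSandbox(id string) (*Sandbox, error) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	sb, ok := m.sandboxes[id]
	if !ok {
		return nil, fmt.Errorf("sandbox %s: %w", id, errNotFound)
	}
	if sb.Status != "running" {
		return nil, fmt.Errorf("sandbox %s: %w", id, errNotRunning)
	}
	return sb, nil
}

// Callers can then map the sentinels to transport codes in one place.
func statusFor(err error) int {
	switch {
	case errors.Is(err, errNotFound):
		return http.StatusNotFound
	case errors.Is(err, errNotRunning):
		return http.StatusConflict
	default:
		return http.StatusInternalServerError
	}
}
```
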
a5425969ed fix: assorted bug fixes for CH migration
Fix resource leaks, race conditions, and error handling across host
agent and control plane: proper sparse file cleanup on close error,
connect error wrapping for MakeDir, CoW file cleanup on pause failure,
per-sandbox VM directories, deferred map deletion to avoid race in VM
destroy, and goroutine launch for extension background workers.
2026-05-17 01:47:56 +06:00
fb16bc9ed1 chore: update proto, scripts, and docs for CH migration
- Update hostagent proto: firecracker_version → vmm_version in metadata
- Regenerate hostagent.pb.go
- Update .env.example: WRENN_FIRECRACKER_BIN → WRENN_CH_BIN
- Update Makefile: remove --isnotfc from dev-envd target
- Update prepare-wrenn-user.sh: firecracker → cloud-hypervisor paths
  and capability assignments
- Update wrenn-init.sh: disable write_zeroes on rootfs for dm-snapshot
  compatibility with CH
- Update README.md and CLAUDE.md: Firecracker → Cloud Hypervisor
  throughout
2026-05-17 01:33:35 +06:00
dd8a940431 feat(envd): update guest agent for Cloud Hypervisor
Remove Firecracker-specific MMDS metadata fetching and the metrics host
module. CH communicates with the guest purely over TAP networking, so
MMDS (Firecracker's guest metadata service) is no longer needed.

- Remove src/host/ module (mmds.rs, metrics.rs)
- Remove reqwest dependency (was only used for MMDS HTTP calls)
- Remove --isnotfc CLI flag (no longer dual-mode)
- Simplify health endpoint and init handler
- Update state management for CH snapshot lifecycle
- Bump version to 0.3.0
2026-05-17 01:33:25 +06:00
eaa6b8576d feat(vm): replace Firecracker with Cloud Hypervisor
Migrate the entire VM layer from Firecracker to Cloud Hypervisor (CH).
CH provides native snapshot/restore via its HTTP API, eliminating the
need for custom UFFD handling, memfile processing, and snapshot header
management that Firecracker required.

Key changes:
- Remove fc.go, jailer.go (FC process management)
- Remove internal/uffd/ package (userfaultfd lazy page loading)
- Remove snapshot/header.go, mapping.go, memfile.go (FC snapshot format)
- Add ch.go (CH HTTP API client over Unix socket)
- Add process.go (CH process lifecycle with unshare+netns)
- Add chversion.go (CH version detection)
- Refactor sandbox manager: remove UFFD socket tracking, snapshot
  parent/diff chaining, FC-specific balloon logic; add crash watcher
- Simplify snapshot/local.go to CH's native snapshot format
- Update VM config: FirecrackerBin → VMMBin, new CH-specific fields
- Update envdclient, devicemapper, network for CH compatibility
2026-05-17 01:33:12 +06:00
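
The `ch.go` client itself is not shown in the log, but reaching Cloud Hypervisor's HTTP API over its Unix socket follows a standard Go pattern; a minimal sketch, with the endpoint path given only as an example:

```go
package chclient

import (
	"context"
	"net"
	"net/http"
)

// NewUnixClient returns an http.Client that carries every request over
// the given Unix domain socket, which is how a VMM API socket is
// typically reached from Go.
func NewUnixClient(socketPath string) *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", socketPath)
			},
		},
	}
}

// The host part of the URL is a placeholder once dialing is pinned to
// the socket, e.g.:
//   client.Post("http://localhost/api/v1/vm.pause", "application/json", nil)
```
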
c2dc382787 Updated openapi schema 2026-05-16 18:32:37 +06:00
3671af2498 feat: immediate sandbox reconciliation on host reconnect
When a host transitions from unreachable → online via heartbeat, trigger
ReconcileHost in a background goroutine so "missing" sandboxes are
resolved instantly instead of waiting up to 60s for the next monitor tick.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-05-16 16:15:49 +06:00
e34bcedc31 Merge pull request 'fix/remove-sync-updates' (#47) from fix/remove-sync-updates into dev
Reviewed-on: wrenn/wrenn#47
2026-05-15 08:08:07 +00:00
ff91ef3edf Bump versions 2026-05-15 13:56:04 +06:00
ba3a3db98c Updated openapi specs 2026-05-15 12:39:06 +06:00
6faad45a28 feat: async sandbox lifecycle with Redis Stream events
Replace synchronous RPC-based CP-host communication for sandbox
lifecycle operations (Create, Pause, Resume, Destroy) with an async
pattern. CP handlers now return 202 Accepted immediately, fire agent
RPCs in background goroutines, and publish state events to a Redis
Stream. A background consumer processes events as a fallback writer.

Agent-side auto-pause events are pushed to the CP via HTTP callback
(POST /v1/hosts/sandbox-events), keeping Redis internal to the CP.

All DB status transitions use conditional updates
(UpdateSandboxStatusIf, UpdateSandboxRunningIf) to prevent race
conditions between concurrent operations and background goroutines.

The HostMonitor reconciler is kept at 60s as a safety net, extended
to handle transient statuses (starting, pausing, resuming, stopping).

Frontend updated to handle 202 responses with empty bodies and render
transient statuses with blue indicators.
2026-05-15 12:25:16 +06:00
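
A conditional update like `UpdateSandboxStatusIf` plausibly reduces to a guarded SQL UPDATE; a sketch using pgx (which the repo uses elsewhere), with the table and column names assumed:

```go
package db

import (
	"context"

	"github.com/jackc/pgx/v5/pgxpool"
)

// UpdateSandboxStatusIf flips status only while the row is still in the
// expected state, so a stale background event cannot clobber a newer
// transition. Returns true if this caller won the transition.
func UpdateSandboxStatusIf(ctx context.Context, pool *pgxpool.Pool, id, from, to string) (bool, error) {
	tag, err := pool.Exec(ctx,
		`UPDATE sandboxes SET status = $1 WHERE id = $2 AND status = $3`,
		to, id, from)
	if err != nil {
		return false, err
	}
	return tag.RowsAffected() == 1, nil
}
```
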
c08884fa2c Merge branch 'main' of git.omukk.dev:wrenn/wrenn into dev 2026-05-13 11:05:49 +06:00
4707f16c76 v0.1.6 (#45)
## What's New?
Performance updates for large capsules, admin panel enhancements, and bug fixes

### Envd
- Fixed a bug in sandbox metrics calculation
- Page cache drop and balloon inflation to reduce memfile snapshot size
- Updated RPC timeout logic for better control
- Added tests

### Admin Panel
- Add/Remove platform admin
- Updated template deletion logic for fine-grained permissions

### Others
- Minor frontend visual improvement
- Minor bugfixes
- Version bump

Co-authored-by: Tasnim Kabir Sadik <tksadik92@gmail.com>
Reviewed-on: wrenn/wrenn#45
Co-authored-by: pptx704 <rafeed@omukk.dev>
Co-committed-by: pptx704 <rafeed@omukk.dev>
2026-05-13 05:05:35 +00:00
6164d7cae3 version bump 2026-05-13 10:58:54 +06:00
dc6776cc8f fix(agent): register with CP before inflating rootfs images 2026-05-13 10:52:22 +06:00
0bfda08f47 Merge pull request 'test (envd): add 136 unit tests across 12 modules' (#44) from testing/envd into dev
Reviewed-on: wrenn/wrenn#44
2026-05-13 04:42:06 +00:00
485be22a16 test(envd): add 136 unit tests across 12 modules
Cover all pure-function modules with inline #[cfg(test)] blocks:
crypto (NIST/RFC 4231 known-answer vectors), auth (SecureToken ops,
signature generation/validation), conntracker (snapshot lifecycle),
execcontext, util (AtomicMax concurrent correctness), http/encoding
(RFC 7231 negotiation), port/conn (/proc/net/tcp parsing),
rpc/entry (format_permissions), and permissions/path (tilde expansion,
ensure_dirs). Add tempfile dev-dep for filesystem tests. Update
Makefile test target to include cargo test.
2026-05-13 10:39:54 +06:00
ead406bdac Merge pull request 'fix: resolve large operation reliability — stream hangs, pause races, and memory bloat' (#43) from fix/large-operations into dev
Reviewed-on: wrenn/wrenn#43
2026-05-13 03:44:41 +00:00
1472d77b52 Merge branch 'dev' into fix/large-operations 2026-05-13 03:44:19 +00:00
6a0fea30a6 Rootfs script updated 2026-05-13 09:35:06 +06:00
8c34388fc2 Changed commands to check whether envd is statically linked 2026-05-12 23:19:30 +06:00
aca43d51eb fix: resolve process stream hangs, pause race, and PTY signal loss
- Cache terminal EndEvent on ProcessHandle so connect() can detect
  already-exited processes instead of hanging forever on broadcast
  receivers that missed the event. Subscribe before checking cache
  to close the TOCTOU window.

- Protect sb.Status writes in Pause with m.mu to prevent data race
  with concurrent readers (AcquireProxyConn, Exec, etc.).

- Restart metrics sampler in restoreRunning so a failed pause attempt
  doesn't permanently kill sandbox metrics collection.

- Return dequeued non-input messages from coalescePtyInput instead of
  dropping them, preventing silent loss of kill/resize signals during
  typing bursts.
2026-05-09 18:11:15 +06:00
522e1c5e90 fix: subscribe to process channels before spawning threads to prevent event loss
Fast-exiting processes (e.g. echo) sent data/end events before
start() subscribed to the broadcast channels, causing the stream
to hang indefinitely and the exec RPC to time out with 502.

Move channel subscription into spawn_process, before reader/waiter
threads start, and return pre-subscribed receivers via SpawnedProcess.
2026-05-09 17:28:37 +06:00
d1d316f35c fix: resolve exec 502 by terminating process streams on exit
The start() and connect() streaming RPCs blocked forever in the data
event loop because ProcessHandle retains a broadcast sender (needed for
reconnection via connect()), preventing the channel from closing.

Race data_rx against end_rx with tokio::select! so the stream terminates
when the process exits. Remaining buffered data is drained before
yielding the end event.
2026-05-09 16:36:33 +06:00
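
The fix is described against the Rust envd (`tokio::select!`), but the shape translates directly to Go channels; a purely illustrative sketch:

```go
package procstream

// streamUntilExit forwards buffered output until the process exits, then
// drains what remains before emitting the end event, matching the
// tokio::select! loop the commit describes.
func streamUntilExit(dataCh <-chan []byte, endCh <-chan int, emit func(interface{})) {
	for {
		select {
		case chunk := <-dataCh:
			emit(chunk)
		case code := <-endCh:
			for { // drain remaining buffered data first
				select {
				case chunk := <-dataCh:
					emit(chunk)
				default:
					emit(code) // then yield the end event and terminate
					return
				}
			}
		}
	}
}
```
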
2af8412cdc fix: use RwLock for envd Defaults to fix silent mutation loss
The /init handler's default_user mutation cloned the Defaults struct,
mutated the clone, then dropped it — the actual state was never updated.
This caused processes to always run as "root" regardless of the user
set via POST /init. Additionally, default_workdir was accepted in the
init request but never applied.

Wrap user and workdir fields in RwLock with accessor methods so mutations
propagate correctly through the shared AppState.
2026-05-09 15:28:09 +06:00
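
envd is Rust, so the real fix uses `RwLock` accessors; the same idea rendered as a Go sketch, with field names assumed:

```go
package state

import "sync"

// Defaults guards the mutable fields behind a lock with accessor methods,
// so every mutation lands on the shared instance instead of a copy.
type Defaults struct {
	mu      sync.RWMutex
	user    string
	workdir string
}

func (d *Defaults) SetUser(u string)    { d.mu.Lock(); d.user = u; d.mu.Unlock() }
func (d *Defaults) SetWorkdir(w string) { d.mu.Lock(); d.workdir = w; d.mu.Unlock() }

func (d *Defaults) User() string {
	d.mu.RLock()
	defer d.mu.RUnlock()
	return d.user
}

func (d *Defaults) Workdir() string {
	d.mu.RLock()
	defer d.mu.RUnlock()
	return d.workdir
}
```
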
c93ad5e2db fix: harden pause flow with connection isolation and UFFD event handling
Restructure pause to: block new operations (StatusPausing), drain proxy
connections with 5s grace, force-close remaining via context cancellation,
drop page cache, inflate balloon, then freeze vCPUs. Previously connections
could arrive during the pause window and API operations weren't blocked.

Handle UFFD_EVENT_REMOVE/UNMAP/REMAP/FORK gracefully instead of crashing
the UFFD server. These events fire during balloon deflation on snapshot
restore, killing the page fault handler and preventing VM boot.

Also adds ConnTracker.ForceClose() with cancellable context propagated
through the proxy handler, so lingering proxy connections are actively
terminated rather than left dangling.
2026-05-09 14:51:19 +06:00
38799770db fix: inflate balloon before snapshot to reduce memfile size
Firecracker dumps the entire VM memory region regardless of guest
usage. A 20GB VM using 500MB still produces a ~20GB memfile because
freed pages retain stale data (non-zero blocks).

Inflate the balloon device before snapshot to reclaim free guest
memory. Balloon pages become zero from FC's perspective, allowing
ProcessMemfile to skip them. This reduces memfile size from ~20GB
to ~1-2GB for lightly-used VMs.

- Pause: read guest memory usage, inflate balloon to reclaim free
  pages, wait 2s for guest kernel to process, then proceed
- Resume: deflate balloon to 0 after PostInit so guest gets full
  memory back
- createFromSnapshot: same deflation since template snapshots
  inherit inflated balloon state
- All balloon ops are best-effort with debug logging on failure
2026-05-05 15:38:04 +06:00
51b5d7b3ba fix: resolve pause/snapshot failures and CoW exhaustion on large VMs
Remove hard 10s timeout from Firecracker HTTP client — callers already
pass context.Context with appropriate deadlines, and 20GB+ memfile
writes easily exceed 10s.

Ensure CoW file is at least as large as the origin rootfs. Previously,
WRENN_DEFAULT_ROOTFS_SIZE=30Gi expanded the base image to 30GB but the
default 5GB CoW could not hold all writes, causing dm-snapshot
invalidation and EIO on all guest I/O.

Destroy frozen VMs in resumeOnError instead of leaving zombies that
report "running" but can't execute. Use fresh context for the resume
attempt so a cancelled caller context doesn't falsely trigger destroy.

Increase CP→Agent ResponseHeaderTimeout from 45s to 5min and
PrepareSnapshot timeout from 3s to 30s for large-memory VMs.

After failed pause, ping agent to detect destroyed sandboxes and mark
DB status as "error" instead of reverting to "running".
2026-05-04 01:46:57 +06:00
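
The timeout change amounts to leaving `http.Client.Timeout` unset and letting each caller's context carry an operation-sized deadline; a hedged sketch (URL and duration illustrative):

```go
package vmm

import (
	"context"
	"io"
	"net/http"
	"time"
)

// doSnapshot shows the pattern the commit moves to: the http.Client has
// no Timeout set, and the caller's context carries a deadline sized for
// the operation (a 20GB+ memfile write can take minutes).
func doSnapshot(client *http.Client, url string, body io.Reader) error {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	req, err := http.NewRequestWithContext(ctx, http.MethodPut, url, body)
	if err != nil {
		return err
	}
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}
```
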
fd5fa28205 Merge pull request 'Enhanced frontend ux' (#42) from enhance/frontend into dev
Reviewed-on: wrenn/wrenn#42
2026-05-03 11:08:48 +00:00
1244c08e42 fix: fetch sandbox metrics immediately on page load
Metrics data was only fetched after Chart.js dynamic import completed,
leaving graphs empty until the first poll interval fired. Now
loadMetrics() runs in parallel with the Chart.js import, and
initCharts() resets the dedup key so pre-fetched data populates
newly created chart instances.
2026-05-03 16:43:26 +06:00
021d709de2 feat: show template owner and restrict delete in admin panel
Add Owner column to admin templates table, resolving team IDs to names
via admin teams API. Disable delete for non-platform templates and the
minimal template, with contextual tooltips explaining why.
2026-05-03 15:51:20 +06:00
cac6fcd626 feat: admin grant/revoke from admin panel
Add PUT /v1/admin/users/{id}/admin endpoint and frontend UI for
granting and revoking platform admin status. Uses atomic conditional
SQL (RevokeUserAdmin) to prevent race conditions that could remove
the last admin. Includes idempotency check, audit logging, and
confirmation dialog with self-demotion warning.
2026-05-03 15:24:34 +06:00
4954b19d7c fix: merge capsule data in-place to prevent visual refresh on poll
Replaces full array assignment with granular merge that reuses existing
Svelte proxy objects, so only rows with actual data changes re-render.
2026-05-03 15:09:21 +06:00
01819642cc fix: drop page cache before snapshot to reduce memory dump size
Linux keeps freed memory as page cache, which Firecracker snapshots
as non-zero blocks. A 16GB VM with 12GB stale cache would write all
12GB to disk. Dropping pagecache (not dentries/inodes) in
/snapshot/prepare before blocking the reclaimer shrinks snapshots
to actual working set size with minimal resume latency impact.
2026-05-03 14:27:49 +06:00
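
Dropping only the page cache corresponds to writing "1" to `/proc/sys/vm/drop_caches` (2 = dentries/inodes, 3 = both), after syncing so dirty pages are flushed first. A sketch of the guest-side step, written in Go for consistency with these notes:

```go
package snapshot

import (
	"os"
	"syscall"
)

// dropPageCache flushes dirty pages, then asks the kernel to discard
// clean page cache. "1" drops pagecache only; "2" would drop dentries
// and inodes, "3" both.
func dropPageCache() error {
	syscall.Sync() // drop_caches discards only clean pages, so flush first
	return os.WriteFile("/proc/sys/vm/drop_caches", []byte("1"), 0o200)
}
```
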
cb28f7759d Merge pull request 'fix: accurate sandbox metrics and memory management' (#41) from bugfix/sandbox-metrics-calculations into dev
Reviewed-on: wrenn/wrenn#41
2026-05-03 06:41:41 +00:00
1178ab8b21 fix: accurate sandbox metrics and memory management
Three issues fixed:

1. Memory metrics read host-side VmRSS of the Firecracker process,
   which includes guest page cache and never decreases. Replaced
   readMemRSS(fcPID) with readEnvdMemUsed(client) that queries
   envd's /metrics endpoint for guest-side total - MemAvailable.
   This matches neofetch and reflects actual process memory.

2. Added Firecracker balloon device (deflate_on_oom, 5s stats) and
   envd-side periodic page cache reclaimer (drop_caches when >80%
   used). Reclaimer is gated by snapshot_in_progress flag with
   sync() before freeze to prevent memory corruption during pause.

3. Sampling interval 500ms → 1s, ring buffer capacities adjusted
   to maintain same time windows. Reduces per-host HTTP load from
   240 calls/sec to 120 calls/sec at 120 capsules.

Also: maxDiffGenerations 8 → 1 (merge every re-pause since UFFD
lazy-loads anyway), envd mem_used formula uses total - available.
2026-05-03 12:19:01 +06:00
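
The guest-side formula in item 1 is used = MemTotal - MemAvailable as read inside the VM; a Go sketch of what the guest counterpart of `readEnvdMemUsed` computes, with parsing details assumed:

```go
package metrics

import (
	"bufio"
	"os"
	"strconv"
	"strings"
)

// guestMemUsed computes MemTotal - MemAvailable from /proc/meminfo, the
// guest-side figure the commit switches to. /proc/meminfo reports kB.
func guestMemUsed() (uint64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()
	var total, avail uint64
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 2 {
			continue
		}
		v, _ := strconv.ParseUint(fields[1], 10, 64)
		switch fields[0] {
		case "MemTotal:":
			total = v
		case "MemAvailable:":
			avail = v
		}
	}
	return (total - avail) * 1024, sc.Err() // bytes
}
```
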
233e747d5d Merge branch 'main' of git.omukk.dev:wrenn/wrenn into dev 2026-05-03 04:56:14 +06:00
f5a23c1fa0 v0.1.5 (#40)
Reviewed-on: wrenn/wrenn#40
2026-05-02 22:56:00 +00:00
20a228eb8d Merge pull request 'Rewritten envd with rust to improve reliability during pause and resume operations' (#39) from feat/envd-rewrite into dev
Reviewed-on: wrenn/wrenn#39
2026-05-02 22:49:36 +00:00
ef5f223863 fix: improve error feedback for terminal disconnects and host unavailability
Show "[session disconnected]" in terminal when PTY websocket closes cleanly.
Map scheduler and agent unavailability errors to 503 with user-friendly
message instead of leaking internal details.
2026-05-03 04:47:10 +06:00
31456fd169 fix: resolve PTY failure, MMDS file writes, and metrics instability in envd-rs
Three bugs fixed:

1. PTY connections failed because home directory was hardcoded as
   /home/{username} instead of reading from /etc/passwd. For root,
   this produced /home/root/ which doesn't exist — CWD validation
   rejected every PTY Start request without explicit cwd. Fixed all
   6 locations to use user.dir from nix::unistd::User.

2. MMDS polling silently failed to parse metadata because the
   logs_collector_address field lacked #[serde(default)]. The host
   agent only sends instanceID + envID — missing "address" field
   caused every deserialize attempt to fail, so .WRENN_SANDBOX_ID
   and .WRENN_TEMPLATE_ID were never written. Also added error
   logging and create_dir_all before file writes.

3. Metrics CPU values were non-deterministic because a fresh
   sysinfo::System was created per request with a 100ms sleep
   between reads. Replaced with a background thread that samples
   CPU at fixed 1-second intervals via a persistent System instance,
   matching gopsutil's internal caching behavior. Metrics endpoint
   now reads cached atomic values — no blocking, consistent window.

Also: close master PTY fd in child pre_exec, add process.Start
request logging, bump version to 0.2.0.
2026-05-03 04:28:10 +06:00
bbcde17d49 Updated static link check for envd 2026-05-03 03:32:41 +06:00
f328113a2a rename guest hostname from "sandbox" to "capsule"
Terminal prompt inside VMs now shows root@capsule instead of
root@sandbox, aligning with user-facing "capsule" terminology.
2026-05-03 03:32:03 +06:00
1143acd37a refactor: remove Go envd module, update host agent for Rust envd
The Go envd guest agent (`envd/`) is fully replaced by the Rust
implementation (`envd-rs/`). This commit removes the Go module and
updates all references across the codebase.

Makefile: remove ENVD_DIR, VERSION_ENVD, build-envd-go, dev-envd-go,
and Go envd from proto/fmt/vet/tidy/clean targets. Add static-link
verification to build-envd.

Host agent: rewrite snapshot quiesce comments that referenced Go GC
and page allocator corruption — no longer applicable with Rust envd.
Tighten envdclient to expect HTTP 200 (not 204) from health and file
upload endpoints, and require JSON version response from FetchVersion.

Remove NOTICE (no e2b-derived code remains). Update CLAUDE.md and
README.md to reflect Rust envd architecture.
2026-05-03 03:12:25 +06:00
0b53d34417 feat: rewrite envd guest agent in Rust (envd-rs)
Complete Rust rewrite of the Go envd guest daemon that runs as PID 1
inside Firecracker microVMs. Feature-complete across all 8 phases:

- Health, metrics, and env var endpoints
- Crypto (SHA-256/512, HMAC), auth (secure token, signing), init/snapshot
- Connect RPC via connectrpc + buffa (process + filesystem services)
- File transfer (GET/POST /files) with gzip, multipart, chown, ENOSPC
- Port subsystem (/proc/net/tcp scanner, socat forwarder)
- Cgroup2 manager with noop fallback
- Snapshot/restore lifecycle (conntracker, port subsystem stop/restart)
- SIGTERM graceful shutdown, --cmd initial process spawn
- MMDS metadata polling for Firecracker mode

42 source files, ~4200 LOC, 4.1MB stripped release binary.
Makefile updated: build-envd now targets Rust (musl static),
build-envd-go preserved for Go builds.
2026-05-03 02:47:15 +06:00
3deecbff89 fix: prevent Go runtime memory corruption and sandbox halt after snapshot restore
Three root causes addressed:

1. Go page allocator corruption: allocations between the pre-snapshot GC
   and VM freeze leave the summary tree inconsistent. After restore, GC
   reads corrupted metadata — either panicking (killing PID 1 → kernel
   panic) or silently failing to collect, causing unbounded heap growth
   until OOM. Fix: move GC to after all HTTP allocations in
   PostSnapshotPrepare, then set GOMAXPROCS(1) so any remaining
   allocations run sequentially with no concurrent page allocator access.
   GOMAXPROCS is restored on first health check after restore.

2. PostInit timeout starvation: WaitUntilReady and PostInit shared a
   single 30s context. If WaitUntilReady consumed most of it, PostInit
   failed — RestoreAfterSnapshot never ran, leaving envd with keep-alives
   disabled and zombie connections. Fix: separate timeout contexts.

3. CP HTTP server missing timeouts: no ReadHeaderTimeout or IdleTimeout
   caused goroutine leaks from hung proxy connections. Fix: add both,
   matching host agent values.

Also adds UFFD prefetch to proactively load all guest pages after restore,
eliminating on-demand page fault latency for subsequent RPC calls.
2026-05-02 17:22:51 +06:00
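
Item 2's fix is the standard fresh-deadline-per-step pattern; a minimal sketch, with the 30s figure taken from the message and everything else assumed:

```go
package host

import (
	"context"
	"time"
)

// restore gives each step its own 30s budget derived from the same
// parent, instead of one shared deadline that the first step can exhaust.
func restore(parent context.Context, waitReady, postInit func(context.Context) error) error {
	waitCtx, cancelWait := context.WithTimeout(parent, 30*time.Second)
	err := waitReady(waitCtx)
	cancelWait()
	if err != nil {
		return err
	}
	initCtx, cancelInit := context.WithTimeout(parent, 30*time.Second)
	defer cancelInit()
	return postInit(initCtx)
}
```
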
bb582deefa fix: prevent sandbox halt after resume by fixing HTTP/2 HOL blocking and adding timeouts
Disable HTTP/2 on both host agent server and CP→agent transport — multiplexing
caused head-of-line blocking when a slow sandbox RPC stalled the shared connection.
Add ResponseHeaderTimeout to envd HTTP clients. Merge SetDefaults into Resume's
PostInit call to eliminate an extra round-trip that could hang on a stale connection.
2026-05-02 13:48:51 +06:00
7ef9a64613 fix: close stale TCP connections across snapshot/restore to prevent envd hangs
After Firecracker snapshot restore, zombie TCP sockets from the previous
session cause Go runtime corruption inside the guest VM, making envd
unresponsive. This manifests as infinite loading in the file browser and
terminal timeouts (524) in production (HTTP/2 + Cloudflare) but not locally.

Four-part fix:
- Add ServerConnTracker to envd that tracks connections via ConnState callback,
  closes idle connections and disables keep-alives before snapshot, then closes
  all pre-snapshot zombie connections on restore (while preserving post-restore
  connections like the /init request)
- Split envdclient into timeout (2min) and streaming (no timeout) HTTP clients;
  use streaming client for file transfers and process RPCs
- Close host-side idle envdclient connections before PrepareSnapshot so FIN
  packets propagate during the 3s quiesce window
- Add StreamingHTTPClient() accessor; streaming file transfer handlers in
  hostagent use it instead of the timeout client
2026-05-02 05:19:37 +06:00
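
A ConnState-based tracker like the `ServerConnTracker` described above can be sketched with Go's `http.Server` hook; idle-only filtering and the pre/post-snapshot cutoff are elided:

```go
package envd

import (
	"net"
	"net/http"
	"sync"
)

// ServerConnTracker records live server connections via the ConnState
// hook so they can be force-closed around snapshot/restore.
type ServerConnTracker struct {
	mu    sync.Mutex
	conns map[net.Conn]struct{}
}

func NewTracker() *ServerConnTracker {
	return &ServerConnTracker{conns: make(map[net.Conn]struct{})}
}

func (t *ServerConnTracker) ConnState(c net.Conn, s http.ConnState) {
	t.mu.Lock()
	defer t.mu.Unlock()
	switch s {
	case http.StateNew:
		t.conns[c] = struct{}{}
	case http.StateClosed, http.StateHijacked:
		delete(t.conns, c)
	}
}

// CloseAll terminates every tracked connection, e.g. pre-snapshot zombies.
func (t *ServerConnTracker) CloseAll() {
	t.mu.Lock()
	defer t.mu.Unlock()
	for c := range t.conns {
		c.Close()
		delete(t.conns, c)
	}
}

// Wiring: srv := &http.Server{Handler: mux, ConnState: tracker.ConnState}
```
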
f3572f7356 Fix empty WRENN_TEMPLATE_ID after resuming paused sandbox
Resume() was building VMConfig without TemplateID, so Firecracker MMDS
received an empty string. envd's PostInit then wrote that empty value to
/run/wrenn/.WRENN_TEMPLATE_ID. Fix by persisting the template ID in
snapshot metadata during Pause and reading it back during Resume.
2026-05-02 04:57:08 +06:00
2e998a26a2 Merge branch 'main' of git.omukk.dev:wrenn/wrenn into dev 2026-05-01 15:01:32 +06:00
4fcc19e91f v0.1.4 (#38)
Reviewed-on: wrenn/wrenn#38
Co-authored-by: pptx704 <rafeed@omukk.dev>
Co-committed-by: pptx704 <rafeed@omukk.dev>
2026-05-01 09:01:08 +00:00
f3ec626d58 Envd version bump 2026-05-01 14:59:37 +06:00
f4733e2f7a Version bump 2026-04-25 04:49:17 +06:00
cdacc12a48 Merge pull request 'Fixed network throttle when an application is running' (#37) from fix/network-throttle-on-load into dev
Reviewed-on: wrenn/wrenn#37
2026-04-24 22:43:31 +00:00
bd98610153 fix: sandbox network responsiveness under port-binding apps
Running port-binding applications (Jupyter, http.server, NextJS) inside
sandboxes caused severe PTY sluggishness and proxy navigation errors.

Root cause: the CP sandbox proxy and Connect RPC pool shared a single
HTTP transport. Heavy proxy traffic (Jupyter WebSocket, REST polling)
interfered with PTY RPC streams via HTTP/2 flow control contention.

Transport isolation (main fix):
- Add dedicated proxy transport on CP (NewProxyTransport) with HTTP/2
  disabled, separate from the RPC pool transport
- Add dedicated proxy transport on host agent, replacing
  http.DefaultTransport
- Add dedicated envdclient transport with tuned connection pooling
- Replace http.DefaultClient in file streaming RPCs with per-sandbox
  envd client

Proxy path rewriting (navigation fix):
- Add ModifyResponse to rewrite Location headers with /proxy/{id}/{port}
  prefix, handling both root-relative and absolute-URL redirects
- Strip prefix back out in CP subdomain proxy for correct browser
  behavior
- Replace path.Join with string concat in CP Director to preserve
  trailing slashes (prevents redirect loops on directory listings)

Proxy resilience:
- Add dial retry with linear backoff (3 attempts) to handle socat
  startup delay when ports are first detected
- Cache ReverseProxy instances per sandbox+port+host in sync.Map
- Add EvictProxy callback wired into sandbox Manager.Destroy

Buffer and server hardening:
- Increase PTY and exec stream channel buffers from 16 to 256
- Add ReadHeaderTimeout (10s) and IdleTimeout (620s) to host agent
  HTTP server

Network tuning:
- Set TAP device TxQueueLen to 5000 (up from default 1000)
- Add Firecracker tx_rate_limiter (200 MB/s sustained, 100 MB burst)
  to prevent guest traffic from saturating the TAP
2026-04-25 04:21:55 +06:00
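
The core of the transport-isolation fix, sketched: a dedicated `http.Transport` with HTTP/2 disabled. Pool sizes here are illustrative, not the repo's values:

```go
package cp

import (
	"crypto/tls"
	"net/http"
	"time"
)

// NewProxyTransport returns a transport reserved for proxy traffic, with
// HTTP/2 disabled so a stalled sandbox stream cannot head-of-line-block
// unrelated requests on a shared multiplexed connection.
func NewProxyTransport() *http.Transport {
	return &http.Transport{
		ForceAttemptHTTP2: false,
		// A non-nil empty TLSNextProto map disables the HTTP/2 upgrade.
		TLSNextProto:        map[string]func(string, *tls.Conn) http.RoundTripper{},
		MaxIdleConnsPerHost: 32,
		IdleConnTimeout:     90 * time.Second,
	}
}
```
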
5e13879954 fix: OAuth ConnectProvider state HMAC format mismatch
ConnectProvider computed HMAC over bare state, but Callback always
verifies HMAC(state+":"+intent). This caused the account-linking
flow to always fail with invalid_state.
2026-04-25 02:00:39 +06:00
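
The fix means both sides compute the MAC over state + ":" + intent; a minimal sketch with key management elided:

```go
package oauth

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
)

// signState MACs the exact string Callback verifies: state + ":" + intent.
func signState(key []byte, state, intent string) string {
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(state + ":" + intent))
	return hex.EncodeToString(mac.Sum(nil))
}

// verifyState recomputes the MAC and compares in constant time.
func verifyState(key []byte, state, intent, sig string) bool {
	want, err := hex.DecodeString(sig)
	if err != nil {
		return false
	}
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(state + ":" + intent))
	return hmac.Equal(mac.Sum(nil), want)
}
```
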
339cd7bee1 fix: security and stability fixes from code review
- Scope WebSocket auth bypass to only WS endpoints by restructuring
  routes into separate chi Groups. Non-WS routes no longer passthrough
  unauthenticated requests with spoofed Upgrade headers. Added
  optionalAPIKeyOrJWT middleware for WS routes (injects auth context
  from API key/JWT if present, passes through otherwise) and
  markAdminWS middleware for admin WS routes.

- Fix nil pointer dereference in envd Handler.Wait() — p.tty.Close()
  was called unconditionally but p.tty is nil for non-PTY processes,
  crashing every non-PTY process exit.

- Fix goroutine leak in sandbox Pause — stopSampler was never called,
  leaking one sampler goroutine per successful pause operation.

- Decouple PTY WebSocket reads from RPC dispatch using a buffered
  channel to prevent backpressure-induced connection drops under fast
  typing. Includes input coalescing to reduce RPC call volume.
2026-04-24 15:48:38 +06:00
153a54fdcd Merge branch 'main' of git.omukk.dev:wrenn/wrenn into dev 2026-04-21 16:11:59 +06:00
c3afd0c8a0 Merge pull request 'Audit logging, Data anonymization, and OAuth flow improvements' (#35) from feat/compliance into dev
Reviewed-on: wrenn/wrenn#35
2026-04-21 10:09:37 +00:00
11928a172a feat: send email notification on account hard-delete
Notify users via email when their account is permanently deleted after
the 15-day soft-delete grace period. Query now returns email alongside
user ID so the notification can be sent after deletion.

Email failure is logged as a warning but does not block cleanup.
2026-04-21 16:01:56 +06:00
bb2146d838 refactor: deduplicate audit logger with shared entry builders
Replace repetitive actorFields + write boilerplate across all 25+ typed
Log methods with shared helpers: newEntry (general), newAdminEntry
(platform-level), resolveHostTeamID, and logSystemHostEvent.

Reduces logger.go from 665 to 374 lines with no behavior change.
2026-04-21 15:54:39 +06:00
d270ab7752 Version bump 2026-04-21 15:54:04 +06:00
7fd801c1eb feat: add audit logging for all admin actions and admin audit page
Log every admin-panel action (user activate/deactivate, team BYOC toggle,
team delete, template delete, build create/cancel) to the audit_logs table
under PlatformTeamID with scope "admin".

Add GET /v1/admin/audit-logs endpoint and /admin/audit frontend page with
infinite scroll and hierarchical filters. Expose audit.Entry + Log() for
cloud repo extensibility.

Fix seed_platform_team down-migration FK violation by deleting dependent
rows before the team row.
2026-04-21 15:41:45 +06:00
edec170652 fix: remove accent gradient bars from admin host dialogs
Normalize admin host page dialogs to match design system pattern:
border + shadow only, no colored gradient strips. Align animation
timing and shadow to reference components (DestroyDialog, etc).
2026-04-21 15:02:09 +06:00
684c98b0fa fix: admin capsule create audit log uses PlatformTeamID
POST /v1/admin/capsules was outside the injectPlatformTeam middleware
subrouter, so audit entries landed under the admin's personal team.
2026-04-21 14:54:52 +06:00
ebbbde9cd1 feat: anonymize audit logs on user hard-delete and fix host audit log team assignment
Anonymize audit logs when soft-deleted users are purged after 15 days:
actor_name set to 'deleted-user', actor_id and resource_id nulled,
email stripped from member metadata. Per-user delete ensures no user
is removed without successful anonymization.

Frontend renders deleted-user as a styled red badge in audit log view.

Fix shared host create/delete audit logs landing in admin's personal
team — now correctly assigned to PlatformTeamID.
2026-04-21 14:42:09 +06:00
6a6b489471 feat: separate GitHub OAuth login/signup flows with name confirmation
Block auto-account creation when signing in via GitHub from login mode.
Signup via GitHub now shows a name confirmation dialog before redirecting
to dashboard, letting users verify/edit their display name pulled from
GitHub.

- Add intent query param to OAuth redirect, persisted in HMAC-signed state cookie
- Block registration in callback when intent=login, return no_account error
- Set wrenn_oauth_new_signup cookie on new account creation
- Frontend callback shows name confirmation dialog for new signups
- Add no_account error message to login page
2026-04-21 11:03:12 +06:00
dbc6030c17 Merge branch 'main' of git.omukk.dev:wrenn/wrenn into dev 2026-04-21 10:09:36 +06:00
9ee6e3e1a8 Merge pull request 'Feat: Added daily usage page' (#34) from feat/usage into dev
Reviewed-on: wrenn/wrenn#34
2026-04-18 08:54:04 +00:00
aa96557d1c Clean up dashboard page headers for consistency
Remove unnecessary wrapper divs around h1/subtitle pairs in audit,
channels, settings, and templates pages. Drop inline count from
channels header.
2026-04-18 14:47:33 +06:00
47be1143fb Add MiddlewareProvider interface for extension middleware
Allows cloud extensions to inject middleware that wraps OSS routes
(e.g. billing enforcement) before they are registered.
2026-04-18 14:47:29 +06:00
8f8638e6db Bump version to 0.1.2 2026-04-18 14:47:25 +06:00
003453fa3c Normalize usage page layout and clarify copy
Separate summary cards with proper surface hierarchy, add staggered
entrance animations, tighten padding, and rewrite labels/descriptions
to be specific and actionable rather than generic.
2026-04-18 14:46:01 +06:00
92aab09104 Add daily usage metrics (CPU-minutes, RAM GB-minutes)
Introduce pre-computed daily usage rollups from sandbox_metrics_snapshots.
An hourly background worker aggregates completed days, while today's
usage is computed live from snapshots at query time for freshness.

Backend: new daily_usage table, rollup worker, UsageService, and
GET /v1/capsules/usage endpoint with date range filtering (up to 92 days).

Frontend: replace Usage page placeholder with bar charts (Chart.js),
summary total cards, and preset/custom date range controls.
2026-04-18 14:29:09 +06:00
e7670e4449 Merge branch 'main' of git.omukk.dev:wrenn/wrenn into dev 2026-04-17 16:41:08 +06:00
955aa09780 Merge branch 'main' of git.omukk.dev:wrenn/wrenn into dev 2026-04-17 01:24:52 +06:00
ce452c3d11 Merge pull request 'Improved codebase to prepare for production' (#32) from chore/hardening into dev
Reviewed-on: wrenn/wrenn#32
2026-04-16 13:00:06 +00:00
ab034062d3 Merge branch 'dev' into chore/hardening 2026-04-16 12:58:48 +00:00
24f904fa74 Add +page.js to disable prerendering for admin capsule detail page 2026-04-16 18:38:03 +06:00
cc63ed2197 Minor patch 2026-04-16 18:14:50 +06:00
9c4fea93bc Added host preparation script and updated CLAUDE.md 2026-04-16 16:56:04 +06:00
977c3a466a Shrink minimal rootfs on graceful host agent shutdown
On startup EnsureImageSizes expands the minimal rootfs to the configured
disk size. This adds the inverse: ShrinkMinimalImage runs e2fsck + resize2fs -M
during graceful shutdown so the image is stored compactly on disk.
2026-04-16 16:26:50 +06:00
e6e3975426 Add unauthenticated /health endpoint to control plane
Returns JSON with status and build version for monitoring and
load balancer health checks.
2026-04-16 16:13:42 +06:00
bba5f80294 Add production file logging with logrotate support
Both control plane and host agent now write structured slog output to
$WRENN_DIR/logs/ in addition to stderr. Log level is configurable via
LOG_LEVEL env var (default: info). SIGHUP reopens the log file so
logrotate can rotate without copytruncate.
2026-04-16 15:09:26 +06:00
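
The SIGHUP-reopen behavior is a small, standard pattern; a sketch with a hypothetical `swap` callback standing in for however the logger swaps writers:

```go
package logging

import (
	"os"
	"os/signal"
	"syscall"
)

// reopenOnHUP reopens the log file when logrotate sends SIGHUP, so the
// process writes to the fresh file instead of the rotated one.
func reopenOnHUP(path string, swap func(*os.File)) {
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGHUP)
	go func() {
		for range ch {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
			if err != nil {
				continue // keep writing to the old file on failure
			}
			swap(f)
		}
	}()
}
```
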
44c32587e3 Cap network slot allocator at 32767 to match veth IP space
The veth addressing uses 10.12.0.0/16 with 2 IPs per slot. At slot
index 32768, vethOffset=65536 overflows byte arithmetic and wraps back
to 10.12.0.0, causing silent IP collisions with existing sandboxes.
Cap the allocator at 32767, which is the actual addressable limit.
2026-04-16 14:57:44 +06:00
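
The arithmetic behind the cap: with 2 IPs per slot in a /16, offsets run 0 through 65535, so slot 32767 (offsets 65534/65535) is the last addressable one. A sketch, with the exact base/offset scheme inferred from the message:

```go
package network

import (
	"fmt"
	"net"
)

// vethIPs maps a slot index to its two addresses in 10.12.0.0/16. At
// slot 32768 the offset is 65536, whose high byte truncates and wraps
// back onto 10.12.0.0, which is why the allocator is capped at 32767.
func vethIPs(slot int) (net.IP, net.IP, error) {
	if slot < 0 || slot > 32767 {
		return nil, nil, fmt.Errorf("slot %d outside addressable range", slot)
	}
	off := slot * 2
	a := net.IPv4(10, 12, byte(off>>8), byte(off))
	b := net.IPv4(10, 12, byte((off+1)>>8), byte(off+1))
	return a, b, nil
}
```
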
b9aa444472 Merge pull request 'Bug fixes and optimizations' (#31) from fix/optimizations into dev
Reviewed-on: wrenn/wrenn#31
2026-04-16 00:39:47 +00:00
fb4b67adb3 Destroy owned sandboxes on user disable and fix OAuth login resilience
When an admin disables a user, all active sandboxes (running, paused,
hibernated) for teams they own are now destroyed and their API keys
are deleted. User queries now filter by status column instead of
deleted_at, so re-enabling a user always works. OAuth login paths
use ensureDefaultTeam to auto-create a team if the user has none,
matching the email/password login behavior.
2026-04-16 06:37:51 +06:00
9ea847923c Fix concurrency, security, and correctness issues across backend and frontend
- C1: Add sync.RWMutex to vm.Manager to protect concurrent vms map access
- H1: Fix IP arithmetic overflow in network slot addressing (byte truncation)
- H5: Fix MultiplexedChannel.Fork() TOCTOU race (move exited check inside lock)
- H8: Remove snapshot overwrite — return template_name_taken conflict instead
- H9: Wrap DeleteAccount DB ops in a transaction, make team deletion fatal
- H10: Sanitize serviceErrToHTTP to stop leaking internal error messages
- H11: Add deleted_at IS NULL to GetUserByEmail/GetUserByID queries
- H12: Add id DESC to audit log composite index for cursor pagination
- H15: Delete dead AuthModal.svelte component
- H17: Move JWT from WebSocket URL query param to first WS message
- H18: Fix $derived to $derived.by in FilesTab breadcrumbs
2026-04-16 06:11:42 +06:00
ed2222c80c Move sidebar into layout files and fix timer cleanup across frontend
Sidebar and AdminSidebar were re-instantiated on every page navigation
(17 pages total), causing unnecessary DOM teardown/rebuild and redundant
localStorage reads. Now each lives in its respective +layout.svelte as a
single persistent instance.

Also adds onDestroy cleanup for leaked timers (settings, team, login RAF
loop) and CSS containment on <main> to isolate layout recalculations.
2026-04-16 05:34:47 +06:00
e91109d69c Fix API key cleanup on user deactivation and build archive race condition
Delete all API keys created by a user when their account is disabled,
deleted, or soft-deleted. Store build archives before enqueuing to Redis
so workers never dequeue a build with missing files.
2026-04-16 05:29:02 +06:00
451d0819cc Merge pull request 'Added settings for users and proper email flow for authentication' (#30) from feat/user-onboarding into dev
Reviewed-on: wrenn/wrenn#30
2026-04-15 22:45:30 +00:00
084c6caa7d Redirect authenticated users away from login page 2026-04-16 04:30:25 +06:00
43e838c55c Fix cascading deletion gaps for user and team cleanup
- Add ON DELETE CASCADE to users_teams, oauth_providers, admin_permissions
  and ON DELETE SET NULL (with nullable columns) to team_api_keys.created_by,
  hosts.created_by, host_tokens.created_by so HardDeleteExpiredUsers no longer
  fails with FK violations
- User account deletion now cascades to sole-owned teams via DeleteTeamInternal,
  preventing orphaned teams with live sandboxes after account removal
- ListActiveSandboxesByTeam now includes hibernated sandboxes so their disk
  snapshots are cleaned up during team deletion
- Team soft-delete now hard-deletes sandbox metric points, metric snapshots,
  API keys, and channels to prevent data accumulation on deleted teams
- Extract deleteTeamCore() to deduplicate shared logic across DeleteTeam,
  AdminDeleteTeam, and DeleteTeamInternal
- Fix ListAPIKeysByTeamWithCreator to use LEFT JOIN after created_by became
  nullable, and update handler to read pgtype.Text.String for creator_email
2026-04-16 04:26:48 +06:00
e1b23f3d79 Updated CLAUDE.md with better design 2026-04-16 04:22:30 +06:00
a3f75300a9 Add email activation flow and replace is_active with status column
Email signup now creates inactive users who must activate via a 30-minute
email token before signing in. Team creation is deferred to first login
after activation, while OAuth users continue to get teams immediately.

- Replace boolean is_active with status column (inactive/active/disabled/deleted)
- Add POST /v1/auth/activate endpoint with Redis-backed token consumption
- Signup returns message instead of JWT, sends activation email
- Login differentiates error messages by user status
- Add confirm password field to signup form
- Add /activate frontend page that auto-logs in on success
- Handle inactive user cleanup on re-signup (30-min cooldown) and OAuth collision
2026-04-16 04:05:41 +06:00
e8a2217247 Add settings page, forgot/reset password flows, and me API client
Adds /dashboard/settings route with profile/password/OAuth/account-deletion
management. Adds /forgot-password and /reset-password routes. Enables sidebar
settings link. Adds typed me.ts API client.
2026-04-16 03:25:03 +06:00
93e6fe8160 Add Wrenn wordmark to email template and improve spacing 2026-04-16 03:24:59 +06:00
f69fa8cded Add /v1/me account management endpoints
Adds self-service endpoints: GET/PATCH/DELETE /v1/me, POST /v1/me/password,
POST /v1/me/password/reset{/confirm}, GET/DELETE /v1/me/providers/{provider}.
Includes OAuth account-linking flow via cookie, hard-delete cleanup goroutine
(24h ticker, 15-day grace period), and OpenAPI spec for all new routes.
2026-04-16 03:24:55 +06:00
bc8348b199 Add DB queries for account self-service
New queries: UpdateUserPassword, SoftDeleteUser, HardDeleteExpiredUsers,
CountUserOwnedTeamsWithOtherMembers, GetOAuthProvidersByUserID, DeleteOAuthProvider.
2026-04-16 03:24:42 +06:00
81715947bb Updated CLAUDE.md 2026-04-16 02:08:03 +06:00
d705f83b68 Removed unnecessary files and renamed minimal update script 2026-04-16 02:06:39 +06:00
2f0e7fcdc2 Merge pull request 'Added transactional email sending' (#29) from feat/email-transaction into dev
Reviewed-on: wrenn/wrenn#29
2026-04-15 18:56:57 +00:00
970ae2b6b2 Updated email template for optional name 2026-04-16 00:54:38 +06:00
ded9c15f06 minor changes 2026-04-16 00:54:20 +06:00
9d68eb5f00 Add transactional email system via SMTP
Introduce internal/email package with SMTP sending, embedded HTML/text
templates, and multipart MIME assembly. Emails use a generic EmailData
struct (recipient name, message, optional button, optional closing) so
new email types can be added without code changes.

Wired into signup (welcome email), team creation, and team member
addition. No-op mailer when SMTP_HOST is not configured.
2026-04-16 00:46:08 +06:00
700512b627 Updated letter-spacing 2026-04-15 22:38:19 +06:00
d1975089f1 Merge pull request 'Added metadata tracking for binaries and refactored to maintain a separate cloud version' (#28) from feat/meta-versioning into dev
Reviewed-on: wrenn/wrenn#28
2026-04-15 15:44:20 +00:00
a5ad3731f2 Refactored to maintain a separate cloud version
Moves 12 packages from internal/ to pkg/ (config, id, validate, events, db,
auth, lifecycle, scheduler, channels, audit, service) so they can be imported
by the enterprise repo as a Go module dependency.

Introduces pkg/cpextension (shared Extension interface + ServerContext) and
pkg/cpserver (Run() entrypoint with functional options) so the enterprise
main.go can call cpserver.Run(cpserver.WithExtensions(...)) without duplicating
the 20-step server bootstrap. Adds db/migrations/embed.go for go:embed access
to OSS SQL migrations from the enterprise module.

cmd/control-plane/main.go is reduced to a 10-line wrapper around cpserver.Run.
2026-04-15 21:41:48 +06:00
11d746dcfc Merge pull request 'Fixed issues with code interpreter' (#27) from fix/code-interpreter into dev
Reviewed-on: wrenn/wrenn#27
2026-04-15 12:56:18 +00:00
5f877afb9e Remove PTY inactivity timeout to keep terminal sessions alive indefinitely
Sessions now only end on process exit or explicit kill, not idle time.
The keepalive ping every 30s remains to prevent network-level disconnects.
2026-04-15 18:31:48 +06:00
5b4fde055c Fix build recipe execution and flatten reliability
- Set HOME in bctx.EnvVars when USER switches so ~ expands correctly in
  subsequent RUN/WORKDIR steps instead of resolving to /root
- Run /bin/sync inside the guest before FlattenRootfs destroys the VM,
  preventing pip-installed files from being captured as 0-byte due to
  unflushed page cache
- Wrap healthcheck command with su <user> so it runs with the template's
  default user context (correct HOME, correct UID)
- Export Shellescape from the recipe package for use in build service
- Add code-runner-beta recipe (Jupyter server with ipykernel --sys-prefix)
  and replace old python-interpreter-v0-beta
2026-04-15 18:24:54 +06:00
59507d7553 Merge pull request 'Added teams and users pages to admin panel' (#26) from feat/admin-panel into dev
Reviewed-on: wrenn/wrenn#26
2026-04-14 22:00:40 +00:00
a265c15c4d Add admin user management with is_active enforcement
Admin users page at /admin/users with paginated user list showing name,
email, team counts, role, join date, and active status toggle. Inactive
users are blocked from all authenticated endpoints immediately via DB
check in JWT middleware. OAuth login errors now show human-readable
messages on the login page.
2026-04-15 03:58:44 +06:00
d332630267 Add admin teams management page
Admin panel now includes a Teams page with paginated listing of all teams
(including soft-deleted), BYOC enable with confirmation dialog, and team
deletion with active capsule warnings. Shows member count, owner info,
active capsules, and channel count per team.
2026-04-15 03:36:37 +06:00
587f6ed8ad Merge pull request 'Implemented least-loaded host scheduler with bottleneck-first strategy' (#25) from feat/host-scheduler into dev
Reviewed-on: wrenn/wrenn#25
2026-04-14 21:03:25 +00:00
82d281b5b5 Implement least-loaded host scheduler with bottleneck-first strategy
Replace round-robin scheduling with resource-aware host selection that
picks the host with the most headroom at its tightest resource. Extends
the HostScheduler interface with memory/disk params for admission control.
2026-04-15 03:02:29 +06:00
17d5d07b3a Removed unused env vars from env example 2026-04-15 02:19:28 +06:00
71b87020c9 Remove redundant comments from login page glow animation 2026-04-14 04:32:17 +06:00
516890c49a Add background process execution API
Start long-running processes (web servers, daemons) without blocking the
HTTP request. Leverages envd's existing background process support
(context.Background(), List, Connect, SendSignal RPCs) and wires it
through the host agent and control plane layers.

New API surface:
- POST /v1/capsules/{id}/exec with background:true → 202 {pid, tag}
- GET /v1/capsules/{id}/processes → list running processes
- DELETE /v1/capsules/{id}/processes/{selector} → kill by PID or tag
- WS /v1/capsules/{id}/processes/{selector}/stream → reconnect to output

The {selector} param auto-detects: numeric = PID, string = tag.
Tags are auto-generated as "proc-" + 8 hex chars if not provided.
2026-04-14 03:57:01 +06:00
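
The selector rule and default tag format pin down easily; a sketch with hypothetical function names:

```go
package api

import (
	"crypto/rand"
	"encoding/hex"
	"strconv"
)

// parseSelector applies the auto-detection rule: an all-numeric selector
// is a PID, anything else is treated as a tag.
func parseSelector(s string) (pid int, tag string, isPID bool) {
	if n, err := strconv.Atoi(s); err == nil {
		return n, "", true
	}
	return 0, s, false
}

// newTag produces the default "proc-" + 8 hex chars form.
func newTag() (string, error) {
	b := make([]byte, 4)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return "proc-" + hex.EncodeToString(b), nil
}
```
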
962860ba74 Pre-pause snapshot signal to prevent Go runtime crash on restore
envd crashes with "fatal error: bad summary data" after Firecracker
snapshot/restore because the page allocator radix tree is inconsistent
when vCPUs are frozen mid-allocation. The port scanner goroutine
allocates heavily every second, making it the primary trigger.

Add POST /snapshot/prepare to envd — the host agent calls it before
vm.Pause to quiesce continuous goroutines and force GC. On restore,
PostInit restarts the port subsystem via the existing /init endpoint.

- New PortSubsystem abstraction with Start/Stop/Restart lifecycle
- Context-based goroutine cancellation (replaces irreversible channel close)
- Context-aware Signal to prevent scanner/forwarder deadlock
- Fix forwarder goroutine leak (was spinning forever on closed channel)
- Kill socat children on stop to prevent orphans across snapshots
- Fix double cmd.Wait panic (exec.Command instead of CommandContext)
2026-04-13 05:21:10 +06:00
117c46a386 Fix: auto-admin didn't work for OAuth users 2026-04-13 05:00:37 +06:00
d828a6be08 Normalize dashboard page headers: add divider line and align button layout
Add consistent mt-6 border-b divider to Capsules, Metrics, and Templates
headers. Align Channels header to match Keys page pattern (items-center,
description inside the title group).
2026-04-13 04:59:40 +06:00
bbdb44afee Merge pull request 'Added manual template building' (#24) from feat/admin-panel into dev
Reviewed-on: wrenn/wrenn#24
2026-04-12 22:44:39 +00:00
784fe5c7a8 Polish admin capsule pages and improve shared components
- Admin list: remove redundant Open button, normalize with dashboard
  patterns (sorting, search highlight, auto-refresh, animations)
- Admin detail: breadcrumb header, status bar, visibility polling
- FilesTab: add treeOnly prop, compact mode uses 2/7 tree + 5/7 preview
  split, expand tree to full width when no file selected, improve copy
- MetricsPanel: hide Live badge in compact layout (redundant with status)
- DestroyDialog: accept destroyFn prop for admin capsule deletion
2026-04-13 04:41:51 +06:00
60c0de670c Extract MetricsPanel component and use it in admin capsule detail page
Moves all Chart.js metrics logic (polling, smoothing, chart init/update)
into a reusable MetricsPanel component with 'full' and 'compact' layout
modes. The admin capsule detail page now reuses MetricsPanel, TerminalTab,
and FilesTab — no duplicated code.
2026-04-13 04:16:53 +06:00
90bea52ccd Add admin capsule management, fix file browser for special files, normalize dialog styles
- Admin capsule CRUD: list, create (platform templates), get detail with
  terminal/files/metrics, snapshot, destroy
- First signup auto-promotes to platform admin
- JWT auth via query param for WebSocket connections
- File browser: handle non-regular files (devices, pipes, sockets) gracefully
  instead of showing raw backend errors
- Normalize admin template dialogs to match established dialog patterns:
  remove accent bars, unify animation/shadow/button styles
2026-04-13 04:12:36 +06:00
f920023ecf Block download for non-regular files in file browser
Disable the download button for symlinks and show a dedicated
preview pane explaining the symlink target and suggesting to
navigate to the target file instead. Guard handleDownload against
non-file types as a safety net.
2026-04-13 02:57:38 +06:00
19ddb1ab8b Normalize dialog styles across capsules and templates pages
Aligned all dialog boxes to a consistent pattern: same shadow
(--shadow-dialog), animation (fadeUp 0.2s ease), button sizing
(py-2, duration-150), and hover effects. Added template type
indicator dot to CreateCapsuleDialog combobox. Removed accent
gradient bars from templates page inline dialogs.
2026-04-13 02:48:58 +06:00
5633957b51 Explicit write when mounting rootfs for updates 2026-04-13 02:38:09 +06:00
eb47e22496 Merge pull request 'Fixed crash on non-regular files and connection leaks' (#23) from hotfix/file-browsing-error-for-dev into dev
Reviewed-on: wrenn/wrenn#23
2026-04-12 20:12:46 +00:00
b1595baa19 Updated env.example 2026-04-13 02:10:43 +06:00
da06ecb97b Fix file browser crash on non-regular files and connection leaks
- envd: reject non-regular files (devices, pipes, sockets) in GetFiles
  to prevent infinite reads from /dev/zero, /dev/urandom etc.
- host agent: add context cancellation check in ReadFileStream loop
  with proper Connect error codes
- frontend: abort in-flight file reads on file switch, directory
  navigation, and component teardown via AbortController
- frontend: guard against abort errors surfacing in UI, use try/finally
  for fileLoading state
2026-04-13 02:09:50 +06:00
0d5007089e Merge pull request 'Updated dependencies and fixed breaking changes' (#22) from fix/dependency-updates into dev
Reviewed-on: wrenn/wrenn#22
2026-04-12 18:26:57 +00:00
0e7b198768 Bump netlink v1.3.1 and netns v0.0.5
Fixes resource leaks in named namespace handlers, adds IFF_RUNNING
flag deserialization and RouteGetWithOptions.
2026-04-13 00:13:40 +06:00
9ad704c12b Update CP listen port to 9725 and public URL to app.wrenn.dev 2026-04-13 00:01:59 +06:00
0189d030bb Bump frontend and Go x/ dependencies
- vite 7→8, @sveltejs/vite-plugin-svelte 6→7, typescript 5→6
- golang.org/x/crypto v0.49→v0.50, golang.org/x/sys v0.42→v0.43 (both modules)
2026-04-13 00:01:53 +06:00
7b853a05ba Update pgx/v5 from v5.8.0 to v5.9.1
Picks up timestamp scan optimizations, ContextWatcher goroutine leak
fix, and stdlib ResetSession connection pool fix.
2026-04-12 22:50:28 +06:00
108b68c3fa Updated gitignore 2026-04-12 22:24:54 +06:00
565817273d Rename API routes /v1/sandboxes → /v1/capsules 2026-04-12 21:51:04 +06:00
ea65fb584c Merge pull request 'Completed template build for admins' (#21) from feat/admin-template-build into dev
Reviewed-on: wrenn/wrenn#21
2026-04-11 21:41:18 +00:00
25b5258841 COPY multi-source support, configurable rootfs size, build fixes
- COPY now supports multiple sources: COPY a.txt b.txt /dest/
  Last argument is always destination (matches Dockerfile semantics).
- COPY resolves relative destinations against current WORKDIR.
- WRENN_DEFAULT_ROOTFS_SIZE env var (e.g. 5G, 2Gi, 1000M, 512Mi)
  controls template rootfs expansion. Used both at agent startup
  (EnsureImageSizes) and after FlattenRootfs (shrink then re-expand).
- Pre-build now sets WORKDIR /home/wrenn-user after USER switch.
- Extracted archive files get chmod a+rX for readability.
- Path traversal validation on COPY sources.
2026-04-12 03:39:17 +06:00
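
Parsing the documented size forms (5G, 2Gi, 1000M, 512Mi) presumably follows the usual decimal-vs-binary suffix convention; a sketch under that assumption:

```go
package config

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSize converts size strings like "5G" or "512Mi" to bytes, assuming
// plain suffixes are decimal (M = 10^6) and "i" suffixes binary (Mi = 2^20).
func parseSize(s string) (int64, error) {
	units := map[string]int64{
		"M": 1000 * 1000, "G": 1000 * 1000 * 1000,
		"Mi": 1024 * 1024, "Gi": 1024 * 1024 * 1024,
	}
	// Check the two-letter suffixes before their one-letter prefixes.
	for _, suf := range []string{"Mi", "Gi", "M", "G"} {
		if strings.HasSuffix(s, suf) {
			n, err := strconv.ParseInt(strings.TrimSuffix(s, suf), 10, 64)
			if err != nil {
				return 0, err
			}
			return n * units[suf], nil
		}
	}
	return 0, fmt.Errorf("unrecognized size %q", s)
}
```
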
46c43b95c2 Visual polish 2026-04-12 02:44:40 +06:00
000318f77e Fix runtime env leaking into templates, add hostname to /etc/hosts
- Filter out user-specific env vars (HOME, USER, LOGNAME, SHELL, etc.)
  from template default_env so they don't override envd's per-user
  resolution. Fixes bash sourcing /root/.bashrc as wrenn-user.
- Keep WRENN_SANDBOX (legitimate runtime flag), only filter per-sandbox
  IDs (WRENN_SANDBOX_ID, WRENN_TEMPLATE_ID).
- Add "127.0.0.1 sandbox" to /etc/hosts in wrenn-init.sh so sudo can
  resolve the hostname. Fixes "unable to resolve host sandbox" error.
- Move capsule lifecycle buttons (Pause/Resume/Snapshot/Destroy) to the
  same row as Stats/Files/Terminal tabs.
- Show vCPU/Memory for all template types with Required/Recommended
  tooltips on the user templates page.
2026-04-12 02:43:09 +06:00
f5eeb0ffcc Rename /dashboard/snapshots to /dashboard/templates, show specs for all template types
- Rename snapshots route to templates for consistency with sidebar label
- Show vCPU and Memory values for base templates (not just snapshots),
  with tooltip distinguishing "Required" vs "Recommended"
- Show recipe copy button in admin build logs
- Admin panel defaults to /admin/templates on entry
- WORKDIR creates directory if not present (mkdir -p)
- Use USER command in pre-build instead of raw adduser
- Fix Svelte whitespace stripping in step keyword display
2026-04-12 02:22:43 +06:00
75af2a4f66 Add USER, COPY, ENV persistence to template build system
Implement three new recipe commands for the admin template builder:

- USER <name>: creates the user (adduser + passwordless sudo), switches
  execution context so subsequent RUN/START commands run as that user
  via su wrapping. Last USER becomes the template's default_user.

- COPY <src> <dst>: copies files from an uploaded build archive
  (tar/tar.gz/zip) into the sandbox. Source paths validated against
  traversal. Ownership set to the current USER.

- ENV persistence: accumulated env vars stored in templates.default_env
  (JSONB) and injected via PostInit when sandboxes are created from the
  template, mirroring Docker's image metadata approach.

Supporting changes:
- Pre-build creates wrenn-user as default (via USER command)
- WORKDIR now creates the directory if it doesn't exist (mkdir -p)
- Per-step progress updates (ProgressFunc callback) for live UI
- Multipart form support on POST /v1/admin/builds for archive upload
- Proto: default_user/default_env fields on Create/ResumeSandboxRequest
- Host agent: SetDefaults calls PostInitWithDefaults on envd
- Control plane: reads template defaults, passes on sandbox create/resume
- Frontend: file upload widget, recipe copy button, keyword colors for
  USER/COPY, fixed Svelte whitespace stripping in step display
- Admin panel defaults to /admin/templates instead of /admin/hosts
- Migration adds default_user and default_env to templates and
  template_builds tables
2026-04-12 02:10:01 +06:00
f6c3dc0801 Merge pull request 'bugfix: preserve agent gRPC status codes and map AlreadyExists to 409 Conflict' (#20) from bugfix/mkdir-already-exists-409 into dev
Reviewed-on: wrenn/wrenn#20
2026-04-11 17:59:16 +00:00
f5a9a1209f fix: map CodeAlreadyExists to HTTP 409 Conflict
Updated the `agentErrToHTTP` switch statement to explicitly catch
`connect.CodeAlreadyExists` (as well as `connect.CodeFailedPrecondition`)
and return `http.StatusConflict` (409) instead of falling through to
the default 502 Bad Gateway.
2026-04-11 23:54:48 +06:00
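
Read together with the commit below, the resulting switch plausibly looks like this sketch (the real mapping surely covers more codes):

```go
package gateway

import (
	"errors"
	"net/http"

	"connectrpc.com/connect"
)

// agentErrToHTTP after the fix: AlreadyExists and FailedPrecondition map
// to 409 Conflict instead of falling through to 502 Bad Gateway.
func agentErrToHTTP(err error) int {
	var cerr *connect.Error
	if !errors.As(err, &cerr) {
		return http.StatusBadGateway
	}
	switch cerr.Code() {
	case connect.CodeNotFound:
		return http.StatusNotFound
	case connect.CodeAlreadyExists, connect.CodeFailedPrecondition:
		return http.StatusConflict
	default:
		return http.StatusBadGateway
	}
}
```
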
8d0356e372 fix: stop overwriting agent gRPC errors with CodeInternal
Removed the `connect.NewError(connect.CodeInternal, ...)` wrapper in the
Server's MakeDir proxy handler. Previously, this wrapper was catching
specific agent errors (like CodeAlreadyExists) and casting them into
generic Code 13 (Internal) errors, stripping the gRPC metadata.

This change allows the control-plane to act as a transparent pipeline,
ensuring the API gateway can properly interpret and route specific
filesystem failures.
2026-04-11 23:54:23 +06:00
c3c9ced9dd Remove API key auth requirement for sandbox port proxy connections
Sandbox URLs ({port}-{sandbox_id}.{domain}) are now accessible without
authentication. The sandbox ID in the hostname is sufficient for routing.
2026-04-11 13:59:07 +06:00
7d0a21644f Merge pull request 'Visual optimizations for the web UI' (#19) from fix/optimizations into dev
Reviewed-on: wrenn/wrenn#19
2026-04-11 02:24:01 +00:00
26917d432d Add syntax highlighting to file browser, harden capsules list
File browser:
- Add shiki-based syntax highlighting (lazy-loaded, zero initial bundle
  impact) with support for 30+ languages
- Cap highlighting at 2000 lines to avoid freezing on large files
- Pre-compute preview lines as derived state instead of re-splitting
  on every render
- Add content-visibility: auto on code lines for off-screen skip
- Remove per-line CSS transitions (unnecessary paint on 5000 elements)
- Cap row entrance animations to first 30 entries

Capsules list:
- Pause auto-refresh polling when browser tab is hidden
- Add empty state for search with no results
- Fix error state not clearing on successful refresh
- Fix action menu positioning near viewport edges
- Disable create button when no template selected
2026-04-11 07:49:11 +06:00
430fb9e70e Add per-provider brand colors to channels page
Give each provider (Discord, Slack, Teams, Google Chat, Telegram,
Matrix, Webhook) its own distinctive color for badges, row hover
stripes, and dialog tags. Move channel count into the header as a
serif numeral for stronger typographic hierarchy.
2026-04-11 07:14:13 +06:00
0807946d45 Replace template text input with searchable combobox, lock specs for snapshots
Template field is now a filterable dropdown that fetches available
templates on dialog open. Selecting a snapshot auto-fills and disables
vCPU/memory inputs since they must match the original capsule config.
2026-04-11 07:00:59 +06:00
11ca6935a6 Skip row fly-transitions on template filter change to prevent visual flicker
After initial page load animations complete, subsequent filter switches
render instantly (duration: 0) instead of replaying staggered fly-in/out
transitions that caused all rows to flash before filtering took effect.
2026-04-11 06:48:50 +06:00
e2f869bfc2 Minor textual change 2026-04-11 06:23:31 +06:00
21b82c2283 Optimize frontend polling: visibility API, range-based intervals, skip redundant redraws
Adds Page Visibility API to StatsPanel, templates, and capsule detail
pages so polling pauses when the browser tab is hidden. Capsule metrics
now use range-appropriate poll intervals (10s for 5m/10m, up to 120s for
24h) instead of a flat 10s. Chart updates are skipped when the data
fingerprint hasn't changed, avoiding unnecessary Canvas redraws.
2026-04-11 06:20:29 +06:00
dbad418093 Harden channels page: deduplicate dropdowns, add missing provider logos
Consolidate three identical click-outside $effect blocks into a reusable
useClickOutside helper. Extract duplicated events checkbox list into an
eventsDropdownItems snippet shared by create and edit dialogs. Add brand
SVG icons for Teams, Google Chat, and Matrix providers.
2026-04-11 06:18:36 +06:00
2bad843069 Extract SnapshotDialog and DestroyDialog into reusable components
Add lifecycle buttons (pause, resume, snapshot, destroy) to the
individual capsule detail page and refactor both the list and detail
pages to share the new dialog components.
2026-04-11 06:08:19 +06:00
9332f4ac18 Merge pull request 'Terminal connection (PTY)' (#18) from feat/ssh-connection into dev
Reviewed-on: wrenn/wrenn#18
2026-04-10 23:45:10 +00:00
cf191ca821 Harden file browser: cap preview lines, fix race conditions, download UX
- Cap text preview at 5,000 lines with truncation footer and download link
  to prevent browser freeze on large files (300k+ DOM nodes)
- Add request generation counters to discard stale API responses from
  rapid directory/file clicking
- Guard initial $effect with hasInitiallyLoaded to prevent double-load
- Add download loading state with spinner and disabled button
- Delay URL.revokeObjectURL by 5s so browser can start download
2026-04-11 05:43:32 +06:00
d2202c4f49 Harden terminal: binary-safe base64, auto-reconnect, session limits
- Replace btoa/atob with TextEncoder/TextDecoder for binary-safe base64
  encoding — fixes crash on multi-byte UTF-8 input (emoji, CJK, accents)
- Auto-reconnect on abnormal WebSocket close while session is live
- Cap concurrent sessions at 8 with disabled "+" button at limit
- Guard all ws.send() calls with try/catch via wsSend() wrapper
- Clean up input flush timer on session close and component destroy
- Close all sessions when capsule stops running (isRunning → false)
- Clean up orphaned display entry if DOM container fails to render
2026-04-11 05:35:53 +06:00
1826af37a5 Increase multiplexer fork buffer to 4096 to prevent output drops
64-entry buffer was too small for high-throughput PTY output (e.g.
ls -laihR /). The consumer couldn't drain fast enough over the RPC
stream, causing the non-blocking send fallback to silently discard
data. 4096 entries (~64MB at 16KB/chunk) handles sustained output
without drops while still preventing deadlock on stuck consumers.
2026-04-11 05:16:43 +06:00
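
The pattern behind both multiplexer fixes, as a hedged Go sketch (names
hypothetical):

    const forkBufferSize = 4096 // ~64MB worst case at 16KB per chunk

    type fork struct {
        ch chan []byte // make(chan []byte, forkBufferSize)
    }

    // fanOut never blocks the producer: a consumer that cannot drain its
    // buffer loses chunks instead of deadlocking PTY output for everyone.
    func fanOut(forks []*fork, chunk []byte) {
        for _, f := range forks {
            select {
            case f.ch <- chunk:
            default: // buffer full; drop rather than stall
            }
        }
    }
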
acc721526d Polish terminal tab: merge status bar into tab strip, normalize sizing
- Merge separate status bar into unified tab bar (one row of chrome instead of two)
- Bump font/button/icon sizes to match rest of capsule page
- VS Code-style tab separators with intelligent hiding around active tab
- Hide tab bar when no sessions exist (empty state has its own CTA)
- Fix xterm background gaps by painting viewport/screen backgrounds
- Increase terminal font from 13px to 14px
2026-04-11 05:10:46 +06:00
4b2ff279f7 Add terminal tab to capsule detail page and fix envd process lookup bugs
- Add multi-session Terminal tab with xterm.js (session tabs, close, reconnect)
- Keep terminal mounted across tab switches to preserve sessions
- Persist active tab in URL (?tab=terminal) so refresh stays on terminal
- Buffer keystrokes (50ms) to reduce per-character RPC overhead
- Add WebSocket auth via ?token= query param for browser WS connections
- Enable ws:true in Vite dev proxy for WebSocket support

envd fixes (pre-existing bugs exposed by multi-session terminals):
- Fix getProcess tag Range: inverted return values caused early stop when
  multiple tagged processes existed, making SendInput fail with "not found"
- Fix multiplexer deadlock: blocking send to cancelled fork's unbuffered
  channel prevented process cleanup. Now uses buffered channels (cap 64)
  with non-blocking fallback
2026-04-11 04:27:16 +06:00
ab3fc4a807 Add interactive PTY terminal sessions for sandboxes
Wire envd's existing PTY process capabilities through the full stack:
hostagent proto (4 new RPCs: PtyAttach, PtySendInput, PtyResize, PtyKill),
envdclient, sandbox manager, and a new WebSocket endpoint at
GET /v1/sandboxes/{id}/pty with bidirectional JSON message protocol.

Sessions use tag-based identity for disconnect/reconnect support,
base64-encoded PTY data for binary safety, and a 120s inactivity timeout.
2026-04-11 02:42:59 +06:00
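
The log does not spell out the wire format; a plausible shape for the
bidirectional JSON messages (field names hypothetical):

    // One message type flows both ways over the WebSocket.
    type ptyMessage struct {
        Type string `json:"type"`           // e.g. "input", "output", "resize"
        Data string `json:"data,omitempty"` // base64 PTY bytes, for binary safety
        Cols uint32 `json:"cols,omitempty"` // resize only
        Rows uint32 `json:"rows,omitempty"`
    }

    // Guest-bound input is decoded before hitting the PTY:
    //   raw, _ := base64.StdEncoding.DecodeString(msg.Data)
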
09f030d202 Replace file browser not-running state with centered empty state
The small bordered card looked broken and misaligned — now uses a
full-width centered layout with floating icon, matching the app's
empty-state pattern.
2026-04-10 23:32:17 +06:00
43c15c86de Merge pull request 'Added browser based filesystem interactions' (#16) from feat/file-interactions into dev
Reviewed-on: wrenn/wrenn#16
2026-04-10 13:40:39 +00:00
851f54a9e1 Polish file browser: add up button, normalize design, improve UX
Add parent directory button in breadcrumb bar, remove redundant ..
row from file list. Normalize styles to use design system tokens
(accent glow, iconFloat, fadeUp). Improve empty states, add staggered
row entrance animation, file extension badge, and clearer UX copy.
2026-04-10 19:24:24 +06:00
4ed17b2776 Fix stale WRENN_SANDBOX_ID and WRENN_TEMPLATE_ID after snapshot restore
After restoring a VM from snapshot, envd had already completed its initial
MMDS poll, so the metadata files in /run/wrenn/ and env vars retained values
from the original sandbox. Call POST /init after WaitUntilReady on both
resume and create-from-template paths to trigger envd to re-read MMDS.
2026-04-10 19:23:48 +06:00
0e6daaabe0 Fix file browser: use ~ as default path, support tilde expansion
- Default to ~ instead of hardcoded /home/user — envd resolves it
  to the actual home dir of the configured user
- Pass ~ and ~/... paths through to envd for server-side expansion
- Resolve actual absolute path from response entries for breadcrumbs
- Fall back to / if home dir is empty or doesn't exist
- Fix leftover label prop on admin templates CopyButton
2026-04-10 19:10:20 +06:00
82531b735c Add Files tab to capsule detail page with file browser and preview
Implements a split-panel file browser: directory tree on the left with
path input and breadcrumb navigation, file preview on the right with
line numbers. Binary/large files (>10MB) show a download prompt instead.

Also adds CopyButton component across capsule, snapshot, and template
pages, and fixes pre-existing type errors in StatsPanel and admin
templates page.
2026-04-10 18:43:11 +06:00
c9283cac70 Add filesystem operations (list, mkdir, remove) across full stack
Plumb ListDir, MakeDir, and RemovePath through all layers:
REST API → host agent RPC → envdclient → envd. These endpoints
enable a web file browser for sandbox filesystem interaction.

New endpoints (all under requireAPIKeyOrJWT):
- POST /v1/sandboxes/{id}/files/list
- POST /v1/sandboxes/{id}/files/mkdir
- POST /v1/sandboxes/{id}/files/remove
2026-04-10 18:05:13 +06:00
c1987b0bda Merge branch 'main' of git.omukk.dev:wrenn/wrenn into dev 2026-04-10 03:03:04 +06:00
2b31af8fde Merge branch 'main' of git.omukk.dev:wrenn/wrenn into dev 2026-04-10 02:50:50 +06:00
831c898b71 Merge pull request 'Added channels for external notifications' (#13) from feat/channels into dev
Reviewed-on: wrenn/sandbox#13
2026-04-09 19:20:36 +00:00
0f78982186 feat: channel audit logging, name cleaning, message formatting, and dashboard UI
- Add audit log entries for channel create, update, rotate_config, delete
- Clean channel names on create/update (trim, lowercase, spaces → hyphens,
  SafeName validation)
- Format chat notifications with full event details (resource, actor, team,
  timestamp) instead of one-liners
- Fix Discord split-line embeds by setting splitLines=No on shoutrrr URL
- Add channels dashboard page and sidebar navigation
2026-04-10 01:17:03 +06:00
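
The cleaning step, sketched (SafeName validation itself not shown):

    import "strings"

    // cleanChannelName normalizes a user-supplied name before validation:
    // trim, lowercase, spaces to hyphens.
    func cleanChannelName(name string) string {
        name = strings.ToLower(strings.TrimSpace(name))
        return strings.ReplaceAll(name, " ", "-")
    }
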
84dd15d22b feat: add notification channels with provider integrations and retry
Implement a channels system for notifying teams via external providers
(Discord, Slack, Teams, Google Chat, Telegram, Matrix, webhook) when
lifecycle events occur (capsule/template/host state changes).

- Channel CRUD API under /v1/channels (JWT-only auth)
- Test endpoint to verify config before saving (POST /v1/channels/test)
- Secret rotation endpoint (PUT /v1/channels/{id}/config)
- AES-256-GCM encryption for provider secrets (WRENN_ENCRYPTION_KEY)
- Redis stream event publishing from audit logger
- Background dispatcher with consumer group and retry (10s, 30s)
- Webhook delivery with HMAC-SHA256 signing (X-WRENN-SIGNATURE)
- shoutrrr integration for chat providers
- Secrets never exposed in API responses
2026-04-09 17:06:06 +06:00
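
A hedged sketch of the webhook signing step (hex digest assumed; only the
header name and HMAC-SHA256 are stated above):

    import (
        "crypto/hmac"
        "crypto/sha256"
        "encoding/hex"
        "net/http"
    )

    // signWebhook attaches the payload signature the receiver can verify
    // by recomputing the HMAC with the shared secret.
    func signWebhook(req *http.Request, secret, body []byte) {
        mac := hmac.New(sha256.New, secret)
        mac.Write(body)
        req.Header.Set("X-WRENN-SIGNATURE", hex.EncodeToString(mac.Sum(nil)))
    }
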
5148b5dd64 Updated CLAUDE.md 2026-04-09 14:28:39 +06:00
37d85ec998 chore: relicense from BSL 1.1 to Apache 2.0
Replace Business Source License with Apache License Version 2.0 across
LICENSE, envd/LICENSE, and NOTICE. Update NOTICE to remove BSL-era
framing that singled out Apache-only portions.
2026-04-09 14:28:19 +06:00
e2beef817d Expose host up/down audit events to BYOC teams and refresh dashboard navigation
Change host marked_down/marked_up audit log scope from "admin" to "team" so
BYOC team members can see when their hosts go unreachable or recover. Rename
BYOC sidebar entry to Hosts, add placeholder billing/usage pages, disable
unimplemented notifications/settings links, and point docs to external site.
2026-04-09 14:24:20 +06:00
a9ca13b238 Changed redis dependency to keydb 2026-04-09 00:47:19 +06:00
e3ffa576ce Fix review findings: IP collision, pause race, proxy path, ENV ordering, conn drain
- Fix IP address collision at slot 32768+ by using bitwise shifts instead of
  byte-truncating division in network slot addressing
- Add per-sandbox lifecycleMu to serialize concurrent Pause/Destroy calls
- Sanitize proxy forwarding path with path.Clean
- Sort ENV keys in recipe shell preamble for deterministic ordering
- Fix ConnTracker goroutine leak by adding cancel channel to Drain/Reset
- Update context_test to assert deterministic ENV ordering
2026-04-08 04:32:41 +06:00
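
One plausible shape of the corrected slot addressing (subnet octets
assumed from the 10.11.0.{idx} convention mentioned in the proxy commit
further down):

    import "net"

    // slotIP derives a guest IP from a slot index. Shifting preserves the
    // high byte; the previous byte-truncating division produced collisions
    // with already-assigned addresses from slot 32768 onward.
    func slotIP(slot int) net.IP {
        return net.IPv4(10, 11, byte((slot>>8)&0xff), byte(slot&0xff))
    }
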
dd50cfdcb1 fix: security hardening from CSO audit
- Add auth failure logging (login, API key, JWT) with IP/email/prefix
- Move OAuth JWT from URL params to short-lived cookies to prevent
  token leakage via browser history, server logs, and Referer headers
- Pin Swagger UI to v5.18.2 with SRI integrity hashes
- Upgrade Go toolchain to 1.25.8 (fixes 5 called stdlib vulns)
- Fix unchecked error in host agent credential refresh
- Add .gstack to .gitignore for security report artifacts
2026-04-08 03:46:31 +06:00
3675ecba65 chore: add gstack skill routing rules to CLAUDE.md 2026-04-08 02:28:02 +06:00
c8615466be Enforce mandatory mTLS for CP↔agent communication
Both the control plane and host agent now refuse to start without valid
mTLS configuration, closing the unauthenticated proxy/RPC attack surface
that existed when running in plain HTTP fallback mode.
2026-04-08 02:25:43 +06:00
2737288a2b Merge pull request 'Changes for a python code interpreter' (#12) from feat/python-code-interpreter into dev
Reviewed-on: wrenn/sandbox#12
2026-04-07 20:18:06 +00:00
0ea0e7cc70 Fix expandEnv regex, init script crash, healthcheck deadline, and test issues
- Fix envRegex: remove spurious (\$)? group that swallowed $$$, handle ${}
- wrenn-init.sh: add || true to networking commands under set -e, remove dead code
- waitForHealthcheck: use context deadline for unlimited retries instead of implicit 100 cap
- Make parseSandboxEnv a package-level function (unused receiver)
- Fix WrappedCommand test: map iteration order dependency, pre-expand env values
- Fix error wrapping: %v → %w per project conventions
- test-jupyter-kernel.py: move import to top-level, fix misleading comment
2026-04-08 02:14:53 +06:00
11e08e5b96 Merge branch 'dev' into feat/python-code-interpreter 2026-04-07 19:35:55 +00:00
4dc8cc3867 Removed incorrect example cert format 2026-04-07 19:35:26 +00:00
9852f96127 Modified expandEnv to use regex.
Updated recipefile with test script to check code execution with state
management
2026-04-07 22:56:56 +06:00
bf05677bef Merge branch 'dev' into feat/python-code-interpreter 2026-04-06 20:45:54 +00:00
4f340b8847 feat: add env expansion, sandbox env fetching, and configurable healthchecks

Fix ENV instructions to expand $VAR references at set time using the
current env state, preventing self-referencing values like
PATH=/opt/venv/bin:$PATH from producing recursive expansions. Remove
expandEnv from shellPrefix to avoid double expansion.

Fetch sandbox environment variables via `env` before recipe execution
so ENV steps resolve against actual runtime values from the base
template image.

Replace hardcoded healthcheck timing with a Dockerfile-like flag parser
supporting --interval, --timeout, --start-period, and --retries. Add
start-period grace window and bounded retry counting to
waitForHealthcheck.

Add python-interpreter-v0-beta recipe and healthcheck files.
2026-04-07 01:15:43 +06:00
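
A minimal sketch of the set-time expansion idea (regex and helper names
assumed, not taken from the diff):

    import (
        "regexp"
        "strings"
    )

    var envRegex = regexp.MustCompile(`\$\{(\w+)\}|\$(\w+)`)

    // setEnv expands $VAR / ${VAR} against the env state captured so far,
    // so ENV PATH=/opt/venv/bin:$PATH stores a fully resolved value and
    // can never expand recursively later.
    func setEnv(env map[string]string, key, val string) {
        env[key] = envRegex.ReplaceAllStringFunc(val, func(m string) string {
            return env[strings.Trim(m, "${}")]
        })
    }
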
f57fe85492 Merge pull request 'Minor temporary fix for sitewide metrics' (#11) from patch/analytics into dev
Reviewed-on: wrenn/sandbox#11
2026-04-04 07:11:49 +00:00
9a52b47786 Minor temporary fix for sitewide metrics 2026-04-04 13:11:18 +06:00
ab38c8372c Merge pull request 'Feature: HTTP communication with sandbox' (#10) from code-interpreter into dev
Reviewed-on: wrenn/sandbox#10
2026-04-02 17:41:07 +00:00
8b5fa3438e Replace gopsutil port scanner with direct /proc/net/tcp reading
The envd port scanner used gopsutil's net.Connections() which walks
/proc/{pid}/fd to enumerate socket inodes. This corrupts Go runtime
semaphore state when the VM is paused mid-operation and restored from
a Firecracker snapshot.

Replace with a direct /proc/net/tcp + /proc/net/tcp6 parser that reads
a single file per address family — no /proc/{pid}/fd walk, no goroutines,
no WaitGroups. Also replace concurrent-map (smap) in the scanner with a
plain sync.RWMutex-protected map, since concurrent-map's Items() spawns
goroutines with a WaitGroup internally, which is equally unsafe across
snapshot boundaries.

Use socket inode instead of PID for the port forwarding map key, since
inode is available directly from /proc/net/tcp without the fd walk.
2026-04-01 15:47:28 +06:00
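
For reference, a hedged sketch of the per-line parse (format per proc(5);
the local_address field is hex "IP:PORT", field 9 is the socket inode):

    import (
        "strconv"
        "strings"
    )

    func parseTCPLine(line string) (port, inode uint64, ok bool) {
        f := strings.Fields(line)
        if len(f) < 10 {
            return 0, 0, false // header row or malformed line
        }
        hp := strings.SplitN(f[1], ":", 2)
        if len(hp) != 2 {
            return 0, 0, false
        }
        port, err := strconv.ParseUint(hp[1], 16, 16)
        if err != nil {
            return 0, 0, false
        }
        inode, err = strconv.ParseUint(f[9], 10, 64)
        return port, inode, err == nil
    }
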
2b4c5e0176 Add pre-pause proxy connection drain and sandbox proxy caching
Introduce ConnTracker (atomic.Bool + WaitGroup) to track in-flight proxy
connections per sandbox. Before pausing a VM, the manager drains active
connections with a 2s grace period, preventing Go runtime corruption
inside the guest caused by stale TCP state surviving Firecracker
snapshot/restore.

Also add:
- AcquireProxyConn on Manager for atomic lookup + connection tracking
- Proxy cache (120s TTL) on CP SandboxProxyWrapper with single-query
  DB lookup (GetSandboxProxyTarget) to avoid two round-trips
- Reset() on ConnTracker to re-enable connections if pause fails
2026-04-01 15:09:44 +06:00
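
The tracker itself is small; a hedged sketch (the review-fix commit above
later adds a cancel channel so a timed-out waiter goroutine can exit):

    import (
        "sync"
        "sync/atomic"
        "time"
    )

    type ConnTracker struct {
        draining atomic.Bool
        wg       sync.WaitGroup
    }

    func (t *ConnTracker) Acquire() bool {
        if t.draining.Load() {
            return false // refuse new connections while draining
        }
        t.wg.Add(1)
        return true
    }

    func (t *ConnTracker) Release() { t.wg.Done() }

    // Drain waits for in-flight connections, giving up after the grace
    // period (2s before pause, per the commit above).
    func (t *ConnTracker) Drain(grace time.Duration) {
        t.draining.Store(true)
        done := make(chan struct{})
        go func() { t.wg.Wait(); close(done) }()
        select {
        case <-done:
        case <-time.After(grace):
        }
    }

    // Reset re-enables connections if the pause ultimately fails.
    func (t *ConnTracker) Reset() { t.draining.Store(false) }
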
377e856c8f Fix lint warnings: drop deprecated Name field from snapshot response, check errcheck in benchmark
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-30 21:28:57 +06:00
948db13bed Add skip_pre_post build option, cancel endpoint, and recipe package
- skip_pre_post flag on builds bypasses apt update/clean pre/post steps for
  faster iteration when the recipe handles its own environment setup
- POST /v1/admin/builds/{id}/cancel endpoint marks an in-progress build as
  cancelled; UpdateBuildStatus now also sets completed_at for 'cancelled'
- internal/recipe: typed recipe parser and executor (RUN/ENV/COPY steps)
  replacing the raw string slice approach in the build worker
- pre/post build commands prefixed with RUN to match recipe step format
2026-03-30 21:24:52 +06:00
25ce0729d5 Add mTLS to CP→agent channel
- Internal ECDSA P-256 CA (WRENN_CA_CERT/WRENN_CA_KEY env vars); when absent
  the system falls back to plain HTTP so dev mode works without certificates
- Host leaf cert (7-day TTL, IP SAN) issued at registration and renewed on
  every JWT refresh; fingerprint + expiry stored in DB (cert_expires_at column
  replaces the removed mtls_enabled flag)
- CP ephemeral client cert (24-hour TTL) via CPCertStore with atomic hot-swap;
  background goroutine renews it every 12 hours without restarting the server
- Host agent uses tls.Listen + httpServer.Serve so GetCertificate callback is
  respected (ListenAndServeTLS always reads cert from disk)
- Sandbox reverse proxy now uses pool.Transport() so it shares the same TLS
  config as the Connect RPC clients instead of http.DefaultTransport
- Credentials file renamed host-credentials.json with cert_pem/key_pem/
  ca_cert_pem fields; duplicate register/refresh response structs collapsed
  to authResponse
2026-03-30 21:24:35 +06:00
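
The tls.Listen detail matters because the agent serves from a tls.Config,
so the GetCertificate callback is consulted on every handshake instead of
a certificate loaded once at startup. Hedged sketch (store shape assumed):

    import (
        "crypto/tls"
        "crypto/x509"
        "net/http"
    )

    func serveMTLS(addr string, store interface{ Current() *tls.Certificate },
        caPool *x509.CertPool, h http.Handler) error {
        cfg := &tls.Config{
            ClientAuth: tls.RequireAndVerifyClientCert, // mutual TLS
            ClientCAs:  caPool,
            GetCertificate: func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
                return store.Current(), nil // atomically hot-swapped leaf
            },
        }
        ln, err := tls.Listen("tcp", addr, cfg)
        if err != nil {
            return err
        }
        return (&http.Server{Handler: h}).Serve(ln)
    }
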
88f919c4ca Rename sandbox prefix to cl-, add MMDS metadata, fix proxy port routing
- Change sandbox ID prefix from sb- to cl- (capsule) throughout
- Fix proxy URL regex character class: base36 uses 0-9a-z, not just hex
- Add MMDS V2 config and metadata to VM boot flow so envd can read
  WRENN_SANDBOX_ID and WRENN_TEMPLATE_ID from inside the guest
- Pass TemplateID through VMConfig into both fresh and snapshot boot paths
2026-03-30 17:12:05 +06:00
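
With the 25-char base36 IDs introduced in the ID format commit further
down, the corrected host matcher looks roughly like:

    import "regexp"

    // Matches {port}-{sandbox_id}.{domain}. base36 IDs use the full
    // 0-9a-z range, so a hex-only character class missed most of them.
    var proxyHostRe = regexp.MustCompile(`^(\d{1,5})-(cl-[0-9a-z]{25})\.`)
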
8f06fc554a Replace Full snapshot fallback with file-level diff merge
Always use Firecracker Diff snapshots (fast, only changed pages) and
merge diff files at the file level when the generation cap is reached.
The previous approach used Firecracker's Full snapshot type which dumps
all memory to disk and can timeout, losing all snapshot data on failure.

Add snapshot.MergeDiffs() which reads each block from the appropriate
generation's diff file via the header mapping and writes them into a
single consolidated file with a fresh generation-0 header.
2026-03-29 02:33:33 +06:00
1ca10230a9 Prefix network namespaces with wrenn-, add stale cleanup, lower diff cap
Rename ns-{idx} to wrenn-ns-{idx} and veth-{idx} to wrenn-veth-{idx}
to avoid collisions with other tools. Add CleanupStaleNamespaces() at
agent startup to remove orphaned namespaces, veths, iptables rules, and
routes from a previous crash. Lower maxDiffGenerations from 10 to 8 to
prevent Go runtime memory corruption from snapshot/restore drift.
2026-03-29 02:14:30 +06:00
46d60fc5a5 Seed minimal template in DB and protect it from deletion
Insert a minimal template row (all-zeros UUID) so it appears in both
team and admin template listings. Guard delete endpoints to prevent
removal of the minimal template.
2026-03-29 01:34:54 +06:00
906cc42d13 Rename AGENT_*/CP_LISTEN_ADDR env vars to WRENN_* prefix
AGENT_FILES_ROOTDIR → WRENN_DIR, AGENT_LISTEN_ADDR → WRENN_HOST_LISTEN_ADDR,
AGENT_CP_URL → WRENN_CP_URL, AGENT_HOST_INTERFACE → WRENN_HOST_INTERFACE,
CP_LISTEN_ADDR → WRENN_CP_LISTEN_ADDR. Consolidates all env vars under a
consistent WRENN_ namespace.
2026-03-29 00:30:20 +06:00
75b28ed899 Add UUID-based template IDs and team-scoped template directory layout
Introduces internal/layout package for centralized path construction,
migrates templates from name-based TEXT primary keys to UUID PKs with
team-scoped directories (WRENN_DIR/images/teams/{team_id}/{template_id}).
The built-in minimal template uses sentinel zero UUIDs. Proto messages
carry team_id + template_id alongside deprecated template name field.
Team deletion now cleans up template files across all hosts.
2026-03-29 00:30:10 +06:00
03e96629c7 Remove slug from team page UI 2026-03-28 20:45:57 +06:00
34af77e0d8 Fix snapshot race, delete auth, sparse dd, default disk to 5GB
Snapshot race fix:
- Pre-mark sandbox as "paused" in DB before issuing CreateSnapshot and
  PauseSandbox RPCs, preventing the reconciler from marking it "stopped"
  during the flatten window when the sandbox is gone from the host
  agent's in-memory map but DB still says "running"
- Revert status to "running" on RPC failure
- Check ctx.Err() before writing response to avoid writing to dead
  connections when client disconnects during long snapshot operations

Delete auth fix:
- Block non-admin deletion of platform templates (team_id = all-zeros)
  at DELETE /v1/snapshots/{name} with 403, preventing file deletion
  before the team ownership check fails

Sparse dd:
- Add conv=sparse to dd in FlattenSnapshot so flattened images preserve
  sparseness (~200MB actual vs 5GB logical)

Default disk size:
- Change default disk_size_mb from 20GB to 5GB across migration,
  manager, service, build, and EnsureImageSizes
- Disable split-button dropdown arrow for platform templates in
  dashboard snapshots page (teams cannot delete platform templates)
2026-03-28 14:30:18 +06:00
c89a664a37 Switch API ID format from UUID to base36 for compact, E2B-style IDs
DB stays native UUID; the format/parse layer now encodes 16 UUID bytes
as 25-char lowercase alphanumeric (base36) strings instead of the
standard 36-char hex-with-dashes format. e.g. sb-2e5glxi4g3qnhwci95qev0cg0
2026-03-27 00:53:51 +06:00
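
A hedged sketch of the encode side (zero-padding assumed for fixed width):

    import (
        "math/big"
        "strings"
    )

    // formatBase36 renders 16 UUID bytes as a 25-char lowercase base36
    // string: 36^25 > 2^128, so 25 digits always suffice.
    func formatBase36(uuid [16]byte) string {
        s := new(big.Int).SetBytes(uuid[:]).Text(36)
        return strings.Repeat("0", 25-len(s)) + s
    }
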
3509ca90e8 Add pre/post build stages, fix exec timeout, expand guest PATH
Build phases:
- Pre-build (apt update) and post-build (apt clean, autoremove, rm lists)
  run with 10-minute timeout; user recipe commands keep 30s timeout
- Log entries include phase field for UI grouping
- Always send explicit TimeoutSec to host agent (0 defaulted to 30s)

Frontend:
- Pre-build/post-build steps show phase label without exposing commands
- Recipe steps numbered independently starting from 1

Guest PATH:
- Add /usr/games:/usr/local/games to wrenn-init.sh PATH export
  (standard Ubuntu paths, needed for packages like cowsay)
2026-03-27 00:28:32 +06:00
c8acac92cc Add pre/post build stages to template builds
Pre-build: apt update
Post-build: apt clean, apt autoremove, rm apt lists

Total steps count includes pre/post commands for accurate progress bars.
2026-03-27 00:00:48 +06:00
5cb37bf2a0 Add admin template deletion with broadcast to all hosts
- DELETE /v1/admin/templates/{name} endpoint (admin-only)
- Broadcasts DeleteSnapshot RPC to all online hosts before removing DB record
- Frontend admin templates page uses deleteAdminTemplate() instead of
  team-scoped deleteSnapshot()
- Delete button shown for all template types, not just snapshots
2026-03-26 23:53:08 +06:00
c0d6381bbe Add disk_size_mb, auto-expand base images, admin templates endpoint
Disk sizing:
- Add disk_size_mb column to sandboxes table (default 20480 = 20GB)
- Add disk_size_mb to CreateSandboxRequest proto, passed through the
  full chain: service → RPC → host agent → sandbox manager → devicemapper
- devicemapper.CreateSnapshot takes separate cowSizeBytes param so the
  sparse CoW file can be sized independently from the origin
- EnsureImageSizes() runs at host agent startup: expands any base image
  smaller than 20GB via truncate + resize2fs (sparse, no extra physical
  disk). Sandboxes then get the full 20GB via fast dm-snapshot path
- FlattenRootfs shrinks output images with resize2fs -M so stored
  templates are compact; EnsureImageSizes re-expands on next startup

Admin templates visibility:
- Add GET /v1/admin/templates endpoint listing all templates across teams
- Frontend admin templates page uses listAdminTemplates() instead of
  team-scoped listSnapshots()
- Platform templates (team_id = all-zeros UUID) now visible to all teams:
  GetTemplateByTeam, ListTemplatesByTeam, ListTemplatesByTeamAndType
  queries include platform team_id in WHERE clause
2026-03-26 23:45:41 +06:00
4ddd494160 Switch database IDs from TEXT to native UUID
Consolidate 16 migrations into one with UUID columns for all entity
IDs. TEXT is kept only for polymorphic fields (audit_logs.actor_id,
resource_id) and template names. The id package now generates UUIDs
via google/uuid, with Format*/Parse* helpers for the prefixed wire
format (sb-{uuid}, usr-{uuid}, etc.). Auth context, services, and
handlers pass pgtype.UUID internally; conversion to/from prefixed
strings happens at API and RPC boundaries. Adds PlatformTeamID
(all-zeros UUID) for shared resources.
2026-03-26 16:16:21 +06:00
cdd89a7cee Fix review issues: detached contexts, loop device leak, timer leak, size_bytes
- Use context.Background() with timeout in destroySandbox/failBuild so
  cleanup and DB writes survive parent context cancellation on shutdown
- Fix loop device refcount leak in FlattenRootfs when dmDevice is nil
- Replace time.After with time.NewTimer in healthcheck polling to avoid
  goroutine leak when healthcheck passes early
- Capture size_bytes from CreateSnapshot/FlattenRootfs RPC responses
  instead of hardcoding 0 in the templates table insert
- Avoid leaking internal error details to API clients in build handler
2026-03-26 15:31:38 +06:00
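
The timer fix is the standard Go pattern; sketched (names hypothetical):

    import (
        "context"
        "time"
    )

    // pollHealthcheck avoids time.After in a loop: each time.After call
    // allocates a timer that cannot be stopped early, so timers pile up
    // until they fire even after the healthcheck has already passed.
    func pollHealthcheck(ctx context.Context, interval time.Duration, healthy func() bool) error {
        t := time.NewTimer(interval)
        defer t.Stop()
        for {
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-t.C:
                if healthy() {
                    return nil
                }
                t.Reset(interval)
            }
        }
    }
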
1ce62934b3 Add template build system with admin panel, async workers, and FlattenRootfs RPC
Introduces an end-to-end template building pipeline: admins submit a recipe
(list of shell commands) via the dashboard, a Redis-backed worker pool spins
up a sandbox, executes each command, and produces either a full snapshot
(with healthcheck) or an image-only template (rootfs flattened via a new
FlattenRootfs host-agent RPC). Build progress and per-step logs are persisted
to a new template_builds table and polled by the frontend.

Backend:
- New FlattenRootfs RPC (proto + host agent + sandbox manager)
- BuildService with Redis queue (BLPOP) and configurable worker pool (default 2)
- Admin-only REST endpoints: POST/GET /v1/admin/builds, GET /v1/admin/builds/{id}
- Migration for template_builds table with JSONB logs and recipe columns
- sqlc queries for build CRUD and progress updates

Frontend:
- /admin/templates page with Templates + Builds tabs
- Create Template dialog with recipe textarea, healthcheck, specs
- Build history with expandable per-step logs, status badges, progress bars
- Auto-polling every 3s for active builds
- AdminSidebar updated with Templates nav item
2026-03-26 15:27:21 +06:00
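
A hedged sketch of the BLPOP consumption loop (queue key and handler
names hypothetical):

    import (
        "context"

        "github.com/redis/go-redis/v9"
    )

    func runWorkers(ctx context.Context, rdb *redis.Client, n int,
        run func(context.Context, string)) {
        for i := 0; i < n; i++ {
            go func() {
                for ctx.Err() == nil {
                    // Zero timeout blocks until a build is enqueued.
                    res, err := rdb.BLPop(ctx, 0, "builds:queue").Result()
                    if err != nil {
                        continue // cancelled context or transient Redis error
                    }
                    run(ctx, res[1]) // res[0] is the key, res[1] the build ID
                }
            }()
        }
    }
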
6898528096 Replace one-shot clock_settime with chrony for continuous guest time sync
Switch from the envd /init endpoint pushing host time via syscall to
chronyd reading the KVM PTP hardware clock (/dev/ptp0) continuously.
This fixes clock drift between init calls and handles snapshot resume
gracefully.

Changes:
- Add clocksource=kvm-clock kernel boot arg
- Start chronyd in wrenn-init.sh before tini (PHC /dev/ptp0, makestep 1.0 -1)
- Remove clock_settime logic from envd SetData and shouldSetSystemTime
- Remove client.Init() clock sync calls from sandbox manager (3 sites)
- Remove Init() method from envdclient (no longer needed)
- Simplify rootfs scripts: socat/chrony now come from apt in the container
  image, only envd/wrenn-init/tini are injected by build scripts
2026-03-26 04:47:44 +06:00
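
Guest-side, the sync reduces to two chrony directives (poll interval
assumed; the PHC refclock and makestep values are stated above):

    # chrony.conf fragment inside the guest (sketch)
    refclock PHC /dev/ptp0 poll 2   # follow the KVM PTP hardware clock
    makestep 1.0 -1                 # step, not slew, on >1s offset, any time
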
12d1e356fa Minor UI copy updates across capsules and templates pages 2026-03-26 03:58:12 +06:00
139f86bf9c Fix static build: disable prerender for dynamic capsule detail route
The [id] route cannot be prerendered at build time since IDs are unknown.
With adapter-static's index.html fallback, the route is handled client-side.
2026-03-26 02:13:12 +06:00
b0a8b498a8 WIP: Add Caddy reverse proxy for dev environment
Add Caddy to docker-compose as the single entry point on port 8000:
- localhost -> /api/* stripped and proxied to CP:8080, /* to frontend:5173
- *.localhost -> proxied to CP:8080 (sandbox proxy catch-all)
- Direct /v1/*, /auth/*, /docs routes proxied to CP

Move CP from :8000 to :8080 (its default). Caddy takes :8000.
Update .env.example, vite proxy target (kept as fallback), and Makefile
dev targets (pg_isready via docker exec, frontend binds 0.0.0.0).

This is an intermediate state — needs further work for the full code
interpreter feature.
2026-03-26 02:12:21 +06:00
4be65b0abb WIP: Add sandbox proxy catch-all to control plane
Add SandboxProxyWrapper that intercepts requests with Host headers
matching {port}-{sandbox_id}.{domain} and proxies them through the
owning host agent's /proxy endpoint.

Authentication is via X-API-Key only (no JWT). The API key's team must
own the sandbox. Export EnsureScheme from lifecycle package for reuse.

Request flow: SDK -> Caddy -> CP catch-all -> Host Agent -> sandbox VM.

This is an intermediate state — needs further work for the full code
interpreter feature.
2026-03-26 02:12:10 +06:00
f4675ebfc0 WIP: Add HTTP proxy endpoint to host agent
Add /proxy/{sandbox_id}/{port}/* handler that reverse-proxies HTTP
requests to services running inside sandbox VMs. The sandbox's host IP
(10.11.0.{idx}) is used as the upstream target.

Includes port validation (1-65535) and shared HTTP transport for
connection pooling. Supports WebSocket upgrades for protocols like
Jupyter's streaming API.

This is an intermediate state — needs further work for the full code
interpreter feature.
2026-03-26 02:12:01 +06:00
602ee470d9 WIP: Add socat injection to rootfs build scripts
Inject a statically-linked socat binary into rootfs images. envd's
port forwarder requires socat to bridge localhost-listening services
(e.g. Jupyter kernel) to the guest TAP interface.

Both scripts follow the same 3-step resolution: check rootfs, check
host, build from source (http://www.dest-unreach.org/socat/ v1.8.1.1).
Static linkage is verified before injection.

This is an intermediate state — needs further work for the full code
interpreter feature.
2026-03-26 02:11:54 +06:00
8cdf91d895 Merge pull request 'Added metrics' (#9) from metrics into dev
Reviewed-on: wrenn/sandbox#9
2026-03-25 16:40:06 +00:00
ed7880bc6c Add per-capsule stats detail page with live CPU/RAM charts
- New detail page at /dashboard/capsules/[id] with Stats and Files tabs
- Stats tab shows capsule info card (status, template, CPU, memory, disk,
  started, idle timeout) and two stacked Chart.js charts with live values
- Metrics API client with 10s polling and moving-average smoothing
- Capsule ID in list table is now a clickable link to the detail page
- Layout breadcrumb header (Capsules > sb-xxx) with back navigation
- Fix metrics sampler: use v.PID() directly as Firecracker PID since
  unshare -m execs (not forks) through the bash/ip-netns-exec/firecracker
  chain, so all share the same PID. Removes unused findChildPID.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 22:31:05 +06:00
27ff828e60 Push GetSandboxMetricPoints time filter into SQL
The query was fetching all rows for a (sandbox_id, tier) pair and
filtering by timestamp in Go. For repeatedly-paused sandboxes the
24h tier can accumulate up to 30 days of data, causing up to 120x
over-fetching for a 6h range request.

Add AND ts >= $3 to the query so Postgres filters on the primary key
(sandbox_id, tier, ts) directly. Drop the redundant Go-side loop.
2026-03-25 21:53:19 +06:00
6eacf0f735 Fix LIKE pattern injection in user email search
Escape LIKE metacharacters (% and _) in the email prefix before passing
to the SQL query, and enforce the documented '@' requirement to prevent
broad user enumeration. Move search logic out of TeamService into
usersHandler since it is a site-wide lookup, not team-scoped.
2026-03-25 21:53:09 +06:00
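
The escaping step, sketched:

    import "strings"

    // escapeLike neutralizes LIKE wildcards in user input before the
    // prefix pattern is built, so "%" cannot match every user.
    func escapeLike(s string) string {
        return strings.NewReplacer(`\`, `\\`, `%`, `\%`, `_`, `\_`).Replace(s)
    }

    // pattern := escapeLike(prefix) + "%"
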
88cb24bb86 Minor improvement 2026-03-25 21:27:11 +06:00
49b0b646a8 Add 5m, 1h, 6h, 12h range filters to metrics endpoint
Maps each user-facing range to the appropriate underlying ring buffer
tier and applies a time cutoff filter. No new ring buffers needed —
5m/10m read from the 10m tier, 1h/2h from the 2h tier, 6h/12h/24h
from the 24h tier.
2026-03-25 20:44:28 +06:00
9acdbb5ae9 Add per-sandbox CPU/memory/disk metrics collection
Samples /proc/{fc_pid}/stat (CPU%), /proc/{fc_pid}/status (VmRSS), and
stat() on CoW files at 500ms intervals per running sandbox. Three tiered
ring buffers downsample into 30s and 5min averages for 10min/2h/24h
retention. Metrics are flushed to DB on pause (all tiers) and destroy
(24h only). New GetSandboxMetrics and FlushSandboxMetrics RPCs on the
host agent, proxied through GET /v1/sandboxes/{id}/metrics?range= on
the control plane. Returns live data for running sandboxes, DB data for
paused, and 404 for stopped.
2026-03-25 20:10:33 +06:00
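
A hedged sketch of one memory sample (CPU jiffies parsing from
/proc/{pid}/stat omitted):

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // vmRSSBytes reads the resident set size from /proc/{pid}/status.
    func vmRSSBytes(pid int) (int64, error) {
        b, err := os.ReadFile(fmt.Sprintf("/proc/%d/status", pid))
        if err != nil {
            return 0, err
        }
        for _, line := range strings.Split(string(b), "\n") {
            if v, ok := strings.CutPrefix(line, "VmRSS:"); ok {
                kb, err := strconv.ParseInt(strings.Fields(v)[0], 10, 64)
                return kb * 1024, err // reported in kB
            }
        }
        return 0, fmt.Errorf("VmRSS not found for pid %d", pid)
    }
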
7473c15f52 Bugfix: cgroup2 related error inside the sandbox 2026-03-25 19:45:57 +06:00
8d5ba3873a Fix capsules table blink on background poll refresh
Poll fetches now silently update data without triggering loading
states, spinner animations, or row fadeUp re-animations. Only manual
refresh shows the spin indicator.
2026-03-25 19:44:13 +06:00
b0e6f5ffb3 Bolder stats page layout with stronger visual hierarchy
- Accent stripes: 3px → 5px; indicator dots: 6px → 8px
- Peak values step down to text-[1.714rem]/text-secondary so Now values read as the clear hero
- Now labels: semibold + uppercase for weight parity with the metric
- Cell padding py-5 → py-6; outer gap-7/pt-4 → gap-8/pt-6 for breathing room
- Chart fills: 7-8% → 11-13% opacity; lines: 1.5 → 2px
- Tick labels brighter (#635f5c), grid lines slightly more visible
- Running capsules chart: min-height 220 → 260px
2026-03-25 18:18:04 +06:00
a69b0f579c Split CPU and RAM into separate side-by-side charts
CPU (vCPUs) and RAM (GB) use different units and scales, so combining
them on a dual-axis chart was misleading. Each now has its own chart
card, laid out side-by-side.
2026-03-25 16:39:25 +06:00
45793e181c Move metrics to after templates in sidebar nav 2026-03-25 16:08:38 +06:00
e3750f79f9 Fix metrics sampler to record zero-value snapshots when idle
SampleSandboxMetrics previously filtered WHERE status IN ('running',
'starting', 'paused'), which returned no rows when all capsules were
stopped. This caused zero snapshots to be skipped, leaving the
time-series charts with no trailing data points instead of showing
the expected zero values.

Remove the WHERE filter so the query groups by all teams that have
any sandbox row. The per-status FILTER clauses on the aggregates
already produce correct zero counts for stopped capsules.

Also includes the per-VM RAM ceiling formula change (sum(ceil(each/2))
instead of ceil(sum/2)).
2026-03-25 15:50:19 +06:00
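
Concretely: three paused 1 GB capsules reserve sum(ceil(each/2)) =
3 * ceil(0.5) = 3 GB, where the old ceil(sum/2) = ceil(1.5) = 2 GB
under-counted the worst-case host commitment.
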
930da8a578 Move metrics to dedicated nav item, simplify capsules page
- Add Metrics nav item to sidebar with bar chart icon
- Create /dashboard/metrics page wrapping StatsPanel
- Remove tabs from capsules page (list is now the only view)
- Flatten capsules route: /capsules directly shows the list,
  removing the /list and /stats sub-routes
- Strip redundant title/subtitle from StatsPanel (page header
  provides context)
2026-03-25 15:24:21 +06:00
47b0ed5b52 Fix metrics correctness, redesign stats page
- Replace stale snapshot read (GetCurrentMetrics) with live query
  (GetLiveMetrics) against sandboxes table — always returns correct
  zeros when no capsules are running
- Fix CPU reserved formula: running + starting only; paused VMs no
  longer contribute vCPUs (RAM reservation for paused unchanged)
- Merge top cards into 3 paired Now/Peak cards with colored accent
  borders (green/blue/amber matching chart colors)
- Move Live badge from Running Capsules card to page-level header
- Add colored category dots to card and chart headers
- Charts stacked vertically, flex-1 to fill remaining page height
- vCPUs chart color changed to blue (#5a9fd4), RAM stays amber
2026-03-25 15:11:46 +06:00
fee66bda50 Add live stats page with metrics sampling and route split
- New sandbox_metrics_snapshots table sampled every 10s (60-day retention)
- Background MetricsSampler goroutine wired into control plane startup
- GET /v1/sandboxes/stats?range=5m|1h|6h|24h|30d endpoint with adaptive
  polling intervals; reserved CPU/RAM uses ceil(paused/2) formula
- StatsPanel component: 4 stat cards + 2 Chart.js line charts (straight
  lines, integer y-axis for running count, dual-axis for CPU/RAM)
- Range filter persisted in URL query param; polls update data silently
  (no blink — loading state only shown on initial mount)
- Split /dashboard/capsules into /list and /stats sub-routes with shared
  layout; capsuleRunningCount store syncs badge across routes
- CreateCapsuleDialog extracted as reusable component
2026-03-25 14:41:05 +06:00
2349f585ae Bolder, more delightful frontend across all pages
- app.css: replace flat --shadow-sm token with real shadows; add
  --shadow-card and --shadow-dialog tokens; add @keyframes status-ping
  and .animate-status-ping utility (outward ring ripple, GPU-composited
  via will-change) for live running status dots
- login: headline 5rem → 6.5rem with tighter leading/tracking; expand
  container to 460px; add sage-green dot grid texture layer beneath the
  mouse-reactive glow for industrial depth
- capsules: upgrade all running dots (header chip + row indicators +
  status bar) from opacity-fade to ring ripple; apply --shadow-dialog
  to Launch and Snapshot dialogs
- keys: apply --shadow-dialog to all three dialogs
- audit: remove duplicate @keyframes fadeUp and iconFloat (redundant
  with app.css definitions, audit's fadeUp also subtly diverged)
- sidebar: active indicator bar taller and thicker (h-5 w-[3px] → h-6
  w-1); active bg more vivid (accent/12%); label font-medium →
  font-semibold; team dialog gets --shadow-dialog
2026-03-25 12:55:23 +06:00
d4eb24be7e Added snapshot name dialog to the UI 2026-03-25 05:30:31 +06:00
0414fbe733 Merge pull request 'Added audit logs for users' (#7) from audit-logs into dev
Reviewed-on: wrenn/sandbox#7
2026-03-24 23:21:09 +00:00
6b76abe38e Remove expandable metadata from audit log rows
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-25 05:19:32 +06:00
3ce8fdcb02 Add audit logs frontend page
Infinite-scroll table with hierarchical filter dropdown, expandable
metadata rows, and status-coded visual signals per event severity.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-25 05:18:04 +06:00
1be30034bd Add audit log infrastructure and GET /v1/audit-logs endpoint
Introduces an append-only audit trail for all user and system actions:
sandbox lifecycle (create/pause/resume/destroy/auto-pause), snapshots,
team rename, API key create/revoke, member add/remove/leave/role_update,
and BYOC host add/delete/marked_down/marked_up.

- New audit_logs table (migration) with team_id, actor, resource,
  action, scope (team|admin), status (success|info|warning|error),
  metadata, and created_at
- AuditLogger (internal/audit) with named fire-and-forget methods per
  event; system actor used for background events (HostMonitor, TTL reaper)
- GET /v1/audit-logs: JWT-only, cursor pagination (max 200), multi-value
  filters for resource_type and action (comma-sep or repeated params);
  members see team-scoped events only, admins/owners see all
- AuthContext extended with APIKeyID + APIKeyName so API key requests
  record meaningful actor identity
- HostMonitor wired with AuditLogger for auto-pause and host marked_down
2026-03-25 05:15:16 +06:00
9878156798 Merge pull request 'Set up working host registration (including BYOC) with the CP' (#6) from host-registration into dev
Reviewed-on: wrenn/sandbox#6
2026-03-24 21:19:12 +00:00
e069b3e679 Add BYOC page, admin section, and is_byoc team visibility gating
- Frontend: BYOC hosts page (/dashboard/byoc) with register/delete flows,
  shimmer loading, pulsing online status, animated token reveal checkmark
- Frontend: Admin section (/admin/hosts) with platform + BYOC tabs, stat
  pills, skeleton loading, slide-in animations for new rows
- Frontend: AdminSidebar component with accent top bar and admin pill badge
- Frontend: BYOC nav item shown only when team.is_byoc is true (derived
  from teams store, not JWT); disabled for members
- Frontend: Admin shield button in Sidebar, visible only to platform admins
- Backend: is_admin in JWT claims + requireAdmin middleware (DB-validated)
- Backend: is_byoc added to teamResponse so frontend derives visibility
  from fresh team data rather than stale JWT fields
- Backend: SetBYOC admin endpoint (PUT /v1/admin/teams/{id}/byoc)
- Backend: Admin hosts list enriches BYOC entries with team_name
- Host agent: load .env file via godotenv on startup
2026-03-25 03:10:41 +06:00
9bf67aa7f7 Implement host registration, JWT refresh tokens, and multi-host scheduling
Replaces the hardcoded CP_HOST_AGENT_ADDR single-agent setup with a
DB-driven registration system supporting multiple host agents (BYOC).

Key changes:
- Host agents register via one-time token, receive a 7-day JWT + 60-day
  refresh token; heartbeat loop auto-refreshes on 401/403 and pauses all
  sandboxes if refresh fails
- HostClientPool: lazy Connect RPC client cache keyed by host ID, replacing
  the single static agent client throughout the API and service layers
- RoundRobinScheduler: picks an online host for each new sandbox via
  ListActiveHosts; extensible for future scheduling strategies
- HostMonitor (replaces Reconciler): passive heartbeat staleness check marks
  hosts unreachable and sandboxes missing after 90s; active reconciliation
  per online host restores missing-but-alive sandboxes and stops orphans
- Graceful host delete: returns 409 with affected sandbox list without
  ?force=true; force-delete destroys sandboxes then evicts pool client
- Snapshot delete broadcasts to all online hosts (templates have no host_id)
- sandbox.Manager.PauseAll: pauses all running VMs on CP connectivity loss
- New migration: host_refresh_tokens table with token rotation (issue-then-
  revoke ordering to prevent lockout on mid-rotation crash)
- New sandbox status 'missing' (reversible, unlike 'stopped') and host
  status 'unreachable'; both reflected in OpenAPI spec
- Fix: refresh token auth failure now returns 401 (was 400 via generic
  'invalid' substring match in serviceErrToHTTP)
2026-03-24 18:32:05 +06:00
f968da9768 Minor frontend enhancements 2026-03-24 17:25:00 +06:00
3932bc056e Add user names, team-scoped sandbox guard, and login robustness fixes
- Add name column to users (migration + sqlc regen); propagate through JWT
  claims, auth context, all auth/OAuth handlers, service layer, and frontend
- Sidebar and team page show name instead of email; team page splits Name/Email
  into separate columns
- Block sandbox creation in UI and API when user has no active team context
- loginTeam helper falls back to first active team when no default is set,
  fixing login for invited users with no is_default membership
- Exclude soft-deleted teams from GetDefaultTeamForUser, GetBYOCTeams queries
- Guard host creation against soft-deleted teams in service/host.go
- SwitchTeam re-fetches name from DB instead of trusting stale JWT claim
- Reset teams store on login so stale data from a previous session never persists
- Update openapi.yaml: add name to SignupRequest and AuthResponse schemas
2026-03-24 16:56:10 +06:00
aaeccd32ce Merge pull request 'Frontend consistency and improvements' (#5) from frontend-enhancement into dev
Reviewed-on: wrenn/sandbox#5
2026-03-24 10:00:27 +00:00
915d934c26 Frontend consistency pass: delight, audit, and normalization
Delight (keys page):
- Animated checkmark draw + circle pop on key reveal dialog open
- Key display area pulses accent glow on open to draw eye to "copy this"
- Copy button spring-bounces on successful copy (re-triggers on repeat)
- Empty state key icon floats (iconFloat, now global)
- Row hover uses scaleY left-accent stripe (matches capsules pattern)
- New key row flashes accent on reveal dialog dismiss (matches capsule-born)

Audit fixes (all dashboard pages):
- Page titles standardized to em dash: "Wrenn — X" across all four pages
- formatDate/timeAgo extracted to src/lib/utils/format.ts (string | undefined
  signatures); keys and snapshots now import from there instead of duplicating
- team formatDate gains undefined guard (kept local, date-only format differs)
- spin-once and iconFloat keyframes moved to app.css as globals; scoped copies
  removed from capsules and keys
- Snapshots empty state icon was referencing undefined @keyframes float; fixed
  to iconFloat

Normalization:
- Snapshots table rows: replaced ::before pseudo-element accent (opacity-only,
  single color) with DOM row-stripe element using scaleY transition, type-keyed
  color (green for snapshots, blue for images) — matches capsules pattern
- Create Key dialog: max-w-[400px] → max-w-[420px] to align with form dialogs
- Snapshots count and empty-state heading are now terminology-aware: shows
  "templates/snapshots/images" based on active filter; empty heading for all
  filter reads "No templates yet" instead of "No snapshots yet"

Not done (documented in audit, deferred):
- Sidebar nav items pointing to unimplemented routes (audit, usage, billing,
  notifications, settings) — left as-is, needs product decision
- Dialog max-widths fully normalized beyond Create Key — minor, deferred
- capsules timeAgo not imported from shared util (formatTime differs intentionally)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 15:51:11 +06:00
336080bb6d Merge pull request 'Added team related functionalities' (#4) from team-management into dev
Reviewed-on: wrenn/sandbox#4
2026-03-24 08:58:32 +00:00
90c296f5e1 Polish team page: delight micro-interactions and layout improvements
- Slug + Team ID rows collapsed into a 2-column grid for better density
- "you" badge moved inline with email instead of stacked below it
- Copy checkmark draws itself via SVG stroke-dashoffset animation
- New member row flashes accent-green on entry
- Removed member row slides out smoothly (fly transition)
- Member rows use staggered fly-in on page load
- Team name briefly highlights accent color after a successful rename
- Search result avatars get colorized initials based on email character
2026-03-24 14:56:19 +06:00
bf494f73fc Fix team name blink on navigation by lifting teams into a singleton store
Teams list was fetched on every Sidebar mount (each page navigation),
causing a flash from '…' to the real name on every tab switch. Move teams
into a module-level reactive store (teams.svelte.ts) that fetches once per
session and is shared between Sidebar and the team page.
2026-03-24 14:44:09 +06:00
71a7fdb76f Fix user search to trigger on 3 characters without requiring @
The anti-enumeration guard required @ in the email prefix, causing the
typeahead to silently return nothing until the user typed @. Replace with
a minimum 3-character length check to match the frontend trigger condition.
2026-03-24 14:41:01 +06:00
b3e8bdd171 Refine team management: name chars, danger zone, no-team state
- Allow hyphens, @, and apostrophes in team names (backend regex)
- After delete/leave, switch to next available team instead of logging
  out; if no teams remain, show a toast prompting to create one
- Disable delete/leave button when user has only one team, with
  explanatory hint to create another team first
- Show empty state on /dashboard/team when auth has no team context,
  pointing user to the sidebar to create a team
- Fetch all teams in parallel with team detail on page load to power
  the isLastTeam guard
2026-03-24 14:34:20 +06:00
1e681da738 Add team management frontend
- New /dashboard/team page with inline team name editing, slug/ID copy,
  members table with split-button (remove + make admin/member), add member
  typeahead, and danger zone (delete/leave) with confirmation dialogs
- Sidebar now fetches real teams from API, supports team switching and
  team creation via dialog
- Rename nav item Members → Team, route /dashboard/members → /dashboard/team
- New src/lib/api/team.ts with typed functions for all team endpoints
2026-03-24 14:21:53 +06:00
8e5d426638 Add team management endpoints
- Three-role model (owner/admin/member) with owner protection invariants
- Team CRUD: create, rename (admin+), soft-delete with VM cleanup (owner only)
- Member management: add by email, remove, role updates (admin+), leave
- Switch-team endpoint re-issues JWT after DB membership verification
- User email prefix search for add-member UI autocomplete
- JWT carries role as a hint; all authorization decisions verified from DB
- Team slug: immutable 12-char hex (e.g. a1b2c3-d1e2f3), reserved on soft-delete
- Migration adds slug + deleted_at to teams; backfills existing rows
2026-03-24 13:29:54 +06:00
4e26d7a292 Merge pull request 'Minor frontend enhancement' (#3) from frontend into dev
Reviewed-on: wrenn/sandbox#3
2026-03-24 06:36:17 +00:00
79eba782fb Updated design docs 2026-03-24 12:34:58 +06:00
b786a825d4 Polish dashboard frontend: spacing, copy, resilience
- Increase content padding (p-7→p-8) and table cell padding (px-4→px-5,
  py-3→py-4 for data rows) across capsules, keys, and snapshots pages
- Improve animation performance: wrenn-glow uses opacity instead of
  box-shadow (compositor-only, no paint cost)
- Add prefers-reduced-motion media query covering inline style animations
- Fix OAuth error display on login page (read ?error= param on mount)
- Harden clipboard copy with try-catch and toast fallback
- Improve empty state copy, dialog microcopy, and error messages
- Add retry button to error banners on keys page
- Replace "All systems operational" footer bar with a clean 1px divider
- Fix text truncation on long capsule/snapshot names (min-w-0 + truncate)
2026-03-24 12:33:18 +06:00
71564b202e Merge branch 'main' of git.omukk.dev:wrenn/sandbox into dev 2026-03-24 01:11:43 +06:00
5f0dbadea6 Fix snapshot and sandbox delete consistency
- Snapshot delete: make agent RPC failure a hard error so DB record is
  not removed when files cannot be deleted from disk
- Snapshot overwrite: call agent to delete old files before removing the
  DB record, preventing stale memfile.{uuid} generations from accumulating
  on disk across repeated overwrites
- Sandbox destroy: only swallow CodeNotFound from the agent (sandbox
  already gone / TTL-reaped); any other error now propagates to the caller
  instead of being silently ignored
2026-03-23 02:59:30 +06:00
36782e1b4f Add tini as PID 1, guest clock sync, and fix PATH in guest VMs
- Use tini as PID 1 in wrenn-init.sh so zombie processes are reaped
  and signals are forwarded correctly to envd
- Set standard PATH in wrenn-init.sh so child processes spawned by envd
  can find common binaries (fixes "nice: ls command not found")
- Add envdclient.Init() to POST /init on envd after every boot/resume,
  syncing the guest clock via unix.ClockSettime — critical after snapshot
  resume where the guest clock is frozen
- Run Init in a background goroutine so it doesn't block the CreateSandbox
  RPC response; a slow Init (vCPU busy with envd startup) was causing the
  RPC context to be canceled before the response reached the control plane
- Update rootfs-from-container.sh and update-debug-rootfs.sh to inject
  tini into the rootfs, checking the container image and host first,
  downloading from GitHub releases as fallback
2026-03-23 02:45:27 +06:00
97292ba0bf Added basic frontend (#1)
Reviewed-on: wrenn/sandbox#1
Co-authored-by: pptx704 <rafeed@omukk.dev>
Co-committed-by: pptx704 <rafeed@omukk.dev>
2026-03-22 19:01:38 +00:00
866f3ac012 Consolidate host agent path env vars into single AGENT_FILES_ROOTDIR
Replace AGENT_KERNEL_PATH, AGENT_IMAGES_PATH, AGENT_SANDBOXES_PATH,
AGENT_SNAPSHOTS_PATH, and AGENT_TOKEN_FILE with a single
AGENT_FILES_ROOTDIR (default /var/lib/wrenn) that derives all
subdirectory paths automatically.
2026-03-17 05:59:26 +06:00
2c66959b92 Add host registration, heartbeat, and multi-host management
Implements the full host ↔ control plane connection flow:

- Host CRUD endpoints (POST/GET/DELETE /v1/hosts) with role-based access:
  regular hosts admin-only, BYOC hosts for admins and team owners
- One-time registration token flow: admin creates host → gets token (1hr TTL
  in Redis + Postgres audit trail) → host agent registers with specs → gets
  long-lived JWT (1yr)
- Host agent registration client with automatic spec detection (arch, CPU,
  memory, disk) and token persistence to disk
- Periodic heartbeat (30s) via POST /v1/hosts/{id}/heartbeat with X-Host-Token
  auth and host ID cross-check
- Token regeneration endpoint (POST /v1/hosts/{id}/token) for retry after
  failed registration
- Tag management (add/remove/list) with team-scoped access control
- Host JWT with typ:"host" claim, cross-use prevention in both VerifyJWT and
  VerifyHostJWT
- requireHostToken middleware for host agent authentication
- DB-level race protection: RegisterHost uses AND status='pending' with
  rows-affected check; Redis GetDel for atomic token consume
- Migration for future mTLS support (cert_fingerprint, mtls_enabled columns)
- Host agent flags: --register (one-time token), --address (required ip:port)
- serviceErrToHTTP extended with "forbidden" → 403 mapping
- OpenAPI spec, .env.example, and README updated
2026-03-17 05:51:28 +06:00
e4ead076e3 Add admin users, BYOC teams, hosts schema, and Redis for host registration
Introduce three migrations: admin permissions (is_admin + permissions table),
BYOC team tracking, and multi-host support (hosts, host_tokens, host_tags).
Add Redis to dev infra and wire up client in control plane for ephemeral
host registration tokens. Add go-redis dependency.
2026-03-17 03:26:42 +06:00
1d59b50e49 Remove empty admin UI stubs
The internal/admin/ package was never imported or mounted — just
placeholder files. Removing to avoid confusion before the real
dashboard is built.
2026-03-16 05:39:43 +06:00
f38d5812d1 Extract shared service layer for sandbox, API key, and template operations
Moves business logic from API handlers into internal/service/ so that
both the REST API and the upcoming dashboard can share the same operations
without duplicating code. API handlers now delegate to the service layer
and only handle HTTP-specific concerns (request parsing, response formatting).
2026-03-16 05:39:30 +06:00
931b7d54b3 Add GitHub OAuth login with provider registry
Implement OAuth 2.0 login via GitHub as an alternative to email/password.
Uses a provider registry pattern (internal/auth/oauth/) so adding Google
or other providers later requires only a new Provider implementation.

Flow: GET /v1/auth/oauth/github redirects to GitHub, callback exchanges
the code for a user profile, upserts the user + team atomically, and
redirects to the frontend with a JWT token.

Key changes:
- Migration: make password_hash nullable, add oauth_providers table
- Provider registry with GitHubProvider (profile + email fallback)
- CSRF state cookie with HMAC-SHA256 validation
- Race-safe registration (23505 collision retries as login)
- Startup validation: CP_PUBLIC_URL required when OAuth is configured

Not fully tested — needs integration tests with a real GitHub OAuth app
and end-to-end testing with the frontend callback page.
2026-03-15 06:31:58 +06:00
477d4f8cf6 Add auto-pause TTL and ping endpoint for sandbox inactivity management
Replace the existing auto-destroy TTL behavior with auto-pause: when a
sandbox exceeds its timeout_sec of inactivity, the TTL reaper now pauses
it (snapshot + teardown) instead of destroying it, preserving the ability
to resume later.

Key changes:
- TTL reaper calls Pause instead of Destroy, with fallback to Destroy if
  pause fails (e.g. Firecracker process already gone)
- New PingSandbox RPC resets the in-memory LastActiveAt timer
- New POST /v1/sandboxes/{id}/ping REST endpoint resets both agent memory
  and DB last_active_at
- ListSandboxes RPC now includes auto_paused_sandbox_ids so the reconciler
  can distinguish auto-paused sandboxes from crashed ones in a single call
- Reconciler polls every 5s (was 30s) and marks auto-paused as "paused"
  vs orphaned as "stopped"
- Resume RPC accepts timeout_sec from DB so TTL survives pause/resume cycles
- Reaper checks every 2s (was 10s) and uses a detached context to avoid
  incomplete pauses on app shutdown
- Default timeout_sec changed from 300 to 0 (no auto-pause unless requested)
2026-03-15 05:15:18 +06:00
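The reaper behavior above in sketch form; the interface and method names are assumptions. `context.WithoutCancel` (Go 1.21+) is the standard-library way to inherit values while dropping cancellation.

```go
package reaper

import (
	"context"
	"log/slog"
	"time"
)

// pauser is the slice of the sandbox manager this sketch needs.
type pauser interface {
	TimedOut() []string // sandbox IDs past timeout_sec of inactivity
	Pause(ctx context.Context, id string) error
	Destroy(ctx context.Context, id string) error
}

func run(ctx context.Context, m pauser) {
	t := time.NewTicker(2 * time.Second)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
		}
		// Detached context: inherits values but not cancellation, so an
		// app shutdown mid-pause cannot leave a half-written snapshot.
		rctx := context.WithoutCancel(ctx)
		for _, id := range m.TimedOut() {
			if err := m.Pause(rctx, id); err != nil {
				// e.g. the VMM process is already gone: fall back to destroy.
				slog.Warn("auto-pause failed, destroying", "sandbox", id, "error", err)
				_ = m.Destroy(rctx, id)
			}
		}
	}
}
```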
88246fac2b Fix sandbox lifecycle cleanup and dmsetup remove reliability
- Add retry with backoff to dmsetupRemove for transient "device busy"
  errors caused by kernel not releasing the device immediately after
  Firecracker exits. Only retries on "Device or resource busy"; other
  errors (not found, permission denied) return immediately.

- Thread context.Context through RemoveSnapshot/RestoreSnapshot so
  retries respect cancellation. Use context.Background() in all error
  cleanup paths to prevent cancelled contexts from skipping cleanup
  and leaking dm devices on the host.

- Resume vCPUs on pause failure: if snapshot creation or memfile
  processing fails after freezing the VM, unfreeze vCPUs so the
  sandbox stays usable instead of becoming a frozen zombie.

- Fix resource leaks in Pause when CoW rename or metadata write fails:
  properly clean up network, slot, loop device, and remove from boxes
  map instead of leaving a dead sandbox with leaked host resources.

- Fix Resume WaitUntilReady failure: roll back CoW file to the snapshot
  directory instead of deleting it, preserving the paused state so the
  user can retry.

- Skip m.loops.Release when RemoveSnapshot fails during pause since
  the stale dm device still references the origin loop device.

- Fix incorrect VCPUs placeholder in Resume VMConfig that used memory
  size instead of a sensible default.
2026-03-14 06:42:34 +06:00
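A sketch of the retry policy described above. The busy-only retry condition is from the commit message; the attempt cap and backoff constants are assumptions.

```go
package devicemapper

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// dmsetupRemove retries only the transient "Device or resource busy" error;
// anything else (not found, permission denied) fails immediately.
func dmsetupRemove(ctx context.Context, name string) error {
	backoff := 50 * time.Millisecond
	for attempt := 0; attempt < 6; attempt++ {
		out, err := exec.CommandContext(ctx, "dmsetup", "remove", name).CombinedOutput()
		if err == nil {
			return nil
		}
		if !strings.Contains(string(out), "Device or resource busy") {
			return fmt.Errorf("dmsetup remove %s: %w: %s", name, err, out)
		}
		// The kernel often releases the device a moment after the VMM exits.
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(backoff):
			backoff *= 2
		}
	}
	return fmt.Errorf("dmsetup remove %s: still busy after retries", name)
}
```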
1846168736 Fix device-mapper "Device or resource busy" error on sandbox resume
Pause was logging RemoveSnapshot failures as warnings and continuing,
which left stale dm devices behind. Resume then failed trying to create
a device with the same name.

- Make RemoveSnapshot failure a hard error in Pause (clean up remaining
  resources and return error instead of silently proceeding)
- Add defensive stale device cleanup in RestoreSnapshot before creating
  the new dm device
2026-03-14 03:57:14 +06:00
c92cc29b88 Add authentication, authorization, and team-scoped access control
Implement email/password auth with JWT sessions and API key auth for
sandbox lifecycle. Users get a default team on signup; sandboxes,
snapshots, and API keys are scoped to teams.

- Add user, team, users_teams, and team_api_keys tables (goose migrations)
- Add JWT middleware (Bearer token) for user management endpoints
- Add API key middleware (X-API-Key header, SHA-256 hashed) for sandbox ops
- Add signup/login handlers with transactional user+team creation
- Add API key CRUD endpoints (create/list/delete)
- Replace owner_id with team_id on sandboxes and templates
- Update all handlers to use team-scoped queries
- Add godotenv for .env file loading
- Update OpenAPI spec and test UI with auth flows
2026-03-14 03:57:06 +06:00
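How the X-API-Key check above might look as `net/http` middleware; the store interface and context key are illustrative. Because only the SHA-256 digest is stored, a leaked table cannot be replayed as live keys.

```go
package api

import (
	"context"
	"crypto/sha256"
	"encoding/hex"
	"net/http"
)

type teamKey struct{}

// keyStore is the lookup this sketch needs; the real code queries
// team_api_keys through the database layer.
type keyStore interface {
	TeamForKeyHash(ctx context.Context, hash string) (teamID string, ok bool)
}

func apiKeyAuth(store keyStore, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		key := r.Header.Get("X-API-Key")
		if key == "" {
			http.Error(w, "missing API key", http.StatusUnauthorized)
			return
		}
		// Hash the presented key and compare digests server-side.
		sum := sha256.Sum256([]byte(key))
		teamID, ok := store.TeamForKeyHash(r.Context(), hex.EncodeToString(sum[:]))
		if !ok {
			http.Error(w, "invalid API key", http.StatusUnauthorized)
			return
		}
		// Downstream handlers read the team ID for team-scoped queries.
		ctx := context.WithValue(r.Context(), teamKey{}, teamID)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}
```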
712b77b01c Add script to create rootfs from Docker container 2026-03-13 09:41:58 +06:00
80a99eec87 Add diff snapshots for re-pause to avoid UFFD fault-in storm
Use Firecracker's Diff snapshot type when re-pausing a previously
resumed sandbox, capturing only dirty pages instead of a full memory
dump. Chains up to 10 incremental generations before collapsing back
to a Full snapshot. Multi-generation diff files (memfile.{buildID})
are supported alongside the legacy single-file format in resume,
template creation, and snapshot existence checks.
2026-03-13 09:41:58 +06:00
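The generation policy above reduces to a small counter. This sketch takes the cap of 10 from the commit message; everything else is illustrative.

```go
package snapshot

// maxDiffGenerations comes from the commit message above.
const maxDiffGenerations = 10

type diffChain struct{ generations int }

// nextType returns the snapshot type for the next re-pause. Diff captures
// only pages dirtied since the last snapshot; once the chain is long
// enough that restore must merge many memfile.{buildID} layers, collapse
// back to a Full snapshot and start a fresh chain.
func (c *diffChain) nextType() string {
	if c.generations >= maxDiffGenerations {
		c.generations = 0
		return "Full"
	}
	c.generations++
	return "Diff"
}
```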
a0d635ae5e Fix path traversal in template/snapshot names and network cleanup leaks
Add SafeName validator (allowlist regex) to reject directory traversal
in user-supplied template and snapshot names. Validated at both API
handlers (400 response) and sandbox manager (defense in depth).

Refactor CreateNetwork with rollback slice so partially created
resources (namespace, veth, routes, iptables rules) are cleaned up
on any error. Refactor RemoveNetwork to collect and return errors
instead of silently ignoring them.
2026-03-13 08:40:36 +06:00
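An allowlist validator in the spirit of SafeName (the exact pattern and length cap here are assumptions). The point of allowlisting is that traversal sequences can never match in the first place, so no path normalization is needed.

```go
package validate

import (
	"fmt"
	"regexp"
)

// safeName allows a conservative character set; the pattern and 64-char
// cap are assumptions, not the repo's actual regex.
var safeName = regexp.MustCompile(`^[a-zA-Z0-9][a-zA-Z0-9._-]{0,63}$`)

// SafeName rejects anything that could traverse directories: "/" and "\"
// can never match, so "../../etc" is impossible by construction.
func SafeName(name string) error {
	if !safeName.MatchString(name) {
		return fmt.Errorf("invalid name %q: alphanumeric plus . _ - only, max 64 chars", name)
	}
	return nil
}
```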
63e9132d38 Add device-mapper snapshots, test UI, fix pause ordering and lint errors
- Replace reflink rootfs copy with device-mapper snapshots (shared
  read-only loop device per base template, per-sandbox sparse CoW file)
- Add devicemapper package with create/restore/remove/flatten operations
  and refcounted LoopRegistry for base image loop devices
- Fix pause ordering: destroy VM before removing dm-snapshot to avoid
  "device busy" error (FC must release the dm device first)
- Add test UI at GET /test for sandbox lifecycle management (create,
  pause, resume, destroy, exec, snapshot create/list/delete)
- Fix DirSize to report actual disk usage (stat.Blocks * 512) instead
  of apparent size, so sparse CoW files report correctly
- Add timing logs to pause flow for performance diagnostics
- Fix all lint errors across api, network, vm, uffd, and sandbox packages
- Remove obsolete internal/filesystem package (replaced by devicemapper)
- Update CLAUDE.md with device-mapper architecture documentation
2026-03-13 08:25:40 +06:00
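The per-sandbox CoW wiring above, sketched as shell-outs to `losetup` and `dmsetup`. The dm-snapshot table format (`start length snapshot <origin> <cow> P <chunk>`) is standard device-mapper; the paths, chunk size, and error handling are assumptions.

```go
package devicemapper

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// createSnapshot wires a per-sandbox dm-snapshot over a shared read-only
// origin device. The sandbox's VMM then boots from /dev/mapper/<name>.
func createSnapshot(name, originDev, cowPath string, originBytes int64) error {
	// Sparse CoW file: full apparent size, zero blocks allocated up front.
	f, err := os.Create(cowPath)
	if err != nil {
		return err
	}
	if err := f.Truncate(originBytes); err != nil {
		f.Close()
		return err
	}
	f.Close()

	// Attach the CoW file to its own loop device.
	out, err := exec.Command("losetup", "--find", "--show", cowPath).Output()
	if err != nil {
		return fmt.Errorf("losetup %s: %w", cowPath, err)
	}
	cowDev := strings.TrimSpace(string(out))

	// dm-snapshot table: start length snapshot <origin> <cow> P <chunk>.
	// P makes the snapshot persistent; 8 sectors = 4 KiB chunks (assumed).
	table := fmt.Sprintf("0 %d snapshot %s %s P 8", originBytes/512, originDev, cowDev)
	if out, err := exec.Command("dmsetup", "create", name, "--table", table).CombinedOutput(); err != nil {
		return fmt.Errorf("dmsetup create %s: %w: %s", name, err, out)
	}
	return nil
}
```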
778894b488 Made license related changes 2026-03-13 05:42:10 +06:00
a1bd439c75 Add sandbox snapshot and restore with UFFD lazy memory loading
Implement full snapshot lifecycle: pause (snapshot + free resources),
resume (UFFD-based lazy restore), and named snapshot templates that
can spawn new sandboxes from frozen VM state.

Key changes:
- Snapshot header system with generational diff mapping (inspired by e2b)
- UFFD server for lazy page fault handling during snapshot restore
- Stable rootfs symlink path (/tmp/fc-vm/) for snapshot compatibility
- Templates DB table and CRUD API endpoints (POST/GET/DELETE /v1/snapshots)
- CreateSnapshot/DeleteSnapshot RPCs in hostagent proto
- Reconciler excludes paused sandboxes (expected absent from host agent)
- Snapshot templates lock vcpus/memory to baked-in values
- Proper cleanup of uffd sockets and pause snapshot files on destroy
2026-03-12 09:19:37 +06:00
9b94df7f56 Rewrite CLAUDE.md and README.md
CLAUDE.md: replace bloated 850-line version with focused 230-line
guide. Fix inaccuracies (module path, build dir, Connect RPC vs gRPC,
buf vs protoc). Add detailed architecture with request flows, code
generation workflow, rootfs update process, and two-module gotchas.

README.md: add core deployment instructions (prerequisites, build,
host setup, configuration, running, rootfs workflow).
2026-03-11 06:37:11 +06:00
0c245e9e1c Fix guest VM outbound networking and DNS resolution
Add resolv.conf to wrenn-init so guests can resolve DNS, and fix the
host MASQUERADE rule to match vpeerIP (the actual source after namespace
SNAT) instead of hostIP.
2026-03-11 06:02:31 +06:00
b4d8edb65b Add streaming exec and file transfer endpoints
Add WebSocket-based streaming exec endpoint and streaming file
upload/download endpoints to the control plane API. Includes new
host agent RPC methods (ExecStream, StreamWriteFile, StreamReadFile),
envd client streaming support, and OpenAPI spec updates.
2026-03-11 05:42:42 +06:00
ec3360d9ad Add minimal control plane with REST API, database, and reconciler
- REST API (chi router): sandbox CRUD, exec, pause/resume, file write/read
- PostgreSQL persistence via pgx/v5 + sqlc (sandboxes table with goose migration)
- Connect RPC client to host agent for all VM operations
- Reconciler syncs host agent state with DB every 30s (detects TTL-reaped sandboxes)
- OpenAPI 3.1 spec served at /openapi.yaml, Swagger UI at /docs
- Added WriteFile/ReadFile RPCs to hostagent proto and implementations
- File upload via multipart form, download via JSON body POST
- sandbox_id propagated from control plane to host agent on create
2026-03-10 16:50:12 +06:00
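A sketch of the reconciler loop described above, with illustrative interface names. The property worth noting is that it fails open: an unreachable host agent skips the cycle rather than marking healthy sandboxes stopped.

```go
package reconcile

import (
	"context"
	"log/slog"
	"time"
)

type store interface {
	RunningSandboxIDs(ctx context.Context) ([]string, error)
	MarkStopped(ctx context.Context, id string) error
}

type agent interface {
	ListSandboxes(ctx context.Context) (map[string]bool, error) // live IDs
}

// run compares the DB's view against the host agent's in-memory truth.
// Anything the DB thinks is running but the agent no longer knows about
// (e.g. TTL-reaped) is marked stopped.
func run(ctx context.Context, db store, ag agent) {
	t := time.NewTicker(30 * time.Second)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
		}
		live, err := ag.ListSandboxes(ctx)
		if err != nil {
			slog.Warn("reconcile: host agent unreachable", "error", err)
			continue // transient failure must not stop healthy sandboxes
		}
		ids, err := db.RunningSandboxIDs(ctx)
		if err != nil {
			continue
		}
		for _, id := range ids {
			if !live[id] {
				_ = db.MarkStopped(ctx, id)
			}
		}
	}
}
```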
d7b25b0891 updated license structure 2026-03-10 04:32:29 +06:00
34c89e814d Added basic license information 2026-03-10 04:28:51 +06:00
6f0c365d44 Add host agent RPC server with sandbox lifecycle management
Implement the host agent as a Connect RPC server that orchestrates
sandbox creation, destruction, pause/resume, and command execution.
Includes sandbox manager with TTL-based reaper, network slot allocator,
rootfs cloning, hostagent proto definition with generated stubs, and
test/debug scripts. Fix Firecracker process lifetime bug where VM was
tied to HTTP request context instead of background context.
2026-03-10 03:54:53 +06:00
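The process-lifetime bug fixed above is a classic Go pitfall, sketched here with illustrative paths: a handler's request context is cancelled once the response is written, and `exec.CommandContext` kills its child on cancellation.

```go
package vm

import (
	"context"
	"net/http"
	"os/exec"
)

// createSandbox shows the bug class, not the repo's handler.
func createSandbox(w http.ResponseWriter, r *http.Request) {
	sock := "/tmp/fc-demo.sock" // illustrative path

	// Buggy: the VM dies as soon as this handler returns, because
	// r.Context() is cancelled when the response is written:
	//   cmd := exec.CommandContext(r.Context(), "firecracker", "--api-sock", sock)

	// Fixed: the VM must outlive the request, so detach its lifetime from
	// the request and tear it down explicitly in Destroy instead.
	cmd := exec.CommandContext(context.Background(), "firecracker", "--api-sock", sock)
	if err := cmd.Start(); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusAccepted)
}
```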
c31ce90306 Centralize envd proto source of truth to proto/envd/
Remove duplicate proto files from envd/spec/ and update envd's
buf.gen.yaml to generate stubs from the canonical proto/envd/ location.
Both modules now generate their own Connect RPC stubs from the same
source protos.
2026-03-10 02:49:31 +06:00
7753938044 Add host agent with VM lifecycle, TAP networking, and envd client
Implements Phase 1: boot a Firecracker microVM, execute a command inside
it via envd, and get the output back. Uses raw Firecracker HTTP API via
Unix socket (not the Go SDK) for full control over the VM lifecycle.

- internal/vm: VM manager with create/pause/resume/destroy, Firecracker
  HTTP client, process launcher with unshare + ip netns exec isolation
- internal/network: per-sandbox network namespace with veth pair, TAP
  device, NAT rules, and IP forwarding
- internal/envdclient: Connect RPC client for envd process/filesystem
  services with health check retry
- cmd/host-agent: demo binary that boots a VM, runs "echo hello", prints
  output, and cleans up
- proto/envd: canonical proto files with buf + protoc-gen-connect-go
  code generation
- images/wrenn-init.sh: minimal PID 1 init script for guest VMs
- CLAUDE.md: updated architecture to reflect TAP networking (not vsock)
  and Firecracker HTTP API (not Go SDK)
2026-03-10 00:06:47 +06:00
a3898d68fb Port envd from e2b with internalized shared packages and Connect RPC
- Copy envd source from e2b-dev/infra, internalize shared dependencies
  into envd/internal/shared/ (keys, filesystem, id, smap, utils)
- Switch from gRPC to Connect RPC for all envd services
- Update module paths to git.omukk.dev/wrenn/{sandbox,sandbox/envd}
- Add proto specs (process, filesystem) with buf-based code generation
- Implement full envd: process exec, filesystem ops, port forwarding,
  cgroup management, MMDS integration, and HTTP API
- Update main module dependencies (firecracker SDK, pgx, goose, etc.)
- Remove placeholder .gitkeep files replaced by real implementations
2026-03-09 21:03:19 +06:00
238 changed files with 11725 additions and 25239 deletions

.env.example

@ -16,7 +16,7 @@ WRENN_HOST_LISTEN_ADDR=:50051
WRENN_HOST_INTERFACE=eth0
WRENN_CP_URL=http://localhost:9725
WRENN_DEFAULT_ROOTFS_SIZE=5Gi
WRENN_FIRECRACKER_BIN=/usr/local/bin/firecracker
WRENN_CH_BIN=/usr/local/bin/cloud-hypervisor
# Auth
JWT_SECRET=

.gitignore

@ -36,10 +36,14 @@ go.work.sum
e2b/
.impeccable.md
.gstack
.mcp.json
## Builds
builds/
## Rust
envd-rs/target/
## Frontend
frontend/node_modules/
frontend/.svelte-kit/
@ -49,3 +53,6 @@ frontend/build/
internal/dashboard/static/*
!internal/dashboard/static/.gitkeep
.dual-graph/
# Added by code-review-graph
.code-review-graph/
.mcp.json

CLAUDE.md

@ -4,7 +4,7 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
## Project Overview
Wrenn Sandbox is a microVM-based code execution platform. Users create isolated sandboxes (Firecracker microVMs), run code inside them, and get output back via SDKs. Think E2B but with persistent sandboxes, pool-based pricing, and a single-binary deployment story.
Wrenn Sandbox is a microVM-based code execution platform. Users create isolated sandboxes (Cloud Hypervisor microVMs), run code inside them, and get output back via SDKs. Think E2B but with persistent sandboxes, pool-based pricing, and a single-binary deployment story.
## Build & Development Commands
@ -14,7 +14,7 @@ All commands go through the Makefile. Never use raw `go build` or `go run`.
make build # Build all binaries → builds/
make build-cp # Control plane only
make build-agent # Host agent only
make build-envd # envd static binary (verified statically linked)
make build-envd # envd static binary (Rust, musl, verified statically linked)
make build-frontend # SvelteKit dashboard → frontend/build/ (served by Caddy)
make dev # Full local dev: infra + migrate + control plane
@ -23,13 +23,13 @@ make dev-down # Stop dev infra
make dev-cp # Control plane with hot reload (if air installed)
make dev-frontend # Vite dev server with HMR (port 5173)
make dev-agent # Host agent (sudo required)
make dev-envd # envd in TCP debug mode
make dev-envd # envd in debug mode (port 49983)
make check # fmt + vet + lint + test (CI order)
make test # Unit tests: go test -race -v ./internal/...
make test-integration # Integration tests (require host agent + Firecracker)
make fmt # gofmt both modules
make vet # go vet both modules
make test-integration # Integration tests (require host agent + Cloud Hypervisor)
make fmt # gofmt
make vet # go vet
make lint # golangci-lint
make migrate-up # Apply pending migrations
@ -38,8 +38,8 @@ make migrate-create name=xxx # Scaffold new goose migration (never create manua
make migrate-reset # Drop + re-apply all
make generate # Proto (buf) + sqlc codegen
make proto # buf generate for all proto dirs
make tidy # go mod tidy both modules
make proto # buf generate for proto dirs
make tidy # go mod tidy
```
Run a single test: `go test -race -v -run TestName ./internal/path/...`
@ -50,15 +50,15 @@ Run a single test: `go test -race -v -run TestName ./internal/path/...`
User SDK → HTTPS/WS → Control Plane → Connect RPC → Host Agent → HTTP/Connect RPC over TAP → envd (inside VM)
```
**Three binaries, two Go modules:**
**Three binaries:**
| Binary | Module | Entry point | Runs as |
|--------|--------|-------------|---------|
| wrenn-cp | `git.omukk.dev/wrenn/wrenn` | `cmd/control-plane/main.go` | Unprivileged |
| wrenn-agent | `git.omukk.dev/wrenn/wrenn` | `cmd/host-agent/main.go` | `wrenn` user with capabilities (SYS_ADMIN, NET_ADMIN, NET_RAW, SYS_PTRACE, KILL, DAC_OVERRIDE, MKNOD) via setcap; also accepts root |
| envd | `git.omukk.dev/wrenn/wrenn/envd` (standalone `envd/go.mod`) | `envd/main.go` | PID 1 inside guest VM |
| Binary | Language | Entry point | Runs as |
|--------|----------|-------------|---------|
| wrenn-cp | Go (`git.omukk.dev/wrenn/wrenn`) | `cmd/control-plane/main.go` | Unprivileged |
| wrenn-agent | Go (`git.omukk.dev/wrenn/wrenn`) | `cmd/host-agent/main.go` | `wrenn` user with capabilities (SYS_ADMIN, NET_ADMIN, NET_RAW, SYS_PTRACE, KILL, DAC_OVERRIDE, MKNOD) via setcap; also accepts root |
| envd | Rust (`envd-rs/`) | `envd-rs/src/main.rs` | PID 1 inside guest VM |
envd is a **completely independent Go module**. It is never imported by the main module. The only connection is the protobuf contract. It compiles to a static binary baked into rootfs images.
envd is a standalone Rust binary (Tokio + Axum + connectrpc-rs). It is completely independent from the Go module — the only connection is the protobuf contract. It compiles to a statically linked musl binary baked into rootfs images.
**Key architectural invariant:** The host agent is **stateful** (in-memory `boxes` map is the source of truth for running VMs). The control plane is **stateless** (all persistent state in PostgreSQL). The reconciler (`internal/api/reconciler.go`) bridges the gap — it periodically compares DB records against the host agent's live state and marks orphaned sandboxes as "stopped".
@ -92,27 +92,31 @@ Startup (`cmd/host-agent/main.go`) wires: root/capabilities check → enable IP
- **RPC Server** (`internal/hostagent/server.go`): implements `hostagentv1connect.HostAgentServiceHandler`. Thin wrapper — every method delegates to `sandbox.Manager`. Maps Connect error codes on return.
- **Sandbox Manager** (`internal/sandbox/manager.go`): the core orchestration layer. Maintains in-memory state in `boxes map[string]*sandboxState` (protected by `sync.RWMutex`). Each `sandboxState` holds a `models.Sandbox`, a `*network.Slot`, and an `*envdclient.Client`. Runs a TTL reaper (every 10s) that auto-destroys timed-out sandboxes.
- **VM Manager** (`internal/vm/manager.go`, `fc.go`, `config.go`): manages Firecracker processes. Uses raw HTTP API over Unix socket (`/tmp/fc-{sandboxID}.sock`), not the firecracker-go-sdk Machine type. Launches Firecracker via `unshare -m` + `ip netns exec`. Configures VM via PUT to `/boot-source`, `/drives/rootfs`, `/network-interfaces/eth0`, `/machine-config`, then starts with PUT `/actions`.
- **VM Manager** (`internal/vm/manager.go`, `ch.go`, `config.go`): manages Cloud Hypervisor processes. Uses raw HTTP API over Unix socket (`/tmp/ch-{sandboxID}.sock`). Launches Cloud Hypervisor via `unshare -m` + `ip netns exec` with `--api-socket path=...`. Configures and boots VM via `PUT /vm.create` + `PUT /vm.boot`. Snapshot restore uses `--restore source_url=file://...`.
- **Network** (`internal/network/setup.go`, `allocator.go`): per-sandbox network namespace with veth pair + TAP device. See Networking section below.
- **Device Mapper** (`internal/devicemapper/devicemapper.go`): CoW rootfs via device-mapper snapshots. Shared read-only loop devices per base template (refcounted `LoopRegistry`), per-sandbox sparse CoW files, dm-snapshot create/restore/remove/flatten operations.
- **envd Client** (`internal/envdclient/client.go`, `health.go`): dual interface to the guest agent. Connect RPC for streaming process exec (`process.Start()` bidirectional stream). Plain HTTP for file operations (POST/GET `/files?path=...&username=root`). Health check polls `GET /health` every 100ms until ready (30s timeout).
### envd (Guest Agent)
**Module:** `envd/` with its own `go.mod` (`git.omukk.dev/wrenn/wrenn/envd`)
**Directory:** `envd-rs/` — standalone Rust crate
Runs as PID 1 inside the microVM via `wrenn-init.sh` (mounts procfs/sysfs/dev, sets hostname, writes resolv.conf, then execs envd). Extracted from E2B (Apache 2.0), with shared packages internalized into `envd/internal/shared/`. Listens on TCP `0.0.0.0:49983`.
Runs as PID 1 inside the microVM via `wrenn-init.sh` (mounts procfs/sysfs/dev, sets hostname, writes resolv.conf, then execs envd via tini). Built with `cargo build --release --target x86_64-unknown-linux-musl`. Listens on TCP `0.0.0.0:49983`.
- **ProcessService**: start processes, stream stdout/stderr, signal handling, PTY support
- **FilesystemService**: stat/list/mkdir/move/remove/watch files
- **Health**: GET `/health`
- **Stack**: Tokio (async runtime) + Axum (HTTP) + connectrpc-rs (Connect protocol RPC)
- **ProcessService** (Connect RPC): start/connect/list/signal processes, stream stdout/stderr, PTY support
- **FilesystemService** (Connect RPC): stat/list/mkdir/move/remove/watch files
- **HTTP endpoints**: GET `/health`, GET `/metrics`, POST `/init`, POST `/snapshot/prepare`, GET/POST `/files`
- **Proto codegen**: `connectrpc-build` compiles `proto/envd/*.proto` at `cargo build` time via `build.rs` — no committed stubs
- **Build**: `make build-envd` → static musl binary in `builds/envd`
- **Dev**: `make dev-envd` → `cargo run -- --port 49983`
### Dashboard (Frontend)
**Directory:** `frontend/` — standalone SvelteKit app (Svelte 5, runes mode)
- **Stack**: SvelteKit + `adapter-static` + Tailwind CSS v4 + Bits UI (headless accessible components)
- **Package manager**: pnpm
- **Package manager**: Bun
- **Routing**: SvelteKit file-based routing under `frontend/src/routes/`
- **Routing layout**: `/login` and `/signup` at root, authenticated pages under `/dashboard/*` (e.g. `/dashboard/capsules`, `/dashboard/keys`)
- **Build output**: `frontend/build/` — static files served by Caddy
@ -160,7 +164,7 @@ HIBERNATED → RUNNING (cold snapshot resume, slower)
**Sandbox creation** (`POST /v1/capsules`):
1. API handler generates sandbox ID, inserts into DB as "pending"
2. RPC `CreateSandbox` → host agent → `sandbox.Manager.Create()`
3. Manager: resolve base rootfs → acquire shared loop device → create dm-snapshot (sparse CoW file) → allocate network slot → `CreateNetwork()` (netns + veth + tap + NAT) → `vm.Create()` (start Firecracker with `/dev/mapper/wrenn-{id}`, configure via HTTP API, boot) → `envdclient.WaitUntilReady()` (poll /health) → store in-memory state
3. Manager: resolve base rootfs → acquire shared loop device → create dm-snapshot (sparse CoW file) → allocate network slot → `CreateNetwork()` (netns + veth + tap + NAT) → `vm.Create()` (start Cloud Hypervisor with `/dev/mapper/wrenn-{id}`, configure via `PUT /vm.create` + `PUT /vm.boot`) → `envdclient.WaitUntilReady()` (poll /health) → store in-memory state
4. API handler updates DB to "running" with host_ip
**Command execution** (`POST /v1/capsules/{id}/exec`):
@ -185,17 +189,16 @@ Routes defined in `internal/api/server.go`, handlers in `internal/api/handlers_*
### Proto (Connect RPC)
Proto source of truth is `proto/envd/*.proto` and `proto/hostagent/*.proto`. Run `make proto` to regenerate. Three `buf.gen.yaml` files control output:
Proto source of truth is `proto/envd/*.proto` and `proto/hostagent/*.proto`. Run `make proto` to regenerate Go stubs. Two `buf.gen.yaml` files control Go output:
| buf.gen.yaml location | Generates to | Used by |
|---|---|---|
| `proto/envd/buf.gen.yaml` | `proto/envd/gen/` | Main module (host agent's envd client) |
| `proto/hostagent/buf.gen.yaml` | `proto/hostagent/gen/` | Main module (control plane ↔ host agent) |
| `envd/spec/buf.gen.yaml` | `envd/internal/services/spec/` | envd module (guest agent server) |
The envd `buf.gen.yaml` reads from `../../proto/envd/` (same source protos) but generates into envd's own module. This means the same `.proto` files produce two independent sets of Go stubs — one for each Go module.
The Rust envd (`envd-rs/`) generates its own protobuf stubs at `cargo build` time via `connectrpc-build` in `envd-rs/build.rs`, reading from the same `proto/envd/*.proto` sources. No committed Rust stubs — they live in `OUT_DIR`.
To add a new RPC method: edit the `.proto` file → `make proto` → implement the handler on both sides.
To add a new RPC method: edit the `.proto` file → `make proto` (Go stubs) → rebuild envd-rs (Rust stubs generated automatically) → implement the handler on both sides.
### sqlc
@ -206,10 +209,10 @@ To add a new query: add it to the appropriate `.sql` file in `db/queries/` → `
## Key Technical Decisions
- **Connect RPC** (not gRPC) for all RPC communication between components
- **Buf + protoc-gen-connect-go** for code generation (not protoc-gen-go-grpc)
- **Raw Firecracker HTTP API** via Unix socket (not firecracker-go-sdk Machine type)
- **Buf + protoc-gen-connect-go** for Go code generation; **connectrpc-build** for Rust code generation in envd
- **Raw Cloud Hypervisor HTTP API** via Unix socket (`PUT /vm.create` + `PUT /vm.boot`)
- **TAP networking** (not vsock) for host-to-envd communication
- **Device-mapper snapshots** for rootfs CoW — shared read-only loop device per base template, per-sandbox sparse CoW file, Firecracker gets `/dev/mapper/wrenn-{id}`
- **Device-mapper snapshots** for rootfs CoW — shared read-only loop device per base template, per-sandbox sparse CoW file, Cloud Hypervisor gets `/dev/mapper/wrenn-{id}`
- **PostgreSQL** via pgx/v5 + sqlc (type-safe query generation). Goose for migrations (plain SQL, up/down)
- **Dashboard**: SvelteKit (Svelte 5, adapter-static) + Tailwind CSS v4 + Bits UI. Built to static files in `frontend/build/`, served by Caddy (not embedded in the Go binary)
- **Lago** for billing (external service, not in this codebase)
@ -218,19 +221,15 @@ To add a new query: add it to the appropriate `.sql` file in `db/queries/` → `
- **Go style**: `gofmt`, `go vet`, `context.Context` everywhere, errors wrapped with `fmt.Errorf("action: %w", err)`, `slog` for logging, no global state
- **Naming**: Sandbox IDs `sb-` + 8 hex, API keys `wrn_` + 32 chars, Host IDs `host-` + 8 hex
- **Dependencies**: Use `go get` to add deps, never hand-edit go.mod. For envd deps: `cd envd && go get ...` (separate module)
- **Dependencies**: Use `go get` to add Go deps, never hand-edit go.mod. For envd-rs deps: edit `envd-rs/Cargo.toml`
- **Generated code**: Always commit generated code (proto stubs, sqlc). Never add generated code to .gitignore
- **Migrations**: Always use `make migrate-create name=xxx`, never create migration files manually
- **Testing**: Table-driven tests for handlers and state machine transitions
### Two-module gotcha
The main module (`go.mod`) and envd (`envd/go.mod`) are fully independent. `make tidy`, `make fmt`, `make vet` already operate on both. But when adding dependencies manually, remember to target the correct module (`cd envd && go get ...` for envd deps). `make proto` also generates stubs for both modules from the same proto sources.
## Rootfs & Guest Init
- **wrenn-init** (`images/wrenn-init.sh`): the PID 1 init script baked into every rootfs. Mounts virtual filesystems, sets hostname, writes `/etc/resolv.conf`, then execs envd.
- **Updating the rootfs** after changing envd or wrenn-init: `bash scripts/update-debug-rootfs.sh [rootfs_path]`. This builds envd via `make build-envd`, mounts the rootfs image, copies in the new binaries, and unmounts. Defaults to `/var/lib/wrenn/images/minimal.ext4`.
- **Updating the rootfs** after changing envd or wrenn-init: `bash scripts/update-minimal-rootfs.sh`. This builds envd via `make build-envd` (Rust → static musl binary), mounts the rootfs image, copies in the new binaries, and unmounts. Defaults to `/var/lib/wrenn/images/minimal.ext4`.
- Rootfs images are minimal debootstrap — no systemd, no coreutils beyond busybox. Use `/bin/sh -c` for shell builtins inside the guest.
## Fixed Paths (on host machine)
@ -238,19 +237,19 @@ The main module (`go.mod`) and envd (`envd/go.mod`) are fully independent. `make
- Kernel: `/var/lib/wrenn/kernels/vmlinux`
- Base rootfs images: `/var/lib/wrenn/images/{template}.ext4`
- Sandbox clones: `/var/lib/wrenn/sandboxes/`
- Firecracker: `/usr/local/bin/firecracker` (e2b's fork of firecracker)
- Cloud Hypervisor: `/usr/local/bin/cloud-hypervisor`
## Design Context
### Users
Developers across the full spectrum — solo engineers building side projects, startup teams integrating sandboxed execution into products, and platform/infra engineers at larger organizations running production workloads on Firecracker microVMs. They arrive with context: they know what a process is, what a rootfs is, what a TTY means. The interface must feel at home for all three: approachable enough not to intimidate a hacker, precise enough to earn the trust of a production ops team. Never condescend, never oversimplify. Trust the user to understand what they're looking at.
Developers across the full spectrum — solo engineers building side projects, startup teams integrating sandboxed execution into products, and platform/infra engineers at larger organizations running production workloads on Cloud Hypervisor microVMs. They arrive with context: they know what a process is, what a rootfs is, what a TTY means. The interface must feel at home for all three: approachable enough not to intimidate a hacker, precise enough to earn the trust of a production ops team. Never condescend, never oversimplify. Trust the user to understand what they're looking at.
**Primary job to be done:** Understand what's running, act on it confidently, and get back to code.
### Brand Personality
**Precise. Warm. Uncompromising.**
Wrenn is an engineer's favorite tool — built with visible care, not assembled from defaults. It runs real infrastructure (Firecracker microVMs), so the UI should reflect that seriousness without becoming cold or corporate. The warmth comes from the typography and color palette; the precision comes from hierarchy, density, and data fidelity.
Wrenn is an engineer's favorite tool — built with visible care, not assembled from defaults. It runs real infrastructure (Cloud Hypervisor microVMs), so the UI should reflect that seriousness without becoming cold or corporate. The warmth comes from the typography and color palette; the precision comes from hierarchy, density, and data fidelity.
Emotional goal: **in control.** Users leave a session with full confidence in what's running, what happened, and what comes next. Nothing is hidden, nothing is ambiguous.
@ -372,3 +371,42 @@ All values are CSS custom properties in `frontend/src/app.css`.
4. **Legible at speed.** Users scan dashboards in seconds. Strong typographic contrast (serif h1, mono IDs, sans body), consistent patterns, and predictable placement let users orientate instantly without reading everything.
5. **Craft signals trust.** For infrastructure that runs production code, the quality of the UI is a proxy for the quality of the product. Pixel-level decisions matter. Polish is not decoration — it's a trust signal.
<!-- code-review-graph MCP tools -->
## MCP Tools: code-review-graph
**IMPORTANT: This project has a knowledge graph. ALWAYS use the
code-review-graph MCP tools BEFORE using Grep/Glob/Read to explore
the codebase.** The graph is faster, cheaper (fewer tokens), and gives
you structural context (callers, dependents, test coverage) that file
scanning cannot.
### When to use graph tools FIRST
- **Exploring code**: `semantic_search_nodes` or `query_graph` instead of Grep
- **Understanding impact**: `get_impact_radius` instead of manually tracing imports
- **Code review**: `detect_changes` + `get_review_context` instead of reading entire files
- **Finding relationships**: `query_graph` with callers_of/callees_of/imports_of/tests_for
- **Architecture questions**: `get_architecture_overview` + `list_communities`
Fall back to Grep/Glob/Read **only** when the graph doesn't cover what you need.
### Key Tools
| Tool | Use when |
|------|----------|
| `detect_changes` | Reviewing code changes — gives risk-scored analysis |
| `get_review_context` | Need source snippets for review — token-efficient |
| `get_impact_radius` | Understanding blast radius of a change |
| `get_affected_flows` | Finding which execution paths are impacted |
| `query_graph` | Tracing callers, callees, imports, tests, dependencies |
| `semantic_search_nodes` | Finding functions/classes by name or keyword |
| `get_architecture_overview` | Understanding high-level codebase structure |
| `refactor_tool` | Planning renames, finding dead code |
### Workflow
1. The graph auto-updates on file changes (via hooks).
2. Use `detect_changes` for code review.
3. Use `get_affected_flows` to understand impact.
4. Use `query_graph` pattern="tests_for" to check coverage.

Makefile

@ -2,12 +2,10 @@
# Variables
# ═══════════════════════════════════════════════════
DATABASE_URL ?= postgres://wrenn:wrenn@localhost:5432/wrenn?sslmode=disable
GOBIN := $(shell pwd)/builds
ENVD_DIR := envd
BIN_DIR := $(shell pwd)/builds
COMMIT := $(shell git rev-parse --short HEAD 2>/dev/null || echo "unknown")
VERSION_CP := $(shell cat VERSION_CP 2>/dev/null | tr -d '[:space:]' || echo "0.0.0-dev")
VERSION_AGENT := $(shell cat VERSION_AGENT 2>/dev/null | tr -d '[:space:]' || echo "0.0.0-dev")
VERSION_ENVD := $(shell cat envd/VERSION 2>/dev/null | tr -d '[:space:]' || echo "0.0.0-dev")
LDFLAGS := -s -w
# ═══════════════════════════════════════════════════
@ -18,19 +16,23 @@ LDFLAGS := -s -w
build: build-cp build-agent build-envd
build-frontend:
cd frontend && pnpm install --frozen-lockfile && pnpm build
cd frontend && bun install --frozen-lockfile && bun run build
build-cp:
go build -v -ldflags="$(LDFLAGS) -X main.version=$(VERSION_CP) -X main.commit=$(COMMIT)" -o $(GOBIN)/wrenn-cp ./cmd/control-plane
go build -v -ldflags="$(LDFLAGS) -X main.version=$(VERSION_CP) -X main.commit=$(COMMIT)" -o $(BIN_DIR)/wrenn-cp ./cmd/control-plane
build-agent:
go build -v -ldflags="$(LDFLAGS) -X main.version=$(VERSION_AGENT) -X main.commit=$(COMMIT)" -o $(GOBIN)/wrenn-agent ./cmd/host-agent
go build -v -ldflags="$(LDFLAGS) -X main.version=$(VERSION_AGENT) -X main.commit=$(COMMIT)" -o $(BIN_DIR)/wrenn-agent ./cmd/host-agent
build-envd:
cd $(ENVD_DIR) && CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
go build -ldflags="$(LDFLAGS) -X main.Version=$(VERSION_ENVD) -X main.commitSHA=$(COMMIT)" -o $(GOBIN)/envd .
@file $(GOBIN)/envd | grep -q "statically linked" || \
(echo "ERROR: envd is not statically linked!" && exit 1)
cd envd-rs && ENVD_COMMIT=$(COMMIT) cargo build --release --target x86_64-unknown-linux-musl
@cp envd-rs/target/x86_64-unknown-linux-musl/release/envd $(BIN_DIR)/envd
@readelf -h $(BIN_DIR)/envd | grep -q 'Type:.*DYN' && \
readelf -d $(BIN_DIR)/envd | grep -q 'FLAGS_1.*PIE' && \
! readelf -d $(BIN_DIR)/envd | grep -q '(NEEDED)' && \
{ ! readelf -lW $(BIN_DIR)/envd | grep -q 'Requesting program interpreter' || \
readelf -lW $(BIN_DIR)/envd | grep -Fq '[Requesting program interpreter: /lib/ld-musl-x86_64.so.1]'; } || \
(echo "ERROR: envd must be PIE, have no DT_NEEDED shared libs, and either have no interpreter or use /lib/ld-musl-x86_64.so.1" && exit 1)
# ═══════════════════════════════════════════════════
# Development
@ -57,11 +59,10 @@ dev-agent:
sudo go run ./cmd/host-agent
dev-frontend:
cd frontend && pnpm dev --port 5173 --host 0.0.0.0
cd frontend && bun run dev --port 5173 --host 0.0.0.0
dev-envd:
cd $(ENVD_DIR) && go run . --debug --listen-tcp :3002
cd envd-rs && cargo run -- --port 49983
# ═══════════════════════════════════════════════════
# Database (goose)
@ -94,7 +95,6 @@ generate: proto sqlc
proto:
cd proto/envd && buf generate
cd proto/hostagent && buf generate
cd $(ENVD_DIR)/spec && buf generate
sqlc:
sqlc generate
@ -106,17 +106,16 @@ sqlc:
fmt:
gofmt -w .
cd $(ENVD_DIR) && gofmt -w .
lint:
golangci-lint run ./...
vet:
go vet ./...
cd $(ENVD_DIR) && go vet ./...
test:
go test -race -v ./internal/...
cd envd-rs && cargo test
test-integration:
go test -race -v -tags=integration ./tests/integration/...
@ -125,7 +124,6 @@ test-all: test test-integration
tidy:
go mod tidy
cd $(ENVD_DIR) && go mod tidy
## Run all quality checks in CI order
check: fmt vet lint test
@ -155,8 +153,8 @@ setup-host:
sudo bash scripts/setup-host.sh
install: build
sudo cp $(GOBIN)/wrenn-cp /usr/local/bin/
sudo cp $(GOBIN)/wrenn-agent /usr/local/bin/
sudo cp $(BIN_DIR)/wrenn-cp /usr/local/bin/
sudo cp $(BIN_DIR)/wrenn-agent /usr/local/bin/
sudo cp deploy/systemd/*.service /etc/systemd/system/
sudo systemctl daemon-reload
@ -167,7 +165,7 @@ install: build
clean:
rm -rf builds/
cd $(ENVD_DIR) && rm -f envd
cd envd-rs && cargo clean
# ═══════════════════════════════════════════════════
# Help
@ -183,11 +181,11 @@ help:
@echo " make dev-cp Control plane (hot reload if air installed)"
@echo " make dev-frontend Vite dev server with HMR (port 5173)"
@echo " make dev-agent Host agent (sudo required)"
@echo " make dev-envd envd in TCP debug mode"
@echo " make dev-envd envd in debug mode (port 49983)"
@echo ""
@echo " make build Build all binaries → builds/"
@echo " make build-frontend Build SvelteKit dashboard → frontend/build/"
@echo " make build-envd Build envd static binary"
@echo " make build-envd Build envd static binary (Rust, musl)"
@echo ""
@echo " make migrate-up Apply migrations"
@echo " make migrate-create name=xxx New migration"

NOTICE

@ -1,19 +0,0 @@
Wrenn Sandbox
Copyright (c) 2026 M/S Omukk, Bangladesh
This project includes software derived from the following project:
Project: e2b infra
Repository: https://github.com/e2b-dev/infra
The following files and directories in this repository contain code derived from the above project:
- envd/
- proto/envd/*.proto
- internal/snapshot/
- internal/uffd/
Modifications to this code were made by M/S Omukk.
Copyright (c) 2023 FoundryLabs, Inc.
Modifications Copyright (c) 2026 M/S Omukk, Bangladesh

README.md

@ -5,10 +5,11 @@ Secure infrastructure for AI
## Prerequisites
- Linux host with `/dev/kvm` access (bare metal or nested virt)
- Firecracker binary at `/usr/local/bin/firecracker`
- Cloud Hypervisor binary at `/usr/local/bin/cloud-hypervisor`
- PostgreSQL
- Go 1.25+
- pnpm (for frontend)
- Rust 1.88+ with `x86_64-unknown-linux-musl` target (`rustup target add x86_64-unknown-linux-musl`)
- Bun (for frontend)
- Docker (for dev infra and rootfs builds)
## Build


@ -1 +1 @@
0.1.0
0.2.0


@ -1 +1 @@
0.1.3
0.2.0

cmd/host-agent/main.go

@ -80,6 +80,25 @@ func main() {
os.Exit(1)
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Register with the control plane before touching rootfs images. If the
// agent can't reach the CP there's no point inflating images (and crashing
// afterward would leave them in the expanded state).
creds, err := hostagent.Register(ctx, hostagent.RegistrationConfig{
CPURL: cpURL,
RegistrationToken: *registrationToken,
TokenFile: credsFile,
Address: *advertiseAddr,
})
if err != nil {
slog.Error("host registration failed", "error", err)
os.Exit(1)
}
slog.Info("host registered", "host_id", creds.HostID)
// Parse default rootfs size from env (e.g. "5G", "2Gi", "1000M").
defaultRootfsSizeMB := sandbox.DefaultDiskSizeMB
if sizeStr := os.Getenv("WRENN_DEFAULT_ROOTFS_SIZE"); sizeStr != "" {
@ -107,48 +126,47 @@ func main() {
}
slog.Info("resolved kernel", "version", kernelVersion, "path", kernelPath)
// Detect firecracker version.
fcBin := envOrDefault("WRENN_FIRECRACKER_BIN", "/usr/local/bin/firecracker")
fcVersion, err := sandbox.DetectFirecrackerVersion(fcBin)
// Detect cloud-hypervisor version.
chBin := envOrDefault("WRENN_CH_BIN", "/usr/local/bin/cloud-hypervisor")
chVersion, err := sandbox.DetectCHVersion(chBin)
if err != nil {
slog.Error("failed to detect firecracker version", "error", err)
slog.Error("failed to detect cloud-hypervisor version", "error", err)
os.Exit(1)
}
slog.Info("resolved firecracker", "version", fcVersion, "path", fcBin)
slog.Info("resolved cloud-hypervisor", "version", chVersion, "path", chBin)
cfg := sandbox.Config{
WrennDir: rootDir,
DefaultRootfsSizeMB: defaultRootfsSizeMB,
KernelPath: kernelPath,
KernelVersion: kernelVersion,
FirecrackerBin: fcBin,
FirecrackerVersion: fcVersion,
VMMBin: chBin,
VMMVersion: chVersion,
AgentVersion: version,
}
mgr := sandbox.New(cfg)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Set up lifecycle event callback sender so autonomous events
// (auto-pause, auto-destroy) are pushed to the CP proactively.
cb := hostagent.NewCallbackSender(cpURL, credsFile, creds.HostID)
mgr.SetEventSender(hostagent.NewEventSender(cb))
mgr.StartTTLReaper(ctx)
// Register with the control plane and start heartbeating.
creds, err := hostagent.Register(ctx, hostagent.RegistrationConfig{
CPURL: cpURL,
RegistrationToken: *registrationToken,
TokenFile: credsFile,
Address: *advertiseAddr,
})
if err != nil {
slog.Error("host registration failed", "error", err)
os.Exit(1)
}
slog.Info("host registered", "host_id", creds.HostID)
// httpServer is declared here so the shutdown func can reference it.
httpServer := &http.Server{Addr: listenAddr}
// ReadTimeout/WriteTimeout are intentionally omitted — they would kill
// long-lived Connect RPC streams and WebSocket proxy connections.
httpServer := &http.Server{
Addr: listenAddr,
ReadHeaderTimeout: 10 * time.Second,
IdleTimeout: 620 * time.Second, // > typical LB upstream timeout (600s)
// Disable HTTP/2: empty non-nil map prevents Go from registering
// the h2 ALPN token. Connect RPC works over HTTP/1.1; HTTP/2
// multiplexing causes HOL blocking when a slow sandbox RPC stalls
// the shared connection.
TLSNextProto: make(map[string]func(*http.Server, *tls.Conn, http.Handler)),
}
// mTLS is mandatory — refuse to start without a valid certificate.
var certStore hostagent.CertStore
@ -193,6 +211,7 @@ func main() {
path, handler := hostagentv1connect.NewHostAgentServiceHandler(srv)
proxyHandler := hostagent.NewProxyHandler(mgr)
mgr.SetOnDestroy(proxyHandler.EvictProxy)
mux := http.NewServeMux()
mux.Handle(path, handler)
@ -212,8 +231,9 @@ func main() {
func() {
doShutdown("host deleted from CP")
},
// onCredsRefreshed: hot-swap the TLS certificate after a JWT refresh.
// onCredsRefreshed: hot-swap the TLS certificate and update callback JWT.
func(tf *hostagent.TokenFile) {
cb.UpdateJWT(tf.JWT)
if tf.CertPEM == "" || tf.KeyPEM == "" {
return
}
@ -225,12 +245,16 @@ func main() {
},
)
// Graceful shutdown on SIGINT/SIGTERM.
// Graceful shutdown on SIGINT/SIGTERM. A second signal force-exits
// so the operator can always kill the process if shutdown hangs.
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
go func() {
sig := <-sigCh
doShutdown("signal: " + sig.String())
go doShutdown("signal: " + sig.String())
sig = <-sigCh
slog.Error("received second signal, force exiting", "signal", sig.String())
os.Exit(1)
}()
slog.Info("host agent starting", "addr", listenAddr, "host_id", creds.HostID, "version", version, "commit", commit)
@ -272,7 +296,7 @@ func checkPrivileges() error {
name string
}{
{1, "CAP_DAC_OVERRIDE"}, // /dev/loop*, /dev/mapper/*, /dev/net/tun
{5, "CAP_KILL"}, // SIGTERM/SIGKILL to Firecracker processes
{5, "CAP_KILL"}, // SIGTERM/SIGKILL to cloud-hypervisor processes
{12, "CAP_NET_ADMIN"}, // netlink, iptables, routing, TAP/veth
{13, "CAP_NET_RAW"}, // raw sockets (iptables)
{19, "CAP_SYS_PTRACE"}, // reading /proc/self/ns/net (netns.Get)


@ -72,7 +72,7 @@ ORDER BY created_at DESC;
UPDATE sandboxes
SET status = 'missing',
last_updated = NOW()
WHERE host_id = $1 AND status IN ('running', 'starting', 'pending');
WHERE host_id = $1 AND status IN ('running', 'starting', 'pending', 'pausing', 'resuming', 'stopping');
-- name: UpdateSandboxMetadata :exec
UPDATE sandboxes
@ -80,6 +80,30 @@ SET metadata = $2,
last_updated = NOW()
WHERE id = $1;
-- name: UpdateSandboxRunningIf :one
-- Conditionally transition a sandbox to running only if the current status
-- matches the expected value. Prevents races where a user destroys a sandbox
-- while the create/resume goroutine is still in-flight.
UPDATE sandboxes
SET status = 'running',
host_ip = $3,
guest_ip = $4,
started_at = $5,
last_active_at = $5,
last_updated = NOW()
WHERE id = $1 AND status = $2
RETURNING *;
-- name: UpdateSandboxStatusIf :one
-- Atomically update status only when the current status matches the expected value.
-- Prevents background goroutines from overwriting a status that has since changed
-- (e.g. user destroyed a sandbox while Create was in-flight).
UPDATE sandboxes
SET status = $3,
last_updated = NOW()
WHERE id = $1 AND status = $2
RETURNING *;
-- name: BulkRestoreRunning :exec
-- Called by the reconciler when a host comes back online and its sandboxes are
-- confirmed alive. Restores only sandboxes that are in 'missing' state.
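A sketch of how a caller might consume these compare-and-set queries, written against raw pgx/v5 so it is self-contained (the repo uses sqlc, whose generated method names would differ). With a status guard plus `RETURNING`, losing the race surfaces as `pgx.ErrNoRows` instead of a silent overwrite.

```go
package cp

import (
	"context"
	"errors"

	"github.com/jackc/pgx/v5"
)

// markRunning attempts the starting→running transition. Because the UPDATE
// carries a status guard and RETURNING, a destroyed-in-flight sandbox shows
// up as pgx.ErrNoRows rather than being silently resurrected.
func markRunning(ctx context.Context, conn *pgx.Conn, id, hostIP, guestIP string) (lostRace bool, err error) {
	var got string
	err = conn.QueryRow(ctx,
		`UPDATE sandboxes
		    SET status = 'running', host_ip = $2, guest_ip = $3,
		        started_at = NOW(), last_active_at = NOW(), last_updated = NOW()
		  WHERE id = $1 AND status = 'starting'
		  RETURNING id`, id, hostIP, guestIP).Scan(&got)
	if errors.Is(err, pgx.ErrNoRows) {
		// Lost the race: the sandbox left 'starting' (e.g. the user
		// destroyed it mid-create). Caller tears down host resources.
		return true, nil
	}
	return false, err
}
```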


@ -22,6 +22,12 @@ RETURNING *;
-- name: SetUserAdmin :exec
UPDATE users SET is_admin = $2, updated_at = NOW() WHERE id = $1;
-- name: RevokeUserAdmin :execrows
UPDATE users u SET is_admin = false, updated_at = NOW()
WHERE u.id = $1
AND u.is_admin = true
AND (SELECT COUNT(*) FROM users WHERE is_admin = true AND status != 'deleted') > 1;
-- name: GetAdminUsers :many
SELECT * FROM users WHERE is_admin = TRUE ORDER BY created_at;


@ -0,0 +1,2 @@
[target.x86_64-unknown-linux-musl]
linker = "musl-gcc"

envd-rs/Cargo.lock (generated)

File diff suppressed because it is too large

envd-rs/Cargo.toml

@ -0,0 +1,83 @@
[package]
name = "envd"
version = "0.3.0"
edition = "2024"
rust-version = "1.88"
[dependencies]
# Async runtime
tokio = { version = "1", features = ["full"] }
# HTTP framework
axum = { version = "0.8", features = ["multipart"] }
tower = { version = "0.5", features = ["util"] }
tower-http = { version = "0.6", features = ["cors", "fs"] }
tower-service = "0.3"
# RPC (Connect protocol — serves Connect + gRPC + gRPC-Web on same port)
connectrpc = { version = "0.3", features = ["axum"] }
buffa-types = { path = "buffa-types-shim" }
# CLI
clap = { version = "4", features = ["derive"] }
# Serialization
serde = { version = "1", features = ["derive"] }
serde_json = "1"
# Logging
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["json", "env-filter"] }
# System metrics
sysinfo = "0.33"
# Unix syscalls
nix = { version = "0.30", features = ["fs", "process", "signal", "user", "term", "mount", "ioctl"] }
# Concurrent map
dashmap = "6"
# Crypto
sha2 = "0.10"
hmac = "0.12"
hex = "0.4"
base64 = "0.22"
# Secure memory
zeroize = { version = "1", features = ["derive"] }
# File watching
notify = "7"
# Compression
flate2 = "1"
# Directory walking
walkdir = "2"
# Misc
libc = "0.2"
bytes = "1"
http = "1"
http-body-util = "0.1"
futures = "0.3"
tokio-util = { version = "0.7", features = ["io"] }
subtle = "2"
http-body = "1.0.1"
buffa = "0.3"
async-stream = "0.3.6"
mime_guess = "2"
[dev-dependencies]
tempfile = "3"
[build-dependencies]
connectrpc-build = "0.3"
[profile.release]
strip = true
lto = true
opt-level = "z"
codegen-units = 1
panic = "abort"

envd-rs/README.md

@ -0,0 +1,140 @@
# envd (Rust)
Wrenn guest agent daemon — runs as PID 1 inside Cloud Hypervisor microVMs. Provides process management, filesystem operations, file transfer, port forwarding, and VM lifecycle control over Connect RPC and HTTP.
Rust rewrite of `envd/` (Go). Drop-in replacement — same wire protocol, same endpoints, same CLI flags.
## Prerequisites
- Rust 1.88+ (required by `connectrpc` 0.3.3)
- `protoc` (protobuf compiler, for proto codegen at build time)
- `musl-tools` (for static linking)
```bash
# Ubuntu/Debian
sudo apt install musl-tools protobuf-compiler
# Rust musl target
rustup target add x86_64-unknown-linux-musl
```
## Building
### Static binary (production — what goes into the rootfs)
```bash
cd envd-rs
ENVD_COMMIT=$(git rev-parse --short HEAD) \
cargo build --release --target x86_64-unknown-linux-musl
```
Output: `target/x86_64-unknown-linux-musl/release/envd`
Verify static linking:
```bash
file target/x86_64-unknown-linux-musl/release/envd
# should say: "statically linked"
ldd target/x86_64-unknown-linux-musl/release/envd
# should say: "not a dynamic executable"
```
### Debug binary (dev machine, dynamically linked)
```bash
cd envd-rs
cargo build
```
Run locally (outside a VM):
```bash
./target/debug/envd --port 49983
```
### Via Makefile (from repo root)
```bash
make build-envd # static musl release build
make build-envd-go # Go version (for comparison)
```
## CLI Flags
```
--port <PORT> Listen port [default: 49983]
--version Print version and exit
--commit Print git commit and exit
--cmd <CMD> Spawn a process at startup (e.g. --cmd "/bin/bash")
--cgroup-root <PATH> Cgroup v2 root [default: /sys/fs/cgroup]
```
## Endpoints
### HTTP
| Method | Path | Description |
|--------|---------------------|--------------------------------------|
| GET | `/health` | Health check, triggers post-restore |
| GET | `/metrics` | System metrics (CPU, memory, disk) |
| GET | `/envs` | Current environment variables |
| POST | `/init` | Host agent init (token, env, mounts) |
| POST | `/snapshot/prepare` | Quiesce before Cloud Hypervisor snapshot |
| GET | `/files` | Download file (gzip, range support) |
| POST | `/files` | Upload file(s) via multipart |
### Connect RPC (same port)
| Service | RPCs |
|------------|-------------------------------------------------------------------------|
| Process | List, Start, Connect, Update, StreamInput, SendInput, SendSignal, CloseStdin |
| Filesystem | Stat, MakeDir, Move, ListDir, Remove, WatchDir, CreateWatcher, GetWatcherEvents, RemoveWatcher |
## Architecture
```
42 files, ~4200 LOC Rust
Binary: ~4 MB (stripped, LTO, musl static)
src/
├── main.rs # Entry point, CLI, server setup
├── state.rs # Shared AppState
├── config.rs # Constants
├── conntracker.rs # TCP connection tracking for snapshot/restore
├── execcontext.rs # Default user/workdir/env
├── logging.rs # tracing-subscriber (JSON or pretty)
├── util.rs # AtomicMax
├── auth/ # Token, signing, middleware
├── crypto/ # SHA-256, SHA-512, HMAC
├── host/ # System metrics
├── http/ # Axum handlers (health, init, snapshot, files, encoding)
├── permissions/ # Path resolution, user lookup, chown
├── rpc/ # Connect RPC services
│ ├── pb.rs # Generated proto types
│ ├── process_*.rs # Process service + handler (PTY, pipe, broadcast)
│ ├── filesystem_*.rs # Filesystem service (stat, list, watch, mkdir, move, remove)
│ └── entry.rs # EntryInfo builder
├── port/ # Port subsystem
│ ├── conn.rs # /proc/net/tcp parser
│ ├── scanner.rs # Periodic TCP port scanner
│ ├── forwarder.rs # socat-based port forwarding
│ └── subsystem.rs # Lifecycle (start/stop/restart)
└── cgroups/ # Cgroup v2 manager (pty/user/socat groups)
```
## Updating the rootfs
After building the static binary, copy it into the rootfs:
```bash
bash scripts/update-debug-rootfs.sh [rootfs_path]
```
Or manually:
```bash
sudo mount -o loop /var/lib/wrenn/images/minimal.ext4 /mnt
sudo cp target/x86_64-unknown-linux-musl/release/envd /mnt/usr/bin/envd
sudo umount /mnt
```

envd-rs/buffa-types-shim/Cargo.toml

@ -0,0 +1,12 @@
[package]
name = "buffa-types"
version = "0.3.0"
edition = "2024"
publish = false
[dependencies]
buffa = "0.3"
serde = { version = "1", features = ["derive"] }
[build-dependencies]
connectrpc-build = "0.3"

envd-rs/buffa-types-shim/build.rs

@ -0,0 +1,9 @@
fn main() {
connectrpc_build::Config::new()
.files(&["/usr/include/google/protobuf/timestamp.proto"])
.includes(&["/usr/include"])
.include_file("_types.rs")
.emit_register_fn(false)
.compile()
.unwrap();
}

envd-rs/buffa-types-shim/src/lib.rs

@ -0,0 +1,6 @@
#![allow(dead_code, non_camel_case_types, unused_imports, clippy::derivable_impls)]
use ::buffa;
use ::serde;
include!(concat!(env!("OUT_DIR"), "/_types.rs"));

envd-rs/build.rs

@ -0,0 +1,11 @@
fn main() {
connectrpc_build::Config::new()
.files(&[
"../proto/envd/process.proto",
"../proto/envd/filesystem.proto",
])
.includes(&["../proto/envd", "/usr/include"])
.include_file("_connectrpc.rs")
.compile()
.unwrap();
}

envd-rs/rust-toolchain.toml

@ -0,0 +1,3 @@
[toolchain]
channel = "stable"
targets = ["x86_64-unknown-linux-gnu", "x86_64-unknown-linux-musl"]

envd-rs/src/auth/middleware.rs

@ -0,0 +1,56 @@
use std::sync::Arc;
use axum::extract::Request;
use axum::http::StatusCode;
use axum::middleware::Next;
use axum::response::{IntoResponse, Response};
use serde_json::json;
use crate::auth::token::SecureToken;
const ACCESS_TOKEN_HEADER: &str = "x-access-token";
/// Paths excluded from general token auth.
/// Format: "METHOD/path"
const AUTH_EXCLUDED: &[&str] = &[
"GET/health",
"GET/files",
"POST/files",
"POST/init",
"POST/snapshot/prepare",
];
/// Axum middleware that checks X-Access-Token header.
pub async fn auth_layer(
request: Request,
next: Next,
access_token: Arc<SecureToken>,
) -> Response {
if access_token.is_set() {
let method = request.method().as_str();
let path = request.uri().path();
let key = format!("{method}{path}");
let is_excluded = AUTH_EXCLUDED.iter().any(|p| *p == key);
let header_val = request
.headers()
.get(ACCESS_TOKEN_HEADER)
.and_then(|v| v.to_str().ok())
.unwrap_or("");
if !access_token.equals(header_val) && !is_excluded {
tracing::error!("unauthorized access attempt");
return (
StatusCode::UNAUTHORIZED,
axum::Json(json!({
"code": 401,
"message": "unauthorized access, please provide a valid access token or method signing if supported"
})),
)
.into_response();
}
}
next.run(request).await
}

envd-rs/src/auth/mod.rs

@ -0,0 +1,3 @@
pub mod token;
pub mod signing;
pub mod middleware;

envd-rs/src/auth/signing.rs

@ -0,0 +1,210 @@
use crate::auth::token::SecureToken;
use crate::crypto;
use zeroize::Zeroize;
pub const READ_OPERATION: &str = "read";
pub const WRITE_OPERATION: &str = "write";
/// Generate a v1 signature: `v1_{sha256_base64(path:operation:username:token[:expiration])}`
pub fn generate_signature(
token: &SecureToken,
path: &str,
username: &str,
operation: &str,
expiration: Option<i64>,
) -> Result<String, &'static str> {
let mut token_bytes = token.bytes().ok_or("access token is not set")?;
let payload = match expiration {
Some(exp) => format!(
"{}:{}:{}:{}:{}",
path,
operation,
username,
String::from_utf8_lossy(&token_bytes),
exp
),
None => format!(
"{}:{}:{}:{}",
path,
operation,
username,
String::from_utf8_lossy(&token_bytes),
),
};
token_bytes.zeroize();
let hash = crypto::sha256::hash_without_prefix(payload.as_bytes());
Ok(format!("v1_{hash}"))
}
/// Validate a request's signing. Returns Ok(()) if valid.
pub fn validate_signing(
token: &SecureToken,
header_token: Option<&str>,
signature: Option<&str>,
signature_expiration: Option<i64>,
username: &str,
path: &str,
operation: &str,
) -> Result<(), String> {
if !token.is_set() {
return Ok(());
}
if let Some(ht) = header_token {
if !ht.is_empty() {
if token.equals(ht) {
return Ok(());
}
return Err("access token present in header but does not match".into());
}
}
let sig = signature.ok_or("missing signature query parameter")?;
let expected = generate_signature(token, path, username, operation, signature_expiration)
.map_err(|e| format!("error generating signing key: {e}"))?;
if expected != sig {
return Err("invalid signature".into());
}
if let Some(exp) = signature_expiration {
let now = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap()
.as_secs() as i64;
if exp < now {
return Err("signature is already expired".into());
}
}
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
fn test_token(val: &[u8]) -> SecureToken {
let t = SecureToken::new();
t.set(val).unwrap();
t
}
fn far_future() -> i64 {
std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap()
.as_secs() as i64
+ 3600
}
#[test]
fn generate_starts_with_v1() {
let token = test_token(b"secret");
let sig = generate_signature(&token, "/file", "root", READ_OPERATION, None).unwrap();
assert!(sig.starts_with("v1_"));
}
#[test]
fn generate_deterministic() {
let token = test_token(b"secret");
let s1 = generate_signature(&token, "/file", "root", READ_OPERATION, None).unwrap();
let s2 = generate_signature(&token, "/file", "root", READ_OPERATION, None).unwrap();
assert_eq!(s1, s2);
}
#[test]
fn generate_with_expiration_differs() {
let token = test_token(b"secret");
let without = generate_signature(&token, "/f", "u", READ_OPERATION, None).unwrap();
let with = generate_signature(&token, "/f", "u", READ_OPERATION, Some(9999)).unwrap();
assert_ne!(without, with);
}
#[test]
fn generate_unset_token_errors() {
let token = SecureToken::new();
assert!(generate_signature(&token, "/f", "u", READ_OPERATION, None).is_err());
}
#[test]
fn validate_no_token_set_passes() {
let token = SecureToken::new();
assert!(validate_signing(&token, None, None, None, "root", "/f", READ_OPERATION).is_ok());
}
#[test]
fn validate_correct_header_token() {
let token = test_token(b"secret");
assert!(validate_signing(&token, Some("secret"), None, None, "root", "/f", READ_OPERATION).is_ok());
}
#[test]
fn validate_wrong_header_token() {
let token = test_token(b"secret");
let result = validate_signing(&token, Some("wrong"), None, None, "root", "/f", READ_OPERATION);
assert!(result.is_err());
assert!(result.unwrap_err().contains("does not match"));
}
#[test]
fn validate_valid_signature() {
let token = test_token(b"secret");
let exp = far_future();
let sig = generate_signature(&token, "/file", "root", READ_OPERATION, Some(exp)).unwrap();
assert!(validate_signing(&token, None, Some(&sig), Some(exp), "root", "/file", READ_OPERATION).is_ok());
}
#[test]
fn validate_invalid_signature() {
let token = test_token(b"secret");
let result = validate_signing(&token, None, Some("v1_bad"), Some(far_future()), "root", "/f", READ_OPERATION);
assert!(result.is_err());
assert!(result.unwrap_err().contains("invalid signature"));
}
#[test]
fn validate_expired_signature() {
let token = test_token(b"secret");
let expired: i64 = 1_000_000;
let sig = generate_signature(&token, "/f", "root", READ_OPERATION, Some(expired)).unwrap();
let result = validate_signing(&token, None, Some(&sig), Some(expired), "root", "/f", READ_OPERATION);
assert!(result.is_err());
assert!(result.unwrap_err().contains("expired"));
}
#[test]
fn validate_missing_signature() {
let token = test_token(b"secret");
let result = validate_signing(&token, None, None, None, "root", "/f", READ_OPERATION);
assert!(result.is_err());
assert!(result.unwrap_err().contains("missing signature"));
}
#[test]
fn validate_empty_header_token_falls_through_to_signature() {
let token = test_token(b"secret");
let result = validate_signing(&token, Some(""), None, None, "root", "/f", READ_OPERATION);
assert!(result.is_err());
assert!(result.unwrap_err().contains("missing signature"));
}
#[test]
fn validate_valid_signature_no_expiration() {
let token = test_token(b"secret");
let sig = generate_signature(&token, "/file", "root", READ_OPERATION, None).unwrap();
assert!(validate_signing(&token, None, Some(&sig), None, "root", "/file", READ_OPERATION).is_ok());
}
#[test]
fn different_operations_produce_different_signatures() {
let token = test_token(b"secret");
let r = generate_signature(&token, "/f", "root", READ_OPERATION, None).unwrap();
let w = generate_signature(&token, "/f", "root", WRITE_OPERATION, None).unwrap();
assert_ne!(r, w);
}
}

envd-rs/src/auth/token.rs Normal file
@ -0,0 +1,256 @@
use std::sync::RwLock;
use subtle::ConstantTimeEq;
use zeroize::Zeroize;
/// Secure token storage with constant-time comparison and zeroize-on-drop.
///
/// Mirrors Go's SecureToken backed by memguard.LockedBuffer.
/// In Rust we rely on `zeroize` for Drop-based zeroing.
pub struct SecureToken {
inner: RwLock<Option<Vec<u8>>>,
}
impl SecureToken {
pub fn new() -> Self {
Self {
inner: RwLock::new(None),
}
}
pub fn set(&self, token: &[u8]) -> Result<(), &'static str> {
if token.is_empty() {
return Err("empty token not allowed");
}
let mut guard = self.inner.write().unwrap();
if let Some(ref mut old) = *guard {
old.zeroize();
}
*guard = Some(token.to_vec());
Ok(())
}
pub fn is_set(&self) -> bool {
let guard = self.inner.read().unwrap();
guard.is_some()
}
/// Constant-time comparison.
pub fn equals(&self, other: &str) -> bool {
let guard = self.inner.read().unwrap();
match guard.as_ref() {
Some(buf) => buf.as_slice().ct_eq(other.as_bytes()).into(),
None => false,
}
}
/// Constant-time comparison with another SecureToken.
pub fn equals_secure(&self, other: &SecureToken) -> bool {
let mut other_bytes = match other.bytes() {
Some(b) => b,
None => return false,
};
let guard = self.inner.read().unwrap();
let result = match guard.as_ref() {
Some(buf) => buf.as_slice().ct_eq(&other_bytes).into(),
None => false,
};
// Wipe the temporary copy, mirroring Go's `defer memguard.WipeBytes(otherBytes)`.
other_bytes.zeroize();
result
}
/// Returns a copy of the token bytes (for signature generation).
pub fn bytes(&self) -> Option<Vec<u8>> {
let guard = self.inner.read().unwrap();
guard.as_ref().map(|b| b.clone())
}
/// Transfer token from another SecureToken, clearing the source.
pub fn take_from(&self, src: &SecureToken) {
let taken = {
let mut src_guard = src.inner.write().unwrap();
src_guard.take()
};
let mut guard = self.inner.write().unwrap();
if let Some(ref mut old) = *guard {
old.zeroize();
}
*guard = taken;
}
pub fn destroy(&self) {
let mut guard = self.inner.write().unwrap();
if let Some(ref mut buf) = *guard {
buf.zeroize();
}
*guard = None;
}
}
impl Drop for SecureToken {
fn drop(&mut self) {
if let Ok(mut guard) = self.inner.write() {
if let Some(ref mut buf) = *guard {
buf.zeroize();
}
}
}
}
/// Deserialize from JSON string, matching Go's UnmarshalJSON behavior.
/// Expects a quoted JSON string. Rejects escape sequences.
impl SecureToken {
pub fn from_json_bytes(data: &mut [u8]) -> Result<Self, &'static str> {
if data.len() < 2 || data[0] != b'"' || data[data.len() - 1] != b'"' {
data.zeroize();
return Err("invalid secure token JSON string");
}
let content = &data[1..data.len() - 1];
if content.contains(&b'\\') {
data.zeroize();
return Err("invalid secure token: unexpected escape sequence");
}
if content.is_empty() {
data.zeroize();
return Err("empty token not allowed");
}
let token = Self::new();
token.set(content).map_err(|_| "failed to set token")?;
data.zeroize();
Ok(token)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn new_is_unset() {
let t = SecureToken::new();
assert!(!t.is_set());
assert!(!t.equals("anything"));
}
#[test]
fn set_and_equals() {
let t = SecureToken::new();
t.set(b"secret").unwrap();
assert!(t.is_set());
assert!(t.equals("secret"));
assert!(!t.equals("wrong"));
}
#[test]
fn set_empty_errors() {
let t = SecureToken::new();
assert!(t.set(b"").is_err());
assert!(!t.is_set());
}
#[test]
fn set_overwrites_previous() {
let t = SecureToken::new();
t.set(b"first").unwrap();
t.set(b"second").unwrap();
assert!(!t.equals("first"));
assert!(t.equals("second"));
}
#[test]
fn destroy_clears() {
let t = SecureToken::new();
t.set(b"secret").unwrap();
t.destroy();
assert!(!t.is_set());
assert!(!t.equals("secret"));
}
#[test]
fn bytes_returns_copy() {
let t = SecureToken::new();
assert!(t.bytes().is_none());
t.set(b"hello").unwrap();
assert_eq!(t.bytes().unwrap(), b"hello");
}
#[test]
fn take_from_transfers_and_clears_source() {
let src = SecureToken::new();
src.set(b"token").unwrap();
let dst = SecureToken::new();
dst.take_from(&src);
assert!(!src.is_set());
assert!(dst.equals("token"));
}
#[test]
fn take_from_overwrites_existing() {
let src = SecureToken::new();
src.set(b"new").unwrap();
let dst = SecureToken::new();
dst.set(b"old").unwrap();
dst.take_from(&src);
assert!(dst.equals("new"));
assert!(!dst.equals("old"));
}
#[test]
fn equals_secure_matching() {
let a = SecureToken::new();
a.set(b"same").unwrap();
let b = SecureToken::new();
b.set(b"same").unwrap();
assert!(a.equals_secure(&b));
}
#[test]
fn equals_secure_different() {
let a = SecureToken::new();
a.set(b"one").unwrap();
let b = SecureToken::new();
b.set(b"two").unwrap();
assert!(!a.equals_secure(&b));
}
#[test]
fn equals_secure_unset() {
let a = SecureToken::new();
let b = SecureToken::new();
assert!(!a.equals_secure(&b));
}
#[test]
fn from_json_bytes_valid() {
let mut data = b"\"mysecret\"".to_vec();
let t = SecureToken::from_json_bytes(&mut data).unwrap();
assert!(t.equals("mysecret"));
assert!(data.iter().all(|&b| b == 0));
}
#[test]
fn from_json_bytes_rejects_missing_quotes() {
let mut data = b"noquotes".to_vec();
assert!(SecureToken::from_json_bytes(&mut data).is_err());
assert!(data.iter().all(|&b| b == 0));
}
#[test]
fn from_json_bytes_rejects_escape_sequences() {
let mut data = b"\"has\\nescapes\"".to_vec();
assert!(SecureToken::from_json_bytes(&mut data).is_err());
assert!(data.iter().all(|&b| b == 0));
}
#[test]
fn from_json_bytes_rejects_empty_content() {
let mut data = b"\"\"".to_vec();
assert!(SecureToken::from_json_bytes(&mut data).is_err());
assert!(data.iter().all(|&b| b == 0));
}
}
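
A minimal usage sketch of SecureToken using only the API above (a sketch, not part of the commit):

fn secure_token_demo() -> Result<(), &'static str> {
    let token = SecureToken::new();
    token.set(b"hunter2")?;
    // Constant-time check against an untrusted header value.
    assert!(token.equals("hunter2"));
    // Parse from a JSON string body; the input buffer is zeroized either way.
    let mut raw = b"\"hunter2\"".to_vec();
    let parsed = SecureToken::from_json_bytes(&mut raw)?;
    assert!(token.equals_secure(&parsed));
    token.destroy();
    Ok(())
}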

envd-rs/src/cgroups.rs Normal file
@ -0,0 +1,66 @@
use std::collections::HashMap;
use std::fs;
use std::os::unix::io::{OwnedFd, RawFd};
use std::path::PathBuf;
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum ProcessType {
Pty,
User,
Socat,
}
pub trait CgroupManager: Send + Sync {
fn get_fd(&self, proc_type: ProcessType) -> Option<RawFd>;
}
pub struct Cgroup2Manager {
fds: HashMap<ProcessType, OwnedFd>,
}
impl Cgroup2Manager {
pub fn new(root: &str, configs: &[(ProcessType, &str, &[(&str, &str)])]) -> Result<Self, String> {
let mut fds = HashMap::new();
for (proc_type, sub_path, properties) in configs {
let full_path = PathBuf::from(root).join(sub_path);
fs::create_dir_all(&full_path).map_err(|e| {
format!("failed to create cgroup {}: {e}", full_path.display())
})?;
for (name, value) in *properties {
let prop_path = full_path.join(name);
fs::write(&prop_path, value).map_err(|e| {
format!("failed to write cgroup property {}: {e}", prop_path.display())
})?;
}
let fd = nix::fcntl::open(
&full_path,
nix::fcntl::OFlag::O_RDONLY,
nix::sys::stat::Mode::empty(),
)
.map_err(|e| format!("failed to open cgroup {}: {e}", full_path.display()))?;
fds.insert(*proc_type, fd);
}
Ok(Self { fds })
}
}
impl CgroupManager for Cgroup2Manager {
fn get_fd(&self, proc_type: ProcessType) -> Option<RawFd> {
use std::os::unix::io::AsRawFd;
self.fds.get(&proc_type).map(|fd| fd.as_raw_fd())
}
}
pub struct NoopCgroupManager;
impl CgroupManager for NoopCgroupManager {
fn get_fd(&self, _proc_type: ProcessType) -> Option<RawFd> {
None
}
}
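
A sketch of how the `properties` parameter could cap resources, assuming the cgroup2 hierarchy is writable (main.rs below passes empty property lists):

fn build_user_cgroup() -> Result<Cgroup2Manager, String> {
    // "memory.max" is a standard cgroup2 control file; 268435456 = 256 MiB.
    Cgroup2Manager::new(
        "/sys/fs/cgroup",
        &[(ProcessType::User, "wrenn/user", &[("memory.max", "268435456")])],
    )
}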

envd-rs/src/config.rs Normal file
@ -0,0 +1,11 @@
use std::time::Duration;
pub const DEFAULT_PORT: u16 = 49983;
pub const IDLE_TIMEOUT: Duration = Duration::from_secs(640);
pub const CORS_MAX_AGE: Duration = Duration::from_secs(7200);
pub const PORT_SCANNER_INTERVAL: Duration = Duration::from_millis(1000);
pub const DEFAULT_USER: &str = "root";
pub const WRENN_RUN_DIR: &str = "/run/wrenn";
pub const KILOBYTE: u64 = 1024;
pub const MEGABYTE: u64 = 1024 * KILOBYTE;

envd-rs/src/conntracker.rs Normal file
@ -0,0 +1,200 @@
use std::collections::HashSet;
use std::sync::Mutex;
/// Tracks active TCP connections for snapshot/restore lifecycle.
///
/// Before snapshot: close idle connections, record active ones.
/// After restore: close all pre-snapshot connections (zombie TCP sockets).
///
/// In Rust/axum there is no Go ConnState callback, so connections must be
/// registered explicitly (e.g. from a tower middleware or the accept loop).
/// The initial implementation uses a simple connection counter and relies
/// on axum's graceful shutdown mechanics.
pub struct ConnTracker {
inner: Mutex<ConnTrackerInner>,
}
struct ConnTrackerInner {
active: HashSet<u64>,
pre_snapshot: Option<HashSet<u64>>,
next_id: u64,
keepalives_enabled: bool,
}
impl ConnTracker {
pub fn new() -> Self {
Self {
inner: Mutex::new(ConnTrackerInner {
active: HashSet::new(),
pre_snapshot: None,
next_id: 0,
keepalives_enabled: true,
}),
}
}
pub fn register_connection(&self) -> u64 {
let mut inner = self.inner.lock().unwrap();
let id = inner.next_id;
inner.next_id += 1;
inner.active.insert(id);
id
}
pub fn remove_connection(&self, id: u64) {
let mut inner = self.inner.lock().unwrap();
inner.active.remove(&id);
if let Some(ref mut pre) = inner.pre_snapshot {
pre.remove(&id);
}
}
pub fn prepare_for_snapshot(&self) {
let mut inner = self.inner.lock().unwrap();
inner.keepalives_enabled = false;
inner.pre_snapshot = Some(inner.active.clone());
tracing::info!(
active_connections = inner.active.len(),
"snapshot: recorded pre-snapshot connections, keep-alives disabled"
);
}
pub fn restore_after_snapshot(&self) {
let mut inner = self.inner.lock().unwrap();
if let Some(pre) = inner.pre_snapshot.take() {
let zombie_count = pre.len();
for id in &pre {
inner.active.remove(id);
}
if zombie_count > 0 {
tracing::info!(zombie_count, "restore: closed zombie connections");
}
}
inner.keepalives_enabled = true;
}
pub fn keepalives_enabled(&self) -> bool {
self.inner.lock().unwrap().keepalives_enabled
}
#[cfg(test)]
fn active_count(&self) -> usize {
self.inner.lock().unwrap().active.len()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn register_assigns_sequential_ids() {
let ct = ConnTracker::new();
assert_eq!(ct.register_connection(), 0);
assert_eq!(ct.register_connection(), 1);
assert_eq!(ct.register_connection(), 2);
}
#[test]
fn remove_clears_active() {
let ct = ConnTracker::new();
let id = ct.register_connection();
assert_eq!(ct.active_count(), 1);
ct.remove_connection(id);
assert_eq!(ct.active_count(), 0);
}
#[test]
fn remove_nonexistent_is_noop() {
let ct = ConnTracker::new();
ct.remove_connection(999);
assert_eq!(ct.active_count(), 0);
}
#[test]
fn prepare_disables_keepalives() {
let ct = ConnTracker::new();
assert!(ct.keepalives_enabled());
ct.register_connection();
ct.prepare_for_snapshot();
assert!(!ct.keepalives_enabled());
}
#[test]
fn restore_removes_zombies_and_reenables_keepalives() {
let ct = ConnTracker::new();
let id0 = ct.register_connection();
let id1 = ct.register_connection();
ct.prepare_for_snapshot();
ct.restore_after_snapshot();
assert!(ct.keepalives_enabled());
// Both pre-snapshot connections removed as zombies
assert_eq!(ct.active_count(), 0);
// IDs don't matter anymore, but remove shouldn't panic
ct.remove_connection(id0);
ct.remove_connection(id1);
}
#[test]
fn restore_without_prepare_is_noop() {
let ct = ConnTracker::new();
let _id = ct.register_connection();
ct.restore_after_snapshot();
assert!(ct.keepalives_enabled());
assert_eq!(ct.active_count(), 1);
}
#[test]
fn connection_closed_before_restore_not_zombie() {
let ct = ConnTracker::new();
let id0 = ct.register_connection();
let _id1 = ct.register_connection();
ct.prepare_for_snapshot();
// Close id0 during snapshot window
ct.remove_connection(id0);
assert_eq!(ct.active_count(), 1);
ct.restore_after_snapshot();
// id1 was zombie (still active at restore), id0 already gone
assert_eq!(ct.active_count(), 0);
}
#[test]
fn post_snapshot_connection_survives_restore() {
let ct = ConnTracker::new();
ct.register_connection();
ct.prepare_for_snapshot();
// New connection after snapshot
let _post = ct.register_connection();
ct.restore_after_snapshot();
// Pre-snapshot connection removed, post-snapshot survives
assert_eq!(ct.active_count(), 1);
}
#[test]
fn full_lifecycle() {
let ct = ConnTracker::new();
let _a = ct.register_connection();
let b = ct.register_connection();
let _c = ct.register_connection();
assert_eq!(ct.active_count(), 3);
assert!(ct.keepalives_enabled());
ct.prepare_for_snapshot();
assert!(!ct.keepalives_enabled());
let d = ct.register_connection();
ct.remove_connection(b);
ct.restore_after_snapshot();
assert!(ct.keepalives_enabled());
// a and c were zombies, b removed before restore, d is post-snapshot
assert_eq!(ct.active_count(), 1);
ct.remove_connection(d);
assert_eq!(ct.active_count(), 0);
// Can reuse tracker after restore
let e = ct.register_connection();
assert_eq!(ct.active_count(), 1);
assert!(e > d);
}
}
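
The doc comment above mentions registering connections explicitly; a sketch of an RAII guard so remove_connection cannot be missed (hypothetical, not in this diff):

struct ConnGuard {
    tracker: std::sync::Arc<ConnTracker>,
    id: u64,
}

impl ConnGuard {
    fn new(tracker: std::sync::Arc<ConnTracker>) -> Self {
        let id = tracker.register_connection();
        Self { tracker, id }
    }
}

impl Drop for ConnGuard {
    // Deregisters when the connection task finishes, even on panic.
    fn drop(&mut self) {
        self.tracker.remove_connection(self.id);
    }
}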

envd-rs/src/crypto/hmac_sha256.rs Normal file
@ -0,0 +1,43 @@
use hmac::{Hmac, Mac};
use sha2::Sha256;
type HmacSha256 = Hmac<Sha256>;
pub fn compute(key: &[u8], data: &[u8]) -> String {
let mut mac = HmacSha256::new_from_slice(key).expect("HMAC accepts any key length");
mac.update(data);
let result = mac.finalize();
hex::encode(result.into_bytes())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn rfc4231_tc1() {
let key = &[0x0b; 20];
let data = b"Hi There";
assert_eq!(
compute(key, data),
"b0344c61d8db38535ca8afceaf0bf12b881dc200c9833da726e9376c2e32cff7"
);
}
#[test]
fn rfc4231_tc2() {
let key = b"Jefe";
let data = b"what do ya want for nothing?";
assert_eq!(
compute(key, data),
"5bdcc146bf60754e6a042426089575c75a003f089d2739839dec58b964ec3843"
);
}
#[test]
fn output_is_64_hex_chars() {
let result = compute(b"key", b"data");
assert_eq!(result.len(), 64);
assert!(result.chars().all(|c| c.is_ascii_hexdigit()));
}
}
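
For orientation only: a v1_-style signature like the one tested in signing.rs could be built from compute roughly as below; the payload layout and delimiter here are assumptions, not the real format:

// Hypothetical payload layout; signing.rs may order or delimit fields differently.
fn sketch_v1_signature(key: &[u8], path: &str, user: &str, operation: &str) -> String {
    let payload = format!("{path}:{user}:{operation}");
    format!("v1_{}", compute(key, payload.as_bytes()))
}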

envd-rs/src/crypto/mod.rs Normal file
@ -0,0 +1,3 @@
pub mod sha256;
pub mod sha512;
pub mod hmac_sha256;

envd-rs/src/crypto/sha256.rs Normal file
@ -0,0 +1,54 @@
use base64::Engine;
use base64::engine::general_purpose::STANDARD_NO_PAD;
use sha2::{Digest, Sha256};
pub fn hash(data: &[u8]) -> String {
let h = Sha256::digest(data);
let encoded = STANDARD_NO_PAD.encode(h);
format!("$sha256${encoded}")
}
pub fn hash_without_prefix(data: &[u8]) -> String {
let h = Sha256::digest(data);
STANDARD_NO_PAD.encode(h)
}
#[cfg(test)]
mod tests {
use super::*;
const VECTORS: &[(&[u8], &str)] = &[
(b"", "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU"),
(b"abc", "ungWv48Bz+pBQUDeXa4iI7ADYaOWF3qctBD/YfIAFa0"),
(b"abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq", "JI1qYdIGOLjlwCaTDD5gOaM85Flk/yFn9uzt1BnbBsE"),
];
#[test]
fn known_answer_with_prefix() {
for (input, expected_b64) in VECTORS {
let result = hash(input);
assert_eq!(result, format!("$sha256${expected_b64}"), "input: {:?}", String::from_utf8_lossy(input));
}
}
#[test]
fn known_answer_without_prefix() {
for (input, expected_b64) in VECTORS {
let result = hash_without_prefix(input);
assert_eq!(result, *expected_b64, "input: {:?}", String::from_utf8_lossy(input));
}
}
#[test]
fn no_base64_padding() {
for (input, _) in VECTORS {
assert!(!hash(input).contains('='));
assert!(!hash_without_prefix(input).contains('='));
}
}
#[test]
fn deterministic() {
assert_eq!(hash(b"test"), hash(b"test"));
}
}
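
A trivial verification sketch against a stored "$sha256$<b64>" digest; hash is deterministic, so string equality suffices here (a constant-time comparison would be needed for secret material):

fn matches_stored(stored: &str, candidate: &[u8]) -> bool {
    stored == hash(candidate)
}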

envd-rs/src/crypto/sha512.rs Normal file
@ -0,0 +1,43 @@
use sha2::{Digest, Sha512};
pub fn hash_access_token(token: &str) -> String {
let h = Sha512::digest(token.as_bytes());
hex::encode(h)
}
pub fn hash_access_token_bytes(token: &[u8]) -> String {
let h = Sha512::digest(token);
hex::encode(h)
}
#[cfg(test)]
mod tests {
use super::*;
const VECTORS: &[(&str, &str)] = &[
("", "cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e"),
("abc", "ddaf35a193617abacc417349ae20413112e6fa4e89a97ea20a9eeee64b55d39a2192992a274fc1a836ba3c23a3feebbd454d4423643ce80e2a9ac94fa54ca49f"),
("abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq", "204a8fc6dda82f0a0ced7beb8e08a41657c16ef468b228a8279be331a703c33596fd15c13b1b07f9aa1d3bea57789ca031ad85c7a71dd70354ec631238ca3445"),
];
#[test]
fn known_answer() {
for (input, expected) in VECTORS {
assert_eq!(hash_access_token(input), *expected, "input: {input:?}");
}
}
#[test]
fn str_and_bytes_agree() {
for (input, _) in VECTORS {
assert_eq!(hash_access_token(input), hash_access_token_bytes(input.as_bytes()));
}
}
#[test]
fn output_is_lowercase_hex_128_chars() {
let h = hash_access_token("anything");
assert_eq!(h.len(), 128);
assert!(h.chars().all(|c| c.is_ascii_hexdigit() && !c.is_ascii_uppercase()));
}
}

envd-rs/src/execcontext.rs Normal file
@ -0,0 +1,118 @@
use dashmap::DashMap;
use std::sync::{Arc, RwLock};
pub struct Defaults {
pub env_vars: Arc<DashMap<String, String>>,
user: RwLock<String>,
workdir: RwLock<Option<String>>,
}
impl Defaults {
pub fn new(user: &str) -> Self {
Self {
env_vars: Arc::new(DashMap::new()),
user: RwLock::new(user.to_string()),
workdir: RwLock::new(None),
}
}
pub fn user(&self) -> String {
self.user.read().unwrap().clone()
}
pub fn set_user(&self, user: String) {
*self.user.write().unwrap() = user;
}
pub fn workdir(&self) -> Option<String> {
self.workdir.read().unwrap().clone()
}
pub fn set_workdir(&self, workdir: Option<String>) {
*self.workdir.write().unwrap() = workdir;
}
}
pub fn resolve_default_workdir(workdir: &str, default_workdir: Option<&str>) -> String {
if !workdir.is_empty() {
return workdir.to_string();
}
if let Some(dw) = default_workdir {
return dw.to_string();
}
String::new()
}
pub fn resolve_default_username<'a>(
username: Option<&'a str>,
default_username: &'a str,
) -> Result<&'a str, &'static str> {
if let Some(u) = username {
return Ok(u);
}
if !default_username.is_empty() {
return Ok(default_username);
}
Err("username not provided")
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn workdir_explicit_overrides_default() {
assert_eq!(resolve_default_workdir("/explicit", Some("/default")), "/explicit");
}
#[test]
fn workdir_empty_uses_default() {
assert_eq!(resolve_default_workdir("", Some("/default")), "/default");
}
#[test]
fn workdir_empty_no_default_returns_empty() {
assert_eq!(resolve_default_workdir("", None), "");
}
#[test]
fn workdir_explicit_ignores_none_default() {
assert_eq!(resolve_default_workdir("/explicit", None), "/explicit");
}
#[test]
fn username_explicit_returns_explicit() {
assert_eq!(resolve_default_username(Some("root"), "wrenn").unwrap(), "root");
}
#[test]
fn username_none_uses_default() {
assert_eq!(resolve_default_username(None, "wrenn").unwrap(), "wrenn");
}
#[test]
fn username_none_empty_default_errors() {
assert!(resolve_default_username(None, "").is_err());
}
#[test]
fn username_some_overrides_empty_default() {
assert_eq!(resolve_default_username(Some("root"), "").unwrap(), "root");
}
#[test]
fn defaults_user_set_and_get() {
let d = Defaults::new("initial");
assert_eq!(d.user(), "initial");
d.set_user("changed".into());
assert_eq!(d.user(), "changed");
}
#[test]
fn defaults_workdir_initially_none() {
let d = Defaults::new("user");
assert!(d.workdir().is_none());
d.set_workdir(Some("/home".into()));
assert_eq!(d.workdir().unwrap(), "/home");
}
}
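
How these pieces compose per request, mirroring the resolution order the HTTP handlers below use (the helper name is illustrative):

fn resolve_request_context(
    defaults: &Defaults,
    req_user: Option<&str>,
    req_workdir: &str,
) -> Result<(String, String), &'static str> {
    let default_user = defaults.user();
    let user = resolve_default_username(req_user, &default_user)?.to_string();
    let workdir = resolve_default_workdir(req_workdir, defaults.workdir().as_deref());
    Ok((user, workdir))
}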

envd-rs/src/http/encoding.rs Normal file
@ -0,0 +1,336 @@
use axum::http::Request;
const ENCODING_GZIP: &str = "gzip";
const ENCODING_IDENTITY: &str = "identity";
const ENCODING_WILDCARD: &str = "*";
const SUPPORTED_ENCODINGS: &[&str] = &[ENCODING_GZIP];
struct EncodingWithQuality {
encoding: String,
quality: f64,
}
fn parse_encoding_with_quality(value: &str) -> EncodingWithQuality {
let value = value.trim();
let mut quality = 1.0;
if let Some(idx) = value.find(';') {
let params = &value[idx + 1..];
let enc = value[..idx].trim();
for param in params.split(';') {
let param = param.trim();
if let Some(stripped) = param.strip_prefix("q=").or_else(|| param.strip_prefix("Q=")) {
if let Ok(q) = stripped.parse::<f64>() {
quality = q;
}
}
}
return EncodingWithQuality {
encoding: enc.to_ascii_lowercase(),
quality,
};
}
EncodingWithQuality {
encoding: value.to_ascii_lowercase(),
quality,
}
}
fn parse_accept_encoding_header(header: &str) -> (Vec<EncodingWithQuality>, bool) {
if header.is_empty() {
return (Vec::new(), false);
}
let encodings: Vec<EncodingWithQuality> =
header.split(',').map(parse_encoding_with_quality).collect();
let mut identity_rejected = false;
let mut identity_explicitly_accepted = false;
let mut wildcard_rejected = false;
for eq in &encodings {
match eq.encoding.as_str() {
ENCODING_IDENTITY => {
if eq.quality == 0.0 {
identity_rejected = true;
} else {
identity_explicitly_accepted = true;
}
}
ENCODING_WILDCARD => {
if eq.quality == 0.0 {
wildcard_rejected = true;
}
}
_ => {}
}
}
if wildcard_rejected && !identity_explicitly_accepted {
identity_rejected = true;
}
(encodings, identity_rejected)
}
pub fn is_identity_acceptable<B>(r: &Request<B>) -> bool {
let header = r
.headers()
.get("accept-encoding")
.and_then(|v| v.to_str().ok())
.unwrap_or("");
let (_, rejected) = parse_accept_encoding_header(header);
!rejected
}
pub fn parse_accept_encoding<B>(r: &Request<B>) -> Result<&'static str, String> {
let header = r
.headers()
.get("accept-encoding")
.and_then(|v| v.to_str().ok())
.unwrap_or("");
if header.is_empty() {
return Ok(ENCODING_IDENTITY);
}
let (mut encodings, identity_rejected) = parse_accept_encoding_header(header);
encodings.sort_by(|a, b| b.quality.partial_cmp(&a.quality).unwrap_or(std::cmp::Ordering::Equal));
for eq in &encodings {
if eq.quality == 0.0 {
continue;
}
if eq.encoding == ENCODING_IDENTITY {
return Ok(ENCODING_IDENTITY);
}
if eq.encoding == ENCODING_WILDCARD {
if identity_rejected && !SUPPORTED_ENCODINGS.is_empty() {
return Ok(SUPPORTED_ENCODINGS[0]);
}
return Ok(ENCODING_IDENTITY);
}
if eq.encoding == ENCODING_GZIP {
return Ok(ENCODING_GZIP);
}
}
if !identity_rejected {
return Ok(ENCODING_IDENTITY);
}
Err(format!("no acceptable encoding found, supported: {SUPPORTED_ENCODINGS:?}"))
}
pub fn parse_content_encoding<B>(r: &Request<B>) -> Result<&'static str, String> {
let header = r
.headers()
.get("content-encoding")
.and_then(|v| v.to_str().ok())
.unwrap_or("");
if header.is_empty() {
return Ok(ENCODING_IDENTITY);
}
let encoding = header.trim().to_ascii_lowercase();
if encoding == ENCODING_IDENTITY {
return Ok(ENCODING_IDENTITY);
}
if SUPPORTED_ENCODINGS.contains(&encoding.as_str()) {
return Ok(ENCODING_GZIP);
}
Err(format!("unsupported Content-Encoding: {header}, supported: {SUPPORTED_ENCODINGS:?}"))
}
#[cfg(test)]
mod tests {
use super::*;
use axum::http::Request;
fn req_with_accept(v: &str) -> Request<()> {
Request::builder()
.header("accept-encoding", v)
.body(())
.unwrap()
}
fn req_with_content(v: &str) -> Request<()> {
Request::builder()
.header("content-encoding", v)
.body(())
.unwrap()
}
fn req_no_headers() -> Request<()> {
Request::builder().body(()).unwrap()
}
// parse_encoding_with_quality
#[test]
fn encoding_quality_default_1() {
let eq = parse_encoding_with_quality("gzip");
assert_eq!(eq.encoding, "gzip");
assert_eq!(eq.quality, 1.0);
}
#[test]
fn encoding_quality_explicit() {
let eq = parse_encoding_with_quality("gzip;q=0.8");
assert_eq!(eq.encoding, "gzip");
assert_eq!(eq.quality, 0.8);
}
#[test]
fn encoding_quality_case_insensitive() {
let eq = parse_encoding_with_quality("GZIP;Q=0.5");
assert_eq!(eq.encoding, "gzip");
assert_eq!(eq.quality, 0.5);
}
#[test]
fn encoding_quality_zero() {
let eq = parse_encoding_with_quality("gzip;q=0");
assert_eq!(eq.quality, 0.0);
}
#[test]
fn encoding_quality_whitespace_trimmed() {
let eq = parse_encoding_with_quality(" gzip ; q=0.9 ");
assert_eq!(eq.encoding, "gzip");
assert_eq!(eq.quality, 0.9);
}
// parse_accept_encoding_header
#[test]
fn accept_header_empty() {
let (encs, rejected) = parse_accept_encoding_header("");
assert!(encs.is_empty());
assert!(!rejected);
}
#[test]
fn accept_header_identity_q0_rejects() {
let (_, rejected) = parse_accept_encoding_header("identity;q=0");
assert!(rejected);
}
#[test]
fn accept_header_wildcard_q0_rejects_identity() {
let (_, rejected) = parse_accept_encoding_header("*;q=0");
assert!(rejected);
}
#[test]
fn accept_header_wildcard_q0_but_identity_explicit_accepted() {
let (_, rejected) = parse_accept_encoding_header("*;q=0, identity");
assert!(!rejected);
}
// parse_accept_encoding (full)
#[test]
fn accept_encoding_no_header_returns_identity() {
assert_eq!(parse_accept_encoding(&req_no_headers()).unwrap(), "identity");
}
#[test]
fn accept_encoding_gzip() {
assert_eq!(parse_accept_encoding(&req_with_accept("gzip")).unwrap(), "gzip");
}
#[test]
fn accept_encoding_identity_explicit() {
assert_eq!(parse_accept_encoding(&req_with_accept("identity")).unwrap(), "identity");
}
#[test]
fn accept_encoding_gzip_higher_quality() {
assert_eq!(
parse_accept_encoding(&req_with_accept("identity;q=0.1, gzip;q=0.9")).unwrap(),
"gzip"
);
}
#[test]
fn accept_encoding_wildcard_returns_identity() {
assert_eq!(parse_accept_encoding(&req_with_accept("*")).unwrap(), "identity");
}
#[test]
fn accept_encoding_wildcard_identity_rejected_returns_gzip() {
assert_eq!(
parse_accept_encoding(&req_with_accept("identity;q=0, *")).unwrap(),
"gzip"
);
}
#[test]
fn accept_encoding_all_rejected_errors() {
assert!(parse_accept_encoding(&req_with_accept("identity;q=0, *;q=0")).is_err());
}
#[test]
fn accept_encoding_unsupported_only_falls_to_identity() {
assert_eq!(parse_accept_encoding(&req_with_accept("br")).unwrap(), "identity");
}
// is_identity_acceptable
#[test]
fn identity_acceptable_no_header() {
assert!(is_identity_acceptable(&req_no_headers()));
}
#[test]
fn identity_acceptable_gzip_only() {
assert!(is_identity_acceptable(&req_with_accept("gzip")));
}
#[test]
fn identity_not_acceptable_identity_q0() {
assert!(!is_identity_acceptable(&req_with_accept("identity;q=0")));
}
#[test]
fn identity_not_acceptable_wildcard_q0() {
assert!(!is_identity_acceptable(&req_with_accept("*;q=0")));
}
#[test]
fn identity_acceptable_wildcard_q0_but_identity_explicit() {
assert!(is_identity_acceptable(&req_with_accept("*;q=0, identity")));
}
// parse_content_encoding
#[test]
fn content_encoding_empty_returns_identity() {
assert_eq!(parse_content_encoding(&req_no_headers()).unwrap(), "identity");
}
#[test]
fn content_encoding_gzip() {
assert_eq!(parse_content_encoding(&req_with_content("gzip")).unwrap(), "gzip");
}
#[test]
fn content_encoding_identity_explicit() {
assert_eq!(parse_content_encoding(&req_with_content("identity")).unwrap(), "identity");
}
#[test]
fn content_encoding_unsupported_errors() {
assert!(parse_content_encoding(&req_with_content("br")).is_err());
}
#[test]
fn content_encoding_case_insensitive() {
assert_eq!(parse_content_encoding(&req_with_content("GZIP")).unwrap(), "gzip");
}
}
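
A sketch of the intended call pattern for a handler that negotiates both directions (the status mapping mirrors files.rs below):

fn negotiate<B>(
    req: &Request<B>,
) -> Result<(&'static str, &'static str), (axum::http::StatusCode, String)> {
    let response_enc =
        parse_accept_encoding(req).map_err(|e| (axum::http::StatusCode::NOT_ACCEPTABLE, e))?;
    let request_enc =
        parse_content_encoding(req).map_err(|e| (axum::http::StatusCode::BAD_REQUEST, e))?;
    Ok((request_enc, response_enc))
}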

envd-rs/src/http/envs.rs Normal file
@ -0,0 +1,25 @@
use std::collections::HashMap;
use std::sync::Arc;
use axum::Json;
use axum::extract::State;
use axum::http::header;
use axum::response::IntoResponse;
use crate::state::AppState;
pub async fn get_envs(State(state): State<Arc<AppState>>) -> impl IntoResponse {
tracing::debug!("getting env vars");
let envs: HashMap<String, String> = state
.defaults
.env_vars
.iter()
.map(|entry| (entry.key().clone(), entry.value().clone()))
.collect();
(
[(header::CACHE_CONTROL, "no-store")],
Json(envs),
)
}

envd-rs/src/http/error.rs Normal file
@ -0,0 +1,20 @@
use axum::Json;
use axum::http::StatusCode;
use axum::response::IntoResponse;
use serde::Serialize;
#[derive(Serialize)]
struct ErrorBody {
code: u16,
message: String,
}
pub fn json_error(status: StatusCode, message: &str) -> impl IntoResponse {
(
status,
Json(ErrorBody {
code: status.as_u16(),
message: message.to_string(),
}),
)
}

envd-rs/src/http/files.rs Normal file
@ -0,0 +1,447 @@
use std::io::Write as _;
use std::os::unix::fs::OpenOptionsExt;
use std::path::Path;
use std::sync::Arc;
use axum::body::Body;
use axum::extract::{FromRequest, Query, Request, State};
use axum::http::{StatusCode, header};
use axum::response::{IntoResponse, Response};
use serde::{Deserialize, Serialize};
use crate::auth::signing;
use crate::execcontext;
use crate::http::encoding;
use crate::permissions::path::{ensure_dirs, expand_and_resolve};
use crate::permissions::user::lookup_user;
use crate::state::AppState;
const ACCESS_TOKEN_HEADER: &str = "x-access-token";
#[derive(Deserialize)]
pub struct FileParams {
pub path: Option<String>,
pub username: Option<String>,
pub signature: Option<String>,
pub signature_expiration: Option<i64>,
}
#[derive(Serialize)]
struct EntryInfo {
path: String,
name: String,
r#type: &'static str,
}
fn json_error(status: StatusCode, msg: &str) -> Response {
let body = serde_json::json!({ "code": status.as_u16(), "message": msg });
(status, axum::Json(body)).into_response()
}
fn extract_header_token(req: &Request) -> Option<&str> {
req.headers()
.get(ACCESS_TOKEN_HEADER)
.and_then(|v| v.to_str().ok())
}
fn validate_file_signing(
state: &AppState,
header_token: Option<&str>,
params: &FileParams,
path: &str,
operation: &str,
username: &str,
) -> Result<(), String> {
signing::validate_signing(
&state.access_token,
header_token,
params.signature.as_deref(),
params.signature_expiration,
username,
path,
operation,
)
}
/// GET /files — download a file
pub async fn get_files(
State(state): State<Arc<AppState>>,
Query(params): Query<FileParams>,
req: Request,
) -> Response {
let path_str = params.path.as_deref().unwrap_or("");
let header_token = extract_header_token(&req);
let default_user = state.defaults.user();
let username = match execcontext::resolve_default_username(
params.username.as_deref(),
&default_user,
) {
Ok(u) => u.to_string(),
Err(e) => return json_error(StatusCode::BAD_REQUEST, e),
};
if let Err(e) = validate_file_signing(
&state,
header_token,
&params,
path_str,
signing::READ_OPERATION,
&username,
) {
return json_error(StatusCode::UNAUTHORIZED, &e);
}
let user = match lookup_user(&username) {
Ok(u) => u,
Err(e) => return json_error(StatusCode::UNAUTHORIZED, &e),
};
let home_dir = user.dir.to_string_lossy().to_string();
let default_workdir = state.defaults.workdir();
let resolved = match expand_and_resolve(path_str, &home_dir, default_workdir.as_deref())
{
Ok(p) => p,
Err(e) => return json_error(StatusCode::BAD_REQUEST, &e),
};
let meta = match std::fs::metadata(&resolved) {
Ok(m) => m,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
return json_error(
StatusCode::NOT_FOUND,
&format!("path '{}' does not exist", resolved),
);
}
Err(e) => {
return json_error(
StatusCode::INTERNAL_SERVER_ERROR,
&format!("error checking path: {e}"),
);
}
};
if meta.is_dir() {
return json_error(
StatusCode::BAD_REQUEST,
&format!("path '{}' is a directory", resolved),
);
}
if !meta.file_type().is_file() {
return json_error(
StatusCode::BAD_REQUEST,
&format!("path '{}' is not a regular file", resolved),
);
}
let accept_enc = match encoding::parse_accept_encoding(&req) {
Ok(e) => e,
Err(e) => return json_error(StatusCode::NOT_ACCEPTABLE, &e),
};
let has_range_or_conditional = req.headers().get("range").is_some()
|| req.headers().get("if-modified-since").is_some()
|| req.headers().get("if-none-match").is_some()
|| req.headers().get("if-range").is_some();
let use_encoding = if has_range_or_conditional {
if !encoding::is_identity_acceptable(&req) {
return json_error(
StatusCode::NOT_ACCEPTABLE,
"identity encoding not acceptable for Range or conditional request",
);
}
"identity"
} else {
accept_enc
};
let file_data = match std::fs::read(&resolved) {
Ok(d) => d,
Err(e) => {
return json_error(
StatusCode::INTERNAL_SERVER_ERROR,
&format!("error reading file: {e}"),
);
}
};
let filename = Path::new(&resolved)
.file_name()
.map(|n| n.to_string_lossy().to_string())
.unwrap_or_default();
let content_disposition = format!("inline; filename=\"{}\"", filename);
let content_type = mime_guess::from_path(&resolved)
.first_raw()
.unwrap_or("application/octet-stream");
if use_encoding == "gzip" {
let mut encoder =
flate2::write::GzEncoder::new(Vec::new(), flate2::Compression::default());
if let Err(e) = encoder.write_all(&file_data) {
return json_error(
StatusCode::INTERNAL_SERVER_ERROR,
&format!("gzip encoding error: {e}"),
);
}
let compressed = match encoder.finish() {
Ok(d) => d,
Err(e) => {
return json_error(
StatusCode::INTERNAL_SERVER_ERROR,
&format!("gzip finish error: {e}"),
);
}
};
return Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, content_type)
.header(header::CONTENT_ENCODING, "gzip")
.header(header::CONTENT_DISPOSITION, content_disposition)
.header(header::VARY, "Accept-Encoding")
.body(Body::from(compressed))
.unwrap();
}
Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, content_type)
.header(header::CONTENT_DISPOSITION, content_disposition)
.header(header::VARY, "Accept-Encoding")
.header(header::CONTENT_LENGTH, file_data.len())
.body(Body::from(file_data))
.unwrap()
}
/// POST /files — upload file(s) via multipart
pub async fn post_files(
State(state): State<Arc<AppState>>,
Query(params): Query<FileParams>,
req: Request,
) -> Response {
let path_str = params.path.as_deref().unwrap_or("");
let header_token = extract_header_token(&req);
let default_user = state.defaults.user();
let username = match execcontext::resolve_default_username(
params.username.as_deref(),
&default_user,
) {
Ok(u) => u.to_string(),
Err(e) => return json_error(StatusCode::BAD_REQUEST, e),
};
if let Err(e) = validate_file_signing(
&state,
header_token,
&params,
path_str,
signing::WRITE_OPERATION,
&username,
) {
return json_error(StatusCode::UNAUTHORIZED, &e);
}
let user = match lookup_user(&username) {
Ok(u) => u,
Err(e) => return json_error(StatusCode::UNAUTHORIZED, &e),
};
let home_dir = user.dir.to_string_lossy().to_string();
let uid = user.uid;
let gid = user.gid;
let content_enc = match encoding::parse_content_encoding(&req) {
Ok(e) => e,
Err(e) => return json_error(StatusCode::BAD_REQUEST, &e),
};
let mut multipart = match axum::extract::Multipart::from_request(req, &()).await {
Ok(m) => m,
Err(e) => {
return json_error(
StatusCode::INTERNAL_SERVER_ERROR,
&format!("error parsing multipart: {e}"),
);
}
};
let mut uploaded: Vec<EntryInfo> = Vec::new();
let default_workdir = state.defaults.workdir();
while let Ok(Some(field)) = multipart.next_field().await {
let field_name = field.name().unwrap_or("").to_string();
if field_name != "file" {
continue;
}
let file_path = if !path_str.is_empty() {
match expand_and_resolve(path_str, &home_dir, default_workdir.as_deref()) {
Ok(p) => p,
Err(e) => return json_error(StatusCode::BAD_REQUEST, &e),
}
} else {
let fname = field
.file_name()
.unwrap_or("upload")
.to_string();
match expand_and_resolve(&fname, &home_dir, default_workdir.as_deref()) {
Ok(p) => p,
Err(e) => return json_error(StatusCode::BAD_REQUEST, &e),
}
};
if uploaded.iter().any(|e| e.path == file_path) {
return json_error(
StatusCode::BAD_REQUEST,
&format!("cannot upload multiple files to same path '{}'", file_path),
);
}
let raw_bytes = match field.bytes().await {
Ok(b) => b,
Err(e) => {
return json_error(
StatusCode::INTERNAL_SERVER_ERROR,
&format!("error reading field: {e}"),
);
}
};
let data = if content_enc == "gzip" {
use std::io::Read;
let mut decoder = flate2::read::GzDecoder::new(&raw_bytes[..]);
let mut buf = Vec::new();
match decoder.read_to_end(&mut buf) {
Ok(_) => buf,
Err(e) => {
return json_error(
StatusCode::BAD_REQUEST,
&format!("gzip decompression failed: {e}"),
);
}
}
} else {
raw_bytes.to_vec()
};
if let Err(e) = process_file(&file_path, &data, uid, gid) {
let (status, msg) = e;
return json_error(status, &msg);
}
let name = Path::new(&file_path)
.file_name()
.map(|n| n.to_string_lossy().to_string())
.unwrap_or_default();
uploaded.push(EntryInfo {
path: file_path,
name,
r#type: "file",
});
}
axum::Json(uploaded).into_response()
}
fn process_file(
path: &str,
data: &[u8],
uid: nix::unistd::Uid,
gid: nix::unistd::Gid,
) -> Result<(), (StatusCode, String)> {
let dir = Path::new(path)
.parent()
.map(|p| p.to_string_lossy().to_string())
.unwrap_or_default();
if !dir.is_empty() {
ensure_dirs(&dir, uid, gid).map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("error ensuring directories: {e}"),
)
})?;
}
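// Chown an existing file before truncating it; a brand-new file is created
// first and chowned right after opening (see below).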
let can_pre_chown = match std::fs::metadata(path) {
Ok(meta) => {
if meta.is_dir() {
return Err((
StatusCode::BAD_REQUEST,
format!("path is a directory: {path}"),
));
}
true
}
Err(e) if e.kind() == std::io::ErrorKind::NotFound => false,
Err(e) => {
return Err((
StatusCode::INTERNAL_SERVER_ERROR,
format!("error getting file info: {e}"),
))
}
};
let mut chowned = false;
if can_pre_chown {
match std::os::unix::fs::chown(path, Some(uid.as_raw()), Some(gid.as_raw())) {
Ok(()) => chowned = true,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {}
Err(e) => {
return Err((
StatusCode::INTERNAL_SERVER_ERROR,
format!("error changing ownership: {e}"),
))
}
}
}
let mut file = std::fs::OpenOptions::new()
.write(true)
.create(true)
.truncate(true)
.mode(0o666)
.open(path)
.map_err(|e| {
if e.raw_os_error() == Some(libc::ENOSPC) {
return (
StatusCode::INSUFFICIENT_STORAGE,
"not enough disk space available".to_string(),
);
}
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("error opening file: {e}"),
)
})?;
if !chowned {
std::os::unix::fs::chown(path, Some(uid.as_raw()), Some(gid.as_raw())).map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("error changing ownership: {e}"),
)
})?;
}
file.write_all(data).map_err(|e| {
if e.raw_os_error() == Some(libc::ENOSPC) {
return (
StatusCode::INSUFFICIENT_STORAGE,
"not enough disk space available".to_string(),
);
}
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("error writing file: {e}"),
)
})?;
Ok(())
}

envd-rs/src/http/health.rs Normal file
@ -0,0 +1,20 @@
use std::sync::Arc;
use axum::Json;
use axum::extract::State;
use axum::http::header;
use axum::response::IntoResponse;
use serde_json::json;
use crate::state::AppState;
pub async fn get_health(State(state): State<Arc<AppState>>) -> impl IntoResponse {
state.try_restore_recovery();
tracing::trace!("health check");
(
[(header::CACHE_CONTROL, "no-store")],
Json(json!({ "version": state.version })),
)
}

envd-rs/src/http/init.rs Normal file
@ -0,0 +1,249 @@
use std::collections::HashMap;
use std::sync::Arc;
use axum::Json;
use axum::extract::State;
use axum::http::{StatusCode, header};
use axum::response::IntoResponse;
use serde::Deserialize;
use crate::state::AppState;
#[derive(Deserialize, Default)]
pub struct InitRequest {
#[serde(rename = "access_token")]
pub access_token: Option<String>,
#[serde(rename = "defaultUser")]
pub default_user: Option<String>,
#[serde(rename = "defaultWorkdir")]
pub default_workdir: Option<String>,
#[serde(rename = "envVars")]
pub env_vars: Option<HashMap<String, String>>,
#[serde(rename = "hyperloop_ip")]
pub hyperloop_ip: Option<String>,
pub timestamp: Option<String>,
#[serde(rename = "volume_mounts")]
pub volume_mounts: Option<Vec<VolumeMount>>,
pub sandbox_id: Option<String>,
pub template_id: Option<String>,
}
#[derive(Deserialize)]
pub struct VolumeMount {
pub nfs_target: String,
pub path: String,
}
/// POST /init — called by host agent after boot and after every resume.
pub async fn post_init(
State(state): State<Arc<AppState>>,
body: Option<Json<InitRequest>>,
) -> impl IntoResponse {
let init_req = body.map(|b| b.0).unwrap_or_default();
// Validate access token if provided
if let Some(ref token_str) = init_req.access_token {
if let Err(e) = validate_init_access_token(&state, token_str).await {
tracing::error!(error = %e, "init: access token validation failed");
return (StatusCode::UNAUTHORIZED, e).into_response();
}
}
// Idempotent timestamp check
if let Some(ref ts_str) = init_req.timestamp {
if let Ok(ts) = parse_timestamp_to_nanos(ts_str) {
if !state.last_set_time.set_to_greater(ts) {
// Stale request, skip data updates
return trigger_restore_and_respond(&state).await;
}
}
}
// Apply env vars
if let Some(ref vars) = init_req.env_vars {
tracing::debug!(count = vars.len(), "setting env vars");
for (k, v) in vars {
state.defaults.env_vars.insert(k.clone(), v.clone());
}
}
// Set access token
if let Some(ref token_str) = init_req.access_token {
if !token_str.is_empty() {
tracing::debug!("setting access token");
let _ = state.access_token.set(token_str.as_bytes());
} else if state.access_token.is_set() {
tracing::debug!("clearing access token");
state.access_token.destroy();
}
}
// Set default user
if let Some(ref user) = init_req.default_user {
if !user.is_empty() {
tracing::debug!(user = %user, "setting default user");
state.defaults.set_user(user.clone());
}
}
// Set default workdir
if let Some(ref workdir) = init_req.default_workdir {
if !workdir.is_empty() {
tracing::debug!(workdir = %workdir, "setting default workdir");
state.defaults.set_workdir(Some(workdir.clone()));
}
}
// Hyperloop /etc/hosts setup
if let Some(ref ip) = init_req.hyperloop_ip {
let ip = ip.clone();
let env_vars = Arc::clone(&state.defaults.env_vars);
tokio::spawn(async move {
setup_hyperloop(&ip, &env_vars).await;
});
}
// NFS mounts
if let Some(ref mounts) = init_req.volume_mounts {
for mount in mounts {
let target = mount.nfs_target.clone();
let path = mount.path.clone();
tokio::spawn(async move {
setup_nfs(&target, &path).await;
});
}
}
// Set sandbox/template metadata from request body.
if let Some(ref id) = init_req.sandbox_id {
tracing::debug!(sandbox_id = %id, "setting sandbox ID from init request");
// SAFETY: nothing else reads or mutates the process environment during init.
unsafe { std::env::set_var("WRENN_SANDBOX_ID", id) };
write_run_file(".WRENN_SANDBOX_ID", id);
state.defaults.env_vars.insert("WRENN_SANDBOX_ID".into(), id.clone());
}
if let Some(ref id) = init_req.template_id {
tracing::debug!(template_id = %id, "setting template ID from init request");
// SAFETY: nothing else reads or mutates the process environment during init.
unsafe { std::env::set_var("WRENN_TEMPLATE_ID", id) };
write_run_file(".WRENN_TEMPLATE_ID", id);
state.defaults.env_vars.insert("WRENN_TEMPLATE_ID".into(), id.clone());
}
trigger_restore_and_respond(&state).await
}
async fn trigger_restore_and_respond(state: &AppState) -> axum::response::Response {
state.try_restore_recovery();
(
StatusCode::NO_CONTENT,
[(header::CACHE_CONTROL, "no-store")],
)
.into_response()
}
async fn validate_init_access_token(state: &AppState, request_token: &str) -> Result<(), String> {
// Fast path: matches existing token
if state.access_token.is_set() && !request_token.is_empty() && state.access_token.equals(request_token) {
return Ok(());
}
// First-time setup: no existing token
if !state.access_token.is_set() {
return Ok(());
}
if request_token.is_empty() {
return Err("access token reset not authorized".into());
}
Err("access token validation failed".into())
}
async fn setup_hyperloop(address: &str, env_vars: &dashmap::DashMap<String, String>) {
// Write to /etc/hosts: events.wrenn.local → address
let entry = format!("{address} events.wrenn.local\n");
match std::fs::read_to_string("/etc/hosts") {
Ok(contents) => {
let filtered: String = contents
.lines()
.filter(|line| !line.contains("events.wrenn.local"))
.collect::<Vec<_>>()
.join("\n");
let new_contents = format!("{filtered}\n{entry}");
if let Err(e) = std::fs::write("/etc/hosts", new_contents) {
tracing::error!(error = %e, "failed to modify hosts file");
return;
}
}
Err(e) => {
tracing::error!(error = %e, "failed to read hosts file");
return;
}
}
env_vars.insert(
"WRENN_EVENTS_ADDRESS".into(),
format!("http://{address}"),
);
}
async fn setup_nfs(nfs_target: &str, path: &str) {
let mkdir = tokio::process::Command::new("mkdir")
.args(["-p", path])
.output()
.await;
if let Err(e) = mkdir {
tracing::error!(error = %e, path, "nfs: mkdir failed");
return;
}
let mount = tokio::process::Command::new("mount")
.args([
"-v",
"-t",
"nfs",
"-o",
"mountproto=tcp,mountport=2049,proto=tcp,port=2049,nfsvers=3,noacl",
nfs_target,
path,
])
.output()
.await;
match mount {
Ok(output) => {
let stdout = String::from_utf8_lossy(&output.stdout);
let stderr = String::from_utf8_lossy(&output.stderr);
if output.status.success() {
tracing::info!(nfs_target, path, stdout = %stdout, "nfs: mount success");
} else {
tracing::error!(nfs_target, path, stderr = %stderr, "nfs: mount failed");
}
}
Err(e) => {
tracing::error!(error = %e, nfs_target, path, "nfs: mount command failed");
}
}
}
fn write_run_file(name: &str, value: &str) {
let dir = std::path::Path::new("/run/wrenn");
if let Err(e) = std::fs::create_dir_all(dir) {
tracing::warn!(error = %e, "failed to create /run/wrenn");
return;
}
if let Err(e) = std::fs::write(dir.join(name), value) {
tracing::warn!(error = %e, name, "failed to write run file");
}
}
fn parse_timestamp_to_nanos(ts: &str) -> Result<i64, ()> {
// Timestamps arrive as fractional Unix seconds; convert to integer nanoseconds.
match ts.parse::<f64>() {
Ok(s) => Ok((s * 1_000_000_000.0) as i64),
Err(_) => Err(()),
}
}
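
The wire shape post_init accepts, reconstructed from the serde attributes above; the values are examples only (serde_json is already used elsewhere in this diff):

#[cfg(test)]
mod wire_shape_tests {
    use super::*;

    #[test]
    fn init_request_deserializes() {
        let body = r#"{
            "access_token": "tok",
            "defaultUser": "user",
            "defaultWorkdir": "/home/user",
            "envVars": {"FOO": "bar"},
            "hyperloop_ip": "10.0.0.2",
            "timestamp": "1715900000.5",
            "volume_mounts": [{"nfs_target": "10.0.0.3:/export", "path": "/mnt/data"}],
            "sandbox_id": "sbx_123",
            "template_id": "tpl_456"
        }"#;
        let req: InitRequest = serde_json::from_str(body).unwrap();
        assert_eq!(req.default_user.as_deref(), Some("user"));
        assert_eq!(req.volume_mounts.unwrap().len(), 1);
    }
}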

envd-rs/src/http/metrics.rs Normal file
@ -0,0 +1,89 @@
use std::sync::Arc;
use std::time::{SystemTime, UNIX_EPOCH};
use axum::Json;
use axum::extract::State;
use axum::http::{StatusCode, header};
use axum::response::IntoResponse;
use serde::Serialize;
use crate::state::AppState;
#[derive(Serialize)]
pub struct Metrics {
ts: i64,
cpu_count: u32,
cpu_used_pct: f32,
mem_total_mib: u64,
mem_used_mib: u64,
mem_total: u64,
mem_used: u64,
disk_used: u64,
disk_total: u64,
}
pub async fn get_metrics(State(state): State<Arc<AppState>>) -> impl IntoResponse {
tracing::trace!("get metrics");
match collect_metrics(&state) {
Ok(m) => (
StatusCode::OK,
[(header::CACHE_CONTROL, "no-store")],
Json(m),
)
.into_response(),
Err(e) => {
tracing::error!(error = %e, "failed to get metrics");
StatusCode::INTERNAL_SERVER_ERROR.into_response()
}
}
}
fn collect_metrics(state: &AppState) -> Result<Metrics, String> {
let cpu_count = state.cpu_count();
let cpu_used_pct_rounded = state.cpu_used_pct();
let mut sys = sysinfo::System::new();
sys.refresh_memory();
let mem_total = sys.total_memory();
let mem_available = sys.available_memory();
let mem_used = mem_total.saturating_sub(mem_available);
let mem_total_mib = mem_total / 1024 / 1024;
let mem_used_mib = mem_used / 1024 / 1024;
let (disk_total, disk_used) = disk_stats("/").map_err(|e| e.to_string())?;
let ts = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs() as i64;
Ok(Metrics {
ts,
cpu_count,
cpu_used_pct: cpu_used_pct_rounded,
mem_total_mib,
mem_used_mib,
mem_total,
mem_used,
disk_used,
disk_total,
})
}
fn disk_stats(path: &str) -> Result<(u64, u64), nix::Error> {
use std::ffi::CString;
let c_path = CString::new(path).unwrap();
let mut stat: libc::statfs = unsafe { std::mem::zeroed() };
let ret = unsafe { libc::statfs(c_path.as_ptr(), &mut stat) };
if ret != 0 {
return Err(nix::Error::last());
}
let block = stat.f_bsize as u64;
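// f_bavail excludes blocks reserved for root, so `used = total - available`
// counts reserved space as used, matching df(1)'s arithmetic.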
let total = stat.f_blocks * block;
let available = stat.f_bavail * block;
Ok((total, total - available))
}

envd-rs/src/http/mod.rs Normal file
@ -0,0 +1,56 @@
pub mod encoding;
pub mod envs;
pub mod error;
pub mod files;
pub mod health;
pub mod init;
pub mod metrics;
pub mod snapshot;
use std::sync::Arc;
use axum::Router;
use axum::routing::{get, post};
use http::header::{CACHE_CONTROL, HeaderName};
use http::Method;
use tower_http::cors::{AllowHeaders, AllowMethods, AllowOrigin, CorsLayer};
use crate::config::CORS_MAX_AGE;
use crate::state::AppState;
pub fn router(state: Arc<AppState>) -> Router {
let cors = CorsLayer::new()
.allow_origin(AllowOrigin::any())
.allow_methods(AllowMethods::list([
Method::HEAD,
Method::GET,
Method::POST,
Method::PUT,
Method::PATCH,
Method::DELETE,
]))
.allow_headers(AllowHeaders::any())
.expose_headers([
HeaderName::from_static("location"),
CACHE_CONTROL,
HeaderName::from_static("x-content-type-options"),
HeaderName::from_static("connect-content-encoding"),
HeaderName::from_static("connect-protocol-version"),
HeaderName::from_static("grpc-encoding"),
HeaderName::from_static("grpc-message"),
HeaderName::from_static("grpc-status"),
HeaderName::from_static("grpc-status-details-bin"),
])
.max_age(CORS_MAX_AGE);
Router::new()
.route("/health", get(health::get_health))
.route("/metrics", get(metrics::get_metrics))
.route("/envs", get(envs::get_envs))
.route("/init", post(init::post_init))
.route("/snapshot/prepare", post(snapshot::post_snapshot_prepare))
.route("/files", get(files::get_files).post(files::post_files))
.layer(cors)
.with_state(state)
}

envd-rs/src/http/snapshot.rs Normal file
@ -0,0 +1,49 @@
use std::sync::Arc;
use std::sync::atomic::Ordering;
use axum::extract::State;
use axum::http::{StatusCode, header};
use axum::response::IntoResponse;
use crate::state::AppState;
/// POST /snapshot/prepare — quiesce subsystems before VM snapshot.
///
/// In Rust there is no GC dance. We just:
/// 1. Drop page cache to shrink snapshot size
/// 2. Stop port subsystem
/// 3. Close idle connections via conntracker
/// 4. Set needs_restore flag
pub async fn post_snapshot_prepare(State(state): State<Arc<AppState>>) -> impl IntoResponse {
// Drop page cache BEFORE blocking the reclaimer — avoids snapshotting
// gigabytes of stale cache that inflates the memory dump on disk.
// "1" = pagecache only (keep dentries/inodes for faster resume).
if let Err(e) = std::fs::write("/proc/sys/vm/drop_caches", "1") {
tracing::warn!(error = %e, "snapshot/prepare: drop_caches failed");
} else {
tracing::info!("snapshot/prepare: page cache dropped");
}
// Block memory reclaimer — prevents drop_caches from running mid-freeze
// which would corrupt kernel page table state.
state.snapshot_in_progress.store(true, Ordering::Release);
if let Some(ref ps) = state.port_subsystem {
ps.stop();
tracing::info!("snapshot/prepare: port subsystem stopped");
}
state.conn_tracker.prepare_for_snapshot();
tracing::info!("snapshot/prepare: connections prepared");
// Sync filesystem buffers so dirty pages are flushed before freeze.
unsafe { libc::sync(); }
state.needs_restore.store(true, Ordering::Release);
tracing::info!("snapshot/prepare: ready for freeze");
(
StatusCode::NO_CONTENT,
[(header::CACHE_CONTROL, "no-store")],
)
}

envd-rs/src/logging.rs Normal file
@ -0,0 +1,17 @@
use tracing_subscriber::{EnvFilter, fmt, layer::SubscriberExt, util::SubscriberInitExt};
pub fn init(json: bool) {
let filter = EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new("info"));
if json {
tracing_subscriber::registry()
.with(filter)
.with(fmt::layer().json().flatten_event(true))
.init();
} else {
tracing_subscriber::registry()
.with(filter)
.with(fmt::layer())
.init();
}
}

envd-rs/src/main.rs Normal file
@ -0,0 +1,269 @@
#![allow(dead_code)]
mod auth;
mod cgroups;
mod config;
mod conntracker;
mod crypto;
mod execcontext;
mod http;
mod logging;
mod permissions;
mod port;
mod rpc;
mod state;
mod util;
use std::fs;
use std::net::SocketAddr;
use std::path::Path;
use std::sync::Arc;
use clap::Parser;
use tokio::net::TcpListener;
use config::{DEFAULT_PORT, DEFAULT_USER, WRENN_RUN_DIR};
use execcontext::Defaults;
use port::subsystem::PortSubsystem;
use state::AppState;
const VERSION: &str = env!("CARGO_PKG_VERSION");
const COMMIT: &str = {
match option_env!("ENVD_COMMIT") {
Some(c) => c,
None => "unknown",
}
};
#[derive(Parser)]
#[command(name = "envd", about = "Wrenn guest agent daemon")]
struct Cli {
#[arg(long, default_value_t = DEFAULT_PORT)]
port: u16,
#[arg(long)]
version: bool,
#[arg(long)]
commit: bool,
#[arg(long = "cmd", default_value = "")]
start_cmd: String,
#[arg(long = "cgroup-root", default_value = "/sys/fs/cgroup")]
cgroup_root: String,
}
#[tokio::main]
async fn main() {
let cli = Cli::parse();
if cli.version {
println!("{VERSION}");
return;
}
if cli.commit {
println!("{COMMIT}");
return;
}
logging::init(true);
if let Err(e) = fs::create_dir_all(WRENN_RUN_DIR) {
tracing::error!(error = %e, "failed to create wrenn run directory");
}
let defaults = Defaults::new(DEFAULT_USER);
defaults
.env_vars
.insert("WRENN_SANDBOX".into(), "true".into());
let wrenn_sandbox_path = Path::new(WRENN_RUN_DIR).join(".WRENN_SANDBOX");
if let Err(e) = fs::write(&wrenn_sandbox_path, b"true") {
tracing::error!(error = %e, "failed to write sandbox file");
}
// Cgroup manager
let cgroup_manager: Arc<dyn cgroups::CgroupManager> =
match cgroups::Cgroup2Manager::new(
&cli.cgroup_root,
&[
(
cgroups::ProcessType::Pty,
"wrenn/pty",
&[] as &[(&str, &str)],
),
(
cgroups::ProcessType::User,
"wrenn/user",
&[] as &[(&str, &str)],
),
(
cgroups::ProcessType::Socat,
"wrenn/socat",
&[] as &[(&str, &str)],
),
],
) {
Ok(m) => {
tracing::info!("cgroup2 manager initialized");
Arc::new(m)
}
Err(e) => {
tracing::warn!(error = %e, "cgroup2 init failed, using noop");
Arc::new(cgroups::NoopCgroupManager)
}
};
// Port subsystem
let port_subsystem = Arc::new(PortSubsystem::new(Arc::clone(&cgroup_manager)));
port_subsystem.start();
tracing::info!("port subsystem started");
let state = AppState::new(
defaults,
VERSION.to_string(),
COMMIT.to_string(),
Some(Arc::clone(&port_subsystem)),
);
// Memory reclaimer — drop page cache when available memory is low.
// The balloon device can only reclaim pages the guest kernel freed.
// Pauses during snapshot/prepare to avoid corrupting kernel page table state.
{
let state_for_reclaimer = Arc::clone(&state);
std::thread::spawn(move || memory_reclaimer(state_for_reclaimer));
}
// RPC services (Connect protocol — serves Connect + gRPC + gRPC-Web on same port)
let connect_router = rpc::rpc_router(Arc::clone(&state));
let app = http::router(Arc::clone(&state))
.fallback_service(connect_router.into_axum_service());
// --cmd: spawn initial process if specified
if !cli.start_cmd.is_empty() {
let cmd = cli.start_cmd.clone();
let state_clone = Arc::clone(&state);
tokio::spawn(async move {
spawn_initial_command(&cmd, &state_clone);
});
}
let addr = SocketAddr::from(([0, 0, 0, 0], cli.port));
tracing::info!(port = cli.port, version = VERSION, commit = COMMIT, "envd starting");
let listener = TcpListener::bind(addr).await.expect("failed to bind");
let graceful = axum::serve(listener, app).with_graceful_shutdown(async move {
tokio::signal::unix::signal(tokio::signal::unix::SignalKind::terminate())
.expect("failed to register SIGTERM")
.recv()
.await;
tracing::info!("SIGTERM received, shutting down");
});
if let Err(e) = graceful.await {
tracing::error!(error = %e, "server error");
}
port_subsystem.stop();
}
fn spawn_initial_command(cmd: &str, state: &AppState) {
use crate::permissions::user::lookup_user;
use crate::rpc::process_handler;
use std::collections::HashMap;
let default_user = state.defaults.user();
let user = match lookup_user(&default_user) {
Ok(u) => u,
Err(e) => {
tracing::error!(error = %e, "cmd: failed to lookup user");
return;
}
};
let home = user.dir.to_string_lossy().to_string();
let default_workdir = state.defaults.workdir();
let cwd = default_workdir
.as_deref()
.unwrap_or(&home);
match process_handler::spawn_process(
cmd,
&[],
&HashMap::new(),
cwd,
None,
false,
Some("init-cmd".to_string()),
&user,
&state.defaults.env_vars,
) {
Ok(spawned) => {
tracing::info!(pid = spawned.handle.pid, cmd, "initial command spawned");
}
Err(e) => {
tracing::error!(error = %e, cmd, "failed to spawn initial command");
}
}
}
fn memory_reclaimer(state: Arc<AppState>) {
use std::sync::atomic::Ordering;
use std::time::{Duration, SystemTime, UNIX_EPOCH};
const CHECK_INTERVAL: Duration = Duration::from_secs(10);
const DROP_THRESHOLD_PCT: u64 = 80;
const RESTORE_GRACE_SECS: u64 = 30;
loop {
std::thread::sleep(CHECK_INTERVAL);
if state.snapshot_in_progress.load(Ordering::Acquire) {
continue;
}
// Skip during post-restore grace period. Balloon deflation causes
// transient high memory that resolves on its own — triggering
// drop_caches during UFFD page fault storms makes the guest unresponsive.
let restore_epoch = state.restore_epoch.load(Ordering::Acquire);
if restore_epoch > 0 {
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
if now.saturating_sub(restore_epoch) < RESTORE_GRACE_SECS {
continue;
}
}
let mut sys = sysinfo::System::new();
sys.refresh_memory();
let total = sys.total_memory();
let available = sys.available_memory();
if total == 0 {
continue;
}
let used_pct = ((total - available) * 100) / total;
if used_pct >= DROP_THRESHOLD_PCT {
if state.snapshot_in_progress.load(Ordering::Acquire) {
continue;
}
if let Err(e) = std::fs::write("/proc/sys/vm/drop_caches", "3") {
tracing::debug!(error = %e, "drop_caches failed");
} else {
let mut sys2 = sysinfo::System::new();
sys2.refresh_memory();
let freed_mb =
sys2.available_memory().saturating_sub(available) / (1024 * 1024);
tracing::info!(used_pct, freed_mb, "page cache dropped");
}
}
}
}

envd-rs/src/permissions/mod.rs Normal file
@ -0,0 +1,2 @@
pub mod user;
pub mod path;

envd-rs/src/permissions/path.rs Normal file
@ -0,0 +1,184 @@
use std::fs;
use std::os::unix::fs::chown;
use std::path::{Path, PathBuf};
use nix::unistd::{Gid, Uid};
fn expand_tilde(path: &str, home_dir: &str) -> Result<String, String> {
if path.is_empty() || !path.starts_with('~') {
return Ok(path.to_string());
}
if path.len() > 1 && path.as_bytes()[1] != b'/' && path.as_bytes()[1] != b'\\' {
return Err("cannot expand user-specific home dir".into());
}
Ok(format!("{}{}", home_dir, &path[1..]))
}
pub fn expand_and_resolve(
path: &str,
home_dir: &str,
default_path: Option<&str>,
) -> Result<String, String> {
let path = if path.is_empty() {
default_path.unwrap_or("").to_string()
} else {
path.to_string()
};
let path = expand_tilde(&path, home_dir)?;
if Path::new(&path).is_absolute() {
return Ok(path);
}
let joined = PathBuf::from(home_dir).join(&path);
joined
.canonicalize()
.or_else(|_| Ok(joined))
.map(|p| p.to_string_lossy().to_string())
}
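/// `mkdir -p` with ownership: creates each missing component of `path` and
/// chowns only the directories it creates, leaving existing ones untouched.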
pub fn ensure_dirs(path: &str, uid: Uid, gid: Gid) -> Result<(), String> {
let path = Path::new(path);
let mut current = PathBuf::new();
for component in path.components() {
current.push(component);
let current_str = current.to_string_lossy();
if current_str == "/" {
continue;
}
match fs::metadata(&current) {
Ok(meta) => {
if !meta.is_dir() {
return Err(format!("path is a file: {current_str}"));
}
}
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
fs::create_dir(&current)
.map_err(|e| format!("failed to create directory {current_str}: {e}"))?;
chown(&current, Some(uid.as_raw()), Some(gid.as_raw()))
.map_err(|e| format!("failed to chown directory {current_str}: {e}"))?;
}
Err(e) => {
return Err(format!("failed to stat directory {current_str}: {e}"));
}
}
}
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
// expand_tilde
#[test]
fn tilde_empty_passthrough() {
assert_eq!(expand_tilde("", "/home/u").unwrap(), "");
}
#[test]
fn tilde_no_tilde_passthrough() {
assert_eq!(expand_tilde("/absolute", "/home/u").unwrap(), "/absolute");
}
#[test]
fn tilde_bare() {
assert_eq!(expand_tilde("~", "/home/user").unwrap(), "/home/user");
}
#[test]
fn tilde_slash_path() {
assert_eq!(expand_tilde("~/docs", "/home/user").unwrap(), "/home/user/docs");
}
#[test]
fn tilde_nested() {
assert_eq!(expand_tilde("~/a/b/c", "/h").unwrap(), "/h/a/b/c");
}
#[test]
fn tilde_other_user_errors() {
assert!(expand_tilde("~bob/foo", "/home/user").is_err());
}
#[test]
fn tilde_relative_no_tilde() {
assert_eq!(expand_tilde("relative/path", "/home/u").unwrap(), "relative/path");
}
// expand_and_resolve
#[test]
fn resolve_absolute_passthrough() {
assert_eq!(expand_and_resolve("/abs/path", "/home", None).unwrap(), "/abs/path");
}
#[test]
fn resolve_empty_uses_default() {
assert_eq!(expand_and_resolve("", "/home", Some("/default")).unwrap(), "/default");
}
#[test]
fn resolve_empty_no_default_falls_back_to_home() {
// Empty path with no default → joins "" with home_dir → returns home_dir
let result = expand_and_resolve("", "/home", None).unwrap();
assert_eq!(result, "/home");
}
#[test]
fn resolve_tilde_expands() {
assert_eq!(expand_and_resolve("~/dir", "/home/u", None).unwrap(), "/home/u/dir");
}
#[test]
fn resolve_relative_joins_home() {
let result = expand_and_resolve("subdir", "/tmp", None).unwrap();
// Relative path joined with home and canonicalized (or raw join on missing)
assert!(result.starts_with("/tmp"));
assert!(result.contains("subdir"));
}
#[test]
fn resolve_tilde_other_user_errors() {
assert!(expand_and_resolve("~bob", "/home/u", None).is_err());
}
// ensure_dirs
#[test]
fn ensure_dirs_creates_nested() {
let tmp = tempfile::TempDir::new().unwrap();
let path = tmp.path().join("a/b/c");
let uid = nix::unistd::getuid();
let gid = nix::unistd::getgid();
ensure_dirs(path.to_str().unwrap(), uid, gid).unwrap();
assert!(path.is_dir());
}
#[test]
fn ensure_dirs_existing_is_ok() {
let tmp = tempfile::TempDir::new().unwrap();
let uid = nix::unistd::getuid();
let gid = nix::unistd::getgid();
ensure_dirs(tmp.path().to_str().unwrap(), uid, gid).unwrap();
}
#[test]
fn ensure_dirs_file_in_path_errors() {
let tmp = tempfile::TempDir::new().unwrap();
let file_path = tmp.path().join("afile");
std::fs::write(&file_path, "").unwrap();
let nested = file_path.join("subdir");
let uid = nix::unistd::getuid();
let gid = nix::unistd::getgid();
let result = ensure_dirs(nested.to_str().unwrap(), uid, gid);
assert!(result.is_err());
assert!(result.unwrap_err().contains("path is a file"));
}
}


@@ -0,0 +1,32 @@
use nix::unistd::{Gid, Group, Uid, User};
pub fn lookup_user(username: &str) -> Result<User, String> {
User::from_name(username)
.map_err(|e| format!("error looking up user '{username}': {e}"))?
.ok_or_else(|| format!("user '{username}' not found"))
}
pub fn get_uid_gid(user: &User) -> (Uid, Gid) {
(user.uid, user.gid)
}
pub fn get_user_groups(user: &User) -> Vec<Gid> {
let c_name = std::ffi::CString::new(user.name.as_str()).unwrap();
nix::unistd::getgrouplist(&c_name, user.gid).unwrap_or_default()
}
pub fn lookup_username_by_uid(uid: Uid) -> String {
User::from_uid(uid)
.ok()
.flatten()
.map(|u| u.name)
.unwrap_or_else(|| uid.to_string())
}
pub fn lookup_groupname_by_gid(gid: Gid) -> String {
Group::from_gid(gid)
.ok()
.flatten()
.map(|g| g.name)
.unwrap_or_else(|| gid.to_string())
}

envd-rs/src/port/conn.rs Normal file

@@ -0,0 +1,260 @@
use std::io::{self, BufRead};
#[derive(Debug, Clone)]
pub struct ConnStat {
pub local_ip: String,
pub local_port: u32,
pub status: String,
pub family: u32,
pub inode: u64,
}
fn tcp_state_name(hex: &str) -> &'static str {
match hex {
"01" => "ESTABLISHED",
"02" => "SYN_SENT",
"03" => "SYN_RECV",
"04" => "FIN_WAIT1",
"05" => "FIN_WAIT2",
"06" => "TIME_WAIT",
"07" => "CLOSE",
"08" => "CLOSE_WAIT",
"09" => "LAST_ACK",
"0A" => "LISTEN",
"0B" => "CLOSING",
_ => "UNKNOWN",
}
}
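/// Snapshot of TCP sockets (all states) from /proc/net/tcp and
/// /proc/net/tcp6. Either file may be missing (e.g. IPv6 disabled) and is
/// then skipped silently.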
pub fn read_tcp_connections() -> Vec<ConnStat> {
let mut conns = Vec::new();
if let Ok(c) = parse_proc_net_tcp("/proc/net/tcp", libc::AF_INET as u32) {
conns.extend(c);
}
if let Ok(c) = parse_proc_net_tcp("/proc/net/tcp6", libc::AF_INET6 as u32) {
conns.extend(c);
}
conns
}
fn parse_proc_net_tcp(path: &str, family: u32) -> io::Result<Vec<ConnStat>> {
let file = std::fs::File::open(path)?;
let reader = io::BufReader::new(file);
let mut conns = Vec::new();
let mut first = true;
for line in reader.lines() {
let line = line?;
if first {
first = false;
continue;
}
let line = line.trim().to_string();
if line.is_empty() {
continue;
}
let fields: Vec<&str> = line.split_whitespace().collect();
if fields.len() < 10 {
continue;
}
let (ip, port) = match parse_hex_addr(fields[1], family) {
Some(v) => v,
None => continue,
};
let state = tcp_state_name(fields[3]);
let inode: u64 = match fields[9].parse() {
Ok(v) => v,
Err(_) => continue,
};
conns.push(ConnStat {
local_ip: ip,
local_port: port,
status: state.to_string(),
family,
inode,
});
}
Ok(conns)
}
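/// Decodes the "ADDR:PORT" hex field of /proc/net/tcp{,6}. The kernel
/// prints each 32-bit word of the address in host byte order, so on
/// little-endian hosts the bytes are reversed within each 4-byte group.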
fn parse_hex_addr(s: &str, family: u32) -> Option<(String, u32)> {
let (ip_hex, port_hex) = s.split_once(':')?;
let port = u32::from_str_radix(port_hex, 16).ok()?;
let ip_bytes = hex::decode(ip_hex).ok()?;
let ip_str = if family == libc::AF_INET as u32 {
if ip_bytes.len() != 4 {
return None;
}
format!("{}.{}.{}.{}", ip_bytes[3], ip_bytes[2], ip_bytes[1], ip_bytes[0])
} else {
if ip_bytes.len() != 16 {
return None;
}
let mut octets = [0u8; 16];
for i in 0..4 {
octets[i * 4] = ip_bytes[i * 4 + 3];
octets[i * 4 + 1] = ip_bytes[i * 4 + 2];
octets[i * 4 + 2] = ip_bytes[i * 4 + 1];
octets[i * 4 + 3] = ip_bytes[i * 4];
}
let addr = std::net::Ipv6Addr::from(octets);
addr.to_string()
};
Some((ip_str, port))
}
#[cfg(test)]
mod tests {
use super::*;
use std::io::Write;
// tcp_state_name
#[test]
fn state_all_known_codes() {
assert_eq!(tcp_state_name("01"), "ESTABLISHED");
assert_eq!(tcp_state_name("02"), "SYN_SENT");
assert_eq!(tcp_state_name("03"), "SYN_RECV");
assert_eq!(tcp_state_name("04"), "FIN_WAIT1");
assert_eq!(tcp_state_name("05"), "FIN_WAIT2");
assert_eq!(tcp_state_name("06"), "TIME_WAIT");
assert_eq!(tcp_state_name("07"), "CLOSE");
assert_eq!(tcp_state_name("08"), "CLOSE_WAIT");
assert_eq!(tcp_state_name("09"), "LAST_ACK");
assert_eq!(tcp_state_name("0A"), "LISTEN");
assert_eq!(tcp_state_name("0B"), "CLOSING");
}
#[test]
fn state_unknown_code() {
assert_eq!(tcp_state_name("FF"), "UNKNOWN");
assert_eq!(tcp_state_name("00"), "UNKNOWN");
}
// parse_hex_addr
#[test]
fn ipv4_localhost() {
let (ip, port) = parse_hex_addr("0100007F:0050", libc::AF_INET as u32).unwrap();
assert_eq!(ip, "127.0.0.1");
assert_eq!(port, 80);
}
#[test]
fn ipv4_any() {
let (ip, port) = parse_hex_addr("00000000:0035", libc::AF_INET as u32).unwrap();
assert_eq!(ip, "0.0.0.0");
assert_eq!(port, 53);
}
#[test]
fn ipv4_real_address() {
// 192.168.1.1 in little-endian = 0101A8C0
let (ip, port) = parse_hex_addr("0101A8C0:01BB", libc::AF_INET as u32).unwrap();
assert_eq!(ip, "192.168.1.1");
assert_eq!(port, 443);
}
#[test]
fn ipv4_wrong_byte_count_returns_none() {
assert!(parse_hex_addr("0100:0050", libc::AF_INET as u32).is_none());
}
#[test]
fn invalid_hex_returns_none() {
assert!(parse_hex_addr("ZZZZZZZZ:0050", libc::AF_INET as u32).is_none());
}
#[test]
fn no_colon_returns_none() {
assert!(parse_hex_addr("0100007F0050", libc::AF_INET as u32).is_none());
}
#[test]
fn ipv6_loopback() {
// ::1 in /proc/net/tcp6 format: 00000000000000000000000001000000
let (ip, port) = parse_hex_addr(
"00000000000000000000000001000000:0035",
libc::AF_INET6 as u32,
)
.unwrap();
assert_eq!(ip, "::1");
assert_eq!(port, 53);
}
#[test]
fn ipv6_wrong_byte_count_returns_none() {
assert!(parse_hex_addr("0100007F:0050", libc::AF_INET6 as u32).is_none());
}
// parse_proc_net_tcp
fn write_tcp_file(content: &str) -> tempfile::NamedTempFile {
let mut f = tempfile::NamedTempFile::new().unwrap();
f.write_all(content.as_bytes()).unwrap();
f.flush().unwrap();
f
}
#[test]
fn parse_empty_file() {
let f = write_tcp_file(
" sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode\n",
);
let conns = parse_proc_net_tcp(f.path().to_str().unwrap(), libc::AF_INET as u32).unwrap();
assert!(conns.is_empty());
}
#[test]
fn parse_single_entry() {
let content = "\
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
0: 0100007F:0050 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 12345 1 00000000\n";
let f = write_tcp_file(content);
let conns = parse_proc_net_tcp(f.path().to_str().unwrap(), libc::AF_INET as u32).unwrap();
assert_eq!(conns.len(), 1);
assert_eq!(conns[0].local_ip, "127.0.0.1");
assert_eq!(conns[0].local_port, 80);
assert_eq!(conns[0].status, "LISTEN");
assert_eq!(conns[0].inode, 12345);
assert_eq!(conns[0].family, libc::AF_INET as u32);
}
#[test]
fn parse_skips_malformed_rows() {
let content = "\
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
0: 0100007F:0050 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 12345 1 00000000
bad line
1: short\n";
let f = write_tcp_file(content);
let conns = parse_proc_net_tcp(f.path().to_str().unwrap(), libc::AF_INET as u32).unwrap();
assert_eq!(conns.len(), 1);
}
#[test]
fn parse_multiple_entries() {
let content = "\
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
0: 0100007F:0050 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 100 1 00000000
1: 00000000:01BB 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 200 1 00000000\n";
let f = write_tcp_file(content);
let conns = parse_proc_net_tcp(f.path().to_str().unwrap(), libc::AF_INET as u32).unwrap();
assert_eq!(conns.len(), 2);
assert_eq!(conns[0].local_port, 80);
assert_eq!(conns[1].local_port, 443);
}
#[test]
fn parse_nonexistent_file_errors() {
assert!(parse_proc_net_tcp("/nonexistent/path", libc::AF_INET as u32).is_err());
}
}


@@ -0,0 +1,181 @@
use std::collections::HashMap;
use std::os::unix::process::CommandExt;
use std::process::Command;
use std::sync::Arc;
use tokio::sync::mpsc;
use tokio_util::sync::CancellationToken;
use crate::cgroups::{CgroupManager, ProcessType};
use super::conn::ConnStat;
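// Bind address for forwarded listeners: the gateway-side link-local IP
// that the host reaches over the sandbox's TAP network.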
const DEFAULT_GATEWAY_IP: &str = "169.254.0.21";
#[derive(PartialEq)]
enum PortState {
Forward,
Delete,
}
struct PortToForward {
pid: Option<u32>,
inode: u64,
family: u32,
state: PortState,
port: u32,
}
fn family_to_ip_version(family: u32) -> u32 {
if family == libc::AF_INET as u32 {
4
} else if family == libc::AF_INET6 as u32 {
6
} else {
0
}
}
pub struct Forwarder {
cgroup_manager: Arc<dyn CgroupManager>,
ports: HashMap<String, PortToForward>,
source_ip: String,
}
impl Forwarder {
pub fn new(cgroup_manager: Arc<dyn CgroupManager>) -> Self {
Self {
cgroup_manager,
ports: HashMap::new(),
source_ip: DEFAULT_GATEWAY_IP.to_string(),
}
}
pub async fn start_forwarding(
&mut self,
mut rx: mpsc::Receiver<Vec<ConnStat>>,
cancel: CancellationToken,
) {
loop {
tokio::select! {
_ = cancel.cancelled() => {
self.stop_all();
return;
}
msg = rx.recv() => {
match msg {
Some(conns) => self.process_scan(conns),
None => {
self.stop_all();
return;
}
}
}
}
}
}
fn process_scan(&mut self, conns: Vec<ConnStat>) {
for ptf in self.ports.values_mut() {
ptf.state = PortState::Delete;
}
for conn in &conns {
let key = format!("{}-{}", conn.inode, conn.local_port);
if let Some(ptf) = self.ports.get_mut(&key) {
ptf.state = PortState::Forward;
} else {
tracing::debug!(
ip = %conn.local_ip,
port = conn.local_port,
family = family_to_ip_version(conn.family),
"detected new port on localhost"
);
let mut ptf = PortToForward {
pid: None,
inode: conn.inode,
family: family_to_ip_version(conn.family),
state: PortState::Forward,
port: conn.local_port,
};
self.start_port_forwarding(&mut ptf);
self.ports.insert(key, ptf);
}
}
let to_stop: Vec<String> = self
.ports
.iter()
.filter(|(_, v)| v.state == PortState::Delete)
.map(|(k, _)| k.clone())
.collect();
for key in to_stop {
if let Some(ptf) = self.ports.get(&key) {
stop_port_forwarding(ptf);
}
self.ports.remove(&key);
}
}
fn start_port_forwarding(&self, ptf: &mut PortToForward) {
let listen_arg = format!(
"TCP4-LISTEN:{},bind={},reuseaddr,fork",
ptf.port, self.source_ip
);
let connect_arg = format!("TCP{}:localhost:{}", ptf.family, ptf.port);
let mut cmd = Command::new("socat");
cmd.args(["-d", "-d", "-d", &listen_arg, &connect_arg]);
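// In the forked child: make socat its own process-group leader so
// stop_port_forwarding can SIGKILL the whole group (socat forks per
// connection), then move it into the dedicated cgroup through the
// inherited directory fd.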
unsafe {
let cgroup_fd = self.cgroup_manager.get_fd(ProcessType::Socat);
cmd.pre_exec(move || {
libc::setpgid(0, 0);
if let Some(fd) = cgroup_fd {
let pid_str = format!("{}", libc::getpid());
let tasks_path = format!("/proc/self/fd/{}/cgroup.procs", fd);
let _ = std::fs::write(&tasks_path, pid_str.as_bytes());
}
Ok(())
});
}
tracing::debug!(
port = ptf.port,
inode = ptf.inode,
family = ptf.family,
source_ip = %self.source_ip,
"starting port forwarding"
);
match cmd.spawn() {
Ok(child) => {
ptf.pid = Some(child.id());
std::thread::spawn(move || {
let mut child = child;
let _ = child.wait();
});
}
Err(e) => {
tracing::error!(error = %e, port = ptf.port, "failed to start socat");
}
}
}
fn stop_all(&mut self) {
for ptf in self.ports.values() {
stop_port_forwarding(ptf);
}
self.ports.clear();
}
}
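/// Kills the socat process group; the negative pid targets the group
/// leader and any per-connection children it forked.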
fn stop_port_forwarding(ptf: &PortToForward) {
if let Some(pid) = ptf.pid {
tracing::debug!(port = ptf.port, pid, "stopping port forwarding");
unsafe {
libc::kill(-(pid as i32), libc::SIGKILL);
}
}
}

envd-rs/src/port/mod.rs Normal file

@@ -0,0 +1,4 @@
pub mod conn;
pub mod forwarder;
pub mod scanner;
pub mod subsystem;


@@ -0,0 +1,81 @@
use std::sync::{Arc, RwLock};
use std::time::Duration;
use tokio::sync::mpsc;
use tokio_util::sync::CancellationToken;
use super::conn::{ConnStat, read_tcp_connections};
pub struct ScannerFilter {
pub ips: Vec<String>,
pub state: String,
}
impl ScannerFilter {
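/// An empty filter matches nothing; otherwise the connection must match
/// one of the filter IPs and the exact TCP state.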
pub fn matches(&self, conn: &ConnStat) -> bool {
if self.state.is_empty() && self.ips.is_empty() {
return false;
}
self.ips.contains(&conn.local_ip) && self.state == conn.status
}
}
pub struct ScannerSubscriber {
pub tx: mpsc::Sender<Vec<ConnStat>>,
pub filter: Option<ScannerFilter>,
}
pub struct Scanner {
period: Duration,
subs: RwLock<Vec<(String, Arc<ScannerSubscriber>)>>,
}
impl Scanner {
pub fn new(period: Duration) -> Self {
Self {
period,
subs: RwLock::new(Vec::new()),
}
}
pub fn add_subscriber(
&self,
id: &str,
filter: Option<ScannerFilter>,
) -> mpsc::Receiver<Vec<ConnStat>> {
let (tx, rx) = mpsc::channel(4);
let sub = Arc::new(ScannerSubscriber { tx, filter });
let mut subs = self.subs.write().unwrap();
subs.push((id.to_string(), sub));
rx
}
pub fn remove_subscriber(&self, id: &str) {
let mut subs = self.subs.write().unwrap();
subs.retain(|(sid, _)| sid != id);
}
pub async fn scan_and_broadcast(&self, cancel: CancellationToken) {
loop {
let conns = tokio::task::spawn_blocking(read_tcp_connections)
.await
.unwrap_or_default();
{
let subs = self.subs.read().unwrap();
for (_, sub) in subs.iter() {
let payload = match &sub.filter {
Some(f) => conns.iter().filter(|c| f.matches(c)).cloned().collect(),
None => conns.clone(),
};
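// try_send drops this scan for a slow subscriber rather than blocking
// the scan loop.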
let _ = sub.tx.try_send(payload);
}
}
tokio::select! {
_ = cancel.cancelled() => return,
_ = tokio::time::sleep(self.period) => {}
}
}
}
}


@@ -0,0 +1,78 @@
use std::sync::Arc;
use tokio_util::sync::CancellationToken;
use crate::cgroups::CgroupManager;
use crate::config::PORT_SCANNER_INTERVAL;
use super::forwarder::Forwarder;
use super::scanner::{Scanner, ScannerFilter};
pub struct PortSubsystem {
cgroup_manager: Arc<dyn CgroupManager>,
cancel: std::sync::Mutex<Option<CancellationToken>>,
}
impl PortSubsystem {
pub fn new(cgroup_manager: Arc<dyn CgroupManager>) -> Self {
Self {
cgroup_manager,
cancel: std::sync::Mutex::new(None),
}
}
pub fn start(&self) {
let mut guard = self.cancel.lock().unwrap();
if guard.is_some() {
return;
}
let cancel = CancellationToken::new();
*guard = Some(cancel.clone());
drop(guard);
let cgroup_manager = Arc::clone(&self.cgroup_manager);
let cancel_scanner = cancel.clone();
let cancel_forwarder = cancel.clone();
tokio::spawn(async move {
let scanner = Arc::new(Scanner::new(PORT_SCANNER_INTERVAL));
let rx = scanner.add_subscriber(
"port-forwarder",
Some(ScannerFilter {
ips: vec![
"127.0.0.1".to_string(),
"localhost".to_string(),
"::1".to_string(),
],
state: "LISTEN".to_string(),
}),
);
let scanner_clone = Arc::clone(&scanner);
let scanner_handle = tokio::spawn(async move {
scanner_clone.scan_and_broadcast(cancel_scanner).await;
});
let forwarder_handle = tokio::spawn(async move {
let mut forwarder = Forwarder::new(cgroup_manager);
forwarder.start_forwarding(rx, cancel_forwarder).await;
});
let _ = tokio::join!(scanner_handle, forwarder_handle);
});
}
pub fn stop(&self) {
let mut guard = self.cancel.lock().unwrap();
if let Some(cancel) = guard.take() {
cancel.cancel();
}
}
pub fn restart(&self) {
self.stop();
self.start();
}
}

envd-rs/src/rpc/entry.rs Normal file

@@ -0,0 +1,231 @@
use std::os::unix::fs::MetadataExt;
use std::path::Path;
use connectrpc::{ConnectError, ErrorCode};
use crate::permissions::user::{lookup_groupname_by_gid, lookup_username_by_uid};
use crate::rpc::pb::filesystem::{EntryInfo, FileType};
use nix::unistd::{Gid, Uid};
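// statfs(2) f_type magic numbers for network (and FUSE) filesystems, where
// inotify-based watching is unreliable; see create_watcher.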
const NFS_SUPER_MAGIC: i64 = 0x6969;
const CIFS_MAGIC: i64 = 0xFF534D42;
const SMB_SUPER_MAGIC: i64 = 0x517B;
const SMB2_MAGIC_NUMBER: i64 = 0xFE534D42;
const FUSE_SUPER_MAGIC: i64 = 0x65735546;
pub fn is_network_mount(path: &str) -> Result<bool, String> {
let c_path = std::ffi::CString::new(path).map_err(|e| e.to_string())?;
let mut stat: libc::statfs = unsafe { std::mem::zeroed() };
let ret = unsafe { libc::statfs(c_path.as_ptr(), &mut stat) };
if ret != 0 {
return Err(format!(
"statfs {path}: {}",
std::io::Error::last_os_error()
));
}
let fs_type = stat.f_type as i64;
Ok(matches!(
fs_type,
NFS_SUPER_MAGIC | CIFS_MAGIC | SMB_SUPER_MAGIC | SMB2_MAGIC_NUMBER | FUSE_SUPER_MAGIC
))
}
pub fn build_entry_info(path: &str) -> Result<EntryInfo, ConnectError> {
let p = Path::new(path);
let lstat = std::fs::symlink_metadata(p).map_err(|e| {
if e.kind() == std::io::ErrorKind::NotFound {
ConnectError::new(ErrorCode::NotFound, format!("file not found: {e}"))
} else {
ConnectError::new(ErrorCode::Internal, format!("error getting file info: {e}"))
}
})?;
let is_symlink = lstat.file_type().is_symlink();
let (file_type, mode, symlink_target) = if is_symlink {
let target = std::fs::canonicalize(p)
.map(|t| t.to_string_lossy().to_string())
.unwrap_or_else(|_| path.to_string());
let target_type = match std::fs::metadata(p) {
Ok(meta) => meta_to_file_type(&meta),
Err(_) => FileType::FILE_TYPE_UNSPECIFIED,
};
let target_mode = std::fs::metadata(p)
.map(|m| m.mode() & 0o7777)
.unwrap_or(0);
(target_type, target_mode, Some(target))
} else {
let ft = meta_to_file_type(&lstat);
let mode = lstat.mode() & 0o7777;
(ft, mode, None)
};
let uid = lstat.uid();
let gid = lstat.gid();
let owner = lookup_username_by_uid(Uid::from_raw(uid));
let group = lookup_groupname_by_gid(Gid::from_raw(gid));
let modified_time = {
let mtime_sec = lstat.mtime();
let mtime_nsec = lstat.mtime_nsec() as i32;
if mtime_sec == 0 && mtime_nsec == 0 {
None
} else {
Some(buffa_types::google::protobuf::Timestamp {
seconds: mtime_sec,
nanos: mtime_nsec,
..Default::default()
})
}
};
let name = p
.file_name()
.map(|n| n.to_string_lossy().to_string())
.unwrap_or_default();
let permissions = format_permissions(lstat.mode());
Ok(EntryInfo {
name,
r#type: buffa::EnumValue::Known(file_type),
path: path.to_string(),
size: lstat.len() as i64,
mode,
permissions,
owner,
group,
modified_time: modified_time.into(),
symlink_target,
..Default::default()
})
}
fn meta_to_file_type(meta: &std::fs::Metadata) -> FileType {
if meta.is_file() {
FileType::FILE_TYPE_FILE
} else if meta.is_dir() {
FileType::FILE_TYPE_DIRECTORY
} else if meta.file_type().is_symlink() {
FileType::FILE_TYPE_SYMLINK
} else {
FileType::FILE_TYPE_UNSPECIFIED
}
}
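/// Renders an ls-style permission string, using Go `os.FileMode` type
/// letters ('L' for symlink, 'S' for socket) rather than ls(1)'s lowercase,
/// presumably for output parity with the original Go envd.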
fn format_permissions(mode: u32) -> String {
let file_type = match mode & libc::S_IFMT {
libc::S_IFDIR => 'd',
libc::S_IFLNK => 'L',
libc::S_IFREG => '-',
libc::S_IFBLK => 'b',
libc::S_IFCHR => 'c',
libc::S_IFIFO => 'p',
libc::S_IFSOCK => 'S',
_ => '?',
};
let perms = mode & 0o777;
let mut s = String::with_capacity(10);
s.push(file_type);
for shift in [6, 3, 0] {
let bits = (perms >> shift) & 7;
s.push(if bits & 4 != 0 { 'r' } else { '-' });
s.push(if bits & 2 != 0 { 'w' } else { '-' });
s.push(if bits & 1 != 0 { 'x' } else { '-' });
}
s
}
#[cfg(test)]
mod tests {
use super::*;
// format_permissions
#[test]
fn regular_file_755() {
assert_eq!(format_permissions(libc::S_IFREG | 0o755), "-rwxr-xr-x");
}
#[test]
fn directory_755() {
assert_eq!(format_permissions(libc::S_IFDIR | 0o755), "drwxr-xr-x");
}
#[test]
fn symlink_777() {
assert_eq!(format_permissions(libc::S_IFLNK | 0o777), "Lrwxrwxrwx");
}
#[test]
fn regular_file_000() {
assert_eq!(format_permissions(libc::S_IFREG | 0o000), "----------");
}
#[test]
fn regular_file_644() {
assert_eq!(format_permissions(libc::S_IFREG | 0o644), "-rw-r--r--");
}
#[test]
fn block_device() {
assert_eq!(format_permissions(libc::S_IFBLK | 0o660), "brw-rw----");
}
#[test]
fn char_device() {
assert_eq!(format_permissions(libc::S_IFCHR | 0o666), "crw-rw-rw-");
}
#[test]
fn fifo() {
assert_eq!(format_permissions(libc::S_IFIFO | 0o644), "prw-r--r--");
}
#[test]
fn socket() {
assert_eq!(format_permissions(libc::S_IFSOCK | 0o755), "Srwxr-xr-x");
}
#[test]
fn unknown_type() {
assert_eq!(format_permissions(0o755), "?rwxr-xr-x");
}
#[test]
fn setuid_in_mode_only_affects_lower_bits() {
// setuid (0o4755) — format_permissions masks with 0o777, so same as 0o755
assert_eq!(
format_permissions(libc::S_IFREG | 0o4755),
format_permissions(libc::S_IFREG | 0o755),
);
}
#[test]
fn output_always_10_chars() {
for mode in [0o000, 0o777, 0o644, 0o755, 0o4755] {
assert_eq!(format_permissions(libc::S_IFREG | mode).len(), 10);
}
}
// meta_to_file_type — needs real filesystem
#[test]
fn meta_regular_file() {
let f = tempfile::NamedTempFile::new().unwrap();
let meta = std::fs::metadata(f.path()).unwrap();
assert_eq!(meta_to_file_type(&meta), FileType::FILE_TYPE_FILE);
}
#[test]
fn meta_directory() {
let d = tempfile::TempDir::new().unwrap();
let meta = std::fs::metadata(d.path()).unwrap();
assert_eq!(meta_to_file_type(&meta), FileType::FILE_TYPE_DIRECTORY);
}
}


@@ -0,0 +1,402 @@
use std::path::{Path, PathBuf};
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use connectrpc::{ConnectError, Context, ErrorCode};
use dashmap::DashMap;
use futures::Stream;
use crate::permissions::path::{ensure_dirs, expand_and_resolve};
use crate::permissions::user::lookup_user;
use crate::rpc::entry::build_entry_info;
use crate::rpc::pb::filesystem::*;
use crate::state::AppState;
pub struct FilesystemServiceImpl {
state: Arc<AppState>,
watchers: DashMap<String, WatcherHandle>,
}
struct WatcherHandle {
events: Arc<Mutex<Vec<FilesystemEvent>>>,
_watcher: notify::RecommendedWatcher,
}
impl FilesystemServiceImpl {
pub fn new(state: Arc<AppState>) -> Self {
Self {
state,
watchers: DashMap::new(),
}
}
fn resolve_path(&self, path: &str, ctx: &Context) -> Result<String, ConnectError> {
let username = extract_username(ctx).unwrap_or_else(|| self.state.defaults.user());
let user = lookup_user(&username).map_err(|e| {
ConnectError::new(ErrorCode::Unauthenticated, format!("invalid user: {e}"))
})?;
let home_dir = user.dir.to_string_lossy().to_string();
let default_workdir = self.state.defaults.workdir();
expand_and_resolve(path, &home_dir, default_workdir.as_deref())
.map_err(|e| ConnectError::new(ErrorCode::InvalidArgument, e))
}
}
fn extract_username(ctx: &Context) -> Option<String> {
ctx.extensions.get::<AuthUser>().map(|u| u.0.clone())
}
#[derive(Clone)]
pub struct AuthUser(pub String);
impl Filesystem for FilesystemServiceImpl {
async fn stat(
&self,
ctx: Context,
request: buffa::view::OwnedView<StatRequestView<'static>>,
) -> Result<(StatResponse, Context), ConnectError> {
let path = self.resolve_path(request.path, &ctx)?;
let entry = build_entry_info(&path)?;
Ok((
StatResponse {
entry: entry.into(),
..Default::default()
},
ctx,
))
}
async fn make_dir(
&self,
ctx: Context,
request: buffa::view::OwnedView<MakeDirRequestView<'static>>,
) -> Result<(MakeDirResponse, Context), ConnectError> {
let path = self.resolve_path(request.path, &ctx)?;
match std::fs::metadata(&path) {
Ok(meta) => {
if meta.is_dir() {
return Err(ConnectError::new(
ErrorCode::AlreadyExists,
format!("directory already exists: {path}"),
));
}
return Err(ConnectError::new(
ErrorCode::InvalidArgument,
format!("path exists but is not a directory: {path}"),
));
}
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {}
Err(e) => {
return Err(ConnectError::new(
ErrorCode::Internal,
format!("error getting file info: {e}"),
));
}
}
let username = extract_username(&ctx).unwrap_or_else(|| self.state.defaults.user());
let user =
lookup_user(&username).map_err(|e| ConnectError::new(ErrorCode::Internal, e))?;
ensure_dirs(&path, user.uid, user.gid)
.map_err(|e| ConnectError::new(ErrorCode::Internal, e))?;
let entry = build_entry_info(&path)?;
Ok((
MakeDirResponse {
entry: entry.into(),
..Default::default()
},
ctx,
))
}
async fn r#move(
&self,
ctx: Context,
request: buffa::view::OwnedView<MoveRequestView<'static>>,
) -> Result<(MoveResponse, Context), ConnectError> {
let source = self.resolve_path(request.source, &ctx)?;
let destination = self.resolve_path(request.destination, &ctx)?;
let username = extract_username(&ctx).unwrap_or_else(|| self.state.defaults.user());
let user =
lookup_user(&username).map_err(|e| ConnectError::new(ErrorCode::Internal, e))?;
if let Some(parent) = Path::new(&destination).parent() {
ensure_dirs(&parent.to_string_lossy(), user.uid, user.gid)
.map_err(|e| ConnectError::new(ErrorCode::Internal, e))?;
}
std::fs::rename(&source, &destination).map_err(|e| {
if e.kind() == std::io::ErrorKind::NotFound {
ConnectError::new(ErrorCode::NotFound, format!("source not found: {e}"))
} else {
ConnectError::new(ErrorCode::Internal, format!("error renaming: {e}"))
}
})?;
let entry = build_entry_info(&destination)?;
Ok((
MoveResponse {
entry: entry.into(),
..Default::default()
},
ctx,
))
}
async fn list_dir(
&self,
ctx: Context,
request: buffa::view::OwnedView<ListDirRequestView<'static>>,
) -> Result<(ListDirResponse, Context), ConnectError> {
let mut depth = request.depth as usize;
if depth == 0 {
depth = 1;
}
let path = self.resolve_path(request.path, &ctx)?;
let resolved = std::fs::canonicalize(&path).map_err(|e| {
if e.kind() == std::io::ErrorKind::NotFound {
ConnectError::new(ErrorCode::NotFound, format!("path not found: {e}"))
} else {
ConnectError::new(ErrorCode::Internal, format!("error resolving path: {e}"))
}
})?;
let resolved_str = resolved.to_string_lossy().to_string();
let meta = std::fs::metadata(&resolved).map_err(|e| {
ConnectError::new(ErrorCode::Internal, format!("error getting file info: {e}"))
})?;
if !meta.is_dir() {
return Err(ConnectError::new(
ErrorCode::InvalidArgument,
format!("path is not a directory: {path}"),
));
}
let entries = walk_dir(&path, &resolved_str, depth)?;
Ok((
ListDirResponse {
entries,
..Default::default()
},
ctx,
))
}
async fn remove(
&self,
ctx: Context,
request: buffa::view::OwnedView<RemoveRequestView<'static>>,
) -> Result<(RemoveResponse, Context), ConnectError> {
let path = self.resolve_path(request.path, &ctx)?;
if let Err(e1) = std::fs::remove_dir_all(&path) {
if let Err(e2) = std::fs::remove_file(&path) {
// A genuinely missing path is NotFound, not Internal.
let code = if e2.kind() == std::io::ErrorKind::NotFound {
ErrorCode::NotFound
} else {
ErrorCode::Internal
};
return Err(ConnectError::new(
code,
format!("error removing: {e1}; also tried as file: {e2}"),
));
}
}
Ok((RemoveResponse { ..Default::default() }, ctx))
}
async fn watch_dir(
&self,
_ctx: Context,
_request: buffa::view::OwnedView<WatchDirRequestView<'static>>,
) -> Result<
(
Pin<Box<dyn Stream<Item = Result<WatchDirResponse, ConnectError>> + Send>>,
Context,
),
ConnectError,
> {
Err(ConnectError::new(
ErrorCode::Unimplemented,
"watch_dir streaming not yet implemented",
))
}
async fn create_watcher(
&self,
ctx: Context,
request: buffa::view::OwnedView<CreateWatcherRequestView<'static>>,
) -> Result<(CreateWatcherResponse, Context), ConnectError> {
use notify::{RecursiveMode, Watcher};
let path = self.resolve_path(request.path, &ctx)?;
let recursive = request.recursive;
if let Ok(true) = crate::rpc::entry::is_network_mount(&path) {
return Err(ConnectError::new(
ErrorCode::FailedPrecondition,
"watching network mounts is not supported",
));
}
let watcher_id = simple_id();
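// Events accumulate in this buffer until get_watcher_events drains it;
// clients are expected to poll periodically.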
let events: Arc<Mutex<Vec<FilesystemEvent>>> = Arc::new(Mutex::new(Vec::new()));
let events_cb = Arc::clone(&events);
let mut watcher = notify::recommended_watcher(
move |res: Result<notify::Event, notify::Error>| {
if let Ok(event) = res {
let event_type = match event.kind {
notify::EventKind::Create(_) => EventType::EVENT_TYPE_CREATE,
notify::EventKind::Modify(notify::event::ModifyKind::Data(_)) => {
EventType::EVENT_TYPE_WRITE
}
notify::EventKind::Modify(notify::event::ModifyKind::Metadata(_)) => {
EventType::EVENT_TYPE_CHMOD
}
notify::EventKind::Remove(_) => EventType::EVENT_TYPE_REMOVE,
notify::EventKind::Modify(notify::event::ModifyKind::Name(_)) => {
EventType::EVENT_TYPE_RENAME
}
_ => return,
};
for p in &event.paths {
if let Ok(mut guard) = events_cb.lock() {
guard.push(FilesystemEvent {
name: p.to_string_lossy().to_string(),
r#type: buffa::EnumValue::Known(event_type),
..Default::default()
});
}
}
}
},
)
.map_err(|e| {
ConnectError::new(ErrorCode::Internal, format!("failed to create watcher: {e}"))
})?;
let mode = if recursive {
RecursiveMode::Recursive
} else {
RecursiveMode::NonRecursive
};
watcher.watch(Path::new(&path), mode).map_err(|e| {
ConnectError::new(ErrorCode::Internal, format!("failed to watch path: {e}"))
})?;
self.watchers.insert(
watcher_id.clone(),
WatcherHandle {
events,
_watcher: watcher,
},
);
Ok((
CreateWatcherResponse {
watcher_id,
..Default::default()
},
ctx,
))
}
async fn get_watcher_events(
&self,
ctx: Context,
request: buffa::view::OwnedView<GetWatcherEventsRequestView<'static>>,
) -> Result<(GetWatcherEventsResponse, Context), ConnectError> {
let watcher_id: &str = request.watcher_id;
let handle = self.watchers.get(watcher_id).ok_or_else(|| {
ConnectError::new(
ErrorCode::NotFound,
format!("watcher not found: {watcher_id}"),
)
})?;
let events = {
let mut guard = handle.events.lock().unwrap();
std::mem::take(&mut *guard)
};
Ok((
GetWatcherEventsResponse {
events,
..Default::default()
},
ctx,
))
}
async fn remove_watcher(
&self,
ctx: Context,
request: buffa::view::OwnedView<RemoveWatcherRequestView<'static>>,
) -> Result<(RemoveWatcherResponse, Context), ConnectError> {
let watcher_id: &str = request.watcher_id;
self.watchers.remove(watcher_id);
Ok((RemoveWatcherResponse { ..Default::default() }, ctx))
}
}
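/// Walks `resolved_path` up to `depth` levels, remapping each entry's path
/// back under the caller's requested path (which may differ after symlink
/// resolution). Entries that vanish mid-walk are skipped.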
fn walk_dir(
requested_path: &str,
resolved_path: &str,
depth: usize,
) -> Result<Vec<EntryInfo>, ConnectError> {
let mut entries = Vec::new();
let base = Path::new(resolved_path);
for result in walkdir::WalkDir::new(resolved_path)
.min_depth(1)
.max_depth(depth)
.follow_links(false)
{
let dir_entry = match result {
Ok(e) => e,
Err(e) => {
if e.io_error()
.is_some_and(|io| io.kind() == std::io::ErrorKind::NotFound)
{
continue;
}
return Err(ConnectError::new(
ErrorCode::Internal,
format!("error reading directory: {e}"),
));
}
};
let entry_path = dir_entry.path();
let mut entry = match build_entry_info(&entry_path.to_string_lossy()) {
Ok(e) => e,
Err(e) if e.code == ErrorCode::NotFound => continue,
Err(e) => return Err(e),
};
if let Ok(rel) = entry_path.strip_prefix(base) {
let remapped = PathBuf::from(requested_path).join(rel);
entry.path = remapped.to_string_lossy().to_string();
}
entries.push(entry);
}
Ok(entries)
}
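/// Watcher IDs only need to be unique within this envd process; a
/// nanosecond timestamp is sufficient in practice.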
fn simple_id() -> String {
use std::time::{SystemTime, UNIX_EPOCH};
let nanos = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_nanos();
format!("w-{nanos:x}")
}

envd-rs/src/rpc/mod.rs Normal file

@@ -0,0 +1,26 @@
pub mod pb;
pub mod entry;
pub mod process_handler;
pub mod process_service;
pub mod filesystem_service;
use std::sync::Arc;
use crate::rpc::process_service::ProcessServiceImpl;
use crate::rpc::filesystem_service::FilesystemServiceImpl;
use crate::state::AppState;
use pb::process::ProcessExt;
use pb::filesystem::FilesystemExt;
/// Build the connect-rust Router with both RPC services registered.
pub fn rpc_router(state: Arc<AppState>) -> connectrpc::Router {
let process_svc = Arc::new(ProcessServiceImpl::new(Arc::clone(&state)));
let filesystem_svc = Arc::new(FilesystemServiceImpl::new(Arc::clone(&state)));
let router = connectrpc::Router::new();
let router = process_svc.register(router);
let router = filesystem_svc.register(router);
router
}

envd-rs/src/rpc/pb.rs Normal file

@@ -0,0 +1,10 @@
#![allow(dead_code, non_camel_case_types, unused_imports, clippy::derivable_impls)]
use ::buffa;
use ::buffa_types;
use ::connectrpc;
use ::futures;
use ::http_body;
use ::serde;
include!(concat!(env!("OUT_DIR"), "/_connectrpc.rs"));


@@ -0,0 +1,419 @@
use std::io::Read;
use std::os::unix::process::CommandExt;
use std::process::Stdio;
use std::sync::{Arc, Mutex};
use connectrpc::{ConnectError, ErrorCode};
use nix::pty::{openpty, Winsize};
use nix::sys::signal::{self, Signal};
use nix::unistd::Pid;
use tokio::sync::broadcast;
use crate::rpc::pb::process::*;
const STD_CHUNK_SIZE: usize = 32768;
const PTY_CHUNK_SIZE: usize = 16384;
const BROADCAST_CAPACITY: usize = 4096;
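// Output fans out over broadcast channels so the Start stream and any
// number of later Connect streams can observe the same process.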
#[derive(Clone)]
pub enum DataEvent {
Stdout(Vec<u8>),
Stderr(Vec<u8>),
Pty(Vec<u8>),
}
#[derive(Clone)]
pub struct EndEvent {
pub exit_code: i32,
pub exited: bool,
pub status: String,
pub error: Option<String>,
}
pub struct ProcessHandle {
pub config: ProcessConfig,
pub tag: Option<String>,
pub pid: u32,
data_tx: broadcast::Sender<DataEvent>,
end_tx: broadcast::Sender<EndEvent>,
ended: Mutex<Option<EndEvent>>,
stdin: Mutex<Option<std::process::ChildStdin>>,
pty_master: Mutex<Option<std::fs::File>>,
}
impl ProcessHandle {
pub fn subscribe_data(&self) -> broadcast::Receiver<DataEvent> {
self.data_tx.subscribe()
}
pub fn subscribe_end(&self) -> broadcast::Receiver<EndEvent> {
self.end_tx.subscribe()
}
pub fn cached_end(&self) -> Option<EndEvent> {
self.ended.lock().unwrap().clone()
}
pub fn send_signal(&self, sig: Signal) -> Result<(), ConnectError> {
signal::kill(Pid::from_raw(self.pid as i32), sig).map_err(|e| {
ConnectError::new(ErrorCode::Internal, format!("error sending signal: {e}"))
})
}
pub fn write_stdin(&self, data: &[u8]) -> Result<(), ConnectError> {
use std::io::Write;
let mut guard = self.stdin.lock().unwrap();
match guard.as_mut() {
Some(stdin) => stdin.write_all(data).map_err(|e| {
ConnectError::new(ErrorCode::Internal, format!("error writing to stdin: {e}"))
}),
None => Err(ConnectError::new(
ErrorCode::FailedPrecondition,
"stdin not enabled or closed",
)),
}
}
pub fn write_pty(&self, data: &[u8]) -> Result<(), ConnectError> {
use std::io::Write;
let mut guard = self.pty_master.lock().unwrap();
match guard.as_mut() {
Some(master) => master.write_all(data).map_err(|e| {
ConnectError::new(ErrorCode::Internal, format!("error writing to pty: {e}"))
}),
None => Err(ConnectError::new(
ErrorCode::FailedPrecondition,
"pty not assigned to process",
)),
}
}
pub fn close_stdin(&self) -> Result<(), ConnectError> {
if self.pty_master.lock().unwrap().is_some() {
return Err(ConnectError::new(
ErrorCode::FailedPrecondition,
"cannot close stdin for PTY process — send Ctrl+D (0x04) instead",
));
}
let mut guard = self.stdin.lock().unwrap();
*guard = None;
Ok(())
}
pub fn resize_pty(&self, cols: u16, rows: u16) -> Result<(), ConnectError> {
let guard = self.pty_master.lock().unwrap();
match guard.as_ref() {
Some(master) => {
use std::os::unix::io::AsRawFd;
let ws = libc::winsize {
ws_row: rows,
ws_col: cols,
ws_xpixel: 0,
ws_ypixel: 0,
};
let ret = unsafe { libc::ioctl(master.as_raw_fd(), libc::TIOCSWINSZ, &ws) };
if ret != 0 {
return Err(ConnectError::new(
ErrorCode::Internal,
format!(
"ioctl TIOCSWINSZ failed: {}",
std::io::Error::last_os_error()
),
));
}
Ok(())
}
None => Err(ConnectError::new(
ErrorCode::FailedPrecondition,
"tty not assigned to process",
)),
}
}
}
pub struct SpawnedProcess {
pub handle: Arc<ProcessHandle>,
pub data_rx: broadcast::Receiver<DataEvent>,
pub end_rx: broadcast::Receiver<EndEvent>,
}
pub fn spawn_process(
cmd_str: &str,
args: &[String],
envs: &std::collections::HashMap<String, String>,
cwd: &str,
pty_opts: Option<(u16, u16)>,
enable_stdin: bool,
tag: Option<String>,
user: &nix::unistd::User,
default_env_vars: &dashmap::DashMap<String, String>,
) -> Result<SpawnedProcess, ConnectError> {
let mut env: Vec<(String, String)> = Vec::new();
env.push(("PATH".into(), std::env::var("PATH").unwrap_or_default()));
let home = user.dir.to_string_lossy().to_string();
env.push(("HOME".into(), home));
env.push(("USER".into(), user.name.clone()));
env.push(("LOGNAME".into(), user.name.clone()));
default_env_vars.iter().for_each(|entry| {
env.push((entry.key().clone(), entry.value().clone()));
});
for (k, v) in envs {
env.push((k.clone(), v.clone()));
}
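// Wrap the command in a shell that (1) marks it a preferred OOM-kill
// target so the kernel reaps workloads before envd itself, and (2)
// re-nices it back to 0 in case envd runs at a non-default priority.
// $$ is the wrapper shell's PID, which exec then replaces in place.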
let nice_delta = -current_nice();
let oom_script = format!(
r#"echo 100 > /proc/$$/oom_score_adj && exec /usr/bin/nice -n {} "${{@}}""#,
nice_delta
);
let mut wrapper_args = vec![
"-c".to_string(),
oom_script,
"--".to_string(),
cmd_str.to_string(),
];
wrapper_args.extend_from_slice(args);
let uid = user.uid.as_raw();
let gid = user.gid.as_raw();
let (data_tx, _) = broadcast::channel(BROADCAST_CAPACITY);
let (end_tx, _) = broadcast::channel(16);
let config = ProcessConfig {
cmd: cmd_str.to_string(),
args: args.to_vec(),
envs: envs.clone(),
cwd: Some(cwd.to_string()),
..Default::default()
};
if let Some((cols, rows)) = pty_opts {
let pty_result = openpty(
Some(&Winsize {
ws_row: rows,
ws_col: cols,
ws_xpixel: 0,
ws_ypixel: 0,
}),
None,
)
.map_err(|e| ConnectError::new(ErrorCode::Internal, format!("openpty failed: {e}")))?;
let master_fd = pty_result.master;
let slave_fd = pty_result.slave;
let mut command = std::process::Command::new("/bin/sh");
command
.args(&wrapper_args)
.env_clear()
.envs(env.iter().map(|(k, v)| (k.as_str(), v.as_str())))
.current_dir(cwd);
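// pre_exec runs in the forked child after std's stdio setup, so the dup2
// calls below override the Stdio::null placeholders: new session, PTY
// slave as controlling terminal and stdio, then privileges dropped.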
unsafe {
use std::os::unix::io::AsRawFd;
let slave_raw = slave_fd.as_raw_fd();
let master_raw = master_fd.as_raw_fd();
command.pre_exec(move || {
libc::close(master_raw);
nix::unistd::setsid()
.map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))?;
libc::ioctl(slave_raw, libc::TIOCSCTTY, 0);
libc::dup2(slave_raw, 0);
libc::dup2(slave_raw, 1);
libc::dup2(slave_raw, 2);
if slave_raw > 2 {
libc::close(slave_raw);
}
// Drop privileges, failing the spawn if either call is refused
// rather than running the child with envd's credentials.
if libc::setgid(gid) != 0 || libc::setuid(uid) != 0 {
return Err(std::io::Error::last_os_error());
}
Ok(())
});
}
command.stdin(Stdio::null());
command.stdout(Stdio::null());
command.stderr(Stdio::null());
let child = command.spawn().map_err(|e| {
ConnectError::new(ErrorCode::Internal, format!("error starting pty process: {e}"))
})?;
drop(slave_fd);
let pid = child.id();
let master_file: std::fs::File = master_fd.into();
let master_clone = master_file.try_clone().unwrap();
let handle = Arc::new(ProcessHandle {
config,
tag,
pid,
data_tx: data_tx.clone(),
end_tx: end_tx.clone(),
ended: Mutex::new(None),
stdin: Mutex::new(None),
pty_master: Mutex::new(Some(master_file)),
});
let data_rx = handle.subscribe_data();
let end_rx = handle.subscribe_end();
let data_tx_clone = data_tx.clone();
std::thread::spawn(move || {
let mut master = master_clone;
let mut buf = vec![0u8; PTY_CHUNK_SIZE];
loop {
match master.read(&mut buf) {
Ok(0) => break,
Ok(n) => {
let _ = data_tx_clone.send(DataEvent::Pty(buf[..n].to_vec()));
}
Err(_) => break,
}
}
});
let end_tx_clone = end_tx.clone();
let handle_for_waiter = Arc::clone(&handle);
std::thread::spawn(move || {
let mut child = child;
let end_event = match child.wait() {
Ok(s) => EndEvent {
exit_code: s.code().unwrap_or(-1),
exited: s.code().is_some(),
status: format!("{s}"),
error: None,
},
Err(e) => EndEvent {
exit_code: -1,
exited: false,
status: "error".into(),
error: Some(e.to_string()),
},
};
*handle_for_waiter.ended.lock().unwrap() = Some(end_event.clone());
let _ = end_tx_clone.send(end_event);
});
tracing::info!(pid, cmd = cmd_str, "process started (pty)");
Ok(SpawnedProcess { handle, data_rx, end_rx })
} else {
let mut command = std::process::Command::new("/bin/sh");
command
.args(&wrapper_args)
.env_clear()
.envs(env.iter().map(|(k, v)| (k.as_str(), v.as_str())))
.current_dir(cwd)
.stdout(Stdio::piped())
.stderr(Stdio::piped());
if enable_stdin {
command.stdin(Stdio::piped());
} else {
command.stdin(Stdio::null());
}
unsafe {
command.pre_exec(move || {
// Drop privileges, failing the spawn if either call is refused
// rather than running the child with envd's credentials.
if libc::setgid(gid) != 0 || libc::setuid(uid) != 0 {
return Err(std::io::Error::last_os_error());
}
Ok(())
});
}
let mut child = command.spawn().map_err(|e| {
ConnectError::new(ErrorCode::Internal, format!("error starting process: {e}"))
})?;
let pid = child.id();
let stdin = child.stdin.take();
let stdout = child.stdout.take();
let stderr = child.stderr.take();
let handle = Arc::new(ProcessHandle {
config,
tag,
pid,
data_tx: data_tx.clone(),
end_tx: end_tx.clone(),
ended: Mutex::new(None),
stdin: Mutex::new(stdin),
pty_master: Mutex::new(None),
});
let data_rx = handle.subscribe_data();
let end_rx = handle.subscribe_end();
if let Some(mut out) = stdout {
let tx = data_tx.clone();
std::thread::spawn(move || {
let mut buf = vec![0u8; STD_CHUNK_SIZE];
loop {
match out.read(&mut buf) {
Ok(0) => break,
Ok(n) => {
let _ = tx.send(DataEvent::Stdout(buf[..n].to_vec()));
}
Err(_) => break,
}
}
});
}
if let Some(mut err_pipe) = stderr {
let tx = data_tx.clone();
std::thread::spawn(move || {
let mut buf = vec![0u8; STD_CHUNK_SIZE];
loop {
match err_pipe.read(&mut buf) {
Ok(0) => break,
Ok(n) => {
let _ = tx.send(DataEvent::Stderr(buf[..n].to_vec()));
}
Err(_) => break,
}
}
});
}
let end_tx_clone = end_tx.clone();
let handle_for_waiter = Arc::clone(&handle);
std::thread::spawn(move || {
let end_event = match child.wait() {
Ok(s) => EndEvent {
exit_code: s.code().unwrap_or(-1),
exited: s.code().is_some(),
status: format!("{s}"),
error: None,
},
Err(e) => EndEvent {
exit_code: -1,
exited: false,
status: "error".into(),
error: Some(e.to_string()),
},
};
*handle_for_waiter.ended.lock().unwrap() = Some(end_event.clone());
let _ = end_tx_clone.send(end_event);
});
tracing::info!(pid, cmd = cmd_str, "process started (pipe)");
Ok(SpawnedProcess { handle, data_rx, end_rx })
}
}
fn current_nice() -> i32 {
// The libc wrapper for getpriority(2) already translates the raw syscall
// result back to the nice value, so return it directly. Since -1 is a
// valid nice value, errno must be cleared and checked to detect errors.
unsafe {
*libc::__errno_location() = 0;
let prio = libc::getpriority(libc::PRIO_PROCESS, 0);
if *libc::__errno_location() != 0 {
return 0;
}
prio
}
}


@@ -0,0 +1,481 @@
use std::collections::HashMap;
use std::pin::Pin;
use std::sync::Arc;
use connectrpc::{ConnectError, Context, ErrorCode};
use dashmap::DashMap;
use futures::Stream;
use crate::permissions::path::expand_and_resolve;
use crate::permissions::user::lookup_user;
use crate::rpc::pb::process::*;
use crate::rpc::process_handler::{self, DataEvent, ProcessHandle};
use crate::state::AppState;
pub struct ProcessServiceImpl {
state: Arc<AppState>,
// Arc so the per-process cleanup task shares this map: DashMap::clone
// deep-copies the contents, so a bare clone would never see removals.
processes: Arc<DashMap<u32, Arc<ProcessHandle>>>,
}
impl ProcessServiceImpl {
pub fn new(state: Arc<AppState>) -> Self {
Self {
state,
processes: Arc::new(DashMap::new()),
}
}
fn get_process_by_selector(
&self,
selector: &ProcessSelectorView,
) -> Result<Arc<ProcessHandle>, ConnectError> {
match &selector.selector {
Some(process_selector::SelectorView::Pid(pid)) => {
let pid_val = *pid;
self.processes
.get(&pid_val)
.map(|entry| Arc::clone(entry.value()))
.ok_or_else(|| {
ConnectError::new(
ErrorCode::NotFound,
format!("process with pid {pid_val} not found"),
)
})
}
Some(process_selector::SelectorView::Tag(tag)) => {
let tag_str: &str = tag;
for entry in self.processes.iter() {
if let Some(ref t) = entry.value().tag {
if t == tag_str {
return Ok(Arc::clone(entry.value()));
}
}
}
Err(ConnectError::new(
ErrorCode::NotFound,
format!("process with tag {tag_str} not found"),
))
}
None => Err(ConnectError::new(
ErrorCode::InvalidArgument,
"process selector required",
)),
}
}
fn spawn_from_request(
&self,
request: &StartRequestView<'_>,
) -> Result<process_handler::SpawnedProcess, ConnectError> {
let proc_config = request.process.as_option().ok_or_else(|| {
ConnectError::new(ErrorCode::InvalidArgument, "process config required")
})?;
let username = self.state.defaults.user();
let user =
lookup_user(&username).map_err(|e| ConnectError::new(ErrorCode::Internal, e))?;
let cmd: &str = proc_config.cmd;
let args: Vec<String> = proc_config.args.iter().map(|s| s.to_string()).collect();
let envs: HashMap<String, String> = proc_config
.envs
.iter()
.map(|(k, v)| (k.to_string(), v.to_string()))
.collect();
let home_dir = user.dir.to_string_lossy().to_string();
let cwd_str: &str = proc_config.cwd.unwrap_or("");
let default_workdir = self.state.defaults.workdir();
let cwd = expand_and_resolve(cwd_str, &home_dir, default_workdir.as_deref())
.map_err(|e| ConnectError::new(ErrorCode::InvalidArgument, e))?;
let effective_cwd = if cwd.is_empty() { "/" } else { &cwd };
if std::fs::metadata(effective_cwd).is_err() {
return Err(ConnectError::new(
ErrorCode::InvalidArgument,
format!("cwd '{effective_cwd}' does not exist"),
));
}
let pty_opts = request.pty.as_option().and_then(|pty| {
pty.size
.as_option()
.map(|sz| (sz.cols as u16, sz.rows as u16))
});
let enable_stdin = request.stdin.unwrap_or(true);
let tag = request.tag.map(|s| s.to_string());
tracing::info!(
cmd = cmd,
has_pty = pty_opts.is_some(),
pty_size = ?pty_opts,
tag = ?tag,
stdin = enable_stdin,
cwd = effective_cwd,
user = %username,
"process.Start request"
);
let spawned = process_handler::spawn_process(
cmd,
&args,
&envs,
effective_cwd,
pty_opts,
enable_stdin,
tag,
&user,
&self.state.defaults.env_vars,
)?;
self.processes.insert(spawned.handle.pid, Arc::clone(&spawned.handle));
let processes = Arc::clone(&self.processes);
let pid = spawned.handle.pid;
let mut cleanup_end_rx = spawned.handle.subscribe_end();
tokio::spawn(async move {
let _ = cleanup_end_rx.recv().await;
processes.remove(&pid);
});
Ok(spawned)
}
}
impl Process for ProcessServiceImpl {
async fn list(
&self,
ctx: Context,
_request: buffa::view::OwnedView<ListRequestView<'static>>,
) -> Result<(ListResponse, Context), ConnectError> {
let processes: Vec<ProcessInfo> = self
.processes
.iter()
.map(|entry| {
let h = entry.value();
ProcessInfo {
config: buffa::MessageField::some(h.config.clone()),
pid: h.pid,
tag: h.tag.clone(),
..Default::default()
}
})
.collect();
Ok((
ListResponse {
processes,
..Default::default()
},
ctx,
))
}
async fn start(
&self,
ctx: Context,
request: buffa::view::OwnedView<StartRequestView<'static>>,
) -> Result<
(
Pin<Box<dyn Stream<Item = Result<StartResponse, ConnectError>> + Send>>,
Context,
),
ConnectError,
> {
let spawned = self.spawn_from_request(&request)?;
let pid = spawned.handle.pid;
let mut data_rx = spawned.data_rx;
let mut end_rx = spawned.end_rx;
let stream = async_stream::stream! {
yield Ok(make_start_response(pid));
loop {
tokio::select! {
biased;
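// Prefer draining data over the end event; on exit the end arm below
// flushes any remaining buffered output before reporting the end.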
data = data_rx.recv() => {
match data {
Ok(ev) => yield Ok(make_data_start_response(ev)),
Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => continue,
Err(tokio::sync::broadcast::error::RecvError::Closed) => break,
}
}
end = end_rx.recv() => {
while let Ok(ev) = data_rx.try_recv() {
yield Ok(make_data_start_response(ev));
}
if let Ok(end) = end {
yield Ok(make_end_start_response(end));
}
break;
}
}
}
};
Ok((Box::pin(stream), ctx))
}
async fn connect(
&self,
ctx: Context,
request: buffa::view::OwnedView<ConnectRequestView<'static>>,
) -> Result<
(
Pin<Box<dyn Stream<Item = Result<ConnectResponse, ConnectError>> + Send>>,
Context,
),
ConnectError,
> {
let selector = request.process.as_option().ok_or_else(|| {
ConnectError::new(ErrorCode::InvalidArgument, "process selector required")
})?;
let handle = self.get_process_by_selector(selector)?;
let pid = handle.pid;
let mut data_rx = handle.subscribe_data();
let mut end_rx = handle.subscribe_end();
let cached_end = handle.cached_end();
let stream = async_stream::stream! {
yield Ok(ConnectResponse {
event: buffa::MessageField::some(ProcessEvent {
event: Some(process_event::Event::Start(Box::new(
process_event::StartEvent { pid, ..Default::default() },
))),
..Default::default()
}),
..Default::default()
});
if let Some(end) = cached_end {
yield Ok(ConnectResponse {
event: buffa::MessageField::some(make_end_event(end)),
..Default::default()
});
} else {
loop {
tokio::select! {
biased;
data = data_rx.recv() => {
match data {
Ok(ev) => {
yield Ok(ConnectResponse {
event: buffa::MessageField::some(make_data_event(ev)),
..Default::default()
});
}
Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => continue,
Err(tokio::sync::broadcast::error::RecvError::Closed) => break,
}
}
end = end_rx.recv() => {
while let Ok(ev) = data_rx.try_recv() {
yield Ok(ConnectResponse {
event: buffa::MessageField::some(make_data_event(ev)),
..Default::default()
});
}
if let Ok(end) = end {
yield Ok(ConnectResponse {
event: buffa::MessageField::some(make_end_event(end)),
..Default::default()
});
}
break;
}
}
}
}
};
Ok((Box::pin(stream), ctx))
}
async fn update(
&self,
ctx: Context,
request: buffa::view::OwnedView<UpdateRequestView<'static>>,
) -> Result<(UpdateResponse, Context), ConnectError> {
let selector = request.process.as_option().ok_or_else(|| {
ConnectError::new(ErrorCode::InvalidArgument, "process selector required")
})?;
let handle = self.get_process_by_selector(selector)?;
if let Some(pty) = request.pty.as_option() {
if let Some(size) = pty.size.as_option() {
handle.resize_pty(size.cols as u16, size.rows as u16)?;
}
}
Ok((UpdateResponse { ..Default::default() }, ctx))
}
async fn stream_input(
&self,
ctx: Context,
mut requests: Pin<
Box<
dyn Stream<
Item = Result<
buffa::view::OwnedView<StreamInputRequestView<'static>>,
ConnectError,
>,
> + Send,
>,
>,
) -> Result<(StreamInputResponse, Context), ConnectError> {
use futures::StreamExt;
let mut handle: Option<Arc<ProcessHandle>> = None;
while let Some(result) = requests.next().await {
let req = result?;
match &req.event {
Some(stream_input_request::EventView::Start(start)) => {
if let Some(selector) = start.process.as_option() {
handle = Some(self.get_process_by_selector(selector)?);
}
}
Some(stream_input_request::EventView::Data(data)) => {
let h = handle.as_ref().ok_or_else(|| {
ConnectError::new(ErrorCode::FailedPrecondition, "no start event received")
})?;
if let Some(input) = data.input.as_option() {
write_input(h, input)?;
}
}
Some(stream_input_request::EventView::Keepalive(_)) => {}
None => {}
}
}
Ok((StreamInputResponse { ..Default::default() }, ctx))
}
async fn send_input(
&self,
ctx: Context,
request: buffa::view::OwnedView<SendInputRequestView<'static>>,
) -> Result<(SendInputResponse, Context), ConnectError> {
let selector = request.process.as_option().ok_or_else(|| {
ConnectError::new(ErrorCode::InvalidArgument, "process selector required")
})?;
let handle = self.get_process_by_selector(selector)?;
if let Some(input) = request.input.as_option() {
write_input(&handle, input)?;
}
Ok((SendInputResponse { ..Default::default() }, ctx))
}
async fn send_signal(
&self,
ctx: Context,
request: buffa::view::OwnedView<SendSignalRequestView<'static>>,
) -> Result<(SendSignalResponse, Context), ConnectError> {
let selector = request.process.as_option().ok_or_else(|| {
ConnectError::new(ErrorCode::InvalidArgument, "process selector required")
})?;
let handle = self.get_process_by_selector(selector)?;
let sig = match request.signal.as_known() {
Some(Signal::SIGNAL_SIGKILL) => nix::sys::signal::Signal::SIGKILL,
Some(Signal::SIGNAL_SIGTERM) => nix::sys::signal::Signal::SIGTERM,
_ => {
return Err(ConnectError::new(
ErrorCode::InvalidArgument,
"invalid or unspecified signal",
))
}
};
handle.send_signal(sig)?;
Ok((SendSignalResponse { ..Default::default() }, ctx))
}
async fn close_stdin(
&self,
ctx: Context,
request: buffa::view::OwnedView<CloseStdinRequestView<'static>>,
) -> Result<(CloseStdinResponse, Context), ConnectError> {
let selector = request.process.as_option().ok_or_else(|| {
ConnectError::new(ErrorCode::InvalidArgument, "process selector required")
})?;
let handle = self.get_process_by_selector(selector)?;
handle.close_stdin()?;
Ok((CloseStdinResponse { ..Default::default() }, ctx))
}
}
fn write_input(handle: &ProcessHandle, input: &ProcessInputView) -> Result<(), ConnectError> {
match &input.input {
Some(process_input::InputView::Pty(d)) => handle.write_pty(d),
Some(process_input::InputView::Stdin(d)) => handle.write_stdin(d),
None => Ok(()),
}
}
fn make_start_response(pid: u32) -> StartResponse {
StartResponse {
event: buffa::MessageField::some(ProcessEvent {
event: Some(process_event::Event::Start(Box::new(
process_event::StartEvent {
pid,
..Default::default()
},
))),
..Default::default()
}),
..Default::default()
}
}
fn make_data_event(ev: DataEvent) -> ProcessEvent {
let output = match ev {
DataEvent::Stdout(d) => Some(process_event::data_event::Output::Stdout(d.into())),
DataEvent::Stderr(d) => Some(process_event::data_event::Output::Stderr(d.into())),
DataEvent::Pty(d) => Some(process_event::data_event::Output::Pty(d.into())),
};
ProcessEvent {
event: Some(process_event::Event::Data(Box::new(
process_event::DataEvent {
output,
..Default::default()
},
))),
..Default::default()
}
}
fn make_data_start_response(ev: DataEvent) -> StartResponse {
StartResponse {
event: buffa::MessageField::some(make_data_event(ev)),
..Default::default()
}
}
fn make_end_event(end: process_handler::EndEvent) -> ProcessEvent {
ProcessEvent {
event: Some(process_event::Event::End(Box::new(
process_event::EndEvent {
exit_code: end.exit_code,
exited: end.exited,
status: end.status,
error: end.error,
..Default::default()
},
))),
..Default::default()
}
}
fn make_end_start_response(end: process_handler::EndEvent) -> StartResponse {
StartResponse {
event: buffa::MessageField::some(make_end_event(end)),
..Default::default()
}
}

envd-rs/src/state.rs Normal file

@@ -0,0 +1,141 @@
use std::sync::atomic::{AtomicBool, AtomicU32, AtomicU64, Ordering};
use std::sync::Arc;
use std::time::{SystemTime, UNIX_EPOCH};
use crate::auth::token::SecureToken;
use crate::conntracker::ConnTracker;
use crate::execcontext::Defaults;
use crate::port::subsystem::PortSubsystem;
use crate::util::AtomicMax;
pub struct AppState {
pub defaults: Defaults,
pub version: String,
pub commit: String,
pub needs_restore: AtomicBool,
pub last_set_time: AtomicMax,
pub access_token: SecureToken,
pub conn_tracker: ConnTracker,
pub port_subsystem: Option<Arc<PortSubsystem>>,
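// CPU usage stored as raw f32 bits in an AtomicU32 (std has no atomic
// float); read back via cpu_used_pct().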
pub cpu_used_pct: AtomicU32,
pub cpu_count: AtomicU32,
pub snapshot_in_progress: AtomicBool,
pub last_health_epoch: AtomicU64,
pub restore_epoch: AtomicU64,
}
impl AppState {
pub fn new(
defaults: Defaults,
version: String,
commit: String,
port_subsystem: Option<Arc<PortSubsystem>>,
) -> Arc<Self> {
let state = Arc::new(Self {
defaults,
version,
commit,
needs_restore: AtomicBool::new(false),
last_set_time: AtomicMax::new(),
access_token: SecureToken::new(),
conn_tracker: ConnTracker::new(),
port_subsystem,
cpu_used_pct: AtomicU32::new(0),
cpu_count: AtomicU32::new(0),
snapshot_in_progress: AtomicBool::new(false),
last_health_epoch: AtomicU64::new(0),
restore_epoch: AtomicU64::new(0),
});
let state_clone = Arc::clone(&state);
std::thread::spawn(move || {
cpu_sampler(state_clone);
});
state
}
pub fn cpu_used_pct(&self) -> f32 {
f32::from_bits(self.cpu_used_pct.load(Ordering::Relaxed))
}
pub fn cpu_count(&self) -> u32 {
self.cpu_count.load(Ordering::Relaxed)
}
/// Runs post-restore recovery if `needs_restore` is set OR a wall-clock
/// gap is detected (catches restores where snapshot/prepare never ran).
pub fn try_restore_recovery(&self) {
let now_epoch = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
let prev_epoch = self.last_health_epoch.swap(now_epoch, Ordering::AcqRel);
// Detect restore via wall-clock gap: if >3s passed since last health
// check, the VM was frozen and restored. Catches the case where
// snapshot/prepare timed out and needs_restore was never set.
let gap_detected = prev_epoch > 0 && now_epoch.saturating_sub(prev_epoch) > 3;
let flag_set = self
.needs_restore
.compare_exchange(true, false, Ordering::AcqRel, Ordering::Relaxed)
.is_ok();
if !flag_set && !gap_detected {
return;
}
if gap_detected && !flag_set {
tracing::info!(
gap_secs = now_epoch.saturating_sub(prev_epoch),
"restore: detected via wall-clock gap (needs_restore was not set)"
);
}
tracing::info!("restore: post-restore recovery");
self.snapshot_in_progress.store(false, Ordering::Release);
self.restore_epoch.store(now_epoch, Ordering::Release);
self.conn_tracker.restore_after_snapshot();
if let Some(ref ps) = self.port_subsystem {
ps.restart();
tracing::info!("restore: port subsystem restarted");
}
}
}
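/// Background sampler loop: refreshes sysinfo once per second and publishes
/// the global CPU usage and core count into `AppState`'s atomics.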
fn cpu_sampler(state: Arc<AppState>) {
use sysinfo::System;
let mut sys = System::new();
sys.refresh_cpu_all();
loop {
std::thread::sleep(std::time::Duration::from_secs(1));
if state.needs_restore.load(Ordering::Acquire) {
// After snapshot restore, sysinfo's internal CPU counters are stale.
// Reinitialize to get a fresh baseline.
sys = System::new();
sys.refresh_cpu_all();
continue;
}
sys.refresh_cpu_all();
let pct = sys.global_cpu_usage();
let rounded = if pct > 0.0 {
(pct * 100.0).round() / 100.0
} else {
0.0
};
state
.cpu_used_pct
.store(rounded.to_bits(), Ordering::Relaxed);
state
.cpu_count
.store(sys.cpus().len() as u32, Ordering::Relaxed);
}
}

envd-rs/src/util.rs Normal file

@@ -0,0 +1,102 @@
use std::sync::atomic::{AtomicI64, Ordering};
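/// A lock-free, monotonically non-decreasing `i64`: writers race through a
/// CAS loop and only a strictly greater value ever replaces the current one.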
pub struct AtomicMax {
val: AtomicI64,
}
impl AtomicMax {
pub fn new() -> Self {
Self {
val: AtomicI64::new(i64::MIN),
}
}
pub fn get(&self) -> i64 {
self.val.load(Ordering::Acquire)
}
/// Sets the stored value to `new` if `new` is strictly greater than
/// the current value. Returns `true` if the value was updated.
pub fn set_to_greater(&self, new: i64) -> bool {
loop {
let current = self.val.load(Ordering::Acquire);
if new <= current {
return false;
}
match self.val.compare_exchange_weak(
current,
new,
Ordering::Release,
Ordering::Relaxed,
) {
Ok(_) => return true,
Err(_) => continue,
}
}
}
}
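// Minimal usage sketch (names and values are illustrative, not from the
// codebase):
//
// let last_seen = AtomicMax::new();
// last_seen.set_to_greater(1_700_000_000); // e.g. a unix timestamp
// assert_eq!(last_seen.get(), 1_700_000_000);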
#[cfg(test)]
mod tests {
use super::*;
use std::sync::Arc;
#[test]
fn initial_value_is_i64_min() {
let m = AtomicMax::new();
assert_eq!(m.get(), i64::MIN);
}
#[test]
fn updates_when_larger() {
let m = AtomicMax::new();
assert!(m.set_to_greater(0));
assert_eq!(m.get(), 0);
assert!(m.set_to_greater(100));
assert_eq!(m.get(), 100);
}
#[test]
fn returns_false_when_equal() {
let m = AtomicMax::new();
m.set_to_greater(42);
assert!(!m.set_to_greater(42));
assert_eq!(m.get(), 42);
}
#[test]
fn returns_false_when_smaller() {
let m = AtomicMax::new();
m.set_to_greater(100);
assert!(!m.set_to_greater(50));
assert_eq!(m.get(), 100);
}
#[test]
fn concurrent_convergence() {
let m = Arc::new(AtomicMax::new());
let threads: Vec<_> = (0..8)
.map(|t| {
let m = Arc::clone(&m);
std::thread::spawn(move || {
for i in (t * 100)..((t + 1) * 100) {
m.set_to_greater(i);
}
})
})
.collect();
for t in threads {
t.join().unwrap();
}
assert_eq!(m.get(), 799);
}
#[test]
fn i64_max_boundary() {
let m = AtomicMax::new();
assert!(m.set_to_greater(i64::MAX));
assert!(!m.set_to_greater(i64::MAX));
assert!(!m.set_to_greater(0));
assert_eq!(m.get(), i64::MAX);
}
}


@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2023 FoundryLabs, Inc.
Modifications Copyright (c) 2026 M/S Omukk, Bangladesh
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,62 +0,0 @@
BUILD := $(shell git rev-parse --short HEAD 2>/dev/null || echo "unknown")
LDFLAGS := -s -w -X=main.commitSHA=$(BUILD)
BUILDS := ../builds
# ═══════════════════════════════════════════════════
# Build
# ═══════════════════════════════════════════════════
.PHONY: build build-debug
build:
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="$(LDFLAGS)" -o $(BUILDS)/envd .
@file $(BUILDS)/envd | grep -q "statically linked" || \
(echo "ERROR: envd is not statically linked!" && exit 1)
build-debug:
CGO_ENABLED=1 go build -race -gcflags=all="-N -l" -ldflags="-X=main.commitSHA=$(BUILD)" -o $(BUILDS)/debug/envd .
# ═══════════════════════════════════════════════════
# Run (debug mode, not inside a VM)
# ═══════════════════════════════════════════════════
.PHONY: run-debug
run-debug: build-debug
$(BUILDS)/debug/envd -isnotfc -port 49983
# ═══════════════════════════════════════════════════
# Code Generation
# ═══════════════════════════════════════════════════
.PHONY: generate proto openapi
generate: proto openapi
proto:
cd spec && buf generate --template buf.gen.yaml
openapi:
go generate ./internal/api/...
# ═══════════════════════════════════════════════════
# Quality
# ═══════════════════════════════════════════════════
.PHONY: fmt vet test tidy
fmt:
gofmt -w .
vet:
go vet ./...
test:
go test -race -v ./...
tidy:
go mod tidy
# ═══════════════════════════════════════════════════
# Clean
# ═══════════════════════════════════════════════════
.PHONY: clean
clean:
rm -f $(BUILDS)/envd $(BUILDS)/debug/envd


@@ -1 +0,0 @@
0.1.0


@@ -1,42 +0,0 @@
module git.omukk.dev/wrenn/sandbox/envd
go 1.25.8
require (
connectrpc.com/authn v0.1.0
connectrpc.com/connect v1.19.1
connectrpc.com/cors v0.1.0
github.com/awnumar/memguard v0.23.0
github.com/creack/pty v1.1.24
github.com/dchest/uniuri v1.2.0
github.com/e2b-dev/fsnotify v0.0.1
github.com/go-chi/chi/v5 v5.2.5
github.com/google/uuid v1.6.0
github.com/oapi-codegen/runtime v1.2.0
github.com/orcaman/concurrent-map/v2 v2.0.1
github.com/rs/cors v1.11.1
github.com/rs/zerolog v1.34.0
github.com/shirou/gopsutil/v4 v4.26.2
github.com/stretchr/testify v1.11.1
github.com/txn2/txeh v1.8.0
golang.org/x/sys v0.43.0
google.golang.org/protobuf v1.36.11
)
require (
github.com/apapsch/go-jsonmerge/v2 v2.0.0 // indirect
github.com/awnumar/memcall v0.4.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/ebitengine/purego v0.10.0 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect
github.com/tklauser/go-sysconf v0.3.16 // indirect
github.com/tklauser/numcpus v0.11.0 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
golang.org/x/crypto v0.50.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)


@@ -1,92 +0,0 @@
connectrpc.com/authn v0.1.0 h1:m5weACjLWwgwcjttvUDyTPICJKw74+p2obBVrf8hT9E=
connectrpc.com/authn v0.1.0/go.mod h1:AwNZK/KYbqaJzRYadTuAaoz6sYQSPdORPqh1TOPIkgY=
connectrpc.com/connect v1.19.1 h1:R5M57z05+90EfEvCY1b7hBxDVOUl45PrtXtAV2fOC14=
connectrpc.com/connect v1.19.1/go.mod h1:tN20fjdGlewnSFeZxLKb0xwIZ6ozc3OQs2hTXy4du9w=
connectrpc.com/cors v0.1.0 h1:f3gTXJyDZPrDIZCQ567jxfD9PAIpopHiRDnJRt3QuOQ=
connectrpc.com/cors v0.1.0/go.mod h1:v8SJZCPfHtGH1zsm+Ttajpozd4cYIUryl4dFB6QEpfg=
github.com/RaveNoX/go-jsoncommentstrip v1.0.0/go.mod h1:78ihd09MekBnJnxpICcwzCMzGrKSKYe4AqU6PDYYpjk=
github.com/apapsch/go-jsonmerge/v2 v2.0.0 h1:axGnT1gRIfimI7gJifB699GoE/oq+F2MU7Dml6nw9rQ=
github.com/apapsch/go-jsonmerge/v2 v2.0.0/go.mod h1:lvDnEdqiQrp0O42VQGgmlKpxL1AP2+08jFMw88y4klk=
github.com/awnumar/memcall v0.4.0 h1:B7hgZYdfH6Ot1Goaz8jGne/7i8xD4taZie/PNSFZ29g=
github.com/awnumar/memcall v0.4.0/go.mod h1:8xOx1YbfyuCg3Fy6TO8DK0kZUua3V42/goA5Ru47E8w=
github.com/awnumar/memguard v0.23.0 h1:sJ3a1/SWlcuKIQ7MV+R9p0Pvo9CWsMbGZvcZQtmc68A=
github.com/awnumar/memguard v0.23.0/go.mod h1:olVofBrsPdITtJ2HgxQKrEYEMyIBAIciVG4wNnZhW9M=
github.com/bmatcuk/doublestar v1.1.1/go.mod h1:UD6OnuiIn0yFxxA2le/rnRU1G4RaI4UvFv1sNto9p6w=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s=
github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dchest/uniuri v1.2.0 h1:koIcOUdrTIivZgSLhHQvKgqdWZq5d7KdMEWF1Ud6+5g=
github.com/dchest/uniuri v1.2.0/go.mod h1:fSzm4SLHzNZvWLvWJew423PhAzkpNQYq+uNLq4kxhkY=
github.com/e2b-dev/fsnotify v0.0.1 h1:7j0I98HD6VehAuK/bcslvW4QDynAULtOuMZtImihjVk=
github.com/e2b-dev/fsnotify v0.0.1/go.mod h1:jAuDjregRrUixKneTRQwPI847nNuPFg3+n5QM/ku/JM=
github.com/ebitengine/purego v0.10.0 h1:QIw4xfpWT6GWTzaW5XEKy3HXoqrJGx1ijYHzTF0/ISU=
github.com/ebitengine/purego v0.10.0/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/go-chi/chi/v5 v5.2.5 h1:Eg4myHZBjyvJmAFjFvWgrqDTXFyOzjj7YIm3L3mu6Ug=
github.com/go-chi/chi/v5 v5.2.5/go.mod h1:X7Gx4mteadT3eDOMTsXzmI4/rwUpOwBHLpAfupzFJP0=
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/juju/gnuflag v0.0.0-20171113085948-2ce1bb71843d/go.mod h1:2PavIy+JPciBPrBUjwbNvtwB6RQlve+hkpll6QSNmOE=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/oapi-codegen/runtime v1.2.0 h1:RvKc1CVS1QeKSNzO97FBQbSMZyQ8s6rZd+LpmzwHMP4=
github.com/oapi-codegen/runtime v1.2.0/go.mod h1:Y7ZhmmlE8ikZOmuHRRndiIm7nf3xcVv+YMweKgG1DT0=
github.com/orcaman/concurrent-map/v2 v2.0.1 h1:jOJ5Pg2w1oeB6PeDurIYf6k9PQ+aTITr/6lP/L/zp6c=
github.com/orcaman/concurrent-map/v2 v2.0.1/go.mod h1:9Eq3TG2oBe5FirmYWQfYO5iH1q0Jv47PLaNK++uCdOM=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/rs/cors v1.11.1 h1:eU3gRzXLRK57F5rKMGMZURNdIG4EoAmX8k94r9wXWHA=
github.com/rs/cors v1.11.1/go.mod h1:XyqrcTp5zjWr1wsJ8PIRZssZ8b/WMcMf71DJnit4EMU=
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/rs/zerolog v1.34.0 h1:k43nTLIwcTVQAncfCw4KZ2VY6ukYoZaBPNOE8txlOeY=
github.com/rs/zerolog v1.34.0/go.mod h1:bJsvje4Z08ROH4Nhs5iH600c3IkWhwp44iRc54W6wYQ=
github.com/shirou/gopsutil/v4 v4.26.2 h1:X8i6sicvUFih4BmYIGT1m2wwgw2VG9YgrDTi7cIRGUI=
github.com/shirou/gopsutil/v4 v4.26.2/go.mod h1:LZ6ewCSkBqUpvSOf+LsTGnRinC6iaNUNMGBtDkJBaLQ=
github.com/spkg/bom v0.0.0-20160624110644-59b7046e48ad/go.mod h1:qLr4V1qq6nMqFKkMo8ZTx3f+BZEkzsRUY10Xsm2mwU0=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/tklauser/go-sysconf v0.3.16 h1:frioLaCQSsF5Cy1jgRBrzr6t502KIIwQ0MArYICU0nA=
github.com/tklauser/go-sysconf v0.3.16/go.mod h1:/qNL9xxDhc7tx3HSRsLWNnuzbVfh3e7gh/BmM179nYI=
github.com/tklauser/numcpus v0.11.0 h1:nSTwhKH5e1dMNsCdVBukSZrURJRoHbSEQjdEbY+9RXw=
github.com/tklauser/numcpus v0.11.0/go.mod h1:z+LwcLq54uWZTX0u/bGobaV34u6V7KNlTZejzM6/3MQ=
github.com/txn2/txeh v1.8.0 h1:G1vZgom6+P/xWwU53AMOpcZgC5ni382ukcPP1TDVYHk=
github.com/txn2/txeh v1.8.0/go.mod h1:rRI3Egi3+AFmEXQjft051YdYbxeCT3nFmBLsNCZZaxM=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
golang.org/x/crypto v0.50.0 h1:zO47/JPrL6vsNkINmLoo/PH1gcxpls50DNogFvB5ZGI=
golang.org/x/crypto v0.50.0/go.mod h1:3muZ7vA7PBCE6xgPX7nkzzjiUq87kRItoJQM1Yo8S+Q=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.43.0 h1:Rlag2XtaFTxp19wS8MXlJwTvoh8ArU6ezoyFsMyCTNI=
golang.org/x/sys v0.43.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
pgregory.net/rapid v1.2.0 h1:keKAYRcjm+e1F0oAuU5F5+YPAWcyxNNRK2wud503Gnk=
pgregory.net/rapid v1.2.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04=


@@ -1,604 +0,0 @@
// Package api provides primitives to interact with the openapi HTTP API.
//
// Code generated by github.com/oapi-codegen/oapi-codegen/v2 version v2.6.0 DO NOT EDIT.
package api
import (
"context"
"fmt"
"net/http"
"time"
"github.com/go-chi/chi/v5"
"github.com/oapi-codegen/runtime"
openapi_types "github.com/oapi-codegen/runtime/types"
)
const (
AccessTokenAuthScopes = "AccessTokenAuth.Scopes"
)
// Defines values for EntryInfoType.
const (
File EntryInfoType = "file"
)
// Valid indicates whether the value is a known member of the EntryInfoType enum.
func (e EntryInfoType) Valid() bool {
switch e {
case File:
return true
default:
return false
}
}
// EntryInfo defines model for EntryInfo.
type EntryInfo struct {
// Name Name of the file
Name string `json:"name"`
// Path Path to the file
Path string `json:"path"`
// Type Type of the file
Type EntryInfoType `json:"type"`
}
// EntryInfoType Type of the file
type EntryInfoType string
// EnvVars Environment variables to set
type EnvVars map[string]string
// Error defines model for Error.
type Error struct {
// Code Error code
Code int `json:"code"`
// Message Error message
Message string `json:"message"`
}
// Metrics Resource usage metrics
type Metrics struct {
// CpuCount Number of CPU cores
CpuCount *int `json:"cpu_count,omitempty"`
// CpuUsedPct CPU usage percentage
CpuUsedPct *float32 `json:"cpu_used_pct,omitempty"`
// DiskTotal Total disk space in bytes
DiskTotal *int `json:"disk_total,omitempty"`
// DiskUsed Used disk space in bytes
DiskUsed *int `json:"disk_used,omitempty"`
// MemTotal Total virtual memory in bytes
MemTotal *int `json:"mem_total,omitempty"`
// MemUsed Used virtual memory in bytes
MemUsed *int `json:"mem_used,omitempty"`
// Ts Unix timestamp in UTC for current sandbox time
Ts *int64 `json:"ts,omitempty"`
}
// VolumeMount Volume
type VolumeMount struct {
NfsTarget string `json:"nfs_target"`
Path string `json:"path"`
}
// FilePath defines model for FilePath.
type FilePath = string
// Signature defines model for Signature.
type Signature = string
// SignatureExpiration defines model for SignatureExpiration.
type SignatureExpiration = int
// User defines model for User.
type User = string
// FileNotFound defines model for FileNotFound.
type FileNotFound = Error
// InternalServerError defines model for InternalServerError.
type InternalServerError = Error
// InvalidPath defines model for InvalidPath.
type InvalidPath = Error
// InvalidUser defines model for InvalidUser.
type InvalidUser = Error
// NotEnoughDiskSpace defines model for NotEnoughDiskSpace.
type NotEnoughDiskSpace = Error
// UploadSuccess defines model for UploadSuccess.
type UploadSuccess = []EntryInfo
// GetFilesParams defines parameters for GetFiles.
type GetFilesParams struct {
// Path Path to the file, URL encoded. Can be relative to user's home directory.
Path *FilePath `form:"path,omitempty" json:"path,omitempty"`
// Username User used for setting the owner, or resolving relative paths.
Username *User `form:"username,omitempty" json:"username,omitempty"`
// Signature Signature used for file access permission verification.
Signature *Signature `form:"signature,omitempty" json:"signature,omitempty"`
// SignatureExpiration Signature expiration used for defining the expiration time of the signature.
SignatureExpiration *SignatureExpiration `form:"signature_expiration,omitempty" json:"signature_expiration,omitempty"`
}
// PostFilesMultipartBody defines parameters for PostFiles.
type PostFilesMultipartBody struct {
File *openapi_types.File `json:"file,omitempty"`
}
// PostFilesParams defines parameters for PostFiles.
type PostFilesParams struct {
// Path Path to the file, URL encoded. Can be relative to user's home directory.
Path *FilePath `form:"path,omitempty" json:"path,omitempty"`
// Username User used for setting the owner, or resolving relative paths.
Username *User `form:"username,omitempty" json:"username,omitempty"`
// Signature Signature used for file access permission verification.
Signature *Signature `form:"signature,omitempty" json:"signature,omitempty"`
// SignatureExpiration Signature expiration used for defining the expiration time of the signature.
SignatureExpiration *SignatureExpiration `form:"signature_expiration,omitempty" json:"signature_expiration,omitempty"`
}
// PostInitJSONBody defines parameters for PostInit.
type PostInitJSONBody struct {
// AccessToken Access token for secure access to envd service
AccessToken *SecureToken `json:"accessToken,omitempty"`
// DefaultUser The default user to use for operations
DefaultUser *string `json:"defaultUser,omitempty"`
// DefaultWorkdir The default working directory to use for operations
DefaultWorkdir *string `json:"defaultWorkdir,omitempty"`
// EnvVars Environment variables to set
EnvVars *EnvVars `json:"envVars,omitempty"`
// HyperloopIP IP address of the hyperloop server to connect to
HyperloopIP *string `json:"hyperloopIP,omitempty"`
// Timestamp The current timestamp in RFC3339 format
Timestamp *time.Time `json:"timestamp,omitempty"`
VolumeMounts *[]VolumeMount `json:"volumeMounts,omitempty"`
}
// PostFilesMultipartRequestBody defines body for PostFiles for multipart/form-data ContentType.
type PostFilesMultipartRequestBody PostFilesMultipartBody
// PostInitJSONRequestBody defines body for PostInit for application/json ContentType.
type PostInitJSONRequestBody PostInitJSONBody
// ServerInterface represents all server handlers.
type ServerInterface interface {
// Get the environment variables
// (GET /envs)
GetEnvs(w http.ResponseWriter, r *http.Request)
// Download a file
// (GET /files)
GetFiles(w http.ResponseWriter, r *http.Request, params GetFilesParams)
// Upload a file and ensure the parent directories exist. If the file exists, it will be overwritten.
// (POST /files)
PostFiles(w http.ResponseWriter, r *http.Request, params PostFilesParams)
// Check the health of the service
// (GET /health)
GetHealth(w http.ResponseWriter, r *http.Request)
// Set initial vars, ensure the time and metadata is synced with the host
// (POST /init)
PostInit(w http.ResponseWriter, r *http.Request)
// Get the stats of the service
// (GET /metrics)
GetMetrics(w http.ResponseWriter, r *http.Request)
// Quiesce continuous goroutines before Firecracker snapshot
// (POST /snapshot/prepare)
PostSnapshotPrepare(w http.ResponseWriter, r *http.Request)
}
// Unimplemented server implementation that returns http.StatusNotImplemented for each endpoint.
type Unimplemented struct{}
// Get the environment variables
// (GET /envs)
func (_ Unimplemented) GetEnvs(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotImplemented)
}
// Download a file
// (GET /files)
func (_ Unimplemented) GetFiles(w http.ResponseWriter, r *http.Request, params GetFilesParams) {
w.WriteHeader(http.StatusNotImplemented)
}
// Upload a file and ensure the parent directories exist. If the file exists, it will be overwritten.
// (POST /files)
func (_ Unimplemented) PostFiles(w http.ResponseWriter, r *http.Request, params PostFilesParams) {
w.WriteHeader(http.StatusNotImplemented)
}
// Check the health of the service
// (GET /health)
func (_ Unimplemented) GetHealth(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotImplemented)
}
// Set initial vars, ensure the time and metadata is synced with the host
// (POST /init)
func (_ Unimplemented) PostInit(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotImplemented)
}
// Get the stats of the service
// (GET /metrics)
func (_ Unimplemented) GetMetrics(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotImplemented)
}
// Quiesce continuous goroutines before Firecracker snapshot
// (POST /snapshot/prepare)
func (_ Unimplemented) PostSnapshotPrepare(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotImplemented)
}
// ServerInterfaceWrapper converts contexts to parameters.
type ServerInterfaceWrapper struct {
Handler ServerInterface
HandlerMiddlewares []MiddlewareFunc
ErrorHandlerFunc func(w http.ResponseWriter, r *http.Request, err error)
}
type MiddlewareFunc func(http.Handler) http.Handler
// GetEnvs operation middleware
func (siw *ServerInterfaceWrapper) GetEnvs(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
ctx = context.WithValue(ctx, AccessTokenAuthScopes, []string{})
r = r.WithContext(ctx)
handler := http.Handler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
siw.Handler.GetEnvs(w, r)
}))
for _, middleware := range siw.HandlerMiddlewares {
handler = middleware(handler)
}
handler.ServeHTTP(w, r)
}
// GetFiles operation middleware
func (siw *ServerInterfaceWrapper) GetFiles(w http.ResponseWriter, r *http.Request) {
var err error
ctx := r.Context()
ctx = context.WithValue(ctx, AccessTokenAuthScopes, []string{})
r = r.WithContext(ctx)
// Parameter object where we will unmarshal all parameters from the context
var params GetFilesParams
// ------------- Optional query parameter "path" -------------
err = runtime.BindQueryParameterWithOptions("form", true, false, "path", r.URL.Query(), &params.Path, runtime.BindQueryParameterOptions{Type: "string", Format: ""})
if err != nil {
siw.ErrorHandlerFunc(w, r, &InvalidParamFormatError{ParamName: "path", Err: err})
return
}
// ------------- Optional query parameter "username" -------------
err = runtime.BindQueryParameterWithOptions("form", true, false, "username", r.URL.Query(), &params.Username, runtime.BindQueryParameterOptions{Type: "string", Format: ""})
if err != nil {
siw.ErrorHandlerFunc(w, r, &InvalidParamFormatError{ParamName: "username", Err: err})
return
}
// ------------- Optional query parameter "signature" -------------
err = runtime.BindQueryParameterWithOptions("form", true, false, "signature", r.URL.Query(), &params.Signature, runtime.BindQueryParameterOptions{Type: "string", Format: ""})
if err != nil {
siw.ErrorHandlerFunc(w, r, &InvalidParamFormatError{ParamName: "signature", Err: err})
return
}
// ------------- Optional query parameter "signature_expiration" -------------
err = runtime.BindQueryParameterWithOptions("form", true, false, "signature_expiration", r.URL.Query(), &params.SignatureExpiration, runtime.BindQueryParameterOptions{Type: "integer", Format: ""})
if err != nil {
siw.ErrorHandlerFunc(w, r, &InvalidParamFormatError{ParamName: "signature_expiration", Err: err})
return
}
handler := http.Handler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
siw.Handler.GetFiles(w, r, params)
}))
for _, middleware := range siw.HandlerMiddlewares {
handler = middleware(handler)
}
handler.ServeHTTP(w, r)
}
// PostFiles operation middleware
func (siw *ServerInterfaceWrapper) PostFiles(w http.ResponseWriter, r *http.Request) {
var err error
ctx := r.Context()
ctx = context.WithValue(ctx, AccessTokenAuthScopes, []string{})
r = r.WithContext(ctx)
// Parameter object where we will unmarshal all parameters from the context
var params PostFilesParams
// ------------- Optional query parameter "path" -------------
err = runtime.BindQueryParameterWithOptions("form", true, false, "path", r.URL.Query(), &params.Path, runtime.BindQueryParameterOptions{Type: "string", Format: ""})
if err != nil {
siw.ErrorHandlerFunc(w, r, &InvalidParamFormatError{ParamName: "path", Err: err})
return
}
// ------------- Optional query parameter "username" -------------
err = runtime.BindQueryParameterWithOptions("form", true, false, "username", r.URL.Query(), &params.Username, runtime.BindQueryParameterOptions{Type: "string", Format: ""})
if err != nil {
siw.ErrorHandlerFunc(w, r, &InvalidParamFormatError{ParamName: "username", Err: err})
return
}
// ------------- Optional query parameter "signature" -------------
err = runtime.BindQueryParameterWithOptions("form", true, false, "signature", r.URL.Query(), &params.Signature, runtime.BindQueryParameterOptions{Type: "string", Format: ""})
if err != nil {
siw.ErrorHandlerFunc(w, r, &InvalidParamFormatError{ParamName: "signature", Err: err})
return
}
// ------------- Optional query parameter "signature_expiration" -------------
err = runtime.BindQueryParameterWithOptions("form", true, false, "signature_expiration", r.URL.Query(), &params.SignatureExpiration, runtime.BindQueryParameterOptions{Type: "integer", Format: ""})
if err != nil {
siw.ErrorHandlerFunc(w, r, &InvalidParamFormatError{ParamName: "signature_expiration", Err: err})
return
}
handler := http.Handler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
siw.Handler.PostFiles(w, r, params)
}))
for _, middleware := range siw.HandlerMiddlewares {
handler = middleware(handler)
}
handler.ServeHTTP(w, r)
}
// GetHealth operation middleware
func (siw *ServerInterfaceWrapper) GetHealth(w http.ResponseWriter, r *http.Request) {
handler := http.Handler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
siw.Handler.GetHealth(w, r)
}))
for _, middleware := range siw.HandlerMiddlewares {
handler = middleware(handler)
}
handler.ServeHTTP(w, r)
}
// PostInit operation middleware
func (siw *ServerInterfaceWrapper) PostInit(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
ctx = context.WithValue(ctx, AccessTokenAuthScopes, []string{})
r = r.WithContext(ctx)
handler := http.Handler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
siw.Handler.PostInit(w, r)
}))
for _, middleware := range siw.HandlerMiddlewares {
handler = middleware(handler)
}
handler.ServeHTTP(w, r)
}
// GetMetrics operation middleware
func (siw *ServerInterfaceWrapper) GetMetrics(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
ctx = context.WithValue(ctx, AccessTokenAuthScopes, []string{})
r = r.WithContext(ctx)
handler := http.Handler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
siw.Handler.GetMetrics(w, r)
}))
for _, middleware := range siw.HandlerMiddlewares {
handler = middleware(handler)
}
handler.ServeHTTP(w, r)
}
// PostSnapshotPrepare operation middleware
func (siw *ServerInterfaceWrapper) PostSnapshotPrepare(w http.ResponseWriter, r *http.Request) {
handler := http.Handler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
siw.Handler.PostSnapshotPrepare(w, r)
}))
for _, middleware := range siw.HandlerMiddlewares {
handler = middleware(handler)
}
handler.ServeHTTP(w, r)
}
type UnescapedCookieParamError struct {
ParamName string
Err error
}
func (e *UnescapedCookieParamError) Error() string {
return fmt.Sprintf("error unescaping cookie parameter '%s'", e.ParamName)
}
func (e *UnescapedCookieParamError) Unwrap() error {
return e.Err
}
type UnmarshalingParamError struct {
ParamName string
Err error
}
func (e *UnmarshalingParamError) Error() string {
return fmt.Sprintf("Error unmarshaling parameter %s as JSON: %s", e.ParamName, e.Err.Error())
}
func (e *UnmarshalingParamError) Unwrap() error {
return e.Err
}
type RequiredParamError struct {
ParamName string
}
func (e *RequiredParamError) Error() string {
return fmt.Sprintf("Query argument %s is required, but not found", e.ParamName)
}
type RequiredHeaderError struct {
ParamName string
Err error
}
func (e *RequiredHeaderError) Error() string {
return fmt.Sprintf("Header parameter %s is required, but not found", e.ParamName)
}
func (e *RequiredHeaderError) Unwrap() error {
return e.Err
}
type InvalidParamFormatError struct {
ParamName string
Err error
}
func (e *InvalidParamFormatError) Error() string {
return fmt.Sprintf("Invalid format for parameter %s: %s", e.ParamName, e.Err.Error())
}
func (e *InvalidParamFormatError) Unwrap() error {
return e.Err
}
type TooManyValuesForParamError struct {
ParamName string
Count int
}
func (e *TooManyValuesForParamError) Error() string {
return fmt.Sprintf("Expected one value for %s, got %d", e.ParamName, e.Count)
}
// Handler creates http.Handler with routing matching OpenAPI spec.
func Handler(si ServerInterface) http.Handler {
return HandlerWithOptions(si, ChiServerOptions{})
}
type ChiServerOptions struct {
BaseURL string
BaseRouter chi.Router
Middlewares []MiddlewareFunc
ErrorHandlerFunc func(w http.ResponseWriter, r *http.Request, err error)
}
// HandlerFromMux creates http.Handler with routing matching OpenAPI spec based on the provided mux.
func HandlerFromMux(si ServerInterface, r chi.Router) http.Handler {
return HandlerWithOptions(si, ChiServerOptions{
BaseRouter: r,
})
}
func HandlerFromMuxWithBaseURL(si ServerInterface, r chi.Router, baseURL string) http.Handler {
return HandlerWithOptions(si, ChiServerOptions{
BaseURL: baseURL,
BaseRouter: r,
})
}
// HandlerWithOptions creates http.Handler with additional options
func HandlerWithOptions(si ServerInterface, options ChiServerOptions) http.Handler {
r := options.BaseRouter
if r == nil {
r = chi.NewRouter()
}
if options.ErrorHandlerFunc == nil {
options.ErrorHandlerFunc = func(w http.ResponseWriter, r *http.Request, err error) {
http.Error(w, err.Error(), http.StatusBadRequest)
}
}
wrapper := ServerInterfaceWrapper{
Handler: si,
HandlerMiddlewares: options.Middlewares,
ErrorHandlerFunc: options.ErrorHandlerFunc,
}
r.Group(func(r chi.Router) {
r.Get(options.BaseURL+"/envs", wrapper.GetEnvs)
})
r.Group(func(r chi.Router) {
r.Get(options.BaseURL+"/files", wrapper.GetFiles)
})
r.Group(func(r chi.Router) {
r.Post(options.BaseURL+"/files", wrapper.PostFiles)
})
r.Group(func(r chi.Router) {
r.Get(options.BaseURL+"/health", wrapper.GetHealth)
})
r.Group(func(r chi.Router) {
r.Post(options.BaseURL+"/init", wrapper.PostInit)
})
r.Group(func(r chi.Router) {
r.Get(options.BaseURL+"/metrics", wrapper.GetMetrics)
})
r.Group(func(r chi.Router) {
r.Post(options.BaseURL+"/snapshot/prepare", wrapper.PostSnapshotPrepare)
})
return r
}


@@ -1,133 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
// Modifications by M/S Omukk
package api
import (
"errors"
"fmt"
"net/http"
"slices"
"strconv"
"strings"
"time"
"github.com/awnumar/memguard"
"git.omukk.dev/wrenn/sandbox/envd/internal/shared/keys"
)
const (
SigningReadOperation = "read"
SigningWriteOperation = "write"
accessTokenHeader = "X-Access-Token"
)
// paths that are always allowed without general authentication
// POST/init is secured via MMDS hash validation instead
var authExcludedPaths = []string{
"GET/health",
"GET/files",
"POST/files",
"POST/init",
"POST/snapshot/prepare",
}
func (a *API) WithAuthorization(handler http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
if a.accessToken.IsSet() {
authHeader := req.Header.Get(accessTokenHeader)
// check if this path is allowed without authentication (e.g., health check, endpoints supporting signing)
allowedPath := slices.Contains(authExcludedPaths, req.Method+req.URL.Path)
if !a.accessToken.Equals(authHeader) && !allowedPath {
a.logger.Error().Msg("Trying to access secured envd without correct access token")
err := fmt.Errorf("unauthorized access, please provide a valid access token or method signing if supported")
jsonError(w, http.StatusUnauthorized, err)
return
}
}
handler.ServeHTTP(w, req)
})
}
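// generateSignature derives the v1 signing key: path, operation, username,
// the raw access token and (optionally) the expiration are joined with ":",
// hashed with the shared SHA-256 helper, and prefixed with "v1_".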
func (a *API) generateSignature(path string, username string, operation string, signatureExpiration *int64) (string, error) {
tokenBytes, err := a.accessToken.Bytes()
if err != nil {
return "", fmt.Errorf("access token is not set: %w", err)
}
defer memguard.WipeBytes(tokenBytes)
var signature string
hasher := keys.NewSHA256Hashing()
if signatureExpiration == nil {
signature = strings.Join([]string{path, operation, username, string(tokenBytes)}, ":")
} else {
signature = strings.Join([]string{path, operation, username, string(tokenBytes), strconv.FormatInt(*signatureExpiration, 10)}, ":")
}
return fmt.Sprintf("v1_%s", hasher.HashWithoutPrefix([]byte(signature))), nil
}
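// validateSigning authorizes a files request: a matching access-token header
// wins outright; otherwise the query-string signature must equal the locally
// recomputed one and, when an expiration is supplied, must not be expired.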
func (a *API) validateSigning(r *http.Request, signature *string, signatureExpiration *int, username *string, path string, operation string) (err error) {
var expectedSignature string
// no need to validate signing key if access token is not set
if !a.accessToken.IsSet() {
return nil
}
// check if access token is sent in the header
tokenFromHeader := r.Header.Get(accessTokenHeader)
if tokenFromHeader != "" {
if !a.accessToken.Equals(tokenFromHeader) {
return fmt.Errorf("access token present in header but does not match")
}
return nil
}
if signature == nil {
return fmt.Errorf("missing signature query parameter")
}
// Empty string is used when no username is provided and the default user should be used
signatureUsername := ""
if username != nil {
signatureUsername = *username
}
if signatureExpiration == nil {
expectedSignature, err = a.generateSignature(path, signatureUsername, operation, nil)
} else {
exp := int64(*signatureExpiration)
expectedSignature, err = a.generateSignature(path, signatureUsername, operation, &exp)
}
if err != nil {
a.logger.Error().Err(err).Msg("error generating signing key")
return errors.New("invalid signature")
}
// signature validation
if expectedSignature != *signature {
return fmt.Errorf("invalid signature")
}
// signature expiration
if signatureExpiration != nil {
exp := int64(*signatureExpiration)
if exp < time.Now().Unix() {
return fmt.Errorf("signature is already expired")
}
}
return nil
}


@@ -1,64 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package api
import (
"fmt"
"strconv"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"git.omukk.dev/wrenn/sandbox/envd/internal/shared/keys"
)
func TestKeyGenerationAlgorithmIsStable(t *testing.T) {
t.Parallel()
apiToken := "secret-access-token"
secureToken := &SecureToken{}
err := secureToken.Set([]byte(apiToken))
require.NoError(t, err)
api := &API{accessToken: secureToken}
path := "/path/to/demo.txt"
username := "root"
operation := "write"
timestamp := time.Now().Unix()
signature, err := api.generateSignature(path, username, operation, &timestamp)
require.NoError(t, err)
assert.NotEmpty(t, signature)
// locally generated signature
hasher := keys.NewSHA256Hashing()
localSignatureTmp := fmt.Sprintf("%s:%s:%s:%s:%s", path, operation, username, apiToken, strconv.FormatInt(timestamp, 10))
localSignature := fmt.Sprintf("v1_%s", hasher.HashWithoutPrefix([]byte(localSignatureTmp)))
assert.Equal(t, localSignature, signature)
}
func TestKeyGenerationAlgorithmWithoutExpirationIsStable(t *testing.T) {
t.Parallel()
apiToken := "secret-access-token"
secureToken := &SecureToken{}
err := secureToken.Set([]byte(apiToken))
require.NoError(t, err)
api := &API{accessToken: secureToken}
path := "/path/to/resource.txt"
username := "user"
operation := "read"
signature, err := api.generateSignature(path, username, operation, nil)
require.NoError(t, err)
assert.NotEmpty(t, signature)
// locally generated signature
hasher := keys.NewSHA256Hashing()
localSignatureTmp := fmt.Sprintf("%s:%s:%s:%s", path, operation, username, apiToken)
localSignature := fmt.Sprintf("v1_%s", hasher.HashWithoutPrefix([]byte(localSignatureTmp)))
assert.Equal(t, localSignature, signature)
}


@@ -1,10 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
# yaml-language-server: $schema=https://raw.githubusercontent.com/deepmap/oapi-codegen/HEAD/configuration-schema.json
package: api
output: api.gen.go
generate:
models: true
chi-server: true
client: false


@@ -1,187 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
// Modifications by M/S Omukk
package api
import (
"compress/gzip"
"errors"
"fmt"
"io"
"mime"
"net/http"
"os"
"os/user"
"path/filepath"
"git.omukk.dev/wrenn/sandbox/envd/internal/execcontext"
"git.omukk.dev/wrenn/sandbox/envd/internal/logs"
"git.omukk.dev/wrenn/sandbox/envd/internal/permissions"
)
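// GetFiles streams a single regular file back to the caller, honoring
// optional signature-based authorization, gzip content negotiation, and
// Range/conditional requests via http.ServeContent.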
func (a *API) GetFiles(w http.ResponseWriter, r *http.Request, params GetFilesParams) {
defer r.Body.Close()
var errorCode int
var errMsg error
var path string
if params.Path != nil {
path = *params.Path
}
operationID := logs.AssignOperationID()
// signing authorization if needed
err := a.validateSigning(r, params.Signature, params.SignatureExpiration, params.Username, path, SigningReadOperation)
if err != nil {
a.logger.Error().Err(err).Str(string(logs.OperationIDKey), operationID).Msg("error during auth validation")
jsonError(w, http.StatusUnauthorized, err)
return
}
username, err := execcontext.ResolveDefaultUsername(params.Username, a.defaults.User)
if err != nil {
a.logger.Error().Err(err).Str(string(logs.OperationIDKey), operationID).Msg("no user specified")
jsonError(w, http.StatusBadRequest, err)
return
}
defer func() {
l := a.logger.
Err(errMsg).
Str("method", r.Method+" "+r.URL.Path).
Str(string(logs.OperationIDKey), operationID).
Str("path", path).
Str("username", username)
if errMsg != nil {
l = l.Int("error_code", errorCode)
}
l.Msg("File read")
}()
u, err := user.Lookup(username)
if err != nil {
errMsg = fmt.Errorf("error looking up user '%s': %w", username, err)
errorCode = http.StatusUnauthorized
jsonError(w, errorCode, errMsg)
return
}
resolvedPath, err := permissions.ExpandAndResolve(path, u, a.defaults.Workdir)
if err != nil {
errMsg = fmt.Errorf("error expanding and resolving path '%s': %w", path, err)
errorCode = http.StatusBadRequest
jsonError(w, errorCode, errMsg)
return
}
stat, err := os.Stat(resolvedPath)
if err != nil {
if errors.Is(err, os.ErrNotExist) {
errMsg = fmt.Errorf("path '%s' does not exist", resolvedPath)
errorCode = http.StatusNotFound
jsonError(w, errorCode, errMsg)
return
}
errMsg = fmt.Errorf("error checking if path exists '%s': %w", resolvedPath, err)
errorCode = http.StatusInternalServerError
jsonError(w, errorCode, errMsg)
return
}
if stat.IsDir() {
errMsg = fmt.Errorf("path '%s' is a directory", resolvedPath)
errorCode = http.StatusBadRequest
jsonError(w, errorCode, errMsg)
return
}
// Reject anything that isn't a regular file (devices, pipes, sockets, etc.).
// Reading device files like /dev/zero or /dev/urandom produces infinite data
// and will exhaust memory on all layers of the stack.
if !stat.Mode().IsRegular() {
errMsg = fmt.Errorf("path '%s' is not a regular file", resolvedPath)
errorCode = http.StatusBadRequest
jsonError(w, errorCode, errMsg)
return
}
// Validate Accept-Encoding header
encoding, err := parseAcceptEncoding(r)
if err != nil {
errMsg = fmt.Errorf("error parsing Accept-Encoding: %w", err)
errorCode = http.StatusNotAcceptable
jsonError(w, errorCode, errMsg)
return
}
// Tell caches to store separate variants for different Accept-Encoding values
w.Header().Set("Vary", "Accept-Encoding")
// Fall back to identity for Range or conditional requests to preserve http.ServeContent
// behavior (206 Partial Content, 304 Not Modified). However, we must check if identity
// is acceptable per the Accept-Encoding header.
hasRangeOrConditional := r.Header.Get("Range") != "" ||
r.Header.Get("If-Modified-Since") != "" ||
r.Header.Get("If-None-Match") != "" ||
r.Header.Get("If-Range") != ""
if hasRangeOrConditional {
if !isIdentityAcceptable(r) {
errMsg = fmt.Errorf("identity encoding not acceptable for Range or conditional request")
errorCode = http.StatusNotAcceptable
jsonError(w, errorCode, errMsg)
return
}
encoding = EncodingIdentity
}
file, err := os.Open(resolvedPath)
if err != nil {
errMsg = fmt.Errorf("error opening file '%s': %w", resolvedPath, err)
errorCode = http.StatusInternalServerError
jsonError(w, errorCode, errMsg)
return
}
defer file.Close()
w.Header().Set("Content-Disposition", mime.FormatMediaType("inline", map[string]string{"filename": filepath.Base(resolvedPath)}))
// Serve with gzip encoding if requested.
if encoding == EncodingGzip {
w.Header().Set("Content-Encoding", EncodingGzip)
// Set Content-Type based on file extension, preserving the original type
contentType := mime.TypeByExtension(filepath.Ext(path))
if contentType == "" {
contentType = "application/octet-stream"
}
w.Header().Set("Content-Type", contentType)
gw := gzip.NewWriter(w)
defer gw.Close()
_, err = io.Copy(gw, file)
if err != nil {
a.logger.Error().Err(err).Str(string(logs.OperationIDKey), operationID).Msg("error writing gzip response")
}
return
}
http.ServeContent(w, r, path, stat.ModTime(), file)
}


@@ -1,405 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
// Modifications by M/S Omukk
package api
import (
"bytes"
"compress/gzip"
"context"
"io"
"mime/multipart"
"net/http"
"net/http/httptest"
"net/url"
"os"
"os/user"
"path/filepath"
"testing"
"github.com/rs/zerolog"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"git.omukk.dev/wrenn/sandbox/envd/internal/execcontext"
"git.omukk.dev/wrenn/sandbox/envd/internal/utils"
)
func TestGetFilesContentDisposition(t *testing.T) {
t.Parallel()
currentUser, err := user.Current()
require.NoError(t, err)
tests := []struct {
name string
filename string
expectedHeader string
}{
{
name: "simple filename",
filename: "test.txt",
expectedHeader: `inline; filename=test.txt`,
},
{
name: "filename with extension",
filename: "presentation.pptx",
expectedHeader: `inline; filename=presentation.pptx`,
},
{
name: "filename with multiple dots",
filename: "archive.tar.gz",
expectedHeader: `inline; filename=archive.tar.gz`,
},
{
name: "filename with spaces",
filename: "my document.pdf",
expectedHeader: `inline; filename="my document.pdf"`,
},
{
name: "filename with quotes",
filename: `file"name.txt`,
expectedHeader: `inline; filename="file\"name.txt"`,
},
{
name: "filename with backslash",
filename: `file\name.txt`,
expectedHeader: `inline; filename="file\\name.txt"`,
},
{
name: "unicode filename",
filename: "\u6587\u6863.pdf", // 文档.pdf in Chinese
expectedHeader: "inline; filename*=utf-8''%E6%96%87%E6%A1%A3.pdf",
},
{
name: "dotfile preserved",
filename: ".env",
expectedHeader: `inline; filename=.env`,
},
{
name: "dotfile with extension preserved",
filename: ".gitignore",
expectedHeader: `inline; filename=.gitignore`,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
// Create a temp directory and file
tempDir := t.TempDir()
tempFile := filepath.Join(tempDir, tt.filename)
err := os.WriteFile(tempFile, []byte("test content"), 0o644)
require.NoError(t, err)
// Create test API
logger := zerolog.Nop()
defaults := &execcontext.Defaults{
EnvVars: utils.NewMap[string, string](),
User: currentUser.Username,
}
api := New(&logger, defaults, nil, false, context.Background(), nil, "test")
// Create request and response recorder
req := httptest.NewRequest(http.MethodGet, "/files?path="+url.QueryEscape(tempFile), nil)
w := httptest.NewRecorder()
// Call the handler
params := GetFilesParams{
Path: &tempFile,
Username: &currentUser.Username,
}
api.GetFiles(w, req, params)
// Check response
resp := w.Result()
defer resp.Body.Close()
assert.Equal(t, http.StatusOK, resp.StatusCode)
// Verify Content-Disposition header
contentDisposition := resp.Header.Get("Content-Disposition")
assert.Equal(t, tt.expectedHeader, contentDisposition, "Content-Disposition header should be set with correct filename")
})
}
}
func TestGetFilesContentDispositionWithNestedPath(t *testing.T) {
t.Parallel()
currentUser, err := user.Current()
require.NoError(t, err)
// Create a temp directory with nested structure
tempDir := t.TempDir()
nestedDir := filepath.Join(tempDir, "subdir", "another")
err = os.MkdirAll(nestedDir, 0o755)
require.NoError(t, err)
filename := "document.pdf"
tempFile := filepath.Join(nestedDir, filename)
err = os.WriteFile(tempFile, []byte("test content"), 0o644)
require.NoError(t, err)
// Create test API
logger := zerolog.Nop()
defaults := &execcontext.Defaults{
EnvVars: utils.NewMap[string, string](),
User: currentUser.Username,
}
api := New(&logger, defaults, nil, false, context.Background(), nil, "test")
// Create request and response recorder
req := httptest.NewRequest(http.MethodGet, "/files?path="+url.QueryEscape(tempFile), nil)
w := httptest.NewRecorder()
// Call the handler
params := GetFilesParams{
Path: &tempFile,
Username: &currentUser.Username,
}
api.GetFiles(w, req, params)
// Check response
resp := w.Result()
defer resp.Body.Close()
assert.Equal(t, http.StatusOK, resp.StatusCode)
// Verify Content-Disposition header uses only the base filename, not the full path
contentDisposition := resp.Header.Get("Content-Disposition")
assert.Equal(t, `inline; filename=document.pdf`, contentDisposition, "Content-Disposition should contain only the filename, not the path")
}
func TestGetFiles_GzipEncoding_ExplicitIdentityOffWithRange(t *testing.T) {
t.Parallel()
currentUser, err := user.Current()
require.NoError(t, err)
// Create a temp directory with a test file
tempDir := t.TempDir()
filename := "document.pdf"
tempFile := filepath.Join(tempDir, filename)
err = os.WriteFile(tempFile, []byte("test content"), 0o644)
require.NoError(t, err)
// Create test API
logger := zerolog.Nop()
defaults := &execcontext.Defaults{
EnvVars: utils.NewMap[string, string](),
User: currentUser.Username,
}
api := New(&logger, defaults, nil, false, context.Background(), nil, "test")
// Create request and response recorder
req := httptest.NewRequest(http.MethodGet, "/files?path="+url.QueryEscape(tempFile), nil)
req.Header.Set("Accept-Encoding", "gzip; q=1,*; q=0")
req.Header.Set("Range", "bytes=0-4") // Request first 5 bytes
w := httptest.NewRecorder()
// Call the handler
params := GetFilesParams{
Path: &tempFile,
Username: &currentUser.Username,
}
api.GetFiles(w, req, params)
// Check response
resp := w.Result()
defer resp.Body.Close()
assert.Equal(t, http.StatusNotAcceptable, resp.StatusCode)
}
func TestGetFiles_GzipDownload(t *testing.T) {
t.Parallel()
currentUser, err := user.Current()
require.NoError(t, err)
originalContent := []byte("hello world, this is a test file for gzip compression")
// Create a temp file with known content
tempDir := t.TempDir()
tempFile := filepath.Join(tempDir, "test.txt")
err = os.WriteFile(tempFile, originalContent, 0o644)
require.NoError(t, err)
logger := zerolog.Nop()
defaults := &execcontext.Defaults{
EnvVars: utils.NewMap[string, string](),
User: currentUser.Username,
}
api := New(&logger, defaults, nil, false, context.Background(), nil, "test")
req := httptest.NewRequest(http.MethodGet, "/files?path="+url.QueryEscape(tempFile), nil)
req.Header.Set("Accept-Encoding", "gzip")
w := httptest.NewRecorder()
params := GetFilesParams{
Path: &tempFile,
Username: &currentUser.Username,
}
api.GetFiles(w, req, params)
resp := w.Result()
defer resp.Body.Close()
assert.Equal(t, http.StatusOK, resp.StatusCode)
assert.Equal(t, "gzip", resp.Header.Get("Content-Encoding"))
assert.Equal(t, "text/plain; charset=utf-8", resp.Header.Get("Content-Type"))
// Decompress the gzip response body
gzReader, err := gzip.NewReader(resp.Body)
require.NoError(t, err)
defer gzReader.Close()
decompressed, err := io.ReadAll(gzReader)
require.NoError(t, err)
assert.Equal(t, originalContent, decompressed)
}
func TestPostFiles_GzipUpload(t *testing.T) {
t.Parallel()
currentUser, err := user.Current()
require.NoError(t, err)
originalContent := []byte("hello world, this is a test file uploaded with gzip")
// Build a multipart body
var multipartBuf bytes.Buffer
mpWriter := multipart.NewWriter(&multipartBuf)
part, err := mpWriter.CreateFormFile("file", "uploaded.txt")
require.NoError(t, err)
_, err = part.Write(originalContent)
require.NoError(t, err)
err = mpWriter.Close()
require.NoError(t, err)
// Gzip-compress the entire multipart body
var gzBuf bytes.Buffer
gzWriter := gzip.NewWriter(&gzBuf)
_, err = gzWriter.Write(multipartBuf.Bytes())
require.NoError(t, err)
err = gzWriter.Close()
require.NoError(t, err)
// Create test API
tempDir := t.TempDir()
destPath := filepath.Join(tempDir, "uploaded.txt")
logger := zerolog.Nop()
defaults := &execcontext.Defaults{
EnvVars: utils.NewMap[string, string](),
User: currentUser.Username,
}
api := New(&logger, defaults, nil, false, context.Background(), nil, "test")
req := httptest.NewRequest(http.MethodPost, "/files?path="+url.QueryEscape(destPath), &gzBuf)
req.Header.Set("Content-Type", mpWriter.FormDataContentType())
req.Header.Set("Content-Encoding", "gzip")
w := httptest.NewRecorder()
params := PostFilesParams{
Path: &destPath,
Username: &currentUser.Username,
}
api.PostFiles(w, req, params)
resp := w.Result()
defer resp.Body.Close()
assert.Equal(t, http.StatusOK, resp.StatusCode)
// Verify the file was written with the original (decompressed) content
data, err := os.ReadFile(destPath)
require.NoError(t, err)
assert.Equal(t, originalContent, data)
}
func TestGzipUploadThenGzipDownload(t *testing.T) {
t.Parallel()
currentUser, err := user.Current()
require.NoError(t, err)
originalContent := []byte("round-trip gzip test: upload compressed, download compressed, verify match")
// --- Upload with gzip ---
// Build a multipart body
var multipartBuf bytes.Buffer
mpWriter := multipart.NewWriter(&multipartBuf)
part, err := mpWriter.CreateFormFile("file", "roundtrip.txt")
require.NoError(t, err)
_, err = part.Write(originalContent)
require.NoError(t, err)
err = mpWriter.Close()
require.NoError(t, err)
// Gzip-compress the entire multipart body
var gzBuf bytes.Buffer
gzWriter := gzip.NewWriter(&gzBuf)
_, err = gzWriter.Write(multipartBuf.Bytes())
require.NoError(t, err)
err = gzWriter.Close()
require.NoError(t, err)
tempDir := t.TempDir()
destPath := filepath.Join(tempDir, "roundtrip.txt")
logger := zerolog.Nop()
defaults := &execcontext.Defaults{
EnvVars: utils.NewMap[string, string](),
User: currentUser.Username,
}
api := New(&logger, defaults, nil, false, context.Background(), nil, "test")
uploadReq := httptest.NewRequest(http.MethodPost, "/files?path="+url.QueryEscape(destPath), &gzBuf)
uploadReq.Header.Set("Content-Type", mpWriter.FormDataContentType())
uploadReq.Header.Set("Content-Encoding", "gzip")
uploadW := httptest.NewRecorder()
uploadParams := PostFilesParams{
Path: &destPath,
Username: &currentUser.Username,
}
api.PostFiles(uploadW, uploadReq, uploadParams)
uploadResp := uploadW.Result()
defer uploadResp.Body.Close()
require.Equal(t, http.StatusOK, uploadResp.StatusCode)
// --- Download with gzip ---
downloadReq := httptest.NewRequest(http.MethodGet, "/files?path="+url.QueryEscape(destPath), nil)
downloadReq.Header.Set("Accept-Encoding", "gzip")
downloadW := httptest.NewRecorder()
downloadParams := GetFilesParams{
Path: &destPath,
Username: &currentUser.Username,
}
api.GetFiles(downloadW, downloadReq, downloadParams)
downloadResp := downloadW.Result()
defer downloadResp.Body.Close()
require.Equal(t, http.StatusOK, downloadResp.StatusCode)
assert.Equal(t, "gzip", downloadResp.Header.Get("Content-Encoding"))
// Decompress and verify content matches original
gzReader, err := gzip.NewReader(downloadResp.Body)
require.NoError(t, err)
defer gzReader.Close()
decompressed, err := io.ReadAll(gzReader)
require.NoError(t, err)
assert.Equal(t, originalContent, decompressed)
}
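
The table-driven cases at the top of this file pin down the quoting rules for Content-Disposition: bare tokens stay unquoted (RFC 6266), names containing spaces, quotes, or backslashes are quoted with backslash escapes, and non-ASCII names fall back to the RFC 5987 filename* form. The following is a minimal sketch of a builder that satisfies those cases; buildContentDisposition is a hypothetical name, not necessarily the helper GetFiles actually uses.

package api

import (
	"net/url"
	"strings"
)

// buildContentDisposition is a hypothetical sketch of a header builder that
// would satisfy the table-driven cases above.
func buildContentDisposition(name string) string {
	needsQuoting := false
	for _, r := range name {
		if r > 127 {
			// Non-ASCII: RFC 5987 extended notation, percent-encoded UTF-8.
			return "inline; filename*=utf-8''" + url.PathEscape(name)
		}
		if r == ' ' || r == '"' || r == '\\' {
			needsQuoting = true
		}
	}
	if needsQuoting {
		// Escape backslashes first, then quotes, then wrap in quotes.
		escaped := strings.ReplaceAll(name, `\`, `\\`)
		escaped = strings.ReplaceAll(escaped, `"`, `\"`)
		return `inline; filename="` + escaped + `"`
	}
	return "inline; filename=" + name
}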

View File

@@ -1,229 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package api
import (
"compress/gzip"
"fmt"
"io"
"net/http"
"slices"
"sort"
"strconv"
"strings"
)
const (
// EncodingGzip is the gzip content encoding.
EncodingGzip = "gzip"
// EncodingIdentity means no encoding (passthrough).
EncodingIdentity = "identity"
// EncodingWildcard means any encoding is acceptable.
EncodingWildcard = "*"
)
// SupportedEncodings lists the content encodings supported for file transfer.
// The order matters - encodings are checked in order of preference.
var SupportedEncodings = []string{
EncodingGzip,
}
// encodingWithQuality holds an encoding name and its quality value.
type encodingWithQuality struct {
encoding string
quality float64
}
// isSupportedEncoding checks if the given encoding is in the supported list.
// Per RFC 7231, content-coding values are case-insensitive.
func isSupportedEncoding(encoding string) bool {
return slices.Contains(SupportedEncodings, strings.ToLower(encoding))
}
// parseEncodingWithQuality parses an encoding value and extracts the quality.
// Returns the encoding name (lowercased) and quality value (default 1.0 if not specified).
// Per RFC 7231, content-coding values are case-insensitive.
func parseEncodingWithQuality(value string) encodingWithQuality {
value = strings.TrimSpace(value)
quality := 1.0
if idx := strings.Index(value, ";"); idx != -1 {
params := value[idx+1:]
value = strings.TrimSpace(value[:idx])
// Parse q=X.X parameter
for param := range strings.SplitSeq(params, ";") {
param = strings.TrimSpace(param)
if strings.HasPrefix(strings.ToLower(param), "q=") {
if q, err := strconv.ParseFloat(param[2:], 64); err == nil {
quality = q
}
}
}
}
// Normalize encoding to lowercase per RFC 7231
return encodingWithQuality{encoding: strings.ToLower(value), quality: quality}
}
// parseEncoding extracts the encoding name from a header value, stripping quality.
func parseEncoding(value string) string {
return parseEncodingWithQuality(value).encoding
}
// parseContentEncoding parses the Content-Encoding header and returns the encoding.
// Returns an error if an unsupported encoding is specified.
// If no Content-Encoding header is present, returns empty string.
func parseContentEncoding(r *http.Request) (string, error) {
header := r.Header.Get("Content-Encoding")
if header == "" {
return EncodingIdentity, nil
}
encoding := parseEncoding(header)
if encoding == EncodingIdentity {
return EncodingIdentity, nil
}
if !isSupportedEncoding(encoding) {
return "", fmt.Errorf("unsupported Content-Encoding: %s, supported: %v", header, SupportedEncodings)
}
return encoding, nil
}
// parseAcceptEncodingHeader parses the Accept-Encoding header and returns
// the parsed encodings along with the identity rejection state.
// Per RFC 7231 Section 5.3.4, identity is acceptable unless excluded by
// "identity;q=0" or "*;q=0" without a more specific entry for identity with q>0.
func parseAcceptEncodingHeader(header string) ([]encodingWithQuality, bool) {
if header == "" {
return nil, false // identity not rejected when header is empty
}
// Parse all encodings with their quality values
var encodings []encodingWithQuality
for value := range strings.SplitSeq(header, ",") {
eq := parseEncodingWithQuality(value)
encodings = append(encodings, eq)
}
// Check if identity is rejected per RFC 7231 Section 5.3.4:
// identity is acceptable unless excluded by "identity;q=0" or "*;q=0"
// without a more specific entry for identity with q>0.
identityRejected := false
identityExplicitlyAccepted := false
wildcardRejected := false
for _, eq := range encodings {
switch eq.encoding {
case EncodingIdentity:
if eq.quality == 0 {
identityRejected = true
} else {
identityExplicitlyAccepted = true
}
case EncodingWildcard:
if eq.quality == 0 {
wildcardRejected = true
}
}
}
if wildcardRejected && !identityExplicitlyAccepted {
identityRejected = true
}
return encodings, identityRejected
}
// isIdentityAcceptable checks if identity encoding is acceptable based on the
// Accept-Encoding header. Per RFC 7231 section 5.3.4, identity is always
// implicitly acceptable unless explicitly rejected with q=0.
func isIdentityAcceptable(r *http.Request) bool {
header := r.Header.Get("Accept-Encoding")
_, identityRejected := parseAcceptEncodingHeader(header)
return !identityRejected
}
// parseAcceptEncoding parses the Accept-Encoding header and returns the best
// supported encoding based on quality values. Per RFC 7231 section 5.3.4,
// identity is always implicitly acceptable unless explicitly rejected with q=0.
// If no Accept-Encoding header is present, returns empty string (identity).
func parseAcceptEncoding(r *http.Request) (string, error) {
header := r.Header.Get("Accept-Encoding")
if header == "" {
return EncodingIdentity, nil
}
encodings, identityRejected := parseAcceptEncodingHeader(header)
// Sort by quality value (highest first)
sort.Slice(encodings, func(i, j int) bool {
return encodings[i].quality > encodings[j].quality
})
// Find the best supported encoding
for _, eq := range encodings {
// Skip encodings with q=0 (explicitly rejected)
if eq.quality == 0 {
continue
}
if eq.encoding == EncodingIdentity {
return EncodingIdentity, nil
}
// Wildcard means any encoding is acceptable - return a supported encoding if identity is rejected
if eq.encoding == EncodingWildcard {
if identityRejected && len(SupportedEncodings) > 0 {
return SupportedEncodings[0], nil
}
return EncodingIdentity, nil
}
if isSupportedEncoding(eq.encoding) {
return eq.encoding, nil
}
}
// Per RFC 7231, identity is implicitly acceptable unless rejected
if !identityRejected {
return EncodingIdentity, nil
}
// Identity rejected and no supported encodings found
return "", fmt.Errorf("no acceptable encoding found, supported: %v", SupportedEncodings)
}
// getDecompressedBody returns a reader that decompresses the request body based on
// Content-Encoding header. Returns the original body if no encoding is specified.
// Returns an error if an unsupported encoding is specified.
// The caller is responsible for closing both the returned ReadCloser and the
// original request body (r.Body) separately.
func getDecompressedBody(r *http.Request) (io.ReadCloser, error) {
encoding, err := parseContentEncoding(r)
if err != nil {
return nil, err
}
if encoding == EncodingIdentity {
return r.Body, nil
}
switch encoding {
case EncodingGzip:
gzReader, err := gzip.NewReader(r.Body)
if err != nil {
return nil, fmt.Errorf("failed to create gzip reader: %w", err)
}
return gzReader, nil
default:
// This shouldn't happen if isSupportedEncoding is correct
return nil, fmt.Errorf("encoding %s is supported but not implemented", encoding)
}
}
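
Taken together, these helpers reduce RFC 7231 negotiation to a single question: gzip, identity, or 406. Below is a minimal sketch of how representative Accept-Encoding headers resolve through parseAcceptEncoding, assuming it runs inside the same package; demoNegotiation is illustrative only.

package api

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// demoNegotiation shows how parseAcceptEncoding resolves a few
// representative Accept-Encoding headers (illustrative sketch only).
func demoNegotiation() {
	for _, h := range []string{
		"gzip",            // supported coding requested  -> "gzip"
		"br, deflate",     // only unsupported codings    -> "identity"
		"*, identity;q=0", // wildcard, identity rejected -> "gzip"
		"*;q=0",           // everything rejected         -> error (406 upstream)
	} {
		req := httptest.NewRequest(http.MethodGet, "/files", nil)
		req.Header.Set("Accept-Encoding", h)
		enc, err := parseAcceptEncoding(req)
		fmt.Printf("%-18s -> encoding=%q err=%v\n", h, enc, err)
	}
}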

View File

@@ -1,496 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package api
import (
"bytes"
"compress/gzip"
"io"
"net/http"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestIsSupportedEncoding(t *testing.T) {
t.Parallel()
t.Run("gzip is supported", func(t *testing.T) {
t.Parallel()
assert.True(t, isSupportedEncoding("gzip"))
})
t.Run("GZIP is supported (case-insensitive)", func(t *testing.T) {
t.Parallel()
assert.True(t, isSupportedEncoding("GZIP"))
})
t.Run("Gzip is supported (case-insensitive)", func(t *testing.T) {
t.Parallel()
assert.True(t, isSupportedEncoding("Gzip"))
})
t.Run("br is not supported", func(t *testing.T) {
t.Parallel()
assert.False(t, isSupportedEncoding("br"))
})
t.Run("deflate is not supported", func(t *testing.T) {
t.Parallel()
assert.False(t, isSupportedEncoding("deflate"))
})
}
func TestParseEncodingWithQuality(t *testing.T) {
t.Parallel()
t.Run("returns encoding with default quality 1.0", func(t *testing.T) {
t.Parallel()
eq := parseEncodingWithQuality("gzip")
assert.Equal(t, "gzip", eq.encoding)
assert.InDelta(t, 1.0, eq.quality, 0.001)
})
t.Run("parses quality value", func(t *testing.T) {
t.Parallel()
eq := parseEncodingWithQuality("gzip;q=0.5")
assert.Equal(t, "gzip", eq.encoding)
assert.InDelta(t, 0.5, eq.quality, 0.001)
})
t.Run("parses quality value with whitespace", func(t *testing.T) {
t.Parallel()
eq := parseEncodingWithQuality("gzip ; q=0.8")
assert.Equal(t, "gzip", eq.encoding)
assert.InDelta(t, 0.8, eq.quality, 0.001)
})
t.Run("handles q=0", func(t *testing.T) {
t.Parallel()
eq := parseEncodingWithQuality("gzip;q=0")
assert.Equal(t, "gzip", eq.encoding)
assert.InDelta(t, 0.0, eq.quality, 0.001)
})
t.Run("handles invalid quality value", func(t *testing.T) {
t.Parallel()
eq := parseEncodingWithQuality("gzip;q=invalid")
assert.Equal(t, "gzip", eq.encoding)
assert.InDelta(t, 1.0, eq.quality, 0.001) // defaults to 1.0 on parse error
})
t.Run("trims whitespace from encoding", func(t *testing.T) {
t.Parallel()
eq := parseEncodingWithQuality(" gzip ")
assert.Equal(t, "gzip", eq.encoding)
assert.InDelta(t, 1.0, eq.quality, 0.001)
})
t.Run("normalizes encoding to lowercase", func(t *testing.T) {
t.Parallel()
eq := parseEncodingWithQuality("GZIP")
assert.Equal(t, "gzip", eq.encoding)
})
t.Run("normalizes mixed case encoding", func(t *testing.T) {
t.Parallel()
eq := parseEncodingWithQuality("Gzip;q=0.5")
assert.Equal(t, "gzip", eq.encoding)
assert.InDelta(t, 0.5, eq.quality, 0.001)
})
}
func TestParseEncoding(t *testing.T) {
t.Parallel()
t.Run("returns encoding as-is", func(t *testing.T) {
t.Parallel()
assert.Equal(t, "gzip", parseEncoding("gzip"))
})
t.Run("trims whitespace", func(t *testing.T) {
t.Parallel()
assert.Equal(t, "gzip", parseEncoding(" gzip "))
})
t.Run("strips quality value", func(t *testing.T) {
t.Parallel()
assert.Equal(t, "gzip", parseEncoding("gzip;q=1.0"))
})
t.Run("strips quality value with whitespace", func(t *testing.T) {
t.Parallel()
assert.Equal(t, "gzip", parseEncoding("gzip ; q=0.5"))
})
}
func TestParseContentEncoding(t *testing.T) {
t.Parallel()
t.Run("returns identity when no header", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", nil)
encoding, err := parseContentEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("returns gzip when Content-Encoding is gzip", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", nil)
req.Header.Set("Content-Encoding", "gzip")
encoding, err := parseContentEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
t.Run("returns gzip when Content-Encoding is GZIP (case-insensitive)", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", nil)
req.Header.Set("Content-Encoding", "GZIP")
encoding, err := parseContentEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
t.Run("returns gzip when Content-Encoding is Gzip (case-insensitive)", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", nil)
req.Header.Set("Content-Encoding", "Gzip")
encoding, err := parseContentEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
t.Run("returns identity for identity encoding", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", nil)
req.Header.Set("Content-Encoding", "identity")
encoding, err := parseContentEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("returns error for unsupported encoding", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", nil)
req.Header.Set("Content-Encoding", "br")
_, err := parseContentEncoding(req)
require.Error(t, err)
assert.Contains(t, err.Error(), "unsupported Content-Encoding")
assert.Contains(t, err.Error(), "supported: [gzip]")
})
t.Run("handles gzip with quality value", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", nil)
req.Header.Set("Content-Encoding", "gzip;q=1.0")
encoding, err := parseContentEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
}
func TestParseAcceptEncoding(t *testing.T) {
t.Parallel()
t.Run("returns identity when no header", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("returns gzip when Accept-Encoding is gzip", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "gzip")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
t.Run("returns gzip when Accept-Encoding is GZIP (case-insensitive)", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "GZIP")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
t.Run("returns gzip when gzip is among multiple encodings", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "deflate, gzip, br")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
t.Run("returns gzip with quality value", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "gzip;q=1.0")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
t.Run("returns identity for identity encoding", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "identity")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("returns identity for wildcard encoding", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "*")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("falls back to identity for unsupported encoding only", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "br")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("falls back to identity when only unsupported encodings", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "deflate, br")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("selects gzip when it has highest quality", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "br;q=0.5, gzip;q=1.0, deflate;q=0.8")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
t.Run("selects gzip even with lower quality when others unsupported", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "br;q=1.0, gzip;q=0.5")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
t.Run("returns identity when it has higher quality than gzip", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "gzip;q=0.5, identity;q=1.0")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("skips encoding with q=0", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "gzip;q=0, identity")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("falls back to identity when gzip rejected and no other supported", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "gzip;q=0, br")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("returns error when identity explicitly rejected and no supported encoding", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "br, identity;q=0")
_, err := parseAcceptEncoding(req)
require.Error(t, err)
assert.Contains(t, err.Error(), "no acceptable encoding found")
})
t.Run("returns gzip for wildcard when identity rejected", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "*, identity;q=0")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding) // wildcard with identity rejected returns supported encoding
})
t.Run("returns error when wildcard rejected and no explicit identity", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "*;q=0")
_, err := parseAcceptEncoding(req)
require.Error(t, err)
assert.Contains(t, err.Error(), "no acceptable encoding found")
})
t.Run("returns identity when wildcard rejected but identity explicitly accepted", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "*;q=0, identity")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("returns gzip when wildcard rejected but gzip explicitly accepted", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "*;q=0, gzip")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingGzip, encoding)
})
}
func TestGetDecompressedBody(t *testing.T) {
t.Parallel()
t.Run("returns original body when no Content-Encoding header", func(t *testing.T) {
t.Parallel()
content := []byte("test content")
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", bytes.NewReader(content))
body, err := getDecompressedBody(req)
require.NoError(t, err)
assert.Equal(t, req.Body, body, "should return original body")
data, err := io.ReadAll(body)
require.NoError(t, err)
assert.Equal(t, content, data)
})
t.Run("decompresses gzip body when Content-Encoding is gzip", func(t *testing.T) {
t.Parallel()
originalContent := []byte("test content to compress")
var compressed bytes.Buffer
gw := gzip.NewWriter(&compressed)
_, err := gw.Write(originalContent)
require.NoError(t, err)
err = gw.Close()
require.NoError(t, err)
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", bytes.NewReader(compressed.Bytes()))
req.Header.Set("Content-Encoding", "gzip")
body, err := getDecompressedBody(req)
require.NoError(t, err)
defer body.Close()
assert.NotEqual(t, req.Body, body, "should return a new gzip reader")
data, err := io.ReadAll(body)
require.NoError(t, err)
assert.Equal(t, originalContent, data)
})
t.Run("returns error for invalid gzip data", func(t *testing.T) {
t.Parallel()
invalidGzip := []byte("this is not gzip data")
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", bytes.NewReader(invalidGzip))
req.Header.Set("Content-Encoding", "gzip")
_, err := getDecompressedBody(req)
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to create gzip reader")
})
t.Run("returns original body for identity encoding", func(t *testing.T) {
t.Parallel()
content := []byte("test content")
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", bytes.NewReader(content))
req.Header.Set("Content-Encoding", "identity")
body, err := getDecompressedBody(req)
require.NoError(t, err)
assert.Equal(t, req.Body, body, "should return original body")
data, err := io.ReadAll(body)
require.NoError(t, err)
assert.Equal(t, content, data)
})
t.Run("returns error for unsupported encoding", func(t *testing.T) {
t.Parallel()
content := []byte("test content")
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", bytes.NewReader(content))
req.Header.Set("Content-Encoding", "br")
_, err := getDecompressedBody(req)
require.Error(t, err)
assert.Contains(t, err.Error(), "unsupported Content-Encoding")
})
t.Run("handles gzip with quality value", func(t *testing.T) {
t.Parallel()
originalContent := []byte("test content to compress")
var compressed bytes.Buffer
gw := gzip.NewWriter(&compressed)
_, err := gw.Write(originalContent)
require.NoError(t, err)
err = gw.Close()
require.NoError(t, err)
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", bytes.NewReader(compressed.Bytes()))
req.Header.Set("Content-Encoding", "gzip;q=1.0")
body, err := getDecompressedBody(req)
require.NoError(t, err)
defer body.Close()
data, err := io.ReadAll(body)
require.NoError(t, err)
assert.Equal(t, originalContent, data)
})
}

View File

@@ -1,31 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package api
import (
"encoding/json"
"net/http"
"git.omukk.dev/wrenn/sandbox/envd/internal/logs"
)
func (a *API) GetEnvs(w http.ResponseWriter, _ *http.Request) {
operationID := logs.AssignOperationID()
a.logger.Debug().Str(string(logs.OperationIDKey), operationID).Msg("Getting env vars")
envs := make(EnvVars)
a.defaults.EnvVars.Range(func(key, value string) bool {
envs[key] = value
return true
})
w.Header().Set("Cache-Control", "no-store")
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
if err := json.NewEncoder(w).Encode(envs); err != nil {
a.logger.Error().Err(err).Str(string(logs.OperationIDKey), operationID).Msg("Failed to encode env vars")
}
}

View File

@@ -1,23 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package api
import (
"encoding/json"
"errors"
"net/http"
)
func jsonError(w http.ResponseWriter, code int, err error) {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.Header().Set("X-Content-Type-Options", "nosniff")
w.WriteHeader(code)
encodeErr := json.NewEncoder(w).Encode(Error{
Code: code,
Message: err.Error(),
})
if encodeErr != nil {
http.Error(w, errors.Join(encodeErr, err).Error(), http.StatusInternalServerError)
}
}

View File

@@ -1,5 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package api
//go:generate go run github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen -config cfg.yaml ../../spec/envd.yaml

View File

@@ -1,296 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
// Modifications by M/S Omukk
package api
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"net/netip"
"os/exec"
"time"
"git.omukk.dev/wrenn/sandbox/envd/internal/host"
"git.omukk.dev/wrenn/sandbox/envd/internal/logs"
"git.omukk.dev/wrenn/sandbox/envd/internal/shared/keys"
"github.com/awnumar/memguard"
"github.com/rs/zerolog"
"github.com/txn2/txeh"
)
var (
ErrAccessTokenMismatch = errors.New("access token validation failed")
ErrAccessTokenResetNotAuthorized = errors.New("access token reset not authorized")
)
// validateInitAccessToken validates the access token for /init requests.
// Token is valid if it matches the existing token OR the MMDS hash.
// If neither exists, first-time setup is allowed.
func (a *API) validateInitAccessToken(ctx context.Context, requestToken *SecureToken) error {
requestTokenSet := requestToken.IsSet()
// Fast path: token matches existing
if a.accessToken.IsSet() && requestTokenSet && a.accessToken.EqualsSecure(requestToken) {
return nil
}
// Check MMDS only if token didn't match existing
matchesMMDS, mmdsExists := a.checkMMDSHash(ctx, requestToken)
switch {
case matchesMMDS:
return nil
case !a.accessToken.IsSet() && !mmdsExists:
return nil // first-time setup
case !requestTokenSet:
return ErrAccessTokenResetNotAuthorized
default:
return ErrAccessTokenMismatch
}
}
// checkMMDSHash checks if the request token matches the MMDS hash.
// Returns (matches, mmdsExists).
//
// The MMDS hash is set by the orchestrator during Resume:
// - hash(token): requires this specific token
// - hash(""): explicitly allows nil token (token reset authorized)
// - "": MMDS not properly configured, no authorization granted
func (a *API) checkMMDSHash(ctx context.Context, requestToken *SecureToken) (bool, bool) {
if a.isNotFC {
return false, false
}
mmdsHash, err := a.mmdsClient.GetAccessTokenHash(ctx)
if err != nil {
return false, false
}
if mmdsHash == "" {
return false, false
}
if !requestToken.IsSet() {
return mmdsHash == keys.HashAccessToken(""), true
}
tokenBytes, err := requestToken.Bytes()
if err != nil {
return false, true
}
defer memguard.WipeBytes(tokenBytes)
return keys.HashAccessTokenBytes(tokenBytes) == mmdsHash, true
}
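// Summary of the outcomes above, derived from validateInitAccessToken and
// checkMMDSHash (a readability aid; the table-driven tests further down
// exercise the same matrix):
//
//	existing | request   | MMDS hash           | result
//	---------+-----------+---------------------+----------------------------------
//	set      | same      | (any)               | ok (fast path)
//	set      | different | hash(request token) | ok (authorized rotation)
//	set      | different | none / other hash   | ErrAccessTokenMismatch
//	set      | nil       | hash("")            | ok (authorized reset)
//	set      | nil       | none / other hash   | ErrAccessTokenResetNotAuthorized
//	unset    | set       | none                | ok (first-time setup)
//	unset    | set       | other hash          | ErrAccessTokenMismatch
//	unset    | nil       | none                | ok (first-time setup)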
func (a *API) PostInit(w http.ResponseWriter, r *http.Request) {
defer r.Body.Close()
ctx := r.Context()
operationID := logs.AssignOperationID()
logger := a.logger.With().Str(string(logs.OperationIDKey), operationID).Logger()
if r.Body != nil {
// Read raw body so we can wipe it after parsing
body, err := io.ReadAll(r.Body)
// Ensure body is wiped after we're done
defer memguard.WipeBytes(body)
if err != nil {
logger.Error().Msgf("Failed to read request body: %v", err)
w.WriteHeader(http.StatusBadRequest)
return
}
var initRequest PostInitJSONBody
if len(body) > 0 {
err = json.Unmarshal(body, &initRequest)
if err != nil {
logger.Error().Msgf("Failed to decode request: %v", err)
w.WriteHeader(http.StatusBadRequest)
return
}
}
// Ensure request token is destroyed if not transferred via TakeFrom.
// This handles: validation failures, timestamp-based skips, and any early returns.
// Safe because Destroy() is nil-safe and TakeFrom clears the source.
defer initRequest.AccessToken.Destroy()
a.initLock.Lock()
defer a.initLock.Unlock()
// Update data only if the request is newer or if there's no timestamp at all
if initRequest.Timestamp == nil || a.lastSetTime.SetToGreater(initRequest.Timestamp.UnixNano()) {
err = a.SetData(ctx, logger, initRequest)
if err != nil {
switch {
case errors.Is(err, ErrAccessTokenMismatch), errors.Is(err, ErrAccessTokenResetNotAuthorized):
w.WriteHeader(http.StatusUnauthorized)
default:
logger.Error().Msgf("Failed to set data: %v", err)
w.WriteHeader(http.StatusBadRequest)
}
w.Write([]byte(err.Error()))
return
}
}
}
go func() { //nolint:contextcheck // TODO: fix this later
ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
host.PollForMMDSOpts(ctx, a.mmdsChan, a.defaults.EnvVars)
}()
// Start the port scanner and forwarder if they were stopped by a
// pre-snapshot prepare call. Start is a no-op if already running,
// so this is safe on first boot and only takes effect after restore.
if a.portSubsystem != nil {
a.portSubsystem.Start(a.rootCtx)
}
w.Header().Set("Cache-Control", "no-store")
w.Header().Set("Content-Type", "")
w.WriteHeader(http.StatusNoContent)
}
func (a *API) SetData(ctx context.Context, logger zerolog.Logger, data PostInitJSONBody) error {
// Validate access token before proceeding with any action
// The request must provide a token that is either:
// 1. Matches the existing access token (if set), OR
// 2. Matches the MMDS hash (for token change during resume)
if err := a.validateInitAccessToken(ctx, data.AccessToken); err != nil {
return err
}
if data.EnvVars != nil {
logger.Debug().Msg(fmt.Sprintf("Setting %d env vars", len(*data.EnvVars)))
for key, value := range *data.EnvVars {
logger.Debug().Msgf("Setting env var for %s", key)
a.defaults.EnvVars.Store(key, value)
}
}
if data.AccessToken.IsSet() {
logger.Debug().Msg("Setting access token")
a.accessToken.TakeFrom(data.AccessToken)
} else if a.accessToken.IsSet() {
logger.Debug().Msg("Clearing access token")
a.accessToken.Destroy()
}
if data.HyperloopIP != nil {
go a.SetupHyperloop(*data.HyperloopIP)
}
if data.DefaultUser != nil && *data.DefaultUser != "" {
logger.Debug().Msgf("Setting default user to: %s", *data.DefaultUser)
a.defaults.User = *data.DefaultUser
}
if data.DefaultWorkdir != nil && *data.DefaultWorkdir != "" {
logger.Debug().Msgf("Setting default workdir to: %s", *data.DefaultWorkdir)
a.defaults.Workdir = data.DefaultWorkdir
}
if data.VolumeMounts != nil {
for _, volume := range *data.VolumeMounts {
logger.Debug().Msgf("Mounting %s at %q", volume.NfsTarget, volume.Path)
go a.setupNfs(context.WithoutCancel(ctx), volume.NfsTarget, volume.Path)
}
}
return nil
}
func (a *API) setupNfs(ctx context.Context, nfsTarget, path string) {
commands := [][]string{
{"mkdir", "-p", path},
{"mount", "-v", "-t", "nfs", "-o", "mountproto=tcp,mountport=2049,proto=tcp,port=2049,nfsvers=3,noacl", nfsTarget, path},
}
for _, command := range commands {
data, err := exec.CommandContext(ctx, command[0], command[1:]...).CombinedOutput()
logger := a.getLogger(err)
logger.
Strs("command", command).
Str("output", string(data)).
Msg("Mount NFS")
if err != nil {
return
}
}
}
func (a *API) SetupHyperloop(address string) {
a.hyperloopLock.Lock()
defer a.hyperloopLock.Unlock()
if err := rewriteHostsFile(address, "/etc/hosts"); err != nil {
a.logger.Error().Err(err).Msg("failed to modify hosts file")
} else {
a.defaults.EnvVars.Store("WRENN_EVENTS_ADDRESS", fmt.Sprintf("http://%s", address))
}
}
const eventsHost = "events.wrenn.local"
func rewriteHostsFile(address, path string) error {
hosts, err := txeh.NewHosts(&txeh.HostsConfig{
ReadFilePath: path,
WriteFilePath: path,
})
if err != nil {
return fmt.Errorf("failed to create hosts: %w", err)
}
// Update /etc/hosts to point events.wrenn.local to the hyperloop IP
// This will remove any existing entries for events.wrenn.local first
ipFamily, err := getIPFamily(address)
if err != nil {
return fmt.Errorf("failed to get ip family: %w", err)
}
if ok, current, _ := hosts.HostAddressLookup(eventsHost, ipFamily); ok && current == address {
return nil // nothing to be done
}
hosts.AddHost(address, eventsHost)
return hosts.Save()
}
var (
ErrInvalidAddress = errors.New("invalid IP address")
ErrUnknownAddressFormat = errors.New("unknown IP address format")
)
func getIPFamily(address string) (txeh.IPFamily, error) {
addressIP, err := netip.ParseAddr(address)
if err != nil {
return txeh.IPFamilyV4, fmt.Errorf("failed to parse IP address: %w", err)
}
switch {
case addressIP.Is4():
return txeh.IPFamilyV4, nil
case addressIP.Is6():
return txeh.IPFamilyV6, nil
default:
return txeh.IPFamilyV4, fmt.Errorf("%w: %s", ErrUnknownAddressFormat, address)
}
}

View File

@@ -1,524 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
// Modifications by M/S Omukk
package api
import (
"context"
"os"
"path/filepath"
"strings"
"testing"
"github.com/rs/zerolog"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"git.omukk.dev/wrenn/sandbox/envd/internal/execcontext"
"git.omukk.dev/wrenn/sandbox/envd/internal/shared/keys"
utilsShared "git.omukk.dev/wrenn/sandbox/envd/internal/shared/utils"
"git.omukk.dev/wrenn/sandbox/envd/internal/utils"
)
func TestSimpleCases(t *testing.T) {
t.Parallel()
testCases := map[string]func(string) string{
"both newlines": func(s string) string { return s },
"no newline prefix": func(s string) string { return strings.TrimPrefix(s, "\n") },
"no newline suffix": func(s string) string { return strings.TrimSuffix(s, "\n") },
"no newline prefix or suffix": strings.TrimSpace,
}
for name, preprocessor := range testCases {
t.Run(name, func(t *testing.T) {
t.Parallel()
tempDir := t.TempDir()
value := `
# comment
127.0.0.1 one.host
127.0.0.2 two.host
`
value = preprocessor(value)
inputPath := filepath.Join(tempDir, "hosts")
err := os.WriteFile(inputPath, []byte(value), 0o644)
require.NoError(t, err)
err = rewriteHostsFile("127.0.0.3", inputPath)
require.NoError(t, err)
data, err := os.ReadFile(inputPath)
require.NoError(t, err)
assert.Equal(t, `# comment
127.0.0.1 one.host
127.0.0.2 two.host
127.0.0.3 events.wrenn.local`, strings.TrimSpace(string(data)))
})
}
}
func secureTokenPtr(s string) *SecureToken {
token := &SecureToken{}
_ = token.Set([]byte(s))
return token
}
type mockMMDSClient struct {
hash string
err error
}
func (m *mockMMDSClient) GetAccessTokenHash(_ context.Context) (string, error) {
return m.hash, m.err
}
func newTestAPI(accessToken *SecureToken, mmdsClient MMDSClient) *API {
logger := zerolog.Nop()
defaults := &execcontext.Defaults{
EnvVars: utils.NewMap[string, string](),
}
api := New(&logger, defaults, nil, false, context.Background(), nil, "test")
if accessToken != nil {
api.accessToken.TakeFrom(accessToken)
}
api.mmdsClient = mmdsClient
return api
}
func TestValidateInitAccessToken(t *testing.T) {
t.Parallel()
ctx := t.Context()
tests := []struct {
name string
accessToken *SecureToken
requestToken *SecureToken
mmdsHash string
mmdsErr error
wantErr error
}{
{
name: "fast path: token matches existing",
accessToken: secureTokenPtr("secret-token"),
requestToken: secureTokenPtr("secret-token"),
mmdsHash: "",
mmdsErr: nil,
wantErr: nil,
},
{
name: "MMDS match: token hash matches MMDS hash",
accessToken: secureTokenPtr("old-token"),
requestToken: secureTokenPtr("new-token"),
mmdsHash: keys.HashAccessToken("new-token"),
mmdsErr: nil,
wantErr: nil,
},
{
name: "first-time setup: no existing token, MMDS error",
accessToken: nil,
requestToken: secureTokenPtr("new-token"),
mmdsHash: "",
mmdsErr: assert.AnError,
wantErr: nil,
},
{
name: "first-time setup: no existing token, empty MMDS hash",
accessToken: nil,
requestToken: secureTokenPtr("new-token"),
mmdsHash: "",
mmdsErr: nil,
wantErr: nil,
},
{
name: "first-time setup: both tokens nil, no MMDS",
accessToken: nil,
requestToken: nil,
mmdsHash: "",
mmdsErr: assert.AnError,
wantErr: nil,
},
{
name: "mismatch: existing token differs from request, no MMDS",
accessToken: secureTokenPtr("existing-token"),
requestToken: secureTokenPtr("wrong-token"),
mmdsHash: "",
mmdsErr: assert.AnError,
wantErr: ErrAccessTokenMismatch,
},
{
name: "mismatch: existing token differs from request, MMDS hash mismatch",
accessToken: secureTokenPtr("existing-token"),
requestToken: secureTokenPtr("wrong-token"),
mmdsHash: keys.HashAccessToken("different-token"),
mmdsErr: nil,
wantErr: ErrAccessTokenMismatch,
},
{
name: "conflict: existing token, nil request, MMDS exists",
accessToken: secureTokenPtr("existing-token"),
requestToken: nil,
mmdsHash: keys.HashAccessToken("some-token"),
mmdsErr: nil,
wantErr: ErrAccessTokenResetNotAuthorized,
},
{
name: "conflict: existing token, nil request, no MMDS",
accessToken: secureTokenPtr("existing-token"),
requestToken: nil,
mmdsHash: "",
mmdsErr: assert.AnError,
wantErr: ErrAccessTokenResetNotAuthorized,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: tt.mmdsHash, err: tt.mmdsErr}
api := newTestAPI(tt.accessToken, mmdsClient)
err := api.validateInitAccessToken(ctx, tt.requestToken)
if tt.wantErr != nil {
require.Error(t, err)
assert.ErrorIs(t, err, tt.wantErr)
} else {
require.NoError(t, err)
}
})
}
}
func TestCheckMMDSHash(t *testing.T) {
t.Parallel()
ctx := t.Context()
t.Run("returns match when token hash equals MMDS hash", func(t *testing.T) {
t.Parallel()
token := "my-secret-token"
mmdsClient := &mockMMDSClient{hash: keys.HashAccessToken(token), err: nil}
api := newTestAPI(nil, mmdsClient)
matches, exists := api.checkMMDSHash(ctx, secureTokenPtr(token))
assert.True(t, matches)
assert.True(t, exists)
})
t.Run("returns no match when token hash differs from MMDS hash", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: keys.HashAccessToken("different-token"), err: nil}
api := newTestAPI(nil, mmdsClient)
matches, exists := api.checkMMDSHash(ctx, secureTokenPtr("my-token"))
assert.False(t, matches)
assert.True(t, exists)
})
t.Run("returns exists but no match when request token is nil", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: keys.HashAccessToken("some-token"), err: nil}
api := newTestAPI(nil, mmdsClient)
matches, exists := api.checkMMDSHash(ctx, nil)
assert.False(t, matches)
assert.True(t, exists)
})
t.Run("returns false, false when MMDS returns error", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: "", err: assert.AnError}
api := newTestAPI(nil, mmdsClient)
matches, exists := api.checkMMDSHash(ctx, secureTokenPtr("any-token"))
assert.False(t, matches)
assert.False(t, exists)
})
t.Run("returns false, false when MMDS returns empty hash with non-nil request", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: "", err: nil}
api := newTestAPI(nil, mmdsClient)
matches, exists := api.checkMMDSHash(ctx, secureTokenPtr("any-token"))
assert.False(t, matches)
assert.False(t, exists)
})
t.Run("returns false, false when MMDS returns empty hash with nil request", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: "", err: nil}
api := newTestAPI(nil, mmdsClient)
matches, exists := api.checkMMDSHash(ctx, nil)
assert.False(t, matches)
assert.False(t, exists)
})
t.Run("returns true, true when MMDS returns hash of empty string with nil request (explicit reset)", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: keys.HashAccessToken(""), err: nil}
api := newTestAPI(nil, mmdsClient)
matches, exists := api.checkMMDSHash(ctx, nil)
assert.True(t, matches)
assert.True(t, exists)
})
}
func TestSetData(t *testing.T) {
t.Parallel()
ctx := context.Background()
logger := zerolog.Nop()
t.Run("access token updates", func(t *testing.T) {
t.Parallel()
tests := []struct {
name string
existingToken *SecureToken
requestToken *SecureToken
mmdsHash string
mmdsErr error
wantErr error
wantFinalToken *SecureToken
}{
{
name: "first-time setup: sets initial token",
existingToken: nil,
requestToken: secureTokenPtr("initial-token"),
mmdsHash: "",
mmdsErr: assert.AnError,
wantErr: nil,
wantFinalToken: secureTokenPtr("initial-token"),
},
{
name: "first-time setup: nil request token leaves token unset",
existingToken: nil,
requestToken: nil,
mmdsHash: "",
mmdsErr: assert.AnError,
wantErr: nil,
wantFinalToken: nil,
},
{
name: "re-init with same token: token unchanged",
existingToken: secureTokenPtr("same-token"),
requestToken: secureTokenPtr("same-token"),
mmdsHash: "",
mmdsErr: assert.AnError,
wantErr: nil,
wantFinalToken: secureTokenPtr("same-token"),
},
{
name: "resume with MMDS: updates token when hash matches",
existingToken: secureTokenPtr("old-token"),
requestToken: secureTokenPtr("new-token"),
mmdsHash: keys.HashAccessToken("new-token"),
mmdsErr: nil,
wantErr: nil,
wantFinalToken: secureTokenPtr("new-token"),
},
{
name: "resume with MMDS: fails when hash doesn't match",
existingToken: secureTokenPtr("old-token"),
requestToken: secureTokenPtr("new-token"),
mmdsHash: keys.HashAccessToken("different-token"),
mmdsErr: nil,
wantErr: ErrAccessTokenMismatch,
wantFinalToken: secureTokenPtr("old-token"),
},
{
name: "fails when existing token and request token mismatch without MMDS",
existingToken: secureTokenPtr("existing-token"),
requestToken: secureTokenPtr("wrong-token"),
mmdsHash: "",
mmdsErr: assert.AnError,
wantErr: ErrAccessTokenMismatch,
wantFinalToken: secureTokenPtr("existing-token"),
},
{
name: "conflict when existing token but nil request token",
existingToken: secureTokenPtr("existing-token"),
requestToken: nil,
mmdsHash: "",
mmdsErr: assert.AnError,
wantErr: ErrAccessTokenResetNotAuthorized,
wantFinalToken: secureTokenPtr("existing-token"),
},
{
name: "conflict when existing token but nil request with MMDS present",
existingToken: secureTokenPtr("existing-token"),
requestToken: nil,
mmdsHash: keys.HashAccessToken("some-token"),
mmdsErr: nil,
wantErr: ErrAccessTokenResetNotAuthorized,
wantFinalToken: secureTokenPtr("existing-token"),
},
{
name: "conflict when MMDS returns empty hash and request is nil (prevents unauthorized reset)",
existingToken: secureTokenPtr("existing-token"),
requestToken: nil,
mmdsHash: "",
mmdsErr: nil,
wantErr: ErrAccessTokenResetNotAuthorized,
wantFinalToken: secureTokenPtr("existing-token"),
},
{
name: "resets token when MMDS returns hash of empty string and request is nil (explicit reset)",
existingToken: secureTokenPtr("existing-token"),
requestToken: nil,
mmdsHash: keys.HashAccessToken(""),
mmdsErr: nil,
wantErr: nil,
wantFinalToken: nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: tt.mmdsHash, err: tt.mmdsErr}
api := newTestAPI(tt.existingToken, mmdsClient)
data := PostInitJSONBody{
AccessToken: tt.requestToken,
}
err := api.SetData(ctx, logger, data)
if tt.wantErr != nil {
require.ErrorIs(t, err, tt.wantErr)
} else {
require.NoError(t, err)
}
if tt.wantFinalToken == nil {
assert.False(t, api.accessToken.IsSet(), "expected token to not be set")
} else {
require.True(t, api.accessToken.IsSet(), "expected token to be set")
assert.True(t, api.accessToken.EqualsSecure(tt.wantFinalToken), "expected token to match")
}
})
}
})
t.Run("sets environment variables", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: "", err: assert.AnError}
api := newTestAPI(nil, mmdsClient)
envVars := EnvVars{"FOO": "bar", "BAZ": "qux"}
data := PostInitJSONBody{
EnvVars: &envVars,
}
err := api.SetData(ctx, logger, data)
require.NoError(t, err)
val, ok := api.defaults.EnvVars.Load("FOO")
assert.True(t, ok)
assert.Equal(t, "bar", val)
val, ok = api.defaults.EnvVars.Load("BAZ")
assert.True(t, ok)
assert.Equal(t, "qux", val)
})
t.Run("sets default user", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: "", err: assert.AnError}
api := newTestAPI(nil, mmdsClient)
data := PostInitJSONBody{
DefaultUser: utilsShared.ToPtr("testuser"),
}
err := api.SetData(ctx, logger, data)
require.NoError(t, err)
assert.Equal(t, "testuser", api.defaults.User)
})
t.Run("does not set default user when empty", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: "", err: assert.AnError}
api := newTestAPI(nil, mmdsClient)
api.defaults.User = "original"
data := PostInitJSONBody{
DefaultUser: utilsShared.ToPtr(""),
}
err := api.SetData(ctx, logger, data)
require.NoError(t, err)
assert.Equal(t, "original", api.defaults.User)
})
t.Run("sets default workdir", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: "", err: assert.AnError}
api := newTestAPI(nil, mmdsClient)
data := PostInitJSONBody{
DefaultWorkdir: utilsShared.ToPtr("/home/user"),
}
err := api.SetData(ctx, logger, data)
require.NoError(t, err)
require.NotNil(t, api.defaults.Workdir)
assert.Equal(t, "/home/user", *api.defaults.Workdir)
})
t.Run("does not set default workdir when empty", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: "", err: assert.AnError}
api := newTestAPI(nil, mmdsClient)
originalWorkdir := "/original"
api.defaults.Workdir = &originalWorkdir
data := PostInitJSONBody{
DefaultWorkdir: utilsShared.ToPtr(""),
}
err := api.SetData(ctx, logger, data)
require.NoError(t, err)
require.NotNil(t, api.defaults.Workdir)
assert.Equal(t, "/original", *api.defaults.Workdir)
})
t.Run("sets multiple fields at once", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: "", err: assert.AnError}
api := newTestAPI(nil, mmdsClient)
envVars := EnvVars{"KEY": "value"}
data := PostInitJSONBody{
AccessToken: secureTokenPtr("token"),
DefaultUser: utilsShared.ToPtr("user"),
DefaultWorkdir: utilsShared.ToPtr("/workdir"),
EnvVars: &envVars,
}
err := api.SetData(ctx, logger, data)
require.NoError(t, err)
assert.True(t, api.accessToken.Equals("token"), "expected token to match")
assert.Equal(t, "user", api.defaults.User)
assert.Equal(t, "/workdir", *api.defaults.Workdir)
val, ok := api.defaults.EnvVars.Load("KEY")
assert.True(t, ok)
assert.Equal(t, "value", val)
})
}

View File

@@ -1,214 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package api
import (
"bytes"
"errors"
"sync"
"github.com/awnumar/memguard"
)
var (
ErrTokenNotSet = errors.New("access token not set")
ErrTokenEmpty = errors.New("empty token not allowed")
)
// SecureToken wraps memguard for secure token storage.
// It uses LockedBuffer which provides memory locking, guard pages,
// and secure zeroing on destroy.
type SecureToken struct {
mu sync.RWMutex
buffer *memguard.LockedBuffer
}
// Set securely replaces the token, destroying the old one first.
// The old token memory is zeroed before the new token is stored.
// The input byte slice is wiped after copying to secure memory.
// Returns ErrTokenEmpty if token is empty - use Destroy() to clear the token instead.
func (s *SecureToken) Set(token []byte) error {
if len(token) == 0 {
return ErrTokenEmpty
}
s.mu.Lock()
defer s.mu.Unlock()
// Destroy old token first (zeros memory)
if s.buffer != nil {
s.buffer.Destroy()
s.buffer = nil
}
// Create new LockedBuffer from bytes (source slice is wiped by memguard)
s.buffer = memguard.NewBufferFromBytes(token)
return nil
}
// UnmarshalJSON implements json.Unmarshaler to securely parse a JSON string
// directly into memguard, wiping the input bytes after copying.
//
// Access tokens are hex-encoded HMAC-SHA256 hashes (64 chars of [0-9a-f]),
// so they never contain JSON escape sequences.
func (s *SecureToken) UnmarshalJSON(data []byte) error {
// JSON strings are quoted, so minimum valid is `""` (2 bytes).
if len(data) < 2 || data[0] != '"' || data[len(data)-1] != '"' {
memguard.WipeBytes(data)
return errors.New("invalid secure token JSON string")
}
content := data[1 : len(data)-1]
// Access tokens are hex strings - reject if contains backslash
if bytes.ContainsRune(content, '\\') {
memguard.WipeBytes(data)
return errors.New("invalid secure token: unexpected escape sequence")
}
if len(content) == 0 {
memguard.WipeBytes(data)
return ErrTokenEmpty
}
s.mu.Lock()
defer s.mu.Unlock()
if s.buffer != nil {
s.buffer.Destroy()
s.buffer = nil
}
// Allocate secure buffer and copy directly into it
s.buffer = memguard.NewBuffer(len(content))
copy(s.buffer.Bytes(), content)
// Wipe the input data
memguard.WipeBytes(data)
return nil
}
// TakeFrom transfers the token from src to this SecureToken, destroying any
// existing token. The source token is cleared after transfer.
// This avoids copying the underlying bytes.
func (s *SecureToken) TakeFrom(src *SecureToken) {
if src == nil || s == src {
return
}
// Extract buffer from source
src.mu.Lock()
buffer := src.buffer
src.buffer = nil
src.mu.Unlock()
// Install buffer in destination
s.mu.Lock()
if s.buffer != nil {
s.buffer.Destroy()
}
s.buffer = buffer
s.mu.Unlock()
}
// Equals checks if token matches using constant-time comparison.
// Returns false if the receiver is nil.
func (s *SecureToken) Equals(token string) bool {
if s == nil {
return false
}
s.mu.RLock()
defer s.mu.RUnlock()
if s.buffer == nil || !s.buffer.IsAlive() {
return false
}
return s.buffer.EqualTo([]byte(token))
}
// EqualsSecure compares this token with another SecureToken using constant-time comparison.
// Returns false if either receiver or other is nil.
func (s *SecureToken) EqualsSecure(other *SecureToken) bool {
if s == nil || other == nil {
return false
}
if s == other {
return s.IsSet()
}
// Get a copy of other's bytes (avoids holding two locks simultaneously)
otherBytes, err := other.Bytes()
if err != nil {
return false
}
defer memguard.WipeBytes(otherBytes)
s.mu.RLock()
defer s.mu.RUnlock()
if s.buffer == nil || !s.buffer.IsAlive() {
return false
}
return s.buffer.EqualTo(otherBytes)
}
// IsSet returns true if a token is stored.
// Returns false if the receiver is nil.
func (s *SecureToken) IsSet() bool {
if s == nil {
return false
}
s.mu.RLock()
defer s.mu.RUnlock()
return s.buffer != nil && s.buffer.IsAlive()
}
// Bytes returns a copy of the token bytes (for signature generation).
// The caller should zero the returned slice after use.
// Returns ErrTokenNotSet if the receiver is nil.
func (s *SecureToken) Bytes() ([]byte, error) {
if s == nil {
return nil, ErrTokenNotSet
}
s.mu.RLock()
defer s.mu.RUnlock()
if s.buffer == nil || !s.buffer.IsAlive() {
return nil, ErrTokenNotSet
}
// Return a copy (unavoidable for signature generation)
src := s.buffer.Bytes()
result := make([]byte, len(src))
copy(result, src)
return result, nil
}
// Destroy securely wipes the token from memory.
// No-op if the receiver is nil.
func (s *SecureToken) Destroy() {
if s == nil {
return
}
s.mu.Lock()
defer s.mu.Unlock()
if s.buffer != nil {
s.buffer.Destroy()
s.buffer = nil
}
}
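
The intended lifecycle is easiest to see end to end: parse from JSON straight into guarded memory, compare in constant time, copy out raw bytes only when unavoidable, and wipe everything afterwards. A minimal sketch assuming it runs inside the same package; the request shape and demoSecureToken are illustrative only.

package api

import (
	"encoding/json"
	"fmt"

	"github.com/awnumar/memguard"
)

// demoSecureToken walks the SecureToken lifecycle (illustrative sketch only).
func demoSecureToken() error {
	var req struct {
		AccessToken *SecureToken `json:"accessToken"`
	}
	// UnmarshalJSON copies the value into a LockedBuffer and wipes the
	// bytes it was handed.
	if err := json.Unmarshal([]byte(`{"accessToken":"deadbeef"}`), &req); err != nil {
		return err
	}
	defer req.AccessToken.Destroy() // zero the guarded memory when done

	fmt.Println(req.AccessToken.Equals("deadbeef")) // constant-time compare: true

	// Copy out only when an API needs raw bytes, and wipe the copy afterwards.
	raw, err := req.AccessToken.Bytes()
	if err != nil {
		return err
	}
	defer memguard.WipeBytes(raw)
	// ... use raw, e.g. as an HMAC key ...
	return nil
}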

View File

@@ -1,463 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package api
import (
"sync"
"testing"
"github.com/awnumar/memguard"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestSecureTokenSetAndEquals(t *testing.T) {
t.Parallel()
st := &SecureToken{}
// Initially not set
assert.False(t, st.IsSet(), "token should not be set initially")
assert.False(t, st.Equals("any-token"), "equals should return false when not set")
// Set token
err := st.Set([]byte("test-token"))
require.NoError(t, err)
assert.True(t, st.IsSet(), "token should be set after Set()")
assert.True(t, st.Equals("test-token"), "equals should return true for correct token")
assert.False(t, st.Equals("wrong-token"), "equals should return false for wrong token")
assert.False(t, st.Equals(""), "equals should return false for empty token")
}
func TestSecureTokenReplace(t *testing.T) {
t.Parallel()
st := &SecureToken{}
// Set initial token
err := st.Set([]byte("first-token"))
require.NoError(t, err)
assert.True(t, st.Equals("first-token"))
// Replace with new token (old one should be destroyed)
err = st.Set([]byte("second-token"))
require.NoError(t, err)
assert.True(t, st.Equals("second-token"), "should match new token")
assert.False(t, st.Equals("first-token"), "should not match old token")
}
func TestSecureTokenDestroy(t *testing.T) {
t.Parallel()
st := &SecureToken{}
// Set and then destroy
err := st.Set([]byte("test-token"))
require.NoError(t, err)
assert.True(t, st.IsSet())
st.Destroy()
assert.False(t, st.IsSet(), "token should not be set after Destroy()")
assert.False(t, st.Equals("test-token"), "equals should return false after Destroy()")
// Destroy on already destroyed should be safe
st.Destroy()
assert.False(t, st.IsSet())
// Nil receiver should be safe
var nilToken *SecureToken
assert.False(t, nilToken.IsSet(), "nil receiver should return false for IsSet()")
assert.False(t, nilToken.Equals("anything"), "nil receiver should return false for Equals()")
assert.False(t, nilToken.EqualsSecure(st), "nil receiver should return false for EqualsSecure()")
nilToken.Destroy() // should not panic
_, err = nilToken.Bytes()
require.ErrorIs(t, err, ErrTokenNotSet, "nil receiver should return ErrTokenNotSet for Bytes()")
}
func TestSecureTokenBytes(t *testing.T) {
t.Parallel()
st := &SecureToken{}
// Bytes should return error when not set
_, err := st.Bytes()
require.ErrorIs(t, err, ErrTokenNotSet)
// Set token and get bytes
err = st.Set([]byte("test-token"))
require.NoError(t, err)
bytes, err := st.Bytes()
require.NoError(t, err)
assert.Equal(t, []byte("test-token"), bytes)
// Zero out the bytes (as caller should do)
memguard.WipeBytes(bytes)
// Original should still be intact
assert.True(t, st.Equals("test-token"), "original token should still work after zeroing copy")
// After destroy, bytes should fail
st.Destroy()
_, err = st.Bytes()
assert.ErrorIs(t, err, ErrTokenNotSet)
}
func TestSecureTokenConcurrentAccess(t *testing.T) {
t.Parallel()
st := &SecureToken{}
err := st.Set([]byte("initial-token"))
require.NoError(t, err)
var wg sync.WaitGroup
const numGoroutines = 100
// Concurrent reads
for range numGoroutines {
wg.Go(func() {
st.IsSet()
st.Equals("initial-token")
})
}
// Concurrent writes
for i := range 10 {
wg.Go(func() {
_ = st.Set([]byte("token-" + string(rune('a'+i))))
})
}
wg.Wait()
// Should still be in a valid state
assert.True(t, st.IsSet())
}
func TestSecureTokenEmptyToken(t *testing.T) {
t.Parallel()
st := &SecureToken{}
// Setting empty token should return an error
err := st.Set([]byte{})
require.ErrorIs(t, err, ErrTokenEmpty)
assert.False(t, st.IsSet(), "token should not be set after empty token error")
// Setting nil should also return an error
err = st.Set(nil)
require.ErrorIs(t, err, ErrTokenEmpty)
assert.False(t, st.IsSet(), "token should not be set after nil token error")
}
func TestSecureTokenEmptyTokenDoesNotClearExisting(t *testing.T) {
t.Parallel()
st := &SecureToken{}
// Set a valid token first
err := st.Set([]byte("valid-token"))
require.NoError(t, err)
assert.True(t, st.IsSet())
// Attempting to set empty token should fail and preserve existing token
err = st.Set([]byte{})
require.ErrorIs(t, err, ErrTokenEmpty)
assert.True(t, st.IsSet(), "existing token should be preserved after empty token error")
assert.True(t, st.Equals("valid-token"), "existing token value should be unchanged")
}
func TestSecureTokenUnmarshalJSON(t *testing.T) {
t.Parallel()
t.Run("unmarshals valid JSON string", func(t *testing.T) {
t.Parallel()
st := &SecureToken{}
err := st.UnmarshalJSON([]byte(`"my-secret-token"`))
require.NoError(t, err)
assert.True(t, st.IsSet())
assert.True(t, st.Equals("my-secret-token"))
})
t.Run("returns error for empty string", func(t *testing.T) {
t.Parallel()
st := &SecureToken{}
err := st.UnmarshalJSON([]byte(`""`))
require.ErrorIs(t, err, ErrTokenEmpty)
assert.False(t, st.IsSet())
})
t.Run("returns error for invalid JSON", func(t *testing.T) {
t.Parallel()
st := &SecureToken{}
err := st.UnmarshalJSON([]byte(`not-valid-json`))
require.Error(t, err)
assert.False(t, st.IsSet())
})
t.Run("replaces existing token", func(t *testing.T) {
t.Parallel()
st := &SecureToken{}
err := st.Set([]byte("old-token"))
require.NoError(t, err)
err = st.UnmarshalJSON([]byte(`"new-token"`))
require.NoError(t, err)
assert.True(t, st.Equals("new-token"))
assert.False(t, st.Equals("old-token"))
})
t.Run("wipes input buffer after parsing", func(t *testing.T) {
t.Parallel()
// Create a buffer with a known token
input := []byte(`"secret-token-12345"`)
original := make([]byte, len(input))
copy(original, input)
st := &SecureToken{}
err := st.UnmarshalJSON(input)
require.NoError(t, err)
// Verify the token was stored correctly
assert.True(t, st.Equals("secret-token-12345"))
// Verify the input buffer was wiped (all zeros)
for i, b := range input {
assert.Equal(t, byte(0), b, "byte at position %d should be zero, got %d", i, b)
}
})
t.Run("wipes input buffer on error", func(t *testing.T) {
t.Parallel()
// Create a buffer with an empty token (will error)
input := []byte(`""`)
st := &SecureToken{}
err := st.UnmarshalJSON(input)
require.Error(t, err)
// Verify the input buffer was still wiped
for i, b := range input {
assert.Equal(t, byte(0), b, "byte at position %d should be zero, got %d", i, b)
}
})
t.Run("rejects escape sequences", func(t *testing.T) {
t.Parallel()
st := &SecureToken{}
err := st.UnmarshalJSON([]byte(`"token\nwith\nnewlines"`))
require.Error(t, err)
assert.Contains(t, err.Error(), "escape sequence")
assert.False(t, st.IsSet())
})
}
func TestSecureTokenSetWipesInput(t *testing.T) {
t.Parallel()
t.Run("wipes input buffer after storing", func(t *testing.T) {
t.Parallel()
// Create a buffer with a known token
input := []byte("my-secret-token")
original := make([]byte, len(input))
copy(original, input)
st := &SecureToken{}
err := st.Set(input)
require.NoError(t, err)
// Verify the token was stored correctly
assert.True(t, st.Equals("my-secret-token"))
// Verify the input buffer was wiped (all zeros)
for i, b := range input {
assert.Equal(t, byte(0), b, "byte at position %d should be zero, got %d", i, b)
}
})
}
func TestSecureTokenTakeFrom(t *testing.T) {
t.Parallel()
t.Run("transfers token from source to destination", func(t *testing.T) {
t.Parallel()
src := &SecureToken{}
err := src.Set([]byte("source-token"))
require.NoError(t, err)
dst := &SecureToken{}
dst.TakeFrom(src)
assert.True(t, dst.IsSet())
assert.True(t, dst.Equals("source-token"))
assert.False(t, src.IsSet(), "source should be empty after transfer")
})
t.Run("replaces existing destination token", func(t *testing.T) {
t.Parallel()
src := &SecureToken{}
err := src.Set([]byte("new-token"))
require.NoError(t, err)
dst := &SecureToken{}
err = dst.Set([]byte("old-token"))
require.NoError(t, err)
dst.TakeFrom(src)
assert.True(t, dst.Equals("new-token"))
assert.False(t, dst.Equals("old-token"))
assert.False(t, src.IsSet())
})
t.Run("handles nil source", func(t *testing.T) {
t.Parallel()
dst := &SecureToken{}
err := dst.Set([]byte("existing-token"))
require.NoError(t, err)
dst.TakeFrom(nil)
assert.True(t, dst.IsSet(), "destination should be unchanged with nil source")
assert.True(t, dst.Equals("existing-token"))
})
t.Run("handles empty source", func(t *testing.T) {
t.Parallel()
src := &SecureToken{}
dst := &SecureToken{}
err := dst.Set([]byte("existing-token"))
require.NoError(t, err)
dst.TakeFrom(src)
assert.False(t, dst.IsSet(), "destination should be cleared when source is empty")
})
t.Run("self-transfer is no-op and does not deadlock", func(t *testing.T) {
t.Parallel()
st := &SecureToken{}
err := st.Set([]byte("token"))
require.NoError(t, err)
st.TakeFrom(st)
assert.True(t, st.IsSet(), "token should remain set after self-transfer")
assert.True(t, st.Equals("token"), "token value should be unchanged")
})
}
func TestSecureTokenEqualsSecure(t *testing.T) {
t.Parallel()
t.Run("returns true for matching tokens", func(t *testing.T) {
t.Parallel()
st1 := &SecureToken{}
err := st1.Set([]byte("same-token"))
require.NoError(t, err)
st2 := &SecureToken{}
err = st2.Set([]byte("same-token"))
require.NoError(t, err)
assert.True(t, st1.EqualsSecure(st2))
assert.True(t, st2.EqualsSecure(st1))
})
t.Run("concurrent TakeFrom and EqualsSecure do not deadlock", func(t *testing.T) {
t.Parallel()
// This test verifies the fix for the lock ordering deadlock bug.
const iterations = 100
for range iterations {
a := &SecureToken{}
err := a.Set([]byte("token-a"))
require.NoError(t, err)
b := &SecureToken{}
err = b.Set([]byte("token-b"))
require.NoError(t, err)
var wg sync.WaitGroup
wg.Add(2)
// Goroutine 1: a.TakeFrom(b)
go func() {
defer wg.Done()
a.TakeFrom(b)
}()
// Goroutine 2: b.EqualsSecure(a)
go func() {
defer wg.Done()
b.EqualsSecure(a)
}()
wg.Wait()
}
})
t.Run("returns false for different tokens", func(t *testing.T) {
t.Parallel()
st1 := &SecureToken{}
err := st1.Set([]byte("token-a"))
require.NoError(t, err)
st2 := &SecureToken{}
err = st2.Set([]byte("token-b"))
require.NoError(t, err)
assert.False(t, st1.EqualsSecure(st2))
})
t.Run("returns false when comparing with nil", func(t *testing.T) {
t.Parallel()
st := &SecureToken{}
err := st.Set([]byte("token"))
require.NoError(t, err)
assert.False(t, st.EqualsSecure(nil))
})
t.Run("returns false when other is not set", func(t *testing.T) {
t.Parallel()
st1 := &SecureToken{}
err := st1.Set([]byte("token"))
require.NoError(t, err)
st2 := &SecureToken{}
assert.False(t, st1.EqualsSecure(st2))
})
t.Run("returns false when self is not set", func(t *testing.T) {
t.Parallel()
st1 := &SecureToken{}
st2 := &SecureToken{}
err := st2.Set([]byte("token"))
require.NoError(t, err)
assert.False(t, st1.EqualsSecure(st2))
})
t.Run("self-comparison returns true when set", func(t *testing.T) {
t.Parallel()
st := &SecureToken{}
err := st.Set([]byte("token"))
require.NoError(t, err)
assert.True(t, st.EqualsSecure(st), "self-comparison should return true and not deadlock")
})
t.Run("self-comparison returns false when not set", func(t *testing.T) {
t.Parallel()
st := &SecureToken{}
assert.False(t, st.EqualsSecure(st), "self-comparison on unset token should return false")
})
}

@@ -1,25 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
// Modifications by M/S Omukk
package api
import (
"net/http"
)
// PostSnapshotPrepare quiesces continuous goroutines (port scanner, forwarder)
// before Firecracker takes a VM snapshot, so those goroutines are not
// mid-operation in the Go runtime when the vCPUs are frozen.
//
// Called by the host agent as a best-effort signal before vm.Pause().
func (a *API) PostSnapshotPrepare(w http.ResponseWriter, r *http.Request) {
defer r.Body.Close()
if a.portSubsystem != nil {
a.portSubsystem.Stop()
a.logger.Info().Msg("snapshot/prepare: port subsystem quiesced")
}
w.Header().Set("Cache-Control", "no-store")
w.WriteHeader(http.StatusNoContent)
}
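// Hypothetical caller sketch (the route and address variable are assumptions
// for illustration): the host agent POSTs here best-effort before vm.Pause().
//
// req, _ := http.NewRequestWithContext(ctx, http.MethodPost,
// envdAddr+"/snapshot/prepare", nil)
// resp, err := httpClient.Do(req)
// if err == nil {
// resp.Body.Close() // a 204 means the guest has quiesced
// }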

@@ -1,108 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
// Modifications by M/S Omukk
package api
import (
"context"
"encoding/json"
"net/http"
"sync"
"github.com/rs/zerolog"
"git.omukk.dev/wrenn/sandbox/envd/internal/execcontext"
"git.omukk.dev/wrenn/sandbox/envd/internal/host"
publicport "git.omukk.dev/wrenn/sandbox/envd/internal/port"
"git.omukk.dev/wrenn/sandbox/envd/internal/utils"
)
// MMDSClient provides access to MMDS metadata.
type MMDSClient interface {
GetAccessTokenHash(ctx context.Context) (string, error)
}
// DefaultMMDSClient is the production implementation that calls the real MMDS endpoint.
type DefaultMMDSClient struct{}
func (c *DefaultMMDSClient) GetAccessTokenHash(ctx context.Context) (string, error) {
return host.GetAccessTokenHashFromMMDS(ctx)
}
type API struct {
isNotFC bool
logger *zerolog.Logger
accessToken *SecureToken
defaults *execcontext.Defaults
version string
mmdsChan chan *host.MMDSOpts
hyperloopLock sync.Mutex
mmdsClient MMDSClient
lastSetTime *utils.AtomicMax
initLock sync.Mutex
// rootCtx is the parent context from main(), used to restart
// long-lived goroutines after snapshot restore.
rootCtx context.Context
portSubsystem *publicport.PortSubsystem
}
func New(l *zerolog.Logger, defaults *execcontext.Defaults, mmdsChan chan *host.MMDSOpts, isNotFC bool, rootCtx context.Context, portSubsystem *publicport.PortSubsystem, version string) *API {
return &API{
logger: l,
defaults: defaults,
mmdsChan: mmdsChan,
isNotFC: isNotFC,
mmdsClient: &DefaultMMDSClient{},
lastSetTime: utils.NewAtomicMax(),
accessToken: &SecureToken{},
rootCtx: rootCtx,
portSubsystem: portSubsystem,
version: version,
}
}
func (a *API) GetHealth(w http.ResponseWriter, r *http.Request) {
defer r.Body.Close()
a.logger.Trace().Msg("Health check")
w.Header().Set("Cache-Control", "no-store")
w.Header().Set("Content-Type", "application/json")
_ = json.NewEncoder(w).Encode(map[string]string{
"version": a.version,
})
}
func (a *API) GetMetrics(w http.ResponseWriter, r *http.Request) {
defer r.Body.Close()
a.logger.Trace().Msg("Get metrics")
w.Header().Set("Cache-Control", "no-store")
w.Header().Set("Content-Type", "application/json")
metrics, err := host.GetMetrics()
if err != nil {
a.logger.Error().Err(err).Msg("Failed to get metrics")
w.WriteHeader(http.StatusInternalServerError)
return
}
w.WriteHeader(http.StatusOK)
if err := json.NewEncoder(w).Encode(metrics); err != nil {
a.logger.Error().Err(err).Msg("Failed to encode metrics")
}
}
func (a *API) getLogger(err error) *zerolog.Event {
if err != nil {
return a.logger.Error().Err(err) //nolint:zerologlint // this is only prep
}
return a.logger.Info() //nolint:zerologlint // this is only prep
}

@@ -1,311 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package api
import (
"encoding/json"
"errors"
"fmt"
"io"
"mime/multipart"
"net/http"
"os"
"os/user"
"path/filepath"
"strings"
"syscall"
"github.com/rs/zerolog"
"git.omukk.dev/wrenn/sandbox/envd/internal/execcontext"
"git.omukk.dev/wrenn/sandbox/envd/internal/logs"
"git.omukk.dev/wrenn/sandbox/envd/internal/permissions"
"git.omukk.dev/wrenn/sandbox/envd/internal/utils"
)
var ErrNoDiskSpace = fmt.Errorf("not enough disk space available")
func processFile(r *http.Request, path string, part io.Reader, uid, gid int, logger zerolog.Logger) (int, error) {
logger.Debug().
Str("path", path).
Msg("File processing")
err := permissions.EnsureDirs(filepath.Dir(path), uid, gid)
if err != nil {
err := fmt.Errorf("error ensuring directories: %w", err)
return http.StatusInternalServerError, err
}
canBePreChowned := false
stat, err := os.Stat(path)
if err != nil && !os.IsNotExist(err) {
errMsg := fmt.Errorf("error getting file info: %w", err)
return http.StatusInternalServerError, errMsg
} else if err == nil {
if stat.IsDir() {
err := fmt.Errorf("path is a directory: %s", path)
return http.StatusBadRequest, err
}
canBePreChowned = true
}
hasBeenChowned := false
if canBePreChowned {
err = os.Chown(path, uid, gid)
if err != nil {
if !os.IsNotExist(err) {
err = fmt.Errorf("error changing file ownership: %w", err)
return http.StatusInternalServerError, err
}
} else {
hasBeenChowned = true
}
}
file, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o666)
if err != nil {
if errors.Is(err, syscall.ENOSPC) {
err = fmt.Errorf("not enough inodes available: %w", err)
return http.StatusInsufficientStorage, err
}
err := fmt.Errorf("error opening file: %w", err)
return http.StatusInternalServerError, err
}
defer file.Close()
if !hasBeenChowned {
err = os.Chown(path, uid, gid)
if err != nil {
err := fmt.Errorf("error changing file ownership: %w", err)
return http.StatusInternalServerError, err
}
}
_, err = file.ReadFrom(part)
if err != nil {
if errors.Is(err, syscall.ENOSPC) {
err = ErrNoDiskSpace
if r.ContentLength > 0 {
err = fmt.Errorf("attempted to write %d bytes: %w", r.ContentLength, err)
}
return http.StatusInsufficientStorage, err
}
err = fmt.Errorf("error writing file: %w", err)
return http.StatusInternalServerError, err
}
return http.StatusNoContent, nil
}
func resolvePath(part *multipart.Part, paths *UploadSuccess, u *user.User, defaultPath *string, params PostFilesParams) (string, error) {
var pathToResolve string
if params.Path != nil {
pathToResolve = *params.Path
} else {
var err error
customPart := utils.NewCustomPart(part)
pathToResolve, err = customPart.FileNameWithPath()
if err != nil {
return "", fmt.Errorf("error getting multipart custom part file name: %w", err)
}
}
filePath, err := permissions.ExpandAndResolve(pathToResolve, u, defaultPath)
if err != nil {
return "", fmt.Errorf("error resolving path: %w", err)
}
for _, entry := range *paths {
if entry.Path == filePath {
var alreadyUploaded []string
for _, uploadedFile := range *paths {
if uploadedFile.Path != filePath {
alreadyUploaded = append(alreadyUploaded, uploadedFile.Path)
}
}
errMsg := fmt.Errorf("you cannot upload multiple files to the same path '%s' in one upload request, only the first specified file was uploaded", filePath)
if len(alreadyUploaded) > 1 {
errMsg = fmt.Errorf("%w, also the following files were uploaded: %v", errMsg, strings.Join(alreadyUploaded, ", "))
}
return "", errMsg
}
}
return filePath, nil
}
func (a *API) handlePart(r *http.Request, part *multipart.Part, paths UploadSuccess, u *user.User, uid, gid int, operationID string, params PostFilesParams) (*EntryInfo, int, error) {
defer part.Close()
if part.FormName() != "file" {
return nil, http.StatusOK, nil
}
filePath, err := resolvePath(part, &paths, u, a.defaults.Workdir, params)
if err != nil {
return nil, http.StatusBadRequest, err
}
logger := a.logger.
With().
Str(string(logs.OperationIDKey), operationID).
Str("event_type", "file_processing").
Logger()
status, err := processFile(r, filePath, part, uid, gid, logger)
if err != nil {
return nil, status, err
}
return &EntryInfo{
Path: filePath,
Name: filepath.Base(filePath),
Type: File,
}, http.StatusOK, nil
}
func (a *API) PostFiles(w http.ResponseWriter, r *http.Request, params PostFilesParams) {
// Capture original body to ensure it's always closed
originalBody := r.Body
defer originalBody.Close()
var errorCode int
var errMsg error
var path string
if params.Path != nil {
path = *params.Path
}
operationID := logs.AssignOperationID()
// signing authorization if needed
err := a.validateSigning(r, params.Signature, params.SignatureExpiration, params.Username, path, SigningWriteOperation)
if err != nil {
a.logger.Error().Err(err).Str(string(logs.OperationIDKey), operationID).Msg("error during auth validation")
jsonError(w, http.StatusUnauthorized, err)
return
}
username, err := execcontext.ResolveDefaultUsername(params.Username, a.defaults.User)
if err != nil {
a.logger.Error().Err(err).Str(string(logs.OperationIDKey), operationID).Msg("no user specified")
jsonError(w, http.StatusBadRequest, err)
return
}
defer func() {
l := a.logger.
Err(errMsg).
Str("method", r.Method+" "+r.URL.Path).
Str(string(logs.OperationIDKey), operationID).
Str("path", path).
Str("username", username)
if errMsg != nil {
l = l.Int("error_code", errorCode)
}
l.Msg("File write")
}()
// Handle gzip-encoded request body
body, err := getDecompressedBody(r)
if err != nil {
errMsg = fmt.Errorf("error decompressing request body: %w", err)
errorCode = http.StatusBadRequest
jsonError(w, errorCode, errMsg)
return
}
defer body.Close()
r.Body = body
f, err := r.MultipartReader()
if err != nil {
errMsg = fmt.Errorf("error parsing multipart form: %w", err)
errorCode = http.StatusInternalServerError
jsonError(w, errorCode, errMsg)
return
}
u, err := user.Lookup(username)
if err != nil {
errMsg = fmt.Errorf("error looking up user '%s': %w", username, err)
errorCode = http.StatusUnauthorized
jsonError(w, errorCode, errMsg)
return
}
uid, gid, err := permissions.GetUserIdInts(u)
if err != nil {
errMsg = fmt.Errorf("error getting user ids: %w", err)
jsonError(w, http.StatusInternalServerError, errMsg)
return
}
paths := UploadSuccess{}
for {
part, partErr := f.NextPart()
if partErr == io.EOF {
// We're done reading the parts.
break
} else if partErr != nil {
errMsg = fmt.Errorf("error reading form: %w", partErr)
errorCode = http.StatusInternalServerError
jsonError(w, errorCode, errMsg)
// Return, not break: jsonError already wrote the response, and falling
// through would write a second status code and body below.
return
}
entry, status, err := a.handlePart(r, part, paths, u, uid, gid, operationID, params)
if err != nil {
errorCode = status
errMsg = err
jsonError(w, errorCode, errMsg)
return
}
if entry != nil {
paths = append(paths, *entry)
}
}
data, err := json.Marshal(paths)
if err != nil {
errMsg = fmt.Errorf("error marshaling response: %w", err)
errorCode = http.StatusInternalServerError
jsonError(w, errorCode, errMsg)
return
}
w.WriteHeader(http.StatusOK)
_, _ = w.Write(data)
}
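// Hypothetical client sketch (route and query parameter are assumptions for
// illustration; only the "file" form field name is fixed by handlePart above):
//
// var body bytes.Buffer
// mw := multipart.NewWriter(&body)
// part, _ := mw.CreateFormFile("file", "notes.txt")
// _, _ = part.Write([]byte("hello"))
// _ = mw.Close()
// req, _ := http.NewRequest(http.MethodPost, base+"/files?username=user", &body)
// req.Header.Set("Content-Type", mw.FormDataContentType())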

@@ -1,251 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package api
import (
"bytes"
"context"
"fmt"
"io"
"net/http"
"os"
"os/exec"
"path/filepath"
"testing"
"github.com/rs/zerolog"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestProcessFile(t *testing.T) {
t.Parallel()
uid := os.Getuid()
gid := os.Getgid()
newRequest := func(content []byte) (*http.Request, io.Reader) {
request := &http.Request{
ContentLength: int64(len(content)),
}
buffer := bytes.NewBuffer(content)
return request, buffer
}
var emptyReq http.Request
var emptyPart *bytes.Buffer
var emptyLogger zerolog.Logger
t.Run("failed to ensure directories", func(t *testing.T) {
t.Parallel()
httpStatus, err := processFile(&emptyReq, "/proc/invalid/not-real", emptyPart, uid, gid, emptyLogger)
require.Error(t, err)
assert.Equal(t, http.StatusInternalServerError, httpStatus)
assert.ErrorContains(t, err, "error ensuring directories: ")
})
t.Run("attempt to replace directory with a file", func(t *testing.T) {
t.Parallel()
tempDir := t.TempDir()
httpStatus, err := processFile(&emptyReq, tempDir, emptyPart, uid, gid, emptyLogger)
require.Error(t, err)
assert.Equal(t, http.StatusBadRequest, httpStatus, err.Error())
assert.ErrorContains(t, err, "path is a directory: ")
})
t.Run("fail to create file", func(t *testing.T) {
t.Parallel()
httpStatus, err := processFile(&emptyReq, "/proc/invalid-filename", emptyPart, uid, gid, emptyLogger)
require.Error(t, err)
assert.Equal(t, http.StatusInternalServerError, httpStatus)
assert.ErrorContains(t, err, "error opening file: ")
})
t.Run("out of disk space", func(t *testing.T) {
t.Parallel()
// make a tiny tmpfs mount
mountSize := 1024
tempDir := createTmpfsMount(t, mountSize)
// create test file
firstFileSize := mountSize / 2
tempFile1 := filepath.Join(tempDir, "test-file-1")
// fill it up
cmd := exec.CommandContext(t.Context(),
"dd", "if=/dev/zero", "of="+tempFile1, fmt.Sprintf("bs=%d", firstFileSize), "count=1")
err := cmd.Run()
require.NoError(t, err)
// create contents twice the size of the mount
secondFileContents := make([]byte, mountSize*2)
for index := range secondFileContents {
secondFileContents[index] = 'a'
}
// try to write them to a new file
request, buffer := newRequest(secondFileContents)
tempFile2 := filepath.Join(tempDir, "test-file-2")
httpStatus, err := processFile(request, tempFile2, buffer, uid, gid, emptyLogger)
require.Error(t, err)
assert.Equal(t, http.StatusInsufficientStorage, httpStatus)
assert.ErrorContains(t, err, "attempted to write 2048 bytes: not enough disk space")
})
t.Run("happy path", func(t *testing.T) {
t.Parallel()
tempDir := t.TempDir()
tempFile := filepath.Join(tempDir, "test-file")
content := []byte("test-file-contents")
request, buffer := newRequest(content)
httpStatus, err := processFile(request, tempFile, buffer, uid, gid, emptyLogger)
require.NoError(t, err)
assert.Equal(t, http.StatusNoContent, httpStatus)
data, err := os.ReadFile(tempFile)
require.NoError(t, err)
assert.Equal(t, content, data)
})
t.Run("overwrite file on full disk", func(t *testing.T) {
t.Parallel()
// make a tiny tmpfs mount
sizeInBytes := 1024
tempDir := createTmpfsMount(t, sizeInBytes)
// create test file
tempFile := filepath.Join(tempDir, "test-file")
// fill it up
cmd := exec.CommandContext(t.Context(), "dd", "if=/dev/zero", "of="+tempFile, fmt.Sprintf("bs=%d", sizeInBytes), "count=1")
err := cmd.Run()
require.NoError(t, err)
// try to replace it
content := []byte("test-file-contents")
request, buffer := newRequest(content)
httpStatus, err := processFile(request, tempFile, buffer, uid, gid, emptyLogger)
require.NoError(t, err)
assert.Equal(t, http.StatusNoContent, httpStatus)
})
t.Run("write new file on full disk", func(t *testing.T) {
t.Parallel()
// make a tiny tmpfs mount
sizeInBytes := 1024
tempDir := createTmpfsMount(t, sizeInBytes)
// create test file
tempFile1 := filepath.Join(tempDir, "test-file")
// fill it up
cmd := exec.CommandContext(t.Context(), "dd", "if=/dev/zero", "of="+tempFile1, fmt.Sprintf("bs=%d", sizeInBytes), "count=1")
err := cmd.Run()
require.NoError(t, err)
// try to write a new file
tempFile2 := filepath.Join(tempDir, "test-file-2")
content := []byte("test-file-contents")
request, buffer := newRequest(content)
httpStatus, err := processFile(request, tempFile2, buffer, uid, gid, emptyLogger)
require.ErrorContains(t, err, "not enough disk space available")
assert.Equal(t, http.StatusInsufficientStorage, httpStatus)
})
t.Run("write new file with no inodes available", func(t *testing.T) {
t.Parallel()
// make a tiny tmpfs mount
tempDir := createTmpfsMountWithInodes(t, 1024, 2)
// create test file
tempFile1 := filepath.Join(tempDir, "test-file")
// fill it up
cmd := exec.CommandContext(t.Context(), "dd", "if=/dev/zero", "of="+tempFile1, fmt.Sprintf("bs=%d", 100), "count=1")
err := cmd.Run()
require.NoError(t, err)
// try to write a new file
tempFile2 := filepath.Join(tempDir, "test-file-2")
content := []byte("test-file-contents")
request, buffer := newRequest(content)
httpStatus, err := processFile(request, tempFile2, buffer, uid, gid, emptyLogger)
require.ErrorContains(t, err, "not enough inodes available")
assert.Equal(t, http.StatusInsufficientStorage, httpStatus)
})
t.Run("update sysfs or other virtual fs", func(t *testing.T) {
t.Parallel()
if os.Geteuid() != 0 {
t.Skip("skipping sysfs updates: Operation not permitted with non-root user")
}
filePath := "/sys/fs/cgroup/user.slice/cpu.weight"
newContent := []byte("102\n")
request, buffer := newRequest(newContent)
httpStatus, err := processFile(request, filePath, buffer, uid, gid, emptyLogger)
require.NoError(t, err)
assert.Equal(t, http.StatusNoContent, httpStatus)
data, err := os.ReadFile(filePath)
require.NoError(t, err)
assert.Equal(t, newContent, data)
})
t.Run("replace file", func(t *testing.T) {
t.Parallel()
tempDir := t.TempDir()
tempFile := filepath.Join(tempDir, "test-file")
err := os.WriteFile(tempFile, []byte("old-contents"), 0o644)
require.NoError(t, err)
newContent := []byte("new-file-contents")
request, buffer := newRequest(newContent)
httpStatus, err := processFile(request, tempFile, buffer, uid, gid, emptyLogger)
require.NoError(t, err)
assert.Equal(t, http.StatusNoContent, httpStatus)
data, err := os.ReadFile(tempFile)
require.NoError(t, err)
assert.Equal(t, newContent, data)
})
}
func createTmpfsMount(t *testing.T, sizeInBytes int) string {
t.Helper()
return createTmpfsMountWithInodes(t, sizeInBytes, 5)
}
func createTmpfsMountWithInodes(t *testing.T, sizeInBytes, inodesCount int) string {
t.Helper()
if os.Geteuid() != 0 {
t.Skip("skipping sysfs updates: Operation not permitted with non-root user")
}
tempDir := t.TempDir()
cmd := exec.CommandContext(t.Context(),
"mount",
"tmpfs",
tempDir,
"-t", "tmpfs",
"-o", fmt.Sprintf("size=%d,nr_inodes=%d", sizeInBytes, inodesCount))
err := cmd.Run()
require.NoError(t, err)
t.Cleanup(func() {
ctx := context.WithoutCancel(t.Context())
cmd := exec.CommandContext(ctx, "umount", tempDir)
err := cmd.Run()
require.NoError(t, err)
})
return tempDir
}

@@ -1,39 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package execcontext
import (
"errors"
"git.omukk.dev/wrenn/sandbox/envd/internal/utils"
)
type Defaults struct {
EnvVars *utils.Map[string, string]
User string
Workdir *string
}
func ResolveDefaultWorkdir(workdir string, defaultWorkdir *string) string {
if workdir != "" {
return workdir
}
if defaultWorkdir != nil {
return *defaultWorkdir
}
return ""
}
func ResolveDefaultUsername(username *string, defaultUsername string) (string, error) {
if username != nil {
return *username, nil
}
if defaultUsername != "" {
return defaultUsername, nil
}
return "", errors.New("username not provided")
}
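// Resolution order, by example: an explicit value always wins, then the
// default is consulted.
//
// ResolveDefaultWorkdir("/tmp", nil) == "/tmp"
// ResolveDefaultWorkdir("", &home) == home
// ResolveDefaultUsername(nil, "user") == ("user", nil)
// ResolveDefaultUsername(nil, "") == ("", error)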

@@ -1,96 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
// Modifications by M/S Omukk
package host
import (
"math"
"time"
"github.com/shirou/gopsutil/v4/cpu"
"github.com/shirou/gopsutil/v4/mem"
"golang.org/x/sys/unix"
)
type Metrics struct {
Timestamp int64 `json:"ts"` // Unix Timestamp in UTC
CPUCount uint32 `json:"cpu_count"` // Total CPU cores
CPUUsedPercent float32 `json:"cpu_used_pct"` // Percent rounded to 2 decimal places
// Deprecated: kept for backwards compatibility with older orchestrators.
MemTotalMiB uint64 `json:"mem_total_mib"` // Total virtual memory in MiB
// Deprecated: kept for backwards compatibility with older orchestrators.
MemUsedMiB uint64 `json:"mem_used_mib"` // Used virtual memory in MiB
MemTotal uint64 `json:"mem_total"` // Total virtual memory in bytes
MemUsed uint64 `json:"mem_used"` // Used virtual memory in bytes
DiskUsed uint64 `json:"disk_used"` // Used disk space in bytes
DiskTotal uint64 `json:"disk_total"` // Total disk space in bytes
}
func GetMetrics() (*Metrics, error) {
v, err := mem.VirtualMemory()
if err != nil {
return nil, err
}
memUsedMiB := v.Used / 1024 / 1024
memTotalMiB := v.Total / 1024 / 1024
cpuTotal, err := cpu.Counts(true)
if err != nil {
return nil, err
}
cpuUsedPcts, err := cpu.Percent(0, false)
if err != nil {
return nil, err
}
cpuUsedPct := cpuUsedPcts[0]
cpuUsedPctRounded := float32(cpuUsedPct)
if cpuUsedPct > 0 {
cpuUsedPctRounded = float32(math.Round(cpuUsedPct*100) / 100)
}
diskMetrics, err := diskStats("/")
if err != nil {
return nil, err
}
return &Metrics{
Timestamp: time.Now().UTC().Unix(),
CPUCount: uint32(cpuTotal),
CPUUsedPercent: cpuUsedPctRounded,
MemUsedMiB: memUsedMiB,
MemTotalMiB: memTotalMiB,
MemTotal: v.Total,
MemUsed: v.Used,
DiskUsed: diskMetrics.Total - diskMetrics.Available,
DiskTotal: diskMetrics.Total,
}, nil
}
type diskSpace struct {
Total uint64
Available uint64
}
func diskStats(path string) (diskSpace, error) {
var st unix.Statfs_t
if err := unix.Statfs(path, &st); err != nil {
return diskSpace{}, err
}
block := uint64(st.Bsize)
// all data blocks
total := st.Blocks * block
// blocks available
available := st.Bavail * block
return diskSpace{Total: total, Available: available}, nil
}
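// Example of the serialized Metrics shape (values illustrative):
//
// {"ts":1716000000,"cpu_count":8,"cpu_used_pct":12.34,
// "mem_total_mib":15912,"mem_used_mib":2048,
// "mem_total":16684941312,"mem_used":2147483648,
// "disk_used":10737418240,"disk_total":53687091200}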

@@ -1,185 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
// Modifications by M/S Omukk
package host
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"os"
"path/filepath"
"time"
"git.omukk.dev/wrenn/sandbox/envd/internal/utils"
)
const (
WrennRunDir = "/run/wrenn" // store sandbox metadata files here
mmdsDefaultAddress = "169.254.169.254"
mmdsTokenExpiration = 60 * time.Second
mmdsAccessTokenRequestClientTimeout = 10 * time.Second
)
var mmdsAccessTokenClient = &http.Client{
Timeout: mmdsAccessTokenRequestClientTimeout,
Transport: &http.Transport{
DisableKeepAlives: true,
},
}
type MMDSOpts struct {
SandboxID string `json:"instanceID"`
TemplateID string `json:"envID"`
LogsCollectorAddress string `json:"address"`
AccessTokenHash string `json:"accessTokenHash"`
}
func (opts *MMDSOpts) Update(sandboxID, templateID, collectorAddress string) {
opts.SandboxID = sandboxID
opts.TemplateID = templateID
opts.LogsCollectorAddress = collectorAddress
}
func (opts *MMDSOpts) AddOptsToJSON(jsonLogs []byte) ([]byte, error) {
parsed := make(map[string]any)
err := json.Unmarshal(jsonLogs, &parsed)
if err != nil {
return nil, err
}
parsed["instanceID"] = opts.SandboxID
parsed["envID"] = opts.TemplateID
return json.Marshal(parsed)
}
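// For example, AddOptsToJSON([]byte(`{"level":"info"}`)) yields
// {"level":"info","instanceID":"<SandboxID>","envID":"<TemplateID>"}.
// getMMDSToken performs the IMDSv2-style handshake: a PUT to the MMDS
// address returns a short-lived session token that must accompany
// subsequent metadata reads.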
func getMMDSToken(ctx context.Context, client *http.Client) (string, error) {
request, err := http.NewRequestWithContext(ctx, http.MethodPut, "http://"+mmdsDefaultAddress+"/latest/api/token", &bytes.Buffer{})
if err != nil {
return "", err
}
request.Header["X-metadata-token-ttl-seconds"] = []string{fmt.Sprint(mmdsTokenExpiration.Seconds())}
response, err := client.Do(request)
if err != nil {
return "", err
}
defer response.Body.Close()
body, err := io.ReadAll(response.Body)
if err != nil {
return "", err
}
token := string(body)
if len(token) == 0 {
return "", fmt.Errorf("mmds token is an empty string")
}
return token, nil
}
func getMMDSOpts(ctx context.Context, client *http.Client, token string) (*MMDSOpts, error) {
request, err := http.NewRequestWithContext(ctx, http.MethodGet, "http://"+mmdsDefaultAddress, &bytes.Buffer{})
if err != nil {
return nil, err
}
request.Header["X-metadata-token"] = []string{token}
request.Header["Accept"] = []string{"application/json"}
response, err := client.Do(request)
if err != nil {
return nil, err
}
defer response.Body.Close()
body, err := io.ReadAll(response.Body)
if err != nil {
return nil, err
}
var opts MMDSOpts
err = json.Unmarshal(body, &opts)
if err != nil {
return nil, err
}
return &opts, nil
}
// GetAccessTokenHashFromMMDS reads the access token hash from MMDS.
// This is used to validate that /init requests come from the orchestrator.
func GetAccessTokenHashFromMMDS(ctx context.Context) (string, error) {
token, err := getMMDSToken(ctx, mmdsAccessTokenClient)
if err != nil {
return "", fmt.Errorf("failed to get MMDS token: %w", err)
}
opts, err := getMMDSOpts(ctx, mmdsAccessTokenClient, token)
if err != nil {
return "", fmt.Errorf("failed to get MMDS opts: %w", err)
}
return opts.AccessTokenHash, nil
}
func PollForMMDSOpts(ctx context.Context, mmdsChan chan<- *MMDSOpts, envVars *utils.Map[string, string]) {
httpClient := &http.Client{}
defer httpClient.CloseIdleConnections()
ticker := time.NewTicker(50 * time.Millisecond)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
fmt.Fprintf(os.Stderr, "context cancelled while waiting for mmds opts")
return
case <-ticker.C:
token, err := getMMDSToken(ctx, httpClient)
if err != nil {
fmt.Fprintf(os.Stderr, "error getting mmds token: %v\n", err)
continue
}
mmdsOpts, err := getMMDSOpts(ctx, httpClient, token)
if err != nil {
fmt.Fprintf(os.Stderr, "error getting mmds opts: %v\n", err)
continue
}
envVars.Store("WRENN_SANDBOX_ID", mmdsOpts.SandboxID)
envVars.Store("WRENN_TEMPLATE_ID", mmdsOpts.TemplateID)
if err := os.WriteFile(filepath.Join(WrennRunDir, ".WRENN_SANDBOX_ID"), []byte(mmdsOpts.SandboxID), 0o666); err != nil {
fmt.Fprintf(os.Stderr, "error writing sandbox ID file: %v\n", err)
}
if err := os.WriteFile(filepath.Join(WrennRunDir, ".WRENN_TEMPLATE_ID"), []byte(mmdsOpts.TemplateID), 0o666); err != nil {
fmt.Fprintf(os.Stderr, "error writing template ID file: %v\n", err)
}
if mmdsOpts.LogsCollectorAddress != "" {
mmdsChan <- mmdsOpts
}
return
}
}
}

@@ -1,49 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package logs
import (
"time"
"github.com/rs/zerolog"
)
const (
defaultMaxBufferSize = 2 << 15 // 64 KiB
defaultTimeout = 2 * time.Second
)
func LogBufferedDataEvents(dataCh <-chan []byte, logger *zerolog.Logger, eventType string) {
timer := time.NewTicker(defaultTimeout)
defer timer.Stop()
var buffer []byte
defer func() {
if len(buffer) > 0 {
logger.Info().Str(eventType, string(buffer)).Msg("Streaming process event (flush)")
}
}()
for {
select {
case <-timer.C:
if len(buffer) > 0 {
logger.Info().Str(eventType, string(buffer)).Msg("Streaming process event")
buffer = nil
}
case data, ok := <-dataCh:
if !ok {
return
}
buffer = append(buffer, data...)
if len(buffer) >= defaultMaxBufferSize {
logger.Info().Str(eventType, string(buffer)).Msg("Streaming process event")
buffer = nil
continue
}
}
}
}
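// Minimal usage sketch: the producer closes dataCh to flush any remaining
// buffer and stop the loop.
func exampleLogBufferedDataEvents(logger *zerolog.Logger) {
dataCh := make(chan []byte)
go LogBufferedDataEvents(dataCh, logger, "stdout")
dataCh <- []byte("hello")
close(dataCh) // the deferred flush emits whatever is still buffered
}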

@@ -1,174 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package exporter
import (
"bytes"
"context"
"fmt"
"log"
"net/http"
"os"
"sync"
"time"
"git.omukk.dev/wrenn/sandbox/envd/internal/host"
)
const ExporterTimeout = 10 * time.Second
type HTTPExporter struct {
client http.Client
logs [][]byte
isNotFC bool
mmdsOpts *host.MMDSOpts
// Concurrency coordination
triggers chan struct{}
logLock sync.RWMutex
mmdsLock sync.RWMutex
startOnce sync.Once
}
func NewHTTPLogsExporter(ctx context.Context, isNotFC bool, mmdsChan <-chan *host.MMDSOpts) *HTTPExporter {
exporter := &HTTPExporter{
client: http.Client{
Timeout: ExporterTimeout,
},
triggers: make(chan struct{}, 1),
isNotFC: isNotFC,
startOnce: sync.Once{},
mmdsOpts: &host.MMDSOpts{
SandboxID: "unknown",
TemplateID: "unknown",
LogsCollectorAddress: "",
},
}
go exporter.listenForMMDSOptsAndStart(ctx, mmdsChan)
return exporter
}
func (w *HTTPExporter) sendInstanceLogs(ctx context.Context, logs []byte, address string) error {
if address == "" {
return nil
}
request, err := http.NewRequestWithContext(ctx, http.MethodPost, address, bytes.NewBuffer(logs))
if err != nil {
return err
}
request.Header.Set("Content-Type", "application/json")
response, err := w.client.Do(request)
if err != nil {
return err
}
defer response.Body.Close()
return nil
}
func printLog(logs []byte) {
fmt.Fprintf(os.Stdout, "%v", string(logs))
}
func (w *HTTPExporter) listenForMMDSOptsAndStart(ctx context.Context, mmdsChan <-chan *host.MMDSOpts) {
for {
select {
case <-ctx.Done():
return
case mmdsOpts, ok := <-mmdsChan:
if !ok {
return
}
w.mmdsLock.Lock()
w.mmdsOpts.Update(mmdsOpts.SandboxID, mmdsOpts.TemplateID, mmdsOpts.LogsCollectorAddress)
w.mmdsLock.Unlock()
w.startOnce.Do(func() {
go w.start(ctx)
})
}
}
}
func (w *HTTPExporter) start(ctx context.Context) {
for range w.triggers {
logs := w.getAllLogs()
if len(logs) == 0 {
continue
}
if w.isNotFC {
for _, log := range logs {
fmt.Fprintf(os.Stdout, "%v", string(log))
}
continue
}
for _, logLine := range logs {
w.mmdsLock.RLock()
logLineWithOpts, err := w.mmdsOpts.AddOptsToJSON(logLine)
// Capture the address under the same lock; reading it during the send
// below would race with listenForMMDSOptsAndStart updating mmdsOpts.
address := w.mmdsOpts.LogsCollectorAddress
w.mmdsLock.RUnlock()
if err != nil {
log.Printf("error adding instance logging options to JSON log line (%+v): %v\n", logLine, err)
printLog(logLine)
continue
}
err = w.sendInstanceLogs(ctx, logLineWithOpts, address)
if err != nil {
log.Printf("error sending instance logs: %+v", err)
printLog(logLine)
continue
}
}
}
}
func (w *HTTPExporter) resumeProcessing() {
select {
case w.triggers <- struct{}{}:
default:
// Exporter processing already triggered
// This is expected behavior if the exporter is already processing logs
}
}
func (w *HTTPExporter) Write(logs []byte) (int, error) {
logsCopy := make([]byte, len(logs))
copy(logsCopy, logs)
go w.addLogs(logsCopy)
return len(logs), nil
}
func (w *HTTPExporter) getAllLogs() [][]byte {
w.logLock.Lock()
defer w.logLock.Unlock()
logs := w.logs
w.logs = nil
return logs
}
func (w *HTTPExporter) addLogs(logs []byte) {
w.logLock.Lock()
defer w.logLock.Unlock()
w.logs = append(w.logs, logs)
w.resumeProcessing()
}
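// Note: HTTPExporter satisfies io.Writer via Write above, so it can be wired
// directly into zerolog as a log sink (see logs.NewLogger).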

@@ -1,174 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package logs
import (
"context"
"fmt"
"strconv"
"strings"
"sync/atomic"
"connectrpc.com/connect"
"github.com/rs/zerolog"
)
type OperationID string
const (
OperationIDKey OperationID = "operation_id"
DefaultHTTPMethod string = "POST"
)
var operationID = atomic.Int32{}
func AssignOperationID() string {
id := operationID.Add(1)
return strconv.Itoa(int(id))
}
func AddRequestIDToContext(ctx context.Context) context.Context {
return context.WithValue(ctx, OperationIDKey, AssignOperationID())
}
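// formatMethod turns a Connect procedure such as
// "/process.Process/StartProcess" into a "Process startProcess" label.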
func formatMethod(method string) string {
parts := strings.Split(method, ".")
if len(parts) < 2 {
return method
}
split := strings.Split(parts[1], "/")
if len(split) < 2 {
return method
}
servicePart, methodPart := split[0], split[1]
// Guard against empty components so the slicing below cannot panic on a
// malformed procedure name.
if servicePart == "" || methodPart == "" {
return method
}
servicePart = strings.ToUpper(servicePart[:1]) + servicePart[1:]
methodPart = strings.ToLower(methodPart[:1]) + methodPart[1:]
return fmt.Sprintf("%s %s", servicePart, methodPart)
}
func NewUnaryLogInterceptor(logger *zerolog.Logger) connect.UnaryInterceptorFunc {
interceptor := func(next connect.UnaryFunc) connect.UnaryFunc {
return connect.UnaryFunc(func(
ctx context.Context,
req connect.AnyRequest,
) (connect.AnyResponse, error) {
ctx = AddRequestIDToContext(ctx)
res, err := next(ctx, req)
l := logger.
Err(err).
Str("method", DefaultHTTPMethod+" "+req.Spec().Procedure).
Str(string(OperationIDKey), ctx.Value(OperationIDKey).(string))
if err != nil {
l = l.Int("error_code", int(connect.CodeOf(err)))
}
if req != nil {
l = l.Interface("request", req.Any())
}
if res != nil && err == nil {
l = l.Interface("response", res.Any())
}
if res == nil && err == nil {
l = l.Interface("response", nil)
}
l.Msg(formatMethod(req.Spec().Procedure))
return res, err
})
}
return connect.UnaryInterceptorFunc(interceptor)
}
func LogServerStreamWithoutEvents[T any, R any](
ctx context.Context,
logger *zerolog.Logger,
req *connect.Request[R],
stream *connect.ServerStream[T],
handler func(ctx context.Context, req *connect.Request[R], stream *connect.ServerStream[T]) error,
) error {
ctx = AddRequestIDToContext(ctx)
l := logger.Debug().
Str("method", DefaultHTTPMethod+" "+req.Spec().Procedure).
Str(string(OperationIDKey), ctx.Value(OperationIDKey).(string))
if req != nil {
l = l.Interface("request", req.Any())
}
l.Msg(fmt.Sprintf("%s (server stream start)", formatMethod(req.Spec().Procedure)))
err := handler(ctx, req, stream)
logEvent := getErrDebugLogEvent(logger, err).
Str("method", DefaultHTTPMethod+" "+req.Spec().Procedure).
Str(string(OperationIDKey), ctx.Value(OperationIDKey).(string))
if err != nil {
logEvent = logEvent.Int("error_code", int(connect.CodeOf(err)))
} else {
logEvent = logEvent.Interface("response", nil)
}
logEvent.Msg(fmt.Sprintf("%s (server stream end)", formatMethod(req.Spec().Procedure)))
return err
}
func LogClientStreamWithoutEvents[T any, R any](
ctx context.Context,
logger *zerolog.Logger,
stream *connect.ClientStream[T],
handler func(ctx context.Context, stream *connect.ClientStream[T]) (*connect.Response[R], error),
) (*connect.Response[R], error) {
ctx = AddRequestIDToContext(ctx)
logger.Debug().
Str("method", DefaultHTTPMethod+" "+stream.Spec().Procedure).
Str(string(OperationIDKey), ctx.Value(OperationIDKey).(string)).
Msg(fmt.Sprintf("%s (client stream start)", formatMethod(stream.Spec().Procedure)))
res, err := handler(ctx, stream)
logEvent := getErrDebugLogEvent(logger, err).
Str("method", DefaultHTTPMethod+" "+stream.Spec().Procedure).
Str(string(OperationIDKey), ctx.Value(OperationIDKey).(string))
if err != nil {
logEvent = logEvent.Int("error_code", int(connect.CodeOf(err)))
}
if res != nil && err == nil {
logEvent = logEvent.Interface("response", res.Any())
}
if res == nil && err == nil {
logEvent = logEvent.Interface("response", nil)
}
logEvent.Msg(fmt.Sprintf("%s (client stream end)", formatMethod(stream.Spec().Procedure)))
return res, err
}
// Return logger with error level if err is not nil, otherwise return logger with debug level
func getErrDebugLogEvent(logger *zerolog.Logger, err error) *zerolog.Event {
if err != nil {
return logger.Error().Err(err) //nolint:zerologlint // this builds an event, it is not expected to return it
}
return logger.Debug() //nolint:zerologlint // this builds an event, it is not expected to return it
}

@@ -1,37 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package logs
import (
"context"
"io"
"os"
"time"
"github.com/rs/zerolog"
"git.omukk.dev/wrenn/sandbox/envd/internal/host"
"git.omukk.dev/wrenn/sandbox/envd/internal/logs/exporter"
)
func NewLogger(ctx context.Context, isNotFC bool, mmdsChan <-chan *host.MMDSOpts) *zerolog.Logger {
zerolog.TimestampFieldName = "timestamp"
zerolog.TimeFieldFormat = time.RFC3339Nano
exporters := []io.Writer{}
if isNotFC {
exporters = append(exporters, os.Stdout)
} else {
exporters = append(exporters, exporter.NewHTTPLogsExporter(ctx, isNotFC, mmdsChan), os.Stdout)
}
l := zerolog.
New(io.MultiWriter(exporters...)).
With().
Timestamp().
Logger().
Level(zerolog.DebugLevel)
return &l
}

@@ -1,49 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package permissions
import (
"context"
"fmt"
"os/user"
"connectrpc.com/authn"
"connectrpc.com/connect"
"git.omukk.dev/wrenn/sandbox/envd/internal/execcontext"
)
func AuthenticateUsername(_ context.Context, req authn.Request) (any, error) {
username, _, ok := req.BasicAuth()
if !ok {
// When no username is provided, ignore the authentication method (not all endpoints require it)
// Missing user is then handled in the GetAuthUser function
return nil, nil
}
u, err := GetUser(username)
if err != nil {
return nil, authn.Errorf("invalid username: '%s'", username)
}
return u, nil
}
func GetAuthUser(ctx context.Context, defaultUser string) (*user.User, error) {
u, ok := authn.GetInfo(ctx).(*user.User)
if !ok {
username, err := execcontext.ResolveDefaultUsername(nil, defaultUser)
if err != nil {
return nil, connect.NewError(connect.CodeUnauthenticated, fmt.Errorf("no user specified"))
}
u, err := GetUser(username)
if err != nil {
return nil, authn.Errorf("invalid default user: '%s'", username)
}
return u, nil
}
return u, nil
}

@@ -1,31 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package permissions
import (
"strconv"
"time"
"connectrpc.com/connect"
)
const defaultKeepAliveInterval = 90 * time.Second
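// GetKeepAliveTicker builds a ticker from the client-supplied
// "Keepalive-Ping-Interval" header (in seconds), falling back to 90s when
// the header is absent or unparsable; the returned func resets the ticker.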
func GetKeepAliveTicker[T any](req *connect.Request[T]) (*time.Ticker, func()) {
keepAliveIntervalHeader := req.Header().Get("Keepalive-Ping-Interval")
var interval time.Duration
keepAliveIntervalInt, err := strconv.Atoi(keepAliveIntervalHeader)
if err != nil {
interval = defaultKeepAliveInterval
} else {
interval = time.Duration(keepAliveIntervalInt) * time.Second
}
ticker := time.NewTicker(interval)
return ticker, func() {
ticker.Reset(interval)
}
}

@@ -1,98 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package permissions
import (
"errors"
"fmt"
"os"
"os/user"
"path/filepath"
"slices"
"git.omukk.dev/wrenn/sandbox/envd/internal/execcontext"
)
func expand(path, homedir string) (string, error) {
if len(path) == 0 {
return path, nil
}
if path[0] != '~' {
return path, nil
}
if len(path) > 1 && path[1] != '/' && path[1] != '\\' {
return "", errors.New("cannot expand user-specific home dir")
}
return filepath.Join(homedir, path[1:]), nil
}
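// By example (assuming a home dir of "/home/user"):
//
// expand("~/work", "/home/user") == "/home/user/work"
// expand("/abs/path", "/home/user") == "/abs/path"
// expand("~other/x", "/home/user") -> error (user-specific homes unsupported)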
func ExpandAndResolve(path string, user *user.User, defaultPath *string) (string, error) {
path = execcontext.ResolveDefaultWorkdir(path, defaultPath)
path, err := expand(path, user.HomeDir)
if err != nil {
return "", fmt.Errorf("failed to expand path '%s' for user '%s': %w", path, user.Username, err)
}
if filepath.IsAbs(path) {
return path, nil
}
// The filepath.Abs can correctly resolve paths like /home/user/../file
path = filepath.Join(user.HomeDir, path)
abs, err := filepath.Abs(path)
if err != nil {
return "", fmt.Errorf("failed to resolve path '%s' for user '%s' with home dir '%s': %w", path, user.Username, user.HomeDir, err)
}
return abs, nil
}
func getSubpaths(path string) (subpaths []string) {
for {
subpaths = append(subpaths, path)
parent := filepath.Dir(path)
// Stop at the root; also stop once Dir no longer shrinks the path
// (e.g. "." for relative inputs), which would otherwise loop forever.
if parent == "/" || parent == path {
break
}
path = parent
}
slices.Reverse(subpaths)
return subpaths
}
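// By example: getSubpaths("/a/b/c") returns ["/a", "/a/b", "/a/b/c"],
// root-first, so EnsureDirs can create parents before children.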
func EnsureDirs(path string, uid, gid int) error {
subpaths := getSubpaths(path)
for _, subpath := range subpaths {
info, err := os.Stat(subpath)
if err != nil && !os.IsNotExist(err) {
return fmt.Errorf("failed to stat directory: %w", err)
}
if err != nil && os.IsNotExist(err) {
err = os.Mkdir(subpath, 0o755)
if err != nil {
return fmt.Errorf("failed to create directory: %w", err)
}
err = os.Chown(subpath, uid, gid)
if err != nil {
return fmt.Errorf("failed to chown directory: %w", err)
}
continue
}
if !info.IsDir() {
return fmt.Errorf("path is a file: %s", subpath)
}
}
return nil
}

@@ -1,46 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package permissions
import (
"fmt"
"os/user"
"strconv"
)
func GetUserIdUints(u *user.User) (uid, gid uint32, err error) {
newUID, err := strconv.ParseUint(u.Uid, 10, 32)
if err != nil {
return 0, 0, fmt.Errorf("error parsing uid '%s': %w", u.Uid, err)
}
newGID, err := strconv.ParseUint(u.Gid, 10, 32)
if err != nil {
return 0, 0, fmt.Errorf("error parsing gid '%s': %w", u.Gid, err)
}
return uint32(newUID), uint32(newGID), nil
}
func GetUserIdInts(u *user.User) (uid, gid int, err error) {
newUID, err := strconv.ParseInt(u.Uid, 10, strconv.IntSize)
if err != nil {
return 0, 0, fmt.Errorf("error parsing uid '%s': %w", u.Uid, err)
}
newGID, err := strconv.ParseInt(u.Gid, 10, strconv.IntSize)
if err != nil {
return 0, 0, fmt.Errorf("error parsing gid '%s': %w", u.Gid, err)
}
return int(newUID), int(newGID), nil
}
func GetUser(username string) (u *user.User, err error) {
u, err = user.Lookup(username)
if err != nil {
return nil, fmt.Errorf("error looking up user '%s': %w", username, err)
}
return u, nil
}

@@ -1,165 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
package port
import (
"bufio"
"encoding/hex"
"fmt"
"net"
"os"
"strconv"
"strings"
"syscall"
)
// ConnStat represents a single TCP connection read from /proc/net/tcp(6).
// It contains only the fields needed by the port scanner and forwarder.
type ConnStat struct {
LocalIP string
LocalPort uint32
Status string
Family uint32 // syscall.AF_INET or syscall.AF_INET6
Inode uint64 // socket inode, unique per connection
}
// tcpStates maps the hex state values from /proc/net/tcp to string names
// matching the gopsutil convention used by ScannerFilter.
var tcpStates = map[string]string{
"01": "ESTABLISHED",
"02": "SYN_SENT",
"03": "SYN_RECV",
"04": "FIN_WAIT1",
"05": "FIN_WAIT2",
"06": "TIME_WAIT",
"07": "CLOSE",
"08": "CLOSE_WAIT",
"09": "LAST_ACK",
"0A": "LISTEN",
"0B": "CLOSING",
}
// ReadTCPConnections reads /proc/net/tcp and /proc/net/tcp6 and returns
// all TCP connections. This avoids the /proc/{pid}/fd walk that gopsutil
// performs, which is unsafe across Firecracker snapshot/restore boundaries.
func ReadTCPConnections() ([]ConnStat, error) {
var conns []ConnStat
tcp4, err := parseProcNetTCP("/proc/net/tcp", syscall.AF_INET)
if err != nil {
return nil, fmt.Errorf("parse /proc/net/tcp: %w", err)
}
conns = append(conns, tcp4...)
tcp6, err := parseProcNetTCP("/proc/net/tcp6", syscall.AF_INET6)
if err != nil {
return nil, fmt.Errorf("parse /proc/net/tcp6: %w", err)
}
conns = append(conns, tcp6...)
return conns, nil
}
// parseProcNetTCP reads a single /proc/net/tcp or /proc/net/tcp6 file.
//
// Format (fields are whitespace-separated):
//
// sl local_address rem_address st tx_queue:rx_queue tr:tm->when retrnsmt uid timeout inode
// 0: 0100007F:1F90 00000000:0000 0A 00000000:00000000 00:00000000 00000000 1000 0 12345
func parseProcNetTCP(path string, family uint32) ([]ConnStat, error) {
f, err := os.Open(path)
if err != nil {
return nil, err
}
defer f.Close()
var conns []ConnStat
scanner := bufio.NewScanner(f)
// Skip header line.
scanner.Scan()
for scanner.Scan() {
line := strings.TrimSpace(scanner.Text())
if line == "" {
continue
}
fields := strings.Fields(line)
if len(fields) < 10 {
continue
}
// fields[1] = local_address (hex_ip:hex_port)
ip, port, err := parseHexAddr(fields[1], family)
if err != nil {
continue
}
// fields[3] = state (hex)
state, ok := tcpStates[fields[3]]
if !ok {
state = "UNKNOWN"
}
// fields[9] = inode
inode, err := strconv.ParseUint(fields[9], 10, 64)
if err != nil {
continue
}
conns = append(conns, ConnStat{
LocalIP: ip,
LocalPort: port,
Status: state,
Family: family,
Inode: inode,
})
}
return conns, scanner.Err()
}
// parseHexAddr parses "HEXIP:HEXPORT" from /proc/net/tcp.
// IPv4 addresses are 8 hex chars (4 bytes, little-endian per 32-bit word).
// IPv6 addresses are 32 hex chars (16 bytes, little-endian per 32-bit word).
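// For example, parseHexAddr("0100007F:1F90", syscall.AF_INET) yields
// ("127.0.0.1", 8080).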
func parseHexAddr(s string, family uint32) (string, uint32, error) {
parts := strings.SplitN(s, ":", 2)
if len(parts) != 2 {
return "", 0, fmt.Errorf("invalid address: %s", s)
}
port64, err := strconv.ParseUint(parts[1], 16, 32)
if err != nil {
return "", 0, err
}
ipHex := parts[0]
ipBytes, err := hex.DecodeString(ipHex)
if err != nil {
return "", 0, err
}
var ip net.IP
if family == syscall.AF_INET {
if len(ipBytes) != 4 {
return "", 0, fmt.Errorf("invalid IPv4 length: %d", len(ipBytes))
}
// /proc/net/tcp stores IPv4 as a single little-endian 32-bit word.
ip = net.IPv4(ipBytes[3], ipBytes[2], ipBytes[1], ipBytes[0])
} else {
if len(ipBytes) != 16 {
return "", 0, fmt.Errorf("invalid IPv6 length: %d", len(ipBytes))
}
// /proc/net/tcp6 stores IPv6 as four little-endian 32-bit words.
ip = make(net.IP, 16)
for i := 0; i < 4; i++ {
ip[i*4+0] = ipBytes[i*4+3]
ip[i*4+1] = ipBytes[i*4+2]
ip[i*4+2] = ipBytes[i*4+1]
ip[i*4+3] = ipBytes[i*4+0]
}
}
return ip.String(), uint32(port64), nil
}

@@ -1,240 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
// Modifications by M/S Omukk
// portf (port forward) periodically scans open TCP ports on 127.0.0.1 (or localhost)
// and launches a `socat` process for every such port in the background.
// socat forwards traffic from `sourceIP`:port to 127.0.0.1:port.
// WARNING: portf isn't thread safe!
package port
import (
"context"
"fmt"
"net"
"os/exec"
"syscall"
"github.com/rs/zerolog"
"git.omukk.dev/wrenn/sandbox/envd/internal/services/cgroups"
)
type PortState string
const (
PortStateForward PortState = "FORWARD"
PortStateDelete PortState = "DELETE"
)
var defaultGatewayIP = net.IPv4(169, 254, 0, 21)
type PortToForward struct {
socat *exec.Cmd
// Socket inode of the listening socket (unique per connection).
inode uint64
// family version of the ip.
family uint32
state PortState
port uint32
}
type Forwarder struct {
logger *zerolog.Logger
cgroupManager cgroups.Manager
// Map of ports that are being currently forwarded.
ports map[string]*PortToForward
scannerSubscriber *ScannerSubscriber
sourceIP net.IP
}
func NewForwarder(
logger *zerolog.Logger,
scanner *Scanner,
cgroupManager cgroups.Manager,
) *Forwarder {
scannerSub := scanner.AddSubscriber(
logger,
"port-forwarder",
// We only want to forward ports that are actively listening on localhost.
&ScannerFilter{
IPs: []string{"127.0.0.1", "localhost", "::1"},
State: "LISTEN",
},
)
return &Forwarder{
logger: logger,
sourceIP: defaultGatewayIP,
ports: make(map[string]*PortToForward),
scannerSubscriber: scannerSub,
cgroupManager: cgroupManager,
}
}
func (f *Forwarder) StartForwarding(ctx context.Context) {
if f.scannerSubscriber == nil {
f.logger.Error().Msg("Cannot start forwarding because scanner subscriber is nil")
return
}
for {
select {
case <-ctx.Done():
f.stopAllForwarding()
return
case procs, ok := <-f.scannerSubscriber.Messages:
if !ok {
f.stopAllForwarding()
return
}
// Now we are going to refresh all ports that are being forwarded in the `ports` map. Maybe add new ones
// and maybe remove some.
// Go through the ports that are currently being forwarded and set all of them
// to the `DELETE` state. We don't know yet if they will be there after refresh.
for _, v := range f.ports {
v.state = PortStateDelete
}
// Let's refresh our map of currently forwarded ports and mark the currently opened ones with the "FORWARD" state.
// This will make sure we won't delete them later.
for _, p := range procs {
key := fmt.Sprintf("%d-%d", p.Inode, p.LocalPort)
// We check if the opened port is in our map of forwarded ports.
val, portOk := f.ports[key]
if portOk {
// Just mark the port as being forwarded so we don't delete it.
// The actual socat process that handles forwarding should be running from the last iteration.
val.state = PortStateForward
} else {
f.logger.Debug().
Str("ip", p.LocalIP).
Uint32("port", p.LocalPort).
Uint32("family", familyToIPVersion(p.Family)).
Str("state", p.Status).
Msg("Detected new opened port on localhost that is not forwarded")
// The opened port wasn't in the map so we create a new PortToForward and start forwarding.
ptf := &PortToForward{
inode: p.Inode,
port: p.LocalPort,
state: PortStateForward,
family: familyToIPVersion(p.Family),
}
f.ports[key] = ptf
f.startPortForwarding(ctx, ptf)
}
}
// We go through the ports map one more time and stop forwarding all ports
// that stayed marked as "DELETE".
for _, v := range f.ports {
if v.state == PortStateDelete {
f.stopPortForwarding(v)
}
}
}
}
}
func (f *Forwarder) stopAllForwarding() {
for _, p := range f.ports {
f.stopPortForwarding(p)
}
f.ports = make(map[string]*PortToForward)
}
func (f *Forwarder) startPortForwarding(_ context.Context, p *PortToForward) {
// https://unix.stackexchange.com/questions/311492/redirect-application-listening-on-localhost-to-listening-on-external-interface
// socat -d -d TCP4-LISTEN:4000,bind=169.254.0.21,fork TCP4:localhost:4000
// reuseaddr is used to fix the "Address already in use" error when restarting socat quickly.
//
// We use exec.Command (not CommandContext) because stopAllForwarding kills
// socat via SIGKILL to the process group. CommandContext would also call
// cmd.Wait() on context cancellation, racing with the wait goroutine below.
cmd := exec.Command(
"socat", "-d", "-d", "-d",
fmt.Sprintf("TCP4-LISTEN:%v,bind=%s,reuseaddr,fork", p.port, f.sourceIP.To4()),
fmt.Sprintf("TCP%d:localhost:%v", p.family, p.port),
)
cgroupFD, ok := f.cgroupManager.GetFileDescriptor(cgroups.ProcessTypeSocat)
cmd.SysProcAttr = &syscall.SysProcAttr{
Setpgid: true,
CgroupFD: cgroupFD,
UseCgroupFD: ok,
}
f.logger.Debug().
Str("socatCmd", cmd.String()).
Uint64("inode", p.inode).
Uint32("family", p.family).
IPAddr("sourceIP", f.sourceIP.To4()).
Uint32("port", p.port).
Msg("About to start port forwarding")
if err := cmd.Start(); err != nil {
f.logger.
Error().
Str("socatCmd", cmd.String()).
Err(err).
Msg("Failed to start port forwarding - failed to start socat")
return
}
go func() {
if err := cmd.Wait(); err != nil {
f.logger.
Debug().
Str("socatCmd", cmd.String()).
Err(err).
Msg("Port forwarding socat process exited")
}
}()
p.socat = cmd
}
func (f *Forwarder) stopPortForwarding(p *PortToForward) {
if p.socat == nil {
return
}
defer func() { p.socat = nil }()
logger := f.logger.With().
Str("socatCmd", p.socat.String()).
Uint64("inode", p.inode).
Uint32("family", p.family).
IPAddr("sourceIP", f.sourceIP.To4()).
Uint32("port", p.port).
Logger()
logger.Debug().Msg("Stopping port forwarding")
if err := syscall.Kill(-p.socat.Process.Pid, syscall.SIGKILL); err != nil {
logger.Error().Err(err).Msg("Failed to kill process group")
return
}
logger.Debug().Msg("Stopped port forwarding")
}
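// The Setpgid/negative-PID pairing used above is what lets a single Kill
// take down socat together with every per-connection child it forked. A
// hedged standalone sketch of the pattern, with `sleep` standing in for
// socat (assumes os/exec and syscall are imported):
func groupKillSketch() error {
	cmd := exec.Command("sleep", "60")
	// Setpgid gives the child its own process group with pgid == child pid.
	cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
	if err := cmd.Start(); err != nil {
		return err
	}
	// A negative pid addresses the whole group, so any children the
	// process forked die with it.
	if err := syscall.Kill(-cmd.Process.Pid, syscall.SIGKILL); err != nil {
		return err
	}
	return cmd.Wait() // reap the child; Wait reports the kill as an error
}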
func familyToIPVersion(family uint32) uint32 {
switch family {
case syscall.AF_INET:
return 4
case syscall.AF_INET6:
return 6
default:
return 0 // Unknown or unsupported family
}
}

// SPDX-License-Identifier: Apache-2.0
// Modifications by M/S Omukk
package port
import (
"context"
"sync"
"time"
"github.com/rs/zerolog"
)
type Scanner struct {
period time.Duration
// Plain mutex-protected map instead of concurrent-map. The concurrent-map
// library's Items() spawns goroutines and uses a WaitGroup internally,
// which corrupts Go runtime semaphore state across Firecracker snapshot/restore.
mu sync.RWMutex
subs map[string]*ScannerSubscriber
}
func NewScanner(period time.Duration) *Scanner {
return &Scanner{
period: period,
subs: make(map[string]*ScannerSubscriber),
}
}
func (s *Scanner) AddSubscriber(logger *zerolog.Logger, id string, filter *ScannerFilter) *ScannerSubscriber {
subscriber := NewScannerSubscriber(logger, id, filter)
s.mu.Lock()
s.subs[id] = subscriber
s.mu.Unlock()
return subscriber
}
func (s *Scanner) Unsubscribe(sub *ScannerSubscriber) {
s.mu.Lock()
delete(s.subs, sub.ID())
s.mu.Unlock()
sub.Destroy()
}
// ScanAndBroadcast starts scanning open TCP ports and broadcasts every open port to all subscribers.
// It exits when ctx is cancelled.
func (s *Scanner) ScanAndBroadcast(ctx context.Context) {
for {
// Read directly from /proc/net/tcp and /proc/net/tcp6 instead of
// using gopsutil's net.Connections(), which walks /proc/{pid}/fd
// and causes Go runtime corruption after Firecracker snapshot/restore.
		conns, err := ReadTCPConnections()
		// Skip the broadcast on a transient /proc read error: sending an
		// empty list would make the forwarder tear down every active forward.
		if err == nil {
			s.mu.RLock()
			for _, sub := range s.subs {
				sub.Signal(ctx, conns)
			}
			s.mu.RUnlock()
		}
select {
case <-ctx.Done():
return
case <-time.After(s.period):
}
}
}
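// A hedged usage sketch of the scanner's pub/sub surface, wiring one
// subscriber and draining a few cycles. The logger, filter values, and
// three-cycle drain are illustrative; assumes context, os, and time are
// imported alongside zerolog.
func scannerUsageSketch() {
	ctx, cancel := context.WithCancel(context.Background())
	logger := zerolog.New(os.Stderr)

	scanner := NewScanner(time.Second)
	sub := scanner.AddSubscriber(&logger, "demo", &ScannerFilter{
		IPs:   []string{"127.0.0.1", "::1"},
		State: "LISTEN",
	})
	go scanner.ScanAndBroadcast(ctx)

	// Signal blocks until we receive (or ctx is cancelled), so the
	// scanner and this consumer stay in lockstep.
	for i := 0; i < 3; i++ {
		for _, c := range <-sub.Messages {
			logger.Info().Str("ip", c.LocalIP).Uint32("port", c.LocalPort).Msg("listening")
		}
	}

	// Stop the scanner before closing the channel it sends on.
	cancel()
	scanner.Unsubscribe(sub)
}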

// SPDX-License-Identifier: Apache-2.0
// Modifications by M/S Omukk
package port
import (
"context"
"github.com/rs/zerolog"
)
// If a listener/subscriber pattern is ever needed elsewhere, this concrete
// implementation should give way to a combination of generics and interfaces.
type ScannerSubscriber struct {
logger *zerolog.Logger
filter *ScannerFilter
	Messages chan []ConnStat
id string
}
func NewScannerSubscriber(logger *zerolog.Logger, id string, filter *ScannerFilter) *ScannerSubscriber {
return &ScannerSubscriber{
logger: logger,
id: id,
filter: filter,
Messages: make(chan []ConnStat),
}
}
func (ss *ScannerSubscriber) ID() string {
return ss.id
}
func (ss *ScannerSubscriber) Destroy() {
close(ss.Messages)
}
// Signal sends the (filtered) connection list to the subscriber. It respects
// ctx cancellation so the scanner goroutine is never stuck waiting for a
// consumer that has already exited.
func (ss *ScannerSubscriber) Signal(ctx context.Context, conns []ConnStat) {
var payload []ConnStat
if ss.filter == nil {
payload = conns
} else {
filtered := []ConnStat{}
for i := range conns {
if ss.filter.Match(&conns[i]) {
filtered = append(filtered, conns[i])
}
}
payload = filtered
}
select {
case ss.Messages <- payload:
case <-ctx.Done():
}
}

// SPDX-License-Identifier: Apache-2.0
package port
import (
"slices"
)
type ScannerFilter struct {
State string
IPs []string
}
func (sf *ScannerFilter) Match(conn *ConnStat) bool {
	// An empty filter matches nothing.
	if sf.State == "" && len(sf.IPs) == 0 {
		return false
	}
	return sf.State == conn.Status && slices.Contains(sf.IPs, conn.LocalIP)
}
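// Concretely, with the filter the forwarder installs, Match behaves as
// below. A hedged sketch: ConnStat's LocalIP/Status fields are the ones
// this package references elsewhere; the literal values are illustrative.
func matchSketch() {
	f := &ScannerFilter{State: "LISTEN", IPs: []string{"127.0.0.1", "localhost", "::1"}}

	f.Match(&ConnStat{LocalIP: "127.0.0.1", Status: "LISTEN"}) // true: loopback listener
	f.Match(&ConnStat{LocalIP: "127.0.0.1", Status: "ESTAB"})  // false: state differs
	f.Match(&ConnStat{LocalIP: "0.0.0.0", Status: "LISTEN"})   // false: wildcard bind, not loopback
	(&ScannerFilter{}).Match(&ConnStat{})                      // false: empty filter matches nothing
}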

// SPDX-License-Identifier: Apache-2.0
// Modifications by M/S Omukk
package port
import (
"context"
"runtime"
"runtime/debug"
"sync"
"time"
"github.com/rs/zerolog"
"git.omukk.dev/wrenn/sandbox/envd/internal/services/cgroups"
)
// PortSubsystem owns the port scanner and forwarder lifecycle.
// It supports stop/restart across Firecracker snapshot/restore cycles.
type PortSubsystem struct {
logger *zerolog.Logger
cgroupManager cgroups.Manager
period time.Duration
mu sync.Mutex
cancel context.CancelFunc
wg *sync.WaitGroup // per-cycle WaitGroup; nil when not running
running bool
}
// NewPortSubsystem creates a new PortSubsystem. Call Start() to begin scanning.
func NewPortSubsystem(logger *zerolog.Logger, cgroupManager cgroups.Manager, period time.Duration) *PortSubsystem {
return &PortSubsystem{
logger: logger,
cgroupManager: cgroupManager,
period: period,
}
}
// Start creates a fresh scanner and forwarder, launching their goroutines.
// Safe to call multiple times; does nothing if already running.
func (p *PortSubsystem) Start(parentCtx context.Context) {
p.mu.Lock()
defer p.mu.Unlock()
if p.running {
return
}
ctx, cancel := context.WithCancel(parentCtx)
p.cancel = cancel
p.running = true
// Allocate a fresh WaitGroup for this lifecycle so a concurrent Stop
// on the previous cycle's WaitGroup cannot interfere.
wg := &sync.WaitGroup{}
p.wg = wg
scanner := NewScanner(p.period)
forwarder := NewForwarder(p.logger, scanner, p.cgroupManager)
wg.Add(2)
go func() {
defer wg.Done()
forwarder.StartForwarding(ctx)
}()
go func() {
defer wg.Done()
scanner.ScanAndBroadcast(ctx)
}()
}
// Stop quiesces the scanner and forwarder goroutines and forces a GC cycle
// to put the Go runtime's page allocator in a consistent state before snapshot.
// Blocks until both goroutines have exited. Safe to call if already stopped.
func (p *PortSubsystem) Stop() {
p.mu.Lock()
if !p.running {
p.mu.Unlock()
return
}
cancelFn := p.cancel
wg := p.wg
p.cancel = nil
p.wg = nil
p.running = false
p.mu.Unlock()
cancelFn()
wg.Wait()
// Force two GC cycles to ensure all spans are swept and the page
// allocator summary tree is fully consistent before the VM is frozen.
runtime.GC()
runtime.GC()
debug.FreeOSMemory()
}
// Restart stops the subsystem (if running) and starts it again with a fresh
// scanner and forwarder. Used after snapshot restore via PostInit.
func (p *PortSubsystem) Restart(parentCtx context.Context) {
p.Stop()
p.Start(parentCtx)
}
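// A hedged lifecycle sketch showing how a guest agent might drive the
// subsystem across a snapshot/restore cycle. Only the PortSubsystem API
// comes from this file; the logger, cgroup manager, and call sites for the
// pause/restore hooks are hypothetical.
func subsystemLifecycleSketch(ctx context.Context, logger *zerolog.Logger, cg cgroups.Manager) {
	ps := NewPortSubsystem(logger, cg, time.Second)
	ps.Start(ctx)

	// Before the VM is frozen: quiesce goroutines and settle the heap.
	ps.Stop()

	// After restore (e.g. from PostInit): re-arm with a fresh scanner
	// and forwarder.
	ps.Restart(ctx)
}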
