forked from wrenn/wrenn

180 Commits

Author SHA1 Message Date
a265c15c4d Add admin user management with is_active enforcement
Admin users page at /admin/users with paginated user list showing name,
email, team counts, role, join date, and active status toggle. Inactive
users are blocked from all authenticated endpoints immediately via DB
check in JWT middleware. OAuth login errors now show human-readable
messages on the login page.
2026-04-15 03:58:44 +06:00
d332630267 Add admin teams management page
Admin panel now includes a Teams page with paginated listing of all teams
(including soft-deleted), BYOC enable with confirmation dialog, and team
deletion with active capsule warnings. Shows member count, owner info,
active capsules, and channel count per team.
2026-04-15 03:36:37 +06:00
587f6ed8ad Merge pull request 'Implemented least-loaded host scheduler with bottleneck-first strategy' (#25) from feat/host-scheduler into dev
Reviewed-on: wrenn/wrenn#25
2026-04-14 21:03:25 +00:00
82d281b5b5 Implement least-loaded host scheduler with bottleneck-first strategy
Replace round-robin scheduling with resource-aware host selection that
picks the host with the most headroom at its tightest resource. Extends
the HostScheduler interface with memory/disk params for admission control.
2026-04-15 03:02:29 +06:00
17d5d07b3a Removed unused env vars from env example 2026-04-15 02:19:28 +06:00
71b87020c9 Remove redundant comments from login page glow animation 2026-04-14 04:32:17 +06:00
516890c49a Add background process execution API
Start long-running processes (web servers, daemons) without blocking the
HTTP request. Leverages envd's existing background process support
(context.Background(), List, Connect, SendSignal RPCs) and wires it
through the host agent and control plane layers.

New API surface:
- POST /v1/capsules/{id}/exec with background:true → 202 {pid, tag}
- GET /v1/capsules/{id}/processes → list running processes
- DELETE /v1/capsules/{id}/processes/{selector} → kill by PID or tag
- WS /v1/capsules/{id}/processes/{selector}/stream → reconnect to output

The {selector} param auto-detects: numeric = PID, string = tag.
Tags are auto-generated as "proc-" + 8 hex chars if not provided.
2026-04-14 03:57:01 +06:00
962860ba74 Pre-pause snapshot signal to prevent Go runtime crash on restore
envd crashes with "fatal error: bad summary data" after Firecracker
snapshot/restore because the page allocator radix tree is inconsistent
when vCPUs are frozen mid-allocation. The port scanner goroutine
allocates heavily every second, making it the primary trigger.

Add POST /snapshot/prepare to envd — the host agent calls it before
vm.Pause to quiesce continuous goroutines and force GC. On restore,
PostInit restarts the port subsystem via the existing /init endpoint.

- New PortSubsystem abstraction with Start/Stop/Restart lifecycle
- Context-based goroutine cancellation (replaces irreversible channel close)
- Context-aware Signal to prevent scanner/forwarder deadlock
- Fix forwarder goroutine leak (was spinning forever on closed channel)
- Kill socat children on stop to prevent orphans across snapshots
- Fix double cmd.Wait panic (exec.Command instead of CommandContext)
2026-04-13 05:21:10 +06:00
117c46a386 Fix: auto-admin didn't work for OAuth users 2026-04-13 05:00:37 +06:00
d828a6be08 Normalize dashboard page headers: add divider line and align button layout
Add consistent mt-6 border-b divider to Capsules, Metrics, and Templates
headers. Align Channels header to match Keys page pattern (items-center,
description inside the title group).
2026-04-13 04:59:40 +06:00
bbdb44afee Merge pull request 'Added manual template building' (#24) from feat/admin-panel into dev
Reviewed-on: wrenn/wrenn#24
2026-04-12 22:44:39 +00:00
784fe5c7a8 Polish admin capsule pages and improve shared components
- Admin list: remove redundant Open button, normalize with dashboard
  patterns (sorting, search highlight, auto-refresh, animations)
- Admin detail: breadcrumb header, status bar, visibility polling
- FilesTab: add treeOnly prop, compact mode uses 2/7 tree + 5/7 preview
  split, expand tree to full width when no file selected, improve copy
- MetricsPanel: hide Live badge in compact layout (redundant with status)
- DestroyDialog: accept destroyFn prop for admin capsule deletion
2026-04-13 04:41:51 +06:00
60c0de670c Extract MetricsPanel component and use it in admin capsule detail page
Moves all Chart.js metrics logic (polling, smoothing, chart init/update)
into a reusable MetricsPanel component with 'full' and 'compact' layout
modes. The admin capsule detail page now reuses MetricsPanel, TerminalTab,
and FilesTab — no duplicated code.
2026-04-13 04:16:53 +06:00
90bea52ccd Add admin capsule management, fix file browser for special files, normalize dialog styles
- Admin capsule CRUD: list, create (platform templates), get detail with
  terminal/files/metrics, snapshot, destroy
- First signup auto-promotes to platform admin
- JWT auth via query param for WebSocket connections
- File browser: handle non-regular files (devices, pipes, sockets) gracefully
  instead of showing raw backend errors
- Normalize admin template dialogs to match established dialog patterns:
  remove accent bars, unify animation/shadow/button styles
2026-04-13 04:12:36 +06:00
f920023ecf Block download for non-regular files in file browser
Disable the download button for symlinks and show a dedicated
preview pane explaining the symlink target and suggesting to
navigate to the target file instead. Guard handleDownload against
non-file types as a safety net.
2026-04-13 02:57:38 +06:00
19ddb1ab8b Normalize dialog styles across capsules and templates pages
Aligned all dialog boxes to a consistent pattern: same shadow
(--shadow-dialog), animation (fadeUp 0.2s ease), button sizing
(py-2, duration-150), and hover effects. Added template type
indicator dot to CreateCapsuleDialog combobox. Removed accent
gradient bars from templates page inline dialogs.
2026-04-13 02:48:58 +06:00
5633957b51 Explicit write when mounting rootfs for updates 2026-04-13 02:38:09 +06:00
eb47e22496 Merge pull request 'Fixed crash on non-regular files and connection leaks' (#23) from hotfix/file-browsing-error-for-dev into dev
Reviewed-on: wrenn/wrenn#23
2026-04-12 20:12:46 +00:00
b1595baa19 Updated env.example 2026-04-13 02:10:43 +06:00
da06ecb97b Fix file browser crash on non-regular files and connection leaks
- envd: reject non-regular files (devices, pipes, sockets) in GetFiles
  to prevent infinite reads from /dev/zero, /dev/urandom etc.
- host agent: add context cancellation check in ReadFileStream loop
  with proper Connect error codes
- frontend: abort in-flight file reads on file switch, directory
  navigation, and component teardown via AbortController
- frontend: guard against abort errors surfacing in UI, use try/finally
  for fileLoading state
2026-04-13 02:09:50 +06:00
0d5007089e Merge pull request 'Updated dependencies and fixed breaking changes' (#22) from fix/dependency-updates into dev
Reviewed-on: wrenn/wrenn#22
2026-04-12 18:26:57 +00:00
0e7b198768 Bump netlink v1.3.1 and netns v0.0.5
Fixes resource leaks in named namespace handlers, adds IFF_RUNNING
flag deserialization and RouteGetWithOptions.
2026-04-13 00:13:40 +06:00
9ad704c12b Update CP listen port to 9725 and public URL to app.wrenn.dev 2026-04-13 00:01:59 +06:00
0189d030bb Bump frontend and Go x/ dependencies
- vite 7→8, @sveltejs/vite-plugin-svelte 6→7, typescript 5→6
- golang.org/x/crypto v0.49→v0.50, golang.org/x/sys v0.42→v0.43 (both modules)
2026-04-13 00:01:53 +06:00
7b853a05ba Update pgx/v5 from v5.8.0 to v5.9.1
Picks up timestamp scan optimizations, ContextWatcher goroutine leak
fix, and stdlib ResetSession connection pool fix.
2026-04-12 22:50:28 +06:00
108b68c3fa Updated gitignore 2026-04-12 22:24:54 +06:00
565817273d Rename API routes /v1/sandboxes → /v1/capsules 2026-04-12 21:51:04 +06:00
ea65fb584c Merge pull request 'Completed template build for admins' (#21) from feat/admin-template-build into dev
Reviewed-on: wrenn/wrenn#21
2026-04-11 21:41:18 +00:00
25b5258841 COPY multi-source support, configurable rootfs size, build fixes
- COPY now supports multiple sources: COPY a.txt b.txt /dest/
  Last argument is always destination (matches Dockerfile semantics).
- COPY resolves relative destinations against current WORKDIR.
- WRENN_DEFAULT_ROOTFS_SIZE env var (e.g. 5G, 2Gi, 1000M, 512Mi)
  controls template rootfs expansion. Used both at agent startup
  (EnsureImageSizes) and after FlattenRootfs (shrink then re-expand).
- Pre-build now sets WORKDIR /home/wrenn-user after USER switch.
- Extracted archive files get chmod a+rX for readability.
- Path traversal validation on COPY sources.
2026-04-12 03:39:17 +06:00
46c43b95c2 Visual polish 2026-04-12 02:44:40 +06:00
000318f77e Fix runtime env leaking into templates, add hostname to /etc/hosts
- Filter out user-specific env vars (HOME, USER, LOGNAME, SHELL, etc.)
  from template default_env so they don't override envd's per-user
  resolution. Fixes bash sourcing /root/.bashrc as wrenn-user.
- Keep WRENN_SANDBOX (legitimate runtime flag), only filter per-sandbox
  IDs (WRENN_SANDBOX_ID, WRENN_TEMPLATE_ID).
- Add "127.0.0.1 sandbox" to /etc/hosts in wrenn-init.sh so sudo can
  resolve the hostname. Fixes "unable to resolve host sandbox" error.
- Move capsule lifecycle buttons (Pause/Resume/Snapshot/Destroy) to the
  same row as Stats/Files/Terminal tabs.
- Show vCPU/Memory for all template types with Required/Recommended
  tooltips on the user templates page.
2026-04-12 02:43:09 +06:00
f5eeb0ffcc Rename /dashboard/snapshots to /dashboard/templates, show specs for all template types
- Rename snapshots route to templates for consistency with sidebar label
- Show vCPU and Memory values for base templates (not just snapshots),
  with tooltip distinguishing "Required" vs "Recommended"
- Show recipe copy button in admin build logs
- Admin panel defaults to /admin/templates on entry
- WORKDIR creates directory if not present (mkdir -p)
- Use USER command in pre-build instead of raw adduser
- Fix Svelte whitespace stripping in step keyword display
2026-04-12 02:22:43 +06:00
75af2a4f66 Add USER, COPY, ENV persistence to template build system
Implement three new recipe commands for the admin template builder:

- USER <name>: creates the user (adduser + passwordless sudo), switches
  execution context so subsequent RUN/START commands run as that user
  via su wrapping. Last USER becomes the template's default_user.

- COPY <src> <dst>: copies files from an uploaded build archive
  (tar/tar.gz/zip) into the sandbox. Source paths validated against
  traversal. Ownership set to the current USER.

- ENV persistence: accumulated env vars stored in templates.default_env
  (JSONB) and injected via PostInit when sandboxes are created from the
  template, mirroring Docker's image metadata approach.

Supporting changes:
- Pre-build creates wrenn-user as default (via USER command)
- WORKDIR now creates the directory if it doesn't exist (mkdir -p)
- Per-step progress updates (ProgressFunc callback) for live UI
- Multipart form support on POST /v1/admin/builds for archive upload
- Proto: default_user/default_env fields on Create/ResumeSandboxRequest
- Host agent: SetDefaults calls PostInitWithDefaults on envd
- Control plane: reads template defaults, passes on sandbox create/resume
- Frontend: file upload widget, recipe copy button, keyword colors for
  USER/COPY, fixed Svelte whitespace stripping in step display
- Admin panel defaults to /admin/templates instead of /admin/hosts
- Migration adds default_user and default_env to templates and
  template_builds tables
2026-04-12 02:10:01 +06:00
f6c3dc0801 Merge pull request 'bugfix: preserve agent gRPC status codes and map AlreadyExists to 409 Conflict' (#20) from bugfix/mkdir-already-exists-409 into dev
Reviewed-on: wrenn/wrenn#20
2026-04-11 17:59:16 +00:00
f5a9a1209f fix: map CodeAlreadyExists to HTTP 409 Conflict
Updated the `agentErrToHTTP` switch statement to explicitly catch
`connect.CodeAlreadyExists` (as well as `connect.CodeFailedPrecondition`)
and return `http.StatusConflict` (409) instead of falling through to the
default 502 Bad Gateway.
2026-04-11 23:54:48 +06:00
8d0356e372 fix: stop overwriting agent gRPC errors with CodeInternal
Removed the `connect.NewError(connect.CodeInternal, ...)` wrapper in the
Server's MakeDir proxy handler. Previously, this wrapper was catching
specific agent errors (like CodeAlreadyExists) and casting them into
generic Code 13 (Internal) errors, stripping the gRPC metadata.

This change allows the control-plane to act as a transparent pipeline,
ensuring the API gateway can properly interpret and route specific
filesystem failures.
2026-04-11 23:54:23 +06:00
c3c9ced9dd Remove API key auth requirement for sandbox port proxy connections
Sandbox URLs ({port}-{sandbox_id}.{domain}) are now accessible without
authentication. The sandbox ID in the hostname is sufficient for routing.
2026-04-11 13:59:07 +06:00
7d0a21644f Merge pull request 'Visual optimizations for the web UI' (#19) from fix/optimizations into dev
Reviewed-on: wrenn/wrenn#19
2026-04-11 02:24:01 +00:00
26917d432d Add syntax highlighting to file browser, harden capsules list
File browser:
- Add shiki-based syntax highlighting (lazy-loaded, zero initial bundle
  impact) with support for 30+ languages
- Cap highlighting at 2000 lines to avoid freezing on large files
- Pre-compute preview lines as derived state instead of re-splitting
  on every render
- Add content-visibility: auto on code lines for off-screen skip
- Remove per-line CSS transitions (unnecessary paint on 5000 elements)
- Cap row entrance animations to first 30 entries

Capsules list:
- Pause auto-refresh polling when browser tab is hidden
- Add empty state for search with no results
- Fix error state not clearing on successful refresh
- Fix action menu positioning near viewport edges
- Disable create button when no template selected
2026-04-11 07:49:11 +06:00
430fb9e70e Add per-provider brand colors to channels page
Give each provider (Discord, Slack, Teams, Google Chat, Telegram,
Matrix, Webhook) its own distinctive color for badges, row hover
stripes, and dialog tags. Move channel count into the header as a
serif numeral for stronger typographic hierarchy.
2026-04-11 07:14:13 +06:00
0807946d45 Replace template text input with searchable combobox, lock specs for snapshots
Template field is now a filterable dropdown that fetches available
templates on dialog open. Selecting a snapshot auto-fills and disables
vCPU/memory inputs since they must match the original capsule config.
2026-04-11 07:00:59 +06:00
11ca6935a6 Skip row fly-transitions on template filter change to prevent visual flicker
After initial page load animations complete, subsequent filter switches
render instantly (duration: 0) instead of replaying staggered fly-in/out
transitions that caused all rows to flash before filtering took effect.
2026-04-11 06:48:50 +06:00
e2f869bfc2 Minor textual change 2026-04-11 06:23:31 +06:00
21b82c2283 Optimize frontend polling: visibility API, range-based intervals, skip redundant redraws
Adds Page Visibility API to StatsPanel, templates, and capsule detail
pages so polling pauses when the browser tab is hidden. Capsule metrics
now use range-appropriate poll intervals (10s for 5m/10m, up to 120s for
24h) instead of a flat 10s. Chart updates are skipped when the data
fingerprint hasn't changed, avoiding unnecessary Canvas redraws.
2026-04-11 06:20:29 +06:00
dbad418093 Harden channels page: deduplicate dropdowns, add missing provider logos
Consolidate three identical click-outside $effect blocks into a reusable
useClickOutside helper. Extract duplicated events checkbox list into an
eventsDropdownItems snippet shared by create and edit dialogs. Add brand
SVG icons for Teams, Google Chat, and Matrix providers.
2026-04-11 06:18:36 +06:00
2bad843069 Extract SnapshotDialog and DestroyDialog into reusable components
Add lifecycle buttons (pause, resume, snapshot, destroy) to the
individual capsule detail page and refactor both the list and detail
pages to share the new dialog components.
2026-04-11 06:08:19 +06:00
9332f4ac18 Merge pull request 'Terminal connection (PTY)' (#18) from feat/ssh-connection into dev
Reviewed-on: wrenn/wrenn#18
2026-04-10 23:45:10 +00:00
cf191ca821 Harden file browser: cap preview lines, fix race conditions, download UX
- Cap text preview at 5,000 lines with truncation footer and download link
  to prevent browser freeze on large files (300k+ DOM nodes)
- Add request generation counters to discard stale API responses from
  rapid directory/file clicking
- Guard initial $effect with hasInitiallyLoaded to prevent double-load
- Add download loading state with spinner and disabled button
- Delay URL.revokeObjectURL by 5s so browser can start download
2026-04-11 05:43:32 +06:00
d2202c4f49 Harden terminal: binary-safe base64, auto-reconnect, session limits
- Replace btoa/atob with TextEncoder/TextDecoder for binary-safe base64
  encoding — fixes crash on multi-byte UTF-8 input (emoji, CJK, accents)
- Auto-reconnect on abnormal WebSocket close while session is live
- Cap concurrent sessions at 8 with disabled "+" button at limit
- Guard all ws.send() calls with try/catch via wsSend() wrapper
- Clean up input flush timer on session close and component destroy
- Close all sessions when capsule stops running (isRunning → false)
- Clean up orphaned display entry if DOM container fails to render
2026-04-11 05:35:53 +06:00
1826af37a5 Increase multiplexer fork buffer to 4096 to prevent output drops
64-entry buffer was too small for high-throughput PTY output (e.g.
ls -laihR /). The consumer couldn't drain fast enough over the RPC
stream, causing the non-blocking send fallback to silently discard
data. 4096 entries (~64MB at 16KB/chunk) handles sustained output
without drops while still preventing deadlock on stuck consumers.
2026-04-11 05:16:43 +06:00
acc721526d Polish terminal tab: merge status bar into tab strip, normalize sizing
- Merge separate status bar into unified tab bar (one row of chrome instead of two)
- Bump font/button/icon sizes to match rest of capsule page
- VS Code-style tab separators with intelligent hiding around active tab
- Hide tab bar when no sessions exist (empty state has its own CTA)
- Fix xterm background gaps by painting viewport/screen backgrounds
- Increase terminal font from 13px to 14px
2026-04-11 05:10:46 +06:00
4b2ff279f7 Add terminal tab to capsule detail page and fix envd process lookup bugs
- Add multi-session Terminal tab with xterm.js (session tabs, close, reconnect)
- Keep terminal mounted across tab switches to preserve sessions
- Persist active tab in URL (?tab=terminal) so refresh stays on terminal
- Buffer keystrokes (50ms) to reduce per-character RPC overhead
- Add WebSocket auth via ?token= query param for browser WS connections
- Enable ws:true in Vite dev proxy for WebSocket support

envd fixes (pre-existing bugs exposed by multi-session terminals):
- Fix getProcess tag Range: inverted return values caused early stop when
  multiple tagged processes existed, making SendInput fail with "not found"
- Fix multiplexer deadlock: blocking send to cancelled fork's unbuffered
  channel prevented process cleanup. Now uses buffered channels (cap 64)
  with non-blocking fallback
2026-04-11 04:27:16 +06:00
ab3fc4a807 Add interactive PTY terminal sessions for sandboxes
Wire envd's existing PTY process capabilities through the full stack:
hostagent proto (4 new RPCs: PtyAttach, PtySendInput, PtyResize, PtyKill),
envdclient, sandbox manager, and a new WebSocket endpoint at
GET /v1/sandboxes/{id}/pty with bidirectional JSON message protocol.

Sessions use tag-based identity for disconnect/reconnect support,
base64-encoded PTY data for binary safety, and a 120s inactivity timeout.
2026-04-11 02:42:59 +06:00
09f030d202 Replace file browser not-running state with centered empty state
The small bordered card looked broken and misaligned — now uses a
full-width centered layout with floating icon, matching the app's
empty-state pattern.
2026-04-10 23:32:17 +06:00
43c15c86de Merge pull request 'Added browser based filesystem interactions' (#16) from feat/file-interactions into dev
Reviewed-on: wrenn/wrenn#16
2026-04-10 13:40:39 +00:00
851f54a9e1 Polish file browser: add up button, normalize design, improve UX
Add parent directory button in breadcrumb bar, remove redundant ..
row from file list. Normalize styles to use design system tokens
(accent glow, iconFloat, fadeUp). Improve empty states, add staggered
row entrance animation, file extension badge, and clearer UX copy.
2026-04-10 19:24:24 +06:00
4ed17b2776 Fix stale WRENN_SANDBOX_ID and WRENN_TEMPLATE_ID after snapshot restore
After restoring a VM from snapshot, envd had already completed its initial
MMDS poll, so the metadata files in /run/wrenn/ and env vars retained values
from the original sandbox. Call POST /init after WaitUntilReady on both
resume and create-from-template paths to trigger envd to re-read MMDS.
2026-04-10 19:23:48 +06:00
0e6daaabe0 Fix file browser: use ~ as default path, support tilde expansion
- Default to ~ instead of hardcoded /home/user — envd resolves it
  to the actual home dir of the configured user
- Pass ~ and ~/... paths through to envd for server-side expansion
- Resolve actual absolute path from response entries for breadcrumbs
- Fall back to / if home dir is empty or doesn't exist
- Fix leftover label prop on admin templates CopyButton
2026-04-10 19:10:20 +06:00
82531b735c Add Files tab to capsule detail page with file browser and preview
Implements a split-panel file browser: directory tree on the left with
path input and breadcrumb navigation, file preview on the right with
line numbers. Binary/large files (>10MB) show a download prompt instead.

Also adds CopyButton component across capsule, snapshot, and template
pages, and fixes pre-existing type errors in StatsPanel and admin
templates page.
2026-04-10 18:43:11 +06:00
c9283cac70 Add filesystem operations (list, mkdir, remove) across full stack
Plumb ListDir, MakeDir, and RemovePath through all layers:
REST API → host agent RPC → envdclient → envd. These endpoints
enable a web file browser for sandbox filesystem interaction.

New endpoints (all under requireAPIKeyOrJWT):
- POST /v1/sandboxes/{id}/files/list
- POST /v1/sandboxes/{id}/files/mkdir
- POST /v1/sandboxes/{id}/files/remove
2026-04-10 18:05:13 +06:00
c1987b0bda Merge branch 'main' of git.omukk.dev:wrenn/wrenn into dev 2026-04-10 03:03:04 +06:00
2b31af8fde Merge branch 'main' of git.omukk.dev:wrenn/wrenn into dev 2026-04-10 02:50:50 +06:00
831c898b71 Merge pull request 'Added channels for external notifications' (#13) from feat/channels into dev
Reviewed-on: wrenn/sandbox#13
2026-04-09 19:20:36 +00:00
0f78982186 feat: channel audit logging, name cleaning, message formatting, and dashboard UI
- Add audit log entries for channel create, update, rotate_config, delete
- Clean channel names on create/update (trim, lowercase, spaces → hyphens,
  SafeName validation)
- Format chat notifications with full event details (resource, actor, team,
  timestamp) instead of one-liners
- Fix Discord split-line embeds by setting splitLines=No on shoutrrr URL
- Add channels dashboard page and sidebar navigation
2026-04-10 01:17:03 +06:00
84dd15d22b feat: add notification channels with provider integrations and retry
Implement a channels system for notifying teams via external providers
(Discord, Slack, Teams, Google Chat, Telegram, Matrix, webhook) when
lifecycle events occur (capsule/template/host state changes).

- Channel CRUD API under /v1/channels (JWT-only auth)
- Test endpoint to verify config before saving (POST /v1/channels/test)
- Secret rotation endpoint (PUT /v1/channels/{id}/config)
- AES-256-GCM encryption for provider secrets (WRENN_ENCRYPTION_KEY)
- Redis stream event publishing from audit logger
- Background dispatcher with consumer group and retry (10s, 30s)
- Webhook delivery with HMAC-SHA256 signing (X-WRENN-SIGNATURE)
- shoutrrr integration for chat providers
- Secrets never exposed in API responses
2026-04-09 17:06:06 +06:00
5148b5dd64 Updated CLAUDE.md 2026-04-09 14:28:39 +06:00
37d85ec998 chore: relicense from BSL 1.1 to Apache 2.0
Replace Business Source License with Apache License Version 2.0 across
LICENSE, envd/LICENSE, and NOTICE. Update NOTICE to remove BSL-era
framing that singled out Apache-only portions.
2026-04-09 14:28:19 +06:00
e2beef817d Expose host up/down audit events to BYOC teams and refresh dashboard navigation
Change host marked_down/marked_up audit log scope from "admin" to "team" so
BYOC team members can see when their hosts go unreachable or recover. Rename
BYOC sidebar entry to Hosts, add placeholder billing/usage pages, disable
unimplemented notifications/settings links, and point docs to external site.
2026-04-09 14:24:20 +06:00
a9ca13b238 Changed redis dependency to keydb 2026-04-09 00:47:19 +06:00
e3ffa576ce Fix review findings: IP collision, pause race, proxy path, ENV ordering, conn drain
- Fix IP address collision at slot 32768+ by using bitwise shifts instead of
  byte-truncating division in network slot addressing
- Add per-sandbox lifecycleMu to serialize concurrent Pause/Destroy calls
- Sanitize proxy forwarding path with path.Clean
- Sort ENV keys in recipe shell preamble for deterministic ordering
- Fix ConnTracker goroutine leak by adding cancel channel to Drain/Reset
- Update context_test to assert deterministic ENV ordering
2026-04-08 04:32:41 +06:00
dd50cfdcb1 fix: security hardening from CSO audit
- Add auth failure logging (login, API key, JWT) with IP/email/prefix
- Move OAuth JWT from URL params to short-lived cookies to prevent
  token leakage via browser history, server logs, and Referer headers
- Pin Swagger UI to v5.18.2 with SRI integrity hashes
- Upgrade Go toolchain to 1.25.8 (fixes 5 called stdlib vulns)
- Fix unchecked error in host agent credential refresh
- Add .gstack to .gitignore for security report artifacts
2026-04-08 03:46:31 +06:00
3675ecba65 chore: add gstack skill routing rules to CLAUDE.md 2026-04-08 02:28:02 +06:00
c8615466be Enforce mandatory mTLS for CP↔agent communication
Both the control plane and host agent now refuse to start without valid
mTLS configuration, closing the unauthenticated proxy/RPC attack surface
that existed when running in plain HTTP fallback mode.
2026-04-08 02:25:43 +06:00
2737288a2b Merge pull request 'Changes for a python code interpreter' (#12) from feat/python-code-interpreter into dev
Reviewed-on: wrenn/sandbox#12
2026-04-07 20:18:06 +00:00
0ea0e7cc70 Fix expandEnv regex, init script crash, healthcheck deadline, and test issues
- Fix envRegex: remove spurious (\$)? group that swallowed $$$, handle ${}
- wrenn-init.sh: add || true to networking commands under set -e, remove dead code
- waitForHealthcheck: use context deadline for unlimited retries instead of implicit 100 cap
- Make parseSandboxEnv a package-level function (unused receiver)
- Fix WrappedCommand test: map iteration order dependency, pre-expand env values
- Fix error wrapping: %v → %w per project conventions
- test-jupyter-kernel.py: move import to top-level, fix misleading comment
2026-04-08 02:14:53 +06:00
11e08e5b96 Merge branch 'dev' into feat/python-code-interpreter 2026-04-07 19:35:55 +00:00
4dc8cc3867 Removed incorrect example cert format 2026-04-07 19:35:26 +00:00
9852f96127 Modified expandEnv to use regex
Updated the recipe file with a test script checking code execution with
state management
2026-04-07 22:56:56 +06:00
bf05677bef Merge branch 'dev' into feat/python-code-interpreter 2026-04-06 20:45:54 +00:00
4f340b8847 feat: add env expansion, sandbox env fetching, and configurable healthchecks

Fix ENV instructions to expand $VAR references at set time using the
current env state, preventing self-referencing values like
PATH=/opt/venv/bin:$PATH from producing recursive expansions. Remove
expandEnv from shellPrefix to avoid double expansion.

Fetch sandbox environment variables via `env` before recipe execution
so ENV steps resolve against actual runtime values from the base
template image.

Replace hardcoded healthcheck timing with a Dockerfile-like flag parser
supporting --interval, --timeout, --start-period, and --retries. Add
start-period grace window and bounded retry counting to
waitForHealthcheck.

Add python-interpreter-v0-beta recipe and healthcheck files.
2026-04-07 01:15:43 +06:00
f57fe85492 Merge pull request 'Minor temporary fix for sitewide metrics' (#11) from patch/analytics into dev
Reviewed-on: wrenn/sandbox#11
2026-04-04 07:11:49 +00:00
9a52b47786 Minor temporary fix for sitewide metrics 2026-04-04 13:11:18 +06:00
ab38c8372c Merge pull request 'Feature: HTTP communication with sandbox' (#10) from code-interpreter into dev
Reviewed-on: wrenn/sandbox#10
2026-04-02 17:41:07 +00:00
8b5fa3438e Replace gopsutil port scanner with direct /proc/net/tcp reading
The envd port scanner used gopsutil's net.Connections() which walks
/proc/{pid}/fd to enumerate socket inodes. This corrupts Go runtime
semaphore state when the VM is paused mid-operation and restored from
a Firecracker snapshot.

Replace with a direct /proc/net/tcp + /proc/net/tcp6 parser that reads
a single file per address family — no /proc/{pid}/fd walk, no goroutines,
no WaitGroups. Also replace concurrent-map (smap) in the scanner with a
plain sync.RWMutex-protected map, since concurrent-map's Items() spawns
goroutines with a WaitGroup internally, which is equally unsafe across
snapshot boundaries.

Use socket inode instead of PID for the port forwarding map key, since
inode is available directly from /proc/net/tcp without the fd walk.
2026-04-01 15:47:28 +06:00
2b4c5e0176 Add pre-pause proxy connection drain and sandbox proxy caching
Introduce ConnTracker (atomic.Bool + WaitGroup) to track in-flight proxy
connections per sandbox. Before pausing a VM, the manager drains active
connections with a 2s grace period, preventing Go runtime corruption
inside the guest caused by stale TCP state surviving Firecracker
snapshot/restore.

Also add:
- AcquireProxyConn on Manager for atomic lookup + connection tracking
- Proxy cache (120s TTL) on CP SandboxProxyWrapper with single-query
  DB lookup (GetSandboxProxyTarget) to avoid two round-trips
- Reset() on ConnTracker to re-enable connections if pause fails
2026-04-01 15:09:44 +06:00
377e856c8f Fix lint warnings: drop deprecated Name field from snapshot response, check errcheck in benchmark
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-30 21:28:57 +06:00
948db13bed Add skip_pre_post build option, cancel endpoint, and recipe package
- skip_pre_post flag on builds bypasses apt update/clean pre/post steps for
  faster iteration when the recipe handles its own environment setup
- POST /v1/admin/builds/{id}/cancel endpoint marks an in-progress build as
  cancelled; UpdateBuildStatus now also sets completed_at for 'cancelled'
- internal/recipe: typed recipe parser and executor (RUN/ENV/COPY steps)
  replacing the raw string slice approach in the build worker
- pre/post build commands prefixed with RUN to match recipe step format
2026-03-30 21:24:52 +06:00
25ce0729d5 Add mTLS to CP→agent channel
- Internal ECDSA P-256 CA (WRENN_CA_CERT/WRENN_CA_KEY env vars); when absent
  the system falls back to plain HTTP so dev mode works without certificates
- Host leaf cert (7-day TTL, IP SAN) issued at registration and renewed on
  every JWT refresh; fingerprint + expiry stored in DB (cert_expires_at column
  replaces the removed mtls_enabled flag)
- CP ephemeral client cert (24-hour TTL) via CPCertStore with atomic hot-swap;
  background goroutine renews it every 12 hours without restarting the server
- Host agent uses tls.Listen + httpServer.Serve so GetCertificate callback is
  respected (ListenAndServeTLS always reads cert from disk)
- Sandbox reverse proxy now uses pool.Transport() so it shares the same TLS
  config as the Connect RPC clients instead of http.DefaultTransport
- Credentials file renamed host-credentials.json with cert_pem/key_pem/
  ca_cert_pem fields; duplicate register/refresh response structs collapsed
  to authResponse
2026-03-30 21:24:35 +06:00
88f919c4ca Rename sandbox prefix to cl-, add MMDS metadata, fix proxy port routing
- Change sandbox ID prefix from sb- to cl- (capsule) throughout
- Fix proxy URL regex character class: base36 uses 0-9a-z, not just hex
- Add MMDS V2 config and metadata to VM boot flow so envd can read
  WRENN_SANDBOX_ID and WRENN_TEMPLATE_ID from inside the guest
- Pass TemplateID through VMConfig into both fresh and snapshot boot paths
2026-03-30 17:12:05 +06:00
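The character-class fix above matters because capsule IDs are base36: a hex-only class would reject most valid IDs. A hypothetical parse of the `{port}-{capsule_id}.{domain}` proxy hostname — the overall pattern shape and helper name are assumptions, only the `[0-9a-z]` class and `cl-` prefix come from the commit:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// hostPattern matches {port}-{capsule_id}.{domain} Host headers.
// Capsule IDs are base36, so the class must be [0-9a-z], not just hex.
var hostPattern = regexp.MustCompile(`^(\d{1,5})-(cl-[0-9a-z]{25})\.`)

// parseProxyHost extracts the target port and capsule ID from a
// sandbox-proxy hostname, or ok=false for ordinary hosts.
func parseProxyHost(host string) (port int, capsuleID string, ok bool) {
	m := hostPattern.FindStringSubmatch(host)
	if m == nil {
		return 0, "", false
	}
	p, err := strconv.Atoi(m[1])
	if err != nil || p < 1 || p > 65535 {
		return 0, "", false
	}
	return p, m[2], true
}

func main() {
	port, id, ok := parseProxyHost("8888-cl-2e5glxi4g3qnhwci95qev0cg0.example.dev")
	fmt.Println(port, id, ok)
}
```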
8f06fc554a Replace Full snapshot fallback with file-level diff merge
Always use Firecracker Diff snapshots (fast, only changed pages) and
merge diff files at the file level when the generation cap is reached.
The previous approach used Firecracker's Full snapshot type which dumps
all memory to disk and can timeout, losing all snapshot data on failure.

Add snapshot.MergeDiffs() which reads each block from the appropriate
generation's diff file via the header mapping and writes them into a
single consolidated file with a fresh generation-0 header.
2026-03-29 02:33:33 +06:00
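The merge described above can be modeled at a toy level: the header mapping says which generation last wrote each block, and the merge copies each block from that generation's diff file into one consolidated generation-0 file. This is a sketch of the idea only — the real on-disk header format is not shown here:

```go
package main

import "fmt"

// mergeDiffs models snapshot.MergeDiffs: owner[i] records which
// generation last wrote block i (the header mapping), and gens[g]
// holds generation g's dirty blocks by block index.
func mergeDiffs(owner []int, gens []map[int][]byte) map[int][]byte {
	merged := make(map[int][]byte) // the consolidated generation-0 file
	for block, g := range owner {
		if data, ok := gens[g][block]; ok {
			merged[block] = data
		}
	}
	return merged
}

func main() {
	gens := []map[int][]byte{
		{0: []byte("a")},                 // generation 0 wrote block 0
		{1: []byte("b"), 2: []byte("c")}, // generation 1 wrote blocks 1,2
	}
	fmt.Println(len(mergeDiffs([]int{0, 1, 1}, gens)))
}
```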
1ca10230a9 Prefix network namespaces with wrenn-, add stale cleanup, lower diff cap
Rename ns-{idx} to wrenn-ns-{idx} and veth-{idx} to wrenn-veth-{idx}
to avoid collisions with other tools. Add CleanupStaleNamespaces() at
agent startup to remove orphaned namespaces, veths, iptables rules, and
routes from a previous crash. Lower maxDiffGenerations from 10 to 8 to
prevent Go runtime memory corruption from snapshot/restore drift.
2026-03-29 02:14:30 +06:00
46d60fc5a5 Seed minimal template in DB and protect it from deletion
Insert a minimal template row (all-zeros UUID) so it appears in both
team and admin template listings. Guard delete endpoints to prevent
removal of the minimal template.
2026-03-29 01:34:54 +06:00
906cc42d13 Rename AGENT_*/CP_LISTEN_ADDR env vars to WRENN_* prefix
AGENT_FILES_ROOTDIR → WRENN_DIR, AGENT_LISTEN_ADDR → WRENN_HOST_LISTEN_ADDR,
AGENT_CP_URL → WRENN_CP_URL, AGENT_HOST_INTERFACE → WRENN_HOST_INTERFACE,
CP_LISTEN_ADDR → WRENN_CP_LISTEN_ADDR. Consolidates all env vars under a
consistent WRENN_ namespace.
2026-03-29 00:30:20 +06:00
75b28ed899 Add UUID-based template IDs and team-scoped template directory layout
Introduces internal/layout package for centralized path construction,
migrates templates from name-based TEXT primary keys to UUID PKs with
team-scoped directories (WRENN_DIR/images/teams/{team_id}/{template_id}).
The built-in minimal template uses sentinel zero UUIDs. Proto messages
carry team_id + template_id alongside deprecated template name field.
Team deletion now cleans up template files across all hosts.
2026-03-29 00:30:10 +06:00
03e96629c7 Remove slug from team page UI 2026-03-28 20:45:57 +06:00
34af77e0d8 Fix snapshot race, delete auth, sparse dd, default disk to 5GB
Snapshot race fix:
- Pre-mark sandbox as "paused" in DB before issuing CreateSnapshot and
  PauseSandbox RPCs, preventing the reconciler from marking it "stopped"
  during the flatten window when the sandbox is gone from the host
  agent's in-memory map but DB still says "running"
- Revert status to "running" on RPC failure
- Check ctx.Err() before writing response to avoid writing to dead
  connections when client disconnects during long snapshot operations

Delete auth fix:
- Block non-admin deletion of platform templates (team_id = all-zeros)
  at DELETE /v1/snapshots/{name} with 403, preventing file deletion
  before the team ownership check fails

Sparse dd:
- Add conv=sparse to dd in FlattenSnapshot so flattened images preserve
  sparseness (~200MB actual vs 5GB logical)

Default disk size:
- Change default disk_size_mb from 20GB to 5GB across migration,
  manager, service, build, and EnsureImageSizes
- Disable split-button dropdown arrow for platform templates in
  dashboard snapshots page (teams cannot delete platform templates)
2026-03-28 14:30:18 +06:00
c89a664a37 Switch API ID format from UUID to base36 for compact, E2B-style IDs
DB stays native UUID; the format/parse layer now encodes 16 UUID bytes
as 25-char lowercase alphanumeric (base36) strings instead of the
standard 36-char hex-with-dashes format. e.g. sb-2e5glxi4g3qnhwci95qev0cg0
2026-03-27 00:53:51 +06:00
3509ca90e8 Add pre/post build stages, fix exec timeout, expand guest PATH
Build phases:
- Pre-build (apt update) and post-build (apt clean, autoremove, rm lists)
  run with 10-minute timeout; user recipe commands keep 30s timeout
- Log entries include phase field for UI grouping
- Always send explicit TimeoutSec to host agent (0 defaulted to 30s)

Frontend:
- Pre-build/post-build steps show phase label without exposing commands
- Recipe steps numbered independently starting from 1

Guest PATH:
- Add /usr/games:/usr/local/games to wrenn-init.sh PATH export
  (standard Ubuntu paths, needed for packages like cowsay)
2026-03-27 00:28:32 +06:00
c8acac92cc Add pre/post build stages to template builds
Pre-build: apt update
Post-build: apt clean, apt autoremove, rm apt lists

Total steps count includes pre/post commands for accurate progress bars.
2026-03-27 00:00:48 +06:00
5cb37bf2a0 Add admin template deletion with broadcast to all hosts
- DELETE /v1/admin/templates/{name} endpoint (admin-only)
- Broadcasts DeleteSnapshot RPC to all online hosts before removing DB record
- Frontend admin templates page uses deleteAdminTemplate() instead of
  team-scoped deleteSnapshot()
- Delete button shown for all template types, not just snapshots
2026-03-26 23:53:08 +06:00
c0d6381bbe Add disk_size_mb, auto-expand base images, admin templates endpoint
Disk sizing:
- Add disk_size_mb column to sandboxes table (default 20480 = 20GB)
- Add disk_size_mb to CreateSandboxRequest proto, passed through the
  full chain: service → RPC → host agent → sandbox manager → devicemapper
- devicemapper.CreateSnapshot takes separate cowSizeBytes param so the
  sparse CoW file can be sized independently from the origin
- EnsureImageSizes() runs at host agent startup: expands any base image
  smaller than 20GB via truncate + resize2fs (sparse, no extra physical
  disk). Sandboxes then get the full 20GB via fast dm-snapshot path
- FlattenRootfs shrinks output images with resize2fs -M so stored
  templates are compact; EnsureImageSizes re-expands on next startup

Admin templates visibility:
- Add GET /v1/admin/templates endpoint listing all templates across teams
- Frontend admin templates page uses listAdminTemplates() instead of
  team-scoped listSnapshots()
- Platform templates (team_id = all-zeros UUID) now visible to all teams:
  GetTemplateByTeam, ListTemplatesByTeam, ListTemplatesByTeamAndType
  queries include platform team_id in WHERE clause
2026-03-26 23:45:41 +06:00
4ddd494160 Switch database IDs from TEXT to native UUID
Consolidate 16 migrations into one with UUID columns for all entity
IDs. TEXT is kept only for polymorphic fields (audit_logs.actor_id,
resource_id) and template names. The id package now generates UUIDs
via google/uuid, with Format*/Parse* helpers for the prefixed wire
format (sb-{uuid}, usr-{uuid}, etc.). Auth context, services, and
handlers pass pgtype.UUID internally; conversion to/from prefixed
strings happens at API and RPC boundaries. Adds PlatformTeamID
(all-zeros UUID) for shared resources.
2026-03-26 16:16:21 +06:00
cdd89a7cee Fix review issues: detached contexts, loop device leak, timer leak, size_bytes
- Use context.Background() with timeout in destroySandbox/failBuild so
  cleanup and DB writes survive parent context cancellation on shutdown
- Fix loop device refcount leak in FlattenRootfs when dmDevice is nil
- Replace time.After with time.NewTimer in healthcheck polling to avoid
  goroutine leak when healthcheck passes early
- Capture size_bytes from CreateSnapshot/FlattenRootfs RPC responses
  instead of hardcoding 0 in the templates table insert
- Avoid leaking internal error details to API clients in build handler
2026-03-26 15:31:38 +06:00
1ce62934b3 Add template build system with admin panel, async workers, and FlattenRootfs RPC
Introduces an end-to-end template building pipeline: admins submit a recipe
(list of shell commands) via the dashboard, a Redis-backed worker pool spins
up a sandbox, executes each command, and produces either a full snapshot
(with healthcheck) or an image-only template (rootfs flattened via a new
FlattenRootfs host-agent RPC). Build progress and per-step logs are persisted
to a new template_builds table and polled by the frontend.

Backend:
- New FlattenRootfs RPC (proto + host agent + sandbox manager)
- BuildService with Redis queue (BLPOP) and configurable worker pool (default 2)
- Admin-only REST endpoints: POST/GET /v1/admin/builds, GET /v1/admin/builds/{id}
- Migration for template_builds table with JSONB logs and recipe columns
- sqlc queries for build CRUD and progress updates

Frontend:
- /admin/templates page with Templates + Builds tabs
- Create Template dialog with recipe textarea, healthcheck, specs
- Build history with expandable per-step logs, status badges, progress bars
- Auto-polling every 3s for active builds
- AdminSidebar updated with Templates nav item
2026-03-26 15:27:21 +06:00
6898528096 Replace one-shot clock_settime with chrony for continuous guest time sync
Switch from the envd /init endpoint pushing host time via syscall to
chronyd reading the KVM PTP hardware clock (/dev/ptp0) continuously.
This fixes clock drift between init calls and handles snapshot resume
gracefully.

Changes:
- Add clocksource=kvm-clock kernel boot arg
- Start chronyd in wrenn-init.sh before tini (PHC /dev/ptp0, makestep 1.0 -1)
- Remove clock_settime logic from envd SetData and shouldSetSystemTime
- Remove client.Init() clock sync calls from sandbox manager (3 sites)
- Remove Init() method from envdclient (no longer needed)
- Simplify rootfs scripts: socat/chrony now come from apt in the container
  image, only envd/wrenn-init/tini are injected by build scripts
2026-03-26 04:47:44 +06:00
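The guest-side chrony setup described above might look like the fragment below. This is a hypothetical sketch — the commit only names the PHC refclock and the makestep 1.0 -1 flags; the exact directives, options, and file layout are assumptions:

```
# /etc/chrony/chrony.conf inside the guest (hypothetical sketch)
# Read the KVM PTP hardware clock exposed by the host:
refclock PHC /dev/ptp0 poll 0 trust
# Step the clock whenever the offset exceeds 1.0s, on any update (-1),
# which covers the large jump seen after snapshot resume:
makestep 1.0 -1
```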
12d1e356fa Minor UI copy updates across capsules and templates pages 2026-03-26 03:58:12 +06:00
139f86bf9c Fix static build: disable prerender for dynamic capsule detail route
The [id] route cannot be prerendered at build time since IDs are unknown.
With adapter-static's index.html fallback, the route is handled client-side.
2026-03-26 02:13:12 +06:00
b0a8b498a8 WIP: Add Caddy reverse proxy for dev environment
Add Caddy to docker-compose as the single entry point on port 8000:
- localhost -> /api/* stripped and proxied to CP:8080, /* to frontend:5173
- *.localhost -> proxied to CP:8080 (sandbox proxy catch-all)
- Direct /v1/*, /auth/*, /docs routes proxied to CP

Move CP from :8000 to :8080 (its default). Caddy takes :8000.
Update .env.example, vite proxy target (kept as fallback), and Makefile
dev targets (pg_isready via docker exec, frontend binds 0.0.0.0).

This is an intermediate state — needs further work for the full code
interpreter feature.
2026-03-26 02:12:21 +06:00
4be65b0abb WIP: Add sandbox proxy catch-all to control plane
Add SandboxProxyWrapper that intercepts requests with Host headers
matching {port}-{sandbox_id}.{domain} and proxies them through the
owning host agent's /proxy endpoint.

Authentication is via X-API-Key only (no JWT). The API key's team must
own the sandbox. Export EnsureScheme from lifecycle package for reuse.

Request flow: SDK -> Caddy -> CP catch-all -> Host Agent -> sandbox VM.

This is an intermediate state — needs further work for the full code
interpreter feature.
2026-03-26 02:12:10 +06:00
f4675ebfc0 WIP: Add HTTP proxy endpoint to host agent
Add /proxy/{sandbox_id}/{port}/* handler that reverse-proxies HTTP
requests to services running inside sandbox VMs. The sandbox's host IP
(10.11.0.{idx}) is used as the upstream target.

Includes port validation (1-65535) and shared HTTP transport for
connection pooling. Supports WebSocket upgrades for protocols like
Jupyter's streaming API.

This is an intermediate state — needs further work for the full code
interpreter feature.
2026-03-26 02:12:01 +06:00
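The handler above reduces to building a reverse proxy per (sandbox, port) target. A sketch under the commit's 10.11.0.{idx} addressing — the helper itself is an illustrative assumption; net/http/httputil's ReverseProxy passes Upgrade requests through, which is what makes WebSocket protocols like Jupyter's streaming API work:

```go
package main

import (
	"fmt"
	"net/http/httputil"
	"net/url"
)

// validPort mirrors the 1-65535 validation mentioned above.
func validPort(p int) bool { return p >= 1 && p <= 65535 }

// proxyFor builds a reverse proxy to a service inside a sandbox VM,
// using the sandbox's host-side slot IP as the upstream target.
func proxyFor(slotIdx, port int) (*httputil.ReverseProxy, error) {
	if !validPort(port) {
		return nil, fmt.Errorf("invalid port %d", port)
	}
	target := &url.URL{
		Scheme: "http",
		Host:   fmt.Sprintf("10.11.0.%d:%d", slotIdx, port),
	}
	return httputil.NewSingleHostReverseProxy(target), nil
}

func main() {
	p, err := proxyFor(7, 8888)
	fmt.Println(p != nil, err)
}
```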
602ee470d9 WIP: Add socat injection to rootfs build scripts
Inject a statically-linked socat binary into rootfs images. envd's
port forwarder requires socat to bridge localhost-listening services
(e.g. Jupyter kernel) to the guest TAP interface.

Both scripts follow the same 3-step resolution: check rootfs, check
host, build from source (http://www.dest-unreach.org/socat/ v1.8.1.1).
Static linkage is verified before injection.

This is an intermediate state — needs further work for the full code
interpreter feature.
2026-03-26 02:11:54 +06:00
8cdf91d895 Merge pull request 'Added metrics' (#9) from metrics into dev
Reviewed-on: wrenn/sandbox#9
2026-03-25 16:40:06 +00:00
ed7880bc6c Add per-capsule stats detail page with live CPU/RAM charts
- New detail page at /dashboard/capsules/[id] with Stats and Files tabs
- Stats tab shows capsule info card (status, template, CPU, memory, disk,
  started, idle timeout) and two stacked Chart.js charts with live values
- Metrics API client with 10s polling and moving-average smoothing
- Capsule ID in list table is now a clickable link to the detail page
- Layout breadcrumb header (Capsules > sb-xxx) with back navigation
- Fix metrics sampler: use v.PID() directly as Firecracker PID since
  unshare -m execs (not forks) through the bash/ip-netns-exec/firecracker
  chain, so all share the same PID. Removes unused findChildPID.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 22:31:05 +06:00
27ff828e60 Push GetSandboxMetricPoints time filter into SQL
The query was fetching all rows for a (sandbox_id, tier) pair and
filtering by timestamp in Go. For repeatedly-paused sandboxes the
24h tier can accumulate up to 30 days of data, causing up to 120x
over-fetching for a 6h range request.

Add AND ts >= $3 to the query so Postgres filters on the primary key
(sandbox_id, tier, ts) directly. Drop the redundant Go-side loop.
2026-03-25 21:53:19 +06:00
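The fix above amounts to moving the cutoff into the WHERE clause so it rides the primary-key index. A hypothetical shape of the corrected query — table and column names are assumptions; only the `AND ts >= $3` predicate and the `(sandbox_id, tier, ts)` key come from the commit:

```sql
-- Postgres now filters on the (sandbox_id, tier, ts) primary key
-- instead of returning every row for the pair to Go:
SELECT ts, cpu_pct, mem_bytes, disk_bytes
  FROM sandbox_metric_points
 WHERE sandbox_id = $1
   AND tier = $2
   AND ts >= $3
 ORDER BY ts;
```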
6eacf0f735 Fix LIKE pattern injection in user email search
Escape LIKE metacharacters (% and _) in the email prefix before passing
to the SQL query, and enforce the documented '@' requirement to prevent
broad user enumeration. Move search logic out of TeamService into
usersHandler since it is a site-wide lookup, not team-scoped.
2026-03-25 21:53:09 +06:00
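The escaping step above can be sketched as a small helper (the name is an assumption). Backslash must be handled first so it doesn't re-escape the escapes added for % and _:

```go
package main

import (
	"fmt"
	"strings"
)

// escapeLike neutralizes LIKE metacharacters in user input before it
// is embedded in a prefix pattern such as escapeLike(q) + "%".
func escapeLike(s string) string {
	s = strings.ReplaceAll(s, `\`, `\\`) // escape the escape char first
	s = strings.ReplaceAll(s, `%`, `\%`)
	s = strings.ReplaceAll(s, `_`, `\_`)
	return s
}

func main() {
	fmt.Println(escapeLike("ali_ce%") + "%")
}
```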
88cb24bb86 Minor improvement 2026-03-25 21:27:11 +06:00
49b0b646a8 Add 5m, 1h, 6h, 12h range filters to metrics endpoint
Maps each user-facing range to the appropriate underlying ring buffer
tier and applies a time cutoff filter. No new ring buffers needed —
5m/10m read from the 10m tier, 1h/2h from the 2h tier, 6h/12h/24h
from the 24h tier.
2026-03-25 20:44:28 +06:00
9acdbb5ae9 Add per-sandbox CPU/memory/disk metrics collection
Samples /proc/{fc_pid}/stat (CPU%), /proc/{fc_pid}/status (VmRSS), and
stat() on CoW files at 500ms intervals per running sandbox. Three tiered
ring buffers downsample into 30s and 5min averages for 10min/2h/24h
retention. Metrics are flushed to DB on pause (all tiers) and destroy
(24h only). New GetSandboxMetrics and FlushSandboxMetrics RPCs on the
host agent, proxied through GET /v1/sandboxes/{id}/metrics?range= on
the control plane. Returns live data for running sandboxes, DB data for
paused, and 404 for stopped.
2026-03-25 20:10:33 +06:00
7473c15f52 Bugfix: cgroup2 related error inside the sandbox 2026-03-25 19:45:57 +06:00
8d5ba3873a Fix capsules table blink on background poll refresh
Poll fetches now silently update data without triggering loading
states, spinner animations, or row fadeUp re-animations. Only manual
refresh shows the spin indicator.
2026-03-25 19:44:13 +06:00
b0e6f5ffb3 Bolder stats page layout with stronger visual hierarchy
- Accent stripes: 3px → 5px; indicator dots: 6px → 8px
- Peak values step down to text-[1.714rem]/text-secondary so Now values read as the clear hero
- Now labels: semibold + uppercase for weight parity with the metric
- Cell padding py-5 → py-6; outer gap-7/pt-4 → gap-8/pt-6 for breathing room
- Chart fills: 7-8% → 11-13% opacity; lines: 1.5 → 2px
- Tick labels brighter (#635f5c), grid lines slightly more visible
- Running capsules chart: min-height 220 → 260px
2026-03-25 18:18:04 +06:00
a69b0f579c Split CPU and RAM into separate side-by-side charts
CPU (vCPUs) and RAM (GB) use different units and scales, so combining
them on a dual-axis chart was misleading. Each now has its own chart
card, laid out side-by-side.
2026-03-25 16:39:25 +06:00
45793e181c Move metrics to after templates in sidebar nav 2026-03-25 16:08:38 +06:00
e3750f79f9 Fix metrics sampler to record zero-value snapshots when idle
SampleSandboxMetrics previously filtered WHERE status IN ('running',
'starting', 'paused'), which returned no rows when all capsules were
stopped. As a result, zero-value snapshots were skipped entirely,
leaving the time-series charts with no trailing data points instead
of showing the expected zero values.

Remove the WHERE filter so the query groups by all teams that have
any sandbox row. The per-status FILTER clauses on the aggregates
already produce correct zero counts for stopped capsules.

Also includes the per-VM RAM ceiling formula change (sum(ceil(each/2))
instead of ceil(sum/2)).
2026-03-25 15:50:19 +06:00
930da8a578 Move metrics to dedicated nav item, simplify capsules page
- Add Metrics nav item to sidebar with bar chart icon
- Create /dashboard/metrics page wrapping StatsPanel
- Remove tabs from capsules page (list is now the only view)
- Flatten capsules route: /capsules directly shows the list,
  removing the /list and /stats sub-routes
- Strip redundant title/subtitle from StatsPanel (page header
  provides context)
2026-03-25 15:24:21 +06:00
47b0ed5b52 Fix metrics correctness, redesign stats page
- Replace stale snapshot read (GetCurrentMetrics) with live query
  (GetLiveMetrics) against sandboxes table — always returns correct
  zeros when no capsules are running
- Fix CPU reserved formula: running + starting only; paused VMs no
  longer contribute vCPUs (RAM reservation for paused unchanged)
- Merge top cards into 3 paired Now/Peak cards with colored accent
  borders (green/blue/amber matching chart colors)
- Move Live badge from Running Capsules card to page-level header
- Add colored category dots to card and chart headers
- Charts stacked vertically, flex-1 to fill remaining page height
- vCPUs chart color changed to blue (#5a9fd4), RAM stays amber
2026-03-25 15:11:46 +06:00
fee66bda50 Add live stats page with metrics sampling and route split
- New sandbox_metrics_snapshots table sampled every 10s (60-day retention)
- Background MetricsSampler goroutine wired into control plane startup
- GET /v1/sandboxes/stats?range=5m|1h|6h|24h|30d endpoint with adaptive
  polling intervals; reserved CPU/RAM uses ceil(paused/2) formula
- StatsPanel component: 4 stat cards + 2 Chart.js line charts (straight
  lines, integer y-axis for running count, dual-axis for CPU/RAM)
- Range filter persisted in URL query param; polls update data silently
  (no blink — loading state only shown on initial mount)
- Split /dashboard/capsules into /list and /stats sub-routes with shared
  layout; capsuleRunningCount store syncs badge across routes
- CreateCapsuleDialog extracted as reusable component
2026-03-25 14:41:05 +06:00
2349f585ae Bolder, more delightful frontend across all pages
- app.css: replace flat --shadow-sm token with real shadows; add
  --shadow-card and --shadow-dialog tokens; add @keyframes status-ping
  and .animate-status-ping utility (outward ring ripple, GPU-composited
  via will-change) for live running status dots
- login: headline 5rem → 6.5rem with tighter leading/tracking; expand
  container to 460px; add sage-green dot grid texture layer beneath the
  mouse-reactive glow for industrial depth
- capsules: upgrade all running dots (header chip + row indicators +
  status bar) from opacity-fade to ring ripple; apply --shadow-dialog
  to Launch and Snapshot dialogs
- keys: apply --shadow-dialog to all three dialogs
- audit: remove duplicate @keyframes fadeUp and iconFloat (redundant
  with app.css definitions, audit's fadeUp also subtly diverged)
- sidebar: active indicator bar taller and thicker (h-5 w-[3px] → h-6
  w-1); active bg more vivid (accent/12%); label font-medium →
  font-semibold; team dialog gets --shadow-dialog
2026-03-25 12:55:23 +06:00
d4eb24be7e Added snapshot name dialog in the UI 2026-03-25 05:30:31 +06:00
0414fbe733 Merge pull request 'Added audit logs for users' (#7) from audit-logs into dev
Reviewed-on: wrenn/sandbox#7
2026-03-24 23:21:09 +00:00
6b76abe38e Remove expandable metadata from audit log rows
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-25 05:19:32 +06:00
3ce8fdcb02 Add audit logs frontend page
Infinite-scroll table with hierarchical filter dropdown, expandable
metadata rows, and status-coded visual signals per event severity.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-25 05:18:04 +06:00
1be30034bd Add audit log infrastructure and GET /v1/audit-logs endpoint
Introduces an append-only audit trail for all user and system actions:
sandbox lifecycle (create/pause/resume/destroy/auto-pause), snapshots,
team rename, API key create/revoke, member add/remove/leave/role_update,
and BYOC host add/delete/marked_down/marked_up.

- New audit_logs table (migration) with team_id, actor, resource,
  action, scope (team|admin), status (success|info|warning|error),
  metadata, and created_at
- AuditLogger (internal/audit) with named fire-and-forget methods per
  event; system actor used for background events (HostMonitor, TTL reaper)
- GET /v1/audit-logs: JWT-only, cursor pagination (max 200), multi-value
  filters for resource_type and action (comma-sep or repeated params);
  members see team-scoped events only, admins/owners see all
- AuthContext extended with APIKeyID + APIKeyName so API key requests
  record meaningful actor identity
- HostMonitor wired with AuditLogger for auto-pause and host marked_down
2026-03-25 05:15:16 +06:00
9878156798 Merge pull request 'Set up working host registration (including BYOC) with the CP' (#6) from host-registration into dev
Reviewed-on: wrenn/sandbox#6
2026-03-24 21:19:12 +00:00
e069b3e679 Add BYOC page, admin section, and is_byoc team visibility gating
- Frontend: BYOC hosts page (/dashboard/byoc) with register/delete flows,
  shimmer loading, pulsing online status, animated token reveal checkmark
- Frontend: Admin section (/admin/hosts) with platform + BYOC tabs, stat
  pills, skeleton loading, slide-in animations for new rows
- Frontend: AdminSidebar component with accent top bar and admin pill badge
- Frontend: BYOC nav item shown only when team.is_byoc is true (derived
  from teams store, not JWT); disabled for members
- Frontend: Admin shield button in Sidebar, visible only to platform admins
- Backend: is_admin in JWT claims + requireAdmin middleware (DB-validated)
- Backend: is_byoc added to teamResponse so frontend derives visibility
  from fresh team data rather than stale JWT fields
- Backend: SetBYOC admin endpoint (PUT /v1/admin/teams/{id}/byoc)
- Backend: Admin hosts list enriches BYOC entries with team_name
- Host agent: load .env file via godotenv on startup
2026-03-25 03:10:41 +06:00
9bf67aa7f7 Implement host registration, JWT refresh tokens, and multi-host scheduling
Replaces the hardcoded CP_HOST_AGENT_ADDR single-agent setup with a
DB-driven registration system supporting multiple host agents (BYOC).

Key changes:
- Host agents register via one-time token, receive a 7-day JWT + 60-day
  refresh token; heartbeat loop auto-refreshes on 401/403 and pauses all
  sandboxes if refresh fails
- HostClientPool: lazy Connect RPC client cache keyed by host ID, replacing
  the single static agent client throughout the API and service layers
- RoundRobinScheduler: picks an online host for each new sandbox via
  ListActiveHosts; extensible for future scheduling strategies
- HostMonitor (replaces Reconciler): passive heartbeat staleness check marks
  hosts unreachable and sandboxes missing after 90s; active reconciliation
  per online host restores missing-but-alive sandboxes and stops orphans
- Graceful host delete: returns 409 with affected sandbox list without
  ?force=true; force-delete destroys sandboxes then evicts pool client
- Snapshot delete broadcasts to all online hosts (templates have no host_id)
- sandbox.Manager.PauseAll: pauses all running VMs on CP connectivity loss
- New migration: host_refresh_tokens table with token rotation (issue-then-
  revoke ordering to prevent lockout on mid-rotation crash)
- New sandbox status 'missing' (reversible, unlike 'stopped') and host
  status 'unreachable'; both reflected in OpenAPI spec
- Fix: refresh token auth failure now returns 401 (was 400 via generic
  'invalid' substring match in serviceErrToHTTP)
2026-03-24 18:32:05 +06:00
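The RoundRobinScheduler above can be sketched in a few lines. This is an illustrative model, not the real interface — the Host shape and method name are assumptions; the atomic counter keeps placement safe under concurrent CreateSandbox calls:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Host is a stand-in for the scheduler's host record.
type Host struct{ ID string }

// roundRobinScheduler cycles through whatever ListActiveHosts returned
// for each placement decision.
type roundRobinScheduler struct{ n atomic.Uint64 }

// Pick returns the next online host, or ok=false when none are online
// (in which case sandbox creation must fail).
func (s *roundRobinScheduler) Pick(online []Host) (Host, bool) {
	if len(online) == 0 {
		return Host{}, false
	}
	i := s.n.Add(1) - 1
	return online[i%uint64(len(online))], true
}

func main() {
	s := &roundRobinScheduler{}
	hosts := []Host{{ID: "hst-a"}, {ID: "hst-b"}}
	for i := 0; i < 3; i++ {
		h, _ := s.Pick(hosts)
		fmt.Println(h.ID)
	}
}
```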
f968da9768 Minor frontend enhancements 2026-03-24 17:25:00 +06:00
3932bc056e Add user names, team-scoped sandbox guard, and login robustness fixes
- Add name column to users (migration + sqlc regen); propagate through JWT
  claims, auth context, all auth/OAuth handlers, service layer, and frontend
- Sidebar and team page show name instead of email; team page splits Name/Email
  into separate columns
- Block sandbox creation in UI and API when user has no active team context
- loginTeam helper falls back to first active team when no default is set,
  fixing login for invited users with no is_default membership
- Exclude soft-deleted teams from GetDefaultTeamForUser, GetBYOCTeams queries
- Guard host creation against soft-deleted teams in service/host.go
- SwitchTeam re-fetches name from DB instead of trusting stale JWT claim
- Reset teams store on login so stale data from a previous session never persists
- Update openapi.yaml: add name to SignupRequest and AuthResponse schemas
2026-03-24 16:56:10 +06:00
aaeccd32ce Merge pull request 'Frontend consistency and improvements' (#5) from frontend-enhancement into dev
Reviewed-on: wrenn/sandbox#5
2026-03-24 10:00:27 +00:00
915d934c26 Frontend consistency pass: delight, audit, and normalization
Delight (keys page):
- Animated checkmark draw + circle pop on key reveal dialog open
- Key display area pulses accent glow on open to draw eye to "copy this"
- Copy button spring-bounces on successful copy (re-triggers on repeat)
- Empty state key icon floats (iconFloat, now global)
- Row hover uses scaleY left-accent stripe (matches capsules pattern)
- New key row flashes accent on reveal dialog dismiss (matches capsule-born)

Audit fixes (all dashboard pages):
- Page titles standardized to em dash: "Wrenn — X" across all four pages
- formatDate/timeAgo extracted to src/lib/utils/format.ts (string | undefined
  signatures); keys and snapshots now import from there instead of duplicating
- team formatDate gains undefined guard (kept local, date-only format differs)
- spin-once and iconFloat keyframes moved to app.css as globals; scoped copies
  removed from capsules and keys
- Snapshots empty state icon was referencing undefined @keyframes float; fixed
  to iconFloat

Normalization:
- Snapshots table rows: replaced ::before pseudo-element accent (opacity-only,
  single color) with DOM row-stripe element using scaleY transition, type-keyed
  color (green for snapshots, blue for images) — matches capsules pattern
- Create Key dialog: max-w-[400px] → max-w-[420px] to align with form dialogs
- Snapshots count and empty-state heading are now terminology-aware: shows
  "templates/snapshots/images" based on active filter; empty heading for all
  filter reads "No templates yet" instead of "No snapshots yet"

Not done (documented in audit, deferred):
- Sidebar nav items pointing to unimplemented routes (audit, usage, billing,
  notifications, settings) — left as-is, needs product decision
- Dialog max-widths fully normalized beyond Create Key — minor, deferred
- capsules timeAgo not imported from shared util (formatTime differs intentionally)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 15:51:11 +06:00
336080bb6d Merge pull request 'Added team related functionalities' (#4) from team-management into dev
Reviewed-on: wrenn/sandbox#4
2026-03-24 08:58:32 +00:00
90c296f5e1 Polish team page: delight micro-interactions and layout improvements
- Slug + Team ID rows collapsed into a 2-column grid for better density
- "you" badge moved inline with email instead of stacked below it
- Copy checkmark draws itself via SVG stroke-dashoffset animation
- New member row flashes accent-green on entry
- Removed member row slides out smoothly (fly transition)
- Member rows use staggered fly-in on page load
- Team name briefly highlights accent color after a successful rename
- Search result avatars get colorized initials based on email character
2026-03-24 14:56:19 +06:00
bf494f73fc Fix team name blink on navigation by lifting teams into a singleton store
Teams list was fetched on every Sidebar mount (each page navigation),
causing a flash from '…' to the real name on every tab switch. Move teams
into a module-level reactive store (teams.svelte.ts) that fetches once per
session and is shared between Sidebar and the team page.
2026-03-24 14:44:09 +06:00
71a7fdb76f Fix user search to trigger on 3 characters without requiring @
The anti-enumeration guard required @ in the email prefix, causing the
typeahead to silently return nothing until the user typed @. Replace with
a minimum 3-character length check to match the frontend trigger condition.
2026-03-24 14:41:01 +06:00
b3e8bdd171 Refine team management: name chars, danger zone, no-team state
- Allow hyphens, @, and apostrophes in team names (backend regex)
- After delete/leave, switch to next available team instead of logging
  out; if no teams remain, show a toast prompting to create one
- Disable delete/leave button when user has only one team, with
  explanatory hint to create another team first
- Show empty state on /dashboard/team when auth has no team context,
  pointing user to the sidebar to create a team
- Fetch all teams in parallel with team detail on page load to power
  the isLastTeam guard
2026-03-24 14:34:20 +06:00
1e681da738 Add team management frontend
- New /dashboard/team page with inline team name editing, slug/ID copy,
  members table with split-button (remove + make admin/member), add member
  typeahead, and danger zone (delete/leave) with confirmation dialogs
- Sidebar now fetches real teams from API, supports team switching and
  team creation via dialog
- Rename nav item Members → Team, route /dashboard/members → /dashboard/team
- New src/lib/api/team.ts with typed functions for all team endpoints
2026-03-24 14:21:53 +06:00
8e5d426638 Add team management endpoints
- Three-role model (owner/admin/member) with owner protection invariants
- Team CRUD: create, rename (admin+), soft-delete with VM cleanup (owner only)
- Member management: add by email, remove, role updates (admin+), leave
- Switch-team endpoint re-issues JWT after DB membership verification
- User email prefix search for add-member UI autocomplete
- JWT carries role as a hint; all authorization decisions verified from DB
- Team slug: immutable 12-char hex (e.g. a1b2c3-d1e2f3), reserved on soft-delete
- Migration adds slug + deleted_at to teams; backfills existing rows
2026-03-24 13:29:54 +06:00
4e26d7a292 Merge pull request 'Minor frontend enhancement' (#3) from frontend into dev
Reviewed-on: wrenn/sandbox#3
2026-03-24 06:36:17 +00:00
79eba782fb Updated design docs 2026-03-24 12:34:58 +06:00
b786a825d4 Polish dashboard frontend: spacing, copy, resilience
- Increase content padding (p-7→p-8) and table cell padding (px-4→px-5,
  py-3→py-4 for data rows) across capsules, keys, and snapshots pages
- Improve animation performance: wrenn-glow uses opacity instead of
  box-shadow (compositor-only, no paint cost)
- Add prefers-reduced-motion media query covering inline style animations
- Fix OAuth error display on login page (read ?error= param on mount)
- Harden clipboard copy with try-catch and toast fallback
- Improve empty state copy, dialog microcopy, and error messages
- Add retry button to error banners on keys page
- Replace "All systems operational" footer bar with a clean 1px divider
- Fix text truncation on long capsule/snapshot names (min-w-0 + truncate)
2026-03-24 12:33:18 +06:00
71564b202e Merge branch 'main' of git.omukk.dev:wrenn/sandbox into dev 2026-03-24 01:11:43 +06:00
5f0dbadea6 Fix snapshot and sandbox delete consistency
- Snapshot delete: make agent RPC failure a hard error so DB record is
  not removed when files cannot be deleted from disk
- Snapshot overwrite: call agent to delete old files before removing the
  DB record, preventing stale memfile.{uuid} generations from accumulating
  on disk across repeated overwrites
- Sandbox destroy: only swallow CodeNotFound from the agent (sandbox
  already gone / TTL-reaped); any other error now propagates to the caller
  instead of being silently ignored
2026-03-23 02:59:30 +06:00
36782e1b4f Add tini as PID 1, guest clock sync, and fix PATH in guest VMs
- Use tini as PID 1 in wrenn-init.sh so zombie processes are reaped
  and signals are forwarded correctly to envd
- Set standard PATH in wrenn-init.sh so child processes spawned by envd
  can find common binaries (fixes "nice: ls command not found")
- Add envdclient.Init() to POST /init on envd after every boot/resume,
  syncing the guest clock via unix.ClockSettime — critical after snapshot
  resume where the guest clock is frozen
- Run Init in a background goroutine so it doesn't block the CreateSandbox
  RPC response; a slow Init (vCPU busy with envd startup) was causing the
  RPC context to be canceled before the response reached the control plane
- Update rootfs-from-container.sh and update-debug-rootfs.sh to inject
  tini into the rootfs, checking the container image and host first,
  downloading from GitHub releases as fallback
2026-03-23 02:45:27 +06:00
97292ba0bf Added basic frontend (#1)
Reviewed-on: wrenn/sandbox#1
Co-authored-by: pptx704 <rafeed@omukk.dev>
Co-committed-by: pptx704 <rafeed@omukk.dev>
2026-03-22 19:01:38 +00:00
866f3ac012 Consolidate host agent path env vars into single AGENT_FILES_ROOTDIR
Replace AGENT_KERNEL_PATH, AGENT_IMAGES_PATH, AGENT_SANDBOXES_PATH,
AGENT_SNAPSHOTS_PATH, and AGENT_TOKEN_FILE with a single
AGENT_FILES_ROOTDIR (default /var/lib/wrenn) that derives all
subdirectory paths automatically.
2026-03-17 05:59:26 +06:00
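Deriving every path from one root can be sketched as below — the subdirectory and field names are assumptions for illustration, not necessarily the exact ones the agent uses:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// agentPaths holds every host-agent location derived from the single
// AGENT_FILES_ROOTDIR setting (default /var/lib/wrenn).
type agentPaths struct {
	Kernel, Images, Sandboxes, Snapshots, TokenFile string
}

// derivePaths replaces five separate env vars with one root.
func derivePaths(root string) agentPaths {
	return agentPaths{
		Kernel:    filepath.Join(root, "kernel"),
		Images:    filepath.Join(root, "images"),
		Sandboxes: filepath.Join(root, "sandboxes"),
		Snapshots: filepath.Join(root, "snapshots"),
		TokenFile: filepath.Join(root, "agent-token"),
	}
}

func main() {
	fmt.Printf("%+v\n", derivePaths("/var/lib/wrenn"))
}
```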
2c66959b92 Add host registration, heartbeat, and multi-host management
Implements the full host ↔ control plane connection flow:

- Host CRUD endpoints (POST/GET/DELETE /v1/hosts) with role-based access:
  regular hosts admin-only, BYOC hosts for admins and team owners
- One-time registration token flow: admin creates host → gets token (1hr TTL
  in Redis + Postgres audit trail) → host agent registers with specs → gets
  long-lived JWT (1yr)
- Host agent registration client with automatic spec detection (arch, CPU,
  memory, disk) and token persistence to disk
- Periodic heartbeat (30s) via POST /v1/hosts/{id}/heartbeat with X-Host-Token
  auth and host ID cross-check
- Token regeneration endpoint (POST /v1/hosts/{id}/token) for retry after
  failed registration
- Tag management (add/remove/list) with team-scoped access control
- Host JWT with typ:"host" claim, cross-use prevention in both VerifyJWT and
  VerifyHostJWT
- requireHostToken middleware for host agent authentication
- DB-level race protection: RegisterHost uses AND status='pending' with
  rows-affected check; Redis GetDel for atomic token consume
- Migration for future mTLS support (cert_fingerprint, mtls_enabled columns)
- Host agent flags: --register (one-time token), --address (required ip:port)
- serviceErrToHTTP extended with "forbidden" → 403 mapping
- OpenAPI spec, .env.example, and README updated
2026-03-17 05:51:28 +06:00
e4ead076e3 Add admin users, BYOC teams, hosts schema, and Redis for host registration
Introduce three migrations: admin permissions (is_admin + permissions table),
BYOC team tracking, and multi-host support (hosts, host_tokens, host_tags).
Add Redis to dev infra and wire up client in control plane for ephemeral
host registration tokens. Add go-redis dependency.
2026-03-17 03:26:42 +06:00
1d59b50e49 Remove empty admin UI stubs
The internal/admin/ package was never imported or mounted — just
placeholder files. Removing to avoid confusion before the real
dashboard is built.
2026-03-16 05:39:43 +06:00
f38d5812d1 Extract shared service layer for sandbox, API key, and template operations
Moves business logic from API handlers into internal/service/ so that
both the REST API and the upcoming dashboard can share the same operations
without duplicating code. API handlers now delegate to the service layer
and only handle HTTP-specific concerns (request parsing, response formatting).
2026-03-16 05:39:30 +06:00
931b7d54b3 Add GitHub OAuth login with provider registry
Implement OAuth 2.0 login via GitHub as an alternative to email/password.
Uses a provider registry pattern (internal/auth/oauth/) so adding Google
or other providers later requires only a new Provider implementation.

Flow: GET /v1/auth/oauth/github redirects to GitHub, callback exchanges
the code for a user profile, upserts the user + team atomically, and
redirects to the frontend with a JWT token.

Key changes:
- Migration: make password_hash nullable, add oauth_providers table
- Provider registry with GitHubProvider (profile + email fallback)
- CSRF state cookie with HMAC-SHA256 validation
- Race-safe registration (23505 collision retries as login)
- Startup validation: CP_PUBLIC_URL required when OAuth is configured

Not fully tested — needs integration tests with a real GitHub OAuth app
and end-to-end testing with the frontend callback page.
2026-03-15 06:31:58 +06:00
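The HMAC-SHA256 state validation can be sketched with the standard library — the `value.tag` cookie format here is an assumption, but the constant-time comparison via `hmac.Equal` is the important part:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// signState ties the OAuth state value to an HMAC-SHA256 tag so the
// callback can reject states it did not issue. Format is "value.tag".
func signState(secret []byte, value string) string {
	m := hmac.New(sha256.New, secret)
	m.Write([]byte(value))
	return value + "." + hex.EncodeToString(m.Sum(nil))
}

func verifyState(secret []byte, signed string) bool {
	i := strings.LastIndexByte(signed, '.')
	if i < 0 {
		return false
	}
	// Recompute over the value and compare in constant time.
	return hmac.Equal([]byte(signState(secret, signed[:i])), []byte(signed))
}

func main() {
	secret := []byte("dev-only-secret")
	s := signState(secret, "random-nonce")
	fmt.Println(verifyState(secret, s), verifyState(secret, s+"tampered"))
}
```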
477d4f8cf6 Add auto-pause TTL and ping endpoint for sandbox inactivity management
Replace the existing auto-destroy TTL behavior with auto-pause: when a
sandbox exceeds its timeout_sec of inactivity, the TTL reaper now pauses
it (snapshot + teardown) instead of destroying it, preserving the ability
to resume later.

Key changes:
- TTL reaper calls Pause instead of Destroy, with fallback to Destroy if
  pause fails (e.g. Firecracker process already gone)
- New PingSandbox RPC resets the in-memory LastActiveAt timer
- New POST /v1/sandboxes/{id}/ping REST endpoint resets both agent memory
  and DB last_active_at
- ListSandboxes RPC now includes auto_paused_sandbox_ids so the reconciler
  can distinguish auto-paused sandboxes from crashed ones in a single call
- Reconciler polls every 5s (was 30s) and marks auto-paused as "paused"
  vs orphaned as "stopped"
- Resume RPC accepts timeout_sec from DB so TTL survives pause/resume cycles
- Reaper checks every 2s (was 10s) and uses a detached context to avoid
  incomplete pauses on app shutdown
- Default timeout_sec changed from 300 to 0 (no auto-pause unless requested)
2026-03-15 05:15:18 +06:00
88246fac2b Fix sandbox lifecycle cleanup and dmsetup remove reliability
- Add retry with backoff to dmsetupRemove for transient "device busy"
  errors caused by kernel not releasing the device immediately after
  Firecracker exits. Only retries on "Device or resource busy"; other
  errors (not found, permission denied) return immediately.

- Thread context.Context through RemoveSnapshot/RestoreSnapshot so
  retries respect cancellation. Use context.Background() in all error
  cleanup paths to prevent cancelled contexts from skipping cleanup
  and leaking dm devices on the host.

- Resume vCPUs on pause failure: if snapshot creation or memfile
  processing fails after freezing the VM, unfreeze vCPUs so the
  sandbox stays usable instead of becoming a frozen zombie.

- Fix resource leaks in Pause when CoW rename or metadata write fails:
  properly clean up network, slot, loop device, and remove from boxes
  map instead of leaving a dead sandbox with leaked host resources.

- Fix Resume WaitUntilReady failure: roll back CoW file to the snapshot
  directory instead of deleting it, preserving the paused state so the
  user can retry.

- Skip m.loops.Release when RemoveSnapshot fails during pause since
  the stale dm device still references the origin loop device.

- Fix incorrect VCPUs placeholder in Resume VMConfig that used memory
  size instead of a sensible default.
2026-03-14 06:42:34 +06:00
1846168736 Fix device-mapper "Device or resource busy" error on sandbox resume
Pause was logging RemoveSnapshot failures as warnings and continuing,
which left stale dm devices behind. Resume then failed trying to create
a device with the same name.

- Make RemoveSnapshot failure a hard error in Pause (clean up remaining
  resources and return error instead of silently proceeding)
- Add defensive stale device cleanup in RestoreSnapshot before creating
  the new dm device
2026-03-14 03:57:14 +06:00
c92cc29b88 Add authentication, authorization, and team-scoped access control
Implement email/password auth with JWT sessions and API key auth for
sandbox lifecycle. Users get a default team on signup; sandboxes,
snapshots, and API keys are scoped to teams.

- Add user, team, users_teams, and team_api_keys tables (goose migrations)
- Add JWT middleware (Bearer token) for user management endpoints
- Add API key middleware (X-API-Key header, SHA-256 hashed) for sandbox ops
- Add signup/login handlers with transactional user+team creation
- Add API key CRUD endpoints (create/list/delete)
- Replace owner_id with team_id on sandboxes and templates
- Update all handlers to use team-scoped queries
- Add godotenv for .env file loading
- Update OpenAPI spec and test UI with auth flows
2026-03-14 03:57:06 +06:00
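The "SHA-256 hashed" API key scheme implies the raw key from `X-API-Key` is never stored — only its digest, which then serves as the lookup key. A minimal sketch (the key prefix is made up for illustration):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashAPIKey returns the hex SHA-256 digest of a raw API key. The DB
// stores and indexes this digest, so a leaked table never reveals keys.
func hashAPIKey(raw string) string {
	sum := sha256.Sum256([]byte(raw))
	return hex.EncodeToString(sum[:])
}

func main() {
	// Deterministic: the same header value always maps to the same row.
	fmt.Println(hashAPIKey("wrenn_sk_example"))
}
```

The digest is deterministic (unlike bcrypt), which is what allows an indexed equality lookup on every request; that trade-off is acceptable because API keys are high-entropy random strings, not passwords.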
712b77b01c Add script to create rootfs from Docker container 2026-03-13 09:41:58 +06:00
80a99eec87 Add diff snapshots for re-pause to avoid UFFD fault-in storm
Use Firecracker's Diff snapshot type when re-pausing a previously
resumed sandbox, capturing only dirty pages instead of a full memory
dump. Chains up to 10 incremental generations before collapsing back
to a Full snapshot. Multi-generation diff files (memfile.{buildID})
are supported alongside the legacy single-file format in resume,
template creation, and snapshot existence checks.
2026-03-13 09:41:58 +06:00
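The Full-vs-Diff decision above is essentially a function of how long the generation chain has grown. A sketch — the constant 10 comes from the commit, everything else is illustrative naming:

```go
package main

import "fmt"

// maxGenerations bounds the diff chain before collapsing to Full,
// so resume never has to walk an unbounded list of memfile layers.
const maxGenerations = 10

// nextSnapshotType picks the Firecracker snapshot type for a pause:
// first pause → Full; re-pause → Diff (dirty pages only); too many
// generations → collapse back to Full.
func nextSnapshotType(generations int) string {
	if generations == 0 || generations >= maxGenerations {
		return "Full"
	}
	return "Diff"
}

func main() {
	for _, g := range []int{0, 1, 9, 10} {
		fmt.Println(g, nextSnapshotType(g))
	}
}
```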
a0d635ae5e Fix path traversal in template/snapshot names and network cleanup leaks
Add SafeName validator (allowlist regex) to reject directory traversal
in user-supplied template and snapshot names. Validated at both API
handlers (400 response) and sandbox manager (defense in depth).

Refactor CreateNetwork with rollback slice so partially created
resources (namespace, veth, routes, iptables rules) are cleaned up
on any error. Refactor RemoveNetwork to collect and return errors
instead of silently ignoring them.
2026-03-13 08:40:36 +06:00
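An allowlist validator in the spirit of SafeName might look like the following — the exact pattern and length cap are assumptions, but the principle (accept a known-safe alphabet rather than blocklist `../`) matches the commit:

```go
package main

import (
	"fmt"
	"regexp"
)

// safeName accepts only names that can never traverse directories:
// alphanumeric first character, then a limited punctuation set.
var safeName = regexp.MustCompile(`^[a-zA-Z0-9][a-zA-Z0-9._-]{0,63}$`)

func SafeName(name string) bool {
	// Defense in depth: reject dot-only names explicitly even though
	// the first-character class already excludes a leading '.'.
	if name == "." || name == ".." {
		return false
	}
	return safeName.MatchString(name)
}

func main() {
	for _, n := range []string{"base-image", "../../etc/passwd", "a/b", ".."} {
		fmt.Println(n, SafeName(n))
	}
}
```

Validating at both the API handler (returning 400) and the sandbox manager means a future internal caller that skips the handler still cannot smuggle a traversal into a filesystem path.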
63e9132d38 Add device-mapper snapshots, test UI, fix pause ordering and lint errors
- Replace reflink rootfs copy with device-mapper snapshots (shared
  read-only loop device per base template, per-sandbox sparse CoW file)
- Add devicemapper package with create/restore/remove/flatten operations
  and refcounted LoopRegistry for base image loop devices
- Fix pause ordering: destroy VM before removing dm-snapshot to avoid
  "device busy" error (FC must release the dm device first)
- Add test UI at GET /test for sandbox lifecycle management (create,
  pause, resume, destroy, exec, snapshot create/list/delete)
- Fix DirSize to report actual disk usage (stat.Blocks * 512) instead
  of apparent size, so sparse CoW files report correctly
- Add timing logs to pause flow for performance diagnostics
- Fix all lint errors across api, network, vm, uffd, and sandbox packages
- Remove obsolete internal/filesystem package (replaced by devicemapper)
- Update CLAUDE.md with device-mapper architecture documentation
2026-03-13 08:25:40 +06:00
778894b488 Made license related changes 2026-03-13 05:42:10 +06:00
a1bd439c75 Add sandbox snapshot and restore with UFFD lazy memory loading
Implement full snapshot lifecycle: pause (snapshot + free resources),
resume (UFFD-based lazy restore), and named snapshot templates that
can spawn new sandboxes from frozen VM state.

Key changes:
- Snapshot header system with generational diff mapping (inspired by e2b)
- UFFD server for lazy page fault handling during snapshot restore
- Stable rootfs symlink path (/tmp/fc-vm/) for snapshot compatibility
- Templates DB table and CRUD API endpoints (POST/GET/DELETE /v1/snapshots)
- CreateSnapshot/DeleteSnapshot RPCs in hostagent proto
- Reconciler excludes paused sandboxes (expected absent from host agent)
- Snapshot templates lock vcpus/memory to baked-in values
- Proper cleanup of uffd sockets and pause snapshot files on destroy
2026-03-12 09:19:37 +06:00
9b94df7f56 Rewrite CLAUDE.md and README.md
CLAUDE.md: replace bloated 850-line version with focused 230-line
guide. Fix inaccuracies (module path, build dir, Connect RPC vs gRPC,
buf vs protoc). Add detailed architecture with request flows, code
generation workflow, rootfs update process, and two-module gotchas.

README.md: add core deployment instructions (prerequisites, build,
host setup, configuration, running, rootfs workflow).
2026-03-11 06:37:11 +06:00
0c245e9e1c Fix guest VM outbound networking and DNS resolution
Add resolv.conf to wrenn-init so guests can resolve DNS, and fix the
host MASQUERADE rule to match vpeerIP (the actual source after namespace
SNAT) instead of hostIP.
2026-03-11 06:02:31 +06:00
b4d8edb65b Add streaming exec and file transfer endpoints
Add WebSocket-based streaming exec endpoint and streaming file
upload/download endpoints to the control plane API. Includes new
host agent RPC methods (ExecStream, StreamWriteFile, StreamReadFile),
envd client streaming support, and OpenAPI spec updates.
2026-03-11 05:42:42 +06:00
ec3360d9ad Add minimal control plane with REST API, database, and reconciler
- REST API (chi router): sandbox CRUD, exec, pause/resume, file write/read
- PostgreSQL persistence via pgx/v5 + sqlc (sandboxes table with goose migration)
- Connect RPC client to host agent for all VM operations
- Reconciler syncs host agent state with DB every 30s (detects TTL-reaped sandboxes)
- OpenAPI 3.1 spec served at /openapi.yaml, Swagger UI at /docs
- Added WriteFile/ReadFile RPCs to hostagent proto and implementations
- File upload via multipart form, download via JSON body POST
- sandbox_id propagated from control plane to host agent on create
2026-03-10 16:50:12 +06:00
d7b25b0891 updated license structure 2026-03-10 04:32:29 +06:00
34c89e814d Added basic license information 2026-03-10 04:28:51 +06:00
6f0c365d44 Add host agent RPC server with sandbox lifecycle management
Implement the host agent as a Connect RPC server that orchestrates
sandbox creation, destruction, pause/resume, and command execution.
Includes sandbox manager with TTL-based reaper, network slot allocator,
rootfs cloning, hostagent proto definition with generated stubs, and
test/debug scripts. Fix Firecracker process lifetime bug where VM was
tied to HTTP request context instead of background context.
2026-03-10 03:54:53 +06:00
c31ce90306 Centralize envd proto source of truth to proto/envd/
Remove duplicate proto files from envd/spec/ and update envd's
buf.gen.yaml to generate stubs from the canonical proto/envd/ location.
Both modules now generate their own Connect RPC stubs from the same
source protos.
2026-03-10 02:49:31 +06:00
7753938044 Add host agent with VM lifecycle, TAP networking, and envd client
Implements Phase 1: boot a Firecracker microVM, execute a command inside
it via envd, and get the output back. Uses raw Firecracker HTTP API via
Unix socket (not the Go SDK) for full control over the VM lifecycle.

- internal/vm: VM manager with create/pause/resume/destroy, Firecracker
  HTTP client, process launcher with unshare + ip netns exec isolation
- internal/network: per-sandbox network namespace with veth pair, TAP
  device, NAT rules, and IP forwarding
- internal/envdclient: Connect RPC client for envd process/filesystem
  services with health check retry
- cmd/host-agent: demo binary that boots a VM, runs "echo hello", prints
  output, and cleans up
- proto/envd: canonical proto files with buf + protoc-gen-connect-go
  code generation
- images/wrenn-init.sh: minimal PID 1 init script for guest VMs
- CLAUDE.md: updated architecture to reflect TAP networking (not vsock)
  and Firecracker HTTP API (not Go SDK)
2026-03-10 00:06:47 +06:00
a3898d68fb Port envd from e2b with internalized shared packages and Connect RPC
- Copy envd source from e2b-dev/infra, internalize shared dependencies
  into envd/internal/shared/ (keys, filesystem, id, smap, utils)
- Switch from gRPC to Connect RPC for all envd services
- Update module paths to git.omukk.dev/wrenn/{sandbox,sandbox/envd}
- Add proto specs (process, filesystem) with buf-based code generation
- Implement full envd: process exec, filesystem ops, port forwarding,
  cgroup management, MMDS integration, and HTTP API
- Update main module dependencies (firecracker SDK, pgx, goose, etc.)
- Remove placeholder .gitkeep files replaced by real implementations
2026-03-09 21:03:19 +06:00
361 changed files with 23346 additions and 16807 deletions

View File

@@ -1,7 +1,3 @@
-# Shared (applies to both control plane and host agent)
-WRENN_DIR=/var/lib/wrenn
-LOG_LEVEL=info
 # Database
 DATABASE_URL=postgres://wrenn:wrenn@localhost:5432/wrenn?sslmode=disable
@@ -13,10 +9,10 @@ WRENN_CP_LISTEN_ADDR=:9725
 # Host Agent
 WRENN_HOST_LISTEN_ADDR=:50051
+WRENN_DIR=/var/lib/wrenn
 WRENN_HOST_INTERFACE=eth0
 WRENN_CP_URL=http://localhost:9725
 WRENN_DEFAULT_ROOTFS_SIZE=5Gi
-WRENN_FIRECRACKER_BIN=/usr/local/bin/firecracker
 # Auth
 JWT_SECRET=
@@ -38,10 +34,3 @@ OAUTH_GITHUB_CLIENT_ID=
 OAUTH_GITHUB_CLIENT_SECRET=
 OAUTH_REDIRECT_URL=https://app.wrenn.dev
 CP_PUBLIC_URL=https://app.wrenn.dev
-# SMTP — transactional email (optional; omit SMTP_HOST to disable)
-SMTP_HOST=
-SMTP_PORT=587
-SMTP_USERNAME=
-SMTP_PASSWORD=
-SMTP_FROM_EMAIL=noreply@wrenn.dev

6
.gitignore vendored
View File

@@ -36,14 +36,10 @@ go.work.sum
 e2b/
 .impeccable.md
 .gstack
-.mcp.json
 ## Builds
 builds/
-## Rust
-envd-rs/target/
 ## Frontend
 frontend/node_modules/
 frontend/.svelte-kit/
@@ -53,5 +49,3 @@ frontend/build/
 internal/dashboard/static/*
 !internal/dashboard/static/.gitkeep
 .dual-graph/
-# Added by code-review-graph
-.code-review-graph/

241
CLAUDE.md
View File

@@ -12,10 +12,10 @@ All commands go through the Makefile. Never use raw `go build` or `go run`.
 ```bash
 make build # Build all binaries → builds/
-make build-cp # Control plane only
+make build-cp # Control plane only (builds frontend first)
 make build-agent # Host agent only
-make build-envd # envd static binary (Rust, musl, verified statically linked)
+make build-envd # envd static binary (verified statically linked)
-make build-frontend # SvelteKit dashboard → frontend/build/ (served by Caddy)
+make build-frontend # SvelteKit dashboard → internal/dashboard/static/
 make dev # Full local dev: infra + migrate + control plane
 make dev-infra # Start PostgreSQL + Prometheus + Grafana (Docker)
@@ -23,13 +23,13 @@ make dev-down # Stop dev infra
 make dev-cp # Control plane with hot reload (if air installed)
 make dev-frontend # Vite dev server with HMR (port 5173)
 make dev-agent # Host agent (sudo required)
-make dev-envd # envd in debug mode (--isnotfc, port 49983)
+make dev-envd # envd in TCP debug mode
 make check # fmt + vet + lint + test (CI order)
 make test # Unit tests: go test -race -v ./internal/...
 make test-integration # Integration tests (require host agent + Firecracker)
-make fmt # gofmt
+make fmt # gofmt both modules
-make vet # go vet
+make vet # go vet both modules
 make lint # golangci-lint
 make migrate-up # Apply pending migrations
@@ -38,8 +38,8 @@ make migrate-create name=xxx # Scaffold new goose migration (never create manua
 make migrate-reset # Drop + re-apply all
 make generate # Proto (buf) + sqlc codegen
-make proto # buf generate for proto dirs
+make proto # buf generate for all proto dirs
-make tidy # go mod tidy
+make tidy # go mod tidy both modules
 ```
 Run a single test: `go test -race -v -run TestName ./internal/path/...`
@@ -50,45 +50,35 @@ Run a single test: `go test -race -v -run TestName ./internal/path/...`
 User SDK → HTTPS/WS → Control Plane → Connect RPC → Host Agent → HTTP/Connect RPC over TAP → envd (inside VM)
 ```
-**Three binaries:**
+**Three binaries, two Go modules:**
-| Binary | Language | Entry point | Runs as |
+| Binary | Module | Entry point | Runs as |
-|--------|----------|-------------|---------|
+|--------|--------|-------------|---------|
-| wrenn-cp | Go (`git.omukk.dev/wrenn/wrenn`) | `cmd/control-plane/main.go` | Unprivileged |
+| wrenn-cp | `git.omukk.dev/wrenn/wrenn` | `cmd/control-plane/main.go` | Unprivileged |
-| wrenn-agent | Go (`git.omukk.dev/wrenn/wrenn`) | `cmd/host-agent/main.go` | `wrenn` user with capabilities (SYS_ADMIN, NET_ADMIN, NET_RAW, SYS_PTRACE, KILL, DAC_OVERRIDE, MKNOD) via setcap; also accepts root |
+| wrenn-agent | `git.omukk.dev/wrenn/wrenn` | `cmd/host-agent/main.go` | Root (NET_ADMIN + /dev/kvm) |
-| envd | Rust (`envd-rs/`) | `envd-rs/src/main.rs` | PID 1 inside guest VM |
+| envd | `git.omukk.dev/wrenn/wrenn/envd` (standalone `envd/go.mod`) | `envd/main.go` | PID 1 inside guest VM |
-envd is a standalone Rust binary (Tokio + Axum + connectrpc-rs). It is completely independent from the Go module — the only connection is the protobuf contract. It compiles to a statically linked musl binary baked into rootfs images.
+envd is a **completely independent Go module**. It is never imported by the main module. The only connection is the protobuf contract. It compiles to a static binary baked into rootfs images.
 **Key architectural invariant:** The host agent is **stateful** (in-memory `boxes` map is the source of truth for running VMs). The control plane is **stateless** (all persistent state in PostgreSQL). The reconciler (`internal/api/reconciler.go`) bridges the gap — it periodically compares DB records against the host agent's live state and marks orphaned sandboxes as "stopped".
 ### Control Plane
-**Internal packages:** `internal/api/`, `internal/email/`
+**Packages:** `internal/api/`, `internal/dashboard/`, `internal/auth/`, `internal/scheduler/`, `internal/lifecycle/`, `internal/config/`, `internal/db/`
-**Public packages (importable by cloud repo):** `pkg/config/`, `pkg/db/`, `pkg/auth/`, `pkg/auth/oauth/`, `pkg/scheduler/`, `pkg/lifecycle/`, `pkg/channels/`, `pkg/audit/`, `pkg/service/`, `pkg/events/`, `pkg/id/`, `pkg/validate/`
+Startup (`cmd/control-plane/main.go`) wires: config (env vars) → pgxpool → `db.Queries` (sqlc-generated) → Connect RPC client to host agent → `api.Server`. Everything flows through constructor injection.
-**Extension framework:** `pkg/cpextension/` (shared `Extension` interface + `ServerContext`), `pkg/cpserver/` (exported `Run()` entrypoint with functional options for cloud `main.go`)
+- **API Server** (`internal/api/server.go`): chi router with middleware. Creates handler structs (`sandboxHandler`, `execHandler`, `filesHandler`, etc.) injected with `db.Queries` and the host agent Connect RPC client. Routes under `/v1/capsules/*`.
-The cloud repo imports this module as a Go dependency and calls `cpserver.Run(cpserver.WithExtensions(myExt))`. Each extension implements two methods: `RegisterRoutes(r chi.Router, sctx ServerContext)` to add HTTP routes, and `BackgroundWorkers(sctx ServerContext) []func(context.Context)` to add long-running goroutines. `ServerContext` carries all OSS services (DB, scheduler, auth, etc.) so extensions can use them without reimplementing anything. To expose a new OSS service to extensions, add it to `ServerContext` in `pkg/cpextension/extension.go` and populate it in `pkg/cpserver/run.go`.
-**pkg/ vs internal/ decision rule:** A package belongs in `pkg/` only if the cloud repo needs to import it directly. Everything else stays in `internal/`. New OSS services (e.g. email, notifications) go in `internal/` — the cloud repo accesses them through `ServerContext`, not by importing the package. Do not put a service in `pkg/` just because the cloud repo uses it.
-Startup (`cmd/control-plane/main.go`) is a thin wrapper: `cpserver.Run(cpserver.WithVersion(...))`. All 20 initialization steps live in `pkg/cpserver/run.go`: config → pgxpool → `db.Queries` → Redis → mTLS CA → host client pool → scheduler → OAuth → channels → audit logger → `api.New()` → background workers → HTTP server. Everything flows through constructor injection.
-- **API Server** (`internal/api/server.go`): chi router with middleware. Creates handler structs (`sandboxHandler`, `execHandler`, `filesHandler`, etc.) injected with `db.Queries` and the host agent Connect RPC client. Routes under `/v1/capsules/*`. Accepts `[]cpextension.Extension` — each extension's `RegisterRoutes()` is called after all core routes are registered.
 - **Reconciler** (`internal/api/reconciler.go`): background goroutine (every 30s) that compares DB records against `agent.ListSandboxes()` RPC. Marks orphaned DB entries as "stopped".
-- **Dashboard** (SvelteKit + Tailwind + Bits UI, built to static files in `frontend/build/`, served by Caddy as a reverse proxy)
+- **Dashboard** (SvelteKit + Tailwind + Bits UI, statically built and embedded via `go:embed`, served as catch-all at root)
-- **Database**: PostgreSQL via pgx/v5. Queries generated by sqlc from `db/queries/*.sql` → `pkg/db/`. Migrations in `db/migrations/` (goose, plain SQL). `db/migrations/embed.go` exposes `migrations.FS` so the cloud repo can run OSS migrations via `go:embed`.
+- **Database**: PostgreSQL via pgx/v5. Queries generated by sqlc from `db/queries/sandboxes.sql`. Migrations in `db/migrations/` (goose, plain SQL).
-- **Config** (`pkg/config/config.go`): purely environment variables (`DATABASE_URL`, `CP_LISTEN_ADDR`, `CP_HOST_AGENT_ADDR`), no YAML/file config.
+- **Config** (`internal/config/config.go`): purely environment variables (`DATABASE_URL`, `CP_LISTEN_ADDR`, `CP_HOST_AGENT_ADDR`), no YAML/file config.
 ### Host Agent
 **Packages:** `internal/hostagent/`, `internal/sandbox/`, `internal/vm/`, `internal/network/`, `internal/devicemapper/`, `internal/envdclient/`, `internal/snapshot/`
-**Production deployment:** `scripts/prepare-wrenn-user.sh` creates the `wrenn` system user, sets Linux capabilities (setcap) on wrenn-agent and all child binaries (iptables, losetup, dmsetup, etc.), installs an apt hook to restore capabilities after package updates, configures udev rules for `/dev/net/tun`, loads required kernel modules, and writes systemd unit files for both services. No sudo grants — all privilege is via capabilities.
+Startup (`cmd/host-agent/main.go`) wires: root check → enable IP forwarding → clean up stale dm devices → `sandbox.Manager` (containing `vm.Manager` + `network.SlotAllocator` + `devicemapper.LoopRegistry`) → `hostagent.Server` (Connect RPC handler) → HTTP server.
-Startup (`cmd/host-agent/main.go`) wires: root/capabilities check → enable IP forwarding → clean up stale dm devices → `sandbox.Manager` (containing `vm.Manager` + `network.SlotAllocator` + `devicemapper.LoopRegistry`) → `hostagent.Server` (Connect RPC handler) → HTTP server.
 - **RPC Server** (`internal/hostagent/server.go`): implements `hostagentv1connect.HostAgentServiceHandler`. Thin wrapper — every method delegates to `sandbox.Manager`. Maps Connect error codes on return.
 - **Sandbox Manager** (`internal/sandbox/manager.go`): the core orchestration layer. Maintains in-memory state in `boxes map[string]*sandboxState` (protected by `sync.RWMutex`). Each `sandboxState` holds a `models.Sandbox`, a `*network.Slot`, and an `*envdclient.Client`. Runs a TTL reaper (every 10s) that auto-destroys timed-out sandboxes.
@@ -99,17 +89,13 @@ Startup (`cmd/host-agent/main.go`) wires: root/capabilities check → enable IP
 ### envd (Guest Agent)
-**Directory:** `envd-rs/` — standalone Rust crate
+**Module:** `envd/` with its own `go.mod` (`git.omukk.dev/wrenn/wrenn/envd`)
-Runs as PID 1 inside the microVM via `wrenn-init.sh` (mounts procfs/sysfs/dev, sets hostname, writes resolv.conf, then execs envd via tini). Built with `cargo build --release --target x86_64-unknown-linux-musl`. Listens on TCP `0.0.0.0:49983`.
+Runs as PID 1 inside the microVM via `wrenn-init.sh` (mounts procfs/sysfs/dev, sets hostname, writes resolv.conf, then execs envd). Extracted from E2B (Apache 2.0), with shared packages internalized into `envd/internal/shared/`. Listens on TCP `0.0.0.0:49983`.
-- **Stack**: Tokio (async runtime) + Axum (HTTP) + connectrpc-rs (Connect protocol RPC)
+- **ProcessService**: start processes, stream stdout/stderr, signal handling, PTY support
-- **ProcessService** (Connect RPC): start/connect/list/signal processes, stream stdout/stderr, PTY support
+- **FilesystemService**: stat/list/mkdir/move/remove/watch files
-- **FilesystemService** (Connect RPC): stat/list/mkdir/move/remove/watch files
+- **Health**: GET `/health`
-- **HTTP endpoints**: GET `/health`, GET `/metrics`, POST `/init`, POST `/snapshot/prepare`, GET/POST `/files`
-- **Proto codegen**: `connectrpc-build` compiles `proto/envd/*.proto` at `cargo build` time via `build.rs` — no committed stubs
-- **Build**: `make build-envd` → static musl binary in `builds/envd`
-- **Dev**: `make dev-envd` → `cargo run -- --isnotfc --port 49983`
 ### Dashboard (Frontend)
@@ -119,8 +105,8 @@ Runs as PID 1 inside the microVM via `wrenn-init.sh` (mounts procfs/sysfs/dev, s
 - **Package manager**: pnpm
 - **Routing**: SvelteKit file-based routing under `frontend/src/routes/`
 - **Routing layout**: `/login` and `/signup` at root, authenticated pages under `/dashboard/*` (e.g. `/dashboard/capsules`, `/dashboard/keys`)
-- **Build output**: `frontend/build/` — static files served by Caddy
+- **Build output**: `frontend/build/` → copied to `internal/dashboard/static/` → embedded via `go:embed` into the control plane binary
-- **Serving**: Caddy reverse-proxies API requests to the control plane and serves the SvelteKit SPA directly. The control plane does not serve frontend assets.
+- **Serving**: `internal/dashboard/dashboard.go` registers a `NotFound` catch-all SPA handler with fallback to `index.html`. API routes (`/v1/*`, `/openapi.yaml`, `/docs`) are registered first and take priority
 - **Dev workflow**: `make dev-frontend` runs Vite dev server on port 5173 with HMR. API calls proxy to `http://localhost:8000`
 - **Fonts**: Manrope (UI), Instrument Serif (headings), JetBrains Mono (code), Alice (brand wordmark) — all self-hosted via `@fontsource`
- **Dark mode**: class-based (`.dark` on `<html>`) with system preference detection + localStorage persistence - **Dark mode**: class-based (`.dark` on `<html>`) with system preference detection + localStorage persistence
@ -189,47 +175,52 @@ Routes defined in `internal/api/server.go`, handlers in `internal/api/handlers_*
### Proto (Connect RPC) ### Proto (Connect RPC)
Proto source of truth is `proto/envd/*.proto` and `proto/hostagent/*.proto`. Run `make proto` to regenerate Go stubs. Two `buf.gen.yaml` files control Go output: Proto source of truth is `proto/envd/*.proto` and `proto/hostagent/*.proto`. Run `make proto` to regenerate. Three `buf.gen.yaml` files control output:
| buf.gen.yaml location | Generates to | Used by | | buf.gen.yaml location | Generates to | Used by |
|---|---|---| |---|---|---|
| `proto/envd/buf.gen.yaml` | `proto/envd/gen/` | Main module (host agent's envd client) | | `proto/envd/buf.gen.yaml` | `proto/envd/gen/` | Main module (host agent's envd client) |
| `proto/hostagent/buf.gen.yaml` | `proto/hostagent/gen/` | Main module (control plane ↔ host agent) | | `proto/hostagent/buf.gen.yaml` | `proto/hostagent/gen/` | Main module (control plane ↔ host agent) |
| `envd/spec/buf.gen.yaml` | `envd/internal/services/spec/` | envd module (guest agent server) |
The Rust envd (`envd-rs/`) generates its own protobuf stubs at `cargo build` time via `connectrpc-build` in `envd-rs/build.rs`, reading from the same `proto/envd/*.proto` sources. No committed Rust stubs — they live in `OUT_DIR`. The envd `buf.gen.yaml` reads from `../../proto/envd/` (same source protos) but generates into envd's own module. This means the same `.proto` files produce two independent sets of Go stubs — one for each Go module.
To add a new RPC method: edit the `.proto` file → `make proto` (Go stubs) → rebuild envd-rs (Rust stubs generated automatically) → implement the handler on both sides. To add a new RPC method: edit the `.proto` file → `make proto` → implement the handler on both sides.
### sqlc ### sqlc
Config: `sqlc.yaml` (project root). Reads queries from `db/queries/*.sql`, reads schema from `db/migrations/`, outputs to `pkg/db/`. Config: `sqlc.yaml` (project root). Reads queries from `db/queries/*.sql`, reads schema from `db/migrations/`, outputs to `internal/db/`.
To add a new query: add it to the appropriate `.sql` file in `db/queries/``make generate` → use the new method on `*db.Queries`. To add a new query: add it to the appropriate `.sql` file in `db/queries/``make generate` → use the new method on `*db.Queries`.
## Key Technical Decisions ## Key Technical Decisions
- **Connect RPC** (not gRPC) for all RPC communication between components - **Connect RPC** (not gRPC) for all RPC communication between components
- **Buf + protoc-gen-connect-go** for Go code generation; **connectrpc-build** for Rust code generation in envd - **Buf + protoc-gen-connect-go** for code generation (not protoc-gen-go-grpc)
- **Raw Firecracker HTTP API** via Unix socket (not firecracker-go-sdk Machine type) - **Raw Firecracker HTTP API** via Unix socket (not firecracker-go-sdk Machine type)
- **TAP networking** (not vsock) for host-to-envd communication - **TAP networking** (not vsock) for host-to-envd communication
- **Device-mapper snapshots** for rootfs CoW — shared read-only loop device per base template, per-sandbox sparse CoW file, Firecracker gets `/dev/mapper/wrenn-{id}` - **Device-mapper snapshots** for rootfs CoW — shared read-only loop device per base template, per-sandbox sparse CoW file, Firecracker gets `/dev/mapper/wrenn-{id}`
- **PostgreSQL** via pgx/v5 + sqlc (type-safe query generation). Goose for migrations (plain SQL, up/down) - **PostgreSQL** via pgx/v5 + sqlc (type-safe query generation). Goose for migrations (plain SQL, up/down)
- **Dashboard**: SvelteKit (Svelte 5, adapter-static) + Tailwind CSS v4 + Bits UI. Built to static files in `frontend/build/`, served by Caddy (not embedded in the Go binary) - **Dashboard**: SvelteKit (Svelte 5, adapter-static) + Tailwind CSS v4 + Bits UI. Built to static files, embedded into the Go binary via `go:embed`, served as catch-all at root
- **Lago** for billing (external service, not in this codebase) - **Lago** for billing (external service, not in this codebase)
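The device-mapper decision above boils down to one snapshot table per sandbox. A sketch of building that table string (device paths, chunk size, and the helper name are assumptions, not the agent's actual code):

```go
package main

import "fmt"

// snapshotTable builds a device-mapper "snapshot" table: a per-sandbox
// CoW device over a shared read-only base.
// Format: start length snapshot <origin> <cow-device> <P|N> <chunk-sectors>
func snapshotTable(sizeSectors int64, baseDev, cowDev string) string {
	// "P 8" = persistent snapshot with 8-sector (4 KiB) chunks.
	return fmt.Sprintf("0 %d snapshot %s %s P 8", sizeSectors, baseDev, cowDev)
}

func main() {
	table := snapshotTable(2097152, "/dev/loop0", "/dev/loop1")
	fmt.Println(table) // 0 2097152 snapshot /dev/loop0 /dev/loop1 P 8
	// The host agent would then run (as root):
	//   dmsetup create wrenn-sb-12ab34cd --table "<table>"
	// and hand /dev/mapper/wrenn-sb-12ab34cd to Firecracker as the drive.
}
```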
## Coding Conventions ## Coding Conventions
- **Go style**: `gofmt`, `go vet`, `context.Context` everywhere, errors wrapped with `fmt.Errorf("action: %w", err)`, `slog` for logging, no global state - **Go style**: `gofmt`, `go vet`, `context.Context` everywhere, errors wrapped with `fmt.Errorf("action: %w", err)`, `slog` for logging, no global state
- **Naming**: Sandbox IDs `sb-` + 8 hex, API keys `wrn_` + 32 chars, Host IDs `host-` + 8 hex - **Naming**: Sandbox IDs `sb-` + 8 hex, API keys `wrn_` + 32 chars, Host IDs `host-` + 8 hex
- **Dependencies**: Use `go get` to add Go deps, never hand-edit go.mod. For envd-rs deps: edit `envd-rs/Cargo.toml` - **Dependencies**: Use `go get` to add deps, never hand-edit go.mod. For envd deps: `cd envd && go get ...` (separate module)
- **Generated code**: Always commit generated code (proto stubs, sqlc). Never add generated code to .gitignore - **Generated code**: Always commit generated code (proto stubs, sqlc). Never add generated code to .gitignore
- **Migrations**: Always use `make migrate-create name=xxx`, never create migration files manually - **Migrations**: Always use `make migrate-create name=xxx`, never create migration files manually
- **Testing**: Table-driven tests for handlers and state machine transitions - **Testing**: Table-driven tests for handlers and state machine transitions
### Two-module gotcha
The main module (`go.mod`) and envd (`envd/go.mod`) are fully independent. `make tidy`, `make fmt`, `make vet` already operate on both. But when adding dependencies manually, remember to target the correct module (`cd envd && go get ...` for envd deps). `make proto` also generates stubs for both modules from the same proto sources.
## Rootfs & Guest Init ## Rootfs & Guest Init
- **wrenn-init** (`images/wrenn-init.sh`): the PID 1 init script baked into every rootfs. Mounts virtual filesystems, sets hostname, writes `/etc/resolv.conf`, then execs envd. - **wrenn-init** (`images/wrenn-init.sh`): the PID 1 init script baked into every rootfs. Mounts virtual filesystems, sets hostname, writes `/etc/resolv.conf`, then execs envd.
- **Updating the rootfs** after changing envd or wrenn-init: `bash scripts/update-minimal-rootfs.sh`. This builds envd via `make build-envd` (Rust → static musl binary), mounts the rootfs image, copies in the new binaries, and unmounts. Defaults to `/var/lib/wrenn/images/minimal.ext4`. - **Updating the rootfs** after changing envd or wrenn-init: `bash scripts/update-debug-rootfs.sh [rootfs_path]`. This builds envd via `make build-envd`, mounts the rootfs image, copies in the new binaries, and unmounts. Defaults to `/var/lib/wrenn/images/minimal.ext4`.
- Rootfs images are minimal debootstrap — no systemd, no coreutils beyond busybox. Use `/bin/sh -c` for shell builtins inside the guest. - Rootfs images are minimal debootstrap — no systemd, no coreutils beyond busybox. Use `/bin/sh -c` for shell builtins inside the guest.
## Fixed Paths (on host machine) ## Fixed Paths (on host machine)
@ -242,9 +233,7 @@ To add a new query: add it to the appropriate `.sql` file in `db/queries/` → `
## Design Context ## Design Context
### Users ### Users
Developers across the full spectrum — solo engineers building side projects, startup teams integrating sandboxed execution into products, and platform/infra engineers at larger organizations running production workloads on Firecracker microVMs. They arrive with context: they know what a process is, what a rootfs is, what a TTY means. The interface must feel at home for all three: approachable enough not to intimidate a hacker, precise enough to earn the trust of a production ops team. Never condescend, never oversimplify. Trust the user to understand what they're looking at. Developers across the full spectrum — solo engineers building side projects, startup teams integrating sandboxed execution into products, and platform/infra engineers at larger organizations. The interface must feel at home for all three: approachable enough not to intimidate a hacker, precise enough to earn the trust of a production ops team. Never condescend, never oversimplify. Trust the user to understand what they're looking at.
**Primary job to be done:** Understand what's running, act on it confidently, and get back to code.
### Brand Personality ### Brand Personality
**Precise. Warm. Uncompromising.** **Precise. Warm. Uncompromising.**
@ -254,9 +243,9 @@ Wrenn is an engineer's favorite tool — built with visible care, not assembled
Emotional goal: **in control.** Users leave a session with full confidence in what's running, what happened, and what comes next. Nothing is hidden, nothing is ambiguous. Emotional goal: **in control.** Users leave a session with full confidence in what's running, what happened, and what comes next. Nothing is hidden, nothing is ambiguous.
### Aesthetic Direction ### Aesthetic Direction
**Dark-only (permanently), industrial-warm, data-forward.** **Dark-first, industrial-warm, data-forward.**
No light mode planned. All design decisions should optimize for dark. The near-black-green background palette (`#0a0c0b` through `#2a302d`) reads as "black with intention" — not pitch black (cold) and not charcoal (dated). The sage green accent (`#5e8c58`) is muted and organic, a meaningful departure from the startup-green neon that saturates the developer tool space. The near-black-green background palette (`#0a0c0b` through `#2a302d`) reads as "black with intention" — not pitch black (cold) and not charcoal (dated). The sage green accent (`#5e8c58`) is muted and organic, a meaningful departure from the startup-green neon that saturates the developer tool space.
**Anti-references:** **Anti-references:**
- **Supabase**: avoid the friendly, approachable startup-green energy — too generic, too eager to please - **Supabase**: avoid the friendly, approachable startup-green energy — too generic, too eager to please
@ -270,95 +259,30 @@ No light mode planned. All design decisions should optimize for dark. The near-b
### Type System ### Type System
Four fonts with strict roles — this is the design system's strongest personality trait and must be respected: Four fonts with strict roles — this is the design system's strongest personality trait and must be respected:
| Font | CSS Class | Role | When to use | | Font | Role | When to use |
|------|-----------|------|-------------| |------|------|-------------|
| **Manrope** (variable, sans) | `font-sans` | UI workhorse | All body copy, nav, labels, buttons, form text | | **Manrope** (variable, sans) | UI workhorse | All body copy, nav, labels, buttons, form text |
| **Instrument Serif** | `font-serif` | Display / editorial | Page titles (h1), dialog headings, metric values, hero moments | | **Instrument Serif** | Display / editorial | Page titles (h1), dialog headings, metric values, hero moments |
| **JetBrains Mono** (variable) | `font-mono` | Data / code | IDs, timestamps, key prefixes, file paths, terminal output, metrics | | **JetBrains Mono** (variable) | Data / code | IDs, timestamps, key prefixes, file paths, terminal output, metrics |
| **Alice** | brand wordmark only | Brand wordmark | "Wrenn" in sidebar and login only — nowhere else | | **Alice** | Brand wordmark | "Wrenn" in sidebar and login only — nowhere else |
Instrument Serif at scale creates the signature editorial moments. Mono provides the precision signal for technical data. Never swap these roles. Instrument Serif at scale creates the signature editorial moments. Mono provides the precision signal for technical data. Never swap these roles.
**Tracking overrides (app.css):**
- `.font-serif``letter-spacing: 0.015em` (positive tracking; Instrument Serif reads less condensed at display sizes)
- `.font-mono``font-variant-numeric: tabular-nums` (numbers align in tables and metric displays)
**Type scale (root: 87.5% = 14px base):**
| Token | Value | Use |
|---|---|---|
| `--text-display` | 2.571rem (~36px) | Auth section headings |
| `--text-page` | 2rem (~28px) | Page h1 titles |
| `--text-heading` | 1.429rem (~20px) | Dialog headings, empty states |
| `--text-body` | 1rem (~14px) | Primary body, buttons, inputs |
| `--text-ui` | 0.929rem (~13px) | Nav labels, table cells |
| `--text-meta` | 0.857rem (~12px) | Key prefixes, minor info |
| `--text-label` | 0.786rem (~11px) | Uppercase section labels |
| `--text-badge` | 0.714rem (~10px) | Live badges, tiny indicators |
### Color System ### Color System
```
Backgrounds: bg-0 (#0a0c0b) through bg-5 (#2a302d) — 6 steps
Text: bright > primary > secondary > tertiary > muted — 5 levels
Accent: accent (#5e8c58) / accent-mid / accent-bright / glow / glow-mid
Status: amber (#d4a73c) / red (#cf8172) / blue (#5a9fd4)
```
All values are CSS custom properties in `frontend/src/app.css`. Use accent sparingly. It should feel earned — reserved for live/active state indicators, primary CTAs, focus rings, and active nav. When accent appears, it should register.
### Upcoming Surfaces (design must accommodate)
- **Terminal / shell output**: streaming exec output, TTY sessions. Needs strong mono treatment, high contrast for long sessions.
- **File browser**: filesystem tree inside capsule. Density matters — breadcrumbs, file icons, permission bits.
- **SDK / docs embedding**: code samples, quickstart flows inline in dashboard. Code blocks must feel premium, not afterthought.
- **Billing / usage charts**: pool consumption, cost curves, usage over time. Instrument Serif at large scale for metrics; chart containers should feel like instruments, not dashboards.
**Backgrounds (6-step near-black-green scale):**
| Token | Value | Use |
|---|---|---|
| `--color-bg-0` | `#0a0c0b` | Page base, sidebar deepest layer |
| `--color-bg-1` | `#0f1211` | Sidebar surface |

| `--color-bg-2` | `#141817` | Card backgrounds |
| `--color-bg-3` | `#1a1e1c` | Table headers, elevated surfaces |
| `--color-bg-4` | `#212624` | Hover states, inputs |
| `--color-bg-5` | `#2a302d` | Highlighted items, selected rows |
**Text (5-level hierarchy):**
| Token | Value | Use |
|---|---|---|
| `--color-text-bright` | `#eae7e2` | H1s, dialog headings |
| `--color-text-primary` | `#d0cdc6` | Body copy, primary labels |
| `--color-text-secondary` | `#9b9790` | Secondary labels, descriptions |
| `--color-text-tertiary` | `#6b6862` | Hints, placeholders |
| `--color-text-muted` | `#454340` | Dividers as text, ultra-subtle |
**Accent (sage green — use sparingly, must feel earned):**
| Token | Value | Use |
|---|---|---|
| `--color-accent` | `#5e8c58` | Primary CTA, live indicators, focus rings, active nav |
| `--color-accent-mid` | `#89a785` | Hover accent text |
| `--color-accent-bright` | `#a4c89f` | Accent on dark backgrounds |
| `--color-accent-glow` | `rgba(94,140,88,0.07)` | Subtle tinted backgrounds |
| `--color-accent-glow-mid` | `rgba(94,140,88,0.14)` | Hover tint on accent items |
**Status semantics:**
| Token | Value | Use |
|---|---|---|
| `--color-amber` | `#d4a73c` | Warning, paused state |
| `--color-red` | `#cf8172` | Error, destructive actions |
| `--color-blue` | `#5a9fd4` | Info, neutral system states |
**Borders:** `--color-border` (`#1f2321`) default; `--color-border-mid` (`#2a2f2c`) for inputs/hover.
### Component Patterns
**Buttons:**
- Primary: solid sage green (`--color-accent`), hover brightness boost + micro-lift (`-translate-y-px`)
- Secondary: bordered (`--color-border-mid`), text transitions to accent on hover
- Danger: red text + subtle red background on hover
- All: `transition-all duration-150`
**Inputs:**
- Border `--color-border`, background `--color-bg-2`; focus transitions border and icon to accent
- Group focus pattern: `group` wrapper + `group-focus-within:text-[var(--color-accent)]` on icon
**Tables / data lists:**
- Grid layout; header `bg-3` + uppercase `--text-label`; row hover `hover:bg-[var(--color-bg-3)]`
- Status stripe: left border color matches sandbox state
**Status indicators:** Running = animated ping + sage green dot; Paused = amber dot; Stopped = muted gray. Color is never the sole differentiator.
**Modals & dialogs:** Border + shadow only — no accent gradient bars/strips. `fadeUp` 0.35s entrance.
**Empty states:** Large icon with glow, Instrument Serif heading, secondary body text, CTA below, `iconFloat` 4s animation.
**Animations (always respect `prefers-reduced-motion`):** `fadeUp` (entrance), `status-ping` (live indicator), `iconFloat` (empty states), `spin-once` (refresh), staggered `animation-delay` on lists.
### Design Principles ### Design Principles
@ -371,42 +295,3 @@ All values are CSS custom properties in `frontend/src/app.css`.
4. **Legible at speed.** Users scan dashboards in seconds. Strong typographic contrast (serif h1, mono IDs, sans body), consistent patterns, and predictable placement let users orient instantly without reading everything. 4. **Legible at speed.** Users scan dashboards in seconds. Strong typographic contrast (serif h1, mono IDs, sans body), consistent patterns, and predictable placement let users orient instantly without reading everything.
5. **Craft signals trust.** For infrastructure that runs production code, the quality of the UI is a proxy for the quality of the product. Pixel-level decisions matter. Polish is not decoration — it's a trust signal. 5. **Craft signals trust.** For infrastructure that runs production code, the quality of the UI is a proxy for the quality of the product. Pixel-level decisions matter. Polish is not decoration — it's a trust signal.
<!-- code-review-graph MCP tools -->
## MCP Tools: code-review-graph
**IMPORTANT: This project has a knowledge graph. ALWAYS use the
code-review-graph MCP tools BEFORE using Grep/Glob/Read to explore
the codebase.** The graph is faster, cheaper (fewer tokens), and gives
you structural context (callers, dependents, test coverage) that file
scanning cannot.
### When to use graph tools FIRST
- **Exploring code**: `semantic_search_nodes` or `query_graph` instead of Grep
- **Understanding impact**: `get_impact_radius` instead of manually tracing imports
- **Code review**: `detect_changes` + `get_review_context` instead of reading entire files
- **Finding relationships**: `query_graph` with callers_of/callees_of/imports_of/tests_for
- **Architecture questions**: `get_architecture_overview` + `list_communities`
Fall back to Grep/Glob/Read **only** when the graph doesn't cover what you need.
### Key Tools
| Tool | Use when |
|------|----------|
| `detect_changes` | Reviewing code changes — gives risk-scored analysis |
| `get_review_context` | Need source snippets for review — token-efficient |
| `get_impact_radius` | Understanding blast radius of a change |
| `get_affected_flows` | Finding which execution paths are impacted |
| `query_graph` | Tracing callers, callees, imports, tests, dependencies |
| `semantic_search_nodes` | Finding functions/classes by name or keyword |
| `get_architecture_overview` | Understanding high-level codebase structure |
| `refactor_tool` | Planning renames, finding dead code |
### Workflow
1. The graph auto-updates on file changes (via hooks).
2. Use `detect_changes` for code review.
3. Use `get_affected_flows` to understand impact.
4. Use `query_graph` pattern="tests_for" to check coverage.
Makefile
@ -2,10 +2,8 @@
# Variables # Variables
# ═══════════════════════════════════════════════════ # ═══════════════════════════════════════════════════
DATABASE_URL ?= postgres://wrenn:wrenn@localhost:5432/wrenn?sslmode=disable DATABASE_URL ?= postgres://wrenn:wrenn@localhost:5432/wrenn?sslmode=disable
BIN_DIR := $(shell pwd)/builds GOBIN := $(shell pwd)/builds
COMMIT := $(shell git rev-parse --short HEAD 2>/dev/null || echo "unknown") ENVD_DIR := envd
VERSION_CP := $(shell cat VERSION_CP 2>/dev/null | tr -d '[:space:]' || echo "0.0.0-dev")
VERSION_AGENT := $(shell cat VERSION_AGENT 2>/dev/null | tr -d '[:space:]' || echo "0.0.0-dev")
LDFLAGS := -s -w LDFLAGS := -s -w
# ═══════════════════════════════════════════════════ # ═══════════════════════════════════════════════════
@ -19,15 +17,15 @@ build-frontend:
cd frontend && pnpm install --frozen-lockfile && pnpm build cd frontend && pnpm install --frozen-lockfile && pnpm build
build-cp: build-cp:
go build -v -ldflags="$(LDFLAGS) -X main.version=$(VERSION_CP) -X main.commit=$(COMMIT)" -o $(BIN_DIR)/wrenn-cp ./cmd/control-plane go build -v -ldflags="$(LDFLAGS)" -o $(GOBIN)/wrenn-cp ./cmd/control-plane
build-agent: build-agent:
go build -v -ldflags="$(LDFLAGS) -X main.version=$(VERSION_AGENT) -X main.commit=$(COMMIT)" -o $(BIN_DIR)/wrenn-agent ./cmd/host-agent go build -v -ldflags="$(LDFLAGS)" -o $(GOBIN)/wrenn-agent ./cmd/host-agent
build-envd: build-envd:
cd envd-rs && ENVD_COMMIT=$(COMMIT) cargo build --release --target x86_64-unknown-linux-musl cd $(ENVD_DIR) && CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
@cp envd-rs/target/x86_64-unknown-linux-musl/release/envd $(BIN_DIR)/envd go build -ldflags="$(LDFLAGS)" -o $(GOBIN)/envd .
@file $(BIN_DIR)/envd | grep -q "static-pie linked" || \ @file $(GOBIN)/envd | grep -q "statically linked" || \
(echo "ERROR: envd is not statically linked!" && exit 1) (echo "ERROR: envd is not statically linked!" && exit 1)
# ═══════════════════════════════════════════════════ # ═══════════════════════════════════════════════════
@ -58,7 +56,8 @@ dev-frontend:
cd frontend && pnpm dev --port 5173 --host 0.0.0.0 cd frontend && pnpm dev --port 5173 --host 0.0.0.0
dev-envd: dev-envd:
cd envd-rs && cargo run -- --isnotfc --port 49983 cd $(ENVD_DIR) && go run . --debug --listen-tcp :3002
# ═══════════════════════════════════════════════════ # ═══════════════════════════════════════════════════
# Database (goose) # Database (goose)
@ -91,6 +90,7 @@ generate: proto sqlc
proto: proto:
cd proto/envd && buf generate cd proto/envd && buf generate
cd proto/hostagent && buf generate cd proto/hostagent && buf generate
cd $(ENVD_DIR)/spec && buf generate
sqlc: sqlc:
sqlc generate sqlc generate
@ -102,12 +102,14 @@ sqlc:
fmt: fmt:
gofmt -w . gofmt -w .
cd $(ENVD_DIR) && gofmt -w .
lint: lint:
golangci-lint run ./... golangci-lint run ./...
vet: vet:
go vet ./... go vet ./...
cd $(ENVD_DIR) && go vet ./...
test: test:
go test -race -v ./internal/... go test -race -v ./internal/...
@ -119,6 +121,7 @@ test-all: test test-integration
tidy: tidy:
go mod tidy go mod tidy
cd $(ENVD_DIR) && go mod tidy
## Run all quality checks in CI order ## Run all quality checks in CI order
check: fmt vet lint test check: fmt vet lint test
@ -148,8 +151,8 @@ setup-host:
sudo bash scripts/setup-host.sh sudo bash scripts/setup-host.sh
install: build install: build
sudo cp $(BIN_DIR)/wrenn-cp /usr/local/bin/ sudo cp $(GOBIN)/wrenn-cp /usr/local/bin/
sudo cp $(BIN_DIR)/wrenn-agent /usr/local/bin/ sudo cp $(GOBIN)/wrenn-agent /usr/local/bin/
sudo cp deploy/systemd/*.service /etc/systemd/system/ sudo cp deploy/systemd/*.service /etc/systemd/system/
sudo systemctl daemon-reload sudo systemctl daemon-reload
@ -160,7 +163,7 @@ install: build
clean: clean:
rm -rf builds/ rm -rf builds/
cd envd-rs && cargo clean cd $(ENVD_DIR) && rm -f envd
# ═══════════════════════════════════════════════════ # ═══════════════════════════════════════════════════
# Help # Help
@ -176,11 +179,11 @@ help:
@echo " make dev-cp Control plane (hot reload if air installed)" @echo " make dev-cp Control plane (hot reload if air installed)"
@echo " make dev-frontend Vite dev server with HMR (port 5173)" @echo " make dev-frontend Vite dev server with HMR (port 5173)"
@echo " make dev-agent Host agent (sudo required)" @echo " make dev-agent Host agent (sudo required)"
@echo " make dev-envd envd in debug mode (--isnotfc, port 49983)" @echo " make dev-envd envd in TCP debug mode"
@echo "" @echo ""
@echo " make build Build all binaries → builds/" @echo " make build Build all binaries → builds/"
@echo " make build-frontend Build SvelteKit dashboard → frontend/build/" @echo " make build-frontend Build SvelteKit dashboard → frontend/build/"
@echo " make build-envd Build envd static binary (Rust, musl)" @echo " make build-envd Build envd static binary"
@echo "" @echo ""
@echo " make migrate-up Apply migrations" @echo " make migrate-up Apply migrations"
@echo " make migrate-create name=xxx New migration" @echo " make migrate-create name=xxx New migration"
NOTICE (new file)
@ -0,0 +1,19 @@
Wrenn Sandbox
Copyright (c) 2026 M/S Omukk, Bangladesh
This project includes software derived from the following project:
Project: e2b infra
Repository: https://github.com/e2b-dev/infra
The following files and directories in this repository contain code derived from the above project:
- envd/
- proto/envd/*.proto
- internal/snapshot/
- internal/uffd/
Modifications to this code were made by M/S Omukk.
Copyright (c) 2023 FoundryLabs, Inc.
Modifications Copyright (c) 2026 M/S Omukk, Bangladesh
README.md
@ -2,17 +2,16 @@
Secure infrastructure for AI Secure infrastructure for AI
## Prerequisites ## Deployment
### Prerequisites
- Linux host with `/dev/kvm` access (bare metal or nested virt) - Linux host with `/dev/kvm` access (bare metal or nested virt)
- Firecracker binary at `/usr/local/bin/firecracker` - Firecracker binary at `/usr/local/bin/firecracker`
- PostgreSQL - PostgreSQL
- Go 1.25+ - Go 1.25+
- Rust 1.88+ with `x86_64-unknown-linux-musl` target (`rustup target add x86_64-unknown-linux-musl`)
- pnpm (for frontend)
- Docker (for dev infra and rootfs builds)
## Build ### Build
```bash ```bash
make build # outputs to builds/ make build # outputs to builds/
@ -20,77 +19,30 @@ make build # outputs to builds/
Produces three binaries: `wrenn-cp` (control plane), `wrenn-agent` (host agent), `envd` (guest agent). Produces three binaries: `wrenn-cp` (control plane), `wrenn-agent` (host agent), `envd` (guest agent).
## Host setup ### Host setup
The host agent needs a kernel, a minimal rootfs image, and working directories on the host machine. The host agent machine needs:
### Directory structure
```
/var/lib/wrenn/
├── kernels/
│ └── vmlinux # uncompressed Linux kernel (not bzImage)
├── images/
│ └── minimal/
│ └── rootfs.ext4 # base rootfs (all other templates snapshot from this)
├── sandboxes/ # per-sandbox CoW files (created at runtime)
└── snapshots/ # pause/hibernate snapshot files (created at runtime)
```
Create the directories:
```bash ```bash
sudo mkdir -p /var/lib/wrenn/{kernels,images/minimal,sandboxes,snapshots} # Kernel for guest VMs
mkdir -p /var/lib/wrenn/kernels
# Place a vmlinux kernel at /var/lib/wrenn/kernels/vmlinux
# Rootfs images
mkdir -p /var/lib/wrenn/images
# Build or place .ext4 rootfs images (e.g., minimal.ext4)
# Sandbox working directory
mkdir -p /var/lib/wrenn/sandboxes
# Snapshots directory
mkdir -p /var/lib/wrenn/snapshots
# Enable IP forwarding
sysctl -w net.ipv4.ip_forward=1
``` ```
### Kernel ### Configure
Place an uncompressed `vmlinux` kernel at `/var/lib/wrenn/kernels/vmlinux`. Versioned kernels (`vmlinux-{semver}`) are also supported — the agent picks the latest by semver.
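The "latest by semver" rule above can be sketched as a small selection function, assuming plain MAJOR.MINOR.PATCH versions with no pre-release tags (the function name and parsing details are illustrative):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// latestKernel picks the highest vmlinux-{semver} name from a directory
// listing; non-matching names (including the unversioned "vmlinux") are
// ignored in this sketch.
func latestKernel(names []string) string {
	parse := func(name string) ([3]int, bool) {
		v, ok := strings.CutPrefix(name, "vmlinux-")
		if !ok {
			return [3]int{}, false
		}
		parts := strings.Split(v, ".")
		if len(parts) != 3 {
			return [3]int{}, false
		}
		var out [3]int
		for i, p := range parts {
			n, err := strconv.Atoi(p)
			if err != nil {
				return [3]int{}, false
			}
			out[i] = n
		}
		return out, true
	}
	var best string
	var bestV [3]int
	for _, n := range names {
		v, ok := parse(n)
		if ok && (best == "" || v[0] > bestV[0] ||
			v[0] == bestV[0] && (v[1] > bestV[1] ||
				v[1] == bestV[1] && v[2] > bestV[2])) {
			best, bestV = n, v
		}
	}
	return best
}

func main() {
	// Note 6.10.0 beats 6.1.2: comparison is numeric per component,
	// not lexicographic.
	fmt.Println(latestKernel([]string{"vmlinux-6.1.2", "vmlinux-6.10.0", "vmlinux"}))
	// → vmlinux-6.10.0
}
```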
### Minimal rootfs
The minimal rootfs is the base image that all other templates (Python, Node, etc.) are built on top of via device-mapper snapshots. It must contain:
| Package | Why |
|---------|-----|
| `socat` | Bidirectional relay for port forwarding |
| `chrony` | Time sync from KVM PTP clock (`/dev/ptp0`) |
| `tini` | PID 1 zombie reaper (injected by build script, not apt) |
| `sudo` | User privilege management inside the guest |
| `wget` | HTTP fetching |
| `curl` | HTTP client |
| `ca-certificates` | TLS certificate verification |
**To build a rootfs from a Docker container:**
1. Create and configure a container with the required packages:
```bash
docker run -it --name wrenn-minimal debian:bookworm bash
# Inside the container:
apt update && apt install -y socat chrony sudo wget curl ca-certificates
exit
```
2. Export to a rootfs image (builds envd, injects wrenn-init + tini, shrinks to minimum size):
```bash
sudo bash scripts/rootfs-from-container.sh wrenn-minimal minimal
```
**To update an existing rootfs** after changing envd or `wrenn-init.sh`:
```bash
bash scripts/update-minimal-rootfs.sh
```
This rebuilds envd via `make build-envd` and copies the fresh binaries into the mounted rootfs image.
### IP forwarding
```bash
sudo sysctl -w net.ipv4.ip_forward=1
```
## Configure
Copy `.env.example` to `.env` and edit: Copy `.env.example` to `.env` and edit:
@ -107,21 +59,25 @@ WRENN_HOST_LISTEN_ADDR=:50051
WRENN_DIR=/var/lib/wrenn WRENN_DIR=/var/lib/wrenn
``` ```
## Development ### Run
```bash ```bash
make dev # Start PostgreSQL (Docker), run migrations, start control plane # Apply database migrations
make dev-agent # Start host agent (separate terminal, sudo) make migrate-up
make dev-frontend # Vite dev server with HMR (port 5173)
make check # fmt + vet + lint + test # Start control plane
./builds/wrenn-cp
``` ```
Control plane listens on `WRENN_CP_LISTEN_ADDR` (default `:8000`).
### Host registration ### Host registration
Hosts must be registered with the control plane before they can serve sandboxes. Hosts must be registered with the control plane before they can serve sandboxes.
1. **Create a host record** (via API or dashboard): 1. **Create a host record** (via API or dashboard):
```bash ```bash
# As an admin (JWT auth)
curl -X POST http://localhost:8000/v1/hosts \ curl -X POST http://localhost:8000/v1/hosts \
-H "Authorization: Bearer $JWT_TOKEN" \ -H "Authorization: Bearer $JWT_TOKEN" \
-H "Content-Type: application/json" \ -H "Content-Type: application/json" \
@ -131,16 +87,17 @@ Hosts must be registered with the control plane before they can serve sandboxes.
2. **Start the host agent** with the registration token and its externally-reachable address:
```bash
sudo WRENN_CP_URL=http://cp-host:8000 \
  ./builds/wrenn-agent \
  --register <token-from-step-1> \
  --address 10.0.1.5:50051
```
On first startup the agent sends its specs (arch, CPU, memory, disk) to the control plane, receives a long-lived host JWT, and saves it to `$WRENN_DIR/host-token`.
3. **Subsequent startups** don't need `--register` — the agent loads the saved JWT automatically:
```bash
sudo WRENN_CP_URL=http://cp-host:8000 \
  ./builds/wrenn-agent --address 10.0.1.5:50051
```
4. **If registration fails** (e.g., network error after token was consumed), regenerate a token:
@ -150,6 +107,23 @@ Hosts must be registered with the control plane before they can serve sandboxes.
```
Then restart the agent with the new token.
The agent sends heartbeats to the control plane every 30 seconds. The host agent listens on `WRENN_HOST_LISTEN_ADDR` (default `:50051`).
### Rootfs images
envd must be baked into every rootfs image. After building:
```bash
make build-envd
bash scripts/update-debug-rootfs.sh /var/lib/wrenn/images/minimal.ext4
```
## Development
```bash
make dev # Start PostgreSQL (Docker), run migrations, start control plane
make dev-agent # Start host agent (separate terminal, sudo)
make check # fmt + vet + lint + test
```
See `CLAUDE.md` for full architecture documentation.
@ -1 +0,0 @@
0.1.2
@ -1 +0,0 @@
0.1.5
@ -1,15 +1,191 @@
package main
import (
"context"
"log/slog"
"net/http"
"os"
"os/signal"
"strings"
"syscall"
"time"
"github.com/jackc/pgx/v5/pgxpool"
"github.com/redis/go-redis/v9"
"git.omukk.dev/wrenn/wrenn/internal/api"
"git.omukk.dev/wrenn/wrenn/internal/audit"
"git.omukk.dev/wrenn/wrenn/internal/auth"
"git.omukk.dev/wrenn/wrenn/internal/auth/oauth"
"git.omukk.dev/wrenn/wrenn/internal/channels"
"git.omukk.dev/wrenn/wrenn/internal/config"
"git.omukk.dev/wrenn/wrenn/internal/db"
"git.omukk.dev/wrenn/wrenn/internal/lifecycle"
"git.omukk.dev/wrenn/wrenn/internal/scheduler"
)
func main() {
slog.SetDefault(slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{
Level: slog.LevelDebug,
})))
cfg := config.Load()
if len(cfg.JWTSecret) < 32 {
slog.Error("JWT_SECRET must be at least 32 characters")
os.Exit(1)
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Database connection pool.
pool, err := pgxpool.New(ctx, cfg.DatabaseURL)
if err != nil {
slog.Error("failed to connect to database", "error", err)
os.Exit(1)
}
defer pool.Close()
if err := pool.Ping(ctx); err != nil {
slog.Error("failed to ping database", "error", err)
os.Exit(1)
}
slog.Info("connected to database")
queries := db.New(pool)
// Redis client.
redisOpts, err := redis.ParseURL(cfg.RedisURL)
if err != nil {
slog.Error("failed to parse REDIS_URL", "error", err)
os.Exit(1)
}
rdb := redis.NewClient(redisOpts)
defer rdb.Close()
if err := rdb.Ping(ctx).Err(); err != nil {
slog.Error("failed to ping redis", "error", err)
os.Exit(1)
}
slog.Info("connected to redis")
// mTLS is mandatory — parse internal CA for CP↔agent communication.
if cfg.CACert == "" || cfg.CAKey == "" {
slog.Error("WRENN_CA_CERT and WRENN_CA_KEY are required — mTLS is mandatory for CP↔agent communication")
os.Exit(1)
}
ca, err := auth.ParseCA(cfg.CACert, cfg.CAKey)
if err != nil {
slog.Error("failed to parse mTLS CA from environment", "error", err)
os.Exit(1)
}
slog.Info("mTLS enabled: CA loaded")
// Host client pool — manages Connect RPC clients to host agents.
cpCertStore, err := auth.NewCPCertStore(ca)
if err != nil {
slog.Error("failed to issue CP client certificate", "error", err)
os.Exit(1)
}
// Renew the CP client certificate periodically so it never expires
// while the control plane is running (TTL = 24h, renewal = every 12h).
go func() {
ticker := time.NewTicker(auth.CPCertRenewInterval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
if err := cpCertStore.Refresh(); err != nil {
slog.Error("failed to renew CP client certificate", "error", err)
} else {
slog.Info("CP client certificate renewed")
}
}
}
}()
hostPool := lifecycle.NewHostClientPoolTLS(auth.CPClientTLSConfig(ca, cpCertStore))
slog.Info("host client pool: mTLS enabled")
// Scheduler — picks a host for each new sandbox (least-loaded, bottleneck-first).
hostScheduler := scheduler.NewLeastLoadedScheduler(queries)
// OAuth provider registry.
oauthRegistry := oauth.NewRegistry()
if cfg.OAuthGitHubClientID != "" && cfg.OAuthGitHubClientSecret != "" {
if cfg.CPPublicURL == "" {
slog.Error("CP_PUBLIC_URL must be set when OAuth providers are configured")
os.Exit(1)
}
callbackURL := strings.TrimRight(cfg.CPPublicURL, "/") + "/auth/oauth/github/callback"
ghProvider := oauth.NewGitHubProvider(cfg.OAuthGitHubClientID, cfg.OAuthGitHubClientSecret, callbackURL)
oauthRegistry.Register(ghProvider)
slog.Info("registered OAuth provider", "provider", "github")
}
// Channels: publisher, service, dispatcher.
if len(cfg.EncryptionKeyHex) != 64 {
slog.Error("WRENN_ENCRYPTION_KEY must be a hex-encoded 32-byte key (64 hex chars)")
os.Exit(1)
}
channelPub := channels.NewPublisher(rdb)
channelSvc := &channels.Service{DB: queries, EncKey: cfg.EncryptionKey}
channelDispatcher := channels.NewDispatcher(rdb, queries, cfg.EncryptionKey)
// Shared audit logger with event publishing.
al := audit.NewWithPublisher(queries, channelPub)
// API server.
srv := api.New(queries, hostPool, hostScheduler, pool, rdb, []byte(cfg.JWTSecret), oauthRegistry, cfg.OAuthRedirectURL, ca, al, channelSvc)
// Start template build workers (2 concurrent).
stopBuildWorkers := srv.BuildSvc.StartWorkers(ctx, 2)
defer stopBuildWorkers()
// Start channel event dispatcher.
channelDispatcher.Start(ctx)
// Start host monitor (passive + active reconciliation every 30s).
monitor := api.NewHostMonitor(queries, hostPool, al, 30*time.Second)
monitor.Start(ctx)
// Start metrics sampler (records per-team sandbox stats every 10s).
sampler := api.NewMetricsSampler(queries, 10*time.Second)
sampler.Start(ctx)
// Wrap the API handler with the sandbox proxy so that requests with
// {port}-{sandbox_id}.{domain} Host headers are routed to the sandbox's
// host agent. All other requests pass through to the normal API router.
proxyWrapper := api.NewSandboxProxyWrapper(srv.Handler(), queries, hostPool)
httpServer := &http.Server{
Addr: cfg.ListenAddr,
Handler: proxyWrapper,
}
// Graceful shutdown on signal.
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
go func() {
sig := <-sigCh
slog.Info("received signal, shutting down", "signal", sig)
cancel()
shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 30*time.Second)
defer shutdownCancel()
if err := httpServer.Shutdown(shutdownCtx); err != nil {
slog.Error("http server shutdown error", "error", err)
}
}()
slog.Info("control plane starting", "addr", cfg.ListenAddr)
if err := httpServer.ListenAndServe(); err != nil && err != http.ErrServerClosed {
slog.Error("http server error", "error", err)
os.Exit(1)
}
slog.Info("control plane stopped")
}
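The `scheduler.NewLeastLoadedScheduler` wiring above pairs with the bottleneck-first strategy from the commit history: score each admissible host by the free fraction of its tightest resource, then pick the highest score. A self-contained sketch (types and field names are illustrative, not the real `scheduler` package API):

```go
package main

import "fmt"

// Host is an illustrative stand-in for a host row with capacity and usage.
type Host struct {
	ID                      string
	CPUTotal, CPUUsed       float64
	MemTotalMB, MemUsedMB   float64
	DiskTotalMB, DiskUsedMB float64
}

// headroom returns the free fraction of the host's most constrained resource.
func headroom(h Host) float64 {
	min := 1.0
	for _, r := range [][2]float64{
		{h.CPUUsed, h.CPUTotal},
		{h.MemUsedMB, h.MemTotalMB},
		{h.DiskUsedMB, h.DiskTotalMB},
	} {
		if free := 1 - r[0]/r[1]; free < min {
			min = free
		}
	}
	return min
}

// pickHost returns the host with the most headroom at its bottleneck,
// or "" if no host can fit the requested resources (admission control).
func pickHost(hosts []Host, wantCPU, wantMemMB, wantDiskMB float64) string {
	best, bestScore := "", -1.0
	for _, h := range hosts {
		if h.CPUTotal-h.CPUUsed < wantCPU ||
			h.MemTotalMB-h.MemUsedMB < wantMemMB ||
			h.DiskTotalMB-h.DiskUsedMB < wantDiskMB {
			continue // host cannot fit the sandbox at all
		}
		if s := headroom(h); s > bestScore {
			best, bestScore = h.ID, s
		}
	}
	return best
}

func main() {
	hosts := []Host{
		{"a", 16, 8, 65536, 60000, 500000, 100000},  // memory is the bottleneck
		{"b", 16, 12, 65536, 20000, 500000, 100000}, // CPU is the bottleneck
	}
	fmt.Println(pickHost(hosts, 2, 4096, 10240)) // prints "b"
}
```

Host "a" has more free CPU but is nearly out of memory, so its bottleneck score is worse than host "b", whose tightest resource (CPU) still has 25% headroom.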
@ -1,40 +1,28 @@
package main
import (
"bufio"
"context"
"crypto/tls"
"flag"
"fmt"
"log/slog"
"net/http"
"os"
"os/signal"
"path/filepath"
"strconv"
"strings"
"sync"
"syscall"
"time"
"github.com/joho/godotenv"
"git.omukk.dev/wrenn/wrenn/internal/auth"
"git.omukk.dev/wrenn/wrenn/internal/devicemapper"
"git.omukk.dev/wrenn/wrenn/internal/hostagent"
"git.omukk.dev/wrenn/wrenn/internal/layout"
"git.omukk.dev/wrenn/wrenn/internal/network"
"git.omukk.dev/wrenn/wrenn/internal/sandbox"
"git.omukk.dev/wrenn/wrenn/pkg/auth"
"git.omukk.dev/wrenn/wrenn/pkg/logging"
"git.omukk.dev/wrenn/wrenn/proto/hostagent/gen/hostagentv1connect"
)
// Set via -ldflags at build time.
var (
version = "dev"
commit = "unknown"
)
func main() {
// Best-effort load — missing .env file is fine.
_ = godotenv.Load()
@ -43,24 +31,18 @@ func main() {
advertiseAddr := flag.String("address", "", "Externally-reachable address (ip:port) for this host agent")
flag.Parse()
rootDir := envOrDefault("WRENN_DIR", "/var/lib/wrenn")
cleanupLog := logging.Setup(filepath.Join(rootDir, "logs"), "host-agent")
defer cleanupLog()
if err := checkPrivileges(); err != nil {
slog.Error("insufficient privileges", "error", err)
os.Exit(1)
}
// Enable IP forwarding (required for NAT). The write may fail if running
// as non-root without DAC_OVERRIDE on this path — that's OK if the systemd
// unit's ExecStartPre already set it. We verify the value regardless.
if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
slog.Warn("failed to enable ip_forward (may have been set by systemd unit)", "error", err)
}
if b, err := os.ReadFile("/proc/sys/net/ipv4/ip_forward"); err != nil || strings.TrimSpace(string(b)) != "1" {
slog.Error("ip_forward is not enabled — sandbox networking will be broken", "error", err)
os.Exit(1)
}
// Clean up stale resources from a previous crash.
@ -68,6 +50,7 @@ func main() {
network.CleanupStaleNamespaces()
listenAddr := envOrDefault("WRENN_HOST_LISTEN_ADDR", ":50051")
cpURL := os.Getenv("WRENN_CP_URL")
credsFile := filepath.Join(rootDir, "host-credentials.json")
@ -99,31 +82,9 @@ func main() {
os.Exit(1)
}
// Resolve latest kernel version.
kernelPath, kernelVersion, err := layout.LatestKernel(rootDir)
if err != nil {
slog.Error("failed to find kernel", "error", err)
os.Exit(1)
}
slog.Info("resolved kernel", "version", kernelVersion, "path", kernelPath)
// Detect firecracker version.
fcBin := envOrDefault("WRENN_FIRECRACKER_BIN", "/usr/local/bin/firecracker")
fcVersion, err := sandbox.DetectFirecrackerVersion(fcBin)
if err != nil {
slog.Error("failed to detect firecracker version", "error", err)
os.Exit(1)
}
slog.Info("resolved firecracker", "version", fcVersion, "path", fcBin)
cfg := sandbox.Config{
WrennDir: rootDir,
DefaultRootfsSizeMB: defaultRootfsSizeMB,
KernelPath: kernelPath,
KernelVersion: kernelVersion,
FirecrackerBin: fcBin,
FirecrackerVersion: fcVersion,
AgentVersion: version,
}
mgr := sandbox.New(cfg)
@ -148,18 +109,7 @@ func main() {
slog.Info("host registered", "host_id", creds.HostID)
// httpServer is declared here so the shutdown func can reference it.
// ReadTimeout/WriteTimeout are intentionally omitted — they would kill
// long-lived Connect RPC streams and WebSocket proxy connections.
httpServer := &http.Server{
Addr: listenAddr,
ReadHeaderTimeout: 10 * time.Second,
IdleTimeout: 620 * time.Second, // > typical LB upstream timeout (600s)
// Disable HTTP/2: empty non-nil map prevents Go from registering
// the h2 ALPN token. Connect RPC works over HTTP/1.1; HTTP/2
// multiplexing causes HOL blocking when a slow sandbox RPC stalls
// the shared connection.
TLSNextProto: make(map[string]func(*http.Server, *tls.Conn, http.Handler)),
}
// mTLS is mandatory — refuse to start without a valid certificate.
var certStore hostagent.CertStore
@ -191,7 +141,6 @@ func main() {
shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 30*time.Second)
defer shutdownCancel()
mgr.Shutdown(shutdownCtx)
sandbox.ShrinkMinimalImage(rootDir)
if err := httpServer.Shutdown(shutdownCtx); err != nil {
slog.Error("http server shutdown error", "error", err)
}
@ -204,7 +153,6 @@ func main() {
path, handler := hostagentv1connect.NewHostAgentServiceHandler(srv)
proxyHandler := hostagent.NewProxyHandler(mgr)
mgr.SetOnDestroy(proxyHandler.EvictProxy)
mux := http.NewServeMux()
mux.Handle(path, handler)
@ -245,7 +193,7 @@ func main() {
doShutdown("signal: " + sig.String())
}()
slog.Info("host agent starting", "addr", listenAddr, "host_id", creds.HostID, "version", version, "commit", commit)
// TLSConfig is always set (mTLS is mandatory). Create the TLS listener
// manually because ListenAndServeTLS requires on-disk cert/key paths
// but we use GetCertificate callback for hot-swap support.
@ -268,63 +216,3 @@ func envOrDefault(key, def string) string {
}
return def
}
// checkPrivileges verifies the process has the required Linux capabilities.
// Always reads CapEff — even for root — because a root process inside a
// restricted container (e.g. docker --cap-drop=all) may not have all caps.
func checkPrivileges() error {
capEff, err := readEffectiveCaps()
if err != nil {
return fmt.Errorf("read capabilities: %w", err)
}
// All capabilities required by the host agent at runtime.
required := []struct {
bit uint
name string
}{
{1, "CAP_DAC_OVERRIDE"}, // /dev/loop*, /dev/mapper/*, /dev/net/tun
{5, "CAP_KILL"}, // SIGTERM/SIGKILL to Firecracker processes
{12, "CAP_NET_ADMIN"}, // netlink, iptables, routing, TAP/veth
{13, "CAP_NET_RAW"}, // raw sockets (iptables)
{19, "CAP_SYS_PTRACE"}, // reading /proc/self/ns/net (netns.Get)
{21, "CAP_SYS_ADMIN"}, // netns, mount ns, losetup, dmsetup
{27, "CAP_MKNOD"}, // device-mapper node creation
}
var missing []string
for _, cap := range required {
if capEff&(1<<cap.bit) == 0 {
missing = append(missing, cap.name)
}
}
if len(missing) > 0 {
return fmt.Errorf("missing capabilities: %s — run as root or apply setcap to the binary",
strings.Join(missing, ", "))
}
return nil
}
// readEffectiveCaps parses the CapEff bitmask from /proc/self/status.
func readEffectiveCaps() (uint64, error) {
f, err := os.Open("/proc/self/status")
if err != nil {
return 0, err
}
defer f.Close()
scanner := bufio.NewScanner(f)
for scanner.Scan() {
line := scanner.Text()
if hexStr, ok := strings.CutPrefix(line, "CapEff:"); ok {
return strconv.ParseUint(strings.TrimSpace(hexStr), 16, 64)
}
}
if err := scanner.Err(); err != nil {
return 0, fmt.Errorf("read /proc/self/status: %w", err)
}
return 0, fmt.Errorf("CapEff not found in /proc/self/status")
}
@ -171,7 +171,7 @@ CREATE TABLE audit_logs (
metadata JSONB NOT NULL DEFAULT '{}',
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX idx_audit_logs_team_time ON audit_logs(team_id, created_at DESC, id DESC);
CREATE INDEX idx_audit_logs_team_resource ON audit_logs(team_id, resource_type, created_at DESC);
-- sandbox_metrics_snapshots
@ -9,10 +9,4 @@ VALUES ('00000000-0000-0000-0000-000000000000', 'Platform', 'platform')
ON CONFLICT (id) DO NOTHING;
-- +goose Down
-- Delete dependent rows that reference the platform team via foreign keys.
-- Order matters: children before parent.
DELETE FROM sandboxes WHERE team_id = '00000000-0000-0000-0000-000000000000';
DELETE FROM team_api_keys WHERE team_id = '00000000-0000-0000-0000-000000000000';
DELETE FROM users_teams WHERE team_id = '00000000-0000-0000-0000-000000000000';
DELETE FROM hosts WHERE team_id = '00000000-0000-0000-0000-000000000000';
DELETE FROM teams WHERE id = '00000000-0000-0000-0000-000000000000';
@ -1,9 +0,0 @@
-- +goose Up
ALTER TABLE sandboxes ADD COLUMN metadata JSONB NOT NULL DEFAULT '{}';
ALTER TABLE templates ADD COLUMN metadata JSONB NOT NULL DEFAULT '{}';
ALTER TABLE template_builds ADD COLUMN metadata JSONB NOT NULL DEFAULT '{}';
-- +goose Down
ALTER TABLE sandboxes DROP COLUMN metadata;
ALTER TABLE templates DROP COLUMN metadata;
ALTER TABLE template_builds DROP COLUMN metadata;
@ -1,15 +0,0 @@
-- +goose Up
ALTER TABLE users ADD COLUMN status TEXT NOT NULL DEFAULT 'active';
-- Backfill from existing columns.
UPDATE users SET status = 'deleted' WHERE deleted_at IS NOT NULL;
UPDATE users SET status = 'disabled' WHERE is_active = false AND deleted_at IS NULL;
ALTER TABLE users DROP COLUMN is_active;
-- +goose Down
ALTER TABLE users ADD COLUMN is_active BOOLEAN NOT NULL DEFAULT TRUE;
UPDATE users SET is_active = false WHERE status IN ('inactive', 'disabled', 'deleted');
ALTER TABLE users DROP COLUMN status;
@ -1,72 +0,0 @@
-- +goose Up
-- users_teams: remove membership when user is deleted
ALTER TABLE users_teams DROP CONSTRAINT users_teams_user_id_fkey;
ALTER TABLE users_teams ADD CONSTRAINT users_teams_user_id_fkey
FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
-- oauth_providers: remove auth links when user is deleted
ALTER TABLE oauth_providers DROP CONSTRAINT oauth_providers_user_id_fkey;
ALTER TABLE oauth_providers ADD CONSTRAINT oauth_providers_user_id_fkey
FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
-- admin_permissions: remove permissions when user is deleted
ALTER TABLE admin_permissions DROP CONSTRAINT admin_permissions_user_id_fkey;
ALTER TABLE admin_permissions ADD CONSTRAINT admin_permissions_user_id_fkey
FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
-- team_api_keys.created_by: make nullable, SET NULL on user delete
ALTER TABLE team_api_keys ALTER COLUMN created_by DROP NOT NULL;
ALTER TABLE team_api_keys DROP CONSTRAINT team_api_keys_created_by_fkey;
ALTER TABLE team_api_keys ADD CONSTRAINT team_api_keys_created_by_fkey
FOREIGN KEY (created_by) REFERENCES users(id) ON DELETE SET NULL;
-- hosts.created_by: make nullable, SET NULL on user delete
ALTER TABLE hosts ALTER COLUMN created_by DROP NOT NULL;
ALTER TABLE hosts DROP CONSTRAINT hosts_created_by_fkey;
ALTER TABLE hosts ADD CONSTRAINT hosts_created_by_fkey
FOREIGN KEY (created_by) REFERENCES users(id) ON DELETE SET NULL;
-- host_tokens.created_by: make nullable, SET NULL on user delete
ALTER TABLE host_tokens ALTER COLUMN created_by DROP NOT NULL;
ALTER TABLE host_tokens DROP CONSTRAINT host_tokens_created_by_fkey;
ALTER TABLE host_tokens ADD CONSTRAINT host_tokens_created_by_fkey
FOREIGN KEY (created_by) REFERENCES users(id) ON DELETE SET NULL;
-- +goose Down
-- Revert host_tokens.created_by
ALTER TABLE host_tokens DROP CONSTRAINT host_tokens_created_by_fkey;
UPDATE host_tokens SET created_by = '00000000-0000-0000-0000-000000000000' WHERE created_by IS NULL;
ALTER TABLE host_tokens ALTER COLUMN created_by SET NOT NULL;
ALTER TABLE host_tokens ADD CONSTRAINT host_tokens_created_by_fkey
FOREIGN KEY (created_by) REFERENCES users(id);
-- Revert hosts.created_by
ALTER TABLE hosts DROP CONSTRAINT hosts_created_by_fkey;
UPDATE hosts SET created_by = '00000000-0000-0000-0000-000000000000' WHERE created_by IS NULL;
ALTER TABLE hosts ALTER COLUMN created_by SET NOT NULL;
ALTER TABLE hosts ADD CONSTRAINT hosts_created_by_fkey
FOREIGN KEY (created_by) REFERENCES users(id);
-- Revert team_api_keys.created_by
ALTER TABLE team_api_keys DROP CONSTRAINT team_api_keys_created_by_fkey;
UPDATE team_api_keys SET created_by = '00000000-0000-0000-0000-000000000000' WHERE created_by IS NULL;
ALTER TABLE team_api_keys ALTER COLUMN created_by SET NOT NULL;
ALTER TABLE team_api_keys ADD CONSTRAINT team_api_keys_created_by_fkey
FOREIGN KEY (created_by) REFERENCES users(id);
-- Revert admin_permissions
ALTER TABLE admin_permissions DROP CONSTRAINT admin_permissions_user_id_fkey;
ALTER TABLE admin_permissions ADD CONSTRAINT admin_permissions_user_id_fkey
FOREIGN KEY (user_id) REFERENCES users(id);
-- Revert oauth_providers
ALTER TABLE oauth_providers DROP CONSTRAINT oauth_providers_user_id_fkey;
ALTER TABLE oauth_providers ADD CONSTRAINT oauth_providers_user_id_fkey
FOREIGN KEY (user_id) REFERENCES users(id);
-- Revert users_teams
ALTER TABLE users_teams DROP CONSTRAINT users_teams_user_id_fkey;
ALTER TABLE users_teams ADD CONSTRAINT users_teams_user_id_fkey
FOREIGN KEY (user_id) REFERENCES users(id);
@ -1,11 +0,0 @@
-- +goose Up
CREATE TABLE daily_usage (
team_id UUID NOT NULL,
day DATE NOT NULL,
cpu_minutes NUMERIC(18, 4) NOT NULL DEFAULT 0,
ram_mb_minutes NUMERIC(18, 4) NOT NULL DEFAULT 0,
PRIMARY KEY (team_id, day)
);
-- +goose Down
DROP TABLE daily_usage;
@ -1,10 +0,0 @@
// Package migrations embeds the SQL migration files so that external modules
// (such as the cloud edition) can access them programmatically.
package migrations
import "embed"
// FS contains all SQL migration files.
//
//go:embed *.sql
var FS embed.FS
@ -13,7 +13,7 @@ SELECT * FROM team_api_keys WHERE team_id = $1 ORDER BY created_at DESC;
SELECT k.id, k.team_id, k.name, k.key_hash, k.key_prefix, k.created_by, k.created_at, k.last_used,
u.email AS creator_email
FROM team_api_keys k
LEFT JOIN users u ON u.id = k.created_by
WHERE k.team_id = $1
ORDER BY k.created_at DESC;
@ -22,9 +22,3 @@ DELETE FROM team_api_keys WHERE id = $1 AND team_id = $2;
-- name: UpdateAPIKeyLastUsed :exec
UPDATE team_api_keys SET last_used = NOW() WHERE id = $1;
-- name: DeleteAPIKeysByTeam :exec
DELETE FROM team_api_keys WHERE team_id = $1;
-- name: DeleteAPIKeysByCreator :exec
DELETE FROM team_api_keys WHERE created_by = $1;
@ -2,15 +2,6 @@
INSERT INTO audit_logs (id, team_id, actor_type, actor_id, actor_name, resource_type, resource_id, action, scope, status, metadata)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11);
-- name: AnonymizeAuditLogsByUserID :exec
UPDATE audit_logs
SET actor_name = CASE WHEN actor_id = $1 THEN 'deleted-user' ELSE actor_name END,
actor_id = CASE WHEN actor_id = $1 THEN NULL ELSE actor_id END,
resource_id = CASE WHEN resource_type = 'member' AND resource_id = $1 THEN NULL ELSE resource_id END,
metadata = CASE WHEN resource_type = 'member' AND resource_id = $1 AND metadata ? 'email' THEN metadata - 'email' ELSE metadata END
WHERE actor_id = $1
OR (resource_type = 'member' AND resource_id = $1);
-- name: ListAuditLogs :many
SELECT * FROM audit_logs
WHERE team_id = $1
@ -22,9 +22,6 @@ RETURNING *;
-- name: DeleteChannelByTeam :exec
DELETE FROM channels WHERE id = $1 AND team_id = $2;
-- name: DeleteAllChannelsByTeam :exec
DELETE FROM channels WHERE team_id = $1;
-- name: ListChannelsForEvent :many
SELECT * FROM channels
WHERE team_id = $1
@ -51,13 +51,6 @@ WHERE sandbox_id = $1 AND tier = $2;
DELETE FROM sandbox_metric_points
WHERE ts < EXTRACT(EPOCH FROM NOW() - INTERVAL '30 days')::BIGINT;
-- name: DeleteMetricsSnapshotsByTeam :exec
DELETE FROM sandbox_metrics_snapshots WHERE team_id = $1;
-- name: DeleteMetricPointsByTeam :exec
DELETE FROM sandbox_metric_points
WHERE sandbox_id IN (SELECT id FROM sandboxes WHERE team_id = $1);
-- name: SampleSandboxMetrics :many
-- Aggregates per-team resource usage from the live sandboxes table.
-- Groups by all teams that have any sandbox row (including stopped) so that
@ -73,35 +66,3 @@ SELECT
+ COALESCE(SUM(CEIL(memory_mb::NUMERIC / 2)) FILTER (WHERE status = 'paused'), 0))::INTEGER AS memory_mb_reserved
FROM sandboxes
GROUP BY team_id;
-- name: GetTeamsWithSnapshots :many
SELECT DISTINCT team_id
FROM sandbox_metrics_snapshots
WHERE sampled_at > NOW() - INTERVAL '93 days';
-- name: ComputeDailyUsageForDay :one
SELECT
COALESCE(SUM(vcpus_reserved * 10.0 / 60.0), 0)::NUMERIC(18,4) AS cpu_minutes,
COALESCE(SUM(memory_mb_reserved * 10.0 / 60.0), 0)::NUMERIC(18,4) AS ram_mb_minutes
FROM sandbox_metrics_snapshots
WHERE team_id = $1
AND sampled_at >= $2
AND sampled_at < $3;
-- name: UpsertDailyUsage :exec
INSERT INTO daily_usage (team_id, day, cpu_minutes, ram_mb_minutes)
VALUES ($1, $2, $3, $4)
ON CONFLICT (team_id, day) DO UPDATE
SET cpu_minutes = EXCLUDED.cpu_minutes,
ram_mb_minutes = EXCLUDED.ram_mb_minutes;
-- name: GetDailyUsage :many
SELECT day, cpu_minutes, ram_mb_minutes
FROM daily_usage
WHERE team_id = $1
AND day >= $2
AND day <= $3
ORDER BY day ASC;
-- name: DeleteDailyUsageByTeam :exec
DELETE FROM daily_usage WHERE team_id = $1;
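The `* 10.0 / 60.0` factor in `ComputeDailyUsageForDay` encodes the sampler cadence: one snapshot per team every 10 seconds, so each snapshot accounts for 10/60 of a minute of reservation. A sketch of the same arithmetic (function and constant names are illustrative):

```go
package main

import "fmt"

// snapshotIntervalSec matches the metrics sampler cadence (one snapshot
// every 10 seconds), so each snapshot contributes 10/60 of a minute.
const snapshotIntervalSec = 10.0

// cpuMinutes converts per-snapshot vCPU reservations into vCPU-minutes,
// mirroring the query's SUM(vcpus_reserved * 10.0 / 60.0).
func cpuMinutes(vcpusPerSnapshot []float64) float64 {
	total := 0.0
	for _, v := range vcpusPerSnapshot {
		total += v * snapshotIntervalSec / 60.0
	}
	return total
}

func main() {
	// 360 snapshots = 1 hour of sampling; 4 vCPUs reserved throughout
	// gives 4 vCPU x 60 minutes = 240 vCPU-minutes.
	samples := make([]float64, 360)
	for i := range samples {
		samples[i] = 4
	}
	fmt.Printf("%.0f\n", cpuMinutes(samples))
}
```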
@ -5,9 +5,3 @@ VALUES ($1, $2, $3, $4);
-- name: GetOAuthProvider :one
SELECT * FROM oauth_providers
WHERE provider = $1 AND provider_id = $2;
-- name: GetOAuthProvidersByUserID :many
SELECT * FROM oauth_providers WHERE user_id = $1;
-- name: DeleteOAuthProvider :exec
DELETE FROM oauth_providers WHERE user_id = $1 AND provider = $2;
@ -1,6 +1,6 @@
-- name: InsertSandbox :one
INSERT INTO sandboxes (id, team_id, host_id, template, status, vcpus, memory_mb, timeout_sec, disk_size_mb, template_id, template_team_id, metadata)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12)
RETURNING *;
-- name: GetSandbox :one
@ -62,7 +62,7 @@ WHERE id = ANY($1::uuid[]);
-- name: ListActiveSandboxesByTeam :many
SELECT * FROM sandboxes
WHERE team_id = $1 AND status IN ('running', 'paused', 'starting', 'hibernated')
ORDER BY created_at DESC;
-- name: MarkSandboxesMissingByHost :exec
@ -74,12 +74,6 @@ SET status = 'missing',
last_updated = NOW()
WHERE host_id = $1 AND status IN ('running', 'starting', 'pending');
-- name: UpdateSandboxMetadata :exec
UPDATE sandboxes
SET metadata = $2,
last_updated = NOW()
WHERE id = $1;
-- name: BulkRestoreRunning :exec
-- Called by the reconciler when a host comes back online and its sandboxes are
-- confirmed alive. Restores only sandboxes that are in 'missing' state.
@ -74,26 +74,6 @@ WHERE t.id != '00000000-0000-0000-0000-000000000000'
ORDER BY t.deleted_at ASC NULLS FIRST, t.created_at DESC
LIMIT $1 OFFSET $2;
-- name: ListSoleOwnedTeams :many
-- Returns teams where the user is the owner and no other members exist.
SELECT t.id FROM teams t
JOIN users_teams ut ON ut.team_id = t.id
WHERE ut.user_id = $1
AND ut.role = 'owner'
AND t.deleted_at IS NULL
AND NOT EXISTS (
SELECT 1 FROM users_teams ut2
WHERE ut2.team_id = t.id AND ut2.user_id <> $1
);
-- name: GetOwnedTeamIDs :many
-- Returns team IDs where the given user has the 'owner' role.
SELECT t.id FROM teams t
JOIN users_teams ut ON ut.team_id = t.id
WHERE ut.user_id = $1
AND ut.role = 'owner'
AND t.deleted_at IS NULL;
-- name: CountTeamsAdmin :one
SELECT COUNT(*)::int AS total
FROM teams
@ -34,5 +34,5 @@ WHERE id = $1;
-- name: UpdateBuildDefaults :exec
UPDATE template_builds
SET default_user = $2, default_env = $3, metadata = $4
WHERE id = $1;
@ -1,6 +1,6 @@
-- name: InsertTemplate :one
INSERT INTO templates (id, name, type, vcpus, memory_mb, size_bytes, team_id, default_user, default_env, metadata)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
RETURNING *;
-- name: GetTemplate :one
@ -4,21 +4,16 @@ VALUES ($1, $2, $3, $4)
RETURNING *;
-- name: GetUserByEmail :one
SELECT * FROM users WHERE email = $1 AND status != 'deleted';
-- name: GetUserByID :one
SELECT * FROM users WHERE id = $1 AND status != 'deleted';
-- name: InsertUserOAuth :one
INSERT INTO users (id, email, name)
VALUES ($1, $2, $3)
RETURNING *;
-- name: InsertUserInactive :one
INSERT INTO users (id, email, password_hash, name, status)
VALUES ($1, $2, $3, $4, 'inactive')
RETURNING *;
-- name: SetUserAdmin :exec -- name: SetUserAdmin :exec
UPDATE users SET is_admin = $2, updated_at = NOW() WHERE id = $1; UPDATE users SET is_admin = $2, updated_at = NOW() WHERE id = $1;
@ -43,9 +38,6 @@ SELECT EXISTS(
-- name: CountUsers :one -- name: CountUsers :one
SELECT COUNT(*) FROM users; SELECT COUNT(*) FROM users;
-- name: CountActiveUsers :one
SELECT COUNT(*) FROM users WHERE status = 'active';
-- name: SearchUsersByEmailPrefix :many -- name: SearchUsersByEmailPrefix :many
SELECT id, email FROM users WHERE email LIKE $1 || '%' ORDER BY email LIMIT 10; SELECT id, email FROM users WHERE email LIKE $1 || '%' ORDER BY email LIMIT 10;
@ -58,41 +50,19 @@ SELECT
u.email, u.email,
u.name, u.name,
u.is_admin, u.is_admin,
u.status, u.is_active,
u.created_at, u.created_at,
(SELECT COUNT(*) FROM users_teams ut WHERE ut.user_id = u.id)::int AS teams_joined, (SELECT COUNT(*) FROM users_teams ut WHERE ut.user_id = u.id)::int AS teams_joined,
(SELECT COUNT(*) FROM users_teams ut WHERE ut.user_id = u.id AND ut.role = 'owner')::int AS teams_owned (SELECT COUNT(*) FROM users_teams ut WHERE ut.user_id = u.id AND ut.role = 'owner')::int AS teams_owned
FROM users u FROM users u
WHERE u.status != 'deleted' WHERE u.deleted_at IS NULL
ORDER BY u.created_at DESC ORDER BY u.created_at DESC
LIMIT $1 OFFSET $2; LIMIT $1 OFFSET $2;
-- name: CountUsersAdmin :one -- name: CountUsersAdmin :one
SELECT COUNT(*)::int AS total SELECT COUNT(*)::int AS total
FROM users FROM users
WHERE status != 'deleted'; WHERE deleted_at IS NULL;
-- name: SetUserStatus :exec -- name: SetUserActive :exec
UPDATE users SET status = $2, updated_at = NOW() WHERE id = $1; UPDATE users SET is_active = $2, updated_at = NOW() WHERE id = $1;
-- name: UpdateUserPassword :exec
UPDATE users SET password_hash = $2, updated_at = NOW() WHERE id = $1;
-- name: SoftDeleteUser :exec
UPDATE users SET deleted_at = NOW(), status = 'deleted', updated_at = NOW() WHERE id = $1;
-- name: CountUserOwnedTeamsWithOtherMembers :one
SELECT COUNT(DISTINCT ut.team_id)::int
FROM users_teams ut
WHERE ut.user_id = $1
AND ut.role = 'owner'
AND EXISTS (
SELECT 1 FROM users_teams ut2
WHERE ut2.team_id = ut.team_id AND ut2.user_id <> $1
);
-- name: ListExpiredSoftDeletedUsers :many
SELECT id, email FROM users WHERE deleted_at IS NOT NULL AND deleted_at < NOW() - INTERVAL '15 days';
-- name: HardDeleteUser :exec
DELETE FROM users WHERE id = $1;

@@ -1,19 +0,0 @@
/var/lib/wrenn/logs/control-plane.log
/var/lib/wrenn/logs/host-agent.log
{
daily
rotate 3
missingok
notifempty
dateext
dateformat -%Y-%m-%d
compress
delaycompress
sharedscripts
postrotate
# Signal the processes to reopen their log files.
# Use SIGHUP — both binaries handle it gracefully.
pkill -HUP -f wrenn-cp || true
pkill -HUP -f wrenn-agent || true
endscript
}

@@ -1,2 +0,0 @@
[target.x86_64-unknown-linux-musl]
linker = "musl-gcc"

envd-rs/Cargo.lock (generated, 2622 lines changed)

File diff suppressed because it is too large.

@@ -1,83 +0,0 @@
[package]
name = "envd"
version = "0.2.0"
edition = "2024"
rust-version = "1.88"
[dependencies]
# Async runtime
tokio = { version = "1", features = ["full"] }
# HTTP framework
axum = { version = "0.8", features = ["multipart"] }
tower = { version = "0.5", features = ["util"] }
tower-http = { version = "0.6", features = ["cors", "fs"] }
tower-service = "0.3"
# RPC (Connect protocol — serves Connect + gRPC + gRPC-Web on same port)
connectrpc = { version = "0.3", features = ["axum"] }
buffa-types = { path = "buffa-types-shim" }
# CLI
clap = { version = "4", features = ["derive"] }
# Serialization
serde = { version = "1", features = ["derive"] }
serde_json = "1"
# Logging
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["json", "env-filter"] }
# System metrics
sysinfo = "0.33"
# Unix syscalls
nix = { version = "0.30", features = ["fs", "process", "signal", "user", "term", "mount", "ioctl"] }
# Concurrent map
dashmap = "6"
# Crypto
sha2 = "0.10"
hmac = "0.12"
hex = "0.4"
base64 = "0.22"
# Secure memory
zeroize = { version = "1", features = ["derive"] }
# File watching
notify = "7"
# Compression
flate2 = "1"
# HTTP client (MMDS polling)
reqwest = { version = "0.12", default-features = false, features = ["json"] }
# Directory walking
walkdir = "2"
# Misc
libc = "0.2"
bytes = "1"
http = "1"
http-body-util = "0.1"
futures = "0.3"
tokio-util = { version = "0.7", features = ["io"] }
subtle = "2"
http-body = "1.0.1"
buffa = "0.3"
async-stream = "0.3.6"
mime_guess = "2"
[build-dependencies]
connectrpc-build = "0.3"
[profile.release]
strip = true
lto = true
opt-level = "z"
codegen-units = 1
panic = "abort"

@@ -1,141 +0,0 @@
# envd (Rust)
Wrenn guest agent daemon — runs as PID 1 inside Firecracker microVMs. Provides process management, filesystem operations, file transfer, port forwarding, and VM lifecycle control over Connect RPC and HTTP.
Rust rewrite of `envd/` (Go). Drop-in replacement — same wire protocol, same endpoints, same CLI flags.
## Prerequisites
- Rust 1.88+ (required by `connectrpc` 0.3.3)
- `protoc` (protobuf compiler, for proto codegen at build time)
- `musl-tools` (for static linking)
```bash
# Ubuntu/Debian
sudo apt install musl-tools protobuf-compiler
# Rust musl target
rustup target add x86_64-unknown-linux-musl
```
## Building
### Static binary (production — what goes into the rootfs)
```bash
cd envd-rs
ENVD_COMMIT=$(git rev-parse --short HEAD) \
cargo build --release --target x86_64-unknown-linux-musl
```
Output: `target/x86_64-unknown-linux-musl/release/envd`
Verify static linking:
```bash
file target/x86_64-unknown-linux-musl/release/envd
# should say: "statically linked"
ldd target/x86_64-unknown-linux-musl/release/envd
# should say: "not a dynamic executable"
```
### Debug binary (dev machine, dynamically linked)
```bash
cd envd-rs
cargo build
```
Run locally (outside a VM):
```bash
./target/debug/envd --isnotfc --port 49983
```
### Via Makefile (from repo root)
```bash
make build-envd # static musl release build
make build-envd-go # Go version (for comparison)
```
## CLI Flags
```
--port <PORT> Listen port [default: 49983]
--isnotfc Not running inside Firecracker (disables MMDS, cgroups)
--version Print version and exit
--commit Print git commit and exit
--cmd <CMD> Spawn a process at startup (e.g. --cmd "/bin/bash")
--cgroup-root <PATH> Cgroup v2 root [default: /sys/fs/cgroup]
```
## Endpoints
### HTTP
| Method | Path | Description |
|--------|---------------------|--------------------------------------|
| GET | `/health` | Health check, triggers post-restore |
| GET | `/metrics` | System metrics (CPU, memory, disk) |
| GET | `/envs` | Current environment variables |
| POST | `/init` | Host agent init (token, env, mounts) |
| POST | `/snapshot/prepare` | Quiesce before Firecracker snapshot |
| GET | `/files` | Download file (gzip, range support) |
| POST | `/files` | Upload file(s) via multipart |
### Connect RPC (same port)
| Service | RPCs |
|------------|-------------------------------------------------------------------------|
| Process | List, Start, Connect, Update, StreamInput, SendInput, SendSignal, CloseStdin |
| Filesystem | Stat, MakeDir, Move, ListDir, Remove, WatchDir, CreateWatcher, GetWatcherEvents, RemoveWatcher |
## Architecture
```
42 files, ~4200 LOC Rust
Binary: ~4 MB (stripped, LTO, musl static)
src/
├── main.rs # Entry point, CLI, server setup
├── state.rs # Shared AppState
├── config.rs # Constants
├── conntracker.rs # TCP connection tracking for snapshot/restore
├── execcontext.rs # Default user/workdir/env
├── logging.rs # tracing-subscriber (JSON or pretty)
├── util.rs # AtomicMax
├── auth/ # Token, signing, middleware
├── crypto/ # SHA-256, SHA-512, HMAC
├── host/ # MMDS polling, system metrics
├── http/ # Axum handlers (health, init, snapshot, files, encoding)
├── permissions/ # Path resolution, user lookup, chown
├── rpc/ # Connect RPC services
│ ├── pb.rs # Generated proto types
│ ├── process_*.rs # Process service + handler (PTY, pipe, broadcast)
│ ├── filesystem_*.rs # Filesystem service (stat, list, watch, mkdir, move, remove)
│ └── entry.rs # EntryInfo builder
├── port/ # Port subsystem
│ ├── conn.rs # /proc/net/tcp parser
│ ├── scanner.rs # Periodic TCP port scanner
│ ├── forwarder.rs # socat-based port forwarding
│ └── subsystem.rs # Lifecycle (start/stop/restart)
└── cgroups/ # Cgroup v2 manager (pty/user/socat groups)
```
## Updating the rootfs
After building the static binary, copy it into the rootfs:
```bash
bash scripts/update-debug-rootfs.sh [rootfs_path]
```
Or manually:
```bash
sudo mount -o loop /var/lib/wrenn/images/minimal.ext4 /mnt
sudo cp target/x86_64-unknown-linux-musl/release/envd /mnt/usr/bin/envd
sudo umount /mnt
```

@@ -1,12 +0,0 @@
[package]
name = "buffa-types"
version = "0.3.0"
edition = "2024"
publish = false
[dependencies]
buffa = "0.3"
serde = { version = "1", features = ["derive"] }
[build-dependencies]
connectrpc-build = "0.3"

@@ -1,9 +0,0 @@
fn main() {
connectrpc_build::Config::new()
.files(&["/usr/include/google/protobuf/timestamp.proto"])
.includes(&["/usr/include"])
.include_file("_types.rs")
.emit_register_fn(false)
.compile()
.unwrap();
}

@@ -1,6 +0,0 @@
#![allow(dead_code, non_camel_case_types, unused_imports, clippy::derivable_impls)]
use ::buffa;
use ::serde;
include!(concat!(env!("OUT_DIR"), "/_types.rs"));

@@ -1,11 +0,0 @@
fn main() {
connectrpc_build::Config::new()
.files(&[
"../proto/envd/process.proto",
"../proto/envd/filesystem.proto",
])
.includes(&["../proto/envd", "/usr/include"])
.include_file("_connectrpc.rs")
.compile()
.unwrap();
}

@@ -1,3 +0,0 @@
[toolchain]
channel = "stable"
targets = ["x86_64-unknown-linux-gnu", "x86_64-unknown-linux-musl"]

@@ -1,56 +0,0 @@
use std::sync::Arc;
use axum::extract::Request;
use axum::http::StatusCode;
use axum::middleware::Next;
use axum::response::{IntoResponse, Response};
use serde_json::json;
use crate::auth::token::SecureToken;
const ACCESS_TOKEN_HEADER: &str = "x-access-token";
/// Paths excluded from general token auth.
/// Format: "METHOD/path"
const AUTH_EXCLUDED: &[&str] = &[
"GET/health",
"GET/files",
"POST/files",
"POST/init",
"POST/snapshot/prepare",
];
/// Axum middleware that checks X-Access-Token header.
pub async fn auth_layer(
request: Request,
next: Next,
access_token: Arc<SecureToken>,
) -> Response {
if access_token.is_set() {
let method = request.method().as_str();
let path = request.uri().path();
let key = format!("{method}{path}");
let is_excluded = AUTH_EXCLUDED.iter().any(|p| *p == key);
let header_val = request
.headers()
.get(ACCESS_TOKEN_HEADER)
.and_then(|v| v.to_str().ok())
.unwrap_or("");
if !access_token.equals(header_val) && !is_excluded {
tracing::error!("unauthorized access attempt");
return (
StatusCode::UNAUTHORIZED,
axum::Json(json!({
"code": 401,
"message": "unauthorized access, please provide a valid access token or method signing if supported"
})),
)
.into_response();
}
}
next.run(request).await
}

@@ -1,3 +0,0 @@
pub mod token;
pub mod signing;
pub mod middleware;

@@ -1,85 +0,0 @@
use crate::auth::token::SecureToken;
use crate::crypto;
use zeroize::Zeroize;
pub const READ_OPERATION: &str = "read";
pub const WRITE_OPERATION: &str = "write";
/// Generate a v1 signature: `v1_{sha256_base64(path:operation:username:token[:expiration])}`
pub fn generate_signature(
token: &SecureToken,
path: &str,
username: &str,
operation: &str,
expiration: Option<i64>,
) -> Result<String, &'static str> {
let mut token_bytes = token.bytes().ok_or("access token is not set")?;
let payload = match expiration {
Some(exp) => format!(
"{}:{}:{}:{}:{}",
path,
operation,
username,
String::from_utf8_lossy(&token_bytes),
exp
),
None => format!(
"{}:{}:{}:{}",
path,
operation,
username,
String::from_utf8_lossy(&token_bytes),
),
};
token_bytes.zeroize();
let hash = crypto::sha256::hash_without_prefix(payload.as_bytes());
Ok(format!("v1_{hash}"))
}
/// Validate a request's signing. Returns Ok(()) if valid.
pub fn validate_signing(
token: &SecureToken,
header_token: Option<&str>,
signature: Option<&str>,
signature_expiration: Option<i64>,
username: &str,
path: &str,
operation: &str,
) -> Result<(), String> {
if !token.is_set() {
return Ok(());
}
if let Some(ht) = header_token {
if !ht.is_empty() {
if token.equals(ht) {
return Ok(());
}
return Err("access token present in header but does not match".into());
}
}
let sig = signature.ok_or("missing signature query parameter")?;
let expected = generate_signature(token, path, username, operation, signature_expiration)
.map_err(|e| format!("error generating signing key: {e}"))?;
if expected != sig {
return Err("invalid signature".into());
}
if let Some(exp) = signature_expiration {
let now = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap()
.as_secs() as i64;
if exp < now {
return Err("signature is already expired".into());
}
}
Ok(())
}

@@ -1,127 +0,0 @@
use std::sync::RwLock;
use subtle::ConstantTimeEq;
use zeroize::Zeroize;
/// Secure token storage with constant-time comparison and zeroize-on-drop.
///
/// Mirrors Go's SecureToken backed by memguard.LockedBuffer.
/// In Rust we rely on `zeroize` for Drop-based zeroing.
pub struct SecureToken {
inner: RwLock<Option<Vec<u8>>>,
}
impl SecureToken {
pub fn new() -> Self {
Self {
inner: RwLock::new(None),
}
}
pub fn set(&self, token: &[u8]) -> Result<(), &'static str> {
if token.is_empty() {
return Err("empty token not allowed");
}
let mut guard = self.inner.write().unwrap();
if let Some(ref mut old) = *guard {
old.zeroize();
}
*guard = Some(token.to_vec());
Ok(())
}
pub fn is_set(&self) -> bool {
let guard = self.inner.read().unwrap();
guard.is_some()
}
/// Constant-time comparison.
pub fn equals(&self, other: &str) -> bool {
let guard = self.inner.read().unwrap();
match guard.as_ref() {
Some(buf) => buf.as_slice().ct_eq(other.as_bytes()).into(),
None => false,
}
}
/// Constant-time comparison with another SecureToken.
pub fn equals_secure(&self, other: &SecureToken) -> bool {
let other_bytes = match other.bytes() {
Some(b) => b,
None => return false,
};
let guard = self.inner.read().unwrap();
let result = match guard.as_ref() {
Some(buf) => buf.as_slice().ct_eq(&other_bytes).into(),
None => false,
};
// other_bytes dropped here, Vec<u8> doesn't auto-zeroize but
// we accept this — same as Go's `defer memguard.WipeBytes(otherBytes)`
result
}
/// Returns a copy of the token bytes (for signature generation).
pub fn bytes(&self) -> Option<Vec<u8>> {
let guard = self.inner.read().unwrap();
guard.as_ref().map(|b| b.clone())
}
/// Transfer token from another SecureToken, clearing the source.
pub fn take_from(&self, src: &SecureToken) {
let taken = {
let mut src_guard = src.inner.write().unwrap();
src_guard.take()
};
let mut guard = self.inner.write().unwrap();
if let Some(ref mut old) = *guard {
old.zeroize();
}
*guard = taken;
}
pub fn destroy(&self) {
let mut guard = self.inner.write().unwrap();
if let Some(ref mut buf) = *guard {
buf.zeroize();
}
*guard = None;
}
}
impl Drop for SecureToken {
fn drop(&mut self) {
if let Ok(mut guard) = self.inner.write() {
if let Some(ref mut buf) = *guard {
buf.zeroize();
}
}
}
}
/// Deserialize from JSON string, matching Go's UnmarshalJSON behavior.
/// Expects a quoted JSON string. Rejects escape sequences.
impl SecureToken {
pub fn from_json_bytes(data: &mut [u8]) -> Result<Self, &'static str> {
if data.len() < 2 || data[0] != b'"' || data[data.len() - 1] != b'"' {
data.zeroize();
return Err("invalid secure token JSON string");
}
let content = &data[1..data.len() - 1];
if content.contains(&b'\\') {
data.zeroize();
return Err("invalid secure token: unexpected escape sequence");
}
if content.is_empty() {
data.zeroize();
return Err("empty token not allowed");
}
let token = Self::new();
token.set(content).map_err(|_| "failed to set token")?;
data.zeroize();
Ok(token)
}
}
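The `ct_eq` calls above are the point of this type: a plain `==` on byte slices returns early at the first mismatch and can leak the matching prefix length through timing. A toy Python analogue of the compare path, using the stdlib's constant-time helper (illustration only; Python's immutable `bytes` cannot be zeroized, so that half of the design has no equivalent here):

```python
import hmac

class SecureTokenSketch:
    """Toy analogue of SecureToken: store bytes, compare in constant time."""

    def __init__(self) -> None:
        self._buf: bytes | None = None

    def set(self, token: bytes) -> None:
        if not token:
            raise ValueError("empty token not allowed")
        self._buf = token

    def is_set(self) -> bool:
        return self._buf is not None

    def equals(self, other: str) -> bool:
        if self._buf is None:
            return False
        # hmac.compare_digest plays the role of subtle::ConstantTimeEq.
        return hmac.compare_digest(self._buf, other.encode())

t = SecureTokenSketch()
t.set(b"s3cret")
print(t.equals("s3cret"), t.equals("wrong"))
```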

@@ -1,66 +0,0 @@
use std::collections::HashMap;
use std::fs;
use std::os::unix::io::{OwnedFd, RawFd};
use std::path::PathBuf;
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum ProcessType {
Pty,
User,
Socat,
}
pub trait CgroupManager: Send + Sync {
fn get_fd(&self, proc_type: ProcessType) -> Option<RawFd>;
}
pub struct Cgroup2Manager {
fds: HashMap<ProcessType, OwnedFd>,
}
impl Cgroup2Manager {
pub fn new(root: &str, configs: &[(ProcessType, &str, &[(&str, &str)])]) -> Result<Self, String> {
let mut fds = HashMap::new();
for (proc_type, sub_path, properties) in configs {
let full_path = PathBuf::from(root).join(sub_path);
fs::create_dir_all(&full_path).map_err(|e| {
format!("failed to create cgroup {}: {e}", full_path.display())
})?;
for (name, value) in *properties {
let prop_path = full_path.join(name);
fs::write(&prop_path, value).map_err(|e| {
format!("failed to write cgroup property {}: {e}", prop_path.display())
})?;
}
let fd = nix::fcntl::open(
&full_path,
nix::fcntl::OFlag::O_RDONLY,
nix::sys::stat::Mode::empty(),
)
.map_err(|e| format!("failed to open cgroup {}: {e}", full_path.display()))?;
fds.insert(*proc_type, fd);
}
Ok(Self { fds })
}
}
impl CgroupManager for Cgroup2Manager {
fn get_fd(&self, proc_type: ProcessType) -> Option<RawFd> {
use std::os::unix::io::AsRawFd;
self.fds.get(&proc_type).map(|fd| fd.as_raw_fd())
}
}
pub struct NoopCgroupManager;
impl CgroupManager for NoopCgroupManager {
fn get_fd(&self, _proc_type: ProcessType) -> Option<RawFd> {
None
}
}

@@ -1,16 +0,0 @@
use std::time::Duration;
pub const DEFAULT_PORT: u16 = 49983;
pub const IDLE_TIMEOUT: Duration = Duration::from_secs(640);
pub const CORS_MAX_AGE: Duration = Duration::from_secs(7200);
pub const PORT_SCANNER_INTERVAL: Duration = Duration::from_millis(1000);
pub const DEFAULT_USER: &str = "root";
pub const WRENN_RUN_DIR: &str = "/run/wrenn";
pub const KILOBYTE: u64 = 1024;
pub const MEGABYTE: u64 = 1024 * KILOBYTE;
pub const MMDS_ADDRESS: &str = "169.254.169.254";
pub const MMDS_POLL_INTERVAL: Duration = Duration::from_millis(50);
pub const MMDS_TOKEN_EXPIRATION_SECS: u64 = 60;
pub const MMDS_ACCESS_TOKEN_CLIENT_TIMEOUT: Duration = Duration::from_secs(10);

@@ -1,79 +0,0 @@
use std::collections::HashSet;
use std::sync::Mutex;
/// Tracks active TCP connections for snapshot/restore lifecycle.
///
/// Before snapshot: close idle connections, record active ones.
/// After restore: close all pre-snapshot connections (zombie TCP sockets).
///
/// In Rust/axum, we don't have Go's ConnState callback. Instead we track
/// connections via a tower middleware that registers connection IDs.
/// For the initial implementation, we track by a simple connection counter
/// and rely on axum's graceful shutdown mechanics.
pub struct ConnTracker {
inner: Mutex<ConnTrackerInner>,
}
struct ConnTrackerInner {
active: HashSet<u64>,
pre_snapshot: Option<HashSet<u64>>,
next_id: u64,
keepalives_enabled: bool,
}
impl ConnTracker {
pub fn new() -> Self {
Self {
inner: Mutex::new(ConnTrackerInner {
active: HashSet::new(),
pre_snapshot: None,
next_id: 0,
keepalives_enabled: true,
}),
}
}
pub fn register_connection(&self) -> u64 {
let mut inner = self.inner.lock().unwrap();
let id = inner.next_id;
inner.next_id += 1;
inner.active.insert(id);
id
}
pub fn remove_connection(&self, id: u64) {
let mut inner = self.inner.lock().unwrap();
inner.active.remove(&id);
if let Some(ref mut pre) = inner.pre_snapshot {
pre.remove(&id);
}
}
pub fn prepare_for_snapshot(&self) {
let mut inner = self.inner.lock().unwrap();
inner.keepalives_enabled = false;
inner.pre_snapshot = Some(inner.active.clone());
tracing::info!(
active_connections = inner.active.len(),
"snapshot: recorded pre-snapshot connections, keep-alives disabled"
);
}
pub fn restore_after_snapshot(&self) {
let mut inner = self.inner.lock().unwrap();
if let Some(pre) = inner.pre_snapshot.take() {
let zombie_count = pre.len();
for id in &pre {
inner.active.remove(id);
}
if zombie_count > 0 {
tracing::info!(zombie_count, "restore: closed zombie connections");
}
}
inner.keepalives_enabled = true;
}
pub fn keepalives_enabled(&self) -> bool {
self.inner.lock().unwrap().keepalives_enabled
}
}

@@ -1,22 +0,0 @@
use hmac::{Hmac, Mac};
use sha2::Sha256;
type HmacSha256 = Hmac<Sha256>;
pub fn compute(key: &[u8], data: &[u8]) -> String {
let mut mac = HmacSha256::new_from_slice(key).expect("HMAC accepts any key length");
mac.update(data);
let result = mac.finalize();
hex::encode(result.into_bytes())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_hmac_sha256() {
let result = compute(b"key", b"message");
assert_eq!(result.len(), 64); // SHA-256 hex = 64 chars
}
}

@@ -1,3 +0,0 @@
pub mod sha256;
pub mod sha512;
pub mod hmac_sha256;

@@ -1,33 +0,0 @@
use base64::Engine;
use base64::engine::general_purpose::STANDARD_NO_PAD;
use sha2::{Digest, Sha256};
pub fn hash(data: &[u8]) -> String {
let h = Sha256::digest(data);
let encoded = STANDARD_NO_PAD.encode(h);
format!("$sha256${encoded}")
}
pub fn hash_without_prefix(data: &[u8]) -> String {
let h = Sha256::digest(data);
STANDARD_NO_PAD.encode(h)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_hash_format() {
let result = hash(b"test");
assert!(result.starts_with("$sha256$"));
assert!(!result.contains('='));
}
#[test]
fn test_hash_without_prefix() {
let result = hash_without_prefix(b"test");
assert!(!result.starts_with("$sha256$"));
assert!(!result.contains('='));
}
}

@@ -1,24 +0,0 @@
use sha2::{Digest, Sha512};
pub fn hash_access_token(token: &str) -> String {
let h = Sha512::digest(token.as_bytes());
hex::encode(h)
}
pub fn hash_access_token_bytes(token: &[u8]) -> String {
let h = Sha512::digest(token);
hex::encode(h)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_hash_access_token() {
let h1 = hash_access_token("test");
let h2 = hash_access_token_bytes(b"test");
assert_eq!(h1, h2);
assert_eq!(h1.len(), 128); // SHA-512 hex = 128 chars
}
}

@@ -1,42 +0,0 @@
use dashmap::DashMap;
use std::sync::Arc;
#[derive(Clone)]
pub struct Defaults {
pub env_vars: Arc<DashMap<String, String>>,
pub user: String,
pub workdir: Option<String>,
}
impl Defaults {
pub fn new(user: &str) -> Self {
Self {
env_vars: Arc::new(DashMap::new()),
user: user.to_string(),
workdir: None,
}
}
}
pub fn resolve_default_workdir(workdir: &str, default_workdir: Option<&str>) -> String {
if !workdir.is_empty() {
return workdir.to_string();
}
if let Some(dw) = default_workdir {
return dw.to_string();
}
String::new()
}
pub fn resolve_default_username<'a>(
username: Option<&'a str>,
default_username: &'a str,
) -> Result<&'a str, &'static str> {
if let Some(u) = username {
return Ok(u);
}
if !default_username.is_empty() {
return Ok(default_username);
}
Err("username not provided")
}

@@ -1,73 +0,0 @@
use std::ffi::CString;
use std::time::{SystemTime, UNIX_EPOCH};
use serde::Serialize;
#[derive(Serialize)]
pub struct Metrics {
pub ts: i64,
pub cpu_count: u32,
pub cpu_used_pct: f32,
pub mem_total_mib: u64,
pub mem_used_mib: u64,
pub mem_total: u64,
pub mem_used: u64,
pub disk_used: u64,
pub disk_total: u64,
}
pub fn get_metrics() -> Result<Metrics, String> {
use sysinfo::System;
let mut sys = System::new();
sys.refresh_memory();
sys.refresh_cpu_all();
std::thread::sleep(std::time::Duration::from_millis(100));
sys.refresh_cpu_all();
let cpu_count = sys.cpus().len() as u32;
let cpu_used_pct = sys.global_cpu_usage();
let cpu_used_pct_rounded = if cpu_used_pct > 0.0 {
(cpu_used_pct * 100.0).round() / 100.0
} else {
0.0
};
let mem_total = sys.total_memory();
let mem_used = sys.used_memory();
let (disk_total, disk_used) = disk_stats("/")?;
let ts = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs() as i64;
Ok(Metrics {
ts,
cpu_count,
cpu_used_pct: cpu_used_pct_rounded,
mem_total_mib: mem_total / 1024 / 1024,
mem_used_mib: mem_used / 1024 / 1024,
mem_total,
mem_used,
disk_used,
disk_total,
})
}
fn disk_stats(path: &str) -> Result<(u64, u64), String> {
let c_path = CString::new(path).unwrap();
let mut stat: libc::statfs = unsafe { std::mem::zeroed() };
let ret = unsafe { libc::statfs(c_path.as_ptr(), &mut stat) };
if ret != 0 {
return Err(format!("statfs failed: {}", std::io::Error::last_os_error()));
}
let block = stat.f_bsize as u64;
let total = stat.f_blocks * block;
let available = stat.f_bavail * block;
Ok((total, total - available))
}
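Note that `disk_stats` reports used space as `total - available`, where available means blocks usable by unprivileged processes (`f_bavail`), so root-reserved blocks count as used, the same convention `df` follows. The same arithmetic through Python's stdlib `statvfs` wrapper, as a sketch (the Rust code calls `statfs` directly via libc):

```python
import os

def disk_stats(path: str = "/") -> tuple[int, int]:
    """Return (total_bytes, used_bytes), where used = total - available-to-unprivileged."""
    st = os.statvfs(path)
    block = st.f_frsize                  # fundamental block size
    total = st.f_blocks * block          # all blocks on the filesystem
    available = st.f_bavail * block      # blocks free to non-root processes
    return total, total - available

total, used = disk_stats("/")
print(total, used)
```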

@@ -1,120 +0,0 @@
use std::sync::Arc;
use std::time::Duration;
use dashmap::DashMap;
use serde::Deserialize;
use tokio_util::sync::CancellationToken;
use crate::config::{MMDS_ADDRESS, MMDS_POLL_INTERVAL, MMDS_TOKEN_EXPIRATION_SECS, WRENN_RUN_DIR};
#[derive(Debug, Clone, Deserialize)]
pub struct MMDSOpts {
#[serde(rename = "instanceID")]
pub sandbox_id: String,
#[serde(rename = "envID")]
pub template_id: String,
#[serde(rename = "address", default)]
pub logs_collector_address: String,
#[serde(rename = "accessTokenHash", default)]
pub access_token_hash: String,
}
async fn get_mmds_token(client: &reqwest::Client) -> Result<String, String> {
let resp = client
.put(format!("http://{MMDS_ADDRESS}/latest/api/token"))
.header(
"X-metadata-token-ttl-seconds",
MMDS_TOKEN_EXPIRATION_SECS.to_string(),
)
.send()
.await
.map_err(|e| format!("mmds token request failed: {e}"))?;
let token = resp.text().await.map_err(|e| format!("mmds token read: {e}"))?;
if token.is_empty() {
return Err("mmds token is an empty string".into());
}
Ok(token)
}
async fn get_mmds_opts(client: &reqwest::Client, token: &str) -> Result<MMDSOpts, String> {
let resp = client
.get(format!("http://{MMDS_ADDRESS}"))
.header("X-metadata-token", token)
.header("Accept", "application/json")
.send()
.await
.map_err(|e| format!("mmds opts request failed: {e}"))?;
resp.json::<MMDSOpts>()
.await
.map_err(|e| format!("mmds opts parse: {e}"))
}
pub async fn get_access_token_hash() -> Result<String, String> {
let client = reqwest::Client::builder()
.timeout(Duration::from_secs(10))
.no_proxy()
.build()
.map_err(|e| format!("http client: {e}"))?;
let token = get_mmds_token(&client).await?;
let opts = get_mmds_opts(&client, &token).await?;
Ok(opts.access_token_hash)
}
/// Polls MMDS every 50ms until metadata is available.
/// Stores sandbox_id and template_id in env_vars and writes to /run/wrenn/ files.
pub async fn poll_for_opts(
env_vars: Arc<DashMap<String, String>>,
cancel: CancellationToken,
) -> Option<MMDSOpts> {
let client = reqwest::Client::builder()
.no_proxy()
.build()
.ok()?;
let mut interval = tokio::time::interval(MMDS_POLL_INTERVAL);
loop {
tokio::select! {
_ = cancel.cancelled() => {
tracing::warn!("context cancelled while waiting for mmds opts");
return None;
}
_ = interval.tick() => {
let token = match get_mmds_token(&client).await {
Ok(t) => t,
Err(e) => {
tracing::debug!(error = %e, "mmds token poll");
continue;
}
};
let opts = match get_mmds_opts(&client, &token).await {
Ok(o) => o,
Err(e) => {
tracing::debug!(error = %e, "mmds opts poll");
continue;
}
};
env_vars.insert("WRENN_SANDBOX_ID".into(), opts.sandbox_id.clone());
env_vars.insert("WRENN_TEMPLATE_ID".into(), opts.template_id.clone());
let run_dir = std::path::Path::new(WRENN_RUN_DIR);
if let Err(e) = std::fs::create_dir_all(run_dir) {
tracing::error!(error = %e, "mmds: failed to create run dir");
}
if let Err(e) = std::fs::write(run_dir.join(".WRENN_SANDBOX_ID"), &opts.sandbox_id) {
tracing::error!(error = %e, "mmds: failed to write .WRENN_SANDBOX_ID");
}
if let Err(e) = std::fs::write(run_dir.join(".WRENN_TEMPLATE_ID"), &opts.template_id) {
tracing::error!(error = %e, "mmds: failed to write .WRENN_TEMPLATE_ID");
}
return Some(opts);
}
}
}
}

@@ -1,2 +0,0 @@
pub mod metrics;
pub mod mmds;

@@ -1,147 +0,0 @@
use axum::http::Request;
const ENCODING_GZIP: &str = "gzip";
const ENCODING_IDENTITY: &str = "identity";
const ENCODING_WILDCARD: &str = "*";
const SUPPORTED_ENCODINGS: &[&str] = &[ENCODING_GZIP];
struct EncodingWithQuality {
encoding: String,
quality: f64,
}
fn parse_encoding_with_quality(value: &str) -> EncodingWithQuality {
let value = value.trim();
let mut quality = 1.0;
if let Some(idx) = value.find(';') {
let params = &value[idx + 1..];
let enc = value[..idx].trim();
for param in params.split(';') {
let param = param.trim();
if let Some(stripped) = param.strip_prefix("q=").or_else(|| param.strip_prefix("Q=")) {
if let Ok(q) = stripped.parse::<f64>() {
quality = q;
}
}
}
return EncodingWithQuality {
encoding: enc.to_ascii_lowercase(),
quality,
};
}
EncodingWithQuality {
encoding: value.to_ascii_lowercase(),
quality,
}
}
fn parse_accept_encoding_header(header: &str) -> (Vec<EncodingWithQuality>, bool) {
if header.is_empty() {
return (Vec::new(), false);
}
let encodings: Vec<EncodingWithQuality> =
header.split(',').map(|v| parse_encoding_with_quality(v)).collect();
let mut identity_rejected = false;
let mut identity_explicitly_accepted = false;
let mut wildcard_rejected = false;
for eq in &encodings {
match eq.encoding.as_str() {
ENCODING_IDENTITY => {
if eq.quality == 0.0 {
identity_rejected = true;
} else {
identity_explicitly_accepted = true;
}
}
ENCODING_WILDCARD => {
if eq.quality == 0.0 {
wildcard_rejected = true;
}
}
_ => {}
}
}
if wildcard_rejected && !identity_explicitly_accepted {
identity_rejected = true;
}
(encodings, identity_rejected)
}
pub fn is_identity_acceptable<B>(r: &Request<B>) -> bool {
let header = r
.headers()
.get("accept-encoding")
.and_then(|v| v.to_str().ok())
.unwrap_or("");
let (_, rejected) = parse_accept_encoding_header(header);
!rejected
}
pub fn parse_accept_encoding<B>(r: &Request<B>) -> Result<&'static str, String> {
let header = r
.headers()
.get("accept-encoding")
.and_then(|v| v.to_str().ok())
.unwrap_or("");
if header.is_empty() {
return Ok(ENCODING_IDENTITY);
}
let (mut encodings, identity_rejected) = parse_accept_encoding_header(header);
encodings.sort_by(|a, b| b.quality.partial_cmp(&a.quality).unwrap_or(std::cmp::Ordering::Equal));
for eq in &encodings {
if eq.quality == 0.0 {
continue;
}
if eq.encoding == ENCODING_IDENTITY {
return Ok(ENCODING_IDENTITY);
}
if eq.encoding == ENCODING_WILDCARD {
if identity_rejected && !SUPPORTED_ENCODINGS.is_empty() {
return Ok(SUPPORTED_ENCODINGS[0]);
}
return Ok(ENCODING_IDENTITY);
}
if eq.encoding == ENCODING_GZIP {
return Ok(ENCODING_GZIP);
}
}
if !identity_rejected {
return Ok(ENCODING_IDENTITY);
}
Err(format!("no acceptable encoding found, supported: {SUPPORTED_ENCODINGS:?}"))
}
pub fn parse_content_encoding<B>(r: &Request<B>) -> Result<&'static str, String> {
let header = r
.headers()
.get("content-encoding")
.and_then(|v| v.to_str().ok())
.unwrap_or("");
if header.is_empty() {
return Ok(ENCODING_IDENTITY);
}
let encoding = header.trim().to_ascii_lowercase();
if encoding == ENCODING_IDENTITY {
return Ok(ENCODING_IDENTITY);
}
if SUPPORTED_ENCODINGS.contains(&encoding.as_str()) {
return Ok(ENCODING_GZIP);
}
Err(format!("unsupported Content-Encoding: {header}, supported: {SUPPORTED_ENCODINGS:?}"))
}
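The q-value handling above has two subtle rules: `identity;q=0` forbids the identity fallback, and `*;q=0` forbids it too unless identity was accepted explicitly. A compact Python sketch of the same selection logic (a hypothetical helper, not from the repo), handy for checking headers against the Rust behavior:

```python
def pick_encoding(header: str, supported=("gzip",)) -> str:
    """Sketch of parse_accept_encoding: honor q-values, fall back to identity."""
    if not header.strip():
        return "identity"
    encodings = []
    identity_rejected = identity_accepted = wildcard_rejected = False
    for part in header.split(","):
        name, _, params = part.partition(";")
        name = name.strip().lower()
        q = 1.0
        for param in params.split(";"):
            param = param.strip()
            if param[:2].lower() == "q=":
                try:
                    q = float(param[2:])
                except ValueError:
                    pass
        encodings.append((name, q))
        if name == "identity":
            if q == 0.0:
                identity_rejected = True
            else:
                identity_accepted = True
        elif name == "*" and q == 0.0:
            wildcard_rejected = True
    if wildcard_rejected and not identity_accepted:
        identity_rejected = True
    # Highest quality wins; q=0 means "rejected".
    for name, q in sorted(encodings, key=lambda e: -e[1]):
        if q == 0.0:
            continue
        if name == "identity":
            return "identity"
        if name == "*":
            return supported[0] if identity_rejected and supported else "identity"
        if name in supported:
            return name
    if not identity_rejected:
        return "identity"
    raise ValueError(f"no acceptable encoding, supported: {supported}")

print(pick_encoding("gzip;q=0.8, identity;q=0"))
```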

@@ -1,25 +0,0 @@
use std::collections::HashMap;
use std::sync::Arc;
use axum::Json;
use axum::extract::State;
use axum::http::header;
use axum::response::IntoResponse;
use crate::state::AppState;
pub async fn get_envs(State(state): State<Arc<AppState>>) -> impl IntoResponse {
tracing::debug!("getting env vars");
let envs: HashMap<String, String> = state
.defaults
.env_vars
.iter()
.map(|entry| (entry.key().clone(), entry.value().clone()))
.collect();
(
[(header::CACHE_CONTROL, "no-store")],
Json(envs),
)
}

@@ -1,20 +0,0 @@
use axum::Json;
use axum::http::StatusCode;
use axum::response::IntoResponse;
use serde::Serialize;
#[derive(Serialize)]
struct ErrorBody {
code: u16,
message: String,
}
pub fn json_error(status: StatusCode, message: &str) -> impl IntoResponse {
(
status,
Json(ErrorBody {
code: status.as_u16(),
message: message.to_string(),
}),
)
}

@@ -1,443 +0,0 @@
use std::io::Write as _;
use std::path::Path;
use std::sync::Arc;
use axum::body::Body;
use axum::extract::{FromRequest, Query, Request, State};
use axum::http::{StatusCode, header};
use axum::response::{IntoResponse, Response};
use serde::{Deserialize, Serialize};
use crate::auth::signing;
use crate::execcontext;
use crate::http::encoding;
use crate::permissions::path::{ensure_dirs, expand_and_resolve};
use crate::permissions::user::lookup_user;
use crate::state::AppState;
const ACCESS_TOKEN_HEADER: &str = "x-access-token";
#[derive(Deserialize)]
pub struct FileParams {
pub path: Option<String>,
pub username: Option<String>,
pub signature: Option<String>,
pub signature_expiration: Option<i64>,
}
#[derive(Serialize)]
struct EntryInfo {
path: String,
name: String,
r#type: &'static str,
}
fn json_error(status: StatusCode, msg: &str) -> Response {
let body = serde_json::json!({ "code": status.as_u16(), "message": msg });
(status, axum::Json(body)).into_response()
}
fn extract_header_token(req: &Request) -> Option<&str> {
req.headers()
.get(ACCESS_TOKEN_HEADER)
.and_then(|v| v.to_str().ok())
}
fn validate_file_signing(
state: &AppState,
header_token: Option<&str>,
params: &FileParams,
path: &str,
operation: &str,
username: &str,
) -> Result<(), String> {
signing::validate_signing(
&state.access_token,
header_token,
params.signature.as_deref(),
params.signature_expiration,
username,
path,
operation,
)
}
/// GET /files — download a file
pub async fn get_files(
State(state): State<Arc<AppState>>,
Query(params): Query<FileParams>,
req: Request,
) -> Response {
let path_str = params.path.as_deref().unwrap_or("");
let header_token = extract_header_token(&req);
let username = match execcontext::resolve_default_username(
params.username.as_deref(),
&state.defaults.user,
) {
Ok(u) => u.to_string(),
Err(e) => return json_error(StatusCode::BAD_REQUEST, e),
};
if let Err(e) = validate_file_signing(
&state,
header_token,
&params,
path_str,
signing::READ_OPERATION,
&username,
) {
return json_error(StatusCode::UNAUTHORIZED, &e);
}
let user = match lookup_user(&username) {
Ok(u) => u,
Err(e) => return json_error(StatusCode::UNAUTHORIZED, &e),
};
let home_dir = user.dir.to_string_lossy().to_string();
let resolved = match expand_and_resolve(path_str, &home_dir, state.defaults.workdir.as_deref())
{
Ok(p) => p,
Err(e) => return json_error(StatusCode::BAD_REQUEST, &e),
};
let meta = match std::fs::metadata(&resolved) {
Ok(m) => m,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
return json_error(
StatusCode::NOT_FOUND,
&format!("path '{}' does not exist", resolved),
);
}
Err(e) => {
return json_error(
StatusCode::INTERNAL_SERVER_ERROR,
&format!("error checking path: {e}"),
);
}
};
if meta.is_dir() {
return json_error(
StatusCode::BAD_REQUEST,
&format!("path '{}' is a directory", resolved),
);
}
if !meta.file_type().is_file() {
return json_error(
StatusCode::BAD_REQUEST,
&format!("path '{}' is not a regular file", resolved),
);
}
let accept_enc = match encoding::parse_accept_encoding(&req) {
Ok(e) => e,
Err(e) => return json_error(StatusCode::NOT_ACCEPTABLE, &e),
};
let has_range_or_conditional = req.headers().get("range").is_some()
|| req.headers().get("if-modified-since").is_some()
|| req.headers().get("if-none-match").is_some()
|| req.headers().get("if-range").is_some();
let use_encoding = if has_range_or_conditional {
if !encoding::is_identity_acceptable(&req) {
return json_error(
StatusCode::NOT_ACCEPTABLE,
"identity encoding not acceptable for Range or conditional request",
);
}
"identity"
} else {
accept_enc
};
let file_data = match std::fs::read(&resolved) {
Ok(d) => d,
Err(e) => {
return json_error(
StatusCode::INTERNAL_SERVER_ERROR,
&format!("error reading file: {e}"),
);
}
};
let filename = Path::new(&resolved)
.file_name()
.map(|n| n.to_string_lossy().to_string())
.unwrap_or_default();
let content_disposition = format!("inline; filename=\"{}\"", filename);
let content_type = mime_guess::from_path(&resolved)
.first_raw()
.unwrap_or("application/octet-stream");
if use_encoding == "gzip" {
let mut encoder =
flate2::write::GzEncoder::new(Vec::new(), flate2::Compression::default());
if let Err(e) = encoder.write_all(&file_data) {
return json_error(
StatusCode::INTERNAL_SERVER_ERROR,
&format!("gzip encoding error: {e}"),
);
}
let compressed = match encoder.finish() {
Ok(d) => d,
Err(e) => {
return json_error(
StatusCode::INTERNAL_SERVER_ERROR,
&format!("gzip finish error: {e}"),
);
}
};
return Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, content_type)
.header(header::CONTENT_ENCODING, "gzip")
.header(header::CONTENT_DISPOSITION, content_disposition)
.header(header::VARY, "Accept-Encoding")
.body(Body::from(compressed))
.unwrap();
}
Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, content_type)
.header(header::CONTENT_DISPOSITION, content_disposition)
.header(header::VARY, "Accept-Encoding")
.header(header::CONTENT_LENGTH, file_data.len())
.body(Body::from(file_data))
.unwrap()
}
/// POST /files — upload file(s) via multipart
pub async fn post_files(
State(state): State<Arc<AppState>>,
Query(params): Query<FileParams>,
req: Request,
) -> Response {
let path_str = params.path.as_deref().unwrap_or("");
let header_token = extract_header_token(&req);
let username = match execcontext::resolve_default_username(
params.username.as_deref(),
&state.defaults.user,
) {
Ok(u) => u.to_string(),
Err(e) => return json_error(StatusCode::BAD_REQUEST, e),
};
if let Err(e) = validate_file_signing(
&state,
header_token,
&params,
path_str,
signing::WRITE_OPERATION,
&username,
) {
return json_error(StatusCode::UNAUTHORIZED, &e);
}
let user = match lookup_user(&username) {
Ok(u) => u,
Err(e) => return json_error(StatusCode::UNAUTHORIZED, &e),
};
let home_dir = user.dir.to_string_lossy().to_string();
let uid = user.uid;
let gid = user.gid;
let content_enc = match encoding::parse_content_encoding(&req) {
Ok(e) => e,
Err(e) => return json_error(StatusCode::BAD_REQUEST, &e),
};
let mut multipart = match axum::extract::Multipart::from_request(req, &()).await {
Ok(m) => m,
Err(e) => {
return json_error(
StatusCode::BAD_REQUEST,
&format!("error parsing multipart: {e}"),
);
}
};
let mut uploaded: Vec<EntryInfo> = Vec::new();
loop {
let field = match multipart.next_field().await {
Ok(Some(f)) => f,
Ok(None) => break,
// Surface body-read errors instead of silently ending the loop,
// which would report partial uploads as success.
Err(e) => {
return json_error(
StatusCode::BAD_REQUEST,
&format!("error reading multipart field: {e}"),
)
}
};
let field_name = field.name().unwrap_or("").to_string();
if field_name != "file" {
continue;
}
let file_path = if !path_str.is_empty() {
match expand_and_resolve(path_str, &home_dir, state.defaults.workdir.as_deref()) {
Ok(p) => p,
Err(e) => return json_error(StatusCode::BAD_REQUEST, &e),
}
} else {
let fname = field
.file_name()
.unwrap_or("upload")
.to_string();
match expand_and_resolve(&fname, &home_dir, state.defaults.workdir.as_deref()) {
Ok(p) => p,
Err(e) => return json_error(StatusCode::BAD_REQUEST, &e),
}
};
if uploaded.iter().any(|e| e.path == file_path) {
return json_error(
StatusCode::BAD_REQUEST,
&format!("cannot upload multiple files to same path '{}'", file_path),
);
}
let raw_bytes = match field.bytes().await {
Ok(b) => b,
Err(e) => {
return json_error(
StatusCode::INTERNAL_SERVER_ERROR,
&format!("error reading field: {e}"),
);
}
};
let data = if content_enc == "gzip" {
use std::io::Read;
let mut decoder = flate2::read::GzDecoder::new(&raw_bytes[..]);
let mut buf = Vec::new();
match decoder.read_to_end(&mut buf) {
Ok(_) => buf,
Err(e) => {
return json_error(
StatusCode::BAD_REQUEST,
&format!("gzip decompression failed: {e}"),
);
}
}
} else {
raw_bytes.to_vec()
};
if let Err(e) = process_file(&file_path, &data, uid, gid) {
let (status, msg) = e;
return json_error(status, &msg);
}
let name = Path::new(&file_path)
.file_name()
.map(|n| n.to_string_lossy().to_string())
.unwrap_or_default();
uploaded.push(EntryInfo {
path: file_path,
name,
r#type: "file",
});
}
axum::Json(uploaded).into_response()
}
fn process_file(
path: &str,
data: &[u8],
uid: nix::unistd::Uid,
gid: nix::unistd::Gid,
) -> Result<(), (StatusCode, String)> {
let dir = Path::new(path)
.parent()
.map(|p| p.to_string_lossy().to_string())
.unwrap_or_default();
if !dir.is_empty() {
ensure_dirs(&dir, uid, gid).map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("error ensuring directories: {e}"),
)
})?;
}
let can_pre_chown = match std::fs::metadata(path) {
Ok(meta) => {
if meta.is_dir() {
return Err((
StatusCode::BAD_REQUEST,
format!("path is a directory: {path}"),
));
}
true
}
Err(e) if e.kind() == std::io::ErrorKind::NotFound => false,
Err(e) => {
return Err((
StatusCode::INTERNAL_SERVER_ERROR,
format!("error getting file info: {e}"),
))
}
};
let mut chowned = false;
if can_pre_chown {
match std::os::unix::fs::chown(path, Some(uid.as_raw()), Some(gid.as_raw())) {
Ok(()) => chowned = true,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {}
Err(e) => {
return Err((
StatusCode::INTERNAL_SERVER_ERROR,
format!("error changing ownership: {e}"),
))
}
}
}
let mut file = std::fs::OpenOptions::new()
.write(true)
.create(true)
.truncate(true)
.mode(0o666)
.open(path)
.map_err(|e| {
if e.raw_os_error() == Some(libc::ENOSPC) {
return (
StatusCode::INSUFFICIENT_STORAGE,
"not enough disk space available".to_string(),
);
}
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("error opening file: {e}"),
)
})?;
if !chowned {
std::os::unix::fs::chown(path, Some(uid.as_raw()), Some(gid.as_raw())).map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("error changing ownership: {e}"),
)
})?;
}
file.write_all(data).map_err(|e| {
if e.raw_os_error() == Some(libc::ENOSPC) {
return (
StatusCode::INSUFFICIENT_STORAGE,
"not enough disk space available".to_string(),
);
}
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("error writing file: {e}"),
)
})?;
Ok(())
}
use std::os::unix::fs::OpenOptionsExt; // for OpenOptions::mode() in process_file
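`process_file` maps out-of-disk errors to 507 Insufficient Storage by inspecting the raw errno. A minimal stdlib sketch of that mapping (the `write_all_checked` helper is illustrative, and the hardcoded errno 28 stands in for `libc::ENOSPC` on Linux to stay dependency-free):

```rust
use std::io::Write;

// Write a file, translating ENOSPC into a (status, message) pair the way
// process_file does; all other I/O errors become a generic 500.
fn write_all_checked(path: &str, data: &[u8]) -> Result<(), (u16, String)> {
    const ENOSPC: i32 = 28; // Linux "No space left on device" (assumption: libc::ENOSPC)
    std::fs::File::create(path)
        .and_then(|mut f| f.write_all(data))
        .map_err(|e| {
            if e.raw_os_error() == Some(ENOSPC) {
                (507, "not enough disk space available".to_string())
            } else {
                (500, format!("error writing file: {e}"))
            }
        })
}

fn main() {
    let tmp = std::env::temp_dir().join("wrenn_demo.txt");
    write_all_checked(tmp.to_str().unwrap(), b"hello").unwrap();
    assert_eq!(std::fs::read(&tmp).unwrap(), b"hello");
    std::fs::remove_file(&tmp).unwrap();
    println!("ok");
}
```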

@@ -1,39 +0,0 @@
use std::sync::Arc;
use std::sync::atomic::Ordering;
use axum::Json;
use axum::extract::State;
use axum::http::header;
use axum::response::IntoResponse;
use serde_json::json;
use crate::state::AppState;
pub async fn get_health(State(state): State<Arc<AppState>>) -> impl IntoResponse {
if state
.needs_restore
.compare_exchange(true, false, Ordering::AcqRel, Ordering::Relaxed)
.is_ok()
{
post_restore_recovery(&state);
}
tracing::trace!("health check");
(
[(header::CACHE_CONTROL, "no-store")],
Json(json!({ "version": state.version })),
)
}
fn post_restore_recovery(state: &AppState) {
tracing::info!("restore: post-restore recovery (no GC needed in Rust)");
state.conn_tracker.restore_after_snapshot();
tracing::info!("restore: zombie connections closed");
if let Some(ref ps) = state.port_subsystem {
ps.restart();
tracing::info!("restore: port subsystem restarted");
}
}

@@ -1,274 +0,0 @@
use std::collections::HashMap;
use std::sync::Arc;
use std::sync::atomic::Ordering;
use axum::Json;
use axum::extract::State;
use axum::http::{StatusCode, header};
use axum::response::IntoResponse;
use serde::Deserialize;
use crate::crypto;
use crate::host::mmds;
use crate::state::AppState;
#[derive(Deserialize, Default)]
#[serde(rename_all = "camelCase")]
pub struct InitRequest {
pub access_token: Option<String>,
pub default_user: Option<String>,
pub default_workdir: Option<String>,
pub env_vars: Option<HashMap<String, String>>,
pub hyperloop_ip: Option<String>,
pub timestamp: Option<String>,
pub volume_mounts: Option<Vec<VolumeMount>>,
}
#[derive(Deserialize)]
pub struct VolumeMount {
pub nfs_target: String,
pub path: String,
}
/// POST /init — called by host agent after boot and after every resume.
pub async fn post_init(
State(state): State<Arc<AppState>>,
body: Option<Json<InitRequest>>,
) -> impl IntoResponse {
let init_req = body.map(|b| b.0).unwrap_or_default();
// Validate access token if provided
if let Some(ref token_str) = init_req.access_token {
if let Err(e) = validate_init_access_token(&state, token_str).await {
tracing::error!(error = %e, "init: access token validation failed");
return (StatusCode::UNAUTHORIZED, e).into_response();
}
}
// Idempotent timestamp check
if let Some(ref ts_str) = init_req.timestamp {
if let Ok(ts) = chrono_parse_to_nanos(ts_str) {
if !state.last_set_time.set_to_greater(ts) {
// Stale request, skip data updates
return trigger_restore_and_respond(&state).await;
}
}
}
// Apply env vars
if let Some(ref vars) = init_req.env_vars {
tracing::debug!(count = vars.len(), "setting env vars");
for (k, v) in vars {
state.defaults.env_vars.insert(k.clone(), v.clone());
}
}
// Set access token
if let Some(ref token_str) = init_req.access_token {
if !token_str.is_empty() {
tracing::debug!("setting access token");
let _ = state.access_token.set(token_str.as_bytes());
} else if state.access_token.is_set() {
tracing::debug!("clearing access token");
state.access_token.destroy();
}
}
// Set default user
if let Some(ref user) = init_req.default_user {
if !user.is_empty() {
tracing::debug!(user = %user, "setting default user");
// Note: mutating defaults.user requires interior mutability, which
// AppState does not yet have; cloning defaults and mutating the copy
// had no effect, so that dead code is gone. env_vars (DashMap) already
// handles concurrent access; user/workdir mutation is deferred to the
// full state refactor.
}
}
// Hyperloop /etc/hosts setup
if let Some(ref ip) = init_req.hyperloop_ip {
let ip = ip.clone();
let env_vars = Arc::clone(&state.defaults.env_vars);
tokio::spawn(async move {
setup_hyperloop(&ip, &env_vars).await;
});
}
// NFS mounts
if let Some(ref mounts) = init_req.volume_mounts {
for mount in mounts {
let target = mount.nfs_target.clone();
let path = mount.path.clone();
tokio::spawn(async move {
setup_nfs(&target, &path).await;
});
}
}
// Re-poll MMDS in background
if state.is_fc {
let env_vars = Arc::clone(&state.defaults.env_vars);
let cancel = tokio_util::sync::CancellationToken::new();
let cancel_clone = cancel.clone();
tokio::spawn(async move {
tokio::time::timeout(std::time::Duration::from_secs(60), async {
mmds::poll_for_opts(env_vars, cancel_clone).await;
})
.await
.ok();
});
}
trigger_restore_and_respond(&state).await
}
async fn trigger_restore_and_respond(state: &AppState) -> axum::response::Response {
// Safety net: if the health check's post_restore_recovery hasn't run yet
if state
.needs_restore
.compare_exchange(true, false, Ordering::AcqRel, Ordering::Relaxed)
.is_ok()
{
post_restore_recovery(state);
} else {
// Flag already cleared: /init is also called after every resume, so
// still quiesce stale connections and restart the port subsystem —
// but only once, not twice as the old unconditional calls did.
state.conn_tracker.restore_after_snapshot();
if let Some(ref ps) = state.port_subsystem {
ps.restart();
}
}
(
StatusCode::NO_CONTENT,
[(header::CACHE_CONTROL, "no-store")],
)
.into_response()
}
fn post_restore_recovery(state: &AppState) {
tracing::info!("restore: post-restore recovery (no GC needed in Rust)");
state.conn_tracker.restore_after_snapshot();
if let Some(ref ps) = state.port_subsystem {
ps.restart();
tracing::info!("restore: port subsystem restarted");
}
}
async fn validate_init_access_token(state: &AppState, request_token: &str) -> Result<(), String> {
// Fast path: matches existing token
if state.access_token.is_set() && !request_token.is_empty() && state.access_token.equals(request_token) {
return Ok(());
}
// Check MMDS hash
if state.is_fc {
if let Ok(mmds_hash) = mmds::get_access_token_hash().await {
if !mmds_hash.is_empty() {
if request_token.is_empty() {
let empty_hash = crypto::sha512::hash_access_token("");
if mmds_hash == empty_hash {
return Ok(());
}
} else {
let token_hash = crypto::sha512::hash_access_token(request_token);
if mmds_hash == token_hash {
return Ok(());
}
}
return Err("access token validation failed".into());
}
}
}
// First-time setup: no existing token and no MMDS
if !state.access_token.is_set() {
return Ok(());
}
if request_token.is_empty() {
return Err("access token reset not authorized".into());
}
Err("access token validation failed".into())
}
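The MMDS path above accepts a token only if hashing it reproduces the stored hash, so the guest never needs the control plane's plaintext secret for this check. A sketch of that compare-by-hash shape, substituting the stdlib `DefaultHasher` for the real `crypto::sha512::hash_access_token` purely to stay dependency-free (a real implementation must use a cryptographic hash and constant-time comparison):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for crypto::sha512::hash_access_token — NOT cryptographically
// secure, used here only to illustrate the accept/reject logic.
fn hash_token(t: &str) -> u64 {
    let mut h = DefaultHasher::new();
    t.hash(&mut h);
    h.finish()
}

fn main() {
    let stored = hash_token("secret"); // what MMDS would hold
    assert_eq!(hash_token("secret"), stored); // matching token accepted
    assert_ne!(hash_token("wrong"), stored); // mismatch rejected
    println!("ok");
}
```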
async fn setup_hyperloop(address: &str, env_vars: &dashmap::DashMap<String, String>) {
// Write to /etc/hosts: events.wrenn.local → address
let entry = format!("{address} events.wrenn.local\n");
match std::fs::read_to_string("/etc/hosts") {
Ok(contents) => {
let filtered: String = contents
.lines()
.filter(|line| !line.contains("events.wrenn.local"))
.collect::<Vec<_>>()
.join("\n");
let new_contents = format!("{filtered}\n{entry}");
if let Err(e) = std::fs::write("/etc/hosts", new_contents) {
tracing::error!(error = %e, "failed to modify hosts file");
return;
}
}
Err(e) => {
tracing::error!(error = %e, "failed to read hosts file");
return;
}
}
env_vars.insert(
"WRENN_EVENTS_ADDRESS".into(),
format!("http://{address}"),
);
}
async fn setup_nfs(nfs_target: &str, path: &str) {
let mkdir = tokio::process::Command::new("mkdir")
.args(["-p", path])
.output()
.await;
if let Err(e) = mkdir {
tracing::error!(error = %e, path, "nfs: mkdir failed");
return;
}
let mount = tokio::process::Command::new("mount")
.args([
"-v",
"-t",
"nfs",
"-o",
"mountproto=tcp,mountport=2049,proto=tcp,port=2049,nfsvers=3,noacl",
nfs_target,
path,
])
.output()
.await;
match mount {
Ok(output) => {
let stdout = String::from_utf8_lossy(&output.stdout);
let stderr = String::from_utf8_lossy(&output.stderr);
if output.status.success() {
tracing::info!(nfs_target, path, stdout = %stdout, "nfs: mount success");
} else {
tracing::error!(nfs_target, path, stderr = %stderr, "nfs: mount failed");
}
}
Err(e) => {
tracing::error!(error = %e, nfs_target, path, "nfs: mount command failed");
}
}
}
fn chrono_parse_to_nanos(ts: &str) -> Result<i64, ()> {
// Parse a timestamp to nanoseconds since epoch. Numeric values are treated
// as (fractional) seconds; full RFC3339 parsing would pull in chrono.
if let Ok(s) = ts.parse::<f64>() {
return Ok((s * 1_000_000_000.0) as i64);
}
// Unparseable (e.g. RFC3339): Err makes the caller skip the idempotency
// check and apply the update.
Err(())
}
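The idempotent timestamp check in `post_init` depends on `state.last_set_time.set_to_greater(ts)`, whose implementation is not part of this diff. A plausible sketch as a CAS loop on an `AtomicI64` (the `LastSetTime` type here is an assumption, not the actual state struct): only strictly newer timestamps win, so a replayed or out-of-order init request returns `false` and the caller skips its data updates.

```rust
use std::sync::atomic::{AtomicI64, Ordering};

struct LastSetTime(AtomicI64);

impl LastSetTime {
    // Returns true iff ts is strictly greater than the stored value,
    // atomically advancing it; false means the request is stale.
    fn set_to_greater(&self, ts: i64) -> bool {
        let mut cur = self.0.load(Ordering::Acquire);
        loop {
            if ts <= cur {
                return false; // stale: caller skips data updates
            }
            match self
                .0
                .compare_exchange_weak(cur, ts, Ordering::AcqRel, Ordering::Acquire)
            {
                Ok(_) => return true,
                Err(now) => cur = now, // lost a race; re-check against the newer value
            }
        }
    }
}

fn main() {
    let t = LastSetTime(AtomicI64::new(0));
    assert!(t.set_to_greater(100));
    assert!(!t.set_to_greater(100)); // replayed init is ignored
    assert!(!t.set_to_greater(50)); // out-of-order init is ignored
    assert!(t.set_to_greater(200));
    println!("ok");
}
```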

@@ -1,88 +0,0 @@
use std::sync::Arc;
use std::time::{SystemTime, UNIX_EPOCH};
use axum::Json;
use axum::extract::State;
use axum::http::{StatusCode, header};
use axum::response::IntoResponse;
use serde::Serialize;
use crate::state::AppState;
#[derive(Serialize)]
pub struct Metrics {
ts: i64,
cpu_count: u32,
cpu_used_pct: f32,
mem_total_mib: u64,
mem_used_mib: u64,
mem_total: u64,
mem_used: u64,
disk_used: u64,
disk_total: u64,
}
pub async fn get_metrics(State(state): State<Arc<AppState>>) -> impl IntoResponse {
tracing::trace!("get metrics");
match collect_metrics(&state) {
Ok(m) => (
StatusCode::OK,
[(header::CACHE_CONTROL, "no-store")],
Json(m),
)
.into_response(),
Err(e) => {
tracing::error!(error = %e, "failed to get metrics");
StatusCode::INTERNAL_SERVER_ERROR.into_response()
}
}
}
fn collect_metrics(state: &AppState) -> Result<Metrics, String> {
let cpu_count = state.cpu_count();
let cpu_used_pct_rounded = state.cpu_used_pct();
let mut sys = sysinfo::System::new();
sys.refresh_memory();
let mem_total = sys.total_memory();
let mem_used = sys.used_memory();
let mem_total_mib = mem_total / 1024 / 1024;
let mem_used_mib = mem_used / 1024 / 1024;
let (disk_total, disk_used) = disk_stats("/").map_err(|e| e.to_string())?;
let ts = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs() as i64;
Ok(Metrics {
ts,
cpu_count,
cpu_used_pct: cpu_used_pct_rounded,
mem_total_mib,
mem_used_mib,
mem_total,
mem_used,
disk_used,
disk_total,
})
}
fn disk_stats(path: &str) -> Result<(u64, u64), nix::Error> {
use std::ffi::CString;
let c_path = CString::new(path).unwrap();
let mut stat: libc::statfs = unsafe { std::mem::zeroed() };
let ret = unsafe { libc::statfs(c_path.as_ptr(), &mut stat) };
if ret != 0 {
return Err(nix::Error::last());
}
let block = stat.f_bsize as u64;
let total = stat.f_blocks * block;
let available = stat.f_bavail * block;
Ok((total, total - available))
}

@@ -1,56 +0,0 @@
pub mod encoding;
pub mod envs;
pub mod error;
pub mod files;
pub mod health;
pub mod init;
pub mod metrics;
pub mod snapshot;
use std::sync::Arc;
use std::time::Duration;
use axum::Router;
use axum::routing::{get, post};
use http::header::{CACHE_CONTROL, HeaderName};
use http::Method;
use tower_http::cors::{AllowHeaders, AllowMethods, AllowOrigin, CorsLayer};
use crate::config::CORS_MAX_AGE;
use crate::state::AppState;
pub fn router(state: Arc<AppState>) -> Router {
let cors = CorsLayer::new()
.allow_origin(AllowOrigin::any())
.allow_methods(AllowMethods::list([
Method::HEAD,
Method::GET,
Method::POST,
Method::PUT,
Method::PATCH,
Method::DELETE,
]))
.allow_headers(AllowHeaders::any())
.expose_headers([
HeaderName::from_static("location"),
CACHE_CONTROL,
HeaderName::from_static("x-content-type-options"),
HeaderName::from_static("connect-content-encoding"),
HeaderName::from_static("connect-protocol-version"),
HeaderName::from_static("grpc-encoding"),
HeaderName::from_static("grpc-message"),
HeaderName::from_static("grpc-status"),
HeaderName::from_static("grpc-status-details-bin"),
])
.max_age(CORS_MAX_AGE);
Router::new()
.route("/health", get(health::get_health))
.route("/metrics", get(metrics::get_metrics))
.route("/envs", get(envs::get_envs))
.route("/init", post(init::post_init))
.route("/snapshot/prepare", post(snapshot::post_snapshot_prepare))
.route("/files", get(files::get_files).post(files::post_files))
.layer(cors)
.with_state(state)
}

@@ -1,32 +0,0 @@
use std::sync::Arc;
use std::sync::atomic::Ordering;
use axum::extract::State;
use axum::http::{StatusCode, header};
use axum::response::IntoResponse;
use crate::state::AppState;
/// POST /snapshot/prepare — quiesce subsystems before Firecracker snapshot.
///
/// In Rust there is no GC dance. We just:
/// 1. Stop port subsystem
/// 2. Close idle connections via conntracker
/// 3. Set needs_restore flag
pub async fn post_snapshot_prepare(State(state): State<Arc<AppState>>) -> impl IntoResponse {
if let Some(ref ps) = state.port_subsystem {
ps.stop();
tracing::info!("snapshot/prepare: port subsystem stopped");
}
state.conn_tracker.prepare_for_snapshot();
tracing::info!("snapshot/prepare: connections prepared");
state.needs_restore.store(true, Ordering::Release);
tracing::info!("snapshot/prepare: ready for freeze");
(
StatusCode::NO_CONTENT,
[(header::CACHE_CONTROL, "no-store")],
)
}

@@ -1,17 +0,0 @@
use tracing_subscriber::{EnvFilter, fmt, layer::SubscriberExt, util::SubscriberInitExt};
pub fn init(json: bool) {
let filter = EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new("info"));
if json {
tracing_subscriber::registry()
.with(filter)
.with(fmt::layer().json().flatten_event(true))
.init();
} else {
tracing_subscriber::registry()
.with(filter)
.with(fmt::layer())
.init();
}
}

@@ -1,224 +0,0 @@
#![allow(dead_code)]
mod auth;
mod cgroups;
mod config;
mod conntracker;
mod crypto;
mod execcontext;
mod host;
mod http;
mod logging;
mod permissions;
mod port;
mod rpc;
mod state;
mod util;
use std::fs;
use std::net::SocketAddr;
use std::path::Path;
use std::sync::Arc;
use clap::Parser;
use tokio::net::TcpListener;
use tokio_util::sync::CancellationToken;
use config::{DEFAULT_PORT, DEFAULT_USER, WRENN_RUN_DIR};
use execcontext::Defaults;
use port::subsystem::PortSubsystem;
use state::AppState;
const VERSION: &str = env!("CARGO_PKG_VERSION");
const COMMIT: &str = {
match option_env!("ENVD_COMMIT") {
Some(c) => c,
None => "unknown",
}
};
#[derive(Parser)]
#[command(name = "envd", about = "Wrenn guest agent daemon")]
struct Cli {
#[arg(long, default_value_t = DEFAULT_PORT)]
port: u16,
#[arg(long = "isnotfc", default_value_t = false)]
is_not_fc: bool,
#[arg(long)]
version: bool,
#[arg(long)]
commit: bool,
#[arg(long = "cmd", default_value = "")]
start_cmd: String,
#[arg(long = "cgroup-root", default_value = "/sys/fs/cgroup")]
cgroup_root: String,
}
#[tokio::main]
async fn main() {
let cli = Cli::parse();
if cli.version {
println!("{VERSION}");
return;
}
if cli.commit {
println!("{COMMIT}");
return;
}
let use_json = !cli.is_not_fc;
logging::init(use_json);
if let Err(e) = fs::create_dir_all(WRENN_RUN_DIR) {
tracing::error!(error = %e, "failed to create wrenn run directory");
}
let defaults = Defaults::new(DEFAULT_USER);
let is_fc_str = if cli.is_not_fc { "false" } else { "true" };
defaults
.env_vars
.insert("WRENN_SANDBOX".into(), is_fc_str.into());
let wrenn_sandbox_path = Path::new(WRENN_RUN_DIR).join(".WRENN_SANDBOX");
if let Err(e) = fs::write(&wrenn_sandbox_path, is_fc_str.as_bytes()) {
tracing::error!(error = %e, "failed to write sandbox file");
}
let cancel = CancellationToken::new();
// MMDS polling (only in FC mode)
if !cli.is_not_fc {
let env_vars = Arc::clone(&defaults.env_vars);
let cancel_clone = cancel.clone();
tokio::spawn(async move {
host::mmds::poll_for_opts(env_vars, cancel_clone).await;
});
}
// Cgroup manager
let cgroup_manager: Arc<dyn cgroups::CgroupManager> =
match cgroups::Cgroup2Manager::new(
&cli.cgroup_root,
&[
(
cgroups::ProcessType::Pty,
"wrenn/pty",
&[] as &[(&str, &str)],
),
(
cgroups::ProcessType::User,
"wrenn/user",
&[] as &[(&str, &str)],
),
(
cgroups::ProcessType::Socat,
"wrenn/socat",
&[] as &[(&str, &str)],
),
],
) {
Ok(m) => {
tracing::info!("cgroup2 manager initialized");
Arc::new(m)
}
Err(e) => {
tracing::warn!(error = %e, "cgroup2 init failed, using noop");
Arc::new(cgroups::NoopCgroupManager)
}
};
// Port subsystem
let port_subsystem = Arc::new(PortSubsystem::new(Arc::clone(&cgroup_manager)));
port_subsystem.start();
tracing::info!("port subsystem started");
let state = AppState::new(
defaults,
VERSION.to_string(),
COMMIT.to_string(),
!cli.is_not_fc,
Some(Arc::clone(&port_subsystem)),
);
// RPC services (Connect protocol — serves Connect + gRPC + gRPC-Web on same port)
let connect_router = rpc::rpc_router(Arc::clone(&state));
let app = http::router(Arc::clone(&state))
.fallback_service(connect_router.into_axum_service());
// --cmd: spawn initial process if specified
if !cli.start_cmd.is_empty() {
let cmd = cli.start_cmd.clone();
let state_clone = Arc::clone(&state);
tokio::spawn(async move {
spawn_initial_command(&cmd, &state_clone);
});
}
let addr = SocketAddr::from(([0, 0, 0, 0], cli.port));
tracing::info!(port = cli.port, version = VERSION, commit = COMMIT, "envd starting");
let listener = TcpListener::bind(addr).await.expect("failed to bind");
let graceful = axum::serve(listener, app).with_graceful_shutdown(async move {
tokio::signal::unix::signal(tokio::signal::unix::SignalKind::terminate())
.expect("failed to register SIGTERM")
.recv()
.await;
tracing::info!("SIGTERM received, shutting down");
});
if let Err(e) = graceful.await {
tracing::error!(error = %e, "server error");
}
port_subsystem.stop();
cancel.cancel();
}
fn spawn_initial_command(cmd: &str, state: &AppState) {
use crate::permissions::user::lookup_user;
use crate::rpc::process_handler;
use std::collections::HashMap;
let user = match lookup_user(&state.defaults.user) {
Ok(u) => u,
Err(e) => {
tracing::error!(error = %e, "cmd: failed to lookup user");
return;
}
};
let home = user.dir.to_string_lossy().to_string();
let cwd = state
.defaults
.workdir
.as_deref()
.unwrap_or(&home);
match process_handler::spawn_process(
cmd,
&[],
&HashMap::new(),
cwd,
None,
false,
Some("init-cmd".to_string()),
&user,
&state.defaults.env_vars,
) {
Ok(handle) => {
tracing::info!(pid = handle.pid, cmd, "initial command spawned");
}
Err(e) => {
tracing::error!(error = %e, cmd, "failed to spawn initial command");
}
}
}

@@ -1,2 +0,0 @@
pub mod user;
pub mod path;

@@ -1,72 +0,0 @@
use std::fs;
use std::os::unix::fs::chown;
use std::path::{Path, PathBuf};
use nix::unistd::{Gid, Uid};
fn expand_tilde(path: &str, home_dir: &str) -> Result<String, String> {
if path.is_empty() || !path.starts_with('~') {
return Ok(path.to_string());
}
if path.len() > 1 && path.as_bytes()[1] != b'/' && path.as_bytes()[1] != b'\\' {
return Err("cannot expand user-specific home dir".into());
}
Ok(format!("{}{}", home_dir, &path[1..]))
}
pub fn expand_and_resolve(
path: &str,
home_dir: &str,
default_path: Option<&str>,
) -> Result<String, String> {
let path = if path.is_empty() {
default_path.unwrap_or("").to_string()
} else {
path.to_string()
};
let path = expand_tilde(&path, home_dir)?;
if Path::new(&path).is_absolute() {
return Ok(path);
}
let joined = PathBuf::from(home_dir).join(&path);
joined
.canonicalize()
.or_else(|_| Ok(joined))
.map(|p| p.to_string_lossy().to_string())
}
pub fn ensure_dirs(path: &str, uid: Uid, gid: Gid) -> Result<(), String> {
let path = Path::new(path);
let mut current = PathBuf::new();
for component in path.components() {
current.push(component);
let current_str = current.to_string_lossy();
if current_str == "/" {
continue;
}
match fs::metadata(&current) {
Ok(meta) => {
if !meta.is_dir() {
return Err(format!("path is a file: {current_str}"));
}
}
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
fs::create_dir(&current)
.map_err(|e| format!("failed to create directory {current_str}: {e}"))?;
chown(&current, Some(uid.as_raw()), Some(gid.as_raw()))
.map_err(|e| format!("failed to chown directory {current_str}: {e}"))?;
}
Err(e) => {
return Err(format!("failed to stat directory {current_str}: {e}"));
}
}
}
Ok(())
}

@@ -1,32 +0,0 @@
use nix::unistd::{Gid, Group, Uid, User};
pub fn lookup_user(username: &str) -> Result<User, String> {
User::from_name(username)
.map_err(|e| format!("error looking up user '{username}': {e}"))?
.ok_or_else(|| format!("user '{username}' not found"))
}
pub fn get_uid_gid(user: &User) -> (Uid, Gid) {
(user.uid, user.gid)
}
pub fn get_user_groups(user: &User) -> Vec<Gid> {
let c_name = std::ffi::CString::new(user.name.as_str()).unwrap();
nix::unistd::getgrouplist(&c_name, user.gid).unwrap_or_default()
}
pub fn lookup_username_by_uid(uid: Uid) -> String {
User::from_uid(uid)
.ok()
.flatten()
.map(|u| u.name)
.unwrap_or_else(|| uid.to_string())
}
pub fn lookup_groupname_by_gid(gid: Gid) -> String {
Group::from_gid(gid)
.ok()
.flatten()
.map(|g| g.name)
.unwrap_or_else(|| gid.to_string())
}

@@ -1,112 +0,0 @@
use std::io::{self, BufRead};
#[derive(Debug, Clone)]
pub struct ConnStat {
pub local_ip: String,
pub local_port: u32,
pub status: String,
pub family: u32,
pub inode: u64,
}
fn tcp_state_name(hex: &str) -> &'static str {
match hex {
"01" => "ESTABLISHED",
"02" => "SYN_SENT",
"03" => "SYN_RECV",
"04" => "FIN_WAIT1",
"05" => "FIN_WAIT2",
"06" => "TIME_WAIT",
"07" => "CLOSE",
"08" => "CLOSE_WAIT",
"09" => "LAST_ACK",
"0A" => "LISTEN",
"0B" => "CLOSING",
_ => "UNKNOWN",
}
}
pub fn read_tcp_connections() -> Vec<ConnStat> {
let mut conns = Vec::new();
if let Ok(c) = parse_proc_net_tcp("/proc/net/tcp", libc::AF_INET as u32) {
conns.extend(c);
}
if let Ok(c) = parse_proc_net_tcp("/proc/net/tcp6", libc::AF_INET6 as u32) {
conns.extend(c);
}
conns
}
fn parse_proc_net_tcp(path: &str, family: u32) -> io::Result<Vec<ConnStat>> {
let file = std::fs::File::open(path)?;
let reader = io::BufReader::new(file);
let mut conns = Vec::new();
let mut first = true;
for line in reader.lines() {
let line = line?;
if first {
first = false;
continue;
}
let line = line.trim().to_string();
if line.is_empty() {
continue;
}
let fields: Vec<&str> = line.split_whitespace().collect();
if fields.len() < 10 {
continue;
}
let (ip, port) = match parse_hex_addr(fields[1], family) {
Some(v) => v,
None => continue,
};
let state = tcp_state_name(fields[3]);
let inode: u64 = match fields[9].parse() {
Ok(v) => v,
Err(_) => continue,
};
conns.push(ConnStat {
local_ip: ip,
local_port: port,
status: state.to_string(),
family,
inode,
});
}
Ok(conns)
}
fn parse_hex_addr(s: &str, family: u32) -> Option<(String, u32)> {
let (ip_hex, port_hex) = s.split_once(':')?;
let port = u32::from_str_radix(port_hex, 16).ok()?;
let ip_bytes = hex::decode(ip_hex).ok()?;
let ip_str = if family == libc::AF_INET as u32 {
if ip_bytes.len() != 4 {
return None;
}
format!("{}.{}.{}.{}", ip_bytes[3], ip_bytes[2], ip_bytes[1], ip_bytes[0])
} else {
if ip_bytes.len() != 16 {
return None;
}
let mut octets = [0u8; 16];
for i in 0..4 {
octets[i * 4] = ip_bytes[i * 4 + 3];
octets[i * 4 + 1] = ip_bytes[i * 4 + 2];
octets[i * 4 + 2] = ip_bytes[i * 4 + 1];
octets[i * 4 + 3] = ip_bytes[i * 4];
}
let addr = std::net::Ipv6Addr::from(octets);
addr.to_string()
};
Some((ip_str, port))
}
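A minimal, std-only sketch of the address decoding above (the real code uses the `hex` crate): the kernel prints each IPv4 address in `/proc/net/tcp` as little-endian hex, so the byte pairs must be reversed to recover dotted-quad order. `decode_v4_addr` is an illustrative helper name, not part of the codebase.

```rust
// Std-only sketch of parse_hex_addr for the IPv4 case. The kernel writes the
// address little-endian, so "0100007F:1F90" decodes to 127.0.0.1:8080.
fn decode_v4_addr(s: &str) -> Option<(String, u32)> {
    let (ip_hex, port_hex) = s.split_once(':')?;
    if ip_hex.len() != 8 {
        return None;
    }
    let port = u32::from_str_radix(port_hex, 16).ok()?;
    // Parse four hex byte pairs, then reverse them into network byte order.
    let mut bytes = [0u8; 4];
    for (i, chunk) in ip_hex.as_bytes().chunks(2).enumerate() {
        let pair = std::str::from_utf8(chunk).ok()?;
        bytes[i] = u8::from_str_radix(pair, 16).ok()?;
    }
    let ip = format!("{}.{}.{}.{}", bytes[3], bytes[2], bytes[1], bytes[0]);
    Some((ip, port))
}
```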


@ -1,181 +0,0 @@
use std::collections::HashMap;
use std::os::unix::process::CommandExt;
use std::process::Command;
use std::sync::Arc;
use tokio::sync::mpsc;
use tokio_util::sync::CancellationToken;
use crate::cgroups::{CgroupManager, ProcessType};
use super::conn::ConnStat;
const DEFAULT_GATEWAY_IP: &str = "169.254.0.21";
#[derive(PartialEq)]
enum PortState {
Forward,
Delete,
}
struct PortToForward {
pid: Option<u32>,
inode: u64,
family: u32,
state: PortState,
port: u32,
}
fn family_to_ip_version(family: u32) -> u32 {
if family == libc::AF_INET as u32 {
4
} else if family == libc::AF_INET6 as u32 {
6
} else {
0
}
}
pub struct Forwarder {
cgroup_manager: Arc<dyn CgroupManager>,
ports: HashMap<String, PortToForward>,
source_ip: String,
}
impl Forwarder {
pub fn new(cgroup_manager: Arc<dyn CgroupManager>) -> Self {
Self {
cgroup_manager,
ports: HashMap::new(),
source_ip: DEFAULT_GATEWAY_IP.to_string(),
}
}
pub async fn start_forwarding(
&mut self,
mut rx: mpsc::Receiver<Vec<ConnStat>>,
cancel: CancellationToken,
) {
loop {
tokio::select! {
_ = cancel.cancelled() => {
self.stop_all();
return;
}
msg = rx.recv() => {
match msg {
Some(conns) => self.process_scan(conns),
None => {
self.stop_all();
return;
}
}
}
}
}
}
fn process_scan(&mut self, conns: Vec<ConnStat>) {
for ptf in self.ports.values_mut() {
ptf.state = PortState::Delete;
}
for conn in &conns {
let key = format!("{}-{}", conn.inode, conn.local_port);
if let Some(ptf) = self.ports.get_mut(&key) {
ptf.state = PortState::Forward;
} else {
tracing::debug!(
ip = %conn.local_ip,
port = conn.local_port,
family = family_to_ip_version(conn.family),
"detected new port on localhost"
);
let mut ptf = PortToForward {
pid: None,
inode: conn.inode,
family: family_to_ip_version(conn.family),
state: PortState::Forward,
port: conn.local_port,
};
self.start_port_forwarding(&mut ptf);
self.ports.insert(key, ptf);
}
}
let to_stop: Vec<String> = self
.ports
.iter()
.filter(|(_, v)| v.state == PortState::Delete)
.map(|(k, _)| k.clone())
.collect();
for key in to_stop {
if let Some(ptf) = self.ports.get(&key) {
stop_port_forwarding(ptf);
}
self.ports.remove(&key);
}
}
fn start_port_forwarding(&self, ptf: &mut PortToForward) {
let listen_arg = format!(
"TCP4-LISTEN:{},bind={},reuseaddr,fork",
ptf.port, self.source_ip
);
let connect_arg = format!("TCP{}:localhost:{}", ptf.family, ptf.port);
let mut cmd = Command::new("socat");
cmd.args(["-d", "-d", "-d", &listen_arg, &connect_arg]);
unsafe {
let cgroup_fd = self.cgroup_manager.get_fd(ProcessType::Socat);
cmd.pre_exec(move || {
libc::setpgid(0, 0);
if let Some(fd) = cgroup_fd {
let pid_str = format!("{}", libc::getpid());
let tasks_path = format!("/proc/self/fd/{}/cgroup.procs", fd);
let _ = std::fs::write(&tasks_path, pid_str.as_bytes());
}
Ok(())
});
}
tracing::debug!(
port = ptf.port,
inode = ptf.inode,
family = ptf.family,
source_ip = %self.source_ip,
"starting port forwarding"
);
match cmd.spawn() {
Ok(child) => {
ptf.pid = Some(child.id());
std::thread::spawn(move || {
let mut child = child;
let _ = child.wait();
});
}
Err(e) => {
tracing::error!(error = %e, port = ptf.port, "failed to start socat");
}
}
}
fn stop_all(&mut self) {
for ptf in self.ports.values() {
stop_port_forwarding(ptf);
}
self.ports.clear();
}
}
fn stop_port_forwarding(ptf: &PortToForward) {
if let Some(pid) = ptf.pid {
tracing::debug!(port = ptf.port, pid, "stopping port forwarding");
unsafe {
libc::kill(-(pid as i32), libc::SIGKILL);
}
}
}
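The reconciliation in `process_scan` above is a mark-and-sweep: every tracked port is tentatively marked for deletion, ports confirmed by the fresh scan are re-marked live, and whatever stays marked is swept. A distilled, std-only sketch of that pattern (names and types here are illustrative, not from the codebase):

```rust
use std::collections::HashMap;

#[derive(PartialEq, Debug)]
enum State {
    Live,
    Delete,
}

// Mark-and-sweep reconciliation: returns the keys that were swept.
fn reconcile(ports: &mut HashMap<String, State>, seen: &[&str]) -> Vec<String> {
    for state in ports.values_mut() {
        *state = State::Delete; // mark phase: assume everything is gone
    }
    for key in seen {
        ports
            .entry(key.to_string())
            .and_modify(|s| *s = State::Live) // confirmed by the new scan
            .or_insert(State::Live); // brand-new entries start live
    }
    // sweep phase: remove entries no scan confirmed
    let dead: Vec<String> = ports
        .iter()
        .filter(|(_, s)| **s == State::Delete)
        .map(|(k, _)| k.clone())
        .collect();
    for k in &dead {
        ports.remove(k);
    }
    dead
}
```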


@ -1,4 +0,0 @@
pub mod conn;
pub mod forwarder;
pub mod scanner;
pub mod subsystem;


@ -1,79 +0,0 @@
use std::sync::{Arc, RwLock};
use std::time::Duration;
use tokio::sync::mpsc;
use tokio_util::sync::CancellationToken;
use super::conn::{ConnStat, read_tcp_connections};
pub struct ScannerFilter {
pub ips: Vec<String>,
pub state: String,
}
impl ScannerFilter {
pub fn matches(&self, conn: &ConnStat) -> bool {
if self.state.is_empty() && self.ips.is_empty() {
return false;
}
self.ips.contains(&conn.local_ip) && self.state == conn.status
}
}
pub struct ScannerSubscriber {
pub tx: mpsc::Sender<Vec<ConnStat>>,
pub filter: Option<ScannerFilter>,
}
pub struct Scanner {
period: Duration,
subs: RwLock<Vec<(String, Arc<ScannerSubscriber>)>>,
}
impl Scanner {
pub fn new(period: Duration) -> Self {
Self {
period,
subs: RwLock::new(Vec::new()),
}
}
pub fn add_subscriber(
&self,
id: &str,
filter: Option<ScannerFilter>,
) -> mpsc::Receiver<Vec<ConnStat>> {
let (tx, rx) = mpsc::channel(4);
let sub = Arc::new(ScannerSubscriber { tx, filter });
let mut subs = self.subs.write().unwrap();
subs.push((id.to_string(), sub));
rx
}
pub fn remove_subscriber(&self, id: &str) {
let mut subs = self.subs.write().unwrap();
subs.retain(|(sid, _)| sid != id);
}
pub async fn scan_and_broadcast(&self, cancel: CancellationToken) {
loop {
let conns = read_tcp_connections();
{
let subs = self.subs.read().unwrap();
for (_, sub) in subs.iter() {
let payload = match &sub.filter {
Some(f) => conns.iter().filter(|c| f.matches(c)).cloned().collect(),
None => conns.clone(),
};
let _ = sub.tx.try_send(payload);
}
}
tokio::select! {
_ = cancel.cancelled() => return,
_ = tokio::time::sleep(self.period) => {}
}
}
}
}
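Note the `try_send` in `scan_and_broadcast`: the channel is bounded (capacity 4), so a slow subscriber drops scan batches instead of stalling the scan loop. The same backpressure policy can be sketched with std's `sync_channel` (tokio is used in the real code; this helper is illustrative):

```rust
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};

// Non-blocking broadcast: returns false when the payload was dropped because
// the subscriber's buffer is full or the subscriber went away.
fn broadcast_scan(tx: &SyncSender<u32>, scan: u32) -> bool {
    match tx.try_send(scan) {
        Ok(()) => true,
        Err(TrySendError::Full(_)) => false,         // subscriber lagging: drop
        Err(TrySendError::Disconnected(_)) => false, // subscriber gone
    }
}
```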


@ -1,78 +0,0 @@
use std::sync::Arc;
use tokio_util::sync::CancellationToken;
use crate::cgroups::CgroupManager;
use crate::config::PORT_SCANNER_INTERVAL;
use super::forwarder::Forwarder;
use super::scanner::{Scanner, ScannerFilter};
pub struct PortSubsystem {
cgroup_manager: Arc<dyn CgroupManager>,
cancel: std::sync::Mutex<Option<CancellationToken>>,
}
impl PortSubsystem {
pub fn new(cgroup_manager: Arc<dyn CgroupManager>) -> Self {
Self {
cgroup_manager,
cancel: std::sync::Mutex::new(None),
}
}
pub fn start(&self) {
let mut guard = self.cancel.lock().unwrap();
if guard.is_some() {
return;
}
let cancel = CancellationToken::new();
*guard = Some(cancel.clone());
drop(guard);
let cgroup_manager = Arc::clone(&self.cgroup_manager);
let cancel_scanner = cancel.clone();
let cancel_forwarder = cancel.clone();
tokio::spawn(async move {
let scanner = Arc::new(Scanner::new(PORT_SCANNER_INTERVAL));
let rx = scanner.add_subscriber(
"port-forwarder",
Some(ScannerFilter {
ips: vec![
"127.0.0.1".to_string(),
"localhost".to_string(),
"::1".to_string(),
],
state: "LISTEN".to_string(),
}),
);
let scanner_clone = Arc::clone(&scanner);
let scanner_handle = tokio::spawn(async move {
scanner_clone.scan_and_broadcast(cancel_scanner).await;
});
let forwarder_handle = tokio::spawn(async move {
let mut forwarder = Forwarder::new(cgroup_manager);
forwarder.start_forwarding(rx, cancel_forwarder).await;
});
let _ = tokio::join!(scanner_handle, forwarder_handle);
});
}
pub fn stop(&self) {
let mut guard = self.cancel.lock().unwrap();
if let Some(cancel) = guard.take() {
cancel.cancel();
}
}
pub fn restart(&self) {
self.stop();
self.start();
}
}
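The start/stop discipline in `PortSubsystem` reduces to a guarded token slot: `start` is idempotent while a token is present, and `stop` takes the token out so a later `start` can run again. A std-only sketch of that guard (the `Guard` type and `u32` token are stand-ins for the real `CancellationToken`):

```rust
use std::sync::Mutex;

struct Guard {
    token: Mutex<Option<u32>>,
}

impl Guard {
    // Returns false (no-op) if already started, mirroring PortSubsystem::start.
    fn start(&self, token: u32) -> bool {
        let mut g = self.token.lock().unwrap();
        if g.is_some() {
            return false; // already running
        }
        *g = Some(token);
        true
    }

    // Takes the token out so the subsystem can be started again.
    fn stop(&self) -> Option<u32> {
        self.token.lock().unwrap().take()
    }
}
```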


@ -1,142 +0,0 @@
use std::os::unix::fs::MetadataExt;
use std::path::Path;
use connectrpc::{ConnectError, ErrorCode};
use crate::permissions::user::{lookup_groupname_by_gid, lookup_username_by_uid};
use crate::rpc::pb::filesystem::{EntryInfo, FileType};
use nix::unistd::{Gid, Uid};
const NFS_SUPER_MAGIC: i64 = 0x6969;
const CIFS_MAGIC: i64 = 0xFF534D42;
const SMB_SUPER_MAGIC: i64 = 0x517B;
const SMB2_MAGIC_NUMBER: i64 = 0xFE534D42;
const FUSE_SUPER_MAGIC: i64 = 0x65735546;
pub fn is_network_mount(path: &str) -> Result<bool, String> {
let c_path = std::ffi::CString::new(path).map_err(|e| e.to_string())?;
let mut stat: libc::statfs = unsafe { std::mem::zeroed() };
let ret = unsafe { libc::statfs(c_path.as_ptr(), &mut stat) };
if ret != 0 {
return Err(format!(
"statfs {path}: {}",
std::io::Error::last_os_error()
));
}
let fs_type = stat.f_type as i64;
Ok(matches!(
fs_type,
NFS_SUPER_MAGIC | CIFS_MAGIC | SMB_SUPER_MAGIC | SMB2_MAGIC_NUMBER | FUSE_SUPER_MAGIC
))
}
pub fn build_entry_info(path: &str) -> Result<EntryInfo, ConnectError> {
let p = Path::new(path);
let lstat = std::fs::symlink_metadata(p).map_err(|e| {
if e.kind() == std::io::ErrorKind::NotFound {
ConnectError::new(ErrorCode::NotFound, format!("file not found: {e}"))
} else {
ConnectError::new(ErrorCode::Internal, format!("error getting file info: {e}"))
}
})?;
let is_symlink = lstat.file_type().is_symlink();
let (file_type, mode, symlink_target) = if is_symlink {
let target = std::fs::canonicalize(p)
.map(|t| t.to_string_lossy().to_string())
.unwrap_or_else(|_| path.to_string());
let target_type = match std::fs::metadata(p) {
Ok(meta) => meta_to_file_type(&meta),
Err(_) => FileType::FILE_TYPE_UNSPECIFIED,
};
let target_mode = std::fs::metadata(p)
.map(|m| m.mode() & 0o7777)
.unwrap_or(0);
(target_type, target_mode, Some(target))
} else {
let ft = meta_to_file_type(&lstat);
let mode = lstat.mode() & 0o7777;
(ft, mode, None)
};
let uid = lstat.uid();
let gid = lstat.gid();
let owner = lookup_username_by_uid(Uid::from_raw(uid));
let group = lookup_groupname_by_gid(Gid::from_raw(gid));
let modified_time = {
let mtime_sec = lstat.mtime();
let mtime_nsec = lstat.mtime_nsec() as i32;
if mtime_sec == 0 && mtime_nsec == 0 {
None
} else {
Some(buffa_types::google::protobuf::Timestamp {
seconds: mtime_sec,
nanos: mtime_nsec,
..Default::default()
})
}
};
let name = p
.file_name()
.map(|n| n.to_string_lossy().to_string())
.unwrap_or_default();
let permissions = format_permissions(lstat.mode());
Ok(EntryInfo {
name,
r#type: buffa::EnumValue::Known(file_type),
path: path.to_string(),
size: lstat.len() as i64,
mode,
permissions,
owner,
group,
modified_time: modified_time.into(),
symlink_target,
..Default::default()
})
}
fn meta_to_file_type(meta: &std::fs::Metadata) -> FileType {
if meta.is_file() {
FileType::FILE_TYPE_FILE
} else if meta.is_dir() {
FileType::FILE_TYPE_DIRECTORY
} else if meta.file_type().is_symlink() {
FileType::FILE_TYPE_SYMLINK
} else {
FileType::FILE_TYPE_UNSPECIFIED
}
}
fn format_permissions(mode: u32) -> String {
let file_type = match mode & libc::S_IFMT {
libc::S_IFDIR => 'd',
libc::S_IFLNK => 'l',
libc::S_IFREG => '-',
libc::S_IFBLK => 'b',
libc::S_IFCHR => 'c',
libc::S_IFIFO => 'p',
libc::S_IFSOCK => 's',
_ => '?',
};
let perms = mode & 0o777;
let mut s = String::with_capacity(10);
s.push(file_type);
for shift in [6, 3, 0] {
let bits = (perms >> shift) & 7;
s.push(if bits & 4 != 0 { 'r' } else { '-' });
s.push(if bits & 2 != 0 { 'w' } else { '-' });
s.push(if bits & 1 != 0 { 'x' } else { '-' });
}
s
}
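The rwx triplet construction in `format_permissions` can be exercised in isolation; here is a std-only sketch with octal literals standing in for the `libc::S_IF*` constants (only the regular-file and directory type characters are shown):

```rust
// Std-only sketch of format_permissions: 0o170000 is S_IFMT, 0o100000 is
// S_IFREG, 0o040000 is S_IFDIR.
fn mode_to_string(mode: u32) -> String {
    let file_type = match mode & 0o170000 {
        0o040000 => 'd', // directory
        0o100000 => '-', // regular file
        _ => '?',
    };
    let mut s = String::with_capacity(10);
    s.push(file_type);
    // Walk the owner, group, and other triplets from the high bits down.
    for shift in [6, 3, 0] {
        let bits = (mode >> shift) & 7;
        s.push(if bits & 4 != 0 { 'r' } else { '-' });
        s.push(if bits & 2 != 0 { 'w' } else { '-' });
        s.push(if bits & 1 != 0 { 'x' } else { '-' });
    }
    s
}
```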


@ -1,402 +0,0 @@
use std::path::{Path, PathBuf};
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use connectrpc::{ConnectError, Context, ErrorCode};
use dashmap::DashMap;
use futures::Stream;
use crate::permissions::path::{ensure_dirs, expand_and_resolve};
use crate::permissions::user::lookup_user;
use crate::rpc::entry::build_entry_info;
use crate::rpc::pb::filesystem::*;
use crate::state::AppState;
pub struct FilesystemServiceImpl {
state: Arc<AppState>,
watchers: DashMap<String, WatcherHandle>,
}
struct WatcherHandle {
events: Arc<Mutex<Vec<FilesystemEvent>>>,
_watcher: notify::RecommendedWatcher,
}
impl FilesystemServiceImpl {
pub fn new(state: Arc<AppState>) -> Self {
Self {
state,
watchers: DashMap::new(),
}
}
fn resolve_path(&self, path: &str, ctx: &Context) -> Result<String, ConnectError> {
let username = extract_username(ctx).unwrap_or_else(|| self.state.defaults.user.clone());
let user = lookup_user(&username).map_err(|e| {
ConnectError::new(ErrorCode::Unauthenticated, format!("invalid user: {e}"))
})?;
let home_dir = user.dir.to_string_lossy().to_string();
let default_workdir = self.state.defaults.workdir.as_deref();
expand_and_resolve(path, &home_dir, default_workdir)
.map_err(|e| ConnectError::new(ErrorCode::InvalidArgument, e))
}
}
fn extract_username(ctx: &Context) -> Option<String> {
ctx.extensions.get::<AuthUser>().map(|u| u.0.clone())
}
#[derive(Clone)]
pub struct AuthUser(pub String);
impl Filesystem for FilesystemServiceImpl {
async fn stat(
&self,
ctx: Context,
request: buffa::view::OwnedView<StatRequestView<'static>>,
) -> Result<(StatResponse, Context), ConnectError> {
let path = self.resolve_path(request.path, &ctx)?;
let entry = build_entry_info(&path)?;
Ok((
StatResponse {
entry: entry.into(),
..Default::default()
},
ctx,
))
}
async fn make_dir(
&self,
ctx: Context,
request: buffa::view::OwnedView<MakeDirRequestView<'static>>,
) -> Result<(MakeDirResponse, Context), ConnectError> {
let path = self.resolve_path(request.path, &ctx)?;
match std::fs::metadata(&path) {
Ok(meta) => {
if meta.is_dir() {
return Err(ConnectError::new(
ErrorCode::AlreadyExists,
format!("directory already exists: {path}"),
));
}
return Err(ConnectError::new(
ErrorCode::InvalidArgument,
format!("path exists but is not a directory: {path}"),
));
}
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {}
Err(e) => {
return Err(ConnectError::new(
ErrorCode::Internal,
format!("error getting file info: {e}"),
));
}
}
let username = extract_username(&ctx).unwrap_or_else(|| self.state.defaults.user.clone());
let user =
lookup_user(&username).map_err(|e| ConnectError::new(ErrorCode::Internal, e))?;
ensure_dirs(&path, user.uid, user.gid)
.map_err(|e| ConnectError::new(ErrorCode::Internal, e))?;
let entry = build_entry_info(&path)?;
Ok((
MakeDirResponse {
entry: entry.into(),
..Default::default()
},
ctx,
))
}
async fn r#move(
&self,
ctx: Context,
request: buffa::view::OwnedView<MoveRequestView<'static>>,
) -> Result<(MoveResponse, Context), ConnectError> {
let source = self.resolve_path(request.source, &ctx)?;
let destination = self.resolve_path(request.destination, &ctx)?;
let username = extract_username(&ctx).unwrap_or_else(|| self.state.defaults.user.clone());
let user =
lookup_user(&username).map_err(|e| ConnectError::new(ErrorCode::Internal, e))?;
if let Some(parent) = Path::new(&destination).parent() {
ensure_dirs(&parent.to_string_lossy(), user.uid, user.gid)
.map_err(|e| ConnectError::new(ErrorCode::Internal, e))?;
}
std::fs::rename(&source, &destination).map_err(|e| {
if e.kind() == std::io::ErrorKind::NotFound {
ConnectError::new(ErrorCode::NotFound, format!("source not found: {e}"))
} else {
ConnectError::new(ErrorCode::Internal, format!("error renaming: {e}"))
}
})?;
let entry = build_entry_info(&destination)?;
Ok((
MoveResponse {
entry: entry.into(),
..Default::default()
},
ctx,
))
}
async fn list_dir(
&self,
ctx: Context,
request: buffa::view::OwnedView<ListDirRequestView<'static>>,
) -> Result<(ListDirResponse, Context), ConnectError> {
let mut depth = request.depth as usize;
if depth == 0 {
depth = 1;
}
let path = self.resolve_path(request.path, &ctx)?;
let resolved = std::fs::canonicalize(&path).map_err(|e| {
if e.kind() == std::io::ErrorKind::NotFound {
ConnectError::new(ErrorCode::NotFound, format!("path not found: {e}"))
} else {
ConnectError::new(ErrorCode::Internal, format!("error resolving path: {e}"))
}
})?;
let resolved_str = resolved.to_string_lossy().to_string();
let meta = std::fs::metadata(&resolved).map_err(|e| {
ConnectError::new(ErrorCode::Internal, format!("error getting file info: {e}"))
})?;
if !meta.is_dir() {
return Err(ConnectError::new(
ErrorCode::InvalidArgument,
format!("path is not a directory: {path}"),
));
}
let entries = walk_dir(&path, &resolved_str, depth)?;
Ok((
ListDirResponse {
entries,
..Default::default()
},
ctx,
))
}
async fn remove(
&self,
ctx: Context,
request: buffa::view::OwnedView<RemoveRequestView<'static>>,
) -> Result<(RemoveResponse, Context), ConnectError> {
let path = self.resolve_path(request.path, &ctx)?;
if let Err(e1) = std::fs::remove_dir_all(&path) {
if let Err(e2) = std::fs::remove_file(&path) {
return Err(ConnectError::new(
ErrorCode::Internal,
format!("error removing: {e1}; also tried as file: {e2}"),
));
}
}
Ok((RemoveResponse::default(), ctx))
}
async fn watch_dir(
&self,
_ctx: Context,
_request: buffa::view::OwnedView<WatchDirRequestView<'static>>,
) -> Result<
(
Pin<Box<dyn Stream<Item = Result<WatchDirResponse, ConnectError>> + Send>>,
Context,
),
ConnectError,
> {
Err(ConnectError::new(
ErrorCode::Unimplemented,
"watch_dir streaming not yet implemented",
))
}
async fn create_watcher(
&self,
ctx: Context,
request: buffa::view::OwnedView<CreateWatcherRequestView<'static>>,
) -> Result<(CreateWatcherResponse, Context), ConnectError> {
use notify::{RecursiveMode, Watcher};
let path = self.resolve_path(request.path, &ctx)?;
let recursive = request.recursive;
if let Ok(true) = crate::rpc::entry::is_network_mount(&path) {
return Err(ConnectError::new(
ErrorCode::FailedPrecondition,
"watching network mounts is not supported",
));
}
let watcher_id = simple_id();
let events: Arc<Mutex<Vec<FilesystemEvent>>> = Arc::new(Mutex::new(Vec::new()));
let events_cb = Arc::clone(&events);
let mut watcher = notify::recommended_watcher(
move |res: Result<notify::Event, notify::Error>| {
if let Ok(event) = res {
let event_type = match event.kind {
notify::EventKind::Create(_) => EventType::EVENT_TYPE_CREATE,
notify::EventKind::Modify(notify::event::ModifyKind::Data(_)) => {
EventType::EVENT_TYPE_WRITE
}
notify::EventKind::Modify(notify::event::ModifyKind::Metadata(_)) => {
EventType::EVENT_TYPE_CHMOD
}
notify::EventKind::Remove(_) => EventType::EVENT_TYPE_REMOVE,
notify::EventKind::Modify(notify::event::ModifyKind::Name(_)) => {
EventType::EVENT_TYPE_RENAME
}
_ => return,
};
for p in &event.paths {
if let Ok(mut guard) = events_cb.lock() {
guard.push(FilesystemEvent {
name: p.to_string_lossy().to_string(),
r#type: buffa::EnumValue::Known(event_type),
..Default::default()
});
}
}
}
},
)
.map_err(|e| {
ConnectError::new(ErrorCode::Internal, format!("failed to create watcher: {e}"))
})?;
let mode = if recursive {
RecursiveMode::Recursive
} else {
RecursiveMode::NonRecursive
};
watcher.watch(Path::new(&path), mode).map_err(|e| {
ConnectError::new(ErrorCode::Internal, format!("failed to watch path: {e}"))
})?;
self.watchers.insert(
watcher_id.clone(),
WatcherHandle {
events,
_watcher: watcher,
},
);
Ok((
CreateWatcherResponse {
watcher_id,
..Default::default()
},
ctx,
))
}
async fn get_watcher_events(
&self,
ctx: Context,
request: buffa::view::OwnedView<GetWatcherEventsRequestView<'static>>,
) -> Result<(GetWatcherEventsResponse, Context), ConnectError> {
let watcher_id: &str = request.watcher_id;
let handle = self.watchers.get(watcher_id).ok_or_else(|| {
ConnectError::new(
ErrorCode::NotFound,
format!("watcher not found: {watcher_id}"),
)
})?;
let events = {
let mut guard = handle.events.lock().unwrap();
std::mem::take(&mut *guard)
};
Ok((
GetWatcherEventsResponse {
events,
..Default::default()
},
ctx,
))
}
async fn remove_watcher(
&self,
ctx: Context,
request: buffa::view::OwnedView<RemoveWatcherRequestView<'static>>,
) -> Result<(RemoveWatcherResponse, Context), ConnectError> {
let watcher_id: &str = request.watcher_id;
self.watchers.remove(watcher_id);
Ok((RemoveWatcherResponse::default(), ctx))
}
}
fn walk_dir(
requested_path: &str,
resolved_path: &str,
depth: usize,
) -> Result<Vec<EntryInfo>, ConnectError> {
let mut entries = Vec::new();
let base = Path::new(resolved_path);
for result in walkdir::WalkDir::new(resolved_path)
.min_depth(1)
.max_depth(depth)
.follow_links(false)
{
let dir_entry = match result {
Ok(e) => e,
Err(e) => {
if e.io_error()
.is_some_and(|io| io.kind() == std::io::ErrorKind::NotFound)
{
continue;
}
return Err(ConnectError::new(
ErrorCode::Internal,
format!("error reading directory: {e}"),
));
}
};
let entry_path = dir_entry.path();
let mut entry = match build_entry_info(&entry_path.to_string_lossy()) {
Ok(e) => e,
Err(e) if e.code == ErrorCode::NotFound => continue,
Err(e) => return Err(e),
};
if let Ok(rel) = entry_path.strip_prefix(base) {
let remapped = PathBuf::from(requested_path).join(rel);
entry.path = remapped.to_string_lossy().to_string();
}
entries.push(entry);
}
Ok(entries)
}
fn simple_id() -> String {
use std::time::{SystemTime, UNIX_EPOCH};
let nanos = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_nanos();
format!("w-{nanos:x}")
}
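The path remapping at the tail of `walk_dir` deserves a note: entries are walked under the canonicalized directory, but reported under the path the caller originally asked for (which may contain `~` or a symlink). The core of that remap, as a std-only sketch with an illustrative helper name:

```rust
use std::path::{Path, PathBuf};

// Rebase an entry from the resolved (canonical) base onto the requested path.
// Entries outside the base are left untouched, matching walk_dir's behavior.
fn remap(requested: &str, resolved: &str, entry: &str) -> String {
    match Path::new(entry).strip_prefix(resolved) {
        Ok(rel) => PathBuf::from(requested)
            .join(rel)
            .to_string_lossy()
            .into_owned(),
        Err(_) => entry.to_string(),
    }
}
```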


@ -1,26 +0,0 @@
pub mod pb;
pub mod entry;
pub mod process_handler;
pub mod process_service;
pub mod filesystem_service;
use std::sync::Arc;
use crate::rpc::process_service::ProcessServiceImpl;
use crate::rpc::filesystem_service::FilesystemServiceImpl;
use crate::state::AppState;
use pb::process::ProcessExt;
use pb::filesystem::FilesystemExt;
/// Build the connect-rust Router with both RPC services registered.
pub fn rpc_router(state: Arc<AppState>) -> connectrpc::Router {
let process_svc = Arc::new(ProcessServiceImpl::new(Arc::clone(&state)));
let filesystem_svc = Arc::new(FilesystemServiceImpl::new(Arc::clone(&state)));
let router = connectrpc::Router::new();
let router = process_svc.register(router);
let router = filesystem_svc.register(router);
router
}


@ -1,10 +0,0 @@
#![allow(dead_code, non_camel_case_types, unused_imports, clippy::derivable_impls)]
use ::buffa;
use ::buffa_types;
use ::connectrpc;
use ::futures;
use ::http_body;
use ::serde;
include!(concat!(env!("OUT_DIR"), "/_connectrpc.rs"));


@ -1,402 +0,0 @@
use std::io::Read;
use std::os::unix::process::CommandExt;
use std::process::Stdio;
use std::sync::{Arc, Mutex};
use connectrpc::{ConnectError, ErrorCode};
use nix::pty::{openpty, Winsize};
use nix::sys::signal::{self, Signal};
use nix::unistd::Pid;
use tokio::sync::broadcast;
use crate::rpc::pb::process::*;
const STD_CHUNK_SIZE: usize = 32768;
const PTY_CHUNK_SIZE: usize = 16384;
const BROADCAST_CAPACITY: usize = 4096;
#[derive(Clone)]
pub enum DataEvent {
Stdout(Vec<u8>),
Stderr(Vec<u8>),
Pty(Vec<u8>),
}
#[derive(Clone)]
pub struct EndEvent {
pub exit_code: i32,
pub exited: bool,
pub status: String,
pub error: Option<String>,
}
pub struct ProcessHandle {
pub config: ProcessConfig,
pub tag: Option<String>,
pub pid: u32,
data_tx: broadcast::Sender<DataEvent>,
end_tx: broadcast::Sender<EndEvent>,
stdin: Mutex<Option<std::process::ChildStdin>>,
pty_master: Mutex<Option<std::fs::File>>,
}
impl ProcessHandle {
pub fn subscribe_data(&self) -> broadcast::Receiver<DataEvent> {
self.data_tx.subscribe()
}
pub fn subscribe_end(&self) -> broadcast::Receiver<EndEvent> {
self.end_tx.subscribe()
}
pub fn send_signal(&self, sig: Signal) -> Result<(), ConnectError> {
signal::kill(Pid::from_raw(self.pid as i32), sig).map_err(|e| {
ConnectError::new(ErrorCode::Internal, format!("error sending signal: {e}"))
})
}
pub fn write_stdin(&self, data: &[u8]) -> Result<(), ConnectError> {
use std::io::Write;
let mut guard = self.stdin.lock().unwrap();
match guard.as_mut() {
Some(stdin) => stdin.write_all(data).map_err(|e| {
ConnectError::new(ErrorCode::Internal, format!("error writing to stdin: {e}"))
}),
None => Err(ConnectError::new(
ErrorCode::FailedPrecondition,
"stdin not enabled or closed",
)),
}
}
pub fn write_pty(&self, data: &[u8]) -> Result<(), ConnectError> {
use std::io::Write;
let mut guard = self.pty_master.lock().unwrap();
match guard.as_mut() {
Some(master) => master.write_all(data).map_err(|e| {
ConnectError::new(ErrorCode::Internal, format!("error writing to pty: {e}"))
}),
None => Err(ConnectError::new(
ErrorCode::FailedPrecondition,
"pty not assigned to process",
)),
}
}
pub fn close_stdin(&self) -> Result<(), ConnectError> {
if self.pty_master.lock().unwrap().is_some() {
return Err(ConnectError::new(
ErrorCode::FailedPrecondition,
"cannot close stdin for PTY process — send Ctrl+D (0x04) instead",
));
}
let mut guard = self.stdin.lock().unwrap();
*guard = None;
Ok(())
}
pub fn resize_pty(&self, cols: u16, rows: u16) -> Result<(), ConnectError> {
let guard = self.pty_master.lock().unwrap();
match guard.as_ref() {
Some(master) => {
use std::os::unix::io::AsRawFd;
let ws = libc::winsize {
ws_row: rows,
ws_col: cols,
ws_xpixel: 0,
ws_ypixel: 0,
};
let ret = unsafe { libc::ioctl(master.as_raw_fd(), libc::TIOCSWINSZ, &ws) };
if ret != 0 {
return Err(ConnectError::new(
ErrorCode::Internal,
format!(
"ioctl TIOCSWINSZ failed: {}",
std::io::Error::last_os_error()
),
));
}
Ok(())
}
None => Err(ConnectError::new(
ErrorCode::FailedPrecondition,
"tty not assigned to process",
)),
}
}
}
pub fn spawn_process(
cmd_str: &str,
args: &[String],
envs: &std::collections::HashMap<String, String>,
cwd: &str,
pty_opts: Option<(u16, u16)>,
enable_stdin: bool,
tag: Option<String>,
user: &nix::unistd::User,
default_env_vars: &dashmap::DashMap<String, String>,
) -> Result<Arc<ProcessHandle>, ConnectError> {
let mut env: Vec<(String, String)> = Vec::new();
env.push(("PATH".into(), std::env::var("PATH").unwrap_or_default()));
let home = user.dir.to_string_lossy().to_string();
env.push(("HOME".into(), home));
env.push(("USER".into(), user.name.clone()));
env.push(("LOGNAME".into(), user.name.clone()));
default_env_vars.iter().for_each(|entry| {
env.push((entry.key().clone(), entry.value().clone()));
});
for (k, v) in envs {
env.push((k.clone(), v.clone()));
}
let nice_delta = -current_nice();
let oom_script = format!(
r#"echo 100 > /proc/$$/oom_score_adj && exec /usr/bin/nice -n {} "${{@}}""#,
nice_delta
);
let mut wrapper_args = vec![
"-c".to_string(),
oom_script,
"--".to_string(),
cmd_str.to_string(),
];
wrapper_args.extend_from_slice(args);
let uid = user.uid.as_raw();
let gid = user.gid.as_raw();
let (data_tx, _) = broadcast::channel(BROADCAST_CAPACITY);
let (end_tx, _) = broadcast::channel(16);
let config = ProcessConfig {
cmd: cmd_str.to_string(),
args: args.to_vec(),
envs: envs.clone(),
cwd: Some(cwd.to_string()),
..Default::default()
};
if let Some((cols, rows)) = pty_opts {
let pty_result = openpty(
Some(&Winsize {
ws_row: rows,
ws_col: cols,
ws_xpixel: 0,
ws_ypixel: 0,
}),
None,
)
.map_err(|e| ConnectError::new(ErrorCode::Internal, format!("openpty failed: {e}")))?;
let master_fd = pty_result.master;
let slave_fd = pty_result.slave;
let mut command = std::process::Command::new("/bin/sh");
command
.args(&wrapper_args)
.env_clear()
.envs(env.iter().map(|(k, v)| (k.as_str(), v.as_str())))
.current_dir(cwd);
unsafe {
use std::os::unix::io::AsRawFd;
let slave_raw = slave_fd.as_raw_fd();
let master_raw = master_fd.as_raw_fd();
command.pre_exec(move || {
libc::close(master_raw);
nix::unistd::setsid()
.map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))?;
libc::ioctl(slave_raw, libc::TIOCSCTTY, 0);
libc::dup2(slave_raw, 0);
libc::dup2(slave_raw, 1);
libc::dup2(slave_raw, 2);
if slave_raw > 2 {
libc::close(slave_raw);
}
libc::setgid(gid);
libc::setuid(uid);
Ok(())
});
}
command.stdin(Stdio::null());
command.stdout(Stdio::null());
command.stderr(Stdio::null());
let child = command.spawn().map_err(|e| {
ConnectError::new(ErrorCode::Internal, format!("error starting pty process: {e}"))
})?;
drop(slave_fd);
let pid = child.id();
let master_file: std::fs::File = master_fd.into();
let master_clone = master_file.try_clone().unwrap();
let handle = Arc::new(ProcessHandle {
config,
tag,
pid,
data_tx: data_tx.clone(),
end_tx: end_tx.clone(),
stdin: Mutex::new(None),
pty_master: Mutex::new(Some(master_file)),
});
let data_tx_clone = data_tx.clone();
std::thread::spawn(move || {
let mut master = master_clone;
let mut buf = vec![0u8; PTY_CHUNK_SIZE];
loop {
match master.read(&mut buf) {
Ok(0) => break,
Ok(n) => {
let _ = data_tx_clone.send(DataEvent::Pty(buf[..n].to_vec()));
}
Err(_) => break,
}
}
});
let end_tx_clone = end_tx.clone();
std::thread::spawn(move || {
let mut child = child;
match child.wait() {
Ok(s) => {
let _ = end_tx_clone.send(EndEvent {
exit_code: s.code().unwrap_or(-1),
exited: s.code().is_some(),
status: format!("{s}"),
error: None,
});
}
Err(e) => {
let _ = end_tx_clone.send(EndEvent {
exit_code: -1,
exited: false,
status: "error".into(),
error: Some(e.to_string()),
});
}
}
});
tracing::info!(pid, cmd = cmd_str, "process started (pty)");
Ok(handle)
} else {
let mut command = std::process::Command::new("/bin/sh");
command
.args(&wrapper_args)
.env_clear()
.envs(env.iter().map(|(k, v)| (k.as_str(), v.as_str())))
.current_dir(cwd)
.stdout(Stdio::piped())
.stderr(Stdio::piped());
if enable_stdin {
command.stdin(Stdio::piped());
} else {
command.stdin(Stdio::null());
}
unsafe {
command.pre_exec(move || {
libc::setgid(gid);
libc::setuid(uid);
Ok(())
});
}
let mut child = command.spawn().map_err(|e| {
ConnectError::new(ErrorCode::Internal, format!("error starting process: {e}"))
})?;
let pid = child.id();
let stdin = child.stdin.take();
let stdout = child.stdout.take();
let stderr = child.stderr.take();
let handle = Arc::new(ProcessHandle {
config,
tag,
pid,
data_tx: data_tx.clone(),
end_tx: end_tx.clone(),
stdin: Mutex::new(stdin),
pty_master: Mutex::new(None),
});
if let Some(mut out) = stdout {
let tx = data_tx.clone();
std::thread::spawn(move || {
let mut buf = vec![0u8; STD_CHUNK_SIZE];
loop {
match out.read(&mut buf) {
Ok(0) => break,
Ok(n) => {
let _ = tx.send(DataEvent::Stdout(buf[..n].to_vec()));
}
Err(_) => break,
}
}
});
}
if let Some(mut err_pipe) = stderr {
let tx = data_tx.clone();
std::thread::spawn(move || {
let mut buf = vec![0u8; STD_CHUNK_SIZE];
loop {
match err_pipe.read(&mut buf) {
Ok(0) => break,
Ok(n) => {
let _ = tx.send(DataEvent::Stderr(buf[..n].to_vec()));
}
Err(_) => break,
}
}
});
}
let end_tx_clone = end_tx.clone();
std::thread::spawn(move || {
match child.wait() {
Ok(s) => {
let _ = end_tx_clone.send(EndEvent {
exit_code: s.code().unwrap_or(-1),
exited: s.code().is_some(),
status: format!("{s}"),
error: None,
});
}
Err(e) => {
let _ = end_tx_clone.send(EndEvent {
exit_code: -1,
exited: false,
status: "error".into(),
error: Some(e.to_string()),
});
}
}
});
tracing::info!(pid, cmd = cmd_str, "process started (pipe)");
Ok(handle)
}
}
fn current_nice() -> i32 {
unsafe {
*libc::__errno_location() = 0;
let prio = libc::getpriority(libc::PRIO_PROCESS, 0);
if *libc::__errno_location() != 0 {
return 0;
}
20 - prio
}
}
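In `spawn_process` the child environment is built as a Vec in precedence order: base vars (PATH, HOME, USER, LOGNAME) first, then the capsule's default env vars, then the per-request vars. Since `Command::envs` lets later entries overwrite earlier ones, request vars win. The same layering, distilled into a map (illustrative helper, not from the codebase):

```rust
use std::collections::HashMap;

// Later layers override earlier ones, mirroring the Vec push order in
// spawn_process: base vars, then defaults, then per-request vars.
fn effective_env(layers: &[Vec<(&str, &str)>]) -> HashMap<String, String> {
    let mut env = HashMap::new();
    for layer in layers {
        for (k, v) in layer {
            env.insert(k.to_string(), v.to_string());
        }
    }
    env
}
```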


@ -1,449 +0,0 @@
use std::collections::HashMap;
use std::pin::Pin;
use std::sync::Arc;
use connectrpc::{ConnectError, Context, ErrorCode};
use dashmap::DashMap;
use futures::Stream;
use crate::permissions::path::expand_and_resolve;
use crate::permissions::user::lookup_user;
use crate::rpc::pb::process::*;
use crate::rpc::process_handler::{self, DataEvent, ProcessHandle};
use crate::state::AppState;
pub struct ProcessServiceImpl {
state: Arc<AppState>,
processes: DashMap<u32, Arc<ProcessHandle>>,
}
impl ProcessServiceImpl {
pub fn new(state: Arc<AppState>) -> Self {
Self {
state,
processes: DashMap::new(),
}
}
fn get_process_by_selector(
&self,
selector: &ProcessSelectorView,
) -> Result<Arc<ProcessHandle>, ConnectError> {
match &selector.selector {
Some(process_selector::SelectorView::Pid(pid)) => {
let pid_val = *pid;
self.processes
.get(&pid_val)
.map(|entry| Arc::clone(entry.value()))
.ok_or_else(|| {
ConnectError::new(
ErrorCode::NotFound,
format!("process with pid {pid_val} not found"),
)
})
}
Some(process_selector::SelectorView::Tag(tag)) => {
let tag_str: &str = tag;
for entry in self.processes.iter() {
if let Some(ref t) = entry.value().tag {
if t == tag_str {
return Ok(Arc::clone(entry.value()));
}
}
}
Err(ConnectError::new(
ErrorCode::NotFound,
format!("process with tag {tag_str} not found"),
))
}
None => Err(ConnectError::new(
ErrorCode::InvalidArgument,
"process selector required",
)),
}
}
fn spawn_from_request(
&self,
request: &StartRequestView<'_>,
) -> Result<Arc<ProcessHandle>, ConnectError> {
let proc_config = request.process.as_option().ok_or_else(|| {
ConnectError::new(ErrorCode::InvalidArgument, "process config required")
})?;
let username = self.state.defaults.user.clone();
let user =
lookup_user(&username).map_err(|e| ConnectError::new(ErrorCode::Internal, e))?;
let cmd: &str = proc_config.cmd;
let args: Vec<String> = proc_config.args.iter().map(|s| s.to_string()).collect();
let envs: HashMap<String, String> = proc_config
.envs
.iter()
.map(|(k, v)| (k.to_string(), v.to_string()))
.collect();
let home_dir = user.dir.to_string_lossy().to_string();
let cwd_str: &str = proc_config.cwd.unwrap_or("");
let cwd = expand_and_resolve(cwd_str, &home_dir, self.state.defaults.workdir.as_deref())
.map_err(|e| ConnectError::new(ErrorCode::InvalidArgument, e))?;
let effective_cwd = if cwd.is_empty() { "/" } else { &cwd };
if std::fs::metadata(effective_cwd).is_err() {
return Err(ConnectError::new(
ErrorCode::InvalidArgument,
format!("cwd '{effective_cwd}' does not exist"),
));
}
let pty_opts = request.pty.as_option().and_then(|pty| {
pty.size
.as_option()
.map(|sz| (sz.cols as u16, sz.rows as u16))
});
let enable_stdin = request.stdin.unwrap_or(true);
let tag = request.tag.map(|s| s.to_string());
tracing::info!(
cmd = cmd,
has_pty = pty_opts.is_some(),
pty_size = ?pty_opts,
tag = ?tag,
stdin = enable_stdin,
cwd = effective_cwd,
user = %username,
"process.Start request"
);
let handle = process_handler::spawn_process(
cmd,
&args,
&envs,
effective_cwd,
pty_opts,
enable_stdin,
tag,
&user,
&self.state.defaults.env_vars,
)?;
self.processes.insert(handle.pid, Arc::clone(&handle));
let processes = self.processes.clone();
let pid = handle.pid;
let mut end_rx = handle.subscribe_end();
tokio::spawn(async move {
let _ = end_rx.recv().await;
processes.remove(&pid);
});
Ok(handle)
}
}
impl Process for ProcessServiceImpl {
async fn list(
&self,
ctx: Context,
_request: buffa::view::OwnedView<ListRequestView<'static>>,
) -> Result<(ListResponse, Context), ConnectError> {
let processes: Vec<ProcessInfo> = self
.processes
.iter()
.map(|entry| {
let h = entry.value();
ProcessInfo {
config: buffa::MessageField::some(h.config.clone()),
pid: h.pid,
tag: h.tag.clone(),
..Default::default()
}
})
.collect();
Ok((
ListResponse {
processes,
..Default::default()
},
ctx,
))
}
async fn start(
&self,
ctx: Context,
request: buffa::view::OwnedView<StartRequestView<'static>>,
) -> Result<
(
Pin<Box<dyn Stream<Item = Result<StartResponse, ConnectError>> + Send>>,
Context,
),
ConnectError,
> {
let handle = self.spawn_from_request(&request)?;
let pid = handle.pid;
let mut data_rx = handle.subscribe_data();
let mut end_rx = handle.subscribe_end();
let stream = async_stream::stream! {
yield Ok(make_start_response(pid));
loop {
match data_rx.recv().await {
Ok(ev) => yield Ok(make_data_start_response(ev)),
Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => continue,
Err(tokio::sync::broadcast::error::RecvError::Closed) => break,
}
}
if let Ok(end) = end_rx.recv().await {
yield Ok(make_end_start_response(end));
}
};
Ok((Box::pin(stream), ctx))
}
async fn connect(
&self,
ctx: Context,
request: buffa::view::OwnedView<ConnectRequestView<'static>>,
) -> Result<
(
Pin<Box<dyn Stream<Item = Result<ConnectResponse, ConnectError>> + Send>>,
Context,
),
ConnectError,
> {
let selector = request.process.as_option().ok_or_else(|| {
ConnectError::new(ErrorCode::InvalidArgument, "process selector required")
})?;
let handle = self.get_process_by_selector(selector)?;
let pid = handle.pid;
let mut data_rx = handle.subscribe_data();
let mut end_rx = handle.subscribe_end();
let stream = async_stream::stream! {
yield Ok(ConnectResponse {
event: buffa::MessageField::some(ProcessEvent {
event: Some(process_event::Event::Start(Box::new(
process_event::StartEvent { pid, ..Default::default() },
))),
..Default::default()
}),
..Default::default()
});
loop {
match data_rx.recv().await {
Ok(ev) => {
yield Ok(ConnectResponse {
event: buffa::MessageField::some(make_data_event(ev)),
..Default::default()
});
}
Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => continue,
Err(tokio::sync::broadcast::error::RecvError::Closed) => break,
}
}
if let Ok(end) = end_rx.recv().await {
yield Ok(ConnectResponse {
event: buffa::MessageField::some(make_end_event(end)),
..Default::default()
});
}
};
Ok((Box::pin(stream), ctx))
}
async fn update(
&self,
ctx: Context,
request: buffa::view::OwnedView<UpdateRequestView<'static>>,
) -> Result<(UpdateResponse, Context), ConnectError> {
let selector = request.process.as_option().ok_or_else(|| {
ConnectError::new(ErrorCode::InvalidArgument, "process selector required")
})?;
let handle = self.get_process_by_selector(selector)?;
if let Some(pty) = request.pty.as_option() {
if let Some(size) = pty.size.as_option() {
handle.resize_pty(size.cols as u16, size.rows as u16)?;
}
}
Ok((UpdateResponse { ..Default::default() }, ctx))
}
async fn stream_input(
&self,
ctx: Context,
mut requests: Pin<
Box<
dyn Stream<
Item = Result<
buffa::view::OwnedView<StreamInputRequestView<'static>>,
ConnectError,
>,
> + Send,
>,
>,
) -> Result<(StreamInputResponse, Context), ConnectError> {
use futures::StreamExt;
let mut handle: Option<Arc<ProcessHandle>> = None;
while let Some(result) = requests.next().await {
let req = result?;
match &req.event {
Some(stream_input_request::EventView::Start(start)) => {
if let Some(selector) = start.process.as_option() {
handle = Some(self.get_process_by_selector(selector)?);
}
}
Some(stream_input_request::EventView::Data(data)) => {
let h = handle.as_ref().ok_or_else(|| {
ConnectError::new(ErrorCode::FailedPrecondition, "no start event received")
})?;
if let Some(input) = data.input.as_option() {
write_input(h, input)?;
}
}
Some(stream_input_request::EventView::Keepalive(_)) => {}
None => {}
}
}
Ok((StreamInputResponse { ..Default::default() }, ctx))
}
async fn send_input(
&self,
ctx: Context,
request: buffa::view::OwnedView<SendInputRequestView<'static>>,
) -> Result<(SendInputResponse, Context), ConnectError> {
let selector = request.process.as_option().ok_or_else(|| {
ConnectError::new(ErrorCode::InvalidArgument, "process selector required")
})?;
let handle = self.get_process_by_selector(selector)?;
if let Some(input) = request.input.as_option() {
write_input(&handle, input)?;
}
Ok((SendInputResponse { ..Default::default() }, ctx))
}
async fn send_signal(
&self,
ctx: Context,
request: buffa::view::OwnedView<SendSignalRequestView<'static>>,
) -> Result<(SendSignalResponse, Context), ConnectError> {
let selector = request.process.as_option().ok_or_else(|| {
ConnectError::new(ErrorCode::InvalidArgument, "process selector required")
})?;
let handle = self.get_process_by_selector(selector)?;
let sig = match request.signal.as_known() {
Some(Signal::SIGNAL_SIGKILL) => nix::sys::signal::Signal::SIGKILL,
Some(Signal::SIGNAL_SIGTERM) => nix::sys::signal::Signal::SIGTERM,
_ => {
return Err(ConnectError::new(
ErrorCode::InvalidArgument,
"invalid or unspecified signal",
))
}
};
handle.send_signal(sig)?;
Ok((SendSignalResponse { ..Default::default() }, ctx))
}
async fn close_stdin(
&self,
ctx: Context,
request: buffa::view::OwnedView<CloseStdinRequestView<'static>>,
) -> Result<(CloseStdinResponse, Context), ConnectError> {
let selector = request.process.as_option().ok_or_else(|| {
ConnectError::new(ErrorCode::InvalidArgument, "process selector required")
})?;
let handle = self.get_process_by_selector(selector)?;
handle.close_stdin()?;
Ok((CloseStdinResponse { ..Default::default() }, ctx))
}
}
fn write_input(handle: &ProcessHandle, input: &ProcessInputView) -> Result<(), ConnectError> {
match &input.input {
Some(process_input::InputView::Pty(d)) => handle.write_pty(d),
Some(process_input::InputView::Stdin(d)) => handle.write_stdin(d),
None => Ok(()),
}
}
fn make_start_response(pid: u32) -> StartResponse {
StartResponse {
event: buffa::MessageField::some(ProcessEvent {
event: Some(process_event::Event::Start(Box::new(
process_event::StartEvent {
pid,
..Default::default()
},
))),
..Default::default()
}),
..Default::default()
}
}
fn make_data_event(ev: DataEvent) -> ProcessEvent {
let output = match ev {
DataEvent::Stdout(d) => Some(process_event::data_event::Output::Stdout(d.into())),
DataEvent::Stderr(d) => Some(process_event::data_event::Output::Stderr(d.into())),
DataEvent::Pty(d) => Some(process_event::data_event::Output::Pty(d.into())),
};
ProcessEvent {
event: Some(process_event::Event::Data(Box::new(
process_event::DataEvent {
output,
..Default::default()
},
))),
..Default::default()
}
}
fn make_data_start_response(ev: DataEvent) -> StartResponse {
StartResponse {
event: buffa::MessageField::some(make_data_event(ev)),
..Default::default()
}
}
fn make_end_event(end: process_handler::EndEvent) -> ProcessEvent {
ProcessEvent {
event: Some(process_event::Event::End(Box::new(
process_event::EndEvent {
exit_code: end.exit_code,
exited: end.exited,
status: end.status,
error: end.error,
..Default::default()
},
))),
..Default::default()
}
}
fn make_end_start_response(end: process_handler::EndEvent) -> StartResponse {
StartResponse {
event: buffa::MessageField::some(make_end_event(end)),
..Default::default()
}
}

View File

@@ -1,87 +0,0 @@
use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
use std::sync::Arc;
use crate::auth::token::SecureToken;
use crate::conntracker::ConnTracker;
use crate::execcontext::Defaults;
use crate::port::subsystem::PortSubsystem;
use crate::util::AtomicMax;
pub struct AppState {
pub defaults: Defaults,
pub version: String,
pub commit: String,
pub is_fc: bool,
pub needs_restore: AtomicBool,
pub last_set_time: AtomicMax,
pub access_token: SecureToken,
pub conn_tracker: ConnTracker,
pub port_subsystem: Option<Arc<PortSubsystem>>,
pub cpu_used_pct: AtomicU32,
pub cpu_count: AtomicU32,
}
impl AppState {
pub fn new(
defaults: Defaults,
version: String,
commit: String,
is_fc: bool,
port_subsystem: Option<Arc<PortSubsystem>>,
) -> Arc<Self> {
let state = Arc::new(Self {
defaults,
version,
commit,
is_fc,
needs_restore: AtomicBool::new(false),
last_set_time: AtomicMax::new(),
access_token: SecureToken::new(),
conn_tracker: ConnTracker::new(),
port_subsystem,
cpu_used_pct: AtomicU32::new(0),
cpu_count: AtomicU32::new(0),
});
let state_clone = Arc::clone(&state);
std::thread::spawn(move || {
cpu_sampler(state_clone);
});
state
}
pub fn cpu_used_pct(&self) -> f32 {
f32::from_bits(self.cpu_used_pct.load(Ordering::Relaxed))
}
pub fn cpu_count(&self) -> u32 {
self.cpu_count.load(Ordering::Relaxed)
}
}
fn cpu_sampler(state: Arc<AppState>) {
use sysinfo::System;
let mut sys = System::new();
sys.refresh_cpu_all();
loop {
std::thread::sleep(std::time::Duration::from_secs(1));
sys.refresh_cpu_all();
let pct = sys.global_cpu_usage();
let rounded = if pct > 0.0 {
(pct * 100.0).round() / 100.0
} else {
0.0
};
state
.cpu_used_pct
.store(rounded.to_bits(), Ordering::Relaxed);
state
.cpu_count
.store(sys.cpus().len() as u32, Ordering::Relaxed);
}
}
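The sampler above publishes a float through an `AtomicU32` by round-tripping its bit pattern (`to_bits`/`from_bits`), since Rust has no atomic float type. A minimal standalone sketch of that pattern (the `AtomicF32` name here is illustrative, not from the file):

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Store an f32 losslessly in an AtomicU32 via its raw bit pattern.
struct AtomicF32 {
    bits: AtomicU32,
}

impl AtomicF32 {
    fn new(v: f32) -> Self {
        Self { bits: AtomicU32::new(v.to_bits()) }
    }

    fn store(&self, v: f32) {
        self.bits.store(v.to_bits(), Ordering::Relaxed);
    }

    fn load(&self) -> f32 {
        f32::from_bits(self.bits.load(Ordering::Relaxed))
    }
}

fn main() {
    let pct = AtomicF32::new(0.0);
    pct.store(42.5);
    assert_eq!(pct.load(), 42.5);
    println!("cpu = {}", pct.load());
}
```

Relaxed ordering suffices here because each field is an independent, self-contained value with no cross-field invariant.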

View File

@@ -1,33 +0,0 @@
use std::sync::atomic::{AtomicI64, Ordering};
pub struct AtomicMax {
val: AtomicI64,
}
impl AtomicMax {
pub fn new() -> Self {
Self {
val: AtomicI64::new(i64::MIN),
}
}
/// Sets the stored value to `new` if `new` is strictly greater than
/// the current value. Returns `true` if the value was updated.
pub fn set_to_greater(&self, new: i64) -> bool {
loop {
let current = self.val.load(Ordering::Acquire);
if new <= current {
return false;
}
match self.val.compare_exchange_weak(
current,
new,
Ordering::Release,
Ordering::Relaxed,
) {
Ok(_) => return true,
Err(_) => continue,
}
}
}
}
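As the doc comment notes, the CAS loop only ever moves the stored value upward. A standalone sketch of the same compare-exchange pattern (the type is redefined inline here, since this diff removes the original file):

```rust
use std::sync::atomic::{AtomicI64, Ordering};

struct AtomicMax {
    val: AtomicI64,
}

impl AtomicMax {
    fn new() -> Self {
        Self { val: AtomicI64::new(i64::MIN) }
    }

    // Store `new` only if it is strictly greater than the current value.
    // Returns true if the value was updated.
    fn set_to_greater(&self, new: i64) -> bool {
        loop {
            let current = self.val.load(Ordering::Acquire);
            if new <= current {
                return false;
            }
            // compare_exchange_weak may fail spuriously; the loop retries.
            if self
                .val
                .compare_exchange_weak(current, new, Ordering::Release, Ordering::Relaxed)
                .is_ok()
            {
                return true;
            }
        }
    }

    fn get(&self) -> i64 {
        self.val.load(Ordering::Acquire)
    }
}

fn main() {
    let m = AtomicMax::new();
    assert!(m.set_to_greater(10));  // i64::MIN -> 10
    assert!(!m.set_to_greater(5));  // 5 <= 10, no update
    assert!(m.set_to_greater(20));  // 10 -> 20
    assert_eq!(m.get(), 20);
}
```

`compare_exchange_weak` is the right choice inside a retry loop: it is allowed to fail spuriously on some architectures but can compile to cheaper instructions than the strong variant.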

202
envd/LICENSE Normal file
View File

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2023 FoundryLabs, Inc.
Modifications Copyright (c) 2026 M/S Omukk, Bangladesh
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

62
envd/Makefile Normal file
View File

@@ -0,0 +1,62 @@
BUILD := $(shell git rev-parse --short HEAD 2>/dev/null || echo "unknown")
LDFLAGS := -s -w -X=main.commitSHA=$(BUILD)
BUILDS := ../builds
# ═══════════════════════════════════════════════════
# Build
# ═══════════════════════════════════════════════════
.PHONY: build build-debug
build:
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="$(LDFLAGS)" -o $(BUILDS)/envd .
@file $(BUILDS)/envd | grep -q "statically linked" || \
(echo "ERROR: envd is not statically linked!" && exit 1)
build-debug:
CGO_ENABLED=1 go build -race -gcflags=all="-N -l" -ldflags="-X=main.commitSHA=$(BUILD)" -o $(BUILDS)/debug/envd .
# ═══════════════════════════════════════════════════
# Run (debug mode, not inside a VM)
# ═══════════════════════════════════════════════════
.PHONY: run-debug
run-debug: build-debug
$(BUILDS)/debug/envd -isnotfc -port 49983
# ═══════════════════════════════════════════════════
# Code Generation
# ═══════════════════════════════════════════════════
.PHONY: generate proto openapi
generate: proto openapi
proto:
cd spec && buf generate --template buf.gen.yaml
openapi:
go generate ./internal/api/...
# ═══════════════════════════════════════════════════
# Quality
# ═══════════════════════════════════════════════════
.PHONY: fmt vet test tidy
fmt:
gofmt -w .
vet:
go vet ./...
test:
go test -race -v ./...
tidy:
go mod tidy
# ═══════════════════════════════════════════════════
# Clean
# ═══════════════════════════════════════════════════
.PHONY: clean
clean:
rm -f $(BUILDS)/envd $(BUILDS)/debug/envd

42
envd/go.mod Normal file
View File

@@ -0,0 +1,42 @@
module git.omukk.dev/wrenn/sandbox/envd
go 1.25.8
require (
connectrpc.com/authn v0.1.0
connectrpc.com/connect v1.19.1
connectrpc.com/cors v0.1.0
github.com/awnumar/memguard v0.23.0
github.com/creack/pty v1.1.24
github.com/dchest/uniuri v1.2.0
github.com/e2b-dev/fsnotify v0.0.1
github.com/go-chi/chi/v5 v5.2.5
github.com/google/uuid v1.6.0
github.com/oapi-codegen/runtime v1.2.0
github.com/orcaman/concurrent-map/v2 v2.0.1
github.com/rs/cors v1.11.1
github.com/rs/zerolog v1.34.0
github.com/shirou/gopsutil/v4 v4.26.2
github.com/stretchr/testify v1.11.1
github.com/txn2/txeh v1.8.0
golang.org/x/sys v0.43.0
google.golang.org/protobuf v1.36.11
)
require (
github.com/apapsch/go-jsonmerge/v2 v2.0.0 // indirect
github.com/awnumar/memcall v0.4.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/ebitengine/purego v0.10.0 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect
github.com/tklauser/go-sysconf v0.3.16 // indirect
github.com/tklauser/numcpus v0.11.0 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
golang.org/x/crypto v0.50.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

92
envd/go.sum Normal file
View File

@@ -0,0 +1,92 @@
connectrpc.com/authn v0.1.0 h1:m5weACjLWwgwcjttvUDyTPICJKw74+p2obBVrf8hT9E=
connectrpc.com/authn v0.1.0/go.mod h1:AwNZK/KYbqaJzRYadTuAaoz6sYQSPdORPqh1TOPIkgY=
connectrpc.com/connect v1.19.1 h1:R5M57z05+90EfEvCY1b7hBxDVOUl45PrtXtAV2fOC14=
connectrpc.com/connect v1.19.1/go.mod h1:tN20fjdGlewnSFeZxLKb0xwIZ6ozc3OQs2hTXy4du9w=
connectrpc.com/cors v0.1.0 h1:f3gTXJyDZPrDIZCQ567jxfD9PAIpopHiRDnJRt3QuOQ=
connectrpc.com/cors v0.1.0/go.mod h1:v8SJZCPfHtGH1zsm+Ttajpozd4cYIUryl4dFB6QEpfg=
github.com/RaveNoX/go-jsoncommentstrip v1.0.0/go.mod h1:78ihd09MekBnJnxpICcwzCMzGrKSKYe4AqU6PDYYpjk=
github.com/apapsch/go-jsonmerge/v2 v2.0.0 h1:axGnT1gRIfimI7gJifB699GoE/oq+F2MU7Dml6nw9rQ=
github.com/apapsch/go-jsonmerge/v2 v2.0.0/go.mod h1:lvDnEdqiQrp0O42VQGgmlKpxL1AP2+08jFMw88y4klk=
github.com/awnumar/memcall v0.4.0 h1:B7hgZYdfH6Ot1Goaz8jGne/7i8xD4taZie/PNSFZ29g=
github.com/awnumar/memcall v0.4.0/go.mod h1:8xOx1YbfyuCg3Fy6TO8DK0kZUua3V42/goA5Ru47E8w=
github.com/awnumar/memguard v0.23.0 h1:sJ3a1/SWlcuKIQ7MV+R9p0Pvo9CWsMbGZvcZQtmc68A=
github.com/awnumar/memguard v0.23.0/go.mod h1:olVofBrsPdITtJ2HgxQKrEYEMyIBAIciVG4wNnZhW9M=
github.com/bmatcuk/doublestar v1.1.1/go.mod h1:UD6OnuiIn0yFxxA2le/rnRU1G4RaI4UvFv1sNto9p6w=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s=
github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dchest/uniuri v1.2.0 h1:koIcOUdrTIivZgSLhHQvKgqdWZq5d7KdMEWF1Ud6+5g=
github.com/dchest/uniuri v1.2.0/go.mod h1:fSzm4SLHzNZvWLvWJew423PhAzkpNQYq+uNLq4kxhkY=
github.com/e2b-dev/fsnotify v0.0.1 h1:7j0I98HD6VehAuK/bcslvW4QDynAULtOuMZtImihjVk=
github.com/e2b-dev/fsnotify v0.0.1/go.mod h1:jAuDjregRrUixKneTRQwPI847nNuPFg3+n5QM/ku/JM=
github.com/ebitengine/purego v0.10.0 h1:QIw4xfpWT6GWTzaW5XEKy3HXoqrJGx1ijYHzTF0/ISU=
github.com/ebitengine/purego v0.10.0/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/go-chi/chi/v5 v5.2.5 h1:Eg4myHZBjyvJmAFjFvWgrqDTXFyOzjj7YIm3L3mu6Ug=
github.com/go-chi/chi/v5 v5.2.5/go.mod h1:X7Gx4mteadT3eDOMTsXzmI4/rwUpOwBHLpAfupzFJP0=
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/juju/gnuflag v0.0.0-20171113085948-2ce1bb71843d/go.mod h1:2PavIy+JPciBPrBUjwbNvtwB6RQlve+hkpll6QSNmOE=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/oapi-codegen/runtime v1.2.0 h1:RvKc1CVS1QeKSNzO97FBQbSMZyQ8s6rZd+LpmzwHMP4=
github.com/oapi-codegen/runtime v1.2.0/go.mod h1:Y7ZhmmlE8ikZOmuHRRndiIm7nf3xcVv+YMweKgG1DT0=
github.com/orcaman/concurrent-map/v2 v2.0.1 h1:jOJ5Pg2w1oeB6PeDurIYf6k9PQ+aTITr/6lP/L/zp6c=
github.com/orcaman/concurrent-map/v2 v2.0.1/go.mod h1:9Eq3TG2oBe5FirmYWQfYO5iH1q0Jv47PLaNK++uCdOM=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/rs/cors v1.11.1 h1:eU3gRzXLRK57F5rKMGMZURNdIG4EoAmX8k94r9wXWHA=
github.com/rs/cors v1.11.1/go.mod h1:XyqrcTp5zjWr1wsJ8PIRZssZ8b/WMcMf71DJnit4EMU=
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/rs/zerolog v1.34.0 h1:k43nTLIwcTVQAncfCw4KZ2VY6ukYoZaBPNOE8txlOeY=
github.com/rs/zerolog v1.34.0/go.mod h1:bJsvje4Z08ROH4Nhs5iH600c3IkWhwp44iRc54W6wYQ=
github.com/shirou/gopsutil/v4 v4.26.2 h1:X8i6sicvUFih4BmYIGT1m2wwgw2VG9YgrDTi7cIRGUI=
github.com/shirou/gopsutil/v4 v4.26.2/go.mod h1:LZ6ewCSkBqUpvSOf+LsTGnRinC6iaNUNMGBtDkJBaLQ=
github.com/spkg/bom v0.0.0-20160624110644-59b7046e48ad/go.mod h1:qLr4V1qq6nMqFKkMo8ZTx3f+BZEkzsRUY10Xsm2mwU0=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/tklauser/go-sysconf v0.3.16 h1:frioLaCQSsF5Cy1jgRBrzr6t502KIIwQ0MArYICU0nA=
github.com/tklauser/go-sysconf v0.3.16/go.mod h1:/qNL9xxDhc7tx3HSRsLWNnuzbVfh3e7gh/BmM179nYI=
github.com/tklauser/numcpus v0.11.0 h1:nSTwhKH5e1dMNsCdVBukSZrURJRoHbSEQjdEbY+9RXw=
github.com/tklauser/numcpus v0.11.0/go.mod h1:z+LwcLq54uWZTX0u/bGobaV34u6V7KNlTZejzM6/3MQ=
github.com/txn2/txeh v1.8.0 h1:G1vZgom6+P/xWwU53AMOpcZgC5ni382ukcPP1TDVYHk=
github.com/txn2/txeh v1.8.0/go.mod h1:rRI3Egi3+AFmEXQjft051YdYbxeCT3nFmBLsNCZZaxM=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
golang.org/x/crypto v0.50.0 h1:zO47/JPrL6vsNkINmLoo/PH1gcxpls50DNogFvB5ZGI=
golang.org/x/crypto v0.50.0/go.mod h1:3muZ7vA7PBCE6xgPX7nkzzjiUq87kRItoJQM1Yo8S+Q=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.43.0 h1:Rlag2XtaFTxp19wS8MXlJwTvoh8ArU6ezoyFsMyCTNI=
golang.org/x/sys v0.43.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
pgregory.net/rapid v1.2.0 h1:keKAYRcjm+e1F0oAuU5F5+YPAWcyxNNRK2wud503Gnk=
pgregory.net/rapid v1.2.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04=


@@ -0,0 +1,604 @@
// Package api provides primitives to interact with the openapi HTTP API.
//
// Code generated by github.com/oapi-codegen/oapi-codegen/v2 version v2.6.0 DO NOT EDIT.
package api
import (
"context"
"fmt"
"net/http"
"time"
"github.com/go-chi/chi/v5"
"github.com/oapi-codegen/runtime"
openapi_types "github.com/oapi-codegen/runtime/types"
)
const (
AccessTokenAuthScopes = "AccessTokenAuth.Scopes"
)
// Defines values for EntryInfoType.
const (
File EntryInfoType = "file"
)
// Valid indicates whether the value is a known member of the EntryInfoType enum.
func (e EntryInfoType) Valid() bool {
switch e {
case File:
return true
default:
return false
}
}
// EntryInfo defines model for EntryInfo.
type EntryInfo struct {
// Name Name of the file
Name string `json:"name"`
// Path Path to the file
Path string `json:"path"`
// Type Type of the file
Type EntryInfoType `json:"type"`
}
// EntryInfoType Type of the file
type EntryInfoType string
// EnvVars Environment variables to set
type EnvVars map[string]string
// Error defines model for Error.
type Error struct {
// Code Error code
Code int `json:"code"`
// Message Error message
Message string `json:"message"`
}
// Metrics Resource usage metrics
type Metrics struct {
// CpuCount Number of CPU cores
CpuCount *int `json:"cpu_count,omitempty"`
// CpuUsedPct CPU usage percentage
CpuUsedPct *float32 `json:"cpu_used_pct,omitempty"`
// DiskTotal Total disk space in bytes
DiskTotal *int `json:"disk_total,omitempty"`
// DiskUsed Used disk space in bytes
DiskUsed *int `json:"disk_used,omitempty"`
// MemTotal Total virtual memory in bytes
MemTotal *int `json:"mem_total,omitempty"`
// MemUsed Used virtual memory in bytes
MemUsed *int `json:"mem_used,omitempty"`
// Ts Unix timestamp in UTC for current sandbox time
Ts *int64 `json:"ts,omitempty"`
}
// VolumeMount Volume
type VolumeMount struct {
NfsTarget string `json:"nfs_target"`
Path string `json:"path"`
}
// FilePath defines model for FilePath.
type FilePath = string
// Signature defines model for Signature.
type Signature = string
// SignatureExpiration defines model for SignatureExpiration.
type SignatureExpiration = int
// User defines model for User.
type User = string
// FileNotFound defines model for FileNotFound.
type FileNotFound = Error
// InternalServerError defines model for InternalServerError.
type InternalServerError = Error
// InvalidPath defines model for InvalidPath.
type InvalidPath = Error
// InvalidUser defines model for InvalidUser.
type InvalidUser = Error
// NotEnoughDiskSpace defines model for NotEnoughDiskSpace.
type NotEnoughDiskSpace = Error
// UploadSuccess defines model for UploadSuccess.
type UploadSuccess = []EntryInfo
// GetFilesParams defines parameters for GetFiles.
type GetFilesParams struct {
// Path Path to the file, URL encoded. Can be relative to user's home directory.
Path *FilePath `form:"path,omitempty" json:"path,omitempty"`
// Username User used for setting the owner, or resolving relative paths.
Username *User `form:"username,omitempty" json:"username,omitempty"`
// Signature Signature used for file access permission verification.
Signature *Signature `form:"signature,omitempty" json:"signature,omitempty"`
// SignatureExpiration Signature expiration used for defining the expiration time of the signature.
SignatureExpiration *SignatureExpiration `form:"signature_expiration,omitempty" json:"signature_expiration,omitempty"`
}
// PostFilesMultipartBody defines parameters for PostFiles.
type PostFilesMultipartBody struct {
File *openapi_types.File `json:"file,omitempty"`
}
// PostFilesParams defines parameters for PostFiles.
type PostFilesParams struct {
// Path Path to the file, URL encoded. Can be relative to user's home directory.
Path *FilePath `form:"path,omitempty" json:"path,omitempty"`
// Username User used for setting the owner, or resolving relative paths.
Username *User `form:"username,omitempty" json:"username,omitempty"`
// Signature Signature used for file access permission verification.
Signature *Signature `form:"signature,omitempty" json:"signature,omitempty"`
// SignatureExpiration Signature expiration used for defining the expiration time of the signature.
SignatureExpiration *SignatureExpiration `form:"signature_expiration,omitempty" json:"signature_expiration,omitempty"`
}
// PostInitJSONBody defines parameters for PostInit.
type PostInitJSONBody struct {
// AccessToken Access token for secure access to envd service
AccessToken *SecureToken `json:"accessToken,omitempty"`
// DefaultUser The default user to use for operations
DefaultUser *string `json:"defaultUser,omitempty"`
// DefaultWorkdir The default working directory to use for operations
DefaultWorkdir *string `json:"defaultWorkdir,omitempty"`
// EnvVars Environment variables to set
EnvVars *EnvVars `json:"envVars,omitempty"`
// HyperloopIP IP address of the hyperloop server to connect to
HyperloopIP *string `json:"hyperloopIP,omitempty"`
// Timestamp The current timestamp in RFC3339 format
Timestamp *time.Time `json:"timestamp,omitempty"`
VolumeMounts *[]VolumeMount `json:"volumeMounts,omitempty"`
}
// PostFilesMultipartRequestBody defines body for PostFiles for multipart/form-data ContentType.
type PostFilesMultipartRequestBody PostFilesMultipartBody
// PostInitJSONRequestBody defines body for PostInit for application/json ContentType.
type PostInitJSONRequestBody PostInitJSONBody
// ServerInterface represents all server handlers.
type ServerInterface interface {
// Get the environment variables
// (GET /envs)
GetEnvs(w http.ResponseWriter, r *http.Request)
// Download a file
// (GET /files)
GetFiles(w http.ResponseWriter, r *http.Request, params GetFilesParams)
// Upload a file and ensure the parent directories exist. If the file exists, it will be overwritten.
// (POST /files)
PostFiles(w http.ResponseWriter, r *http.Request, params PostFilesParams)
// Check the health of the service
// (GET /health)
GetHealth(w http.ResponseWriter, r *http.Request)
// Set initial vars, ensure the time and metadata is synced with the host
// (POST /init)
PostInit(w http.ResponseWriter, r *http.Request)
// Get the stats of the service
// (GET /metrics)
GetMetrics(w http.ResponseWriter, r *http.Request)
// Quiesce continuous goroutines before Firecracker snapshot
// (POST /snapshot/prepare)
PostSnapshotPrepare(w http.ResponseWriter, r *http.Request)
}
// Unimplemented server implementation that returns http.StatusNotImplemented for each endpoint.
type Unimplemented struct{}
// Get the environment variables
// (GET /envs)
func (_ Unimplemented) GetEnvs(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotImplemented)
}
// Download a file
// (GET /files)
func (_ Unimplemented) GetFiles(w http.ResponseWriter, r *http.Request, params GetFilesParams) {
w.WriteHeader(http.StatusNotImplemented)
}
// Upload a file and ensure the parent directories exist. If the file exists, it will be overwritten.
// (POST /files)
func (_ Unimplemented) PostFiles(w http.ResponseWriter, r *http.Request, params PostFilesParams) {
w.WriteHeader(http.StatusNotImplemented)
}
// Check the health of the service
// (GET /health)
func (_ Unimplemented) GetHealth(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotImplemented)
}
// Set initial vars, ensure the time and metadata is synced with the host
// (POST /init)
func (_ Unimplemented) PostInit(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotImplemented)
}
// Get the stats of the service
// (GET /metrics)
func (_ Unimplemented) GetMetrics(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotImplemented)
}
// Quiesce continuous goroutines before Firecracker snapshot
// (POST /snapshot/prepare)
func (_ Unimplemented) PostSnapshotPrepare(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotImplemented)
}
// ServerInterfaceWrapper converts contexts to parameters.
type ServerInterfaceWrapper struct {
Handler ServerInterface
HandlerMiddlewares []MiddlewareFunc
ErrorHandlerFunc func(w http.ResponseWriter, r *http.Request, err error)
}
type MiddlewareFunc func(http.Handler) http.Handler
// GetEnvs operation middleware
func (siw *ServerInterfaceWrapper) GetEnvs(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
ctx = context.WithValue(ctx, AccessTokenAuthScopes, []string{})
r = r.WithContext(ctx)
handler := http.Handler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
siw.Handler.GetEnvs(w, r)
}))
for _, middleware := range siw.HandlerMiddlewares {
handler = middleware(handler)
}
handler.ServeHTTP(w, r)
}
// GetFiles operation middleware
func (siw *ServerInterfaceWrapper) GetFiles(w http.ResponseWriter, r *http.Request) {
var err error
ctx := r.Context()
ctx = context.WithValue(ctx, AccessTokenAuthScopes, []string{})
r = r.WithContext(ctx)
// Parameter object where we will unmarshal all parameters from the context
var params GetFilesParams
// ------------- Optional query parameter "path" -------------
err = runtime.BindQueryParameterWithOptions("form", true, false, "path", r.URL.Query(), &params.Path, runtime.BindQueryParameterOptions{Type: "string", Format: ""})
if err != nil {
siw.ErrorHandlerFunc(w, r, &InvalidParamFormatError{ParamName: "path", Err: err})
return
}
// ------------- Optional query parameter "username" -------------
err = runtime.BindQueryParameterWithOptions("form", true, false, "username", r.URL.Query(), &params.Username, runtime.BindQueryParameterOptions{Type: "string", Format: ""})
if err != nil {
siw.ErrorHandlerFunc(w, r, &InvalidParamFormatError{ParamName: "username", Err: err})
return
}
// ------------- Optional query parameter "signature" -------------
err = runtime.BindQueryParameterWithOptions("form", true, false, "signature", r.URL.Query(), &params.Signature, runtime.BindQueryParameterOptions{Type: "string", Format: ""})
if err != nil {
siw.ErrorHandlerFunc(w, r, &InvalidParamFormatError{ParamName: "signature", Err: err})
return
}
// ------------- Optional query parameter "signature_expiration" -------------
err = runtime.BindQueryParameterWithOptions("form", true, false, "signature_expiration", r.URL.Query(), &params.SignatureExpiration, runtime.BindQueryParameterOptions{Type: "integer", Format: ""})
if err != nil {
siw.ErrorHandlerFunc(w, r, &InvalidParamFormatError{ParamName: "signature_expiration", Err: err})
return
}
handler := http.Handler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
siw.Handler.GetFiles(w, r, params)
}))
for _, middleware := range siw.HandlerMiddlewares {
handler = middleware(handler)
}
handler.ServeHTTP(w, r)
}
// PostFiles operation middleware
func (siw *ServerInterfaceWrapper) PostFiles(w http.ResponseWriter, r *http.Request) {
var err error
ctx := r.Context()
ctx = context.WithValue(ctx, AccessTokenAuthScopes, []string{})
r = r.WithContext(ctx)
// Parameter object where we will unmarshal all parameters from the context
var params PostFilesParams
// ------------- Optional query parameter "path" -------------
err = runtime.BindQueryParameterWithOptions("form", true, false, "path", r.URL.Query(), &params.Path, runtime.BindQueryParameterOptions{Type: "string", Format: ""})
if err != nil {
siw.ErrorHandlerFunc(w, r, &InvalidParamFormatError{ParamName: "path", Err: err})
return
}
// ------------- Optional query parameter "username" -------------
err = runtime.BindQueryParameterWithOptions("form", true, false, "username", r.URL.Query(), &params.Username, runtime.BindQueryParameterOptions{Type: "string", Format: ""})
if err != nil {
siw.ErrorHandlerFunc(w, r, &InvalidParamFormatError{ParamName: "username", Err: err})
return
}
// ------------- Optional query parameter "signature" -------------
err = runtime.BindQueryParameterWithOptions("form", true, false, "signature", r.URL.Query(), &params.Signature, runtime.BindQueryParameterOptions{Type: "string", Format: ""})
if err != nil {
siw.ErrorHandlerFunc(w, r, &InvalidParamFormatError{ParamName: "signature", Err: err})
return
}
// ------------- Optional query parameter "signature_expiration" -------------
err = runtime.BindQueryParameterWithOptions("form", true, false, "signature_expiration", r.URL.Query(), &params.SignatureExpiration, runtime.BindQueryParameterOptions{Type: "integer", Format: ""})
if err != nil {
siw.ErrorHandlerFunc(w, r, &InvalidParamFormatError{ParamName: "signature_expiration", Err: err})
return
}
handler := http.Handler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
siw.Handler.PostFiles(w, r, params)
}))
for _, middleware := range siw.HandlerMiddlewares {
handler = middleware(handler)
}
handler.ServeHTTP(w, r)
}
// GetHealth operation middleware
func (siw *ServerInterfaceWrapper) GetHealth(w http.ResponseWriter, r *http.Request) {
handler := http.Handler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
siw.Handler.GetHealth(w, r)
}))
for _, middleware := range siw.HandlerMiddlewares {
handler = middleware(handler)
}
handler.ServeHTTP(w, r)
}
// PostInit operation middleware
func (siw *ServerInterfaceWrapper) PostInit(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
ctx = context.WithValue(ctx, AccessTokenAuthScopes, []string{})
r = r.WithContext(ctx)
handler := http.Handler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
siw.Handler.PostInit(w, r)
}))
for _, middleware := range siw.HandlerMiddlewares {
handler = middleware(handler)
}
handler.ServeHTTP(w, r)
}
// GetMetrics operation middleware
func (siw *ServerInterfaceWrapper) GetMetrics(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
ctx = context.WithValue(ctx, AccessTokenAuthScopes, []string{})
r = r.WithContext(ctx)
handler := http.Handler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
siw.Handler.GetMetrics(w, r)
}))
for _, middleware := range siw.HandlerMiddlewares {
handler = middleware(handler)
}
handler.ServeHTTP(w, r)
}
// PostSnapshotPrepare operation middleware
func (siw *ServerInterfaceWrapper) PostSnapshotPrepare(w http.ResponseWriter, r *http.Request) {
handler := http.Handler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
siw.Handler.PostSnapshotPrepare(w, r)
}))
for _, middleware := range siw.HandlerMiddlewares {
handler = middleware(handler)
}
handler.ServeHTTP(w, r)
}
type UnescapedCookieParamError struct {
ParamName string
Err error
}
func (e *UnescapedCookieParamError) Error() string {
return fmt.Sprintf("error unescaping cookie parameter '%s'", e.ParamName)
}
func (e *UnescapedCookieParamError) Unwrap() error {
return e.Err
}
type UnmarshalingParamError struct {
ParamName string
Err error
}
func (e *UnmarshalingParamError) Error() string {
return fmt.Sprintf("Error unmarshaling parameter %s as JSON: %s", e.ParamName, e.Err.Error())
}
func (e *UnmarshalingParamError) Unwrap() error {
return e.Err
}
type RequiredParamError struct {
ParamName string
}
func (e *RequiredParamError) Error() string {
return fmt.Sprintf("Query argument %s is required, but not found", e.ParamName)
}
type RequiredHeaderError struct {
ParamName string
Err error
}
func (e *RequiredHeaderError) Error() string {
return fmt.Sprintf("Header parameter %s is required, but not found", e.ParamName)
}
func (e *RequiredHeaderError) Unwrap() error {
return e.Err
}
type InvalidParamFormatError struct {
ParamName string
Err error
}
func (e *InvalidParamFormatError) Error() string {
return fmt.Sprintf("Invalid format for parameter %s: %s", e.ParamName, e.Err.Error())
}
func (e *InvalidParamFormatError) Unwrap() error {
return e.Err
}
type TooManyValuesForParamError struct {
ParamName string
Count int
}
func (e *TooManyValuesForParamError) Error() string {
return fmt.Sprintf("Expected one value for %s, got %d", e.ParamName, e.Count)
}
// Handler creates http.Handler with routing matching OpenAPI spec.
func Handler(si ServerInterface) http.Handler {
return HandlerWithOptions(si, ChiServerOptions{})
}
type ChiServerOptions struct {
BaseURL string
BaseRouter chi.Router
Middlewares []MiddlewareFunc
ErrorHandlerFunc func(w http.ResponseWriter, r *http.Request, err error)
}
// HandlerFromMux creates http.Handler with routing matching OpenAPI spec based on the provided mux.
func HandlerFromMux(si ServerInterface, r chi.Router) http.Handler {
return HandlerWithOptions(si, ChiServerOptions{
BaseRouter: r,
})
}
func HandlerFromMuxWithBaseURL(si ServerInterface, r chi.Router, baseURL string) http.Handler {
return HandlerWithOptions(si, ChiServerOptions{
BaseURL: baseURL,
BaseRouter: r,
})
}
// HandlerWithOptions creates http.Handler with additional options
func HandlerWithOptions(si ServerInterface, options ChiServerOptions) http.Handler {
r := options.BaseRouter
if r == nil {
r = chi.NewRouter()
}
if options.ErrorHandlerFunc == nil {
options.ErrorHandlerFunc = func(w http.ResponseWriter, r *http.Request, err error) {
http.Error(w, err.Error(), http.StatusBadRequest)
}
}
wrapper := ServerInterfaceWrapper{
Handler: si,
HandlerMiddlewares: options.Middlewares,
ErrorHandlerFunc: options.ErrorHandlerFunc,
}
r.Group(func(r chi.Router) {
r.Get(options.BaseURL+"/envs", wrapper.GetEnvs)
})
r.Group(func(r chi.Router) {
r.Get(options.BaseURL+"/files", wrapper.GetFiles)
})
r.Group(func(r chi.Router) {
r.Post(options.BaseURL+"/files", wrapper.PostFiles)
})
r.Group(func(r chi.Router) {
r.Get(options.BaseURL+"/health", wrapper.GetHealth)
})
r.Group(func(r chi.Router) {
r.Post(options.BaseURL+"/init", wrapper.PostInit)
})
r.Group(func(r chi.Router) {
r.Get(options.BaseURL+"/metrics", wrapper.GetMetrics)
})
r.Group(func(r chi.Router) {
r.Post(options.BaseURL+"/snapshot/prepare", wrapper.PostSnapshotPrepare)
})
return r
}

envd/internal/api/auth.go Normal file

@@ -0,0 +1,133 @@
// SPDX-License-Identifier: Apache-2.0
// Modifications by M/S Omukk
package api
import (
"errors"
"fmt"
"net/http"
"slices"
"strconv"
"strings"
"time"
"github.com/awnumar/memguard"
"git.omukk.dev/wrenn/sandbox/envd/internal/shared/keys"
)
const (
SigningReadOperation = "read"
SigningWriteOperation = "write"
accessTokenHeader = "X-Access-Token"
)
// paths that are always allowed without general authentication
// POST/init is secured via MMDS hash validation instead
var authExcludedPaths = []string{
"GET/health",
"GET/files",
"POST/files",
"POST/init",
"POST/snapshot/prepare",
}
func (a *API) WithAuthorization(handler http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
if a.accessToken.IsSet() {
authHeader := req.Header.Get(accessTokenHeader)
// check if this path is allowed without authentication (e.g., health check, endpoints supporting signing)
allowedPath := slices.Contains(authExcludedPaths, req.Method+req.URL.Path)
if !a.accessToken.Equals(authHeader) && !allowedPath {
a.logger.Error().Msg("Trying to access secured envd without correct access token")
err := fmt.Errorf("unauthorized access, please provide a valid access token or method signing if supported")
jsonError(w, http.StatusUnauthorized, err)
return
}
}
handler.ServeHTTP(w, req)
})
}
func (a *API) generateSignature(path string, username string, operation string, signatureExpiration *int64) (string, error) {
tokenBytes, err := a.accessToken.Bytes()
if err != nil {
return "", fmt.Errorf("access token is not set: %w", err)
}
defer memguard.WipeBytes(tokenBytes)
var signature string
hasher := keys.NewSHA256Hashing()
if signatureExpiration == nil {
signature = strings.Join([]string{path, operation, username, string(tokenBytes)}, ":")
} else {
signature = strings.Join([]string{path, operation, username, string(tokenBytes), strconv.FormatInt(*signatureExpiration, 10)}, ":")
}
return fmt.Sprintf("v1_%s", hasher.HashWithoutPrefix([]byte(signature))), nil
}
func (a *API) validateSigning(r *http.Request, signature *string, signatureExpiration *int, username *string, path string, operation string) (err error) {
var expectedSignature string
// no need to validate signing key if access token is not set
if !a.accessToken.IsSet() {
return nil
}
// check if access token is sent in the header
tokenFromHeader := r.Header.Get(accessTokenHeader)
if tokenFromHeader != "" {
if !a.accessToken.Equals(tokenFromHeader) {
return fmt.Errorf("access token present in header but does not match")
}
return nil
}
if signature == nil {
return fmt.Errorf("missing signature query parameter")
}
// Empty string is used when no username is provided and the default user should be used
signatureUsername := ""
if username != nil {
signatureUsername = *username
}
if signatureExpiration == nil {
expectedSignature, err = a.generateSignature(path, signatureUsername, operation, nil)
} else {
exp := int64(*signatureExpiration)
expectedSignature, err = a.generateSignature(path, signatureUsername, operation, &exp)
}
if err != nil {
a.logger.Error().Err(err).Msg("error generating signing key")
return errors.New("invalid signature")
}
// signature validation
if expectedSignature != *signature {
return fmt.Errorf("invalid signature")
}
// signature expiration
if signatureExpiration != nil {
exp := int64(*signatureExpiration)
if exp < time.Now().Unix() {
return fmt.Errorf("signature is already expired")
}
}
return nil
}


@@ -0,0 +1,64 @@
// SPDX-License-Identifier: Apache-2.0
package api
import (
"fmt"
"strconv"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"git.omukk.dev/wrenn/sandbox/envd/internal/shared/keys"
)
func TestKeyGenerationAlgorithmIsStable(t *testing.T) {
t.Parallel()
apiToken := "secret-access-token"
secureToken := &SecureToken{}
err := secureToken.Set([]byte(apiToken))
require.NoError(t, err)
api := &API{accessToken: secureToken}
path := "/path/to/demo.txt"
username := "root"
operation := "write"
timestamp := time.Now().Unix()
signature, err := api.generateSignature(path, username, operation, &timestamp)
require.NoError(t, err)
assert.NotEmpty(t, signature)
// locally generated signature
hasher := keys.NewSHA256Hashing()
localSignatureTmp := fmt.Sprintf("%s:%s:%s:%s:%s", path, operation, username, apiToken, strconv.FormatInt(timestamp, 10))
localSignature := fmt.Sprintf("v1_%s", hasher.HashWithoutPrefix([]byte(localSignatureTmp)))
assert.Equal(t, localSignature, signature)
}
func TestKeyGenerationAlgorithmWithoutExpirationIsStable(t *testing.T) {
t.Parallel()
apiToken := "secret-access-token"
secureToken := &SecureToken{}
err := secureToken.Set([]byte(apiToken))
require.NoError(t, err)
api := &API{accessToken: secureToken}
path := "/path/to/resource.txt"
username := "user"
operation := "read"
signature, err := api.generateSignature(path, username, operation, nil)
require.NoError(t, err)
assert.NotEmpty(t, signature)
// locally generated signature
hasher := keys.NewSHA256Hashing()
localSignatureTmp := fmt.Sprintf("%s:%s:%s:%s", path, operation, username, apiToken)
localSignature := fmt.Sprintf("v1_%s", hasher.HashWithoutPrefix([]byte(localSignatureTmp)))
assert.Equal(t, localSignature, signature)
}


@@ -0,0 +1,10 @@
# SPDX-License-Identifier: Apache-2.0
# yaml-language-server: $schema=https://raw.githubusercontent.com/deepmap/oapi-codegen/HEAD/configuration-schema.json
package: api
output: api.gen.go
generate:
  models: true
  chi-server: true
  client: false


@@ -0,0 +1,187 @@
// SPDX-License-Identifier: Apache-2.0
// Modifications by M/S Omukk
package api
import (
"compress/gzip"
"errors"
"fmt"
"io"
"mime"
"net/http"
"os"
"os/user"
"path/filepath"
"git.omukk.dev/wrenn/sandbox/envd/internal/execcontext"
"git.omukk.dev/wrenn/sandbox/envd/internal/logs"
"git.omukk.dev/wrenn/sandbox/envd/internal/permissions"
)
func (a *API) GetFiles(w http.ResponseWriter, r *http.Request, params GetFilesParams) {
defer r.Body.Close()
var errorCode int
var errMsg error
var path string
if params.Path != nil {
path = *params.Path
}
operationID := logs.AssignOperationID()
// signing authorization if needed
err := a.validateSigning(r, params.Signature, params.SignatureExpiration, params.Username, path, SigningReadOperation)
if err != nil {
a.logger.Error().Err(err).Str(string(logs.OperationIDKey), operationID).Msg("error during auth validation")
jsonError(w, http.StatusUnauthorized, err)
return
}
username, err := execcontext.ResolveDefaultUsername(params.Username, a.defaults.User)
if err != nil {
a.logger.Error().Err(err).Str(string(logs.OperationIDKey), operationID).Msg("no user specified")
jsonError(w, http.StatusBadRequest, err)
return
}
defer func() {
l := a.logger.
Err(errMsg).
Str("method", r.Method+" "+r.URL.Path).
Str(string(logs.OperationIDKey), operationID).
Str("path", path).
Str("username", username)
if errMsg != nil {
l = l.Int("error_code", errorCode)
}
l.Msg("File read")
}()
u, err := user.Lookup(username)
if err != nil {
errMsg = fmt.Errorf("error looking up user '%s': %w", username, err)
errorCode = http.StatusUnauthorized
jsonError(w, errorCode, errMsg)
return
}
resolvedPath, err := permissions.ExpandAndResolve(path, u, a.defaults.Workdir)
if err != nil {
errMsg = fmt.Errorf("error expanding and resolving path '%s': %w", path, err)
errorCode = http.StatusBadRequest
jsonError(w, errorCode, errMsg)
return
}
stat, err := os.Stat(resolvedPath)
if err != nil {
if errors.Is(err, os.ErrNotExist) {
errMsg = fmt.Errorf("path '%s' does not exist", resolvedPath)
errorCode = http.StatusNotFound
jsonError(w, errorCode, errMsg)
return
}
errMsg = fmt.Errorf("error checking if path exists '%s': %w", resolvedPath, err)
errorCode = http.StatusInternalServerError
jsonError(w, errorCode, errMsg)
return
}
if stat.IsDir() {
errMsg = fmt.Errorf("path '%s' is a directory", resolvedPath)
errorCode = http.StatusBadRequest
jsonError(w, errorCode, errMsg)
return
}
// Reject anything that isn't a regular file (devices, pipes, sockets, etc.).
// Reading device files like /dev/zero or /dev/urandom produces infinite data
// and will exhaust memory on all layers of the stack.
if !stat.Mode().IsRegular() {
errMsg = fmt.Errorf("path '%s' is not a regular file", resolvedPath)
errorCode = http.StatusBadRequest
jsonError(w, errorCode, errMsg)
return
}
// Validate Accept-Encoding header
encoding, err := parseAcceptEncoding(r)
if err != nil {
errMsg = fmt.Errorf("error parsing Accept-Encoding: %w", err)
errorCode = http.StatusNotAcceptable
jsonError(w, errorCode, errMsg)
return
}
// Tell caches to store separate variants for different Accept-Encoding values
w.Header().Set("Vary", "Accept-Encoding")
// Fall back to identity for Range or conditional requests to preserve http.ServeContent
// behavior (206 Partial Content, 304 Not Modified). However, we must check if identity
// is acceptable per the Accept-Encoding header.
hasRangeOrConditional := r.Header.Get("Range") != "" ||
r.Header.Get("If-Modified-Since") != "" ||
r.Header.Get("If-None-Match") != "" ||
r.Header.Get("If-Range") != ""
if hasRangeOrConditional {
if !isIdentityAcceptable(r) {
errMsg = fmt.Errorf("identity encoding not acceptable for Range or conditional request")
errorCode = http.StatusNotAcceptable
jsonError(w, errorCode, errMsg)
return
}
encoding = EncodingIdentity
}
file, err := os.Open(resolvedPath)
if err != nil {
errMsg = fmt.Errorf("error opening file '%s': %w", resolvedPath, err)
errorCode = http.StatusInternalServerError
jsonError(w, errorCode, errMsg)
return
}
defer file.Close()
w.Header().Set("Content-Disposition", mime.FormatMediaType("inline", map[string]string{"filename": filepath.Base(resolvedPath)}))
// Serve with gzip encoding if requested.
if encoding == EncodingGzip {
w.Header().Set("Content-Encoding", EncodingGzip)
// Set Content-Type based on file extension, preserving the original type
contentType := mime.TypeByExtension(filepath.Ext(path))
if contentType == "" {
contentType = "application/octet-stream"
}
w.Header().Set("Content-Type", contentType)
gw := gzip.NewWriter(w)
defer gw.Close()
_, err = io.Copy(gw, file)
if err != nil {
a.logger.Error().Err(err).Str(string(logs.OperationIDKey), operationID).Msg("error writing gzip response")
}
return
}
http.ServeContent(w, r, path, stat.ModTime(), file)
}


@@ -0,0 +1,405 @@
// SPDX-License-Identifier: Apache-2.0
// Modifications by M/S Omukk
package api
import (
"bytes"
"compress/gzip"
"context"
"io"
"mime/multipart"
"net/http"
"net/http/httptest"
"net/url"
"os"
"os/user"
"path/filepath"
"testing"
"github.com/rs/zerolog"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"git.omukk.dev/wrenn/sandbox/envd/internal/execcontext"
"git.omukk.dev/wrenn/sandbox/envd/internal/utils"
)
func TestGetFilesContentDisposition(t *testing.T) {
t.Parallel()
currentUser, err := user.Current()
require.NoError(t, err)
tests := []struct {
name string
filename string
expectedHeader string
}{
{
name: "simple filename",
filename: "test.txt",
expectedHeader: `inline; filename=test.txt`,
},
{
name: "filename with extension",
filename: "presentation.pptx",
expectedHeader: `inline; filename=presentation.pptx`,
},
{
name: "filename with multiple dots",
filename: "archive.tar.gz",
expectedHeader: `inline; filename=archive.tar.gz`,
},
{
name: "filename with spaces",
filename: "my document.pdf",
expectedHeader: `inline; filename="my document.pdf"`,
},
{
name: "filename with quotes",
filename: `file"name.txt`,
expectedHeader: `inline; filename="file\"name.txt"`,
},
{
name: "filename with backslash",
filename: `file\name.txt`,
expectedHeader: `inline; filename="file\\name.txt"`,
},
{
name: "unicode filename",
filename: "\u6587\u6863.pdf", // 文档.pdf in Chinese
expectedHeader: "inline; filename*=utf-8''%E6%96%87%E6%A1%A3.pdf",
},
{
name: "dotfile preserved",
filename: ".env",
expectedHeader: `inline; filename=.env`,
},
{
name: "dotfile with extension preserved",
filename: ".gitignore",
expectedHeader: `inline; filename=.gitignore`,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
// Create a temp directory and file
tempDir := t.TempDir()
tempFile := filepath.Join(tempDir, tt.filename)
err := os.WriteFile(tempFile, []byte("test content"), 0o644)
require.NoError(t, err)
// Create test API
logger := zerolog.Nop()
defaults := &execcontext.Defaults{
EnvVars: utils.NewMap[string, string](),
User: currentUser.Username,
}
api := New(&logger, defaults, nil, false, context.Background(), nil)
// Create request and response recorder
req := httptest.NewRequest(http.MethodGet, "/files?path="+url.QueryEscape(tempFile), nil)
w := httptest.NewRecorder()
// Call the handler
params := GetFilesParams{
Path: &tempFile,
Username: &currentUser.Username,
}
api.GetFiles(w, req, params)
// Check response
resp := w.Result()
defer resp.Body.Close()
assert.Equal(t, http.StatusOK, resp.StatusCode)
// Verify Content-Disposition header
contentDisposition := resp.Header.Get("Content-Disposition")
assert.Equal(t, tt.expectedHeader, contentDisposition, "Content-Disposition header should be set with correct filename")
})
}
}
func TestGetFilesContentDispositionWithNestedPath(t *testing.T) {
t.Parallel()
currentUser, err := user.Current()
require.NoError(t, err)
// Create a temp directory with nested structure
tempDir := t.TempDir()
nestedDir := filepath.Join(tempDir, "subdir", "another")
err = os.MkdirAll(nestedDir, 0o755)
require.NoError(t, err)
filename := "document.pdf"
tempFile := filepath.Join(nestedDir, filename)
err = os.WriteFile(tempFile, []byte("test content"), 0o644)
require.NoError(t, err)
// Create test API
logger := zerolog.Nop()
defaults := &execcontext.Defaults{
EnvVars: utils.NewMap[string, string](),
User: currentUser.Username,
}
api := New(&logger, defaults, nil, false, context.Background(), nil)
// Create request and response recorder
req := httptest.NewRequest(http.MethodGet, "/files?path="+url.QueryEscape(tempFile), nil)
w := httptest.NewRecorder()
// Call the handler
params := GetFilesParams{
Path: &tempFile,
Username: &currentUser.Username,
}
api.GetFiles(w, req, params)
// Check response
resp := w.Result()
defer resp.Body.Close()
assert.Equal(t, http.StatusOK, resp.StatusCode)
// Verify Content-Disposition header uses only the base filename, not the full path
contentDisposition := resp.Header.Get("Content-Disposition")
assert.Equal(t, `inline; filename=document.pdf`, contentDisposition, "Content-Disposition should contain only the filename, not the path")
}
func TestGetFiles_GzipEncoding_ExplicitIdentityOffWithRange(t *testing.T) {
t.Parallel()
currentUser, err := user.Current()
require.NoError(t, err)
// Create a temp directory with a test file
tempDir := t.TempDir()
filename := "document.pdf"
tempFile := filepath.Join(tempDir, filename)
err = os.WriteFile(tempFile, []byte("test content"), 0o644)
require.NoError(t, err)
// Create test API
logger := zerolog.Nop()
defaults := &execcontext.Defaults{
EnvVars: utils.NewMap[string, string](),
User: currentUser.Username,
}
api := New(&logger, defaults, nil, false, context.Background(), nil)
// Create request and response recorder
req := httptest.NewRequest(http.MethodGet, "/files?path="+url.QueryEscape(tempFile), nil)
req.Header.Set("Accept-Encoding", "gzip; q=1,*; q=0")
req.Header.Set("Range", "bytes=0-4") // Request first 5 bytes
w := httptest.NewRecorder()
// Call the handler
params := GetFilesParams{
Path: &tempFile,
Username: &currentUser.Username,
}
api.GetFiles(w, req, params)
// Check response
resp := w.Result()
defer resp.Body.Close()
assert.Equal(t, http.StatusNotAcceptable, resp.StatusCode)
}
func TestGetFiles_GzipDownload(t *testing.T) {
t.Parallel()
currentUser, err := user.Current()
require.NoError(t, err)
originalContent := []byte("hello world, this is a test file for gzip compression")
// Create a temp file with known content
tempDir := t.TempDir()
tempFile := filepath.Join(tempDir, "test.txt")
err = os.WriteFile(tempFile, originalContent, 0o644)
require.NoError(t, err)
logger := zerolog.Nop()
defaults := &execcontext.Defaults{
EnvVars: utils.NewMap[string, string](),
User: currentUser.Username,
}
api := New(&logger, defaults, nil, false, context.Background(), nil)
req := httptest.NewRequest(http.MethodGet, "/files?path="+url.QueryEscape(tempFile), nil)
req.Header.Set("Accept-Encoding", "gzip")
w := httptest.NewRecorder()
params := GetFilesParams{
Path: &tempFile,
Username: &currentUser.Username,
}
api.GetFiles(w, req, params)
resp := w.Result()
defer resp.Body.Close()
assert.Equal(t, http.StatusOK, resp.StatusCode)
assert.Equal(t, "gzip", resp.Header.Get("Content-Encoding"))
assert.Equal(t, "text/plain; charset=utf-8", resp.Header.Get("Content-Type"))
// Decompress the gzip response body
gzReader, err := gzip.NewReader(resp.Body)
require.NoError(t, err)
defer gzReader.Close()
decompressed, err := io.ReadAll(gzReader)
require.NoError(t, err)
assert.Equal(t, originalContent, decompressed)
}
func TestPostFiles_GzipUpload(t *testing.T) {
t.Parallel()
currentUser, err := user.Current()
require.NoError(t, err)
originalContent := []byte("hello world, this is a test file uploaded with gzip")
// Build a multipart body
var multipartBuf bytes.Buffer
mpWriter := multipart.NewWriter(&multipartBuf)
part, err := mpWriter.CreateFormFile("file", "uploaded.txt")
require.NoError(t, err)
_, err = part.Write(originalContent)
require.NoError(t, err)
err = mpWriter.Close()
require.NoError(t, err)
// Gzip-compress the entire multipart body
var gzBuf bytes.Buffer
gzWriter := gzip.NewWriter(&gzBuf)
_, err = gzWriter.Write(multipartBuf.Bytes())
require.NoError(t, err)
err = gzWriter.Close()
require.NoError(t, err)
// Create test API
tempDir := t.TempDir()
destPath := filepath.Join(tempDir, "uploaded.txt")
logger := zerolog.Nop()
defaults := &execcontext.Defaults{
EnvVars: utils.NewMap[string, string](),
User: currentUser.Username,
}
api := New(&logger, defaults, nil, false, context.Background(), nil)
req := httptest.NewRequest(http.MethodPost, "/files?path="+url.QueryEscape(destPath), &gzBuf)
req.Header.Set("Content-Type", mpWriter.FormDataContentType())
req.Header.Set("Content-Encoding", "gzip")
w := httptest.NewRecorder()
params := PostFilesParams{
Path: &destPath,
Username: &currentUser.Username,
}
api.PostFiles(w, req, params)
resp := w.Result()
defer resp.Body.Close()
assert.Equal(t, http.StatusOK, resp.StatusCode)
// Verify the file was written with the original (decompressed) content
data, err := os.ReadFile(destPath)
require.NoError(t, err)
assert.Equal(t, originalContent, data)
}
func TestGzipUploadThenGzipDownload(t *testing.T) {
t.Parallel()
currentUser, err := user.Current()
require.NoError(t, err)
originalContent := []byte("round-trip gzip test: upload compressed, download compressed, verify match")
// --- Upload with gzip ---
// Build a multipart body
var multipartBuf bytes.Buffer
mpWriter := multipart.NewWriter(&multipartBuf)
part, err := mpWriter.CreateFormFile("file", "roundtrip.txt")
require.NoError(t, err)
_, err = part.Write(originalContent)
require.NoError(t, err)
err = mpWriter.Close()
require.NoError(t, err)
// Gzip-compress the entire multipart body
var gzBuf bytes.Buffer
gzWriter := gzip.NewWriter(&gzBuf)
_, err = gzWriter.Write(multipartBuf.Bytes())
require.NoError(t, err)
err = gzWriter.Close()
require.NoError(t, err)
tempDir := t.TempDir()
destPath := filepath.Join(tempDir, "roundtrip.txt")
logger := zerolog.Nop()
defaults := &execcontext.Defaults{
EnvVars: utils.NewMap[string, string](),
User: currentUser.Username,
}
api := New(&logger, defaults, nil, false, context.Background(), nil)
uploadReq := httptest.NewRequest(http.MethodPost, "/files?path="+url.QueryEscape(destPath), &gzBuf)
uploadReq.Header.Set("Content-Type", mpWriter.FormDataContentType())
uploadReq.Header.Set("Content-Encoding", "gzip")
uploadW := httptest.NewRecorder()
uploadParams := PostFilesParams{
Path: &destPath,
Username: &currentUser.Username,
}
api.PostFiles(uploadW, uploadReq, uploadParams)
uploadResp := uploadW.Result()
defer uploadResp.Body.Close()
require.Equal(t, http.StatusOK, uploadResp.StatusCode)
// --- Download with gzip ---
downloadReq := httptest.NewRequest(http.MethodGet, "/files?path="+url.QueryEscape(destPath), nil)
downloadReq.Header.Set("Accept-Encoding", "gzip")
downloadW := httptest.NewRecorder()
downloadParams := GetFilesParams{
Path: &destPath,
Username: &currentUser.Username,
}
api.GetFiles(downloadW, downloadReq, downloadParams)
downloadResp := downloadW.Result()
defer downloadResp.Body.Close()
require.Equal(t, http.StatusOK, downloadResp.StatusCode)
assert.Equal(t, "gzip", downloadResp.Header.Get("Content-Encoding"))
// Decompress and verify content matches original
gzReader, err := gzip.NewReader(downloadResp.Body)
require.NoError(t, err)
defer gzReader.Close()
decompressed, err := io.ReadAll(gzReader)
require.NoError(t, err)
assert.Equal(t, originalContent, decompressed)
}
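The "unicode filename" case near the top of this test file expects an RFC 2231/5987 `filename*=utf-8''…` parameter. As a hedged aside, the Go standard library can produce this encoding via `mime.FormatMediaType`, which percent-encodes non-ASCII parameter values. The sketch below only illustrates the header format the tests assert; it is not necessarily how the handler under test builds it.

```go
// Sketch: building a Content-Disposition value for a non-ASCII filename
// with the standard library. mime.FormatMediaType applies RFC 2231/5987
// encoding (filename*=utf-8''...) when the parameter value contains
// non-ASCII bytes. Illustration only, not the handler's actual code path.
package main

import (
	"fmt"
	"mime"
)

func contentDisposition(filename string) string {
	return mime.FormatMediaType("inline", map[string]string{"filename": filename})
}

func main() {
	fmt.Println(contentDisposition("文档.pdf"))
}
```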


@@ -0,0 +1,229 @@
// SPDX-License-Identifier: Apache-2.0
package api
import (
"compress/gzip"
"fmt"
"io"
"net/http"
"slices"
"sort"
"strconv"
"strings"
)
const (
// EncodingGzip is the gzip content encoding.
EncodingGzip = "gzip"
// EncodingIdentity means no encoding (passthrough).
EncodingIdentity = "identity"
// EncodingWildcard means any encoding is acceptable.
EncodingWildcard = "*"
)
// SupportedEncodings lists the content encodings supported for file transfer.
// Order matters: encodings are checked in order of preference.
var SupportedEncodings = []string{
EncodingGzip,
}
// encodingWithQuality holds an encoding name and its quality value.
type encodingWithQuality struct {
encoding string
quality float64
}
// isSupportedEncoding checks if the given encoding is in the supported list.
// Per RFC 7231, content-coding values are case-insensitive.
func isSupportedEncoding(encoding string) bool {
return slices.Contains(SupportedEncodings, strings.ToLower(encoding))
}
// parseEncodingWithQuality parses an encoding value and extracts the quality.
// Returns the encoding name (lowercased) and quality value (default 1.0 if not specified).
// Per RFC 7231, content-coding values are case-insensitive.
func parseEncodingWithQuality(value string) encodingWithQuality {
value = strings.TrimSpace(value)
quality := 1.0
if idx := strings.Index(value, ";"); idx != -1 {
params := value[idx+1:]
value = strings.TrimSpace(value[:idx])
// Parse q=X.X parameter
for param := range strings.SplitSeq(params, ";") {
param = strings.TrimSpace(param)
if strings.HasPrefix(strings.ToLower(param), "q=") {
if q, err := strconv.ParseFloat(param[2:], 64); err == nil {
quality = q
}
}
}
}
// Normalize encoding to lowercase per RFC 7231
return encodingWithQuality{encoding: strings.ToLower(value), quality: quality}
}
// parseEncoding extracts the encoding name from a header value, stripping quality.
func parseEncoding(value string) string {
return parseEncodingWithQuality(value).encoding
}
// parseContentEncoding parses the Content-Encoding header and returns the encoding.
// Returns an error if an unsupported encoding is specified.
// If no Content-Encoding header is present, returns empty string.
func parseContentEncoding(r *http.Request) (string, error) {
header := r.Header.Get("Content-Encoding")
if header == "" {
return EncodingIdentity, nil
}
encoding := parseEncoding(header)
if encoding == EncodingIdentity {
return EncodingIdentity, nil
}
if !isSupportedEncoding(encoding) {
return "", fmt.Errorf("unsupported Content-Encoding: %s, supported: %v", header, SupportedEncodings)
}
return encoding, nil
}
// parseAcceptEncodingHeader parses the Accept-Encoding header and returns
// the parsed encodings along with the identity rejection state.
// Per RFC 7231 Section 5.3.4, identity is acceptable unless excluded by
// "identity;q=0" or "*;q=0" without a more specific entry for identity with q>0.
func parseAcceptEncodingHeader(header string) ([]encodingWithQuality, bool) {
if header == "" {
return nil, false // identity not rejected when header is empty
}
// Parse all encodings with their quality values
var encodings []encodingWithQuality
for value := range strings.SplitSeq(header, ",") {
eq := parseEncodingWithQuality(value)
encodings = append(encodings, eq)
}
// Check if identity is rejected per RFC 7231 Section 5.3.4:
// identity is acceptable unless excluded by "identity;q=0" or "*;q=0"
// without a more specific entry for identity with q>0.
identityRejected := false
identityExplicitlyAccepted := false
wildcardRejected := false
for _, eq := range encodings {
switch eq.encoding {
case EncodingIdentity:
if eq.quality == 0 {
identityRejected = true
} else {
identityExplicitlyAccepted = true
}
case EncodingWildcard:
if eq.quality == 0 {
wildcardRejected = true
}
}
}
if wildcardRejected && !identityExplicitlyAccepted {
identityRejected = true
}
return encodings, identityRejected
}
// isIdentityAcceptable checks if identity encoding is acceptable based on the
// Accept-Encoding header. Per RFC 7231 section 5.3.4, identity is always
// implicitly acceptable unless explicitly rejected with q=0.
func isIdentityAcceptable(r *http.Request) bool {
header := r.Header.Get("Accept-Encoding")
_, identityRejected := parseAcceptEncodingHeader(header)
return !identityRejected
}
// parseAcceptEncoding parses the Accept-Encoding header and returns the best
// supported encoding based on quality values. Per RFC 7231 section 5.3.4,
// identity is always implicitly acceptable unless explicitly rejected with q=0.
// If no Accept-Encoding header is present, returns empty string (identity).
func parseAcceptEncoding(r *http.Request) (string, error) {
header := r.Header.Get("Accept-Encoding")
if header == "" {
return EncodingIdentity, nil
}
encodings, identityRejected := parseAcceptEncodingHeader(header)
// Sort by quality value (highest first); stable so that equal-quality
// encodings keep their order from the header.
sort.SliceStable(encodings, func(i, j int) bool {
return encodings[i].quality > encodings[j].quality
})
// Find the best supported encoding
for _, eq := range encodings {
// Skip encodings with q=0 (explicitly rejected)
if eq.quality == 0 {
continue
}
if eq.encoding == EncodingIdentity {
return EncodingIdentity, nil
}
// Wildcard means any encoding is acceptable; prefer a supported encoding only when identity is rejected
if eq.encoding == EncodingWildcard {
if identityRejected && len(SupportedEncodings) > 0 {
return SupportedEncodings[0], nil
}
return EncodingIdentity, nil
}
if isSupportedEncoding(eq.encoding) {
return eq.encoding, nil
}
}
// Per RFC 7231, identity is implicitly acceptable unless rejected
if !identityRejected {
return EncodingIdentity, nil
}
// Identity rejected and no supported encodings found
return "", fmt.Errorf("no acceptable encoding found, supported: %v", SupportedEncodings)
}
// getDecompressedBody returns a reader that decompresses the request body based on
// Content-Encoding header. Returns the original body if no encoding is specified.
// Returns an error if an unsupported encoding is specified.
// The caller is responsible for closing both the returned ReadCloser and the
// original request body (r.Body) separately.
func getDecompressedBody(r *http.Request) (io.ReadCloser, error) {
encoding, err := parseContentEncoding(r)
if err != nil {
return nil, err
}
if encoding == EncodingIdentity {
return r.Body, nil
}
switch encoding {
case EncodingGzip:
gzReader, err := gzip.NewReader(r.Body)
if err != nil {
return nil, fmt.Errorf("failed to create gzip reader: %w", err)
}
return gzReader, nil
default:
// Unreachable as long as SupportedEncodings and this switch stay in sync
return nil, fmt.Errorf("encoding %s is supported but not implemented", encoding)
}
}
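The request-decompression pattern that `getDecompressedBody` implements (inspect `Content-Encoding`, wrap the body in a `gzip.Reader` for gzip) can be exercised end to end with `httptest`. The sketch below is a self-contained illustration of that pattern under assumed names (`handler`, `postGzip`); it is not this package's API.

```go
// Minimal end-to-end sketch of gzip request-body decompression: the
// handler checks Content-Encoding and, for gzip, reads through a
// gzip.Reader. Illustration only; names here are hypothetical.
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

func handler(w http.ResponseWriter, r *http.Request) {
	body := io.ReadCloser(r.Body)
	if strings.EqualFold(r.Header.Get("Content-Encoding"), "gzip") {
		gz, err := gzip.NewReader(r.Body)
		if err != nil {
			http.Error(w, "bad gzip body", http.StatusBadRequest)
			return
		}
		defer gz.Close()
		body = gz
	}
	data, _ := io.ReadAll(body)
	fmt.Fprintf(w, "%d bytes", len(data))
}

// postGzip compresses content, POSTs it with Content-Encoding: gzip,
// and returns the handler's response body.
func postGzip(content string) string {
	var buf bytes.Buffer
	gw := gzip.NewWriter(&buf)
	gw.Write([]byte(content))
	gw.Close()

	req := httptest.NewRequest(http.MethodPost, "/files", &buf)
	req.Header.Set("Content-Encoding", "gzip")
	w := httptest.NewRecorder()
	handler(w, req)
	return w.Body.String()
}

func main() {
	fmt.Println(postGzip("hello gzip")) // reports the decompressed length
}
```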


@@ -0,0 +1,496 @@
// SPDX-License-Identifier: Apache-2.0
package api
import (
"bytes"
"compress/gzip"
"io"
"net/http"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestIsSupportedEncoding(t *testing.T) {
t.Parallel()
t.Run("gzip is supported", func(t *testing.T) {
t.Parallel()
assert.True(t, isSupportedEncoding("gzip"))
})
t.Run("GZIP is supported (case-insensitive)", func(t *testing.T) {
t.Parallel()
assert.True(t, isSupportedEncoding("GZIP"))
})
t.Run("Gzip is supported (case-insensitive)", func(t *testing.T) {
t.Parallel()
assert.True(t, isSupportedEncoding("Gzip"))
})
t.Run("br is not supported", func(t *testing.T) {
t.Parallel()
assert.False(t, isSupportedEncoding("br"))
})
t.Run("deflate is not supported", func(t *testing.T) {
t.Parallel()
assert.False(t, isSupportedEncoding("deflate"))
})
}
func TestParseEncodingWithQuality(t *testing.T) {
t.Parallel()
t.Run("returns encoding with default quality 1.0", func(t *testing.T) {
t.Parallel()
eq := parseEncodingWithQuality("gzip")
assert.Equal(t, "gzip", eq.encoding)
assert.InDelta(t, 1.0, eq.quality, 0.001)
})
t.Run("parses quality value", func(t *testing.T) {
t.Parallel()
eq := parseEncodingWithQuality("gzip;q=0.5")
assert.Equal(t, "gzip", eq.encoding)
assert.InDelta(t, 0.5, eq.quality, 0.001)
})
t.Run("parses quality value with whitespace", func(t *testing.T) {
t.Parallel()
eq := parseEncodingWithQuality("gzip ; q=0.8")
assert.Equal(t, "gzip", eq.encoding)
assert.InDelta(t, 0.8, eq.quality, 0.001)
})
t.Run("handles q=0", func(t *testing.T) {
t.Parallel()
eq := parseEncodingWithQuality("gzip;q=0")
assert.Equal(t, "gzip", eq.encoding)
assert.InDelta(t, 0.0, eq.quality, 0.001)
})
t.Run("handles invalid quality value", func(t *testing.T) {
t.Parallel()
eq := parseEncodingWithQuality("gzip;q=invalid")
assert.Equal(t, "gzip", eq.encoding)
assert.InDelta(t, 1.0, eq.quality, 0.001) // defaults to 1.0 on parse error
})
t.Run("trims whitespace from encoding", func(t *testing.T) {
t.Parallel()
eq := parseEncodingWithQuality(" gzip ")
assert.Equal(t, "gzip", eq.encoding)
assert.InDelta(t, 1.0, eq.quality, 0.001)
})
t.Run("normalizes encoding to lowercase", func(t *testing.T) {
t.Parallel()
eq := parseEncodingWithQuality("GZIP")
assert.Equal(t, "gzip", eq.encoding)
})
t.Run("normalizes mixed case encoding", func(t *testing.T) {
t.Parallel()
eq := parseEncodingWithQuality("Gzip;q=0.5")
assert.Equal(t, "gzip", eq.encoding)
assert.InDelta(t, 0.5, eq.quality, 0.001)
})
}
func TestParseEncoding(t *testing.T) {
t.Parallel()
t.Run("returns encoding as-is", func(t *testing.T) {
t.Parallel()
assert.Equal(t, "gzip", parseEncoding("gzip"))
})
t.Run("trims whitespace", func(t *testing.T) {
t.Parallel()
assert.Equal(t, "gzip", parseEncoding(" gzip "))
})
t.Run("strips quality value", func(t *testing.T) {
t.Parallel()
assert.Equal(t, "gzip", parseEncoding("gzip;q=1.0"))
})
t.Run("strips quality value with whitespace", func(t *testing.T) {
t.Parallel()
assert.Equal(t, "gzip", parseEncoding("gzip ; q=0.5"))
})
}
func TestParseContentEncoding(t *testing.T) {
t.Parallel()
t.Run("returns identity when no header", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", nil)
encoding, err := parseContentEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("returns gzip when Content-Encoding is gzip", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", nil)
req.Header.Set("Content-Encoding", "gzip")
encoding, err := parseContentEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
t.Run("returns gzip when Content-Encoding is GZIP (case-insensitive)", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", nil)
req.Header.Set("Content-Encoding", "GZIP")
encoding, err := parseContentEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
t.Run("returns gzip when Content-Encoding is Gzip (case-insensitive)", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", nil)
req.Header.Set("Content-Encoding", "Gzip")
encoding, err := parseContentEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
t.Run("returns identity for identity encoding", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", nil)
req.Header.Set("Content-Encoding", "identity")
encoding, err := parseContentEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("returns error for unsupported encoding", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", nil)
req.Header.Set("Content-Encoding", "br")
_, err := parseContentEncoding(req)
require.Error(t, err)
assert.Contains(t, err.Error(), "unsupported Content-Encoding")
assert.Contains(t, err.Error(), "supported: [gzip]")
})
t.Run("handles gzip with quality value", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", nil)
req.Header.Set("Content-Encoding", "gzip;q=1.0")
encoding, err := parseContentEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
}
func TestParseAcceptEncoding(t *testing.T) {
t.Parallel()
t.Run("returns identity when no header", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("returns gzip when Accept-Encoding is gzip", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "gzip")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
t.Run("returns gzip when Accept-Encoding is GZIP (case-insensitive)", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "GZIP")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
t.Run("returns gzip when gzip is among multiple encodings", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "deflate, gzip, br")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
t.Run("returns gzip with quality value", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "gzip;q=1.0")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
t.Run("returns identity for identity encoding", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "identity")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("returns identity for wildcard encoding", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "*")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("falls back to identity for unsupported encoding only", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "br")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("falls back to identity when only unsupported encodings", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "deflate, br")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("selects gzip when it has highest quality", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "br;q=0.5, gzip;q=1.0, deflate;q=0.8")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
t.Run("selects gzip even with lower quality when others unsupported", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "br;q=1.0, gzip;q=0.5")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding)
})
t.Run("returns identity when it has higher quality than gzip", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "gzip;q=0.5, identity;q=1.0")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("skips encoding with q=0", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "gzip;q=0, identity")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("falls back to identity when gzip rejected and no other supported", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "gzip;q=0, br")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("returns error when identity explicitly rejected and no supported encoding", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "br, identity;q=0")
_, err := parseAcceptEncoding(req)
require.Error(t, err)
assert.Contains(t, err.Error(), "no acceptable encoding found")
})
t.Run("returns gzip for wildcard when identity rejected", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "*, identity;q=0")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, "gzip", encoding) // wildcard with identity rejected returns supported encoding
})
t.Run("returns error when wildcard rejected and no explicit identity", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "*;q=0")
_, err := parseAcceptEncoding(req)
require.Error(t, err)
assert.Contains(t, err.Error(), "no acceptable encoding found")
})
t.Run("returns identity when wildcard rejected but identity explicitly accepted", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "*;q=0, identity")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingIdentity, encoding)
})
t.Run("returns gzip when wildcard rejected but gzip explicitly accepted", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequestWithContext(t.Context(), http.MethodGet, "/test", nil)
req.Header.Set("Accept-Encoding", "*;q=0, gzip")
encoding, err := parseAcceptEncoding(req)
require.NoError(t, err)
assert.Equal(t, EncodingGzip, encoding)
})
}
func TestGetDecompressedBody(t *testing.T) {
t.Parallel()
t.Run("returns original body when no Content-Encoding header", func(t *testing.T) {
t.Parallel()
content := []byte("test content")
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", bytes.NewReader(content))
body, err := getDecompressedBody(req)
require.NoError(t, err)
assert.Equal(t, req.Body, body, "should return original body")
data, err := io.ReadAll(body)
require.NoError(t, err)
assert.Equal(t, content, data)
})
t.Run("decompresses gzip body when Content-Encoding is gzip", func(t *testing.T) {
t.Parallel()
originalContent := []byte("test content to compress")
var compressed bytes.Buffer
gw := gzip.NewWriter(&compressed)
_, err := gw.Write(originalContent)
require.NoError(t, err)
err = gw.Close()
require.NoError(t, err)
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", bytes.NewReader(compressed.Bytes()))
req.Header.Set("Content-Encoding", "gzip")
body, err := getDecompressedBody(req)
require.NoError(t, err)
defer body.Close()
assert.NotEqual(t, req.Body, body, "should return a new gzip reader")
data, err := io.ReadAll(body)
require.NoError(t, err)
assert.Equal(t, originalContent, data)
})
t.Run("returns error for invalid gzip data", func(t *testing.T) {
t.Parallel()
invalidGzip := []byte("this is not gzip data")
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", bytes.NewReader(invalidGzip))
req.Header.Set("Content-Encoding", "gzip")
_, err := getDecompressedBody(req)
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to create gzip reader")
})
t.Run("returns original body for identity encoding", func(t *testing.T) {
t.Parallel()
content := []byte("test content")
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", bytes.NewReader(content))
req.Header.Set("Content-Encoding", "identity")
body, err := getDecompressedBody(req)
require.NoError(t, err)
assert.Equal(t, req.Body, body, "should return original body")
data, err := io.ReadAll(body)
require.NoError(t, err)
assert.Equal(t, content, data)
})
t.Run("returns error for unsupported encoding", func(t *testing.T) {
t.Parallel()
content := []byte("test content")
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", bytes.NewReader(content))
req.Header.Set("Content-Encoding", "br")
_, err := getDecompressedBody(req)
require.Error(t, err)
assert.Contains(t, err.Error(), "unsupported Content-Encoding")
})
t.Run("handles gzip with quality value", func(t *testing.T) {
t.Parallel()
originalContent := []byte("test content to compress")
var compressed bytes.Buffer
gw := gzip.NewWriter(&compressed)
_, err := gw.Write(originalContent)
require.NoError(t, err)
err = gw.Close()
require.NoError(t, err)
req, _ := http.NewRequestWithContext(t.Context(), http.MethodPost, "/test", bytes.NewReader(compressed.Bytes()))
req.Header.Set("Content-Encoding", "gzip;q=1.0")
body, err := getDecompressedBody(req)
require.NoError(t, err)
defer body.Close()
data, err := io.ReadAll(body)
require.NoError(t, err)
assert.Equal(t, originalContent, data)
})
}

31
envd/internal/api/envs.go Normal file

@@ -0,0 +1,31 @@
// SPDX-License-Identifier: Apache-2.0
package api
import (
"encoding/json"
"net/http"
"git.omukk.dev/wrenn/sandbox/envd/internal/logs"
)
func (a *API) GetEnvs(w http.ResponseWriter, _ *http.Request) {
operationID := logs.AssignOperationID()
a.logger.Debug().Str(string(logs.OperationIDKey), operationID).Msg("Getting env vars")
envs := make(EnvVars)
a.defaults.EnvVars.Range(func(key, value string) bool {
envs[key] = value
return true
})
w.Header().Set("Cache-Control", "no-store")
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
if err := json.NewEncoder(w).Encode(envs); err != nil {
a.logger.Error().Err(err).Str(string(logs.OperationIDKey), operationID).Msg("Failed to encode env vars")
}
}


@@ -0,0 +1,23 @@
// SPDX-License-Identifier: Apache-2.0
package api
import (
"encoding/json"
"errors"
"net/http"
)
func jsonError(w http.ResponseWriter, code int, err error) {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.Header().Set("X-Content-Type-Options", "nosniff")
w.WriteHeader(code)
encodeErr := json.NewEncoder(w).Encode(Error{
Code: code,
Message: err.Error(),
})
if encodeErr != nil {
// The status code and headers are already written at this point, so
// the fallback can only append a plain-text body describing both errors.
http.Error(w, errors.Join(encodeErr, err).Error(), http.StatusInternalServerError)
}
}


@@ -0,0 +1,5 @@
// SPDX-License-Identifier: Apache-2.0
package api
//go:generate go run github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen -config cfg.yaml ../../spec/envd.yaml

296
envd/internal/api/init.go Normal file

@@ -0,0 +1,296 @@
// SPDX-License-Identifier: Apache-2.0
// Modifications by M/S Omukk
package api
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"net/netip"
"os/exec"
"time"
"git.omukk.dev/wrenn/sandbox/envd/internal/host"
"git.omukk.dev/wrenn/sandbox/envd/internal/logs"
"git.omukk.dev/wrenn/sandbox/envd/internal/shared/keys"
"github.com/awnumar/memguard"
"github.com/rs/zerolog"
"github.com/txn2/txeh"
)
var (
ErrAccessTokenMismatch = errors.New("access token validation failed")
ErrAccessTokenResetNotAuthorized = errors.New("access token reset not authorized")
)
// validateInitAccessToken validates the access token for /init requests.
// Token is valid if it matches the existing token OR the MMDS hash.
// If neither exists, first-time setup is allowed.
func (a *API) validateInitAccessToken(ctx context.Context, requestToken *SecureToken) error {
requestTokenSet := requestToken.IsSet()
// Fast path: token matches existing
if a.accessToken.IsSet() && requestTokenSet && a.accessToken.EqualsSecure(requestToken) {
return nil
}
// Check MMDS only if token didn't match existing
matchesMMDS, mmdsExists := a.checkMMDSHash(ctx, requestToken)
switch {
case matchesMMDS:
return nil
case !a.accessToken.IsSet() && !mmdsExists:
return nil // first-time setup
case !requestTokenSet:
return ErrAccessTokenResetNotAuthorized
default:
return ErrAccessTokenMismatch
}
}
// checkMMDSHash checks if the request token matches the MMDS hash.
// Returns (matches, mmdsExists).
//
// The MMDS hash is set by the orchestrator during Resume:
// - hash(token): requires this specific token
// - hash(""): explicitly allows nil token (token reset authorized)
// - "": MMDS not properly configured, no authorization granted
func (a *API) checkMMDSHash(ctx context.Context, requestToken *SecureToken) (bool, bool) {
if a.isNotFC {
return false, false
}
mmdsHash, err := a.mmdsClient.GetAccessTokenHash(ctx)
if err != nil {
return false, false
}
if mmdsHash == "" {
return false, false
}
if !requestToken.IsSet() {
return mmdsHash == keys.HashAccessToken(""), true
}
tokenBytes, err := requestToken.Bytes()
if err != nil {
return false, true
}
defer memguard.WipeBytes(tokenBytes)
return keys.HashAccessTokenBytes(tokenBytes) == mmdsHash, true
}
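The three-state MMDS contract above (hash(token) requires that token, hash("") authorizes a reset, "" grants nothing) can be sketched standalone. This is an illustration only: `hashToken` is a local SHA-256 stand-in for `keys.HashAccessToken`, whose real definition lives in `internal/shared/keys` and may differ.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashToken is an assumed stand-in for keys.HashAccessToken.
func hashToken(token string) string {
	sum := sha256.Sum256([]byte(token))
	return hex.EncodeToString(sum[:])
}

// checkHash mirrors the shape of checkMMDSHash: (matches, mmdsExists).
func checkHash(mmdsHash string, requestToken *string) (bool, bool) {
	if mmdsHash == "" {
		return false, false // MMDS not configured: no authorization granted
	}
	if requestToken == nil {
		// hash("") explicitly authorizes clearing the token
		return mmdsHash == hashToken(""), true
	}
	return hashToken(*requestToken) == mmdsHash, true
}

func main() {
	token := "new-token"
	m, e := checkHash(hashToken(token), &token)
	fmt.Println(m, e) // true true

	m, e = checkHash(hashToken(""), nil)
	fmt.Println(m, e) // true true: explicit reset authorized

	m, e = checkHash("", nil)
	fmt.Println(m, e) // false false
}
```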
func (a *API) PostInit(w http.ResponseWriter, r *http.Request) {
defer r.Body.Close()
ctx := r.Context()
operationID := logs.AssignOperationID()
logger := a.logger.With().Str(string(logs.OperationIDKey), operationID).Logger()
if r.Body != nil {
// Read raw body so we can wipe it after parsing
body, err := io.ReadAll(r.Body)
// Ensure body is wiped after we're done
defer memguard.WipeBytes(body)
if err != nil {
logger.Error().Msgf("Failed to read request body: %v", err)
w.WriteHeader(http.StatusBadRequest)
return
}
var initRequest PostInitJSONBody
if len(body) > 0 {
err = json.Unmarshal(body, &initRequest)
if err != nil {
logger.Error().Msgf("Failed to decode request: %v", err)
w.WriteHeader(http.StatusBadRequest)
return
}
}
// Ensure request token is destroyed if not transferred via TakeFrom.
// This handles: validation failures, timestamp-based skips, and any early returns.
// Safe because Destroy() is nil-safe and TakeFrom clears the source.
defer initRequest.AccessToken.Destroy()
a.initLock.Lock()
defer a.initLock.Unlock()
// Update data only if the request is newer or if there's no timestamp at all
if initRequest.Timestamp == nil || a.lastSetTime.SetToGreater(initRequest.Timestamp.UnixNano()) {
err = a.SetData(ctx, logger, initRequest)
if err != nil {
switch {
case errors.Is(err, ErrAccessTokenMismatch), errors.Is(err, ErrAccessTokenResetNotAuthorized):
w.WriteHeader(http.StatusUnauthorized)
default:
logger.Error().Msgf("Failed to set data: %v", err)
w.WriteHeader(http.StatusBadRequest)
}
w.Write([]byte(err.Error()))
return
}
}
}
go func() { //nolint:contextcheck // TODO: fix this later
ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
host.PollForMMDSOpts(ctx, a.mmdsChan, a.defaults.EnvVars)
}()
// Start the port scanner and forwarder if they were stopped by a
// pre-snapshot prepare call. Start is a no-op if already running,
// so this is safe on first boot and only takes effect after restore.
if a.portSubsystem != nil {
a.portSubsystem.Start(a.rootCtx)
}
w.Header().Set("Cache-Control", "no-store")
w.Header().Set("Content-Type", "")
w.WriteHeader(http.StatusNoContent)
}
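The `a.lastSetTime.SetToGreater(...)` guard above makes re-inits last-writer-wins: an update is applied only if its timestamp is strictly newer than every previously accepted one. A minimal sketch of that gate, assuming a CAS-loop implementation (the real `utils` type may differ):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type monotonic struct{ last atomic.Int64 }

// SetToGreater records ts and returns true only if ts is newer than
// every previously accepted timestamp; stale updates are rejected.
func (m *monotonic) SetToGreater(ts int64) bool {
	for {
		cur := m.last.Load()
		if ts <= cur {
			return false // stale or duplicate init request: skip the update
		}
		if m.last.CompareAndSwap(cur, ts) {
			return true
		}
		// lost a race with a concurrent caller; re-read and retry
	}
}

func main() {
	var m monotonic
	fmt.Println(m.SetToGreater(100)) // true
	fmt.Println(m.SetToGreater(50))  // false: older request ignored
	fmt.Println(m.SetToGreater(200)) // true
}
```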
func (a *API) SetData(ctx context.Context, logger zerolog.Logger, data PostInitJSONBody) error {
// Validate access token before proceeding with any action
// The request must provide a token that is either:
// 1. Matches the existing access token (if set), OR
// 2. Matches the MMDS hash (for token change during resume)
if err := a.validateInitAccessToken(ctx, data.AccessToken); err != nil {
return err
}
if data.EnvVars != nil {
logger.Debug().Msg(fmt.Sprintf("Setting %d env vars", len(*data.EnvVars)))
for key, value := range *data.EnvVars {
logger.Debug().Msgf("Setting env var for %s", key)
a.defaults.EnvVars.Store(key, value)
}
}
if data.AccessToken.IsSet() {
logger.Debug().Msg("Setting access token")
a.accessToken.TakeFrom(data.AccessToken)
} else if a.accessToken.IsSet() {
logger.Debug().Msg("Clearing access token")
a.accessToken.Destroy()
}
if data.HyperloopIP != nil {
go a.SetupHyperloop(*data.HyperloopIP)
}
if data.DefaultUser != nil && *data.DefaultUser != "" {
logger.Debug().Msgf("Setting default user to: %s", *data.DefaultUser)
a.defaults.User = *data.DefaultUser
}
if data.DefaultWorkdir != nil && *data.DefaultWorkdir != "" {
logger.Debug().Msgf("Setting default workdir to: %s", *data.DefaultWorkdir)
a.defaults.Workdir = data.DefaultWorkdir
}
if data.VolumeMounts != nil {
for _, volume := range *data.VolumeMounts {
logger.Debug().Msgf("Mounting %s at %q", volume.NfsTarget, volume.Path)
go a.setupNfs(context.WithoutCancel(ctx), volume.NfsTarget, volume.Path)
}
}
return nil
}
func (a *API) setupNfs(ctx context.Context, nfsTarget, path string) {
commands := [][]string{
{"mkdir", "-p", path},
{"mount", "-v", "-t", "nfs", "-o", "mountproto=tcp,mountport=2049,proto=tcp,port=2049,nfsvers=3,noacl", nfsTarget, path},
}
for _, command := range commands {
data, err := exec.CommandContext(ctx, command[0], command[1:]...).CombinedOutput()
logger := a.getLogger(err)
logger.
Strs("command", command).
Str("output", string(data)).
Msg("Mount NFS")
if err != nil {
return
}
}
}
func (a *API) SetupHyperloop(address string) {
a.hyperloopLock.Lock()
defer a.hyperloopLock.Unlock()
if err := rewriteHostsFile(address, "/etc/hosts"); err != nil {
a.logger.Error().Err(err).Msg("failed to modify hosts file")
} else {
a.defaults.EnvVars.Store("WRENN_EVENTS_ADDRESS", fmt.Sprintf("http://%s", address))
}
}
const eventsHost = "events.wrenn.local"
func rewriteHostsFile(address, path string) error {
hosts, err := txeh.NewHosts(&txeh.HostsConfig{
ReadFilePath: path,
WriteFilePath: path,
})
if err != nil {
return fmt.Errorf("failed to create hosts: %w", err)
}
// Update /etc/hosts to point events.wrenn.local to the hyperloop IP
// This will remove any existing entries for events.wrenn.local first
ipFamily, err := getIPFamily(address)
if err != nil {
return fmt.Errorf("failed to get ip family: %w", err)
}
if ok, current, _ := hosts.HostAddressLookup(eventsHost, ipFamily); ok && current == address {
return nil // nothing to be done
}
hosts.AddHost(address, eventsHost)
return hosts.Save()
}
var (
ErrInvalidAddress = errors.New("invalid IP address")
ErrUnknownAddressFormat = errors.New("unknown IP address format")
)
func getIPFamily(address string) (txeh.IPFamily, error) {
addressIP, err := netip.ParseAddr(address)
if err != nil {
return txeh.IPFamilyV4, fmt.Errorf("%w: failed to parse IP address: %w", ErrInvalidAddress, err)
}
switch {
case addressIP.Is4():
return txeh.IPFamilyV4, nil
case addressIP.Is6():
return txeh.IPFamilyV6, nil
default:
return txeh.IPFamilyV4, fmt.Errorf("%w: %s", ErrUnknownAddressFormat, address)
}
}
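The family split above relies on `netip.Addr.Is4`/`Is6`. One subtlety worth noting: an IPv4-mapped IPv6 literal such as `::ffff:10.0.0.1` parses successfully but reports `Is4() == false`, so it takes the V6 branch. A standalone demonstration:

```go
package main

import (
	"fmt"
	"net/netip"
)

// family classifies an address string the same way getIPFamily does.
func family(address string) (string, error) {
	addr, err := netip.ParseAddr(address)
	if err != nil {
		return "", err
	}
	if addr.Is4() {
		return "v4", nil
	}
	// Is6 is true for plain IPv6 and for IPv4-mapped (4-in-6) addresses.
	return "v6", nil
}

func main() {
	for _, a := range []string{"10.0.0.1", "fd00::1", "::ffff:10.0.0.1", "not-an-ip"} {
		f, err := family(a)
		fmt.Println(a, f, err)
	}
}
```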


@ -0,0 +1,524 @@
// SPDX-License-Identifier: Apache-2.0
// Modifications by M/S Omukk
package api
import (
"context"
"os"
"path/filepath"
"strings"
"testing"
"github.com/rs/zerolog"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"git.omukk.dev/wrenn/sandbox/envd/internal/execcontext"
"git.omukk.dev/wrenn/sandbox/envd/internal/shared/keys"
utilsShared "git.omukk.dev/wrenn/sandbox/envd/internal/shared/utils"
"git.omukk.dev/wrenn/sandbox/envd/internal/utils"
)
func TestSimpleCases(t *testing.T) {
t.Parallel()
testCases := map[string]func(string) string{
"both newlines": func(s string) string { return s },
"no newline prefix": func(s string) string { return strings.TrimPrefix(s, "\n") },
"no newline suffix": func(s string) string { return strings.TrimSuffix(s, "\n") },
"no newline prefix or suffix": strings.TrimSpace,
}
for name, preprocessor := range testCases {
t.Run(name, func(t *testing.T) {
t.Parallel()
tempDir := t.TempDir()
value := `
# comment
127.0.0.1 one.host
127.0.0.2 two.host
`
value = preprocessor(value)
inputPath := filepath.Join(tempDir, "hosts")
err := os.WriteFile(inputPath, []byte(value), 0o644)
require.NoError(t, err)
err = rewriteHostsFile("127.0.0.3", inputPath)
require.NoError(t, err)
data, err := os.ReadFile(inputPath)
require.NoError(t, err)
assert.Equal(t, `# comment
127.0.0.1 one.host
127.0.0.2 two.host
127.0.0.3 events.wrenn.local`, strings.TrimSpace(string(data)))
})
}
}
func secureTokenPtr(s string) *SecureToken {
token := &SecureToken{}
_ = token.Set([]byte(s))
return token
}
type mockMMDSClient struct {
hash string
err error
}
func (m *mockMMDSClient) GetAccessTokenHash(_ context.Context) (string, error) {
return m.hash, m.err
}
func newTestAPI(accessToken *SecureToken, mmdsClient MMDSClient) *API {
logger := zerolog.Nop()
defaults := &execcontext.Defaults{
EnvVars: utils.NewMap[string, string](),
}
api := New(&logger, defaults, nil, false, context.Background(), nil)
if accessToken != nil {
api.accessToken.TakeFrom(accessToken)
}
api.mmdsClient = mmdsClient
return api
}
func TestValidateInitAccessToken(t *testing.T) {
t.Parallel()
ctx := t.Context()
tests := []struct {
name string
accessToken *SecureToken
requestToken *SecureToken
mmdsHash string
mmdsErr error
wantErr error
}{
{
name: "fast path: token matches existing",
accessToken: secureTokenPtr("secret-token"),
requestToken: secureTokenPtr("secret-token"),
mmdsHash: "",
mmdsErr: nil,
wantErr: nil,
},
{
name: "MMDS match: token hash matches MMDS hash",
accessToken: secureTokenPtr("old-token"),
requestToken: secureTokenPtr("new-token"),
mmdsHash: keys.HashAccessToken("new-token"),
mmdsErr: nil,
wantErr: nil,
},
{
name: "first-time setup: no existing token, MMDS error",
accessToken: nil,
requestToken: secureTokenPtr("new-token"),
mmdsHash: "",
mmdsErr: assert.AnError,
wantErr: nil,
},
{
name: "first-time setup: no existing token, empty MMDS hash",
accessToken: nil,
requestToken: secureTokenPtr("new-token"),
mmdsHash: "",
mmdsErr: nil,
wantErr: nil,
},
{
name: "first-time setup: both tokens nil, no MMDS",
accessToken: nil,
requestToken: nil,
mmdsHash: "",
mmdsErr: assert.AnError,
wantErr: nil,
},
{
name: "mismatch: existing token differs from request, no MMDS",
accessToken: secureTokenPtr("existing-token"),
requestToken: secureTokenPtr("wrong-token"),
mmdsHash: "",
mmdsErr: assert.AnError,
wantErr: ErrAccessTokenMismatch,
},
{
name: "mismatch: existing token differs from request, MMDS hash mismatch",
accessToken: secureTokenPtr("existing-token"),
requestToken: secureTokenPtr("wrong-token"),
mmdsHash: keys.HashAccessToken("different-token"),
mmdsErr: nil,
wantErr: ErrAccessTokenMismatch,
},
{
name: "conflict: existing token, nil request, MMDS exists",
accessToken: secureTokenPtr("existing-token"),
requestToken: nil,
mmdsHash: keys.HashAccessToken("some-token"),
mmdsErr: nil,
wantErr: ErrAccessTokenResetNotAuthorized,
},
{
name: "conflict: existing token, nil request, no MMDS",
accessToken: secureTokenPtr("existing-token"),
requestToken: nil,
mmdsHash: "",
mmdsErr: assert.AnError,
wantErr: ErrAccessTokenResetNotAuthorized,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: tt.mmdsHash, err: tt.mmdsErr}
api := newTestAPI(tt.accessToken, mmdsClient)
err := api.validateInitAccessToken(ctx, tt.requestToken)
if tt.wantErr != nil {
require.Error(t, err)
assert.ErrorIs(t, err, tt.wantErr)
} else {
require.NoError(t, err)
}
})
}
}
func TestCheckMMDSHash(t *testing.T) {
t.Parallel()
ctx := t.Context()
t.Run("returns match when token hash equals MMDS hash", func(t *testing.T) {
t.Parallel()
token := "my-secret-token"
mmdsClient := &mockMMDSClient{hash: keys.HashAccessToken(token), err: nil}
api := newTestAPI(nil, mmdsClient)
matches, exists := api.checkMMDSHash(ctx, secureTokenPtr(token))
assert.True(t, matches)
assert.True(t, exists)
})
t.Run("returns no match when token hash differs from MMDS hash", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: keys.HashAccessToken("different-token"), err: nil}
api := newTestAPI(nil, mmdsClient)
matches, exists := api.checkMMDSHash(ctx, secureTokenPtr("my-token"))
assert.False(t, matches)
assert.True(t, exists)
})
t.Run("returns exists but no match when request token is nil", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: keys.HashAccessToken("some-token"), err: nil}
api := newTestAPI(nil, mmdsClient)
matches, exists := api.checkMMDSHash(ctx, nil)
assert.False(t, matches)
assert.True(t, exists)
})
t.Run("returns false, false when MMDS returns error", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: "", err: assert.AnError}
api := newTestAPI(nil, mmdsClient)
matches, exists := api.checkMMDSHash(ctx, secureTokenPtr("any-token"))
assert.False(t, matches)
assert.False(t, exists)
})
t.Run("returns false, false when MMDS returns empty hash with non-nil request", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: "", err: nil}
api := newTestAPI(nil, mmdsClient)
matches, exists := api.checkMMDSHash(ctx, secureTokenPtr("any-token"))
assert.False(t, matches)
assert.False(t, exists)
})
t.Run("returns false, false when MMDS returns empty hash with nil request", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: "", err: nil}
api := newTestAPI(nil, mmdsClient)
matches, exists := api.checkMMDSHash(ctx, nil)
assert.False(t, matches)
assert.False(t, exists)
})
t.Run("returns true, true when MMDS returns hash of empty string with nil request (explicit reset)", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: keys.HashAccessToken(""), err: nil}
api := newTestAPI(nil, mmdsClient)
matches, exists := api.checkMMDSHash(ctx, nil)
assert.True(t, matches)
assert.True(t, exists)
})
}
func TestSetData(t *testing.T) {
t.Parallel()
ctx := context.Background()
logger := zerolog.Nop()
t.Run("access token updates", func(t *testing.T) {
t.Parallel()
tests := []struct {
name string
existingToken *SecureToken
requestToken *SecureToken
mmdsHash string
mmdsErr error
wantErr error
wantFinalToken *SecureToken
}{
{
name: "first-time setup: sets initial token",
existingToken: nil,
requestToken: secureTokenPtr("initial-token"),
mmdsHash: "",
mmdsErr: assert.AnError,
wantErr: nil,
wantFinalToken: secureTokenPtr("initial-token"),
},
{
name: "first-time setup: nil request token leaves token unset",
existingToken: nil,
requestToken: nil,
mmdsHash: "",
mmdsErr: assert.AnError,
wantErr: nil,
wantFinalToken: nil,
},
{
name: "re-init with same token: token unchanged",
existingToken: secureTokenPtr("same-token"),
requestToken: secureTokenPtr("same-token"),
mmdsHash: "",
mmdsErr: assert.AnError,
wantErr: nil,
wantFinalToken: secureTokenPtr("same-token"),
},
{
name: "resume with MMDS: updates token when hash matches",
existingToken: secureTokenPtr("old-token"),
requestToken: secureTokenPtr("new-token"),
mmdsHash: keys.HashAccessToken("new-token"),
mmdsErr: nil,
wantErr: nil,
wantFinalToken: secureTokenPtr("new-token"),
},
{
name: "resume with MMDS: fails when hash doesn't match",
existingToken: secureTokenPtr("old-token"),
requestToken: secureTokenPtr("new-token"),
mmdsHash: keys.HashAccessToken("different-token"),
mmdsErr: nil,
wantErr: ErrAccessTokenMismatch,
wantFinalToken: secureTokenPtr("old-token"),
},
{
name: "fails when existing token and request token mismatch without MMDS",
existingToken: secureTokenPtr("existing-token"),
requestToken: secureTokenPtr("wrong-token"),
mmdsHash: "",
mmdsErr: assert.AnError,
wantErr: ErrAccessTokenMismatch,
wantFinalToken: secureTokenPtr("existing-token"),
},
{
name: "conflict when existing token but nil request token",
existingToken: secureTokenPtr("existing-token"),
requestToken: nil,
mmdsHash: "",
mmdsErr: assert.AnError,
wantErr: ErrAccessTokenResetNotAuthorized,
wantFinalToken: secureTokenPtr("existing-token"),
},
{
name: "conflict when existing token but nil request with MMDS present",
existingToken: secureTokenPtr("existing-token"),
requestToken: nil,
mmdsHash: keys.HashAccessToken("some-token"),
mmdsErr: nil,
wantErr: ErrAccessTokenResetNotAuthorized,
wantFinalToken: secureTokenPtr("existing-token"),
},
{
name: "conflict when MMDS returns empty hash and request is nil (prevents unauthorized reset)",
existingToken: secureTokenPtr("existing-token"),
requestToken: nil,
mmdsHash: "",
mmdsErr: nil,
wantErr: ErrAccessTokenResetNotAuthorized,
wantFinalToken: secureTokenPtr("existing-token"),
},
{
name: "resets token when MMDS returns hash of empty string and request is nil (explicit reset)",
existingToken: secureTokenPtr("existing-token"),
requestToken: nil,
mmdsHash: keys.HashAccessToken(""),
mmdsErr: nil,
wantErr: nil,
wantFinalToken: nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: tt.mmdsHash, err: tt.mmdsErr}
api := newTestAPI(tt.existingToken, mmdsClient)
data := PostInitJSONBody{
AccessToken: tt.requestToken,
}
err := api.SetData(ctx, logger, data)
if tt.wantErr != nil {
require.ErrorIs(t, err, tt.wantErr)
} else {
require.NoError(t, err)
}
if tt.wantFinalToken == nil {
assert.False(t, api.accessToken.IsSet(), "expected token to not be set")
} else {
require.True(t, api.accessToken.IsSet(), "expected token to be set")
assert.True(t, api.accessToken.EqualsSecure(tt.wantFinalToken), "expected token to match")
}
})
}
})
t.Run("sets environment variables", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: "", err: assert.AnError}
api := newTestAPI(nil, mmdsClient)
envVars := EnvVars{"FOO": "bar", "BAZ": "qux"}
data := PostInitJSONBody{
EnvVars: &envVars,
}
err := api.SetData(ctx, logger, data)
require.NoError(t, err)
val, ok := api.defaults.EnvVars.Load("FOO")
assert.True(t, ok)
assert.Equal(t, "bar", val)
val, ok = api.defaults.EnvVars.Load("BAZ")
assert.True(t, ok)
assert.Equal(t, "qux", val)
})
t.Run("sets default user", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: "", err: assert.AnError}
api := newTestAPI(nil, mmdsClient)
data := PostInitJSONBody{
DefaultUser: utilsShared.ToPtr("testuser"),
}
err := api.SetData(ctx, logger, data)
require.NoError(t, err)
assert.Equal(t, "testuser", api.defaults.User)
})
t.Run("does not set default user when empty", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: "", err: assert.AnError}
api := newTestAPI(nil, mmdsClient)
api.defaults.User = "original"
data := PostInitJSONBody{
DefaultUser: utilsShared.ToPtr(""),
}
err := api.SetData(ctx, logger, data)
require.NoError(t, err)
assert.Equal(t, "original", api.defaults.User)
})
t.Run("sets default workdir", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: "", err: assert.AnError}
api := newTestAPI(nil, mmdsClient)
data := PostInitJSONBody{
DefaultWorkdir: utilsShared.ToPtr("/home/user"),
}
err := api.SetData(ctx, logger, data)
require.NoError(t, err)
require.NotNil(t, api.defaults.Workdir)
assert.Equal(t, "/home/user", *api.defaults.Workdir)
})
t.Run("does not set default workdir when empty", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: "", err: assert.AnError}
api := newTestAPI(nil, mmdsClient)
originalWorkdir := "/original"
api.defaults.Workdir = &originalWorkdir
data := PostInitJSONBody{
DefaultWorkdir: utilsShared.ToPtr(""),
}
err := api.SetData(ctx, logger, data)
require.NoError(t, err)
require.NotNil(t, api.defaults.Workdir)
assert.Equal(t, "/original", *api.defaults.Workdir)
})
t.Run("sets multiple fields at once", func(t *testing.T) {
t.Parallel()
mmdsClient := &mockMMDSClient{hash: "", err: assert.AnError}
api := newTestAPI(nil, mmdsClient)
envVars := EnvVars{"KEY": "value"}
data := PostInitJSONBody{
AccessToken: secureTokenPtr("token"),
DefaultUser: utilsShared.ToPtr("user"),
DefaultWorkdir: utilsShared.ToPtr("/workdir"),
EnvVars: &envVars,
}
err := api.SetData(ctx, logger, data)
require.NoError(t, err)
assert.True(t, api.accessToken.Equals("token"), "expected token to match")
assert.Equal(t, "user", api.defaults.User)
assert.Equal(t, "/workdir", *api.defaults.Workdir)
val, ok := api.defaults.EnvVars.Load("KEY")
assert.True(t, ok)
assert.Equal(t, "value", val)
})
}


@ -0,0 +1,214 @@
// SPDX-License-Identifier: Apache-2.0
package api
import (
"bytes"
"errors"
"sync"
"github.com/awnumar/memguard"
)
var (
ErrTokenNotSet = errors.New("access token not set")
ErrTokenEmpty = errors.New("empty token not allowed")
)
// SecureToken wraps memguard for secure token storage.
// It uses LockedBuffer which provides memory locking, guard pages,
// and secure zeroing on destroy.
type SecureToken struct {
mu sync.RWMutex
buffer *memguard.LockedBuffer
}
// Set securely replaces the token, destroying the old one first.
// The old token memory is zeroed before the new token is stored.
// The input byte slice is wiped after copying to secure memory.
// Returns ErrTokenEmpty if token is empty - use Destroy() to clear the token instead.
func (s *SecureToken) Set(token []byte) error {
if len(token) == 0 {
return ErrTokenEmpty
}
s.mu.Lock()
defer s.mu.Unlock()
// Destroy old token first (zeros memory)
if s.buffer != nil {
s.buffer.Destroy()
s.buffer = nil
}
// Create new LockedBuffer from bytes (source slice is wiped by memguard)
s.buffer = memguard.NewBufferFromBytes(token)
return nil
}
// UnmarshalJSON implements json.Unmarshaler to securely parse a JSON string
// directly into memguard, wiping the input bytes after copying.
//
// Access tokens are hex-encoded HMAC-SHA256 hashes (64 chars of [0-9a-f]),
// so they never contain JSON escape sequences.
func (s *SecureToken) UnmarshalJSON(data []byte) error {
// JSON strings are quoted, so minimum valid is `""` (2 bytes).
if len(data) < 2 || data[0] != '"' || data[len(data)-1] != '"' {
memguard.WipeBytes(data)
return errors.New("invalid secure token JSON string")
}
content := data[1 : len(data)-1]
// Access tokens are hex strings, so reject any content containing a backslash
if bytes.ContainsRune(content, '\\') {
memguard.WipeBytes(data)
return errors.New("invalid secure token: unexpected escape sequence")
}
if len(content) == 0 {
memguard.WipeBytes(data)
return ErrTokenEmpty
}
s.mu.Lock()
defer s.mu.Unlock()
if s.buffer != nil {
s.buffer.Destroy()
s.buffer = nil
}
// Allocate secure buffer and copy directly into it
s.buffer = memguard.NewBuffer(len(content))
copy(s.buffer.Bytes(), content)
// Wipe the input data
memguard.WipeBytes(data)
return nil
}
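The comment above notes that access tokens are 64 lowercase hex characters. UnmarshalJSON itself only rejects escape sequences; a stricter shape check (an illustration, not what the code enforces) could look like:

```go
package main

import (
	"errors"
	"fmt"
)

// validateHexToken checks the 64-char lowercase-hex shape directly.
func validateHexToken(content []byte) error {
	if len(content) != 64 {
		return errors.New("token must be 64 hex characters")
	}
	for _, c := range content {
		if (c < '0' || c > '9') && (c < 'a' || c > 'f') {
			return errors.New("token must be lowercase hex")
		}
	}
	return nil
}

func main() {
	ok := []byte("9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08")
	fmt.Println(validateHexToken(ok))              // <nil>
	fmt.Println(validateHexToken([]byte("short"))) // error
}
```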
// TakeFrom transfers the token from src to this SecureToken, destroying any
// existing token. The source token is cleared after transfer.
// This avoids copying the underlying bytes.
func (s *SecureToken) TakeFrom(src *SecureToken) {
if src == nil || s == src {
return
}
// Extract buffer from source
src.mu.Lock()
buffer := src.buffer
src.buffer = nil
src.mu.Unlock()
// Install buffer in destination
s.mu.Lock()
if s.buffer != nil {
s.buffer.Destroy()
}
s.buffer = buffer
s.mu.Unlock()
}
// Equals checks if token matches using constant-time comparison.
// Returns false if the receiver is nil.
func (s *SecureToken) Equals(token string) bool {
if s == nil {
return false
}
s.mu.RLock()
defer s.mu.RUnlock()
if s.buffer == nil || !s.buffer.IsAlive() {
return false
}
return s.buffer.EqualTo([]byte(token))
}
// EqualsSecure compares this token with another SecureToken using constant-time comparison.
// Returns false if either receiver or other is nil.
func (s *SecureToken) EqualsSecure(other *SecureToken) bool {
if s == nil || other == nil {
return false
}
if s == other {
return s.IsSet()
}
// Get a copy of other's bytes (avoids holding two locks simultaneously)
otherBytes, err := other.Bytes()
if err != nil {
return false
}
defer memguard.WipeBytes(otherBytes)
s.mu.RLock()
defer s.mu.RUnlock()
if s.buffer == nil || !s.buffer.IsAlive() {
return false
}
return s.buffer.EqualTo(otherBytes)
}
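Equals and EqualsSecure get their constant-time guarantee from memguard's EqualTo. The same property is available in the standard library via crypto/subtle, e.g. for comparing token copies already outside memguard:

```go
package main

import (
	"crypto/subtle"
	"fmt"
)

// tokensEqual compares two byte slices in constant time.
func tokensEqual(a, b []byte) bool {
	// ConstantTimeCompare returns 1 only for equal-length, equal-content
	// slices, taking time that depends only on the length.
	return subtle.ConstantTimeCompare(a, b) == 1
}

func main() {
	fmt.Println(tokensEqual([]byte("secret"), []byte("secret"))) // true
	fmt.Println(tokensEqual([]byte("secret"), []byte("Secret"))) // false
}
```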
// IsSet returns true if a token is stored.
// Returns false if the receiver is nil.
func (s *SecureToken) IsSet() bool {
if s == nil {
return false
}
s.mu.RLock()
defer s.mu.RUnlock()
return s.buffer != nil && s.buffer.IsAlive()
}
// Bytes returns a copy of the token bytes (for signature generation).
// The caller should zero the returned slice after use.
// Returns ErrTokenNotSet if the receiver is nil.
func (s *SecureToken) Bytes() ([]byte, error) {
if s == nil {
return nil, ErrTokenNotSet
}
s.mu.RLock()
defer s.mu.RUnlock()
if s.buffer == nil || !s.buffer.IsAlive() {
return nil, ErrTokenNotSet
}
// Return a copy (unavoidable for signature generation)
src := s.buffer.Bytes()
result := make([]byte, len(src))
copy(result, src)
return result, nil
}
// Destroy securely wipes the token from memory.
// No-op if the receiver is nil.
func (s *SecureToken) Destroy() {
if s == nil {
return
}
s.mu.Lock()
defer s.mu.Unlock()
if s.buffer != nil {
s.buffer.Destroy()
s.buffer = nil
}
}
