Mirror of https://github.com/awizemann/scarf.git, synced 2026-05-10 18:44:45 +00:00.

Compare commits (21 commits)
- 0384c6ef17
- f36fb55ebe
- 1823160546
- d2a447fcc4
- 76bfeb34d4
- 85a4ec0e14
- 1453c7a841
- bd21a539e6
- d3055702ef
- ee1d705abc
- 8e3dafe4c6
- c51241dc72
- ec03627bcd
- f8069a4481
- 110170d6e9
- 1293cfa23b
- ba8bf14ff0
- 4212200dca
- 5920923d92
- 00ca7229df
- 679dedf132
```diff
@@ -1,6 +1,7 @@
 # Xcode
 build/
 .gh-pages-worktree/
+.wiki-worktree/
 DerivedData/
 *.pbxuser
 !default.pbxuser
@@ -52,3 +53,6 @@ scarf/standards/backups/
 # history. RELEASE_NOTES.md stays tracked (committed with the version bump).
 releases/v*/*.zip
 releases/v*/appcast-entry.xml
+
+# Wiki helper: personal patterns (hostnames, IPs) blocked from the wiki push.
+scripts/wiki-blocklist.txt
```
@@ -59,6 +59,29 @@ The script bumps version, archives Universal (arm64 + x86_64) + ARM64-only varia

**Prerequisites (one-time, already set up on Alan's machine):** Developer ID Application cert in login Keychain (team `3Q6X2L86C4`), notarytool keychain profile `scarf-notary`, Sparkle EdDSA private key in Keychain item `https://sparkle-project.org`, `gh-pages` branch + GitHub Pages enabled. See the header of [scripts/release.sh](scripts/release.sh) and the Releases section in [README.md](README.md) for details.

## Wiki

Public documentation lives in the GitHub wiki at https://github.com/awizemann/scarf/wiki. The wiki is a separate git repo cloned to `.wiki-worktree/` in the repo root (gitignored, sibling to `.gh-pages-worktree/`). Internal dev notes stay in `scarf/docs/`; the wiki is for public-facing reference.

**Update the wiki when:**

- A new feature module is added under `scarf/scarf/scarf/Features/` → extend the relevant User Guide page.
- A new core service is added under `Core/Services/` → extend `Core-Services.md`.
- Architecture changes (AppCoordinator, transport, MVVM-F rule, sandbox) → `Architecture-Overview.md` + the specific sub-page.
- Hermes version bumps in this file → `Hermes-Version-Compatibility.md`.
- `scripts/release.sh` completes a full (non-draft) release → bump latest-version on `Home.md` + append to `Release-Notes-Index.md`.
- Keyboard shortcut or sidebar section changes → `Keyboard-Shortcuts.md` / `Sidebar-and-Navigation.md`.

**Skip for:** bug fixes with no user-observable change, pure refactors, typos, test-only changes, internal cleanups.

```bash
./scripts/wiki.sh pull                        # always first
# edit .wiki-worktree/*.md with normal tools
./scripts/wiki.sh commit "docs: describe X"   # runs secret-scan
./scripts/wiki.sh push                        # runs secret-scan again, then push
```

**Never** commit API keys, tokens, `.env` files, private keys, or real hostnames/IPs to the wiki. The script's two-pass secret-scan blocks common token patterns and anything matching the user-maintained blocklist at `scripts/wiki-blocklist.txt` (gitignored). Do not bypass it without explicit approval. The full workflow is documented on the wiki itself at `.wiki-worktree/Wiki-Maintenance.md`.

## Hermes Version
Targets Hermes v0.9.0 (2026-04-13). Log lines may carry an optional `[session_id]` tag between the level and logger name — `HermesLogService.parseLine` treats the session tag as an optional capture group, so older untagged lines still parse.
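The optional-group idea is easy to sketch outside Swift. The line shapes below are illustrative assumptions (the real format is whatever `parseLine` expects); the point is that one pattern with an optional `(\[…\] )?` group accepts both tagged and untagged lines:

```shell
# Two illustrative Hermes-style log lines: one with a [session_id] tag, one without.
tagged='2026-04-13 12:00:01 INFO [abc123] gateway: started'
untagged='2026-04-13 12:00:02 WARN gateway: retrying'

# The session tag is an optional capture group, so both shapes match the same pattern.
pattern='^[0-9-]+ [0-9:]+ [A-Z]+ (\[[^]]+\] )?[a-z]+:'

for line in "$tagged" "$untagged"; do
  printf '%s\n' "$line" | grep -Eq "$pattern" && echo "parsed"
done
```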
@@ -33,6 +33,10 @@ Rules:

- The app only reads from `~/.hermes/state.db` (never writes). Memory files are the exception.
- Swift 6 strict concurrency: `@MainActor` default isolation, `nonisolated` for service methods.

## Documentation

Public docs live in the [GitHub wiki](https://github.com/awizemann/scarf/wiki). Small fixes (typos, clarifications) can be made via the "Edit" button on any wiki page — you need push access to the main repo. For larger changes, clone the wiki locally (`git clone git@github.com:awizemann/scarf.wiki.git`) or open an issue describing the proposed change.

## Reporting Issues

Open an issue with:
@@ -17,15 +17,45 @@
<a href="https://www.buymeacoffee.com/awizemann"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me a Coffee" height="28"></a>
</p>

-## What's New in 1.6
+## What's New in 2.0

- **Multi-server** — Manage multiple Hermes installations (local + any number of remotes) from one app. Each window binds to one server; open them side-by-side.
- **Remote Hermes over SSH** — Every feature that worked against your local `~/.hermes/` now works against a remote host. File I/O routes through `scp`/`sftp`; chat ACP runs over `ssh -T`; SQLite is served from atomic `.backup` snapshots pulled on file-watcher ticks.
- **Chat UX overhaul** — No more white-screen flash on first message, no more scroll jumping into whitespace during streaming, and failed prompts explain themselves instead of silently spinning forever.
- **Correctness pass** — Fixed remote WAL error spam, stale-snapshot session resume, auto-resume of dead cron sessions, and 230+ Swift 6 concurrency warnings.

See the full [v2.0.0 release notes](https://github.com/awizemann/scarf/releases/tag/v2.0.0).

### Previously, in 1.6

- **Platforms** — Native GUI setup for all 13 messaging platforms, no more hand-editing `.env`
- **Credential Pools** — Fixed OAuth flow and API-key handling; pick providers from a catalog
- **Model Picker** — Hierarchical browser backed by the 111-provider models.dev cache
- **Settings tabs** — 10 organized tabs covering ~60 previously hidden config fields
-- **Configure sidebar** — New section for Personalities, Quick Commands, Plugins, Webhooks, Profiles
+- **Configure sidebar** — Personalities, Quick Commands, Plugins, Webhooks, Profiles

-See the full [v1.6.0 release notes](https://github.com/awizemann/scarf/releases/tag/v1.6.0).
+See the [v1.6.0 release notes](https://github.com/awizemann/scarf/releases/tag/v1.6.0) for the full 1.6 series.
## Multi-server, one window per server

Scarf 2.0 is a multi-window app. Each window is bound to exactly one Hermes server — your local `~/.hermes/` is synthesized automatically, and you can add remotes via **File → Open Server…** → **Add Server** (host, user, port, optional identity file). Open a second window for a different server and the two run side-by-side with independent state.
Remote Hermes is reached over system SSH — the same `~/.ssh/config`, ssh-agent, ProxyJump, and ControlMaster pooling your terminal uses. File I/O flows through `scp`/`sftp`; SQLite is served from atomic `sqlite3 .backup` snapshots cached under `~/Library/Caches/scarf/snapshots/<server-id>/`; chat (ACP) tunnels as `ssh -T host -- hermes acp` with JSON-RPC over stdio end-to-end. Everything in the feature list below works against remote identically to local.
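The pooling above can be sketched as the option set Scarf would hand to the system `ssh`. The flag names are real OpenSSH options and the socket path and `ControlPersist=600` come from the release notes below; `BatchMode=yes` is an assumption added here to keep non-interactive calls from hanging on prompts:

```shell
# Illustrative ControlMaster option set for pooled SSH calls.
# ControlMaster=auto reuses a live master or starts one; ControlPersist
# keeps it alive between bursty stat/cat/scp calls.
ctl_dir="/tmp/scarf-ssh-$(id -u)"   # short path: macOS Unix sockets cap out near 104 bytes
ssh_opts="-o ControlMaster=auto -o ControlPath=$ctl_dir/%C -o ControlPersist=600 -o BatchMode=yes"

# Every remote primitive would then look like: ssh $ssh_opts user@host <command>
echo "$ssh_opts"
```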
### Remote setup requirements

The remote host must have:

1. **SSH access** — key-based auth via your local ssh-agent. Scarf never prompts for passphrases; run `ssh-add` once in Terminal before connecting.
2. **`sqlite3`** on the remote `$PATH` — needed for the atomic DB snapshots. Install on the remote with `apt install sqlite3` (Ubuntu/Debian), `yum install sqlite` (RHEL/Fedora), or `apk add sqlite` (Alpine).
3. **`pgrep`** on the remote `$PATH` — used by the Dashboard "is Hermes running" check. Standard on every distro; install `procps` if missing.
4. **`~/.hermes/` readable by the SSH user**. When Hermes runs as a separate user (systemd service, Docker container), the SSH user needs read access to `config.yaml` and `state.db`. Either (a) SSH as the Hermes user, (b) `chmod` Hermes's home to be group-readable and add your SSH user to that group, or (c) set the **Hermes data directory** field when adding the server to point at the right location (e.g. `/var/lib/hermes/.hermes`).

### Troubleshooting remote connections

If the connection pill is green but the Dashboard shows "Stopped", "unknown", or empty values, the SSH user can't read the Hermes state files. Open **Manage Servers → 🩺 Run Diagnostics** (or click the yellow "Can't read Hermes state" pill in the toolbar). The diagnostics sheet runs fourteen checks in one SSH session — connectivity, `sqlite3` presence, read access to `config.yaml` and `state.db`, the effective non-login `$PATH` — and tells you exactly which one fails and why, with remediation hints for each. Use the **Copy Full Report** button to paste the full output into a bug report.

For the common "Hermes isn't at the default path" case (systemd services, Docker), **Test Connection** in the Add Server sheet now probes `/var/lib/hermes/.hermes`, `/opt/hermes/.hermes`, `/home/hermes/.hermes`, and `/root/.hermes` when it can't find `state.db` at `~/.hermes/`, and offers a one-click fill if it finds any of them.

## Features

@@ -77,7 +107,8 @@ Custom, agent-generated dashboards for any project. Define stat boxes, charts, t

- macOS 14.6+ (Sonoma)
- Xcode 16.0+
-- [Hermes agent](https://github.com/hermes-ai/hermes-agent) v0.6.0+ installed at `~/.hermes/` (v0.9.0 recommended for full feature support)
+- [Hermes agent](https://github.com/hermes-ai/hermes-agent) v0.6.0+ installed at `~/.hermes/` on each target host (v0.9.0+ recommended for full feature support)
+- For remote servers: SSH access (key-based), `sqlite3` on the remote (for atomic DB snapshots), and the `hermes` CLI resolvable from the remote user's `PATH` or at a path you specify per server.
### Compatibility

@@ -88,9 +119,10 @@ Scarf reads Hermes's SQLite database and parses CLI output from `hermes status`,

| v0.6.0 (2026-03-30) | Verified |
| v0.7.0 (2026-04-03) | Verified |
| v0.8.0 (2026-04-08) | Verified |
-| v0.9.0 (2026-04-13) | Verified (recommended for full 1.6 feature support) |
+| v0.9.0 (2026-04-13) | Verified |
+| v0.10.0 (2026-04-18) | Verified (recommended for full 2.0 feature support) |

-Scarf 1.6 targets Hermes v0.9.0 specifically for the new Platforms, Credentials, Skills Hub, and Cron write features. Earlier Hermes versions remain supported for the monitoring and session features but may not expose every new setup form.
+Scarf 2.0 targets Hermes v0.10.0 for the ACP session/fork/list/resume capabilities used by remote chat. Earlier Hermes versions remain supported for monitoring, sessions, and file-based features; ACP-specific behavior may gracefully degrade on older agents.

If a Hermes update changes the database schema or CLI output format, Scarf may need to be updated. Check the [Health](#features) view for compatibility warnings.
@@ -0,0 +1,58 @@
## What's New in 2.0

Scarf now manages **multiple Hermes installations** — your local `~/.hermes/` plus any number of remote Hermes instances reached over SSH. Every feature that worked on your Mac now works against a Linux server, a Mac mini on the network, or whatever other host has Hermes installed.

This is a major version bump because the entire service layer was rewritten around a `ServerContext` + `ServerTransport` abstraction, and because the window model changed from single-window-single-server to multi-window-one-server-per-window.

### Multi-server

- **Manage Servers** sheet lets you add, rename, and remove remote servers. Each entry is an SSH target (`user@host`, port, optional identity file, optional `remoteHome` override if your install isn't at `~/.hermes/`).
- Each window is bound to exactly one server. Open a second window via **File → Open Server** → pick a different server, and the two run side-by-side with independent state — chat, dashboards, activity, sessions, the lot.
- The menu bar status icon shows a summary across all registered servers (green hare = any Hermes running anywhere).
- Window-state restoration: quit + relaunch re-opens every window you had open, each reconnected to its bound server.

### Remote over SSH

- **ControlMaster connection pooling** — after the first auth, each remote primitive is a ~5ms tunnel call. Uses the system `ssh`, `scp`, `sftp` so your `~/.ssh/config`, ssh-agent, 1Password/Secretive SSH agents, and ProxyJump all work unchanged.
- **DB access via atomic snapshots** — Scarf runs `sqlite3 .backup` on the remote (WAL-safe, won't corrupt), flips the snapshot out of WAL mode, and pulls it down with `scp`. Snapshots are cached under `~/Library/Caches/scarf/snapshots/<server-id>/` and re-pulled when the file watcher sees a change on the remote's `state.db`.
- **ACP chat over SSH** — the Agent Client Protocol tunnel runs `ssh -T host -- hermes acp`. JSON-RPC over stdio travels end-to-end unmodified, so Rich Chat, streaming, tool calls, permission dialogs, and compression all work against the remote agent identically to local.
- **File watcher** — local uses FSEvents (instant); remote polls `stat` mtime every 3s with ControlMaster keeping the cost bounded. Views auto-refresh on any tick.
- **Cleanup on server-remove** — deleting a remote closes its ControlMaster socket (`ssh -O exit`), prunes its snapshot cache, and invalidates any process-wide caches keyed to its ID. App launch also sweeps orphaned snapshot dirs whose UUIDs are no longer in the registry.
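The remote polling step above amounts to asking the host for `state.db`'s mtime and refreshing when it changes. A rough local sketch (the temp file stands in for the remote `state.db`; over SSH the `mtime` call would run through the pooled connection every 3 s):

```shell
# Poll a file's mtime; refresh when it changes — what the remote watcher does over ssh.
mtime() { stat -c %Y "$1" 2>/dev/null || stat -f %m "$1"; }   # GNU vs BSD/macOS stat

f=$(mktemp)
before=$(mtime "$f")
sleep 1
touch "$f"                       # simulate the remote writing state.db
after=$(mtime "$f")

[ "$after" -gt "$before" ] && echo "change detected"
rm -f "$f"
```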
### Chat UX overhaul

All of these were visible bugs during remote dogfooding and are now fixed on both local and remote:

- **No more white-screen flash** on the first message of a session. `RichChatView` used to swap `ContentUnavailableView` out for the message list, which tore down and recreated the entire ScrollView hierarchy. The empty state now lives inside the ScrollView itself.
- **No more scroll-jumping to whitespace** at stream start/finish. Replaced six racing `onChange`-driven scroll calls with SwiftUI's built-in `.defaultScrollAnchor(.bottom)`, which is implemented inside the layout pass and doesn't overshoot LazyVStack content.
- **Resuming a session on a remote now shows its full history.** The DB snapshot is refreshed on session-load — previously it was pulled once on first open and never again, so any messages the remote wrote since launch were invisible.
- **"Continue from last session" surfaces errors** instead of silently doing nothing when SSH is down.
- **Typing into a blank Chat always creates a new session.** Previously it auto-resumed the most recently active session in the DB, which often picked up a cron-spawned session that Hermes had already garbage-collected — producing a silent prompt failure.
- **Failed prompts now explain themselves.** When the agent returns `stopReason: "refusal"`, `"error"`, or `"max_tokens"` with no assistant output, a system message appears under your prompt explaining what happened. No more spinning "Agent working…" forever.

### Correctness — remote SQLite

- The WAL-error spam (`cannot open file at line 51044 of [f0ca7bba1c] — os_unix.c:51044: (2) open(/Users/…/state.db-wal) - No such file or directory`) is gone. `sqlite3 .backup` preserves the source DB's journal mode; the scp'd copy used to try to open a WAL sidecar that doesn't exist. The snapshot script now runs `PRAGMA journal_mode=DELETE` after `.backup` on the remote, and Scarf opens remote snapshots with `file:…?immutable=1` as defense-in-depth.
- **Concurrent snapshot dedupe** — a new `SnapshotCoordinator` actor makes sure that when Dashboard + Sessions + Activity all ask for a fresh snapshot at the same moment (e.g. on a file-watcher tick), only one SSH backup runs; the other callers await the in-flight pull and share the result.
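The `.backup` + `journal_mode=DELETE` sequence described above can be reproduced locally. The paths here are throwaway temp files, not Scarf's actual snapshot layout:

```shell
# Build a WAL-mode source DB, take an atomic .backup, then flip the
# snapshot out of WAL so a copy without its -wal sidecar opens cleanly.
src=$(mktemp -u).db
snap=$(mktemp -u).db

sqlite3 "$src" "PRAGMA journal_mode=WAL; CREATE TABLE t(x); INSERT INTO t VALUES (1);"
sqlite3 "$src" ".backup $snap"                    # WAL-safe atomic snapshot
sqlite3 "$snap" "PRAGMA journal_mode=DELETE;"     # no WAL sidecar needed downstream

# Read-only open, as Scarf does for pulled snapshots (immutable=1 defense-in-depth).
sqlite3 "file:$snap?immutable=1" "SELECT x FROM t;"
```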
### Under the hood

- New `ServerContext` value type flows through `.environment()` to every view and ViewModel. Every file and process operation routes through `context.makeTransport()` — `LocalTransport` (`FileManager`, `Process`, FSEvents) or `SSHTransport` (ssh, scp, sftp, mtime polling). The protocol is small enough that each transport is ~400 lines.
- Swift 6 complete-concurrency sweep: ~230 warnings reduced to 1. `ServerContext`, `HermesPathSet`, `ServerTransport`, all service inits, and every value-type accessor are explicitly `nonisolated`. Hand-written `Codable` conformances for the nine types whose synthesized conformances were inferred `@MainActor` by Swift 6's default-isolation rule (`ACPRequest`, `ACPRawMessage`, `GatewayState`, `PlatformState`, `HermesCronJob`, `CronSchedule`, `CronJobsFile`, `AuthFile`, `AuthEntry`).
- ACP cwd now comes from the *remote* `$HOME`, probed once on first connect and cached per server. Previously it passed your local Mac's home path to the ACP adapter, which only worked by coincidence when the remote username matched.

### Compatibility
Hermes v0.10.0 is now verified alongside v0.6–v0.9. Scarf builds its session/message `SELECT` columns based on an additive schema detection (`hasV07Schema`), so newer Hermes versions with extra columns don't break queries.
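A sketch of that additive detection outside Swift: probe `PRAGMA table_info` and only add newer columns to the `SELECT` when they exist. The table and column names below are made up for illustration; `hasV07Schema` is the real flag name from these notes:

```shell
# Additive schema detection: build the SELECT column list from what the
# snapshot actually contains, so extra columns in newer schemas never break queries.
db=$(mktemp -u).db
sqlite3 "$db" "CREATE TABLE sessions(id TEXT, title TEXT, fork_of TEXT);"  # hypothetical newer shape

cols="id, title"
if sqlite3 "$db" "PRAGMA table_info(sessions);" | grep -q fork_of; then
  cols="$cols, fork_of"          # only selected when the column exists
fi

echo "SELECT $cols FROM sessions;"
```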
### Migration from 1.6.x

- Sparkle will offer the update automatically. Trigger manually via **Scarf → Check for Updates…** or the menu bar.
- Your local server is synthesized automatically — existing 1.6.x users see "Local" in the server list with no setup needed.
- `servers.json` is created on first add-remote. Location: `~/Library/Application Support/scarf/servers.json`.
- Nothing you configured in 1.6.x (OAuth tokens, credential pools, cron jobs, MCP servers, platform setup) is touched. Those live in `~/.hermes/` and remain the source of truth.

### Known limitations

- Remote file watching is 3s mtime polling (vs. FSEvents on local). If you need sub-second updates on a remote, that's a followup.
- The `session/load` ACP call against an already-deleted session returns success-with-no-body from the Hermes adapter — Scarf now detects the resulting `stopReason: "refusal"` and surfaces it, but the underlying Hermes behavior is an upstream-adapter bug that should also get a proper error response.
@@ -0,0 +1,68 @@
## What's New in 2.0.1

Hotfix for [#19](https://github.com/awizemann/scarf/issues/19) and the related reports from the first day of v2.0: users' remote SSH connections would show a green "Connected" pill but every view (Dashboard, Sessions, Activity, Chat) read as empty / "not running" / "not configured". Three distinct environments reported it — Docker Hermes on a LAN, a homelab VM over Tailscale, an Ubuntu VPS — and every one was a silent file-access failure on the remote that Scarf wasn't surfacing.

### Errors no longer disappear

Every remote read (`config.yaml`, `gateway_state.json`, `state.db`, `pgrep`) used to silently substitute an empty value on *any* failure — permission denied, missing file, `sqlite3` not installed, connection drop — they all looked identical to the UI. Now:

- Each failure logs a specific warning via `os.Logger` (visible in Console.app under subsystem `com.scarf`).
- The Dashboard shows an orange banner above the stats with the exact error (e.g. "Permission denied reading `~/.hermes/state.db`") and a **Run Diagnostics…** button.
- `HermesDataService` exposes a `lastOpenError` so views can explain *why* state.db couldn't be opened, rather than just rendering zeros.
- Routine "file doesn't exist" cases (optional `skill.yaml` metadata, `gateway_state.json` before Hermes starts, `memories/USER.md` on fresh installs) are detected and **not** logged as warnings — only real errors (permission denied, connection drops, `sqlite3` missing) hit the log. Prevents Console from filling with false-positive noise when directory walks encounter optional files.

### New Remote Diagnostics sheet

Accessible from the **Manage Servers → 🩺** per-server button, or by clicking the orange connection pill when Scarf can see the server but can't read Hermes state. Runs fourteen checks in a single SSH session and shows pass/fail for each, plus a targeted hint per failure:

- SSH connectivity and auth
- Remote user identity and `$HOME` resolution
- `~/.hermes` directory existence and readability
- `config.yaml` readable (existence *and* actual read access — the old probe only checked existence)
- `state.db` readable
- `sqlite3` installed on the remote (required for the atomic snapshot Scarf pulls)
- `sqlite3` can actually open `state.db`
- `hermes` binary on the non-login `$PATH` (what Scarf's runtime invocations use)
- `hermes` binary on the login `$PATH` (what the Test Connection probe uses)
- `pgrep` available (for the "is Hermes running" check)

One **Copy Full Report** button dumps every check as plain text for bug reports, and a raw-output disclosure panel shows the exact stdout/stderr the remote returned whenever any probe fails — so transport-level problems are self-diagnosing.
The diagnostics script is piped to `/bin/sh -s` on stdin rather than passed as `sh -c <script>` argv. The latter was getting split line-by-line by the remote's login shell (newlines parsed as command separators), which stranded variables set on line 1 in an ephemeral `sh` subprocess that exited before line 2 could use them. Stdin-piping runs the whole script in one `sh` process with variable scope preserved.
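The stdin-pipe pattern is easy to demonstrate locally. Swap `sh -s` for `ssh host /bin/sh -s` for the remote case; the two-line script body here is illustrative:

```shell
# Piping the script to `sh -s` runs the whole thing in ONE shell process,
# so a variable set on line 1 is still in scope on line 2. Passed as argv
# through a remote login shell, the lines could be split into separate commands.
printf '%s\n' \
  'greeting=hello' \
  'echo "$greeting world"' \
| sh -s
```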
### Connection pill gains a "degraded" state

The pill used to be green as long as SSH connected; now after connectivity passes it runs a second-tier check (`test -r $HOME/.hermes/config.yaml`). If that fails, the pill turns **orange** with "Connected — can't read Hermes state" and clicking it opens Remote Diagnostics directly. This is the exact symptom mode in #19, and it's now one click away from a specific answer.

The pill's visual also got a pass: the colored dot is replaced with a state-specific SF Symbol (`checkmark.circle.fill` / `stethoscope` / `arrow.triangle.2.circlepath` / `exclamationmark.triangle.fill`), which reads more like a clickable toolbar tool and doubles as the status signal. No custom pill background anymore — the toolbar's native `.principal` bezel is the frame.

### Auto-suggest the correct `remoteHome` during Add Server

When Test Connection can't find `state.db` at the configured (or default) path, it now also probes the common alternate locations — `/var/lib/hermes/.hermes`, `/opt/hermes/.hermes`, `/home/hermes/.hermes`, `/root/.hermes` — and offers a one-click "Use this" fill if it finds one. Removes the need to know that systemd-installed Hermes lives at `/var/lib/hermes/.hermes` by convention.

### Clearer copy for the `remoteHome` field

The Add Server sheet field is now labeled "Hermes data directory" with a description explaining when you'd override it (systemd service installs, Docker sidecars) and noting that Test Connection auto-suggests.

### README has a new "Remote setup requirements" section

Four concrete prerequisites (SSH, `sqlite3`, `pgrep`, read access to `~/.hermes`) and a troubleshooting paragraph pointing at Remote Diagnostics.

### Migrating from 2.0.0

Sparkle will offer the update automatically. Settings and server list are preserved verbatim — this is purely additive (new diagnostics surface, new error banners, auto-suggest in Test Connection). If you were affected by #19, run Remote Diagnostics after updating; the sheet should pinpoint the specific file access issue and suggest a fix.

### Under the hood

- New types: `RemoteDiagnosticsViewModel`, `RemoteDiagnosticsView`. Both are local to Scarf; no new transport protocol.
- `HermesFileService` gains `loadConfigResult()`, `loadGatewayStateResult()`, `hermesPIDResult()`, `readFileResult()`, `readFileDataResult()` — Result-returning variants that preserve the error. Legacy `loadConfig()` etc. still exist as thin forwarders for callers that don't need diagnostics.
- `HermesDataService.open()` records `lastOpenError` with humanized hints for "sqlite3 not installed", "permission denied", and "file not found" — the three failure modes that produce 90% of issue #19 symptoms.
- `ConnectionStatusViewModel` status enum gains `.degraded(reason:)` between `.connected` and `.error`.
- `TestConnectionProbe` result enum gains `suggestedRemoteHome: String?` carrying any alternate-location hit.

### Known follow-ups (not in 2.0.1)

- `TestConnectionProbe` uses a direct-argv ssh invocation that's functionally correct but fragile (works by accident when split across the login shell). Should be ported to the stdin-pipe pattern the diagnostics sheet now uses.
- Remaining `try?`-swallowed read paths beyond the four Dashboard-surfacing ones — Cron, Memory, Skills, MCP Servers, Platforms still silently render empty on read errors. Same fix pattern applies, low priority.
- `hermesBinaryHint` is only populated when the user clicks Test Connection; if they skip it, ACP chat and CLI calls fall back to bare `hermes`, which requires it on the non-interactive PATH (rarely true for `~/.local/bin` installs). The connection-pill's second-tier probe could auto-populate this.
- Docker-host support: when users SSH to a Docker host, `pgrep` and `~/.hermes/` on the host don't see what's inside the container. Needs a `docker exec` wrapping option per server.
@@ -0,0 +1,41 @@
## What's New in 2.0.2

The actual root cause of [#19](https://github.com/awizemann/scarf/issues/19), found and patched by Scarf's first external contributor. v2.0.1 added the diagnostics UI assuming a file-permission root cause; v2.0.2 fixes the underlying bug for everyone, regardless of perms.

### macOS Unix domain socket path limit (the real #19)

OpenSSH's ControlMaster multiplexes our bursty stat/cat/cp traffic over one TCP session per host, coordinated through a local Unix domain socket. That socket's path goes to `bind(2)` — and macOS's `sun_path` field is **104 bytes including the NUL terminator**.

Scarf's old socket path was `~/Library/Caches/scarf/ssh/<%C>` where `%C` is OpenSSH's 64-char SHA1 hash of `(local user, host, port, remote user)`. For a username like `alex.maksimchuk`, the full path landed at **105 bytes** — one byte over the limit. ssh exited 255 with `unix_listener: path "..." too long for Unix domain socket`. Our `LogLevel=QUIET` flag (set so ACP's line-delimited JSON stays binary-clean) suppressed the diagnostic, and the user just saw "Remote command exited 255" — which the UI rendered as the silent empty-data state every reporter in #19 described.

The fix is to use a much shorter path:

```swift
"/tmp/scarf-ssh-\(getuid())" // ~17 bytes + 64 hash + sep + NUL = ~83 bytes
```
Per-user uid suffix keeps two local users' sockets from colliding in the shared `/tmp`, and 0700 perms on the dir keep them inaccessible to other users.
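The arithmetic is easy to sanity-check in shell. 104 is the macOS `sun_path` capacity, the 64-character string stands in for OpenSSH's `%C` hash, and the exact old-path length depends on the local home directory:

```shell
# Compare both socket-path layouts against the 104-byte sun_path cap (incl. NUL).
hash64=$(printf 'a%.0s' $(seq 1 64))               # stand-in for ssh's 64-char %C hash

old="$HOME/Library/Caches/scarf/ssh/$hash64"       # length varies with the username
new="/tmp/scarf-ssh-$(id -u)/$hash64"              # comfortably under the cap

for p in "$old" "$new"; do
  printf '%3d bytes (limit 104): %s\n' "$((${#p} + 1))" "$p"   # +1 for the NUL
done
```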
**Massive thanks to Alex Maksimchuk ([@aliatx2017](https://github.com/aliatx2017)) — Scarf's first external PR contributor — for diagnosing and patching this in [#20](https://github.com/awizemann/scarf/pull/20).** That diagnosis only happened because Alex bothered to read the codebase, reproduce against multiple usernames including a Termux/Android instance, and walk back from the cryptic exit code to the actual `bind()` failure. This release wouldn't exist without that work.

### Hardening on top of the fix

Three additions on top of Alex's patch, layered in via separate commits to keep the original change reviewable:

- **Defensive ownership check on the socket dir.** `/tmp` is world-writable, so a malicious local user could pre-create `/tmp/scarf-ssh-<uid>` and trick Scarf into using a hostile directory (we'd silently fail to chmod it back to 0700, since we wouldn't own it). `ensureControlDir` now uses POSIX `mkdir(0700)` (atomic, sets perms at create time) and on `EEXIST` runs `lstat` to verify the entry is a directory we own with mode 0700 — symlink → refuse, wrong owner → refuse + log to `os.Logger`, wrong mode → repair. Closes the `/tmp` pre-creation hole that's the standard concern for any per-user `/tmp` path.
- **Launch-time sweep of stale sockets.** `ServerRegistry.sweepOrphanCaches` already prunes orphaned snapshot directories on launch; it now also removes ControlMaster socket files older than 30 minutes. Socket basenames are `%C` hashes (not ServerIDs), so we can't keep "still registered" sockets the way the snapshot sweep does — but `ControlPersist` is 600s, so anything older than 30 minutes is guaranteed to be a dead orphan from a crashed master, an unclean app exit, or a server removed while another Scarf instance was holding the dir. Keeps `/tmp/scarf-ssh-<uid>/` from accumulating indefinitely until reboot, while leaving any concurrent Scarf instance's live sockets untouched.
- **Regression test for the path-length invariant.** `scarfTests` was a stub — it now has two tests: one asserting `controlDirPath().utf8.count + 1 + 64 + 1 ≤ 104` (would have caught the original #19 bug in CI), one asserting the path includes the current uid (pins the per-user-isolation invariant against a future "simplification" that drops it).
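The create-or-verify dance in the first hardening bullet can be sketched in shell (Scarf does this with POSIX calls from Swift; the directory here is a throwaway temp path):

```shell
# Create the control dir with 0700 at birth; on EEXIST, verify it's a real
# directory we own with the right mode before trusting it.
dir=$(mktemp -u)

if ! mkdir -m 700 "$dir" 2>/dev/null; then
  [ ! -L "$dir" ] || { echo "refuse: symlink"; exit 1; }       # symlink -> refuse
  [ -d "$dir" ]   || { echo "refuse: not a directory"; exit 1; }
  [ -O "$dir" ]   || { echo "refuse: wrong owner"; exit 1; }   # wrong owner -> refuse
  chmod 700 "$dir"                                             # wrong mode -> repair
fi

stat -c %a "$dir" 2>/dev/null || stat -f %Lp "$dir"            # show the resulting mode
```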
### v2.0.1 diagnostics work is still useful

The diagnostics sheet, orange "degraded" pill, dashboard error banner, and `remoteHome` auto-suggest from v2.0.1 all still ship — they just turn out not to have been the right diagnosis for the original three reporters. They remain valuable for the *other* connection-failure modes they were designed to surface (missing `sqlite3` on the remote, real permission errors, container/host visibility gaps, custom Hermes data directories). If you upgrade to v2.0.2 and *still* see incomplete data, run Remote Diagnostics from **Manage Servers → 🩺** and the sheet will tell you why.

### Migrating from 2.0.0 / 2.0.1 / draft 2.0.1

Sparkle will offer the update automatically. Settings and server list are preserved verbatim. The first time v2.0.2 connects to a remote, it'll create `/tmp/scarf-ssh-<uid>/` with mode 0700; the old `~/Library/Caches/scarf/ssh/` directory becomes unused (you can delete it manually, or leave it — macOS will sweep it eventually).

The previous v2.0.1 draft download remains available for anyone who already grabbed it — it's still a valid build with the diagnostics work. v2.0.2 is the recommended upgrade path.

### Reporters of #19

@cmalpass, @flyespresso, @maikokan — please grab v2.0.2 and confirm the dashboard populates without needing to run Remote Diagnostics first. If it still doesn't, the diagnostics sheet should now have a much better chance of pinpointing what's left.
@@ -424,7 +424,7 @@
 CODE_SIGN_ENTITLEMENTS = scarf/scarf.entitlements;
 CODE_SIGN_STYLE = Automatic;
 COMBINE_HIDPI_IMAGES = YES;
-CURRENT_PROJECT_VERSION = 18;
+CURRENT_PROJECT_VERSION = 21;
 DEVELOPMENT_TEAM = 3Q6X2L86C4;
 ENABLE_APP_SANDBOX = NO;
 ENABLE_HARDENED_RUNTIME = YES;
@@ -436,7 +436,7 @@
 "@executable_path/../Frameworks",
 );
 MACOSX_DEPLOYMENT_TARGET = 14.6;
-MARKETING_VERSION = 1.6.2;
+MARKETING_VERSION = 2.0.2;
 PRODUCT_BUNDLE_IDENTIFIER = com.scarf.app;
 PRODUCT_NAME = "$(TARGET_NAME)";
 REGISTER_APP_GROUPS = YES;
@@ -458,7 +458,7 @@
 CODE_SIGN_ENTITLEMENTS = scarf/scarf.entitlements;
 CODE_SIGN_STYLE = Automatic;
 COMBINE_HIDPI_IMAGES = YES;
-CURRENT_PROJECT_VERSION = 18;
+CURRENT_PROJECT_VERSION = 21;
 DEVELOPMENT_TEAM = 3Q6X2L86C4;
 ENABLE_APP_SANDBOX = NO;
 ENABLE_HARDENED_RUNTIME = YES;
@@ -470,7 +470,7 @@
 "@executable_path/../Frameworks",
 );
 MACOSX_DEPLOYMENT_TARGET = 14.6;
-MARKETING_VERSION = 1.6.2;
+MARKETING_VERSION = 2.0.2;
 PRODUCT_BUNDLE_IDENTIFIER = com.scarf.app;
 PRODUCT_NAME = "$(TARGET_NAME)";
 REGISTER_APP_GROUPS = YES;
@@ -488,11 +488,11 @@
 buildSettings = {
 BUNDLE_LOADER = "$(TEST_HOST)";
 CODE_SIGN_STYLE = Automatic;
-CURRENT_PROJECT_VERSION = 18;
+CURRENT_PROJECT_VERSION = 21;
 DEVELOPMENT_TEAM = 3Q6X2L86C4;
 GENERATE_INFOPLIST_FILE = YES;
 MACOSX_DEPLOYMENT_TARGET = 26.2;
-MARKETING_VERSION = 1.6.2;
+MARKETING_VERSION = 2.0.2;
 PRODUCT_BUNDLE_IDENTIFIER = com.scarfTests;
 PRODUCT_NAME = "$(TARGET_NAME)";
 STRING_CATALOG_GENERATE_SYMBOLS = NO;
@@ -509,11 +509,11 @@
 buildSettings = {
 BUNDLE_LOADER = "$(TEST_HOST)";
 CODE_SIGN_STYLE = Automatic;
-CURRENT_PROJECT_VERSION = 18;
+CURRENT_PROJECT_VERSION = 21;
 DEVELOPMENT_TEAM = 3Q6X2L86C4;
 GENERATE_INFOPLIST_FILE = YES;
 MACOSX_DEPLOYMENT_TARGET = 26.2;
-MARKETING_VERSION = 1.6.2;
+MARKETING_VERSION = 2.0.2;
 PRODUCT_BUNDLE_IDENTIFIER = com.scarfTests;
 PRODUCT_NAME = "$(TARGET_NAME)";
 STRING_CATALOG_GENERATE_SYMBOLS = NO;
@@ -529,10 +529,10 @@
 isa = XCBuildConfiguration;
 buildSettings = {
 CODE_SIGN_STYLE = Automatic;
-CURRENT_PROJECT_VERSION = 18;
+CURRENT_PROJECT_VERSION = 21;
 DEVELOPMENT_TEAM = 3Q6X2L86C4;
 GENERATE_INFOPLIST_FILE = YES;
-MARKETING_VERSION = 1.6.2;
+MARKETING_VERSION = 2.0.2;
 PRODUCT_BUNDLE_IDENTIFIER = com.scarfUITests;
 PRODUCT_NAME = "$(TARGET_NAME)";
 STRING_CATALOG_GENERATE_SYMBOLS = NO;
@@ -548,10 +548,10 @@
 isa = XCBuildConfiguration;
 buildSettings = {
 CODE_SIGN_STYLE = Automatic;
-CURRENT_PROJECT_VERSION = 18;
+CURRENT_PROJECT_VERSION = 21;
 DEVELOPMENT_TEAM = 3Q6X2L86C4;
 GENERATE_INFOPLIST_FILE = YES;
-MARKETING_VERSION = 1.6.2;
+MARKETING_VERSION = 2.0.2;
 PRODUCT_BUNDLE_IDENTIFIER = com.scarfUITests;
 PRODUCT_NAME = "$(TARGET_NAME)";
 STRING_CATALOG_GENERATE_SYMBOLS = NO;
@@ -2,40 +2,78 @@ import SwiftUI

 struct ContentView: View {
     @Environment(AppCoordinator.self) private var coordinator
+    @Environment(\.serverContext) private var serverContext
+    /// Per-window connection status. Constructed from the window's
+    /// `serverContext` once; lifetime matches the window.
+    @State private var connectionStatus: ConnectionStatusViewModel
+
+    init() {
+        _connectionStatus = State(initialValue: ConnectionStatusViewModel(context: .local))
+    }

     var body: some View {
         NavigationSplitView {
             SidebarView()
         } detail: {
             detailView
+                .toolbar {
+                    ToolbarItem(placement: .navigation) {
+                        ServerSwitcherToolbar()
+                    }
+                    if serverContext.isRemote {
+                        // `.principal` centers the pill in the toolbar —
+                        // the native emphasis bezel is the intended frame;
+                        // the pill's own visual content (icon + label, no
+                        // background) sits inside it in balance.
+                        ToolbarItem(placement: .principal) {
+                            ConnectionStatusPill(status: connectionStatus)
+                        }
+                    }
+                }
+                .onAppear {
+                    // The actual context is injected via @Environment, which
+                    // isn't available in `init`. Rebuild the monitor here
+                    // the first time we know the real context. Safe to call
+                    // repeatedly; `startMonitoring()` cancels + restarts.
+                    if connectionStatus.context.id != serverContext.id {
+                        connectionStatus = ConnectionStatusViewModel(context: serverContext)
+                    }
+                    connectionStatus.startMonitoring()
+                }
+                .onDisappear { connectionStatus.stopMonitoring() }
         }
     }

     @ViewBuilder
     private var detailView: some View {
+        // Each routed view receives the window's `serverContext` in its
+        // init so its `@State` ViewModel is constructed bound to the right
+        // server. This is what makes multi-window work — without it,
+        // every window's VMs default-construct with `.local` even though
+        // the surrounding env has the right context.
         switch coordinator.selectedSection {
-        case .dashboard: DashboardView()
-        case .insights: InsightsView()
-        case .sessions: SessionsView()
-        case .activity: ActivityView()
-        case .projects: ProjectsView()
+        case .dashboard: DashboardView(context: serverContext)
+        case .insights: InsightsView(context: serverContext)
+        case .sessions: SessionsView(context: serverContext)
+        case .activity: ActivityView(context: serverContext)
+        case .projects: ProjectsView(context: serverContext)
         case .chat: ChatView()
-        case .memory: MemoryView()
-        case .skills: SkillsView()
-        case .platforms: PlatformsView()
-        case .personalities: PersonalitiesView()
-        case .quickCommands: QuickCommandsView()
-        case .credentialPools: CredentialPoolsView()
-        case .plugins: PluginsView()
-        case .webhooks: WebhooksView()
-        case .profiles: ProfilesView()
-        case .tools: ToolsView()
-        case .mcpServers: MCPServersView()
-        case .gateway: GatewayView()
-        case .cron: CronView()
-        case .health: HealthView()
-        case .logs: LogsView()
-        case .settings: SettingsView()
+        case .memory: MemoryView(context: serverContext)
+        case .skills: SkillsView(context: serverContext)
+        case .platforms: PlatformsView(context: serverContext)
+        case .personalities: PersonalitiesView(context: serverContext)
+        case .quickCommands: QuickCommandsView(context: serverContext)
+        case .credentialPools: CredentialPoolsView(context: serverContext)
+        case .plugins: PluginsView(context: serverContext)
+        case .webhooks: WebhooksView(context: serverContext)
+        case .profiles: ProfilesView(context: serverContext)
+        case .tools: ToolsView(context: serverContext)
+        case .mcpServers: MCPServersView(context: serverContext)
+        case .gateway: GatewayView(context: serverContext)
+        case .cron: CronView(context: serverContext)
+        case .health: HealthView(context: serverContext)
+        case .logs: LogsView(context: serverContext)
+        case .settings: SettingsView(context: serverContext)
        }
    }
 }
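The init-injection pattern in that hunk — constructing each view's `@State` ViewModel from an explicitly passed context rather than a default — can be shown in miniature. The type names (`ServerContext`, `DashboardViewModel`) follow the diff; the bodies here are illustrative stand-ins, not the app's actual code:

```swift
import SwiftUI
import Observation

// Illustrative stand-in for the app's per-server context type.
struct ServerContext: Identifiable, Hashable {
    let id: String
    static let local = ServerContext(id: "local")
}

// Illustrative stand-in for a routed view's ViewModel.
@Observable final class DashboardViewModel {
    let context: ServerContext
    init(context: ServerContext) { self.context = context }
}

struct DashboardView: View {
    @State private var viewModel: DashboardViewModel

    // Passing the context through init binds the @State ViewModel to the
    // right server at construction time. A default-constructed @State
    // would capture .local even when the environment holds a remote.
    init(context: ServerContext) {
        _viewModel = State(initialValue: DashboardViewModel(context: context))
    }

    var body: some View {
        Text(viewModel.context.id)
    }
}
```

The key detail is `State(initialValue:)` in `init`: `@State` captures its initial value on first construction, so it has to be seeded with the correct context before the view ever renders.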
@@ -2,39 +2,83 @@ import Foundation

 // MARK: - JSON-RPC Transport

-struct ACPRequest: Encodable {
-    let jsonrpc = "2.0"
-    let id: Int
-    let method: String
-    let params: [String: AnyCodable]
+// Hand-written `encode(to:)` / `init(from:)` with explicit `nonisolated` so
+// Swift 6's default-isolation doesn't synthesize a MainActor-isolated
+// conformance — which would prevent these payloads from being encoded or
+// decoded inside `ACPClient`'s actor context (the JSON-RPC read/write loop).
+// The member list must stay in sync with the stored properties above.
+
+struct ACPRequest: Encodable, Sendable {
+    nonisolated let jsonrpc = "2.0"
+    nonisolated let id: Int
+    nonisolated let method: String
+    nonisolated let params: [String: AnyCodable]
+
+    enum CodingKeys: String, CodingKey { case jsonrpc, id, method, params }
+
+    nonisolated func encode(to encoder: any Encoder) throws {
+        var c = encoder.container(keyedBy: CodingKeys.self)
+        try c.encode(jsonrpc, forKey: .jsonrpc)
+        try c.encode(id, forKey: .id)
+        try c.encode(method, forKey: .method)
+        try c.encode(params, forKey: .params)
+    }
 }

-struct ACPRawMessage: Decodable {
-    let jsonrpc: String?
-    let id: Int?
-    let method: String?
-    let result: AnyCodable?
-    let error: ACPError?
-    let params: AnyCodable?
+struct ACPRawMessage: Decodable, Sendable {
+    nonisolated let jsonrpc: String?
+    nonisolated let id: Int?
+    nonisolated let method: String?
+    nonisolated let result: AnyCodable?
+    nonisolated let error: ACPError?
+    nonisolated let params: AnyCodable?

-    var isResponse: Bool { id != nil && method == nil }
-    var isNotification: Bool { method != nil && id == nil }
-    var isRequest: Bool { method != nil && id != nil }
+    nonisolated var isResponse: Bool { id != nil && method == nil }
+    nonisolated var isNotification: Bool { method != nil && id == nil }
+    nonisolated var isRequest: Bool { method != nil && id != nil }
+
+    enum CodingKeys: String, CodingKey { case jsonrpc, id, method, result, error, params }
+
+    nonisolated init(from decoder: any Decoder) throws {
+        let c = try decoder.container(keyedBy: CodingKeys.self)
+        self.jsonrpc = try c.decodeIfPresent(String.self, forKey: .jsonrpc)
+        self.id = try c.decodeIfPresent(Int.self, forKey: .id)
+        self.method = try c.decodeIfPresent(String.self, forKey: .method)
+        self.result = try c.decodeIfPresent(AnyCodable.self, forKey: .result)
+        self.error = try c.decodeIfPresent(ACPError.self, forKey: .error)
+        self.params = try c.decodeIfPresent(AnyCodable.self, forKey: .params)
+    }
 }

 struct ACPError: Decodable, Sendable {
-    let code: Int
-    let message: String
+    nonisolated let code: Int
+    nonisolated let message: String
+
+    enum CodingKeys: String, CodingKey { case code, message }
+
+    nonisolated init(from decoder: any Decoder) throws {
+        let c = try decoder.container(keyedBy: CodingKeys.self)
+        self.code = try c.decode(Int.self, forKey: .code)
+        self.message = try c.decode(String.self, forKey: .message)
+    }
 }

 // MARK: - AnyCodable (for dynamic JSON)

-struct AnyCodable: Codable, Sendable {
-    let value: Any
+struct AnyCodable: Codable, @unchecked Sendable {
+    nonisolated let value: Any

-    init(_ value: Any) { self.value = value }
+    nonisolated init(_ value: Any) { self.value = value }

-    init(from decoder: Decoder) throws {
+    // NOT marked `nonisolated`: Swift's default-isolation treats writes to a
+    // `let value: Any` stored property as MainActor-isolated even when the
+    // property is declared nonisolated (Any can't be strictly Sendable, so
+    // the compiler can't prove the write is safe off-main). Leaving the
+    // init as default-isolated silences the mutation warnings; the Decodable
+    // conformance is still usable from ACPClient's nonisolated read loop
+    // because all callers are already @preconcurrency with respect to
+    // `AnyCodable` (it's @unchecked Sendable).
+    init(from decoder: any Decoder) throws {
         let container = try decoder.singleValueContainer()
         if container.decodeNil() {
             value = NSNull()
@@ -55,7 +99,7 @@ struct AnyCodable: Codable, Sendable {
         }
     }

-    func encode(to encoder: Encoder) throws {
+    func encode(to encoder: any Encoder) throws {
         var container = encoder.singleValueContainer()
         switch value {
         case is NSNull:
@@ -79,10 +123,10 @@ struct AnyCodable: Codable, Sendable {

     // MARK: - Accessors

-    var stringValue: String? { value as? String }
-    var intValue: Int? { value as? Int }
-    var dictValue: [String: Any]? { value as? [String: Any] }
-    var arrayValue: [Any]? { value as? [Any] }
+    nonisolated var stringValue: String? { value as? String }
+    nonisolated var intValue: Int? { value as? Int }
+    nonisolated var dictValue: [String: Any]? { value as? [String: Any] }
+    nonisolated var arrayValue: [Any]? { value as? [Any] }
 }

 // MARK: - ACP Events (parsed from session/update notifications)
@@ -154,7 +198,7 @@ struct ACPPromptResult: Sendable {
 // MARK: - Event Parsing

 enum ACPEventParser {
-    static func parse(notification: ACPRawMessage) -> ACPEvent? {
+    nonisolated static func parse(notification: ACPRawMessage) -> ACPEvent? {
         guard notification.method == "session/update",
               let params = notification.params?.dictValue,
               let sessionId = params["sessionId"] as? String,
@@ -202,7 +246,7 @@ enum ACPEventParser {
         }
     }

-    static func parsePermissionRequest(_ message: ACPRawMessage) -> ACPEvent? {
+    nonisolated static func parsePermissionRequest(_ message: ACPRawMessage) -> ACPEvent? {
         guard message.method == "session/request_permission",
               let params = message.params?.dictValue,
               let sessionId = params["sessionId"] as? String,
@@ -226,7 +270,7 @@ enum ACPEventParser {

     // MARK: - Content Extraction

-    private static func extractContentText(from update: [String: Any]) -> String {
+    nonisolated private static func extractContentText(from update: [String: Any]) -> String {
         if let content = update["content"] as? [String: Any],
            let text = content["text"] as? String {
             return text
@@ -234,7 +278,7 @@ enum ACPEventParser {
         return ""
     }

-    private static func extractContentArrayText(from update: [String: Any]) -> String {
+    nonisolated private static func extractContentArrayText(from update: [String: Any]) -> String {
         if let contentArray = update["content"] as? [[String: Any]] {
             return contentArray.compactMap { item -> String? in
                 guard let inner = item["content"] as? [String: Any] else { return nil }
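The request/response shapes these types carry can be exercised in isolation. A minimal round-trip sketch using pared-down copies of the diff's transport types (not the app's actual files, and without the isolation annotations, which only matter under Swift 6 default isolation):

```swift
import Foundation

// Pared-down copy of the diff's request type, enough for a round trip.
struct Request: Encodable {
    let jsonrpc = "2.0"
    let id: Int
    let method: String
}

// Pared-down copy of the raw-message type with its classification logic.
struct RawMessage: Decodable {
    let jsonrpc: String?
    let id: Int?
    let method: String?
    // A JSON-RPC request carries both a method and an id; a response has
    // only an id; a notification only a method.
    var isRequest: Bool { method != nil && id != nil }
    var isResponse: Bool { id != nil && method == nil }
}

let data = try JSONEncoder().encode(Request(id: 1, method: "session/update"))
let parsed = try JSONDecoder().decode(RawMessage.self, from: data)
assert(parsed.isRequest && !parsed.isResponse && parsed.id == 1)
```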
@@ -9,7 +9,7 @@ struct AuxiliaryModel: Sendable, Equatable {
     var apiKey: String
     var timeout: Int

-    static let empty = AuxiliaryModel(provider: "auto", model: "", baseURL: "", apiKey: "", timeout: 30)
+    nonisolated static let empty = AuxiliaryModel(provider: "auto", model: "", baseURL: "", apiKey: "", timeout: 30)
 }

 /// Group of display-related settings mirroring the `display:` block in config.yaml.
@@ -23,7 +23,7 @@ struct DisplaySettings: Sendable, Equatable {
     var toolPreviewLength: Int
     var busyInputMode: String // e.g. "interrupt"

-    static let empty = DisplaySettings(
+    nonisolated static let empty = DisplaySettings(
         skin: "default",
         compact: false,
         resumeDisplay: "full",
@@ -54,7 +54,7 @@ struct TerminalSettings: Sendable, Equatable {
     var daytonaImage: String
     var singularityImage: String

-    static let empty = TerminalSettings(
+    nonisolated static let empty = TerminalSettings(
         cwd: ".",
         timeout: 180,
         envPassthrough: [],
@@ -82,7 +82,7 @@ struct BrowserSettings: Sendable, Equatable {
     var allowPrivateURLs: Bool
     var camofoxManagedPersistence: Bool

-    static let empty = BrowserSettings(
+    nonisolated static let empty = BrowserSettings(
         inactivityTimeout: 120,
         commandTimeout: 30,
         recordSessions: false,
@@ -115,7 +115,7 @@ struct VoiceSettings: Sendable, Equatable {
     var sttOpenAIModel: String
     var sttMistralModel: String

-    static let empty = VoiceSettings(
+    nonisolated static let empty = VoiceSettings(
         recordKey: "ctrl+b",
         maxRecordingSeconds: 120,
         silenceDuration: 3.0,
@@ -147,7 +147,7 @@ struct AuxiliarySettings: Sendable, Equatable {
     var mcp: AuxiliaryModel
     var flushMemories: AuxiliaryModel

-    static let empty = AuxiliarySettings(
+    nonisolated static let empty = AuxiliarySettings(
         vision: .empty,
         webExtract: .empty,
         compression: .empty,
@@ -170,7 +170,7 @@ struct SecuritySettings: Sendable, Equatable {
     var blocklistEnabled: Bool
     var blocklistDomains: [String]

-    static let empty = SecuritySettings(
+    nonisolated static let empty = SecuritySettings(
         redactSecrets: true,
         redactPII: false,
         tirithEnabled: true,
@@ -188,7 +188,7 @@ struct HumanDelaySettings: Sendable, Equatable {
     var minMS: Int
     var maxMS: Int

-    static let empty = HumanDelaySettings(mode: "off", minMS: 800, maxMS: 2500)
+    nonisolated static let empty = HumanDelaySettings(mode: "off", minMS: 800, maxMS: 2500)
 }

 /// Compression / context routing.
@@ -198,14 +198,14 @@ struct CompressionSettings: Sendable, Equatable {
     var targetRatio: Double
     var protectLastN: Int

-    static let empty = CompressionSettings(enabled: true, threshold: 0.5, targetRatio: 0.2, protectLastN: 20)
+    nonisolated static let empty = CompressionSettings(enabled: true, threshold: 0.5, targetRatio: 0.2, protectLastN: 20)
 }

 struct CheckpointSettings: Sendable, Equatable {
     var enabled: Bool
     var maxSnapshots: Int

-    static let empty = CheckpointSettings(enabled: true, maxSnapshots: 50)
+    nonisolated static let empty = CheckpointSettings(enabled: true, maxSnapshots: 50)
 }

 struct LoggingSettings: Sendable, Equatable {
@@ -213,7 +213,7 @@ struct LoggingSettings: Sendable, Equatable {
     var maxSizeMB: Int
     var backupCount: Int

-    static let empty = LoggingSettings(level: "INFO", maxSizeMB: 5, backupCount: 3)
+    nonisolated static let empty = LoggingSettings(level: "INFO", maxSizeMB: 5, backupCount: 3)
 }

 struct DelegationSettings: Sendable, Equatable {
@@ -223,7 +223,7 @@ struct DelegationSettings: Sendable, Equatable {
     var apiKey: String
     var maxIterations: Int

-    static let empty = DelegationSettings(model: "", provider: "", baseURL: "", apiKey: "", maxIterations: 50)
+    nonisolated static let empty = DelegationSettings(model: "", provider: "", baseURL: "", apiKey: "", maxIterations: 50)
 }

 /// Discord-specific platform settings (`discord.*`). Other platforms currently have thinner schemas.
@@ -233,7 +233,7 @@ struct DiscordSettings: Sendable, Equatable {
     var autoThread: Bool
     var reactions: Bool

-    static let empty = DiscordSettings(requireMention: true, freeResponseChannels: "", autoThread: true, reactions: true)
+    nonisolated static let empty = DiscordSettings(requireMention: true, freeResponseChannels: "", autoThread: true, reactions: true)
 }

 /// Telegram settings under `telegram.*` in config.yaml. Most Telegram tuning is
@@ -243,7 +243,7 @@ struct TelegramSettings: Sendable, Equatable {
     var requireMention: Bool
     var reactions: Bool

-    static let empty = TelegramSettings(requireMention: true, reactions: false)
+    nonisolated static let empty = TelegramSettings(requireMention: true, reactions: false)
 }

 /// Slack settings under `platforms.slack.*` (and a couple of top-level keys).
@@ -253,7 +253,7 @@ struct SlackSettings: Sendable, Equatable {
     var replyInThread: Bool
     var replyBroadcast: Bool

-    static let empty = SlackSettings(replyToMode: "first", requireMention: true, replyInThread: true, replyBroadcast: false)
+    nonisolated static let empty = SlackSettings(replyToMode: "first", requireMention: true, replyInThread: true, replyBroadcast: false)
 }

 /// Matrix settings under `matrix.*`.
@@ -262,7 +262,7 @@ struct MatrixSettings: Sendable, Equatable {
     var autoThread: Bool
     var dmMentionThreads: Bool

-    static let empty = MatrixSettings(requireMention: true, autoThread: true, dmMentionThreads: false)
+    nonisolated static let empty = MatrixSettings(requireMention: true, autoThread: true, dmMentionThreads: false)
 }

 /// Mattermost settings. Mattermost is mostly driven by env vars; config.yaml
@@ -272,7 +272,7 @@ struct MattermostSettings: Sendable, Equatable {
     var requireMention: Bool
     var replyMode: String // "thread" | "off"

-    static let empty = MattermostSettings(requireMention: true, replyMode: "off")
+    nonisolated static let empty = MattermostSettings(requireMention: true, replyMode: "off")
 }

 /// WhatsApp settings under `whatsapp.*`.
@@ -280,7 +280,7 @@ struct WhatsAppSettings: Sendable, Equatable {
     var unauthorizedDMBehavior: String // "pair" | "ignore"
     var replyPrefix: String

-    static let empty = WhatsAppSettings(unauthorizedDMBehavior: "pair", replyPrefix: "")
+    nonisolated static let empty = WhatsAppSettings(unauthorizedDMBehavior: "pair", replyPrefix: "")
 }

 /// Home Assistant filters under `platforms.homeassistant.extra`. Hermes ignores
@@ -292,7 +292,7 @@ struct HomeAssistantSettings: Sendable, Equatable {
     var ignoreEntities: [String]
     var cooldownSeconds: Int

-    static let empty = HomeAssistantSettings(watchDomains: [], watchEntities: [], watchAll: false, ignoreEntities: [], cooldownSeconds: 30)
+    nonisolated static let empty = HomeAssistantSettings(watchDomains: [], watchEntities: [], watchAll: false, ignoreEntities: [], cooldownSeconds: 30)
 }

 // MARK: - Root Config
@@ -359,7 +359,7 @@ struct HermesConfig: Sendable {
     var whatsapp: WhatsAppSettings
     var homeAssistant: HomeAssistantSettings

-    static let empty = HermesConfig(
+    nonisolated static let empty = HermesConfig(
         model: "unknown",
         provider: "unknown",
         maxTurns: 0,
@@ -418,13 +418,16 @@ struct HermesConfig: Sendable {
     )
 }

+// Hand-written `init(from:)` so Swift 6 doesn't synthesize a
+// MainActor-isolated Decodable conformance (which would fail to be used from
+// `HermesFileService.loadGatewayState()`, a nonisolated method).
 struct GatewayState: Sendable, Codable {
-    let pid: Int?
-    let kind: String?
-    let gatewayState: String?
-    let exitReason: String?
-    let platforms: [String: PlatformState]?
-    let updatedAt: String?
+    nonisolated let pid: Int?
+    nonisolated let kind: String?
+    nonisolated let gatewayState: String?
+    nonisolated let exitReason: String?
+    nonisolated let platforms: [String: PlatformState]?
+    nonisolated let updatedAt: String?

     enum CodingKeys: String, CodingKey {
         case pid, kind
@@ -434,16 +437,50 @@ struct GatewayState: Sendable, Codable {
         case updatedAt = "updated_at"
     }

-    var isRunning: Bool {
+    nonisolated init(from decoder: any Decoder) throws {
+        let c = try decoder.container(keyedBy: CodingKeys.self)
+        self.pid = try c.decodeIfPresent(Int.self, forKey: .pid)
+        self.kind = try c.decodeIfPresent(String.self, forKey: .kind)
+        self.gatewayState = try c.decodeIfPresent(String.self, forKey: .gatewayState)
+        self.exitReason = try c.decodeIfPresent(String.self, forKey: .exitReason)
+        self.platforms = try c.decodeIfPresent([String: PlatformState].self, forKey: .platforms)
+        self.updatedAt = try c.decodeIfPresent(String.self, forKey: .updatedAt)
+    }
+
+    nonisolated func encode(to encoder: any Encoder) throws {
+        var c = encoder.container(keyedBy: CodingKeys.self)
+        try c.encodeIfPresent(pid, forKey: .pid)
+        try c.encodeIfPresent(kind, forKey: .kind)
+        try c.encodeIfPresent(gatewayState, forKey: .gatewayState)
+        try c.encodeIfPresent(exitReason, forKey: .exitReason)
+        try c.encodeIfPresent(platforms, forKey: .platforms)
+        try c.encodeIfPresent(updatedAt, forKey: .updatedAt)
+    }
+
+    nonisolated var isRunning: Bool {
         gatewayState == "running"
     }

-    var statusText: String {
+    nonisolated var statusText: String {
         gatewayState ?? "unknown"
     }
 }

 struct PlatformState: Sendable, Codable {
-    let connected: Bool?
-    let error: String?
+    nonisolated let connected: Bool?
+    nonisolated let error: String?
+
+    enum CodingKeys: String, CodingKey { case connected, error }
+
+    nonisolated init(from decoder: any Decoder) throws {
+        let c = try decoder.container(keyedBy: CodingKeys.self)
+        self.connected = try c.decodeIfPresent(Bool.self, forKey: .connected)
+        self.error = try c.decodeIfPresent(String.self, forKey: .error)
+    }
+
+    nonisolated func encode(to encoder: any Encoder) throws {
+        var c = encoder.container(keyedBy: CodingKeys.self)
+        try c.encodeIfPresent(connected, forKey: .connected)
+        try c.encodeIfPresent(error, forKey: .error)
+    }
 }
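The `GatewayState` decode path can be sketched standalone. This is a trimmed copy of the diff's type, and it assumes the CodingKeys cases elided by the hunk map `gatewayState` to `gateway_state` (only the `updated_at` mapping is visible in the diff):

```swift
import Foundation

// Trimmed copy of the diff's GatewayState; the gateway_state key mapping
// is an assumption (that case is elided by the hunk).
struct GatewayState: Decodable {
    let gatewayState: String?
    let updatedAt: String?

    enum CodingKeys: String, CodingKey {
        case gatewayState = "gateway_state"
        case updatedAt = "updated_at"
    }

    var isRunning: Bool { gatewayState == "running" }
}

let json = #"{"gateway_state": "running", "updated_at": "2025-01-01T00:00:00Z"}"#
let state = try JSONDecoder().decode(GatewayState.self, from: Data(json.utf8))
assert(state.isRunning)
```

Every field is optional and decoded with `decodeIfPresent`, so a partially written `gateway_state.json` still decodes rather than throwing.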
@@ -1,48 +1,70 @@
 import Foundation
 import SQLite3

 /// Deprecated module-level path statics. Preserved as thin forwarders to
 /// `ServerContext.local.paths` so existing call sites continue to compile
 /// while Phase 1 migrates them to a per-server `ServerContext`.
 ///
 /// New code should accept a `ServerContext` and read `context.paths.<field>`.
 enum HermesPaths: Sendable {
-    private nonisolated static let userHome: String = ProcessInfo.processInfo.environment["HOME"]
-        ?? NSHomeDirectory()
+    @available(*, deprecated, message: "use ServerContext.paths.home")
+    nonisolated static var home: String { ServerContext.local.paths.home }

-    nonisolated static let home: String = userHome + "/.hermes"
-    nonisolated static let stateDB: String = home + "/state.db"
-    nonisolated static let configYAML: String = home + "/config.yaml"
-    nonisolated static let memoriesDir: String = home + "/memories"
-    nonisolated static let memoryMD: String = memoriesDir + "/MEMORY.md"
-    nonisolated static let userMD: String = memoriesDir + "/USER.md"
-    nonisolated static let sessionsDir: String = home + "/sessions"
-    nonisolated static let cronJobsJSON: String = home + "/cron/jobs.json"
-    nonisolated static let cronOutputDir: String = home + "/cron/output"
-    nonisolated static let gatewayStateJSON: String = home + "/gateway_state.json"
-    nonisolated static let skillsDir: String = home + "/skills"
-    nonisolated static let errorsLog: String = home + "/logs/errors.log"
-    nonisolated static let agentLog: String = home + "/logs/agent.log"
-    nonisolated static let gatewayLog: String = home + "/logs/gateway.log"
-    nonisolated static let scarfDir: String = home + "/scarf"
-    nonisolated static let projectsRegistry: String = scarfDir + "/projects.json"
-    nonisolated static let mcpTokensDir: String = home + "/mcp-tokens"
+    @available(*, deprecated, message: "use ServerContext.paths.stateDB")
+    nonisolated static var stateDB: String { ServerContext.local.paths.stateDB }

-    /// Install locations we look for the `hermes` binary in, in priority order.
-    /// Checked every access so a user installing via a different method doesn't
-    /// need to relaunch Scarf.
-    nonisolated static let hermesBinaryCandidates: [String] = [
-        userHome + "/.local/bin/hermes", // pipx / pip --user (default)
-        "/opt/homebrew/bin/hermes", // Homebrew on Apple Silicon
-        "/usr/local/bin/hermes", // Homebrew on Intel / manual install
-        userHome + "/.hermes/bin/hermes" // Some self-install layouts
-    ]
+    @available(*, deprecated, message: "use ServerContext.paths.configYAML")
+    nonisolated static var configYAML: String { ServerContext.local.paths.configYAML }

-    /// Resolved path to the `hermes` executable. Returns the first candidate
-    /// that exists and is executable; falls back to the pipx default so error
-    /// messages ("Expected at …") still make sense on a fresh machine.
-    nonisolated static var hermesBinary: String {
-        for path in hermesBinaryCandidates
-        where FileManager.default.isExecutableFile(atPath: path) {
-            return path
-        }
-        return hermesBinaryCandidates[0]
+    @available(*, deprecated, message: "use ServerContext.paths.memoriesDir")
+    nonisolated static var memoriesDir: String { ServerContext.local.paths.memoriesDir }

+    @available(*, deprecated, message: "use ServerContext.paths.memoryMD")
+    nonisolated static var memoryMD: String { ServerContext.local.paths.memoryMD }

+    @available(*, deprecated, message: "use ServerContext.paths.userMD")
+    nonisolated static var userMD: String { ServerContext.local.paths.userMD }

+    @available(*, deprecated, message: "use ServerContext.paths.sessionsDir")
+    nonisolated static var sessionsDir: String { ServerContext.local.paths.sessionsDir }

+    @available(*, deprecated, message: "use ServerContext.paths.cronJobsJSON")
+    nonisolated static var cronJobsJSON: String { ServerContext.local.paths.cronJobsJSON }

+    @available(*, deprecated, message: "use ServerContext.paths.cronOutputDir")
+    nonisolated static var cronOutputDir: String { ServerContext.local.paths.cronOutputDir }

+    @available(*, deprecated, message: "use ServerContext.paths.gatewayStateJSON")
+    nonisolated static var gatewayStateJSON: String { ServerContext.local.paths.gatewayStateJSON }

+    @available(*, deprecated, message: "use ServerContext.paths.skillsDir")
+    nonisolated static var skillsDir: String { ServerContext.local.paths.skillsDir }

+    @available(*, deprecated, message: "use ServerContext.paths.errorsLog")
+    nonisolated static var errorsLog: String { ServerContext.local.paths.errorsLog }

+    @available(*, deprecated, message: "use ServerContext.paths.agentLog")
+    nonisolated static var agentLog: String { ServerContext.local.paths.agentLog }

+    @available(*, deprecated, message: "use ServerContext.paths.gatewayLog")
+    nonisolated static var gatewayLog: String { ServerContext.local.paths.gatewayLog }

+    @available(*, deprecated, message: "use ServerContext.paths.scarfDir")
+    nonisolated static var scarfDir: String { ServerContext.local.paths.scarfDir }

+    @available(*, deprecated, message: "use ServerContext.paths.projectsRegistry")
+    nonisolated static var projectsRegistry: String { ServerContext.local.paths.projectsRegistry }

+    @available(*, deprecated, message: "use ServerContext.paths.mcpTokensDir")
+    nonisolated static var mcpTokensDir: String { ServerContext.local.paths.mcpTokensDir }

+    @available(*, deprecated, message: "use HermesPathSet.hermesBinaryCandidates")
+    nonisolated static var hermesBinaryCandidates: [String] {
+        HermesPathSet.hermesBinaryCandidates
+    }

+    @available(*, deprecated, message: "use ServerContext.paths.hermesBinary")
+    nonisolated static var hermesBinary: String { ServerContext.local.paths.hermesBinary }
 }

 // MARK: - SQLite Constants
@@ -1,24 +1,24 @@
import Foundation

struct HermesCronJob: Identifiable, Sendable, Codable {
    let id: String
    let name: String
    let prompt: String
    let skills: [String]?
    let model: String?
    let schedule: CronSchedule
    let enabled: Bool
    let state: String
    let deliver: String?
    let nextRunAt: String?
    let lastRunAt: String?
    let lastError: String?
    let preRunScript: String?
    let deliveryFailures: Int?
    let lastDeliveryError: String?
    let timeoutType: String?
    let timeoutSeconds: Int?
    let silent: Bool?
    nonisolated let id: String
    nonisolated let name: String
    nonisolated let prompt: String
    nonisolated let skills: [String]?
    nonisolated let model: String?
    nonisolated let schedule: CronSchedule
    nonisolated let enabled: Bool
    nonisolated let state: String
    nonisolated let deliver: String?
    nonisolated let nextRunAt: String?
    nonisolated let lastRunAt: String?
    nonisolated let lastError: String?
    nonisolated let preRunScript: String?
    nonisolated let deliveryFailures: Int?
    nonisolated let lastDeliveryError: String?
    nonisolated let timeoutType: String?
    nonisolated let timeoutSeconds: Int?
    nonisolated let silent: Bool?

    enum CodingKeys: String, CodingKey {
        case id, name, prompt, skills, model, schedule, enabled, state, deliver, silent
@@ -32,7 +32,51 @@ struct HermesCronJob: Identifiable, Sendable, Codable {
        case timeoutSeconds = "timeout_seconds"
    }

    var stateIcon: String {
    nonisolated init(from decoder: any Decoder) throws {
        let c = try decoder.container(keyedBy: CodingKeys.self)
        self.id = try c.decode(String.self, forKey: .id)
        self.name = try c.decode(String.self, forKey: .name)
        self.prompt = try c.decode(String.self, forKey: .prompt)
        self.skills = try c.decodeIfPresent([String].self, forKey: .skills)
        self.model = try c.decodeIfPresent(String.self, forKey: .model)
        self.schedule = try c.decode(CronSchedule.self, forKey: .schedule)
        self.enabled = try c.decode(Bool.self, forKey: .enabled)
        self.state = try c.decode(String.self, forKey: .state)
        self.deliver = try c.decodeIfPresent(String.self, forKey: .deliver)
        self.nextRunAt = try c.decodeIfPresent(String.self, forKey: .nextRunAt)
        self.lastRunAt = try c.decodeIfPresent(String.self, forKey: .lastRunAt)
        self.lastError = try c.decodeIfPresent(String.self, forKey: .lastError)
        self.preRunScript = try c.decodeIfPresent(String.self, forKey: .preRunScript)
        self.deliveryFailures = try c.decodeIfPresent(Int.self, forKey: .deliveryFailures)
        self.lastDeliveryError = try c.decodeIfPresent(String.self, forKey: .lastDeliveryError)
        self.timeoutType = try c.decodeIfPresent(String.self, forKey: .timeoutType)
        self.timeoutSeconds = try c.decodeIfPresent(Int.self, forKey: .timeoutSeconds)
        self.silent = try c.decodeIfPresent(Bool.self, forKey: .silent)
    }

    nonisolated func encode(to encoder: any Encoder) throws {
        var c = encoder.container(keyedBy: CodingKeys.self)
        try c.encode(id, forKey: .id)
        try c.encode(name, forKey: .name)
        try c.encode(prompt, forKey: .prompt)
        try c.encodeIfPresent(skills, forKey: .skills)
        try c.encodeIfPresent(model, forKey: .model)
        try c.encode(schedule, forKey: .schedule)
        try c.encode(enabled, forKey: .enabled)
        try c.encode(state, forKey: .state)
        try c.encodeIfPresent(deliver, forKey: .deliver)
        try c.encodeIfPresent(nextRunAt, forKey: .nextRunAt)
        try c.encodeIfPresent(lastRunAt, forKey: .lastRunAt)
        try c.encodeIfPresent(lastError, forKey: .lastError)
        try c.encodeIfPresent(preRunScript, forKey: .preRunScript)
        try c.encodeIfPresent(deliveryFailures, forKey: .deliveryFailures)
        try c.encodeIfPresent(lastDeliveryError, forKey: .lastDeliveryError)
        try c.encodeIfPresent(timeoutType, forKey: .timeoutType)
        try c.encodeIfPresent(timeoutSeconds, forKey: .timeoutSeconds)
        try c.encodeIfPresent(silent, forKey: .silent)
    }

    nonisolated var stateIcon: String {
        switch state {
        case "scheduled": return "clock"
        case "running": return "play.circle"
@@ -42,7 +86,7 @@ struct HermesCronJob: Identifiable, Sendable, Codable {
        }
    }

    var deliveryDisplay: String? {
    nonisolated var deliveryDisplay: String? {
        guard let deliver, !deliver.isEmpty else { return nil }
        // v0.9.0 extends Discord routing to threads: `discord:<chat>:<thread>`.
        if deliver.hasPrefix("discord:") {
@@ -59,10 +103,10 @@ struct HermesCronJob: Identifiable, Sendable, Codable {
}

struct CronSchedule: Sendable, Codable {
    let kind: String
    let runAt: String?
    let display: String?
    let expression: String?
    nonisolated let kind: String
    nonisolated let runAt: String?
    nonisolated let display: String?
    nonisolated let expression: String?

    enum CodingKeys: String, CodingKey {
        case kind
@@ -70,14 +114,45 @@ struct CronSchedule: Sendable, Codable {
        case display
        case expression
    }

    nonisolated init(from decoder: any Decoder) throws {
        let c = try decoder.container(keyedBy: CodingKeys.self)
        self.kind = try c.decode(String.self, forKey: .kind)
        self.runAt = try c.decodeIfPresent(String.self, forKey: .runAt)
        self.display = try c.decodeIfPresent(String.self, forKey: .display)
        self.expression = try c.decodeIfPresent(String.self, forKey: .expression)
    }

    nonisolated func encode(to encoder: any Encoder) throws {
        var c = encoder.container(keyedBy: CodingKeys.self)
        try c.encode(kind, forKey: .kind)
        try c.encodeIfPresent(runAt, forKey: .runAt)
        try c.encodeIfPresent(display, forKey: .display)
        try c.encodeIfPresent(expression, forKey: .expression)
    }
}

// Hand-written `init(from:)` / `encode(to:)` so Swift 6 doesn't synthesize a
// MainActor-isolated Codable conformance — `HermesFileService.loadCronJobs`
// is nonisolated and needs to decode this from a background task.
struct CronJobsFile: Sendable, Codable {
    let jobs: [HermesCronJob]
    let updatedAt: String?
    nonisolated let jobs: [HermesCronJob]
    nonisolated let updatedAt: String?

    enum CodingKeys: String, CodingKey {
        case jobs
        case updatedAt = "updated_at"
    }

    nonisolated init(from decoder: any Decoder) throws {
        let c = try decoder.container(keyedBy: CodingKeys.self)
        self.jobs = try c.decode([HermesCronJob].self, forKey: .jobs)
        self.updatedAt = try c.decodeIfPresent(String.self, forKey: .updatedAt)
    }

    nonisolated func encode(to encoder: any Encoder) throws {
        var c = encoder.container(keyedBy: CodingKeys.self)
        try c.encode(jobs, forKey: .jobs)
        try c.encodeIfPresent(updatedAt, forKey: .updatedAt)
    }
}

@@ -0,0 +1,92 @@
import Foundation

/// The filesystem layout of a Hermes installation, parameterized by the
/// `home` directory. The same layout is used for local installations (where
/// `home` is an absolute macOS path like `/Users/alan/.hermes`) and for
/// remote installations reached over SSH (where `home` is a remote path like
/// `/home/deploy/.hermes` or an unexpanded `~/.hermes` that the remote shell
/// will resolve).
///
/// Every path that used to live as a module-level static on `HermesPaths` is
/// an instance property here. `ServerContext.paths` is the canonical way to
/// reach these values; the old `HermesPaths` statics are preserved as
/// deprecated forwarders so Phase 1 can migrate call sites incrementally.
struct HermesPathSet: Sendable, Hashable {
    let home: String
    /// `true` when this path set belongs to a remote installation. Affects
    /// only `hermesBinary` resolution — every other path is identical in
    /// shape between local and remote.
    let isRemote: Bool
    /// Pre-resolved remote binary path (e.g. `/home/deploy/.local/bin/hermes`).
    /// Populated by `SSHTransport` once `command -v hermes` has run on the
    /// target host. Unused when `isRemote == false`.
    let binaryHint: String?

    // MARK: - Defaults

    /// Absolute path to the local user's `~/.hermes` directory.
    nonisolated static let defaultLocalHome: String = {
        let user = ProcessInfo.processInfo.environment["HOME"] ?? NSHomeDirectory()
        return user + "/.hermes"
    }()

    /// Default remote home when the user doesn't override it in `SSHConfig`.
    /// We leave `~` unexpanded on purpose — the remote shell resolves it.
    nonisolated static let defaultRemoteHome: String = "~/.hermes"

    // MARK: - Paths (mirror of the old HermesPaths layout)

    nonisolated var stateDB: String { home + "/state.db" }
    nonisolated var configYAML: String { home + "/config.yaml" }
    nonisolated var envFile: String { home + "/.env" }
    nonisolated var authJSON: String { home + "/auth.json" }
    nonisolated var soulMD: String { home + "/SOUL.md" }
    nonisolated var pluginsDir: String { home + "/plugins" }
    nonisolated var memoriesDir: String { home + "/memories" }
    nonisolated var memoryMD: String { memoriesDir + "/MEMORY.md" }
    nonisolated var userMD: String { memoriesDir + "/USER.md" }
    nonisolated var sessionsDir: String { home + "/sessions" }
    nonisolated var cronJobsJSON: String { home + "/cron/jobs.json" }
    nonisolated var cronOutputDir: String { home + "/cron/output" }
    nonisolated var gatewayStateJSON: String { home + "/gateway_state.json" }
    nonisolated var skillsDir: String { home + "/skills" }
    nonisolated var errorsLog: String { home + "/logs/errors.log" }
    nonisolated var agentLog: String { home + "/logs/agent.log" }
    nonisolated var gatewayLog: String { home + "/logs/gateway.log" }
    nonisolated var scarfDir: String { home + "/scarf" }
    nonisolated var projectsRegistry: String { scarfDir + "/projects.json" }
    nonisolated var mcpTokensDir: String { home + "/mcp-tokens" }

    // MARK: - Binary resolution

    /// Install locations we probe for the local `hermes` binary, in priority
    /// order. Checked on every access so a user installing via a different
    /// method doesn't need to relaunch Scarf.
    nonisolated static let hermesBinaryCandidates: [String] = {
        let user = ProcessInfo.processInfo.environment["HOME"] ?? NSHomeDirectory()
        return [
            user + "/.local/bin/hermes", // pipx / pip --user (default)
            "/opt/homebrew/bin/hermes",  // Homebrew on Apple Silicon
            "/usr/local/bin/hermes",     // Homebrew on Intel / manual install
            user + "/.hermes/bin/hermes" // Some self-install layouts
        ]
    }()

    /// Resolved path to the `hermes` executable for this installation.
    ///
    /// Local: returns the first executable candidate, falling back to the
    /// pipx default so error messages still make sense on a fresh machine.
    ///
    /// Remote: returns `binaryHint` (populated at connect time) or bare
    /// `"hermes"` as a last-resort default that relies on the remote `$PATH`.
    nonisolated var hermesBinary: String {
        if isRemote {
            return binaryHint ?? "hermes"
        }
        for path in Self.hermesBinaryCandidates
        where FileManager.default.isExecutableFile(atPath: path) {
            return path
        }
        return Self.hermesBinaryCandidates[0]
    }
}
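As a sanity check on the layout above, a hypothetical snippet (names and paths here are illustrative, not from the Scarf test suite) would exercise the derived paths like this:

```swift
// Hypothetical usage sketch — assumes the HermesPathSet definition above.
let remote = HermesPathSet(home: "/home/deploy/.hermes", isRemote: true, binaryHint: nil)
assert(remote.cronJobsJSON == "/home/deploy/.hermes/cron/jobs.json")
// projectsRegistry composes through scarfDir, so it nests one level deeper.
assert(remote.projectsRegistry == "/home/deploy/.hermes/scarf/projects.json")
// With no binaryHint, a remote path set falls back to bare "hermes" on $PATH.
assert(remote.hermesBinary == "hermes")
```

Because every path is computed from `home` on demand, the same assertions hold for any home directory, local or remote.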
@@ -0,0 +1,253 @@
import Foundation
import SwiftUI
import AppKit

/// Stable identifier for a server entry in the user's registry. Backed by
/// `UUID` so it round-trips through `servers.json` and SwiftUI window-state
/// restoration without collisions.
typealias ServerID = UUID

/// Connection parameters for a remote Hermes installation reached over SSH.
/// All fields are optional except `host` — unset values defer to the user's
/// `~/.ssh/config` and the OpenSSH defaults.
struct SSHConfig: Sendable, Hashable, Codable {
    /// Hostname or `~/.ssh/config` alias.
    var host: String
    /// Remote username. `nil` → defer to `~/.ssh/config` or the local user.
    var user: String?
    /// TCP port. `nil` → 22 (or whatever `~/.ssh/config` says).
    var port: Int?
    /// Absolute path to a private key. `nil` → defer to ssh-agent /
    /// `~/.ssh/config` identity files.
    var identityFile: String?
    /// Override for the remote `$HOME/.hermes` directory. `nil` uses
    /// `HermesPathSet.defaultRemoteHome` (`~/.hermes`, shell-expanded on the
    /// remote side).
    var remoteHome: String?
    /// Resolved remote path to the `hermes` binary. Populated by
    /// `SSHTransport` after the first `command -v hermes` probe; cached here
    /// so subsequent calls skip the round trip.
    var hermesBinaryHint: String?
}

/// Distinguishes a local installation (the user's own `~/.hermes`) from a
/// remote one reached over SSH. Service behavior is identical in shape but
/// dispatches to different I/O primitives in Phase 2.
enum ServerKind: Sendable, Hashable, Codable {
    case local
    case ssh(SSHConfig)
}

/// The per-server value that flows through `.environment` and gets handed to
/// every service and ViewModel in Phase 1. One `ServerContext` corresponds to
/// one Hermes installation; multi-window scenes in Phase 3 will construct
/// one per window.
///
/// **Why every member is `nonisolated`.** This file imports `AppKit`
/// (`NSWorkspace.shared.open` in `openInLocalEditor`), which under Swift 6's
/// upcoming default-isolation rules pulls the whole struct to `@MainActor`.
/// `ServerContext` is a plain `Sendable` value — accessing `.local`, `.paths`,
/// `.isRemote`, or `makeTransport()` from a background actor must not trap
/// the caller into hopping MainActor. `nonisolated` on each member keeps
/// them callable from any context; the one MainActor-dependent method
/// (`openInLocalEditor`) lives in the extension below.
struct ServerContext: Sendable, Hashable, Identifiable {
    let id: ServerID
    var displayName: String
    var kind: ServerKind

    /// Path layout for this server. Cheap — all path components are computed
    /// on demand from `home`, no I/O.
    nonisolated var paths: HermesPathSet {
        switch kind {
        case .local:
            return HermesPathSet(
                home: HermesPathSet.defaultLocalHome,
                isRemote: false,
                binaryHint: nil
            )
        case .ssh(let config):
            return HermesPathSet(
                home: config.remoteHome ?? HermesPathSet.defaultRemoteHome,
                isRemote: true,
                binaryHint: config.hermesBinaryHint
            )
        }
    }

    nonisolated var isRemote: Bool {
        if case .ssh = kind { return true }
        return false
    }

    /// Construct the `ServerTransport` for this context. Local contexts get
    /// a `LocalTransport`; SSH contexts get an `SSHTransport` configured
    /// from `SSHConfig`. Each call returns a fresh value — transports are
    /// cheap and stateless beyond disk caches.
    nonisolated func makeTransport() -> any ServerTransport {
        switch kind {
        case .local:
            return LocalTransport(contextID: id)
        case .ssh(let config):
            return SSHTransport(contextID: id, config: config, displayName: displayName)
        }
    }

    // MARK: - Well-known singletons

    /// Stable UUID for the built-in "this machine" entry. Hard-coded so the
    /// local context has the same identity across launches, and so persisted
    /// window-state restorations that reference it continue to resolve even
    /// if `servers.json` hasn't been touched yet.
    nonisolated private static let localID = ServerID(uuidString: "00000000-0000-0000-0000-000000000001")!

    /// The default "this machine" context. Used everywhere in Phase 0/1 and
    /// remains the fallback when no remote server is selected.
    nonisolated static let local = ServerContext(
        id: localID,
        displayName: "Local",
        kind: .local
    )
}

// MARK: - Remote user-home resolution

/// Process-wide cache of each server's resolved user `$HOME`. Probed once per
/// `ServerID` via the transport, then memoized for the app's lifetime — home
/// directories don't change under us, and the probe is a ~5ms SSH round-trip
/// with ControlMaster. Used by anything that needs to hand a working
/// directory to the ACP agent or the Hermes CLI on the correct host.
private actor UserHomeCache {
    static let shared = UserHomeCache()
    private var cache: [ServerID: String] = [:]

    func resolve(for context: ServerContext) async -> String {
        if let cached = cache[context.id] { return cached }
        let resolved = await probe(context: context)
        cache[context.id] = resolved
        return resolved
    }

    func invalidate(contextID: ServerID) {
        cache.removeValue(forKey: contextID)
    }

    private func probe(context: ServerContext) async -> String {
        if !context.isRemote { return NSHomeDirectory() }
        let transport = context.makeTransport()
        let result = try? transport.runProcess(
            executable: "/bin/sh",
            args: ["-c", "echo $HOME"],
            stdin: nil,
            timeout: 10
        )
        let out = result?.stdoutString.trimmingCharacters(in: .whitespacesAndNewlines) ?? ""
        // Fall back to `~` (unexpanded) so ACP at least gets a plausible cwd
        // rather than a local Mac path. The remote side will expand it if
        // passed through a shell; if not, failures are surfaced by ACP itself.
        return out.isEmpty ? "~" : out
    }
}

extension ServerContext {
    /// Resolved absolute path to the user's home directory on the target host.
    /// Local: `NSHomeDirectory()`. Remote: probed `$HOME` over SSH, cached.
    /// Use this — not `NSHomeDirectory()` — whenever you're passing a `cwd`
    /// or user path to a process that runs on the target host.
    func resolvedUserHome() async -> String {
        await UserHomeCache.shared.resolve(for: self)
    }

    /// Called when a server is removed from the registry, so the process-wide
    /// caches keyed by `ServerID` don't hold stale entries forever.
    static func invalidateCaches(for contextID: ServerID) async {
        await UserHomeCache.shared.invalidate(contextID: contextID)
    }
}

// MARK: - Convenience file I/O via the right transport

/// Centralized file I/O entry points for VMs that don't own a service. Every
/// call goes through the context's transport, so reads/writes hit the local
/// disk for `.local` and ssh/scp for `.ssh` automatically.
///
/// **Always** prefer `context.readText(...)` over `String(contentsOfFile: ...)`
/// when the path comes from `context.paths`. The Foundation file APIs are
/// LOCAL ONLY — using them with a remote path silently returns nil because
/// the remote path doesn't exist on this Mac.
extension ServerContext {
    /// Read a UTF-8 text file. `nil` on any error (missing, transport down,
    /// invalid encoding).
    nonisolated func readText(_ path: String) -> String? {
        guard let data = try? makeTransport().readFile(path) else { return nil }
        return String(data: data, encoding: .utf8)
    }

    /// Read raw bytes. `nil` on any error.
    nonisolated func readData(_ path: String) -> Data? {
        try? makeTransport().readFile(path)
    }

    /// Atomic write. Returns `true` on success, `false` on any error
    /// (caller is expected to surface failures via UI when relevant).
    @discardableResult
    nonisolated func writeText(_ path: String, content: String) -> Bool {
        guard let data = content.data(using: .utf8) else { return false }
        do {
            try makeTransport().writeFile(path, data: data)
            return true
        } catch {
            return false
        }
    }

    /// Existence check. Local: `FileManager`. Remote: `ssh test -e`.
    nonisolated func fileExists(_ path: String) -> Bool {
        makeTransport().fileExists(path)
    }

    /// File modification timestamp, or `nil` if the file doesn't exist.
    nonisolated func modificationDate(_ path: String) -> Date? {
        makeTransport().stat(path)?.mtime
    }

    /// Invoke the `hermes` CLI on this server and return its combined output
    /// + exit code. Local: spawns the local binary via `Process`. Remote:
    /// round-trips through `ssh host hermes …`. Use this from any VM that needs
    /// to fire off a CLI command — never spawn `hermes` via `Process()`
    /// directly, because that path bypasses the transport for remote.
    @discardableResult
    nonisolated func runHermes(_ args: [String], timeout: TimeInterval = 60, stdin: String? = nil) -> (output: String, exitCode: Int32) {
        let result = HermesFileService(context: self).runHermesCLI(args: args, timeout: timeout, stdinInput: stdin)
        return (result.output, result.exitCode)
    }

    /// Reveal the file at `path` in the user's local editor (via
    /// `NSWorkspace.open`). For remote contexts this is a no-op — the
    /// file doesn't exist on this Mac, so opening it would fail silently
    /// or worse, open the wrong file from the local filesystem.
    /// Returns `true` if opened, `false` if the call was skipped.
    @discardableResult
    func openInLocalEditor(_ path: String) -> Bool {
        guard !isRemote else { return false }
        NSWorkspace.shared.open(URL(fileURLWithPath: path))
        return true
    }
}

// MARK: - SwiftUI environment plumbing

/// `ServerContext` is a value type, so SwiftUI's `.environment(_:)` (which
/// requires an `@Observable` class) doesn't accept it directly. We expose it
/// through a custom `EnvironmentKey` — views read it with
/// `@Environment(\.serverContext) private var serverContext`.
private struct ServerContextEnvironmentKey: EnvironmentKey {
    static let defaultValue: ServerContext = .local
}

extension EnvironmentValues {
    var serverContext: ServerContext {
        get { self[ServerContextEnvironmentKey.self] }
        set { self[ServerContextEnvironmentKey.self] = newValue }
    }
}
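A consumer view would read the context through the key defined above. A minimal sketch, assuming a `StatusView` that does not exist in this diff:

```swift
// Hypothetical consumer — shows only the @Environment read path.
struct StatusView: View {
    @Environment(\.serverContext) private var serverContext

    var body: some View {
        // readText goes through the context's transport, so this line works
        // unchanged whether the window points at .local or an .ssh server.
        Text(serverContext.readText(serverContext.paths.gatewayStateJSON)
             ?? "no gateway state")
    }
}
```

A parent scene would inject a non-default context with `.environment(\.serverContext, someContext)`; views that never receive one fall back to `defaultValue`, i.e. `.local`.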
@@ -0,0 +1,167 @@
import Foundation
import os

/// Persisted entry for a user-added server. `ServerContext` itself is a value
/// type we rebuild from these fields at runtime — we persist the minimum that
/// uniquely identifies a connection, not the whole context struct, so future
/// fields we add to `ServerContext` don't force a migration.
struct ServerEntry: Identifiable, Codable, Hashable, Sendable {
    var id: ServerID
    var displayName: String
    var kind: ServerKind
    /// User preference: open this server in a window on launch. Phase 3
    /// multi-window uses this; Phase 2 ignores it.
    var openOnLaunch: Bool = false

    var context: ServerContext {
        ServerContext(id: id, displayName: displayName, kind: kind)
    }
}

/// On-disk envelope for `servers.json`. Schema-versioned so future changes
/// can migrate without losing data.
private struct RegistryFile: Codable {
    var schemaVersion: Int
    var entries: [ServerEntry]
}

/// App-scoped store for user-added servers. `local` is synthesized (not
/// persisted) and always appears first in `allContexts`. Remote entries are
/// loaded from `~/Library/Application Support/scarf/servers.json`.
///
/// Observable so SwiftUI views binding to `entries` redraw when a server is
/// added, renamed, or removed.
@Observable
@MainActor
final class ServerRegistry {
    private static let logger = Logger(subsystem: "com.scarf", category: "ServerRegistry")
    private static let currentSchemaVersion = 1

    /// Remote (user-added) entries. Observable: views redraw on mutation.
    private(set) var entries: [ServerEntry] = []

    private let storeURL: URL

    init() {
        let support = FileManager.default.urls(for: .applicationSupportDirectory, in: .userDomainMask).first
            ?? URL(fileURLWithPath: NSHomeDirectory() + "/Library/Application Support")
        let dir = support.appendingPathComponent("scarf", isDirectory: true)
        self.storeURL = dir.appendingPathComponent("servers.json")
        load()
    }

    // MARK: - Lookup

    /// The implicit local server plus every persisted remote entry, in list
    /// order. Use this when populating UI like the toolbar switcher.
    var allContexts: [ServerContext] {
        [.local] + entries.map { $0.context }
    }

    /// Resolve an ID to a context, or `nil` if the entry no longer exists.
    /// Used by the multi-window root to detect "this window points at a
    /// server you've since removed" and show a dedicated empty state.
    func context(for id: ServerID) -> ServerContext? {
        if id == ServerContext.local.id { return .local }
        if let entry = entries.first(where: { $0.id == id }) {
            return entry.context
        }
        return nil
    }

    // MARK: - Mutations

    /// Optional callback fired whenever `entries` changes. The app wires
    /// this to `ServerLiveStatusRegistry.rebuild()` so the menu-bar fanout
    /// stays in sync without polling the entries array.
    var onEntriesChanged: (() -> Void)?

    @discardableResult
    func addServer(displayName: String, config: SSHConfig) -> ServerEntry {
        let entry = ServerEntry(
            id: ServerID(),
            displayName: displayName,
            kind: .ssh(config)
        )
        entries.append(entry)
        save()
        onEntriesChanged?()
        return entry
    }

    func updateServer(_ id: ServerID, displayName: String?, config: SSHConfig?) {
        guard let idx = entries.firstIndex(where: { $0.id == id }) else { return }
        if let name = displayName { entries[idx].displayName = name }
        if let cfg = config { entries[idx].kind = .ssh(cfg) }
        save()
        onEntriesChanged?()
    }

    func removeServer(_ id: ServerID) {
        // Grab the entry BEFORE removing it so we can tear down its transport
        // state. Without this the user would leak a ControlMaster socket
        // (~10min TTL) and a snapshot cache dir (indefinite) per removed
        // server — harmless individually, ugly at scale.
        let removed = entries.first { $0.id == id }
        entries.removeAll { $0.id == id }
        save()

        if let removed, case .ssh(let config) = removed.kind {
            let transport = SSHTransport(contextID: id, config: config, displayName: removed.displayName)
            transport.closeControlMaster()
        }
        SSHTransport.pruneSnapshotCache(for: id)
        // Drop process-wide cache entries keyed on this ServerID so a future
        // re-add with a colliding ID (theoretical — UUIDs are random, but be
        // defensive) doesn't serve stale data.
        Task.detached { await ServerContext.invalidateCaches(for: id) }

        onEntriesChanged?()
    }

    // MARK: - App-launch sweep

    /// Remove snapshot cache directories whose UUID isn't in the current
    /// registry. Handles the case where the user removed a server while the
    /// app was closed — we want the cache to converge to the registry's
    /// state at launch rather than carrying forever.
    func sweepOrphanCaches() {
        var keep: Set<ServerID> = [ServerContext.local.id]
        for entry in entries { keep.insert(entry.id) }
        SSHTransport.sweepOrphanSnapshots(keeping: keep)
        SSHTransport.sweepStaleControlSockets()
    }

    // MARK: - Persistence

    private func load() {
        guard FileManager.default.fileExists(atPath: storeURL.path) else {
            entries = []
            return
        }
        do {
            let data = try Data(contentsOf: storeURL)
            let file = try JSONDecoder().decode(RegistryFile.self, from: data)
            entries = file.entries
        } catch {
            Self.logger.error("Failed to load servers.json: \(error.localizedDescription)")
            entries = []
        }
    }

    private func save() {
        do {
            try FileManager.default.createDirectory(
                at: storeURL.deletingLastPathComponent(),
                withIntermediateDirectories: true
            )
            let file = RegistryFile(schemaVersion: Self.currentSchemaVersion, entries: entries)
            let encoder = JSONEncoder()
            encoder.outputFormatting = [.prettyPrinted, .sortedKeys]
            let data = try encoder.encode(file)
            try data.write(to: storeURL, options: .atomic)
        } catch {
            Self.logger.error("Failed to save servers.json: \(error.localizedDescription)")
        }
    }
}
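The registry's mutation API composes as follows. This is a sketch of hypothetical app-startup wiring, with the host and display name invented for illustration:

```swift
// Hypothetical wiring — values are illustrative, not from the Scarf codebase.
@MainActor func bootstrapRegistry() {
    let registry = ServerRegistry()
    registry.sweepOrphanCaches() // converge caches to the persisted registry

    let entry = registry.addServer(
        displayName: "deploy-box",
        config: SSHConfig(
            host: "deploy.example.com",
            user: "deploy",
            port: nil,            // defer to ~/.ssh/config / port 22
            identityFile: nil,    // defer to ssh-agent
            remoteHome: nil,      // use ~/.hermes on the remote side
            hermesBinaryHint: nil // filled in later by SSHTransport
        )
    )

    // The implicit local server always sorts first in allContexts.
    assert(registry.allContexts.first?.id == ServerContext.local.id)

    // Removal also tears down the ControlMaster socket and snapshot cache.
    registry.removeServer(entry.id)
}
```

Each mutation persists immediately via `save()` and then fires `onEntriesChanged`, so observers never see the callback before the on-disk state is current.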
@@ -24,6 +24,14 @@ actor ACPClient {
    private(set) var currentSessionId: String?
    private(set) var statusMessage = ""

    let context: ServerContext
    private let transport: any ServerTransport

    init(context: ServerContext = .local) {
        self.context = context
        self.transport = context.makeTransport()
    }

    /// Ring buffer of recent stderr lines from `hermes acp` — used to attach
    /// a diagnostic tail to user-visible errors. Capped to avoid unbounded
    /// growth when the subprocess logs heavily.
@@ -75,9 +83,15 @@ actor ACPClient {
        self._eventStream = stream
        self.eventContinuation = continuation

        let proc = Process()
        proc.executableURL = URL(fileURLWithPath: HermesPaths.hermesBinary)
        proc.arguments = ["acp"]
        // For local: Process is `hermes acp` directly.
        // For remote: the transport returns a Process configured as
        // `/usr/bin/ssh -T <opts> host -- <hermes> acp`. ACP's JSON-RPC
        // over stdio works identically because `-T` keeps the ssh channel
        // byte-clean and stdin/stdout travel end-to-end unmodified.
        let proc = transport.makeProcess(
            executable: context.paths.hermesBinary,
            args: ["acp"]
        )

        let stdin = Pipe()
        let stdout = Pipe()
@@ -88,11 +102,28 @@ actor ACPClient {
        proc.standardError = stderr

        // ACP uses JSON-RPC over pipes — do NOT set TERM to avoid terminal escape pollution.
        // Use the enriched environment so any tools hermes spawns (MCP servers,
        // shell commands) can find brew/nvm/asdf binaries on PATH.
        var env = HermesFileService.enrichedEnvironment()
        env.removeValue(forKey: "TERM")
        proc.environment = env
        if context.isRemote {
            // Remote: this is the LOCAL ssh process spawning `ssh host …
            // hermes acp`. We don't forward our local PATH/credentials to
            // the remote (hermes runs under the remote user's login env),
            // but the ssh binary itself needs SSH_AUTH_SOCK to reach the
            // local ssh-agent for key-based auth.
            var env = ProcessInfo.processInfo.environment
            let shellEnv = HermesFileService.enrichedEnvironment()
            for key in ["SSH_AUTH_SOCK", "SSH_AGENT_PID"] {
                if env[key] == nil, let v = shellEnv[key], !v.isEmpty {
                    env[key] = v
                }
            }
            env.removeValue(forKey: "TERM")
            proc.environment = env
        } else {
            // Local: enriched env so any tools hermes spawns (MCP servers,
            // shell commands) can find brew/nvm/asdf binaries on PATH.
            var env = HermesFileService.enrichedEnvironment()
            env.removeValue(forKey: "TERM")
            proc.environment = env
        }

        proc.terminationHandler = { [weak self] proc in
            Task { await self?.handleTermination(exitCode: proc.terminationStatus) }
@@ -405,14 +436,14 @@ actor ACPClient {
                guard !lineData.isEmpty else { continue }

                if let lineStr = String(data: lineData, encoding: .utf8) {
                    await self?.logger.debug("ACP recv: \(lineStr.prefix(200))")
                    self?.logger.debug("ACP recv: \(lineStr.prefix(200))")
                }

                do {
                    let message = try JSONDecoder().decode(ACPRawMessage.self, from: lineData)
                    await self?.handleMessage(message)
                } catch {
                    await self?.logger.warning("Failed to decode ACP message: \(error.localizedDescription)")
                    self?.logger.warning("Failed to decode ACP message: \(error.localizedDescription)")
                }
            }
        }
@@ -428,7 +459,7 @@ actor ACPClient {
                if data.isEmpty { break }
                if let text = String(data: data, encoding: .utf8)?.trimmingCharacters(in: .whitespacesAndNewlines),
                   !text.isEmpty {
                    await self?.logger.info("ACP stderr: \(text.prefix(500))")
                    self?.logger.info("ACP stderr: \(text.prefix(500))")
                    await self?.appendStderr(text)
                }
            }

@@ -1,25 +1,151 @@
import Foundation
import SQLite3
import os

/// Dedupes concurrent `snapshotSQLite` calls for the same server. When the
/// file watcher ticks, Dashboard + Sessions + Activity (+ Chat's loadHistory)
/// can all ask for a fresh snapshot within the same millisecond — without
/// coordination they each spawn their own `ssh host sqlite3 .backup; scp`
/// round-trip, three parallel backups of the same DB. Callers in flight for
/// the same `ServerID` await the first caller's Task and share its result.
actor SnapshotCoordinator {
    static let shared = SnapshotCoordinator()
    private var inFlight: [ServerID: Task<URL, Error>] = [:]

    func snapshot(
        remotePath: String,
        contextID: ServerID,
        transport: any ServerTransport
    ) async throws -> URL {
        if let existing = inFlight[contextID] {
            return try await existing.value
        }
        let task = Task<URL, Error> {
            try transport.snapshotSQLite(remotePath: remotePath)
        }
        inFlight[contextID] = task
        defer { inFlight[contextID] = nil }
        return try await task.value
    }
}

actor HermesDataService {
    private static let logger = Logger(subsystem: "com.scarf", category: "HermesDataService")

    private var db: OpaquePointer?
    private var hasV07Schema = false
    /// Local filesystem path we last opened. For remote contexts this is
    /// the cached snapshot under `~/Library/Caches/scarf/snapshots/<id>/`.
    private var openedAtPath: String?
    /// Last error from `open()` / `refresh()`, user-presentable. `nil` means
    /// the last attempt succeeded. Views surface this when their own load
    /// path fails, so the user sees "Permission denied reading state.db"
    /// instead of an empty Dashboard with no explanation.
    private(set) var lastOpenError: String?

    func open() -> Bool {
    let context: ServerContext
    private let transport: any ServerTransport

    init(context: ServerContext = .local) {
        self.context = context
        self.transport = context.makeTransport()
    }

    func open() async -> Bool {
        if db != nil { return true }
        let path = HermesPaths.stateDB
        guard FileManager.default.fileExists(atPath: path) else { return false }
        let flags = SQLITE_OPEN_READONLY | SQLITE_OPEN_NOMUTEX
        let result = sqlite3_open_v2(path, &db, flags, nil)
        let localPath: String
        if context.isRemote {
            // Pull a fresh snapshot from the remote host. Uses `sqlite3
            // .backup` on the remote, which is WAL-safe; a plain cp would
            // corrupt. Routed through SnapshotCoordinator so concurrent
            // view models don't each spawn a parallel SSH backup for the
            // same server.
            do {
                let url = try await SnapshotCoordinator.shared.snapshot(
                    remotePath: context.paths.stateDB,
                    contextID: context.id,
                    transport: transport
                )
                localPath = url.path
                lastOpenError = nil
            } catch {
                lastOpenError = humanize(error)
                Self.logger.warning("snapshotSQLite failed: \(error.localizedDescription, privacy: .public)")
                return false
            }
        } else {
            localPath = context.paths.stateDB
            guard FileManager.default.fileExists(atPath: localPath) else {
                lastOpenError = "Hermes state database not found at \(localPath)."
                return false
            }
        }
        // Remote snapshots are point-in-time copies that no one writes to;
        // opening them with `immutable=1` tells SQLite to skip WAL/SHM and
        // locking entirely, which is both faster and avoids spurious
        // "unable to open database file" errors if the snapshot ever gets
        // pulled mid-checkpoint. Local points at the live Hermes DB where
        // the process already has WAL enabled in the header, so a plain
        // readonly open is the right thing.
        let flags: Int32
        let openPath: String
        if context.isRemote {
            openPath = "file:\(localPath)?immutable=1"
            flags = SQLITE_OPEN_READONLY | SQLITE_OPEN_NOMUTEX | SQLITE_OPEN_URI
        } else {
            openPath = localPath
            flags = SQLITE_OPEN_READONLY | SQLITE_OPEN_NOMUTEX
        }
        let result = sqlite3_open_v2(openPath, &db, flags, nil)
        guard result == SQLITE_OK else {
            let msg: String
            if let db {
                msg = String(cString: sqlite3_errmsg(db))
            } else {
                msg = "sqlite3_open_v2 returned \(result)"
            }
            lastOpenError = "Couldn't open state.db: \(msg)"
            Self.logger.warning("sqlite3_open_v2 failed (\(result)) at \(localPath, privacy: .public): \(msg, privacy: .public)")
            db = nil
            return false
        }
        sqlite3_exec(db, "PRAGMA journal_mode=WAL", nil, nil, nil)
        openedAtPath = localPath
        lastOpenError = nil
        detectSchema()
        return true
    }

    /// Turn a transport error into the one-line string Dashboard shows. Adds
    /// hints for the common "sqlite3 not installed" and "permission denied"
    /// cases so users know what to do.
    private nonisolated func humanize(_ error: Error) -> String {
        let desc = (error as? LocalizedError)?.errorDescription ?? error.localizedDescription
        let lower = desc.lowercased()
        if lower.contains("sqlite3: command not found") || lower.contains("sqlite3: not found") {
            return "sqlite3 is not installed on \(context.displayName). Install it with `apt install sqlite3` (Ubuntu/Debian) or `yum install sqlite` (RHEL/Fedora)."
        }
        if lower.contains("permission denied") {
            return "Permission denied reading Hermes state on \(context.displayName). The SSH user may not have read access to ~/.hermes/state.db — try Run Diagnostics."
        }
        if lower.contains("no such file") {
            return "Hermes state not found at ~/.hermes on \(context.displayName). If Hermes is installed elsewhere, set its data directory in Manage Servers."
        }
        return desc
    }

    /// Force a fresh snapshot pull + reopen. Used on session-load and in
    /// any path that needs the UI to reflect writes Hermes just made.
    /// Without this, remote snapshots would be frozen at the first `open()`
    /// for the app's lifetime — new messages added to a resumed session
    /// would never appear because the snapshot was pulled before they were
    /// written. Local contexts pay essentially nothing: close+reopen on a
    /// live DB is a no-op.
    @discardableResult
    func refresh() async -> Bool {
        close()
        return await open()
    }

    func close() {
        if let db {
            sqlite3_close(db)
@@ -431,11 +557,10 @@ actor HermesDataService {
    }

    func stateDBModificationDate() -> Date? {
        let walPath = HermesPaths.stateDB + "-wal"
        let dbPath = HermesPaths.stateDB
        let fm = FileManager.default
        let walDate = (try? fm.attributesOfItem(atPath: walPath))?[.modificationDate] as? Date
        let dbDate = (try? fm.attributesOfItem(atPath: dbPath))?[.modificationDate] as? Date
        // For remote contexts we stat the remote paths. For local it's the
        // same FileManager lookup as before, just via the transport.
        let walDate = transport.stat(context.paths.stateDB + "-wal")?.mtime
        let dbDate = transport.stat(context.paths.stateDB)?.mtime
        if let w = walDate, let d = dbDate {
            return max(w, d)
        }

@@ -22,15 +22,24 @@ struct HermesEnvService: Sendable {

    /// Path to `~/.hermes/.env`. Kept configurable for tests.
    let path: String
    let transport: any ServerTransport

    init(path: String = HermesPaths.home + "/.env") {
    nonisolated init(context: ServerContext = .local) {
        self.path = context.paths.envFile
        self.transport = context.makeTransport()
    }

    /// Escape hatch for tests that want to point at a fixture path directly.
    init(path: String) {
        self.path = path
        self.transport = LocalTransport()
    }

    /// Read the .env file into a `[key: value]` dict. Comments and commented-out
    /// assignments are ignored. Missing file returns an empty dict.
    func load() -> [String: String] {
        guard let content = try? String(contentsOfFile: path, encoding: .utf8) else {
        guard let data = try? transport.readFile(path),
              let content = String(data: data, encoding: .utf8) else {
            return [:]
        }
        var result: [String: String] = [:]
@@ -69,7 +78,8 @@ struct HermesEnvService: Sendable {
        var lines: [String]

        // Start from existing file contents, or a minimal header if creating new.
        if let content = try? String(contentsOfFile: path, encoding: .utf8) {
        if let data = try? transport.readFile(path),
           let content = String(data: data, encoding: .utf8) {
            lines = content.components(separatedBy: "\n")
            // Trim a single trailing empty line from splitting the final newline;
            // we'll re-add it on write.
@@ -105,7 +115,8 @@ struct HermesEnvService: Sendable {
    /// uncommenting. If the key doesn't exist, this is a no-op.
    @discardableResult
    func unset(_ key: String) -> Bool {
        guard let content = try? String(contentsOfFile: path, encoding: .utf8) else {
        guard let data = try? transport.readFile(path),
              let content = String(data: data, encoding: .utf8) else {
            return true
        }
        var lines = content.components(separatedBy: "\n")
@@ -125,28 +136,18 @@ struct HermesEnvService: Sendable {

    // MARK: - Internals

    /// Writes the entire file in one shot via a tmp + rename to avoid corrupting
    /// `.env` if the process is killed mid-write. Preserves `0600` permissions
    /// since `.env` typically holds secrets.
    /// Writes the entire file in one shot through the transport. For local
    /// contexts this ends up doing the same atomic-rename dance as before
    /// (via `LocalTransport.writeFile`). For remote contexts this goes
    /// through `scp` + remote `mv`, still atomic from Hermes's point of
    /// view.
    private func atomicWrite(_ content: String) -> Bool {
        let tmp = path + ".tmp"
        guard let data = content.data(using: .utf8) else { return false }
        do {
            try content.write(toFile: tmp, atomically: false, encoding: .utf8)
            // Mirror the typical `.env` mode of `0600` (owner read/write only).
            try FileManager.default.setAttributes([.posixPermissions: 0o600], ofItemAtPath: tmp)
            // Swap into place. FileManager.replaceItem handles the replacement
            // atomically on the same volume; fall back to a two-step rename.
            let destURL = URL(fileURLWithPath: path)
            let tmpURL = URL(fileURLWithPath: tmp)
            if FileManager.default.fileExists(atPath: path) {
                _ = try FileManager.default.replaceItemAt(destURL, withItemAt: tmpURL)
            } else {
                try FileManager.default.moveItem(at: tmpURL, to: destURL)
            }
            try transport.writeFile(path, data: data)
            return true
        } catch {
            logger.error("Failed to write .env: \(error.localizedDescription)")
            try? FileManager.default.removeItem(atPath: tmp)
            return false
        }
    }

@@ -1,15 +1,34 @@
import Foundation
import os

struct HermesFileService: Sendable {

    nonisolated static let logger = Logger(subsystem: "com.scarf", category: "HermesFileService")

    let context: ServerContext
    let transport: any ServerTransport

    nonisolated init(context: ServerContext = .local) {
        self.context = context
        self.transport = context.makeTransport()
    }

    // MARK: - Config

    func loadConfig() -> HermesConfig {
        guard let content = readFile(HermesPaths.configYAML) else { return .empty }
    nonisolated func loadConfig() -> HermesConfig {
        guard let content = readFile(context.paths.configYAML) else { return .empty }
        return parseConfig(content)
    }

    private func parseConfig(_ yaml: String) -> HermesConfig {
    /// Error-surfacing config load. Used by Dashboard to show the user a
    /// specific reason when config.yaml can't be read on a remote host
    /// (permission denied, missing file, sqlite3 not installed, etc.)
    /// instead of silently falling back to `.empty`.
    nonisolated func loadConfigResult() -> Result<HermesConfig, Error> {
        readFileResult(context.paths.configYAML).map { parseConfig($0) }
    }

    nonisolated private func parseConfig(_ yaml: String) -> HermesConfig {
        let parsed = Self.parseNestedYAML(yaml)
        let values = parsed.values
        let lists = parsed.lists
@@ -372,59 +391,80 @@ struct HermesFileService: Sendable {

    // MARK: - Gateway State

    func loadGatewayState() -> GatewayState? {
        guard let data = readFileData(HermesPaths.gatewayStateJSON) else { return nil }
    nonisolated func loadGatewayState() -> GatewayState? {
        guard let data = readFileData(context.paths.gatewayStateJSON) else { return nil }
        do {
            return try JSONDecoder().decode(GatewayState.self, from: data)
        } catch {
            print("[Scarf] Failed to decode gateway state: \(error.localizedDescription)")
            Self.logger.warning("Failed to decode gateway state: \(error.localizedDescription, privacy: .public)")
            return nil
        }
    }

    /// Error-surfacing gateway-state load. `.success(nil)` means the file
    /// doesn't exist yet (gateway hasn't written state — normal when Hermes
    /// is stopped). `.failure` means the file exists but couldn't be read
    /// (permission denied, connection down, JSON corruption).
    nonisolated func loadGatewayStateResult() -> Result<GatewayState?, Error> {
        // Distinguish "file doesn't exist yet" (normal, returns .success(nil))
        // from "file exists but we can't read or parse it" (error).
        if !transport.fileExists(context.paths.gatewayStateJSON) {
            return .success(nil)
        }
        switch readFileDataResult(context.paths.gatewayStateJSON) {
        case .success(let data):
            do {
                return .success(try JSONDecoder().decode(GatewayState.self, from: data))
            } catch {
                Self.logger.warning("Failed to decode gateway state: \(error.localizedDescription, privacy: .public)")
                return .failure(error)
            }
        case .failure(let err):
            return .failure(err)
        }
    }

    // MARK: - Memory

    func loadMemoryProfiles() -> [String] {
        let fm = FileManager.default
        guard let entries = try? fm.contentsOfDirectory(atPath: HermesPaths.memoriesDir) else { return [] }
    nonisolated func loadMemoryProfiles() -> [String] {
        guard let entries = try? transport.listDirectory(context.paths.memoriesDir) else { return [] }
        return entries.filter { name in
            var isDir: ObjCBool = false
            let path = HermesPaths.memoriesDir + "/" + name
            return fm.fileExists(atPath: path, isDirectory: &isDir) && isDir.boolValue
            let path = context.paths.memoriesDir + "/" + name
            return transport.stat(path)?.isDirectory == true
        }.sorted()
    }

    func loadMemory(profile: String = "") -> String {
    nonisolated func loadMemory(profile: String = "") -> String {
        let path = memoryPath(profile: profile, file: "MEMORY.md")
        return readFile(path) ?? ""
    }

    func loadUserProfile(profile: String = "") -> String {
    nonisolated func loadUserProfile(profile: String = "") -> String {
        let path = memoryPath(profile: profile, file: "USER.md")
        return readFile(path) ?? ""
    }

    func saveMemory(_ content: String, profile: String = "") {
    nonisolated func saveMemory(_ content: String, profile: String = "") {
        let path = memoryPath(profile: profile, file: "MEMORY.md")
        writeFile(path, content: content)
    }

    func saveUserProfile(_ content: String, profile: String = "") {
    nonisolated func saveUserProfile(_ content: String, profile: String = "") {
        let path = memoryPath(profile: profile, file: "USER.md")
        writeFile(path, content: content)
    }

    private func memoryPath(profile: String, file: String) -> String {
    nonisolated private func memoryPath(profile: String, file: String) -> String {
        if profile.isEmpty {
            return HermesPaths.memoriesDir + "/" + file
            return context.paths.memoriesDir + "/" + file
        }
        return HermesPaths.memoriesDir + "/" + profile + "/" + file
        return context.paths.memoriesDir + "/" + profile + "/" + file
    }

    // MARK: - Cron

    func loadCronJobs() -> [HermesCronJob] {
        guard let data = readFileData(HermesPaths.cronJobsJSON) else { return [] }
    nonisolated func loadCronJobs() -> [HermesCronJob] {
        guard let data = readFileData(context.paths.cronJobsJSON) else { return [] }
        do {
            let file = try JSONDecoder().decode(CronJobsFile.self, from: data)
            return file.jobs
@@ -434,10 +474,9 @@ struct HermesFileService: Sendable {
        }
    }

    func loadCronOutput(jobId: String) -> String? {
        let dir = HermesPaths.cronOutputDir
        let fm = FileManager.default
        guard let files = try? fm.contentsOfDirectory(atPath: dir) else { return nil }
    nonisolated func loadCronOutput(jobId: String) -> String? {
        let dir = context.paths.cronOutputDir
        guard let files = try? transport.listDirectory(dir) else { return nil }
        let matching = files.filter { $0.contains(jobId) }.sorted().last
        guard let filename = matching else { return nil }
        return readFile(dir + "/" + filename)
@@ -445,22 +484,19 @@ struct HermesFileService: Sendable {

    // MARK: - Skills

    func loadSkills() -> [HermesSkillCategory] {
        let dir = HermesPaths.skillsDir
        let fm = FileManager.default
        guard let categories = try? fm.contentsOfDirectory(atPath: dir) else { return [] }
    nonisolated func loadSkills() -> [HermesSkillCategory] {
        let dir = context.paths.skillsDir
        guard let categories = try? transport.listDirectory(dir) else { return [] }

        return categories.sorted().compactMap { categoryName in
            let categoryPath = dir + "/" + categoryName
            var isDir: ObjCBool = false
            guard fm.fileExists(atPath: categoryPath, isDirectory: &isDir), isDir.boolValue else { return nil }
            guard let skillNames = try? fm.contentsOfDirectory(atPath: categoryPath) else { return nil }
            guard transport.stat(categoryPath)?.isDirectory == true else { return nil }
            guard let skillNames = try? transport.listDirectory(categoryPath) else { return nil }

            let skills = skillNames.sorted().compactMap { skillName -> HermesSkill? in
                let skillPath = categoryPath + "/" + skillName
                var isSkillDir: ObjCBool = false
                guard fm.fileExists(atPath: skillPath, isDirectory: &isSkillDir), isSkillDir.boolValue else { return nil }
                let files = (try? fm.contentsOfDirectory(atPath: skillPath)) ?? []
                guard transport.stat(skillPath)?.isDirectory == true else { return nil }
                let files = (try? transport.listDirectory(skillPath)) ?? []
                let requiredConfig = parseSkillRequiredConfig(skillPath + "/skill.yaml")
                return HermesSkill(
                    id: categoryName + "/" + skillName,
@@ -477,25 +513,25 @@ struct HermesFileService: Sendable {
        }
    }

    func loadSkillContent(path: String) -> String {
    nonisolated func loadSkillContent(path: String) -> String {
        guard isValidSkillPath(path) else { return "" }
        return readFile(path) ?? ""
    }

    func saveSkillContent(path: String, content: String) {
    nonisolated func saveSkillContent(path: String, content: String) {
        guard isValidSkillPath(path) else { return }
        writeFile(path, content: content)
    }

    private func isValidSkillPath(_ path: String) -> Bool {
        guard !path.contains(".."), path.hasPrefix(HermesPaths.skillsDir) else {
    nonisolated private func isValidSkillPath(_ path: String) -> Bool {
        guard !path.contains(".."), path.hasPrefix(context.paths.skillsDir) else {
            print("[Scarf] Rejected skill path outside skills directory: \(path)")
            return false
        }
        return true
    }

    private func parseSkillRequiredConfig(_ path: String) -> [String] {
    nonisolated private func parseSkillRequiredConfig(_ path: String) -> [String] {
        guard let content = readFile(path) else { return [] }
        var result: [String] = []
        var inRequiredConfig = false
@@ -521,13 +557,12 @@ struct HermesFileService: Sendable {

    // MARK: - MCP Servers

    func loadMCPServers() -> [HermesMCPServer] {
        guard let yaml = readFile(HermesPaths.configYAML) else { return [] }
    nonisolated func loadMCPServers() -> [HermesMCPServer] {
        guard let yaml = readFile(context.paths.configYAML) else { return [] }
        let parsed = parseMCPServersBlock(yaml: yaml)
        let fm = FileManager.default
        return parsed.map { server in
            let tokenPath = HermesPaths.mcpTokensDir + "/" + server.name + ".json"
            let hasToken = fm.fileExists(atPath: tokenPath)
            let tokenPath = context.paths.mcpTokensDir + "/" + server.name + ".json"
            let hasToken = transport.fileExists(tokenPath)
            guard hasToken != server.hasOAuthToken else { return server }
            return HermesMCPServer(
                name: server.name,
@@ -554,7 +589,7 @@ struct HermesFileService: Sendable {
    /// Args are written separately via `setMCPServerArgs` to avoid argparse issues with `-`-prefixed args like `-y`.
    /// Pipes `y\n` because the CLI prompts to save even when the initial connection check fails (which it will, since we intentionally add no args first).
    @discardableResult
    func addMCPServerStdio(name: String, command: String, args: [String]) -> (exitCode: Int32, output: String) {
    nonisolated func addMCPServerStdio(name: String, command: String, args: [String]) -> (exitCode: Int32, output: String) {
        let addResult = runHermesCLI(
            args: ["mcp", "add", name, "--command", command],
            timeout: 45,
@@ -568,7 +603,7 @@ struct HermesFileService: Sendable {
    }

    @discardableResult
    func addMCPServerHTTP(name: String, url: String, auth: String?) -> (exitCode: Int32, output: String) {
    nonisolated func addMCPServerHTTP(name: String, url: String, auth: String?) -> (exitCode: Int32, output: String) {
        var cliArgs: [String] = ["mcp", "add", name, "--url", url]
        if let auth, !auth.isEmpty {
            cliArgs.append(contentsOf: ["--auth", auth])
@@ -577,14 +612,14 @@ struct HermesFileService: Sendable {
    }

    @discardableResult
    func setMCPServerArgs(name: String, args: [String]) -> Bool {
    nonisolated func setMCPServerArgs(name: String, args: [String]) -> Bool {
        patchMCPServerField(name: name) { entryLines in
            Self.replaceOrInsertList(header: "args", items: args, in: &entryLines)
        }
    }

    @discardableResult
    func removeMCPServer(name: String) -> (exitCode: Int32, output: String) {
    nonisolated func removeMCPServer(name: String) -> (exitCode: Int32, output: String) {
        runHermesCLI(args: ["mcp", "remove", name], timeout: 30)
    }

@@ -613,7 +648,7 @@ struct HermesFileService: Sendable {
        )
    }

    private static func parseToolListFromTestOutput(_ output: String) -> [String] {
    nonisolated private static func parseToolListFromTestOutput(_ output: String) -> [String] {
        var tools: [String] = []
        for rawLine in output.components(separatedBy: "\n") {
            let line = rawLine.trimmingCharacters(in: .whitespaces)
@@ -629,35 +664,35 @@ struct HermesFileService: Sendable {
    }

    @discardableResult
    func toggleMCPServerEnabled(name: String, enabled: Bool) -> Bool {
    nonisolated func toggleMCPServerEnabled(name: String, enabled: Bool) -> Bool {
        patchMCPServerField(name: name) { entryLines in
            Self.replaceOrInsertScalar(key: "enabled", value: enabled ? "true" : "false", in: &entryLines)
        }
    }

    @discardableResult
    func setMCPServerEnv(name: String, env: [String: String]) -> Bool {
    nonisolated func setMCPServerEnv(name: String, env: [String: String]) -> Bool {
        patchMCPServerField(name: name) { entryLines in
            Self.replaceOrInsertSubMap(header: "env", map: env, in: &entryLines)
        }
    }

    @discardableResult
    func setMCPServerHeaders(name: String, headers: [String: String]) -> Bool {
    nonisolated func setMCPServerHeaders(name: String, headers: [String: String]) -> Bool {
        patchMCPServerField(name: name) { entryLines in
            Self.replaceOrInsertSubMap(header: "headers", map: headers, in: &entryLines)
        }
    }

    @discardableResult
    func updateMCPToolFilters(name: String, include: [String], exclude: [String], resources: Bool, prompts: Bool) -> Bool {
    nonisolated func updateMCPToolFilters(name: String, include: [String], exclude: [String], resources: Bool, prompts: Bool) -> Bool {
        patchMCPServerField(name: name) { entryLines in
            Self.replaceOrInsertToolsBlock(include: include, exclude: exclude, resources: resources, prompts: prompts, in: &entryLines)
        }
    }

    @discardableResult
    func setMCPServerTimeouts(name: String, timeout: Int?, connectTimeout: Int?) -> Bool {
    nonisolated func setMCPServerTimeouts(name: String, timeout: Int?, connectTimeout: Int?) -> Bool {
        patchMCPServerField(name: name) { entryLines in
            if let timeout {
                Self.replaceOrInsertScalar(key: "timeout", value: String(timeout), in: &entryLines)
@@ -673,10 +708,10 @@ struct HermesFileService: Sendable {
    }

    @discardableResult
    func deleteMCPOAuthToken(name: String) -> Bool {
        let path = HermesPaths.mcpTokensDir + "/" + name + ".json"
    nonisolated func deleteMCPOAuthToken(name: String) -> Bool {
        let path = context.paths.mcpTokensDir + "/" + name + ".json"
        do {
            try FileManager.default.removeItem(atPath: path)
            try transport.removeFile(path)
            return true
        } catch {
            return false
@@ -684,7 +719,7 @@ struct HermesFileService: Sendable {
    }

    @discardableResult
    func restartGateway() -> (exitCode: Int32, output: String) {
    nonisolated func restartGateway() -> (exitCode: Int32, output: String) {
        runHermesCLI(args: ["gateway", "restart"], timeout: 30)
    }

@@ -696,7 +731,7 @@ struct HermesFileService: Sendable {
        let suffix: [String]
    }

    private func extractMCPBlock(yaml: String) -> MCPBlockLocation {
    nonisolated private func extractMCPBlock(yaml: String) -> MCPBlockLocation {
        let lines = yaml.components(separatedBy: "\n")
        var blockStart = -1
        var blockEnd = lines.count
@@ -739,7 +774,7 @@ struct HermesFileService: Sendable {
        )
    }

    fileprivate func parseMCPServersBlock(yaml: String) -> [HermesMCPServer] {
    nonisolated fileprivate func parseMCPServersBlock(yaml: String) -> [HermesMCPServer] {
        let location = extractMCPBlock(yaml: yaml)
        guard location.block.count > 1 else { return [] }

@@ -875,8 +910,8 @@ struct HermesFileService: Sendable {

    // MARK: - MCP YAML: surgical patcher

    private func patchMCPServerField(name: String, mutate: (inout [String]) -> Void) -> Bool {
        guard let yaml = readFile(HermesPaths.configYAML) else { return false }
    nonisolated private func patchMCPServerField(name: String, mutate: (inout [String]) -> Void) -> Bool {
        guard let yaml = readFile(context.paths.configYAML) else { return false }
        let location = extractMCPBlock(yaml: yaml)
        guard !location.block.isEmpty else { return false }

@@ -925,13 +960,13 @@ struct HermesFileService: Sendable {
        combined.append(contentsOf: block)
        combined.append(contentsOf: location.suffix)
        let newYAML = combined.joined(separator: "\n")
        writeFile(HermesPaths.configYAML, content: newYAML)
        writeFile(context.paths.configYAML, content: newYAML)
        return true
    }

    // MARK: - MCP YAML: mutators

    private static func replaceOrInsertScalar(key: String, value: String, in lines: inout [String]) {
    nonisolated private static func replaceOrInsertScalar(key: String, value: String, in lines: inout [String]) {
        // entry header is at lines[0] at indent 2. Scalars live at indent 4.
        for index in 1..<lines.count {
            let line = lines[index]
@@ -949,7 +984,7 @@ struct HermesFileService: Sendable {
        lines.insert("    \(key): \(value)", at: 1)
    }

    private static func removeScalar(key: String, in lines: inout [String]) {
    nonisolated private static func removeScalar(key: String, in lines: inout [String]) {
        var removeIndex: Int?
        for index in 1..<lines.count {
            let line = lines[index]
@@ -968,7 +1003,7 @@ struct HermesFileService: Sendable {
        }
    }

    private static func replaceOrInsertList(header: String, items: [String], in lines: inout [String]) {
    nonisolated private static func replaceOrInsertList(header: String, items: [String], in lines: inout [String]) {
        var headerIndex: Int?
        var removeEnd: Int?
        for index in 1..<lines.count {
@@ -1026,7 +1061,7 @@ struct HermesFileService: Sendable {
        }
    }

    private static func replaceOrInsertSubMap(header: String, map: [String: String], in lines: inout [String]) {
    nonisolated private static func replaceOrInsertSubMap(header: String, map: [String: String], in lines: inout [String]) {
        var headerIndex: Int?
        var removeEnd: Int?
        for index in 1..<lines.count {
@@ -1084,7 +1119,7 @@ struct HermesFileService: Sendable {
        }
    }

    private static func replaceOrInsertToolsBlock(include: [String], exclude: [String], resources: Bool, prompts: Bool, in lines: inout [String]) {
    nonisolated private static func replaceOrInsertToolsBlock(include: [String], exclude: [String], resources: Bool, prompts: Bool, in lines: inout [String]) {
        var headerIndex: Int?
        var removeEnd: Int?
        for index in 1..<lines.count {
@@ -1133,7 +1168,7 @@ struct HermesFileService: Sendable {
        }
    }

    private static func yamlScalar(_ value: String) -> String {
    nonisolated private static func yamlScalar(_ value: String) -> String {
|
||||
if value.isEmpty { return "\"\"" }
|
||||
// YAML 1.2 reserved indicators that change meaning at the start of a
|
||||
// scalar: @ * & ? | > ! % , [ ] { } < ` ' " — plus space (would be
|
||||
@@ -1157,7 +1192,7 @@ struct HermesFileService: Sendable {
|
||||
return value
|
||||
}
|
||||
|
||||
private static func unquote(_ value: String) -> String {
|
||||
nonisolated private static func unquote(_ value: String) -> String {
|
||||
var v = value
|
||||
if (v.hasPrefix("\"") && v.hasSuffix("\"") && v.count >= 2) || (v.hasPrefix("'") && v.hasSuffix("'") && v.count >= 2) {
|
||||
v = String(v.dropFirst().dropLast())
|
||||
@@ -1167,46 +1202,79 @@ struct HermesFileService: Sendable {
|
||||
|
||||
// MARK: - Hermes Process

func isHermesRunning() -> Bool {
nonisolated func isHermesRunning() -> Bool {
hermesPID() != nil
}

func hermesPID() -> pid_t? {
let pipe = Pipe()
let process = Process()
process.executableURL = URL(fileURLWithPath: "/usr/bin/pgrep")
process.arguments = ["-f", "hermes"]
process.standardOutput = pipe
process.standardError = Pipe()
nonisolated func hermesPID() -> pid_t? {
switch hermesPIDResult() {
case .success(let pid): return pid
case .failure: return nil
}
}

/// Error-surfacing variant. `.success(nil)` means `pgrep` ran successfully
/// and found no hermes process (Hermes is genuinely not running).
/// `.failure` means we couldn't probe at all (pgrep missing, connection
/// down, permission issue) — a *different* UX from "not running".
nonisolated func hermesPIDResult() -> Result<pid_t?, Error> {
do {
try process.run()
process.waitUntilExit()
let data = pipe.fileHandleForReading.readDataToEndOfFile()
let output = String(data: data, encoding: .utf8) ?? ""
guard let firstLine = output.components(separatedBy: "\n").first(where: { !$0.isEmpty }),
let pid = pid_t(firstLine.trimmingCharacters(in: .whitespaces)) else { return nil }
return pid
let result = try transport.runProcess(
executable: "/usr/bin/pgrep",
args: ["-f", "hermes"],
stdin: nil,
timeout: 5
)
// pgrep exits 1 when nothing matches — that's "not running", NOT an
// error. Anything else (127=command not found, 255=ssh failure) is.
if result.exitCode == 0 {
if let firstLine = result.stdoutString
.components(separatedBy: "\n")
.first(where: { !$0.isEmpty }),
let pid = pid_t(firstLine.trimmingCharacters(in: .whitespaces)) {
return .success(pid)
}
return .success(nil)
} else if result.exitCode == 1 {
return .success(nil) // genuinely not running
} else {
let err = TransportError.commandFailed(exitCode: result.exitCode, stderr: result.stderrString)
Self.logger.warning("pgrep failed (exit \(result.exitCode)): \(result.stderrString, privacy: .public)")
return .failure(err)
}
} catch {
return nil
Self.logger.warning("pgrep transport error: \(error.localizedDescription, privacy: .public)")
return .failure(error)
}
}

@discardableResult
func stopHermes() -> Bool {
nonisolated func stopHermes() -> Bool {
// v0.9.0 fixed `hermes gateway stop` so it issues `launchctl bootout` and
// waits for exit. Use the CLI to avoid racing launchd's KeepAlive respawn.
if runHermesCLI(args: ["gateway", "stop"]).exitCode == 0 {
return true
}
guard let pid = hermesPID() else { return false }
// For remote we can't issue a raw `kill(2)` — route through `kill(1)`
// via the transport. Local uses the syscall for its minimal overhead.
if context.isRemote {
let result = try? transport.runProcess(
executable: "/bin/kill",
args: ["-TERM", String(pid)],
stdin: nil,
timeout: 5
)
return (result?.exitCode ?? -1) == 0
}
return kill(pid, SIGTERM) == 0
}

nonisolated func hermesBinaryPath() -> String? {
// Single source of truth for install-location candidates lives in
// HermesPaths.hermesBinaryCandidates — keeps pipx/brew/manual lookups
// HermesPathSet.hermesBinaryCandidates — keeps pipx/brew/manual lookups
// consistent across the app.
return HermesPaths.hermesBinaryCandidates
return HermesPathSet.hermesBinaryCandidates
.first { FileManager.default.isExecutableFile(atPath: $0) }
}

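An aside on the exit-code convention the new `hermesPIDResult()` leans on: `pgrep` distinguishes "no match" from a real failure via its exit status, and the diff's comment spells out why exit 1 must not be treated as an error. A minimal standalone sketch of that mapping (illustrative names, not part of the diff):

```swift
import Foundation

// pgrep exits 0 with a PID on stdout when a match exists, 1 when nothing
// matches (not an error), and anything else signals a real failure
// (e.g. 127 = command not found, 255 = ssh transport failure).
enum ProbeOutcome: Equatable {
    case running(pid: Int32)
    case notRunning
    case probeFailed(exitCode: Int32)
}

func interpretPgrep(exitCode: Int32, stdout: String) -> ProbeOutcome {
    switch exitCode {
    case 0:
        // Take the first non-empty line of stdout as the PID.
        let firstLine = stdout
            .components(separatedBy: "\n")
            .first { !$0.isEmpty } ?? ""
        if let pid = Int32(firstLine.trimmingCharacters(in: .whitespaces)) {
            return .running(pid: pid)
        }
        return .notRunning
    case 1:
        return .notRunning // genuinely no matching process
    default:
        return .probeFailed(exitCode: exitCode)
    }
}
```

Treating exit 1 as a success case is what lets callers show "not running" instead of a spurious error when the probe itself worked.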
@@ -1216,14 +1284,20 @@ struct HermesFileService: Sendable {
/// resolves AI provider auth by reading env vars — a GUI-launched Scarf
/// subprocess sees none of the `export ANTHROPIC_API_KEY=…` lines from
/// the user's shell init files.
private static let shellEnvKeys: [String] = [
nonisolated private static let shellEnvKeys: [String] = [
"PATH",
"ANTHROPIC_API_KEY", "ANTHROPIC_TOKEN", "ANTHROPIC_BASE_URL",
"OPENAI_API_KEY", "OPENAI_BASE_URL",
"OPENROUTER_API_KEY",
"GEMINI_API_KEY", "GOOGLE_API_KEY",
"GROQ_API_KEY", "MISTRAL_API_KEY", "XAI_API_KEY",
"CLAUDE_CODE_OAUTH_TOKEN"
"CLAUDE_CODE_OAUTH_TOKEN",
// SSH agent socket — set by 1Password / Secretive / a manual
// `ssh-add` in the user's shell rc. GUI-launched apps don't inherit
// these by default, so without harvesting them here, `ssh` spawned
// from Scarf can't reach the agent and authentication fails with
// "Permission denied" (exit 255) even though terminal ssh works.
"SSH_AUTH_SOCK", "SSH_AGENT_PID"
]

/// Env vars harvested from the user's login shell. Computed once and cached.
@@ -1238,7 +1312,7 @@ struct HermesFileService: Sendable {
/// 2. If that yields no PATH (timed out / prompt framework broke it),
/// fall back to `zsh -l` (login only) with a 3-second timeout.
/// 3. If that also fails, hardcoded sane-default PATH; no credentials.
private static let enrichedShellEnv: [String: String] = {
nonisolated private static let enrichedShellEnv: [String: String] = {
// Build a shell script that prints `KEY\0VALUE\0` for each key.
// Using printf with \0 as separator lets us unambiguously split the
// output even if a value contains newlines.
@@ -1278,7 +1352,7 @@ struct HermesFileService: Sendable {
/// `KEY\0VALUE\0`-delimited output. Returns nil on timeout/failure.
/// When `interactive` is true, injects env vars that suppress common
/// prompt frameworks so the shell doesn't hang waiting for terminal setup.
private static func runShellProbe(script: String, interactive: Bool, timeout: TimeInterval) -> [String: String]? {
nonisolated private static func runShellProbe(script: String, interactive: Bool, timeout: TimeInterval) -> [String: String]? {
let pipe = Pipe()
let errPipe = Pipe()
let process = Process()
@@ -1366,19 +1440,27 @@ struct HermesFileService: Sendable {
/// delegation tasks
/// Used by Chat to warn the user before `hermes acp` fails on send with
/// "No Anthropic credentials found".
nonisolated static func hasAnyAICredential() -> Bool {
let credentialKeys = shellEnvKeys.filter { $0 != "PATH" && $0 != "ANTHROPIC_BASE_URL" && $0 != "OPENAI_BASE_URL" }
let env = enrichedEnvironment()
for key in credentialKeys {
if let value = env[key], !value.isEmpty {
return true
///
/// **Local context:** also checks Scarf's process / login-shell env.
/// **Remote context:** skips that step — our process env has nothing to
/// do with the remote `hermes acp`'s runtime env. The remote `.env` /
/// `auth.json` / `config.yaml` are still checked through the transport.
nonisolated func hasAnyAICredential() -> Bool {
let credentialKeys = Self.shellEnvKeys.filter { $0 != "PATH" && $0 != "ANTHROPIC_BASE_URL" && $0 != "OPENAI_BASE_URL" }

if !context.isRemote {
let env = Self.enrichedEnvironment()
for key in credentialKeys {
if let value = env[key], !value.isEmpty {
return true
}
}
}
// Scan ~/.hermes/.env for KEY= lines. Uses a simple substring check —
// good enough for a preflight hint; hermes itself does the real parse.
let envPath = HermesPaths.home + "/.env"
if let data = try? String(contentsOfFile: envPath, encoding: .utf8) {
for line in data.split(separator: "\n") {
// Scan .env (via transport — local file or scp) for KEY= lines.
// Uses a simple substring check — good enough for a preflight hint;
// hermes itself does the real parse.
if let envText = readFile(context.paths.envFile) {
for line in envText.split(separator: "\n") {
let trimmed = line.trimmingCharacters(in: .whitespaces)
if trimmed.isEmpty || trimmed.hasPrefix("#") { continue }
for key in credentialKeys where trimmed.hasPrefix("\(key)=") || trimmed.hasPrefix("export \(key)=") {
@@ -1392,12 +1474,11 @@ struct HermesFileService: Sendable {
}
}
}
// Scan ~/.hermes/auth.json — the Credential Pools file written by the
// Configure → Credential Pools UI. Schema is
// Scan auth.json (Credential Pools file written by the Configure →
// Credential Pools UI). Schema:
// { "credential_pool": { "<provider>": [ { "access_token": "...", ... }, ... ] } }
// Defensive parse: any malformed input falls through to the next check.
let authPath = HermesPaths.home + "/auth.json"
if let data = try? Data(contentsOf: URL(fileURLWithPath: authPath)),
if let data = readFileData(context.paths.authJSON),
let root = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
let pool = root["credential_pool"] as? [String: Any] {
for (_, entries) in pool {
@@ -1409,11 +1490,10 @@ struct HermesFileService: Sendable {
}
}
}
// Scan ~/.hermes/config.yaml for `api_key:` lines with a non-empty
// value. Covers both `auxiliary.<task>.api_key` and `delegation.api_key`
// without needing to parse the YAML structure — any leaf `api_key: ...`
// with a value means Hermes has a credential to fall back on.
if let text = try? String(contentsOfFile: HermesPaths.configYAML, encoding: .utf8) {
// Scan config.yaml for `api_key:` lines with a non-empty value.
// Covers both `auxiliary.<task>.api_key` and `delegation.api_key`
// without needing to parse YAML structure.
if let text = readFile(context.paths.configYAML) {
for line in text.split(separator: "\n") {
let trimmed = line.trimmingCharacters(in: .whitespaces)
guard trimmed.hasPrefix("api_key:") else { continue }
@@ -1427,41 +1507,36 @@ struct HermesFileService: Sendable {
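The `.env` preflight scan in `hasAnyAICredential()` above reduces to a pure line-prefix check. A sketch as a standalone function (the name `envFileHasCredential` is hypothetical; the real code runs the loop inline and the key list comes from `shellEnvKeys`):

```swift
import Foundation

// A line counts as a credential if it starts with `KEY=` or
// `export KEY=` for one of the known keys and carries a non-empty
// value, skipping blanks and `#` comments. Substring matching only —
// a preflight hint, not a full dotenv parse.
func envFileHasCredential(_ text: String, keys: [String]) -> Bool {
    for line in text.split(separator: "\n") {
        let trimmed = line.trimmingCharacters(in: .whitespaces)
        if trimmed.isEmpty || trimmed.hasPrefix("#") { continue }
        for key in keys where trimmed.hasPrefix("\(key)=") || trimmed.hasPrefix("export \(key)=") {
            // Require a non-empty value after the `=`.
            if let eq = trimmed.firstIndex(of: "="),
               !trimmed[trimmed.index(after: eq)...].isEmpty {
                return true
            }
        }
    }
    return false
}
```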
@discardableResult
nonisolated func runHermesCLI(args: [String], timeout: TimeInterval = 60, stdinInput: String? = nil) -> (exitCode: Int32, output: String) {
guard let binary = hermesBinaryPath() else { return (-1, "") }
let stdoutPipe = Pipe()
let stderrPipe = Pipe()
let stdinPipe: Pipe? = stdinInput != nil ? Pipe() : nil
let process = Process()
process.executableURL = URL(fileURLWithPath: binary)
process.arguments = args
process.environment = Self.enrichedEnvironment()
process.standardOutput = stdoutPipe
process.standardError = stderrPipe
if let stdinPipe { process.standardInput = stdinPipe }
defer {
try? stdoutPipe.fileHandleForReading.close()
try? stdoutPipe.fileHandleForWriting.close()
try? stderrPipe.fileHandleForReading.close()
try? stderrPipe.fileHandleForWriting.close()
try? stdinPipe?.fileHandleForReading.close()
try? stdinPipe?.fileHandleForWriting.close()
// Resolve the executable path — for remote, prefer the cached
// `hermesBinaryHint` on the SSHConfig (populated by the Test
// Connection probe) and fall back to bare `hermes` which relies on
// the remote user's `$PATH`.
let binary: String
if context.isRemote {
binary = context.paths.hermesBinary
} else {
guard let local = hermesBinaryPath() else { return (-1, "") }
binary = local
}

let stdinData = stdinInput?.data(using: .utf8)
do {
try process.run()
if let stdinInput, let stdinPipe, let data = stdinInput.data(using: .utf8) {
stdinPipe.fileHandleForWriting.write(data)
try? stdinPipe.fileHandleForWriting.close()
}
let deadline = Date().addingTimeInterval(timeout)
while process.isRunning && Date() < deadline {
Thread.sleep(forTimeInterval: 0.05)
}
if process.isRunning { process.terminate() }
process.waitUntilExit()
let outData = stdoutPipe.fileHandleForReading.readDataToEndOfFile()
let errData = stderrPipe.fileHandleForReading.readDataToEndOfFile()
let combined = (String(data: outData, encoding: .utf8) ?? "") + (String(data: errData, encoding: .utf8) ?? "")
return (process.terminationStatus, combined)
let result = try transport.runProcess(
executable: binary,
args: args,
stdin: stdinData,
timeout: timeout
)
// Match the legacy signature: combined stdout+stderr in one
// String so callers that grep through output don't need to
// change. Stderr after stdout mirrors what the old Process impl
// produced since both pipes were drained in that order.
let combined = result.stdoutString + result.stderrString
return (result.exitCode, combined)
} catch let error as TransportError {
return (-1, error.diagnosticStderr.isEmpty
? (error.errorDescription ?? "transport error")
: error.diagnosticStderr)
} catch {
return (-1, error.localizedDescription)
}
@@ -1469,17 +1544,91 @@ struct HermesFileService: Sendable {
// MARK: - File I/O

private func readFile(_ path: String) -> String? {
try? String(contentsOfFile: path, encoding: .utf8)
/// Read a UTF-8 text file through the transport. Missing files and any
/// transport error surface as `nil` — callers that don't need the
/// specific error reason keep using this. New call sites that want to
/// show a user-actionable message should use `readFileResult`.
nonisolated private func readFile(_ path: String) -> String? {
switch readFileResult(path) {
case .success(let s):
return s
case .failure:
return nil
}
}

private func readFileData(_ path: String) -> Data? {
FileManager.default.contents(atPath: path)
nonisolated private func readFileData(_ path: String) -> Data? {
switch readFileDataResult(path) {
case .success(let d):
return d
case .failure:
return nil
}
}

private func writeFile(_ path: String, content: String) {
/// Error-surfacing read. Returns the decoded text on success, or the
/// underlying `TransportError` (or raw error for local failures) on
/// failure. Every failure is also logged via `os.Logger` — the warning
/// trail in Console.app is how we diagnose "connection green, data
/// empty" bug reports without needing to wire the error through every
/// existing call site.
nonisolated func readFileResult(_ path: String) -> Result<String, Error> {
switch readFileDataResult(path) {
case .success(let data):
guard let s = String(data: data, encoding: .utf8) else {
let err = TransportError.fileIO(path: path, underlying: "file is not valid UTF-8")
Self.logger.warning("readFile(\(path, privacy: .public)): not UTF-8")
return .failure(err)
}
return .success(s)
case .failure(let err):
return .failure(err)
}
}

nonisolated func readFileDataResult(_ path: String) -> Result<Data, Error> {
do {
try content.write(toFile: path, atomically: true, encoding: .utf8)
let data = try transport.readFile(path)
return .success(data)
} catch {
// Don't log "No such file" — that's a routine, expected case
// for optional files (skill.yaml, gateway_state.json before
// Hermes starts, ~/.hermes/memories/USER.md on fresh installs,
// etc.). The caller still gets the Result.failure so it can
// distinguish missing from present-but-unreadable.
// Log everything else — permission denied, connection drops,
// sqlite3 missing — since those are actionable diagnostics.
if !Self.isFileNotFound(error) {
Self.logger.warning("readFile(\(path, privacy: .public)) failed: \(error.localizedDescription, privacy: .public)")
}
return .failure(error)
}
}

/// `true` iff the error represents "file does not exist" as opposed to
/// a permission / transport / parse failure. Used to suppress routine
/// logging for optional files while still surfacing real problems.
nonisolated private static func isFileNotFound(_ error: Error) -> Bool {
if let transportErr = error as? TransportError,
case .fileIO(_, let underlying) = transportErr {
return underlying.lowercased().contains("no such file")
}
// Cocoa NSFileNoSuchFileError (returned by LocalTransport when
// reading a missing file via FileManager).
let ns = error as NSError
if ns.domain == NSCocoaErrorDomain && ns.code == 260 { return true }
if ns.domain == NSPOSIXErrorDomain && ns.code == 2 { return true } // ENOENT
return false
}

/// Write a UTF-8 text file atomically through the transport. Matches the
/// old pre-transport behavior (print + swallow on error) because the
/// callers don't have a UI path for surfacing I/O failures — that's
/// planned for Phase 4.
nonisolated private func writeFile(_ path: String, content: String) {
guard let data = content.data(using: .utf8) else { return }
do {
try transport.writeFile(path, data: data)
} catch {
print("[Scarf] Failed to write \(path): \(error.localizedDescription)")
}

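The "file not found vs. real failure" split in `isFileNotFound(_:)` above hinges on two error-domain comparisons: Cocoa code 260 (`NSFileReadNoSuchFileError`) and POSIX code 2 (ENOENT). A standalone sketch of just that check (the wrapper name is illustrative; the diff's version also inspects `TransportError.fileIO`):

```swift
import Foundation

// Missing files are routine for optional config; permission or
// transport failures are not. Only the former should be silent.
func looksLikeFileNotFound(_ error: Error) -> Bool {
    let ns = error as NSError
    // NSFileReadNoSuchFileError, e.g. from Data(contentsOf:) on a
    // missing path.
    if ns.domain == NSCocoaErrorDomain && ns.code == 260 { return true }
    if ns.domain == NSPOSIXErrorDomain && ns.code == 2 { return true } // ENOENT
    return false
}
```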
@@ -6,33 +6,66 @@ final class HermesFileWatcher {
private var coreSources: [DispatchSourceFileSystemObject] = []
private var projectSources: [DispatchSourceFileSystemObject] = []
private var timer: Timer?
/// Remote polling task. Non-nil only when `context.isRemote`. Cancelled
/// on `stopWatching()`.
private var remotePollTask: Task<Void, Never>?

let context: ServerContext
private let transport: any ServerTransport

nonisolated init(context: ServerContext = .local) {
self.context = context
self.transport = context.makeTransport()
}

/// Canonical list of paths we observe. Used for both FSEvents (local)
/// and mtime polling (remote).
private var watchedCorePaths: [String] {
let paths = context.paths
return [
paths.stateDB,
paths.stateDB + "-wal",
paths.configYAML,
paths.home + "/.env",
paths.memoryMD,
paths.userMD,
paths.cronJobsJSON,
paths.gatewayStateJSON,
paths.agentLog,
paths.errorsLog,
paths.gatewayLog,
paths.projectsRegistry,
paths.mcpTokensDir
]
}

func startWatching() {
let paths = [
HermesPaths.stateDB,
HermesPaths.stateDB + "-wal",
HermesPaths.configYAML,
HermesPaths.home + "/.env", // Platform setup forms write here.
HermesPaths.memoryMD,
HermesPaths.userMD,
HermesPaths.cronJobsJSON,
HermesPaths.gatewayStateJSON,
HermesPaths.agentLog,
HermesPaths.errorsLog,
HermesPaths.gatewayLog,
HermesPaths.projectsRegistry,
HermesPaths.mcpTokensDir
]
if context.isRemote {
// FSEvents doesn't reach across SSH. Drive lastChangeDate off
// the transport's AsyncStream, which polls stat mtime on a
// shared ControlMaster channel (~5ms per tick).
let stream = transport.watchPaths(watchedCorePaths)
remotePollTask = Task { [weak self] in
for await _ in stream {
await MainActor.run { [weak self] in
self?.lastChangeDate = Date()
}
}
}
return
}

for path in paths {
for path in watchedCorePaths {
if let source = makeSource(for: path) {
coreSources.append(source)
}
}

timer = Timer.scheduledTimer(withTimeInterval: 5.0, repeats: true) { [weak self] _ in
self?.lastChangeDate = Date()
}
// No heartbeat timer: every observing view runs its `.onChange`
// refresh whenever `lastChangeDate` ticks, so a 5s unconditional
// tick was triggering wasted reloads across many subscribers
// (Dashboard, Memory, Cron, Gateway, Platforms, Projects, Chat).
// FSEvents reliably fires on real changes; menu-bar Start/Stop
// touches `gateway_state.json` which the watcher catches.
}

func stopWatching() {
@@ -43,9 +76,15 @@ final class HermesFileWatcher {
projectSources.removeAll()
timer?.invalidate()
timer = nil
remotePollTask?.cancel()
remotePollTask = nil
}

func updateProjectWatches(_ dashboardPaths: [String]) {
// Remote contexts don't support per-project FSEvents watches today —
// the shared mtime poll covers the core set. Adding per-project
// polling is a Phase 4 polish item.
guard !context.isRemote else { return }
for source in projectSources {
source.cancel()
}

@@ -33,10 +33,46 @@ actor HermesLogService {
private var currentPath: String?
private var entryCounter = 0

/// Remote tailing state. When set, we're reading from `ssh host tail -F`
/// instead of a local file. Process stdout pipe drives `readNewLines()`;
/// process lifecycle is the actor's responsibility.
private var remoteTailProcess: Process?
private var remoteTailBuffer: String = ""

let context: ServerContext
private let transport: any ServerTransport

init(context: ServerContext = .local) {
self.context = context
self.transport = context.makeTransport()
}

func openLog(path: String) {
closeLog()
currentPath = path
fileHandle = FileHandle(forReadingAtPath: path)
if context.isRemote {
// Spawn `ssh host tail -F` and pipe stdout into our buffer. `-F`
// follows the file through rotations — important for remote
// log rotation setups (logrotate).
let proc = transport.makeProcess(
executable: "/usr/bin/tail",
args: ["-n", String(QueryDefaults.logLineLimit), "-F", path]
)
let outPipe = Pipe()
proc.standardOutput = outPipe
proc.standardError = Pipe()
do {
try proc.run()
remoteTailProcess = proc
fileHandle = outPipe.fileHandleForReading
} catch {
print("[Scarf] Failed to start remote tail: \(error.localizedDescription)")
remoteTailProcess = nil
fileHandle = nil
}
} else {
fileHandle = FileHandle(forReadingAtPath: path)
}
}

func closeLog() {
@@ -47,11 +83,29 @@ actor HermesLogService {
}
fileHandle = nil
currentPath = nil
if let proc = remoteTailProcess, proc.isRunning {
proc.terminate()
}
remoteTailProcess = nil
remoteTailBuffer = ""
}

func readLastLines(count: Int = QueryDefaults.logLineLimit) -> [LogEntry] {
guard let path = currentPath,
let data = FileManager.default.contents(atPath: path) else { return [] }
guard let path = currentPath else { return [] }
if context.isRemote {
// For the initial load we bypass the streaming tail and run a
// one-shot `tail -n <count>` for a clean bounded read.
let result = try? transport.runProcess(
executable: "/usr/bin/tail",
args: ["-n", String(count), path],
stdin: nil,
timeout: 30
)
let content = result?.stdoutString ?? ""
let lines = content.components(separatedBy: "\n").filter { !$0.isEmpty }
return lines.map { parseLine($0) }
}
guard let data = FileManager.default.contents(atPath: path) else { return [] }
let content = String(data: data, encoding: .utf8) ?? ""
let lines = content.components(separatedBy: "\n").filter { !$0.isEmpty }
let lastLines = Array(lines.suffix(count))
@@ -62,13 +116,29 @@ actor HermesLogService {
guard let handle = fileHandle else { return [] }
let data = handle.availableData
guard !data.isEmpty else { return [] }
let content = String(data: data, encoding: .utf8) ?? ""
let lines = content.components(separatedBy: "\n").filter { !$0.isEmpty }
let chunk = String(data: data, encoding: .utf8) ?? ""
if context.isRemote {
// Remote tail emits bytes as they arrive — not line-aligned.
// Buffer partials across reads so we don't split a line mid-way.
remoteTailBuffer += chunk
guard let lastNewline = remoteTailBuffer.lastIndex(of: "\n") else {
return []
}
let complete = String(remoteTailBuffer[..<lastNewline])
remoteTailBuffer = String(remoteTailBuffer[remoteTailBuffer.index(after: lastNewline)...])
let lines = complete.components(separatedBy: "\n").filter { !$0.isEmpty }
return lines.map { parseLine($0) }
}
let lines = chunk.components(separatedBy: "\n").filter { !$0.isEmpty }
return lines.map { parseLine($0) }
}

func seekToEnd() {
fileHandle?.seekToEndOfFile()
// Only meaningful for local FileHandles — remote tail starts at the
// end implicitly after `readLastLines` drained the initial load.
if !context.isRemote {
fileHandle?.seekToEndOfFile()
}
}

private func parseLine(_ line: String) -> LogEntry {

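The partial-line buffering that `readNewLines()` performs for the remote tail can be sketched as a small self-contained type (the name `LineBuffer` is hypothetical; the diff keeps the state inline on the actor):

```swift
import Foundation

// Chunks from a streaming tail arrive byte-aligned, not line-aligned,
// so everything after the last newline is held back until the next
// chunk completes it.
struct LineBuffer {
    private var buffer = ""

    /// Feed a raw chunk; returns only the complete, non-empty lines.
    mutating func feed(_ chunk: String) -> [String] {
        buffer += chunk
        guard let lastNewline = buffer.lastIndex(of: "\n") else { return [] }
        let complete = String(buffer[..<lastNewline])
        // Keep the trailing partial line for the next feed.
        buffer = String(buffer[buffer.index(after: lastNewline)...])
        return complete.components(separatedBy: "\n").filter { !$0.isEmpty }
    }
}
```

Without this holdback, a log line split across two pipe reads would be parsed as two garbage entries.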
@@ -53,9 +53,17 @@ struct HermesProviderInfo: Sendable, Identifiable, Hashable {
struct ModelCatalogService: Sendable {
private let logger = Logger(subsystem: "com.scarf", category: "ModelCatalogService")
let path: String
let transport: any ServerTransport

init(path: String = HermesPaths.home + "/models_dev_cache.json") {
nonisolated init(context: ServerContext = .local) {
self.path = context.paths.home + "/models_dev_cache.json"
self.transport = context.makeTransport()
}

/// Escape hatch for tests.
init(path: String) {
self.path = path
self.transport = LocalTransport()
}

/// All providers, sorted by display name.
@@ -159,7 +167,7 @@ struct ModelCatalogService: Sendable {
// MARK: - Decoding

private func loadCatalog() -> [String: ProviderEntry]? {
guard let data = try? Data(contentsOf: URL(fileURLWithPath: path)) else {
guard let data = try? transport.readFile(path) else {
return nil
}
do {

@@ -2,10 +2,18 @@ import Foundation

struct ProjectDashboardService: Sendable {

let context: ServerContext
let transport: any ServerTransport

nonisolated init(context: ServerContext = .local) {
self.context = context
self.transport = context.makeTransport()
}

// MARK: - Registry

func loadRegistry() -> ProjectRegistry {
guard let data = FileManager.default.contents(atPath: HermesPaths.projectsRegistry) else {
guard let data = try? transport.readFile(context.paths.projectsRegistry) else {
return ProjectRegistry(projects: [])
}
do {
@@ -17,10 +25,10 @@ struct ProjectDashboardService: Sendable {
}

func saveRegistry(_ registry: ProjectRegistry) {
let dir = HermesPaths.scarfDir
if !FileManager.default.fileExists(atPath: dir) {
let dir = context.paths.scarfDir
if !transport.fileExists(dir) {
do {
try FileManager.default.createDirectory(atPath: dir, withIntermediateDirectories: true)
try transport.createDirectory(dir)
} catch {
print("[Scarf] Failed to create scarf directory: \(error.localizedDescription)")
return
@@ -28,18 +36,20 @@ struct ProjectDashboardService: Sendable {
}
guard let data = try? JSONEncoder().encode(registry) else { return }
// Pretty-print for readability (agents may read this file)
let writeData: Data
if let pretty = try? JSONSerialization.jsonObject(with: data),
let formatted = try? JSONSerialization.data(withJSONObject: pretty, options: [.prettyPrinted, .sortedKeys]) {
FileManager.default.createFile(atPath: HermesPaths.projectsRegistry, contents: formatted)
writeData = formatted
} else {
FileManager.default.createFile(atPath: HermesPaths.projectsRegistry, contents: data)
writeData = data
}
try? transport.writeFile(context.paths.projectsRegistry, data: writeData)
}

// MARK: - Dashboard

func loadDashboard(for project: ProjectEntry) -> ProjectDashboard? {
guard let data = FileManager.default.contents(atPath: project.dashboardPath) else {
guard let data = try? transport.readFile(project.dashboardPath) else {
return nil
}
do {
@@ -51,13 +61,10 @@ struct ProjectDashboardService: Sendable {
}

func dashboardExists(for project: ProjectEntry) -> Bool {
FileManager.default.fileExists(atPath: project.dashboardPath)
transport.fileExists(project.dashboardPath)
}

func dashboardModificationDate(for project: ProjectEntry) -> Date? {
guard let attrs = try? FileManager.default.attributesOfItem(atPath: project.dashboardPath) else {
return nil
}
return attrs[.modificationDate] as? Date
transport.stat(project.dashboardPath)?.mtime
}
}

@@ -0,0 +1,191 @@
import Foundation
import os

/// `ServerTransport` over the local filesystem. Thin wrapper around
/// `FileManager`, `Process`, and `DispatchSourceFileSystemObject` — the APIs
/// services were already using before Phase 2.
struct LocalTransport: ServerTransport {
    nonisolated private static let logger = Logger(subsystem: "com.scarf", category: "LocalTransport")

    let contextID: ServerID
    let isRemote: Bool = false

    nonisolated init(contextID: ServerID = ServerContext.local.id) {
        self.contextID = contextID
    }

    // MARK: - Files

    func readFile(_ path: String) throws -> Data {
        do {
            return try Data(contentsOf: URL(fileURLWithPath: path))
        } catch {
            throw TransportError.fileIO(path: path, underlying: error.localizedDescription)
        }
    }

    func writeFile(_ path: String, data: Data) throws {
        let tmp = path + ".scarf.tmp"
        do {
            try data.write(to: URL(fileURLWithPath: tmp))
            // Preserve `0600` for dotfiles holding secrets (.env, .auth, ...).
            // The existing files already use 0600 via HermesEnvService; we
            // mirror that here so a brand-new file created via this write
            // also starts with safe permissions.
            if Self.shouldEnforcePrivateMode(for: path) {
                try FileManager.default.setAttributes([.posixPermissions: 0o600], ofItemAtPath: tmp)
            }
            // Atomic swap onto the final path.
            let destURL = URL(fileURLWithPath: path)
            let tmpURL = URL(fileURLWithPath: tmp)
            if FileManager.default.fileExists(atPath: path) {
                _ = try FileManager.default.replaceItemAt(destURL, withItemAt: tmpURL)
            } else {
                // Ensure parent exists.
                let parent = (path as NSString).deletingLastPathComponent
                if !parent.isEmpty, !FileManager.default.fileExists(atPath: parent) {
                    try FileManager.default.createDirectory(atPath: parent, withIntermediateDirectories: true)
                }
                try FileManager.default.moveItem(at: tmpURL, to: destURL)
            }
        } catch {
            try? FileManager.default.removeItem(atPath: tmp)
            throw TransportError.fileIO(path: path, underlying: error.localizedDescription)
        }
    }
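The temp-file-then-atomic-swap sequence in `writeFile` is the classic POSIX pattern. A minimal shell sketch of the same idea (filenames are placeholders):

```shell
# Write to a sibling temp file, then rename onto the final path.
# rename(2)/mv within one filesystem is atomic: readers see either
# the old content or the new content, never a partial write.
dest="config.json"
tmp="$dest.scarf.tmp"
printf '{"version": 2}' > "$tmp"
chmod 600 "$tmp"   # apply safe permissions before the file becomes visible
mv "$tmp" "$dest"
```

Setting the mode on the temp file first means no reader ever observes the destination with loose permissions.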
    func fileExists(_ path: String) -> Bool {
        FileManager.default.fileExists(atPath: path)
    }

    func stat(_ path: String) -> FileStat? {
        guard let attrs = try? FileManager.default.attributesOfItem(atPath: path) else {
            return nil
        }
        let size = (attrs[.size] as? Int64) ?? Int64((attrs[.size] as? Int) ?? 0)
        let mtime = (attrs[.modificationDate] as? Date) ?? Date(timeIntervalSince1970: 0)
        let isDir = (attrs[.type] as? FileAttributeType) == .typeDirectory
        return FileStat(size: size, mtime: mtime, isDirectory: isDir)
    }

    func listDirectory(_ path: String) throws -> [String] {
        do {
            return try FileManager.default.contentsOfDirectory(atPath: path)
        } catch {
            throw TransportError.fileIO(path: path, underlying: error.localizedDescription)
        }
    }

    func createDirectory(_ path: String) throws {
        do {
            try FileManager.default.createDirectory(atPath: path, withIntermediateDirectories: true)
        } catch {
            throw TransportError.fileIO(path: path, underlying: error.localizedDescription)
        }
    }

    func removeFile(_ path: String) throws {
        guard FileManager.default.fileExists(atPath: path) else { return }
        do {
            try FileManager.default.removeItem(atPath: path)
        } catch {
            throw TransportError.fileIO(path: path, underlying: error.localizedDescription)
        }
    }

    // MARK: - Processes

    func runProcess(executable: String, args: [String], stdin: Data?, timeout: TimeInterval?) throws -> ProcessResult {
        let proc = Process()
        proc.executableURL = URL(fileURLWithPath: executable)
        proc.arguments = args
        let stdoutPipe = Pipe()
        let stderrPipe = Pipe()
        let stdinPipe = Pipe()
        proc.standardOutput = stdoutPipe
        proc.standardError = stderrPipe
        if stdin != nil { proc.standardInput = stdinPipe }
        do {
            try proc.run()
        } catch {
            throw TransportError.other(message: "Failed to launch \(executable): \(error.localizedDescription)")
        }
        if let stdin {
            try? stdinPipe.fileHandleForWriting.write(contentsOf: stdin)
            try? stdinPipe.fileHandleForWriting.close()
        }
        // Timeout handling: poll every 100ms up to timeout, kill on overrun.
        if let timeout {
            let deadline = Date().addingTimeInterval(timeout)
            while proc.isRunning && Date() < deadline {
                Thread.sleep(forTimeInterval: 0.1)
            }
            if proc.isRunning {
                proc.terminate()
                let partial = (try? stdoutPipe.fileHandleForReading.readToEnd()) ?? Data()
                try? stdoutPipe.fileHandleForReading.close()
                try? stderrPipe.fileHandleForReading.close()
                throw TransportError.timeout(seconds: timeout, partialStdout: partial)
            }
        } else {
            proc.waitUntilExit()
        }
        let out = (try? stdoutPipe.fileHandleForReading.readToEnd()) ?? Data()
        let err = (try? stderrPipe.fileHandleForReading.readToEnd()) ?? Data()
        try? stdoutPipe.fileHandleForReading.close()
        try? stderrPipe.fileHandleForReading.close()
        try? stdinPipe.fileHandleForWriting.close()
        return ProcessResult(exitCode: proc.terminationStatus, stdout: out, stderr: err)
    }
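The poll-until-deadline loop in `runProcess` has a direct shell analogue. A sketch, assuming a 100ms `sleep` granularity is acceptable (fractional `sleep` is a GNU/BSD extension, not strict POSIX):

```shell
# Launch a long-running child, poll its liveness every 100ms, and
# kill it once the deadline passes. `kill -0` stands in for the
# Swift loop's proc.isRunning check.
sleep 30 &
pid=$!
deadline=$(( $(date +%s) + 1 ))
timed_out=0
while kill -0 "$pid" 2>/dev/null; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
        kill "$pid"
        timed_out=1
        break
    fi
    sleep 0.1
done
```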
    func makeProcess(executable: String, args: [String]) -> Process {
        let proc = Process()
        proc.executableURL = URL(fileURLWithPath: executable)
        proc.arguments = args
        return proc
    }

    // MARK: - SQLite

    func snapshotSQLite(remotePath: String) throws -> URL {
        // Local case: no copy needed. Services open the path directly.
        URL(fileURLWithPath: remotePath)
    }

    // MARK: - Watching

    func watchPaths(_ paths: [String]) -> AsyncStream<WatchEvent> {
        AsyncStream { continuation in
            // Build the source list immutably, then hand a value-typed copy
            // to onTermination. Swift 6's concurrent-capture rule rejects a
            // `var sources` shared between the outer builder and the inner
            // termination closure.
            let sources: [DispatchSourceFileSystemObject] = paths.compactMap { path in
                let fd = Darwin.open(path, O_EVTONLY)
                guard fd >= 0 else { return nil }
                let src = DispatchSource.makeFileSystemObjectSource(
                    fileDescriptor: fd,
                    eventMask: [.write, .extend, .rename],
                    queue: .global()
                )
                src.setEventHandler { continuation.yield(.anyChanged) }
                src.setCancelHandler { Darwin.close(fd) }
                src.resume()
                return src
            }
            continuation.onTermination = { _ in
                for s in sources { s.cancel() }
            }
        }
    }

    // MARK: - Helpers

    /// Heuristic: files that conventionally hold secrets should be created
    /// with restrictive permissions so a future `scp` or editor doesn't end
    /// up exposing them.
    private static func shouldEnforcePrivateMode(for path: String) -> Bool {
        let name = (path as NSString).lastPathComponent
        return name == ".env" || name == "auth.json" || name.hasSuffix("-tokens.json")
    }
}
@@ -0,0 +1,591 @@
import Foundation
import os

/// `ServerTransport` that reaches a remote Hermes installation through the
/// system `ssh`, `scp`, and `sftp` binaries.
///
/// Why system ssh (not a native library): the user's `~/.ssh/config`,
/// ssh-agent, 1Password/Secretive agents, ProxyJump, and ControlMaster
/// multiplexing all work for free. OpenSSH also owns crypto — a smaller
/// audit surface than dragging libssh2 along.
///
/// **ControlMaster matters.** Without it, every remote primitive (stat, cat,
/// cp) authenticates from scratch — 500ms-2s per call. With ControlMaster
/// `auto` + `ControlPersist 600`, the first call authenticates, subsequent
/// calls reuse the same TCP/crypto session at ~5ms each. We point the
/// control socket at `/tmp/scarf-ssh-<uid>/%C` (see `controlDirPath()`) so
/// multiple Scarf windows pointed at the same host share one session cleanly.
struct SSHTransport: ServerTransport {
    nonisolated private static let logger = Logger(subsystem: "com.scarf", category: "SSHTransport")

    let contextID: ServerID
    let isRemote: Bool = true

    let config: SSHConfig
    let displayName: String

    nonisolated init(contextID: ServerID, config: SSHConfig, displayName: String) {
        self.contextID = contextID
        self.config = config
        self.displayName = displayName
    }

    // MARK: - ssh/scp binary discovery

    nonisolated private var sshBinary: String { "/usr/bin/ssh" }
    nonisolated private var scpBinary: String { "/usr/bin/scp" }

    /// The fully-qualified `user@host` spec (or just `host` if no user set).
    nonisolated private var hostSpec: String {
        if let user = config.user, !user.isEmpty { return "\(user)@\(config.host)" }
        return config.host
    }

    /// Absolute path to the ControlMaster socket directory. One socket per
    /// destination via OpenSSH's `%C` token; lives under `/tmp` (see
    /// `controlDirPath()` for why).
    nonisolated private var controlDir: String { Self.controlDirPath() }

    /// Per-server snapshot cache directory (for SQLite `.backup` drops).
    nonisolated private var snapshotDir: String { Self.snapshotDirPath(for: contextID) }

    /// Shared control-master socket directory (one dir, sockets within it are
    /// per-host via OpenSSH's `%C` token). Exposed as a static so
    /// cleanup paths (`ServerRegistry.removeServer`, app-launch sweep) can
    /// compute it without instantiating a transport.
    ///
    /// Uses a short path under /tmp to stay within the 104-byte macOS
    /// Unix domain socket limit. The Caches path
    /// (~/Library/Caches/scarf/ssh/%C) can exceed this limit when the
    /// username is long, causing ssh to exit 255.
    nonisolated static func controlDirPath() -> String {
        return "/tmp/scarf-ssh-\(getuid())"
    }

    /// Snapshot cache directory for a given server. Stable per-ID so repeated
    /// connections to the same server share the cache, and so cleanup can
    /// find it from the ID alone.
    nonisolated static func snapshotDirPath(for contextID: ServerID) -> String {
        let base = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask).first?.path
            ?? NSHomeDirectory() + "/Library/Caches"
        return base + "/scarf/snapshots/\(contextID.uuidString)"
    }

    /// Root of the snapshot cache (all servers). Used by the app-launch sweep
    /// that prunes dirs whose UUID no longer appears in the registry.
    nonisolated static func snapshotRootPath() -> String {
        let base = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask).first?.path
            ?? NSHomeDirectory() + "/Library/Caches"
        return base + "/scarf/snapshots"
    }

    /// Remove the snapshot directory for a server (no-op if absent). Called
    /// on `removeServer` and on app-launch for orphaned dirs.
    static func pruneSnapshotCache(for contextID: ServerID) {
        let dir = snapshotDirPath(for: contextID)
        try? FileManager.default.removeItem(atPath: dir)
    }

    /// Walk the snapshot root and delete any directory whose UUID isn't in
    /// `keep`. Called once at app launch so snapshots from servers the user
    /// removed while the app was closed don't linger.
    static func sweepOrphanSnapshots(keeping keep: Set<ServerID>) {
        let root = snapshotRootPath()
        guard let entries = try? FileManager.default.contentsOfDirectory(atPath: root) else { return }
        for name in entries {
            if let id = ServerID(uuidString: name), keep.contains(id) { continue }
            try? FileManager.default.removeItem(atPath: root + "/" + name)
        }
    }

    /// Remove ControlMaster socket files older than `staleAfter` seconds.
    ///
    /// Socket basenames are %C hashes (not ServerIDs), so we can't keep "still
    /// registered" sockets the way `sweepOrphanSnapshots` does. But
    /// `ControlPersist` is 600s — anything older than 30 minutes is guaranteed
    /// to be a dead orphan from a crashed master, an unclean app exit, or a
    /// server removed while another Scarf instance was holding the dir.
    /// Wiping these on launch keeps `/tmp/scarf-ssh-<uid>/` from accumulating
    /// indefinitely until reboot, while leaving any concurrent Scarf
    /// instance's live sockets (always <600s old) untouched.
    static func sweepStaleControlSockets(staleAfter: TimeInterval = 1800) {
        let root = controlDirPath()
        guard let entries = try? FileManager.default.contentsOfDirectory(atPath: root) else { return }
        let cutoff = Date().addingTimeInterval(-staleAfter)
        for name in entries {
            let path = root + "/" + name
            guard let attrs = try? FileManager.default.attributesOfItem(atPath: path),
                  let mtime = attrs[.modificationDate] as? Date
            else { continue }
            if mtime < cutoff {
                try? FileManager.default.removeItem(atPath: path)
            }
        }
    }

    /// Ask OpenSSH to shut down this host's ControlMaster socket, so the TCP
    /// session isn't held open after the user removes this server. If no
    /// master is currently running, `ssh -O exit` exits non-zero — we ignore
    /// the exit code because the desired end state (no master) is reached
    /// either way.
    func closeControlMaster() {
        ensureControlDir()
        let args = sshArgs(extra: ["-O", "exit", hostSpec])
        _ = try? runLocal(executable: sshBinary, args: args, stdin: nil, timeout: 10)
    }

    /// Common ssh options used by every invocation. Keep every `-o` flag
    /// here so we never drift between calls.
    ///
    /// - `ControlMaster=auto` + `ControlPersist=600` gives us free connection
    ///   pooling for the bursty stat/cat/cp traffic the services produce.
    /// - `StrictHostKeyChecking=accept-new` writes new hosts to
    ///   `known_hosts` silently the first time but blocks on key mismatch —
    ///   the UX surfaced by `TransportError.hostKeyMismatch`.
    /// - `ServerAliveInterval=30` makes dropped connections surface as a
    ///   process exit rather than a hang.
    /// - `LogLevel=QUIET` suppresses the login banner so ACP's line-delimited
    ///   JSON stays binary-clean.
    nonisolated private func sshArgs(extra: [String] = []) -> [String] {
        var args: [String] = [
            "-o", "ControlMaster=auto",
            "-o", "ControlPath=\(controlDir)/%C",
            "-o", "ControlPersist=600",
            "-o", "ServerAliveInterval=30",
            "-o", "ServerAliveCountMax=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=accept-new",
            "-o", "LogLevel=QUIET",
            "-o", "BatchMode=yes" // Never prompt for passphrases; ssh-agent only.
        ]
        if let port = config.port { args += ["-p", String(port)] }
        if let id = config.identityFile, !id.isEmpty {
            args += ["-i", id]
        }
        args += extra
        return args
    }
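For reference, the `-o` flags above correspond to this `~/.ssh/config` block; a hypothetical host entry (the host name and uid in the socket path are placeholders, the option names are standard OpenSSH):

```
Host hermes-box
    ControlMaster auto
    ControlPath /tmp/scarf-ssh-501/%C
    ControlPersist 600
    ServerAliveInterval 30
    ServerAliveCountMax 3
    ConnectTimeout 10
    StrictHostKeyChecking accept-new
    LogLevel QUIET
    BatchMode yes
```

Passing the options per-invocation rather than relying on user config keeps behavior deterministic: OpenSSH uses the first value obtained for each option, and command-line `-o` options take precedence over `~/.ssh/config`.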
    /// Ensure the ControlMaster socket directory exists, is a real directory
    /// (not a symlink), is owned by us, and has mode 0700. Called before every
    /// ssh invocation.
    ///
    /// Defensive against `/tmp` pre-creation: any local user can create
    /// `/tmp/scarf-ssh-<uid>` before Scarf launches. Plain `mkdir -p` plus
    /// `setAttributes` would silently accept a hostile dir (since the chmod
    /// fails when we don't own it, and the Foundation API swallows that). So
    /// we use POSIX `mkdir` (atomic, sets perms at create time, doesn't
    /// follow symlinks) and `lstat` to verify ownership when the entry
    /// already exists.
    nonisolated private func ensureControlDir() {
        let path = controlDir

        let mkResult = path.withCString { mkdir($0, 0o700) }
        if mkResult == 0 { return }

        let mkErr = errno
        if mkErr != EEXIST {
            Self.logger.error("Failed to create ControlDir \(path, privacy: .public): errno=\(mkErr)")
            return
        }

        var st = Darwin.stat()
        let lstatResult = path.withCString { lstat($0, &st) }
        guard lstatResult == 0 else {
            Self.logger.error("Could not lstat existing ControlDir \(path, privacy: .public): errno=\(errno)")
            return
        }
        guard (st.st_mode & S_IFMT) == S_IFDIR else {
            Self.logger.error("ControlDir \(path, privacy: .public) exists but is not a directory (possibly a symlink) — refusing to use")
            return
        }
        guard st.st_uid == getuid() else {
            Self.logger.error("ControlDir \(path, privacy: .public) owned by uid \(st.st_uid), expected \(getuid()) — refusing to use")
            return
        }
        if (st.st_mode & 0o777) != 0o700 {
            Self.logger.warning("ControlDir \(path, privacy: .public) had mode \(String(st.st_mode & 0o777, radix: 8), privacy: .public), repairing to 700")
            _ = path.withCString { chmod($0, 0o700) }
        }
    }
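The create-with-mode-at-birth behavior `ensureControlDir` relies on can be seen from the shell; a sketch (the directory name is a placeholder, and GNU `stat -c` is assumed for the check):

```shell
dir="scarf-ssh-demo"
# mkdir with an explicit mode sets permissions at creation time,
# unlike mkdir followed by a separate chmod, which leaves a window
# where the directory is visible with default permissions.
mkdir -m 700 "$dir"
perms=$(stat -c %a "$dir")   # GNU stat; the BSD form is: stat -f %Lp
```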
    /// Shell-quote a single argument for remote execution. The remote shell
    /// receives our argv joined with spaces, so anything containing
    /// whitespace/metacharacters must be quoted to survive that flattening.
    nonisolated private static func shellQuote(_ s: String) -> String {
        if s.isEmpty { return "''" }
        // Safe subset: alphanumerics + a few shell-inert characters.
        let safe = CharacterSet(charactersIn: "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789@%+=:,./-_")
        if s.unicodeScalars.allSatisfy({ safe.contains($0) }) { return s }
        // Wrap in single quotes; close/reopen around any embedded single quote.
        return "'" + s.replacingOccurrences(of: "'", with: "'\\''") + "'"
    }
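The close-reopen trick the comment describes (`'\''` for an embedded single quote) can be exercised directly in shell; a sketch:

```shell
# A single-quoted string cannot contain a literal single quote, so
# the quoter emits: close quote, backslash-escaped quote, reopen.
#   it's  ->  'it'\''s'
quoted="'it'\''s'"
# Re-parsing the quoted form through the shell recovers the original.
out=$(eval "printf %s $quoted")
```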
    /// Format a path for inclusion in a remote `sh -c` command. **Critical**
    /// for any path containing `~/`: bash/zsh do NOT expand `~` inside
    /// quotes (single OR double), so a single-quoted `'~/.hermes/foo'` is
    /// passed to commands as the literal string `~/.hermes/foo` and
    /// lookups fail. We rewrite the leading `~/` to
    /// `$HOME/` (which DOES expand inside double quotes) and emit the path
    /// double-quoted so embedded spaces / metacharacters are still safe.
    ///
    /// Why not single-quote: that would make `$HOME` literal too. We
    /// specifically need partial-expansion semantics, which is what double
    /// quotes give us.
    nonisolated private static func remotePathArg(_ path: String) -> String {
        var p = path
        if p.hasPrefix("~/") {
            p = "$HOME/" + p.dropFirst(2)
        } else if p == "~" {
            p = "$HOME"
        }
        let escaped = p
            .replacingOccurrences(of: "\\", with: "\\\\")
            .replacingOccurrences(of: "\"", with: "\\\"")
        return "\"\(escaped)\""
    }
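The expansion rules the comment relies on are easy to verify; a shell sketch:

```shell
# Tilde only expands when unquoted; $HOME expands inside double
# quotes but not single quotes. That asymmetry is why the transport
# rewrites a leading `~/` to `$HOME/` and double-quotes the result.
literal=$(sh -c "echo '~/x'")       # single-quoted tilde stays literal
expanded=$(sh -c 'echo "$HOME/x"')  # $HOME expands in double quotes
```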
    /// Run a remote shell command. Wraps in `sh -c '<command>'` and uses
    /// the standard ssh-after-host placement (no `--` separator — that
    /// would be sent to the remote shell as a literal first token, which
    /// most shells reject as "command not found"). The `command` is
    /// single-quoted via `shellQuote` so ssh's argv-join-by-space doesn't
    /// split it across multiple shell tokens on the remote side.
    @discardableResult
    nonisolated private func runRemoteShell(_ command: String, timeout: TimeInterval? = 60) throws -> ProcessResult {
        var args = sshArgs()
        args.append(hostSpec)
        args.append("sh")
        args.append("-c")
        args.append(Self.shellQuote(command))
        return try runLocal(executable: sshBinary, args: args, stdin: nil, timeout: timeout)
    }

    // MARK: - Files

    func readFile(_ path: String) throws -> Data {
        // `cat` is the simplest portable "give me file bytes" command; we
        // don't need scp's progress machinery for typical config/memory
        // files (<1 MB each).
        let result = try runRemoteShell("cat \(Self.remotePathArg(path))")
        if result.exitCode != 0 {
            let errText = result.stderrString
            // Missing file looks like exit 1 + "No such file" — surface as a
            // typed fileIO error so callers that treat missing == "empty"
            // behave the same as they do locally.
            if errText.contains("No such file") {
                throw TransportError.fileIO(path: path, underlying: "No such file or directory")
            }
            throw TransportError.classifySSHFailure(host: config.host, exitCode: result.exitCode, stderr: errText)
        }
        return result.stdout
    }

    func writeFile(_ path: String, data: Data) throws {
        // Atomic pattern:
        //   1. scp to `<path>.scarf.tmp` on the remote
        //   2. ssh `mv <tmp> <path>` — atomic on POSIX within the same FS
        // Hermes never sees a partial write.
        let tmp = path + ".scarf.tmp"

        // scp from a local temp file (scp reads from disk, not stdin).
        let localTmpURL = FileManager.default.temporaryDirectory.appendingPathComponent(
            "scarf-scp-\(UUID().uuidString).tmp"
        )
        do {
            try data.write(to: localTmpURL)
        } catch {
            throw TransportError.fileIO(path: path, underlying: "local temp write: \(error.localizedDescription)")
        }
        defer { try? FileManager.default.removeItem(at: localTmpURL) }

        ensureControlDir()
        var scpArgs: [String] = [
            "-o", "ControlMaster=auto",
            "-o", "ControlPath=\(controlDir)/%C",
            "-o", "ControlPersist=600",
            "-o", "StrictHostKeyChecking=accept-new",
            "-o", "LogLevel=QUIET",
            "-o", "BatchMode=yes"
        ]
        if let port = config.port { scpArgs += ["-P", String(port)] }
        if let id = config.identityFile, !id.isEmpty { scpArgs += ["-i", id] }
        scpArgs.append(localTmpURL.path)
        scpArgs.append("\(hostSpec):\(tmp)")

        let scpResult = try runLocal(executable: scpBinary, args: scpArgs, stdin: nil, timeout: 60)
        if scpResult.exitCode != 0 {
            throw TransportError.classifySSHFailure(host: config.host, exitCode: scpResult.exitCode, stderr: scpResult.stderrString)
        }

        // Now atomic mv on the remote. Note: scp/sftp DOES expand `~` (it
        // goes through the SSH file transfer protocol, not a remote shell),
        // so the upload landed at the resolved $HOME path. The mv is a
        // shell command and needs the $HOME-rewritten path to find it.
        let mvResult = try runRemoteShell("mv \(Self.remotePathArg(tmp)) \(Self.remotePathArg(path))")
        if mvResult.exitCode != 0 {
            // Best-effort cleanup of the orphan tmp.
            _ = try? runRemoteShell("rm -f \(Self.remotePathArg(tmp))")
            throw TransportError.classifySSHFailure(host: config.host, exitCode: mvResult.exitCode, stderr: mvResult.stderrString)
        }
    }

    func fileExists(_ path: String) -> Bool {
        guard let result = try? runRemoteShell("test -e \(Self.remotePathArg(path))") else {
            return false
        }
        return result.exitCode == 0
    }

    func stat(_ path: String) -> FileStat? {
        // macOS and Linux `stat` differ in flags. `stat -f` is macOS's BSD
        // form; `stat -c` is GNU/Linux. We try the GNU form first (typical
        // remote target) and fall back to BSD. The format strings use
        // double quotes — safe inside our outer single-quoted sh -c.
        let linux = try? runRemoteShell(#"stat -c "%s %Y %F" \#(Self.remotePathArg(path))"#)
        if let result = linux, result.exitCode == 0 {
            return Self.parseStatOutput(result.stdoutString)
        }
        let bsd = try? runRemoteShell(#"stat -f "%z %m %HT" \#(Self.remotePathArg(path))"#)
        if let result = bsd, result.exitCode == 0 {
            return Self.parseStatOutput(result.stdoutString)
        }
        return nil
    }

    private static func parseStatOutput(_ s: String) -> FileStat? {
        // Expected: "<bytes> <unix-epoch-secs> <type>" where <type> is either
        // a GNU word ("regular file", "directory") or a BSD word ("Regular
        // File", "Directory"). Only the <type> field matters for
        // isDirectory.
        let parts = s.trimmingCharacters(in: .whitespacesAndNewlines).split(separator: " ", maxSplits: 2)
        guard parts.count >= 2 else { return nil }
        let size = Int64(parts[0]) ?? 0
        let mtimeSecs = TimeInterval(parts[1]) ?? 0
        let typeStr = parts.count == 3 ? parts[2].lowercased() : ""
        let isDir = typeStr.contains("directory")
        return FileStat(size: size, mtime: Date(timeIntervalSince1970: mtimeSecs), isDirectory: isDir)
    }
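The GNU-form output that `parseStatOutput` consumes looks like this; a sketch assuming GNU coreutils on the remote (the filename is a placeholder):

```shell
printf 'hello' > sample.txt
# Format: "<bytes> <unix-epoch-secs> <type>", e.g. "5 1718000000 regular file"
line=$(stat -c "%s %Y %F" sample.txt)
size=${line%% *}   # first space-separated field: byte size
```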
    func listDirectory(_ path: String) throws -> [String] {
        // `ls -A` lists all entries (incl. dotfiles) except `.`/`..`, one per
        // line. Sort order matches local FileManager.contentsOfDirectory.
        let result = try runRemoteShell("ls -A \(Self.remotePathArg(path))")
        if result.exitCode != 0 {
            if result.stderrString.contains("No such file") {
                throw TransportError.fileIO(path: path, underlying: "No such file or directory")
            }
            throw TransportError.classifySSHFailure(host: config.host, exitCode: result.exitCode, stderr: result.stderrString)
        }
        return result.stdoutString
            .split(separator: "\n", omittingEmptySubsequences: true)
            .map(String.init)
    }

    func createDirectory(_ path: String) throws {
        let result = try runRemoteShell("mkdir -p \(Self.remotePathArg(path))")
        if result.exitCode != 0 {
            throw TransportError.classifySSHFailure(host: config.host, exitCode: result.exitCode, stderr: result.stderrString)
        }
    }

    func removeFile(_ path: String) throws {
        let result = try runRemoteShell("rm -f \(Self.remotePathArg(path))")
        if result.exitCode != 0 {
            throw TransportError.classifySSHFailure(host: config.host, exitCode: result.exitCode, stderr: result.stderrString)
        }
    }

    // MARK: - Processes

    func runProcess(executable: String, args: [String], stdin: Data?, timeout: TimeInterval?) throws -> ProcessResult {
        // Wrap in `sh -c '<exe> <arg> <arg>'` with `~/`-rewritten paths so
        // home-relative args expand on the remote. The executable might be
        // `~/.local/bin/hermes` or just `hermes`; either survives.
        let cmd = ([executable] + args).map { Self.remotePathArg($0) }.joined(separator: " ")
        var sshArgv = sshArgs()
        sshArgv.append(hostSpec)
        sshArgv.append("sh")
        sshArgv.append("-c")
        sshArgv.append(Self.shellQuote(cmd))
        return try runLocal(executable: sshBinary, args: sshArgv, stdin: stdin, timeout: timeout)
    }

    func makeProcess(executable: String, args: [String]) -> Process {
        ensureControlDir()
        // `-T` disables pty allocation — critical for binary-clean stdin/stdout
        // (ACP JSON-RPC, log tail bytes). Same sh -c wrapping as runProcess
        // so home-relative paths in `executable`/`args` actually expand.
        let cmd = ([executable] + args).map { Self.remotePathArg($0) }.joined(separator: " ")
        var sshArgv = sshArgs()
        sshArgv.insert("-T", at: 0)
        sshArgv.append(hostSpec)
        sshArgv.append("sh")
        sshArgv.append("-c")
        sshArgv.append(Self.shellQuote(cmd))
        let proc = Process()
        proc.executableURL = URL(fileURLWithPath: sshBinary)
        proc.arguments = sshArgv
        proc.environment = Self.sshSubprocessEnvironment()
        return proc
    }

    /// Environment for an ssh/scp subprocess: process env merged with
    /// SSH_AUTH_SOCK / SSH_AGENT_PID harvested from the user's login shell.
    /// Without this, GUI-launched Scarf can't reach 1Password / Secretive /
    /// `ssh-add`'d keys that the user's terminal sees fine.
    nonisolated private static func sshSubprocessEnvironment() -> [String: String] {
        var env = ProcessInfo.processInfo.environment
        let shellEnv = HermesFileService.enrichedEnvironment()
        for key in ["SSH_AUTH_SOCK", "SSH_AGENT_PID"] {
            if env[key] == nil, let value = shellEnv[key], !value.isEmpty {
                env[key] = value
            }
        }
        return env
    }

    // MARK: - SQLite snapshot

    func snapshotSQLite(remotePath: String) throws -> URL {
        try? FileManager.default.createDirectory(atPath: snapshotDir, withIntermediateDirectories: true)
        let localPath = snapshotDir + "/state.db"
        // `.backup` is WAL-safe: sqlite takes a consistent snapshot without
        // blocking writers. A plain `cp` of a WAL-mode DB could corrupt.
        let remoteTmp = "/tmp/scarf-snapshot-\(UUID().uuidString).db"
        // sqlite3's `.backup` is a dot-command, not a CLI arg. The whole
        // dot-command must be one shell argument (double-quoted) so sqlite3
        // receives it as a single command; the backup path inside it is
        // single-quoted so sqlite3 parses it correctly. The DB path is a
        // separate shell argument and goes through `remotePathArg`
        // (double-quoted, $HOME-aware) so `~/.hermes/state.db` actually
        // resolves on the remote.
        //
        // The second sqlite3 invocation flips the snapshot out of WAL mode
        // so the scp'd file is self-contained: `.backup` preserves the
        // source's journal_mode in the destination header, so without this
        // step the client would need the `-wal`/`-shm` sidecars too, and
        // every read would fail with "unable to open database file".
        //
        // Final shell command on the remote:
        //   sqlite3 "$HOME/.hermes/state.db" ".backup '/tmp/scarf-snapshot-XYZ.db'" \
        //     && sqlite3 '/tmp/scarf-snapshot-XYZ.db' "PRAGMA journal_mode=DELETE;"
        let backupScript = #"sqlite3 \#(Self.remotePathArg(remotePath)) ".backup '\#(remoteTmp)'" && sqlite3 '\#(remoteTmp)' "PRAGMA journal_mode=DELETE;" > /dev/null"#
        let backup = try runRemoteShell(backupScript)
        if backup.exitCode != 0 {
            throw TransportError.classifySSHFailure(host: config.host, exitCode: backup.exitCode, stderr: backup.stderrString)
        }
        // scp the backup down. scp/sftp expands `~` natively (it goes
        // through the SSH file-transfer protocol, not a remote shell), so
        // remoteTmp's `/tmp/...` absolute path round-trips as-is.
        ensureControlDir()
        var scpArgs: [String] = [
            "-o", "ControlMaster=auto",
            "-o", "ControlPath=\(controlDir)/%C",
            "-o", "ControlPersist=600",
            "-o", "StrictHostKeyChecking=accept-new",
            "-o", "LogLevel=QUIET",
            "-o", "BatchMode=yes"
        ]
        if let port = config.port { scpArgs += ["-P", String(port)] }
        if let id = config.identityFile, !id.isEmpty { scpArgs += ["-i", id] }
        scpArgs.append("\(hostSpec):\(remoteTmp)")
        scpArgs.append(localPath)
        let pull = try runLocal(executable: scpBinary, args: scpArgs, stdin: nil, timeout: 120)
        // Regardless of pull outcome, try to clean up the remote tmp.
        _ = try? runRemoteShell("rm -f \(Self.remotePathArg(remoteTmp))")
        if pull.exitCode != 0 {
            throw TransportError.classifySSHFailure(host: config.host, exitCode: pull.exitCode, stderr: pull.stderrString)
        }
        return URL(fileURLWithPath: localPath)
    }

    // MARK: - Watching

    func watchPaths(_ paths: [String]) -> AsyncStream<WatchEvent> {
        // Polling: call `stat -c %Y` on all paths every 3s and yield a single
        // `.anyChanged` when any mtime changed vs. the prior tick. ControlMaster
        // makes each stat ~5ms so the cost is bounded.
        AsyncStream { continuation in
            let task = Task.detached { [self] in
                var lastSignature: String = ""
                while !Task.isCancelled {
                    // Build one shell command that stats all paths in one
                    // ssh round-trip. Missing paths print "0" which still
                    // participates correctly in change detection. Paths
                    // get the `~`→`$HOME` rewrite via remotePathArg.
|
||||
let argList = paths.map { Self.remotePathArg($0) }.joined(separator: " ")
|
||||
let cmd = "for p in \(argList); do stat -c %Y \"$p\" 2>/dev/null || stat -f %m \"$p\" 2>/dev/null || echo 0; done"
|
||||
do {
|
||||
let result = try runRemoteShell(cmd, timeout: 30)
|
||||
let signature = result.stdoutString.trimmingCharacters(in: .whitespacesAndNewlines)
|
||||
if !signature.isEmpty && signature != lastSignature {
|
||||
if !lastSignature.isEmpty {
|
||||
continuation.yield(.anyChanged)
|
||||
}
|
||||
lastSignature = signature
|
||||
}
|
||||
} catch {
|
||||
// Transient failure (connection drop) — skip this tick.
|
||||
Self.logger.debug("watchPaths poll failed: \(String(describing: error))")
|
||||
}
|
||||
try? await Task.sleep(nanoseconds: 3_000_000_000)
|
||||
}
|
||||
}
|
||||
continuation.onTermination = { _ in task.cancel() }
|
||||
}
|
||||
}
|
||||
|
||||
// MARK: - Private helpers
|
||||
|
||||
/// Spawn a local process (ssh/scp/etc.) and collect its result. Mirrors
|
||||
/// `LocalTransport.runProcess` — duplicated rather than shared because
|
||||
/// SSH-specific code paths live on this type and we want all Process
|
||||
/// lifecycle in one place per transport.
|
||||
nonisolated private func runLocal(executable: String, args: [String], stdin: Data?, timeout: TimeInterval?) throws -> ProcessResult {
|
||||
ensureControlDir()
|
||||
let proc = Process()
|
||||
proc.executableURL = URL(fileURLWithPath: executable)
|
||||
proc.arguments = args
|
||||
// Inherit the user's shell environment so ssh can reach the
|
||||
// ssh-agent socket. GUI-launched apps don't see SSH_AUTH_SOCK by
|
||||
// default — without this, terminal ssh works (because the user's
|
||||
// shell exports it) but Scarf-launched ssh fails auth with exit 255.
|
||||
proc.environment = Self.sshSubprocessEnvironment()
|
||||
let stdoutPipe = Pipe()
|
||||
let stderrPipe = Pipe()
|
||||
let stdinPipe = Pipe()
|
||||
proc.standardOutput = stdoutPipe
|
||||
proc.standardError = stderrPipe
|
||||
if stdin != nil { proc.standardInput = stdinPipe }
|
||||
do {
|
||||
try proc.run()
|
||||
} catch {
|
||||
throw TransportError.other(message: "Failed to launch \(executable): \(error.localizedDescription)")
|
||||
}
|
||||
if let stdin {
|
||||
try? stdinPipe.fileHandleForWriting.write(contentsOf: stdin)
|
||||
try? stdinPipe.fileHandleForWriting.close()
|
||||
}
|
||||
if let timeout {
|
||||
let deadline = Date().addingTimeInterval(timeout)
|
||||
while proc.isRunning && Date() < deadline {
|
||||
Thread.sleep(forTimeInterval: 0.1)
|
||||
}
|
||||
if proc.isRunning {
|
||||
proc.terminate()
|
||||
let partial = (try? stdoutPipe.fileHandleForReading.readToEnd()) ?? Data()
|
||||
try? stdoutPipe.fileHandleForReading.close()
|
||||
try? stderrPipe.fileHandleForReading.close()
|
||||
throw TransportError.timeout(seconds: timeout, partialStdout: partial)
|
||||
}
|
||||
} else {
|
||||
proc.waitUntilExit()
|
||||
}
|
||||
let out = (try? stdoutPipe.fileHandleForReading.readToEnd()) ?? Data()
|
||||
let err = (try? stderrPipe.fileHandleForReading.readToEnd()) ?? Data()
|
||||
try? stdoutPipe.fileHandleForReading.close()
|
||||
try? stderrPipe.fileHandleForReading.close()
|
||||
try? stdinPipe.fileHandleForWriting.close()
|
||||
return ProcessResult(exitCode: proc.terminationStatus, stdout: out, stderr: err)
|
||||
}
|
||||
}
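The WAL-safe snapshot dance in `snapshotSQLite` (consistent `.backup`, then flip `journal_mode` so the copy is self-contained) can be reproduced locally without SSH. A minimal sketch using Python's `sqlite3` backup API, which is the library equivalent of the `.backup` dot-command; paths are throwaway temp files, not Scarf's real layout:

```python
import os
import sqlite3
import tempfile

# Same sequence as snapshotSQLite: open a WAL-mode DB, take a consistent
# copy via the backup API, then flip the copy's journal_mode so it can be
# read without its -wal/-shm sidecars.
work = tempfile.mkdtemp()
src = sqlite3.connect(os.path.join(work, "state.db"))
src.execute("PRAGMA journal_mode=WAL")
src.execute("CREATE TABLE t(x)")
src.execute("INSERT INTO t VALUES (42)")
src.commit()

snap_path = os.path.join(work, "snap.db")
snap = sqlite3.connect(snap_path)
src.backup(snap)  # library equivalent of sqlite3's ".backup" dot-command
snap.execute("PRAGMA journal_mode=DELETE")
snap.close()
src.close()

# The snapshot alone now opens and reads cleanly.
print(sqlite3.connect(snap_path).execute("SELECT x FROM t").fetchone())
```

Copying only the `.db` file of a live WAL database (as a plain `cp` would) skips the write-ahead log, which is exactly the corruption the comment above warns about.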

@@ -0,0 +1,102 @@
import Foundation

/// Unified I/O surface shared by local and remote Hermes installations.
///
/// **Design rationale.** The services that read Hermes state (`~/.hermes/…`)
/// and spawn the `hermes` CLI all boil down to a handful of primitives:
/// read/write/list files, stat file attributes, run a process to completion,
/// spawn a long-running stdio process for streaming, take a consistent DB
/// snapshot, observe file changes. `ServerTransport` exposes exactly those
/// primitives so the same service code works against either a local
/// filesystem or a remote host reached over SSH.
///
/// The primitives are deliberately **synchronous where possible** (file I/O,
/// process `run` + wait) so services don't need to become `async` end-to-end.
/// The two naturally-streaming cases — log tail and ACP stdio — use
/// `makeProcess` which returns a configured `Process`; services own the
/// stdio pipes and lifecycle exactly as they do today.
protocol ServerTransport: Sendable {
    /// Identifies the context this transport serves. Used for cache
    /// namespacing (e.g. per-server SQLite snapshot directories).
    nonisolated var contextID: ServerID { get }

    /// `true` if this transport talks to a remote host over SSH.
    nonisolated var isRemote: Bool { get }

    // MARK: - Files

    nonisolated func readFile(_ path: String) throws -> Data
    /// Atomic write: the file at `path` is either the previous contents or
    /// the new contents, never a partial write. Preserves `0600` mode for
    /// paths that match `.env` conventions so secrets stay owner-only.
    nonisolated func writeFile(_ path: String, data: Data) throws
    nonisolated func fileExists(_ path: String) -> Bool
    nonisolated func stat(_ path: String) -> FileStat?
    nonisolated func listDirectory(_ path: String) throws -> [String]
    /// Create directories including intermediates. No-op if already present.
    nonisolated func createDirectory(_ path: String) throws
    /// Delete a file. No-op if absent.
    nonisolated func removeFile(_ path: String) throws

    // MARK: - Processes

    /// Run a process to completion and capture its stdout/stderr. For remote
    /// transports this actually invokes `ssh host -- executable args…` under
    /// the hood; for local it spawns `executable` directly.
    nonisolated func runProcess(
        executable: String,
        args: [String],
        stdin: Data?,
        timeout: TimeInterval?
    ) throws -> ProcessResult

    /// Return a `Process` configured for the target — already pointed at the
    /// right executable with the right arguments, but **not yet started**.
    /// Callers attach their own `Pipe`s and call `run()`. Used by ACPClient
    /// (JSON-RPC over stdio) and by `HermesLogService`'s streaming tail.
    ///
    /// Local: `executable` + `args` verbatim.
    /// Remote: `/usr/bin/ssh` + connection flags + `[host, "--", executable, args…]`.
    nonisolated func makeProcess(executable: String, args: [String]) -> Process

    // MARK: - SQLite

    /// Return a local filesystem URL pointing at a fresh, consistent copy of
    /// the SQLite database at `remotePath`. For local transports this is
    /// just the remote path unchanged. For SSH transports this performs
    /// `sqlite3 .backup` on the remote side and scp's the backup into
    /// `~/Library/Caches/scarf/<serverID>/state.db`, returning that URL.
    nonisolated func snapshotSQLite(remotePath: String) throws -> URL

    // MARK: - Watching

    /// Observe changes to a set of paths and yield events when any of them
    /// change. Local: FSEvents. Remote: polls `stat` mtime every 3s.
    nonisolated func watchPaths(_ paths: [String]) -> AsyncStream<WatchEvent>
}

/// Stat-style file metadata. `nil` (return value) means the file does not
/// exist or couldn't be queried.
struct FileStat: Sendable, Hashable {
    let size: Int64
    let mtime: Date
    let isDirectory: Bool
}

/// Result of a one-shot process invocation.
struct ProcessResult: Sendable {
    let exitCode: Int32
    let stdout: Data
    let stderr: Data

    nonisolated var stdoutString: String { String(data: stdout, encoding: .utf8) ?? "" }
    nonisolated var stderrString: String { String(data: stderr, encoding: .utf8) ?? "" }
}

enum WatchEvent: Sendable {
    /// Any path in the watched set changed; implementations may coalesce
    /// rapid changes into one event. Consumers should treat this as "refresh
    /// whatever you were displaying" rather than expecting fine-grained
    /// per-path signals.
    case anyChanged
}
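The remote side of `watchPaths` (poll `stat` mtimes, yield when the combined signature differs from the last tick) is easy to exercise in isolation. A Python sketch of the same signature scheme; the function name is illustrative, not part of the codebase:

```python
import os

# Mirror of the remote polling signature: one mtime per watched path,
# "0" for paths that don't exist, newline-joined. A watcher fires
# whenever this string differs from the previous tick's value.
def mtime_signature(paths):
    lines = []
    for p in paths:
        try:
            lines.append(str(int(os.stat(p).st_mtime)))
        except OSError:
            lines.append("0")
    return "\n".join(lines)
```

Missing paths contributing "0" means a file appearing or disappearing changes the signature just like an edit does, which is why the transport's shell loop falls back to `echo 0`.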

@@ -0,0 +1,86 @@
import Foundation

/// Typed errors surfaced by `ServerTransport` implementations. The UI
/// distinguishes these so user-visible messages can be specific
/// ("authentication failed" vs. "command failed") without having to grep
/// stderr strings.
enum TransportError: LocalizedError {
    /// `ssh`/`scp` could not reach the host or hit a protocol-level issue
    /// (name resolution, connection refused, route error).
    case hostUnreachable(host: String, stderr: String)
    /// Remote rejected our credentials. Typically means no ssh-agent key is
    /// loaded, or the loaded keys don't match any `authorized_keys` entry.
    case authenticationFailed(host: String, stderr: String)
    /// Remote `~/.ssh/known_hosts` fingerprint no longer matches. Blocking —
    /// we never auto-accept on mismatch.
    case hostKeyMismatch(host: String, stderr: String)
    /// The command ran on the remote but exited non-zero.
    case commandFailed(exitCode: Int32, stderr: String)
    /// Local filesystem operation failed (read/write/stat) with the OS error
    /// message attached.
    case fileIO(path: String, underlying: String)
    /// Timed out waiting for a process to finish. `partialStdout` carries
    /// whatever output was captured before the timer fired.
    case timeout(seconds: TimeInterval, partialStdout: Data)
    /// Something we didn't plan for. Fall-through bucket with enough context
    /// for a bug report.
    case other(message: String)

    var errorDescription: String? {
        switch self {
        case .hostUnreachable(let host, _):
            return "Can't reach \(host). Check the hostname, network, and SSH config."
        case .authenticationFailed(let host, _):
            return "SSH authentication to \(host) failed. Ensure your key is loaded in ssh-agent."
        case .hostKeyMismatch(let host, _):
            return "Host key for \(host) has changed. Inspect ~/.ssh/known_hosts before continuing."
        case .commandFailed(let code, let stderr):
            // Trim stderr to a single line for the summary; full text is in
            // the associated value for disclosure views.
            let firstLine = stderr.split(separator: "\n").first.map(String.init) ?? ""
            return "Remote command exited \(code). \(firstLine)"
        case .fileIO(let path, let msg):
            return "File I/O failed at \(path): \(msg)"
        case .timeout(let secs, _):
            return "Command timed out after \(Int(secs))s."
        case .other(let msg):
            return msg
        }
    }

    /// Full stderr (if any) for display in a disclosure view. Empty string
    /// when there's no additional detail worth showing.
    var diagnosticStderr: String {
        switch self {
        case .hostUnreachable(_, let s),
             .authenticationFailed(_, let s),
             .hostKeyMismatch(_, let s),
             .commandFailed(_, let s):
            return s
        default:
            return ""
        }
    }

    /// Heuristic classifier: convert the ssh/scp stderr of a failed command
    /// into a specific `TransportError`. Used by `SSHTransport` after a
    /// non-zero exit. Defaults to `.commandFailed` when no known marker
    /// matches.
    static func classifySSHFailure(host: String, exitCode: Int32, stderr: String) -> TransportError {
        let s = stderr.lowercased()
        if s.contains("permission denied") || s.contains("authentication failed")
            || s.contains("publickey") && s.contains("denied") {
            return .authenticationFailed(host: host, stderr: stderr)
        }
        if s.contains("host key verification failed")
            || s.contains("remote host identification has changed") {
            return .hostKeyMismatch(host: host, stderr: stderr)
        }
        if s.contains("no route to host") || s.contains("connection refused")
            || s.contains("connection timed out") || s.contains("could not resolve hostname")
            || s.contains("connection closed by") && s.contains("port 22") {
            return .hostUnreachable(host: host, stderr: stderr)
        }
        return .commandFailed(exitCode: exitCode, stderr: stderr)
    }
}
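The marker heuristic in `classifySSHFailure` is worth sanity-checking in isolation, since a misclassified marker silently downgrades a useful error to the generic bucket. A Python transcription of the same matching logic (an illustration, not the shipped code):

```python
# Transcription of the stderr-marker heuristic: most specific markers
# first, falling through to a generic "commandFailed".
def classify(stderr: str) -> str:
    s = stderr.lower()
    if ("permission denied" in s or "authentication failed" in s
            or ("publickey" in s and "denied" in s)):
        return "authenticationFailed"
    if ("host key verification failed" in s
            or "remote host identification has changed" in s):
        return "hostKeyMismatch"
    if (any(m in s for m in ("no route to host", "connection refused",
                             "connection timed out", "could not resolve hostname"))
            or ("connection closed by" in s and "port 22" in s)):
        return "hostUnreachable"
    return "commandFailed"

print(classify("ssh: connect to host pi.local port 22: Connection refused"))
```

Note the ordering matters: OpenSSH's "Permission denied (publickey)" line must be tested before the connectivity markers, and the explicit parenthesization mirrors Swift's `&&`-binds-tighter-than-`||` precedence in the original.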

@@ -2,7 +2,14 @@ import Foundation

@Observable
final class ActivityViewModel {
    private let dataService = HermesDataService()
    let context: ServerContext
    private let dataService: HermesDataService

    init(context: ServerContext = .local) {
        self.context = context
        self.dataService = HermesDataService(context: context)
    }

    var toolMessages: [HermesMessage] = []
    var filterKind: ToolKind?
@@ -45,7 +52,12 @@ final class ActivityViewModel {

    func load() async {
        isLoading = true
        let opened = await dataService.open()
        // refresh() = close + reopen, which forces a fresh snapshot pull on
        // remote contexts. Using open() here would short-circuit after the
        // first load and show stale data for the view's lifetime. The DB
        // stays open after load() returns so selectEntry() can read tool
        // results without re-opening — cleanup() closes on disappear.
        let opened = await dataService.refresh()
        guard opened else {
            isLoading = false
            return

@@ -1,8 +1,14 @@
import SwiftUI

struct ActivityView: View {
    @State private var viewModel = ActivityViewModel()
    @State private var viewModel: ActivityViewModel
    @Environment(AppCoordinator.self) private var coordinator
    @Environment(HermesFileWatcher.self) private var fileWatcher

    init(context: ServerContext) {
        _viewModel = State(initialValue: ActivityViewModel(context: context))
    }

    var body: some View {
        VStack(spacing: 0) {
@@ -17,6 +23,9 @@ struct ActivityView: View {
        }
        .navigationTitle("Activity")
        .task { await viewModel.load() }
        .onChange(of: fileWatcher.lastChangeDate) {
            Task { await viewModel.load() }
        }
        .onDisappear { Task { await viewModel.cleanup() } }
    }

@@ -6,8 +6,29 @@ import os
@Observable
final class ChatViewModel {
    private let logger = Logger(subsystem: "com.scarf", category: "ChatViewModel")
    private let dataService = HermesDataService()
    private let fileService = HermesFileService()
    let context: ServerContext
    private let dataService: HermesDataService
    private let fileService: HermesFileService

    init(context: ServerContext = .local) {
        self.context = context
        self.dataService = HermesDataService(context: context)
        self.fileService = HermesFileService(context: context)
        self.richChatViewModel = RichChatViewModel(context: context)
        // Probe hermes binary existence once off-main, then cache. Doing
        // this synchronously inside `hermesBinaryExists`'s getter would
        // block main on every chat-body re-evaluation — for a remote
        // context that's an SSH `test -e` round-trip on every streaming
        // chunk, which manifests as the chat screen flashing or going
        // blank during prompts.
        Task.detached(priority: .userInitiated) { [context] in
            let exists = context.fileExists(context.paths.hermesBinary)
            await MainActor.run { [weak self] in
                self?.hermesBinaryExists = exists
            }
        }
    }

    var recentSessions: [HermesSession] = []
    var sessionPreviews: [String: String] = [:]
@@ -17,7 +38,7 @@ final class ChatViewModel {
    var ttsEnabled = false
    var isRecording = false
    var displayMode: ChatDisplayMode = .richChat
    let richChatViewModel = RichChatViewModel()
    let richChatViewModel: RichChatViewModel
    private var coordinator: Coordinator?

    // ACP state
@@ -43,14 +64,17 @@ final class ChatViewModel {
    private static let reconnectBaseDelay: UInt64 = 1_000_000_000 // 1 second
    private static let maxReconnectDelay: UInt64 = 16_000_000_000 // 16 seconds

    var hermesBinaryExists: Bool {
        FileManager.default.fileExists(atPath: HermesPaths.hermesBinary)
    }
    /// Cached result of probing for `hermes` on the target server. Updated
    /// once at init by a detached task; defaults to `true` so the chat
    /// view doesn't briefly flash "Hermes not found" while the async
    /// probe runs. Set to `false` only after the probe confirms the
    /// binary really isn't there.
    var hermesBinaryExists: Bool = true

    /// Re-checks env + `~/.hermes/.env` for AI-provider credentials and
    /// updates `missingCredentials`. Cheap — safe to call from view `.task`.
    func refreshCredentialPreflight() {
        missingCredentials = !HermesFileService.hasAnyAICredential()
        missingCredentials = !fileService.hasAnyAICredential()
    }

    /// Clears the error/hint/details triplet so future failures overwrite
@@ -114,7 +138,14 @@ final class ChatViewModel {
        // Find most recent session and resume via ACP
        Task { @MainActor in
            let opened = await dataService.open()
            guard opened else { return }
            if !opened {
                acpError = context.isRemote
                    ? "Couldn't reach \(context.displayName). Check the SSH connection and try again."
                    : "Couldn't open the Hermes state database."
                acpErrorHint = nil
                acpErrorDetails = nil
                return
            }
            let sessionId = await dataService.fetchMostRecentlyActiveSessionId()
            await dataService.close()
            if let sessionId {
@@ -143,23 +174,22 @@ final class ChatViewModel {
        }
    }

    /// Start ACP for the current or most recent session, then send the queued prompt.
    /// Start ACP for the current session (or create a new one), then send the
    /// queued prompt. Typing into a blank Chat screen ALWAYS creates a new
    /// session — the "Continue from Last Session" button is the explicit path
    /// for resuming. The previous behavior (falling back to the most recently
    /// active session in the DB) would pick up cron/background sessions the
    /// user never interacted with; those can be garbage-collected by Hermes
    /// between the DB read and ACP `session/load`, producing a silent prompt
    /// failure with no UI feedback.
    private func autoStartACPAndSend(text: String) {
        // Show the user message immediately
        richChatViewModel.addUserMessage(text: text)

        Task { @MainActor in
            // Find a session to resume: prefer current sessionId, then most recent
            var sessionToResume = richChatViewModel.sessionId
            if sessionToResume == nil {
                let opened = await dataService.open()
                if opened {
                    sessionToResume = await dataService.fetchMostRecentlyActiveSessionId()
                    await dataService.close()
                }
            }
            let sessionToResume = richChatViewModel.sessionId

            let client = ACPClient()
            let client = ACPClient(context: context)
            self.acpClient = client

            do {
@@ -168,7 +198,7 @@ final class ChatViewModel {
                startACPEventLoop(client: client)
                startHealthMonitor(client: client)

                let cwd = NSHomeDirectory()
                let cwd = await context.resolvedUserHome()

                hasActiveProcess = true

@@ -247,7 +277,7 @@ final class ChatViewModel {
        clearACPErrorState()
        acpStatus = "Starting..."

        let client = ACPClient()
        let client = ACPClient(context: context)
        self.acpClient = client

        Task { @MainActor in
@@ -258,7 +288,7 @@ final class ChatViewModel {
            startACPEventLoop(client: client)
            startHealthMonitor(client: client)

            let cwd = NSHomeDirectory()
            let cwd = await context.resolvedUserHome()

            // Mark active BEFORE setting session ID so .task(id:) sees isACPMode=true
            // and doesn't wipe messages with a DB refresh
@@ -385,11 +415,11 @@ final class ChatViewModel {
            guard !Task.isCancelled else { return }
        }

        let client = ACPClient()
        let client = ACPClient(context: context)
        do {
            try await client.start()

            let cwd = NSHomeDirectory()
            let cwd = await context.resolvedUserHome()
            let resolvedSessionId: String

            // Try resumeSession first (designed for reconnection), then loadSession.
@@ -542,11 +572,44 @@ final class ChatViewModel {
        var env = ProcessInfo.processInfo.environment
        env["TERM"] = "xterm-256color"
        env["COLORTERM"] = "truecolor"
        // Inherit ssh-agent socket for remote so password-less auth works.
        if context.isRemote {
            let shellEnv = HermesFileService.enrichedEnvironment()
            for key in ["SSH_AUTH_SOCK", "SSH_AGENT_PID"] {
                if env[key] == nil, let v = shellEnv[key], !v.isEmpty {
                    env[key] = v
                }
            }
        }
        let envArray = env.map { "\($0.key)=\($0.value)" }

        // For remote: wrap the invocation in `ssh -t host -- hermes <args>`
        // so the embedded terminal opens a pty against the remote and the
        // hermes TUI gets the bytes it expects. `-t` requests a pty (the
        // SwiftTerm view is one).
        let exe: String
        let argv: [String]
        if context.isRemote, case .ssh(let cfg) = context.kind {
            let host = cfg.user.map { "\($0)@\(cfg.host)" } ?? cfg.host
            exe = "/usr/bin/ssh"
            var sshArgs: [String] = ["-t"]
            if let port = cfg.port { sshArgs += ["-p", String(port)] }
            if let id = cfg.identityFile, !id.isEmpty { sshArgs += ["-i", id] }
            sshArgs += ["-o", "StrictHostKeyChecking=accept-new"]
            sshArgs += ["-o", "BatchMode=yes"]
            sshArgs.append(host)
            sshArgs.append("--")
            sshArgs.append(context.paths.hermesBinary)
            sshArgs.append(contentsOf: arguments)
            argv = sshArgs
        } else {
            exe = context.paths.hermesBinary
            argv = arguments
        }

        terminal.startProcess(
            executable: HermesPaths.hermesBinary,
            args: arguments,
            executable: exe,
            args: argv,
            environment: envArray,
            execName: nil
        )
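The `ssh -t` wrapping above is pure argv assembly, so it can be checked without a real host. A Python sketch of the same construction; the function name and the default hermes path are illustrative placeholders, and the config fields mirror `cfg`:

```python
# Builds the argv the remote branch hands to the embedded terminal:
# ssh -t [-p PORT] [-i IDENTITY] -o ... [user@]host -- hermes <args>
def terminal_argv(host, user=None, port=None, identity=None,
                  hermes="/usr/local/bin/hermes", args=()):
    spec = f"{user}@{host}" if user else host
    argv = ["-t"]  # request a pty: the embedded terminal view is one
    if port is not None:
        argv += ["-p", str(port)]
    if identity:
        argv += ["-i", identity]
    argv += ["-o", "StrictHostKeyChecking=accept-new", "-o", "BatchMode=yes"]
    argv += [spec, "--", hermes, *args]
    return "/usr/bin/ssh", argv

exe, argv = terminal_argv("pi.local", user="alan", port=2222, args=("chat",))
print(exe, argv)
```

The `--` separator matters: it stops ssh from treating a hermes argument that starts with `-` as one of its own flags.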
|
||||
|
||||
@@ -25,7 +25,14 @@ struct MessageGroup: Identifiable {
|
||||
|
||||
@Observable
|
||||
final class RichChatViewModel {
|
||||
private let dataService = HermesDataService()
|
||||
let context: ServerContext
|
||||
private let dataService: HermesDataService
|
||||
|
||||
init(context: ServerContext = .local) {
|
||||
self.context = context
|
||||
self.dataService = HermesDataService(context: context)
|
||||
}
|
||||
|
||||
|
||||
var messages: [HermesMessage] = []
|
||||
var currentSession: HermesSession?
|
||||
@@ -231,7 +238,44 @@ final class RichChatViewModel {
|
||||
}
|
||||
|
||||
private func handlePromptComplete(response: ACPPromptResult) {
|
||||
// Detect a failed prompt that produced no assistant output — e.g.
|
||||
// Hermes returning `stopReason: "refusal"` when the session was
|
||||
// silently garbage-collected, or `"error"` when the ACP call itself
|
||||
// threw. Without surfacing this, the user sees their prompt sitting
|
||||
// alone under "Agent working…" that never completes with any text.
|
||||
let hadAssistantOutput = streamingAssistantText.isEmpty == false
|
||||
|| messages.last?.isAssistant == true
|
||||
finalizeStreamingMessage()
|
||||
|
||||
if !hadAssistantOutput, response.stopReason != "end_turn" {
|
||||
let reason: String
|
||||
switch response.stopReason {
|
||||
case "refusal":
|
||||
reason = "The agent refused to respond (the session may have been cleared on the server). Try starting a new session from the Session menu."
|
||||
case "error":
|
||||
reason = "The prompt failed — check the ACP error banner above for details."
|
||||
case "max_tokens":
|
||||
reason = "The response was cut off before the agent could produce any output (max_tokens reached before any tokens were emitted)."
|
||||
default:
|
||||
reason = "The prompt ended without a response (stopReason: \(response.stopReason))."
|
||||
}
|
||||
let id = nextLocalId
|
||||
nextLocalId -= 1
|
||||
messages.append(HermesMessage(
|
||||
id: id,
|
||||
sessionId: sessionId ?? "",
|
||||
role: "system",
|
||||
content: reason,
|
||||
toolCallId: nil,
|
||||
toolCalls: [],
|
||||
toolName: nil,
|
||||
timestamp: Date(),
|
||||
tokenCount: nil,
|
||||
finishReason: response.stopReason,
|
||||
reasoning: nil
|
||||
))
|
||||
}
|
||||
|
||||
// Accumulate token usage from this prompt
|
||||
acpInputTokens += response.inputTokens
|
||||
acpOutputTokens += response.outputTokens
|
||||
@@ -391,7 +435,11 @@ final class RichChatViewModel {
|
||||
/// (e.g., CLI session) with the current ACP session.
|
||||
func loadSessionHistory(sessionId: String, acpSessionId: String? = nil) async {
|
||||
self.sessionId = sessionId
|
||||
let opened = await dataService.open()
|
||||
// Force a fresh snapshot pull on remote contexts. An earlier open()
|
||||
// would have cached a stale copy — on resume we need whatever
|
||||
// Hermes has actually persisted since then, or the resumed session
|
||||
// will show only history up to the moment the snapshot was taken.
|
||||
let opened = await dataService.refresh()
|
||||
guard opened else { return }
|
||||
|
||||
var allMessages = await dataService.fetchMessages(sessionId: sessionId)
|
||||
@@ -434,7 +482,10 @@ final class RichChatViewModel {
|
||||
}
|
||||
|
||||
func refreshMessages() async {
|
||||
let opened = await dataService.open()
|
||||
// Polling tick (terminal mode): pull a fresh snapshot so remote
|
||||
// reflects Hermes writes since the last tick. On local this is a
|
||||
// cheap reopen of the live DB.
|
||||
let opened = await dataService.refresh()
|
||||
guard opened else { return }
|
||||
|
||||
if sessionId == nil {
|
||||
|
||||
@@ -304,7 +304,7 @@ struct ChatView: View {
|
||||
ContentUnavailableView(
|
||||
"Hermes Not Found",
|
||||
systemImage: "terminal",
|
||||
description: Text("Expected at \(HermesPaths.hermesBinary)")
|
||||
description: Text("Expected at \(viewModel.context.paths.hermesBinary)")
|
||||
)
|
||||
.frame(maxWidth: .infinity, maxHeight: .infinity)
|
||||
}
|
||||
@@ -331,7 +331,7 @@ struct ChatView: View {
|
||||
ContentUnavailableView(
|
||||
"Hermes Not Found",
|
||||
systemImage: "terminal",
|
||||
description: Text("Expected at \(HermesPaths.hermesBinary)")
|
||||
description: Text("Expected at \(viewModel.context.paths.hermesBinary)")
|
||||
)
|
||||
.frame(maxWidth: .infinity, maxHeight: .infinity)
|
||||
}
|
||||
@@ -359,7 +359,7 @@ struct ChatView: View {
|
||||
|
||||
// MARK: - Permission Approval View
|
||||
|
||||
extension RichChatViewModel.PendingPermission: @retroactive Identifiable {
|
||||
extension RichChatViewModel.PendingPermission: Identifiable {
|
||||
var id: Int { requestId }
|
||||
}
|
||||
|
||||
|
||||
@@ -6,19 +6,31 @@ struct RichChatMessageList: View {
|
||||
/// External trigger to force a scroll-to-bottom (e.g., from "Return to Active Session").
|
||||
var scrollTrigger: UUID = UUID()
|
||||
|
||||
/// Track the last group's assistant content length to detect streaming updates.
|
||||
private var scrollAnchor: String {
|
||||
if isWorking { return "typing-indicator" }
|
||||
if let last = groups.last { return "group-\(last.id)" }
|
||||
return "scroll-top"
|
||||
}
|
||||
|
||||
/// Why `.defaultScrollAnchor(.bottom)` *alone* and no `proxy.scrollTo`.
|
||||
///
|
||||
/// `.defaultScrollAnchor(.bottom)` tells SwiftUI to pin the viewport to
|
||||
/// the bottom of the content automatically — as messages stream in or
|
||||
/// new turns arrive, the scroll position tracks the bottom edge.
|
||||
///
|
||||
/// We used to also call `proxy.scrollTo(lastID, anchor: .bottom)` from
|
||||
/// six different `onChange` handlers during streaming. The two
|
||||
/// mechanisms fought each other: the ScrollViewReader can resolve an ID
|
||||
/// to a position **before** LazyVStack has finished laying out that
|
||||
/// row, so `scrollTo` would land past the actual content — the
|
||||
/// "viewport showing whitespace, chat is above" symptom. Removing the
|
||||
/// manual scroll and trusting `defaultScrollAnchor` eliminates the race.
|
||||
///
|
||||
/// The only remaining explicit scroll is `scrollTrigger` for the "Return
|
||||
/// to Active Session" button; that fires rarely, after layout has
|
||||
/// settled, so the overshoot doesn't happen.
|
||||
var body: some View {
|
||||
ScrollViewReader { proxy in
|
||||
ScrollView {
|
||||
LazyVStack(alignment: .leading, spacing: 16) {
|
||||
Spacer(minLength: 0)
|
||||
.id("scroll-top")
|
||||
if groups.isEmpty && !isWorking {
|
||||
emptyState
|
||||
}
|
||||
|
||||
ForEach(groups) { group in
|
||||
MessageGroupView(group: group)
|
||||
.id("group-\(group.id)")
|
||||
@@ -32,50 +44,39 @@ struct RichChatMessageList: View {
                .padding()
            }
            .defaultScrollAnchor(.bottom)
            // Scroll to bottom when view first appears with content
            .onAppear {
                if !groups.isEmpty {
                    DispatchQueue.main.async {
                        scrollToBottom(proxy: proxy, animated: false)
                    }
                }
            }
            // Scroll on new groups
            .onChange(of: groups.count) {
                scrollToBottom(proxy: proxy)
            }
            // Scroll when agent starts/stops working
            .onChange(of: isWorking) {
                scrollToBottom(proxy: proxy)
            }
            // Scroll on streaming content updates (group content changes)
            .onChange(of: scrollAnchor) {
                scrollToBottom(proxy: proxy)
            }
            // Scroll on last message content change (streaming text)
            .onChange(of: groups.last?.assistantMessages.last?.content ?? "") {
                scrollToBottom(proxy: proxy, animated: false)
            }
            // Scroll on tool call count change
            .onChange(of: groups.last?.toolCallCount ?? 0) {
                scrollToBottom(proxy: proxy)
            }
            // Scroll on external trigger (e.g., "Return to Active Session" button)
            .onChange(of: scrollTrigger) {
                scrollToBottom(proxy: proxy)
                let target = lastAnchorID
                withAnimation(.easeOut(duration: 0.15)) {
                    proxy.scrollTo(target, anchor: .bottom)
                }
            }
        }
    }

    private func scrollToBottom(proxy: ScrollViewProxy, animated: Bool = true) {
        let target = scrollAnchor
        if animated {
            withAnimation(.easeOut(duration: 0.15)) {
                proxy.scrollTo(target, anchor: .bottom)
            }
        } else {
            proxy.scrollTo(target, anchor: .bottom)
    /// Anchor ID used by the explicit scrollTrigger path. Prefers the typing
    /// indicator when visible (so we scroll to the very bottom of the
    /// current turn), otherwise the last group.
    private var lastAnchorID: String {
        if isWorking { return "typing-indicator" }
        if let last = groups.last { return "group-\(last.id)" }
        return "group-0"
    }

    private var emptyState: some View {
        VStack(spacing: 12) {
            Image(systemName: "bubble.left.and.text.bubble.right")
                .font(.system(size: 40))
                .foregroundStyle(.tertiary)
            Text("Chat Messages")
                .font(.title3)
                .fontWeight(.semibold)
            Text("Messages will appear here as the conversation progresses.")
                .font(.callout)
                .foregroundStyle(.secondary)
                .multilineTextAlignment(.center)
        }
        .frame(maxWidth: .infinity)
        .padding(.vertical, 80)
    }

    private var typingIndicator: some View {
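The anchor-selection rule in `lastAnchorID` is pure string logic and easy to check in isolation. A minimal standalone sketch, with `anchorID` as a hypothetical free-function form (the function name and parameters are illustrative, not the app's API):

```swift
// Sketch of the lastAnchorID rule from the diff above: prefer the
// typing indicator while the agent is working, else the last group's
// ID, else the fixed fallback.
func anchorID(isWorking: Bool, lastGroupID: Int?) -> String {
    if isWorking { return "typing-indicator" }
    if let id = lastGroupID { return "group-\(id)" }
    return "group-0"
}

print(anchorID(isWorking: false, lastGroupID: 7))  // group-7
```

The rule is order-sensitive: the typing indicator wins even when groups exist, so the explicit scroll always lands at the true bottom of the current turn.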
@@ -108,7 +109,17 @@ struct MessageGroupView: View {
                RichMessageBubble(message: user, toolResults: [:])
            }

            ForEach(group.assistantMessages.filter(\.isAssistant)) { message in
            // Identify by array offset rather than `message.id`. The
            // streaming assistant message starts with id=0 and gets a
            // new negative id when finalized — using `\.id` would make
            // SwiftUI think the bubble disappeared and a new one appeared
            // (destroying + recreating the view, which manifests as the
            // chat flashing or jumping right when the prompt completes).
            // Within a single group the assistant messages are
            // append-only, so offset is a stable identity for the
            // group's lifetime.
            let assistantMessages = group.assistantMessages.filter(\.isAssistant)
            ForEach(Array(assistantMessages.enumerated()), id: \.offset) { _, message in
                RichMessageBubble(message: message, toolResults: group.toolResults)
            }

@@ -21,20 +21,15 @@ struct RichChatView: View {
            )
            Divider()

            if richChat.messageGroups.isEmpty && !richChat.isAgentWorking {
                ContentUnavailableView(
                    "Chat Messages",
                    systemImage: "bubble.left.and.text.bubble.right",
                    description: Text("Messages will appear here as the conversation progresses.")
                )
                .frame(maxWidth: .infinity, maxHeight: .infinity)
            } else {
                RichChatMessageList(
                    groups: richChat.messageGroups,
                    isWorking: richChat.isAgentWorking,
                    scrollTrigger: richChat.scrollTrigger
                )
            }
            // Always mount RichChatMessageList; empty state lives inside it.
            // Swapping between a ContentUnavailableView and the ScrollView
            // hierarchy on first message caused a full view tree rebuild,
            // which manifested as a white flash.
            RichChatMessageList(
                groups: richChat.messageGroups,
                isWorking: richChat.isAgentWorking,
                scrollTrigger: richChat.scrollTrigger
            )

            Divider()
            RichChatInputBar(
@@ -0,0 +1,74 @@
import SwiftUI

/// Translucent loading overlay used by feature views while their VM's
/// `load()` runs in the background. Shows a centered ProgressView with an
/// optional label; the underlying content stays visible (with a small
/// refresh pill in the top-trailing corner) when it's already populated,
/// or the overlay fully covers an empty section so the user sees activity
/// instead of "nothing here yet".
///
/// Usage:
/// ```swift
/// SomeContent()
///     .loadingOverlay(viewModel.isLoading, label: "Loading credentials…", isEmpty: viewModel.pools.isEmpty)
/// ```
///
/// The `isEmpty` flag controls whether the overlay covers the full view
/// (when there's no stale content to show under it) or just annotates it
/// (when refreshing existing data).
struct LoadingOverlay: ViewModifier {
    let isLoading: Bool
    let label: String
    let isEmpty: Bool

    func body(content: Content) -> some View {
        content
            .overlay {
                if isLoading {
                    if isEmpty {
                        // Full cover: empty state. User has no data to look at,
                        // so own the whole pane with the spinner.
                        VStack(spacing: 12) {
                            ProgressView()
                                .controlSize(.large)
                            Text(label)
                                .font(.callout)
                                .foregroundStyle(.secondary)
                        }
                        .frame(maxWidth: .infinity, maxHeight: .infinity)
                        .background(Color(NSColor.windowBackgroundColor))
                    } else {
                        // Stale-content refresh: top-trailing pill so the
                        // user sees data is being refreshed without losing
                        // their place.
                        VStack {
                            HStack {
                                Spacer()
                                HStack(spacing: 6) {
                                    ProgressView()
                                        .controlSize(.small)
                                    Text(label)
                                        .font(.caption)
                                        .foregroundStyle(.secondary)
                                }
                                .padding(.horizontal, 10)
                                .padding(.vertical, 5)
                                .background(.thinMaterial, in: Capsule())
                                .padding(8)
                            }
                            Spacer()
                        }
                    }
                }
            }
    }
}

extension View {
    /// Show a loading indicator while `isLoading` is true. If `isEmpty` is
    /// also true, the indicator covers the full view; otherwise it shows
    /// as a small refresh pill in the top-trailing corner so existing
    /// content stays visible.
    func loadingOverlay(_ isLoading: Bool, label: String = "Loading…", isEmpty: Bool = false) -> some View {
        modifier(LoadingOverlay(isLoading: isLoading, label: label, isEmpty: isEmpty))
    }
}
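The overlay's two display modes reduce to a small decision on `(isLoading, isEmpty)`. A hedged sketch of that decision as a pure function — the `OverlayStyle` enum and `overlayStyle` function are illustrative names, not part of the app:

```swift
// Decision table behind LoadingOverlay's body, as a testable function.
enum OverlayStyle: Equatable {
    case hidden       // not loading: no overlay at all
    case fullCover    // loading with nothing underneath: own the pane
    case refreshPill  // loading over stale data: corner pill only
}

func overlayStyle(isLoading: Bool, isEmpty: Bool) -> OverlayStyle {
    guard isLoading else { return .hidden }
    return isEmpty ? .fullCover : .refreshPill
}
```

Keeping the branch this small is what lets every feature view share one modifier instead of hand-rolling spinner placement.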
@@ -28,6 +28,12 @@ struct HermesCredentialPool: Identifiable, Sendable {
@MainActor
final class CredentialPoolsViewModel {
    private let logger = Logger(subsystem: "com.scarf", category: "CredentialPoolsViewModel")
    let context: ServerContext

    init(context: ServerContext = .local) {
        self.context = context
        self.oauthFlow = OAuthFlowController(context: context)
    }

    var pools: [HermesCredentialPool] = []
    var isLoading = false
@@ -37,7 +43,7 @@ final class CredentialPoolsViewModel {
    /// can extract the authorization URL, pop it open with an explicit button,
    /// and feed the code back via stdin. See OAuthFlowController for why we
    /// moved off the embedded-terminal approach.
    let oauthFlow = OAuthFlowController()
    let oauthFlow: OAuthFlowController
    var oauthProvider: String = ""
    /// Convenience — the sheet keys a lot of UI off "is the flow running?".
    var oauthInProgress: Bool { oauthFlow.isRunning }
@@ -47,34 +53,42 @@ final class CredentialPoolsViewModel {
    /// Source of truth is `~/.hermes/auth.json`. Parsing box-drawn `hermes auth list`
    /// output is fragile — the JSON file is structured, stable, and already stores
    /// exactly the pool data the UI needs. We never display full tokens.
    ///
    /// Runs the file reads on a detached task so the synchronous SSH calls
    /// (which can block for hundreds of milliseconds even with ControlMaster
    /// multiplexing) don't freeze the main thread / spin the beach ball.
    func load() {
        isLoading = true
        defer { isLoading = false }
        let ctx = context
        Task.detached { [weak self] in
            let authData = ctx.readData(ctx.paths.authJSON)
            let yaml = ctx.readText(ctx.paths.configYAML) ?? ""
            let strategies = Self.parseStrategies(from: yaml)

        let authPath = HermesPaths.home + "/auth.json"
        let strategies = parseStrategies()
            let decodedPools: [HermesCredentialPool]
            if let data = authData,
               let decoded = try? JSONDecoder().decode(AuthFile.self, from: data) {
                decodedPools = Self.buildPools(from: decoded, strategies: strategies)
            } else {
                decodedPools = []
            }

        guard let data = try? Data(contentsOf: URL(fileURLWithPath: authPath)) else {
            pools = []
            return
        }
        do {
            let decoded = try JSONDecoder().decode(AuthFile.self, from: data)
            pools = Self.buildPools(from: decoded, strategies: strategies)
        } catch {
            logger.error("Failed to decode auth.json: \(error.localizedDescription)")
            pools = []
            await MainActor.run { [weak self] in
                self?.pools = decodedPools
                self?.isLoading = false
            }
        }
    }

    /// The `credential_pool_strategies:` map lives in config.yaml as `<provider>: <strategy>`.
    private func parseStrategies() -> [String: String] {
        guard let yaml = try? String(contentsOfFile: HermesPaths.configYAML, encoding: .utf8) else { return [:] }
    /// Pure-function form so it's safe to call from the detached load task.
    nonisolated private static func parseStrategies(from yaml: String) -> [String: String] {
        guard !yaml.isEmpty else { return [:] }
        let parsed = HermesFileService.parseNestedYAML(yaml)
        return parsed.maps["credential_pool_strategies"] ?? [:]
    }

    private static func buildPools(from auth: AuthFile, strategies: [String: String]) -> [HermesCredentialPool] {
    nonisolated private static func buildPools(from auth: AuthFile, strategies: [String: String]) -> [HermesCredentialPool] {
        auth.credential_pool.keys.sorted().map { provider in
            let entries = auth.credential_pool[provider] ?? []
            let creds = entries.enumerated().map { index, entry in
@@ -100,7 +114,7 @@ final class CredentialPoolsViewModel {

    /// Return last 4 chars prefixed with "…", or "" if the token is too short.
    /// Callers MUST NOT pass the full token anywhere user-visible beyond this.
    private static func tail(of token: String) -> String {
    nonisolated private static func tail(of token: String) -> String {
        guard token.count >= 4 else { return "" }
        return "…" + String(token.suffix(4))
    }
@@ -206,21 +220,7 @@ final class CredentialPoolsViewModel {

    @discardableResult
    private func runHermes(_ arguments: [String]) -> (output: String, exitCode: Int32) {
        let process = Process()
        process.executableURL = URL(fileURLWithPath: HermesPaths.hermesBinary)
        process.arguments = arguments
        process.environment = HermesFileService.enrichedEnvironment()
        let pipe = Pipe()
        process.standardOutput = pipe
        process.standardError = Pipe()
        do {
            try process.run()
            process.waitUntilExit()
            let data = pipe.fileHandleForReading.readDataToEndOfFile()
            return (String(data: data, encoding: .utf8) ?? "", process.terminationStatus)
        } catch {
            return ("", -1)
        }
        context.runHermes(arguments)
    }
}

@@ -229,16 +229,40 @@ final class CredentialPoolsViewModel {
// All fields are optional because the format evolves and we want decoding to
// succeed even if hermes adds new keys or omits some for certain auth types.

private struct AuthFile: Decodable {
    let credential_pool: [String: [AuthEntry]]
// Hand-written `init(from:)` so Swift 6 doesn't synthesize a MainActor-
// isolated conformance — auth.json decode runs in `load()`'s detached task.
private struct AuthFile: Decodable, Sendable {
    nonisolated let credential_pool: [String: [AuthEntry]]

    enum CodingKeys: String, CodingKey { case credential_pool }

    nonisolated init(from decoder: any Decoder) throws {
        let c = try decoder.container(keyedBy: CodingKeys.self)
        self.credential_pool = try c.decode([String: [AuthEntry]].self, forKey: .credential_pool)
    }
}

private struct AuthEntry: Decodable {
    let id: String?
    let label: String?
    let auth_type: String?
    let source: String?
    let access_token: String?
    let last_status: String?
    let request_count: Int?
private struct AuthEntry: Decodable, Sendable {
    nonisolated let id: String?
    nonisolated let label: String?
    nonisolated let auth_type: String?
    nonisolated let source: String?
    nonisolated let access_token: String?
    nonisolated let last_status: String?
    nonisolated let request_count: Int?

    enum CodingKeys: String, CodingKey {
        case id, label, auth_type, source, access_token, last_status, request_count
    }

    nonisolated init(from decoder: any Decoder) throws {
        let c = try decoder.container(keyedBy: CodingKeys.self)
        self.id = try c.decodeIfPresent(String.self, forKey: .id)
        self.label = try c.decodeIfPresent(String.self, forKey: .label)
        self.auth_type = try c.decodeIfPresent(String.self, forKey: .auth_type)
        self.source = try c.decodeIfPresent(String.self, forKey: .source)
        self.access_token = try c.decodeIfPresent(String.self, forKey: .access_token)
        self.last_status = try c.decodeIfPresent(String.self, forKey: .last_status)
        self.request_count = try c.decodeIfPresent(Int.self, forKey: .request_count)
    }
}

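The token-masking rule in `tail(of:)` is small enough to pin down with a couple of cases. This mirrors the diff's implementation directly (only the free-function name `tokenTail` is illustrative):

```swift
// Mirrors tail(of:) from the diff: last 4 characters prefixed with an
// ellipsis, or empty for short tokens, so full secrets never reach the UI.
func tokenTail(_ token: String) -> String {
    guard token.count >= 4 else { return "" }
    return "…" + String(token.suffix(4))
}

print(tokenTail("sk-abcdef123456"))  // …3456
```

Note the short-token branch returns an empty string rather than echoing the whole value, which keeps even tiny test tokens out of the UI.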
@@ -27,6 +27,12 @@ import os
@MainActor
final class OAuthFlowController {
    private let logger = Logger(subsystem: "com.scarf", category: "OAuthFlowController")
    let context: ServerContext

    init(context: ServerContext = .local) {
        self.context = context
    }

    // MARK: - Observable state

@@ -82,10 +88,20 @@ final class OAuthFlowController {
            args += ["--label", trimmedLabel]
        }

        let proc = Process()
        proc.executableURL = URL(fileURLWithPath: HermesPaths.hermesBinary)
        proc.arguments = args
        proc.environment = HermesFileService.enrichedEnvironment()
        // Use the transport so OAuth works against remote contexts too:
        // local spawns hermes directly, remote routes through ssh -T while
        // preserving stdin (for the auth-code prompt) and stdout (for the
        // URL parser).
        let proc = context.makeTransport().makeProcess(
            executable: context.paths.hermesBinary,
            args: args
        )
        if !context.isRemote {
            // Only enrich env locally — the remote ssh process gets the
            // remote login env naturally, and exporting our local API keys
            // into it would be wrong.
            proc.environment = HermesFileService.enrichedEnvironment()
        }

        let outPipe = Pipe()
        let inPipe = Pipe()

@@ -1,10 +1,15 @@
import SwiftUI

struct CredentialPoolsView: View {
    @State private var viewModel = CredentialPoolsViewModel()
    @State private var viewModel: CredentialPoolsViewModel
    @State private var showAddSheet = false
    @State private var pendingRemove: HermesCredential?

    init(context: ServerContext) {
        _viewModel = State(initialValue: CredentialPoolsViewModel(context: context))
    }

    var body: some View {
        ScrollView {
            VStack(alignment: .leading, spacing: 16) {
@@ -24,6 +29,11 @@ struct CredentialPoolsView: View {
            .frame(maxWidth: .infinity, alignment: .topLeading)
        }
        .navigationTitle("Credential Pools")
        .loadingOverlay(
            viewModel.isLoading,
            label: "Loading credentials…",
            isEmpty: viewModel.pools.isEmpty
        )
        .onAppear { viewModel.load() }
        .sheet(isPresented: $showAddSheet) {
            AddCredentialSheet(viewModel: viewModel) {
@@ -194,7 +204,7 @@ private struct AddCredentialSheet: View {
    @State private var oauthStarted: Bool = false
    @State private var authCode: String = ""

    private let catalog = ModelCatalogService()
    private var catalog: ModelCatalogService { ModelCatalogService(context: viewModel.context) }

    var body: some View {
        VStack(alignment: .leading, spacing: 16) {

@@ -5,7 +5,14 @@ import os
@Observable
final class CronViewModel {
    private let logger = Logger(subsystem: "com.scarf", category: "CronViewModel")
    private let fileService = HermesFileService()
    let context: ServerContext
    private let fileService: HermesFileService

    init(context: ServerContext = .local) {
        self.context = context
        self.fileService = HermesFileService(context: context)
    }

    var jobs: [HermesCronJob] = []
    var selectedJob: HermesCronJob?
@@ -14,19 +21,37 @@ final class CronViewModel {
    var message: String?
    var showCreateSheet = false
    var editingJob: HermesCronJob?
    var isLoading = false

    func load() {
        jobs = fileService.loadCronJobs()
        availableSkills = fileService.loadSkills().flatMap { $0.skills.map(\.id) }.sorted()
        if let selected = selectedJob, let refreshed = jobs.first(where: { $0.id == selected.id }) {
            selectedJob = refreshed
            jobOutput = fileService.loadCronOutput(jobId: refreshed.id)
        isLoading = true
        let svc = fileService
        let selectedID = selectedJob?.id
        Task.detached { [weak self] in
            // Three sync transport ops on remote — keep them off main.
            let jobs = svc.loadCronJobs()
            let skills = svc.loadSkills().flatMap { $0.skills.map(\.id) }.sorted()
            let refreshed = selectedID.flatMap { id in jobs.first(where: { $0.id == id }) }
            let output = refreshed.flatMap { svc.loadCronOutput(jobId: $0.id) }
            await MainActor.run { [weak self] in
                guard let self else { return }
                self.jobs = jobs
                self.availableSkills = skills
                if let refreshed { self.selectedJob = refreshed }
                if output != nil { self.jobOutput = output }
                self.isLoading = false
            }
        }
    }

    func selectJob(_ job: HermesCronJob) {
        selectedJob = job
        jobOutput = fileService.loadCronOutput(jobId: job.id)
        let svc = fileService
        let jobID = job.id
        Task.detached { [weak self] in
            let output = svc.loadCronOutput(jobId: jobID)
            await MainActor.run { [weak self] in self?.jobOutput = output }
        }
    }

    // MARK: - CLI wrappers

@@ -1,9 +1,14 @@
import SwiftUI

struct CronView: View {
    @State private var viewModel = CronViewModel()
    @State private var viewModel: CronViewModel
    @State private var pendingDelete: HermesCronJob?

    init(context: ServerContext) {
        _viewModel = State(initialValue: CronViewModel(context: context))
    }

    var body: some View {
        HSplitView {
            jobsList
@@ -12,6 +17,7 @@ struct CronView: View {
                .frame(minWidth: 400)
        }
        .navigationTitle("Cron Jobs")
        .loadingOverlay(viewModel.isLoading, label: "Loading cron jobs…", isEmpty: viewModel.jobs.isEmpty)
        .onAppear { viewModel.load() }
        .sheet(isPresented: $viewModel.showCreateSheet) {
            CronJobEditor(mode: .create, availableSkills: viewModel.availableSkills) { form in

@@ -2,8 +2,16 @@ import Foundation

@Observable
final class DashboardViewModel {
    private let dataService = HermesDataService()
    private let fileService = HermesFileService()
    let context: ServerContext
    private let dataService: HermesDataService
    private let fileService: HermesFileService

    init(context: ServerContext = .local) {
        self.context = context
        self.dataService = HermesDataService(context: context)
        self.fileService = HermesFileService(context: context)
    }

    var stats = HermesDataService.SessionStats.empty
    var recentSessions: [HermesSession] = []
@@ -13,18 +21,73 @@ final class DashboardViewModel {
    var hermesRunning = false
    var isLoading = true

    /// User-presentable error banner. Set when any of the remote reads
    /// (state.db snapshot, config.yaml, gateway_state.json, pgrep) failed
    /// in a way that's not just "file doesn't exist yet". Dashboard renders
    /// this above the stats with a "Run Diagnostics…" button. `nil` = no
    /// surfaceable error.
    var lastReadError: String?

    func load() async {
        isLoading = true
        let opened = await dataService.open()
        // refresh() = close + reopen, forces a fresh remote snapshot. Cheap
        // on local (live DB reopen).
        let opened = await dataService.refresh()
        var collectedErrors: [String] = []
        if opened {
            stats = await dataService.fetchStats()
            recentSessions = await dataService.fetchSessions(limit: 5)
            sessionPreviews = await dataService.fetchSessionPreviews(limit: 5)
            await dataService.close()
        } else if let msg = await dataService.lastOpenError {
            collectedErrors.append(msg)
        }
        // The fileService methods are synchronous and route through the
        // transport. For remote contexts each call is a blocking ssh
        // round-trip — do them off the main thread to avoid spinning the
        // beach ball during the load.
        let svc = fileService
        struct LoadResults: Sendable {
            let cfg: Result<HermesConfig, Error>
            let gw: Result<GatewayState?, Error>
            let running: Result<pid_t?, Error>
        }
        let results = await Task.detached { () -> LoadResults in
            LoadResults(
                cfg: svc.loadConfigResult(),
                gw: svc.loadGatewayStateResult(),
                running: svc.hermesPIDResult()
            )
        }.value

        switch results.cfg {
        case .success(let c): config = c
        case .failure(let e):
            config = .empty
            collectedErrors.append("config.yaml — \(e.localizedDescription)")
        }
        switch results.gw {
        case .success(let g): gatewayState = g
        case .failure(let e):
            gatewayState = nil
            collectedErrors.append("gateway_state.json — \(e.localizedDescription)")
        }
        switch results.running {
        case .success(let pid): hermesRunning = (pid != nil)
        case .failure(let e):
            hermesRunning = false
            collectedErrors.append("pgrep — \(e.localizedDescription)")
        }

        // Only surface when there's a real error AND we're on a remote
        // context. Local contexts rarely hit these paths (live DB, local
        // filesystem), and a transient "file doesn't exist yet" on fresh
        // installs shouldn't scare users.
        if context.isRemote, !collectedErrors.isEmpty {
            lastReadError = collectedErrors.joined(separator: "\n")
        } else {
            lastReadError = nil
        }
        config = fileService.loadConfig()
        gatewayState = fileService.loadGatewayState()
        hermesRunning = fileService.isHermesRunning()
        isLoading = false
    }
}

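The surfacing rule at the end of `load()` — show the banner only on a remote context with at least one real error — can be sketched as a pure function. `bannerText` is a hypothetical name for illustration; the app sets `lastReadError` inline:

```swift
// Sketch of the lastReadError surfacing rule: nil means "no banner".
func bannerText(isRemote: Bool, errors: [String]) -> String? {
    guard isRemote, !errors.isEmpty else { return nil }
    return errors.joined(separator: "\n")
}
```

Collecting errors into an array and deciding once at the end keeps the three independent reads from racing over a single error field.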
@@ -1,13 +1,22 @@
import SwiftUI

struct DashboardView: View {
    @State private var viewModel = DashboardViewModel()
    @State private var viewModel: DashboardViewModel
    @State private var showDiagnostics = false
    @Environment(AppCoordinator.self) private var coordinator
    @Environment(HermesFileWatcher.self) private var fileWatcher

    init(context: ServerContext) {
        _viewModel = State(initialValue: DashboardViewModel(context: context))
    }

    var body: some View {
        ScrollView {
            VStack(alignment: .leading, spacing: 20) {
                if let err = viewModel.lastReadError {
                    readErrorBanner(err)
                }
                statusSection
                statsSection
                recentSessionsSection
@@ -16,10 +25,53 @@ struct DashboardView: View {
            .frame(maxWidth: .infinity, alignment: .topLeading)
        }
        .navigationTitle("Dashboard")
        .loadingOverlay(
            viewModel.isLoading,
            label: "Loading dashboard…",
            isEmpty: viewModel.recentSessions.isEmpty
        )
        .task { await viewModel.load() }
        .onChange(of: fileWatcher.lastChangeDate) {
            Task { await viewModel.load() }
        }
        .sheet(isPresented: $showDiagnostics) {
            RemoteDiagnosticsView(context: viewModel.context)
        }
    }

    /// Banner shown above the Dashboard when one or more remote reads
    /// failed (permission denied, missing sqlite3, wrong home dir, etc.).
    /// Replaces the old silent-failure mode where empty values just
    /// appeared as "Stopped / unknown / 0" with no explanation.
    private func readErrorBanner(_ err: String) -> some View {
        VStack(alignment: .leading, spacing: 8) {
            HStack(alignment: .top, spacing: 8) {
                Image(systemName: "exclamationmark.triangle.fill")
                    .foregroundStyle(.orange)
                VStack(alignment: .leading, spacing: 4) {
                    Text("Can't read Hermes state on \(viewModel.context.displayName)")
                        .font(.headline)
                    Text(err)
                        .font(.caption.monospaced())
                        .foregroundStyle(.secondary)
                        .textSelection(.enabled)
                        .fixedSize(horizontal: false, vertical: true)
                }
                Spacer()
                Button {
                    showDiagnostics = true
                } label: {
                    Label("Run Diagnostics…", systemImage: "stethoscope")
                }
                .controlSize(.regular)
            }
        }
        .padding(12)
        .background(Color.orange.opacity(0.1), in: RoundedRectangle(cornerRadius: 8))
        .overlay(
            RoundedRectangle(cornerRadius: 8)
                .strokeBorder(Color.orange.opacity(0.3), lineWidth: 1)
        )
    }

    private var statusSection: some View {

@@ -37,6 +37,12 @@ struct PendingPairing: Identifiable {

@Observable
final class GatewayViewModel {
    let context: ServerContext

    init(context: ServerContext = .local) {
        self.context = context
    }

    var gateway = GatewayInfo(pid: nil, state: "unknown", exitReason: nil, startTime: nil, updatedAt: nil, platforms: [], isLoaded: false, isStale: false)
    var approvedUsers: [PairedUser] = []
    var pendingPairings: [PendingPairing] = []
@@ -45,52 +51,26 @@ final class GatewayViewModel {

    func load() {
        isLoading = true
        loadGatewayStatus()
        loadPairing()
        isLoading = false
    }

    func startGateway() {
        runHermes(["gateway", "start"])
        actionMessage = "Gateway start requested"
        DispatchQueue.main.asyncAfter(deadline: .now() + 2) { [weak self] in
            self?.loadGatewayStatus()
            self?.actionMessage = nil
        let ctx = context
        Task.detached { [weak self] in
            // Two sync transport calls + two CLI invocations — substantial
            // remote latency. Detach the whole load and commit at the end.
            let status = Self.fetchGatewayStatus(context: ctx)
            let pairing = Self.fetchPairing(context: ctx)
            await MainActor.run { [weak self] in
                guard let self else { return }
                self.gateway = status
                self.approvedUsers = pairing.approved
                self.pendingPairings = pairing.pending
                self.isLoading = false
            }
        }
    }

    func stopGateway() {
        runHermes(["gateway", "stop"])
        actionMessage = "Gateway stop requested"
        DispatchQueue.main.asyncAfter(deadline: .now() + 2) { [weak self] in
            self?.loadGatewayStatus()
            self?.actionMessage = nil
        }
    }

    func restartGateway() {
        runHermes(["gateway", "restart"])
        actionMessage = "Gateway restart requested"
        DispatchQueue.main.asyncAfter(deadline: .now() + 3) { [weak self] in
            self?.loadGatewayStatus()
            self?.actionMessage = nil
        }
    }

    func approvePairing(platform: String, code: String) {
        runHermes(["pairing", "approve", platform, code])
        loadPairing()
    }

    func revokeUser(_ user: PairedUser) {
        runHermes(["pairing", "revoke", user.platform, user.userId])
        approvedUsers.removeAll { $0.id == user.id }
    }

    // MARK: - Private

    private func loadGatewayStatus() {
        let stateJSON = FileManager.default.contents(atPath: HermesPaths.gatewayStateJSON)
    /// Static form of the gateway-status walk so the detached load can call
    /// it without bouncing back to MainActor.
    nonisolated private static func fetchGatewayStatus(context: ServerContext) -> GatewayInfo {
        let stateJSON = context.readData(context.paths.gatewayStateJSON)
        var pid: Int?
        var state = "unknown"
        var exitReason: String?
@@ -117,21 +97,21 @@ final class GatewayViewModel {
            }
        }

        let statusOutput = runHermes(["gateway", "status"]).output
        let statusOutput = context.runHermes(["gateway", "status"]).output
        let isLoaded = statusOutput.contains("service is loaded")
        let isStale = statusOutput.contains("stale")

        gateway = GatewayInfo(
        return GatewayInfo(
            pid: pid, state: state, exitReason: exitReason,
            startTime: startTime, updatedAt: updatedAt,
            platforms: platforms, isLoaded: isLoaded, isStale: isStale
        )
    }

    private func loadPairing() {
        let output = runHermes(["pairing", "list"]).output
        approvedUsers = []
        pendingPairings = []
    nonisolated private static func fetchPairing(context: ServerContext) -> (approved: [PairedUser], pending: [PendingPairing]) {
        let output = context.runHermes(["pairing", "list"]).output
        var approved: [PairedUser] = []
        var pending: [PendingPairing] = []

        var inApproved = false
        var inPending = false
@@ -147,31 +127,59 @@ final class GatewayViewModel {
                let platform = String(parts[0])
                let userId = String(parts[1])
                let name = parts[2...].joined(separator: " ")
                approvedUsers.append(PairedUser(platform: platform, userId: userId, name: name))
            }
            if inPending && parts.count >= 2 {
                approved.append(PairedUser(platform: platform, userId: userId, name: name))
            } else if inPending && parts.count >= 2 {
                let platform = String(parts[0])
                let code = String(parts[1])
                pendingPairings.append(PendingPairing(platform: platform, code: code))
                pending.append(PendingPairing(platform: platform, code: code))
            }
        }
        return (approved, pending)
    }

    func startGateway() {
        runHermes(["gateway", "start"])
        actionMessage = "Gateway start requested"
        DispatchQueue.main.asyncAfter(deadline: .now() + 2) { [weak self] in
            self?.load()
            self?.actionMessage = nil
        }
    }

    @discardableResult
    private func runHermes(_ arguments: [String]) -> (output: String, exitCode: Int32) {
        let process = Process()
        process.executableURL = URL(fileURLWithPath: HermesPaths.hermesBinary)
        process.arguments = arguments
        let pipe = Pipe()
        process.standardOutput = pipe
        process.standardError = Pipe()
        do {
            try process.run()
            process.waitUntilExit()
            let data = pipe.fileHandleForReading.readDataToEndOfFile()
            return (String(data: data, encoding: .utf8) ?? "", process.terminationStatus)
        } catch {
            return ("", -1)
    func stopGateway() {
        runHermes(["gateway", "stop"])
        actionMessage = "Gateway stop requested"
        DispatchQueue.main.asyncAfter(deadline: .now() + 2) { [weak self] in
            self?.load()
            self?.actionMessage = nil
        }
    }

    func restartGateway() {
        runHermes(["gateway", "restart"])
        actionMessage = "Gateway restart requested"
        DispatchQueue.main.asyncAfter(deadline: .now() + 3) { [weak self] in
            self?.load()
            self?.actionMessage = nil
        }
    }

    func approvePairing(platform: String, code: String) {
        runHermes(["pairing", "approve", platform, code])
        load()
    }

    func revokeUser(_ user: PairedUser) {
        runHermes(["pairing", "revoke", user.platform, user.userId])
        approvedUsers.removeAll { $0.id == user.id }
    }

    // MARK: - Private
    // (loadGatewayStatus / loadPairing were moved to static helpers above
    // so the detached load() can run them without touching MainActor state.)

    @discardableResult
    private func runHermes(_ arguments: [String]) -> (output: String, exitCode: Int32) {
|
||||
context.runHermes(arguments)
|
||||
}
|
||||
}
|
||||
|
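The refactor above repeats one pattern in every view model: blocking transport calls move into a `Task.detached`, and results are committed back on the `MainActor`. A minimal standalone sketch of that pattern follows — `FakeContext` and `StatusViewModel` are illustrative stand-ins, not the app's actual types:

```swift
import Foundation
import Observation

// Hypothetical stand-in for the app's ServerContext transport.
struct FakeContext: Sendable {
    func run(_ args: [String]) -> String { "service is loaded" }
}

@Observable
@MainActor
final class StatusViewModel {
    var status = ""
    var isLoading = false
    let context = FakeContext()

    func load() {
        isLoading = true
        let ctx = context  // capture a value, not MainActor-isolated self
        Task.detached { [weak self] in
            // Blocking round-trip runs off the main actor.
            let output = ctx.run(["gateway", "status"])
            await MainActor.run { [weak self] in
                self?.status = output
                self?.isLoading = false
            }
        }
    }
}
```

Capturing `ctx` and `svc` as `let` copies before the closure is what keeps the detached task free of MainActor state, matching the `let ctx = context` / `let svc = fileService` lines throughout the diff.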
@@ -1,9 +1,14 @@
import SwiftUI

struct GatewayView: View {
@State private var viewModel = GatewayViewModel()
@State private var viewModel: GatewayViewModel
@Environment(HermesFileWatcher.self) private var fileWatcher

init(context: ServerContext) {
_viewModel = State(initialValue: GatewayViewModel(context: context))
}


var body: some View {
ScrollView {
VStack(alignment: .leading, spacing: 24) {
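Every view in this change swaps a default-initialized `@State` view model for one injected through the view's `init`, using `State(initialValue:)` to assign the property wrapper's underscore storage. A self-contained sketch (types here are illustrative, not the app's):

```swift
import SwiftUI
import Observation

// Illustrative view model that takes a dependency in its init.
@Observable
final class DemoViewModel {
    let title: String
    init(title: String) { self.title = title }
}

struct DemoView: View {
    @State private var viewModel: DemoViewModel

    // Assigning the underscored storage with State(initialValue:) lets the
    // view accept a dependency while @State retains ownership of the value.
    init(title: String) {
        _viewModel = State(initialValue: DemoViewModel(title: title))
    }

    var body: some View {
        Text(viewModel.title)
    }
}
```

Note that `State(initialValue:)` only seeds the value on first insertion into the view hierarchy; subsequent re-inits of the struct do not replace the stored model.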
@@ -22,7 +22,14 @@ struct HealthSection: Identifiable {

@Observable
final class HealthViewModel {
private let fileService = HermesFileService()
let context: ServerContext
private let fileService: HermesFileService

init(context: ServerContext = .local) {
self.context = context
self.fileService = HermesFileService(context: context)
}


var version = ""
var updateInfo = ""
@@ -43,19 +50,50 @@ final class HealthViewModel {

func load() {
isLoading = true
refreshProcessStatus()
loadVersion()
let statusOutput = runHermes(["status"]).output
statusSections = parseOutput(statusOutput)
let doctorOutput = runHermes(["doctor"]).output
doctorSections = parseOutput(doctorOutput)
computeCounts()
isLoading = false
let ctx = context
let svc = fileService
// Health runs four sync transport-mediated commands plus a process
// probe — that's 4-5 ssh round-trips on remote, easily 1-2s. Detach
// the whole load.
Task.detached { [weak self] in
let pid = svc.hermesPID()
let versionOutput = ctx.runHermes(["version"]).output
let statusOutput = ctx.runHermes(["status"]).output
let doctorOutput = ctx.runHermes(["doctor"]).output

let lines = versionOutput.components(separatedBy: "\n")
let version = lines.first ?? ""
let updateLine = lines.first(where: { $0.contains("commits behind") })
let hasUpdate = updateLine != nil
let updateInfo = updateLine?.trimmingCharacters(in: .whitespaces) ?? ""

let statusSections = Self.parseOutputStatic(statusOutput)
let doctorSections = Self.parseOutputStatic(doctorOutput)

await MainActor.run { [weak self] in
guard let self else { return }
self.hermesPID = pid
self.hermesRunning = pid != nil
self.version = version
self.updateInfo = updateInfo
self.hasUpdate = hasUpdate
self.statusSections = statusSections
self.doctorSections = doctorSections
self.computeCounts()
self.isLoading = false
}
}
}

func refreshProcessStatus() {
hermesPID = fileService.hermesPID()
hermesRunning = hermesPID != nil
let svc = fileService
Task.detached { [weak self] in
let pid = svc.hermesPID()
await MainActor.run { [weak self] in
self?.hermesPID = pid
self?.hermesRunning = pid != nil
}
}
}

func stopHermes() {
@@ -101,6 +139,96 @@ final class HealthViewModel {
}
}

/// Static-callable form for the detached load() task. The instance
/// `parseOutput` below delegates here so existing call sites still work.
nonisolated static func parseOutputStatic(_ output: String) -> [HealthSection] {
var sections: [HealthSection] = []
var currentTitle = ""
var currentChecks: [HealthCheck] = []

for line in output.components(separatedBy: "\n") {
let trimmed = line.trimmingCharacters(in: .whitespaces)

if trimmed.hasPrefix("◆ ") {
if !currentTitle.isEmpty {
sections.append(HealthSection(
title: currentTitle,
icon: iconForSectionStatic(currentTitle),
checks: currentChecks
))
}
currentTitle = String(trimmed.dropFirst(2))
currentChecks = []
continue
}

if trimmed.hasPrefix("✓ ") {
let text = String(trimmed.dropFirst(2))
let (label, detail) = splitCheckStatic(text)
currentChecks.append(HealthCheck(label: label, status: .ok, detail: detail))
} else if trimmed.hasPrefix("⚠ ") || trimmed.hasPrefix("⚠") {
let text = trimmed.replacingOccurrences(of: "⚠ ", with: "").replacingOccurrences(of: "⚠", with: "")
let (label, detail) = splitCheckStatic(text)
currentChecks.append(HealthCheck(label: label, status: .warning, detail: detail))
} else if trimmed.hasPrefix("✗ ") {
let text = String(trimmed.dropFirst(2))
let (label, detail) = splitCheckStatic(text)
currentChecks.append(HealthCheck(label: label, status: .error, detail: detail))
} else if trimmed.hasPrefix("→ ") || trimmed.hasPrefix("Error:") {
if !currentChecks.isEmpty {
let last = currentChecks.removeLast()
let extra = trimmed.replacingOccurrences(of: "→ ", with: "").replacingOccurrences(of: "Error:", with: "").trimmingCharacters(in: .whitespaces)
let combined = [last.detail, extra].compactMap { $0 }.joined(separator: " ")
currentChecks.append(HealthCheck(label: last.label, status: last.status, detail: combined))
}
} else if !trimmed.isEmpty && trimmed.contains(":") && !trimmed.hasPrefix("┌") && !trimmed.hasPrefix("│") && !trimmed.hasPrefix("└") && !trimmed.hasPrefix("─") && !trimmed.hasPrefix("Run ") && !trimmed.hasPrefix("Found ") && !trimmed.hasPrefix("Tip:") {
let parts = trimmed.split(separator: ":", maxSplits: 1)
if parts.count == 2 {
let key = parts[0].trimmingCharacters(in: .whitespaces)
let val = parts[1].trimmingCharacters(in: .whitespaces)
if !key.isEmpty && key.count < 30 {
currentChecks.append(HealthCheck(label: key, status: .ok, detail: val))
}
}
}
}

if !currentTitle.isEmpty {
sections.append(HealthSection(
title: currentTitle,
icon: iconForSectionStatic(currentTitle),
checks: currentChecks
))
}
return sections
}

nonisolated private static func splitCheckStatic(_ text: String) -> (String, String?) {
if let range = text.range(of: ":") {
let label = String(text[..<range.lowerBound]).trimmingCharacters(in: .whitespaces)
let detail = String(text[range.upperBound...]).trimmingCharacters(in: .whitespaces)
return (label, detail.isEmpty ? nil : detail)
}
return (text, nil)
}

nonisolated private static func iconForSectionStatic(_ title: String) -> String {
let lower = title.lowercased()
if lower.contains("system") || lower.contains("environment") { return "desktopcomputer" }
if lower.contains("config") { return "doc.text" }
if lower.contains("model") || lower.contains("provider") { return "brain" }
if lower.contains("memory") { return "memorychip" }
if lower.contains("session") { return "list.bullet" }
if lower.contains("gateway") || lower.contains("platform") { return "antenna.radiowaves.left.and.right" }
if lower.contains("skill") { return "wrench.and.screwdriver" }
if lower.contains("mcp") { return "cube.box" }
if lower.contains("plugin") { return "puzzlepiece" }
if lower.contains("auth") || lower.contains("credential") { return "key" }
if lower.contains("disk") || lower.contains("storage") { return "internaldrive" }
if lower.contains("update") { return "arrow.triangle.2.circlepath" }
return "circle"
}

private func parseOutput(_ output: String) -> [HealthSection] {
var sections: [HealthSection] = []
var currentTitle = ""
@@ -237,19 +365,6 @@ final class HealthViewModel {

@discardableResult
private func runHermes(_ arguments: [String]) -> (output: String, exitCode: Int32) {
let process = Process()
process.executableURL = URL(fileURLWithPath: HermesPaths.hermesBinary)
process.arguments = arguments
let pipe = Pipe()
process.standardOutput = pipe
process.standardError = pipe
do {
try process.run()
process.waitUntilExit()
let data = pipe.fileHandleForReading.readDataToEndOfFile()
return (String(data: data, encoding: .utf8) ?? "", process.terminationStatus)
} catch {
return ("", -1)
}
context.runHermes(arguments)
}
}

@@ -1,12 +1,17 @@
import SwiftUI

struct HealthView: View {
@State private var viewModel = HealthViewModel()
@State private var viewModel: HealthViewModel
@State private var expandedSection: UUID?
@State private var selectedTab = 0
@State private var showShareConfirm = false
@State private var showDiagnostics = false

init(context: ServerContext) {
_viewModel = State(initialValue: HealthViewModel(context: context))
}


var body: some View {
VStack(spacing: 0) {
headerBar
@@ -43,6 +48,11 @@ struct HealthView: View {
}
}
.navigationTitle("Health")
.loadingOverlay(
viewModel.isLoading,
label: "Running health checks…",
isEmpty: viewModel.statusSections.isEmpty && viewModel.doctorSections.isEmpty
)
.onAppear { viewModel.load() }
.confirmationDialog("Upload debug report?", isPresented: $showShareConfirm) {
Button("Upload", role: .destructive) {

@@ -56,7 +56,14 @@ struct NotableSession: Identifiable {

@Observable
final class InsightsViewModel {
private let dataService = HermesDataService()
let context: ServerContext
private let dataService: HermesDataService

init(context: ServerContext = .local) {
self.context = context
self.dataService = HermesDataService(context: context)
}


var period: InsightsPeriod = .month
var isLoading = true
@@ -85,7 +92,9 @@ final class InsightsViewModel {

func load() async {
isLoading = true
let opened = await dataService.open()
// refresh() forces a fresh remote snapshot each load. On local it's
// a cheap reopen of the live DB.
let opened = await dataService.refresh()
guard opened else {
isLoading = false
return

@@ -1,8 +1,14 @@
import SwiftUI

struct InsightsView: View {
@State private var viewModel = InsightsViewModel()
@State private var viewModel: InsightsViewModel
@Environment(AppCoordinator.self) private var coordinator
@Environment(HermesFileWatcher.self) private var fileWatcher

init(context: ServerContext) {
_viewModel = State(initialValue: InsightsViewModel(context: context))
}


var body: some View {
ScrollView {
@@ -23,6 +29,9 @@ struct InsightsView: View {
.onChange(of: viewModel.period) {
Task { await viewModel.load() }
}
.onChange(of: fileWatcher.lastChangeDate) {
Task { await viewModel.load() }
}
}

private var periodPicker: some View {

@@ -2,7 +2,13 @@ import Foundation

@Observable
final class LogsViewModel {
private let logService = HermesLogService()
let context: ServerContext
private let logService: HermesLogService

init(context: ServerContext = .local) {
self.context = context
self.logService = HermesLogService(context: context)
}

var entries: [LogEntry] = []
var selectedLogFile: LogFile = .agent
@@ -17,13 +23,13 @@ final class LogsViewModel {
case gateway = "gateway.log"

var id: String { rawValue }
}

var path: String {
switch self {
case .agent: return HermesPaths.agentLog
case .errors: return HermesPaths.errorsLog
case .gateway: return HermesPaths.gatewayLog
}
private func path(for file: LogFile) -> String {
switch file {
case .agent: return context.paths.agentLog
case .errors: return context.paths.errorsLog
case .gateway: return context.paths.gatewayLog
}
}

@@ -62,7 +68,7 @@ final class LogsViewModel {
}

func load() async {
await logService.openLog(path: selectedLogFile.path)
await logService.openLog(path: path(for: selectedLogFile))
entries = await logService.readLastLines(count: 500)
await logService.seekToEnd()
startPolling()
@@ -71,7 +77,7 @@ final class LogsViewModel {
func switchLogFile(_ file: LogFile) async {
selectedLogFile = file
entries = []
await logService.openLog(path: file.path)
await logService.openLog(path: path(for: file))
entries = await logService.readLastLines(count: 500)
await logService.seekToEnd()
}

@@ -1,7 +1,12 @@
import SwiftUI

struct LogsView: View {
@State private var viewModel = LogsViewModel()
@State private var viewModel: LogsViewModel

init(context: ServerContext) {
_viewModel = State(initialValue: LogsViewModel(context: context))
}


var body: some View {
VStack(spacing: 0) {

@@ -8,7 +8,8 @@ final class MCPServerEditorViewModel {
var value: String
}

private let fileService = HermesFileService()
let context: ServerContext
private let fileService: HermesFileService
let server: HermesMCPServer

var envDraft: [KeyValueRow]
@@ -23,8 +24,10 @@ final class MCPServerEditorViewModel {
var isSaving: Bool = false
var saveError: String?

init(server: HermesMCPServer) {
init(server: HermesMCPServer, context: ServerContext = .local) {
self.server = server
self.context = context
self.fileService = HermesFileService(context: context)
self.envDraft = server.env.keys.sorted().map { KeyValueRow(key: $0, value: server.env[$0] ?? "") }
self.headersDraft = server.headers.keys.sorted().map { KeyValueRow(key: $0, value: server.headers[$0] ?? "") }
self.includeDraft = server.toolsInclude.joined(separator: ", ")
@@ -73,27 +76,33 @@ final class MCPServerEditorViewModel {
let prompts = promptsEnabled

Task.detached {
var success = true
switch transport {
case .stdio:
if !service.setMCPServerEnv(name: name, env: envMap) { success = false }
case .http:
if !service.setMCPServerHeaders(name: name, headers: headerMap) { success = false }
}
if !service.updateMCPToolFilters(
name: name,
include: include,
exclude: exclude,
resources: resources,
prompts: prompts
) { success = false }
if !service.setMCPServerTimeouts(name: name, timeout: timeoutValue, connectTimeout: connectValue) {
success = false
}
// Compute success as an immutable so the MainActor.run closure
// captures a value, not a mutable var. Swift 6 rejects
// var-captures across concurrent closures as data races.
let success: Bool = {
var ok = true
switch transport {
case .stdio:
if !service.setMCPServerEnv(name: name, env: envMap) { ok = false }
case .http:
if !service.setMCPServerHeaders(name: name, headers: headerMap) { ok = false }
}
if !service.updateMCPToolFilters(
name: name,
include: include,
exclude: exclude,
resources: resources,
prompts: prompts
) { ok = false }
if !service.setMCPServerTimeouts(name: name, timeout: timeoutValue, connectTimeout: connectValue) {
ok = false
}
return ok
}()
await MainActor.run {
self.isSaving = false
if !success {
self.saveError = "One or more fields could not be written. Check \(HermesPaths.configYAML)."
self.saveError = "One or more fields could not be written. Check \(self.context.paths.configYAML)."
}
completion(success)
}
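The "compute success as an immutable" comment above describes a general Swift 6 strict-concurrency idiom: wrap the mutation in an immediately-invoked closure so the later concurrent closure captures a `let`, not a `var`. A reduced sketch (the `checks` parameter is illustrative):

```swift
// Swift 6 rejects capturing a mutable `var` across concurrent closures.
// Computing the result into a `let` via an immediately-invoked closure
// gives the subsequent closure a plain value to capture.
func runAll(_ checks: [() -> Bool]) -> Bool {
    let success: Bool = {
        var ok = true
        for check in checks where !check() { ok = false }
        return ok
    }()
    return success  // safe to capture in MainActor.run / Task closures
}
```

The alternative — hoisting the `var` out and mutating it before the closure — compiles under Swift 5 semantics but trips the strict checker once the value is read inside a `@Sendable` closure.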
@@ -2,7 +2,14 @@ import Foundation

@Observable
final class MCPServersViewModel {
private let fileService = HermesFileService()
let context: ServerContext
private let fileService: HermesFileService

init(context: ServerContext = .local) {
self.context = context
self.fileService = HermesFileService(context: context)
}


var servers: [HermesMCPServer] = []
var selectedServerName: String?
@@ -41,10 +48,19 @@ final class MCPServersViewModel {

func load() {
isLoading = true
servers = fileService.loadMCPServers()
isLoading = false
if let name = selectedServerName, !servers.contains(where: { $0.name == name }) {
selectedServerName = nil
let svc = fileService
Task.detached { [weak self] in
// loadMCPServers reads config.yaml + lists mcp-tokens — both
// are sync transport calls that block on remote ssh round-trips.
let result = svc.loadMCPServers()
await MainActor.run { [weak self] in
guard let self else { return }
self.servers = result
self.isLoading = false
if let name = self.selectedServerName, !result.contains(where: { $0.name == name }) {
self.selectedServerName = nil
}
}
}
}


@@ -1,7 +1,12 @@
import SwiftUI

struct MCPServersView: View {
@State private var viewModel = MCPServersViewModel()
@State private var viewModel: MCPServersViewModel

init(context: ServerContext) {
_viewModel = State(initialValue: MCPServersViewModel(context: context))
}


var body: some View {
HSplitView {
@@ -11,6 +16,11 @@ struct MCPServersView: View {
.frame(minWidth: 500)
}
.navigationTitle("MCP Servers (\(viewModel.servers.count))")
.loadingOverlay(
viewModel.isLoading,
label: "Loading MCP servers…",
isEmpty: viewModel.servers.isEmpty
)
.searchable(text: $viewModel.searchText, prompt: "Filter servers...")
.toolbar {
ToolbarItemGroup(placement: .primaryAction) {

@@ -2,7 +2,14 @@ import Foundation

@Observable
final class MemoryViewModel {
private let fileService = HermesFileService()
let context: ServerContext
private let fileService: HermesFileService

init(context: ServerContext = .local) {
self.context = context
self.fileService = HermesFileService(context: context)
}


var memoryContent = ""
var userContent = ""
@@ -12,6 +19,7 @@ final class MemoryViewModel {
var editText = ""
var profiles: [String] = []
var activeProfile = ""
var isLoading = false

enum EditTarget {
case memory, user
@@ -30,20 +38,40 @@ final class MemoryViewModel {
var hasMultipleProfiles: Bool { !profiles.isEmpty }

func load() {
let config = fileService.loadConfig()
memoryProvider = config.memoryProvider
profiles = fileService.loadMemoryProfiles()
if activeProfile.isEmpty {
activeProfile = config.memoryProfile
isLoading = true
let svc = fileService
let currentProfile = activeProfile
// Sync transport calls would beach-ball the UI on remote — dispatch
// off main, then commit results back on MainActor.
Task.detached { [weak self] in
let config = svc.loadConfig()
let profiles = svc.loadMemoryProfiles()
let profile = currentProfile.isEmpty ? config.memoryProfile : currentProfile
let memory = svc.loadMemory(profile: profile)
let user = svc.loadUserProfile(profile: profile)
await MainActor.run { [weak self] in
guard let self else { return }
self.memoryProvider = config.memoryProvider
self.profiles = profiles
self.activeProfile = profile
self.memoryContent = memory
self.userContent = user
self.isLoading = false
}
}
memoryContent = fileService.loadMemory(profile: activeProfile)
userContent = fileService.loadUserProfile(profile: activeProfile)
}

func switchProfile(_ profile: String) {
activeProfile = profile
memoryContent = fileService.loadMemory(profile: profile)
userContent = fileService.loadUserProfile(profile: profile)
let svc = fileService
Task.detached { [weak self] in
let memory = svc.loadMemory(profile: profile)
let user = svc.loadUserProfile(profile: profile)
await MainActor.run { [weak self] in
self?.memoryContent = memory
self?.userContent = user
}
}
}

func startEditing(_ target: EditTarget) {
@@ -53,15 +81,24 @@ final class MemoryViewModel {
}

func save() {
switch editingFile {
case .memory:
fileService.saveMemory(editText, profile: activeProfile)
memoryContent = editText
case .user:
fileService.saveUserProfile(editText, profile: activeProfile)
userContent = editText
let svc = fileService
let target = editingFile
let text = editText
let profile = activeProfile
Task.detached { [weak self] in
switch target {
case .memory: svc.saveMemory(text, profile: profile)
case .user: svc.saveUserProfile(text, profile: profile)
}
await MainActor.run { [weak self] in
guard let self else { return }
switch target {
case .memory: self.memoryContent = text
case .user: self.userContent = text
}
self.isEditing = false
}
}
isEditing = false
}

func cancelEditing() {
@@ -1,9 +1,14 @@
|
||||
import SwiftUI
|
||||
|
||||
struct MemoryView: View {
|
||||
@State private var viewModel = MemoryViewModel()
|
||||
@State private var viewModel: MemoryViewModel
|
||||
@Environment(HermesFileWatcher.self) private var fileWatcher
|
||||
|
||||
init(context: ServerContext) {
|
||||
_viewModel = State(initialValue: MemoryViewModel(context: context))
|
||||
}
|
||||
|
||||
|
||||
var body: some View {
|
||||
ScrollView {
|
||||
VStack(alignment: .leading, spacing: 20) {
|
||||
@@ -43,6 +48,11 @@ struct MemoryView: View {
|
||||
.frame(maxWidth: .infinity, alignment: .topLeading)
|
||||
}
|
||||
.navigationTitle("Memory")
|
||||
.loadingOverlay(
|
||||
viewModel.isLoading,
|
||||
label: "Loading memory…",
|
||||
isEmpty: viewModel.memoryContent.isEmpty && viewModel.userContent.isEmpty
|
||||
)
|
||||
.onAppear { viewModel.load() }
|
||||
.onChange(of: fileWatcher.lastChangeDate) {
|
||||
viewModel.load()
|
||||
|
||||
@@ -13,26 +13,63 @@ struct HermesPersonality: Identifiable, Sendable, Equatable {
|
||||
@Observable
|
||||
final class PersonalitiesViewModel {
|
||||
private let logger = Logger(subsystem: "com.scarf", category: "PersonalitiesViewModel")
|
||||
private let fileService = HermesFileService()
|
||||
let context: ServerContext
|
||||
private let fileService: HermesFileService
|
||||
|
||||
init(context: ServerContext = .local) {
|
||||
self.context = context
|
||||
self.fileService = HermesFileService(context: context)
|
||||
}
|
||||
|
||||
var personalities: [HermesPersonality] = []
|
||||
var activeName: String = ""
|
||||
var soulMarkdown: String = ""
|
||||
var soulPath: String { HermesPaths.home + "/SOUL.md" }
|
||||
var soulPath: String { context.paths.soulMD }
|
||||
var message: String?
|
||||
|
||||
func load() {
|
||||
let config = fileService.loadConfig()
|
||||
activeName = config.personality
|
||||
personalities = parsePersonalitiesBlock()
|
||||
soulMarkdown = (try? String(contentsOfFile: soulPath, encoding: .utf8)) ?? ""
|
||||
let svc = fileService
|
||||
let ctx = context
|
||||
let path = soulPath
|
||||
Task.detached { [weak self] in
|
||||
let config = svc.loadConfig()
|
||||
let parsed = Self.parsePersonalitiesBlock(yaml: ctx.readText(ctx.paths.configYAML) ?? "")
|
||||
let soul = ctx.readText(path) ?? ""
|
||||
await MainActor.run { [weak self] in
|
||||
guard let self else { return }
|
||||
self.activeName = config.personality
|
||||
self.personalities = parsed
|
||||
self.soulMarkdown = soul
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Static form so the detached load can call into it without touching
|
||||
/// MainActor-isolated state. The instance form below remains for any
|
||||
/// other callers that need it.
|
||||
nonisolated private static func parsePersonalitiesBlock(yaml: String) -> [HermesPersonality] {
|
||||
guard !yaml.isEmpty else { return [] }
|
||||
let parsed = HermesFileService.parseNestedYAML(yaml)
|
||||
var nameSet: Set<String> = []
|
||||
for key in parsed.values.keys where key.hasPrefix("personalities.") {
|
||||
let parts = key.split(separator: ".", maxSplits: 2, omittingEmptySubsequences: false)
|
||||
if parts.count >= 2 { nameSet.insert(String(parts[1])) }
|
||||
}
|
||||
for key in parsed.lists.keys where key.hasPrefix("personalities.") {
|
||||
let parts = key.split(separator: ".", maxSplits: 2, omittingEmptySubsequences: false)
|
||||
if parts.count >= 2 { nameSet.insert(String(parts[1])) }
|
||||
}
|
||||
return nameSet.sorted().map { name in
|
||||
let prompt = parsed.values["personalities.\(name).prompt"] ?? ""
|
||||
return HermesPersonality(name: name, prompt: HermesFileService.stripYAMLQuotes(prompt))
|
||||
}
|
||||
}
|
||||
|
||||
/// Parse the `personalities:` section of config.yaml using the nested parser.
|
||||
/// Each personality is a top-level key under `personalities`, optionally with
|
||||
/// a `prompt:` child.
|
||||
private func parsePersonalitiesBlock() -> [HermesPersonality] {
|
||||
guard let yaml = try? String(contentsOfFile: HermesPaths.configYAML, encoding: .utf8) else { return [] }
|
||||
guard let yaml = context.readText(context.paths.configYAML) else { return [] }
|
||||
let parsed = HermesFileService.parseNestedYAML(yaml)
|
||||
// Find all keys "personalities.<name>[.subkey]"
|
||||
var nameSet: Set<String> = []
|
||||
@@ -65,12 +102,11 @@ final class PersonalitiesViewModel {
|
||||
}
|
||||
|
||||
func saveSOUL(_ content: String) {
|
||||
do {
|
||||
try content.write(toFile: soulPath, atomically: true, encoding: .utf8)
|
||||
if context.writeText(soulPath, content: content) {
|
||||
soulMarkdown = content
|
||||
message = "SOUL.md saved"
|
||||
} catch {
|
||||
logger.error("Failed to write SOUL.md: \(error.localizedDescription)")
|
||||
} else {
|
||||
logger.error("Failed to write SOUL.md to \(self.context.displayName)")
|
||||
message = "Save failed"
|
||||
}
|
||||
DispatchQueue.main.asyncAfter(deadline: .now() + 2) { [weak self] in
|
||||
@@ -79,25 +115,11 @@ final class PersonalitiesViewModel {
|
||||
}
|
||||
|
||||
func openConfigInEditor() {
|
||||
NSWorkspace.shared.open(URL(fileURLWithPath: HermesPaths.configYAML))
|
||||
context.openInLocalEditor(context.paths.configYAML)
|
||||
}
|
||||
|
||||
@discardableResult
|
||||
private func runHermes(_ arguments: [String]) -> (output: String, exitCode: Int32) {
|
||||
let process = Process()
|
||||
process.executableURL = URL(fileURLWithPath: HermesPaths.hermesBinary)
|
||||
process.arguments = arguments
|
||||
process.environment = HermesFileService.enrichedEnvironment()
|
||||
let pipe = Pipe()
|
||||
process.standardOutput = pipe
|
||||
process.standardError = Pipe()
|
||||
do {
|
||||
try process.run()
|
||||
process.waitUntilExit()
|
||||
let data = pipe.fileHandleForReading.readDataToEndOfFile()
|
||||
return (String(data: data, encoding: .utf8) ?? "", process.terminationStatus)
|
||||
} catch {
|
||||
return ("", -1)
|
||||
}
|
||||
context.runHermes(arguments)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,10 +1,15 @@
|
||||
import SwiftUI
|
||||
|
||||
struct PersonalitiesView: View {
|
||||
@State private var viewModel = PersonalitiesViewModel()
|
||||
@State private var viewModel: PersonalitiesViewModel
|
||||
@State private var soulDraft = ""
|
||||
@State private var editingSOUL = false
|
||||
|
||||
init(context: ServerContext) {
|
||||
_viewModel = State(initialValue: PersonalitiesViewModel(context: context))
|
||||
}
|
||||
|
||||
|
||||
var body: some View {
|
||||
ScrollView {
|
||||
VStack(alignment: .leading, spacing: 20) {
|
||||
|
||||
@@ -6,6 +6,9 @@ import os
 @Observable
 @MainActor
 final class DiscordSetupViewModel {
+    let context: ServerContext
+    init(context: ServerContext = .local) { self.context = context }
+
     var botToken: String = ""
     var allowedUsers: String = ""
     var homeChannel: String = ""
@@ -26,7 +29,7 @@ final class DiscordSetupViewModel {
     let replyToModeOptions = ["off", "first", "all"]

     func load() {
-        let env = HermesEnvService().load()
+        let env = HermesEnvService(context: context).load()
         botToken = env["DISCORD_BOT_TOKEN"] ?? ""
         allowedUsers = env["DISCORD_ALLOWED_USERS"] ?? ""
         homeChannel = env["DISCORD_HOME_CHANNEL"] ?? ""
@@ -34,7 +37,7 @@ final class DiscordSetupViewModel {
         allowBots = env["DISCORD_ALLOW_BOTS"] ?? "none"
         replyToMode = env["DISCORD_REPLY_TO_MODE"] ?? "first"

-        let cfg = HermesFileService().loadConfig().discord
+        let cfg = HermesFileService(context: context).loadConfig().discord
         requireMention = cfg.requireMention
         freeResponseChannels = cfg.freeResponseChannels
         autoThread = cfg.autoThread
@@ -56,7 +59,7 @@ final class DiscordSetupViewModel {
             "discord.auto_thread": PlatformSetupHelpers.envBool(autoThread),
             "discord.reactions": PlatformSetupHelpers.envBool(reactions)
         ]
-        message = PlatformSetupHelpers.saveForm(envPairs: envPairs, configKV: configKV)
+        message = PlatformSetupHelpers.saveForm(context: context, envPairs: envPairs, configKV: configKV)
         DispatchQueue.main.asyncAfter(deadline: .now() + 3) { [weak self] in
             self?.message = nil
         }

@@ -5,6 +5,12 @@ import Foundation
 @Observable
 @MainActor
 final class EmailSetupViewModel {
+    let context: ServerContext
+
+    init(context: ServerContext = .local) {
+        self.context = context
+    }
+
     var address: String = ""
     var password: String = ""
     var imapHost: String = ""
@@ -34,7 +40,7 @@ final class EmailSetupViewModel {
     ]

     func load() {
-        let env = HermesEnvService().load()
+        let env = HermesEnvService(context: context).load()
         address = env["EMAIL_ADDRESS"] ?? ""
         password = env["EMAIL_PASSWORD"] ?? ""
         imapHost = env["EMAIL_IMAP_HOST"] ?? ""
@@ -46,7 +52,7 @@ final class EmailSetupViewModel {
         homeAddress = env["EMAIL_HOME_ADDRESS"] ?? ""
         allowAllUsers = PlatformSetupHelpers.parseEnvBool(env["EMAIL_ALLOW_ALL_USERS"])
         // skip_attachments lives in config.yaml.
-        let yaml = (try? String(contentsOfFile: HermesPaths.configYAML, encoding: .utf8)) ?? ""
+        let yaml = context.readText(context.paths.configYAML) ?? ""
         let parsed = HermesFileService.parseNestedYAML(yaml)
         skipAttachments = (parsed.values["platforms.email.skip_attachments"] ?? "false") == "true"
     }
@@ -72,7 +78,7 @@ final class EmailSetupViewModel {
         let configKV: [String: String] = [
             "platforms.email.skip_attachments": PlatformSetupHelpers.envBool(skipAttachments)
         ]
-        message = PlatformSetupHelpers.saveForm(envPairs: envPairs, configKV: configKV)
+        message = PlatformSetupHelpers.saveForm(context: context, envPairs: envPairs, configKV: configKV)
         DispatchQueue.main.asyncAfter(deadline: .now() + 3) { [weak self] in
             self?.message = nil
         }

@@ -5,6 +5,9 @@ import Foundation
 @Observable
 @MainActor
 final class FeishuSetupViewModel {
+    let context: ServerContext
+    init(context: ServerContext = .local) { self.context = context }
+
     var appID: String = ""
     var appSecret: String = ""
     var domain: String = "lark"
@@ -19,7 +22,7 @@ final class FeishuSetupViewModel {
     let connectionOptions = ["websocket", "webhook"]

     func load() {
-        let env = HermesEnvService().load()
+        let env = HermesEnvService(context: context).load()
         appID = env["FEISHU_APP_ID"] ?? ""
         appSecret = env["FEISHU_APP_SECRET"] ?? ""
         domain = env["FEISHU_DOMAIN"] ?? "lark"
@@ -39,7 +42,7 @@ final class FeishuSetupViewModel {
             "FEISHU_ALLOWED_USERS": allowedUsers,
             "FEISHU_CONNECTION_MODE": connectionMode == "websocket" ? "" : connectionMode
         ]
-        message = PlatformSetupHelpers.saveForm(envPairs: envPairs, configKV: [:])
+        message = PlatformSetupHelpers.saveForm(context: context, envPairs: envPairs, configKV: [:])
         DispatchQueue.main.asyncAfter(deadline: .now() + 3) { [weak self] in
             self?.message = nil
         }

@@ -15,6 +15,12 @@ import AppKit
 @Observable
 @MainActor
 final class HomeAssistantSetupViewModel {
+    let context: ServerContext
+
+    init(context: ServerContext = .local) {
+        self.context = context
+    }
+
     var url: String = "http://homeassistant.local:8123"
     var token: String = ""

@@ -30,11 +36,11 @@ final class HomeAssistantSetupViewModel {
     var message: String?

     func load() {
-        let env = HermesEnvService().load()
+        let env = HermesEnvService(context: context).load()
         url = env["HASS_URL"] ?? "http://homeassistant.local:8123"
         token = env["HASS_TOKEN"] ?? ""

-        let cfg = HermesFileService().loadConfig().homeAssistant
+        let cfg = HermesFileService(context: context).loadConfig().homeAssistant
         watchAll = cfg.watchAll
         cooldownSeconds = cfg.cooldownSeconds
         watchDomains = cfg.watchDomains
@@ -53,7 +59,7 @@ final class HomeAssistantSetupViewModel {
             "platforms.homeassistant.extra.watch_all": PlatformSetupHelpers.envBool(watchAll),
             "platforms.homeassistant.extra.cooldown_seconds": String(cooldownSeconds)
         ]
-        message = PlatformSetupHelpers.saveForm(envPairs: envPairs, configKV: configKV)
+        message = PlatformSetupHelpers.saveForm(context: context, envPairs: envPairs, configKV: configKV)
         DispatchQueue.main.asyncAfter(deadline: .now() + 3) { [weak self] in
             self?.message = nil
         }
@@ -62,6 +68,6 @@ final class HomeAssistantSetupViewModel {
     /// Open config.yaml in the user's default editor so they can manually edit
     /// the list-valued filter fields.
     func openConfigForLists() {
-        NSWorkspace.shared.open(URL(fileURLWithPath: HermesPaths.configYAML))
+        context.openInLocalEditor(context.paths.configYAML)
     }
 }

@@ -6,6 +6,9 @@ import Foundation
 @Observable
 @MainActor
 final class IMessageSetupViewModel {
+    let context: ServerContext
+    init(context: ServerContext = .local) { self.context = context }
+
     var serverURL: String = ""
     var password: String = ""
     var webhookHost: String = "127.0.0.1"
@@ -19,7 +22,7 @@ final class IMessageSetupViewModel {
     var message: String?

     func load() {
-        let env = HermesEnvService().load()
+        let env = HermesEnvService(context: context).load()
         serverURL = env["BLUEBUBBLES_SERVER_URL"] ?? ""
         password = env["BLUEBUBBLES_PASSWORD"] ?? ""
         webhookHost = env["BLUEBUBBLES_WEBHOOK_HOST"] ?? "127.0.0.1"
@@ -43,7 +46,7 @@ final class IMessageSetupViewModel {
             "BLUEBUBBLES_ALLOW_ALL_USERS": allowAllUsers ? "true" : "",
             "BLUEBUBBLES_SEND_READ_RECEIPTS": sendReadReceipts ? "true" : ""
         ]
-        message = PlatformSetupHelpers.saveForm(envPairs: envPairs, configKV: [:])
+        message = PlatformSetupHelpers.saveForm(context: context, envPairs: envPairs, configKV: [:])
         DispatchQueue.main.asyncAfter(deadline: .now() + 3) { [weak self] in
             self?.message = nil
         }

@@ -5,6 +5,9 @@ import Foundation
 @Observable
 @MainActor
 final class MatrixSetupViewModel {
+    let context: ServerContext
+    init(context: ServerContext = .local) { self.context = context }
+
     var homeserver: String = ""
     var accessToken: String = "" // preferred
     var userID: String = ""
@@ -22,7 +25,7 @@ final class MatrixSetupViewModel {
     var message: String?

     func load() {
-        let env = HermesEnvService().load()
+        let env = HermesEnvService(context: context).load()
         homeserver = env["MATRIX_HOMESERVER"] ?? ""
         accessToken = env["MATRIX_ACCESS_TOKEN"] ?? ""
         userID = env["MATRIX_USER_ID"] ?? ""
@@ -32,7 +35,7 @@ final class MatrixSetupViewModel {
         recoveryKey = env["MATRIX_RECOVERY_KEY"] ?? ""
         encryption = PlatformSetupHelpers.parseEnvBool(env["MATRIX_ENCRYPTION"])

-        let cfg = HermesFileService().loadConfig().matrix
+        let cfg = HermesFileService(context: context).loadConfig().matrix
         requireMention = cfg.requireMention
         autoThread = cfg.autoThread
         dmMentionThreads = cfg.dmMentionThreads
@@ -54,7 +57,7 @@ final class MatrixSetupViewModel {
             "matrix.auto_thread": PlatformSetupHelpers.envBool(autoThread),
             "matrix.dm_mention_threads": PlatformSetupHelpers.envBool(dmMentionThreads)
         ]
-        message = PlatformSetupHelpers.saveForm(envPairs: envPairs, configKV: configKV)
+        message = PlatformSetupHelpers.saveForm(context: context, envPairs: envPairs, configKV: configKV)
         DispatchQueue.main.asyncAfter(deadline: .now() + 3) { [weak self] in
             self?.message = nil
         }

@@ -5,6 +5,9 @@ import Foundation
 @Observable
 @MainActor
 final class MattermostSetupViewModel {
+    let context: ServerContext
+    init(context: ServerContext = .local) { self.context = context }
+
     var serverURL: String = ""
     var token: String = ""
     var allowedUsers: String = ""
@@ -18,7 +21,7 @@ final class MattermostSetupViewModel {
     let replyModeOptions = ["off", "thread"]

     func load() {
-        let env = HermesEnvService().load()
+        let env = HermesEnvService(context: context).load()
         serverURL = env["MATTERMOST_URL"] ?? ""
         token = env["MATTERMOST_TOKEN"] ?? ""
         allowedUsers = env["MATTERMOST_ALLOWED_USERS"] ?? ""
@@ -26,7 +29,7 @@ final class MattermostSetupViewModel {
         freeResponseChannels = env["MATTERMOST_FREE_RESPONSE_CHANNELS"] ?? ""
         replyMode = env["MATTERMOST_REPLY_MODE"] ?? "off"

-        let cfg = HermesFileService().loadConfig().mattermost
+        let cfg = HermesFileService(context: context).loadConfig().mattermost
         requireMention = cfg.requireMention
     }

@@ -40,7 +43,7 @@ final class MattermostSetupViewModel {
             "MATTERMOST_REPLY_MODE": replyMode == "off" ? "" : replyMode,
             "MATTERMOST_REQUIRE_MENTION": PlatformSetupHelpers.envBool(requireMention)
         ]
-        message = PlatformSetupHelpers.saveForm(envPairs: envPairs, configKV: [:])
+        message = PlatformSetupHelpers.saveForm(context: context, envPairs: envPairs, configKV: [:])
         DispatchQueue.main.asyncAfter(deadline: .now() + 3) { [weak self] in
             self?.message = nil
         }

@@ -15,10 +15,11 @@ import os
 @MainActor
 enum PlatformSetupHelpers {
     static let logger = Logger(subsystem: "com.scarf", category: "PlatformSetup")
-    static let envService = HermesEnvService()

-    /// Apply a form save in one atomic batch.
+    /// Apply a form save in one atomic batch against a specific server.
     ///
+    /// - `context`: the server whose `.env` and `config.yaml` we're writing.
+    ///   Local goes through `LocalTransport`; remote rounds through ssh+scp.
     /// - `envPairs`: values to write into `.env`. Empty strings trigger `unset()`
     ///   (commenting the line out) rather than storing a literal empty value.
     /// - `configKV`: scalar config.yaml paths to set via `hermes config set`.
@@ -27,7 +28,9 @@ enum PlatformSetupHelpers {
     ///
     /// Returns a user-facing summary message.
     @discardableResult
-    static func saveForm(envPairs: [String: String], configKV: [String: String]) -> String {
+    static func saveForm(context: ServerContext, envPairs: [String: String], configKV: [String: String]) -> String {
+        let envService = HermesEnvService(context: context)
+
         // Split env pairs into set vs. unset.
         var toSet: [String: String] = [:]
         var toUnset: [String] = []
@@ -49,7 +52,7 @@ enum PlatformSetupHelpers {

         var configFailures: [String] = []
         for (key, value) in configKV {
-            let result = runHermesCLI(args: ["config", "set", key, value])
+            let result = runHermesCLI(context: context, args: ["config", "set", key, value])
             if result.exitCode != 0 {
                 configFailures.append(key)
                 logger.warning("hermes config set \(key) failed: \(result.output)")
@@ -61,11 +64,11 @@ enum PlatformSetupHelpers {
         return "Saved — restart gateway to apply"
     }

-    /// Synchronous hermes CLI invocation. Use only for fast commands like
-    /// `config set`; longer commands should use `HermesFileService.runHermesCLI`
-    /// from a `Task.detached`.
-    static func runHermesCLI(args: [String], timeout: TimeInterval = 15) -> (exitCode: Int32, output: String) {
-        HermesFileService().runHermesCLI(args: args, timeout: timeout)
+    /// Synchronous hermes CLI invocation against the given server. Use only
+    /// for fast commands like `config set`; longer commands should use
+    /// `HermesFileService.runHermesCLI` from a `Task.detached`.
+    static func runHermesCLI(context: ServerContext, args: [String], timeout: TimeInterval = 15) -> (exitCode: Int32, output: String) {
+        HermesFileService(context: context).runHermesCLI(args: args, timeout: timeout)
     }

     /// Ask the user's default browser to open a URL (typically a hermes doc page

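The `saveForm` doc comment above notes that empty strings trigger `unset()` (commenting the `.env` line out) rather than storing a literal empty value. A minimal standalone sketch of that set/unset split, under the assumption that it mirrors the loop `saveForm` runs before writing (`splitEnvPairs` is an illustrative name, not a symbol from this codebase):

```swift
import Foundation

// Hypothetical helper mirroring the split saveForm performs before writing:
// non-empty values become sets in .env, empty values become unsets.
func splitEnvPairs(_ pairs: [String: String]) -> (set: [String: String], unset: [String]) {
    var toSet: [String: String] = [:]
    var toUnset: [String] = []
    for (key, value) in pairs {
        if value.isEmpty { toUnset.append(key) } else { toSet[key] = value }
    }
    // Sort the unset list so the result is deterministic for logging/tests.
    return (toSet, toUnset.sorted())
}
```

For example, `splitEnvPairs(["DISCORD_BOT_TOKEN": "abc", "DISCORD_HOME_CHANNEL": ""])` would set the token and unset the home channel in one batch.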
@@ -9,6 +9,9 @@ import Foundation
 @Observable
 @MainActor
 final class SignalSetupViewModel {
+    let context: ServerContext
+    init(context: ServerContext = .local) { self.context = context }
+
     var httpURL: String = "http://127.0.0.1:8080"
     var account: String = "" // E.164 phone, e.g. +15551234567
     var allowedUsers: String = ""
@@ -29,7 +32,7 @@ final class SignalSetupViewModel {
     }

     func load() {
-        let env = HermesEnvService().load()
+        let env = HermesEnvService(context: context).load()
         httpURL = env["SIGNAL_HTTP_URL"] ?? "http://127.0.0.1:8080"
         account = env["SIGNAL_ACCOUNT"] ?? ""
         allowedUsers = env["SIGNAL_ALLOWED_USERS"] ?? ""
@@ -60,7 +63,7 @@ final class SignalSetupViewModel {
             "SIGNAL_HOME_CHANNEL": homeChannel,
             "SIGNAL_ALLOW_ALL_USERS": allowAllUsers ? "true" : ""
         ]
-        message = PlatformSetupHelpers.saveForm(envPairs: envPairs, configKV: [:])
+        message = PlatformSetupHelpers.saveForm(context: context, envPairs: envPairs, configKV: [:])
         clearMessageAfterDelay()
     }

@@ -5,6 +5,9 @@ import Foundation
 @Observable
 @MainActor
 final class SlackSetupViewModel {
+    let context: ServerContext
+    init(context: ServerContext = .local) { self.context = context }
+
     var botToken: String = "" // xoxb-...
     var appToken: String = "" // xapp-...
     var allowedUsers: String = ""
@@ -21,14 +24,14 @@ final class SlackSetupViewModel {
     let replyToModeOptions = ["off", "first", "all"]

     func load() {
-        let env = HermesEnvService().load()
+        let env = HermesEnvService(context: context).load()
         botToken = env["SLACK_BOT_TOKEN"] ?? ""
         appToken = env["SLACK_APP_TOKEN"] ?? ""
         allowedUsers = env["SLACK_ALLOWED_USERS"] ?? ""
         homeChannel = env["SLACK_HOME_CHANNEL"] ?? ""
         homeChannelName = env["SLACK_HOME_CHANNEL_NAME"] ?? ""

-        let cfg = HermesFileService().loadConfig().slack
+        let cfg = HermesFileService(context: context).loadConfig().slack
         replyToMode = cfg.replyToMode
         requireMention = cfg.requireMention
         replyInThread = cfg.replyInThread
@@ -50,7 +53,7 @@ final class SlackSetupViewModel {
             "platforms.slack.extra.reply_in_thread": PlatformSetupHelpers.envBool(replyInThread),
             "platforms.slack.extra.reply_broadcast": PlatformSetupHelpers.envBool(replyBroadcast)
         ]
-        message = PlatformSetupHelpers.saveForm(envPairs: envPairs, configKV: configKV)
+        message = PlatformSetupHelpers.saveForm(context: context, envPairs: envPairs, configKV: configKV)
         DispatchQueue.main.asyncAfter(deadline: .now() + 3) { [weak self] in
             self?.message = nil
         }

@@ -8,6 +8,9 @@ import os
 @Observable
 @MainActor
 final class TelegramSetupViewModel {
+    let context: ServerContext
+    init(context: ServerContext = .local) { self.context = context }
+
     // Required
     var botToken: String = ""
     var allowedUsers: String = ""
@@ -23,7 +26,7 @@ final class TelegramSetupViewModel {
     var message: String?

     func load() {
-        let env = HermesEnvService().load()
+        let env = HermesEnvService(context: context).load()
         botToken = env["TELEGRAM_BOT_TOKEN"] ?? ""
         allowedUsers = env["TELEGRAM_ALLOWED_USERS"] ?? ""
         homeChannel = env["TELEGRAM_HOME_CHANNEL"] ?? ""
@@ -31,7 +34,7 @@ final class TelegramSetupViewModel {
         webhookPort = env["TELEGRAM_WEBHOOK_PORT"] ?? ""
         webhookSecret = env["TELEGRAM_WEBHOOK_SECRET"] ?? ""

-        let cfg = HermesFileService().loadConfig()
+        let cfg = HermesFileService(context: context).loadConfig()
         requireMention = cfg.telegram.requireMention
         reactions = cfg.telegram.reactions
     }
@@ -49,7 +52,7 @@ final class TelegramSetupViewModel {
             "telegram.require_mention": PlatformSetupHelpers.envBool(requireMention),
             "telegram.reactions": PlatformSetupHelpers.envBool(reactions)
         ]
-        message = PlatformSetupHelpers.saveForm(envPairs: envPairs, configKV: configKV)
+        message = PlatformSetupHelpers.saveForm(context: context, envPairs: envPairs, configKV: configKV)
         clearMessageAfterDelay()
     }

@@ -7,6 +7,9 @@ import Foundation
 @Observable
 @MainActor
 final class WebhookSetupViewModel {
+    let context: ServerContext
+    init(context: ServerContext = .local) { self.context = context }
+
     var enabled: Bool = false
     var port: String = "8644"
     var secret: String = ""
@@ -14,7 +17,7 @@ final class WebhookSetupViewModel {
     var message: String?

     func load() {
-        let env = HermesEnvService().load()
+        let env = HermesEnvService(context: context).load()
         enabled = PlatformSetupHelpers.parseEnvBool(env["WEBHOOK_ENABLED"])
         port = env["WEBHOOK_PORT"] ?? "8644"
         secret = env["WEBHOOK_SECRET"] ?? ""
@@ -26,7 +29,7 @@ final class WebhookSetupViewModel {
             "WEBHOOK_PORT": port,
             "WEBHOOK_SECRET": secret
         ]
-        message = PlatformSetupHelpers.saveForm(envPairs: envPairs, configKV: [:])
+        message = PlatformSetupHelpers.saveForm(context: context, envPairs: envPairs, configKV: [:])
         DispatchQueue.main.asyncAfter(deadline: .now() + 3) { [weak self] in
             self?.message = nil
         }

@@ -8,6 +8,12 @@ import Foundation
 @Observable
 @MainActor
 final class WhatsAppSetupViewModel {
+    let context: ServerContext
+
+    init(context: ServerContext = .local) {
+        self.context = context
+    }
+
     var enabled: Bool = false
     var mode: String = "bot" // "bot" | "self-chat"
     var allowedUsers: String = "" // Comma-separated phone numbers (no +)
@@ -27,7 +33,7 @@ final class WhatsAppSetupViewModel {
     var pairingInProgress: Bool = false

     func load() {
-        let env = HermesEnvService().load()
+        let env = HermesEnvService(context: context).load()
         enabled = PlatformSetupHelpers.parseEnvBool(env["WHATSAPP_ENABLED"])
         mode = env["WHATSAPP_MODE"] ?? "bot"
         allowedUsers = env["WHATSAPP_ALLOWED_USERS"] ?? ""
@@ -40,7 +46,7 @@ final class WhatsAppSetupViewModel {
             allowedUsers = ""
         }

-        let cfg = HermesFileService().loadConfig().whatsapp
+        let cfg = HermesFileService(context: context).loadConfig().whatsapp
         unauthorizedDMBehavior = cfg.unauthorizedDMBehavior
         replyPrefix = cfg.replyPrefix
     }
@@ -57,7 +63,7 @@ final class WhatsAppSetupViewModel {
             "whatsapp.unauthorized_dm_behavior": unauthorizedDMBehavior,
             "whatsapp.reply_prefix": replyPrefix
         ]
-        message = PlatformSetupHelpers.saveForm(envPairs: envPairs, configKV: configKV)
+        message = PlatformSetupHelpers.saveForm(context: context, envPairs: envPairs, configKV: configKV)
         clearMessageAfterDelay()
     }

@@ -72,7 +78,7 @@ final class WhatsAppSetupViewModel {
             self?.clearMessageAfterDelay()
         }
         terminalController.start(
-            executable: HermesPaths.hermesBinary,
+            executable: context.paths.hermesBinary,
             arguments: ["whatsapp"]
         )
     }

@@ -9,7 +9,14 @@ import os
 @MainActor
 final class PlatformsViewModel {
     private let logger = Logger(subsystem: "com.scarf", category: "PlatformsViewModel")
-    private let fileService = HermesFileService()
+    let context: ServerContext
+    private let fileService: HermesFileService
+
+    init(context: ServerContext = .local) {
+        self.context = context
+        self.fileService = HermesFileService(context: context)
+    }

     var gatewayState: GatewayState?
     var selected: HermesToolPlatform = KnownPlatforms.cli
@@ -41,14 +48,14 @@ final class PlatformsViewModel {
     /// until the first YAML edit.
     func hasConfigBlock(for platform: HermesToolPlatform) -> Bool {
         if platform.name == "cli" { return true }
-        let yaml = (try? String(contentsOfFile: HermesPaths.configYAML, encoding: .utf8)) ?? ""
+        let yaml = context.readText(context.paths.configYAML) ?? ""
         for line in yaml.components(separatedBy: "\n") where !line.hasPrefix(" ") && !line.hasPrefix("\t") {
             if line.trimmingCharacters(in: .whitespaces) == "\(platform.name):" { return true }
         }
         // Env-var fallback: any identifying env var for this platform counts
         // as "configured". Uses the shared `identifyingEnvVar(for:)` mapping.
         if let key = Self.identifyingEnvVar(for: platform.name) {
-            let env = HermesEnvService().load()
+            let env = HermesEnvService(context: context).load()
             if let value = env[key], !value.isEmpty { return true }
         }
         return false

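The top-level-key scan in `hasConfigBlock` above can be sketched in isolation: a platform counts as configured when config.yaml contains an unindented `<name>:` line, so nested keys never match. A standalone illustrative version (`hasTopLevelKey` is a hypothetical name, not a symbol from this codebase):

```swift
import Foundation

// Illustrative standalone form of the scan: only lines with no leading
// space or tab are considered top-level, so "  discord:" under another
// block does not count as a discord section.
func hasTopLevelKey(in yaml: String, name: String) -> Bool {
    for line in yaml.components(separatedBy: "\n")
    where !line.hasPrefix(" ") && !line.hasPrefix("\t") {
        if line.trimmingCharacters(in: .whitespaces) == "\(name):" { return true }
    }
    return false
}
```

So `hasTopLevelKey(in: "discord:\n  require_mention: true", name: "discord")` matches, while the same key nested under `platforms:` does not.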
@@ -1,7 +1,9 @@
 import SwiftUI

 struct DiscordSetupView: View {
-    @State private var viewModel = DiscordSetupViewModel()
+    @State private var viewModel: DiscordSetupViewModel
+    init(context: ServerContext) { _viewModel = State(initialValue: DiscordSetupViewModel(context: context)) }
+

     var body: some View {
         VStack(alignment: .leading, spacing: 16) {

@@ -1,7 +1,9 @@
 import SwiftUI

 struct EmailSetupView: View {
-    @State private var viewModel = EmailSetupViewModel()
+    @State private var viewModel: EmailSetupViewModel
+    init(context: ServerContext) { _viewModel = State(initialValue: EmailSetupViewModel(context: context)) }
+

     var body: some View {
         VStack(alignment: .leading, spacing: 16) {

@@ -1,7 +1,9 @@
 import SwiftUI

 struct FeishuSetupView: View {
-    @State private var viewModel = FeishuSetupViewModel()
+    @State private var viewModel: FeishuSetupViewModel
+    init(context: ServerContext) { _viewModel = State(initialValue: FeishuSetupViewModel(context: context)) }
+

     var body: some View {
         VStack(alignment: .leading, spacing: 16) {

@@ -1,7 +1,9 @@
 import SwiftUI

 struct HomeAssistantSetupView: View {
-    @State private var viewModel = HomeAssistantSetupViewModel()
+    @State private var viewModel: HomeAssistantSetupViewModel
+    init(context: ServerContext) { _viewModel = State(initialValue: HomeAssistantSetupViewModel(context: context)) }
+

     var body: some View {
         VStack(alignment: .leading, spacing: 16) {

@@ -1,7 +1,9 @@
 import SwiftUI

 struct IMessageSetupView: View {
-    @State private var viewModel = IMessageSetupViewModel()
+    @State private var viewModel: IMessageSetupViewModel
+    init(context: ServerContext) { _viewModel = State(initialValue: IMessageSetupViewModel(context: context)) }
+

     var body: some View {
         VStack(alignment: .leading, spacing: 16) {

@@ -1,7 +1,9 @@
 import SwiftUI

 struct MatrixSetupView: View {
-    @State private var viewModel = MatrixSetupViewModel()
+    @State private var viewModel: MatrixSetupViewModel
+    init(context: ServerContext) { _viewModel = State(initialValue: MatrixSetupViewModel(context: context)) }
+

     var body: some View {
         VStack(alignment: .leading, spacing: 16) {

@@ -1,7 +1,9 @@
 import SwiftUI

 struct MattermostSetupView: View {
-    @State private var viewModel = MattermostSetupViewModel()
+    @State private var viewModel: MattermostSetupViewModel
+    init(context: ServerContext) { _viewModel = State(initialValue: MattermostSetupViewModel(context: context)) }
+

     var body: some View {
         VStack(alignment: .leading, spacing: 16) {

@@ -1,7 +1,9 @@
 import SwiftUI

 struct SignalSetupView: View {
-    @State private var viewModel = SignalSetupViewModel()
+    @State private var viewModel: SignalSetupViewModel
+    init(context: ServerContext) { _viewModel = State(initialValue: SignalSetupViewModel(context: context)) }
+

     var body: some View {
         VStack(alignment: .leading, spacing: 16) {

@@ -1,7 +1,9 @@
 import SwiftUI

 struct SlackSetupView: View {
-    @State private var viewModel = SlackSetupViewModel()
+    @State private var viewModel: SlackSetupViewModel
+    init(context: ServerContext) { _viewModel = State(initialValue: SlackSetupViewModel(context: context)) }
+

     var body: some View {
         VStack(alignment: .leading, spacing: 16) {

@@ -1,7 +1,9 @@
 import SwiftUI

 struct TelegramSetupView: View {
-    @State private var viewModel = TelegramSetupViewModel()
+    @State private var viewModel: TelegramSetupViewModel
+    init(context: ServerContext) { _viewModel = State(initialValue: TelegramSetupViewModel(context: context)) }
+

     var body: some View {
         VStack(alignment: .leading, spacing: 16) {

@@ -1,7 +1,9 @@
 import SwiftUI

 struct WebhookSetupView: View {
-    @State private var viewModel = WebhookSetupViewModel()
+    @State private var viewModel: WebhookSetupViewModel
+    init(context: ServerContext) { _viewModel = State(initialValue: WebhookSetupViewModel(context: context)) }
+

     var body: some View {
         VStack(alignment: .leading, spacing: 16) {

@@ -1,7 +1,9 @@
 import SwiftUI

 struct WhatsAppSetupView: View {
-    @State private var viewModel = WhatsAppSetupViewModel()
+    @State private var viewModel: WhatsAppSetupViewModel
+    init(context: ServerContext) { _viewModel = State(initialValue: WhatsAppSetupViewModel(context: context)) }
+

     var body: some View {
         VStack(alignment: .leading, spacing: 16) {

@@ -1,9 +1,14 @@
 import SwiftUI

 struct PlatformsView: View {
-    @State private var viewModel = PlatformsViewModel()
+    @State private var viewModel: PlatformsViewModel
     @Environment(HermesFileWatcher.self) private var fileWatcher

+    init(context: ServerContext) {
+        _viewModel = State(initialValue: PlatformsViewModel(context: context))
+    }
+
+
     // HSplitView (not nested NavigationSplitView) because ContentView already
     // hosts the outer NavigationSplitView — nesting them breaks layout on macOS.
     var body: some View {
@@ -114,23 +119,25 @@ struct PlatformsView: View {

     /// Dispatch to the right per-platform setup view based on the selection.
     /// Each setup view owns its own `@State` view model and handles load/save
-    /// independently; we don't push state down from this container.
+    /// independently; the parent's `context` is forwarded so writes go to the
+    /// right server.
     @ViewBuilder
     private var platformForm: some View {
+        let ctx = viewModel.context
         switch viewModel.selected.name {
         case "cli": cliPanel
-        case "telegram": TelegramSetupView()
-        case "discord": DiscordSetupView()
-        case "slack": SlackSetupView()
-        case "whatsapp": WhatsAppSetupView()
-        case "signal": SignalSetupView()
-        case "email": EmailSetupView()
-        case "matrix": MatrixSetupView()
-        case "mattermost": MattermostSetupView()
-        case "feishu": FeishuSetupView()
-        case "imessage": IMessageSetupView()
-        case "homeassistant": HomeAssistantSetupView()
-        case "webhook": WebhookSetupView()
+        case "telegram": TelegramSetupView(context: ctx)
+        case "discord": DiscordSetupView(context: ctx)
+        case "slack": SlackSetupView(context: ctx)
+        case "whatsapp": WhatsAppSetupView(context: ctx)
+        case "signal": SignalSetupView(context: ctx)
+        case "email": EmailSetupView(context: ctx)
+        case "matrix": MatrixSetupView(context: ctx)
+        case "mattermost": MattermostSetupView(context: ctx)
+        case "feishu": FeishuSetupView(context: ctx)
+        case "imessage": IMessageSetupView(context: ctx)
+        case "homeassistant": HomeAssistantSetupView(context: ctx)
+        case "webhook": WebhookSetupView(context: ctx)
         default:
             SettingsSection(title: viewModel.selected.displayName, icon: KnownPlatforms.icon(for: viewModel.selected.name)) {
                 ReadOnlyRow(label: "Setup", value: "No setup form for this platform yet.")

@@ -13,59 +13,73 @@ struct HermesPlugin: Identifiable, Sendable, Equatable {
|
||||
@Observable
|
||||
final class PluginsViewModel {
|
||||
private let logger = Logger(subsystem: "com.scarf", category: "PluginsViewModel")
|
||||
private let fileService = HermesFileService()
|
||||
let context: ServerContext
|
||||
private let fileService: HermesFileService
|
||||
|
||||
init(context: ServerContext = .local) {
|
||||
self.context = context
|
||||
self.fileService = HermesFileService(context: context)
|
||||
}
|
||||
|
||||
var plugins: [HermesPlugin] = []
|
||||
var isLoading = false
|
||||
var message: String?
|
||||
|
||||
private var pluginsDir: String { HermesPaths.home + "/plugins" }
|
||||
private var pluginsDir: String { context.paths.pluginsDir }
|
||||
|
||||
/// Source of truth is the `~/.hermes/plugins/` directory. Each plugin is a
|
||||
/// subdirectory — we read its `plugin.json` (if present) for source/version
|
||||
/// metadata. Parsing `hermes plugins list` box-drawn output is fragile.
|
||||
func load() {
|
||||
isLoading = true
|
||||
defer { isLoading = false }
|
||||
|
||||
let fm = FileManager.default
|
||||
guard let entries = try? fm.contentsOfDirectory(atPath: pluginsDir) else {
|
||||
plugins = []
|
||||
return
|
||||
let dir = pluginsDir
|
||||
let ctx = context
|
||||
// listDirectory + (stat × N entries) + (readManifest × N) is a lot
|
||||
// of sync transport ops on remote — definitively a beach ball if
// run on main. Detach the whole walk.
Task.detached { [weak self] in
// Build `result` as an immutable before the MainActor hop, so the
// cross-closure capture is a value, not a mutated `var` (Swift 6
// concurrent-capture rule).
let result: [HermesPlugin] = {
let transport = ctx.makeTransport()
var out: [HermesPlugin] = []
if let entries = try? transport.listDirectory(dir) {
for entry in entries.sorted() where !entry.hasPrefix(".") {
let path = dir + "/" + entry
guard transport.stat(path)?.isDirectory == true else { continue }
let manifest = Self.readManifestStatic(path: path, context: ctx)
let disabled = transport.fileExists(path + "/.disabled")
out.append(HermesPlugin(
name: entry,
source: manifest.source,
enabled: !disabled,
version: manifest.version,
path: path
))
}
}
return out
}()
await MainActor.run { [weak self] in
self?.plugins = result
self?.isLoading = false
}
}
var result: [HermesPlugin] = []
for entry in entries.sorted() where !entry.hasPrefix(".") {
let path = pluginsDir + "/" + entry
var isDir: ObjCBool = false
guard fm.fileExists(atPath: path, isDirectory: &isDir), isDir.boolValue else { continue }

let manifest = Self.readManifest(path: path)
let disabled = fm.fileExists(atPath: path + "/.disabled")
result.append(HermesPlugin(
name: entry,
source: manifest.source,
enabled: !disabled,
version: manifest.version,
path: path
))
}
plugins = result
}

/// Best-effort manifest read. Supports both plugin.json and plugin.yaml shapes.
private static func readManifest(path: String) -> (source: String, version: String) {
let fm = FileManager.default
/// Static form of readManifest used by the detached load task. The
/// instance form delegates to this so both call paths share logic.
nonisolated fileprivate static func readManifestStatic(path: String, context: ServerContext) -> (source: String, version: String) {
let jsonPath = path + "/plugin.json"
if fm.fileExists(atPath: jsonPath),
let data = try? Data(contentsOf: URL(fileURLWithPath: jsonPath)),
if let data = context.readData(jsonPath),
let obj = try? JSONSerialization.jsonObject(with: data) as? [String: Any] {
let source = (obj["source"] as? String) ?? (obj["repository"] as? String) ?? (obj["url"] as? String) ?? ""
let version = (obj["version"] as? String) ?? ""
return (source, version)
}
let yamlPath = path + "/plugin.yaml"
if fm.fileExists(atPath: yamlPath),
let yaml = try? String(contentsOfFile: yamlPath, encoding: .utf8) {
if let yaml = context.readText(yamlPath) {
let parsed = HermesFileService.parseNestedYAML(yaml)
let source = HermesFileService.stripYAMLQuotes(parsed.values["source"] ?? parsed.values["repository"] ?? parsed.values["url"] ?? "")
let version = HermesFileService.stripYAMLQuotes(parsed.values["version"] ?? "")
@@ -74,6 +88,10 @@ final class PluginsViewModel {
return ("", "")
}

// (readManifestStatic above is the new implementation; the instance
// version was removed because the only caller was the load() walk,
// which now runs detached and uses the static form.)

func install(_ identifier: String) {
isLoading = true
message = "Installing \(identifier)…"

@@ -1,11 +1,16 @@
import SwiftUI

struct PluginsView: View {
@State private var viewModel = PluginsViewModel()
@State private var viewModel: PluginsViewModel
@State private var installIdentifier = ""
@State private var showInstall = false
@State private var pendingRemove: HermesPlugin?

init(context: ServerContext) {
_viewModel = State(initialValue: PluginsViewModel(context: context))
}


var body: some View {
VStack(spacing: 0) {
header
@@ -19,6 +24,11 @@ struct PluginsView: View {
}
}
.navigationTitle("Plugins")
.loadingOverlay(
viewModel.isLoading,
label: "Loading plugins…",
isEmpty: viewModel.plugins.isEmpty
)
.onAppear { viewModel.load() }
.sheet(isPresented: $showInstall) { installSheet }
.confirmationDialog(

@@ -11,7 +11,14 @@ struct HermesProfile: Identifiable, Sendable, Equatable {
@Observable
final class ProfilesViewModel {
private let logger = Logger(subsystem: "com.scarf", category: "ProfilesViewModel")
private let fileService = HermesFileService()
let context: ServerContext
private let fileService: HermesFileService

init(context: ServerContext = .local) {
self.context = context
self.fileService = HermesFileService(context: context)
}


var profiles: [HermesProfile] = []
var activeName: String = "default"

@@ -3,13 +3,18 @@ import AppKit
import UniformTypeIdentifiers

struct ProfilesView: View {
@State private var viewModel = ProfilesViewModel()
@State private var viewModel: ProfilesViewModel
@State private var selected: HermesProfile?
@State private var showCreate = false
@State private var createName = ""
@State private var createCloneConfig = true
@State private var createCloneAll = false
@State private var showRename = false

init(context: ServerContext) {
_viewModel = State(initialValue: ProfilesViewModel(context: context))
}

@State private var renameTarget: HermesProfile?
@State private var renameNewName = ""
@State private var pendingDelete: HermesProfile?

@@ -2,7 +2,14 @@ import Foundation

@Observable
final class ProjectsViewModel {
private let service = ProjectDashboardService()
let context: ServerContext
private let service: ProjectDashboardService

init(context: ServerContext = .local) {
self.context = context
self.service = ProjectDashboardService(context: context)
}


var projects: [ProjectEntry] = []
var selectedProject: ProjectEntry?

@@ -6,10 +6,15 @@ private enum DashboardTab: String, CaseIterable {
}

struct ProjectsView: View {
@State private var viewModel = ProjectsViewModel()
@State private var viewModel: ProjectsViewModel
@Environment(AppCoordinator.self) private var coordinator
@Environment(HermesFileWatcher.self) private var fileWatcher
@State private var showingAddSheet = false

init(context: ServerContext) {
_viewModel = State(initialValue: ProjectsViewModel(context: context))
}

@State private var selectedTab: DashboardTab = .dashboard

var body: some View {
@@ -209,7 +214,10 @@ struct ProjectsView: View {
}

private func openInFinder(_ path: String) {
NSWorkspace.shared.open(URL(fileURLWithPath: path))
// Project paths come from the registry on the active server. For
// remote, the path is on that machine's filesystem and can't be
// shown in this Mac's Finder — no-op via the helper.
viewModel.context.openInLocalEditor(path)
}
}


@@ -13,31 +13,39 @@ struct HermesQuickCommand: Identifiable, Sendable, Equatable {
@Observable
final class QuickCommandsViewModel {
private let logger = Logger(subsystem: "com.scarf", category: "QuickCommandsViewModel")
let context: ServerContext

init(context: ServerContext = .local) {
self.context = context
}

var commands: [HermesQuickCommand] = []
var message: String?

func load() {
guard let yaml = try? String(contentsOfFile: HermesPaths.configYAML, encoding: .utf8) else {
commands = []
return
let ctx = context
Task.detached { [weak self] in
let yaml = ctx.readText(ctx.paths.configYAML)
let result: [HermesQuickCommand] = {
guard let yaml else { return [] }
let parsed = HermesFileService.parseNestedYAML(yaml)
var byName: [String: (type: String, command: String)] = [:]
for (key, value) in parsed.values where key.hasPrefix("quick_commands.") {
let parts = key.split(separator: ".", maxSplits: 2, omittingEmptySubsequences: false)
guard parts.count == 3 else { continue }
let name = String(parts[1])
let field = String(parts[2])
var existing = byName[name] ?? (type: "exec", command: "")
let stripped = HermesFileService.stripYAMLQuotes(value)
if field == "type" { existing.type = stripped }
if field == "command" { existing.command = stripped }
byName[name] = existing
}
return byName.map { HermesQuickCommand(name: $0.key, type: $0.value.type, command: $0.value.command) }
.sorted { $0.name < $1.name }
}()
await MainActor.run { [weak self] in self?.commands = result }
}
let parsed = HermesFileService.parseNestedYAML(yaml)
// Each quick command is `quick_commands.<name>.type` + `quick_commands.<name>.command`.
var byName: [String: (type: String, command: String)] = [:]
for (key, value) in parsed.values where key.hasPrefix("quick_commands.") {
let parts = key.split(separator: ".", maxSplits: 2, omittingEmptySubsequences: false)
guard parts.count == 3 else { continue }
let name = String(parts[1])
let field = String(parts[2])
var existing = byName[name] ?? (type: "exec", command: "")
let stripped = HermesFileService.stripYAMLQuotes(value)
if field == "type" { existing.type = stripped }
if field == "command" { existing.command = stripped }
byName[name] = existing
}
commands = byName.map { HermesQuickCommand(name: $0.key, type: $0.value.type, command: $0.value.command) }
.sorted { $0.name < $1.name }
}

/// Check for obviously destructive shell strings. Display-only; we do not block.
@@ -70,25 +78,11 @@ final class QuickCommandsViewModel {
/// Removal requires editing config.yaml directly — `hermes config set` has no
/// unset for nested keys. Open the file in the editor for manual removal.
func openConfigForRemoval() {
NSWorkspace.shared.open(URL(fileURLWithPath: HermesPaths.configYAML))
context.openInLocalEditor(context.paths.configYAML)
}

@discardableResult
private func runHermes(_ arguments: [String]) -> (output: String, exitCode: Int32) {
let process = Process()
process.executableURL = URL(fileURLWithPath: HermesPaths.hermesBinary)
process.arguments = arguments
process.environment = HermesFileService.enrichedEnvironment()
let pipe = Pipe()
process.standardOutput = pipe
process.standardError = Pipe()
do {
try process.run()
process.waitUntilExit()
let data = pipe.fileHandleForReading.readDataToEndOfFile()
return (String(data: data, encoding: .utf8) ?? "", process.terminationStatus)
} catch {
return ("", -1)
}
context.runHermes(arguments)
}
}

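The `quick_commands.<name>.<field>` key walk in `load()` can be mimicked outside the app. A minimal shell sketch, assuming a hypothetical flattened dump of config.yaml; the sample commands and the temp-file path are illustrative, not the app's actual parser output:

```shell
# Hypothetical flattened dump, one KEY=VALUE per line, shaped like the
# output of a nested-YAML flattener (sample commands are illustrative).
cat > /tmp/scarf_quick_demo.txt <<'EOF'
quick_commands.build.type=exec
quick_commands.build.command=make all
quick_commands.logs.type=shell
quick_commands.logs.command=tail -f app.log
EOF

# Group the two fields per command name, mirroring the byName dictionary:
# split each key on dots and keep the middle segment as the name.
parsed=""
while IFS='=' read -r key value; do
  case "$key" in quick_commands.*.*) ;; *) continue ;; esac
  name=${key#quick_commands.}; name=${name%%.*}
  field=${key##*.}
  parsed="$parsed$name $field $value
"
done < /tmp/scarf_quick_demo.txt
printf '%s' "$parsed"
```

Like the Swift loop, keys that don't have exactly three dot-separated segments are skipped rather than treated as errors.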
@@ -1,10 +1,15 @@
import SwiftUI

struct QuickCommandsView: View {
@State private var viewModel = QuickCommandsViewModel()
@State private var viewModel: QuickCommandsViewModel
@State private var showAddSheet = false
@State private var editTarget: HermesQuickCommand?

init(context: ServerContext) {
_viewModel = State(initialValue: QuickCommandsViewModel(context: context))
}


var body: some View {
ScrollView {
VStack(alignment: .leading, spacing: 16) {

@@ -0,0 +1,115 @@
import Foundation
import AppKit

/// Drives the Add Server sheet. Exposed state maps 1:1 to form fields, plus
/// a reachability test that runs `ssh host 'command -v hermes && ls .hermes/state.db'`
/// and surfaces stderr inline on failure.
@Observable
@MainActor
final class AddServerViewModel {
/// Name shown in the server picker (defaults to host if the user leaves
/// it blank).
var displayName: String = ""
var host: String = ""
var user: String = ""
var port: String = ""
var identityFile: String = ""
/// Override for `~/.hermes` on the remote. Empty = default.
var remoteHome: String = ""

var isTesting: Bool = false
/// Outcome of the most recent Test Connection run. `nil` = not yet run.
var testResult: TestResult?

enum TestResult: Equatable {
/// `suggestedRemoteHome` is non-nil when the probe didn't find
/// state.db at the configured (or default) path but did find a
/// `state.db` at one of the well-known alternates (e.g. a systemd
/// install in `/var/lib/hermes/.hermes`). UI offers a one-click
/// fill so the user doesn't have to know the convention.
case success(hermesPath: String, dbFound: Bool, suggestedRemoteHome: String?)
/// `command` is the full ssh invocation we attempted (so the user can
/// paste it into Terminal to see what their shell does with it).
/// `stderr` is whatever ssh / the remote shell wrote to stderr.
case failure(message: String, stderr: String, command: String)
}

/// The config the form currently represents — built on demand, not
/// persisted until the user clicks Save.
var draftConfig: SSHConfig {
SSHConfig(
host: host.trimmingCharacters(in: .whitespaces),
user: nonEmpty(user),
port: Int(port),
identityFile: nonEmpty(identityFile),
remoteHome: nonEmpty(remoteHome),
hermesBinaryHint: nil
)
}

/// Hostname or alias is the only required field; everything else
/// defaults to `~/.ssh/config` / ssh-agent.
var canSave: Bool {
!host.trimmingCharacters(in: .whitespaces).isEmpty
}

var resolvedDisplayName: String {
let trimmed = displayName.trimmingCharacters(in: .whitespaces)
if !trimmed.isEmpty { return trimmed }
return host.trimmingCharacters(in: .whitespaces)
}

// MARK: - Identity file picker

func pickIdentityFile() {
let panel = NSOpenPanel()
panel.message = "Choose an SSH private key"
panel.canChooseFiles = true
panel.canChooseDirectories = false
panel.allowsMultipleSelection = false
// Default to ~/.ssh so users land in the right place.
let sshDir = FileManager.default.homeDirectoryForCurrentUser
.appendingPathComponent(".ssh", isDirectory: true)
if FileManager.default.fileExists(atPath: sshDir.path) {
panel.directoryURL = sshDir
}
if panel.runModal() == .OK, let url = panel.url {
identityFile = url.path
}
}

// MARK: - Test Connection

/// Run a single ssh round-trip to verify auth + discover the remote
/// hermes binary. Populates `testResult` with either a success (so the
/// user knows the binary was found and the DB is readable) or a
/// failure with stderr for debugging.
///
/// Uses `ssh -v` for the test probe so we capture the full handshake
/// trace — even if auth fails before the remote shell starts, ssh's
/// own diagnostic output gives the user (and us) something to act on.
func testConnection() async {
isTesting = true
defer { isTesting = false }

let config = draftConfig
let probe = TestConnectionProbe(config: config)
testResult = await probe.run()
}

/// If the test succeeded, we prefer to save the probed binary path into
/// `hermesBinaryHint` so subsequent calls don't need to re-resolve it.
func configForSave() -> SSHConfig {
var cfg = draftConfig
if case .success(let path, _, _) = testResult {
cfg.hermesBinaryHint = path
}
return cfg
}

// MARK: - Helpers

private func nonEmpty(_ s: String) -> String? {
let trimmed = s.trimmingCharacters(in: .whitespaces)
return trimmed.isEmpty ? nil : trimmed
}
}
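The reachability test documented above reduces to one remote command whose exit status encodes both checks: the hermes binary must be on PATH and state.db must exist. A local sketch of the remote-side logic against a throwaway layout; the directory names and the `probe` helper are illustrative assumptions, not app code:

```shell
# Build a throwaway layout with a fake hermes binary and a state DB, then
# run the probe body the way the remote shell would. Exit status 0 means
# "binary on PATH and state.db present", which is the success criterion.
fake="$(mktemp -d)"
mkdir -p "$fake/bin" "$fake/home/.hermes"
printf '#!/bin/sh\n' > "$fake/bin/hermes"
chmod +x "$fake/bin/hermes"
touch "$fake/home/.hermes/state.db"

probe() {
  PATH="$fake/bin:$PATH" HOME="$fake/home" sh -c \
    'command -v hermes && ls "$HOME/.hermes/state.db"'
}

if probe > /dev/null 2>&1; then echo "reachable"; else echo "unreachable"; fi
```

Because the two checks are chained with `&&`, a missing binary and a missing DB are indistinguishable by exit status alone; the app's probe surfaces stderr precisely to tell those apart.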
@@ -0,0 +1,174 @@
import Foundation
import os

/// Tracks connection health for the current window's server. Remote contexts
/// get a lightweight 15s heartbeat (a no-op `true` remote command) that
/// flips the status between green / yellow / red. Local contexts are always
/// green since there's no connection to lose.
@Observable
@MainActor
final class ConnectionStatusViewModel {
private let logger = Logger(subsystem: "com.scarf", category: "ConnectionStatus")

enum Status: Equatable {
/// Healthy: SSH connected AND we can read `~/.hermes/config.yaml`.
case connected
/// SSH connects but the follow-up read-access probe failed. Data
/// views will be empty until this is resolved. `reason` is shown
/// in the pill tooltip; users click the pill to open diagnostics.
case degraded(reason: String)
/// No probe yet or the previous probe timed out but we haven't
/// confirmed failure. Shown as yellow to tell the user "checking…".
case idle
/// Last probe failed. `message` is a terse human summary; `stderr`
/// is the raw diagnostic text for a disclosure panel.
case error(message: String, stderr: String)
}

private(set) var status: Status = .idle
/// Timestamp of the last successful probe. Used by the UI to show how
/// fresh the status indicator is ("just now", "2m ago"…).
private(set) var lastSuccess: Date?
/// Number of consecutive probe failures. Surfaced as a yellow "Reconnecting…"
/// state for the first failure (silent retry), then promoted to red after
/// `consecutiveFailureThreshold` failures so flaky connections don't
/// flap the indicator on every dropped packet.
private(set) var consecutiveFailures = 0
private let consecutiveFailureThreshold = 2

let context: ServerContext
private let transport: any ServerTransport
private var probeTask: Task<Void, Never>?

init(context: ServerContext) {
self.context = context
self.transport = context.makeTransport()
if !context.isRemote {
// Local contexts are always considered connected — no network
// or auth can fail.
self.status = .connected
self.lastSuccess = Date()
}
}

/// Kick off a background heartbeat loop. Safe to call multiple times;
/// subsequent calls cancel the prior task and restart.
func startMonitoring() {
guard context.isRemote else { return }
probeTask?.cancel()
probeTask = Task { [weak self] in
while !Task.isCancelled {
await self?.probeOnce()
try? await Task.sleep(nanoseconds: 15_000_000_000) // 15s
}
}
}

func stopMonitoring() {
probeTask?.cancel()
probeTask = nil
}

/// Manual probe — also invoked by the toolbar "Retry" button on error.
func retry() {
Task { await probeOnce() }
}

private func probeOnce() async {
let snapshot = transport
let hermesHome = context.paths.home
// Two-tier probe in one SSH round-trip:
// tier 1: `true` — raw connectivity / auth / ControlMaster path
// tier 2: `test -r $HERMESHOME/config.yaml` — can we actually
// read the file Dashboard reads on every tick? Green pill
// only if both pass; yellow "degraded" if tier 1 passes
// but tier 2 fails (the exact symptom in issue #19).
// Script emits two lines: TIER1:<exitcode> and TIER2:<exitcode>.
let homeArg: String
if hermesHome.hasPrefix("~/") {
homeArg = "\"$HOME/\(hermesHome.dropFirst(2))\""
} else if hermesHome == "~" {
homeArg = "\"$HOME\""
} else {
homeArg = "\"\(hermesHome.replacingOccurrences(of: "\"", with: "\\\""))\""
}
let script = """
echo TIER1:0
H=\(homeArg)
if [ -r "$H/config.yaml" ]; then echo TIER2:0; else echo TIER2:1; fi
"""

enum ProbeOutcome {
case connected
case degraded(reason: String)
case failure(TransportError)
}

let outcome: ProbeOutcome = await Task.detached {
do {
let probe = try snapshot.runProcess(
executable: "/bin/sh",
args: ["-c", script],
stdin: nil,
timeout: 10
)
guard probe.exitCode == 0 else {
return .failure(.commandFailed(exitCode: probe.exitCode, stderr: probe.stderrString))
}
let out = probe.stdoutString
let tier1 = out.contains("TIER1:0")
let tier2 = out.contains("TIER2:0")
if !tier1 {
// The script itself didn't reach tier 1 — treat as connection failure.
return .failure(.commandFailed(exitCode: 1, stderr: out))
}
if tier2 {
return .connected
}
// Connected but can't read config.yaml — the core issue #19
// symptom. Give the pill a short reason; the full story goes
// into Remote Diagnostics.
return .degraded(reason: "can't read ~/.hermes/config.yaml")
} catch let e as TransportError {
return .failure(e)
} catch {
return .failure(.other(message: error.localizedDescription))
}
}.value

switch outcome {
case .connected:
status = .connected
lastSuccess = Date()
consecutiveFailures = 0
case .degraded(let reason):
status = .degraded(reason: reason)
lastSuccess = Date() // SSH itself is fine, reset failure count
consecutiveFailures = 0
case .failure(let err):
consecutiveFailures += 1
// First failure → silent yellow "Reconnecting…" while we try
// again on the next 15s tick. Only flip to red after we've
// failed `consecutiveFailureThreshold` times in a row, so a
// single dropped packet (laptop sleep/wake, transient WiFi)
// doesn't visually scare the user.
if consecutiveFailures < consecutiveFailureThreshold {
status = .idle
// Try again sooner than the regular tick — gives the
// typical "WiFi reconnected within 5s" case a chance to
// self-heal before the next 15s heartbeat.
Task { [weak self] in
try? await Task.sleep(nanoseconds: 3_000_000_000)
if self?.consecutiveFailures ?? 0 > 0 {
await self?.probeOnce()
}
}
} else {
status = .error(
message: err.errorDescription ?? "Unreachable",
stderr: err.diagnosticStderr
)
}
}
}
}
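The generated two-tier script is plain POSIX shell, so its line protocol can be exercised standalone. A sketch with a temporary directory standing in for the Hermes home (the `H` path here is illustrative):

```shell
# Stand-in for the generated probe script: tier 1 is implicit (the script
# ran at all), tier 2 reports whether config.yaml under the Hermes home is
# readable by the current user.
H="$(mktemp -d)"           # illustrative stand-in for ~/.hermes
touch "$H/config.yaml"

out="$(sh -c '
echo TIER1:0
H="'"$H"'"
if [ -r "$H/config.yaml" ]; then echo TIER2:0; else echo TIER2:1; fi
')"
printf '%s\n' "$out"

# The caller only greps for the markers: both present means connected,
# TIER2:0 missing means degraded.
case "$out" in *TIER2:0*) echo "connected" ;; *) echo "degraded" ;; esac
```

Substituting `$HOME` for a leading `~` before quoting matters here: inside double quotes the remote shell would not expand `~`, so a literal tilde path would make tier 2 fail spuriously.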
@@ -0,0 +1,474 @@
import Foundation
import os

/// Runs a fixed check-list against a remote server and reports per-probe
/// pass/fail. Exists because `TestConnectionProbe` only verifies ssh
/// connectivity + hermes binary presence, and `ConnectionStatusViewModel`
/// only pings `/bin/sh -c true`. When users file "connection green but
/// everything empty" bug reports (issue #19), this is the diagnostic surface
/// that tells them (and us) exactly which read fails and why.
///
/// One shell invocation runs every check on the remote and emits a
/// line-delimited `KEY|STATUS|DETAIL` protocol that the view model parses.
/// Cheaper than one SSH round-trip per probe and gives a consistent shell
/// environment across all probes.
@Observable
@MainActor
final class RemoteDiagnosticsViewModel {
private static let logger = Logger(subsystem: "com.scarf", category: "RemoteDiagnostics")

let context: ServerContext

/// Probes in display order. The order matters: connectivity first, then
/// environment checks, then Hermes data-path checks. A failure early in
/// the list usually explains every subsequent failure.
enum ProbeID: String, CaseIterable, Identifiable {
case connectivity
case remoteUser
case remoteHome
case hermesHomeConfigured
case hermesDirExists
case hermesDirReadable
case configYAMLReadable
case configYAMLContents
case stateDBReadable
case sqlite3Installed
case sqlite3CanOpenStateDB
case hermesBinaryNonLogin
case hermesBinaryLogin
case pgrepAvailable

var id: String { rawValue }

/// Human-readable title rendered in the diagnostics sheet.
var title: String {
switch self {
case .connectivity: return "SSH connectivity"
case .remoteUser: return "Remote user identity"
case .remoteHome: return "Remote $HOME"
case .hermesHomeConfigured: return "Hermes home directory"
case .hermesDirExists: return "Hermes directory exists"
case .hermesDirReadable: return "Hermes directory readable"
case .configYAMLReadable: return "config.yaml readable"
case .configYAMLContents: return "config.yaml contents readable (actual read)"
case .stateDBReadable: return "state.db readable"
case .sqlite3Installed: return "sqlite3 binary installed on remote"
case .sqlite3CanOpenStateDB: return "sqlite3 can open state.db"
case .hermesBinaryNonLogin: return "hermes binary on non-login PATH"
case .hermesBinaryLogin: return "hermes binary on login PATH (via rc files)"
case .pgrepAvailable: return "pgrep available (for 'is Hermes running')"
}
}

/// When the check fails, show this hint alongside the stderr.
var failureHint: String? {
switch self {
case .connectivity:
return "SSH itself can't complete. Before re-testing in Scarf, confirm `ssh <host>` works in Terminal."
case .remoteUser, .remoteHome:
return nil
case .hermesHomeConfigured:
return nil
case .hermesDirExists:
return "Scarf is looking at the default `~/.hermes`. If Hermes is installed elsewhere (e.g. `/var/lib/hermes/.hermes` for systemd installs), set the Hermes home directory in Manage Servers → this server → Edit."
case .hermesDirReadable:
return "The SSH user can see `~/.hermes` but can't list it. Check permissions: `ls -ld ~/.hermes` on the remote — the SSH user needs at least `r-x`."
case .configYAMLReadable, .configYAMLContents:
return "Scarf can't read `config.yaml`. This usually means the SSH user is different from the user Hermes runs as. Either (a) run Hermes as the SSH user, (b) `chmod a+r ~/.hermes/config.yaml`, or (c) configure Scarf to SSH as the Hermes user."
case .stateDBReadable:
return "Scarf can't read `state.db` — Sessions, Activity, Dashboard stats all depend on this. Same fix pattern as config.yaml."
case .sqlite3Installed:
return "Scarf pulls a snapshot of state.db via `sqlite3 .backup`, so sqlite3 must be installed on the remote. Install: `sudo apt install sqlite3` (Ubuntu/Debian), `sudo yum install sqlite` (RHEL/Fedora), `apk add sqlite` (Alpine)."
case .sqlite3CanOpenStateDB:
return "sqlite3 exists but can't open state.db. Could be a permission issue, a corrupt DB, or a version skew."
case .hermesBinaryNonLogin:
return "Scarf's runtime calls use non-login SSH shells (no .bashrc). If `hermes` only appears here via the login path, runtime CLI calls will fail. Move your PATH export from `.bashrc` to `.zshenv` or `.profile`."
case .hermesBinaryLogin:
return "hermes couldn't be located even after sourcing login rc files. Install path is non-standard — set the hermes binary path manually in Manage Servers."
case .pgrepAvailable:
return "pgrep not found on remote. Dashboard can't determine whether Hermes is running. Install procps: `apt install procps` (most distros have it by default)."
}
}
}

struct Probe: Identifiable, Sendable {
let id: ProbeID
let passed: Bool
let detail: String
}

private(set) var probes: [Probe] = []
private(set) var isRunning: Bool = false
private(set) var startedAt: Date?
private(set) var finishedAt: Date?
/// Raw stdout/stderr from the most recent run, preserved so the UI can
/// surface them in a disclosure panel when things look wrong. This is
/// how we debug cases where the script ran but no probes were parsed
/// (e.g. transport-quoting bugs, dash-vs-bash incompatibilities).
private(set) var rawStdout: String = ""
private(set) var rawStderr: String = ""
private(set) var rawExitCode: Int32 = 0

init(context: ServerContext) {
self.context = context
}

/// Kick off the full check list. Safe to call again to re-run.
func run() async {
if isRunning { return }
isRunning = true
probes = []
startedAt = Date()
finishedAt = nil

let script = Self.buildScript(hermesHome: context.paths.home)
let captured = await Self.execute(script: script, context: context)

switch captured {
case .connectFailure(let msg):
rawStdout = ""
rawStderr = msg
rawExitCode = -1
probes = [
Probe(id: .connectivity, passed: false, detail: msg)
] + ProbeID.allCases
.filter { $0 != .connectivity }
.map { Probe(id: $0, passed: false, detail: "(skipped — SSH didn't connect)") }
case .completed(let stdout, let stderr, let exitCode):
rawStdout = stdout
rawStderr = stderr
rawExitCode = exitCode
probes = Self.parse(stdout: stdout, stderr: stderr, exitCode: exitCode)
}

finishedAt = Date()
isRunning = false
Self.logger.info("Diagnostics for \(self.context.displayName, privacy: .public) finished — \(self.passingCount)/\(self.probes.count) passing")
}

/// Quick summary string, e.g. "9/14 passing". Used in the header.
var summary: String {
guard !probes.isEmpty else { return "Not yet run." }
return "\(passingCount)/\(probes.count) checks passing"
}

var passingCount: Int {
probes.filter { $0.passed }.count
}

var allPassed: Bool {
!probes.isEmpty && passingCount == probes.count
}

// MARK: - Script + parsing

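The `KEY|STATUS|DETAIL` lines the script emits split cleanly on `IFS` alone, which is the point of the protocol. A sketch of the consumer side; the sample probe lines are illustrative:

```shell
# Sample transcript in the emit() protocol, terminated by the __END__
# truncation sentinel mentioned in the buildScript doc comment.
lines='connectivity|PASS|(running in this shell)
stateDBReadable|FAIL|file does not exist
__END__'

# Split each line on the pipe delimiter; the third read variable soaks up
# the rest of the line, so details may contain spaces. Stop at the sentinel.
summary="$(printf '%s\n' "$lines" | while IFS='|' read -r key status detail; do
  if [ "$key" = "__END__" ]; then echo "complete"; break; fi
  echo "probe=$key status=$status detail=$detail"
done)"
printf '%s\n' "$summary"
```

If `__END__` never arrives, the consumer knows the transcript was truncated mid-script rather than merely short.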
/// Build the remote shell script. Uses a pipe-delimited protocol so the
|
||||
/// Swift side can parse without regex surprises. Status is `PASS` or
|
||||
/// `FAIL`; detail is a single line (can be blank). `__END__` at the
|
||||
/// bottom lets us detect truncation.
|
||||
private static func buildScript(hermesHome: String) -> String {
|
||||
// Shell-quote the home path — user may have typed `~/.hermes` which
|
||||
// we want the remote shell to expand, so we substitute `~/` with
|
||||
// `$HOME/` like `SSHTransport.remotePathArg` does.
|
||||
let expanded: String
|
||||
if hermesHome.hasPrefix("~/") {
|
||||
expanded = "\"$HOME/\(hermesHome.dropFirst(2))\""
|
||||
} else if hermesHome == "~" {
|
||||
expanded = "\"$HOME\""
|
||||
} else {
|
||||
// Absolute path — still quote in case of spaces.
|
||||
expanded = "\"\(hermesHome.replacingOccurrences(of: "\"", with: "\\\""))\""
|
||||
}
|
||||
|
||||
        return #"""
        H=\#(expanded)
        emit() { printf '%s|%s|%s\n' "$1" "$2" "$3"; }

        emit connectivity PASS "(running in this shell)"

        user=$(id -un 2>/dev/null || echo unknown)
        emit remoteUser PASS "$user"

        emit remoteHome PASS "$HOME"

        emit hermesHomeConfigured PASS "$H"

        if [ -d "$H" ]; then
            emit hermesDirExists PASS "$H"
        else
            emit hermesDirExists FAIL "not a directory: $H"
        fi

        if [ -r "$H" ] && [ -x "$H" ]; then
            emit hermesDirReadable PASS ""
        else
            emit hermesDirReadable FAIL "cannot read/enter $H (check perms on the dir)"
        fi

        if [ -r "$H/config.yaml" ]; then
            emit configYAMLReadable PASS ""
        else
            if [ -e "$H/config.yaml" ]; then
                emit configYAMLReadable FAIL "exists but not readable by $user"
            else
                emit configYAMLReadable FAIL "file does not exist"
            fi
        fi

        if head -c 1 "$H/config.yaml" > /dev/null 2>&1; then
            size=$(wc -c < "$H/config.yaml" 2>/dev/null | tr -d ' ')
            emit configYAMLContents PASS "${size} bytes"
        else
            emit configYAMLContents FAIL "cannot read file contents"
        fi

        if [ -r "$H/state.db" ]; then
            size=$(wc -c < "$H/state.db" 2>/dev/null | tr -d ' ')
            emit stateDBReadable PASS "${size} bytes"
        else
            if [ -e "$H/state.db" ]; then
                emit stateDBReadable FAIL "exists but not readable by $user"
            else
                emit stateDBReadable FAIL "file does not exist"
            fi
        fi

        if command -v sqlite3 > /dev/null 2>&1; then
            sq=$(command -v sqlite3)
            emit sqlite3Installed PASS "$sq"
        else
            emit sqlite3Installed FAIL "sqlite3 not on PATH"
        fi

        if sqlite3 "$H/state.db" 'SELECT 1' > /dev/null 2>&1; then
            emit sqlite3CanOpenStateDB PASS ""
        else
            err=$(sqlite3 "$H/state.db" 'SELECT 1' 2>&1 | head -1)
            emit sqlite3CanOpenStateDB FAIL "$err"
        fi

        # Non-login PATH: just ask the current shell.
        hpath=$(command -v hermes 2>/dev/null)
        if [ -n "$hpath" ]; then
            emit hermesBinaryNonLogin PASS "$hpath"
        else
            emit hermesBinaryNonLogin FAIL "not on non-login PATH ($PATH)"
        fi

        # Login PATH: source rc files (mirroring TestConnectionProbe) and re-probe.
        for rc in "$HOME/.zshenv" "$HOME/.zprofile" "$HOME/.bash_profile" "$HOME/.profile"; do
            [ -f "$rc" ] && . "$rc" 2>/dev/null
        done
        hpath2=$(command -v hermes 2>/dev/null)
        if [ -z "$hpath2" ]; then
            for cand in "$HOME/.local/bin/hermes" "/opt/homebrew/bin/hermes" "/usr/local/bin/hermes" "$HOME/.hermes/bin/hermes"; do
                if [ -x "$cand" ]; then hpath2="$cand"; break; fi
            done
        fi
        if [ -n "$hpath2" ]; then
            emit hermesBinaryLogin PASS "$hpath2"
        else
            emit hermesBinaryLogin FAIL "not found after sourcing rc files"
        fi

        if command -v pgrep > /dev/null 2>&1; then
            emit pgrepAvailable PASS "$(command -v pgrep)"
        else
            emit pgrepAvailable FAIL "pgrep not on PATH"
        fi

        printf '__END__\n'
        """#
    }

|
||||
    enum Captured {
        case connectFailure(String)
        case completed(stdout: String, stderr: String, exitCode: Int32)
    }

    private static func execute(script: String, context: ServerContext) async -> Captured {
        // Can't use `transport.runProcess(executable: "/bin/sh", args: ["-c", script])`
        // here: SSHTransport.runProcess pipes every argument through
        // `remotePathArg` (which double-quotes to rewrite `~/` → `$HOME/`),
        // which mangles a multi-line shell script containing `"$1"`,
        // nested quotes, and `printf` escape sequences. The result on the
        // remote is a scrambled string and every probe fails to emit.
        //
        // Mirror TestConnectionProbe's approach: build the ssh argv
        // directly so the script travels as a single opaque argv entry
        // that ssh forwards to the remote shell unchanged.
        switch context.kind {
        case .local:
            return await runLocally(script: script)
        case .ssh(let config):
            return await runOverSSH(script: script, config: config)
        }
    }

    /// Direct ssh invocation. Pipes the script into `sh` on stdin rather
    /// than passing it as `sh -c <script>` argv — because ssh concatenates
    /// argv with spaces and sends that as a single command string to the
    /// remote's LOGIN shell, which then parses newlines as command
    /// separators. A multi-line `sh -c <script>` would run only the first
    /// line inside the `sh` subprocess (any variables set there die when
    /// `sh` exits), and the rest would run in the login shell with no
    /// access to those variables. Symptom: `$H=""` everywhere downstream.
    ///
    /// Feeding the script via stdin avoids the split entirely — `sh -s`
    /// consumes the whole stream in one process, so variable scope is
    /// preserved and the script runs exactly the same way it would from
    /// a local `cat script.sh | sh`.
    private static func runOverSSH(script: String, config: SSHConfig) async -> Captured {
        var sshArgv: [String] = [
            "-o", "ControlMaster=auto",
            "-o", "ControlPath=\(controlDirPath())/%C",
            "-o", "ControlPersist=600",
            "-o", "ServerAliveInterval=30",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=accept-new",
            "-o", "LogLevel=QUIET",
            "-o", "BatchMode=yes",
            "-T" // no pty — keep stdin/stdout a clean byte stream
        ]
        if let port = config.port { sshArgv += ["-p", String(port)] }
        if let id = config.identityFile, !id.isEmpty {
            sshArgv += ["-i", id]
        }
        let hostSpec: String
        if let user = config.user, !user.isEmpty { hostSpec = "\(user)@\(config.host)" }
        else { hostSpec = config.host }
        sshArgv.append(hostSpec)
        sshArgv.append("--")
        sshArgv.append("/bin/sh")
        sshArgv.append("-s") // read script from stdin

        return await Task.detached { () -> Captured in
            let proc = Process()
            proc.executableURL = URL(fileURLWithPath: "/usr/bin/ssh")
            proc.arguments = sshArgv

            // Inherit the shell's SSH_AUTH_SOCK so ssh can reach the
            // agent — same pattern as SSHTransport + TestConnectionProbe.
            var env = ProcessInfo.processInfo.environment
            let shellEnv = HermesFileService.enrichedEnvironment()
            for key in ["SSH_AUTH_SOCK", "SSH_AGENT_PID"] {
                if env[key] == nil, let v = shellEnv[key], !v.isEmpty {
                    env[key] = v
                }
            }
            proc.environment = env

            let stdinPipe = Pipe()
            let stdoutPipe = Pipe()
            let stderrPipe = Pipe()
            proc.standardInput = stdinPipe
            proc.standardOutput = stdoutPipe
            proc.standardError = stderrPipe

            do {
                try proc.run()
            } catch {
                return .connectFailure("Failed to launch ssh: \(error.localizedDescription)")
            }

            // Write the script to ssh's stdin, then close the write end so
            // remote sh sees EOF and exits after executing the whole script.
            if let data = script.data(using: .utf8) {
                try? stdinPipe.fileHandleForWriting.write(contentsOf: data)
            }
            try? stdinPipe.fileHandleForWriting.close()

            let deadline = Date().addingTimeInterval(30)
            while proc.isRunning && Date() < deadline {
                try? await Task.sleep(nanoseconds: 100_000_000)
            }
            if proc.isRunning {
                proc.terminate()
                return .connectFailure("Diagnostics timed out after 30s")
            }
            let out = (try? stdoutPipe.fileHandleForReading.readToEnd()) ?? Data()
            let err = (try? stderrPipe.fileHandleForReading.readToEnd()) ?? Data()
            return .completed(
                stdout: String(data: out, encoding: .utf8) ?? "",
                stderr: String(data: err, encoding: .utf8) ?? "",
                exitCode: proc.terminationStatus
            )
        }.value
    }

    /// Local shell invocation — runs the diagnostic script against the
    /// user's own Mac. Less useful than the remote form (most checks will
    /// trivially pass), but lets the same UI work for both contexts.
    private static func runLocally(script: String) async -> Captured {
        return await Task.detached { () -> Captured in
            let proc = Process()
            proc.executableURL = URL(fileURLWithPath: "/bin/sh")
            proc.arguments = ["-c", script]

            let stdoutPipe = Pipe()
            let stderrPipe = Pipe()
            proc.standardOutput = stdoutPipe
            proc.standardError = stderrPipe
            do {
                try proc.run()
            } catch {
                return .connectFailure("Failed to launch /bin/sh: \(error.localizedDescription)")
            }
            let deadline = Date().addingTimeInterval(10)
            while proc.isRunning && Date() < deadline {
                try? await Task.sleep(nanoseconds: 100_000_000)
            }
            if proc.isRunning {
                proc.terminate()
                return .connectFailure("Local diagnostics timed out (should be <1s)")
            }
            let out = (try? stdoutPipe.fileHandleForReading.readToEnd()) ?? Data()
            let err = (try? stderrPipe.fileHandleForReading.readToEnd()) ?? Data()
            return .completed(
                stdout: String(data: out, encoding: .utf8) ?? "",
                stderr: String(data: err, encoding: .utf8) ?? "",
                exitCode: proc.terminationStatus
            )
        }.value
    }

    /// Same cache directory used by SSHTransport — shared so the diagnostic
    /// probe reuses the connection's ControlMaster socket when it already
    /// exists (no second TCP handshake, no second auth).
    private static func controlDirPath() -> String {
        SSHTransport.controlDirPath()
    }

    private static func parse(stdout: String, stderr: String, exitCode: Int32) -> [Probe] {
        var results: [ProbeID: Probe] = [:]
        for line in stdout.split(whereSeparator: { $0 == "\n" || $0 == "\r" }) {
            let parts = line.split(separator: "|", maxSplits: 2, omittingEmptySubsequences: false)
            guard parts.count == 3 else { continue }
            let key = String(parts[0]).trimmingCharacters(in: .whitespaces)
            let status = String(parts[1]).trimmingCharacters(in: .whitespaces)
            let detail = String(parts[2]).trimmingCharacters(in: .whitespaces)
            guard let probe = ProbeID(rawValue: key) else { continue }
            results[probe] = Probe(
                id: probe,
                passed: status == "PASS",
                detail: detail
            )
        }

        // If the script didn't complete, fill in the missing probes so the UI
        // still shows every expected row (rather than silently skipping).
        let terminated = stdout.contains("__END__")
        let fallbackDetail: String
        if terminated {
            fallbackDetail = "(no output)"
        } else if exitCode != 0 {
            fallbackDetail = "(script exited \(exitCode) before this check — stderr: \(stderr.prefix(200)))"
        } else {
            fallbackDetail = "(no output from script)"
        }

        return ProbeID.allCases.map { id in
            results[id] ?? Probe(id: id, passed: false, detail: fallbackDetail)
        }
    }
}
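The pipe-delimited protocol shared by the generated script's `emit` and the Swift `parse` above can be exercised in plain POSIX sh. A minimal sketch (the probe name and detail text are illustrative; `cut -f3-` plays the role of `maxSplits: 2`, so a detail that itself contains pipes survives intact):

```shell
# Producer side: one `key|STATUS|detail` line per probe, then an __END__
# terminator so the consumer can detect a truncated run.
emit() { printf '%s|%s|%s\n' "$1" "$2" "$3"; }

line=$(emit sqlite3CanOpenStateDB FAIL "Error: file | not a database")

# Consumer side: split on the first two pipes only (like maxSplits: 2),
# keeping everything after the second pipe as the detail.
key=$(printf '%s' "$line" | cut -d'|' -f1)
status=$(printf '%s' "$line" | cut -d'|' -f2)
detail=$(printf '%s' "$line" | cut -d'|' -f3-)
```

Keys that don't map to a `ProbeID` are skipped by `parse`, so the script side can grow new probes ahead of the app.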
@@ -0,0 +1,215 @@
import Foundation

/// Bypasses `SSHTransport`'s normal terse-error path so the Add Server sheet
/// can show the user a full diagnostic on failure: the exact ssh command we
/// invoked, the verbose `ssh -v` handshake trace, and any remote shell
/// output. This is the difference between "Remote command exited 255" with
/// no further info, and "ssh said 'Permission denied (publickey)' on line N
/// of the trace, here's the command we ran, here's what was in your env".
struct TestConnectionProbe {
    let config: SSHConfig

    func run() async -> AddServerViewModel.TestResult {
        let host = config.host.trimmingCharacters(in: .whitespaces)
        guard !host.isEmpty else {
            return .failure(message: "Host is empty", stderr: "", command: "")
        }

        // Same options SSHTransport uses, plus -v for verbose ssh trace.
        // We deliberately skip ControlMaster here so the probe is a fresh
        // connection — a stale control socket from a previous failed run
        // shouldn't mask current state.
        var sshArgs: [String] = [
            "-v",
            "-o", "ServerAliveInterval=30",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=accept-new",
            "-o", "BatchMode=yes",
            "-o", "LogLevel=ERROR" // Errors only on stderr; -v puts handshake on stderr separately
        ]
        if let port = config.port { sshArgs += ["-p", String(port)] }
        if let id = config.identityFile, !id.isEmpty {
            sshArgs += ["-i", id]
        }
        let hostSpec: String
        if let user = config.user, !user.isEmpty { hostSpec = "\(user)@\(host)" }
        else { hostSpec = host }
        sshArgs.append(hostSpec)
        sshArgs.append("--")

        // Remote probe script. Tries three strategies in order:
        //  1. `command -v hermes` against the bare non-interactive PATH —
        //     works if the user put their install location in ~/.zshenv.
        //  2. Source common login rc files (.zprofile, .bash_profile,
        //     .profile) and re-probe — picks up PATH set in login shells.
        //  3. Probe the well-known install candidates directly. Mirrors
        //     `HermesPathSet.hermesBinaryCandidates` so behavior matches
        //     Scarf's local resolution.
        // The matched absolute path is stored as `hermesBinaryHint` on the
        // SSHConfig so subsequent CLI/ACP invocations don't have to re-probe.
        // If the user already typed a remoteHome override, use it; otherwise
        // default to $HOME/.hermes. Either way, the script also probes a
        // short list of well-known alternates when the primary path doesn't
        // have state.db — systemd/docker/VPS installs tend to live at
        // /var/lib/hermes/.hermes or /home/hermes/.hermes, and SSHing in as
        // a different user than the Hermes daemon is the leading cause of
        // "connection green, data empty" bug reports (issue #19).
        let primary: String
        if let override = config.remoteHome, !override.isEmpty {
            if override.hasPrefix("~/") {
                primary = "$HOME/\(override.dropFirst(2))"
            } else if override == "~" {
                primary = "$HOME"
            } else {
                primary = override
            }
        } else {
            primary = "$HOME/.hermes"
        }

        let script = #"""
        hpath=$(command -v hermes 2>/dev/null)
        if [ -z "$hpath" ]; then
            for rc in "$HOME/.zshenv" "$HOME/.zprofile" "$HOME/.bash_profile" "$HOME/.profile"; do
                [ -f "$rc" ] && . "$rc" 2>/dev/null
            done
            hpath=$(command -v hermes 2>/dev/null)
        fi
        if [ -z "$hpath" ]; then
            for cand in "$HOME/.local/bin/hermes" "/opt/homebrew/bin/hermes" "/usr/local/bin/hermes" "$HOME/.hermes/bin/hermes"; do
                if [ -x "$cand" ]; then hpath="$cand"; break; fi
            done
        fi
        echo "HERMES:$hpath"
        PRIMARY="\#(primary)"
        if [ -r "$PRIMARY/state.db" ]; then
            echo "DB:ok"
            echo "HOME_USED:$PRIMARY"
        else
            echo "DB:missing"
            # Probe well-known alternates. Emit the first one that has a
            # readable state.db so the UI can offer a one-click fill.
            for alt in "/var/lib/hermes/.hermes" "/opt/hermes/.hermes" "/home/hermes/.hermes" "/root/.hermes"; do
                if [ -r "$alt/state.db" ]; then
                    echo "SUGGEST:$alt"
                    break
                fi
            done
        fi
        """#
        sshArgs.append("/bin/sh")
        sshArgs.append("-c")
        sshArgs.append(script)

        // Build the displayable command string. Show exactly what `ssh ...`
        // would look like in the user's terminal (with single-quoting for
        // the script). Doesn't have to be byte-equivalent to what
        // `Process` invokes — just a faithful reproduction the user can
        // paste into Terminal to compare.
        let displayCommand = "/usr/bin/ssh " + sshArgs.map { Self.shellDisplayQuote($0) }.joined(separator: " ")

        let probe = await Task.detached { () -> (Int32, String, String) in
            let proc = Process()
            proc.executableURL = URL(fileURLWithPath: "/usr/bin/ssh")
            proc.arguments = sshArgs
            // Inherit shell-derived SSH_AUTH_SOCK so ssh can reach the agent.
            // Without this, GUI-launched Scarf can't see the user's
            // ssh-add'd keys (terminal works because shell sets the var).
            var env = ProcessInfo.processInfo.environment
            let shellEnv = HermesFileService.enrichedEnvironment()
            for key in ["SSH_AUTH_SOCK", "SSH_AGENT_PID"] {
                if env[key] == nil, let value = shellEnv[key], !value.isEmpty {
                    env[key] = value
                }
            }
            proc.environment = env

            let stdoutPipe = Pipe()
            let stderrPipe = Pipe()
            proc.standardOutput = stdoutPipe
            proc.standardError = stderrPipe
            do {
                try proc.run()
            } catch {
                return (-1, "", "Failed to launch /usr/bin/ssh: \(error.localizedDescription)")
            }
            // Bound the probe so a hung connection doesn't lock the UI.
            let deadline = Date().addingTimeInterval(20)
            while proc.isRunning && Date() < deadline {
                try? await Task.sleep(nanoseconds: 100_000_000)
            }
            if proc.isRunning {
                proc.terminate()
                let partial = (try? stderrPipe.fileHandleForReading.readToEnd()) ?? Data()
                return (-1, "", "Timed out after 20s.\n\nssh trace so far:\n" + (String(data: partial, encoding: .utf8) ?? ""))
            }
            let out = (try? stdoutPipe.fileHandleForReading.readToEnd()) ?? Data()
            let err = (try? stderrPipe.fileHandleForReading.readToEnd()) ?? Data()
            return (
                proc.terminationStatus,
                String(data: out, encoding: .utf8) ?? "",
                String(data: err, encoding: .utf8) ?? ""
            )
        }.value

        let (exitCode, stdout, stderr) = probe

        // Diagnostic envelope: always include the ssh command + the
        // SSH_AUTH_SOCK presence at the top of the stderr blob so the
        // user immediately sees whether agent inheritance worked.
        let agentEnv = ProcessInfo.processInfo.environment["SSH_AUTH_SOCK"]
            ?? HermesFileService.enrichedEnvironment()["SSH_AUTH_SOCK"]
            ?? "(not set)"
        let envSummary = "SSH_AUTH_SOCK = \(agentEnv)\n\n"

        if exitCode == 0 {
            let lines = stdout.split(separator: "\n").map(String.init)
            let hermesPath = lines.first(where: { $0.hasPrefix("HERMES:") })?
                .dropFirst("HERMES:".count).trimmingCharacters(in: .whitespaces) ?? ""
            let dbFound = lines.contains(where: { $0 == "DB:ok" })
            let suggestedHome = lines.first(where: { $0.hasPrefix("SUGGEST:") })
                .map { String($0.dropFirst("SUGGEST:".count)).trimmingCharacters(in: .whitespaces) }
            if hermesPath.isEmpty {
                return .failure(
                    message: "hermes binary not found in remote $PATH",
                    stderr: envSummary + "Add hermes to the remote PATH (e.g. ~/.zshenv).\n\nRemote stdout:\n\(stdout)",
                    command: displayCommand
                )
            }
            return .success(hermesPath: String(hermesPath), dbFound: dbFound, suggestedRemoteHome: suggestedHome)
        }

        // Classify common failures by scanning the stderr trace.
        let lower = stderr.lowercased()
        let summary: String
        if lower.contains("permission denied") {
            summary = "Permission denied — check that your key is loaded in ssh-agent (run `ssh-add -l` in Terminal) and that the remote accepts it."
        } else if lower.contains("host key verification failed") {
            summary = "Host key mismatch — run `ssh-keygen -R \(host)` in Terminal, then retry."
        } else if lower.contains("connection refused") || lower.contains("no route to host") {
            summary = "Can't reach the host — check the IP/network/firewall."
        } else if lower.contains("could not resolve hostname") {
            summary = "Hostname did not resolve."
        } else if exitCode == 255 {
            summary = "ssh failed (exit 255). See the trace below."
        } else {
            summary = "Remote command exited \(exitCode)."
        }

        return .failure(
            message: summary,
            stderr: envSummary + (stderr.isEmpty ? "(ssh produced no stderr — this usually means the process itself failed to start, the executable couldn't be located, or stdin/stdout was closed unexpectedly.)" : stderr),
            command: displayCommand
        )
    }

    /// Quote an argument for display in a copy-pasteable ssh command. Always
    /// wraps in single quotes if it contains anything beyond a basic safe set
    /// — visually noisier than minimal quoting but unambiguous.
    private static func shellDisplayQuote(_ s: String) -> String {
        if s.isEmpty { return "''" }
        let safe = CharacterSet(charactersIn: "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789@%+=:,./-_")
        if s.unicodeScalars.allSatisfy({ safe.contains($0) }) { return s }
        return "'" + s.replacingOccurrences(of: "'", with: "'\\''") + "'"
    }
}
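The `'\''` escape emitted by `shellDisplayQuote` above (close the single-quoted span, emit a literal quote, reopen) round-trips through any POSIX shell. A minimal sketch; the `quote` helper here is an illustrative stand-in, not part of Scarf, and unlike `shellDisplayQuote` it always wraps:

```shell
# Wrap a string in single quotes, rewriting each embedded ' as '\''.
quote() { printf "'%s'" "$(printf '%s' "$1" | sed "s/'/'\\\\''/g")"; }

quoted=$(quote "it's a test")
# Round-trip: the shell must parse the quoted form back to the original.
eval "roundtrip=$quoted"
```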
@@ -0,0 +1,225 @@
import SwiftUI

/// Sheet for adding a new remote server. Collects SSH connection details,
/// runs a "Test Connection" probe, and — on save — hands the persisted
/// `SSHConfig` (with `hermesBinaryHint` populated by the probe) to the
/// caller via the `onSave` closure.
struct AddServerSheet: View {
    @State private var viewModel = AddServerViewModel()
    @Environment(\.dismiss) private var dismiss

    /// Called when the user confirms. Caller persists via `ServerRegistry`
    /// and typically switches the active window's context to the new server.
    let onSave: (_ displayName: String, _ config: SSHConfig) -> Void

    var body: some View {
        VStack(alignment: .leading, spacing: 0) {
            header
            Divider()
            ScrollView {
                VStack(alignment: .leading, spacing: 16) {
                    connectionSection
                    Divider()
                    testSection
                }
                .padding(20)
            }
            Divider()
            footer
        }
        .frame(width: 560, height: 680)
    }

    private var header: some View {
        HStack {
            Image(systemName: "server.rack")
                .font(.title2)
            Text("Add Remote Server")
                .font(.headline)
            Spacer()
        }
        .padding(.horizontal, 20)
        .padding(.vertical, 12)
    }

    private var connectionSection: some View {
        VStack(alignment: .leading, spacing: 10) {
            Text("Connection")
                .font(.subheadline).bold()
                .foregroundStyle(.secondary)

            LabeledField("Name") {
                TextField("Optional — defaults to hostname", text: $viewModel.displayName)
                    .textFieldStyle(.roundedBorder)
            }

            LabeledField("Host") {
                TextField("hermes.example.com or a ~/.ssh/config alias", text: $viewModel.host)
                    .textFieldStyle(.roundedBorder)
                    .autocorrectionDisabled()
            }

            LabeledField("User") {
                TextField("Defaults to ~/.ssh/config or current user", text: $viewModel.user)
                    .textFieldStyle(.roundedBorder)
                    .autocorrectionDisabled()
            }

            LabeledField("Port") {
                TextField("22", text: $viewModel.port)
                    .textFieldStyle(.roundedBorder)
                    .frame(width: 100)
                Spacer()
            }

            LabeledField("Identity file") {
                HStack(spacing: 8) {
                    TextField("ssh-agent (leave blank)", text: $viewModel.identityFile)
                        .textFieldStyle(.roundedBorder)
                        .autocorrectionDisabled()
                    Button("Choose…") { viewModel.pickIdentityFile() }
                }
            }

            LabeledField("Hermes data directory") {
                TextField("Default: ~/.hermes", text: $viewModel.remoteHome)
                    .textFieldStyle(.roundedBorder)
                    .autocorrectionDisabled()
            }
            Text("Leave blank unless Hermes is installed at a non-default path (systemd services often live at /var/lib/hermes/.hermes; Docker sidecars vary). Test Connection auto-suggests a value when it detects one of the known alternates.")
                .font(.caption)
                .foregroundStyle(.secondary)
                .fixedSize(horizontal: false, vertical: true)

            Text("Scarf uses ssh-agent for authentication. If your key has a passphrase, run `ssh-add` before connecting — Scarf never prompts for or stores passphrases.")
                .font(.caption)
                .foregroundStyle(.secondary)
                .padding(.top, 4)
        }
    }

    private var testSection: some View {
        VStack(alignment: .leading, spacing: 8) {
            HStack {
                Text("Probe").font(.subheadline).bold().foregroundStyle(.secondary)
                Spacer()
                Button {
                    Task { await viewModel.testConnection() }
                } label: {
                    if viewModel.isTesting {
                        ProgressView().controlSize(.small)
                    } else {
                        Text("Test Connection")
                    }
                }
                .disabled(viewModel.isTesting || !viewModel.canSave)
            }

            if let result = viewModel.testResult {
                switch result {
                case .success(let path, let dbFound, let suggestedHome):
                    VStack(alignment: .leading, spacing: 6) {
                        Label("Connected", systemImage: "checkmark.circle.fill")
                            .foregroundStyle(.green)
                        Text("hermes at \(path)").font(.caption).monospaced()
                        if dbFound {
                            Text("state.db readable")
                                .font(.caption)
                                .foregroundStyle(.secondary)
                        } else if let suggestion = suggestedHome {
                            // Scarf found Hermes data at one of the common
                            // alternate paths. One-click fill the
                            // remoteHome field so the user doesn't have to
                            // know this is a convention thing.
                            VStack(alignment: .leading, spacing: 4) {
                                Text("state.db not found at the default location, but Scarf found one at:")
                                    .font(.caption)
                                    .foregroundStyle(.orange)
                                HStack {
                                    Text(suggestion)
                                        .font(.caption.monospaced())
                                        .textSelection(.enabled)
                                    Spacer()
                                    Button("Use this") {
                                        viewModel.remoteHome = suggestion
                                    }
                                    .controlSize(.small)
                                }
                                .padding(8)
                                .background(Color.yellow.opacity(0.12), in: RoundedRectangle(cornerRadius: 6))
                            }
                        } else {
                            Text("state.db not found at the configured path. Either Hermes hasn't run yet on this server, or it's installed at a non-default location — set the Hermes data directory field above.")
                                .font(.caption)
                                .foregroundStyle(.orange)
                                .fixedSize(horizontal: false, vertical: true)
                        }
                    }
                case .failure(let message, let stderr, let command):
                    VStack(alignment: .leading, spacing: 6) {
                        Label(message, systemImage: "xmark.octagon.fill")
                            .foregroundStyle(.red)
                        DisclosureGroup("ssh trace") {
                            ScrollView {
                                Text(stderr.isEmpty ? "(no output)" : stderr)
                                    .font(.system(size: 11, design: .monospaced))
                                    .textSelection(.enabled)
                                    .frame(maxWidth: .infinity, alignment: .leading)
                            }
                            .frame(maxHeight: 180)
                        }
                        .font(.caption)
                        DisclosureGroup("Command") {
                            ScrollView {
                                Text(command)
                                    .font(.system(size: 11, design: .monospaced))
                                    .textSelection(.enabled)
                                    .frame(maxWidth: .infinity, alignment: .leading)
                            }
                            .frame(maxHeight: 100)
                        }
                        .font(.caption)
                    }
                }
            }
        }
    }

    private var footer: some View {
        HStack {
            Spacer()
            Button("Cancel") { dismiss() }
                .keyboardShortcut(.cancelAction)
            Button("Save") {
                onSave(viewModel.resolvedDisplayName, viewModel.configForSave())
                dismiss()
            }
            .keyboardShortcut(.defaultAction)
            .disabled(!viewModel.canSave)
        }
        .padding(.horizontal, 20)
        .padding(.vertical, 12)
    }
}

/// Form-field helper: label on the left, editable field on the right.
private struct LabeledField<Content: View>: View {
    let label: String
    let content: Content

    init(_ label: String, @ViewBuilder content: () -> Content) {
        self.label = label
        self.content = content()
    }

    var body: some View {
        HStack(alignment: .firstTextBaseline, spacing: 12) {
            Text(label)
                .font(.callout)
                .foregroundStyle(.secondary)
                .frame(width: 140, alignment: .trailing)
            content
            Spacer(minLength: 0)
        }
    }
}
@@ -0,0 +1,256 @@
import SwiftUI

/// Small colored pill shown in the toolbar reflecting the server's
/// reachability. Green = connected, orange = degraded, yellow = probing,
/// red = unreachable.
///
/// Clicking the pill (when red) surfaces the raw stderr so users can
/// diagnose SSH issues without digging through Console.
struct ConnectionStatusPill: View {
    let status: ConnectionStatusViewModel
    @State private var showDetails = false
    @State private var showDiagnostics = false

    var body: some View {
        Button {
            switch status.status {
            case .error:
                showDetails = true
            case .degraded:
                // Orange "can't read" state — open the diagnostics sheet
                // so the user can see exactly which files fail and why.
                showDiagnostics = true
            case .connected, .idle:
                status.retry()
            }
        } label: {
            // Leading SF Symbol does double duty: its color is the status
            // signal (green/orange/yellow/red), and its shape reads as a
            // clickable toolbar tool. No custom background — the toolbar's
            // `.principal` emphasis bezel is the frame.
            HStack(spacing: 5) {
                Image(systemName: iconName)
                    .foregroundStyle(color)
                    .symbolRenderingMode(.hierarchical)
                Text(label)
                    .font(.caption)
                    .foregroundStyle(.secondary)
                    .lineLimit(1)
            }
            .padding(.horizontal, 4)
        }
        .buttonStyle(.plain)
        .help(tooltip)
        .popover(isPresented: $showDetails, arrowEdge: .bottom) {
            errorDetails.frame(width: 400)
        }
        .sheet(isPresented: $showDiagnostics) {
            RemoteDiagnosticsView(context: status.context)
        }
    }

    private var color: Color {
        switch status.status {
        case .connected: return .green
        case .degraded: return .orange
        case .idle: return .yellow
        case .error: return .red
        }
    }

    /// State-specific SF Symbol. The icon shape itself signals what the
    /// click will do: checkmark for connected (click to re-probe),
    /// stethoscope for degraded (click to run diagnostics), spinning
    /// arrows for probing, triangle for error.
    private var iconName: String {
        switch status.status {
        case .connected: return "checkmark.circle.fill"
        case .degraded: return "stethoscope"
        case .idle: return "arrow.triangle.2.circlepath"
        case .error: return "exclamationmark.triangle.fill"
        }
    }

    private var label: String {
        switch status.status {
        case .connected: return "Connected"
        case .degraded: return "Connected — can't read Hermes state"
        case .idle: return "Checking…"
        case .error(let message, _): return message
        }
    }

    private var tooltip: String {
        switch status.status {
        case .connected:
            if let ts = status.lastSuccess {
                let fmt = RelativeDateTimeFormatter()
                return "Last probe: \(fmt.localizedString(for: ts, relativeTo: Date()))"
            }
            return "Connected"
        case .degraded(let reason):
            return "SSH works but \(reason). Click for diagnostics."
        case .idle: return "Waiting for first probe"
        case .error(_, _): return "Click for details"
        }
    }

@ViewBuilder
|
||||
private var errorDetails: some View {
|
||||
if case .error(let message, let stderr) = status.status {
|
||||
VStack(alignment: .leading, spacing: 10) {
|
||||
HStack {
|
||||
Label(message, systemImage: "xmark.octagon.fill")
|
||||
.foregroundStyle(.red)
|
||||
.font(.headline)
|
||||
Spacer()
|
||||
Button("Retry") {
|
||||
status.retry()
|
||||
showDetails = false
|
||||
}
|
||||
}
|
||||
Divider()
|
||||
|
||||
// Specific guidance based on stderr classification.
|
||||
if stderr.isEmpty {
|
||||
Text("No additional output. Check ~/.ssh/config and ssh-agent.")
|
||||
.font(.caption)
|
||||
.foregroundStyle(.secondary)
|
||||
} else {
|
||||
ScrollView {
|
||||
Text(stderr)
|
||||
.font(.system(size: 11, design: .monospaced))
|
||||
.textSelection(.enabled)
|
||||
.frame(maxWidth: .infinity, alignment: .leading)
|
||||
}
|
||||
.frame(maxHeight: 200)
|
||||
}
|
||||
|
||||
// Tailored hint per failure class. We avoid auto-running
|
||||
// anything (Scarf can't safely invoke ssh-add or ssh-keygen
|
||||
// on the user's behalf), but copy-paste commands so the fix
|
||||
// is one paste away in Terminal.
|
||||
hintFor(stderr: stderr)
|
||||
}
|
||||
.padding(14)
|
||||
.frame(width: 440)
|
||||
}
|
||||
}
|
||||
|
||||
@ViewBuilder
|
||||
private func hintFor(stderr: String) -> some View {
|
||||
let lower = stderr.lowercased()
|
||||
if lower.contains("host key verification failed")
|
||||
|| lower.contains("remote host identification has changed") {
|
||||
// Known-hosts mismatch: this is the "blocking alert with
|
||||
// fingerprints" Phase 4 calls for. We can't safely auto-trust
|
||||
// a new key, so we offer the exact remediation command.
|
||||
HostKeyMismatchHint(serverHost: extractHostHint(from: stderr))
|
||||
} else if lower.contains("permission denied")
|
||||
|| (lower.contains("publickey") && lower.contains("denied")) {
|
||||
SshAddHint()
|
||||
} else {
|
||||
Text("If this is the first connection, ensure your key is loaded with `ssh-add` and that the remote accepts it.")
|
||||
.font(.caption)
|
||||
.foregroundStyle(.secondary)
|
||||
}
|
||||
}
|
||||
|
||||
/// Pull the host out of an ssh stderr line like
|
||||
/// "Host key verification failed for 192.168.0.82". Best-effort — falls
|
||||
/// back to a placeholder when no match is found.
|
||||
private func extractHostHint(from stderr: String) -> String {
|
||||
// Look for "Offending ECDSA key in /Users/.../.ssh/known_hosts:5"
|
||||
// or "Host key verification failed." — neither of which directly
|
||||
// contains the host. We fall back to scanning for an IP-like or
|
||||
// hostname-like token in the trace.
|
||||
let pattern = #"(?:host|key for) ['\"]?([A-Za-z0-9._-]+)['\"]?"#
|
||||
if let regex = try? NSRegularExpression(pattern: pattern, options: .caseInsensitive),
|
||||
let match = regex.firstMatch(in: stderr, range: NSRange(stderr.startIndex..., in: stderr)),
|
||||
match.numberOfRanges >= 2,
|
||||
let range = Range(match.range(at: 1), in: stderr) {
|
||||
return String(stderr[range])
|
||||
}
|
||||
return "<your-host>"
|
||||
}
|
||||
}
|
||||
|
||||
/// Specific remediation card for "host key verification failed" — the
|
||||
/// blocking case where ssh refuses because the remote's fingerprint changed.
|
||||
/// We never auto-accept; the user runs ssh-keygen -R themselves.
|
||||
private struct HostKeyMismatchHint: View {
|
||||
let serverHost: String
|
||||
@State private var copied = false
|
||||
|
||||
private var command: String { "ssh-keygen -R \(serverHost)" }
|
||||
|
||||
var body: some View {
|
||||
VStack(alignment: .leading, spacing: 6) {
|
||||
Label("Host key changed", systemImage: "exclamationmark.shield")
|
||||
.font(.subheadline).bold()
|
||||
.foregroundStyle(.orange)
|
||||
Text("The remote's SSH fingerprint no longer matches what your `~/.ssh/known_hosts` file expected. This usually means the remote was reinstalled — or, less commonly, that someone is intercepting the connection.")
|
||||
.font(.caption)
|
||||
.foregroundStyle(.secondary)
|
||||
Text("If you trust the change, remove the stale entry and reconnect:")
|
||||
.font(.caption)
|
||||
.foregroundStyle(.secondary)
|
||||
HStack {
|
||||
Text(command)
|
||||
.font(.system(size: 11, design: .monospaced))
|
||||
.textSelection(.enabled)
|
||||
.padding(6)
|
||||
.background(Color.secondary.opacity(0.12), in: RoundedRectangle(cornerRadius: 4))
|
||||
Spacer()
|
||||
Button(copied ? "Copied" : "Copy") {
|
||||
NSPasteboard.general.clearContents()
|
||||
NSPasteboard.general.setString(command, forType: .string)
|
||||
copied = true
|
||||
Task { try? await Task.sleep(nanoseconds: 1_500_000_000); copied = false }
|
||||
}
|
||||
.buttonStyle(.borderless)
|
||||
}
|
||||
}
|
||||
.padding(8)
|
||||
.background(Color.orange.opacity(0.1), in: RoundedRectangle(cornerRadius: 6))
|
||||
}
|
||||
}
|
||||
|
||||
/// Hint for "Permission denied" failures — almost always means ssh-agent
|
||||
/// doesn't have the right key loaded. We can't run ssh-add for the user
|
||||
/// (no UI to handle the passphrase prompt), but we provide the exact
|
||||
/// command + a copy button.
|
||||
private struct SshAddHint: View {
|
||||
@State private var copied = false
|
||||
private let command = "ssh-add ~/.ssh/id_ed25519"
|
||||
|
||||
var body: some View {
|
||||
VStack(alignment: .leading, spacing: 6) {
|
||||
Label("Authentication uses ssh-agent", systemImage: "key.viewfinder")
|
||||
.font(.subheadline).bold()
|
||||
.foregroundStyle(.blue)
|
||||
Text("Scarf never prompts for passphrases. Add your key to ssh-agent in Terminal, then click Retry. If your key isn't `id_ed25519`, swap the path:")
|
||||
.font(.caption)
|
||||
.foregroundStyle(.secondary)
|
||||
HStack {
|
||||
Text(command)
|
||||
.font(.system(size: 11, design: .monospaced))
|
||||
.textSelection(.enabled)
|
||||
.padding(6)
|
||||
.background(Color.secondary.opacity(0.12), in: RoundedRectangle(cornerRadius: 4))
|
||||
Spacer()
|
||||
Button(copied ? "Copied" : "Copy") {
|
||||
NSPasteboard.general.clearContents()
|
||||
NSPasteboard.general.setString(command, forType: .string)
|
||||
copied = true
|
||||
Task { try? await Task.sleep(nanoseconds: 1_500_000_000); copied = false }
|
||||
}
|
||||
.buttonStyle(.borderless)
|
||||
}
|
||||
Text("To skip the passphrase prompt at every reboot, add `--apple-use-keychain` to cache it in macOS Keychain.")
|
||||
.font(.caption)
|
||||
.foregroundStyle(.tertiary)
|
||||
}
|
||||
.padding(8)
|
||||
.background(Color.blue.opacity(0.08), in: RoundedRectangle(cornerRadius: 6))
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,134 @@
import SwiftUI

/// List of registered remote servers with add/remove actions. Rendered as a
/// popover from the toolbar switcher.
struct ManageServersView: View {
    @Environment(ServerRegistry.self) private var registry
    @State private var showAddSheet = false
    @State private var pendingRemoveID: ServerID?
    @State private var diagnosticsContext: ServerContext?

    var body: some View {
        VStack(alignment: .leading, spacing: 0) {
            header
            Divider()
            if registry.entries.isEmpty {
                empty
            } else {
                list
            }
        }
        .frame(width: 440, height: 380)
        .sheet(isPresented: $showAddSheet) {
            AddServerSheet { name, config in
                _ = registry.addServer(displayName: name, config: config)
            }
        }
        .sheet(item: Binding(
            get: { diagnosticsContext.map { IdentifiableContext(context: $0) } },
            set: { diagnosticsContext = $0?.context }
        )) { wrapper in
            RemoteDiagnosticsView(context: wrapper.context)
        }
        .confirmationDialog(
            "Remove this server?",
            isPresented: Binding(
                get: { pendingRemoveID != nil },
                set: { if !$0 { pendingRemoveID = nil } }
            ),
            actions: {
                Button("Remove", role: .destructive) {
                    if let id = pendingRemoveID { registry.removeServer(id) }
                    pendingRemoveID = nil
                }
                Button("Cancel", role: .cancel) { pendingRemoveID = nil }
            },
            message: {
                Text("The server's SSH configuration is removed from Scarf. Your remote files are untouched.")
            }
        )
    }

    /// Wrapper because `ServerContext` isn't `Identifiable` against the sheet
    /// item API in a way that preserves display-ordering stability.
    private struct IdentifiableContext: Identifiable {
        var id: ServerID { context.id }
        let context: ServerContext
    }

    private var header: some View {
        HStack {
            Text("Servers").font(.headline)
            Spacer()
            Button {
                showAddSheet = true
            } label: {
                Label("Add", systemImage: "plus")
            }
            .buttonStyle(.borderless)
        }
        .padding(12)
    }

    private var empty: some View {
        VStack(spacing: 8) {
            Image(systemName: "server.rack")
                .font(.system(size: 28))
                .foregroundStyle(.secondary)
            Text("No remote servers").font(.headline)
            Text("Click Add to connect to a remote Hermes installation over SSH.")
                .font(.caption)
                .foregroundStyle(.secondary)
                .multilineTextAlignment(.center)
                .frame(maxWidth: 280)
        }
        .padding(32)
        .frame(maxWidth: .infinity, maxHeight: .infinity)
    }

    private var list: some View {
        List {
            ForEach(registry.entries) { entry in
                HStack(spacing: 10) {
                    Image(systemName: "server.rack")
                        .foregroundStyle(.blue)
                    VStack(alignment: .leading, spacing: 2) {
                        Text(entry.displayName).font(.body)
                        if case .ssh(let config) = entry.kind {
                            Text(summary(for: config))
                                .font(.caption)
                                .foregroundStyle(.secondary)
                        }
                    }
                    Spacer()
                    Button {
                        diagnosticsContext = entry.context
                    } label: {
                        Image(systemName: "stethoscope")
                    }
                    .buttonStyle(.borderless)
                    .help("Run remote diagnostics — check exactly which files are readable on this server.")
                    Button {
                        pendingRemoveID = entry.id
                    } label: {
                        Image(systemName: "trash")
                    }
                    .buttonStyle(.borderless)
                    .foregroundStyle(.red)
                    .help("Remove this server from Scarf.")
                }
                .padding(.vertical, 4)
            }
        }
        .listStyle(.inset)
    }

    private func summary(for config: SSHConfig) -> String {
        var s = ""
        if let user = config.user, !user.isEmpty { s += "\(user)@" }
        s += config.host
        if let port = config.port { s += ":\(port)" }
        if let home = config.remoteHome, !home.isEmpty { s += " (\(home))" }
        return s
    }
}