feat: v2.0 — correctness + UX polish on multi-server + remote SSH

The multi-window / multi-server / remote-SSH work that landed in
00ca722 (feat: multi-window + remote SSH server support (Phases 0-4))
was feature-complete but accumulated rough edges during dogfooding
against a remote Mac mini. This commit finishes the 2.0 release:
correctness fixes on remote, a chat-view UX overhaul, and a Swift 6
complete-concurrency sweep across the service layer.

Correctness on remote
- Kill the WAL-error spam: snapshotSQLite now runs `PRAGMA
  journal_mode=DELETE` on the remote temp DB before scp, so the
  pulled file is self-contained. Open remote snapshots with
  `file:...?immutable=1` URI as defense-in-depth, and drop the
  pointless `PRAGMA journal_mode=WAL` from HermesDataService.open.
- loadSessionHistory and refreshMessages now force a fresh snapshot
  via refresh(), so resuming a session on a remote shows messages
  persisted since launch (previously stuck on the first snapshot).
- New SnapshotCoordinator actor dedupes concurrent snapshotSQLite
  calls per ServerID — Dashboard + Sessions + Activity no longer
  issue three parallel SSH backups for the same fetch.
- ACP cwd comes from the remote's $HOME (probed once, cached per
  server in UserHomeCache), not the local Mac's NSHomeDirectory().
- Typing into a blank Chat always creates a new session. The old
  auto-resume-most-recent fallback was picking up cron-spawned
  sessions that Hermes had already GC'd, producing silent prompt
  failures.
- handlePromptComplete surfaces non-success stopReasons ("refusal",
  "error", "max_tokens") as a system message so failed prompts no
  longer sit under a forever-spinning "Agent working…".
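
  Sketch of the SnapshotCoordinator dedupe (everything except the
  actor's name and the per-ServerID keying is illustrative; `pull`
  stands in for the real ssh `.backup` + scp pipeline):

  ```swift
  import Foundation

  actor SnapshotCoordinator {
      typealias ServerID = UUID

      private var inFlight: [ServerID: Task<URL, Error>] = [:]

      func snapshot(for server: ServerID,
                    pull: @escaping @Sendable () async throws -> URL) async throws -> URL {
          // A pull for this server is already running: await it and
          // share the result instead of issuing a second SSH backup.
          if let task = inFlight[server] {
              return try await task.value
          }
          let task = Task { try await pull() }
          inFlight[server] = task
          defer { inFlight[server] = nil }
          return try await task.value
      }
  }
  ```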

Chat UX
- Replace six racing onChange-driven scrollTo calls with
  `.defaultScrollAnchor(.bottom)` alone. Manual proxy.scrollTo
  against a LazyVStack that hadn't finished laying out was
  overshooting into whitespace. Layout-pass-integrated anchor
  behaves correctly at stream start and finish.
- Remove ContentUnavailableView swap in RichChatView — it tore down
  the whole ScrollView hierarchy on first message. Empty state now
  lives inside the scroll view.
- continueLastSession surfaces an acpError banner if open() fails,
  instead of silently returning.
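
  The scroll fix reduces to letting SwiftUI own the anchor; minimal
  shape of the change (view and model names illustrative):

  ```swift
  import SwiftUI

  // No proxy.scrollTo, no onChange observers. `.defaultScrollAnchor(.bottom)`
  // keeps the newest content pinned as the LazyVStack grows, because the
  // anchor is resolved inside the layout pass rather than after it.
  struct MessageListSketch: View {
      let messages: [String]   // stand-in for the real message model

      var body: some View {
          ScrollView {
              LazyVStack(alignment: .leading, spacing: 8) {
                  ForEach(messages.indices, id: \.self) { i in
                      Text(messages[i])
                  }
              }
          }
          .defaultScrollAnchor(.bottom)  // macOS 14+ / iOS 17+
      }
  }
  ```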

Lifecycle hygiene
- ServerRegistry.removeServer closes the server's SSH ControlMaster
  (`ssh -O exit`), prunes its snapshot cache dir, and invalidates
  UserHomeCache for that ID. App launch sweeps orphan snapshot dirs
  whose UUIDs aren't in the registry anymore.
- NSWorkspace.activateFileViewerSelecting (backup-saved-to dialog)
  gated on !context.isRemote; remote surfaces the remote path in the
  saveMessage instead of silently no-op'ing on a nonexistent local
  path.
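
  The launch-time orphan sweep is roughly this (cache layout and
  function name illustrative):

  ```swift
  import Foundation

  /// Delete snapshot cache dirs whose directory name is not a UUID
  /// currently present in the server registry.
  func sweepOrphanSnapshotDirs(registeredIDs: Set<UUID>, cacheRoot: URL) throws {
      let fm = FileManager.default
      let dirs = (try? fm.contentsOfDirectory(at: cacheRoot,
                                              includingPropertiesForKeys: nil)) ?? []
      for dir in dirs {
          // Dir names are server UUIDs; anything unparseable or
          // unregistered is stale and safe to remove.
          guard let id = UUID(uuidString: dir.lastPathComponent),
                registeredIDs.contains(id) else {
              try fm.removeItem(at: dir)
              continue
          }
      }
  }
  ```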

Swift 6 concurrency — 230 warnings → 1
- Mark ServerContext, HermesPathSet, ServerTransport (protocol),
  LocalTransport, SSHTransport, HermesFileService, and every value-
  type accessor as `nonisolated`. Prevents AppKit-import-driven
  MainActor inference from bleeding onto data-only types.
- Hand-written Codable conformances (vs. synthesized) for
  ACPRequest, ACPRawMessage, ACPError, GatewayState, PlatformState,
  HermesCronJob, CronSchedule, CronJobsFile, AuthFile, AuthEntry.
  Synthesized inits were inferred @MainActor by Swift 6's default-
  isolation rule; hand-written ones are explicitly nonisolated.
- Captured-var refactors in MCPServerEditorViewModel,
  PluginsViewModel, and LocalTransport.watchPaths. Thread.sleep →
  Task.sleep in TestConnectionProbe.
- Remaining warning is AnyCodable.value mutation in init(from:) —
  Any-typed storage can't be strictly Sendable; acknowledged via
  @unchecked Sendable.

ACP adapter upstream bug (not fixed here, but handled)
- Hermes's ACP adapter returns JSON-RPC success `{"result":{}}` for
  session/load on a missing session, logging the warning only to
  stderr. Scarf can't distinguish "loaded" from "silently missing"
  at that layer; the stopReason=refusal surfacing above catches the
  downstream symptom. Upstream issue worth filing.

Release docs
- releases/v2.0.0/RELEASE_NOTES.md with full user-facing breakdown.
- README.md "What's New" bumped to 2.0 with a multi-server section.
  Compatibility table adds v0.10.0 as verified.
- GitHub repo description updated (via `gh repo edit`) to call out
  multi-server + remote SSH.

35 files changed, +809/-350.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
This commit is contained in:
Alan Wizemann
2026-04-19 13:02:40 -07:00
parent 00ca7229df
commit 5920923d92
37 changed files with 866 additions and 349 deletions
@@ -17,15 +17,30 @@
<a href="https://www.buymeacoffee.com/awizemann"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me a Coffee" height="28"></a>
</p>
## What's New in 1.6
## What's New in 2.0
- **Multi-server** — Manage multiple Hermes installations (local + any number of remotes) from one app. Each window binds to one server; open them side-by-side.
- **Remote Hermes over SSH** — Every feature that worked against your local `~/.hermes/` now works against a remote host. File I/O routes through `scp`/`sftp`; chat ACP runs over `ssh -T`; SQLite is served from atomic `.backup` snapshots pulled on file-watcher ticks.
- **Chat UX overhaul** — No more white-screen flash on first message, no more scroll jumping into whitespace during streaming, failed prompts explain themselves instead of silently spinning forever.
- **Correctness pass** — Fixed remote WAL error spam, stale-snapshot session resume, auto-resume of dead cron sessions, 230+ Swift 6 concurrency warnings.
See the full [v2.0.0 release notes](https://github.com/awizemann/scarf/releases/tag/v2.0.0).
### Previously, in 1.6
- **Platforms** — Native GUI setup for all 13 messaging platforms, no more hand-editing `.env`
- **Credential Pools** — Fixed OAuth flow and API-key handling; pick providers from a catalog
- **Model Picker** — Hierarchical browser backed by the 111-provider models.dev cache
- **Settings tabs** — 10 organized tabs covering ~60 previously hidden config fields
- **Configure sidebar** — New section for Personalities, Quick Commands, Plugins, Webhooks, Profiles
- **Configure sidebar** — Personalities, Quick Commands, Plugins, Webhooks, Profiles
See the full [v1.6.0 release notes](https://github.com/awizemann/scarf/releases/tag/v1.6.0).
See the [v1.6.0 release notes](https://github.com/awizemann/scarf/releases/tag/v1.6.0) for the full 1.6 series.
## Multi-server, one window per server
Scarf 2.0 is a multi-window app. Each window is bound to exactly one Hermes server — your local `~/.hermes/` is synthesized automatically, and you can add remotes via **File → Open Server…** → **Add Server** (host, user, port, optional identity file). Open a second window for a different server and the two run side-by-side with independent state.
Remote Hermes is reached over system SSH — the same `~/.ssh/config`, ssh-agent, ProxyJump, and ControlMaster pooling your terminal uses. File I/O flows through `scp`/`sftp`; SQLite is served from atomic `sqlite3 .backup` snapshots cached under `~/Library/Caches/scarf/snapshots/<server-id>/`; chat (ACP) tunnels as `ssh -T host -- hermes acp` with JSON-RPC over stdio end-to-end. Everything in the feature list below works against remote identically to local.
## Features
@@ -77,7 +92,8 @@ Custom, agent-generated dashboards for any project. Define stat boxes, charts, t
- macOS 14.6+ (Sonoma)
- Xcode 16.0+
- [Hermes agent](https://github.com/hermes-ai/hermes-agent) v0.6.0+ installed at `~/.hermes/` (v0.9.0 recommended for full feature support)
- [Hermes agent](https://github.com/hermes-ai/hermes-agent) v0.6.0+ installed at `~/.hermes/` on each target host (v0.9.0+ recommended for full feature support)
- For remote servers: SSH access (key-based), `sqlite3` on the remote (for atomic DB snapshots), and the `hermes` CLI resolvable from the remote user's `PATH` or at a path you specify per server.
### Compatibility
@@ -88,9 +104,10 @@ Scarf reads Hermes's SQLite database and parses CLI output from `hermes status`,
| v0.6.0 (2026-03-30) | Verified |
| v0.7.0 (2026-04-03) | Verified |
| v0.8.0 (2026-04-08) | Verified |
| v0.9.0 (2026-04-13) | Verified (recommended for full 1.6 feature support) |
| v0.9.0 (2026-04-13) | Verified |
| v0.10.0 (2026-04-18) | Verified (recommended for full 2.0 feature support) |
Scarf 1.6 targets Hermes v0.9.0 specifically for the new Platforms, Credentials, Skills Hub, and Cron write features. Earlier Hermes versions remain supported for the monitoring and session features but may not expose every new setup form.
Scarf 2.0 targets Hermes v0.10.0 for the ACP session/fork/list/resume capabilities used by remote chat. Earlier Hermes versions remain supported for monitoring, sessions, and file-based features; ACP-specific behavior may gracefully degrade on older agents.
If a Hermes update changes the database schema or CLI output format, Scarf may need to be updated. Check the [Health](#features) view for compatibility warnings.
@@ -0,0 +1,58 @@
## What's New in 2.0
Scarf now manages **multiple Hermes installations** — your local `~/.hermes/` plus any number of remote Hermes instances reached over SSH. Every feature that worked on your Mac now works against a Linux server, a Mac mini on the network, or whatever other host has Hermes installed.
This is a major version bump because the entire service layer was rewritten around a `ServerContext` + `ServerTransport` abstraction, and because the window model changed from single-window-single-server to multi-window-one-server-per-window.
### Multi-server
- **Manage Servers** sheet lets you add, rename, and remove remote servers. Each entry is an SSH target (`user@host`, port, optional identity file, optional `remoteHome` override if your install isn't at `~/.hermes/`).
- Each window is bound to exactly one server. Open a second window via **File → Open Server** → pick a different server, and the two run side-by-side with independent state — chat, dashboards, activity, sessions, the lot.
- The menu bar status icon shows a summary across all registered servers (green hare = any Hermes running anywhere).
- Window-state restoration: quit + relaunch re-opens every window you had open, each reconnected to its bound server.
### Remote over SSH
- **ControlMaster connection pooling** — after the first auth, each remote primitive is a ~5ms tunnel call. Uses the system `ssh`, `scp`, `sftp` so your `~/.ssh/config`, ssh-agent, 1Password/Secretive SSH agents, and ProxyJump all work unchanged.
- **DB access via atomic snapshots** — Scarf runs `sqlite3 .backup` on the remote (WAL-safe, won't corrupt), flips the snapshot out of WAL mode, and pulls it down with `scp`. Snapshots are cached under `~/Library/Caches/scarf/snapshots/<server-id>/` and re-pulled when the file watcher sees a change on the remote's `state.db`.
- **ACP chat over SSH** — the Agent Client Protocol tunnel runs `ssh -T host -- hermes acp`. JSON-RPC over stdio travels end-to-end unmodified, so Rich Chat, streaming, tool calls, permission dialogs, and compression all work against the remote agent identically to local.
- **File watcher** — local uses FSEvents (instant); remote polls `stat` mtime every 3s with ControlMaster keeping the cost bounded. Views auto-refresh on any tick.
- **Cleanup on server-remove** — deleting a remote closes its ControlMaster socket (`ssh -O exit`), prunes its snapshot cache, and invalidates any process-wide caches keyed to its ID. App launch also sweeps orphaned snapshot dirs whose UUIDs are no longer in the registry.
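
A sketch of the remote watcher loop described above (`statMTime` stands in for the ControlMaster-pooled `stat` call over ssh; the function name is illustrative):

```swift
import Foundation

/// Poll the remote state.db mtime every 3s and fire `onChange` when it moves.
func watchRemoteMTime(statMTime: @escaping @Sendable () async throws -> Date,
                      onChange: @escaping @Sendable () -> Void) -> Task<Void, Never> {
    Task {
        var last: Date?
        while !Task.isCancelled {
            if let mtime = try? await statMTime(), mtime != last {
                if last != nil { onChange() }   // skip the initial baseline tick
                last = mtime
            }
            try? await Task.sleep(for: .seconds(3))
        }
    }
}
```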
### Chat UX overhaul
All of these were visible bugs during remote dogfooding and are now fixed on both local and remote:
- **No more white-screen flash** on the first message of a session. `RichChatView` used to swap `ContentUnavailableView` out for the message list, which tore down and recreated the entire ScrollView hierarchy. The empty state now lives inside the ScrollView itself.
- **No more scroll-jumping to whitespace** at stream start/finish. Replaced six racing `onChange`-driven scroll calls with SwiftUI's built-in `.defaultScrollAnchor(.bottom)`, which is implemented inside the layout pass and doesn't overshoot LazyVStack content.
- **Resuming a session on a remote now shows its full history.** The DB snapshot is refreshed on session-load — previously it was pulled once on first open and never again, so any messages the remote wrote since launch were invisible.
- **"Continue from last session" surfaces errors** instead of silently doing nothing when SSH is down.
- **Typing into a blank Chat always creates a new session.** Previously it auto-resumed the most recently active session in the DB, which often picked up a cron-spawned session that Hermes had already garbage-collected — producing a silent prompt failure.
- **Failed prompts now explain themselves.** When the agent returns `stopReason: "refusal"`, `"error"`, or `"max_tokens"` with no assistant output, a system message appears under your prompt explaining what happened. No more spinning "Agent working…" forever.
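
The failure surfacing amounts to mapping a non-success `stopReason` with no assistant output to a visible message; a sketch (the message plumbing and wording are illustrative, not the shipped strings):

```swift
/// Returns a system message to display under the prompt, or nil if
/// the completion should stay silent.
func systemMessage(forStopReason stopReason: String,
                   sawAssistantOutput: Bool) -> String? {
    guard !sawAssistantOutput else { return nil }
    switch stopReason {
    case "refusal":
        return "The agent declined this prompt (it may reference a session that no longer exists)."
    case "error":
        return "The agent reported an error while handling this prompt."
    case "max_tokens":
        return "The agent hit its token limit before producing a reply."
    case "end_turn", "cancelled":
        return nil   // normal completions and user cancels stay silent
    default:
        return "Prompt ended with stopReason \"\(stopReason)\" and no output."
    }
}
```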
### Correctness — remote SQLite
- The WAL-error spam (`cannot open file at line 51044 of [f0ca7bba1c] — os_unix.c:51044: (2) open(/Users/…/state.db-wal) - No such file or directory`) is gone. `sqlite3 .backup` preserves the source DB's journal mode; the scp'd copy used to try to open a WAL sidecar that doesn't exist. The snapshot script now runs `PRAGMA journal_mode=DELETE` after `.backup` on the remote, and Scarf opens remote snapshots with `file:…?immutable=1` as defense-in-depth.
- **Concurrent snapshot dedupe** — a new `SnapshotCoordinator` actor makes sure that when Dashboard + Sessions + Activity all ask for a fresh snapshot at the same moment (e.g. on a file-watcher tick), only one SSH backup runs; the other callers await the in-flight pull and share the result.
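
The two halves of the WAL fix can be sketched as follows (paths, quoting, and helper names are illustrative — the actual snapshot script is not reproduced here):

```swift
import Foundation

/// Remote side: back up atomically, then force the copy out of WAL mode
/// so the scp'd file has no -wal/-shm sidecars to look for.
func remoteSnapshotCommand(dbPath: String, tmpPath: String) -> String {
    """
    sqlite3 '\(dbPath)' ".backup '\(tmpPath)'" && \
    sqlite3 '\(tmpPath)' 'PRAGMA journal_mode=DELETE;'
    """
}

/// Local side: open the pulled snapshot via an immutable URI, so SQLite
/// never probes for journal files even if the PRAGMA step were skipped.
func immutableOpenURI(for localSnapshot: URL) -> String {
    "file:\(localSnapshot.path)?immutable=1"
}
```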
### Under the hood
- New `ServerContext` value type flows through `.environment()` to every view and ViewModel. Every file and process operation routes through `context.makeTransport()`, which yields either `LocalTransport` (`FileManager`, `Process`, FSEvents) or `SSHTransport` (ssh, scp, sftp, mtime polling). The protocol is small enough that each transport is ~400 lines.
- Swift 6 complete-concurrency sweep: ~230 warnings reduced to 1. `ServerContext`, `HermesPathSet`, `ServerTransport`, all service inits, and every value-type accessor are explicitly `nonisolated`. Hand-written `Codable` conformances for the nine types whose synthesized conformances were inferred `@MainActor` by Swift 6's default-isolation rule (`ACPRequest`, `ACPRawMessage`, `GatewayState`, `PlatformState`, `HermesCronJob`, `CronSchedule`, `CronJobsFile`, `AuthFile`, `AuthEntry`).
- ACP cwd now comes from the *remote* `$HOME`, probed once on first connect and cached per server. Previously it passed your local Mac's home path to the ACP adapter, which only worked by coincidence when the remote username matched.
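
The per-server `$HOME` cache is roughly this shape (`probeHome` stands in for running something like `printf %s "$HOME"` through the server's transport; the API surface is illustrative):

```swift
import Foundation

actor UserHomeCache {
    private var homes: [UUID: String] = [:]

    func home(for server: UUID,
              probeHome: @escaping @Sendable () async throws -> String) async throws -> String {
        if let cached = homes[server] { return cached }
        let home = try await probeHome()   // probed once per server per launch
        homes[server] = home
        return home
    }
}
```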
### Compatibility
Hermes v0.10.0 is now verified alongside v0.6–v0.9. Scarf builds its session/message `SELECT` columns via additive schema detection (`hasV07Schema`), so newer Hermes versions with extra columns don't break queries.
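
The additive detection can be sketched like this (column names are illustrative, not the real schema):

```swift
/// Build the SELECT list from the columns this Hermes version actually
/// has, so newer versions with extra columns never break the query.
func sessionSelectSQL(availableColumns: Set<String>) -> String {
    let base = ["id", "created_at", "title"]
    let v07Extras = ["parent_id", "fork_depth"]   // assumed v0.7+ columns
    let hasV07Schema = v07Extras.allSatisfy(availableColumns.contains)
    let columns = hasV07Schema ? base + v07Extras : base
    return "SELECT \(columns.joined(separator: ", ")) FROM sessions"
}
```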
### Migration from 1.6.x
- Sparkle will offer the update automatically. Trigger manually via **Scarf → Check for Updates…** or the menu bar.
- Your local server is synthesized automatically — existing 1.6.x users see "Local" in the server list with no setup needed.
- `servers.json` is created on first add-remote. Location: `~/Library/Application Support/scarf/servers.json`.
- Nothing you configured in 1.6.x (OAuth tokens, credential pools, cron jobs, MCP servers, platform setup) is touched. Those live in `~/.hermes/` and remain the source of truth.
### Known limitations
- Remote file watching is 3s mtime polling (vs. FSEvents on local). If you need sub-second updates on a remote, that's a followup.
- The `session/load` ACP call against an already-deleted session returns success-with-no-body from the Hermes adapter — Scarf now detects the resulting `stopReason: "refusal"` and surfaces it, but the underlying Hermes behavior is an upstream-adapter bug that should also get a proper error response.
@@ -2,39 +2,83 @@ import Foundation
// MARK: - JSON-RPC Transport
struct ACPRequest: Encodable {
let jsonrpc = "2.0"
let id: Int
let method: String
let params: [String: AnyCodable]
// Hand-written `encode(to:)` / `init(from:)` with explicit `nonisolated` so
// Swift 6's default-isolation doesn't synthesize a MainActor-isolated
// conformance which would prevent these payloads from being encoded or
// decoded inside `ACPClient`'s actor context (the JSON-RPC read/write loop).
// The member list must stay in sync with the stored properties above.
struct ACPRequest: Encodable, Sendable {
nonisolated let jsonrpc = "2.0"
nonisolated let id: Int
nonisolated let method: String
nonisolated let params: [String: AnyCodable]
enum CodingKeys: String, CodingKey { case jsonrpc, id, method, params }
nonisolated func encode(to encoder: any Encoder) throws {
var c = encoder.container(keyedBy: CodingKeys.self)
try c.encode(jsonrpc, forKey: .jsonrpc)
try c.encode(id, forKey: .id)
try c.encode(method, forKey: .method)
try c.encode(params, forKey: .params)
}
}
struct ACPRawMessage: Decodable {
let jsonrpc: String?
let id: Int?
let method: String?
let result: AnyCodable?
let error: ACPError?
let params: AnyCodable?
struct ACPRawMessage: Decodable, Sendable {
nonisolated let jsonrpc: String?
nonisolated let id: Int?
nonisolated let method: String?
nonisolated let result: AnyCodable?
nonisolated let error: ACPError?
nonisolated let params: AnyCodable?
var isResponse: Bool { id != nil && method == nil }
var isNotification: Bool { method != nil && id == nil }
var isRequest: Bool { method != nil && id != nil }
nonisolated var isResponse: Bool { id != nil && method == nil }
nonisolated var isNotification: Bool { method != nil && id == nil }
nonisolated var isRequest: Bool { method != nil && id != nil }
enum CodingKeys: String, CodingKey { case jsonrpc, id, method, result, error, params }
nonisolated init(from decoder: any Decoder) throws {
let c = try decoder.container(keyedBy: CodingKeys.self)
self.jsonrpc = try c.decodeIfPresent(String.self, forKey: .jsonrpc)
self.id = try c.decodeIfPresent(Int.self, forKey: .id)
self.method = try c.decodeIfPresent(String.self, forKey: .method)
self.result = try c.decodeIfPresent(AnyCodable.self, forKey: .result)
self.error = try c.decodeIfPresent(ACPError.self, forKey: .error)
self.params = try c.decodeIfPresent(AnyCodable.self, forKey: .params)
}
}
struct ACPError: Decodable, Sendable {
let code: Int
let message: String
nonisolated let code: Int
nonisolated let message: String
enum CodingKeys: String, CodingKey { case code, message }
nonisolated init(from decoder: any Decoder) throws {
let c = try decoder.container(keyedBy: CodingKeys.self)
self.code = try c.decode(Int.self, forKey: .code)
self.message = try c.decode(String.self, forKey: .message)
}
}
// MARK: - AnyCodable (for dynamic JSON)
struct AnyCodable: Codable, Sendable {
let value: Any
struct AnyCodable: Codable, @unchecked Sendable {
nonisolated let value: Any
init(_ value: Any) { self.value = value }
nonisolated init(_ value: Any) { self.value = value }
init(from decoder: Decoder) throws {
// NOT marked `nonisolated`: Swift's default-isolation treats writes to a
// `let value: Any` stored property as MainActor-isolated even when the
// property is declared nonisolated (Any can't be strictly Sendable, so
// the compiler can't prove the write is safe off-main). Leaving the
// init as default-isolated silences the mutation warnings; the Decodable
// conformance is still usable from ACPClient's nonisolated read loop
// because all callers are already @preconcurrency with respect to
// `AnyCodable` (it's @unchecked Sendable).
init(from decoder: any Decoder) throws {
let container = try decoder.singleValueContainer()
if container.decodeNil() {
value = NSNull()
@@ -55,7 +99,7 @@ struct AnyCodable: Codable, Sendable {
}
}
func encode(to encoder: Encoder) throws {
func encode(to encoder: any Encoder) throws {
var container = encoder.singleValueContainer()
switch value {
case is NSNull:
@@ -79,10 +123,10 @@ struct AnyCodable: Codable, Sendable {
// MARK: - Accessors
var stringValue: String? { value as? String }
var intValue: Int? { value as? Int }
var dictValue: [String: Any]? { value as? [String: Any] }
var arrayValue: [Any]? { value as? [Any] }
nonisolated var stringValue: String? { value as? String }
nonisolated var intValue: Int? { value as? Int }
nonisolated var dictValue: [String: Any]? { value as? [String: Any] }
nonisolated var arrayValue: [Any]? { value as? [Any] }
}
// MARK: - ACP Events (parsed from session/update notifications)
@@ -154,7 +198,7 @@ struct ACPPromptResult: Sendable {
// MARK: - Event Parsing
enum ACPEventParser {
static func parse(notification: ACPRawMessage) -> ACPEvent? {
nonisolated static func parse(notification: ACPRawMessage) -> ACPEvent? {
guard notification.method == "session/update",
let params = notification.params?.dictValue,
let sessionId = params["sessionId"] as? String,
@@ -202,7 +246,7 @@ enum ACPEventParser {
}
}
static func parsePermissionRequest(_ message: ACPRawMessage) -> ACPEvent? {
nonisolated static func parsePermissionRequest(_ message: ACPRawMessage) -> ACPEvent? {
guard message.method == "session/request_permission",
let params = message.params?.dictValue,
let sessionId = params["sessionId"] as? String,
@@ -226,7 +270,7 @@ enum ACPEventParser {
// MARK: - Content Extraction
private static func extractContentText(from update: [String: Any]) -> String {
nonisolated private static func extractContentText(from update: [String: Any]) -> String {
if let content = update["content"] as? [String: Any],
let text = content["text"] as? String {
return text
@@ -234,7 +278,7 @@ enum ACPEventParser {
return ""
}
private static func extractContentArrayText(from update: [String: Any]) -> String {
nonisolated private static func extractContentArrayText(from update: [String: Any]) -> String {
if let contentArray = update["content"] as? [[String: Any]] {
return contentArray.compactMap { item -> String? in
guard let inner = item["content"] as? [String: Any] else { return nil }
@@ -9,7 +9,7 @@ struct AuxiliaryModel: Sendable, Equatable {
var apiKey: String
var timeout: Int
static let empty = AuxiliaryModel(provider: "auto", model: "", baseURL: "", apiKey: "", timeout: 30)
nonisolated static let empty = AuxiliaryModel(provider: "auto", model: "", baseURL: "", apiKey: "", timeout: 30)
}
/// Group of display-related settings mirroring the `display:` block in config.yaml.
@@ -23,7 +23,7 @@ struct DisplaySettings: Sendable, Equatable {
var toolPreviewLength: Int
var busyInputMode: String // e.g. "interrupt"
static let empty = DisplaySettings(
nonisolated static let empty = DisplaySettings(
skin: "default",
compact: false,
resumeDisplay: "full",
@@ -54,7 +54,7 @@ struct TerminalSettings: Sendable, Equatable {
var daytonaImage: String
var singularityImage: String
static let empty = TerminalSettings(
nonisolated static let empty = TerminalSettings(
cwd: ".",
timeout: 180,
envPassthrough: [],
@@ -82,7 +82,7 @@ struct BrowserSettings: Sendable, Equatable {
var allowPrivateURLs: Bool
var camofoxManagedPersistence: Bool
static let empty = BrowserSettings(
nonisolated static let empty = BrowserSettings(
inactivityTimeout: 120,
commandTimeout: 30,
recordSessions: false,
@@ -115,7 +115,7 @@ struct VoiceSettings: Sendable, Equatable {
var sttOpenAIModel: String
var sttMistralModel: String
static let empty = VoiceSettings(
nonisolated static let empty = VoiceSettings(
recordKey: "ctrl+b",
maxRecordingSeconds: 120,
silenceDuration: 3.0,
@@ -147,7 +147,7 @@ struct AuxiliarySettings: Sendable, Equatable {
var mcp: AuxiliaryModel
var flushMemories: AuxiliaryModel
static let empty = AuxiliarySettings(
nonisolated static let empty = AuxiliarySettings(
vision: .empty,
webExtract: .empty,
compression: .empty,
@@ -170,7 +170,7 @@ struct SecuritySettings: Sendable, Equatable {
var blocklistEnabled: Bool
var blocklistDomains: [String]
static let empty = SecuritySettings(
nonisolated static let empty = SecuritySettings(
redactSecrets: true,
redactPII: false,
tirithEnabled: true,
@@ -188,7 +188,7 @@ struct HumanDelaySettings: Sendable, Equatable {
var minMS: Int
var maxMS: Int
static let empty = HumanDelaySettings(mode: "off", minMS: 800, maxMS: 2500)
nonisolated static let empty = HumanDelaySettings(mode: "off", minMS: 800, maxMS: 2500)
}
/// Compression / context routing.
@@ -198,14 +198,14 @@ struct CompressionSettings: Sendable, Equatable {
var targetRatio: Double
var protectLastN: Int
static let empty = CompressionSettings(enabled: true, threshold: 0.5, targetRatio: 0.2, protectLastN: 20)
nonisolated static let empty = CompressionSettings(enabled: true, threshold: 0.5, targetRatio: 0.2, protectLastN: 20)
}
struct CheckpointSettings: Sendable, Equatable {
var enabled: Bool
var maxSnapshots: Int
static let empty = CheckpointSettings(enabled: true, maxSnapshots: 50)
nonisolated static let empty = CheckpointSettings(enabled: true, maxSnapshots: 50)
}
struct LoggingSettings: Sendable, Equatable {
@@ -213,7 +213,7 @@ struct LoggingSettings: Sendable, Equatable {
var maxSizeMB: Int
var backupCount: Int
static let empty = LoggingSettings(level: "INFO", maxSizeMB: 5, backupCount: 3)
nonisolated static let empty = LoggingSettings(level: "INFO", maxSizeMB: 5, backupCount: 3)
}
struct DelegationSettings: Sendable, Equatable {
@@ -223,7 +223,7 @@ struct DelegationSettings: Sendable, Equatable {
var apiKey: String
var maxIterations: Int
static let empty = DelegationSettings(model: "", provider: "", baseURL: "", apiKey: "", maxIterations: 50)
nonisolated static let empty = DelegationSettings(model: "", provider: "", baseURL: "", apiKey: "", maxIterations: 50)
}
/// Discord-specific platform settings (`discord.*`). Other platforms currently have thinner schemas.
@@ -233,7 +233,7 @@ struct DiscordSettings: Sendable, Equatable {
var autoThread: Bool
var reactions: Bool
static let empty = DiscordSettings(requireMention: true, freeResponseChannels: "", autoThread: true, reactions: true)
nonisolated static let empty = DiscordSettings(requireMention: true, freeResponseChannels: "", autoThread: true, reactions: true)
}
/// Telegram settings under `telegram.*` in config.yaml. Most Telegram tuning is
@@ -243,7 +243,7 @@ struct TelegramSettings: Sendable, Equatable {
var requireMention: Bool
var reactions: Bool
static let empty = TelegramSettings(requireMention: true, reactions: false)
nonisolated static let empty = TelegramSettings(requireMention: true, reactions: false)
}
/// Slack settings under `platforms.slack.*` (and a couple of top-level keys).
@@ -253,7 +253,7 @@ struct SlackSettings: Sendable, Equatable {
var replyInThread: Bool
var replyBroadcast: Bool
static let empty = SlackSettings(replyToMode: "first", requireMention: true, replyInThread: true, replyBroadcast: false)
nonisolated static let empty = SlackSettings(replyToMode: "first", requireMention: true, replyInThread: true, replyBroadcast: false)
}
/// Matrix settings under `matrix.*`.
@@ -262,7 +262,7 @@ struct MatrixSettings: Sendable, Equatable {
var autoThread: Bool
var dmMentionThreads: Bool
static let empty = MatrixSettings(requireMention: true, autoThread: true, dmMentionThreads: false)
nonisolated static let empty = MatrixSettings(requireMention: true, autoThread: true, dmMentionThreads: false)
}
/// Mattermost settings. Mattermost is mostly driven by env vars; config.yaml
@@ -272,7 +272,7 @@ struct MattermostSettings: Sendable, Equatable {
var requireMention: Bool
var replyMode: String // "thread" | "off"
static let empty = MattermostSettings(requireMention: true, replyMode: "off")
nonisolated static let empty = MattermostSettings(requireMention: true, replyMode: "off")
}
/// WhatsApp settings under `whatsapp.*`.
@@ -280,7 +280,7 @@ struct WhatsAppSettings: Sendable, Equatable {
var unauthorizedDMBehavior: String // "pair" | "ignore"
var replyPrefix: String
static let empty = WhatsAppSettings(unauthorizedDMBehavior: "pair", replyPrefix: "")
nonisolated static let empty = WhatsAppSettings(unauthorizedDMBehavior: "pair", replyPrefix: "")
}
/// Home Assistant filters under `platforms.homeassistant.extra`. Hermes ignores
@@ -292,7 +292,7 @@ struct HomeAssistantSettings: Sendable, Equatable {
var ignoreEntities: [String]
var cooldownSeconds: Int
static let empty = HomeAssistantSettings(watchDomains: [], watchEntities: [], watchAll: false, ignoreEntities: [], cooldownSeconds: 30)
nonisolated static let empty = HomeAssistantSettings(watchDomains: [], watchEntities: [], watchAll: false, ignoreEntities: [], cooldownSeconds: 30)
}
// MARK: - Root Config
@@ -359,7 +359,7 @@ struct HermesConfig: Sendable {
var whatsapp: WhatsAppSettings
var homeAssistant: HomeAssistantSettings
static let empty = HermesConfig(
nonisolated static let empty = HermesConfig(
model: "unknown",
provider: "unknown",
maxTurns: 0,
@@ -418,13 +418,16 @@ struct HermesConfig: Sendable {
)
}
// Hand-written `init(from:)` so Swift 6 doesn't synthesize a
// MainActor-isolated Decodable conformance (which would fail to be used from
// `HermesFileService.loadGatewayState()`, a nonisolated method).
struct GatewayState: Sendable, Codable {
let pid: Int?
let kind: String?
let gatewayState: String?
let exitReason: String?
let platforms: [String: PlatformState]?
let updatedAt: String?
nonisolated let pid: Int?
nonisolated let kind: String?
nonisolated let gatewayState: String?
nonisolated let exitReason: String?
nonisolated let platforms: [String: PlatformState]?
nonisolated let updatedAt: String?
enum CodingKeys: String, CodingKey {
case pid, kind
@@ -434,16 +437,50 @@ struct GatewayState: Sendable, Codable {
case updatedAt = "updated_at"
}
var isRunning: Bool {
nonisolated init(from decoder: any Decoder) throws {
let c = try decoder.container(keyedBy: CodingKeys.self)
self.pid = try c.decodeIfPresent(Int.self, forKey: .pid)
self.kind = try c.decodeIfPresent(String.self, forKey: .kind)
self.gatewayState = try c.decodeIfPresent(String.self, forKey: .gatewayState)
self.exitReason = try c.decodeIfPresent(String.self, forKey: .exitReason)
self.platforms = try c.decodeIfPresent([String: PlatformState].self, forKey: .platforms)
self.updatedAt = try c.decodeIfPresent(String.self, forKey: .updatedAt)
}
nonisolated func encode(to encoder: any Encoder) throws {
var c = encoder.container(keyedBy: CodingKeys.self)
try c.encodeIfPresent(pid, forKey: .pid)
try c.encodeIfPresent(kind, forKey: .kind)
try c.encodeIfPresent(gatewayState, forKey: .gatewayState)
try c.encodeIfPresent(exitReason, forKey: .exitReason)
try c.encodeIfPresent(platforms, forKey: .platforms)
try c.encodeIfPresent(updatedAt, forKey: .updatedAt)
}
nonisolated var isRunning: Bool {
gatewayState == "running"
}
var statusText: String {
nonisolated var statusText: String {
gatewayState ?? "unknown"
}
}
struct PlatformState: Sendable, Codable {
let connected: Bool?
let error: String?
nonisolated let connected: Bool?
nonisolated let error: String?
enum CodingKeys: String, CodingKey { case connected, error }
nonisolated init(from decoder: any Decoder) throws {
let c = try decoder.container(keyedBy: CodingKeys.self)
self.connected = try c.decodeIfPresent(Bool.self, forKey: .connected)
self.error = try c.decodeIfPresent(String.self, forKey: .error)
}
nonisolated func encode(to encoder: any Encoder) throws {
var c = encoder.container(keyedBy: CodingKeys.self)
try c.encodeIfPresent(connected, forKey: .connected)
try c.encodeIfPresent(error, forKey: .error)
}
}
@@ -1,24 +1,24 @@
import Foundation
struct HermesCronJob: Identifiable, Sendable, Codable {
let id: String
let name: String
let prompt: String
let skills: [String]?
let model: String?
let schedule: CronSchedule
let enabled: Bool
let state: String
let deliver: String?
let nextRunAt: String?
let lastRunAt: String?
let lastError: String?
let preRunScript: String?
let deliveryFailures: Int?
let lastDeliveryError: String?
let timeoutType: String?
let timeoutSeconds: Int?
let silent: Bool?
nonisolated let id: String
nonisolated let name: String
nonisolated let prompt: String
nonisolated let skills: [String]?
nonisolated let model: String?
nonisolated let schedule: CronSchedule
nonisolated let enabled: Bool
nonisolated let state: String
nonisolated let deliver: String?
nonisolated let nextRunAt: String?
nonisolated let lastRunAt: String?
nonisolated let lastError: String?
nonisolated let preRunScript: String?
nonisolated let deliveryFailures: Int?
nonisolated let lastDeliveryError: String?
nonisolated let timeoutType: String?
nonisolated let timeoutSeconds: Int?
nonisolated let silent: Bool?
enum CodingKeys: String, CodingKey {
case id, name, prompt, skills, model, schedule, enabled, state, deliver, silent
@@ -32,7 +32,51 @@ struct HermesCronJob: Identifiable, Sendable, Codable {
case timeoutSeconds = "timeout_seconds"
}
var stateIcon: String {
nonisolated init(from decoder: any Decoder) throws {
let c = try decoder.container(keyedBy: CodingKeys.self)
self.id = try c.decode(String.self, forKey: .id)
self.name = try c.decode(String.self, forKey: .name)
self.prompt = try c.decode(String.self, forKey: .prompt)
self.skills = try c.decodeIfPresent([String].self, forKey: .skills)
self.model = try c.decodeIfPresent(String.self, forKey: .model)
self.schedule = try c.decode(CronSchedule.self, forKey: .schedule)
self.enabled = try c.decode(Bool.self, forKey: .enabled)
self.state = try c.decode(String.self, forKey: .state)
self.deliver = try c.decodeIfPresent(String.self, forKey: .deliver)
self.nextRunAt = try c.decodeIfPresent(String.self, forKey: .nextRunAt)
self.lastRunAt = try c.decodeIfPresent(String.self, forKey: .lastRunAt)
self.lastError = try c.decodeIfPresent(String.self, forKey: .lastError)
self.preRunScript = try c.decodeIfPresent(String.self, forKey: .preRunScript)
self.deliveryFailures = try c.decodeIfPresent(Int.self, forKey: .deliveryFailures)
self.lastDeliveryError = try c.decodeIfPresent(String.self, forKey: .lastDeliveryError)
self.timeoutType = try c.decodeIfPresent(String.self, forKey: .timeoutType)
self.timeoutSeconds = try c.decodeIfPresent(Int.self, forKey: .timeoutSeconds)
self.silent = try c.decodeIfPresent(Bool.self, forKey: .silent)
}
nonisolated func encode(to encoder: any Encoder) throws {
var c = encoder.container(keyedBy: CodingKeys.self)
try c.encode(id, forKey: .id)
try c.encode(name, forKey: .name)
try c.encode(prompt, forKey: .prompt)
try c.encodeIfPresent(skills, forKey: .skills)
try c.encodeIfPresent(model, forKey: .model)
try c.encode(schedule, forKey: .schedule)
try c.encode(enabled, forKey: .enabled)
try c.encode(state, forKey: .state)
try c.encodeIfPresent(deliver, forKey: .deliver)
try c.encodeIfPresent(nextRunAt, forKey: .nextRunAt)
try c.encodeIfPresent(lastRunAt, forKey: .lastRunAt)
try c.encodeIfPresent(lastError, forKey: .lastError)
try c.encodeIfPresent(preRunScript, forKey: .preRunScript)
try c.encodeIfPresent(deliveryFailures, forKey: .deliveryFailures)
try c.encodeIfPresent(lastDeliveryError, forKey: .lastDeliveryError)
try c.encodeIfPresent(timeoutType, forKey: .timeoutType)
try c.encodeIfPresent(timeoutSeconds, forKey: .timeoutSeconds)
try c.encodeIfPresent(silent, forKey: .silent)
}
nonisolated var stateIcon: String {
switch state {
case "scheduled": return "clock"
case "running": return "play.circle"
@@ -42,7 +86,7 @@ struct HermesCronJob: Identifiable, Sendable, Codable {
}
}
var deliveryDisplay: String? {
nonisolated var deliveryDisplay: String? {
guard let deliver, !deliver.isEmpty else { return nil }
// v0.9.0 extends Discord routing to threads: `discord:<chat>:<thread>`.
if deliver.hasPrefix("discord:") {
@@ -59,10 +103,10 @@ struct HermesCronJob: Identifiable, Sendable, Codable {
}
struct CronSchedule: Sendable, Codable {
let kind: String
let runAt: String?
let display: String?
let expression: String?
nonisolated let kind: String
nonisolated let runAt: String?
nonisolated let display: String?
nonisolated let expression: String?
enum CodingKeys: String, CodingKey {
case kind
@@ -70,14 +114,45 @@ struct CronSchedule: Sendable, Codable {
case display
case expression
}
nonisolated init(from decoder: any Decoder) throws {
let c = try decoder.container(keyedBy: CodingKeys.self)
self.kind = try c.decode(String.self, forKey: .kind)
self.runAt = try c.decodeIfPresent(String.self, forKey: .runAt)
self.display = try c.decodeIfPresent(String.self, forKey: .display)
self.expression = try c.decodeIfPresent(String.self, forKey: .expression)
}
nonisolated func encode(to encoder: any Encoder) throws {
var c = encoder.container(keyedBy: CodingKeys.self)
try c.encode(kind, forKey: .kind)
try c.encodeIfPresent(runAt, forKey: .runAt)
try c.encodeIfPresent(display, forKey: .display)
try c.encodeIfPresent(expression, forKey: .expression)
}
}
// Hand-written `init(from:)` / `encode(to:)` so Swift 6 doesn't synthesize a
// MainActor-isolated Codable conformance; `HermesFileService.loadCronJobs`
// is nonisolated and needs to decode this from a background task.
struct CronJobsFile: Sendable, Codable {
let jobs: [HermesCronJob]
let updatedAt: String?
nonisolated let jobs: [HermesCronJob]
nonisolated let updatedAt: String?
enum CodingKeys: String, CodingKey {
case jobs
case updatedAt = "updated_at"
}
nonisolated init(from decoder: any Decoder) throws {
let c = try decoder.container(keyedBy: CodingKeys.self)
self.jobs = try c.decode([HermesCronJob].self, forKey: .jobs)
self.updatedAt = try c.decodeIfPresent(String.self, forKey: .updatedAt)
}
nonisolated func encode(to encoder: any Encoder) throws {
var c = encoder.container(keyedBy: CodingKeys.self)
try c.encode(jobs, forKey: .jobs)
try c.encodeIfPresent(updatedAt, forKey: .updatedAt)
}
}
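The hand-written conformances above all repeat one shape. A minimal standalone sketch of the pattern, using a hypothetical `Example` type that is not part of this changeset:

```swift
import Foundation

// Under Swift 6 default-isolation rules, a type declared in a file that is
// inferred @MainActor would get a MainActor-isolated synthesized Codable
// conformance. Hand-writing `init(from:)` / `encode(to:)` as `nonisolated`
// keeps decoding legal from background tasks.
struct Example: Sendable, Codable {
    nonisolated let name: String
    nonisolated let count: Int?

    enum CodingKeys: String, CodingKey { case name, count }

    nonisolated init(from decoder: any Decoder) throws {
        let c = try decoder.container(keyedBy: CodingKeys.self)
        self.name = try c.decode(String.self, forKey: .name)
        self.count = try c.decodeIfPresent(Int.self, forKey: .count)
    }

    nonisolated func encode(to encoder: any Encoder) throws {
        var c = encoder.container(keyedBy: CodingKeys.self)
        try c.encode(name, forKey: .name)
        try c.encodeIfPresent(count, forKey: .count)
    }
}
```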
@@ -25,44 +25,44 @@ struct HermesPathSet: Sendable, Hashable {
// MARK: - Defaults
/// Absolute path to the local user's `~/.hermes` directory.
static let defaultLocalHome: String = {
nonisolated static let defaultLocalHome: String = {
let user = ProcessInfo.processInfo.environment["HOME"] ?? NSHomeDirectory()
return user + "/.hermes"
}()
/// Default remote home when the user doesn't override it in `SSHConfig`.
/// We leave `~` unexpanded on purpose; the remote shell resolves it.
static let defaultRemoteHome: String = "~/.hermes"
nonisolated static let defaultRemoteHome: String = "~/.hermes"
// MARK: - Paths (mirror of the old HermesPaths layout)
var stateDB: String { home + "/state.db" }
var configYAML: String { home + "/config.yaml" }
var envFile: String { home + "/.env" }
var authJSON: String { home + "/auth.json" }
var soulMD: String { home + "/SOUL.md" }
var pluginsDir: String { home + "/plugins" }
var memoriesDir: String { home + "/memories" }
var memoryMD: String { memoriesDir + "/MEMORY.md" }
var userMD: String { memoriesDir + "/USER.md" }
var sessionsDir: String { home + "/sessions" }
var cronJobsJSON: String { home + "/cron/jobs.json" }
var cronOutputDir: String { home + "/cron/output" }
var gatewayStateJSON: String { home + "/gateway_state.json" }
var skillsDir: String { home + "/skills" }
var errorsLog: String { home + "/logs/errors.log" }
var agentLog: String { home + "/logs/agent.log" }
var gatewayLog: String { home + "/logs/gateway.log" }
var scarfDir: String { home + "/scarf" }
var projectsRegistry: String { scarfDir + "/projects.json" }
var mcpTokensDir: String { home + "/mcp-tokens" }
nonisolated var stateDB: String { home + "/state.db" }
nonisolated var configYAML: String { home + "/config.yaml" }
nonisolated var envFile: String { home + "/.env" }
nonisolated var authJSON: String { home + "/auth.json" }
nonisolated var soulMD: String { home + "/SOUL.md" }
nonisolated var pluginsDir: String { home + "/plugins" }
nonisolated var memoriesDir: String { home + "/memories" }
nonisolated var memoryMD: String { memoriesDir + "/MEMORY.md" }
nonisolated var userMD: String { memoriesDir + "/USER.md" }
nonisolated var sessionsDir: String { home + "/sessions" }
nonisolated var cronJobsJSON: String { home + "/cron/jobs.json" }
nonisolated var cronOutputDir: String { home + "/cron/output" }
nonisolated var gatewayStateJSON: String { home + "/gateway_state.json" }
nonisolated var skillsDir: String { home + "/skills" }
nonisolated var errorsLog: String { home + "/logs/errors.log" }
nonisolated var agentLog: String { home + "/logs/agent.log" }
nonisolated var gatewayLog: String { home + "/logs/gateway.log" }
nonisolated var scarfDir: String { home + "/scarf" }
nonisolated var projectsRegistry: String { scarfDir + "/projects.json" }
nonisolated var mcpTokensDir: String { home + "/mcp-tokens" }
// MARK: - Binary resolution
/// Install locations we probe for the local `hermes` binary, in priority
/// order. Checked on every access so a user installing via a different
/// method doesn't need to relaunch Scarf.
static let hermesBinaryCandidates: [String] = {
nonisolated static let hermesBinaryCandidates: [String] = {
let user = ProcessInfo.processInfo.environment["HOME"] ?? NSHomeDirectory()
return [
user + "/.local/bin/hermes", // pipx / pip --user (default)
@@ -79,7 +79,7 @@ struct HermesPathSet: Sendable, Hashable {
///
/// Remote: returns `binaryHint` (populated at connect time) or bare
/// `"hermes"` as a last-resort default that relies on the remote `$PATH`.
var hermesBinary: String {
nonisolated var hermesBinary: String {
if isRemote {
return binaryHint ?? "hermes"
}
@@ -42,6 +42,15 @@ enum ServerKind: Sendable, Hashable, Codable {
/// every service and ViewModel in Phase 1. One `ServerContext` corresponds to
/// one Hermes installation; multi-window scenes in Phase 3 will construct
/// one per window.
///
/// **Why every member is `nonisolated`.** This file imports `AppKit`
/// (`NSWorkspace.shared.open` in `openInLocalEditor`), which under Swift 6's
/// upcoming default-isolation rules pulls the whole struct to `@MainActor`.
/// `ServerContext` is a plain `Sendable` value; accessing `.local`, `.paths`,
/// `.isRemote`, or `makeTransport()` from a background actor must not force
/// the caller to hop to the MainActor. `nonisolated` on each member keeps
/// them callable from any context; the one MainActor-dependent method
/// (`openInLocalEditor`) lives in the extension below.
struct ServerContext: Sendable, Hashable, Identifiable {
let id: ServerID
var displayName: String
@@ -49,7 +58,7 @@ struct ServerContext: Sendable, Hashable, Identifiable {
/// Path layout for this server. Cheap: all path components are computed
/// on demand from `home`; no I/O.
var paths: HermesPathSet {
nonisolated var paths: HermesPathSet {
switch kind {
case .local:
return HermesPathSet(
@@ -66,7 +75,7 @@ struct ServerContext: Sendable, Hashable, Identifiable {
}
}
var isRemote: Bool {
nonisolated var isRemote: Bool {
if case .ssh = kind { return true }
return false
}
@@ -75,7 +84,7 @@ struct ServerContext: Sendable, Hashable, Identifiable {
/// a `LocalTransport`; SSH contexts get an `SSHTransport` configured
/// from `SSHConfig`. Each call returns a fresh value transports are
/// cheap and stateless beyond disk caches.
func makeTransport() -> any ServerTransport {
nonisolated func makeTransport() -> any ServerTransport {
switch kind {
case .local:
return LocalTransport(contextID: id)
@@ -90,17 +99,72 @@ struct ServerContext: Sendable, Hashable, Identifiable {
/// local context has the same identity across launches, and so persisted
/// window-state restorations that reference it continue to resolve even
/// if `servers.json` hasn't been touched yet.
private static let localID = ServerID(uuidString: "00000000-0000-0000-0000-000000000001")!
nonisolated private static let localID = ServerID(uuidString: "00000000-0000-0000-0000-000000000001")!
/// The default "this machine" context. Used everywhere in Phase 0/1 and
/// remains the fallback when no remote server is selected.
static let local = ServerContext(
nonisolated static let local = ServerContext(
id: localID,
displayName: "Local",
kind: .local
)
}
// MARK: - Remote user-home resolution
/// Process-wide cache of each server's resolved user `$HOME`. Probed once per
/// `ServerID` via the transport, then memoized for the app's lifetime; home
/// directories don't change under us, and the probe is a ~5ms SSH round-trip
/// with ControlMaster. Used by anything that needs to hand a working
/// directory to the ACP agent or the Hermes CLI on the correct host.
private actor UserHomeCache {
static let shared = UserHomeCache()
private var cache: [ServerID: String] = [:]
func resolve(for context: ServerContext) async -> String {
if let cached = cache[context.id] { return cached }
let resolved = await probe(context: context)
cache[context.id] = resolved
return resolved
}
func invalidate(contextID: ServerID) {
cache.removeValue(forKey: contextID)
}
private func probe(context: ServerContext) async -> String {
if !context.isRemote { return NSHomeDirectory() }
let transport = context.makeTransport()
let result = try? transport.runProcess(
executable: "/bin/sh",
args: ["-c", "echo $HOME"],
stdin: nil,
timeout: 10
)
let out = result?.stdoutString.trimmingCharacters(in: .whitespacesAndNewlines) ?? ""
// Fall back to `~` (unexpanded) so ACP at least gets a plausible cwd
// rather than a local Mac path. The remote side will expand it if
// passed through a shell; if not, failures are surfaced by ACP itself.
return out.isEmpty ? "~" : out
}
}
extension ServerContext {
/// Resolved absolute path to the user's home directory on the target host.
/// Local: `NSHomeDirectory()`. Remote: probed `$HOME` over SSH, cached.
/// Use this, not `NSHomeDirectory()`, whenever you're passing a `cwd`
/// or user path to a process that runs on the target host.
func resolvedUserHome() async -> String {
await UserHomeCache.shared.resolve(for: self)
}
/// Called when a server is removed from the registry, so the process-wide
/// caches keyed by `ServerID` don't hold stale entries forever.
static func invalidateCaches(for contextID: ServerID) async {
await UserHomeCache.shared.invalidate(contextID: contextID)
}
}
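The probe-and-cache flow above keeps call sites to one line. A hedged sketch of the intended usage (the `startACPSession` name and spawn path are illustrative, not part of this changeset):

```swift
func startACPSession(on context: ServerContext) async {
    // Correct on both local and remote hosts: NSHomeDirectory() locally,
    // a once-probed, cached remote $HOME over SSH otherwise.
    let cwd = await context.resolvedUserHome()
    // ... pass `cwd` to the ACP agent spawn path on the target host ...
}
```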
// MARK: - Convenience file I/O via the right transport
/// Centralized file I/O entry points for VMs that don't own a service. Every
@@ -114,20 +178,20 @@ struct ServerContext: Sendable, Hashable, Identifiable {
extension ServerContext {
/// Read a UTF-8 text file. `nil` on any error (missing, transport down,
/// invalid encoding).
func readText(_ path: String) -> String? {
nonisolated func readText(_ path: String) -> String? {
guard let data = try? makeTransport().readFile(path) else { return nil }
return String(data: data, encoding: .utf8)
}
/// Read raw bytes. `nil` on any error.
func readData(_ path: String) -> Data? {
nonisolated func readData(_ path: String) -> Data? {
try? makeTransport().readFile(path)
}
/// Atomic write. Returns `true` on success, `false` on any error
/// (caller is expected to surface failures via UI when relevant).
@discardableResult
func writeText(_ path: String, content: String) -> Bool {
nonisolated func writeText(_ path: String, content: String) -> Bool {
guard let data = content.data(using: .utf8) else { return false }
do {
try makeTransport().writeFile(path, data: data)
@@ -138,12 +202,12 @@ extension ServerContext {
}
/// Existence check. Local: `FileManager`. Remote: `ssh test -e`.
func fileExists(_ path: String) -> Bool {
nonisolated func fileExists(_ path: String) -> Bool {
makeTransport().fileExists(path)
}
/// File modification timestamp, or `nil` if the file doesn't exist.
func modificationDate(_ path: String) -> Date? {
nonisolated func modificationDate(_ path: String) -> Date? {
makeTransport().stat(path)?.mtime
}
@@ -153,7 +217,7 @@ extension ServerContext {
/// to fire off a CLI command never spawn `hermes` via `Process()`
/// directly, because that path bypasses the transport for remote.
@discardableResult
func runHermes(_ args: [String], timeout: TimeInterval = 60, stdin: String? = nil) -> (output: String, exitCode: Int32) {
nonisolated func runHermes(_ args: [String], timeout: TimeInterval = 60, stdin: String? = nil) -> (output: String, exitCode: Int32) {
let result = HermesFileService(context: self).runHermesCLI(args: args, timeout: timeout, stdinInput: stdin)
return (result.output, result.exitCode)
}
@@ -98,11 +98,39 @@ final class ServerRegistry {
}
func removeServer(_ id: ServerID) {
// Grab the entry BEFORE removing it so we can tear down its transport
// state. Without this the user would leak a ControlMaster socket
// (~10min TTL) and a snapshot cache dir (indefinite) per removed
// server; harmless individually, ugly at scale.
let removed = entries.first { $0.id == id }
entries.removeAll { $0.id == id }
save()
if let removed, case .ssh(let config) = removed.kind {
let transport = SSHTransport(contextID: id, config: config, displayName: removed.displayName)
transport.closeControlMaster()
}
SSHTransport.pruneSnapshotCache(for: id)
// Drop process-wide cache entries keyed on this ServerID so a future
// re-add with a colliding ID (theoretical, since UUIDs are random, but
// be defensive) doesn't serve stale data.
Task.detached { await ServerContext.invalidateCaches(for: id) }
onEntriesChanged?()
}
// MARK: - App-launch sweep
/// Remove snapshot cache directories whose UUID isn't in the current
/// registry. Handles the case where the user removed a server while the
/// app was closed; we want the cache to converge to the registry's
/// state at launch rather than carrying orphans forever.
func sweepOrphanCaches() {
var keep: Set<ServerID> = [ServerContext.local.id]
for entry in entries { keep.insert(entry.id) }
SSHTransport.sweepOrphanSnapshots(keeping: keep)
}
// MARK: - Persistence
private func load() {
@@ -436,14 +436,14 @@ actor ACPClient {
guard !lineData.isEmpty else { continue }
if let lineStr = String(data: lineData, encoding: .utf8) {
await self?.logger.debug("ACP recv: \(lineStr.prefix(200))")
self?.logger.debug("ACP recv: \(lineStr.prefix(200))")
}
do {
let message = try JSONDecoder().decode(ACPRawMessage.self, from: lineData)
await self?.handleMessage(message)
} catch {
await self?.logger.warning("Failed to decode ACP message: \(error.localizedDescription)")
self?.logger.warning("Failed to decode ACP message: \(error.localizedDescription)")
}
}
}
@@ -459,7 +459,7 @@ actor ACPClient {
if data.isEmpty { break }
if let text = String(data: data, encoding: .utf8)?.trimmingCharacters(in: .whitespacesAndNewlines),
!text.isEmpty {
await self?.logger.info("ACP stderr: \(text.prefix(500))")
self?.logger.info("ACP stderr: \(text.prefix(500))")
await self?.appendStderr(text)
}
}
@@ -1,6 +1,33 @@
import Foundation
import SQLite3
/// Dedupes concurrent `snapshotSQLite` calls for the same server. When the
/// file watcher ticks, Dashboard + Sessions + Activity (+ Chat's loadHistory)
/// can all ask for a fresh snapshot within the same millisecond. Without
/// coordination, each would spawn its own `ssh host sqlite3 .backup; scp`
/// round-trip: three parallel backups of the same DB. Callers in flight for
/// the same `ServerID` await the first caller's Task and share its result.
actor SnapshotCoordinator {
static let shared = SnapshotCoordinator()
private var inFlight: [ServerID: Task<URL, Error>] = [:]
func snapshot(
remotePath: String,
contextID: ServerID,
transport: any ServerTransport
) async throws -> URL {
if let existing = inFlight[contextID] {
return try await existing.value
}
let task = Task<URL, Error> {
try transport.snapshotSQLite(remotePath: remotePath)
}
inFlight[contextID] = task
defer { inFlight[contextID] = nil }
return try await task.value
}
}
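Usage is transparent to callers; concurrent requests for the same server collapse into one backup. A sketch, assuming `server`, `paths`, and `transport` are in scope (illustrative names, not from this changeset):

```swift
func refreshAll() async throws {
    // Dashboard and Sessions refreshing on the same watcher tick:
    async let dashboardDB = SnapshotCoordinator.shared.snapshot(
        remotePath: paths.stateDB, contextID: server.id, transport: transport)
    async let sessionsDB = SnapshotCoordinator.shared.snapshot(
        remotePath: paths.stateDB, contextID: server.id, transport: transport)
    // One `ssh … sqlite3 .backup; scp` round-trip; the second caller
    // awaits the first caller's Task and receives the same URL.
    let (a, b) = try await (dashboardDB, sessionsDB)
    _ = (a, b)
}
```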
actor HermesDataService {
private var db: OpaquePointer?
private var hasV07Schema = false
@@ -16,17 +43,22 @@ actor HermesDataService {
self.transport = context.makeTransport()
}
func open() -> Bool {
func open() async -> Bool {
if db != nil { return true }
let localPath: String
if context.isRemote {
// Pull a fresh snapshot from the remote host. Uses `sqlite3
// .backup` on the remote, which is WAL-safe; a plain cp would
// corrupt.
guard let snapshotURL = try? transport.snapshotSQLite(remotePath: context.paths.stateDB) else {
return false
}
localPath = snapshotURL.path
// corrupt. Routed through SnapshotCoordinator so concurrent
// view models don't each spawn a parallel SSH backup for the
// same server.
let url = try? await SnapshotCoordinator.shared.snapshot(
remotePath: context.paths.stateDB,
contextID: context.id,
transport: transport
)
guard let url else { return false }
localPath = url.path
} else {
localPath = context.paths.stateDB
guard FileManager.default.fileExists(atPath: localPath) else { return false }
@@ -57,13 +89,17 @@ actor HermesDataService {
return true
}
/// Force a fresh snapshot pull + reopen. Used by the file watcher tick
/// and by remote-write code paths that need the UI to reflect changes
/// Hermes just made. Local contexts reopen in place since the on-disk
/// file is already authoritative.
func refresh() {
/// Force a fresh snapshot pull + reopen. Used on session-load and in
/// any path that needs the UI to reflect writes Hermes just made.
/// Without this, remote snapshots would be frozen at the first `open()`
/// for the app's lifetime; new messages added to a resumed session
/// would never appear because the snapshot was pulled before they were
/// written. Local contexts pay essentially nothing: close+reopen on a
/// live DB is a no-op.
@discardableResult
func refresh() async -> Bool {
close()
_ = open()
return await open()
}
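The session-load paths in this release sit on top of `refresh()`. A hedged sketch of the call pattern (the `loadSessionHistory` body, `messages(for:)`, and `Message` are illustrative, not from this changeset):

```swift
func loadSessionHistory(sessionID: String) async -> [Message] {
    // Force a fresh snapshot first, so resuming a session on a remote
    // shows messages persisted since launch, not the launch-time snapshot.
    guard await dataService.refresh() else { return [] }
    return await dataService.messages(for: sessionID)
}
```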
func close() {
@@ -24,7 +24,7 @@ struct HermesEnvService: Sendable {
let path: String
let transport: any ServerTransport
init(context: ServerContext = .local) {
nonisolated init(context: ServerContext = .local) {
self.path = context.paths.envFile
self.transport = context.makeTransport()
}
@@ -5,19 +5,19 @@ struct HermesFileService: Sendable {
let context: ServerContext
let transport: any ServerTransport
init(context: ServerContext = .local) {
nonisolated init(context: ServerContext = .local) {
self.context = context
self.transport = context.makeTransport()
}
// MARK: - Config
func loadConfig() -> HermesConfig {
nonisolated func loadConfig() -> HermesConfig {
guard let content = readFile(context.paths.configYAML) else { return .empty }
return parseConfig(content)
}
private func parseConfig(_ yaml: String) -> HermesConfig {
nonisolated private func parseConfig(_ yaml: String) -> HermesConfig {
let parsed = Self.parseNestedYAML(yaml)
let values = parsed.values
let lists = parsed.lists
@@ -380,7 +380,7 @@ struct HermesFileService: Sendable {
// MARK: - Gateway State
func loadGatewayState() -> GatewayState? {
nonisolated func loadGatewayState() -> GatewayState? {
guard let data = readFileData(context.paths.gatewayStateJSON) else { return nil }
do {
return try JSONDecoder().decode(GatewayState.self, from: data)
@@ -392,7 +392,7 @@ struct HermesFileService: Sendable {
// MARK: - Memory
func loadMemoryProfiles() -> [String] {
nonisolated func loadMemoryProfiles() -> [String] {
guard let entries = try? transport.listDirectory(context.paths.memoriesDir) else { return [] }
return entries.filter { name in
let path = context.paths.memoriesDir + "/" + name
@@ -400,27 +400,27 @@ struct HermesFileService: Sendable {
}.sorted()
}
func loadMemory(profile: String = "") -> String {
nonisolated func loadMemory(profile: String = "") -> String {
let path = memoryPath(profile: profile, file: "MEMORY.md")
return readFile(path) ?? ""
}
func loadUserProfile(profile: String = "") -> String {
nonisolated func loadUserProfile(profile: String = "") -> String {
let path = memoryPath(profile: profile, file: "USER.md")
return readFile(path) ?? ""
}
func saveMemory(_ content: String, profile: String = "") {
nonisolated func saveMemory(_ content: String, profile: String = "") {
let path = memoryPath(profile: profile, file: "MEMORY.md")
writeFile(path, content: content)
}
func saveUserProfile(_ content: String, profile: String = "") {
nonisolated func saveUserProfile(_ content: String, profile: String = "") {
let path = memoryPath(profile: profile, file: "USER.md")
writeFile(path, content: content)
}
private func memoryPath(profile: String, file: String) -> String {
nonisolated private func memoryPath(profile: String, file: String) -> String {
if profile.isEmpty {
return context.paths.memoriesDir + "/" + file
}
@@ -429,7 +429,7 @@ struct HermesFileService: Sendable {
// MARK: - Cron
func loadCronJobs() -> [HermesCronJob] {
nonisolated func loadCronJobs() -> [HermesCronJob] {
guard let data = readFileData(context.paths.cronJobsJSON) else { return [] }
do {
let file = try JSONDecoder().decode(CronJobsFile.self, from: data)
@@ -440,7 +440,7 @@ struct HermesFileService: Sendable {
}
}
func loadCronOutput(jobId: String) -> String? {
nonisolated func loadCronOutput(jobId: String) -> String? {
let dir = context.paths.cronOutputDir
guard let files = try? transport.listDirectory(dir) else { return nil }
let matching = files.filter { $0.contains(jobId) }.sorted().last
@@ -450,7 +450,7 @@ struct HermesFileService: Sendable {
// MARK: - Skills
func loadSkills() -> [HermesSkillCategory] {
nonisolated func loadSkills() -> [HermesSkillCategory] {
let dir = context.paths.skillsDir
guard let categories = try? transport.listDirectory(dir) else { return [] }
@@ -479,17 +479,17 @@ struct HermesFileService: Sendable {
}
}
func loadSkillContent(path: String) -> String {
nonisolated func loadSkillContent(path: String) -> String {
guard isValidSkillPath(path) else { return "" }
return readFile(path) ?? ""
}
func saveSkillContent(path: String, content: String) {
nonisolated func saveSkillContent(path: String, content: String) {
guard isValidSkillPath(path) else { return }
writeFile(path, content: content)
}
private func isValidSkillPath(_ path: String) -> Bool {
nonisolated private func isValidSkillPath(_ path: String) -> Bool {
guard !path.contains(".."), path.hasPrefix(context.paths.skillsDir) else {
print("[Scarf] Rejected skill path outside skills directory: \(path)")
return false
@@ -497,7 +497,7 @@ struct HermesFileService: Sendable {
return true
}
private func parseSkillRequiredConfig(_ path: String) -> [String] {
nonisolated private func parseSkillRequiredConfig(_ path: String) -> [String] {
guard let content = readFile(path) else { return [] }
var result: [String] = []
var inRequiredConfig = false
@@ -523,7 +523,7 @@ struct HermesFileService: Sendable {
// MARK: - MCP Servers
func loadMCPServers() -> [HermesMCPServer] {
nonisolated func loadMCPServers() -> [HermesMCPServer] {
guard let yaml = readFile(context.paths.configYAML) else { return [] }
let parsed = parseMCPServersBlock(yaml: yaml)
return parsed.map { server in
@@ -555,7 +555,7 @@ struct HermesFileService: Sendable {
/// Args are written separately via `setMCPServerArgs` to avoid argparse issues with `-`-prefixed args like `-y`.
/// Pipes `y\n` because the CLI prompts to save even when the initial connection check fails (which it will, since we intentionally add no args first).
@discardableResult
func addMCPServerStdio(name: String, command: String, args: [String]) -> (exitCode: Int32, output: String) {
nonisolated func addMCPServerStdio(name: String, command: String, args: [String]) -> (exitCode: Int32, output: String) {
let addResult = runHermesCLI(
args: ["mcp", "add", name, "--command", command],
timeout: 45,
@@ -569,7 +569,7 @@ struct HermesFileService: Sendable {
}
@discardableResult
func addMCPServerHTTP(name: String, url: String, auth: String?) -> (exitCode: Int32, output: String) {
nonisolated func addMCPServerHTTP(name: String, url: String, auth: String?) -> (exitCode: Int32, output: String) {
var cliArgs: [String] = ["mcp", "add", name, "--url", url]
if let auth, !auth.isEmpty {
cliArgs.append(contentsOf: ["--auth", auth])
@@ -578,14 +578,14 @@ struct HermesFileService: Sendable {
}
@discardableResult
func setMCPServerArgs(name: String, args: [String]) -> Bool {
nonisolated func setMCPServerArgs(name: String, args: [String]) -> Bool {
patchMCPServerField(name: name) { entryLines in
Self.replaceOrInsertList(header: "args", items: args, in: &entryLines)
}
}
@discardableResult
func removeMCPServer(name: String) -> (exitCode: Int32, output: String) {
nonisolated func removeMCPServer(name: String) -> (exitCode: Int32, output: String) {
runHermesCLI(args: ["mcp", "remove", name], timeout: 30)
}
@@ -614,7 +614,7 @@ struct HermesFileService: Sendable {
)
}
private static func parseToolListFromTestOutput(_ output: String) -> [String] {
nonisolated private static func parseToolListFromTestOutput(_ output: String) -> [String] {
var tools: [String] = []
for rawLine in output.components(separatedBy: "\n") {
let line = rawLine.trimmingCharacters(in: .whitespaces)
@@ -630,35 +630,35 @@ struct HermesFileService: Sendable {
}
@discardableResult
func toggleMCPServerEnabled(name: String, enabled: Bool) -> Bool {
nonisolated func toggleMCPServerEnabled(name: String, enabled: Bool) -> Bool {
patchMCPServerField(name: name) { entryLines in
Self.replaceOrInsertScalar(key: "enabled", value: enabled ? "true" : "false", in: &entryLines)
}
}
@discardableResult
func setMCPServerEnv(name: String, env: [String: String]) -> Bool {
nonisolated func setMCPServerEnv(name: String, env: [String: String]) -> Bool {
patchMCPServerField(name: name) { entryLines in
Self.replaceOrInsertSubMap(header: "env", map: env, in: &entryLines)
}
}
@discardableResult
func setMCPServerHeaders(name: String, headers: [String: String]) -> Bool {
nonisolated func setMCPServerHeaders(name: String, headers: [String: String]) -> Bool {
patchMCPServerField(name: name) { entryLines in
Self.replaceOrInsertSubMap(header: "headers", map: headers, in: &entryLines)
}
}
@discardableResult
func updateMCPToolFilters(name: String, include: [String], exclude: [String], resources: Bool, prompts: Bool) -> Bool {
nonisolated func updateMCPToolFilters(name: String, include: [String], exclude: [String], resources: Bool, prompts: Bool) -> Bool {
patchMCPServerField(name: name) { entryLines in
Self.replaceOrInsertToolsBlock(include: include, exclude: exclude, resources: resources, prompts: prompts, in: &entryLines)
}
}
@discardableResult
func setMCPServerTimeouts(name: String, timeout: Int?, connectTimeout: Int?) -> Bool {
nonisolated func setMCPServerTimeouts(name: String, timeout: Int?, connectTimeout: Int?) -> Bool {
patchMCPServerField(name: name) { entryLines in
if let timeout {
Self.replaceOrInsertScalar(key: "timeout", value: String(timeout), in: &entryLines)
@@ -674,7 +674,7 @@ struct HermesFileService: Sendable {
}
@discardableResult
func deleteMCPOAuthToken(name: String) -> Bool {
nonisolated func deleteMCPOAuthToken(name: String) -> Bool {
let path = context.paths.mcpTokensDir + "/" + name + ".json"
do {
try transport.removeFile(path)
@@ -685,7 +685,7 @@ struct HermesFileService: Sendable {
}
@discardableResult
func restartGateway() -> (exitCode: Int32, output: String) {
nonisolated func restartGateway() -> (exitCode: Int32, output: String) {
runHermesCLI(args: ["gateway", "restart"], timeout: 30)
}
@@ -697,7 +697,7 @@ struct HermesFileService: Sendable {
let suffix: [String]
}
private func extractMCPBlock(yaml: String) -> MCPBlockLocation {
nonisolated private func extractMCPBlock(yaml: String) -> MCPBlockLocation {
let lines = yaml.components(separatedBy: "\n")
var blockStart = -1
var blockEnd = lines.count
@@ -740,7 +740,7 @@ struct HermesFileService: Sendable {
)
}
fileprivate func parseMCPServersBlock(yaml: String) -> [HermesMCPServer] {
nonisolated fileprivate func parseMCPServersBlock(yaml: String) -> [HermesMCPServer] {
let location = extractMCPBlock(yaml: yaml)
guard location.block.count > 1 else { return [] }
@@ -876,7 +876,7 @@ struct HermesFileService: Sendable {
// MARK: - MCP YAML: surgical patcher
private func patchMCPServerField(name: String, mutate: (inout [String]) -> Void) -> Bool {
nonisolated private func patchMCPServerField(name: String, mutate: (inout [String]) -> Void) -> Bool {
guard let yaml = readFile(context.paths.configYAML) else { return false }
let location = extractMCPBlock(yaml: yaml)
guard !location.block.isEmpty else { return false }
@@ -932,7 +932,7 @@ struct HermesFileService: Sendable {
// MARK: - MCP YAML: mutators
private static func replaceOrInsertScalar(key: String, value: String, in lines: inout [String]) {
nonisolated private static func replaceOrInsertScalar(key: String, value: String, in lines: inout [String]) {
// entry header is at lines[0] at indent 2. Scalars live at indent 4.
for index in 1..<lines.count {
let line = lines[index]
@@ -950,7 +950,7 @@ struct HermesFileService: Sendable {
lines.insert(" \(key): \(value)", at: 1)
}
private static func removeScalar(key: String, in lines: inout [String]) {
nonisolated private static func removeScalar(key: String, in lines: inout [String]) {
var removeIndex: Int?
for index in 1..<lines.count {
let line = lines[index]
@@ -969,7 +969,7 @@ struct HermesFileService: Sendable {
}
}
private static func replaceOrInsertList(header: String, items: [String], in lines: inout [String]) {
nonisolated private static func replaceOrInsertList(header: String, items: [String], in lines: inout [String]) {
var headerIndex: Int?
var removeEnd: Int?
for index in 1..<lines.count {
@@ -1027,7 +1027,7 @@ struct HermesFileService: Sendable {
}
}
private static func replaceOrInsertSubMap(header: String, map: [String: String], in lines: inout [String]) {
nonisolated private static func replaceOrInsertSubMap(header: String, map: [String: String], in lines: inout [String]) {
var headerIndex: Int?
var removeEnd: Int?
for index in 1..<lines.count {
@@ -1085,7 +1085,7 @@ struct HermesFileService: Sendable {
}
}
private static func replaceOrInsertToolsBlock(include: [String], exclude: [String], resources: Bool, prompts: Bool, in lines: inout [String]) {
nonisolated private static func replaceOrInsertToolsBlock(include: [String], exclude: [String], resources: Bool, prompts: Bool, in lines: inout [String]) {
var headerIndex: Int?
var removeEnd: Int?
for index in 1..<lines.count {
@@ -1134,7 +1134,7 @@ struct HermesFileService: Sendable {
}
}
private static func yamlScalar(_ value: String) -> String {
nonisolated private static func yamlScalar(_ value: String) -> String {
if value.isEmpty { return "\"\"" }
// YAML 1.2 reserved indicators that change meaning at the start of a
// scalar: @ * & ? | > ! % , [ ] { } < ` ' " plus space (would be
@@ -1158,7 +1158,7 @@ struct HermesFileService: Sendable {
return value
}
private static func unquote(_ value: String) -> String {
nonisolated private static func unquote(_ value: String) -> String {
var v = value
if (v.hasPrefix("\"") && v.hasSuffix("\"") && v.count >= 2) || (v.hasPrefix("'") && v.hasSuffix("'") && v.count >= 2) {
v = String(v.dropFirst().dropLast())
@@ -1168,11 +1168,11 @@ struct HermesFileService: Sendable {
// MARK: - Hermes Process
func isHermesRunning() -> Bool {
nonisolated func isHermesRunning() -> Bool {
hermesPID() != nil
}
func hermesPID() -> pid_t? {
nonisolated func hermesPID() -> pid_t? {
// Run `pgrep -f hermes` either locally or via the transport. On
// remote hosts we trust `pgrep` to be present; it's standard on
// Linux and macOS. On failure we conservatively return nil rather
@@ -1192,7 +1192,7 @@ struct HermesFileService: Sendable {
}
@discardableResult
func stopHermes() -> Bool {
nonisolated func stopHermes() -> Bool {
// v0.9.0 fixed `hermes gateway stop` so it issues `launchctl bootout` and
// waits for exit. Use the CLI to avoid racing launchd's KeepAlive respawn.
if runHermesCLI(args: ["gateway", "stop"]).exitCode == 0 {
@@ -1227,7 +1227,7 @@ struct HermesFileService: Sendable {
/// resolves AI provider auth by reading env vars; a GUI-launched Scarf
/// subprocess sees none of the `export ANTHROPIC_API_KEY=` lines from
/// the user's shell init files.
private static let shellEnvKeys: [String] = [
nonisolated private static let shellEnvKeys: [String] = [
"PATH",
"ANTHROPIC_API_KEY", "ANTHROPIC_TOKEN", "ANTHROPIC_BASE_URL",
"OPENAI_API_KEY", "OPENAI_BASE_URL",
@@ -1255,7 +1255,7 @@ struct HermesFileService: Sendable {
/// 2. If that yields no PATH (timed out / prompt framework broke it),
/// fall back to `zsh -l` (login only) with a 3-second timeout.
/// 3. If that also fails, hardcoded sane-default PATH; no credentials.
private static let enrichedShellEnv: [String: String] = {
nonisolated private static let enrichedShellEnv: [String: String] = {
// Build a shell script that prints `KEY\0VALUE\0` for each key.
// Using printf with \0 as separator lets us unambiguously split the
// output even if a value contains newlines.
@@ -1295,7 +1295,7 @@ struct HermesFileService: Sendable {
/// `KEY\0VALUE\0`-delimited output. Returns nil on timeout/failure.
/// When `interactive` is true, injects env vars that suppress common
/// prompt frameworks so the shell doesn't hang waiting for terminal setup.
private static func runShellProbe(script: String, interactive: Bool, timeout: TimeInterval) -> [String: String]? {
nonisolated private static func runShellProbe(script: String, interactive: Bool, timeout: TimeInterval) -> [String: String]? {
let pipe = Pipe()
let errPipe = Pipe()
let process = Process()
@@ -1388,7 +1388,7 @@ struct HermesFileService: Sendable {
/// **Remote context:** skips that step; our process env has nothing to
/// do with the remote `hermes acp`'s runtime env. The remote `.env` /
/// `auth.json` / `config.yaml` are still checked through the transport.
func hasAnyAICredential() -> Bool {
nonisolated func hasAnyAICredential() -> Bool {
let credentialKeys = Self.shellEnvKeys.filter { $0 != "PATH" && $0 != "ANTHROPIC_BASE_URL" && $0 != "OPENAI_BASE_URL" }
if !context.isRemote {
@@ -1490,12 +1490,12 @@ struct HermesFileService: Sendable {
/// Read a UTF-8 text file through the transport. Missing files and any
/// transport error surface as `nil`; callers treat missing/unreadable
/// the same way they always have.
private func readFile(_ path: String) -> String? {
nonisolated private func readFile(_ path: String) -> String? {
guard let data = try? transport.readFile(path) else { return nil }
return String(data: data, encoding: .utf8)
}
private func readFileData(_ path: String) -> Data? {
nonisolated private func readFileData(_ path: String) -> Data? {
try? transport.readFile(path)
}
@@ -1503,7 +1503,7 @@ struct HermesFileService: Sendable {
/// old pre-transport behavior (print + swallow on error) because the
/// callers don't have a UI path for surfacing I/O failures; that's
/// planned for Phase 4.
private func writeFile(_ path: String, content: String) {
nonisolated private func writeFile(_ path: String, content: String) {
guard let data = content.data(using: .utf8) else { return }
do {
try transport.writeFile(path, data: data)
@@ -13,7 +13,7 @@ final class HermesFileWatcher {
let context: ServerContext
private let transport: any ServerTransport
init(context: ServerContext = .local) {
nonisolated init(context: ServerContext = .local) {
self.context = context
self.transport = context.makeTransport()
}
@@ -55,7 +55,7 @@ struct ModelCatalogService: Sendable {
let path: String
let transport: any ServerTransport
init(context: ServerContext = .local) {
nonisolated init(context: ServerContext = .local) {
self.path = context.paths.home + "/models_dev_cache.json"
self.transport = context.makeTransport()
}
@@ -5,7 +5,7 @@ struct ProjectDashboardService: Sendable {
let context: ServerContext
let transport: any ServerTransport
init(context: ServerContext = .local) {
nonisolated init(context: ServerContext = .local) {
self.context = context
self.transport = context.makeTransport()
}
@@ -5,12 +5,12 @@ import os
/// `FileManager`, `Process`, and `DispatchSourceFileSystemObject`: the APIs
/// services were already using before Phase 2.
struct LocalTransport: ServerTransport {
private static let logger = Logger(subsystem: "com.scarf", category: "LocalTransport")
nonisolated private static let logger = Logger(subsystem: "com.scarf", category: "LocalTransport")
let contextID: ServerID
let isRemote: Bool = false
init(contextID: ServerID = ServerContext.local.id) {
nonisolated init(contextID: ServerID = ServerContext.local.id) {
self.contextID = contextID
}
@@ -156,10 +156,13 @@ struct LocalTransport: ServerTransport {
func watchPaths(_ paths: [String]) -> AsyncStream<WatchEvent> {
AsyncStream { continuation in
var sources: [DispatchSourceFileSystemObject] = []
for path in paths {
// Build the source list immutably, then hand a value-typed copy
// to onTermination. Swift 6's concurrent-capture rule rejects a
// `var sources` shared between the outer builder and the inner
// termination closure.
let sources: [DispatchSourceFileSystemObject] = paths.compactMap { path in
let fd = Darwin.open(path, O_EVTONLY)
guard fd >= 0 else { continue }
guard fd >= 0 else { return nil }
let src = DispatchSource.makeFileSystemObjectSource(
fileDescriptor: fd,
eventMask: [.write, .extend, .rename],
@@ -168,7 +171,7 @@ struct LocalTransport: ServerTransport {
src.setEventHandler { continuation.yield(.anyChanged) }
src.setCancelHandler { Darwin.close(fd) }
src.resume()
sources.append(src)
return src
}
continuation.onTermination = { _ in
for s in sources { s.cancel() }
@@ -16,7 +16,7 @@ import os
/// control socket at `~/Library/Caches/scarf/ssh/%C` so multiple Scarf
/// windows pointed at the same host share one session cleanly.
struct SSHTransport: ServerTransport {
private static let logger = Logger(subsystem: "com.scarf", category: "SSHTransport")
nonisolated private static let logger = Logger(subsystem: "com.scarf", category: "SSHTransport")
let contextID: ServerID
let isRemote: Bool = true
@@ -24,7 +24,7 @@ struct SSHTransport: ServerTransport {
let config: SSHConfig
let displayName: String
init(contextID: ServerID, config: SSHConfig, displayName: String) {
nonisolated init(contextID: ServerID, config: SSHConfig, displayName: String) {
self.contextID = contextID
self.config = config
self.displayName = displayName
@@ -32,30 +32,79 @@ struct SSHTransport: ServerTransport {
// MARK: - ssh/scp binary discovery
private var sshBinary: String { "/usr/bin/ssh" }
private var scpBinary: String { "/usr/bin/scp" }
nonisolated private var sshBinary: String { "/usr/bin/ssh" }
nonisolated private var scpBinary: String { "/usr/bin/scp" }
/// The fully-qualified `user@host` spec (or just `host` if no user is set).
private var hostSpec: String {
nonisolated private var hostSpec: String {
if let user = config.user, !user.isEmpty { return "\(user)@\(config.host)" }
return config.host
}
/// Absolute path to this server's ControlMaster socket directory. One
/// socket per server, lives under the app's Caches so macOS can sweep it.
private var controlDir: String {
nonisolated private var controlDir: String { Self.controlDirPath() }
/// Per-server snapshot cache directory (for SQLite `.backup` drops).
nonisolated private var snapshotDir: String { Self.snapshotDirPath(for: contextID) }
/// Shared control-master socket directory (one dir, sockets within it are
/// per-host via OpenSSH's `%C` token). Exposed as a static so
/// cleanup paths (`ServerRegistry.removeServer`, app-launch sweep) can
/// compute it without instantiating a transport.
nonisolated static func controlDirPath() -> String {
let base = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask).first?.path
?? NSHomeDirectory() + "/Library/Caches"
return base + "/scarf/ssh"
}
/// Per-server snapshot cache directory (for SQLite `.backup` drops).
private var snapshotDir: String {
/// Snapshot cache directory for a given server. Stable per-ID so repeated
/// connections to the same server share the cache, and so cleanup can
/// find it from the ID alone.
nonisolated static func snapshotDirPath(for contextID: ServerID) -> String {
let base = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask).first?.path
?? NSHomeDirectory() + "/Library/Caches"
return base + "/scarf/snapshots/\(contextID.uuidString)"
}
/// Root of the snapshot cache (all servers). Used by the app-launch sweep
/// that prunes dirs whose UUID no longer appears in the registry.
nonisolated static func snapshotRootPath() -> String {
let base = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask).first?.path
?? NSHomeDirectory() + "/Library/Caches"
return base + "/scarf/snapshots"
}
/// Remove the snapshot directory for a server (no-op if absent). Called
/// on `removeServer` and on app-launch for orphaned dirs.
static func pruneSnapshotCache(for contextID: ServerID) {
let dir = snapshotDirPath(for: contextID)
try? FileManager.default.removeItem(atPath: dir)
}
/// Walk the snapshot root and delete any directory whose UUID isn't in
/// `keep`. Called once at app launch so snapshots from servers the user
/// removed while the app was closed don't linger.
static func sweepOrphanSnapshots(keeping keep: Set<ServerID>) {
let root = snapshotRootPath()
guard let entries = try? FileManager.default.contentsOfDirectory(atPath: root) else { return }
for name in entries {
if let id = ServerID(uuidString: name), keep.contains(id) { continue }
try? FileManager.default.removeItem(atPath: root + "/" + name)
}
}
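The sweep's keep/delete decision can be exercised in isolation. A minimal, self-contained sketch of that pure logic (no FileManager; `orphanEntries` is a hypothetical helper introduced here for illustration):

```swift
import Foundation

// Which snapshot-cache directory names should the sweep delete? A name
// survives only if it parses as a UUID that is still in the keep set;
// everything else (removed servers, stray files) counts as an orphan.
func orphanEntries(_ entries: [String], keeping keep: Set<UUID>) -> [String] {
    entries.filter { name in
        guard let id = UUID(uuidString: name), keep.contains(id) else { return true }
        return false
    }
}
```

The real sweep applies the same test inline inside its `for` loop and deletes each orphan with `removeItem`.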
/// Ask OpenSSH to shut down this host's ControlMaster socket, so the TCP
/// session isn't held open after the user removes this server. If no
/// master is currently running, `ssh -O exit` exits non-zero; we ignore
/// the exit code because the desired end state (no master) is reached
/// either way.
func closeControlMaster() {
ensureControlDir()
let args = sshArgs(extra: ["-O", "exit", hostSpec])
_ = try? runLocal(executable: sshBinary, args: args, stdin: nil, timeout: 10)
}
/// Common ssh options used by every invocation. Keep every `-o` flag
/// here so we never drift between calls.
///
@@ -68,7 +117,7 @@ struct SSHTransport: ServerTransport {
/// process exit rather than a hang.
/// - `LogLevel=QUIET` suppresses the login banner so ACP's line-delimited
/// JSON stays binary-clean.
private func sshArgs(extra: [String] = []) -> [String] {
nonisolated private func sshArgs(extra: [String] = []) -> [String] {
var args: [String] = [
"-o", "ControlMaster=auto",
"-o", "ControlPath=\(controlDir)/%C",
@@ -91,7 +140,7 @@ struct SSHTransport: ServerTransport {
/// Ensure the ControlMaster socket directory exists. Called before every
/// ssh invocation. A cheap `createDirectory(withIntermediateDirectories: true)`
/// is a no-op when the directory already exists.
private func ensureControlDir() {
nonisolated private func ensureControlDir() {
try? FileManager.default.createDirectory(atPath: controlDir, withIntermediateDirectories: true)
// 0700 so socket files aren't visible to other users on the Mac.
try? FileManager.default.setAttributes([.posixPermissions: 0o700], ofItemAtPath: controlDir)
@@ -100,7 +149,7 @@ struct SSHTransport: ServerTransport {
/// Shell-quote a single argument for remote execution. The remote shell
/// receives our argv joined with spaces, so anything containing
/// whitespace/metacharacters must be quoted to survive that flattening.
private static func shellQuote(_ s: String) -> String {
nonisolated private static func shellQuote(_ s: String) -> String {
if s.isEmpty { return "''" }
// Safe subset: alphanumerics + a few shell-inert characters.
let safe = CharacterSet(charactersIn: "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789@%+=:,./-_")
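For arguments outside that safe subset, the standard POSIX escape is to single-quote the whole string and splice each embedded `'` through as `'\''`. A minimal sketch of that rule (assumed to match the truncated implementation; the hunk is cut off here):

```swift
import Foundation

// POSIX single-quoting: wrap the string in ' … ' and rewrite each
// embedded quote as '\'' (close quote, backslash-escaped quote, reopen).
func posixQuote(_ s: String) -> String {
    if s.isEmpty { return "''" }
    return "'" + s.replacingOccurrences(of: "'", with: "'\\''") + "'"
}
```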
@@ -120,7 +169,7 @@ struct SSHTransport: ServerTransport {
/// Why not single-quote: that would make `$HOME` literal too. We
/// specifically need partial-expansion semantics, which is what double
/// quotes give us.
private static func remotePathArg(_ path: String) -> String {
nonisolated private static func remotePathArg(_ path: String) -> String {
var p = path
if p.hasPrefix("~/") {
p = "$HOME/" + p.dropFirst(2)
@@ -140,7 +189,7 @@ struct SSHTransport: ServerTransport {
/// single-quoted via `shellQuote` so ssh's argv-join-by-space doesn't
/// split it across multiple shell tokens on the remote side.
@discardableResult
private func runRemoteShell(_ command: String, timeout: TimeInterval? = 60) throws -> ProcessResult {
nonisolated private func runRemoteShell(_ command: String, timeout: TimeInterval? = 60) throws -> ProcessResult {
var args = sshArgs()
args.append(hostSpec)
args.append("sh")
@@ -322,7 +371,7 @@ struct SSHTransport: ServerTransport {
/// SSH_AUTH_SOCK / SSH_AGENT_PID harvested from the user's login shell.
/// Without this, GUI-launched Scarf can't reach 1Password / Secretive /
/// `ssh-add`'d keys that the user's terminal sees fine.
private static func sshSubprocessEnvironment() -> [String: String] {
nonisolated private static func sshSubprocessEnvironment() -> [String: String] {
var env = ProcessInfo.processInfo.environment
let shellEnv = HermesFileService.enrichedEnvironment()
for key in ["SSH_AUTH_SOCK", "SSH_AGENT_PID"] {
@@ -430,7 +479,7 @@ struct SSHTransport: ServerTransport {
/// `LocalTransport.runProcess` duplicated rather than shared because
/// SSH-specific code paths live on this type and we want all Process
/// lifecycle in one place per transport.
private func runLocal(executable: String, args: [String], stdin: Data?, timeout: TimeInterval?) throws -> ProcessResult {
nonisolated private func runLocal(executable: String, args: [String], stdin: Data?, timeout: TimeInterval?) throws -> ProcessResult {
ensureControlDir()
let proc = Process()
proc.executableURL = URL(fileURLWithPath: executable)
@@ -18,32 +18,32 @@ import Foundation
protocol ServerTransport: Sendable {
/// Identifies the context this transport serves. Used for cache
/// namespacing (e.g. per-server SQLite snapshot directories).
var contextID: ServerID { get }
nonisolated var contextID: ServerID { get }
/// `true` if this transport talks to a remote host over SSH.
var isRemote: Bool { get }
nonisolated var isRemote: Bool { get }
// MARK: - Files
func readFile(_ path: String) throws -> Data
nonisolated func readFile(_ path: String) throws -> Data
/// Atomic write: the file at `path` is either the previous contents or
/// the new contents, never a partial write. Preserves `0600` mode for
/// paths that match `.env` conventions so secrets stay owner-only.
func writeFile(_ path: String, data: Data) throws
func fileExists(_ path: String) -> Bool
func stat(_ path: String) -> FileStat?
func listDirectory(_ path: String) throws -> [String]
nonisolated func writeFile(_ path: String, data: Data) throws
nonisolated func fileExists(_ path: String) -> Bool
nonisolated func stat(_ path: String) -> FileStat?
nonisolated func listDirectory(_ path: String) throws -> [String]
/// Create directories including intermediates. No-op if already present.
func createDirectory(_ path: String) throws
nonisolated func createDirectory(_ path: String) throws
/// Delete a file. No-op if absent.
func removeFile(_ path: String) throws
nonisolated func removeFile(_ path: String) throws
// MARK: - Processes
/// Run a process to completion and capture its stdout/stderr. For remote
/// transports this actually invokes `ssh host -- executable args` under
/// the hood; for local it spawns `executable` directly.
func runProcess(
nonisolated func runProcess(
executable: String,
args: [String],
stdin: Data?,
@@ -57,7 +57,7 @@ protocol ServerTransport: Sendable {
///
/// Local: `executable` + `args` verbatim.
/// Remote: `/usr/bin/ssh` + connection flags + `[host, "--", executable, args]`.
func makeProcess(executable: String, args: [String]) -> Process
nonisolated func makeProcess(executable: String, args: [String]) -> Process
// MARK: - SQLite
@@ -66,13 +66,13 @@ protocol ServerTransport: Sendable {
/// just the remote path unchanged. For SSH transports this performs
/// `sqlite3 .backup` on the remote side and scp's the backup into
/// `~/Library/Caches/scarf/<serverID>/state.db`, returning that URL.
func snapshotSQLite(remotePath: String) throws -> URL
nonisolated func snapshotSQLite(remotePath: String) throws -> URL
// MARK: - Watching
/// Observe changes to a set of paths and yield events when any of them
/// change. Local: FSEvents. Remote: polls `stat` mtime every 3s.
func watchPaths(_ paths: [String]) -> AsyncStream<WatchEvent>
nonisolated func watchPaths(_ paths: [String]) -> AsyncStream<WatchEvent>
}
/// Stat-style file metadata. `nil` (return value) means the file does not
@@ -89,8 +89,8 @@ struct ProcessResult: Sendable {
let stdout: Data
let stderr: Data
var stdoutString: String { String(data: stdout, encoding: .utf8) ?? "" }
var stderrString: String { String(data: stderr, encoding: .utf8) ?? "" }
nonisolated var stdoutString: String { String(data: stdout, encoding: .utf8) ?? "" }
nonisolated var stderrString: String { String(data: stderr, encoding: .utf8) ?? "" }
}
enum WatchEvent: Sendable {
@@ -52,7 +52,12 @@ final class ActivityViewModel {
func load() async {
isLoading = true
let opened = await dataService.open()
// refresh() = close + reopen, which forces a fresh snapshot pull on
// remote contexts. Using open() here would short-circuit after the
// first load and show stale data for the view's lifetime. The DB
// stays open after load() returns so selectEntry() can read tool
// results without re-opening; cleanup() closes on disappear.
let opened = await dataService.refresh()
guard opened else {
isLoading = false
return
@@ -3,6 +3,7 @@ import SwiftUI
struct ActivityView: View {
@State private var viewModel: ActivityViewModel
@Environment(AppCoordinator.self) private var coordinator
@Environment(HermesFileWatcher.self) private var fileWatcher
init(context: ServerContext) {
_viewModel = State(initialValue: ActivityViewModel(context: context))
@@ -22,6 +23,9 @@ struct ActivityView: View {
}
.navigationTitle("Activity")
.task { await viewModel.load() }
.onChange(of: fileWatcher.lastChangeDate) {
Task { await viewModel.load() }
}
.onDisappear { Task { await viewModel.cleanup() } }
}
@@ -174,21 +174,20 @@ final class ChatViewModel {
}
}
/// Start ACP for the current or most recent session, then send the queued prompt.
/// Start ACP for the current session (or create a new one), then send the
/// queued prompt. Typing into a blank Chat screen ALWAYS creates a new
/// session; the "Continue from Last Session" button is the explicit path
/// for resuming. The previous behavior (falling back to the most recently
/// active session in the DB) would pick up cron/background sessions the
/// user never interacted with; those can be garbage-collected by Hermes
/// between the DB read and ACP `session/load`, producing a silent prompt
/// failure with no UI feedback.
private func autoStartACPAndSend(text: String) {
// Show the user message immediately
richChatViewModel.addUserMessage(text: text)
Task { @MainActor in
// Find a session to resume: prefer current sessionId, then most recent
var sessionToResume = richChatViewModel.sessionId
if sessionToResume == nil {
let opened = await dataService.open()
if opened {
sessionToResume = await dataService.fetchMostRecentlyActiveSessionId()
await dataService.close()
}
}
let sessionToResume = richChatViewModel.sessionId
let client = ACPClient(context: context)
self.acpClient = client
@@ -199,7 +198,7 @@ final class ChatViewModel {
startACPEventLoop(client: client)
startHealthMonitor(client: client)
let cwd = NSHomeDirectory()
let cwd = await context.resolvedUserHome()
hasActiveProcess = true
@@ -289,7 +288,7 @@ final class ChatViewModel {
startACPEventLoop(client: client)
startHealthMonitor(client: client)
let cwd = NSHomeDirectory()
let cwd = await context.resolvedUserHome()
// Mark active BEFORE setting session ID so .task(id:) sees isACPMode=true
// and doesn't wipe messages with a DB refresh
@@ -420,7 +419,7 @@ final class ChatViewModel {
do {
try await client.start()
let cwd = NSHomeDirectory()
let cwd = await context.resolvedUserHome()
let resolvedSessionId: String
// Try resumeSession first (designed for reconnection), then loadSession.
@@ -238,7 +238,44 @@ final class RichChatViewModel {
}
private func handlePromptComplete(response: ACPPromptResult) {
// Detect a failed prompt that produced no assistant output, e.g.
// Hermes returning `stopReason: "refusal"` when the session was
// silently garbage-collected, or `"error"` when the ACP call itself
// threw. Without surfacing this, the user sees their prompt sitting
// alone under an "Agent working…" indicator that never yields any text.
let hadAssistantOutput = streamingAssistantText.isEmpty == false
|| messages.last?.isAssistant == true
finalizeStreamingMessage()
if !hadAssistantOutput, response.stopReason != "end_turn" {
let reason: String
switch response.stopReason {
case "refusal":
reason = "The agent refused to respond (the session may have been cleared on the server). Try starting a new session from the Session menu."
case "error":
reason = "The prompt failed — check the ACP error banner above for details."
case "max_tokens":
reason = "The response was cut off before the agent could produce any output (max_tokens reached before any tokens were emitted)."
default:
reason = "The prompt ended without a response (stopReason: \(response.stopReason))."
}
let id = nextLocalId
nextLocalId -= 1
messages.append(HermesMessage(
id: id,
sessionId: sessionId ?? "",
role: "system",
content: reason,
toolCallId: nil,
toolCalls: [],
toolName: nil,
timestamp: Date(),
tokenCount: nil,
finishReason: response.stopReason,
reasoning: nil
))
}
// Accumulate token usage from this prompt
acpInputTokens += response.inputTokens
acpOutputTokens += response.outputTokens
@@ -398,7 +435,11 @@ final class RichChatViewModel {
/// (e.g., CLI session) with the current ACP session.
func loadSessionHistory(sessionId: String, acpSessionId: String? = nil) async {
self.sessionId = sessionId
let opened = await dataService.open()
// Force a fresh snapshot pull on remote contexts. An earlier open()
// would have cached a stale copy; on resume we need whatever
// Hermes has actually persisted since then, or the resumed session
// will show only history up to the moment the snapshot was taken.
let opened = await dataService.refresh()
guard opened else { return }
var allMessages = await dataService.fetchMessages(sessionId: sessionId)
@@ -441,7 +482,10 @@ final class RichChatViewModel {
}
func refreshMessages() async {
let opened = await dataService.open()
// Polling tick (terminal mode): pull a fresh snapshot so remote
// reflects Hermes writes since the last tick. On local this is a
// cheap reopen of the live DB.
let opened = await dataService.refresh()
guard opened else { return }
if sessionId == nil {
@@ -359,7 +359,7 @@ struct ChatView: View {
// MARK: - Permission Approval View
extension RichChatViewModel.PendingPermission: @retroactive Identifiable {
extension RichChatViewModel.PendingPermission: Identifiable {
var id: Int { requestId }
}
@@ -6,22 +6,27 @@ struct RichChatMessageList: View {
/// External trigger to force a scroll-to-bottom (e.g., from "Return to Active Session").
var scrollTrigger: UUID = UUID()
/// Stable scroll target. Must NOT depend on `isWorking`: if the anchor
/// flipped between "typing-indicator" and "group-N" at stream start/
/// finish, two onChange handlers would race to scroll to different
/// targets and the chat would visibly jump.
private var scrollAnchor: String {
if let last = groups.last { return "group-\(last.id)" }
return "scroll-top"
}
/// Why `.defaultScrollAnchor(.bottom)` *alone* and no `proxy.scrollTo`.
///
/// `.defaultScrollAnchor(.bottom)` tells SwiftUI to pin the viewport to
/// the bottom of the content automatically; as messages stream in or
/// new turns arrive, the scroll position tracks the bottom edge.
///
/// We used to also call `proxy.scrollTo(lastID, anchor: .bottom)` from
/// six different `onChange` handlers during streaming. The two
/// mechanisms fought each other: the ScrollViewReader can resolve an ID
/// to a position **before** LazyVStack has finished laying out that
/// row, so `scrollTo` would land past the actual content: the
/// "viewport showing whitespace, chat is above" symptom. Removing the
/// manual scroll and trusting `defaultScrollAnchor` eliminates the race.
///
/// The only remaining explicit scroll is `scrollTrigger` for the "Return
/// to Active Session" button; that fires rarely, after layout has
/// settled, so the overshoot doesn't happen.
var body: some View {
ScrollViewReader { proxy in
ScrollView {
LazyVStack(alignment: .leading, spacing: 16) {
Spacer(minLength: 0)
.id("scroll-top")
if groups.isEmpty && !isWorking {
emptyState
}
@@ -39,29 +44,24 @@ struct RichChatMessageList: View {
.padding()
}
.defaultScrollAnchor(.bottom)
.onAppear {
if !groups.isEmpty {
DispatchQueue.main.async {
scrollToBottom(proxy: proxy, animated: false)
}
.onChange(of: scrollTrigger) {
let target = lastAnchorID
withAnimation(.easeOut(duration: 0.15)) {
proxy.scrollTo(target, anchor: .bottom)
}
}
// New turn: animate to the bottom.
.onChange(of: groups.count) {
scrollToBottom(proxy: proxy)
}
// Streaming chunks: track the bottom without animation so the
// text glides instead of bouncing.
.onChange(of: groups.last?.assistantMessages.last?.content ?? "") {
scrollToBottom(proxy: proxy, animated: false)
}
// Explicit "Return to Active Session" button.
.onChange(of: scrollTrigger) {
scrollToBottom(proxy: proxy)
}
}
}
/// Anchor ID used by the explicit scrollTrigger path. Prefers the typing
/// indicator when visible (so we scroll to the very bottom of the
/// current turn), otherwise the last group.
private var lastAnchorID: String {
if isWorking { return "typing-indicator" }
if let last = groups.last { return "group-\(last.id)" }
return "group-0"
}
private var emptyState: some View {
VStack(spacing: 12) {
Image(systemName: "bubble.left.and.text.bubble.right")
@@ -79,17 +79,6 @@ struct RichChatMessageList: View {
.padding(.vertical, 80)
}
private func scrollToBottom(proxy: ScrollViewProxy, animated: Bool = true) {
let target = scrollAnchor
if animated {
withAnimation(.easeOut(duration: 0.15)) {
proxy.scrollTo(target, anchor: .bottom)
}
} else {
proxy.scrollTo(target, anchor: .bottom)
}
}
private var typingIndicator: some View {
HStack {
HStack(spacing: 4) {
@@ -229,16 +229,40 @@ final class CredentialPoolsViewModel {
// All fields are optional because the format evolves and we want decoding to
// succeed even if hermes adds new keys or omits some for certain auth types.
private struct AuthFile: Decodable {
let credential_pool: [String: [AuthEntry]]
// Hand-written `init(from:)` so Swift 6 doesn't synthesize a MainActor-
// isolated conformance; the auth.json decode runs in `load()`'s detached task.
private struct AuthFile: Decodable, Sendable {
nonisolated let credential_pool: [String: [AuthEntry]]
enum CodingKeys: String, CodingKey { case credential_pool }
nonisolated init(from decoder: any Decoder) throws {
let c = try decoder.container(keyedBy: CodingKeys.self)
self.credential_pool = try c.decode([String: [AuthEntry]].self, forKey: .credential_pool)
}
}
private struct AuthEntry: Decodable {
let id: String?
let label: String?
let auth_type: String?
let source: String?
let access_token: String?
let last_status: String?
let request_count: Int?
private struct AuthEntry: Decodable, Sendable {
nonisolated let id: String?
nonisolated let label: String?
nonisolated let auth_type: String?
nonisolated let source: String?
nonisolated let access_token: String?
nonisolated let last_status: String?
nonisolated let request_count: Int?
enum CodingKeys: String, CodingKey {
case id, label, auth_type, source, access_token, last_status, request_count
}
nonisolated init(from decoder: any Decoder) throws {
let c = try decoder.container(keyedBy: CodingKeys.self)
self.id = try c.decodeIfPresent(String.self, forKey: .id)
self.label = try c.decodeIfPresent(String.self, forKey: .label)
self.auth_type = try c.decodeIfPresent(String.self, forKey: .auth_type)
self.source = try c.decodeIfPresent(String.self, forKey: .source)
self.access_token = try c.decodeIfPresent(String.self, forKey: .access_token)
self.last_status = try c.decodeIfPresent(String.self, forKey: .last_status)
self.request_count = try c.decodeIfPresent(Int.self, forKey: .request_count)
}
}
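The hand-written `nonisolated init(from:)` pattern above can be sketched in isolation (type and key names here are illustrative, not from the codebase): a struct nested in a `@MainActor` class inherits that isolation, so a synthesized `init(from:)` would be MainActor-isolated; writing it by hand and marking it `nonisolated` keeps decoding callable from a detached task.

```swift
import Foundation

@MainActor
final class PoolsViewModel {
    // Nested types inherit the class's @MainActor isolation, so a
    // synthesized init(from:) would be MainActor-isolated. Writing it
    // by hand and marking it nonisolated lets JSONDecoder invoke it
    // from a detached (non-main) task.
    struct PoolFile: Decodable, Sendable {
        let pools: [String: [String]]
        enum CodingKeys: String, CodingKey { case pools }
        nonisolated init(from decoder: any Decoder) throws {
            let c = try decoder.container(keyedBy: CodingKeys.self)
            self.pools = try c.decode([String: [String]].self, forKey: .pools)
        }
    }
}

let json = Data(#"{"pools":{"default":["a","b"]}}"#.utf8)
let file = try! JSONDecoder().decode(PoolsViewModel.PoolFile.self, from: json)
print(file.pools["default"]!.count)
```

The same shape applies to `AuthEntry`: one `nonisolated init(from:)` per decoded type is enough to keep the whole decode off-main.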
@@ -23,7 +23,9 @@ final class DashboardViewModel {
func load() async {
isLoading = true
let opened = await dataService.open()
// refresh() = close + reopen, forces a fresh remote snapshot. Cheap
// on local (live DB reopen).
let opened = await dataService.refresh()
if opened {
stats = await dataService.fetchStats()
recentSessions = await dataService.fetchSessions(limit: 5)
@@ -69,7 +69,7 @@ final class GatewayViewModel {
/// Static form of the gateway-status walk so the detached load can call
/// it without bouncing back to MainActor.
private static func fetchGatewayStatus(context: ServerContext) -> GatewayInfo {
nonisolated private static func fetchGatewayStatus(context: ServerContext) -> GatewayInfo {
let stateJSON = context.readData(context.paths.gatewayStateJSON)
var pid: Int?
var state = "unknown"
@@ -108,7 +108,7 @@ final class GatewayViewModel {
)
}
private static func fetchPairing(context: ServerContext) -> (approved: [PairedUser], pending: [PendingPairing]) {
nonisolated private static func fetchPairing(context: ServerContext) -> (approved: [PairedUser], pending: [PendingPairing]) {
let output = context.runHermes(["pairing", "list"]).output
var approved: [PairedUser] = []
var pending: [PendingPairing] = []
@@ -92,7 +92,9 @@ final class InsightsViewModel {
func load() async {
isLoading = true
let opened = await dataService.open()
// refresh() forces a fresh remote snapshot each load. On local it's
// a cheap reopen of the live DB.
let opened = await dataService.refresh()
guard opened else {
isLoading = false
return
@@ -3,6 +3,7 @@ import SwiftUI
struct InsightsView: View {
@State private var viewModel: InsightsViewModel
@Environment(AppCoordinator.self) private var coordinator
@Environment(HermesFileWatcher.self) private var fileWatcher
init(context: ServerContext) {
_viewModel = State(initialValue: InsightsViewModel(context: context))
@@ -28,6 +29,9 @@ struct InsightsView: View {
.onChange(of: viewModel.period) {
Task { await viewModel.load() }
}
.onChange(of: fileWatcher.lastChangeDate) {
Task { await viewModel.load() }
}
}
private var periodPicker: some View {
@@ -76,23 +76,29 @@ final class MCPServerEditorViewModel {
let prompts = promptsEnabled
Task.detached {
var success = true
switch transport {
case .stdio:
if !service.setMCPServerEnv(name: name, env: envMap) { success = false }
case .http:
if !service.setMCPServerHeaders(name: name, headers: headerMap) { success = false }
}
if !service.updateMCPToolFilters(
name: name,
include: include,
exclude: exclude,
resources: resources,
prompts: prompts
) { success = false }
if !service.setMCPServerTimeouts(name: name, timeout: timeoutValue, connectTimeout: connectValue) {
success = false
}
// Compute success as an immutable so the MainActor.run closure
// captures a value, not a mutable var. Swift 6 rejects
// var-captures across concurrent closures as data races.
let success: Bool = {
var ok = true
switch transport {
case .stdio:
if !service.setMCPServerEnv(name: name, env: envMap) { ok = false }
case .http:
if !service.setMCPServerHeaders(name: name, headers: headerMap) { ok = false }
}
if !service.updateMCPToolFilters(
name: name,
include: include,
exclude: exclude,
resources: resources,
prompts: prompts
) { ok = false }
if !service.setMCPServerTimeouts(name: name, timeout: timeoutValue, connectTimeout: connectValue) {
ok = false
}
return ok
}()
await MainActor.run {
self.isSaving = false
if !success {
@@ -47,7 +47,7 @@ final class PersonalitiesViewModel {
/// Static form so the detached load can call into it without touching
/// MainActor-isolated state. The instance form below remains for any
/// other callers that need it.
private static func parsePersonalitiesBlock(yaml: String) -> [HermesPersonality] {
nonisolated private static func parsePersonalitiesBlock(yaml: String) -> [HermesPersonality] {
guard !yaml.isEmpty else { return [] }
let parsed = HermesFileService.parseNestedYAML(yaml)
var nameSet: Set<String> = []
@@ -38,23 +38,29 @@ final class PluginsViewModel {
// of sync transport ops on remote is definitively a beach ball if
// run on main. Detach the whole walk.
Task.detached { [weak self] in
let transport = ctx.makeTransport()
var result: [HermesPlugin] = []
if let entries = try? transport.listDirectory(dir) {
for entry in entries.sorted() where !entry.hasPrefix(".") {
let path = dir + "/" + entry
guard transport.stat(path)?.isDirectory == true else { continue }
let manifest = Self.readManifestStatic(path: path, context: ctx)
let disabled = transport.fileExists(path + "/.disabled")
result.append(HermesPlugin(
name: entry,
source: manifest.source,
enabled: !disabled,
version: manifest.version,
path: path
))
// Build `result` as an immutable before the MainActor hop, so the
// cross-closure capture is a value, not a mutated `var` (Swift 6
// concurrent-capture rule).
let result: [HermesPlugin] = {
let transport = ctx.makeTransport()
var out: [HermesPlugin] = []
if let entries = try? transport.listDirectory(dir) {
for entry in entries.sorted() where !entry.hasPrefix(".") {
let path = dir + "/" + entry
guard transport.stat(path)?.isDirectory == true else { continue }
let manifest = Self.readManifestStatic(path: path, context: ctx)
let disabled = transport.fileExists(path + "/.disabled")
out.append(HermesPlugin(
name: entry,
source: manifest.source,
enabled: !disabled,
version: manifest.version,
path: path
))
}
}
}
return out
}()
await MainActor.run { [weak self] in
self?.plugins = result
self?.isLoading = false
@@ -64,7 +70,7 @@ final class PluginsViewModel {
/// Static form of readManifest used by the detached load task. The
/// instance form delegates to this so both call paths share logic.
fileprivate static func readManifestStatic(path: String, context: ServerContext) -> (source: String, version: String) {
nonisolated fileprivate static func readManifestStatic(path: String, context: ServerContext) -> (source: String, version: String) {
let jsonPath = path + "/plugin.json"
if let data = context.readData(jsonPath),
let obj = try? JSONSerialization.jsonObject(with: data) as? [String: Any] {
@@ -102,7 +102,7 @@ struct TestConnectionProbe {
// Bound the probe so a hung connection doesn't lock the UI.
let deadline = Date().addingTimeInterval(20)
while proc.isRunning && Date() < deadline {
Thread.sleep(forTimeInterval: 0.1)
try? await Task.sleep(nanoseconds: 100_000_000)
}
if proc.isRunning {
proc.terminate()
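The `Thread.sleep` to `Task.sleep` swap matters because the probe runs in an async context: `Thread.sleep` blocks a cooperative-pool thread for the whole wait, while `Task.sleep` only suspends the task. A minimal sketch of the bounded poll (names are illustrative, not from the codebase):

```swift
import Foundation

// Poll a condition every 100 ms until it holds or the deadline passes.
// Task.sleep suspends the task; Thread.sleep here would pin a thread.
func waitUntil(deadline: Date, _ done: @escaping () -> Bool) async -> Bool {
    while !done() && Date() < deadline {
        try? await Task.sleep(nanoseconds: 100_000_000)
    }
    return done()
}

var result = false
let sem = DispatchSemaphore(value: 0)
Task.detached {
    // Condition is already true, so the loop exits without sleeping.
    result = await waitUntil(deadline: Date().addingTimeInterval(1)) { true }
    sem.signal()
}
sem.wait()
print(result)
```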
@@ -37,7 +37,10 @@ final class SessionsViewModel {
var deleteSessionId: String?
func load() async {
let opened = await dataService.open()
// refresh() forces a fresh snapshot on remote contexts. The DB stays
// open after load() so selectSession()/search() can query without
// re-opening; cleanup() closes on disappear.
let opened = await dataService.refresh()
guard opened else { return }
sessions = await dataService.fetchSessions(limit: 500)
sessionPreviews = await dataService.fetchSessionPreviews(limit: 500)
@@ -3,6 +3,7 @@ import SwiftUI
struct SessionsView: View {
@State private var viewModel: SessionsViewModel
@Environment(AppCoordinator.self) private var coordinator
@Environment(HermesFileWatcher.self) private var fileWatcher
init(context: ServerContext) {
_viewModel = State(initialValue: SessionsViewModel(context: context))
@@ -38,6 +39,9 @@ struct SessionsView: View {
coordinator.selectedSessionId = nil
}
}
.onChange(of: fileWatcher.lastChangeDate) {
Task { await viewModel.load() }
}
.onDisappear { Task { await viewModel.cleanup() } }
.sheet(isPresented: $viewModel.showRenameSheet) {
renameSheet
@@ -247,8 +247,17 @@ final class SettingsViewModel {
self.backupInProgress = false
if result.exitCode == 0 {
if let zipPath {
NSWorkspace.shared.activateFileViewerSelecting([URL(fileURLWithPath: zipPath)])
self.saveMessage = "Backup saved"
// NSWorkspace operates on the *local* Mac's filesystem;
// a remote backup path doesn't exist here, so revealing
// it would silently no-op (or worse, reveal an
// unrelated local file with the same path). Surface the
// remote location in the saveMessage instead.
if self.context.isRemote {
self.saveMessage = "Backup saved on \(self.context.displayName): \(zipPath)"
} else {
NSWorkspace.shared.activateFileViewerSelecting([URL(fileURLWithPath: zipPath)])
self.saveMessage = "Backup saved"
}
} else {
self.saveMessage = "Backup complete"
}
@@ -21,6 +21,11 @@ struct ScarfApp: App {
_registry = State(initialValue: registry)
_liveRegistry = State(initialValue: live)
// Prune snapshot cache dirs whose server UUIDs aren't in the registry
// anymore; this handles the case where a server was removed while Scarf
// wasn't running. Cheap: just an `ls` of the snapshots root.
registry.sweepOrphanCaches()
// Warm up the login-shell env probe off-main at launch. Without
// this, the first MainActor caller (chat preflight, OAuth flow,
// signal-cli detect, etc.) blocks for 5-8 seconds while
@@ -43,7 +48,7 @@ struct ScarfApp: App {
// last open. Show a dedicated "server removed" view rather than
// silently falling back to local; falling back would mislead
// the user into thinking they're looking at the right server.
if let ctx = registry.context(for: serverID ?? ServerContext.local.id) {
if let ctx = registry.context(for: serverID) {
ContextBoundRoot(context: ctx)
.environment(registry)
.environment(\.serverContext, ctx)
@@ -53,7 +58,7 @@ struct ScarfApp: App {
// another window since this one last opened.
.onAppear { liveRegistry.rebuild() }
} else {
MissingServerView(removedServerID: serverID ?? ServerContext.local.id)
MissingServerView(removedServerID: serverID)
.environment(registry)
.environment(updater)
}
@@ -202,7 +207,7 @@ final class ServerLiveStatus: Identifiable {
while !Task.isCancelled {
try? await Task.sleep(nanoseconds: 10_000_000_000)
if Task.isCancelled { return }
await self?.refresh()
self?.refresh()
}
}
}
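The 10-second ticker above follows a standard cancellable-poll shape, sketched generically here (interval and tick body are stand-ins):

```swift
import Foundation

// A cancellable periodic tick. Re-checking Task.isCancelled after the
// sleep avoids firing one extra tick when cancel() lands mid-sleep,
// because try? swallows the CancellationError thrown by Task.sleep.
func pollLoop(everyNanos: UInt64, tick: @escaping () -> Void) -> Task<Void, Never> {
    Task.detached {
        while !Task.isCancelled {
            try? await Task.sleep(nanoseconds: everyNanos)
            if Task.isCancelled { return }
            tick()
        }
    }
}

var ticks = 0
let sem = DispatchSemaphore(value: 0)
let loop = pollLoop(everyNanos: 5_000_000) {
    ticks += 1
    if ticks == 3 { sem.signal() }
}
sem.wait()
loop.cancel()
print(ticks >= 3)
```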
@@ -219,7 +224,7 @@ final class ServerLiveStatus: Identifiable {
// Refresh after a short delay to pick up the new state.
Task { [weak self] in
try? await Task.sleep(nanoseconds: 3_000_000_000)
await self?.refresh()
self?.refresh()
}
}
@@ -227,7 +232,7 @@ final class ServerLiveStatus: Identifiable {
Task.detached { [fileService] in _ = fileService.stopHermes() }
Task { [weak self] in
try? await Task.sleep(nanoseconds: 2_000_000_000)
await self?.refresh()
self?.refresh()
}
}
@@ -246,7 +251,7 @@ final class ServerLiveStatus: Identifiable {
Task.detached { [weak self] in
let running = svc.isHermesRunning()
let gateway = svc.loadGatewayState()?.isRunning ?? false
await MainActor.run {
await MainActor.run { [weak self] in
self?.hermesRunning = running
self?.gatewayRunning = gateway
}