Scion adapter


extras/scion-agent/ packages core-agent for Scion’s container runtime. It is the Go counterpart of Scion’s Python adk_scion_agent example — same lifecycle contract, built on the core-agent library.

The adapter is opt-in. The core library and bundled CLI work standalone; you only build scion-agent when you want to deploy core-agent as a Scion-managed agent.


What it adds on top of core-agent

  • --input <task> flag. Scion’s harness appends this when starting an agent with an initial task. Without it, the binary reads its first prompt from stdin.
  • Stdin loop for follow-up turns. scion message <agent> delivers user input via tmux send-keys, so the binary just keeps scanning stdin between turns.
  • Transient activity emission. On every agent / tool boundary the adapter writes thinking, executing, or working to $HOME/agent-info.json so Scion’s UI can render what’s happening live.
  • Sticky lifecycle states via a sciontool_status tool. The model invokes this tool to declare ask_user, blocked, task_completed, or limits_exceeded. The tool shells out to Scion’s sciontool binary so the hub gets notified.

Outside a Scion container (no sciontool on PATH, no writable $HOME) the lifecycle hooks degrade to no-ops, so the same binary is usable for local development.
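The degradation check can be as simple as a PATH lookup before shelling out. A sketch (the `notifyStatus` helper is a hypothetical name for the behavior described above):

```go
package main

import (
	"fmt"
	"os/exec"
)

// notifyStatus reports a sticky lifecycle state to the Scion hub by
// shelling out to sciontool. Outside a Scion container sciontool is
// not on PATH, so the hook degrades to a no-op and local development
// keeps working with the same binary.
func notifyStatus(statusType, message string) error {
	path, err := exec.LookPath("sciontool")
	if err != nil {
		return nil // no sciontool on PATH: degrade to a no-op
	}
	return exec.Command(path, "status", statusType, message).Run()
}

func main() {
	// In a container this notifies the hub; locally it silently no-ops.
	if err := notifyStatus("task_completed", "done"); err != nil {
		fmt.Println("status notification failed:", err)
	}
}
```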


Build and deploy

# From the core-agent repo root, not from extras/scion-agent/.
docker build \
  --build-arg BASE_IMAGE=scion-base:latest \
  -t scion-core-agent \
  -f extras/scion-agent/Dockerfile .

You need a scion-base image available locally — build it from the Scion repo first if you don’t have one.

Then register the template tree under extras/scion-agent/templates/scion/ with Scion (it contains the scion-agent.yaml, agents.md, and harness-configs/scion/config.yaml files Scion expects).


Run locally without a container

go build -o /tmp/scion-agent ./extras/scion-agent
GOOGLE_API_KEY=… /tmp/scion-agent --input "list the files in this directory"

You’ll see the agent stream its response to stdout, tool calls on stderr, and $HOME/agent-info.json ticking through thinking / executing / working. After the model decides the task is done and invokes sciontool_status("task_completed", ...), the file flips to completed.


Design notes

Lifecycle hooks live in the adapter, not in core-agent

The adapter wraps agent.Run() with its own ~30-line streamTurn and emits transient activity by inspecting the event stream:

  • before the loop: WriteActivity("thinking")
  • on a FunctionCall event: WriteActivity("executing")
  • on a FunctionResponse event: WriteActivity("thinking")
  • after the loop: WriteActivity("working")
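The steps above can be sketched with a simplified event type (illustrative only; the real adapter inspects ADK's event stream and its `streamTurn` wraps `agent.Run()`):

```go
package main

import "fmt"

// Event is a simplified stand-in for the ADK event stream: in this
// sketch at most one field is set per event.
type Event struct {
	FunctionCall     bool
	FunctionResponse bool
	Text             string
}

// streamTurn mirrors the adapter's activity emission: thinking before
// the loop, executing on a tool call, thinking again on a tool
// response, working after the loop. writeActivity stands in for the
// adapter's WriteActivity.
func streamTurn(events []Event, writeActivity func(string)) {
	writeActivity("thinking")
	for _, ev := range events {
		switch {
		case ev.FunctionCall:
			writeActivity("executing")
		case ev.FunctionResponse:
			writeActivity("thinking")
		}
	}
	writeActivity("working")
}

func main() {
	var trace []string
	streamTurn(
		[]Event{{FunctionCall: true}, {FunctionResponse: true}, {Text: "done"}},
		func(a string) { trace = append(trace, a) },
	)
	fmt.Println(trace) // [thinking executing thinking working]
}
```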

This keeps every Scion-shaped concept in extras/scion-agent/. core-agent’s public API gains nothing for this — if a future adapter needs control-flow callbacks (abort a tool call before it runs, substitute a tool response), we’ll expose ADK’s BeforeToolCallback etc. on agent.New then. Today, no consumer needs that, so we don’t add the API surface.

Sticky vs transient

| Kind | Examples | Mechanism | Frequency |
| --- | --- | --- | --- |
| Transient | thinking, executing, working | atomic write to $HOME/agent-info.json | per agent / tool boundary |
| Sticky | ask_user, blocked, task_completed, limits_exceeded | sciontool status <type> <message> subprocess | invoked intentionally by the model via the sciontool_status tool |

WriteActivity reads the current activity first and refuses to overwrite a sticky state with a transient one. This matches the Python adapter’s semantics exactly.
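That guard reduces to one predicate. A sketch (the `canOverwrite` helper is a hypothetical name; the real WriteActivity reads agent-info.json before deciding):

```go
package main

import "fmt"

// sticky lists the lifecycle states that must not be clobbered by
// transient activity updates.
var sticky = map[string]bool{
	"ask_user":        true,
	"blocked":         true,
	"task_completed":  true,
	"limits_exceeded": true,
}

// canOverwrite applies the rule described above: a transient activity
// (thinking, executing, working) never replaces a sticky state, while
// a sticky state may always be written.
func canOverwrite(current, next string) bool {
	return !(sticky[current] && !sticky[next])
}

func main() {
	fmt.Println(canOverwrite("thinking", "executing"))      // true
	fmt.Println(canOverwrite("task_completed", "thinking")) // false
	fmt.Println(canOverwrite("blocked", "task_completed"))  // true
}
```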

Env-var contract

| Var | Set by | Purpose |
| --- | --- | --- |
| GOOGLE_API_KEY | user / Scion | Gemini API key. |
| GEMINI_API_KEY | Scion’s Gemini harness | Bridged to GOOGLE_API_KEY at startup if the latter is unset. |
| ANTHROPIC_API_KEY | user | Anthropic API key (when model.provider: anthropic). |
| HOME | container | Where agent-info.json lives. Falls back to /home/scion. |

The adapter inherits all of core-agent’s other env-var conventions (Vertex Gemini, Vertex Anthropic, etc.) — see Providers.


See also