# Remote Runs — How it would work
## Problem & goals
- Run Terraform/Tofu plans/applies remotely while keeping state control centralized in OpenTaco.
- Keep CLI-first UX: users trigger runs locally, but execution happens on remote compute.
- Respect unit locks and provide streaming logs and clear statuses.
## User journeys
- Submit: `taco run plan|apply <unit-id> [--dir <path>] [--var-file ...]` → returns a run ID.
- Watch: `taco run watch <run-id>` shows live logs until completion.
- Inspect: `taco run ls` / `taco run get <run-id>` show status, timestamps, exit codes, and artifacts.
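The submit-then-watch journey could be scripted against the proposed endpoints. In this sketch only the `/v1/runs` paths and the terminal status names come from the design; the base URL, the JSON field names (`id`, `status`), and the poll-based log streaming are assumptions.

```python
import json
import time
import urllib.request

BASE_URL = "http://localhost:8080"  # assumed OpenTaco server address

def run_url(run_id: str = "", suffix: str = "") -> str:
    """Build a /v1/runs URL; the path shape follows the proposed API."""
    path = "/v1/runs" + (f"/{run_id}" if run_id else "") + suffix
    return BASE_URL + path

def submit_run(unit_id: str, command: str = "plan") -> str:
    """POST /v1/runs and return the new run ID (payload fields are assumed)."""
    body = json.dumps({"unit": unit_id, "command": command}).encode()
    req = urllib.request.Request(
        run_url(), data=body, method="POST",
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]

def watch_run(run_id: str, poll_seconds: float = 2.0) -> str:
    """Poll GET /v1/runs/:id and /logs until a terminal status, printing new log bytes."""
    seen = 0
    while True:
        with urllib.request.urlopen(run_url(run_id)) as resp:
            status = json.load(resp)["status"]
        with urllib.request.urlopen(run_url(run_id, "/logs")) as resp:
            logs = resp.read()
        print(logs[seen:].decode(), end="")
        seen = len(logs)
        if status in ("succeeded", "failed", "canceled"):
            return status
        time.sleep(poll_seconds)
```

A `taco run watch <run-id>` implementation would behave roughly like `watch_run`, though server-side streaming (e.g. chunked responses) may replace polling after design validation.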
## High-level design
- API: `POST /v1/runs` to submit; `GET /v1/runs/:id` for status; `GET /v1/runs/:id/logs` for logs; optional `DELETE /v1/runs/:id` to cancel.
- Lock integration: acquire the unit lock before an apply; release it on completion or failure (opt-in).
- Compute adapter: start with GitHub Actions; pluggable interface for future providers.
- Artifacts: store minimal logs/metadata; large artifacts can be external links.
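The pluggable compute interface could look like the sketch below. The `ComputeAdapter` protocol and `RunSpec` fields are illustrative assumptions; the GitHub Actions adapter is a stub whose real `start` would dispatch a workflow via the GitHub REST API rather than return a local key.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class RunSpec:
    """What a compute provider needs to execute a run (fields are illustrative)."""
    run_id: str
    unit_id: str
    command: str          # "plan" or "apply"
    source_ref: str       # VCS ref or tarball reference (packaging is an open question)
    env: dict[str, str]
    timeout_seconds: int

class ComputeAdapter(Protocol):
    """Pluggable execution backend; GitHub Actions would be the first implementation."""
    def start(self, spec: RunSpec) -> str: ...          # returns a provider job ID
    def status(self, provider_job_id: str) -> str: ...  # queued/running/succeeded/failed/canceled
    def cancel(self, provider_job_id: str) -> None: ...

class GitHubActionsAdapter:
    """Stub: a real version would use workflow-dispatch and the workflow-runs API."""
    def __init__(self, repo: str, workflow: str = "opentaco-run.yml"):
        self.repo = repo          # e.g. "org/infra" (assumed)
        self.workflow = workflow  # workflow file name is an assumption

    def start(self, spec: RunSpec) -> str:
        # Would POST a workflow_dispatch event carrying the spec as inputs;
        # a deterministic job key keeps this sketch self-contained.
        return f"{self.repo}:{self.workflow}:{spec.run_id}"

    def status(self, provider_job_id: str) -> str:
        return "queued"  # placeholder; would query the workflow run's status

    def cancel(self, provider_job_id: str) -> None:
        pass  # would call the workflow run's cancel endpoint
```

Keeping the adapter surface to `start`/`status`/`cancel` means future providers (e.g. a local runner or another CI system) only have to map those three operations.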
## Shapes (provisional)
- Submit payload: unit ID, VCS ref or local tarball ref, plan/apply, inputs, environment, timeouts.
- Status: queued → running → succeeded/failed/canceled; timestamps; link to logs.
- CLI flags mirror key fields; subject to change after design validation.
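The lifecycle above (queued → running → succeeded/failed/canceled) can be enforced as a small state machine; the `Run` record below mirrors the provisional status shape. Field names and the transition table are assumptions pending design validation.

```python
from dataclasses import dataclass
from typing import Optional

# Legal transitions for the proposed lifecycle: queued → running → terminal,
# with cancellation allowed from either non-terminal state (an assumption).
TRANSITIONS = {
    "queued": {"running", "canceled"},
    "running": {"succeeded", "failed", "canceled"},
    "succeeded": set(),
    "failed": set(),
    "canceled": set(),
}

@dataclass
class Run:
    """Server-side run record (field names are illustrative)."""
    id: str
    unit_id: str
    command: str                      # "plan" or "apply"
    status: str = "queued"
    created_at: Optional[float] = None
    finished_at: Optional[float] = None
    logs_url: Optional[str] = None    # link to logs per the status shape

    def advance(self, new_status: str) -> None:
        """Move to a new status, rejecting illegal transitions."""
        if new_status not in TRANSITIONS[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status
```

Rejecting illegal transitions at the record level keeps status reporting trustworthy even if a compute provider misreports (e.g. a terminal run can never flip back to running).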
## Open questions
- Packaging: source transfer as tarball vs VCS ref vs OCI artifact.
- Secrets: pass-through via compute provider vs OpenTaco-managed vault.
- Concurrency: per-unit queue depth and global scheduling policies.