
Design: LLM-assisted operations

Status: Draft · Last updated: 2026-04-20

An LLM-assisted layer that provides grounded answers, draft change bundles, and import/reconciliation help without making the model a second source of truth. It reads inventory through existing APIs, proposes structured actions, and executes writes only through the same paths as human operators (audit, RBAC, approval where required).

Goals

| Goal | User-visible outcome |
| --- | --- |
| Throughput | Faster path from natural language to validated API payloads, bulk mappings, and runbooks. |
| Trust | Answers cite inventory objects; “unknown” is explicit when data is missing. |
| Safety | No silent writes; privileged actions use the same gates as today. |
| Operability | Observable, rate-limited, tenant-scoped usage. |

Non-goals (initial phases)

Personas and scenarios

| Persona | Scenario | Assistant role |
| --- | --- | --- |
| NetOps / NOC | Dependencies / incident context | Search + resource graph; summarize with citations. |
| DCIM / field | Maintenance planning | Draft change bundle; highlight conflicts. |
| Automation engineer | Recurring procedures | Draft steps, variables, guardrails for jobs/plugins. |
| Data steward | Vendor export reconciliation | Column → schema mapping preview for bulk import. |
| New operator | Learn the model in context | Explain this record and links from retrieved fields. |

Functional requirements (summary)

Grounded Q&A (read path)

Optional page context (resourceType, id). Tools: GET /v1/search, GET /v1/resource-view/{resourceType}/{id}, GET /v1/resource-graph/{resourceType}/{id}. Short answers with citations; no invented IDs when retrieval is empty.
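The “no invented IDs when retrieval is empty” rule can be sketched as a guard in the answer path. This is a minimal illustration, not the real client: the `answer_with_citations` helper and the result field names (`resourceType`, `id`) are assumptions standing in for GET /v1/search output.

```python
def answer_with_citations(question: str, results: list[dict]) -> dict:
    """Answer only when retrieval returned objects to cite.

    `results` stands in for search output; the field names are
    assumptions for this sketch.
    """
    if not results:
        # Explicit "unknown" instead of a fabricated object or ID.
        return {"answer": "unknown", "citations": []}
    citations = [f'{r["resourceType"]}/{r["id"]}' for r in results]
    return {
        "answer": f"Found {len(results)} matching object(s).",
        "citations": citations,
    }
```

The key property is that the empty-retrieval branch fails closed: the assistant says “unknown” rather than inventing a citation.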

Change assistance (write path)

Machine-readable proposals; UI preview; execution via existing mutating REST with RBAC.
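One possible shape for a machine-readable proposal, sketched under assumptions: the `ChangeBundle` / `ProposedAction` names and fields are invented for illustration; the real schema is whatever the existing mutating REST API accepts.

```python
from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    method: str    # e.g. "PATCH"
    path: str      # e.g. "/v1/devices/{id}" (hypothetical)
    payload: dict

@dataclass
class ChangeBundle:
    summary: str
    actions: list[ProposedAction] = field(default_factory=list)
    requires_approval: bool = True  # default to the safe path

def validate(bundle: ChangeBundle) -> list[str]:
    """Cheap pre-flight checks before the UI preview step."""
    errors = []
    if not bundle.actions:
        errors.append("bundle has no actions")
    for a in bundle.actions:
        if a.method not in {"POST", "PATCH", "PUT", "DELETE"}:
            errors.append(f"unsupported method: {a.method}")
    return errors
```

Validation runs before the preview so the operator only ever confirms a structurally sound bundle; execution still goes through the normal RBAC-gated endpoints.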

Bulk import assistance

Column mapping, coercion notes, validation warnings before bulk import endpoints.
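A mapping preview might look like the sketch below. The matching heuristic (case-insensitive, spaces to underscores) and the column/field names are illustrative assumptions, not the production mapper.

```python
def preview_mapping(header: list[str], schema: dict[str, type]) -> dict:
    """Match export columns to schema fields and flag everything else."""
    mapping, warnings = {}, []
    fields = {name.lower(): name for name in schema}
    for col in header:
        target = fields.get(col.strip().lower().replace(" ", "_"))
        if target:
            mapping[col] = target
        else:
            warnings.append(f"unmapped column: {col!r}")
    for name in schema:
        if name not in mapping.values():
            warnings.append(f"schema field not covered: {name!r}")
    return {"mapping": mapping, "warnings": warnings}
```

The point is that warnings surface before any bulk import endpoint is called, so the steward edits the mapping rather than discovering failures mid-import.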

Incident / ticket assist (optional)

Extract identifiers from pasted text; resolve via search; return linked inventory summary.
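A toy extractor shows the shape of the first step; the regex patterns and identifier kinds here are made-up examples, since real ID formats come from the inventory schema, and each hit would then be resolved via GET /v1/search.

```python
import re

# Example patterns only; real identifier formats are schema-defined.
ID_PATTERNS = {
    "serial": re.compile(r"\b[A-Z]{2}\d{6}\b"),
    "ip": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def extract_identifiers(text: str) -> dict[str, list[str]]:
    """Pull candidate identifiers from pasted ticket text."""
    return {kind: pat.findall(text) for kind, pat in ID_PATTERNS.items()}
```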

System architecture

Copilot orchestrator behind the BFF; tool-calling loop; core domain and workers remain authoritative for state changes.
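The tool-calling loop can be reduced to a few lines. This is a shape sketch only: `model_step`, the step dict format, and the tool names are stand-ins; the real orchestrator sits behind the BFF and talks to an actual LLM.

```python
def run_loop(model_step, tools: dict, max_calls: int = 5) -> str:
    """Alternate model and tool turns until the model emits a final answer."""
    messages = []
    for _ in range(max_calls):
        step = model_step(messages)
        if step["type"] == "final":
            return step["text"]
        tool = tools[step["tool"]]  # unknown tool names raise KeyError
        messages.append({"tool": step["tool"], "result": tool(**step["args"])})
    return "unknown"  # budget exhausted: fail closed, no invented answer
```

Note that state changes never happen inside this loop; the loop only reads, and write proposals are handed to the existing mutation path.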

Tooling contract (v1 minimum)

| Tool | Purpose | API |
| --- | --- | --- |
| search | Find objects | GET /v1/search?q=&limit= |
| get_resource_view | Fields + graph | GET /v1/resource-view/… |
| get_resource_graph | Graph JSON | GET /v1/resource-graph/… |

Tools run with the caller’s credentials; idempotency keys where supported; hard limits per message/session.
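The per-message/per-session hard limit can be as simple as a counter. The class name and the default of 20 calls are example values for illustration, not production limits.

```python
class ToolBudget:
    """Hard cap on tool calls for one assistant session."""

    def __init__(self, max_calls_per_session: int = 20):
        self.max_calls = max_calls_per_session
        self.used = 0

    def allow(self) -> bool:
        """Consume one call if budget remains; refuse otherwise."""
        if self.used >= self.max_calls:
            return False
        self.used += 1
        return True
```

Refusal here is visible to the user as a budget error, never as a silently degraded answer.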

Security, privacy, compliance

Observability

Latency, token usage, tool errors, and rate limits are tracked; tracing spans cover each request and tool call; errors are surfaced to users in a safe, non-leaking form.
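Per-tool-call spans can be sketched with a hand-rolled context manager; a real deployment would emit OpenTelemetry spans with equivalent attributes, and the metric field names here are assumptions.

```python
import time
from contextlib import contextmanager

@contextmanager
def tool_span(metrics: list, tool: str):
    """Record duration and status for one tool call."""
    start = time.perf_counter()
    status = "error"  # assume failure until the body completes
    try:
        yield
        status = "ok"
    finally:
        metrics.append({
            "tool": tool,
            "status": status,
            "ms": (time.perf_counter() - start) * 1000,
        })
```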

UX surfaces

Global assistant with page context; contextual actions on list/detail; confirm dialog for proposals; strong empty/error states.

Phased delivery

| Phase | Scope | Exit criteria |
| --- | --- | --- |
| P0 | Grounded Q&A, citations, no writes | Pilot spot-checks on hallucination rate |
| P1 | Proposals + preview + REST execution | Dry-run path for defined resource types |
| P2 | Bulk import mapping | Faster time-to-first-good import |
| P3 | Ticket paste + optional risk narration | Triage workflow adopted by a team |

Success metrics & risks

Metrics: time to answer vs. manual navigation; proposal acceptance rate; number of import-mapping edits; zero unauthorized mutations by design. Risk mitigations: citations, prompt-injection handling, tool-call budgets, private LLM deployment options.

Open questions

  1. Embeddings scope (docs-only vs. object embeddings).
  2. Multi-tenant deployment topology for copilot.
  3. ITSM integration for certain mutations.
  4. Internationalization.

Full specification: docs/design-llm-assistant.md
