Issue #3582288: Add loop_aware option to skip context re-injection on agent loops

Summary

Adds a per-agent loop_aware boolean to ai_context agent config. When enabled, context is injected only on loop 0. On subsequent loops, the system prompt is rebuilt without context items. The LLM retains context influence only through prior conversation messages (user/assistant history). This is a token optimization trade-off — not a lossless skip.
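
The decision described above reduces to a small predicate. A hedged sketch (the helper name `shouldInjectContext()` is hypothetical, for illustration only; the real check lives in the subscriber):

```php
<?php

// Sketch of the core decision, simplified from the subscriber logic.
// shouldInjectContext() is a hypothetical name for illustration.
function shouldInjectContext(bool $loopAware, int $loopCount): bool {
  // Loop 0 always injects; later loops skip only when loop_aware is enabled.
  return !$loopAware || $loopCount === 0;
}
```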

Measured 52% token reduction on a 3-loop Canvas page build with ~4K tokens of always-include context.

Changes

  • config/schema/ai_context.schema.yml — Adds loop_aware: boolean to per-agent mapping, label: "Loop-aware context injection"
  • src/Service/AiContextRequestFactory.php — findAgentConfig() made public (was private). New isLoopAware() method encapsulates the config check. The factory is the canonical and only source for agent config lookups.
  • src/EventSubscriber/AiContextSystemPromptSubscriber.php — Stores loop count from AgentStartedExecutionEvent, caches loop_aware flag via isLoopAware() in onAgentStarted(), skips injection when flag is set and loop > 0. Subscriber has no direct knowledge of agent config keys. Usage tracking is intentionally skipped on loops where no context is injected (documented in code comment).
  • src/Form/AiContextAgentForm.php — Adds checkbox UI. Description accurately states the trade-off: "Safer for stable directives like tone or writing style; riskier for context containing factual data or output-format constraints the LLM must reference on every loop."
  • tests/src/Kernel/SystemPromptSubscriberLoopAwareTest.php — 7 data-provider cases + 1 explicit test. Minimal module set (10 modules). No @runTestsInSeparateProcesses.
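
For reference, the schema addition might look like the following (a sketch — the exact parent keys and placement within ai_context.schema.yml are assumptions):

```yaml
# Per-agent mapping entry (sketch; surrounding keys assumed).
loop_aware:
  type: boolean
  label: 'Loop-aware context injection'
```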

Review Response (April 2026)

All 9 findings from the April 20 review are addressed:

Finding #1 — LLM state semantics (critical)

The original comment and form description incorrectly implied the LLM "already has" the context. In reality, BuildSystemPromptEvent fires fresh each loop — the system prompt is replaced, not accumulated. Context influence persists only through prior assistant/user messages in conversation history. Fixed: code comment and form description now accurately describe this as a token optimization trade-off, with guidance on which context types are safer (stable directives) vs. riskier (factual data, format constraints).

Finding #2 — Architectural single-responsibility (critical)

The subscriber was duplicating AiContextRequestFactory's config lookup. Fixed: findAgentConfig() is now public. New isLoopAware() method encapsulates the check. The subscriber calls $this->requestFactory->isLoopAware($agentId) — it no longer reads ai_context.agents config or knows the config key name. Grep confirms only two services read this config: the factory (canonical read) and the form (write on save).
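
The delegation described above can be sketched as follows (self-contained and simplified — the constructor signature and config shape are assumptions; the real factory reads ai_context.agents config):

```php
<?php

// Sketch of the single-responsibility fix: the factory is the only
// component that knows the config key name.
class AiContextRequestFactory {

  public function __construct(private readonly array $agentsConfig) {}

  // Previously private; now the single canonical lookup path.
  public function findAgentConfig(string $agentId): array {
    return $this->agentsConfig[$agentId] ?? [];
  }

  // Encapsulates the flag so callers never see the 'loop_aware' key.
  public function isLoopAware(string $agentId): bool {
    return (bool) ($this->findAgentConfig($agentId)['loop_aware'] ?? FALSE);
  }

}
```

The subscriber then depends only on `isLoopAware()`, so a future rename of the config key touches exactly one class.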

Finding #3 — Config caching (moderate)

Fixed: loop_aware flag cached in $this->loopAwareFlags during onAgentStarted(). The check in onPreSystemPrompt() is a property lookup with no config re-read.
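
The caching shape can be sketched like this (simplified — class and method names here are illustrative, not the MR's actual signatures): resolve the flag once when the agent starts, then the per-prompt check is a plain array lookup.

```php
<?php

// Sketch of the caching pattern from the fix (names illustrative).
class LoopAwareFlagCache {

  /** @var array<string, bool> */
  private array $loopAwareFlags = [];

  // Called once per agent execution; the only place the flag is resolved.
  public function onAgentStarted(string $agentId, callable $isLoopAware): void {
    $this->loopAwareFlags[$agentId] = (bool) $isLoopAware($agentId);
  }

  // Called on every system-prompt build; no config re-read on the hot path.
  public function shouldSkipInjection(string $agentId, int $loopCount): bool {
    return $loopCount > 0 && ($this->loopAwareFlags[$agentId] ?? FALSE);
  }

}
```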

Finding #4 — Usage tracking skip (moderate)

Added documentation in the code comment: "Usage tracking is also intentionally skipped since no context was injected."

Finding #5 — @runTestsInSeparateProcesses (moderate)

Removed. No global state contamination in this test.

Finding #6 — Test module bloat (moderate)

Trimmed from 20 to 10 modules. Removed: datetime, options, node, taxonomy, views, content_moderation, workflows, modeler_api, scheduler, scheduler_content_moderation_integration. All 7 tests pass with the reduced set.

Finding #7 — Missing period (minor)

Moot — the comment was rewritten entirely per finding #1.

Finding #8 — Schema label (minor)

Changed from "Skip context injection on agent loops > 0" to "Loop-aware context injection".

Finding #9 — Form placement (minor)

Noted for future work. If more optimization flags land, they will be grouped under a details element.

Edited by Alex Urevick-Ackelsberg
