The Tiny Failure That Exposes the Whole Integration
The failure looked boring: a presentation-design automation path could not run because the LLM generation lane it expected was not configured.
That sounds like a small setup miss. In practice, it is exactly the kind of miss that design-tool integrations must catch early.
Design tools are not plain text editors. A useful automation route may need to preserve layout constraints, templates, source assets, speaker intent, image policy, export format, and human review points. If the route silently swaps to a weaker path, the agent can produce something that looks successful while violating the actual contract.
A design artifact is not done when an API returns. It is done when the promised artifact exists with the promised constraints preserved.
That is why I now treat a missing LLM config as a capability-gate failure, not as a runtime inconvenience. The system should say “blocked: the chosen generation route is not configured” before it mutates anything, spends tokens, creates partial files, or asks the user to trust a degraded artifact.
Why Design Automation Is Easy to Fake
LLM-backed design automation creates a particularly tempting illusion: if the model can talk confidently about the desired output, it can feel as if the output route is ready.
It is not.
A presentation generator, diagram tool, image route, or design-system plugin usually crosses at least four boundaries:
- Semantic intent: what the user actually wants the artifact to communicate.
- Source contract: what inputs, assets, data, and evidence must survive the transformation.
- Tool capability: what the selected design route can actually generate or preserve.
- Configured execution: which model, credentials, permissions, and export paths are currently available.
Agents are good at the first boundary. They can be dangerously overconfident at the other three unless the integration makes those boundaries explicit.
This is not unique to one tool. Presenton, for example, is a public open-source AI presentation generator and API that supports self-hosting, custom templates, and multiple model backends. That flexibility is useful precisely because it decouples artifact generation from one fixed SaaS route. It also means the integration must know which route is actually configured before it claims it can generate a deck.
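To make those four boundaries concrete, here is one way to pin them down as data rather than prose. This is a sketch; every name in it is illustrative, not taken from any real integration:

```typescript
// Illustrative only: field names are hypothetical.
type DesignRequest = {
  // Semantic intent: what the artifact must communicate.
  intent: { audience: string; message: string };
  // Source contract: inputs that must survive the transformation.
  sources: { outline: string; assets: string[]; constraints: string[] };
};

type RouteCapability = {
  // Tool capability: what this route can actually generate or preserve.
  preserves: string[];
  exports: string[];
};

type ExecutionState = {
  // Configured execution: what is available right now, not in principle.
  modelConfigured: boolean;
  permissionsPresent: boolean;
};
```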
Capabilities Are Contracts, Not Hints
The broader software ecosystem already knows this lesson. Good integrations declare what they can do before they do it.
Figma treats network access as an explicit manifest concern: when a plugin declares network access, the allowed domains it lists constrain what the plugin may reach. The Figma plugin manifest documentation treats this as part of the plugin's declaration, not a side conversation after the plugin runs.
The Model Context Protocol does the same kind of thing for tools. Servers that support tools must declare the tools capability, and each tool exposes metadata plus an input schema. Discovery comes before invocation.
LLM tool frameworks also point in the same direction. The AI SDK tool-calling documentation describes tool definitions with input schemas used both by the model and for validation, strict-mode options where providers support them, and explicit approval flows for sensitive operations.
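To make that pattern concrete, here is a minimal tool definition in that style, sketched against the AI SDK v5 `tool` helper and Zod. The route name and fields are hypothetical, not from any real integration:

```typescript
import { tool } from "ai";
import { z } from "zod";

// Hypothetical design-route tool: the input schema is the declared contract,
// visible to the model and enforceable before execution.
export const generateDeck = tool({
  description: "Generate a presentation from an approved outline.",
  inputSchema: z.object({
    outline: z.string().min(1),
    template: z.string().describe("Template id the output must honor"),
    exportFormat: z.enum(["pptx", "pdf"]),
  }),
  execute: async ({ outline, template, exportFormat }) => {
    // Real generation would go here; discovery and validation came first.
    return { status: "queued", outline, template, exportFormat };
  },
});
```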
Design-tool integrations should borrow that posture: declare capabilities before invocation, validate inputs before execution, and gate sensitive operations behind explicit approval.
The Capability Gate I Want Before Any Design Generation
For a design-tool route, I want a preflight gate that is boring, explicit, and hard to bypass. It should answer four questions before the first generation side effect:
| Gate | Question | Fail-closed behavior |
|---|---|---|
| Input contract | Are the required source text, assets, constraints, and output obligations present? | Stop and list the missing inputs. Do not fabricate missing assets. |
| Route capability | Can this route preserve the artifact class being requested? | Choose an authorized stronger route, ask for approval to degrade, or block. |
| LLM configuration | Is the model lane required by this design route configured and reachable? | Report a blocked state. Do not silently borrow another lane with different behavior. |
| Artifact proof | Did the output file or design object satisfy the original artifact obligations? | Mark the run incomplete even if the service call succeeded. |
The third gate is the one that triggered this post. But the point is not “remember to configure the model.” The point is that model configuration is part of the artifact contract. If it is absent, the design route is absent.
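Here is the shape of that gate as code. A sketch with illustrative names; the fourth gate runs after generation, so it is noted rather than implemented:

```typescript
// Fail-closed preflight over the first three gates. All names are
// illustrative; a real route would plug in its own checks.
type GateResult =
  | { ok: true }
  | { ok: false; gate: string; reason: string };

async function preflight(route: {
  missingInputs(): Promise<string[]>;
  canPreserveArtifactClass(): Promise<boolean>;
  llmLaneConfigured(): Promise<boolean>;
}): Promise<GateResult> {
  const missing = await route.missingInputs();
  if (missing.length > 0) {
    return { ok: false, gate: "input_contract", reason: `missing: ${missing.join(", ")}` };
  }
  if (!(await route.canPreserveArtifactClass())) {
    return { ok: false, gate: "route_capability", reason: "route would degrade the artifact" };
  }
  if (!(await route.llmLaneConfigured())) {
    return { ok: false, gate: "llm_configuration", reason: "generation lane is not configured" };
  }
  return { ok: true }; // the fourth gate, artifact proof, runs after generation
}
```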
A Minimal Preflight Contract
I do not want the agent to reason about this from vibes every time. I want the route to expose a small, inspectable contract.
```yaml
design_route:
  purpose: "generate or update a presentation artifact"
  input_contract:
    requires_outline: true
    requires_source_assets: declared_by_request
    requires_template_constraints: declared_by_request
  capabilities:
    preserves_layout_constraints: true
    exports_pptx: true
    exports_pdf: true
    supports_image_generation: route_dependent
  execution_readiness:
    llm_configured: must_be_true
    required_permissions_present: must_be_true
    dry_run_available: recommended
  failure_policy:
    missing_config: block_before_generation
    unsupported_asset_class: block_or_user_approved_degradation
    artifact_validation_failure: do_not_mark_complete
```
This is conceptual, not a dump of a live config. The shape matters more than the exact syntax. A real implementation might use a manifest, a typed config object, a tool registry, or a preflight function. What matters is that the design route can explain itself before the agent asks it to produce public-looking work.
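One way to keep the contract inspectable rather than vibes-based is to parse it into a typed object and refuse to build the route when readiness fields fail. A sketch, assuming Zod and mirroring a subset of the YAML shape above:

```typescript
import { z } from "zod";

// Typed mirror of part of the conceptual contract; other sections would
// be mirrored the same way.
const DesignRouteContract = z.object({
  purpose: z.string(),
  execution_readiness: z.object({
    llm_configured: z.boolean(),
    required_permissions_present: z.boolean(),
  }),
  failure_policy: z.object({
    missing_config: z.literal("block_before_generation"),
  }),
});

export function loadRoute(raw: unknown) {
  const contract = DesignRouteContract.parse(raw); // malformed contract: throw, fail closed
  if (!contract.execution_readiness.llm_configured) {
    // The route is absent, not degraded: surface a blocked state, create nothing.
    throw new Error("blocked: the chosen generation route is not configured");
  }
  return contract;
}
```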
What “Blocked” Should Look Like
A useful blocked state is not just an error string. It should preserve the decision context so the next move is obvious.
| Failure | Bad behavior | Better blocked state |
|---|---|---|
| Missing LLM config | Try a different model route without telling the user. | “The selected design route is not configured for generation. No artifact was created.” |
| Route cannot preserve assets | Generate a text-only approximation and call it done. | “This route would drop required assets. Approve degradation or choose another route.” |
| Permissions are absent | Request broad access at runtime because the agent is in a hurry. | “The route lacks required access. Configure the permission boundary first.” |
| Artifact validation fails | Report success because the API returned normally. | “Generation returned, but the artifact did not satisfy the contract.” |
This is especially important for design artifacts because partial success can look polished. A wrong deck with clean typography is more dangerous than a loud failure.
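In code, that means a blocked state should be a structured result rather than a thrown string. A sketch, with illustrative fields:

```typescript
// A blocked state should carry enough context to make the next move obvious.
type BlockedState = {
  route: string;                    // which route was selected
  gate: "input" | "capability" | "config" | "artifact";
  userMessage: string;              // what happened, in the user's terms
  sideEffects: "none" | "partial";  // never claim "none" unless verified
  nextSteps: string[];              // concrete remediation, not a stack trace
};

const missingLlmConfig: BlockedState = {
  route: "presentation-generation",
  gate: "config",
  userMessage:
    "The selected design route is not configured for generation. No artifact was created.",
  sideEffects: "none",
  nextSteps: ["Configure the model lane for this route", "Re-run preflight"],
};
```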
The Checklist I Use Now
Before letting an agent drive a design-tool integration, I want these checks to pass:
- Name the artifact: deck, diagram, mockup, image set, template, or export bundle.
- Name the non-negotiables: required assets, template constraints, data fidelity, branding boundaries, and export formats.
- Name the route class: text-only planning, design-aware generation, API-backed artifact creation, or manual-assist workflow.
- Prove model readiness: the selected LLM lane is configured, reachable, and appropriate for the route.
- Prove tool readiness: permissions, network boundaries, local dependencies, and output directories are ready before side effects.
- Run a tiny proof: prefer a bounded dry run before committing a large design task (see the sketch after this list).
- Label degradation: if a route cannot preserve something, say so before generation.
- Verify the artifact: inspect the final file or design object against the original contract.
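The tiny proof can be as small as generating one slide and validating it against the contract before committing to the full deck. A sketch with hypothetical helpers; `generateSlides` and `validateArtifact` stand in for whatever the real route exposes:

```typescript
// Bounded dry run: prove the route on one slide before spending the budget.
async function tinyProof(
  generateSlides: (count: number) => Promise<{ files: string[] }>,
  validateArtifact: (file: string) => Promise<boolean>,
): Promise<boolean> {
  const sample = await generateSlides(1); // one slide, not the deck
  if (sample.files.length === 0) return false;
  return validateArtifact(sample.files[0]); // check the output, not the API status
}
```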
The human should not have to reverse-engineer which route the agent silently chose. The route should be visible enough that the human can approve the risk.
How This Connects to Agent Workflow Validation
This is the same pattern I use elsewhere: fail closed before the expensive or irreversible step, and keep the failure legible.
In Fail-Closing Agent Launches, the lesson was that auth and readiness checks should block before tooling starts. In Modernizing Agent Skills Without Growing a Skill Jungle, the related lesson was that skills need explicit capability boundaries instead of optimistic routing.
Design-tool integrations combine both problems. They are tool launches that produce artifacts. That means they need launch readiness and artifact proof.
If a route cannot prove its capability up front, it should not get to create something polished enough to be mistaken for success.
The Bigger Lesson
LLMs make design automation feel more fluid, but they do not remove integration contracts. They make those contracts more important.
A missing LLM config is not embarrassing. It is useful signal. It tells you the route has not earned the right to generate yet.
That is the design-tool integration standard I want: explicit route contracts, fail-closed model readiness, visible degradation, and artifact validation that checks the output rather than the agent's confidence.
When that standard is in place, a blocked run is not a failure of the workflow. It is the workflow doing its job.