AI & API Design

How a Stubborn AI Agent Led Me to Redesign How APIs Talk to Machines

Ramazan Yavuz
2025

The Agent That Would Not Stop Guessing

I was building an API to connect a messaging system to an AI agent. I did everything right. Full API specs. A discovery endpoint. Documentation kept up to date. The kind of setup where a human developer would read the docs once and never have a problem.

The agent had problems.

It kept trying random formats. Guessing parameters. Hitting endpoints with payloads that made no sense. Pure trial and error. It would get a 410 back and start hallucinating recovery strategies, trying URLs that did not exist, reformatting requests for no reason, asking vague clarifying questions that helped nobody.

I was able to fix it by fine-tuning the agent's behavior on the client side. Hard-coding instructions into the agent's context about how to interact with this specific API. And it worked. Until the context cleared. Then the whole cycle started again.

The obvious next step was to hard-code everything into the agent's system prompt or an agents.md file. But the API was going to keep changing. I was not going to maintain a parallel instruction manual for the AI every time I added an endpoint or changed a response format. That is the kind of work that quietly destroys a project from the inside.

I sat with this problem for a while and realized something that should have been obvious from the start.

The Problem Is Not the Agent

The problem was the API.

Traditional REST responses return results. A status code, maybe a message, maybe some data. That is everything a human developer needs because humans can read documentation, understand conventions, and make inferences.

AI agents cannot do any of that reliably. When an agent gets back {"status": 410, "message": "Gone"}, it has no idea what "gone" means in context. Is the resource deleted? Archived? Moved? Temporarily unavailable? The agent does not know, so it guesses. And guessing means wasted tokens, failed requests, and broken user trust.

The missing piece was guidance. Not documentation that lives in a separate file the agent may or may not have in context. Guidance that lives in the response itself, right next to the data, delivered at the exact moment the agent needs it.

Building TEKIR

The idea was simple: extend API responses with structured fields that tell the agent what happened, why, and what to do next.

Not just for errors. For everything. A successful order confirmation should tell the agent "show the user a summary, tracking is not available for another hour or two, cancellation is irreversible so ask before doing it." A 410 should tell the agent "this thread was archived after 90 days of inactivity, here is the endpoint to create a new one, ask the user before creating it."
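Concretely, that 410 could carry its own guidance. Here is a sketch using the extension fields defined below; the endpoint paths and values are illustrative, not from the spec:

```json
{
  "type": "https://example.com/problems/thread-archived",
  "title": "Gone",
  "status": 410,
  "detail": "This thread is no longer writable.",
  "reason": "Threads archive after 90 days of inactivity. Messages remain readable; no new posts are allowed.",
  "next_actions": [
    {
      "id": "create_thread",
      "description": "Start a replacement thread",
      "endpoint": "/threads",
      "method": "POST",
      "effect": "create"
    }
  ],
  "user_confirmation_required": true,
  "agent_guidance": "Ask the user before creating a new thread."
}
```

Nothing here requires the agent to have read any documentation. Everything it needs arrives with the response.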

I defined seven optional extension fields:

reason explains why the response has this outcome. Not a generic error message, but actual context. "Threads archive after 90 days of inactivity. Read-only, messages are still accessible but no new posts allowed."

next_actions gives the agent machine-actionable follow-up steps. Each action has an ID, a description, an endpoint, an HTTP method, and an effect type (read, create, update, delete). The agent does not have to guess what it can do next. The API tells it.

agent_guidance provides natural-language instructions. "Do not poll tracking immediately, labels take one to two hours. If canceling, confirm the user understands it is irreversible." These are instructions the agent can reason about directly.

user_confirmation_required is a boolean that tells the agent whether it should ask the user before acting. This is critical for autonomous agents. An effectful action like deleting an order should not happen silently just because the agent decided it was the right move.

retry_policy gives structured retry hints. Whether the request is retryable, how long to wait, maximum attempts. No more agents hammering an endpoint in a tight loop because they do not know the rate limit.

limitations documents constraints. "Maximum 100 items per page." "Search results limited to last 30 days." Things the agent needs to know to avoid making nonsensical requests.

links provides related resources following RFC 8288. Documentation, related endpoints, status pages.

Every field is optional. You can add a single next_actions field to one endpoint and immediately get value. No all-or-nothing adoption.
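Putting several fields together, a hypothetical order-confirmation response might look like this. The top-level field names come from the list above; the nested shapes (retryable, retry_after_seconds, and so on) are my guesses at plausible structure, not quotes from the spec:

```json
{
  "status": "confirmed",
  "order_id": "o_123",
  "next_actions": [
    {
      "id": "track_order",
      "description": "Check shipping status",
      "endpoint": "/orders/o_123/tracking",
      "method": "GET",
      "effect": "read"
    },
    {
      "id": "cancel_order",
      "description": "Cancel this order",
      "endpoint": "/orders/o_123",
      "method": "DELETE",
      "effect": "delete"
    }
  ],
  "agent_guidance": "Show the user a summary. Do not poll tracking immediately; labels take one to two hours. Cancellation is irreversible, so confirm before doing it.",
  "user_confirmation_required": false,
  "retry_policy": { "retryable": true, "retry_after_seconds": 3600, "max_attempts": 3 },
  "limitations": ["Tracking data is unavailable for the first one to two hours."],
  "links": [{ "rel": "documentation", "href": "https://example.com/docs/orders" }]
}
```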

Making It Work with What Already Exists

I did not want to invent a new response format. That would mean every existing API client would break, and adoption would be impossible.

TEKIR extends RFC 9457, which is the existing standard for Problem Details in HTTP APIs. Any valid RFC 9457 response is automatically valid TEKIR. The extension fields are additional JSON properties that existing clients will simply ignore if they do not understand them.

This was a deliberate design decision. RFC 9457 is explicitly built for errors. It defines fields like type, title, status, and detail for describing what went wrong. But the limitation is right there in the name: "Problem Details." It only covers problems.

The insight behind TEKIR is that agents need guidance on successful responses too. When an order is confirmed, the agent needs to know what comes next just as much as when an order fails. TEKIR takes the RFC 9457 envelope and makes it work for the full lifecycle of a request, not just the failure cases.
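The compatibility claim is easy to demonstrate: a client that only knows RFC 9457 reads the standard fields and never notices the extensions. A minimal sketch, with an illustrative response body:

```javascript
// A TEKIR response is a regular RFC 9457 problem document
// with extra properties a legacy client simply never reads.
const body = JSON.parse(`{
  "type": "https://example.com/problems/thread-archived",
  "title": "Thread archived",
  "status": 410,
  "detail": "Archived after 90 days of inactivity.",
  "reason": "Threads archive after 90 days of inactivity.",
  "user_confirmation_required": true
}`);

// Legacy client: the standard Problem Details fields work unchanged.
console.log(body.title, body.status); // Thread archived 410

// TEKIR-aware agent: the same document carries guidance.
if (body.user_confirmation_required) {
  console.log("ask the user first");
}
```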

It also complements other standards rather than competing with them. OpenAPI defines API contracts at design time. TEKIR provides metadata at runtime. MCP handles tool discovery and invocation. TEKIR handles what comes back after invocation. HATEOAS promised self-describing APIs decades ago. TEKIR is what makes hypermedia controls actually practical, because LLMs can reason about structured action definitions in a way that previous generations of API clients never could.

The Discovery Mechanism

One piece that took some thinking was how an agent discovers that an API supports TEKIR in the first place.

The solution has two parts. First, every TEKIR response includes two headers: TEKIR-Version and TEKIR-Discovery. The discovery header points to a tekir.json file at the API root.

Second, tekir.json is a static document that describes the entire API surface. Which endpoints exist, what TEKIR fields each one uses, what authentication looks like, and crucially, named workflows. A workflow called "create_order" might describe the sequence: GET /products, then POST /orders, then POST /orders/{id}/confirm. The agent reads this once and understands the full flow without trial and error.

The flow is: agent hits any endpoint, sees the TEKIR-Discovery header, fetches tekir.json, and from that point on knows everything. No guessing, no probing, no wasted requests.
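For the order workflow mentioned above, a tekir.json might sketch out like this. The exact schema is defined by the spec; the shape below is illustrative:

```json
{
  "tekir_version": "1.0",
  "name": "Orders API",
  "authentication": { "type": "bearer" },
  "endpoints": [
    { "path": "/products", "methods": ["GET"], "tekir_fields": ["limitations"] },
    { "path": "/orders", "methods": ["POST"], "tekir_fields": ["next_actions", "agent_guidance", "retry_policy"] }
  ],
  "workflows": {
    "create_order": {
      "description": "Browse products, place an order, confirm it.",
      "steps": [
        { "endpoint": "/products", "method": "GET" },
        { "endpoint": "/orders", "method": "POST" },
        { "endpoint": "/orders/{id}/confirm", "method": "POST" }
      ]
    }
  }
}
```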

Zero-Install Adoption

I wanted adoption to be as frictionless as possible. If you use Claude Code, you can install TEKIR as a plugin and use the /tekir command before asking Claude to build an endpoint. It will generate TEKIR-compliant responses automatically.

But even simpler: you can just drop a TEKIR.md file into your project root. AI coding assistants like Claude and Cursor will read it automatically and start building compliant APIs without any explicit instruction. One curl command, no package to install, no configuration.

For more structured integration, there is Express and Fastify middleware that adds a res.tekir() method to your response objects. You call it with your guidance fields before sending the response, and the middleware handles the rest.
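In spirit, the Express middleware reduces to something like this. This is a simplified sketch, not the published package; only the res.tekir() method is from the project itself, and the version string and discovery path are illustrative:

```javascript
// Simplified sketch of TEKIR-style Express middleware.
// The real package's API may differ; res.tekir() is the documented entry point.
function tekirMiddleware(req, res, next) {
  res.tekir = function (status, data, guidance) {
    // Advertise TEKIR support on every response (values illustrative).
    res.setHeader("TEKIR-Version", "1.0");
    res.setHeader("TEKIR-Discovery", "/tekir.json");
    // Merge the payload with the optional extension fields
    // (reason, next_actions, agent_guidance, retry_policy, ...).
    return res.status(status).json({ ...data, ...guidance });
  };
  next();
}

// Usage inside a route handler:
// res.tekir(200, { order_id: "o_123" }, {
//   agent_guidance: "Do not poll tracking immediately; labels take 1-2 hours.",
//   user_confirmation_required: false
// });
```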

Why I Named It After a Cat

"Tekir" is the Turkish word for "tabby." Tabby cats are one of nature's most resilient designs. Mixed genes over thousands of years, street-forged instincts. They evolved beyond survival. They adapt and thrive in any environment without needing an instruction manual.

That is the notion I wanted to bring to API design. Systems that can figure out what to do next without constant hand-holding.

But there is a more personal reason. In January, my cat was hit by a car. His name was Cilgin, which means "crazy" in Turkish. I could not stop thinking about it, so I named this project after him. He was a tekir. Extremely independent, very intelligent, and honestly more "human" than most AI systems could ever hope to be.

The project carries that spirit forward.

I also realized the name could work as a backronym: Transparent Endpoint Knowledge for Intelligent Reasoning. Sometimes things just line up.

What Changed After Implementing It

The difference was immediate. The agent stopped guessing. When it got a response, it knew exactly why it got that response and what its options were. No more trial and error. No more hallucinated recovery strategies. No more client-side prompt engineering to compensate for an API that only returned results.

The part that surprised me most was how much difference user_confirmation_required made. Before TEKIR, the agent would sometimes autonomously take destructive actions because nothing in the response told it to stop and ask. After TEKIR, effectful actions were explicitly flagged, and the agent consistently asked for confirmation before doing anything irreversible.

I still do not love non-deterministic programming. But if you are going to build systems where AI agents interact with APIs, the least you can do is make those APIs explain themselves. The alternative is an ever-growing pile of client-side prompts that break every time the API changes or the agent's context resets.

TEKIR is open source under MIT. The spec, the schema, the middleware, and the examples are all on GitHub. The whole thing is designed to be adopted incrementally. Start with one field on one endpoint and see if it helps.

If your AI agents are brute-forcing your APIs, the problem might not be the agents.