July 14, 2025

Searching for Gold (Standards) With Agentic Coding

If you’ve been in software delivery for more than five minutes lately, you’ve likely heard “agentic coding,” “AI pair programming,” or “agentic AI” thrown around. Maybe it’s someone promising 10x delivery speeds. Maybe it’s your dev team mentioning that GitHub Copilot and VS Code now offer agentic solutions right out of the box and wondering whether they’re allowed to use them. Whatever the case, if you’re leading a software development team in 2025, agentic coding isn’t something you can afford to ignore.

So what do we do in the face of all the hype? We search for gold-standard approaches. This isn’t about replacing developers. It’s about adding a new kind of muscle to your delivery org—a muscle that’s more about discernment than indiscriminate automation.

Below are some of the lessons we’ve learned that can guide teams toward gold-standard approaches to agentic coding.

The Bell Curve of AI Utility

Agentic coding has an interesting utility curve. At the low end of complexity—think updating the color scheme—it’s like using a fire hose to blow out a candle. Overkill, messy, unnecessary.

On the high end—gnarly business logic, cross-team integrations—it’s like asking a mischievous genie to put out a grease fire in the kitchen. Sure, it’ll “fix” your problem… but maybe it just moves the fire to the living room.

The sweet spot where agentic coding shines is somewhere in the middle: complex enough to need structure, not so complex that AI hallucinates its way through it. Identifying that sweet spot is a learned skill. And like any muscle, your team gets better at it the more they use it.

Agent-First vs. Human-First Work

We already break work down into user stories and epics. What if we started labeling stories “Agent-First” vs “Human-First”?

That one-word distinction changes the entire approach to delivery:

  • Human-First: Coding that lands on either end of the Bell Curve of AI Utility gets this label. It covers work that is small and straightforward, or work where decisions need nuance, collaboration, and deep domain understanding. On a team enabled to use agentic coding and code assist, a sense of when work deserves this designation helps you avoid banging your agent’s head against the wall on a problem that just isn’t suited to that approach.
  • Agent-First: This label is for work that a team believes will be completed using agentic or code-assist methods for all the heavy lifting. Over time, you can adjust forecasts based on the relative speed of delivery for these types of work items.

Start practicing this distinction. Bake it into planning sessions. It’ll sharpen your estimates and boost your team’s AI literacy.
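As a thought experiment, the bell-curve and labeling ideas above can be sketched as a tiny triage helper. The 1–10 complexity scale and the thresholds here are illustrative assumptions, not a prescribed rubric—your team’s own planning sessions are what calibrate them.

```python
# Hypothetical triage sketch: map a story's complexity score (1-10 scale,
# an assumption for illustration) to a delivery label. Both ends of the
# bell curve go Human-First; the middle band goes Agent-First.
def label_story(complexity: int) -> str:
    if complexity <= 2:   # trivial work: faster to just do it by hand
        return "Human-First"
    if complexity >= 8:   # gnarly work: needs nuance and domain judgment
        return "Human-First"
    return "Agent-First"  # the sweet spot where agentic coding shines
```

The point isn’t the code—it’s that the decision becomes explicit and repeatable instead of a gut call made mid-sprint.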

The New Estimation Muscle: How Much AI Will Actually Help?

When your team estimates work, how often do they consider how much of it can be done with AI? Not just faster, but meaningfully assisted—like generating boilerplate, writing unit tests and automation scripts, or refactoring large sections of legacy code.

It’s no longer enough to say, “This will take 10 weeks.” Instead, try: “40% of this scope of work has high-AI-accompaniment potential, so our lower-bound estimate is 7 weeks instead of 10, and the upper-bound is 14.” That range tells your client or stakeholder a more honest story.
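To make that arithmetic concrete, here’s a minimal sketch of turning a single-point estimate into a range, assuming a 10-week baseline, a hypothetical best-case 4x speedup on the AI-assisted share, and a 40% risk buffer on the upper bound. All three numbers are illustrative assumptions, not benchmarks.

```python
def estimate_range(baseline_weeks: float, ai_fraction: float,
                   best_case_speedup: float, risk_buffer: float) -> tuple:
    """Turn a single-point estimate into an honest lower/upper range.

    ai_fraction: share of scope with high AI-accompaniment potential.
    best_case_speedup: assumed best-case multiplier on that share.
    risk_buffer: padding on the upper bound if AI helps less than hoped.
    """
    human_weeks = baseline_weeks * (1 - ai_fraction)
    ai_weeks = baseline_weeks * ai_fraction / best_case_speedup
    lower = human_weeks + ai_weeks
    upper = baseline_weeks + baseline_weeks * risk_buffer
    return lower, upper

# 40% of a 10-week scope, best case 4x faster, 40% risk buffer:
lower, upper = estimate_range(10, 0.4, 4, 0.4)  # roughly 7 and 14 weeks
```

The speedup and buffer will vary by codebase and team; the useful habit is writing your assumptions down so the range can be defended, not just asserted.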

This approach isn’t just smarter—it’s necessary. When you’re delivering software people are paying real money for, you can’t just sprinkle AI on top and hope for magic. You need a legitimate rationale for your estimate’s timeline, and the more clarity you can give, the better your relationship will be over the course of an engagement.

Agentic Collaboration, Not Blind Adherence

When using an agentic coding platform, feeding it console errors and linter errors is hugely helpful. But your team shouldn’t simply push whatever changes it suggests. Instead, ask the agent: “Explain this error and what steps you would take to fix it.”

That change does two things:

  1. It keeps your engineers in the loop instead of outsourcing the thinking.
  2. It surfaces when the AI is confidently wrong—before it causes a production fire.

Just like with any new tool, the goal isn’t blind trust. It’s intelligent collaboration.
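The explain-before-fixing habit can even be encoded into tooling. Here’s a hypothetical sketch—the function name and prompt wording are made up for illustration—of wrapping a raw error into an explanation-first request rather than a “just fix it” one.

```python
# Hypothetical helper (name and wording are illustrative): wrap a raw
# console or linter error in a prompt that asks the agent to explain
# and plan first, instead of pasting the error with "fix this".
def explain_first_prompt(error_text: str) -> str:
    return (
        "Explain this error and what steps you would take to fix it. "
        "Do not apply any changes yet.\n\n"
        f"Error:\n{error_text}"
    )
```

A small wrapper like this keeps the engineer as reviewer of the plan, which is exactly where the confidently-wrong failures get caught.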

Agentic Coding Is a Team Skill

None of this works if your team doesn’t think critically about when and how to use AI. That’s why “agentic discernment” needs to be baked into your dev culture.

It’s not about buying the latest AI plugin or framework. It’s about building a team that knows when AI helps—and when it hurts. Teams that ask, “What’s the best way to build this?” not just “What’s the fastest?”

And that’s where the real value lies.

When It Comes to Agentic Coding:

  • Start estimating with AI involvement in mind.
  • Recognize the sweet spot on the AI utility bell curve.
  • Label tasks “agent-first” vs “human-first.”
  • Don’t blindly accept AI-generated fixes; ask for explanations.
  • Treat agentic discernment as a team competency, not a feature.

Want to explore how AI-first delivery might impact your roadmap? Let’s whiteboard it out. No strings. No fluff. Just clarity.