
July 14, 2025
Searching for Gold (Standards) With Agentic Coding
If you’ve been in software delivery for more than five minutes lately, you’ve likely heard “agentic coding,” “AI pair programming,” or “agentic AI” thrown around. Maybe it’s someone promising 10x delivery speeds. Maybe it’s your dev team mentioning that GitHub Copilot and VS Code now ship agentic features right out of the box, and they’re not sure whether they’re allowed to use them. Whatever the case, if you’re leading a software development team in 2025, agentic coding isn’t something you can afford to ignore.
So what do we do in the face of all the hype? We search for gold-standard approaches. This isn’t about replacing developers. It’s about adding a new kind of muscle to your delivery org—a muscle that’s more about discernment than indiscriminate automation.
Below are some of the lessons we’ve picked up that can guide you toward building gold-standard approaches while using agentic coding.
Agentic coding has an interesting utility curve. At the low end of complexity—think updating a color scheme—it’s like using a fire hose to blow out a candle. Overkill, messy, unnecessary.
On the high end—gnarly business logic, cross-team integrations—it’s like asking a mischievous genie to put out a grease fire in the kitchen. Sure, it’ll “fix” your problem… but maybe it just moves the fire to the living room.
The sweet spot where agentic coding shines is somewhere in the middle: complex enough to need structure, not so complex that AI hallucinates its way through it. Identifying that sweet spot is a learned skill. And like any muscle, your team gets better at it the more they use it.
We already break work down into user stories and epics. What if we started labeling stories “Agent-First” vs “Human-First”?
That one-word distinction changes the entire approach to delivery: Agent-First stories are scoped so an agent can take the first pass while a human reviews; Human-First stories keep a developer in the driver’s seat, with AI assisting at the edges.
Start practicing this distinction. Bake it into planning sessions. It’ll sharpen your estimates and boost your team’s AI literacy.
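As a minimal sketch of what this labeling could look like in practice—the complexity scale, the `Story` class, and the Agent-First band are all hypothetical illustrations, not a prescribed rubric:

```python
from dataclasses import dataclass

# Hypothetical complexity scale: 1 (trivial) to 5 (gnarly, cross-team work).
# The "sweet spot" from the utility curve above sits in the middle band.
AGENT_FIRST_BAND = range(2, 4)  # complexities 2 and 3

@dataclass
class Story:
    title: str
    complexity: int  # the team's own 1-5 planning-poker-style rating

def delivery_label(story: Story) -> str:
    """Label a story Agent-First or Human-First based on its complexity band."""
    return "Agent-First" if story.complexity in AGENT_FIRST_BAND else "Human-First"

backlog = [
    Story("Update color scheme", 1),               # fire hose on a candle
    Story("Add CSV export with validation", 3),    # the sweet spot
    Story("Rework billing reconciliation", 5),     # mischievous-genie territory
]

for s in backlog:
    print(f"{s.title}: {delivery_label(s)}")
```

The point isn’t the code—it’s that the label comes from a deliberate judgment your team records at planning time, not an afterthought at the keyboard.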
When your team estimates work, how often are they considering how much of it can be done with AI? Not just faster, but meaningfully assisted—like generating boilerplate, writing unit tests and automation scripts, or refactoring large sections of legacy code.
It’s no longer enough to say, “This will take 12 weeks.” Instead, try: “40% of this scope of work has high-AI-accompaniment potential, so our lower-bound estimate is 7 weeks instead of 10, and the upper-bound is 14.” That range tells your client or stakeholder a more honest story.
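One way to make that range reproducible is a simple, explicit model. The sketch below is a hypothetical formula, not an industry standard: it assumes the AI-assisted portion of the scope is delivered some fraction faster, and pads the upper bound with a risk buffer for review and rework.

```python
def estimate_range(baseline_weeks: float, ai_fraction: float,
                   ai_speedup: float = 0.75, risk_buffer: float = 0.4):
    """
    Hypothetical estimation model (assumptions, not a standard formula):
    - lower bound: the AI-assisted fraction of the scope goes `ai_speedup` faster
    - upper bound: baseline plus a risk buffer for rework and review overhead
    """
    lower = baseline_weeks - baseline_weeks * ai_fraction * ai_speedup
    upper = baseline_weeks * (1 + risk_buffer)
    return lower, upper

# The example from the text: a 10-week baseline with 40% high-AI-accompaniment scope.
print(estimate_range(10, 0.4))  # -> (7.0, 14.0)
```

The specific coefficients matter less than writing them down: when a range has named inputs, the team can argue about the inputs instead of the gut feel.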
This approach isn’t just smarter—it’s necessary. When you’re delivering software people are paying real money for, you can’t just sprinkle AI on top and hope for magic. You need a legitimate rationale behind your timeline, and the more clarity you can give, the better your relationship will be over the course of an engagement.
When using an agentic coding platform, feeding it console errors and linter errors is hugely helpful. However, you shouldn’t simply push whatever changes it suggests. Instead, ask the agent: “Explain this error and what steps you would take to fix it.”
That change does two things: it forces the agent to surface its reasoning so you can catch a hallucinated fix before it lands, and it keeps a human making the final call on what actually ships.
Just like with any new tool, the goal isn’t blind trust. It’s intelligent collaboration.
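The explain-before-fix step above can be captured in a small wrapper. Everything here is a sketch: `ask_agent` is a hypothetical callable standing in for whatever API your agentic platform exposes.

```python
def review_error(ask_agent, error_text: str) -> str:
    """Ask the agent to explain an error and propose a fix plan
    before it is allowed to change any code."""
    prompt = (
        "Explain this error and what steps you would take to fix it. "
        "Do not write the fix yet.\n\n"
        f"Error:\n{error_text}"
    )
    return ask_agent(prompt)

# Usage: read the explanation, sanity-check the plan against your own
# understanding of the codebase, and only then let the agent apply the fix.
```

Separating “explain” from “apply” keeps the human review step from being skipped under deadline pressure.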
None of this works if your team doesn’t think critically about when and how to use AI. That’s why “agentic discernment” needs to be baked into your dev culture.
It’s not about buying the latest AI plugin or framework. It’s about building a team that knows when AI helps—and when it hurts. Teams that ask, “What’s the best way to build this?” not just “What’s the fastest?”
And that’s where the real value lies.
Want to explore how AI-first delivery might impact your roadmap? Let’s whiteboard it out. No strings, no fluff. Just clarity.