Agentic Coding Assistants: August 2025 Recommendations
📖 New Ebook Available
Build Your First MCP Server: A Developer's Guide to Wrapping Existing APIs for AI Agents to Use
Learn to create powerful AI integrations step by step
Here are my recommendations for working with Agentic Coding Assistants: a structured approach to AI-assisted development that balances creative flow with engineering discipline. I used to call this "mostly" vibe coding, but I've changed my mind: "vibing" implies you're not reading the code closely. The point here is the opposite: careful review and intervention.
Codify best practices
Keep a file (e.g., .cursor-rules or CLAUDE.md for Claude Code) that every LLM call references. This ensures consistency across your codebase and helps the AI understand your preferred patterns, naming conventions, and architectural decisions from the start.
- Naming conventions: camelCase variables/functions, PascalCase classes/components, SCREAMING_SNAKE_CASE constants.
- File and folder structure: where UI lives vs. server code, where tests and stories live, and how to name files.
- PR and commit style: scope of edits, message format, and when to split work.
- Testing policy: unit vs. e2e expectations and the minimum coverage or smoke checks required.
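For example, the naming rules above might translate into code like this (a minimal sketch; the names are illustrative, not from any real project):

```typescript
// Illustrative only: shows the naming conventions a rules file might codify.
const MAX_RETRY_ATTEMPTS = 3; // SCREAMING_SNAKE_CASE constants

interface UserProfile {
  // PascalCase types/components
  displayName: string; // camelCase fields
}

function formatDisplayName(profile: UserProfile): string {
  // camelCase functions
  return profile.displayName.trim();
}

export { MAX_RETRY_ATTEMPTS, formatDisplayName };
export type { UserProfile };
```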
Plan with a reasoning model
Have the LLM draft a step-by-step plan, ask clarifying questions, then save the result to PLAN.md or README.md. For August 2025, use GPT-5 and explicitly include “Think hard” in the prompt to push deeper reasoning. This upfront investment prevents the AI from going down rabbit holes and gives you a clear roadmap. For truly greenfield work, do not start coding until the written plan and requirements feel complete enough to implement without guesswork.
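A hedged sketch of what that planning prompt can look like; the wording and the `featureRequest` value are illustrative, not a fixed recipe:

```typescript
// Hypothetical planning prompt; adapt the steps and wording to your project.
const featureRequest = "Add CSV export to the reports page";

const planningPrompt = `
Think hard before proposing any code.
1. Restate the requirements in your own words.
2. List open questions you need answered before implementation.
3. Produce a numbered, step-by-step plan small enough to execute one section at a time.
Write the final plan in a form I can save as PLAN.md.

Feature request: ${featureRequest}
`;

console.log(planningPrompt); // paste into your reasoning model (e.g., GPT-5)
```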
Execute with a coding model
Switch models to implement the plan, one section at a time.
- UI-focused coding (e.g., React/Next.js, component work): use GPT-5.
- Non-UI coding (backend, scripts, data/infra): use claude-sonnet-4 or, if you're a baller, claude-opus-4.1. (Opus is very expensive; most plans don't include even limited usage of it.)
Different models excel at different tasks—use GPT-5 for planning and UI, Claude for heavier non-UI implementation.
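A small sketch of that routing as code; the model identifiers are the ones named above and may be spelled differently in your tooling:

```typescript
// Illustrative task-to-model routing; adjust to your own subscription and tools.
type Task = "planning" | "ui" | "backend";

const MODEL_FOR_TASK: Record<Task, string> = {
  planning: "gpt-5", // reasoning and plan drafting
  ui: "gpt-5", // React/Next.js component work
  backend: "claude-sonnet-4", // scripts, data, infra (claude-opus-4.1 if budget allows)
};

function pickModel(task: Task): string {
  return MODEL_FOR_TASK[task];
}

console.log(pickModel("backend")); // "claude-sonnet-4"
```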
Read the chain-of-thought and break early
Read the model’s chain-of-thought and code as it proceeds. Immediately break out and course-correct if you see a wrong practice, a shift to a different approach than requested, or the model assuming incorrect syntax and “changing directions” without asking. This happens—sometimes the model gives up or pivots silently. You must watch carefully and intervene fast.
Embed fresh docs in prompts
When adding a dependency or making an API call, paste the relevant documentation so the model uses current syntax and avoids hallucination. This is especially important for rapidly evolving frameworks and APIs where the model's training data might be outdated.
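A sketch of how the docs can be embedded; the docs path and task are hypothetical, and the excerpt is assumed to be saved into your repo first (e.g., copied from the library's current API reference):

```typescript
// Hypothetical example: prepend a vendored docs excerpt so the model uses
// current syntax rather than remembered (possibly stale) training data.
import { readFileSync } from "node:fs";

const docsExcerpt = readFileSync("docs/vendor/payments-api-excerpt.md", "utf8");

const prompt = `
Use ONLY the API shown in the documentation excerpt below; do not rely on
remembered syntax for this library.

<docs>
${docsExcerpt}
</docs>

Task: add a checkout endpoint that follows our existing route conventions.
`;
```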
Vet every dependency
Confirm it's necessary, actively maintained, and on the latest stable version. AI models can sometimes suggest dependencies that are deprecated, poorly maintained, or overkill for your use case. A quick manual check can save you significant technical debt later.
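A quick way to do that check is to query the public npm registry; a minimal sketch (the helper name is hypothetical, and this is only part of a full vetting pass):

```typescript
// Look up a package's latest version and last publish/modified date from the
// public npm registry (Node 18+ global fetch).
async function vetDependency(pkg: string): Promise<void> {
  const res = await fetch(`https://registry.npmjs.org/${pkg}`);
  if (!res.ok) throw new Error(`Registry lookup failed for ${pkg}: ${res.status}`);
  const data = await res.json();
  const latest = data["dist-tags"]?.latest;
  const lastModified = data.time?.modified;
  console.log(`${pkg}: latest ${latest}, last modified ${lastModified}`);
}

vetDependency("zod").catch(console.error);
```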
Loop on errors
Feed stack traces back to the coding model; if it stalls, prompt for alternative angles or bring in a human "Developer Brain" for validation. Bringing in an MCP server for a browser can help with the error feedback loop. Don't let the AI spin its wheels; sometimes a fresh perspective or manual intervention is exactly what's needed.
Tight feedback loops: CI, tests, and debugging
- CI checks: ensure the app type-checks, builds, and lints in CI for every PR. Fail fast and keep the loop tight.
- Smoke/e2e tests: add a minimal Playwright smoke suite for critical paths so regressions surface immediately (see the sketch after this list).
- Local runs: run the build/test locally before marking tasks "done"; feed failures back into the model for fixes.
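A hedged example of what that smoke suite can look like; the URL, link name, and route below are placeholders for your app:

```typescript
// Minimal Playwright smoke test for one critical path.
import { test, expect } from "@playwright/test";

test("home page renders and sign-in navigation works", async ({ page }) => {
  await page.goto("http://localhost:3000/"); // assumes a local dev/preview server
  await expect(page).toHaveTitle(/./); // page has some title at all
  await page.getByRole("link", { name: "Sign in" }).click(); // hypothetical nav link
  await expect(page).toHaveURL(/sign-in/); // hypothetical route
});
```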
Connect your agent with MCP servers
Use MCP servers to give the AI controlled access to tools and services (browser for scraping/debugging, Atlassian for Jira/Confluence, GitHub for PR hygiene, Outlook for scheduling). This turns the assistant from a code generator into a capable operator while keeping actions auditable.
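Most of the servers listed above are off-the-shelf, but you can also wrap your own APIs (the topic of the ebook above). A minimal sketch, assuming the official TypeScript SDK (@modelcontextprotocol/sdk) and zod; the tool name and response text are hypothetical:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Expose a single read-only tool the agent can call over stdio.
const server = new McpServer({ name: "ticket-lookup", version: "0.1.0" });

server.tool(
  "get_ticket_summary", // hypothetical tool name
  { ticketId: z.string() }, // input schema the agent must satisfy
  async ({ ticketId }) => ({
    content: [{ type: "text", text: `Summary for ticket ${ticketId} goes here.` }],
  })
);

await server.connect(new StdioServerTransport()); // requires an ESM entry point
```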
Use stable, popular tech
Favor stacks well covered in model training (e.g., React + TypeScript, Vite or Next.js, shadcn/ui, Tailwind CSS v3). If your stack is new, you'll need to inject docs and constantly double-check that the model is using the right syntax and APIs. I'm avoiding the upgrade to Tailwind 4 for this reason.
What's Your Take?
These practices have helped me maintain both velocity and quality when working with AI coding assistants. The key is balancing AI capabilities with close human review and engineering discipline.
Do you have any tips for effective AI-assisted development? How are you approaching reviewing AI-generated code?
I'd love to hear about your experiences and techniques on LinkedIn. The field is evolving rapidly, and there’s still so much to learn about the most effective ways to collaborate with AI in software development.
Want to Chat About AI Engineering?
I hold monthly office hours to discuss your AI Product, MCP Servers, Web Dev, systematically improving your app with Evals, or whatever strikes your fancy. The times are odd because they fall on weekends or before/after my day job, but I offer this as a free community service. I may create anonymized content from our conversations, as they often make interesting blog posts for others to learn from.
Book Office Hours