AI Coding, Trust and Review
Posted on: 2026-03-04
The more I use AI, the more I believe it represents the future of accelerated development; however, I am also increasingly wary of its tendency to "drift" in the wrong direction.
While it consistently finds solutions, they aren't always the optimal ones, a reality that persists whether I'm using Claude, Cursor, or any other tool. I'm also realizing that having a human review every line of code will soon become impossible as development velocity skyrockets across multiple teams and pods.
Developing with AI has to be treated as a full lifecycle: review must be built into the act of creation itself, from planning through execution to post-coding.
To address this, I am implementing deeper automation so that oversight is embedded at every stage. Within my multi-agent project, once the planner agent completes its task, a secondary AI immediately challenges the plan. As work is distributed among multiple agents, each stream is then evaluated against an explicit matrix of criteria. Finally, once the code is written, it is scrutinized by another AI agent with a different, more "pessimistic" personality.

This comprehensive loop is currently slow, but it can be optimized. Much like human development cycles, establishing multiple quality gates seems to be the only viable path forward with AI.
