AI blinders are why humans still have an edge, for now
Posted on: 2026-03-09
The black side panels added to a horse's head during a race are called blinkers (or blinders in the United States). These devices are plastic cups attached to a fabric hood placed over the horse's head, under the bridle, to limit the animal's wide peripheral vision.
Much like a racehorse, AI that generates code often wears blinders. These models produce code quickly, but they frequently run into situations that force them to backtrack and fix mistakes. While models are getting better, they still suffer from tunnel vision, often missing the bigger picture of software architecture or how code can be reused.
Note that in this picture, generated by Gemini, the prompt asked for a horse with blinders, but the AI decided otherwise.
I recently started a project where Codex created all the pages I needed. However, instead of building a reusable layout with a header and a side menu that could accept custom properties, it created repetitive code for every page. When I asked the AI to adjust one page header, it only fixed that specific one. That is when I looked closer at the code and realized the issue. The AI did exactly what I initially asked, but it failed to realize that the pages shared common parts.
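To make the difference concrete, here is a minimal TypeScript sketch of the layout the AI should have produced. The names (renderLayout, LayoutProps) are illustrative, not taken from the actual project, and a real app would likely use a component framework rather than string templating:

```typescript
// Hypothetical sketch: one shared layout instead of copy-pasted markup per page.
interface LayoutProps {
  title: string;       // page-specific header text
  menuItems: string[]; // side-menu entries, customizable per page
  content: string;     // the page body
}

// Every page reuses this single function, so fixing the header here fixes all pages.
function renderLayout({ title, menuItems, content }: LayoutProps): string {
  const menu = menuItems.map((item) => `<li>${item}</li>`).join("");
  return (
    `<header><h1>${title}</h1></header>` +
    `<nav><ul>${menu}</ul></nav>` +
    `<main>${content}</main>`
  );
}

// Two pages sharing the layout instead of each carrying its own copy of it.
const homePage = renderLayout({
  title: "Home",
  menuItems: ["Home", "About"],
  content: "<p>Welcome</p>",
});
const aboutPage = renderLayout({
  title: "About",
  menuItems: ["Home", "About"],
  content: "<p>Who we are</p>",
});
```

With this structure, asking for a header change means editing one function; with the duplicated version the AI produced, the same request touches every page.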
One could argue that an AI can simply modify all those copy-pasted parts just as fast. I would counter-argue that because the code is not shared, the AI does not think ahead; it only fixes the single page you point out. Furthermore, this approach requires unit testing every individual header instead of just one. In a large project, these mistakes become very costly. Adding specific rules to the AI’s context helps, but there are always edge cases where a human with real-world experience still performs better.
Despite these issues, I work heavily with AI to write code. I am currently building a complex multi-agent system because I believe that dividing tasks among different agents, some focused on the broad picture and others on architecture, could lead to better results. This idea is based on how humans have successfully broken down tasks in real projects for a long time. However, supervision and a long-term view are still roles for senior developers.
Since models cannot hold infinite memory, the best approach right now is to craft a specific context. This means giving the AI short-term memory for the task at hand and "long-term memory" through directions and existing pieces of code. By mixing this with the model's general knowledge, the result improves over a few validation loops, in which other AI agents check the output from different perspectives, or through different "blinders."
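The validation loop can be sketched as follows. This is a toy example: the reviewers here are plain string checks standing in for AI agents, and revise stands in for a model call that rewrites the draft given the objections:

```typescript
// Hypothetical sketch of a validation loop: each "agent" reviews the same
// draft through its own blinders, and the draft is revised until all pass.
type Review = { ok: boolean; note: string };
type Reviewer = (draft: string) => Review;

// Stand-ins for AI agents; real ones would call a model with a focused prompt.
const reviewers: Reviewer[] = [
  (d) => ({ ok: !d.includes("copy-paste"), note: "reuse check" }),
  (d) => ({ ok: d.includes("layout"), note: "architecture check" }),
];

function validate(
  draft: string,
  revise: (draft: string, notes: string[]) => string,
  maxLoops = 3
): string {
  for (let i = 0; i < maxLoops; i++) {
    const results = reviewers.map((r) => r(draft));
    const failed = results.filter((r) => !r.ok).map((r) => r.note);
    if (failed.length === 0) return draft; // every perspective is satisfied
    draft = revise(draft, failed);         // feed the objections back in
  }
  return draft; // give up after maxLoops rather than iterate forever
}

// A draft that fails both checks gets revised once, then passes.
const result = validate(
  "copy-pasted page markup",
  () => "one shared layout component"
);
```

The bounded loop matters: without a maxLoops cap, disagreeing reviewers could send the draft back and forth indefinitely.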
This method is like taking several photos of your surroundings from different angles. Once you have enough shots, you can see the whole picture. The remaining challenge is determining which shots are good, which are misleading, and which parts of the picture are still missing.
