The rise of AI-powered "vibe coding" enables anyone to create software through simple English prompts, but this convenience comes with significant risks that could undermine innovation and organisational differentiation, warns Dr Lewis Z. Liu, CEO of Eigen Technologies.
The Abstraction Problem in Human and AI Systems
In a thought-provoking column dated Wednesday 12 November 2025, Liu explores how layers of abstraction - the simplification of complex information for consumption higher up the chain - create dangerous knowledge gaps in both human organisations and AI systems. The piece was inspired by conversations with his former Harvard roommate, Momin Malik, now a leading AI safety researcher at the Mayo Clinic.
"While I find the intellectual laziness tedious, I sympathize with the rationale," Liu writes about executives who constantly ask technical staff to "explain this in plain English." However, he warns that delegating critical thinking to AI introduces unprecedented risks compared to human delegation.
From Transistors to Vibe Coding: The Evolution of Abstraction
Liu illustrates the concept of abstraction layers using computer science as an example. The journey begins with transistors - the fundamental units of computation - which are abstracted into logic gates and circuits. These are in turn abstracted into assembly language, which gives way to higher-level languages like C and C++, then Python, and now natural-language prompts through tools like Cursor and Lovable.
"At every level of abstraction, for what you gain in ease of completing a larger task, you lose two things: control over detail and you bake in core assumptions about the system," Liu explains. This becomes particularly problematic with AI-generated code, where users surrender control over how applications are actually built.
The Purple Problem and Knowledge Collapse
Current AI coding tools exhibit what Liu calls the "purple problem" - a disproportionate share of new applications feature purple interfaces because AI systems over-index on recent colour trends. This phenomenon reflects a broader issue of knowledge collapse, where AI systems converge on similar solutions based on patterns in their training data.
"This is fine if you're iterating on something simple like a pizza delivery app, but not okay if you're trying to build something genuinely novel," Liu cautions. The problem extends beyond colour choices to fundamental architectural decisions that could limit innovation.
The risks become even more pronounced when AI enters corporate decision-making chains. Liu cites historical examples like Volkswagen's Dieselgate scandal in 2015 and BP's Texas City refinery explosion in 2005 as cases where critical information was lost as it moved up management layers.
"With AI, however, every human is equipped with the same centralised AI tool, feeding them the same perspective and same information," Liu observes. This creates a "lowest-common-denominator" approach to knowledge and decision-making that strips away organisational differentiation.
A Call for Deeper Understanding
Despite these concerns, Liu maintains that AI remains an "extremely powerful tool for abstraction" when paired with unique datasets and context-aware users. The solution, he argues, lies in human vigilance rather than in rejecting AI altogether.
He implores leaders: "Before you ask your colleague to 'explain something in plain English', try to understand one level of detail below, question the core assumptions and ask for the provenance of the information."
This approach, Liu argues, could harness AI's power for abstraction while avoiding the pitfalls of knowledge collapse that threaten to make every organisation "beige like everyone else."