Fractile vs Nvidia: Can a UK Startup Undercut AI's Chip Giant?

A little-known British startup is beginning to test one of the biggest assumptions underpinning the AI boom: that Nvidia will remain the unavoidable centre of the ecosystem.

A Different Way to Build AI Chips

Founded in 2022 by Oxford researcher Walter Goodwin, Fractile is attempting to redesign how AI chips work altogether, with a focus on speed and – crucially – cost. The first phase of the AI boom was characterised by the race to build ever-larger models. Now the challenge is running them, and doing so cheaply enough to sustain real-world deployment at scale.

Nvidia's GPUs, originally designed for graphics processing, became the backbone of AI because of their ability to handle parallel workloads. However, they were never purpose-built for modern large language models. Their core limitation is architectural – data must constantly move between the processor and separate memory (DRAM). This movement creates latency, consumes vast amounts of energy and acts as a brake on performance.
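A rough roofline-style estimate makes the point concrete. The figures below are illustrative assumptions, not vendor specifications: generating one token with a large language model is dominated by matrix-vector multiplies, which perform only about two floating-point operations per weight streamed from DRAM – so bandwidth, not raw compute, sets the speed limit.

```python
# Back-of-envelope estimate of compute-bound vs memory-bound time for one
# d x d matrix-vector multiply, the core operation of LLM token generation.
# All hardware numbers below are hypothetical, chosen only for illustration.

def matvec_time_s(d_model: int, flops_per_s: float, bytes_per_s: float,
                  bytes_per_weight: int = 2) -> dict:
    """Return estimated compute time and memory-transfer time in seconds."""
    flops = 2 * d_model * d_model                       # one multiply + one add per weight
    bytes_moved = d_model * d_model * bytes_per_weight  # every weight streamed from DRAM once
    return {
        "compute_s": flops / flops_per_s,
        "memory_s": bytes_moved / bytes_per_s,
    }

# Illustrative accelerator: 1e15 FLOP/s of fp16 compute, 3e12 B/s of DRAM bandwidth.
t = matvec_time_s(d_model=8192, flops_per_s=1e15, bytes_per_s=3e12)
print(t["memory_s"] / t["compute_s"])  # ratio ~333: moving data, not arithmetic, dominates
```

Under these assumed numbers the processor would spend the vast majority of each step waiting on memory traffic – which is exactly the brake on performance described above.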


Fractile's approach to chip-making aims to eliminate that hurdle with chips that fuse memory and compute together, building on high-speed DRAM so that data can be processed where it sits. In theory, this design removes most of the data transfer that is the single biggest constraint in current AI systems. The company has claimed that, based on simulations, this could allow models to run significantly faster and at a fraction of the cost of today's GPU-based systems.
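The logic behind that claim can be sketched in a few lines. If inference is memory-bound, tokens per second scale roughly linearly with the bandwidth at which weights can be read – so processing data "where it sits" translates directly into throughput. The bandwidth and model-size figures below are assumptions for illustration, not Fractile or Nvidia specifications.

```python
# Memory-bound upper bound on generation speed: each new token requires
# streaming the full set of model weights once. Hypothetical numbers only.

def tokens_per_second(model_bytes: float, effective_bandwidth: float) -> float:
    """Upper-bound tokens/s when weight streaming is the bottleneck."""
    return effective_bandwidth / model_bytes

model_bytes = 70e9 * 2   # an assumed 70B-parameter model at 2 bytes per weight
gpu_dram = 3e12          # assumed off-chip DRAM bandwidth, bytes/s
in_memory = 30e12        # assumed 10x effective bandwidth with compute fused into memory

print(tokens_per_second(model_bytes, gpu_dram))   # ~21 tokens/s
print(tokens_per_second(model_bytes, in_memory))  # ~214 tokens/s
```

Whether real silicon delivers that assumed bandwidth multiple is precisely what separates simulation claims from shipped hardware.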

Nvidia Is Being Challenged, but Not Replaced

Nvidia's software ecosystem and integration across cloud providers create a moat that is extraordinarily difficult to cross. For most companies, Nvidia remains the default, not necessarily because it is cheapest, but because it is the easiest and most reliable. Yet AI developers are increasingly constrained by supply, cost and power availability. Running models at scale is expensive, and margins are under pressure – particularly for companies like Anthropic, which rely heavily on third-party infrastructure.

That squeeze is driving a shift towards diversification – in-house silicon such as Google's TPUs and Amazon's Trainium – and now a new generation of specialist chips focused on inference efficiency. Fractile sits squarely in that latter, emerging category. Startups like it are betting that purpose-built hardware, designed specifically for AI workloads, can unlock a step-change in both speed and cost. If that proves true, it could reshape how AI systems are deployed, and who controls the economics behind them.

A Credible Challenger

Fractile's chips are not yet in commercial use, and the timeline to deployment stretches into the latter part of the decade. The gap between simulation and scaled production has undone many promising chip startups before. And Nvidia is not standing still, as it shifts into more specialised AI hardware, acknowledging the same pressures that startups are targeting. Still, the emergence of credible alternatives, particularly from a small, UK-based company, shows that more players are beginning to run with the big dogs.
