
For the last three years, generative AI has been sold as a step-change in productivity, especially for corporates. But that productivity has often failed to materialise outside the engineering and development environment. LLM-driven platforms have not been accurate enough for enterprise use, and many pilots have, like the occasional university graduate, failed to launch.
That disconnect is where UnlikelyAI, led by Amazon Alexa creator William Tunstall-Pedoe, has positioned itself. And if his diagnosis is right, UnlikelyAI may also be one of the most undervalued AI startups in the world right now.
To back up his argument, he commissioned independent research, based on a survey of 1,000 business decision-makers across sectors including finance, healthcare, energy and the public sector, which landed on an uncomfortable conclusion: AI isn’t accelerating productivity at scale, it’s actually creating drag, to the tune of some £29 billion per year in lost productivity in the UK alone.
What the research revealed was that employees and their managers may well be trying to use AI, but they are checking it. Constantly.
Across large organisations, the research found that workers spend an average of 2 hours and 41 minutes per week using AI tools, but 2 hours and 30 minutes verifying, correcting or redoing what those tools produce. Almost every respondent (99% in fact) said they spend at least some time reviewing AI outputs each week. Just 57% report seeing any ROI from AI, while 13% say they have not seen a positive return and do not expect to.
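Taken at face value, those survey numbers mean verification almost cancels out usage. A quick sanity check of the ratio, using only the figures quoted above:

```python
# Survey figures: minutes per week spent using AI tools vs. verifying,
# correcting or redoing their output (2h41m and 2h30m respectively).
using = 2 * 60 + 41      # 161 minutes using AI tools
verifying = 2 * 60 + 30  # 150 minutes checking the results

overhead = verifying / using
print(f"Verification overhead: {overhead:.0%} of time spent using AI")
print(f"Net unchecked time: {using - verifying} minutes per week")
```

In other words, for every hour spent with an AI tool, roughly 56 minutes go to checking its work, which is the "drag" the research describes.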
This would appear to be the dirty secret of AI deployment: not automation, but duplication.
Tunstall-Pedoe’s argument is that this is not a temporary problem, and not one that pure LLMs are likely to solve on their own. “There’s a ceiling… a trust ceiling,” he told the Pathfounders Podcast. “AI solutions aren’t trustworthy enough to be fully adopted.”
That goes to the core of the market’s blind spot. Investors have spent the last three years rewarding “neuro” startups, those built on LLMs and the neural-network ideas underpinning them, at extraordinary levels. Mira Murati’s Thinking Machines Lab raised $2 billion at a $12 billion valuation. Yann LeCun’s new startup, AMI Labs, raised $1.03 billion at a $3.5 billion pre-money valuation. UnlikelyAI, by contrast, has raised only around $20–21.4 million to date (Reuters). All of these companies, and their investors, are betting the house on a neuro approach to AI.
But if Tunstall-Pedoe is right, this could turn out to be the biggest category error of the last 30 years of technology.
His case is that the industry is still overvaluing raw large language model capability while undervaluing reliability. “The majority of the senior leaders don’t fully trust AI,” he says. “And that trust gap is what’s preventing the full opportunity.”
That matters because the current generation of AI systems is probabilistic by design. And, as Tunstall-Pedoe points out, “they can be astonishing. They can also be wrong in ways enterprises cannot tolerate.”
That is exactly what the research appears to show. The AI economy is not being held back by a lack of experimentation. It is being held back by the cost of verification.
Tunstall-Pedoe is blunt about why: “In the statistical deep learning world, everything is less than perfect,” he says. “It always flattens off before 100%, so it’s always wrong some of the time.” He gives the example of benchmark scores. “If it’s only scoring 87.2% on the benchmark, it’s wrong 12.8% of the time… and for many, many applications of AI, that’s unacceptable.”
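The arithmetic gets worse when AI-driven steps chain together. If each step of an automated workflow is right 87.2% of the time, the odds of an error-free run collapse as steps accumulate. The 87.2% figure is his example; the multi-step framing below is an illustration, assuming independent errors at each step:

```python
# Probability that a multi-step workflow completes without error,
# assuming each step is independently correct with probability p.
# The 87.2% score is Tunstall-Pedoe's benchmark example; the chaining
# assumption is illustrative, not from the article.

def error_free_run(p: float, steps: int) -> float:
    """Chance that every one of `steps` independent steps succeeds."""
    return p ** steps

p = 0.872
for n in (1, 3, 5, 10):
    print(f"{n:2d} steps: {error_free_run(p, n):.1%} error-free")
```

A single step fails roughly one time in eight; by ten chained steps, under this simple model, most runs contain at least one error, which is why "unacceptable for many applications" is not hyperbole.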
That is the heart of the argument against the current “Neuro” orthodoxy. Not that these models are unimpressive. Quite the opposite. “It is magical,” he says. “It does things that were previously completely impossible for computers to do.” But in enterprise settings, magic is not enough. The output has to be explainable, auditable and dependable.
And that is where he says UnlikelyAI diverges from the rest of the field.
Asked directly why his “neurosymbolic” approach might matter more than another attempt to push forward a purely neural architecture, he makes the following distinction: “Neuro means the statistical machine learning software… able to do new things, but intrinsically unreliable and unexplainable. And symbolic is shorthand for the other types of software that we use, just like the spreadsheet. You trust your spreadsheet to add up cells correctly. It doesn't work 87.2% of the time, it works 100.00% of the time. But with machine learning, with generative AI, we accept or we're exposed to a level of error.”
His “Neuro-symbolic” approach aims to "blend those two types of software together, and the aim is to get the capabilities from generative AI, the ability to understand natural language, to process complex data, but combine it with the explainability and the extremely high accuracy you get with other types of software."
That's an extremely hard thing to do, but it's something that UnlikelyAI claims to have cracked: "We've succeeded [in] doing [it and it's] something that we're now commercializing," he says.
He does not pretend this is easy. “It’s very difficult,” he says. “I am making it sound simple… It’s super hard.” Nor does he suggest the big labs are about to pivot overnight. “The big AI labs… their entire world is within the machine learning world,” he says. “This is an intrinsic property of the technology… it requires a different approach, a different way of thinking.”
That is what makes the contrast with AMI and other heavily funded “neuro” companies so sharp. When asked specifically about LeCun’s world-model work, he is unequivocal: “That’s in the neuro world… it’s got a different architecture for the neuro, but it’s still completely neuro.”
If he is right, then the current valuation map of AI may be badly skewed. Capital has flooded into model-building companies on the assumption that more compute, better architectures, and faster scaling will unlock the next phase.
But if a company that has raised only around $20 million is sitting on the solution to that problem, then UnlikelyAI could be one of the biggest AI startups on the planet right now.
As Tunstall-Pedoe puts it, the opportunity is not marginal: “What we’re unlocking… is much more adoption of AI, because it’s much more useful when it’s trusted, when it’s accurate.”

