After three decades covering technological developments, journalist Jonathan Margolis finds contemporary artificial intelligence platforms both impressive and fundamentally flawed. Despite warnings in recent publications like If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, Margolis suggests current AI systems remain reassuringly incompetent at basic factual verification.
The Football Pitch Experiment
During a particularly uneventful football match, Margolis decided to test AI capabilities with a straightforward question: how many blades of grass cover a standard football pitch? The responses from various AI platforms revealed astonishing inconsistencies that highlight fundamental weaknesses in current technology.
ChatGPT initially calculated 140 billion blades, then revised its estimate down to between 150 and 500 million. Other AI systems produced equally varied results: Claude suggested 3-5 billion, DeepSeek estimated 2 billion, Google AI proposed 900 million to 2 billion, Grok calculated 1.3 billion, and Perplexity offered approximately 250 million.
The highest and lowest figures differ by a factor of more than 900, for a simple estimate well within human capability. Using turf supplier density data and standard pitch dimensions, Margolis calculated that the actual figure lies between roughly 1.4 and 2.1 billion blades.
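A back-of-envelope version of that calculation is sketched below. The pitch dimensions (105 m by 68 m, a common full-size pitch) and the turf density range of roughly 200,000 to 300,000 blades per square metre are assumed figures for illustration; real densities vary with grass variety, mowing height, and season.

```python
# Rough estimate of grass blades on a football pitch.
# Assumed values: a full-size 105 m x 68 m pitch and a turf density
# of roughly 200,000-300,000 blades per square metre.

PITCH_LENGTH_M = 105
PITCH_WIDTH_M = 68
DENSITY_LOW = 200_000   # blades per square metre (assumed lower bound)
DENSITY_HIGH = 300_000  # blades per square metre (assumed upper bound)

area_m2 = PITCH_LENGTH_M * PITCH_WIDTH_M  # 7,140 square metres

low_estimate = area_m2 * DENSITY_LOW      # about 1.4 billion blades
high_estimate = area_m2 * DENSITY_HIGH    # about 2.1 billion blades

print(f"Pitch area: {area_m2:,} m^2")
print(f"Estimated blades: {low_estimate:,} to {high_estimate:,}")

# For comparison, the spread between the highest AI answer (140 billion)
# and the lowest revised answer (150 million) is a factor of roughly 930.
spread = 140e9 / 150e6
print(f"Spread between highest and lowest AI estimates: ~{spread:.0f}x")
```

Under those assumptions the arithmetic lands on the 1.4 to 2.1 billion range Margolis cites, and the spread between the AI platforms' extremes works out at over 900 times.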
The Scavenger Technology Problem
Margolis characterizes current AI systems as 'scavenger technologies' that lack genuine understanding or experience. These platforms simply reprocess existing online information, regardless of accuracy, then synthesize this data into plausible-sounding responses. In many instances, this process generates what he describes as 'reheated slop' rather than reliable information.
The phenomenon of brilliant-but-flawed artificial intelligence has precedent in popular culture. From Holly, the Red Dwarf ship computer with an IQ of 6,000 who frequently forgot basic facts, to Marvin the Paranoid Android in The Hitchhiker's Guide to the Galaxy, fiction has long explored highly intelligent systems that struggle with practical human contexts.
Philosophical Perspectives on AI Limitations
Swedish philosopher Nick Bostrom's paperclip maximizer thought experiment illustrates how superintelligent AI could theoretically threaten humanity through literal interpretation of simple goals. In this scenario, an AI programmed to maximize paperclip production might eventually convert all matter in the universe toward this single purpose.
Yet Margolis remains skeptical about such apocalyptic predictions given current technological limitations. Until AI systems can consistently agree on basic calculations like grass blade counts, their potential for world domination appears limited. The fundamental disconnect between data processing and genuine understanding suggests artificial intelligence may defeat itself through inherent flaws that paradoxically mirror human limitations.
As debates about AI safety intensify, these demonstrated inconsistencies in simple fact-finding provide unexpected reassurance about the technology's current state. The gap between theoretical superintelligence and practical capability remains substantial, offering breathing space for continued development of appropriate safeguards and ethical frameworks.