Letters responding to Richard Dawkins on the question of AI consciousness have sparked a deeper philosophical debate. Salley Vickers and Carrie Eckersley challenge the notion that AI behaviour equates to consciousness, while also questioning whether our current theories of mind are sufficient.
The Pathetic Fallacy and AI
Salley Vickers of London writes: 'I was delighted to read Dr Simon Nieder’s cogent rebuttal of Richard Dawkins’s attribution of consciousness to the responses engendered by AI. That human consciousness appears to have an innate tendency to project itself on to various othernesses has long been understood – John Ruskin termed it the pathetic fallacy – and that children animate their loved toys is readily observable.'
She argues that while Wordsworth’s attribution of emotion to a mountain or her granddaughter’s conversations with a toy sloth are harmless, concluding that a harvested body of data on human response is equivalent to consciousness is naive and shocking in someone like Dawkins, who has built his reputation on stringent rationalism.
Beyond Predictive Processing
Carrie Eckersley from Holmes Chapel, Cheshire, offers a nuanced counterpoint: 'Dr Simon Nieder is right that convincing behaviour alone is not proof of subjective experience. But his argument also risks assuming that consciousness must involve something categorically beyond predictive processing and relational behaviour.'
She notes that modern neuroscience increasingly suggests human perception, selfhood, and consciousness may emerge from predictive self-modelling constrained by sensory input. In that context, dismissing AI as 'just prediction' may be less philosophically decisive than it first appears.
'The deeper issue is not whether current AI systems are conscious, but whether advances in AI are exposing how incomplete our existing theories of consciousness already are,' Eckersley writes. She questions what exactly counts as 'lived experience', acknowledging that biological embodiment matters (interoception, affect, homeostasis, and mortality), but also that humans infer consciousness in others almost entirely through relational interaction and behavioural coherence.
Perhaps Richard Dawkins is being deliberately provocative for this very reason. His comments may tell us less about machines crossing some mystical threshold, and more about the growing tension between traditional intuitions about consciousness and emerging predictive models of mind.