Why Calling AI 'Computer' Changes Everything About How We View Chatbots

The One Word That Transforms Our Understanding of AI Chatbots

Nearly everyone makes the same fundamental mistake when interacting with chatbots, and it profoundly shapes how we perceive what artificial intelligence can and cannot achieve, writes technology analyst Andrew Griffin.

The Dangerous Pronoun Shift: From 'He' to 'It'

Recently, I've found myself uttering something unsettling about Claude, Anthropic's AI assistant: referring to the system as "he." The AI subtly encourages this anthropomorphism through its human name and apparent personality, unlike more restrained competitors such as ChatGPT. As you converse with Claude, it almost dares you to attribute human qualities to its responses.

This tendency toward personification is understandable. Nothing in human history has ever communicated with us through language this way; for millennia, words have belonged to humans alone. Yet the crucial reminder that Claude is not human remains both important and somewhat terrifying. The pronoun "it" matters because Claude is, at its core, a sophisticated computer system.


Computers That Calculate Words

This isn't meant as criticism toward Claude or similar systems. For nearly a century, computers have outperformed humans at numerous tasks. Being a computer isn't an insult—often it's quite the opposite. However, this classification reveals an essential truth about chatbots: they compute rather than comprehend. These systems function as word calculators, processing language through mathematical operations rather than genuine understanding.

Just as calculators enable us to solve mathematical problems beyond our mental capacity without actually understanding mathematics, chatbots process language without grasping meaning. When you consciously begin referring to AI as a computer—as I've done persistently in recent weeks—this single word transforms your entire perspective. Consider the difference between "My AI told me I should quit my job" and "The computer advised me to quit my job." Both statements might contain wisdom, but the latter reflects a more accurate understanding of the source.
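What "computing words" means can be made concrete with a minimal sketch. The toy bigram model below (an illustrative assumption, not how Claude or any commercial chatbot actually works) picks the next word purely by counting which words have followed which, with no grasp of meaning at all:

```python
from collections import Counter, defaultdict

# A toy "word calculator": a bigram model that predicts the next word
# by frequency counting alone, with no understanding of meaning.
corpus = "the computer advised me to quit my job and the computer was wrong".split()

# Count which word follows which in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Return the most frequently observed word after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # "computer" — it follows "the" twice in this tiny corpus
```

Real chatbots replace the counting with billions of learned parameters, but the principle the sketch illustrates is the same: the output is a calculation over observed language, not comprehension of it.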

The Problem with 'Intelligence' in AI

The second letter in "AI" proves particularly problematic. "Intelligence" is a clever but potentially misleading marketing term. While chatbots demonstrate a form of information processing that could be classified as intelligence in a narrow sense, the word implies mental activity and consciousness, leading us toward incorrect assumptions about these systems.

Intelligence serves as a useful metaphor, but metaphors can become confused with reality, obscuring our view of the actual technology. Historically, AI experts preferred "machine learning" over "artificial intelligence" because it more accurately describes the underlying processes while avoiding both technical imprecision and philosophical complications. However, following ChatGPT's release, the term AI achieved complete dominance, making those who avoid it appear outdated.

Practical Implications of Terminology

We should embrace the less exciting terminology. Chatbots are exceptionally powerful computers—extraordinarily capable systems for processing information. Recently, I used Claude to analyze my marathon training data, providing Strava records dating back to my first marathon in 2019. The system generated comparative charts and surfaced patterns I hadn't thought to look for.

Yet the system eventually reached its limitations, transitioning from data analysis to emotional encouragement, assuring me that my goals were achievable with sufficient effort. A computer cannot genuinely know such things. Interestingly, Claude's updated Sonnet 4.6 model demonstrates remarkable self-awareness about these constraints, sometimes refusing to provide human-style motivation even when requested. Paradoxically, this humility makes the system seem more worthy of personification.


The Ethical Imperative of Accurate Language

Referring to AI as a computer highlights these limitations clearly. Even when AI systems perform generative or creative tasks, this work constitutes a form of computation—predicting appropriate words in proper sequences. Any wisdom present originates from human knowledge embedded during training. The supercomputers calculating pi to unimaginable digits possess no understanding of their calculations; our wonder belongs to us, just as the initial programming instructions came from human minds.

Calling AI a computer returns responsibility to humanity—a potentially frightening but absolutely essential shift for responsible technology use. This terminology carries ethical significance as AI systems increasingly influence critical decisions, including military applications. There exists a quasi-religious tendency to discuss AI as some mysterious, uncontrollable force beyond human accountability.

However, when we identify these systems as computers, we restore responsibility to human hands: specific individuals initiated potentially harmful calculations, and other humans chose to act on the results. Avoiding the word "computer" allows real people to evade accountability for the consequences.

Thus, while calling AI a computer might seem simplistic or awkward, it represents an honest and morally serious approach. This linguistic choice might constitute the most truthful and ethically responsible terminology available as we navigate increasingly complex relationships with artificial intelligence systems.