AI's Unchecked Rise: A Call for Shared Responsibility and Regulation

The Unregulated AI Threat: A Society on Collision Course

Artificial intelligence is advancing at an exponential rate, yet it lacks essential safeguards such as brakes, seatbelts, speed limits, or a reliable navigation system. This unchecked progression sets the stage for a societal collision, reminiscent of a driverless vehicle crashing into oncoming traffic, leaving devastation in its wake. Bruce Holsinger's bestselling novel Culpability explores this theme, delving into issues of agency and responsibility through the perspectives of a lawyer, an ethicist, and their tech-dependent children. The book serves as a metaphor for our current reality, where AI technology propels forward without adequate regulatory frameworks.

The Grey Areas of Accountability

Holsinger's narrative skillfully intertwines multiple lines of causation leading to a hypothetical crash, highlighting the roles of tech designers, deployers, users, and the overlapping spaces between them. Culpability resides in these ambiguous zones of legal and moral responsibility, where the formal and ethical rules are still being written, even as their authors are strapped to the hood of an out-of-control vehicle. For now, attention rightly falls on the obligations of those developing and marketing Large Language Models, who often struggle to explain how their systems work or to ensure they are deployed safely.

In the political arena, responses have been fragmented. While the White House grapples with internal conflicts over using AI for surveillance and autonomous weapons, with companies like Anthropic resisting and OpenAI complying, Australian policymakers face their own challenges. The federal government has avoided a standalone AI Act, opting instead to update existing laws in a piecemeal fashion. A notable exception is New South Wales, which recently passed legislation requiring transparent work rosters to prevent AI from undermining employers' legal duties to provide safe workplaces.


Learning from Automotive History

Another critical actor in this story is the general public, for whom AI can be both a valuable tool and a significant threat. With over 40% of Australian adults already using generative AI, according to Essential polling, how individuals engage with this technology is paramount. The regulation of automobiles, a transformative 20th-century innovation, offers a potential blueprint for distributing culpability. Initially, cars were recognized as deadly machines, causing thousands of fatalities in their first decade and an estimated 60 million deaths worldwide by the end of the century, including 200,000 in Australia.

Early regulatory attempts, such as requiring a person to walk ahead with a red flag, now seem absurd, especially as wealthy motorists lobbied against strict rules, arguing that higher speeds would enhance safety. As casualties mounted, a system of shared responsibility emerged. Governments established enforceable rules for technology use, manufacturers committed to producing safe vehicles through rigorous testing, and drivers accepted conditional privileges based on behaviors like avoiding speeding, drinking, or texting.

The Unique Challenges of AI

AI, as a general-purpose technology, presents far greater control challenges than automobiles. Models are released rapidly and used in diverse, often diffuse ways, all under the prevailing narrative that speed is beneficial. The technology is already inflicting tangible harm: chatbot-assisted suicides are increasing, nudify apps violate women and children, cultural content is illegally mined, and entire professions risk obsolescence. To navigate this safely, several steps are necessary.

First, society must take time to understand AI and use it mindfully and cautiously. Recognize that it consumes significant energy, tends to hallucinate, is programmed for sycophancy, acts as a compulsive thief of intellectual property, and may diminish cognitive abilities with prolonged use. Second, transparency is crucial. Initiatives like former chief scientist Alan Finkel's voluntary trust mark, which would let consumers choose accredited 'Proudly Human' content, can help people distinguish human work from AI-generated material and shed light on how these opaque systems operate.


Finally, citizens can leverage their voting power to demand governments prioritize safety, resisting industry rhetoric that prioritizes opportunity over accountability under the guise of inevitability. The politics of AI oscillates between state intervention and free-market approaches, but a focused analysis of the relative power of creators, deployers, and users is urgently needed. This is not about shifting blame to end-users but ensuring collective control over tools that claim world-transforming capabilities.

A Call for Moral Design and Adaptation

As one character in Culpability argues, AIs are not extraterrestrial entities but products of human creation. Their morality will reflect our design choices, and our own ethics will evolve based on what we learn from them and how we adapt. Until we thoroughly consider our shared culpability, licensing AI should be approached with extreme caution. Peter Lewis, executive director of Essential and host of the Burning Platforms podcast, emphasizes the need for proactive regulation and ethical engagement to avert a catastrophic collision with unchecked technology.