
The use of artificial intelligence to recreate an interview with a victim of the 2018 Parkland school shooting has sparked heated debate over the ethics of AI in journalism. The AI-generated segment, broadcast by CNN's Jim Acosta, featured a simulated conversation with Joaquin Oliver, a student killed in the tragedy, and raised questions about consent, sensitivity, and the role of technology in the media.
Controversial Use of AI in Journalism
The interview, produced using AI voice-cloning and deepfake technology, aimed to draw attention to the ongoing problem of gun violence in the US. Critics, however, argue that such practices cross an ethical line: the dead cannot consent to being digitally recreated, and putting words in a victim's mouth risks misrepresenting them, whatever the intent.
Public and Professional Reactions
Media ethicists and advocacy groups have condemned the segment, calling it a dangerous precedent that could exploit victims' memories for sensationalism. Meanwhile, some tech advocates defend the approach as a powerful tool for storytelling and activism.
CNN's Response
CNN has defended the segment, stating that it was created with careful consideration and intended to honour the victims while advocating for gun reform. The network emphasised that the Oliver family was consulted and supported the project, though questions remain about the broader implications of normalising such technology.
The Future of AI in Media
This incident underscores the growing tension between technological innovation and ethical journalism. As AI tools become more sophisticated and accessible, the media industry faces critical decisions about how to balance compelling storytelling with respect for the dignity and legacies of the people it portrays.