Should uncannily faked video content overlap with a new pandemic or increased cyber warfare, we’ll be in line for a more monstrous kind of misinformation. The EU has already proposed regulation of artificial intelligence (AI) that would require organisations to disclose the creation and use of deepfakes.

This has no doubt been spurred on by recent events. As GlobalData’s report on misinformation states, the COVID-19 pandemic has been “fertile ground for fake news and the exploitation of public fear”. Social media platforms have struggled to contain the spread of hoaxes, conspiracy theories and quack cures, with some revising their ad policies to forbid the promotion of medical misinformation.

Despite these efforts, they have found themselves grilled by politicians over their failure to prevent the spread of misinformation on their platforms. The debate grew particularly contentious following the storming of the US Capitol on 6 January by a mob of Donald Trump supporters, QAnon conspiracists and anti-maskers egged on by fake news that the US presidential election had been stolen.

However, most problems aren’t caused by malicious ‘maskholes’, but by people impulsively sharing information without thinking critically about it.

This poses some questions: Can audiences be educated to think more before posting? How can fake news be properly regulated? Verdict finds out with Henry Brown, data & analytics director at Ciklum, Jared Ficklin, chief creative technologist at Argodesign, Andy Parsons, director of the Content Authenticity Initiative at Adobe, Andy Patel, researcher at F-Secure’s AI Center of Excellence, and Rachel Roumeliotis, VP of data & AI at O’Reilly.

Through separate discussions, we investigate whether AI has any part to play in proceedings, and to what extent social media platforms can realistically solve the issue.