The global pandemic is likely to be a point of no return for big social brands: after the crisis, they will be expected to keep policing harmful content. Regulating fake news could affect advertisers, their clients and any platform that carries content and advertising. It would also benefit businesses, which may find themselves the target of misinformation because of their size or their association with controversial public figures and practices.
Verdict: Is fake news always clad in the same political colours?
Andy Patel, F-Secure: “Misinformation and cult-like behaviour exist on all sides of the political equation. Examples from the left include the divides seen between [Black Lives Matter], Bernie Sanders and Hillary Clinton supporters during the 2016 US elections and, more recently, the infighting between supporters of left-leaning political parties and figures in the UK.
“The misinformation shared by these groups is used to further infighting and has been quite specific to each group’s political agenda. As such, this misinformation hasn’t had a notable impact on the general public. QAnon, anti-vax, anti-mask, ‘Covid hoax’ and ‘Stop the Steal’ narratives have all impacted the general public, and that’s why there’s been focus on misinformation coming from the right. Note, however, that any misinformation shared by any of these groups – whether they’re on the left or right – can be leveraged by adversaries – both domestic and foreign – to cause further harm to society.”
Verdict: Is fake news being encouraged by political actors?
Jared Ficklin, Argodesign: “We most often hold up manipulating ‘facts’ to gain political power as the key motivator of fake news. But there are plenty of other motivators for writing fake news, like selling ads through targeted affiliate marketing. In reality, ad revenue probably motivates fake news to a much broader extent.
“Agreement bias is a powerful piece of psychology being exploited here. If someone believes the Earth is flat, they’re more likely to read stories about it. They’re also more likely to trust the advertisers in that story if it agrees with their point of view. So, if you want to sell a Flat Earth T-shirt, you should write a story about a Flat Earth society and then use targeted social media to put it in front of Flat Earth believers.
“This is journalism only by the slightest technicality and, before social media, it would have been the domain of a zine or brochure. It would never run in a paper; the audience is too narrow.”
Jared Ficklin, chief creative technologist at Argodesign
Verdict: Can AI really help fight fake news?
Henry Brown, Ciklum: “Natural Language Processing (NLP) can be used to detect nuances in grammar, spelling and sentence structure, which in turn may reveal an issue with the original author of an article or piece of content.
“Network techniques may also help to detect particular users on social media platforms who are more prone to sharing fake news, and thus enable better enforcement around the content they share. A fake news warning could then be shared with other users on the platform.”
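As a rough illustration of the stylometric approach Brown describes, the Python sketch below trains a classifier on character n-gram features, which tend to capture the quirks of grammar and spelling he mentions. The tiny inline dataset and the model choice are illustrative assumptions, not any production system.

```python
# A minimal stylometric classifier: character n-grams often pick up the
# spelling and grammar tics of low-quality content. The four example
# texts and their labels are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = [
    "Scientists publish peer-reviewed study on vaccine efficacy.",
    "SHOCKING!!! doctors HATE this one weird trick, goverment hiding it",
    "Central bank raises interest rates by 25 basis points.",
    "u wont BELIEVE what they found!!! share before its deleted!!",
]
labels = [0, 1, 0, 1]  # 0 = credible style, 1 = suspect style

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(articles, labels)

# Probability that an unseen text matches the "suspect" style.
print(model.predict_proba(["BREAKING!!! they dont want u to know"])[0][1])
```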
Ficklin: “The ‘Real News Layer’ is designed to amplify our critical thinking skills. It’s a combination of a well-designed user interface, traditional pattern-matching algorithms and web-spidering technologies that create a data pool.
“AI can enhance these with much better matching in order to better collate and correlate stories for us. It isn’t a fact checker; it’s a researcher. Where traditional algorithms would fail to identify that two stories are on the same subject, AI can be deployed to determine nuance.”
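A minimal sketch of the collation step Ficklin describes might group articles by textual similarity. The snippet below uses TF-IDF as the “traditional” matcher; the headlines and the 0.3 threshold are invented for illustration, and a sentence-embedding model could be swapped in for the nuance he attributes to AI.

```python
# Group articles that appear to cover the same event so they can later be
# compared. TF-IDF cosine similarity is the "traditional" matcher here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

headlines = [
    "Supreme Court issues ruling in voting rights case",
    "Supreme Court ruling in voting rights case splits justices",
    "Local bakery wins pie contest for third year running",
]
vectors = TfidfVectorizer().fit_transform(headlines)
sim = cosine_similarity(vectors)

for i in range(len(headlines)):
    for j in range(i + 1, len(headlines)):
        if sim[i, j] > 0.3:  # treat sufficiently similar pairs as one story
            print(f"Same story? {headlines[i]!r} <-> {headlines[j]!r}")
```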
Verdict: What kind of dataset would be used to train this AI?
Ficklin: “The system could be primed with the very few things that can be pure fact, such as the force of gravity on Earth. But, in many ways, it shouldn’t be told what’s truth. It should instead be a comparator of what’s written and by whom and how close they are to the source or actual event.
“Let’s say there’s a Supreme Court ruling you’re reading about. The Real News Layer could tell you that it’s found 100 variations of the article you’re reading about this specific judgement on this specific date. Across those variations, a total of 50 facts are asserted, and your article contains only four of those assertions, which is way off the average. It could reveal that your story also includes unique assertions.
“It could also reveal that your article is way down the attribution tree. When you take a quick look at this, you realise your article only reveals the opinions of the two dissenting justices of a certain political party and is actually written based on another article, which was based on another article. Meanwhile, you can see much of the rest of the world is reading an article with more information in it, written by a reporter who was actually at the hearing.”
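Fact extraction itself is the hard part and is elided here, but once each variation of a story is reduced to a set of asserted facts, the comparison Ficklin sketches becomes simple set arithmetic. The fact IDs, article names and flagging threshold below are all hypothetical.

```python
# Toy version of the Real News Layer comparison: flag articles whose fact
# coverage is far below the average, and surface assertions unique to them.
from statistics import mean

variations = {
    "wire_report": set(range(1, 21)),      # 20 asserted facts
    "national_daily": set(range(1, 15)),   # 14 facts, all also in the wire report
    "your_article": {2, 7, 48, 49},        # 4 facts, two asserted nowhere else
}

avg = mean(len(facts) for facts in variations.values())
for name, facts in variations.items():
    others = set().union(*(f for n, f in variations.items() if n != name))
    unique = sorted(facts - others)
    flag = "  << far below average coverage" if len(facts) < 0.6 * avg else ""
    print(f"{name}: {len(facts)} facts, unique assertions: {unique}{flag}")
```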
Verdict: Can AI classify the authenticity of writers as well as outlets?
Ficklin: “It’s at that point you could look at the reporter for your article. The AI has ranked this reporter for placement and attribution. They seem to attribute a lot of others but don’t ever get attributed. Further, their stories seem to only appear in one publication that’s read by only a specific cohort of your friends.”
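The reporter ranking Ficklin imagines could be approximated with an attribution graph: each article points at its source, and a PageRank-style score surfaces writers who are widely attributed versus those who only attribute others. The sketch below assumes the networkx package, and every name and edge in it is invented for illustration.

```python
# Attribution graph: an edge A -> B means "A attributes (cites) B".
# PageRank then rewards being cited; a writer with only outgoing edges
# (like "your_article") scores poorly.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("your_article", "blog_post"),
    ("blog_post", "regional_paper"),
    ("regional_paper", "wire_reporter"),  # wire reporter was at the hearing
    ("national_daily", "wire_reporter"),
])

for name, score in sorted(nx.pagerank(g).items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```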
Verdict: Is it fair to say that visual creators get forgotten in the discussion on misinformation?
Andy Parsons, Adobe: “I do think that’s sometimes the case, but given Adobe’s relationship with the creative community, we’ve created the Content Authenticity Initiative (CAI) coalition to address their needs and acknowledge the crucial role they play.
“With the rise of completely synthetic visual objects – including human depictions – deceptive content can be created without editing prior content. This is why we must re-establish a common understanding of objective facts, before our understanding of what’s ‘real’ online is eroded beyond recovery.”
Andy Parsons, director of Content Authenticity Initiative at Adobe
Verdict: How important is it to fight faked video and imagery, even now, before the age of deepfake has truly landed?
Parsons: “The core ideas behind the CAI work apply equally well to any type of media, including text, audio, images and video. It’s essential to develop techniques and standards to combat inauthentic content now for two reasons. First, we’ve seen extremely dangerous examples of disinformation that deliberately mislead with simple, unsophisticated attacks. Bad actors won’t necessarily reach for deepfakes if their goals can be achieved with simpler means.
“Second, detection of deepfakes is an arms race that’s already underway. Creators of malicious content utilise ever-more sophisticated tools and the detection algorithms have to keep pace.
“Given that the tools for making Hollywood-quality synthetics are not only more approachable now, but inexpensive or free, it’s more critical than ever to have measures in place for good actors to use these tools responsibly and for consumers to have access to the information on how their content came to be.”
Verdict: How far are we from original photos and videos being as easily traced as the authentic article?
Parsons: “We expect CAI provenance data to be an important training signal for AI detection models, and for provenance to be used in concert with detection results. For instance, a detector algorithm might use AI to score an image as 80% likely to be authentic, then consult the image’s provenance before delivering an assessment.
“The inverse is also important: Our provenance can include the results of AI detection as part of its sealed, verifiable data. For example, if a detection algorithm were used on a video uploaded to social media, the results of the analysis could be attached to the video in the same way that verifiable assertions like copyright, camera data and edit history are secured.
“With this exposed transparently, downstream platforms and consumers can then decide whether the video can be trusted.”
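One plausible, simplified way to combine the two signals Parsons describes – a detector’s authenticity score and a verifiable provenance record – is sketched below. The Provenance fields, thresholds and verdict labels are assumptions made for illustration, not the CAI’s actual data model.

```python
# Combine an AI detector's authenticity score with a provenance check.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Provenance:
    signature_valid: bool  # the sealed manifest verified correctly
    edit_history: bool     # capture/edit history is present

def assess(detector_score: float, prov: Optional[Provenance]) -> str:
    """detector_score: model's estimated probability the media is authentic."""
    if prov and prov.signature_valid:
        # Verified provenance outweighs a middling detector score.
        return "trusted" if detector_score > 0.5 else "needs review"
    if detector_score > 0.9:
        return "probably authentic, but no provenance"
    return "unverified"

print(assess(0.8, Provenance(signature_valid=True, edit_history=True)))  # trusted
print(assess(0.8, None))                                                 # unverified
```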
Rachel Roumeliotis, O’Reilly: “With the advent of organisations like the CAI and non-fungible tokens (NFTs) taking centre stage, creators digitally stamping their work will gain momentum over the next few years.
“What might be interesting to think about is: what if AI creates something? Who does that track back to: the algorithm’s writer, or a corporation? As technologies like NLP are already doing this, it’s something we’ll need to address in the near future.”
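The “digital stamp” Roumeliotis mentions can be as simple as a creator signing a hash of their work with a private key, so that anyone holding the matching public key can later verify it is unaltered. The sketch below uses an Ed25519 signature from the widely used cryptography package; it is a minimal illustration of the idea, not the CAI or NFT mechanism itself.

```python
# Creator-side signing and later verification of a piece of work.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

creator_key = ed25519.Ed25519PrivateKey.generate()
work = b"...raw bytes of an image, video or text..."  # placeholder content
digest = hashlib.sha256(work).digest()
signature = creator_key.sign(digest)

# Verification raises InvalidSignature if the work was altered.
creator_key.public_key().verify(signature, digest)
print("Signature verified: work is unaltered and traceable to this key.")
```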
Brown: “Deepfakes can be created by AI, but AI can also be used to detect them. In fact, the ability of AI – specifically, Generative Adversarial Networks (GANs) – to create fake content is directly related to its ability to detect it. AI will learn to create better examples of deepfakes if it can differentiate between real and fictitious content.
“One machine learning solution generates content, and another tries to detect whether the content is fake or not. Of course, a group that wants to create and release fake news, generated by GANs, probably wouldn’t release their [detector] algorithms but, in theory, someone else could build their own machine learning solution to tackle it.
“AI can also use ‘fingerprinting’ techniques to determine whether an image predates the claimed event or moment.”
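Brown’s generator-versus-detector tug-of-war can be shown in miniature. The PyTorch sketch below trains a GAN on toy one-dimensional data rather than images: the generator learns to mimic “authentic” samples while the detector learns to tell real from fake. The architecture, data distribution and hyperparameters are all illustrative assumptions.

```python
# Minimal GAN: the generator mimics samples from N(4, 1.25); the detector
# (discriminator) learns to score real samples 1 and fakes 0.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
loss = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0  # "authentic" samples
    fake = G(torch.randn(64, 8))            # "deepfake" samples

    # Detector update: push real towards 1, fake towards 0.
    d_loss = loss(D(real), torch.ones(64, 1)) + loss(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: fool the detector into scoring fakes as real.
    g_loss = loss(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(f"generated mean: {G(torch.randn(1000, 8)).mean():.2f} (target 4.0)")
```

As the detector improves, the generator is forced to produce more convincing fakes, which is exactly the arms-race dynamic Brown describes.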
Rachel Roumeliotis, VP of data & AI at O’Reilly
Verdict: Are social networks doing enough to fight fake news?
Patel: “Researchers and journalists warned us about the dangers of QAnon and ‘Stop the Steal’ a long time before social network sites took action. Many of the same people predicted something would happen on 6 January. In order to stop online cults escalating out of control, social networks need to listen a lot more closely to what these researchers and journalists are saying and take timely and appropriate actions.
“Social networks have implemented automation that effectively prevents ISIS-related content and accounts from persisting on their websites. They also often publish reports and datasets related to a wide range of adversarial campaigns detected on their platforms (in different languages, from different geographical regions).”
Verdict: Are social networks our only hope against fake news?
Roumeliotis: “Companies like Facebook have processes in place and indicators of where something has come from, but people should always try to find news at its source. If a story seems outlandish or strange, cross-reference it with another trusted news source to weigh its validity.”
Brown: “Whilst prevention of misinformation spread is absolutely key, it comes down to the end users, who must learn and choose not to engage with fake content online. This in itself requires teaching, as well as the provision of tools that will help people to identify what is fake and what isn’t.
“Even with these tools in place, if people continue to choose to engage with misinformation, then there really is only so much that AI can do.”
Ficklin: “Someone who believes the Earth is flat is past the point of responding to fact checking. Instead, it’s better to offer them a contextualised view. Let them know, using simple UI, that the story they’re reading is one out of 1bn stories about the shape of the Earth and the other 999,999,999 say the Earth is round.
“Tell them that only people who believe the Earth is flat are reading the story in front of them, that the video in the story is two seconds out of a longer video about the round Earth, that the story is actually ten years old and there are newer versions you can read.”
For details on GlobalData’s ‘Misinformation – Thematic Research’ report, click here.