Factiverse: Combating misinformation with AI-powered fact-checking
Gaute Kokkvoll, the Head of Product at Factiverse, tells us about Factiverse AI Editor, an innovative product designed to help newsrooms distinguish fact from fiction and base their stories on credible sources.
The claim that misinformation spreads at the speed of light is by no means an exaggeration. A study by MIT has shown that tweets containing falsehoods are 70% more likely to be retweeted than truthful tweets, and almost 80% of consumers in the United States reported having seen fake news on the coronavirus outbreak when the pandemic started in 2020.
In this reality, fact-checking remains a time-consuming and arduous task – journalists and content creators often have to sift through massive amounts of text to verify information and manually check each fact to ensure the news is accurate. So far, most efforts to make AI a reliable fact-checking tool have been more or less futile. AI models lacked contemporary knowledge, gave no perspective on complex stories, and were unaware of significant global events.
The recent rise of language models has exacerbated the challenge of accessing credible information. “The output of ChatGPT every 14 days exceeds what has been published since the invention of the printing press”, says Gaute Kokkvoll. Is algorithmic fact-checking a viable option for newsrooms?
Factiverse might have the answer. And it’s a “yes”.
More than just automated fact-checking
Back in 2014, at the University of Stavanger on the west coast of Norway, a group of researchers started investigating the possibilities of automated fact-checking, focusing on various language models. The team embarked on a four-year research project, analysing 50,000 fact-checks from the 2016 presidential election.
This extensive dataset provided valuable insights into the challenges of distinguishing factual claims from non-factual ones. It allowed the team to create the final product – an API-based solution that integrates with content management systems (CMS). In this way, it becomes an integral part of the content creation workflow, saving journalists time and ensuring accuracy.
As Gaute Kokkvoll points out, Factiverse can verify AI-generated content and claims made by ChatGPT, Bard or other AI assistants. How is this possible? Factiverse employs the Bing, Google and Wikipedia APIs to retrieve results, but here's the twist: they apply their own ranking algorithm based on a source's historical credibility on a given topic. This approach ensures that the system doesn't blacklist or censor any source but provides users with a comprehensive overview.
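To make the idea of credibility-based re-ranking concrete, here is a minimal sketch of what such an approach could look like. Everything here is an assumption for illustration: the field names, the credibility table, and the scoring formula are hypothetical and not Factiverse's actual algorithm. The key property it demonstrates is the one described above: no source is blacklisted, but sources with a stronger track record on a topic rise to the top.

```python
from dataclasses import dataclass

@dataclass
class SearchResult:
    url: str
    domain: str
    engine_rank: int  # position returned by a search engine such as Bing or Google

# Hypothetical per-domain, per-topic credibility scores in [0, 1],
# e.g. derived from how often a source's past claims on that topic held up.
CREDIBILITY = {
    ("who.int", "health"): 0.95,
    ("example-blog.com", "health"): 0.40,
}

def rank_results(results, topic):
    """Re-rank results: every source stays visible (no blacklisting),
    but historically credible sources on this topic are listed first."""
    def score(r):
        cred = CREDIBILITY.get((r.domain, topic), 0.5)  # neutral default for unknown sources
        # Blend source credibility with the engine's own relevance ordering.
        return cred - 0.01 * r.engine_rank
    return sorted(results, key=score, reverse=True)
```

Note that an unknown source gets a neutral default score rather than being dropped, which mirrors the "overview, not censorship" design choice described above.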
“Our AI technology identifies factual statements in your text and performs a live online search for supporting, neutral or disputing evidence. It presents all this data concisely, allowing you to evaluate the credibility of your content,” Gaute explains.
As he adds, Factiverse also provides links to each source, ranked by reliability, helping the author make an informed decision about the authenticity of their content.
The crucial point is that search engines often prioritise commercial interests and SEO rankings over credibility. Factiverse envisions a credible search engine that considers historical credibility and helps users access accurate information. This ambitious goal aligns with the company's mission to be transparent.
Factiverse, for example, is aware of the complexity of reviewing controversial issues. In such cases, they provide context and additional information rather than making binary true/false claims. Factiverse empowers users to make informed decisions through summaries and fact boxes.