How to detect fake news automatically with computational linguistics

Feb 4, 2019
3 MIN. READ

Fake news continues to threaten political stability around the world, but computational linguists offer potential solutions for detecting propaganda before it spreads.

Former President Ronald Reagan popularized the dictum, “Trust, but verify,” during meetings with Mikhail Gorbachev in the late 1980s, transforming the Russian proverb into a principle of foreign policy. Over 30 years later, a resurgence of propaganda in the form of “fake news” has made verification a critical priority for national security.

“Our historical method of placing trust in reporters or news organizations is under attack,” state the authors of a new paper titled, “A Model for Evaluating Fake News.”

The proliferation of fake news has moved faster than our ability to detect it. Current tools lack the sophistication to eliminate nefarious content automatically, but computational linguistics and signals intelligence appear to hold answers.

The paper describes a model that leverages a defined set of variables to root out fake news. By comparing key linguistic features to an archive of authentic news stories, algorithms may flag stories that deviate significantly from the characteristics of real journalism.
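As a rough illustration of that idea, a deviation check against an archive baseline might look something like the Python sketch below. The feature names, archive values, and z-score threshold are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: flag an article whose linguistic features deviate
# sharply from an archive of verified news stories. Features, values, and
# the threshold are illustrative, not drawn from the paper.
from statistics import mean, stdev

# Archive of verified stories, each reduced to a few linguistic features.
archive = [
    {"adverb_ratio": 0.04, "avg_sentence_len": 21.0, "word_count": 820},
    {"adverb_ratio": 0.05, "avg_sentence_len": 19.5, "word_count": 940},
    {"adverb_ratio": 0.03, "avg_sentence_len": 22.3, "word_count": 760},
    # ... in practice, thousands of articles
]

def z_scores(candidate: dict) -> dict:
    """Compare a candidate article's features to the archive baseline."""
    scores = {}
    for feature, value in candidate.items():
        baseline = [story[feature] for story in archive]
        mu, sigma = mean(baseline), stdev(baseline)
        scores[feature] = (value - mu) / sigma if sigma else 0.0
    return scores

def looks_suspicious(candidate: dict, threshold: float = 3.0) -> bool:
    """Flag the story if any feature deviates beyond the threshold."""
    return any(abs(z) > threshold for z in z_scores(candidate).values())

suspect = {"adverb_ratio": 0.19, "avg_sentence_len": 9.0, "word_count": 260}
print(looks_suspicious(suspect))  # True: far outside the archive's norms
```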

“Understanding the pattern spread of a fact-based narrative is the first signal that requires identification,” the authors continue.

By its nature, journalism encompasses a vast variety of publications, voices, and styles, making fake news particularly tricky to isolate. From Fox News to the Washington Post, readers encounter a diverse range of perspectives and standards.

In simpler cases, a reputation analysis can shed light on the veracity of an article.

“If authors move to different publishers or publishers change names in an attempt to hide bad reputations, the characteristics of their previous work remains, allowing for potential matches of emerging entities to existing bodies of work found in the archive,” the paper asserts.
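A minimal sketch of that matching step, assuming each body of work is reduced to a small stylistic fingerprint, might compare an emerging entity to the archive with a simple cosine similarity. The outlet names and feature values below are made up for illustration.

```python
# Hypothetical sketch of the reputation-matching idea: compare the stylistic
# "fingerprint" of an emerging author or outlet against bodies of work
# already in the archive. Names and feature values are illustrative.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Fingerprints (e.g., adverb ratio, adjective ratio, type-token ratio)
# averaged over each known body of work in the archive.
known_bodies_of_work = {
    "outlet_a": [0.04, 0.08, 0.52],
    "outlet_b_known_propaganda": [0.18, 0.22, 0.31],
}

# A newly renamed publisher's fingerprint, computed the same way.
emerging_entity = [0.17, 0.21, 0.33]

best_match = max(
    known_bodies_of_work,
    key=lambda name: cosine_similarity(emerging_entity, known_bodies_of_work[name]),
)
print(best_match)  # "outlet_b_known_propaganda": the prior work it resembles most
```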

A writer’s byline history or a publication’s catalog of articles will likely reveal suspicious patterns if it aggressively promotes fake news. But propagandists are constantly refining their methods, so the detection of fake news requires more nuance to prove effective in the long term.

The goals behind fake news tend to drive common patterns that deviate from fact-based narratives. The computational linguistics model proposed in the paper suggests using indicators such as adverb frequency and word count to detect possible propaganda.

When foreign aggressors craft false headlines, they often use adverbs excessively to add emotional urgency to their content. This manipulative writing style can be flagged when more obvious patterns are not present.
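As a crude illustration of that signal, the sketch below estimates adverb density with a simple “-ly” suffix heuristic rather than a real part-of-speech tagger; the 0.08 flagging threshold is an assumption, not a figure from the paper.

```python
# Crude, illustrative sketch of the adverb-and-word-count signal described
# above. A real system would use a part-of-speech tagger; here a simple
# "-ly" suffix heuristic stands in for adverb detection. The threshold is
# an assumption, not a figure from the paper.
import re

def adverb_signal(text: str, max_adverb_ratio: float = 0.08) -> dict:
    words = re.findall(r"[A-Za-z']+", text.lower())
    adverbs = [w for w in words if w.endswith("ly") and len(w) > 3]
    ratio = len(adverbs) / len(words) if words else 0.0
    return {
        "word_count": len(words),
        "adverb_ratio": round(ratio, 3),
        "flag": ratio > max_adverb_ratio,
    }

headline = "Shockingly, officials secretly and brazenly hid the utterly damning truth"
print(adverb_signal(headline))
# {'word_count': 10, 'adverb_ratio': 0.4, 'flag': True}
```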

The surge of Russian meddling in foreign politics has brought the concept of fake news to the forefront of cybersecurity concerns. Advances in AI and chatbots have made it all too easy for manipulative content to influence the public, but, fortunately, the same technology may soon be employed to counteract misinformation.

“The entire process is designed for both efficiency and the ability to use any single component with high assurance.”
- Authors of A Model for Evaluating Fake News

The model presented in the paper introduces novel calculations for the rapid detection of fake news. Although implementations of such solutions are still in their nascent stages, these data science techniques offer concrete steps for counteracting propaganda automatically.

In an era of digital warfare, the age-old tactic of propaganda remains as relevant as ever. With the help of computational linguists, individuals will be able to verify, then trust, when it comes to online content.

ICF cybersecurity expert Dr. Char Sample led the research for this paper, which is available in full below.
