Researchers released a new tool Wednesday to track how many stories posted on social media come from sources known to publish disinformation.
The University of Michigan Center for Social Media Responsibility unveiled a metric called the “Iffy Quotient,” which measures how frequently stories from questionable sources are shared on Facebook and Twitter.
The researchers tracked stories back to 2016 and found that the metric doubled on both platforms between January and November of that year, in the months ahead of the presidential election.
The “Iffy Quotient” has since fallen on Facebook, returning to its early-2016 levels. Twitter hasn’t seen the same decrease; the metric currently sits about 50 percent higher there than on Facebook.
The quotient is tracked on a dashboard that also shows how many domains from other kinds of sites are being shared.
The tool relies on information gathered by social media tracking company NewsWhip to see which domains are most widely shared on the platforms. It then compares the top 5,000 domains being shared at a given time with lists created by the independent site Media Bias/Fact Check, which organizes sources by reliability and bias.
Out of the top domains shared on the social media platforms, those appearing on Media Bias/Fact Check’s “Questionable Sources” or “Conspiracy” lists are labeled as “Iffy.”
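The process described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the idea, not the center's actual methodology: the domain lists, the weighting scheme, and all names here are assumptions for demonstration only.

```python
# Hypothetical sketch of an "Iffy Quotient"-style calculation.
# The domain set and sample data below are illustrative stand-ins,
# not real entries from Media Bias/Fact Check's lists.

IFFY_DOMAINS = {"example-conspiracy.test", "questionable-news.test"}

def iffy_quotient(top_shared_domains):
    """Return the fraction of top shared domains flagged as 'iffy'."""
    if not top_shared_domains:
        return 0.0
    iffy_count = sum(1 for d in top_shared_domains if d in IFFY_DOMAINS)
    return iffy_count / len(top_shared_domains)

# Example: 2 of these 4 widely shared domains are on the iffy list.
sample = [
    "example-conspiracy.test",
    "reliable-paper.test",
    "public-broadcaster.test",
    "questionable-news.test",
]
print(iffy_quotient(sample))  # 0.5
```

The published metric presumably weights domains by share or engagement volume rather than counting each equally; this sketch shows only the basic list-matching step.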
"By contrast with the current environment of accountability by 'gotcha' examples of bad outcomes, the Iffy Quotient tells us something about the overall performance of the platforms," Paul Resnick, the founder and acting director of the Center for Social Media Responsibility, said in a release.
"The platforms can track metrics internally with their own data, but hesitate to report them externally. By publishing continuously, we can provide accountability when things get worse and credibility for claims of progress."
Facebook and Twitter have both faced pressure from lawmakers to crack down on “fake news” after Russia was found to have used social media to influence the 2016 election.
Both platforms have since removed accounts tied to the Russian and Iranian governments, and implemented policies aimed at mitigating the spread of disinformation.