On ResearchBuzz (October 12, 2018), Tara Calishain noted an announcement of a new “fake news”-fighting tool from the University of Michigan Center for Social Media Responsibility: “…a tool to help monitor the prevalence of fake news on social media through a Platform Health Metric called the Iffy Quotient. A web-based dashboard that shows the Iffy Quotient for Facebook and Twitter, dating back to 2016, will be updated regularly.”
A couple of links away was the white paper describing this project, co-authored by Paul Resnick, Aviv Ovadya, and Garlin Gilchrist. In the Acknowledgements, the authors note that
The Iffy Quotient is an adaptation of Aviv Ovadya’s previous work toward a dashboard for measuring attention toward unreliable sources, which he developed prior to joining the Center for Social Media Responsibility. Ovadya’s work began in late 2016 and was also funded through his 2017 Knight News Innovation Fellowship at the Tow Center for Digital Journalism at Columbia University.
Earlier in 2018, BuzzFeed interviewed Aviv Ovadya about the “terrifying future of fake news” — a search on the term “Infocalypse” leads either to references to this interview or to the term’s original use describing Internet crime. As explored in the BuzzFeed interview, Ovadya’s use is wider-ranging.
The goals of “The Iffy Quotient: A Platform Health Metric for Misinformation” are more modest than the prevention of Ovadya’s nightmare scenario, but, seen in the best light, they contribute to that effort. Briefly, the metric is designed to describe “how much content from ‘iffy’ sites has been amplified on Facebook and Twitter.” The comparative chart is prepared in several steps (a sketch of the arithmetic follows the quoted list):
1) NewsWhip provides the 5,000 most engaged-with URLs each day, on Facebook and Twitter.
2) Media Bias/Fact Check provides lists of domain names they have judged.
a) We define as Iffy those sites listed as Questionable Sources or Conspiracy/Pseudoscience.
b) We define as OK those sites listed in other categories, including Left Bias and Right Bias.
3) We check for automatic redirects to infer categorization of additional domain names.
4) For each site, the Iffy Quotient is the fraction of the day’s 5,000 URLs that are from domain names categorized as Iffy.
5) We report a seven-day moving average to smooth the chart. (Iffy Quotient, p.9)
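As a back-of-the-envelope illustration of steps 1–5, the daily calculation and its smoothing reduce to a few lines of Python. This is a hypothetical sketch, not the Center’s actual code: the URLs, the domain set, and the function names are all invented for the example.

```python
from urllib.parse import urlparse

def iffy_quotient(urls, iffy_domains):
    """Fraction of a day's most-engaged-with URLs whose domain is
    categorized as Iffy (a hypothetical rendering of step 4)."""
    def domain(url):
        # Normalize "www.example.com" and "example.com" to one key.
        return urlparse(url).netloc.lower().removeprefix("www.")
    return sum(domain(u) in iffy_domains for u in urls) / len(urls)

def smoothed(daily, window=7):
    """Seven-day moving average used to smooth the chart (step 5)."""
    return [sum(daily[max(0, i + 1 - window): i + 1]) /
            len(daily[max(0, i + 1 - window): i + 1])
            for i in range(len(daily))]

# A toy day's sample -- NewsWhip supplies 5,000 URLs per platform in practice.
urls = [
    "https://www.iffy-example.com/story",
    "https://reliable-example.org/report",
    "https://iffy-example.com/another-story",
    "https://ok-example.net/article",
]
iffy_domains = {"iffy-example.com"}  # drawn from Media Bias/Fact Check categories
print(iffy_quotient(urls, iffy_domains))  # -> 0.5
```

The one step that resists a neat implementation, of course, is building the iffy_domains set in the first place; that is exactly where Media Bias/Fact Check’s human judgments enter.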
The metric, then, depends on information provided by several sources: NewsWhip collects the URLs to be evaluated, and Media Bias/Fact Check provides the standard against which the websites are judged. The authors note the use of another evaluative list, Melissa Zimdars’ OpenSources collection, as a test of “robustness” (p. 7).
In their Introduction, the authors argue for the merits of evaluations maintained outside the confines and control of the two social media giants being examined:
First, they can draw attention to issues that platforms may either not be tracking themselves or not prioritizing as much as the public would like. This form of public accountability is preferable to the current environment of accountability by gotcha anecdotes. It focuses attention on the overall performance of platforms rather than on bad outcomes in individual cases; some bad outcomes may be inevitable given the scale on which the platforms operate.
Second, external metrics can create public legitimacy for claims that platforms make about how well they are meeting public responsibilities. Even if Facebook actually reduces the audience share for Iffy content, the public may be skeptical if Facebook defines the metric, conducts the measurement without audit, and chooses whether to report it. (pp. 1–2)
Here, a small dog barks. Is this a “gotcha anecdote”?
Recently, American Greatness Assistant Editor Pedro Gonzalez uncovered a clear discrepancy with the website “Media Bias Fact Check,” between its coverage of American Greatness and Huffington Post.
In its profile of AG, the website chastises this publication for occasionally linking to such sources as Tucker Carlson’s The Daily Caller or The Gateway Pundit, and thus ranks us as having “mixed” factual reporting and a “right” bias… even though it then admits that we have never failed a fact check.
Meanwhile, in a rather glowing review of HuffPo, the same site declares that the far-left publication has a “high” factual reporting rate, despite admitting immediately afterward that the site has, in fact, failed a fact check and published an unproven claim in the past.
While NewsWhip simply collects the URLs appearing day-to-day on Facebook and Twitter, a more substantial burden is placed on the evaluative sources. Anyone interested in this question, at least since mid-2016, will undoubtedly recall the discussions of Melissa Zimdars’ list. This is still accessible, although most resources for critically evaluating online news cite the original version, held as a PDF on Google Docs. “The Moving Finger writes; and, having writ, Moves on…”
The principal evaluative tool used in developing the Iffy Quotient, however, is Media Bias/Fact Check.
Barking dogs aside, the staffers of this resource could reasonably be compared to the agents cited by Juvenal (and, in a stretch, by Alan Moore?):
Dave Van Zandt is the primary editor for sources. He is assisted by a collective of volunteers who assist in research for many sources listed on these pages. (Media Bias/Fact Check: About)
In addition to other thoughts about the state of online media-bias checking, Tamar Wilner, writing in the Columbia Journalism Review, comments on Van Zandt’s enterprise:
Amateur attempts at such tools already exist, and have found plenty of fans. Google “media bias,” and you’ll find Media Bias/Fact Check, run by armchair media analyst Dave Van Zandt. The site’s methodology is simple: Van Zandt and his team rate each outlet from 0 to 10 on the categories of biased wording and headlines, factuality and sourcing, story choices (“does the source report news from both sides”), and political affiliation….Their subjective assessments leave room for human biases, or even simple inconsistencies, to creep in….
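Taking Wilner’s description at face value, the arithmetic is trivial; the judgment is not. A toy sketch makes the point (the simple-average rule and the numbers here are assumptions for illustration, not Media Bias/Fact Check’s published formula):

```python
from statistics import mean

# The four 0-10 categories Wilner lists.
CATEGORIES = ("biased wording and headlines", "factuality and sourcing",
              "story choices", "political affiliation")

def overall(scores):
    """Average an outlet's four subjective 0-10 category scores."""
    return mean(scores[c] for c in CATEGORIES)

# Two honest raters scoring the same outlet can land in different places.
rater_a = dict(zip(CATEGORIES, (3, 2, 4, 3)))
rater_b = dict(zip(CATEGORIES, (6, 3, 5, 7)))
print(overall(rater_a), overall(rater_b))  # -> 3 and 5.25
```

Nothing in the formula introduces the inconsistency; it all lives in the inputs.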
Perhaps algorithmic analysis, and authority, relies, in the end, on subjective assessment? The creators of the Iffy Quotient admit that the model is, well, “iffy”:
We use the term “Iffy” to describe sites that frequently publish misinformation. It is a light-hearted way to acknowledge that our categorization of the sites is based on imprecise criteria and fallible human judgments. (Iffy Quotient: Executive Summary)
Perhaps other methods of shouting “No!” in the face of the Infocalypse can be developed? Here, for instance, is a project at the University of Western Ontario – longer, slower, more methodical, more encompassing of human critical judgement, with a longer view of human knowledge. More promising than a Watchman?
Images: wall of TVs: http://nightflight.com/wp-content/uploads; barking dog: https://www.dogmaster.com.au/stop-dog-barking-faq/#bark-control-products