An Iffy Promise?

Subtitle: Quis Custodiet Ipsos Custodes?
[H/T Alan Moore]

Post by Lucian Minor

On ResearchBuzz (October 12, 2018), Tara Calishain noted an announcement of a new “fake news”-fighting tool from the University of Michigan Center for Social Media Responsibility: “…a tool to help monitor the prevalence of fake news on social media through a Platform Health Metric called the Iffy Quotient. A web-based dashboard that shows the Iffy Quotient for Facebook and Twitter, dating back to 2016, will be updated regularly.”

A couple of links away was the White Paper describing this project, co-authored by Paul Resnick, Aviv Ovadya, and Garlin Gilchrist. In the Acknowledgements the authors note that

The Iffy Quotient is an adaptation of Aviv Ovadya’s previous work toward a dashboard for measuring attention toward unreliable sources, which he developed prior to joining the Center for Social Media Responsibility. Ovadya’s work began in late 2016 and was also funded through his 2017 Knight News Innovation Fellowship at the Tow Center for Digital Journalism at Columbia University.

Earlier in 2018 Buzzfeed interviewed Aviv Ovadya about the “terrifying future of fake news” — a search on the term “Infocalypse” leads either to references to this interview or to the original use describing Internet crime. As explored in the Buzzfeed interview, Ovadya’s use is wider-ranging.

The goals of the Iffy Quotient: A Platform Health Metric for Misinformation are more modest than the prevention of Ovadya’s nightmare scenario, but, in the best light, they contribute to this effort. Briefly, the metric is developed to describe “how much content from ‘iffy’ sites has been amplified on Facebook and Twitter.” The comparative chart is prepared in several steps:

1) NewsWhip provides the 5,000 most engaged-with URLs each day, on Facebook and Twitter.

2) Media Bias/Fact Check provides lists of domain names they have judged.

a) We define as Iffy those sites listed as Questionable Sources or Conspiracy/Pseudoscience.

b) We define as OK those sites listed in other categories, including Left Bias and Right Bias.

3) We check for automatic redirects to infer categorization of additional domain names.

4) For each site, the Iffy Quotient is the fraction of the day’s 5,000 URLs that are from domain names categorized as Iffy.

5) We report a seven-day moving average to smooth the chart. (Iffy Quotient, p.9)

The metric, then, depends on information provided from several sources: NewsWhip collects the URLs to be evaluated, and Media Bias/Fact Check provides a standard against which the websites can be evaluated. The authors note the use of another evaluative list, Melissa Zimdars’ OpenSources collection, as a test of “robustness” (p. 7).
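For the curious, the arithmetic in steps 4 and 5 above is straightforward. The sketch below is only an illustration of that calculation, not the Center’s actual code; the function names and data shapes are hypothetical:

```python
from collections import deque

def iffy_quotient(daily_domains, iffy_domains):
    """Fraction of a day's top URLs whose domain is categorized as Iffy.

    daily_domains: domain names of the day's 5,000 most engaged-with URLs
    iffy_domains: set of domains judged Questionable Sources or
                  Conspiracy/Pseudoscience (per Media Bias/Fact Check)
    """
    iffy_count = sum(1 for domain in daily_domains if domain in iffy_domains)
    return iffy_count / len(daily_domains)

def seven_day_average(daily_quotients):
    """Seven-day moving average used to smooth the published chart."""
    window = deque(maxlen=7)  # keeps only the most recent seven days
    smoothed = []
    for quotient in daily_quotients:
        window.append(quotient)
        smoothed.append(sum(window) / len(window))
    return smoothed
```

On any given day, then, if 250 of the 5,000 top URLs come from Iffy domains, the day’s quotient is 0.05, and the chart plots the average of the last seven such values.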

In their Introduction, the authors argue for the merits of evaluations maintained outside the confines and control of the two social media giants being examined:

First, they can draw attention to issues that platforms may either not be tracking themselves or not prioritizing as much as the public would like. This form of public accountability is preferable to the current environment of accountability by gotcha anecdotes. It focuses attention on the overall performance of platforms rather than on bad outcomes in individual cases; some bad outcomes may be inevitable given the scale on which the platforms operate.

Second, external metrics can create public legitimacy for claims that platforms make about how well they are meeting public responsibilities. Even if Facebook actually reduces the audience share for Iffy content, the public may be skeptical if Facebook defines the metric, conducts the measurement without audit, and chooses whether to report it. (pp. 1,2)

Here, a small dog barks. Is this a “gotcha anecdote”?

Recently, American Greatness Assistant Editor Pedro Gonzalez uncovered a clear discrepancy with the website “Media Bias Fact Check,” between its coverage of American Greatness and Huffington Post.

In its profile of AG, the website chastises this publication for occasionally linking to such sources as Tucker Carlson’s The Daily Caller or The Gateway Pundit, and thus ranks us as having “mixed” factual reporting and a “right” bias… even though it then admits that we have never failed a fact check.

Meanwhile, in a rather glowing review of HuffPo, the same site declares that the far-left publication has a “high” factual reporting rate, despite admitting immediately afterward that the site has, in fact, failed a fact check and published an unproven claim in the past.

While NewsWhip simply collects the URLs appearing day-to-day on Facebook and Twitter, a more substantial burden is placed on the evaluative sources. Anyone interested in this question, at least since mid-2016, will undoubtedly recall the discussions of Melissa Zimdars’ list. This is still accessible, although most resources for critically evaluating online news cite the original version, held as a PDF on Google Docs. “The Moving Finger writes; and, having writ, Moves on…”

The principal evaluative tool used in developing the Iffy Quotient, however, is Media Bias/Fact Check.

Barking dogs aside, the staffers of this resource could reasonably be compared to the agents cited by Juvenal (and, in a stretch, by Alan Moore?):

Dave Van Zandt is the primary editor for sources. He is assisted by a collective of volunteers who assist in research for many sources listed on these pages. (Media Bias/Fact Check: About)

In addition to other thoughts about the state of online media-bias checking, Tamar Wilner, writing in the Columbia Journalism Review, comments on Van Zandt’s enterprise:

Amateur attempts at such tools already exist, and have found plenty of fans. Google “media bias,” and you’ll find Media Bias/Fact Check, run by armchair media analyst Dave Van Zandt. The site’s methodology is simple: Van Zandt and his team rate each outlet from 0 to 10 on the categories of biased wording and headlines, factuality and sourcing, story choices (“does the source report news from both sides”), and political affiliation…. Their subjective assessments leave room for human biases, or even simple inconsistencies, to creep in….

Perhaps algorithmic analysis, and authority, rely, in the end, on subjective assessment? The creators of the Iffy Quotient admit that the model is, well, “iffy”:

We use the term “Iffy” to describe sites that frequently publish misinformation. It is a light-hearted way to acknowledge that our categorization of the sites is based on imprecise criteria and fallible human judgments. (Iffy Quotient: Executive Summary)

Perhaps other methods of shouting “No!” in the face of the Infocalypse can be developed? Here, for instance, is a project at the University of Western Ontario – longer, slower, more methodical, more encompassing of human critical judgement, with a longer view of human knowledge. More promising than a Watchman?

FIN

 

Images: wall of tvs: http://nightflight.com/wp-content/uploads ; barking dog: https://www.dogmaster.com.au/stop-dog-barking-faq/#bark-control-products


bǎi huā qífàng, bǎi jiā zhēngmíng (“let a hundred flowers bloom, let a hundred schools of thought contend”)

Paradox of Choice.

Editor’s note: found loose in the papers of Luciani Samosatensis Minor (Lucian Minor)

And it came to pass that in the never-ending search for Reliable Sources, the old advice to be careful what you wish for was recalled (a voice in the crowd comments). Even though this is alternately sourced to the Yiddish or the Chinese – and more on that provenance below – it is perhaps more reliable to cite Robert Merton, who credibly articulated the Unintended Consequences of Purposive Social Action (not quite open-source, but the point is clear…).

Indeed, in the face of the plague of fakeness, BS, and generally unhelpful re-constructions of reality, many answered the call, even the pleas for truth in the hurly-burly.

https://www.axios.com/fake-news-initiatives-fact-checking-dfa6ab56-3295-4f1a-9b38-e61ca47e849f.html

and in a strange coincidence of intent, at least one other:
https://www.politico.com/story/2018/10/07/fake-news-database-help-876301

No doubt authentic American analogies can be recalled: the land rush(es) of the Nineteenth Century, wildcatting in California and Texas, the garage-workshop phase of Silicon Valley, etc.

However, one of the most venerable historical comparisons is perhaps to the quandary of Zhao Zheng (known by then to his fans as Qin Shi Huang), who, faced with the bewildering choices presented to his people by the Contention of a Hundred Schools of Thought, resolved to settle the matter expeditiously.

As his latter-day successor observed:

He buried 460 scholars alive; we have buried forty-six thousand scholars alive… You [intellectuals] revile us for being Qin Shi Huangs. You are wrong. We have surpassed Qin Shi Huang a hundredfold. When you berate us for imitating his despotism, we are happy to agree! Your mistake was that you did not say so enough. [Wikipedia much?]

And so, to our tale in the present day. As the earnest enterprises for detection of fakery and foolery amongst the people proceed, the question inevitably arises: who ya gonna trust? Or should we await the arrival of the new emperor, and the reduction of choices?

A modest postscript, for cross-cultural studies:

Quis custodiet ipsos custodes? (Juvenal, Satire VI, lines 347–348)

Meet the Old Boss

FIN

 

Images: https://www.flickr.com/photos/tuzen/16469142644/in/photostream/ ; https://commons.wikimedia.org/wiki/File:The_Vicissitudes_of_Life,_by_Zheng_Shengtian,_Zhou_Ruiwen,_and_Xu_Junxuan,_People%27s_Republic_of_China,_1967,_lithograph_-_Jordan_Schnitzer_Museum_of_Art,_University_of_Oregon_-_Eugene,_Oregon_-_DSC09552.jpg

Banned Books Week Thoughts: On “Censorship”

BBW-logo

How much do librarians really want to think about this?

It is “banned books” week, so here are some thoughts on the topic of censorship.

Not long ago, I posted a past blog post titled “Should Offensive Books be Removed from your Library’s Collection?” on a private librarian discussion group and got the following response:

No. Unless it doesn’t fit your collection development policy, it should stay in the library. I just had an encounter with [a] very conservative school library that polices their collection by saying all books must meet standards set down by the instructional resources committee – even if the books are for recreational reading. Any book deemed YA must have parent permission for students to read the book. YA is their own designation and the children in question are all teens. The values of one group – or one librarian – should not determine the freedom to read for all patrons. So while a person is free to refuse to read Huckleberry Finn, they are not free to deny access to other patrons by insisting the book be removed. (And while that did happen in a classroom at our local school – and I supported the challenge, it should not happen at the library.) I choose not to read Slaughterhouse-Five – tried and couldn’t take it – but I would never ask it be removed from the library.

My reply to this person was as follows:

The “Library Bill of Rights” puts forward the notion that “a person’s right to use a library should not be …abridged because of… age.” Do you think that concerns parents might have about “age-appropriateness” are out of line then? If a parent suggests that a book should simply be moved from one section to another for a reason like this, would you consider that censorship? I think it’s important to note that the ALA does. For them censorship is “a change in the access status of a material, based on the content of the work and made by a governing authority or its representatives. Such changes include exclusion, restriction, removal, or age-grade-level access limitations.” Quite honestly, I think this goes way beyond an appropriate definition of censorship. Perhaps a good related question here is whether we think the idea of “rating movies” is reasonable or not. Censorship or just some regular common sense? At the same time, I think it is good for all librarians to admit that when pressed, there are many things they typically want to discourage from spreading or being widely accessible – like advocacy for the practice of sati, female genital mutilation, sexism, how to build bombs, racism, terrorism, child predation, and slavery, for example. In short, the idea here is that there are some things that we think are “beyond the pale” and that we should not give time to. At the same time, if librarians do not represent constituencies of their public who are for, or considering, these things, are they acting as censors in this case? Why or why not? And if so, is that a good thing? I get the impression that most librarians don’t want to think about these kinds of things too much though.

More:

What criteria or process might public librarians use to weigh the parent’s concerns? Is there ever a point when any book or other information artifact might not be worth using to learn and grow? Assuming one thinks that parents should have little or no role in making decisions about what is on the shelves in public libraries or where it is located, how then should libraries determine the criteria for making decisions about what books are appropriate for children’s or teens’ sections? The core question here would seem to be: “Is the implementation of civil laws — or even doing things like rating movies or moving books to different locations in public libraries — *only* about certain individuals imposing their ideas of what is right on everyone else?”

All of that is going to be related to what you think of the idea that “authority is constructed and contextual”.

Again, that is what my published papers in Libraryland seek to intelligently challenge.

FIN

Ben Bayer’s ‘Sniff Tests’ for identifying unreliable stories online (part 6 of 6)

Falsehood flies, and the Truth comes limping after it. — Jonathan Swift

 

Post by Lucian Minor

Part 1

Part 2

Part 3

Part 4

Part 5

The purpose of this concluding note is to comment on several topics arising from Ben Bayer’s five-part series on identifying unreliable stories online. These were mentioned in passing, but in the interests of highlighting Bayer’s points further comment was deferred.

The first concerns the idea of “cognitive labor.” This was brought up in comments on the first essay, as Bayer addressed a basic paradox of online information: “The Internet allows misinformation to be spread at lightning speed. But it also allows you to check that information just as quickly.”

Two problems should be mentioned: one is that work, even the cognitive labor of searching and evaluating (by whatever standards), takes time, and the move to save time by a division of cognitive labor depends on trust in the source of authority. This type of work is an application of the Principle of Least Effort.

This principle is also a factor in appreciating Bayer’s argument for finding “a good balance between left- and right-wing sources.” Simply put, how are these sources to be discovered and identified? This advice is not unique to Prof. Bayer: research shows that “lateral reading” is a good method of countering common biases in information seeking behaviors.

However, anyone seeking out these sources will, following the principle of least effort, look to authorities which have already been judged to be trustworthy, which immediately raises the prospect of using prior social networks. Thus, independent, self-motivated, and self-directed discovery of information is a very worthy goal, but it is not a simple process.

If the act of seeking trustworthy authorities might be seen as leading to a regress, then a possible solution is to recognize that people will in any case use information-seeking heuristics. These patterns of thought are not rules, per se, with the exception of such rules of logic as may be consciously adopted; rather, they are fallible methods, and prone to biases. They may be adopted by imitation or association with others, most likely in prior social networks; they may be understood as tacit or implied knowledge, or “commonsensical.”

Bayer did not discuss the role of education in his essays, electing instead to focus on individual motivation to seek true and reliable sources. At the conclusion of Part 1, he quotes William Clifford, author of “The Ethics of Belief” (1877), which seems to anticipate the problem of “least effort” raised above:

‘But,’ says one, ‘I am a busy man; I have no time for the long course of study which would be necessary to make me in any degree a competent judge of certain questions, or even able to understand the nature of the arguments.’

Then he should have no time to believe.

Nevertheless, people do believe. Clifford depicts a rationalization, almost absurd in its implications, so this does not address the problem of “too little time.” Using the “sniff test” metaphor once again, the question could be, would you sniff the produce yourself, or trust somebody else’s nose? The answer, evidently, is both.

The idea of “reputational capital” was mentioned in the context of Bayer’s third “sniff test” because the status of a media company qua company is one of the foundations for his argument that “the free press in a free market for news” is reliable – a fallible reliability, but with few alternatives. As a going concern, the reputation of the firm is one of its principal assets. Although “reputational capital” is not often applied in the context of media as a business, Aleknonis (2010) discusses the measurement of reputation in media and Gentzkow and Shapiro (2008) explore the question of market discipline in the media. Rather than investing the time to study the linked articles, however, it may be simpler to recall Warren Buffett’s advice to his son: “It takes twenty years to build a reputation and five minutes to lose it. If you think about that, you will do things differently.”

Bayer makes constant reference to “mainstream media”, and the authority imputed to it, even if reluctantly. He uses the criterion of an “established journalistic institution (whether liberal or conservative)”—“established” meaning  sufficient material resources and some time in the marketplace (per Buffett, at least twenty years). The membership of the White House Correspondents’ Association illustrates this group of “established” institutions with the 2017 seating chart of the White House Press Room, displaying brand icons and an idea of relative prominence in the seating arrangements. What this chart shows, however, is not journalistic ability, but proximity to power. Speaking for power, perhaps, the opinion of Ben Rhodes, “deputy national security adviser for strategic communications” in the Obama Administration, regarding the press corps is infamous:

“All these newspapers used to have foreign bureaus,” he said. “Now they don’t. They call us to explain to them what’s happening in Moscow and Cairo. Most of the outlets are reporting on world events from Washington. The average reporter we talk to is 27 years old, and their only reporting experience consists of being around political campaigns. That’s a sea change. They literally know nothing.”

The appeal to “mainstream media” as a reliable source is, unfortunately, unpersuasive. Bayer’s argument that an economic incentive will drive the pursuit of truth would be more believable if there were more diversity of opinion, or better, more explicit commitment to objective reporting of facts. Consider this chart by Vanessa Otero, identifying positions taken by mainstream media firms (compared with the WHCA membership, for instance). Even if it is only illustrative, and not a precise measurement, the “skew” to the left-of-center is quite apparent (to her great credit, Otero does explain her methods). Note, too, the small number of sources identified as performing “original fact reporting,” arguably the ground upon which the architecture of interpretation, analysis, opinion, and advocacy is built. Even if Otero’s placement of these key players is labeled as “minimal partisan bias or balance of biases” (i.e. Neutral), the determining factor is the presence of bias.

Bayer acknowledges bias as a factor at several points, but with no discussion of the impact this has on reportorial objectivity, or more importantly, selection of topics for reporting. Without exploring the history of modern journalism, consider simply the notions of narrative, agenda, and advocacy. It may be enough to note as well that mainstream journalists have similar class and cultural interests, and similar ideological positions. Noam Chomsky, for all his radical thought on media, made some pointed observations about acculturation of journalists— included, interestingly, as the first reference cited in the Wikipedia article on “Mainstream Media”.

Media bias exists: the question is, does this affect the perception of credibility of the media sources by the information seeker? David Baron, in “Persistent Media Bias” (2004) observes:

Bias has two effects on the demand for news. First, rational individuals are more skeptical of potentially biased news and thus rely less on it in their decision-making. Second, bias makes certain stories more likely than others. This paper presents a supply-side theory in which bias originates with journalists who have career interests and are willing to sacrifice current wages for future opportunities. News organizations can control bias by restricting the discretion allowed to journalists, but granting discretion and tolerating bias can increase profits. The skepticism of individuals reduces demand and leads the news organization to set a lower price for its publication the greater is the bias it tolerates. Lower quality news thus commands a lower price. Bias is not driven from the market by a rival news organization nor by a news organization with an opposing bias. Moreover, bias can be greater with competition than with a monopoly news organization.

According to Gentzkow & Shapiro (2010), the consumer prefers it: “Our analysis confirms an economically significant demand for news slanted toward one’s own political ideology. Firms respond strongly to consumer preferences, which account for roughly 20 percent of the variation in measured slant in our sample….”

At the outset of the “Sniff Tests” Bayer warns: “The fault, dear readers, is not in our social media, but in ourselves.”

“If they can get you asking the wrong questions, they don’t have to worry about answers.” — Thomas Pynchon, Gravity’s Rainbow (Viking, 1973), p. 251. Proverbs for Paranoids #3


FIN

Images: Signature of Jonathan Swift – https://commons.wikimedia.org/wiki/File:Jonathan_Swift_signature.svg ; Tenniel, Cheshire Cat Above the War of Cards – https://commons.wikimedia.org/wiki/File:Alice_par_John_Tenniel_31.png

Ben Bayer’s ‘Sniff Tests’ for identifying unreliable stories online (part 5 of 6)

The Real Deal (trust me?)

Post by Lucian Minor

Part 1

Part 2

Part 3

Part 4

In the fifth, and final, “sniff test” Ben Bayer focuses principally on the motive of the reader, or news seeker:

I’d like to discuss the fact that not everyone is motivated by the truth, not in what they do and not in what they believe. This motivates the final question: why do I want to believe this story is true? It’s important to ask that question because it’s possible to not really care about whether what goes through our brain has any relationship to reality. You can see how this is possible from the way some people manage their thinking…. If the answer is, because I want to know what’s true and I’ve checked the most obvious indications of whether or not it is (say, by asking the other sniff test questions), it’s worth taking the claim seriously. But if the answer is (at the end of the day), because the truth of this story would fit my political ideology, or some other form of wishful thinking, it’s better to investigate the claim much more extensively before taking it seriously.

Bayer analyzes his own response and investigations into the reactions to the election of Donald Trump as President. He admits that since this was, to many, an unlikely and unexpected outcome, he was aware of a need to verify the apparent truth. Drawing on the principles outlined in Sniff Test #2 (how likely is the story to be true in the first place?), he observes: “… when someone has, say, a 71.4% chance of winning, that does mean that 28.6% of the time a similar prediction will be wrong. And so as shocking as the results were, it would have taken some pretty convincing evidence to call the results into question.” In examining claims that the apparent truth was in fact false, he performed the necessary follow-up checking into those assertions, and was not convinced to revise his belief that the election results were valid. He continues:

There are of course many who were shocked and even dismayed by Trump’s election who were still willing to accept that he had won fair and square. But even that didn’t stop many of them from concocting fanciful explanations for why he won. Of course explaining why some massive historical event took place is about as difficult as predicting what massive historical events will take place, and so I don’t myself claim to know why people voted the way they did. But I can tell the difference between a careful and considered attempt to explain a historical event, and a kneejerk-reactionary attempt motivated by a political ideology.

Noticing too the occurrence of persistent beliefs reinforced by limited information, Bayer suggests adopting a challenging approach:

A common piece of advice I’ve seen for how to escape the bubble effect is to be sure to expose yourself to a mixture of news sources. Try to find a good balance between left- and right-wing sources. One of the most important “Sniff test” questions I’ve outlined depends on assessing the prior probability of a story we hear, but that question only helps if one has rational assessments of prior probability. Being trapped in one ideology’s news bubble isn’t a great way to cultivate those rational assessments, so I agree in principle with the advice about seeking out a “balanced” news diet.

Bayer cites Aristotle from the Nicomachean Ethics for an early statement of the principle he advocates:

[W]e must consider the things towards which we ourselves also are easily carried away; for some of us tend to one thing, some to another; and this will be recognizable from the pleasure and the pain we feel. We must drag ourselves away to the contrary extreme; for we shall get into the intermediate state by drawing well away from error, as people do in straightening sticks that are bent.*

Working out the implications of Aristotle’s advice to seek the mean, Bayer makes a final appeal to the reader who would pursue the truth, wherever it may lie:

Think about it: if, for example, you are left-liberal, politically, you interpret the world in these terms and you probably tend to surround yourself with more people who do the same thing…. it is better to make a concerted effort to expose yourself more to conservative media than you do to the kind that favors your perspective. And monitor the best conservative media, not the kind that’s easy to criticize. The same advice goes for right-conservatives with respect to the left-liberal media. Whichever side you’re on, you’re not likely to be taken in by the bias you encounter in the other side’s media: the other side’s ideological assumptions aren’t transparent to you like they are to them. So what are you afraid of? Let’s hope it’s not discovering the truth.

The conclusion, a passage from John Locke’s Essay concerning Human Understanding (1689), begins with this admonition:

He that would seriously set upon the search of truth ought in the first place to prepare his mind with a love of it. For he that loves it not will not take much pains to get it; nor be much concerned when he misses it….

Employing the metaphor of a “sniff test” one more time, it is easy to see that the person who would not be much concerned with missing the truth might think otherwise after consuming rotten food.

The constant reader will have noticed some recurring concerns with Bayer’s assumptions, especially about the mainstream media. One last section, to follow shortly, will attempt to examine those – without faulting Bayer for not having written a different essay.

a balanced diet!

FIN

 

* Aristotle, Nicomachean Ethics, Book II, 9, 4-5.

Images: Sausage in the market: https://pxhere.com/en/photo/1157015 ; Balanced sweets: https://pxhere.com/en/photo/996132

 

Ben Bayer’s ‘Sniff Tests’ for identifying unreliable stories online (part 4 of 6)

IRL Early Modern Clickbait

Post by Lucian Minor

Part 1

Part 2

Part 3

Ben Bayer’s fourth “sniff test” expands on the third of his criteria for detecting “fake news,” which he proposed in his introduction: “… News is funky when the source is suspicious, when the nature of the claim being made is disproportionate to the evidence offered, and when it’s presented in a dishonest manner.” Thus, “Does this story represent its own alleged facts honestly?”

Bayer illustrates the use of this question as a test by using “clickbait” for examples. This type of web-based advertising is compared to the enticements at a circus:

…The business model of modern “clickbait” functions in about the same way as circus sideshows. Both make sensationalistic claims to motivate the curious to investigate further. Both get paid regardless of whether the curious are satisfied, especially since both are fly-by-night…. The main difference is that while you pay to get into the tent, advertisers pay clickbait hosts if you click on the link. You still end up feeling like a fool, though (and have to contend with the pop-up boxes and the collages of garish ads).

The purpose of this discussion is not to belabor clickbait, however – it is to alert the reader to the problem of sorting truth from lies: “My earlier posts were aimed more at detecting outright fabrication. The fourth question is more applicable to stories that are based on some kernel of truth, but which fertilize that kernel with exaggeration and speculative fantasy…”

In an extended example of how these stories work, Bayer notes how a story appearing on one website is copied from another site, and copied yet again, until finally the original report is determined to be several years older than what was implied by the website he began with, and the original report was considerably less inflammatory. This example of the classic game of “telephone,” in which a message is passed from person to person with decreasing fidelity to the original, at least retains the hyperlinks. Bayer incidentally provides a shortlist of websites that use this technique, but this isn’t even his principal complaint:

The mere fact that the original post presented a story as if it were breaking news, even though the story was over two years old, should be cause enough for closing the site and never visiting again. But the fact that the article doesn’t support its own sensationalistic headline with its own alleged details should give us much less confidence that the author is even getting the details right in the first place.

And people share this on social media! You can search by the title of the piece on Facebook just to see how many people among your friends have shared it, and how many non-friends have shared it publicly.

Bayer reflects on the spread and widespread occurrence of “fake news” of this sort, but he also provides a somewhat ironic admission, although he may not have intended it as such: “… I knew that too many misleading pieces like this existed, and since I didn’t remember one that had made any headlines, I simply went looking for one. I went to the page of a social media acquaintance I knew was notorious for sharing material like this.” In addition to the use of “notorious” as a synonym for “reliable,” notice that his method for discovering this sort of marginal information depended on the existence of a social network of acquaintances. The importance of the social connections among individuals seeking reliable information goes unremarked and unexplored.

Returning once more to a defense of mainstream media as a benchmark of reliability (“I do think they’re the best we’ve got, and this might even mean they’re only the best of a bad lot.”), Bayer offers a counter-example, though only by way of a link and a brief gloss of the referenced article:

Someone I respect published an interesting article (in a mainstream media outlet)* alleging that the problem with “fake news” is not as serious as the problem with false stories peddled by the mainstream media. He gives some examples of recent stories where he thinks mainstream sources have made unsubstantiated claims or reported facts out of context. As I read some of these stories, it’s not a slam dunk that the claims in question really are unsubstantiated or out of context, so they don’t strike me as the best examples to make the point.

This raises several questions, of course. The simplest is: what are the grounds for respecting the author of the linked article? Another: does the gloss accurately represent the article, and what obligation does Bayer have to ensure that it does? In fact, the reader interested enough to click through may discover that the author’s point is not quite what Bayer implies. The third paragraph of that article is illustrative: “Now, suddenly, establishment media have found a public enemy in ‘fake news.’ Their timing betrays their motive. By focusing America’s outrage elsewhere, establishment media have directed attention away from their own shoddy work. It’s no coincidence media’s outcry against ‘fake news’ came immediately after the people’s outcry against mainstream coverage of the presidential campaign.” Daniel Cole’s article addresses motive, in which bias plays a substantial role and which underlies the issue of reliability.

Bayer recaps his basic argument for trust in the reliability of the mainstream media one more time:

They make mistakes, even dishonest ones. But at least they are accountable for their mistakes. The reason we know about the worst journalistic sins in recent memory is because the agencies responsible for them eventually owned up to them (begrudgingly or not) — and their rivals reported them. And as I’ve argued in previous posts, there is reason to expect them to do this: ultimately, they’re accountable not to some journalistic ethics board, but to their readers and customers. They need a reputation for reliability to continue to sell papers.

This essay closes with another recapitulation, which speaks to Bayer’s most basic purpose in writing the series: to encourage the reader to do the hard work necessary to determine the truth of matters. Paraphrasing the philosopher William Clifford:  “the more careless you are about what you accept and repeat yourself, the more you encourage others to try to fool you and lie to you again. You don’t care about what’s true, so why should they care about telling you the truth? So the more you spread their lies, even unwittingly, the more you encourage their lies. But the more you do this, the more you are yourself culpable for aiding and abetting their mendacity.” Bayer concludes: “For my part, I think it’s bad enough to not care about the truth yourself, and the less you care, the more culpable you become for having surrounded yourself with liars.”

FIN

Notes: * Daniel Cole, “‘False news’ is worse than ‘fake news’ – but we can handle the truth.” Colorado Springs Gazette [online], Dec 11, 2016.

Images: Sideshow: commons.wikimedia.org/wiki/File:Marx%2BSuper%2BCircus%2BSideshow%2B2.JPG ; chumbucket: https://www.flickr.com/photos/jrmyst/2093119308


Ben Bayer’s ‘Sniff Tests’ for identifying unreliable stories online (part 3 of 6)

reporters working on the first rough draft of history?

Post by Lucian Minor

Part 1

Part 2

Ben Bayer offers five tests to evaluate news items and stories, especially those encountered online. The third of these “sniff tests” begins with the question, “If this story were true, what else would be true?”

In the context of the second test, which attempts to estimate the likelihood of a story’s veracity, he suggests: “If, given our background knowledge, the story depicted is unlikely to happen in the first place, we should insist on much more evidence than the report of a single source. We need to go in search of further confirmation that it is true before we begin to take it seriously.”

Bayer spells out the value of this third test toward the end:

Shocking stories we read online, if true, would be much more like the elephant in the room than the needle in the haystack. Precisely because they’re shocking, we would expect these stories to be significant enough to be widely reported…. when we comprehensively examine the places we would expect a news story to be reported if that story were true, and we come up dry, this is sufficient justification for claiming that the events described by the story probably did not happen.

Much of the discussion in this “sniff test,” however, is devoted to explaining the authority with which he invests the mainstream media. This position was a principal support for his argument in the first section (“What is the source of this story and what do I know about it?”) and it is important if “the places we would expect a news story to be reported if that story were true” are to be considered reliable.

Bayer acknowledges the time and resources that determining reliability demands, and argues that market incentives push the mainstream media toward accuracy:

For most of the stories we hear from other people, we don’t have the time, the resources, or even the knowledge to confirm the story for ourselves. For most of us, the most important thing we can do to confirm a story is to look for stories from other news sources that independently report the same thing. We know that if a story has merit, if it is about a major public event, anyone with the resources and knowledge to check it out will try to do so. The members of a free press in a free market for news have an incentive to report the story. If all of the other news agencies scoop them, no one will want to buy or advertise in their news outlet…. because their business model depends on their reputation, they are accountable for their mistakes in a way that alternative media are not. There is every reason to think that the mainstream press will report even initially improbable stories that go against their agenda, when the evidence becomes obvious enough.

In his discussion of media bias, he uses an interesting euphemism for the editorial decision process:

There is very real media bias that can disincentivize news outlets from reporting important stories. Stories that don’t fit an ideological agenda can be buried “below the fold,” otherwise subordinated to other news, or (especially when a story is of marginal interest) not reported at all.

Bayer does not look to the “alternative media” for much support, even after acknowledging that they play a role in keeping the mainstream media accountable. His principal argument is, again, economic:

…the alternative media needs the mainstream media far more than the other way around. Leave aside the fact that they need someone to critique. Look up your favorite alternative blog, and see how much raw news they get from mainstream sites. Major news agencies have reporters and resources that most alternative sources can’t hope to equal.

This analysis of the relationship between the alternative and mainstream media, and of a mechanism of self-correction, leaves several questions unanswered, however. How would a story be judged to be “of marginal interest,” for instance? Is the influence of an ideological agenda on the editorial process explicit, or implicit? What is meant by a “free press in a free market”? Do these questions arise because it is not self-evident that such an entity exists? Is alternative media defined only by contrast to mainstream media authorities? Is the mainstream media spending its reputational capital?

Bayer mentioned three criteria for testing reliability in his Introduction: “News is funky when the source is suspicious, when the nature of the claim being made is disproportionate to the evidence offered, and when it’s presented in a dishonest manner.” In this third part, he has discussed a fourth criterion, a model of coherence among sources. In the next part, he discusses honesty in representation, and in the fifth part he discusses personal motivation. Those reviews will follow shortly.

agents of alt-media, in the fields of the Mainstream

FIN

Images: Blind men and elephant from commons.wikimedia.org/wiki/File:Blind_men_and_elephant.jpg ; JF Millet, Des glaneuses from commons.wikimedia.org/wiki/File:The_Gleaners_MET_DP827910.jpg

Ben Bayer’s ‘Sniff Tests’ for identifying unreliable stories online (part 2 of 6)

Loosely translated: “The more extraordinary a claim, the stronger the proofs required to support it…” (“Plus un fait est extraordinaire, plus il a besoin d’être appuyé de fortes preuves…” – Pierre-Simon Laplace, 1812)

Post by Lucian Minor

Ben Bayer offers five tests to evaluate news items and stories, especially those encountered online. The second of these is summarized by the question, “How likely is the story to be true in the first place?” To answer this, he launches into a discussion of probability, and assumes only that his reader is genuinely interested in a method for estimating this likelihood. In short, the method requires some effort on the part of the reader, and not a little numeracy.

To his credit, Bayer shunts the details of Bayesian probability analysis into an Appendix, but it may be possible to find some broad, more simply phrased advice with which to support this “sniff test.” He offers an old adage (as old as Carl Sagan, anyway, echoing Laplace): “extraordinary claims require extraordinary evidence.” Expanding on this, Bayer remarks:

The foolishness of believing improbable stories without further confirmation is at least somewhat excusable when someone is young and doesn’t know much about how the world works… Of course, improbable things sometimes happen – like the election of Trump. And that’s why I’m only describing a sniff test, not a comprehensive diagnostic procedure. If a story is improbable to begin with, early indication that it is true should at most prompt you to search for confirming evidence… and that’s when the initial indication is reliable. Not even the mainstream media is 99% sensitive to the truth. I will leave it as an exercise for readers to consider what to do when the story comes from an “alternative” media source.

Using this sniff test, then, depends substantially on “given background knowledge.” Keeping to the analogy, this assumes that one knows what rottenness smells like – which might work with fruit or meat, but perhaps assumes too much in cases of abstract ideas or complex social and political arguments. Surely, some further tests would help! Bayer provides three more, to be reviewed.
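The logic Bayer relegates to his Appendix can be sketched with Bayes’ rule. A minimal illustration in Python, with invented numbers (the prior and report probabilities below are assumptions chosen for illustration, not figures from Bayer’s appendix):

```python
def posterior(prior, p_report_if_true, p_report_if_false):
    """Bayes' rule: probability a story is true, given that one source reported it."""
    evidence = p_report_if_true * prior + p_report_if_false * (1 - prior)
    return p_report_if_true * prior / evidence

# Illustrative assumptions: a shocking story is a priori unlikely (1%),
# and a dubious outlet reports false stories almost as readily as true ones.
p = posterior(prior=0.01, p_report_if_true=0.9, p_report_if_false=0.3)
print(round(p, 3))  # a single unreliable report lifts 1% only to roughly 3%
```

Under these assumptions, even a confident-sounding report leaves the story probably false, which is the force of “extraordinary claims require extraordinary evidence”: only independent confirmation, or a source that almost never reports falsehoods, moves the posterior decisively.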

“given background knowledge”, this may be edible. Or not.

FIN

Images: cdn-images-1.medium.com/max/1600/1*LhJb3U3oxrg0XI3ZeUFFEg.png ; commons.wikimedia.org/wiki/File:Saga_Blue_cheese.jpg

Ben Bayer’s ‘Sniff Tests’ for identifying unreliable stories online (part 1 of 6)

Looks easy, doesn’t it?

Post by Lucian Minor

Bayer is a professor of philosophy, studying “questions about knowledge and free will.” As a brief review of his work online will show, he writes professionally for professionals, so the language is technical, precise, and intimidating. However, in 2016 he prepared a series of essays for Medium discussing principles that could be applied to the evaluation of online news sources. Written as a corrective to the plague of “fake news,” they are intended for a wide audience, but their purpose is somewhat constrained: “I don’t know if people who have the bad habit of believing unreliable stories will read this article. I am mainly speaking to those who already see that there is a problem and are looking for ways to combat it more effectively. It seems that no matter how many debunking links you post, the hydra-headed monster of fake and misleading news grows ever-larger.”

His methods are “sniff tests,” not unlike the evaluation a shopper might make in the fresh produce section of a grocery store: “You wouldn’t bite into a melon that smells funky, so why would you swallow a news story that does? News is funky when the source is suspicious, when the nature of the claim being made is disproportionate to the evidence offered, and when it’s presented in a dishonest manner.” These three criteria are expanded in the essays, and supplemented by two more tests to complete the set. The last of these is actually previewed in the Introduction:

Fake news sites exist mainly because they can make a fly-by-night profit by attracting eyeballs to ads. That means that they continue to exist because readers believe fake news and are willing to share it. But these readers should know better. A few moments of reflection is usually all that’s needed to check the temptation to believe a fake or misleading story. The fault, dear readers, is not in our social media, but in ourselves.

Following the Introduction, Bayer examines the first criterion, credibility of the source:

“What is the source of this story and what do I know about it?” This is a more detailed version of the earlier criterion of “funkiness” – when the source is suspicious. Bayer expects the reader to be willing to do some work, but some of his methods leave questions open. The ability to gauge the plausibility of a story assumes considerable experience with true and false information, so perhaps this is meant for people who are already aware of the problem of fake news. Recommending a Google search for evidence of the reliability of a source leaves open the question of how to know whether those results are themselves reliable. And parenthetically suggesting Snopes as a source for such a reliability check leaves the question of Snopes’ own credibility unanswered. Indeed, who checks the fact-checkers?

Bayer points out the basic paradox of online information: “The Internet allows misinformation to be spread at lightning speed. But it also allows you to check that information just as quickly.” There are two further problems here, however. One is that the work of searching and evaluating (by whatever standards) itself takes time; the other is that the obvious move to save time by a division of cognitive labor depends on trust in the source of authority.

To his credit, Bayer engages this problem by broadening his scope to include this question:

How do we know when we’ve found a reliable source? This is a question that is wider than journalism. It’s a philosophical question about why we should trust the testimony of others, about anything. Any time we rely on a friend to report the details of a conversation we didn’t witness, or a bystander to a murder to recount whodunit, or any number of experts (like doctors or scientists or technicians) to give their diagnosis of problems we need to solve, we are relying on testimony.

He summarizes his argument for trust in the testimony of others:

… we have reason to trust other people’s testimony when we know they have a track record of accurately reporting what they witness and not lying about it. We can trust others rationally, without having to accept what they say as a matter of blind faith. We can have evidence that people are good at reporting the facts. To acquire this evidence, we have to start with testimony whose veracity we can check directly ourselves. We build from there to acquire trust in the testimony of those whose reports we can’t check directly ourselves.

This method is not simple, however, and leads to a regress in the case of evaluating news sources:

Nobody, to my knowledge, keeps anything like an exhaustive account of the track records of different news sources. Even if someone did, there would be a further question of why we should trust them…. There’s a longer story here, but in a nutshell, I’ve pretty much come full circle to embrace what is usually derisively referred to as the “mainstream media.”

At this point, some readers may be inclined to abandon Bayer as attempting to justify the status quo, but he appeals to an argument from accountability:

It’s true that the journalistic establishment is predominantly left-liberal. It’s also true that its reporting is subject to market incentives. Both of these factors leave reporters and editors subject to various forms of bias. But both also imply significant assets. The same factor that accounts for left-liberal bias (journalistic training) also brings with it a certain set of skills that contributes to reliability. Trained reporters know how to ask questions and confirm allegations in ways that not just everyone with a blog knows how to do. The same factor that accounts for corporate bias (market incentives) also brings with it a factor crucial to assessing the quality of anything we consume that is produced by others: accountability.

The accountability factor is especially crucial: the free press in a free market lives or dies by the reliability of its reporting. If it acquires a reputation for inaccuracy or dishonesty, it loses readers and customers. Its competitors are more than willing to sell papers instead. It is no accident that the majority of “fake news” sites are fly-by-night…. Consuming reliable media sources should be an exercise in the same caveat emptor logic that we apply to other markets.

Having placed the obligation for critical evaluation where it rightly belongs, on the reader, Bayer rests his defense (and returns to it in a later essay):

Of course it’s true that left- and right-wing media bias exist. But it’s one thing to be a biased interpreter of the facts, and another to be an unreliable reporter of the facts. The more you learn to distinguish the facts from the interpretation of the facts, the more you can abstract away from a story’s bias to get a grip on the underlying facts it reports. It’s for this reason I’m confident that, in spite of its bias, the mainstream media is the best source of information we have today.

This distinction between bias and unreliability may be overly fine for some, but it makes sense if the individual is responsible for their own judgement. That said, a further discussion of the relationship between the two concepts would be welcome. In fact, the metaphor of “funkiness,” with its connotation of rot, evokes the notion of bias more than unreliability.

Bayer’s other “sniff tests” will be reviewed in subsequent posts.

Reporters with various forms of “fake news” from an 1894 illustration by Frederick Burr Opper

FIN

Images: https://openphoto.net/gallery/image/view/19769 ; https://en.wikipedia.org/wiki/Fake_news

Should Offensive Books be Removed from your Library’s Collection?

Fiction like this just the tip of a deplorable iceberg?

“To make your argument strong, you have to make your opponent’s argument stronger.” – Robin Sloan

Should offensive books be removed from your library’s collection?

Well, as the librarian, you are the authority in that realm! Can people rely on you to do what’s best?

My first response when someone says that a library has offensive materials that should be removed is to think of one of the worst things I can imagine:

“What, are there books in our libraries that inform pedophiles about how to groom children!?”

That said, I realize there are a lot of other things that offend folks, and many rightly so. After all, I get offended about lots of things too! I did, for example, write this about how ridiculous the ALA’s Library Bill of Rights is.

So here is what I think – after, that is, recalling what I’ve heard many a good librarian religiously insist on over the years: “a good library should have something to offend everyone.”

I think this:

If the thing that you want removed is as bad as you think it is, might it be a good idea to leave it in the collection? Here, think of what many said about the Holocaust after it happened: “Never again”. In other words, if we should all be able to see that the offensive material is clearly antithetical to being human, why not leave it in the collection in order to remind us of what happened, and what possibly–as unlikely as that might seem to us now–could conceivably happen again? Why not recall that even though we see clearly now, there have been many–even among those in academia and the most powerful positions of society–who at one time did not see clearly, and who became what we most despise? Why not have those reminders that “we were once blind but now we see!”?

On the other hand, if we are concerned about the tide turning again, about regress, are we saying that it is not so clear that this is a bad thing after all? That it is, for one reason or another, easy for human nature to revert to these things? In that case, perhaps it is a good idea to leave the book in the collection so that readers can become familiar with the strongest forms of the arguments made by those who hold the deplorable views. If these views are truly deplorable, the collection is unlikely to contain contemporary books defending them; but since such arguments were made, and widely accepted, in the past, older library books are likely to contain them in their strongest forms. Why? Because many philosophical arguments do not go away over thousands of years, and the authors of these old books, often persons holding academic positions, took care to lay them out. I realize that many of these older books may contain offensive material that is not strictly argumentative, but even so, they will be useful for revealing what their authors assumed.

On the other hand – jumping off from that last point about what we assume – perhaps we insist that thinking like this is, in the end, hopeless. Why? Because rational arguments are never anything more than rationalizations of innate and tribal impulses. Ideas which seem intelligent or wise are always post hoc. For the most part, we stick to our group and follow charismatic and confident persons, fads, and flows.

In which case, clear out all the books. Because any of the arguments they contain don’t really matter.

Not easy, is it? This is why, whenever you clear out this or that offensive book, you just tell yourself that you aren’t “banning the book”; it simply doesn’t fit the particular mission of your library, educational institution, etc.

FIN