How Facebook Can Better Fight Fake News: Make Money Off the People Who Promote It

in #introduceyourself • 7 years ago (edited)

Amber Case (@caseorganic), Mar 31, 2018. Amber Case is the former CEO of Geoloqi, a past keynote speaker at SXSWi and TED, and author of the O’Reilly book Calm Technology: Designing for Billions of Devices and the Internet of Things. She is currently a fellow at Harvard University’s Berkman Center for Internet and Society.

Facebook and other platforms are still struggling to combat the spread of misleading or deceptive “news” items promoted on social networks.

Recent revelations about Cambridge Analytica and Facebook’s slow corporate response have drawn attention away from this ongoing, equally serious problem: spend enough time on Facebook and you are still sure to see dubious sponsored headlines scrolling across your screen, especially on major news days, when influence networks from inside and outside the United States rally to amplify their reach. And Facebook’s earlier announced plan to combat this crisis through simple user surveys does not inspire confidence.

As is often the case, the underlying problem is more about economics than ideology. Sites like Facebook depend on advertising for their revenue, while media companies depend on ads on Facebook to drive eyes to their websites, which in turn earns them revenue. Within this dynamic, even reputable media outlets have an implicit incentive to prioritize flash over substance in order to drive clicks.

Less scrupulous publishers sometimes take the next step, creating pseudo-news stories rife with half-truths or outright lies, tailor-made to emotionally target audiences already inclined to believe them. Indeed, many of the bogus US political items generated during the 2016 election didn’t emanate from Russian agents, but from fly-by-night operations churning out spurious fodder appealing to biases across the political spectrum. Compounding this problem is the high cost to Facebook as a corporation: it’s likely not feasible to hire massive teams of fact-checkers to review every deceptive news item advertised on its platform.

I believe there is a better, proven, cost-effective solution Facebook could implement: leverage the aggregate insights of its own users to root out false or deceptive news, and then remove the profit motive by charging publishers who try to promote it.

The first piece involves user-driven content review, a process that’s been successfully implemented by numerous Internet services. The dot-com-era dating site Hot or Not, for instance, ran into a moderation problem when it debuted a dating service. Instead of hiring thousands of internal moderators, Hot or Not asked a series of select users whether an uploaded photo was inappropriate (pornography, spam, etc.). Users worked in pairs to vote on photos until a consensus was reached. Photos flagged by a strong majority of users were removed, and users who made the right decision were awarded points. Only photos that garnered a mixed reaction were reviewed by company employees for a final determination; these were typically just a tiny percentage of the total.

Facebook is in an even better position to implement a system like this, since it has a truly massive user base that the company knows about in granular detail. It could easily select a small subset of users (several hundred thousand) to conduct content reviews, chosen for their demographic and ideological diversity. Perhaps users could opt in to be moderators in exchange for rewards.

Applied to the problem of Facebook ads that promote deceptive news, this review process would work something like this (a brief code sketch follows the list):

  • A news site pays to advertise an article or video on Facebook
  • Facebook holds this payment in escrow
  • Facebook publishes the ad to a select number of Facebook users who’ve volunteered to rate news items as Reliable or Unreliable
  • If a supermajority of these Facebook reviewers (60% or more) rates the news item as Reliable, the ad is automatically published, and Facebook keeps the advertising money
  • If the news item is flagged as Unreliable by 60% or more of reviewers, it’s sent to Facebook’s internal review board
  • If the review board determines the news item to be Reliable, the ad for the article is published on Facebook
  • If the review board deems it Unreliable, the ad for the article is not published; Facebook returns most of the ad payment to the media site, keeping 10-20% to cover the cost of the review process
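For concreteness, here is a minimal sketch of how the escrow-and-voting logic above might be encoded. Every name in it (PromotedAd, crowd_verdict, settle_escalated) is hypothetical, the 15% fee is just one value inside the 10-20% range mentioned above, and none of it reflects a real Facebook API. One case the list leaves implicit is a mixed verdict, where neither side reaches 60%; following the Hot or Not precedent, the sketch escalates those to the internal board as well.

```python
from dataclasses import dataclass, field
from typing import List

SUPERMAJORITY = 0.60  # the 60% threshold proposed above
REVIEW_FEE = 0.15     # within the 10-20% kept to fund the review process

@dataclass
class PromotedAd:
    advertiser: str
    payment_in_escrow: float                         # held until a verdict
    votes: List[bool] = field(default_factory=list)  # True = Reliable

def crowd_verdict(ad: PromotedAd) -> str:
    """Apply the supermajority rule to the volunteer reviewers' votes."""
    reliable_share = sum(ad.votes) / len(ad.votes)
    if reliable_share >= SUPERMAJORITY:
        return "publish"  # auto-published; Facebook keeps the payment
    # A 60%+ Unreliable vote, and any mixed verdict, go to the internal
    # board (the proposal leaves the mixed case implicit; the Hot or Not
    # precedent suggests employees make the final call).
    return "escalate"

def settle_escalated(ad: PromotedAd, board_says_reliable: bool) -> float:
    """Settle escrow after internal review; returns the advertiser's refund."""
    if board_says_reliable:
        return 0.0                                  # the ad runs, no refund
    return ad.payment_in_escrow * (1 - REVIEW_FEE)  # refund minus the fee

# Example: 70 of 100 reviewers rate the item Reliable, so it auto-publishes.
ad = PromotedAd("example-news.com", 500.0, [True] * 70 + [False] * 30)
assert crowd_verdict(ad) == "publish"
```

The point of the sketch is that the decision rule itself is trivial to encode; the hard parts are recruiting reviewers, weighting their votes, and resisting manipulation.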

(Photo by Alberto Pezzali/NurPhoto via Getty Images)

I’m confident a diverse array of users would consistently identify deceptive news items, saving Facebook countless hours in labor costs. And in the system I am describing, the company immunizes itself from accusations of political bias. “Sorry, Alex Jones,” Mark Zuckerberg can honestly say. “We didn’t reject your ad for promoting fake news; our users did.” Perhaps more important, not only would the social network save on labor costs, it would actually make money for removing fake news.

This strategy could also be adapted by other social media platforms, especially Twitter and YouTube. To make real headway against this epidemic, the leading Internet advertisers, chief among them Google, would also need to implement similar review processes. This filter system of consensus layers should also be applied to suspect content that’s voluntarily shared by individuals and groups, and to the bot networks that amplify it.

To be sure, this would only put us somewhat ahead in the escalating arms race against forces still striving to erode our confidence in democratic institutions. Seemingly every week, a new headline reveals the challenge to be greater than we ever imagined. So my purpose in writing this is to confront the excuse Silicon Valley usually offers for not taking action: “But this won’t scale.” Because in this case, scale is precisely the power that social networks have to defend us.
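As for assembling that diverse reviewer pool, one plausible mechanism is stratified sampling over opt-in moderators, so that no demographic or ideological segment dominates a verdict. Below is a minimal sketch under that assumption; the user records and their segment labels are hypothetical, and real selection would rely on far richer signals.

```python
import random
from collections import defaultdict
from typing import Dict, List

def select_reviewers(users: List[dict], per_segment: int,
                     seed: int = 42) -> List[dict]:
    """Draw an equal number of volunteers from each demographic or
    ideological segment. Each user is a dict with hypothetical
    'id' and 'segment' keys."""
    rng = random.Random(seed)
    by_segment: Dict[str, List[dict]] = defaultdict(list)
    for user in users:
        by_segment[user["segment"]].append(user)
    panel: List[dict] = []
    for members in by_segment.values():
        panel.extend(rng.sample(members, min(per_segment, len(members))))
    return panel

# Example: equal draws from three hypothetical segments of volunteers.
volunteers = (
    [{"id": i, "segment": "A"} for i in range(1000)]
    + [{"id": i, "segment": "B"} for i in range(1000, 1800)]
    + [{"id": i, "segment": "C"} for i in range(1800, 3000)]
)
panel = select_reviewers(volunteers, per_segment=100)
assert len(panel) == 300
```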



Hello @shariful1993!

I noticed you have posted many times since you began your journey on Steemit. That is great! We love active participants.

I do want to point out that the Introduceyourself tag is meant to be used only once, to introduce yourself to the Steemit community. You have now posted 7 times using the introduceyourself tag. Please see this link for more information: Tag Spam?

This is the 3rd time we have discussed this issue. The bot will begin to automatically flag any additional posts you make in the introduceyourself tag.

Welcome to Steemit, @shariful1993. Join the @minnowsupport project for more help. Check out the @helpie and @qurator projects.
Send SBD/STEEM to @treeplanter to plant trees and get an upvote in exchange for your donation (min. 0.01 SBD)
Upvote this comment to keep helping more new Steemians
Send SBD/STEEM to @tuanis in exchange for an upvote and to support this project; follow for random votes.
