
Fighting fake news: earn money from the people who promote it – TechCrunch



Facebook and other platforms are still battling the spread of misleading and deceptive "news" across their networks.

Recent revelations around Cambridge Analytica and Facebook's slow corporate response have distracted attention from this ongoing and equally serious problem: spend enough time on Facebook and you're still sure to see dubious sponsored headlines scroll across your screen, especially during major news events, as influence networks from inside and outside the United States mobilize to increase their reach. And Facebook's previously announced plan to combat this crisis through simple user surveys inspires little confidence.

As is often the case, the underlying problem is economic rather than ideological. Sites like Facebook depend on advertising for their earnings, while media companies rely on Facebook ads to drive eyeballs to their websites, which in turn generates revenue. In this dynamic, even in the reputable media world there is an implicit incentive to prioritize flash over substance to get clicks.

Less scrupulous publishers sometimes take this a step further, creating pseudo-news full of half-truths or outright lies, tailor-made to emotionally appeal to an audience already inclined to believe them. In fact, many of the fake political stories that emerged during the 2016 US elections did not stem from Russian agents, but from fly-by-night operations with biases across the political spectrum. Add to this Facebook's high costs as a company: it is unlikely to be feasible to hire massive teams of fact checkers to review every dubious story promoted on its platform.

I think there is a better, proven, cost-effective solution Facebook could implement: use the insights gathered from its own users to root out false or deceptive stories, then remove the profit motive by charging the publishers who try to promote them.

The first piece involves user-driven content validation, a process that has been successfully implemented by numerous Internet services. For example, the dot-com era site Hot or Not encountered a moderation problem when it introduced a dating service. Instead of hiring thousands of internal moderators, Hot or Not asked a selected group of users to judge whether an uploaded photo was inappropriate (pornography, spam, etc.).

Working in pairs, users voted on photos until a consensus was reached. Photos flagged by a large majority of users were removed, and users who voted with the eventual consensus earned points. Only photos that drew a mixed reaction were reviewed by company staff for a final decision – typically just a tiny percentage of the total.
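
As a rough illustration of how little staff intervention this kind of consensus voting requires, here is a minimal sketch in Python. The 80% threshold, the names, and the data shapes are my own assumptions for illustration, not Hot or Not's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class ModerationResult:
    decision: str          # "removed", "kept", or "staff_review"
    flag_fraction: float   # share of reviewers who flagged the photo


def moderate_photo(votes: list[bool], supermajority: float = 0.8) -> ModerationResult:
    """Decide a photo's fate from reviewer votes (True = flagged as inappropriate)."""
    flagged = sum(votes) / len(votes)
    if flagged >= supermajority:
        return ModerationResult("removed", flagged)      # clear consensus: remove
    if flagged <= 1.0 - supermajority:
        return ModerationResult("kept", flagged)         # clear consensus: keep
    return ModerationResult("staff_review", flagged)     # mixed reaction: escalate


print(moderate_photo([True] * 9 + [False]))    # -> removed (90% flagged)
print(moderate_photo([False] * 9 + [True]))    # -> kept (10% flagged)
print(moderate_photo([True, False] * 5))       # -> staff_review (50% flagged)
```

Only the last case ever reaches paid staff, which is the point: the crowd absorbs nearly all of the moderation load.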

Facebook is in an even better position to implement such a system: it has a truly massive user base that the company knows in detail. It could easily select a small subset of users (hundreds of thousands) to perform content reviews, chosen for their demographic and ideological diversity. Users might even opt in to earn rewards as moderators.

Applied to the problem of Facebook ads spreading fraudulent stories, this review process would work something like this (a rough code sketch follows the list):

  • A news site pays to promote an article or video on Facebook

  • Facebook holds this payment in escrow

  • Facebook shows the ad to a select group of Facebook users who have volunteered to rate stories as reliable or unreliable

  • If a supermajority of these reviewers (60% or more) rates the story as reliable, the ad is automatically published and Facebook collects the advertising payment

  • If the story is marked as unreliable by at least 60% of the reviewers, it is escalated to Facebook's internal review board

  • If the review board deems the story reliable, the ad for the article is published on Facebook

  • If the review board deems it unreliable, the ad is not published, and Facebook returns the majority of the ad payment to the media site, keeping 10–20% as a fee for the social network's review process
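
For concreteness, here is a minimal sketch of that flow in Python. The names, the 60% threshold, and the 15% fee (within the article's 10–20% range) are my assumptions, and the mixed-verdict case, which the list above leaves open, is routed to the review board here as well.

```python
RELIABLE_THRESHOLD = 0.60   # supermajority needed for an automatic verdict
REVIEW_FEE = 0.15           # the article suggests 10-20%; 15% assumed here


def process_promoted_story(payment: float, reviewer_votes: list[bool],
                           board_says_reliable: bool = False) -> dict:
    """Route an escrowed ad payment through volunteer review, then the board.

    reviewer_votes: True means a volunteer rated the story "reliable".
    board_says_reliable: the internal board's ruling, consulted only when
    the volunteers do not auto-approve the ad.
    """
    reliable_share = sum(reviewer_votes) / len(reviewer_votes)

    # 60%+ of volunteers say "reliable": publish and release the escrow.
    if reliable_share >= RELIABLE_THRESHOLD:
        return {"published": True, "facebook_keeps": payment, "refund": 0.0}

    # Otherwise escalate to the internal review board. (The article specifies
    # this for a 60%+ "unreliable" vote; mixed verdicts are routed there too.)
    if board_says_reliable:
        return {"published": True, "facebook_keeps": payment, "refund": 0.0}

    # Rejected: refund most of the escrowed payment, keep a review fee.
    fee = payment * REVIEW_FEE
    return {"published": False, "facebook_keeps": fee, "refund": payment - fee}


votes = [True] * 7 + [False] * 3               # 70% rate the story reliable
print(process_promoted_story(1000.0, votes))   # published; Facebook keeps $1000

votes = [True] * 2 + [False] * 8               # 80% rate it unreliable
print(process_promoted_story(1000.0, votes))   # rejected; $850 refund, $150 fee
```

Note how the fee in the final branch is what removes the profit motive: a publisher pushing a story its audience rejects pays for the privilege of being turned down.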

(Photo by Alberto Pezzali/NurPhoto via Getty Images)

I'm confident that crowds of users would reliably identify misleading news, saving Facebook countless hours in labor costs. And in the system I describe, the company becomes immune to accusations of political bias. "Sorry, Alex Jones," Mark Zuckerberg could honestly say, "we did not turn down your ad for promoting fake news – our users did." Perhaps more key, the social network would not only save on labor costs, it would actually make money by removing fake news.

This strategy could also be adapted by other social media platforms, especially Twitter and YouTube. To truly stem this epidemic, the leading Internet ad publishers, most notably Google, would need to implement similar review processes. This consensus-driven filtering system should also be applied to suspicious content shared voluntarily by individuals and groups, and to the bot networks that amplify it.

To be sure, the proposal outlined here would only take us a step further in the escalating arms race against those trying to undermine our confidence in democratic institutions. Seemingly every week, a new headline reveals the challenge to be bigger than anything we had imagined. My intention in writing this, therefore, is to counter the excuse Silicon Valley usually offers for not acting: "But that won't scale." In this case, scale is precisely the power that social networks can best deploy in our defense.
