
Facebook is shrinking fake news stories because nothing else has worked



Photo: Getty

Once again, Facebook is introducing a new plan to fight the scourge of fake news stories that populate the platform, this time by shrinking the links to false claims and hoaxes. It probably won't work, because people are just the worst.

Facebook unveiled its latest plan on Friday at its Fighting Abuse @Scale event in San Francisco. According to TechCrunch, the company will try to make fake news less alluring by giving it smaller billing in the News Feed. It will also surface a list of fact-checking articles that debunk the spurious reports.

The system will supposedly work like this: when a link is shared on Facebook, machine-learning algorithms will scan the article for any indication of false information. If the system suspects a story may be fake, it will send it to third-party fact-checkers to verify the item's validity.

If the fact-checkers find that a story is fake, they flag it for Facebook. The social network then shrinks the link preview in the News Feed so it's not as eye-catching as a standard news story. A story from a trusted publisher will appear 10 times bigger, and according to TechCrunch, its headline will get its own dedicated space.
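How that pipeline might look in code is anyone's guess – Facebook hasn't published its classifier, its thresholds, or how the fact-checker hand-off actually works. Purely as an illustration, here is a minimal Python sketch of the flow described above; every name and heuristic is a hypothetical stand-in, and the 10x size ratio is taken from TechCrunch's report.

```python
# Hypothetical sketch of the described moderation pipeline. Facebook's real
# model, review process, and display logic are not public; everything below
# is an illustrative assumption.

from dataclasses import dataclass


@dataclass
class LinkPreview:
    url: str
    headline: str
    scale: float = 1.0  # relative display size in the News Feed


def looks_suspicious(url: str, headline: str) -> bool:
    """Stand-in for the machine-learning scan that looks for any
    indication of false information in a shared article."""
    # Toy heuristic only; the real classifier is unknown.
    return "rebel-patriot-news" in url


def fact_checkers_confirm_fake(url: str) -> bool:
    """Stand-in for the third-party fact-checker review step.
    In reality this is a human review, not a function call."""
    return True


def render(preview: LinkPreview) -> LinkPreview:
    """Shrink the preview of a debunked story; leave trusted links at
    full size (10x the shrunken size, per TechCrunch)."""
    if looks_suspicious(preview.url, preview.headline) and \
            fact_checkers_confirm_fake(preview.url):
        preview.scale = 0.1
    return preview


print(render(LinkPreview("https://rebel-patriot-news.example/shock", "SHOCKING!")))
```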

The new measures at least begin to chip away at the uniformity Facebook imposes on everything shared on its platform – a problem that, while creating a consistent visual appearance, also lends legitimacy to fake news stories by making them look almost identical to real ones in the News Feed.

Matt Klinman, the creator of the Pitch app, offered one of the more convincing critiques of Facebook in an interview with Splitsider earlier this year, describing the platform as the "Great De-Contextualization." While his comments addressed how Facebook has damaged the comedy business in particular, his criticism applies to the news as well:

An article from something like, I don't know, Rebel Patriot News, written by a Macedonian teenager or something, looks just like a New York Times article. … There's a reason Mad magazine looks different than Vanity Fair. They need to convey a different aesthetic and tone for their content to really pop.

The plan is also essentially the exact opposite of Facebook's initial approach to countering fake news stories. The company first tried slapping a large red warning label on debunked items to warn people off reading them. This made people who believed the stories indignant and led to the articles being shared even more in spite of Facebook's system.

While this latest effort is a much better approach than anything else Facebook has tried – mostly tweaks to its News Feed algorithm, which nobody fully understands – it probably won't do much.

First, it does almost nothing to address memes and videos shared on the platform. Articles can be fact-checked, and Facebook can build an automated system to surface factual information to counter a false news story. So far, it's not possible to do the same for the endless stream of memes that anyone with a conservative family has undoubtedly seen spread across their News Feed.

Second, people just seem to love interacting with fake news. A study published earlier this year in Science found that lies spread online with a facility real news doesn't have. Researchers reported that hoaxes and rumors reach more people and spread much faster than stories from reliable sources, largely because human nature makes us vulnerable to feelings of fear, disgust, and surprise – exactly the emotions fake news tends to exploit.

The study also found that things like reactions and comments – both metrics Facebook uses to determine whether people have "meaningful interactions" with content – only serve as incentives to engage with and share stories. So maybe Facebook's next step should be hiding the number of reactions or comments a false story has received. But even that won't solve the real problem: people.

[TechCrunch]

