
This is Facebook's self-defense plan for 2018 midterm elections



Facebook has a four-part plan to protect its platform from malicious attacks during the 2018 US midterm elections, company executives said today. On a conference call with reporters, representatives from Facebook's security, product, and advertising teams laid out their strategy for preventing the kinds of problems the platform faced during the 2016 campaign. While most bad actors are motivated by profit, executives said, government-sponsored attackers continue their efforts to manipulate public opinion through posts on Facebook.

Here's Facebook's plan to bolster its security over the next few months.

1. Combating foreign interference. Executives pointed to the FBI's new task force for monitoring social media as an important step in identifying election threats in real time. Alex Stamos, Facebook's chief information security officer, said the company also works with outside experts, whom he did not name, to identify foreign threats. Every election is different, he said, and the company is developing different approaches to tackling bad actors depending on the risks in each country.

"If you tear the entire digital misinformation problem apart, you'll find several types of bad content and many bad actors with different motivations," said Stamos. "It's important to find the right approach to these different challenges, and that not only requires a careful analysis of what happened, but we must also have the latest insights to understand completely new types of misinformation."

2. Removing fake accounts. Well-resourced attackers create thousands of fake Facebook accounts and use them to spread divisive stories in the countries they're targeting. Last year, Oxford University researchers found that a group in Poland had created and used 40,000 fake accounts across various social media services to influence elections.

Samidh Chakrabarti, a product manager dedicated to election security, said Facebook is deleting "millions" of fake accounts every day. "Thanks to advances in machine learning that allow us to detect suspicious behaviors, we were able to do so without evaluating the content itself," Chakrabarti said. He said the company has recently built a new tool that looks for pages of foreign origin that are "spreading inauthentic civic content." The tool alerted Facebook to a group of Macedonians trying to influence the recent US Senate election in Alabama, and the company then removed the group from the platform.

3. Transparency for all ads on the platform. In previous elections, advertisers were able to create so-called "dark posts" – ads that had no permalink and were visible only in the news feeds of targeted Facebook users. Facebook has since banned the practice, and this summer it will launch a tool worldwide that lets users see every ad running on the platform. Advertisers will also have to disclose which candidate or campaign they represent.

"In addition to the actual creative, it also shows how much money was spent on each ad, how many impressions it received, and what demographic information contained on the ad reached the audience," said Rob Learn, director of product management for ads , "And we'll keep these ads running for four years after running, so researchers, journalists, watch dog organizations, or people who are just curious can see all those ads in one place."

4. Reducing the spread of false news. Facebook relies heavily on third-party fact-checkers to assess the accuracy of viral stories. Until recently, however, fact-checkers could only rate links to articles, which gave bad actors an incentive to package their fake messages as images and videos instead. Fact-checkers complained to Facebook about this loophole, and the company said that starting this week they will be able to review photos and videos as well as article links.

Tessa Lyons, a product manager for the news feed, said articles that fact-checkers label as false see their distribution drop by an average of 80 percent. She said Facebook has also begun looking for fake news at the domain level, so it can take action against sites that repeatedly push hoaxes. "We reduce their distribution, take away their ability to advertise and monetize, and prevent them from reaching, growing, or profiting from their audiences."

Facebook's plan seems to be comprehensive – and expensive. Executives said the total number of people working on security and integrity issues will double to 20,000 this year.

The question is whether the company can detect new threats at an early stage. Facebook fought spammers and other bad actors before 2016; the problem was that they changed their tactics in ways the company did not expect. Facebook is aware of this. The danger, Stamos said, is that the company develops "tunnel vision" and addresses only the problems it sees today. "We do not just want to fight the last war," he said.

