How to fix Facebook, according to Facebook employees

Facebook denies the accusation. “At the heart of these stories is a premise that is false,” spokesperson Kevin McAlister said in an email. “Yes, we are a business and we make a profit, but the idea that we do so at the expense of people’s safety or well-being misunderstands where our own commercial interests lie.”

On the other hand, the company has recently conceded the substance of that exact criticism from the 2019 documents. “In the past, we didn’t address safety and security challenges early enough in the product development process,” said a September 2021 blog post. “Instead, we made improvements reactively in response to a specific abuse. But we have fundamentally changed that approach. Today, we embed teams focused specifically on safety and security issues directly into product development teams, allowing us to address these issues during our product development process, not after it.” McAlister pointed to Live Audio Rooms, introduced this year, as an example of a product rolled out under this process.

If that is true, it is a good thing. Similar claims made by Facebook over the years, however, have not always withstood scrutiny. If the company is serious about its new approach, it will need to internalize a few more lessons.

Your AI can’t fix everything

On Facebook and Instagram, the value of a post, group, or page is determined primarily by how likely you are to view, like, comment on, or share it. The higher that likelihood, the more the platform recommends that content to you and features it in your feed.
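As a rough illustration of that mechanism, here is a minimal sketch of engagement-weighted ranking in Python. The weights, probabilities, and function name are invented for this example; Facebook’s actual ranking system is far more elaborate and its internals are not public.

```python
# Toy illustration of engagement-based ranking (not Facebook's actual code).
# All weights and predicted probabilities below are hypothetical placeholders.

def engagement_score(p_view: float, p_like: float, p_comment: float, p_share: float) -> float:
    """Combine predicted interaction probabilities into a single ranking score."""
    # Hypothetical weights: deeper interactions count for more than passive views.
    weights = {"view": 0.5, "like": 1.0, "comment": 2.0, "share": 3.0}
    return (weights["view"] * p_view
            + weights["like"] * p_like
            + weights["comment"] * p_comment
            + weights["share"] * p_share)

# Rank candidate posts for a user's feed by descending score.
candidates = [
    {"id": "post_a", "p_view": 0.9, "p_like": 0.10, "p_comment": 0.01, "p_share": 0.005},
    {"id": "post_b", "p_view": 0.6, "p_like": 0.25, "p_comment": 0.10, "p_share": 0.08},
]
ranked = sorted(
    candidates,
    key=lambda c: engagement_score(c["p_view"], c["p_like"], c["p_comment"], c["p_share"]),
    reverse=True,
)
print([c["id"] for c in ranked])
```

In a sketch like this, whatever content maximizes predicted interaction rises to the top of the feed, regardless of its quality or accuracy, which is the dynamic the next paragraph describes.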

But what catches people’s attention is disproportionately what enrages or misleads them. This helps explain why low-quality, outrage-peddling, hyper-partisan publishers do so well on the platform. One of the internal documents, from September 2020, notes that “low integrity Pages” get most of their followers through News Feed recommendations. Another recounts a 2019 experiment in which Facebook researchers created a dummy account named Carol and had it follow Donald Trump and several conservative publishers. Within days, the platform was encouraging Carol to join QAnon groups.

Facebook is aware of these dynamics. Zuckerberg himself explained in 2018 that content gets more engagement the closer it comes to violating the platform’s rules. But rather than rethinking the wisdom of optimizing for engagement, Facebook’s response has mostly been to deploy a combination of human reviewers and machine learning to find the bad stuff and remove or demote it. Its AI tools are widely considered world-class; in a February blog post, chief technology officer Mike Schroepfer said that in the last three months of 2020, “97% of hate speech taken down from Facebook was spotted by our automated systems before any human flagged it.”

Internal documents, however, paint a grimmer picture. An April 2020 presentation noted that Facebook’s removals reduced the overall prevalence of graphic violence by about 19 percent, nudity and pornography by about 17 percent, and hate speech by about 1 percent. A file from March 2021, previously reported by the Wall Street Journal, is even more pessimistic. In it, company researchers estimate that “we may action as little as 3-5% of hate and ~0.6% of [violence and incitement] on Facebook, despite being the best in the world at it.”
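The apparent contradiction between Schroepfer’s 97 percent figure and these internal estimates comes down to what each metric measures: the proactive rate counts only the content that was actually removed, while the internal estimates count all violating content, removed or not. A short sketch with made-up numbers, assuming hypothetical counts rather than any real Facebook data, shows how the two can coexist.

```python
# Illustrative, invented numbers to show how the two metrics differ.
# "Proactive rate" = share of *removed* content that automation caught first.
# "Action rate"    = share of *all violating* content that gets removed at all.

total_violating_posts = 100_000      # hypothetical amount of hate speech on the platform
posts_removed = 4_000                # hypothetical number actually taken down
removed_caught_by_ai_first = 3_880   # hypothetical subset flagged by automation before any user report

proactive_rate = removed_caught_by_ai_first / posts_removed
action_rate = posts_removed / total_violating_posts

print(f"Proactive detection rate: {proactive_rate:.0%}")          # -> 97%
print(f"Share of violating content actioned: {action_rate:.0%}")  # -> 4%
```

With numbers like these, automation can catch nearly everything the company removes while the company still removes only a small fraction of what violates its rules.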

