I have seen many people here post about “embrace, extend, extinguish” and that is indeed a good piece of analysis, but one thing I don’t see people talking about is how Meta promotes content in their feeds.
Hate speech and misinformation have always had a home on the internet. Hate groups, in particular, were early adopters of online communication. So what has changed in the past 5-15 years? Why are our feeds inundated with divisive, hateful content? Why are hate groups so much more prolific, so emboldened, able to reach so many more people? Why does a user randomly clicking around their feed inevitably end up shuttled into a hate group on Facebook? Anybody who was here for the early “wild west” internet will tell you it’s way worse than it used to be.
Answer: the algorithm, a.k.a. Meta’s internal policies. Your local racists didn’t suddenly pour billions into a new PR firm or get better at organizing; what happened instead was that their content was selectively boosted by social media companies like Meta. They were given a massive megaphone, for free, by Meta, because people engaged with their content.
An internal Meta study once found, for example, that users were 4x more likely to interact with a post that had angry reacts on it. So what did they do? They made sure more posts that got angry reacts ended up in people’s feeds. There are very credible allegations that this kind of conduct has straight up contributed to genocides, and you can follow the destabilizing trend every time Meta enters a new market.
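To make the mechanism concrete, here is a minimal sketch of engagement-weighted feed ranking. The weights, field names, and `Post` class are all hypothetical illustrations, not Meta’s actual code; the only point is that once anger-driven reactions count for more than likes, rage bait mechanically outranks calmer content.

```python
from dataclasses import dataclass, field

# Hypothetical reaction weights. The "angry" multiplier is an assumption
# chosen to mirror the 4x figure mentioned above, not a known internal value.
REACTION_WEIGHTS = {"like": 1, "love": 1, "angry": 4}

@dataclass
class Post:
    text: str
    reactions: dict = field(default_factory=dict)  # reaction kind -> count

def engagement_score(post: Post) -> int:
    # Sum weighted reactions; unrecognized reaction kinds count as 1.
    return sum(REACTION_WEIGHTS.get(kind, 1) * count
               for kind, count in post.reactions.items())

def rank_feed(posts: list[Post]) -> list[Post]:
    # Engagement-based ranking: highest weighted score first.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    Post("calm local news", {"like": 10}),
    Post("rage bait", {"angry": 3}),
]
# Ten likes score 10; three angry reacts score 12, so the rage bait wins.
feed = rank_feed(posts)
```

Nothing in this sketch checks what the post says, only how hard people react to it, which is the whole problem.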
Meta has been called out time and time again for this behavior by whistleblowers, by media, by the government. The spread of misinformation and hate on their platform is rampant and they are financially incentivized in every way to continue it. They will never stop, it is their entire business model.
Maybe Meta will respect the protocol. Maybe they will follow the rules. Maybe they will put millions of dollars’ worth of development time into fedi software. Even if all that magically somehow happens, the real danger is that on their own site they will continue this kind of algorithmic prioritization of posts, poisoning the feeds of their own users and, by fedi’s nature, the feeds of every user on every server federated with them.
Fedi has one chance to stop this, and I hope we take it. There is one way to kill social media companies: stop engaging with them, stop viewing and interacting with their content, and choose a different social media framework with transparent algorithms not based on pure engagement metrics. They are funded by advertisers, and advertisers pay based on eyeballs and engagement.
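For contrast, the “transparent algorithm” alternative mentioned above can be this simple. This is an illustrative sketch, not any particular server’s code, but reverse-chronological ordering is how Mastodon’s home timeline works by default: no engagement signal enters the ranking at all.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    posted_at: float  # Unix timestamp

def chronological_feed(posts: list[Post]) -> list[Post]:
    # Transparent ranking: newest first. No reaction counts, no weights,
    # nothing for an advertiser-funded platform to quietly tune.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

timeline = chronological_feed([
    Post("older post", posted_at=1000.0),
    Post("newer post", posted_at=2000.0),
])
```

Anyone can read, audit, and predict this ranking, which is exactly what an engagement-driven feed is designed to prevent.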
Facebook is an island. They see that fedi is building something people actually want to be a part of, instead of something they’re forced onto because it’s what everybody else uses. They want to absorb fedi and use it to continue their business model of spreading divisive content. I say NO.
They cannot force anyone to view content that they do not want. If you don’t spend much time on the federated timeline, you’re not likely to run into any of it. Also, it would do Threads a huge disservice. They took a lot of flak because they stated that they didn’t want to promote news as it’s inflammatory. They clarified that news is allowed, but it’s not content they will amplify. Also, a lot of the culture has been hurt by what happened with Twitter/X, and people are making a concerted effort to promote more quality content and engagement.