From the Click-Gap to Revitalized Groups: What You Need to Know About Facebook’s Latest Changes
Last year, Pew reported that for the first time, social media outranked print newspapers as a top news source among Americans. At the same time, press coverage of Facebook has grown increasingly critical, and the platform faces a downturn in daily active users.
In the midst of this news, Facebook has unveiled a new look. Strikingly, the company has largely done away with its signature blue color, choosing instead to embrace a minimalist look for the new app and desktop versions of the platform.
This won’t be a purely aesthetic update, though. The redesign intends to move conversation away from the public square and toward more private spaces. Quoted in Time Magazine, Facebook CEO Mark Zuckerberg explains the shift: “In addition to the digital town square, we also need a digital equivalent of the living room.” The new Facebook will do this by highlighting Groups, both foregrounding them in the design and proactively suggesting new Groups for users to join based on their interests.
Facebook has announced other substantive shifts in how users are served content through the news feed.
A New Metric: Click-Gap
Another effort to combat rising skepticism about the platform’s role in spreading fake news and fostering political distrust takes aim at third-party websites. Facebook has announced a change in the algorithm that determines which links show up in people’s news feeds. The new metric that will drive this shift is called Click-Gap.
It works somewhat like Google’s search rankings: if websites posted to Facebook are also linked on other sites around the web, they’ll be prioritized over sites whose links appear almost exclusively on Facebook. The goal is to limit content from pages that game the platform’s algorithm to spread information found nowhere else online.
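To make the idea concrete, here is a minimal sketch of how a Click-Gap-style score might be computed. Facebook hasn’t published its actual formula, so the function name, the add-one smoothing, and the simple ratio below are our own illustrative assumptions, not the company’s implementation.

```python
# A minimal, hypothetical sketch of a Click-Gap-style score. Facebook has
# not published its formula; the smoothing and ratio here are illustrative
# assumptions, not the real algorithm.

def click_gap_score(facebook_inbound: int, web_inbound: int) -> float:
    """Return a score in [0, 1]; higher means a domain's popularity is
    concentrated on Facebook rather than on the open web."""
    # Add-one smoothing keeps brand-new domains from dividing by zero.
    fb = facebook_inbound + 1
    web = web_inbound + 1
    # A domain linked heavily on Facebook but rarely elsewhere scores
    # near 1 and would be demoted in the news feed.
    return fb / (fb + web)

# A page farm with 10,000 Facebook links but only 50 elsewhere scores
# high, while a widely cited site with the same Facebook footprint does not.
print(click_gap_score(10_000, 50))      # ~0.995 -> demote
print(click_gap_score(10_000, 40_000))  # ~0.200 -> leave alone
```

The intuition behind a ratio like this is that genuinely popular content accumulates links all over the web, so a large “gap” between Facebook links and everywhere-else links is a signal that a site’s reach depends on gaming the feed rather than on real-world interest.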
In addition to revamping the news feed, Facebook has also committed to increasing oversight of Groups with a new feature called Group Quality, which gives admins feedback from the company on why certain posts in their groups were identified as fake news, flagged, and removed.
Taking a Stand Against White Nationalism and Hate
In March, two weeks after a shooter used Facebook Live to broadcast video of his massacre in a Christchurch mosque, Facebook announced “a ban on praise, support and representation of white nationalism and white separatism on Facebook and Instagram.” In mid-April, they banned several far-right groups in Europe, including the British National Party, the English Defence League, and Britain First. And in early May, they banned several American extremists including Alex Jones, Laura Loomer, and Milo Yiannopoulos.
The ban has not yet eliminated white supremacy on the site. In fact, Facebook’s own software is actively creating new white supremacist content: according to the AP, researchers “identified over 30 auto-generated pages for white supremacist groups.”
A Company Seeking Change
Facebook has had quite the time in the spotlight recently. Worrying headlines from the past few weeks reveal that the company faces a $5 billion Federal Trade Commission fine after it came to light that it exposed data from millions of Americans to Cambridge Analytica employees and stored millions of user passwords unencrypted, putting users’ privacy at risk. Reuters reports that Facebook’s contractors look through private posts in order to train artificial intelligence to better understand user behavior. And the AP has found that Facebook’s automatically generated content, built from user activity, is regurgitating politically extremist views from people in the U.S. as well as abroad.
Facebook has also hired as its general counsel a lawyer who helped craft the Patriot Act, a move that sends a troubling message to users concerned about potential government surveillance.
Chris Hughes, a co-founder of Facebook who has not worked at the company in years, made a forceful statement on the New York Times opinion page: “Mark’s power is unprecedented and un-American. It is time to break up Facebook.”
All that is to say nothing of the role Facebook’s paid advertisements played in Russia’s attempt to interfere with the 2016 election. Trying to allay growing fears of the platform’s deleterious impact on our political system (in what may amount to little more than a PR stunt), Facebook has released even more data to researchers trying to quantify social media’s impact on democracy around the world. There’s also a bipartisan policy proposal that would address part of the problem: the Honest Ads Act would require online political ads to disclose who funded them.
But as social media platforms have become more dominant as news sources, they’ve also become primed to spread misinformation and exacerbate social divisions within this country. Heading into a contentious 2020 primary season, it’s crucial for us as communicators to stay vigilant about misinformation and the spread of potentially harmful political ads on these platforms.
In taking aim at misinformation, the company has instituted fact-checking programs across both Facebook and Instagram, dampening the spread of content deemed untrue—though it’s difficult to determine the efficacy of these efforts.
Our Take
Facebook clearly has a lot of balls in the air—facing scandals about their mismanagement of users’ information, their role in the erosion of our democracy, and their part in spreading hate and white nationalism around the world.
The shifts they’ve announced in response, much like the $5 billion fine they’re facing from the FTC, seem inconsequential relative to the massive size and scope of the company.
But the past few years have shown that the company is beginning to face public and political scrutiny for its actions. As advocates, it is our job to keep up the pressure and ensure Facebook stays true to its mission “to give people the power to build community and bring the world closer together” without destroying our social institutions in the process.