Facebook is experimenting with letting users help write speech rules

Back in June, I wrote that to build trust, platforms should try a little more democracy. Instead of relying solely on their own employees, advisory boards, and oversight boards, tech companies should involve real users in the process. Citing the work of Aviv Ovadya, a technologist who recently published an article on what he calls “platform democracy,” I suggested that social networks could earn that trust by inviting average people into the policy development process.

I didn’t know it at the time, but Meta had recently completed a series of experiments that tried to do just that. From February to April, the company brought together three groups in five different countries to answer the question: what should Meta do about problematic climate information on Facebook?

The question came as watchdogs increasingly scrutinize the company’s approach to moderating misleading environmental information. Last year, the Guardian reported on an analysis by the environmental group Stop Funding Heat that found 45,000 posts downplaying or denying the climate crisis. And in February, after Meta promised to label climate misinformation, a report by the watchdog group Center for Countering Digital Hate found that “the platform only labeled about half of the posts promoting articles from the world’s leading climate denial publishers,” according to NPR.


In this context, Meta hired a policy consulting firm named Behavioral Insights Team, or BIT, to bring Facebook users into the policy-making process. Specifically, users were asked what Meta should do about “problem information,” which BIT defined as “content that is not necessarily false, but expresses opinions that may contain misleading, low-quality, or incomplete information likely to lead to false conclusions.”

Meta wouldn’t give me any examples of what it sees as problematic climate speech. But I can imagine panels being asked whether Facebook should intervene if, for example, a user with a large following asks this winter something like: “If climate change is real, why is it cold outside?”

Today, on all major platforms, average users have no say in how this issue is handled. Instead, it is left to corporate executives and their policy teams, who often consult with experts, human rights groups, and other stakeholders. But the process is opaque and inaccessible to platform users, and it has generally undermined trust in the platforms. It’s hard to trust a policy when you don’t know who made it or why. (Not to mention who enforces it, or how.)

For this experiment, Meta and BIT worked to recruit around 250 people who were broadly representative of Facebook’s user base. They brought them together virtually over two weekends, educated them on climate issues and platform policies, and gave them access to outside experts (on both climate and speech issues) and Facebook employees. At the end of the process, Facebook presented the group with a range of possible approaches to problematic climate information, and the group deliberated and voted on its preferred outcomes.

Facebook wouldn’t tell me what the groups decided, only that all three reached a similar consensus on what should be done. Their deliberations are now being taken under advisement by Facebook teams working on a policy update, the company told me.

In a blog post published today, BIT said participants expressed great satisfaction with the process and its results:

We found high levels of participant engagement and satisfaction with the deliberative process. Equally important, they convincingly demonstrated that participants could engage in meaningful and respectful deliberations around a complex topic. […]

Participants were impressed with how their groups respected the wide range of opinions and values shared within the group. As one participant commented: “I was going into this [assembly] knowing that not everyone will have the same opinions, feelings or thoughts as me… In the end, we are not going to shame each other for what we felt or what we thought.” They were also happy with how their groups came together to reach a decision. One participant commented that “[e]veryone was very courteous, and I was surprised at the amount of common ground seemingly reached.”

Meta was also impressed with the results and plans to conduct further experiments on platform democracy.

“We don’t think we should be making so many of these decisions on our own,” Brent Harris, vice president of corporate governance, told me in an interview. “You’ve heard us repeat that, and we mean it.”

Harris helped oversee the creation of the Oversight Board, a somewhat controversial but (I argued) useful tool for delegating authority over certain content moderation issues and pushing Meta to develop more open and consistent policies. Now Harris has turned to platform democracy, and he says he is encouraged by the early results.


“We think if you set this up the right way, people are in a good position to deliberate and make some of the tough decisions [about] the trade-offs, and inform how we’re doing it,” Harris said. “It was actually really striking how many people, when they came together, agreed on what they thought was the right approach.”

In a post-process survey, 80 percent of participants said Facebook users like them should have a say in policy-making. (I’d like to ask the remaining 20 percent some questions!)

As promising as the early results are, platform democracy is not a guaranteed feature of Facebook in the coming years. More executives and product teams need to buy into the idea; the process needs to be refined and made less expensive to run; and more experiments need to be conducted on using deliberative processes with specific groups or in specific geographic areas.

But in a world where, thanks to Texas and the Fifth Circuit Court of Appeals, platforms risk losing the right to moderate content at all, Meta and its peers have every reason to explore involving more people in the process. With trust in tech companies at or near an all-time low, it’s clear that relying solely on internal policy teams to craft platform rules isn’t working for them. Maybe it’s time to give people more voice in the process, before the Supreme Court rules that when it comes to regulating speech, platforms deserve no voice.
