What has X (formerly Twitter) added to its Community Notes user-led moderation process?
X has added another valuable element to its Community Notes user-led moderation process. When any video receives a Community Note, that note is now set to display on all re-shares and posts containing the same clip.
What happens when a ‘Community Notes’ contributor adds a note to a video in the app?
When a Community Notes contributor adds a note to a video in the app, they can specify that the note applies to the video clip itself, not just the specific post.
What does X (formerly Twitter) have to say about it?
According to X, Notes written on videos will automatically show on other posts containing matching videos—a highly scalable way of adding context to edited clips, AI-generated videos, and more.
Is it an effective way to reach more users?
It is an efficient and effective way to surface advisory notes to more users, with X's system able to match re-shared images and videos in the app and tag them with the corresponding contextual notes.
What is Birdwatch?
Community Notes had been in development under the name “Birdwatch” for years before Elon Musk took over the app, but it has become a much bigger focus under his leadership.
Musk hopes to use community-led moderation to combat more types of platform misuse without the X team having to impose its own rules on what is and isn't allowed, leaning further into his free-speech ethos.
What did the previous Twitter management say about it?
The previous management of X, formerly Twitter, believed that a transparent, community-driven approach could identify misleading information and elevate helpful context, contributing to a better-informed world.
This is largely how Reddit has operated for years, with volunteer moderators helping to weed out junk and up- and down-votes reflecting community sentiment, rather than Reddit management stepping in.
But there are limits to this as well.
What does the Poynter Institute say about it?
According to analysis by the Poynter Institute, the vast majority of Community Notes created are never actually seen by users in the app.
This is due to how the Community Notes review system is structured: a note requires consensus from users of opposing perspectives before it is displayed.
What does Poynter’s Alex Mahadevan say about it?
According to Mahadevan, Community Notes requires cross-ideological agreement on truth in an increasingly partisan environment, a consensus that is almost impossible to achieve.
X determines the political leaning of Notes contributors based on their past behavior in the app, which is not always the best proxy, and under this system a note needs agreement from both sides to be approved.
What does Poynter’s research say about it?
Poynter's research found that this works well for low-stakes content, like clarifying satire or flagging AI-generated images.
That is also a good use case for the new blanket video tagging, since these are things everyone generally agrees on.
But some of the most harmful misinformation falls along more divisive lines, including COVID-19 vaccine impacts, election interference, and gender debates, and is unlikely ever to reach that critical consensus.
Thus, Community Notes are least likely to be displayed precisely where they are most needed.
Yet, despite this, Musk seems confident that Community Notes is the way forward, enabling the X community to govern itself on moderation concerns.
That places a great deal of trust in a system whose known flaws are still being worked through. It is an exciting concept with real potential in various vital areas.
But Musk and Co. may be relying on Community Notes too heavily, as it is unlikely to catch every instance of misinformation and misuse.
However, it has proven particularly effective in one area: policing misleading claims in ads.
Musk has already admitted that this is not “super helpful” for X's revenue intake. Still, with the company's U.S. ad revenue down 60% year-over-year, the feature may yet prove valuable from a business perspective.
What is Elon Musk going to do, then?
Musk seems willing to take the good with the bad, his interest in this case being a more hands-off moderation approach that relies on cross-ideological consensus to police false claims.
There is a lot to like about the project, but X may be placing too much reliance, too early, on a still-in-development system.
Amid broad media reports of X allowing more harmful content to be shared in the app under Musk's leadership, this will remain a key area of focus for the platform and its ad partners moving forward.