The White House pressured platforms to do more.
Twitter announced Tuesday that it would be testing a new reporting tool for users to flag tweets that contain possible misinformation.
Users will now be able to report misinformation through the same flow used for harassment and other harmful content. When filing a report, the user will be asked whether the misleading tweet is political or health-related. The politics category covers more specific kinds of misinformation, such as election-related content, while the health category lets users flag COVID-19-specific misinformation.
The new feature will become available Tuesday for most users in Australia, the US, and South Korea. Twitter stated that it plans to test the feature for several months before rolling it out to other markets.
Not all reports will be reviewed
Twitter stated that not all reports would be reviewed while the platform continues testing the feature. The company will use data from the test to determine how it might expand the feature in the coming months. The test could also help identify misinformation tweets that are likely to go viral.
The Biden administration took a firmer stance against misinformation last month as new variants of COVID-19 continued to circulate. President Biden told reporters in July that social media platforms like Facebook were “killing people” with vaccine misinformation.
The statement came after a coordinated White House campaign to press platforms to remove coronavirus misinformation. The US Surgeon General’s office published a report outlining new ways platforms could counter health misinformation, calling for clear consequences for accounts that repeatedly violate a platform’s rules and for companies such as Facebook and Twitter to change their algorithms to “avoid amplifying” false information.
Sen. Amy Klobuchar (D-MN) also introduced a bill earlier this year that would strip Facebook and other social media platforms of their Section 230 liability shield if they amplified harmful health misinformation.