WASHINGTON (Reuters) - Twitter may notify users whether they were exposed to content generated by a suspected Russian propaganda service, a company executive told U.S. lawmakers on Wednesday.
The social media company is “working to identify and inform individually” its users who saw tweets during the 2016 U.S. presidential election produced by accounts tied to the Kremlin-linked Internet Research Agency, Carlos Monje, Twitter’s director of public policy, told the U.S. Senate Commerce, Science and Transportation Committee.
A Twitter spokeswoman did not immediately respond to a request for comment about plans to notify its users.
Facebook Inc in December created a portal where its users could learn whether they had liked or followed accounts created by the Internet Research Agency.
Both companies and Alphabet’s YouTube appeared before the Senate committee on Wednesday to answer lawmaker questions about their efforts to combat the use of their platforms by violent extremists, such as the Islamic State.
But the hearing often turned its focus to questions of Russian propaganda, a vexing issue for internet firms that spent most of the past year responding to criticism that they did too little to deter Russians from using their services to anonymously spread divisive messages among Americans in the run-up to the 2016 U.S. elections.
U.S. intelligence agencies concluded Russia sought to interfere in the election through a variety of cyber-enabled means to sow political discord and help President Donald Trump win. Russia has repeatedly denied the allegations.
The three social media companies faced a wide array of questions related to how they police different varieties of content on their services, including extremist recruitment, gun sales, automated spam accounts, intentionally fake news stories and Russian propaganda.
Monje said Twitter had improved its ability to detect and remove “maliciously automated” accounts, and now challenged up to 4 million such accounts per week, up from 2 million per week last year.
Facebook’s head of global policy, Monika Bickert, said the company was deploying a mix of technology and human review to “disrupt false news and help (users) connect with authentic news.”
Most attempts to spread disinformation on Facebook were financially motivated, Bickert said.
The companies repeatedly touted increasing success in using algorithms and artificial intelligence to catch content not suitable for their services.
Juniper Downs, YouTube’s director of public policy, said algorithms quickly catch and remove 98 percent of videos flagged for extremism. But the company still deploys some 10,000 human reviewers to monitor videos, Downs said.
Reporting by Dustin Volz; Editing by Nick Zieminski