Facebook this week removed more than 30 pages and accounts from its main platform and from Instagram for what the company termed “coordinated inauthentic behavior”. Some of the accounts and pages appeared to be promoting political protests and related activity, and while Facebook didn’t directly attribute the campaign to Russian groups, company officials said some of the activity closely resembled what the Russian Internet Research Agency did on Facebook in 2016.
“It’s clear that whoever set up these accounts went to much greater lengths to obscure their true identities than the Russian-based Internet Research Agency (IRA) has in the past. We believe this could be partly due to changes we’ve made over the last year to make this kind of abuse much harder. But security is not something that’s ever done. We face determined, well-funded adversaries who will never give up and are constantly changing tactics,” Facebook officials said.
Since the aftermath of the 2016 presidential election and the revelations of concerted efforts by Russian groups to influence the election through Facebook and other avenues, the company has been working on ways to detect not just outright malicious activity, but patterns of behavior across accounts and pages that indicate coordinated influence campaigns. Some of the accounts Facebook disabled recently were being used to promote a supposed anti-fascist rally in Washington, D.C., next week and had joined with apparently legitimate pages to organize the event. Facebook disabled the event on its platform, has begun informing people who signed up to attend it, and has sent information to law enforcement agencies.
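At its core, this kind of detection is a linkage problem: individually unremarkable accounts become suspicious when they share signals such as common infrastructure, reused content, or synchronized posting. Facebook has not disclosed how its systems actually work, but the minimal Python sketch below illustrates the general idea. The account names, the indicators, and the rule that any shared indicator links two accounts are all assumptions made for illustration.

```python
# Illustrative sketch only: link accounts that share any behavioral indicator,
# then cluster them. The data and indicators here are hypothetical; this is
# not Facebook's actual detection method.
from collections import defaultdict
from itertools import combinations

accounts = {
    "acct_a": {"ip_block": "198.51.100.0/24", "content_hash": "h1"},
    "acct_b": {"ip_block": "198.51.100.0/24", "content_hash": "h2"},
    "acct_c": {"ip_block": "203.0.113.0/24",  "content_hash": "h2"},
    "acct_d": {"ip_block": "192.0.2.0/24",    "content_hash": "h9"},
}

# Build an edge between any two accounts that share an indicator value.
edges = defaultdict(set)
for a, b in combinations(accounts, 2):
    if any(accounts[a][k] == accounts[b][k] for k in accounts[a]):
        edges[a].add(b)
        edges[b].add(a)

def clusters(nodes, edges):
    """Return connected components: groups of seemingly unrelated accounts
    that are tied together through chains of shared signals."""
    seen, out = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(edges[cur] - comp)
        seen |= comp
        out.append(comp)
    return out

# acct_a and acct_b share an IP block, acct_b and acct_c share content,
# so all three land in one cluster; acct_d stands alone. Large clusters
# would merit human review in a real pipeline.
print(clusters(accounts, edges))
```

In practice a system like this would weight indicators differently and require more than one shared signal before linking accounts, but the transitive-linking step is what turns “several unique aspects of their behavior” into a single connected campaign.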
Although Facebook has tremendous resources at its disposal for investigating this kind of activity, the company stopped short of attributing this most recent campaign to any specific group or country. Attribution remains one of the more difficult and thorny problems in security, and Facebook is treading carefully, with good reason. Organized attack groups, especially those affiliated with or sponsored by governments, often have distributed teams with diverse skill sets and frequently use false indicators and infrastructure in other countries to disguise their activities. Facebook officials said the group responsible for the current campaign may have learned some lessons from what happened before the 2016 election.
“This set of actors has better operational security and does more to conceal their identities than the IRA did around the 2016 election, which is to be expected. We were able to tie previous abuse to the IRA partly because of several unique aspects of their behavior that allowed us to connect a large number of seemingly unrelated accounts. After we named the IRA, we expected the organization to evolve,” Facebook CSO Alex Stamos said.
“The set of actors we see now might be the IRA with improved capabilities, or it could be a separate group. This is one of the fundamental limitations of attribution: offensive organizations improve their techniques once they have been uncovered, and it is wishful thinking to believe that we will always be able to identify persistent actors with high confidence.”
In some cases, organizations that are targeted in an attack are more concerned with detecting and stopping the malicious activity and assessing the damage than with trying to determine who was behind it. Often, victim organizations don’t have the resources or time to devote to the attribution piece of an investigation. But Facebook is a different animal. The company has the money, talent, and other resources to devote to an in-depth investigation, and it is also much more than just another target. Facebook is a major source of news for a large fraction of the population, and content on the platform can have an outsized influence on how people think about issues.
“The lack of firm attribution in this case or others does not suggest a lack of action. We have invested heavily in people and technology to detect inauthentic attempts to influence political discourse, and enforcing our policies doesn’t require us to confidently attribute the identity of those who violate them or their potential links to foreign actors. We recognize the importance of sharing our best assessment of attribution with the public, and despite the challenges we intend to continue our work to find and stop this behavior, and to publish our results responsibly,” Stamos said.