Twitter, like a lot of platforms and services, is facing something of an identity crisis. Not in the traditional “Why are we all here?” sense, but in the ultra-modern “Who is running the accounts on our platform?” sense.
From the beginning, Twitter’s creators made the decision not to require real names on the service. It’s a policy that’s descended from older chat services, message boards and Usenet newsgroups and was designed to allow users to express themselves freely. Free expression is certainly one of the things that happens on Twitter, but that policy has had a number of unintended consequences, too.
The service is flooded with bots, automated accounts that are deployed by a number of different types of users, some legitimate, others not so much. Many companies and organizations use automation in their Twitter accounts, especially for customer service. But a wide variety of malicious actors use bots, too, for a lot of different purposes. Governments have used bots to spread disinformation for influence campaigns, cybercrime groups employ bots as part of the command-and-control infrastructure for botnets, and bots are an integral part of the cryptocurrency scam ecosystem. This has been a problem for years on Twitter, but only became a national and international issue after the 2016 presidential election.
Twitter executives are keenly aware of this problem and the company has been under pressure from legislators, regulators, and individuals to get a handle on the proliferation of spammy, deceptive, and outright fake accounts. While the company has back-end systems to detect abuse and public policies about how automation can be used and which actions will result in account suspension, positively identifying humans on the service has to happen on an individual basis. The overwhelming majority of Twitter usage happens on mobile platforms, and many mobile devices have built-in biometric authentication mechanisms that could be used to separate humans from machines.
Twitter CEO Jack Dorsey said this week that he sees potential in biometric authentication as a way to help combat manipulation and increase trust on the platform.
“One of the things we’re focused on right now is how do we clearly identify the humans on the service, and even that is complicated because scripting gets more and more sophisticated. Folks can script the mobile app, not just the web, not just the programming interface that’s meant for developers,” Dorsey said in an interview on The Bill Simmons Podcast that was published Wednesday.
“If we can utilize technologies like Face ID or Touch ID or some of the biometric things that we find on our devices today to verify that this is a real person, then we can start labeling that and give people more context for what they’re interacting with and ideally that adds some more credibility to the equation. It is something we need to fix. We haven’t had strong technology solutions in the past, but that’s definitely changing with these supercomputers we have in our pockets now.”
Plenty of mobile apps already use Touch ID or Face ID for authentication, and those methods have a number of advantages, including the fact that the individual’s biometric data is stored on the device itself in the Secure Enclave and is never sent to Apple’s servers. And in the specific use case Dorsey describes, requiring or suggesting biometric authentication on a trusted device could help positively identify account holders as humans.
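As a rough sketch of how that works in practice, the snippet below uses Apple’s LocalAuthentication framework, the standard way an iOS app asks for a Face ID or Touch ID check. The function name and prompt text are purely illustrative and do not reflect anything Twitter has built or announced.

```swift
import LocalAuthentication

// Illustrative only: how an iOS app typically requests a Face ID / Touch ID
// check. The biometric match itself happens in the Secure Enclave; the app
// (and any server it talks to) only ever sees a pass/fail result.
func confirmBiometricPresence(reason: String, completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) else {
        // No biometric hardware, or no enrolled face/fingerprint on this device.
        completion(false)
        return
    }

    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: reason) { success, _ in
        completion(success)
    }
}

// Usage: confirmBiometricPresence(reason: "Confirm you're a person") { ok in /* label the account */ }
```

The property that matters for Dorsey’s idea is in the comments: the app, and any service behind it, learns only that the check succeeded or failed, never the biometric data itself.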
However, there could be some obstacles. Not everyone has an iOS device (although some Android phones have biometric sensors, too), so there would need to be a secondary authentication method. And if people choose another authentication method, that choice can’t be seen as an indicator that the account is a bot.
“I think it's a step in the right direction in terms of making general authentication usable, depending on how it's implemented. But I'm not sure how much it will help the bot/automation issue. There will almost certainly need to be a fallback authentication method for users without an iOS device. Bot owners who want to do standard authentication will use whichever method is easiest for them, so if a password-based flow is still offered, they'd likely default to that,” said Jordan Wright, an R&D engineer at Duo Labs who has done extensive research on Twitter bot behavior with his colleague Olabode Anise.
“The fallback is the tricky bit. If one exists, then Touch ID/Face ID might be helpful in identifying that there is a human behind an account, but not necessarily the reverse - that a given account is not human because it doesn't use Touch ID.”
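Wright’s caveat maps onto how the framework itself behaves. The sketch below, again purely illustrative rather than any real Twitter flow, uses the broader .deviceOwnerAuthentication policy, which quietly falls back to the device passcode when biometrics are unavailable or fail; the caller receives the same boolean either way, so a successful check on this path no longer proves that a biometric, human-only step took place.

```swift
import LocalAuthentication

// Illustrative sketch of the fallback problem. The .deviceOwnerAuthentication
// policy accepts Face ID, Touch ID, or the device passcode, and reports the
// same Bool regardless of which one satisfied it. A server receiving "success"
// from this path learns that the device was unlocked somehow, not that a
// biometric check separated a human from a script.
func authenticateWithFallback(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    guard context.canEvaluatePolicy(.deviceOwnerAuthentication, error: &error) else {
        // Nothing available at all; the app must offer its own login flow,
        // which is exactly the password-based path a bot owner would script.
        completion(false)
        return
    }

    context.evaluatePolicy(.deviceOwnerAuthentication,
                           localizedReason: "Sign in to your account") { success, _ in
        completion(success)
    }
}
```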
Dorsey said he sees another benefit in the potential use of the technology: helping to restore trust in the service.
“Something like Face ID to me is a very thoughtful approach because Apple, when they created this and a bunch of other standards that ensued, a lot of the technology is local. There are no backdoors into it. Security is a constantly evolving thing of course. I think it’s important that people have control over their own security so the local aspect of it is critical. It’s not networked, it’s not accessible by Apple or third parties. But I think the most important aspect of it is we get behind this principle of earning trust,” Dorsey said.
“It’s easy to go to one method of earning trust which is transparency, but there’s so many methods of earning trust.”