Twitter’s Authentication Policy Is a Verified Mess
Twitter finds itself struggling—again—with who can speak on its network and under what terms after two days of shifting pronouncements around its process for verifying users.
Thursday, Twitter CEO Jack Dorsey called the policy “broken” and pledged to “fix [it] faster.” The statement followed an outcry over Twitter’s policies toward hate speech after it verified the account of a white supremacist.
The uproar began Tuesday, after the company verified the account of Jason Kessler, an organizer of the August white-supremacist rally in Charlottesville, Virginia, that left one protester dead and many others injured. Kessler received his blue check mark less than a month after Dorsey promised “more aggressive” enforcement against hate symbols and violent groups.
Prominent Twitter users like comedian Michael Ian Black implored Dorsey to retract Kessler’s blue check mark, arguing that it conferred “authority and legitimacy” on a white supremacist and furthered fears that Twitter is a platform for spreading hate speech. In response, Twitter said it had temporarily stopped verifying users.
“Verification was meant to authenticate identity & voice but it is interpreted as an endorsement or an indicator of importance,” the company tweeted through its customer-support account early Thursday. “We recognize that we have created this confusion and need to resolve it.”
Dorsey personally weighed in a few minutes later, explaining that Twitter’s policy had been “correctly” implemented when the company verified Kessler. Rather, the problem was the policy itself. “[W]e realized some time ago the system is broken and needs to be reconsidered. And we failed by not doing anything about it,” Dorsey said, promising that the company was “Working now to fix faster.” Twitter declined additional comment.
Siva Vaidhyanathan, a professor of media studies and director of the University of Virginia’s Center for Media and Citizenship in Charlottesville, says the blue check mark denotes more than identification. “If that were true, then we would all just have to take photos of our drivers licenses. Clearly it has something to do with prominence and status and that means that Twitter is always making value judgments about applicants,” he says.
“The only party that seems confused about verification is Twitter itself,” he adds.
Vaidhyanathan asks whether status should be based on newsworthiness, or on contributions to the public sphere. “Jason Kessler brought a band of violent thugs to Charlottesville, Virginia,” he says. “I don’t think we can consider that a contribution to the public sphere. He’s contributed no ideas, no arguments, no works of art. He has merely increased the level of hatred and violence in America.”
Twitter has a pattern of addressing festering issues only after they’ve blossomed into crises. The new rules against hate speech that Dorsey promised in October? Those came only after female users threatened a boycott when the company suspended the Twitter account of actress and director Rose McGowan, not long after she alleged she had been assaulted by Harvey Weinstein.
In McGowan’s case, Twitter said her account was suspended because she tweeted a private phone number. Supporters weren’t satisfied with that explanation, given years of inaction on harassment towards women. (Not long after he was verified, Kessler tweeted derogatory and offensive remarks about McGowan.)
Critics sneered at Twitter’s initial statement that verification was intended to authenticate identity. They noted that controversial figures like Wikileaks leader Julian Assange had not been verified despite his requests, and that Twitter took away the blue check mark from alt-right provocateur Milo Yiannopoulos before later banning him.
Danielle Citron, a law professor at the University of Maryland and a member of Twitter’s Trust & Safety Council, says such stumbles are to be expected as internet companies develop norms in response to new problems. Citron says she is not familiar with Twitter’s verification process, but believes it stemmed from concern about impersonators, especially of public figures.
Now Twitter needs to rethink those norms, the same way the company eventually came to recognize that harassment, unauthorized nude photos, and targeting individuals also imperiled free speech. “It took them a while,” says Citron, author of Hate Crimes in Cyberspace.
Citron says Twitter should look to Cloudflare’s clear and transparent statement about refusing to host the neo-Nazi website Daily Stormer a few days after the rally in Charlottesville. At the time, CEO Matthew Prince wrote a blog post acknowledging that being denied Cloudflare’s services can be tantamount to being forced offline, but said that Daily Stormer had crossed a line beyond hate speech into fueling violence.
“That’s how you have an iterative conversation about norms. There’s nothing wrong with revising,” says Citron. “Say, ‘Let’s be clear now. Let’s be clearer than we were.’ You earn a lot of points from users with honesty, clarity, and accountability, and due process,” she says.