Windows on Snapdragon desktop apps won’t be as power-hungry as first thought

Qualcomm and Microsoft’s partnership to bring Windows 10 to Snapdragon-powered laptops has promised always-connected devices and day-long battery life – but the worry was that running traditional desktop software would come at a cost to that longevity.

However, Qualcomm has now revealed that running standard Windows 10 desktop programs shouldn’t affect that impressive battery life too much.

We were previously led to believe that to benefit from the 20 or more hours’ battery life, owners of Windows 10 on Snapdragon (also known as Windows on ARM) devices would need to stick to UWP (Universal Windows Platform) apps.

UWP apps are downloaded from the Microsoft Store, and Windows 10 S – the locked-down version of Windows 10 that Windows on Snapdragon devices run by default – can only run those apps.

While some standard Windows 10 programs have UWP versions, many more do not, which meant some people worried that you would either be stuck without some of the desktop applications you rely on, or suffer from worse battery life.

Full apps, full battery

However, in a report published on Neowin, PJ Jacobowitz, a representative from Qualcomm, suggested that the performance and battery life impact of such programs would be the same as if they were running on a PC with an Intel processor.

Neowin doesn’t supply the exact quote, so we’re not entirely clear what this means. However, many are interpreting it to mean that there won’t be a significant impact on battery life if you run full desktop programs – also known as Win32 applications.

Because Win32 applications require more power (and will be run under emulation on these ARM-based systems), many thought they would further impact battery life on Windows on Snapdragon devices.

The report seems to dispute that, but the wording is ambiguous. It suggests Win32 applications will run as well as if they were running on a standard Intel machine, and will use the same amount of power. 

So, these applications will still deplete the battery faster than a UWP app might, but with emulation apparently not adding an extra penalty, the impact shouldn’t be as severe as we feared.
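
For the technically curious, Windows 10 on ARM runs x86 Win32 programs through a built-in emulation layer, and Windows exposes a documented API, IsWow64Process2, that reports both the architecture a process was built for and the machine it is actually running on. The sketch below is purely illustrative – it assumes Windows 10 version 1709 or later and simply calls that API via Python’s ctypes to check whether the current process is x86 code being emulated on an ARM64 (Snapdragon) device.

import ctypes
from ctypes import wintypes

# Machine constants documented in the Windows SDK headers.
IMAGE_FILE_MACHINE_UNKNOWN = 0x0     # reported when the process is not running under emulation
IMAGE_FILE_MACHINE_ARM64 = 0xAA64    # the native machine on Snapdragon devices

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.GetCurrentProcess.restype = wintypes.HANDLE
kernel32.IsWow64Process2.restype = wintypes.BOOL
kernel32.IsWow64Process2.argtypes = [
    wintypes.HANDLE,                   # process to query
    ctypes.POINTER(wintypes.USHORT),   # receives the process's architecture (or UNKNOWN)
    ctypes.POINTER(wintypes.USHORT),   # receives the host's native architecture
]

process_machine = wintypes.USHORT()
native_machine = wintypes.USHORT()
if not kernel32.IsWow64Process2(kernel32.GetCurrentProcess(),
                                ctypes.byref(process_machine),
                                ctypes.byref(native_machine)):
    raise ctypes.WinError(ctypes.get_last_error())

if (native_machine.value == IMAGE_FILE_MACHINE_ARM64
        and process_machine.value != IMAGE_FILE_MACHINE_UNKNOWN):
    print("x86 Win32 code running under emulation on an ARM64 device")
else:
    print("Running natively on this machine")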

We’ve contacted Qualcomm to get clarification and will update this story as soon as we hear back.

Source: http://www.techradar.com/news/windows-on-snapdragon-desktop-apps-wont-be-as-power-hungry-as-first-thought

Meltdown and Spectre fake patch warning: be careful what you download

As the scramble to patch the gaping Meltdown and Spectre security flaws continues, there are already real-world dangers pertaining to the vulnerabilities, with news of a fake patch emerging, as well as the likelihood that malicious users are coming close to weaponizing exploits.

As International Business Times spotted, security firm Malwarebytes recently discovered a fake Meltdown and Spectre patch which actually deposits ‘Smoke Loader’ malware on the victim’s machine.

The good news – such as it is – is that at the moment, this is targeting users over in Germany, but there’s every chance of similar scams popping up in the UK, US and elsewhere. Indeed, they may be around now, and just not found yet.

The false patch is somewhat clever in that it tries to seem authentic by looking like it’s delivered by genuine German authorities. The website hosting the patch appears to belong to the German Federal Office for Information Security.

The fake patch is delivered as an EXE (Intel-AMD-SecurityPatch.exe) and when run it infects the host PC with the aforementioned malware, which is a piece of malicious software capable of retrieving further payloads to wreak havoc on the user’s machine.

Also note that the real German cybersecurity authorities have been warning about phishing emails which are using Spectre and Meltdown ‘fixes’ as bait.
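
One basic precaution, assuming the software vendor publishes a checksum alongside its installer, is to verify a download’s hash before running it. The Python sketch below is illustrative only; the filename and expected digest are placeholders rather than values tied to any real patch.

import hashlib

# Placeholders: substitute the actual file path and the digest the vendor publishes.
EXPECTED_SHA256 = "<sha-256 digest published by the vendor>"

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of("downloaded-update.exe")
if actual == EXPECTED_SHA256:
    print("Checksum matches the vendor-published value")
else:
    print("Checksum mismatch, do not run this file:", actual)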

As ever, when a major threat (or two) emerges and makes a big splash all over the headlines, you can expect nefarious types to try and take advantage one way or another.

Real-world risk

And speaking of another way, the further bad news, as Ars Technica reports, is that white hat security researchers who are looking into these vulnerabilities are coming closer to engineering a practical and usable exploit.

And if the good guys are getting close, there’s every chance that the bad guys out there are as well, which means that an actual real-world attack that leverages one of these bugs could be close at hand.

And that’s a particularly worrying prospect seeing as patching these problems is a highly complex matter, requiring not just firmware fixes for Intel’s processors, but operating system patches – and indeed covering up potential holes in related things like GPU drivers.

Further gremlins are being encountered, such as Intel’s Meltdown patch causing instability with older processors, and Microsoft’s Windows patch provoking boot failures on PCs with older AMD CPUs.

With stumbling blocks getting in the way of a difficult process, and malware authors potentially on the cusp of working out a real-world exploit that can be aimed against Meltdown or Spectre, things look rather dicey indeed.

We can only hope that the fixes are deployed fully – and users are on the ball to patch quickly before a real-world attack is weaponized and starts spreading. On the other hand, don’t be so hasty to install a fix that you fall for a fake patch ruse.

For the full lowdown on defending against these bugs, check out our guide on how to protect against Meltdown and Spectre.

Source: http://www.techradar.com/news/meltdown-and-spectre-fake-patch-warning-be-careful-what-you-download

Update: SNES Mini back in stock and at a decent price too. But for how long?

Update: We’re delighted to tell you that the SNES Mini deals are still live. A few days ago Nintendo added fresh stock to its site for the Nintendo Classic Mini Super Nintendo Entertainment System (and that’s the last time we’ll refer to that overlong name today) and stock is still available.

This is fantastic if you missed out at launch last year or decided to wait when retailers started jacking up the prices beyond the £70 RRP. If you bought one to flog on eBay, it’s looking like your chances of a profit are running out!

Original story follows…

Our SNES Mini stock alert buzzer just went off and we’re here to let you know where you can buy a brand new SNES Mini right now. Better yet, it’s not at a jacked up opportunistic price. And no, we don’t mean the pretend new RRP of £79.99, but the actual original RRP of £69.99 – much cheaper than the wave of £100+ prices we’re seeing on eBay.

SNES Mini | 21 games | two controllers | £79.99 @ Very
Of course, you should go for the offer direct from Nintendo first, as it’s a tenner cheaper. The sad reality though, is that deal could be gone by the time you read this. This £79.99 price-tag is effectively the RRP that most stores have been selling the mini console for over the last few months and it’s considerably cheaper than eBay.

With any luck, Nintendo won’t be so stingy on the production line when the inevitable N64 Mini comes along later this year. We wouldn’t hold out for a Wii U Mini any time soon though.

Source: http://www.techradar.com/news/snes-mini-back-in-stock-and-at-a-decent-price-too-but-for-how-long

Anthony Levandowski Faces New Claims of Stealing Trade Secrets

The engineer at the heart of the upcoming Waymo vs Uber trial is facing dramatic new allegations of commercial wrongdoing, this time from a former nanny.

Erika Wong, who says she cared for Anthony Levandowski’s two children from December 2016 to June 2017, filed a lawsuit in California this month accusing him of breaking a long list of employment laws. The complaint alleges the failure to pay wages, labor and health code violations, and the intentional infliction of emotional distress, among other things.

Yet in this unusual 81-page complaint, Wong also claims knowledge of a large swath of Levandowski’s personal and business dealings. She does so in great detail, including dozens of overheard names, the license-plate numbers of cars she observed at a Levandowski property, and an extensive list of the BDSM gear she claims he kept in his bedroom.

Though the lawsuit contains some obvious inaccuracies—such as stating that Levandowski is a resident of Oakland County, California, which does not exist—Wong’s claims raise new questions about Levandowski’s business conduct. In her complaint, Wong alleges that Levandowski was paying a Tesla engineer for updates on its electric truck program, selling microchips abroad, and creating new startups using stolen trade secrets. Her complaint also describes Levandowski reacting to the arrival of the Waymo lawsuit against Uber, strategizing with then-Uber CEO Travis Kalanick, and discussing fleeing to Canada to escape prosecution.

Levandowski’s outside dealings while employed at Google and Uber have been central themes in Waymo’s trade secrets case. Waymo says that Levandowski took 14,000 technical files related to laser-ranging lidar and other self-driving technologies with him when he left Google to work at Uber. He is not a party to the original Waymo complaint against Uber, however, and no criminal charges have yet been filed against him. Levandowski has consistently exercised his Fifth Amendment rights and not responded to allegations in that suit.

A statement on the Wong lawsuit from Levandowski’s spokesperson is unequivocal: “On January 5, a frivolous lawsuit was filed against Anthony Levandowski in US District Court. The allegations in the lawsuit are a work of fiction. Levandowski is confident that the lawsuit will be dismissed by the courts.” Little is known about Wong, who did not respond to a request for an interview. She says in the complaint that a medical background earned her a higher-than-average salary for a nanny; that she had taken law classes; and that she had produced a short film on Sebastian Thrun, who led the early development of Google’s self-driving car.

In the complaint, Wong describes a scene from Feb. 23 of last year, the day Waymo filed its lawsuit against Uber. When Wong arrived for work that evening, she says she saw Levandowski walking in circles in the living room, sweating profusely and talking to his lawyer, Miles Ehrlich, on the phone.

According to court records, Wong recalls Levandowski screaming, “Fuck! Fuck! Fuck! How could they do this to me? Miles, what about the clause, you … said this would work! What do I do with the discs? What do the contracts say? It’s all mine, the money, the deals, it’s all mine. What about ‘the shit?’ These are all my fucking deals!”

On March 11, a day after Waymo filed a motion for an injunction against Uber, Wong describes Levandowski texting her to say he was bringing his boss home with him. Half an hour later, she says, Kalanick and Levandowski arrived, bringing with them a white bucket containing circuit boards and lenses, as well as legal documents for Levandowski to sign. She writes that Kalanick spent about five hours at Levandowski’s home.

A week later, Wong recalls Levandowski saying to his stepmother, Suzanna Musick, “Make sure Pat Green gets paid.” (Musick has deep connections with Levandowski’s companies. Google’s first self-driving Prius was still registered in her name long after 510 Systems, the startup that built it, was sold to the tech giant.)

Wong had heard the same name in conversations between Levandowski and Randy Miller, his college friend and business partner on multiple construction deals. On April 6, according to the complaint, Green’s name came up again in discussions with Miller, this time connected to updates from Tesla’s electric trucking division. Wong’s complaint says that on April 27 she overheard Levandowski and his brother Mike talking about how Levandowski might drive up to Alberta, Canada, to avoid prison. She recalls Levandowski telling his brother, “Just arrange with Suzanna, dad, and Hazlett [another relative] to keep working with Pat Green. I need updates on Tesla trucking, the non-lidar technology is crucial and Nvidia chips. We can make money on both.”

During May and June, the suit states, Wong remembers Levandowski calling his sister frequently and asking, “Did you get any packages from Google or Pat Green?”

There is a senior manufacturing equipment engineer called Patrick Green working at Tesla on new products, according to a profile on LinkedIn, but neither Green nor Tesla responded to requests for comment and no other public evidence appears to link this person to Levandowski. Tesla has long been working on an electric self-driving truck, which was finally unveiled in November as the Semi. Levandowski has an investment in autonomous trucking as the majority shareholder in Otto Trucking, a self-driving truck startup originally named as a co-defendant in the Waymo case. Otto Trucking owns self-driving trucks based at Uber’s headquarters in San Francisco.

According to Wong’s complaint, at the same meeting Levandowski also asked his brother Mike to keep “paying off Haslim and others.” This likely refers to James Haslim, the lidar engineer who was hired by Levandowski to work at his startup Tyto Lidar. Tyto was acquired by Otto and, in turn, by Uber, where Haslim still works. Uber declined to comment on the allegation and did not make Haslim available for an interview.

This complaint makes clear that Wong also thinks Levandowski is selling trade secrets, lidar technology, and processors to customers abroad. She recalls a conversation on June 3 in which she says he told her, “I don’t plan on going to prison, the money is in the chip sales.” On another phone call a few weeks later, according to court records, she heard him say, “I’m rich as fuck. Boom-mother fucker! Fuck Travis! Fuck Uber! I’m taking the world over with all these deals, microchips sales all over the world.”

The document also details Wong’s belief that Levandowski had a hand in forming several startups not publicly linked to him. For example, the complaint describes Wong overhearing a conversation between Levandowski and his business partners about ex-Google engineer Bryan Salesky’s autonomous vehicle startup Argo.AI. She then suggested in the complaint that Levandowski might have had a role in creating the company while at Uber. Yet Google founder Larry Page has spoken of tension between Levandowski and Salesky at Google. Argo.AI tells WIRED that Levandowski was not involved in the formation of the company in any way.

Wong also suggests in the complaint that Levandowski helped create JingChi Corporation, an autonomous technology startup founded by Qing Lu, a former executive at lidar company Velodyne, in March 2017. The complaint cites as evidence a few meetings between Levandowski and Michael Jellen, Velodyne’s president. When contacted, Qing Lu and Velodyne also denied Wong’s conclusions. WIRED could find no public evidence linking Levandowski to either Argo.AI or JingChi.

Wong is seeking damages of over $6 million. Levandowski has been served with a summons, and an initial case management conference is scheduled for early April. If Levandowski was expecting his legal woes to end with the Waymo case next month, he may have to buckle up for a longer ride.

Source: https://www.wired.com/story/anthony-levandowski-faces-new-claims-of-stealing-trade-secrets/

Your Next Job Could Be Babysitting Robots

Book a night at LAX’s Residence Inn and you may be fortunate enough to meet an employee named Wally. His gig is relatively pedestrian—bring you room service, navigate around the hotel’s clientele in the lobby and halls—but Wally’s life is far more difficult than it seems. If you put a tray out in front of your door, for instance, he can’t get to you. If a cart is blocking the hall, he can’t push it out of the way. But fortunately for Wally, whenever he gets into a spot of trouble, he can call out for help.

See, Wally is a robot—specifically, a Relay robot from a company called Savioke. And when the machine finds itself in a particularly tricky situation, it relies on human agents in a call center way across the country in Pennsylvania to bail it out. When Wally makes the distress call, a real live human answers, takes control of the robot, and guides it to safety.

Wally’s job may seem inconsequential, but it signals just how close we are to the robot revolution. The machines are finally sophisticated enough to escape the lab and the factory, where they’ve long lived, and venture into our everyday lives. But for all their advances, robots still struggle with the human world. They get stuck. They get confused. They get assaulted. Which is giving rise to a fascinating new kind of job that only a human can do: robot babysitter.

The first companies to unleash robots into service sectors have been quietly opening call centers stocked with humans who monitor the machines and help them get out of jams. “It’s something that’s just starting to emerge, and it’s not just robots,” says David Poole, CEO and co-founder of Symphony Ventures, which advises companies on automation. “I think there is going to be a huge industry, probably mostly offshore, in the monitoring of devices in general, whether they’re health devices that individuals wear or monitoring pacemakers or whatever it might be.” Self-driving cars, too. Nissan in particular has admitted that getting a car to drive itself is hard as hell, so it wants humans in the loop.

Which might sound, well, a bit dystopian: vast rooms packed with humans devoted exclusively to tending to the whims of robots. But it’s actually an intriguing glimpse into the nature of work in a robotic future, and the way humans will interact with—and adapt to—the machines.

Save Your (Manufactured) Skin

Curiously, Savioke has outsourced Relay’s call center duties to a company called Active Networks, which operates traditional call centers. Which meant the people who do this work had to get new training to interact with the machines. In fact, they still get recurring training. And periodically they get together to discuss issues they run into. “This was not an easy task, as if we are preparing to take phone calls,” says Marcus Weaver, who manages call center operations at Active Networks. “We had to change our agents’ mindset and get them used to handling the request via a portal instead of someone calling over the phone.”

These sitter jobs, though, may be fleeting. A robot call center is a stopgap. Robots aren’t ready to be independent just yet, but that doesn’t mean they won’t be down the line. “I can completely see that eventually we’ll reach a point where we don’t need the humans in the loop,” says Tessa Lau, CTO of Savioke, Relay’s maker. The idea here isn’t to fashion a future in which humans tend to forever-inept robots—the idea is to get them into the real world with a little bit of help. “We’re experimenting with this new technology that’s sort of the first of its kind,” says Lau. “We’re still getting the kinks out, we’re still making Relay more reliable, more autonomous.”

The stakes here are of course fairly low—no one’s life is in danger if their room service is a bit delayed. But another robot named Tug, made by Aethon in Pittsburgh, plays a more sensitive role as a hospital worker. It delivers drugs to doctors and nurses, as well as linens and food. Tug is meant not as a replacement for staff, but as an increasingly important coworker that frees up time for workers to do the human stuff, like talking to patients.

Still, though, Tug can get stuck in such a chaotic environment, so a command center in this case provides peace of mind for the client. “We didn’t have the luxury of time to wait for the culture to change in order for people to want to adopt autonomous vehicles,” Aethon’s Peter Seiff told me when WIRED visited their HQ in November. “So we built this backend into the system where we can make customers comfortable that they were being watched, even though they made the leap of faith with us that they could have autonomous vehicles running within their facility.”

Can’t We All Just Get Along?

Not everyone agrees to be watched by the bots, though. Late last year, one of Knightscope’s security robots was patrolling around the San Francisco SPCA when a group setting up an encampment allegedly attacked it.

“When you’re living outdoors, the lack of privacy is really dehumanizing after awhile, where the public’s eyes are always on you,” Jennifer Friedenbach, executive director of San Francisco’s Coalition on Homelessness, told WIRED in December. “It’s really kind of a relief when nighttime comes, when you can just be without a lot of people around. And then there’s this robot cruising around recording you.”

The question of privacy grows all the more complicated when you get sitters peering through a robot’s cameras from afar. A human interacting with a security robot might rightly assume that they’re being recorded, but what they might not know is that Knightscope staffs a human call center 24/7 to monitor the robots. Who, exactly, is watching? (Savioke’s Relay robot, for its part, takes video in what you might consider public places like the lobby and hallways, but blurs video when it approaches a guest’s door, lest it witness something no robot or human needs to see.)

When there’s a human behind the scenes, robots start to have an image problem. The value of a service robot, at least in part, is its impartiality. It lives to serve, in a specific capacity—just for you, dear client. But a call center calls that presentation into question. How much control does the sitter have? And at what point does the robot start to take on its sitter’s human personality?

Savioke ran into this problem early on. “The concern that we had was that we’re trying to create a particular character that Relay has,” says Lau. “He’s friendly, he’s helpful, he’s polite. If you open the door to having our call center arbitrarily create behaviors for Relay, like putting text on the screen, we can’t necessarily control everything that people will type in.”

Savioke eventually decided to restrict what the sitters had power over. “They can send him on a delivery, they can drive him around in a limited form to get his bearings again, but we decided not to allow them to sort of puppeteer him because he’s really not a remote-controlled toy,” says Lau.

It’s an interesting twist on what’s known as human-robot interaction, a matter so complicated that it’s spawned an entire academic field. How should robots anticipate our movements, for example? How do you design robots to subtly telegraph what they’re capable of? And now with robot call centers, how does the dynamic change when the human is thousands of miles away from the robot they’re interacting with and controlling?

“Ideally, you should be able to interact with the robot at some higher level interface, guiding its higher level actions to get unstuck or remedy the situation,” says Anca Dragan, who studies human-robot interaction at UC Berkeley. “What these high level actions ought to be is an open question.”

Also an open question are the psychological effects of operating a robot from afar. Consider drone operators, who can develop PTSD even though they’re sitting comfortably behind a computer monitor. Not that the sitters looking after Relay and other robots are in danger of doing the same, but there are interesting psychological implications here. For instance, could being so far disconnected from the machine encourage unethical behavior?

We’re certainly about to find out. Sure, the job of the robot sitter may be fleeting, as the machines grow ever more sophisticated. Like children, robots grow up, and then the babysitter is out of a job. But for certain bots, a human may always be there—ready to come to the rescue.

Source: https://www.wired.com/story/job-alert-how-would-you-like-to-babysit-robots/

Barcelona abandons Windows and Office, goes with Linux instead

In another entire-city-abandons-Microsoft affair, Barcelona has announced that it’s dumping Windows and Office in order to migrate to Linux and other open source solutions.

The idea is, obviously enough, to save money by not paying subscription fees to Microsoft, because the beauty of open source software is that it’s free.

As to the reality of how the move pans out, we’ll just have to see, but as we mentioned at the outset, Barcelona isn’t the first European city to try this trick. Munich did so, initially instigating plans way back in 2003, and fully completing the move to open source by 2013. However, the city announced it was switching back to Microsoft software in 2016.

Nonetheless, Barcelona is treading this brave path, which involves doing away with Windows, as well as Microsoft Office and Exchange, in favor of Linux, LibreOffice and Open-Xchange.

Apparently some folks at the city council are already using Linux PCs with Ubuntu installed, as well as Firefox as the default browser, as part of a pilot scheme.

Fresh software

Barcelona won’t just be using existing open source software, but also plans to recruit developers to write fresh programs, which will then potentially be distributed and used in other cities across Spain, furthering cost-saving efforts.

As ever, only time will tell how successful this initiative will be, but there are plenty of doubters given the Munich episode.

Regarding the latter, as MS Power User reports, one of the main reasons for Munich reverting to Microsoft software was apparently the fact that the tailored Linux distro used (LiMux, based on Ubuntu) and LibreOffice were seen to be “far behind the current technical possibilities of established standard solutions”, and were causing crashes and instability.

Pro-open source advocates will doubtless argue, though, that Linux has made impressive strides in terms of stability and support in recent times.

Munich isn’t the only example of a city failing in a Linux migration campaign, either – Vienna tried to make the move in 2005, returning to Windows four years later.

Source: http://www.techradar.com/news/barcelona-abandons-windows-and-office-goes-with-linux-instead

Meet Antifa’s Secret Weapon Against Far-Right Extremists

The email arrived just as Megan Squire was starting to cook Thanksgiving dinner. She was flitting between the kitchen, where some chicken soup was simmering, and her living room office, when she saw the subject line flash on her laptop screen: “LOSer Leak.” Squire recognized the acronym of the League of the South, a neo-Confederate organization whose leaders have called for a “second secession” and the return of slavery. An anonymous insider had released the names, addresses, emails, passwords, and dues-paying records of more than 4,800 members of the group to a left-wing activist, who in turn forwarded the information to Squire, an expert in data mining and an enemy of far-right extremism.

Fingers tapping across the keyboard, Squire first tried to figure out exactly what she had. She pulled up the Excel file’s metadata, which suggested that it had passed through several hands before reaching hers. She would have to establish its provenance. The data itself was a few years old and haphazardly assembled, so Squire had to rake the tens of thousands of information-filled cells into standardized sets. Next, she searched for League members near her home of Gibsonville, North Carolina. When she found five, she felt a shiver. She had recently received death threats for her activism, so she Googled the names to find images, in case those people showed up at her door. Then she began combing through the thousands of other names. Two appeared to be former South Carolina state legislators, one a firearms industry executive, another a former director at Bank of America.

Once she had a long list of people to investigate, Squire opened a database of her own design—named Whack-a-Mole—which contains, as far as anyone can tell, the most robust trove of information on far-right extremists. When she cross-checked the names, she found that many matched, strengthening her belief in the authenticity of the leak. By midafternoon, Squire was exchanging messages via Slack with an analyst at the Southern Poverty Law Center, a 46-year-old organization that monitors hate groups. Squire often feeds data to the SPLC, whose analysts might use it to provide information to police or to reveal white supremacists to their employers, seeking to get them fired. She also sent several high-profile names from the list to another contact, a left-wing activist who she knew might take more radical action—like posting their identities and photos online, for the public to do with what it would.

Squire, a 45-year-old professor of computer science at Elon University, lives in a large white house at the end of a suburban street. Inside are, usually, some combination of husband, daughter, two step-children, rescue dog, and cat. In her downtime she runs marathons and tracks far-right extremists. Whack-a-Mole, her creation, is a set of programs that monitors some 400,000 accounts of white nationalists on Facebook and other websites and feeds that information into a centralized database. She insists she is scrupulous about not breaking the law or violating Facebook’s terms of service. Nor does she conceal her identity, in person or online: “We shouldn’t have to mask up to say Nazis are bad. And I want them to see I don’t fit their stereotypes—I’m not a millennial or a ‘snowflake.’ I’m a peaceful white mom who definitely doesn’t like what they’re saying.”

Though Squire may be peaceful herself, among her strongest allies are “antifa” activists, the far-left antifascists. She doesn’t consider herself to be antifa and pushes digital activism instead of the group’s black-bloc tactics, in which bandanna-masked activists physically attack white supremacists. But she is sympathetic to antifa’s goal of silencing racist extremists and is unwilling to condemn their use of violence, describing it as the last resort of a “diversity of tactics.” She’s an intelligence operative of sorts in the battle against far-right extremism, passing along information to those who might put it to real-world use. Who might weaponize it.

As day shifted to evening, Squire closed the database so she could finish up cooking and celebrate Thanksgiving with her family and friends. Over the next three weeks, the SPLC, with help from Squire, became comfortable enough with the information to begin to act on it. In the shadowy world of the internet, where white nationalists hide behind fake accounts and anonymity is power, Whack-a-Mole was shining a searchlight. By mid-December, the SPLC had compiled a list of 130 people and was contacting them, to give them a chance to respond before possibly informing their employers or taking legal action. Meanwhile, the left-wing activist whom Squire had separately sent data to was preparing to release certain names online. This is just how Squire likes it. Hers is a new, digitally enabled kind of vigilante justice. With no clear-cut rules for just how far a citizen could and should go, Squire has made up her own.

“I’m the old lady of activism,” says Megan Squire, a professor of computer science at Elon University.

Squire grew up near Virginia Beach in a conservative Christian family. She has been involved in left-leaning movements since she was 15, when her high school environmental club took a trip to protest the pollution from an industrial pig farm. “I loved the activist community,” she says, “and saying things we weren’t supposed to say.” After getting degrees in art history and public policy from William & Mary, she became interested in computers and took a job as a secretary at an antivirus software company, working her way up to webmaster. She eventually got a PhD in computer science from Nova Southeastern University in Florida and moved to North Carolina to work at startup companies before landing a job teaching at Elon. Between classes she could often be spotted around town waving signs against the Iraq War, and in 2008 she went door to door campaigning for Barack Obama. But Obama’s failure, in her view, to live up to his rhetoric, compounded by the Great Recession, was “the turning point when I just threw in the towel on electoral politics,” she says. She plunged into the Occupy movement, coming to identify as a pacifist-anarchist, but she eventually became disillusioned with that as well when the movement’s “sparkle-fingers” utopianism, as she puts it, failed to generate results. In 2016, she cast a vote for the Green Party’s Jill Stein.

Donald Trump’s campaign, though, gave Squire a new sense of mission: “I needed to figure out what talents I had and what direct actions I could do.” When a mosque in the nearby city of Burlington was harassed by a local neo-Confederate group called Alamance County Taking Back Alamance County, she decided to put her skills to use. ACTBAC was using Facebook to organize a protest against the opening of the mosque, so Squire began scraping posts on the page that threatened to “kick Islam out of America.” She submitted her findings to the SPLC to get ACTBAC classified as a hate group, and to the North Carolina Department of the Secretary of State, which started an investigation into the group’s tax-exempt nonprofit status. She also organized a counterprotest to one of the group’s rallies, and it was at this event and others like it where she first became acquainted with the black-clad antifa activists. She was impressed. “They were a level of mad about racism and fascism that I was glad to see. They were definitely not quiet rainbow peace people.” Over the following months, she began feeding information to some of her new local antifa contacts. As white pride rallies intensified during 2017’s so-called Summer of Hate—a term coined by a neo-Nazi website—Squire began to monitor groups outside of North Carolina, corresponding with anonymous informants and pulling everything into her growing Whack-a-Mole database. Soon, in her community and beyond, antifa activists could be heard whispering about a new comrade who was bringing real, and potentially actionable, data-gathering skills to the cause.

The first big test of Whack-a-Mole came just before the white supremacist Unite the Right rally in Charlottesville on Saturday, August 12. In the weeks before, because of her database, Squire could see that nearly 700 white supremacists on Facebook had committed to attend the rally, and by perusing their posts, she knew they were buying plane tickets and making plans to caravan to Charlottesville. Her research also showed that some of them had extensive arrest records for violence. She sent a report to the SPLC, which passed it on to Charlottesville and Virginia law enforcement. She also called attention to the event on anarchist websites and spread the word via “affinity groups,” secret peer-to-peer antifa communication networks.

The night before the rally, Squire and her husband watched in horror on the internet as several hundred white supremacists staged a torch-lit march in Charlottesville to protest the removal of a statue of Robert E. Lee, chanting “Jews will not replace us!” The next morning, the couple got up at 5 am and drove more than 150 miles through rain and mist to Virginia. At a crowded park, she met with a half-dozen or so activists she knew from North Carolina, some of them antifa, and unfurled a banner for the Industrial Workers of the World. (She’d joined the Communist-inspired labor organization in December 2016, after witnessing what she considered its well-organized response to KKK rallies in North Carolina and Virginia.) Just before 10 am, the white supremacists began marching into Emancipation Park, a parade of Klansmen, neo-Nazis, militia members, and so-called alt-right adherents, armed with everything from homemade plexiglass shields to assault weapons. Squire screamed curses at the white supremacists by name—she knew them because she had their information on file in Whack-a-Mole and had memorized their faces. At one point, a group of clergy tried to blockade the white supremacists, and Squire linked arms with other activists to protect them. A petite woman, she was pushed aside by men with plexiglass shields. Fights broke out. Both sides blasted pepper spray. Squire put on a gas mask she’d been carrying in a backpack, but the pepper spray covered her arms, making them sting.

After the police finally separated the combatants, Squire and dozens of other counterprotesters took to Fourth Street in triumph. But then, a gray Dodge Challenger tore down the street—and rammed into their backs. The driver, who had marched with the white nationalists and was later identified as James Alex Fields, missed Squire by only a few feet. She stood on the sidewalk, weeping in shock, as the fatally injured activist Heather Heyer lay bleeding in the street.

Recounting the event months later, Squire began to cry. “I had all this intelligence that I hadn’t used as effectively as I could have. I felt like I’d wasted a chance that could have made a difference.” When she returned home, she threw herself into expanding Whack-a-Mole.

Squire, center, marches through the streets of Asheboro, North Carolina, to protest the KKK.

One morning in December, I visited Squire in her small university office. She had agreed to show me the database. First she logged onto a foreign server, where she has placed Whack-a-Mole to keep it out of the US government’s reach. Her screen soon filled with stacks of folders nested within folders: the 1,200-plus hate groups in her directory. As she entered command-line prompts, spreadsheets cascaded across the screen, each cell representing a social media profile she monitors. Not all of them are real people. Facebook says up to 13 percent of its accounts may be illegitimate, but the percentage of fakes in Squire’s database is probably higher, as white nationalists often hide behind multiple sock puppets. The SPLC estimates that half of the 400,000-plus accounts Squire monitors represent actual users.

Until Whack-a-Mole, monitoring white nationalism online mainly involved amateur sleuths clicking around, chasing rumors. Databases, such as they were, tended to be cobbled together and incomplete. Which is one reason no one has ever been able to measure the full reach of right-wing extremism in this country. Squire approached the problem like a scientist. “Step one is to get the data,” she says. Then analyze. Whack-a-Mole harvests most of its data by plugging into Facebook’s API, the public-facing code that allows developers to build within Facebook, and running scripts that pull the events and groups to which various account owners belong. Squire chooses which accounts to monitor based on images and keywords that line up with various extremist groups.
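
The article does not reproduce any of Whack-a-Mole’s code, but the keyword-matching step described above is simple to sketch in outline. The Python snippet below is purely illustrative – the account IDs, post texts, and watchlist terms are invented – and merely flags accounts whose posts contain terms from a watchlist, the kind of first-pass filter used to decide which accounts to monitor.

# Illustrative only, not Whack-a-Mole's actual code: flag accounts whose posts
# contain any term from a watchlist (all identifiers below are invented).

WATCHLIST = {"league of the south", "unite the right"}

def flag_accounts(posts_by_account):
    """posts_by_account maps an account ID to a list of its public post texts."""
    flagged = {}
    for account, posts in posts_by_account.items():
        hits = {term for term in WATCHLIST
                for text in posts if term in text.lower()}
        if hits:
            flagged[account] = sorted(hits)
    return flagged

sample = {
    "account_a": ["Carpooling to the Unite the Right rally this weekend"],
    "account_b": ["Pictures from the lake house"],
}
print(flag_accounts(sample))  # -> {'account_a': ['unite the right']}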

Most of the Whack-a-Mole profiles contain only basic biographical sketches. For more than 1,500 high-profile individuals, however, Squire fills out their entries with information gleaned from sources like the SPLC, informers, and leaks. According to Keegan Hankes, a senior analyst at the SPLC, Squire’s database “allows us to cast a much, much wider net. We’re now able to take a much higher-level look at individuals and groups.”

In October, after a man fired a gun at counterprotesters at a far-right rally in Florida, SPLC analysts used Squire’s database to help confirm that the shooter was a white nationalist and posted about it on their blog. Because so much alt-right digital data vanishes quickly, Whack-a-Mole also serves as an archive, providing a more permanent record of, say, attendees at various rallies. Squire’s database has proven so useful that the SPLC has begun laying the groundwork for it to feed directly into its servers.

When Squire sends her data to actual citizens—not only antifa, but also groups like the gun-toting Redneck Revolt—it gets used in somewhat less official ways. Before a neo-Nazi rally in Boston this past November, Squire provided local antifa groups with a list of 94 probable white nationalist attendees that included their names, Facebook profiles, and group affiliations. As one activist who goes by the pseudonym Robert Lee told me, “Whack-a-Mole is very helpful. It’s a new way to research these people that leads me to information I didn’t have.” He posts the supposed identities of anonymous neo-Nazis and KKK members on his blog, Restoring the Honor, which is read by journalists and left-wing activists, and on social media, in an effort to provoke the public (or employers) to rebuke them.

Lee is careful, he says, to stop short of full-on doxing these individuals—that is, publicizing more intimate details such as home addresses, emails, and family photos that would enable electronic or even real-world harassment against them. Squire says that’s why she feels comfortable sending him information. Of course, once a name is public, finding personal information is not that hard. In the digital age, doxing is a particularly blunt tool, one meant to terrorize and threaten people in their most private spaces. Celebrities, private citizens, left-wing activists, and Nazis have all been doxed. The tactic allows anonymous hordes of any persuasion to practice vigilante justice on anyone they deem evil, problematic, or just plain annoying. As the feminist videogame developer and activist Zoe Quinn, who has been doxed and brutally harassed online, has written: “Are you calling for accountability and reform, or are you just trying to punish someone—and do you have any right to punish anyone in the first place?”

Squire has been doxed herself. Pictures of her home, husband, and children have been passed around on racist websites. She has received death threats and terrorizing voicemails, including one that repeated “dirty kike” for 11 seconds. Elon University has fielded calls demanding she be fired. On Halloween, Confederate flags were planted in her yard. Still, though Squire fears for her family’s safety, she keeps going. “I’m aware of the risks,” she says. “But it seems worth it. That’s what taking a stand is.”

Members of Berkeley’s antifascist group block an Infowars reporter from covering a rally.

After Charlottesville, Squire considered, in her anger and grief, publicly releasing the entire Whack-a-Mole database. It would have been the largest-ever doxing of the far right. But she worried about the consequences of misidentification. Instead, she worked with her regular partners at the SPLC and activists she trusts. At one point the SPLC contacted a university about a student whom Squire had identified as a potentially violent member of the League of the South. The university did not take action, and she thought about tossing the student’s name to the ever-ravenous social media mobs. But here too, she reasoned that when you have someone’s life at your fingertips, you need rules. If the university wasn’t willing to act, then neither was she. It was, for her, a compromise, an attempt to establish a limit in a national moment pointedly lacking in limits.

Critics might still argue that public shaming of the kind Squire promotes constitutes a watered-down form of doxing, and that this willingness to take matters into their own hands makes Squire and her cohort no better than vigilantes. As David Snyder, executive director of the First Amendment Coalition, says of Squire’s work: “Is it ethical to digitally stalk people? It may not be. Is it legal? Probably, as long as she doesn’t hack into their accounts and she’s collecting information they post publicly on an open platform like Facebook.” But he warns that limiting speech of anyone, even white supremacists, starts down a slippery slope. “Political winds can shift across time. Liberals who might cheer at a university limiting neo-Nazi speech also have to worry about the flip side of that situation when someone like Trump might penalize them in the future.”

As far as Squire is concerned, there’s a clear difference between protected speech and speech that poses an imminent threat to public safety. “Richard Spencer yelling about wanting a white ethno-state after events like Charlottesville—it’s hard to argue that kind of speech doesn’t constitute danger.”

Ultimately, Squire sees her work as a type of “fusion center”—a government term for a data center that integrates intelligence from different agencies—for groups combating white nationalism. And she admits that she is outsourcing some of the ethical complexities of her work by handing her data off to a variety of actors. “But it’s the same as how Facebook is hypocritical in claiming to be ‘just a platform’ and not taking responsibility for hate. Every time we invent a technology to solve a problem, it introduces a bunch more problems. At least I’m attentive to the problems I’ve caused.” Squire sees herself as having to make difficult choices inside a system where old guidelines have been upended by the seismic powers of the internet. White nationalists can be tracked and followed, and therefore she believes she has a moral obligation to do so. As long as law enforcement keeps “missing” threats like James Alex Fields, she says, “I don’t have any moral quandaries about this. I know I’m following rules and ethics that I can stand up for.”

After Charlottesville, some white supremacist groups did find themselves pushed off certain social media and hosting sites (see “Nice Website,” page 56) by left-wing activists and tech companies wary of being associated with Nazis. These groups relocated to platforms like the far-right Twitter clone Gab and Russia’s Facebook-lite VK. Squire sees this as a victory, believing that if white nationalists flee to the confines of the alt-right echo chamber, their ability to recruit and organize weakens. “If the knowledge that we’re monitoring them on Facebook drives them to a darker corner of the internet, that’s good,” she asserts.

That doesn’t mean Squire won’t follow them there. She has no plans to stop digitally surveilling far-right extremists, wherever they may be. After Jason Kessler, the organizer of the Unite the Right rally, was unverified on Twitter, he joined VK. His first post read, “Hello VK! I’d rather the Russians have my information than Mark Zuckerberg.” The declaration was quickly scooped up by Squire. She had already built out Whack-a-Mole to track him there too.


The Free Speech Issue

  • Tech, Turmoil, and the New Censorship: Zeynep Tufekci explores how technology is upending everything we thought we knew about free speech.
  • “Nice Website. It Would Be a Shame if Something Happened to It.”: Steven Johnson goes inside Cloudflare’s decision to let an extremist stronghold burn.
  • Please, Silence Your Speech: Alice Gregory visits a startup that wants to neutralize your smartphone—and un-change the world.
  • The Best Hope for Civil Discourse on the Internet … Is on Reddit: Virginia Heffernan submits to Change My View.
  • 6 Tales of Censorship: What it’s like to be suspended by Facebook, blocked by Trump, and more, in the subjects’ own words.

Doug Bock Clark (@dougbockclark) wrote about Myanmar’s digital revolution in issue 25.10. His first book, The Last Whalers, comes out in July.

This article appears in the February issue.

Source: https://www.wired.com/story/free-speech-issue-antifa-data-mining/

This Startup Wants to Neutralize Your Phone—and Un-change the World

Late last fall, in the gleaming white lobby of Madison Square Garden, uniformed attendants were posted at security stations to make thousands of smartphones stupid. Chris Rock was playing his 10th show in a 12-city international tour, and at every stop, each guest was required to pass through the entryway, confirm that his or her phone was on vibrate or silent, and then hand it over to a security guard who snapped it into a locking gray neoprene pouch—rendering it totally inaccessible. The besuited man ahead of me in line, clearly coming straight from the office, had two cell phones, each of which required its own little bag. The kid behind me groaned that he wouldn’t be able to Snapchat his night. The friend whom I’d come to meet was nowhere to be found, and after slipping my phone into the pouch, I couldn’t text her to ask where she was. Finally, I spotted her near the escalator. “That was weirdly scary,” she said, laughing.

The show would start in 45 minutes. There were still seats to find, bathroom visits to be made, bottles of water to buy. And throughout the lobby, hands everywhere were fidgeting. It was as though all 5,500 of us had been reduced, by the sudden and simple deactivation of our phones, into a roomful of jonesing fiends.

We applied lip balm needlessly, ripped up tissues, cracked our knuckles. The truly desperate could get relief in a cordoned-off “phone zone” just outside the auditorium, where an employee would unlock your phone so long as you stayed within the bathroom-sized pen. “I gotta tell my wife there’s no service here,” a man told his friend, before ducking in. A woman laughed as she walked by. “It’s like a smoking area! Look at all those addicts.” Meanwhile, those who resisted the temptation to gain back access to their phones, not five minutes after relinquishing it, complained that they didn’t know the time.

Yondr, a San Francisco company with 17 employees and no VC backing, was responsible for the cell phone restriction. Its small fabric pouches, which close with a proprietary lock that can be opened only with a Yondr-supplied gadget, have been used at concerts featuring Alicia Keys, Childish Gambino, and Guns N’ Roses, and at shows by comedians like Rock, Dave Chappelle, and Ali Wong who don’t want their material leaked on YouTube or their audiences distracted by Instagram. They’re used in hospitals and rehab centers to enforce compliance with health privacy laws, in call centers to protect sensitive customer information, in churches to focus attention on the Almighty, and in courtrooms to curb witness intimidation. They’re used in more than 600 public schools across the country to force children, finally, to look at the board and not their screens. The ingeniously unsophisticated scrap of fabric has only one job: to eliminate smartphone use in places where the people in charge don’t want it. Which is great when it means creative artists can express themselves freely or the rest of us can see a doctor without worrying we’re being recorded. But when it means stifling expression in places where smartphones are increasingly our best chance to document abuses, chronicle crimes, and tell the world what we see, it takes on a different, darker dimension. “The smartphone is many things,” says Jay Stanley, a senior policy analyst for the ACLU. “A means of privacy invasion”—something we need to be protected from—“but also an instrument of free speech.”

I met Graham Dugoni, Yondr’s founder, over drinks one evening in Williamsburg, Brooklyn. He was in New York for two days, meeting with vendors, clients, and business partners about how and why they should use Yondr. “Everyone gets it super intuitively,” he says. “Our attachment to our phones isn’t all that intellectual. It’s much more a body thing, so it was always clear to me that whatever solution there is to this problem had to be itself physical and tangible.”

This problem. It’s one we all have. Checking Instagram 897 times a day. Refreshing Twitter but not even reading whatever comes up. Feeling our phones buzz, imagining that a cool stranger is offering us our dream job, and then hating ourselves for being so dumb. “If you use a device all the time, it’s going to affect your nervous system and your patterns of thought and social interaction. It’s really just an impulse check that’s needed, I think,” Dugoni says. He sees this as a new, awkward epoch of humanity where we might all need a bit of help being our better selves. “In our hyperconnected, atomized modern society,” he says, “stepping into a phone-free space provides the foundation for sustained attention, dialog, and freedom of expression.”

Dugoni, who is 31 and projects the physical confidence of an extreme athlete, has a flip phone and claims not to read the news. “I’m really selective about my inputs,” he told me. “I have a hunch that the human race isn’t ready for all our current visual and auditory stimuli.” And since founding Yondr in 2014, he has taken it upon himself to try to take us back to a time before cell phones were everywhere and everything. He wants to un-change the world. “I think of it as a movement,” he says. “I really do.”

Dugoni grew up in Portland, Oregon, studied political science at Duke University, and played professional soccer in Norway until an injury forced him off the field and into finance. At 24 he moved to Atlanta, where he worked, unhappily, for a midsize investment firm, and for the first time in his life sat at a desk for eight hours a day. Dugoni later relocated to the Bay Area and spent a few months working at various startups, but he hated that too. In 2012, at a music festival in San Francisco, he witnessed a pair of strangers film a drunken guy obliviously dancing; they then posted the video to YouTube. Appalled, Dugoni started thinking about how he could have prevented these strangers from making a public spectacle out of someone else’s private moment. A tool, maybe, to create a phone-free space.

He spent the next year and a half researching options, reading up on sociology, phenomenology, and the philosophy of technology. And in 2014, after experimenting with different concepts, including a storage locker that could hold individual phones, he settled on a pouch that let people hold onto their phones without being able to use them. Over the next six months, he spent nights sourcing materials from Alibaba, the ecommerce conglomerate, and talking on the phone with Chinese purveyors of fabric and plastic. He’d then sit at his kitchen table until dawn, creating tiny wetsuit-like sleeves and jamming cell phones into them. After 10 prototypes, he created a version that locked and unlocked with ease. He had his product, and he gathered $100,000 from family, friends, angel investors, and his own savings to manufacture and market it.

Graham Dugoni went through 10 prototypes before perfecting the Yondr pouch’s fit and functionality.

From the beginning, concert producers understood the appeal of the pouch, and entertainment venues were among Yondr’s early customers. That changed in 2016, when Joseph Evers, the district court administrator for Philadelphia County, attended a comedy show at the Valley Forge Casino. When the person working security asked for his phone, slid it into one of the pouches, and locked it, Evers realized it could solve a big problem in the courts. At the time, he was struggling with witness intimidation: People were attending hearings and posting photos of the proceedings on social media. “We had tried collecting phones, but it was a nightmare,” he told me. “It took forever, and there was a lot of damage [to the phones] we had to pay for.” Yondr seemed like an obvious solution. A few days later, he got in touch with the company, and an employee traveled across the country with a case of samples. Evers presented them to the administrative board of the courts in Philadelphia, and everyone agreed immediately and unanimously. Now, on any given day, about 2,000 Yondr pouches are used in Philadelphia courts.

At first, Evers says, he worried that people would bristle at the process, but that hasn’t been the case. “There’s not a lot of drama,” he says. “People get in line and do what they have to do.” Evers says the court has seen a “dramatic change” in the number of complaints about social media posts identifying witnesses and undercover officers. “The DA and the police are the biggest beneficiaries,” he says. Surrendering your phone “is a small price to pay for safety.”

Adam Schwartz isn’t so sure. A staff attorney at the Electronic Frontier Foundation, a San Francisco–based nonprofit devoted to defending civil liberties in the digital world, Schwartz wrote to me in an email that the organization is “concerned about technologies that incapacitate, even temporarily, all of the salutary things that a person might do with their smartphone.” When I called him to elaborate, he cited the video, shot by a South Carolina high school student in 2015, showing a police officer body-slamming a black, female student for disrupting class. He reminded me of the footage of comedian Michael Richards’ epithet-laced 2006 set that sparked debate about whether entertainers should use racial slurs. He also talked about his concern that his own teenage children should have access to their phones to call 911 should a shooter show up at their school.

Technology has inverted traditional power structures with unprecedented swiftness, and the control of almost any situation is gradually shifting into the hands (literally) of whoever’s recording it. Our phones have turned us into socially connected cyborgs, enhancing what it means to see and hear and speak; in taking away the ability to use these devices, we may be compromising something that is becoming not only essential to us, but about us. “Ten years ago, very few people were walking around with a camera or video recording device, and one could easily make the argument that Yondr is merely restoring the status quo,” Schwartz says. “But the question is, are we better off today, now that the average person can instantly document wrongdoing?”

For all the complaining we do as individuals—about rude dinner companions who look down at their phone between every bite, or our own inability to sit quietly and read novels without impatience—almost nobody would dispute that smartphones have helped catalyze some of the most important social movements of the past few years. Black Lives Matter, Occupy Wall Street, the fight against sexual assault on college campuses: All have been facilitated, at least in part, by footage captured and distributed via smartphones and social media. We’ve already seen attempts to curb this newly democratized expression, and they’re often met with legal challenges—after protesters claimed police departments were using signal jammers to intercept transmissions from their cell phones, the FCC issued an advisory in 2014 calling the practice illegal, except for specially authorized federal agents. Yondr is a private company, not the state, and nobody has filed a suit against the company or its clients. But Gene Policinski, COO of the Newseum Institute and of the Institute’s First Amendment Center, thinks smartphone-disabling technology is going to be “litigated over and over again.” Phone-restricting devices like Yondr pouches seem innocuous, he says, “but they represent something that could turn potentially dangerous.” By way of a hypothetical: What if citizens had to submit their phones to Yondr pouches or something like them before attending a public city council meeting? It could be done in the name of safety, of course, but with a potentially massive silencing effect.

And never mind hypotheticals; even in the sorts of situations that Yondr pouches were originally intended for, the potential applications are troubling. What if there had been Yondr pouches at Hannibal Buress’ show when he told a joke that is widely credited with setting in motion the long-overdue takedown of Bill Cosby? And what are we to make of the fact that, within seven months of telling the Cosby joke, Buress hopped on the Yondr train and began preventing audiences from taping his shows?

Jay Stanley, from the ACLU, appreciates the ease and elegance of Yondr’s method, but he worries that this very easiness—the frictionless slip of the phone into the pouch, the quickness with which the bag locks—could lead someone to believe that they’re not really giving anything up. Dugoni recognizes the concerns: “The interplay between privacy and transparency isn’t simple, and surveillance and the ability to record others in the public sphere creates a uniquely modern dilemma.”

Still, he thinks we gain more than we lose by restricting cell phone use: “What is the etiquette of smartphones?” he asks. “You used to be able to smoke on a plane, and now you can’t even smoke on the street in certain places.” Dugoni believes legislation restricting cell phone use in certain public areas is inevitable too. “There are already phone-free bars,” he says, referring to venues that block cellular signals as a way of encouraging sociability. “And we’re going to have to determine where phones should be used as we answer a radically new question: What does it mean to be a human in the world with a smartphone in your pocket?”

At the end of Chris Rock’s set, we all herded out of the theater. Security guards were near the exit to snap open the pouches. Reunited with our phones, we feverishly tapped away, while bumping into each other and rolling our eyes. I had received a few work emails, but nothing urgent. My husband had texted me, wondering when I’d be home. Only a few hours had passed. But it felt like 10.


The Free Speech Issue

  • Tech, Turmoil, and the New Censorship: Zeynep Tufekci explores how technology is upending everything we thought we knew about free speech.
  • “Nice Website. It Would Be a Shame if Something Happened to It.”: Steven Johnson goes inside Cloudflare’s decision to let an extremist stronghold burn.
  • Everything You Say Can and Will Be Used Against You: Doug Bock Clark profiles Antifa’s secret weapon against far-right extremists.
  • The Best Hope for Civil Discourse on the Internet … Is on Reddit: Virginia Heffernan submits to Change My View.
  • 6 Tales of Censorship: What it’s like to be suspended by Facebook, blocked by Trump, and more, in the subjects’ own words.

Alice Gregory is a writer in New York. This is her first story for WIRED.

This article appears in the February issue.

Source: https://www.wired.com/story/free-speech-issue-yondr-smartphones/

Inside Cloudflare’s Decision to Let an Extremist Stronghold Burn

In the fall of 2016, Keegan Hankes, an analyst at the Southern Poverty Law Center, paid a visit to the neo-Nazi website the Daily Stormer. This was not unusual; part of Hankes’ job at the civil rights organization was to track white supremacists online, which meant reading their sites. But as Hankes loaded the page on his computer at SPLC’s headquarters in Montgomery, Alabama, something caught his eye: a pop-up window that announced “Checking your browser before accessing … Please allow up to 5 seconds.” In fine print, there was the cryptic phrase “DDoS protection by Cloudflare.” Hankes, who had worked at SPLC for three years, had no idea what Cloudflare was. But soon he noticed the pop-up appearing on other hate sites and started to poke around.

There’s a good chance that, like Hankes, you haven’t heard of Cloudflare, but it’s likely you’ve viewed something online that has passed through its system. Cloudflare is part of the backend of the internet. Nearly 10 percent of all requests for web pages go through its servers, which are housed in 118 cities around the world. These servers speed along the delivery of content, making it possible for clients’ web pages to load more quickly than they otherwise would. But Cloudflare’s main role is protection: Its technology acts as an invisible shield against distributed denial of service (DDoS) attacks—hacker campaigns that disable a website by overwhelming it with fake traffic. The company has more than 7 million customers, from individual bloggers who pay nothing for basic security services to Fortune 50 companies that pay up to a million dollars a year for guaranteed 24-hour support.

Hankes wanted to learn something about Cloudflare’s business, and what really interested him was finding out who Cloudflare was protecting. After a few months of research, he felt confident he’d uncovered something important, and on March 7, 2017, he penned a blog post that denounced Cloudflare for “optimizing the content of at least 48 hate websites.” Those sites included Stormfront, the grandfather of white-nationalist online message boards, and the Daily Stormer, at that time one of the most important hate sites on the internet. A virulently anti-Semitic publication, the Daily Stormer was founded in 2013 by a thuggishly enigmatic white supremacist named Andrew Anglin. (“Total Fascism” was the upbeat name of one of his earlier publications.)


Without Cloudflare’s protection, the Daily Stormer and those other sites might well have been taken down by vigilante hackers intent on eliminating Nazi and white-supremacist propaganda online. Hankes and the SPLC weren’t accusing Cloudflare of spouting racist ideology itself, of course. It was more that Cloudflare was acting like the muscle guarding the podium at a Nazi rally.

Matthew Prince, the 43-year-old CEO of Cloudflare, didn’t bother responding to the SPLC’s pointed accusation. In fact, he has only the haziest recollection of hearing about it. He might have seen a mention on Twitter. He’s not sure. Prince is a genial, Ivy League–educated Bay Area resident who once sat in on lectures by a law professor named Barack Obama—the type of person you would expect to have a vivid impression of being denounced by a prominent civil rights organization. But for Prince the criticism was nothing new. At Cloudflare, he was in the business of protecting all kinds of clients, including some whose views vaulted way outside the boundaries of acceptable discourse. He’d already been accused of helping copyright violators, sex workers, ISIS, and a litany of other deplorables. It was hardly a surprise to him that neo-Nazis would be added to the list. Come late summer, however, he would no longer be able to take that breezy attitude. Prince didn’t realize it at the time, but that SPLC blog post was the first indication of the trouble to come. Five months later, Prince would be forced to make a very public decision about the Daily Stormer, one made against his own best judgment and that presented some of the thorniest and most perplexing challenges to free speech since the ACLU defended neo-Nazis who planned to march in Skokie, Illinois, 40 years ago.

How did an internet infrastructure company get locked into a vital free-speech dispute with a bunch of Nazis? That is a story that begins, like so many great tales, in the cubicles of San Francisco and the brothels of Istanbul.

Matthew Prince struggled to stay true to his free-speech principles as CEO of Cloudflare. (Photograph: João Canziani)

In 2010, when Cloudflare first started, long before it counted customers in the millions, Prince and his cofounders, Michelle Zatlyn and Lee Holloway, installed a bell in their cramped SoMa offices. Whenever someone signed up for Cloudflare’s services, the bell would ring and the 10 or so employees would all drop what they were doing to see who their new customer was.

One day in 2011, the bell rang and Prince went to see who had signed up. “It was the moment where I was like, ‘We need an employee handbook.’ ” The new customer was a Turkish escort service that needed cyber-protection for a promotional website. But it was only the first. Within two weeks, some 150 Turkish escort sites had signed up for Cloudflare’s services. The young outfit had somehow become a go-to service for the Istanbul sex trade.

Curious about this niche-business popularity, a Cloudflare employee contacted the webmaster at one of the escort sites. The webmaster had heard about Cloudflare from a friend who read about it on TechCrunch, and he explained why he sought the company’s protection: Orthodox Muslim hackers had decided to take the law into their own hands and wipe the escort sites off the web. They had largely succeeded, until Cloudflare entered the picture.

To understand why the Turkish webmasters flocked to Cloudflare, you have to understand a bit more about where the company interjects itself into the invisible and near-instantaneous flow of bits that travel between an ordinary user and the servers that deliver the information. When you type a URL into a browser and hit Return, that request first goes out to a domain name server, which translates the human-readable URL (call it www.turkishescort.com) into the numerical IP address of the web server that’s hosting the content. At that point, a packet of bits is dispatched from the domain name server over to the hosting server, and the content you’ve requested is delivered back to your browser.
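
That first hop, turning a name into an address, is simple enough to try yourself. Below is a minimal sketch in Python of the lookup step described above; the hostname is purely illustrative, and for a site behind Cloudflare the address that comes back belongs to Cloudflare’s network rather than to the server that actually holds the content.

    import socket

    # Step one of any web request: ask DNS which IP address answers for this
    # hostname. The name below is an example, not a real Cloudflare customer.
    hostname = "www.example.com"
    ip_address = socket.gethostbyname(hostname)
    print(f"{hostname} resolves to {ip_address}")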

The trouble is that “you” might not be you at all. Your computer might be infected with malware that has commandeered it to serve in an army of zombie machines—a botnet—that hackers use to execute DDoS attacks. Your seemingly idle laptop might be helping to swamp an innocent website with thousands of requests per second, overloading the target’s servers and making it impossible for legitimate requests to get through. That’s where Cloudflare comes in.

Cloudflare protects against these attacks by inserting itself between the browser and host servers that contain the content. From the user’s perspective, the experience is frictionless: You hit the bookmark for, say, a local newspaper and within a split second your screen fills with high school sports scores and reports on the mayoral race. But behind the scenes, your request for information has been filtered through one of Cloudflare’s data centers.

“At that data center,” Prince explains, “we’ll make a series of determinations: Are you a good guy or a bad guy? Are you trying to harm the site? Or are you actually a legitimate customer? If we determine that you’re a bad guy, we stop you there. We act essentially as this force shield that covers and protects our customers.”

During a visit in September to Cloudflare’s headquarters—now in more expansive offices in SoMa—Prince took me to the company’s network operations center, where monitors line the walls, each filled with graphs and brightly colored blocks of text. These represented hundreds of different attacks being attempted in real time across the Cloudflare network. Cloudflare separates the good guys from the bad using pattern recognition. If it sees a familiar nefarious pattern breaking out, it will stop it, like a human immune system attacking a virus. The cyberattackers who went after the Turkish brothels exhibited a distinctive pattern; at Cloudflare, that fingerprint was dubbed the “TE attack,” as in Turkish Escorts.
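
Cloudflare has not published the internals of that pattern matching, but the general shape of signature-based filtering is easy to sketch. The toy Python example below checks a request against a small library of known attack fingerprints and decides whether to let it through, roughly the “good guy or bad guy” call Prince describes; every name, threshold, and signature in it is invented for illustration, not taken from Cloudflare.

    from dataclasses import dataclass

    @dataclass
    class Request:
        """A few features a filtering proxy might inspect."""
        path: str
        user_agent: str
        requests_per_second: float

    # Hypothetical fingerprints of previously seen attacks, in the spirit of
    # the "TE attack" signature described above. Real systems use far richer
    # features; these values are invented purely for illustration.
    KNOWN_SIGNATURES = [
        {"name": "TE-style flood", "user_agent": "BadBot/1.0", "min_rps": 500.0},
        {"name": "login brute force", "path_prefix": "/wp-login", "min_rps": 50.0},
    ]

    def classify(request: Request) -> str:
        """Return a block decision naming the matching signature, or 'allow'."""
        for sig in KNOWN_SIGNATURES:
            if request.requests_per_second < sig["min_rps"]:
                continue
            if "user_agent" in sig and sig["user_agent"] != request.user_agent:
                continue
            if "path_prefix" in sig and not request.path.startswith(sig["path_prefix"]):
                continue
            return f"block ({sig['name']})"
        return "allow"

    print(classify(Request("/", "Mozilla/5.0", 2.0)))    # allow
    print(classify(Request("/", "BadBot/1.0", 900.0)))   # block (TE-style flood)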

About a year after the first Turkish escort site became a customer, Prince got a call from someone he calls a panicked “Dutch gentleman.” The caller was responsible for the website of the wildly popular Eurovision song contest. It was two days before the final showdown of the television talent show, and the site had been taken offline by a DDoS attack. When Cloudflare’s security team looked at the data, they saw the family resemblance immediately: It was the TE attack. The Eurovision contest that year was being held in Azerbaijan, a predominantly Muslim country, and the hackers had decided that Eurovision should be knocked offline. Having seen the attack before, Cloudflare was able to get the site up and running in less than 30 minutes, plenty of time before the final rounds.

Fast-forward another six months. Prince was summoned to a big financial firm in New York to help analyze a recent attack on its servers. In the conference room, the finance team slid their log files across the table to Prince and his colleagues. As they scanned the logs, smiles of recognition passed across their faces. It was the same maneuver the Turkish Escort attackers had used.

The TE attacks didn’t just help Cloudflare impress Wall Street titans, they also taught the company something about the value of protecting objectionable content. A site that someone, somewhere, deeply despises is the type of site that is likely to be attacked. And when sites are attacked, Cloudflare gets better at what it does; its pattern recognition improves. “Putting yourself in front of things that are controversial actually makes the system smarter,” Prince says. “It’s like letting your kids roll around in the dirt.” This is one of the reasons it makes sense for Cloudflare to offer a free self-service platform: By widening the pool of potential invasive agents, it makes the immune system more responsive. “It’s not obvious that a bunch of escorts that aren’t paying you anything are good customers. It’s not obvious that having people who get attacked all the time—including neo-Nazi sites—that you would by default want them to be on your network. But we’ve always thought the more things we see, the better we’re able to protect everybody else.”

Cloudflare has now logged millions of different kinds of attacks, each, like TE, with its own recognizable signature. This growing database of malice ultimately brought Cloudflare to its central, if largely invisible, position as an internet gatekeeper. The day before I visited Cloudflare’s offices, 22,000 new customers signed up for its services. Needless to say, there is no longer a bell ringing for each one that signs up.

Matthew Prince grew up in Park City, Utah. His father started out as a journalist and later became a drive-time radio host, and Prince has memories of “sitting around the dinner table, talking about the importance of the First Amendment and freedom of speech.” As an undergraduate at Trinity College in Hartford, Connecticut, Prince briefly considered majoring in computer science before deciding on English literature. He also founded a digital-only magazine. He went on to study law at the University of Chicago, where he attended those lectures by Professor Obama, before going to Harvard Business School, where he met Zatlyn.

Prince’s eclectic background gave him the confidence to grapple with Cloudflare’s speech dilemma at all its various layers: As a trained lawyer, he understood the legal implications of corporations policing speech acts; as the founder of a tech company, he was familiar with the technical abilities as well as business imperatives of dealing with customers; and as a liberal-arts-educated son of a journalist, he thought a lot about what kind of rhetoric is acceptable online. Prince felt strongly that the invisible infrastructure layer of the internet, where Cloudflare operated, should not be the place to limit or adjudicate speech. In Prince’s governing metaphor, it would be like AT&T listening in on your phone conversations and saying, “Hey, we don’t like your political views. We’re kicking you off our network.”

In the years after the launch of Cloudflare, he argued, public-intellectual style, for the importance of preserving free speech online and the neutrality of the infrastructure layer of the internet. It was partly that history that allowed Prince and his colleagues to dismiss the initial investigation by the SPLC. “We’re always having controversies about things,” Zatlyn says.

But Prince’s law school certitude would soon be challenged by another, even uglier, twist involving the Daily Stormer. In the process of standing guard outside its clients’ websites, Cloudflare’s filters sometimes trap legitimate complaints against these sites, the majority of which involve copyright infringement. Someone uploads a catchy song to a website without permission from the artist. Eventually, the songwriter takes notice, but her lawyer can’t present a cease-and-desist notice because the copyright violator is behind the Cloudflare shield. And so, over time, Cloudflare had developed a policy of passing along any complaint to its customers and letting them deal with the requests.

But a system designed to address copyright infringement proved to be less adept at dealing with Nazis. Ordinary people disturbed by the hate speech on the Daily Stormer would seek to register their complaints about the site to Cloudflare, the host. But instead of directly addressing the complaint, Cloudflare, following its usual policy, would pass those complaints, with the senders’ contact information, along to the Daily Stormer.

In early May, another story came out—one that Cloudflare could not ignore. The article, by ProPublica, revealed that people who had complained to Cloudflare about the Daily Stormer were getting harassing and threatening calls and emails, including one that told the recipient to “fuck off and die.” The ProPublica piece quoted a blog post under Anglin’s name: “We need to make it clear to all of these people that there are consequences for messing with us. We are not a bunch of babies to be kicked around. We will take revenge. And we will do it now.” It looked as if Cloudflare had ratted out decent people to an army of fascist trolls.

Recognizing that it had a legitimate problem on its hands that couldn’t be erased by invoking free speech, Cloudflare quickly altered its abuse policy, giving users the option of not forwarding their identity and contact information. ProPublica also reported Anglin saying that the hate site paid $200 a month for its Cloudflare protection, a point Cloudflare would not comment on. Despite Cloudflare’s pride in protecting any site, no matter how heinous, Prince says he was caught off guard by the Daily Stormer’s attacks on the people who complained. “What we didn’t anticipate,” Prince told me, ruefully, “was that there are just truly awful human beings in the world.”

A few months later, on Friday, August 11, a group of torch-wielding white supremacists marched in the streets of Charlottesville, Virginia; the next day a counterprotester named Heather Heyer was run over in what appeared to be an act of political violence. That afternoon, without mentioning Heyer’s death, Donald Trump blamed the violence in Charlottesville on “many sides,” and the whole country was suddenly engulfed by the question of what we were willing to do to stand up to Nazis. The Daily Stormer posted a repulsive piece under Anglin’s byline with the headline “woman killed in road rage incident was a fat, childless 32-year-old slut.” It only got worse from there.

After reading the post, an anti-fascist vigilante hacker known as the Jester tweeted, “Nice site, Andrew. Be a shame if something ‘happened’ to it.” But the threat was empty as long as Cloudflare continued to offer its protection. “That night I was at home and I get this DM on Twitter from the Jester,” Prince says. “And he’s saying, ‘Hey, these guys are jerks. I want to DDoS them off the internet. Will you get out of the way?’ ” Prince says he responded with a link to a speech he’d given at an internet security conference defending free speech principles. (The Jester did not respond to a request for comment.) Meanwhile, the wrath against Cloudflare was rising. “All of a sudden, a ton of people were yelling at us on Twitter,” Prince recalls. The online service GoDaddy, which maintained the Daily Stormer’s domain, announced it was canceling this arrangement. The Daily Stormer tried to move its domain registration to Google Domains but was denied. Cloudflare seemed to be the last major player willing to do business with the neo-Nazi site, appearing once again to go out of its way to protect hate speech.

On Monday afternoon, Prince and his management team gathered to address the growing controversy. The backlash weighed heavily on the minds of the rank and file at Cloudflare. “There was definitely water-cooler talk,” recalls Janet Van Huysse, who oversees employees and human resources. “We were all over the news. People were struggling. There were a lot of people who were like, ‘I came to this company because I wanted to help build a better internet, and we believe fiercely in a free and open internet. But there are some really awful things currently on the web, and it’s because of us that they’re up there.’ ” A range of feelings would emerge during a town-hall–style meeting for employees conducted later in the week. One attendee said to Prince, “I don’t have a good answer for what we should say going forward as a proud Cloudflare employee. What should I say?” Another asked why they would consider kicking neo-Nazi sites off the platform, but not alleged ISIS sites.

On Tuesday night, Prince was hosting a dinner for Cloudflare interns at his home in San Francisco. At one point during the event, Cloudflare’s general counsel, Doug Kramer, pulled Prince aside and said, “It seems like this keeps ratcheting up.” Checking his phone surreptitiously during the meal, Prince noticed that fellow technologist Paul Berry, founder and CEO of a social media service called RebelMouse, had taken to Twitter to denounce Cloudflare for hosting “Nazi hate content that even @GoDaddy took down.”

After the interns left his apartment, Prince and Tatiana Lingos-Webb, his fiancée, cleaned up and did the dishes. (The two have since married.) Stung by Berry’s tweet, Prince started bemoaning the ease with which people seemed willing to abandon the basic ideals of free speech. “Maybe there is something different about Nazi content,” Lingos-Webb ventured. “And I looked at her and said, ‘You too?’ ” Prince recalls.

“I went to bed angry,” Prince says, “and woke up in the morning still angry.” Checking Twitter, he discovered that someone on the Daily Stormer site—in full frog-and-scorpion mode—had decided to antagonize the one service left supporting it. An anonymous comment about the site’s technical challenges noted the moves by GoDaddy and Google to oust the Daily Stormer: “They succeeded in everything except Cloudflare, whom I hear are secretly our /ourpeople/ at the upper echelons.” Overnight, Prince and his colleagues had been welcomed into the ranks of practicing white supremacists.

Prince called Berry to walk through his reasoning for continuing to protect the Daily Stormer. Prince had known the RebelMouse CEO for years through the technology conference circuit and respected his opinions. It was a tense call. Berry told him he understood the predicament. “But when you work that fucking hard to build something that’s that successful, you get to choose who uses it,” Berry recalls telling Prince. “And you get to set a code of conduct that leaves it clear for people—a code of conduct that says we will not support white supremacy, racism, hate.”

“I’d try to make my argument,” Prince says now of the conversation, “and Paul would say, ‘It doesn’t matter, Nazis.’ And I’d say, well, it kind of does, because if the phone company is listening in on my phone calls and then decides that they don’t like what I’m talking about and starts pulling the plug, that seems creepy to me.”

Hanging up with Berry, Prince got in the shower. He had barely slept; his friends seemed to be taking the wrong side of the free-speech argument. But he could see where the controversy was heading: If he kept protecting the Daily Stormer, the inevitable next step was a customer boycott of Cloudflare, with real business consequences. But if he kicked them off, “I thought through the parade of horribles that would follow,” he recalls. Suddenly every controversial site on the Cloudflare network would be subject to review, and Cloudflare would have helped establish a precedent for deep infrastructure services regulating speech. As he stood in the shower, all these thoughts swirled around his head. “It was literally one of those lean-your-head-against-the-wall moments—like, what the hell are we going to do?”

“But then I had a thought: Maybe we can kick them off, and then talk about why that’s so dangerous. Maybe that can change the conversation.” Prince would betray his principles and then make the betrayal into an argument for why those principles matter.

Cloudflare cofounder Michelle Zatlyn says the company’s very nature attracts controversy. (Photograph: João Canziani)

“Matthew called me at around 10 that morning, and said, ‘We’re kicking them off,’ ” Zatlyn recalls. She’d gone to bed feeling Cloudflare was not the right place to censor and assuming they would stick to the company policy. “I was speechless, a little stunned. ‘I’m surprised to hear you say this. I was not expecting that. But OK.’ ”

By late morning, the company’s trust and safety team had completed the procedures to remove the Daily Stormer from the Cloudflare network. And Prince drafted a blog post. It began in a just-the-facts mode: “Earlier today, Cloudflare terminated the account of the Daily Stormer.”

“Our terms of service reserve the right for us to terminate users of our network at our sole discretion.” The tipping point for Prince was the suggestion on the Daily Stormer site that top managers at Cloudflare “were secretly supporters of their ideology.” But then Prince took a rhetorical twist: “Now, having made that decision, let me explain why it’s so dangerous.”

Prince spoke about the peril posed by DDoS attacks. We might all agree, Prince argued, that content like the Daily Stormer shouldn’t be online, but the mechanism for silencing those voices should not be vigilante hackers.

His bigger argument was about the danger of private companies like Cloudflare (not to mention Google or Amazon Web Services) determining what constituted acceptable speech. “Without a clear framework as a guide for content regulation,” Prince explained, “a small number of companies will largely determine what can and cannot be online.” Perhaps his most striking point came in a separate memo he wrote to his staff. “Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the internet. No one should have that power.”

Prince’s dilemma over the Daily Stormer has been present in net culture from the early days of online communities. But where debates about what forms of speech should be forbidden often seemed academic and remote, today they are at the center of social discourse. White-supremacist movements that once were deemed beyond the pale are more vocal, their ideas spreading openly into the mainstream, with political leaders not always willing to condemn them. Hankes, of the SPLC, says that even fringe hate sites “can have a tremendous impact” because of social media’s ability to amplify extreme ideas. “Our position has been that pretty much everyone south of the internet service providers”—in other words, anyone hosting or protecting online content—“has the responsibility to take a stance on these issues,” Hankes says, “or be ready to answer for the consequences of people who are taking advantage of their services.”

The original free-speech ethos that shaped the internet has also grown shakier: Back then, strong First Amendment values were one of the few areas of agreement among the libertarians and progressives who shaped the early culture. Today, that alliance is less stable. Aggressive anti-hate-speech movements on college campuses have aroused ire among libertarians, and among progressives there is a growing sense that Big Tech has become a breeding ground for bile. Every other week, it seems, there’s another flare-up over Twitter’s terms of service and the rampant harassment and abuse that plagues that platform.

“Honestly, I am so sad,” Berry told me. “I grew up in the Valley; I’ve been writing code since I was 10, and I believed in technology.” But now, Berry says, he sees money to be made as a platform company triumphing over civic decency. “Right now we have a tension between financial success and actually being human.”

The immense size of those gatekeepers—like Google, Facebook, Twitter and, in its own way, Cloudflare—has also challenged the older vision of cyberspace as a realm of unchecked speech. There have been dark wells of hate online since the Usenet era, but back then, misanthropy was distributed across thousands of different platforms. Even if you felt some speech was objectionable enough to silence, it was a practical impossibility to get rid of it all. No single entity could silence an idea. But in a world where Facebook and Google count their audiences in the billions, a decision by one of those big players could, essentially, quiet an unpopular voice. In December, in fact, Twitter started enforcing new rules to suspend accounts of people who use multiple slurs or racist or sexist tropes in their profile information.

Prince is aware of that power, but he also carefully titrates the various elements in the internet concoction. He argues that there is a fundamental difference between sites like Facebook or Twitter, which provide content, and deep infrastructure like hosting or security services. For Prince, the relative invisibility of Cloudflare to ordinary consumers makes it the wrong place to address speech. “I think that gives us a framework to say infrastructure isn’t the right place to be regulating content,” he says. “Facebook and YouTube still may be—and it’s an easier question for them, because they’re advertising-supported companies. If you’re Procter & Gamble, you don’t want your ad next to terrorist content, and so the business model and the policy line up.”

If this sounds like passing the buck, Prince’s argument does get philosophical support from civil liberties groups. The Electronic Frontier Foundation has taken the stand that what it calls “intermediaries”—services like Cloudflare and GoDaddy that do not generate the content themselves—should not be adjudicating what speech is acceptable. The EFF has a strong presumption that most speech, even vile speech, should be allowed, but when illegal activity, like inciting violence or defamation, occurs, the proper channel to deal with it is the legal system. “It seems to me that the last thing we should be doing is having intermediaries deputizing themselves to make decisions about what’s OK,” says Corynne McSherry, legal director of the EFF. “What law enforcement will tell you is that it’s better for them to be able to keep track of potentially dangerous groups if they’re not pushed down into the dark web.” She adds: “I want my Nazis where I can see them.”

In the months following the Charlottesville weekend, the Daily Stormer bounced around a series of websites, briefly appearing on Russian and then Albanian domains with new URLs. Prince himself has grown more certain that his company should not be in the speech-regulation business. Since ejecting the Daily Stormer, Cloudflare has received more than 7,000 complaints about sites in its network. “The weirdest was a totally nonpartisan cooking blog,” Prince says. “We’ve considered trying to make some of the recipes, to see if they’re just really terrible.” Though Prince’s blog post vowed to establish a framework for managing objectionable sites on its network, little has changed. “We’re still having the debate, but I think the likely outcome is that as an infrastructure company, we’re going to err on the side of being neutral and not do what we did to the Daily Stormer again,” Prince says now.

Cloudflare can legitimately embrace free speech tradition in the defense of its policy. But it is also protecting its business interests. Network software and algorithms have allowed Big Tech to organize and distribute (and in Cloudflare’s case, protect) staggering amounts of information. Looking for patterns in DDoS attacks, detecting the signatures of the Turkish Escort attackers—these are the kinds of problems that can be solved at scale with code. But evaluating 7,000 websites for, say, potential incitements to violence is not something that lends itself to a final determination by software alone; it invariably requires human judgment. Facebook and Google have confronted this issue in the past year with the infiltration of Russian ads and fake news into their feeds and screens. But humans are expensive. Only after public outcry did Facebook and YouTube pledge to hire thousands of human moderators to deal with suspicious ads and with videos that are inappropriate for children. Prince may be right that a service like Cloudflare’s is the wrong place to make those assessments, but it’s also convenient: Opting out of that obligation makes his business much easier to run.

One still-unresolved debate at Cloudflare is about how the company should memorialize the decision to eject the Daily Stormer. “We do a transparency report twice a year, and one of the things that we have is a list of ‘things we have never done.’ ” It’s a short list, and one of its key statements is, “We have never terminated a customer or taken down content due to political pressure.” That is no longer true. “So we’re having this conversation now internally,” Prince says, “about whether we have to remove that.”

As of December, Prince says, the company was leaning toward keeping the statement but adding an asterisk that links to a full account of the Daily Stormer affair. “So when the next controversy comes along, we’ll be able to point to that and say, ‘This was the one time we did that, and here are the dangers it creates.’ ” The Daily Stormer, however, has not been invited back.


The Free Speech Issue

  • Tech, Turmoil, and the New Censorship: Zeynep Tufekci explores how technology is upending everything we thought we knew about free speech.
  • Everything You Say Can and Will Be Used Against You: Doug Bock Clark profiles Antifa’s secret weapon against far-right extremists.
  • Please, Silence Your Speech: Alice Gregory visits a startup that wants to neutralize your smartphone—and un-change the world.
  • The Best Hope for Civil Discourse on the Internet … Is on Reddit: Virginia Heffernan submits to Change My View.
  • 6 Tales of Censorship: What it’s like to be suspended by Facebook, blocked by Trump, and more, in the subjects’ own words.

Steven Johnson (@stevenbjohnson) is the author of 10 books, most recently Wonderland: How Play Made the Modern World.

This article appears in the February issue.

Source: https://www.wired.com/story/free-speech-issue-cloudflare/

Our Best Hope for Civil Discourse on the Internet Is on … Reddit

I had a view, and my view was this: Serial sexual abusers should submit to castration. Castration, I believed, would sideline the abuser’s compulsions and thus keep the world safe from him (or her lol). While castration hasn’t been tested on abusers in the Harvey Weinstein style, it’s been used with success on child molesters, bringing the recidivism rate, or so I’d read somewhere, from 75 percent to 2 percent. (Another upside: Castration is rumored to forestall male-pattern baldness.) Of course I meant an entirely bloodless course of hormone therapy. Not a hatchet. I’m not some harridan. The abusers would just get shot up with something called an anaphrodisiac, a brew to suppress androgens and other traces of Aphrodite in the blood.

My opinion was built on a couple of statistics, but less rational motivations were also in play. Like many who have held jobs, I’ve served my time in taxis and at happy hours showing down with groping goats in the garb of VIPs. I’ve either wised up to or aged out of this dispiriting cycle, but now, I imagined, with a touch of grandiosity, I might stop it dead. My view, if I really advocated for it, might not only redeem my own experiences, it would revise my earlier meekness with a Valkyrie-like reversal—and avenge the sisterhood.

Yet another contingency undergirded my pro-castration platform: a church-trained, perhaps sentimental worldview that even the worst among us can be delivered from evil—if not by prayer alone then by the ministrations of a compassionate endocrinologist. My hormone-therapy prescription was designed both to recognize the suffering of the sinner—he’s “sick” and treatable with medicine—and to punish him with that pitiless word. Castration.


So I had this opinion, and as you can tell I adored it; it made the crooked places in my brain straight and the rough places plain. As the opinion gave me comfort, I grew more tenacious. I amassed an arsenal made of words sharpened to a fine point. I was all but spoiling for a fight.

At the same time, something seemed sinister in my view. Castration? It was zealous. It was maybe mean. At once I realized: I dearly wanted to have my opinion changed.

Because, look, as righteous as I felt, my conscience was also appalled that I wanted to disable the testicles of any mother’s son, however much that son liked to masturbate into potted plants and force frottage on colleagues at the vending machine. To recommend that those in power sterilize, spay, and geld the people they don’t approve of—that seems the very essence of barbarism. Had my desire for revenge made a Mengele of me? Worse still, was I trying to pass off my personal revenge fantasy as high-minded and rational, inspired by Google searches I dignified as “scientific data”? And so I signed on to Change My View, a section of Reddit where people post opinions and ask to have them changed.

Change My View was the brainchild of Kal Turnbull, a musician who was just 17 when he launched the subreddit in 2013, roughly three years before intransigence became the guiding principle of all debate everywhere. As a high school senior, Turnbull could have been forgiven for digging in his heels on teen truisms like punk’s not dead or—he’s Scottish—alba gu bràth. Instead he rebelled against all sloganeering and groupthink.

“I was generally surrounded by people that all think similarly,” Turnbull told me by email from near Inverness, in the Scottish Highlands, where he records music in a farm shed. Back in 2013 Turnbull and his mates tended to discuss Breaking Bad, Scottish independence, and indie rock, but Turnbull won’t say what the group’s consensus on those things was, because he’s assiduous about avoiding bias now. “In the grand scheme of the world, we all thought similarly,” he told me. “This led me to wonder, what does someone actually do when they want to hear a different perspective or change their view?”

Turnbull didn’t want to attract the chippy you-talkin’-to-me? crowd that was already adequately represented on Reddit. He meant to populate his forum with people sincerely in quest of lively and honorable debate. At first Change My View did attract rancor and ad hominem brattery, but Turnbull was patient and true to his vision of civil discourse. He enlisted moderators from among the more fair-minded regulars, and for five years now they have policed not just name-calling, rudeness, and hostility but superfluous jokes and mindless agreement. (Turnbull deletes what he calls “low-effort” comments.)

Change My View looks like a standard subreddit, a message board on which threads are organized by topic. (The parent company of Condé Nast, which owns WIRED, holds a majority stake in Reddit.) Yes, you have to trudge through the Caledonian Forest of Reddit’s UX and, as usual, risk being hazed when you trespass against Reddit’s clubby customs. But it’s worth it. CMV is a little heath of reason.

If you have a view, you post it. You’re a “submitter.” Then those who aim to change your view roll in, posting their views of your view. These are “commenters.” Submitters are not supposed to look for fights on Change My View; that’s for … everywhere else on the internet. Instead CMV posters foreground their flexibility—and maybe some insecurity, which brings with it a poignant willingness to be transformed.

Once you submit a view, you’ve committed to a mental marathon. The rule that makes Change My View different from a freewheeling chat room is that a submitter is required to respond within three hours to brook respectful challenges to their view. You can’t just post and skedaddle for the day. If a submitter doesn’t respond to commenters in good time, they’re considered AWOL, insincere, or obdurate, and the board moves on.

So you train your attention on the topic, and stay and debate. In come the comments, raising questions and courteously testing your conviction. If you’re unmoved by the comments and refuse to modify your original submission, the debate comes to a close when commenters get tired of it. But if you are persuaded to change your view, and only when you decide it’s changed, you award a delta, the mathematical symbol for change, which is rendered by Option-J on a Mac. The delta goes to the commenter who you believe made you modify, or overturn, your view. To have your view changed or to change someone else’s view are both counted as victories.

Recently a poster called Sherlocked_ plowed into a time-honored lion’s den: “I lean left but believe abortion should be illegal in most cases.” What appeared, however, were not lions at all. Instead, gentlemanly commenters filed in to make debating-society points about physical autonomy. Sherlocked_ heard them out, asking for clarification here and there, but refused to budge.

Finally Penny_lane67 moved the subject from the status of the fetus to the woman, saying that pregnancies can affect women in many ways—some physical, some otherwise. Sherlocked_ acknowledged that this thought was new to him. He mulled it over, ruminating in a few paragraphs.

At last he wrapped up the thread in a small internet miracle: “As I type this and think about it more I think you’re right, even if it wasn’t abuse and it was simply an accidental pregnancy, there is a chance the pregnancy could cause psychological harm to the mother. And because that would be so hard to diagnose, if I allow abortions in those cases I think I effectively have to in all cases.” Delta. ∆

Now I wanted nothing more than to have Sherlocked_’s intellectual curiosity, flexibility, generosity, broad-mindedness. But I wasn’t sure I could pull it off. I entered Change My View with trepidation. I felt like I was submitting to chemical castration myself.

Kal Turnbull, who is now 22, created the Change My View subreddit in 2013. (Photograph: Kate Peters)

Turnbull’s good gardening has let a thousand flowers bloom, and what’s astounding about Change My View is that no single radioactive topic—not Trump, Brexit, sex, guns—has overrun it. Instead, eclectic subjects, most far from the headlines, pile up like a tone poem. Submissions include “Chiropractors are pseudo-scientific BS,” “Palestine will be completely annexed by Israel within 50 years,” and “In Mrs. Doubtfire (1993) Daniel is the villain.”

The diversity supplies a surge of faith in our fellows. In our era of idées fixes it’s almost disorienting to read an opinion that’s held lightly, so lightly it’s presented expressly for overhaul. Submitters here are by definition skeptical of their own views or otherwise dislike holding too fast to them. But initially I couldn’t fathom how to phrase a view as pre-undermined and prime for demolition. That is, until I started looking closely at the submitted views, which, as in the case of my castration view, contained hints of minds at war with themselves. The submitter who finds chiropractors quacks seemed to hope one might relieve their joint pain, while the Mrs. Doubtfire connoisseur, who took the controversial stand that lovable Daniel (Robin Williams) is the villain of the piece, appeared mostly to want to match wits with other fans of the film. As for the bold opiner on Israel, maybe this person feared for the future they nonetheless foresaw and was hoping someone would disabuse them of the prophecy. Sometimes an opinion seems like a burden you long to lay down.

If submitting is an act of trust, it follows that commenting on a submission is an act of dominance. Commenters on Change My View are a much more familiar internet type than are submitters, whom they far outnumber. After all, they prefer being right to doubting themselves. They also like debate, persuasion, and the sweet, swift QED of winning an argument. They crave those deltas.

When I first heard about the preponderance of commenters, I wondered whether CMV simply reproduced the power dynamics of ordinary internet shouting matches, with the sole innovation that it had found people, like me, entirely willing to play the fish at the poker table. I pushed Turnbull on this. “Those who are good at challenging views would not necessarily be good at being challenged themselves,” he admitted.

“Only one gets to be right!” I persisted, seeing a chance to win.

And that’s when Turnbull—who at 22 is less than half my age—opened my eyes. His reasoning instantly modeled exactly the civil, and enjoyable, discourse he’s promoting.

“Assuming the view change is correct, those who have gained new perspective also ‘get to be right,’ ” he wrote. He even wishes we were all more pleased when we find out we’re wrong about something. “I would try to celebrate it,” he went on, “but I agree it’s not always as simple as this. It seems to be in our nature to focus on how we were wrong over the fact that we’re now right (as if we can’t be works in progress), and we often attach our egos to what we believe. This is an idea we are trying to challenge at CMV. A view is just how you see something, it doesn’t have to define you, and trying to detach from it to gain understanding can be a very good thing.”

Racking up deltas is how you get on the leaderboard at CMV. But in some ways, the subreddit rewards change on either side. One of the highest scorers in delta acquisition to date is one Brett W. Johnson, a management consultant, Eagle Scout, and member of Mensa based in Houston.

Johnson emailed me at length explaining that he believes in regularly challenging his own views, and Change My View is the first place he has discovered where you can demonstrate a willingness to change course without being perceived as weak. “In many places, if someone is open to having their mind changed on an issue, they are often met with scorn or ridicule for not already believing the alternate view,” he wrote. “There are few places I have ever found where someone can come in and say, ‘I’m not sure why people don’t think like I do—can anyone help me understand the other side?’ and be met with honest, civil, and straightforward discussion.”

Johnson is now a moderator on Change My View, and he understood my anxiety about submitting a view for challenge. I realized I was abashed both about my view and my reasons for holding it. And I was about to expose both things to the internet. What if my logic was found wanting?

He wrote, “Personally, I love being wrong! Being shown that I was wrong means that I get to remove a little pocket of ignorance I had and gain a more complete understanding of the world.”

My fear of being polemically impotent now seemed embarrassing. I was ready to love being wrong. So at last I submitted my case for the chemical castration of sex abusers to Change My View. You have to post the reasons for your belief, however imminently erroneous; I did that. But I didn’t say why I was anxious about my view—that I feared I was a monster for holding it.

The commenters were exceedingly civil. With what seemed like plain curiosity, the first ones asked whether I imagined the men in question would have to have criminal convictions before they were considered serial abusers. I admitted I hadn’t thought of that; most of the men I had in mind were the ones who’d been exposed by extensive reporting, but they hadn’t been tried. I conceded that it could be an elective therapeutic treatment for men who acknowledged they were sexually compulsive and destructive, but compulsory castration would be appropriate only for convicts. That taught me that actually administering the kind of program I was advocating would be thorny.

Then Moonflower, who has been awarded 60 deltas, wrote, “The problem with any kind of permanent-physical-damage punishment is that occasionally an innocent person will be convicted, and these medications do carry health risks which it would be unethical to force upon a person who might turn out to be innocent.” I liked that Moonflower raised the specter of innocence among alleged sexual abusers without politics or stridency. In other forums—like, say, Twitter—anyone who extenuates sexual abuse is considered a traitor to the sisterhood. But “occasionally an innocent person will be convicted” was nothing but an acknowledgment of the imperfection of the criminal legal system. So far, I couldn’t tell anything about anyone’s political allegiances, gender, or cultural positioning; usually a conversation about sex, gender, and penises brings out the most entrenched ideologues. But here we were discussing logistical, practical, and ethical questions. It came to me in a flash: This had nothing to do with Trump!

That alone was a surprise. We were somehow free.

Damn do these people like to debate. ThomasEdmund84 pulled up as a fellow traveler: “I can’t believe this topic came up today (been debating this issue all morning).” I asked him how he and his people had framed the conversation, and he said, “The nature of the debate was quite complex—as best I could tell from the literature, chem castration is very effective in some people and ineffective in others—high chance of side effects in both. I think in the end worth a shot if the person agrees, unfair without.” There was something in the “as best I could tell” that suggested he knew he was fallible, and that was the house style on the forum. We’re doing the best we can, trying to get to the truth, and no one of us has a monopoly on it.

Eventually I awarded deltas to three commenters who had helped me modify my view: I now allowed that the hormone treatment for sexual abusers would have to be post-conviction, voluntary, and reversible. My opinion was no longer a “take” fitted to Twitter or an op-ed. It was a responsible perspective, honed in a collegial atmosphere. There was something else surprising about this gang. Not one of them had called me a castrating bitch.

In a culture of brittle talking points that we guard with our lives, Change My View is a source of motion and surprise. Who knew that my most heartening ideological conversation in ages would involve gonads, gender wars, and for heaven’s sake Reddit?

And in the end Change My View did change my view. It lifted—for a time, anyway—a set of persistent doubts about the sturdiness of my opinions. Yes, my opinions generally sound plausible. As a rule, I substantiate them. But occasionally I suspect with a shudder that I’ve conceived one in partisan bias, scattershot anxiety, or even outright malice. In short, I question my capacity to reason impartially. What if, in this case, my view was prompted exclusively by rage at widespread sexual mistreatment of women? Or even blind fury at men? I tried to see the bright side: At least I was questioning my beliefs and their underpinnings, which would make me fit right in on Change My View.

While I’ve been anxious about my moral character more times than I can count, I hadn’t realized that I was bringing all that private brooding to my first post on Change My View. What I wanted, in coming to CMV, was to drop my self-doubt—to be relieved of that view of myself.

In this I wasn’t alone. I suspected the antiabortion submitter had felt as I did, worried that his view of abortion was at odds with the rest of his ideals, and that the contradiction suggested something was wrong with him. Just as I feared that misandry motivated me to favor castration, this submitter, who said he was generally liberal, seemed anxious that in wanting to recriminalize abortion he was a closet misogynist.

Maybe what we share when we submit views for changing is not the view itself as much as those poltergeist doubts that haunt all of us—about our motives, our capacity to reason, our politics, our principles, even our essential goodness. It’s that profound vulnerability in users of the forum that makes Change My View such a trusting and rewarding community.

There’s something wrong with me. That was an opinion that felt like a burden I’d longed to lay down. That was a view it felt like a triumph to change.


The Free Speech Issue

  • Tech, Turmoil, and the New Censorship: Zeynep Tufekci explores how technology is upending everything we thought we knew about free speech.
  • “Nice Website. It Would Be a Shame if Something Happened to It.”: Steven Johnson goes inside Cloudflare’s decision to let an extremist stronghold burn.
  • Everything You Say Can and Will Be Used Against You: Doug Bock Clark profiles Antifa’s secret weapon against far-right extremists.
  • Please, Silence Your Speech: Alice Gregory visits a startup that wants to neutralize your smartphone—and un-change the world.
  • 6 Tales of Censorship: What it’s like to be suspended by Facebook, blocked by Trump, and more, in the subjects’ own words.

Virginia Heffernan (@page88), a WIRED contributor, is the author of Magic and Loss: The Internet as Art.

This article appears in the February issue.

Source: https://www.wired.com/story/free-speech-issue-reddit-change-my-view/
