The technology they like, no matter the social cost

“I was struck by how many of the wealthiest and most powerful figures in Silicon Valley — including some I knew — were now loudly backing Mr. Trump. ... Mr. Trump appeals to some Silicon Valley elites because they identify with the man. To them, he is a fellow victim of the state, unjustly persecuted for his bold ideas. Practically, he is also the shield they need to escape accountability. Mr. Trump may threaten democratic norms and spread disinformation; he could even set off a recession, but he won’t challenge their ability to build the technology they like, no matter the social cost.”
Why Do People Like Elon Musk Love Donald Trump? It’s Not Just About Money. New York Times Opinion Guest Essay by Chris Hughes, co-founder of Facebook and chair of the Economic Security Project. September 25, 2024.

A referendum on reality itself

There is perhaps no better place to witness what the culture of disinformation has already wrought in America than a Trump campaign rally.

Tony Willnow, a 34-year-old maintenance worker who had an American flag wrapped around his head, observed that Trump had won because he said things no other politician would say. When I asked him if it mattered whether those things were true, he thought for a moment before answering. “He tells you what you want to hear,” Willnow said. “And I don’t know if it’s true or not — but it sounds good, so fuck it.”

The political theorist Hannah Arendt once wrote that the most successful totalitarian leaders of the 20th century instilled in their followers “a mixture of gullibility and cynicism.” When they were lied to, they chose to believe it. When a lie was debunked, they claimed they’d known all along — and would then “admire the leaders for their superior tactical cleverness.” Over time, Arendt wrote, the onslaught of propaganda conditioned people to “believe everything and nothing, think that everything was possible and that nothing was true.”

Leaving the rally, I thought about Arendt, and the swaths of the country that are already gripped by the ethos she described. Should it prevail in 2020, the election’s legacy will be clear — not a choice between parties or candidates or policy platforms, but a referendum on reality itself.

The Billion-Dollar Disinformation Campaign to Reelect the President, by McKay Coppins, The Atlantic, March 2020

Facebook’s Frankenstein Moment

“But there may not be enough guardrails in the world to prevent bad outcomes on Facebook, whose scale is nearly inconceivable. Alex Stamos, Facebook’s security chief, said last month that the company shuts down more than a million user accounts every day for violating Facebook’s community standards. Even if only 1 percent of Facebook’s daily active users misbehaved, it would still mean 13 million rule breakers…”
Is This Facebook’s Frankenstein Moment? by Kevin Roose, New York Times, 21 September 2017
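The arithmetic behind Roose’s scale point is easy to reproduce. A quick sketch, assuming the roughly 1.3 billion daily active users Facebook reported around 2017 (that figure is implied by the quote, not stated in it):

```python
# Back-of-envelope check of the "13 million rule breakers" figure.
# Assumption: ~1.3 billion daily active users, roughly Facebook's
# reported DAU when the piece ran in 2017.
daily_active_users = 1_300_000_000
misbehaving_rate = 0.01  # "even if only 1 percent ... misbehaved"

rule_breakers = int(daily_active_users * misbehaving_rate)
print(f"{rule_breakers:,}")  # 13,000,000
```

The point of the exercise: at this scale, even a tiny misbehavior rate produces a population of bad actors larger than most cities.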

Four times the number of votes

In a Facebook experiment published in Nature that was conducted on a whopping 61 million people, some randomly selected portion of this group received a neutral message to “go vote,” while others, also randomly selected, saw a slightly more social version of the encouragement: small thumbnail pictures of a few of their friends who reported having voted were shown within the “go vote” pop-up.

The researchers measured that this slight tweak — completely within Facebook's control and conducted without the consent or notification of any of the millions of Facebook users — caused about 340,000 additional people to turn out to vote in the 2010 U.S. congressional elections.

(The true number may even be higher since the method of matching voter files to Facebook names only works for exact matches.)

That significant effect, from a single one-time tweak, is more than four times the number of votes that decided the 2016 U.S. presidential election in Donald Trump’s favor.

From Zeynep Tufekci's Twitter and Tear Gas (2017), page 157. The study published in Nature is available for free on PubMed Central.
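The “more than four times” comparison can be checked against the widely cited figure for 2016: Trump’s Electoral College win came down to roughly 77,744 votes across three states. A quick verification, treating the per-state numbers below as the commonly reported approximate certified margins:

```python
# Compare the experiment's turnout effect with the 2016 margin.
# Assumption: widely reported approximate certified margins in the
# three states that decided the Electoral College.
margins = {
    "Michigan": 10_704,
    "Pennsylvania": 44_292,
    "Wisconsin": 22_748,
}
extra_voters = 340_000  # additional turnout measured in the 2010 study

decisive_margin = sum(margins.values())
print(f"{decisive_margin:,} decisive votes")            # 77,744 decisive votes
print(f"{extra_voters / decisive_margin:.1f}x larger")  # 4.4x larger
```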

Facebook, Ferguson, and the Ice Bucket Challenge

On the evening of August 13 [2014], the police appeared on the streets of Ferguson in armored vehicles and wearing military gear, with snipers poised in position and pointing guns at the protesters. That is when I first noticed the news of Ferguson on Twitter—and was startled at such a massive overuse of police force in a suburban area in the United States.

On Twitter, among about a thousand people around the world that I follow, and which was still sorted chronologically at the time, the topic became dominant.

On Facebook's algorithmically controlled news feed, however, it was as if nothing had happened.

As I inquired more broadly, it appeared that Facebook’s algorithm may have decided that the Ferguson stories were lower priority to show to many users than other, more algorithm-friendly ones.

Instead of news of the Ferguson protests, my own Facebook news feed was dominated by the “ice-bucket challenge,” a worthy cause in which people poured buckets of cold water over their heads and, in some cases, donated to an amyotrophic lateral sclerosis (ALS) charity. Many other people were reporting a similar phenomenon.

Facebook's algorithm was not prioritizing posts about the “Ice Bucket Challenge” rather than Ferguson posts because of a nefarious plot by Facebook's programmers or marketing department to bury the nascent social movement. The algorithm they designed and whose priorities they set, combined with the signals they allowed users on the platform to send, created that result.

From Zeynep Tufekci's Twitter and Tear Gas (2017), page 155.

Hurting people at scale

Selected passages and quotes from Ryan Mac and Craig Silverman’s outstanding piece in BuzzFeed News, Hurting People At Scale: Facebook’s Employees Reckon With The Social Network They’ve Built

On July 1, Max Wang, a Boston-based software engineer who was leaving Facebook after more than seven years, shared a video on the company’s internal discussion board that was meant to serve as a warning.

“I think Facebook is hurting people at scale,” he wrote in a note accompanying the video. “If you think so too, maybe give this a watch.”

Most employees on their way out of the “Mark Zuckerberg production” typically post photos of their company badges along with farewell notes thanking their colleagues. Wang opted for a clip of himself speaking directly to the camera. What followed was a 24-minute clear-eyed hammering of Facebook’s leadership and decision-making over the previous year.

What the departing engineer said echoed what civil rights groups such as Color of Change have been saying since at least 2015: Facebook is more concerned with appearing unbiased than making internal adjustments or correcting policies that permit or enable real-world harm.

Yaël Eisenstat, Facebook's former election ads integrity lead, said the employees’ concerns reflect her experience at the company, which she believes is on a dangerous path heading into the election.

“All of these steps are leading up to a situation where, come November, a portion of Facebook users will not trust the outcome of the election because they have been bombarded with messages on Facebook preparing them to not trust it,” she told BuzzFeed News.

She said the company’s policy team in Washington, DC, led by Joel Kaplan, sought to unduly influence decisions made by her team, and the company’s recent failure to take appropriate action on posts from President Trump shows employees are right to be upset and concerned.

“These were very clear examples that didn't just upset me, they upset Facebook’s employees, they upset the entire civil rights community, they upset Facebook’s advertisers. If you still refuse to listen to all those voices, then you're proving that your decision-making is being guided by some other voice,” she said.

“[Zuckerberg] uses ‘diverse perspective’ as essentially a cover for right-wing thinking when the real problem is dangerous ideologies,” Brandi Collins-Dexter, a senior campaign director at Color of Change, told BuzzFeed News after reading excerpts of Zuckerberg’s comments. “If you are conflating conservatives with white nationalists, that seems like a far deeper problem because that’s what we’re talking about. We’re talking about hate groups and really specific dangerous ideologies and behavior.”
“Facebook is getting trapped by the ideology of free expression. It causes us to lose sight of other important premises, like how free expression is supposed to serve human needs.” — Max Wang

Replying to Wang’s video and comments, Facebook’s head of artificial intelligence Yann LeCun wrote,

“American Democracy is threatened and closer to collapse than most people realize. I would submit that a better underlying principle to content policy is the promotion and defense of liberal democracy.”

Other employees, like engineer Dan Abramov, have seized the moment to argue that Facebook has never been neutral, despite leadership’s repeated attempts to convince employees otherwise, and as such needed to make decisions to limit harm. Facebook has proactively taken down nudity, hate speech, and extremist content, while also encouraging people to participate in elections — an act that favors democracy, he wrote.

“As employees, we can’t entertain this illusion,” he said in his June 26 memo titled “Facebook Is Not Neutral.” “There is nothing neutral about connecting people together. It’s literally the opposite of the status quo.”

Zuckerberg seems to disagree. On June 5, he wrote that Facebook errs on the “side of free expression” and made a series of promises that his company would push for racial justice and fight for voter engagement.

The sentiment, while encouraging, arrived unaccompanied by any concrete plans. On Facebook’s internal discussion board, the replies rolled in.

Social media is a nuance destruction machine…
— Jeff Bezos, in testimony at an antitrust hearing of the US House Committee on the Judiciary, 29 July 2020. Via Geekwire

The full quote, in response to a question about so-called “cancel culture”, was, “What I find a little discouraging is that it appears to me that social media is a nuance destruction machine, and I don’t think that’s helpful for a democracy.”

Brave New Workplace

1980:

The computerized control of work has become so pervasive in Bell Telephone's clerical sector that management now has the capacity to measure how many times a phone rings before it is answered, how long a customer is put on hold, how long it takes a clerk to complete a call. …Each morning, workers receive computer printouts listing their break and lunch times based on the anticipated traffic patterns of the day. …Before computerization, a worker's morning break normally came about two hours after the beginning of the shift; now, it can come as early as fifteen minutes into the working day. Workers cannot go to the bathroom unless they find someone to take their place. If you close your terminal, right away the computer starts clacking away and starts ringing a bell.
From Brave New Workplace by Robert Howard, in Working Papers for a New Society, Cambridge Policy Studies Institute, November-December 1980 (as cited in New Information Technology: For What by Tom Athanasiou, Processed World, April 1981)

The essay ends with, “In a world where everything and everyone is treated as an object to be bought and sold, the new technologies — and most of the old ones for that matter — will inevitably create hardship and human misery. […] The ease with which computers are used as instruments of social control cannot be allowed to obscure their liberatory potential.”

Trooly unctuous

“…and now we’re looking at groups of historically marginalized people being denied involvement in mainstream economic, political, cultural and social activities — at scale.”

Trooly (a play on “truly,” ugh) crawls social media, news sites, police and court registries, credit bureaus, and similar sources, and uses AI to determine whether, say, an Airbnb renter is likely to be trustworthy, in its opinion.

It does this on-demand in about 30 seconds, for a cost of about $1.

The quote in full context, below.

Trooly — [now used by] Airbnb — is combining social credit scores with predictive policing. Tools like PredPol use AI that combines data points and historical events, factors like race and location, digital footprints and crime statistics, to predict likelihood of when and where crimes will occur (as well as victims and perpetrators). It’s no secret that predictive policing replicates and perpetuates discrimination.

Combine this with companies like Instagram, Facebook, YouTube, and yes, Airbnb deciding what legal behaviors are acceptable for service, and now we’re looking at groups of historically marginalized people being denied involvement in mainstream economic, political, cultural and social activities — at scale.

Our ads are always accurate so it’s good that Facebook won’t limit political messages because it encourages more Americans to be involved in the process. This is much better than the approaches from Twitter and Google, which will lead to voter suppression.
— Trump campaign spokesman Tim Murtaugh, as quoted in Facebook Says It Won’t Back Down From Allowing Lies in Political Ads, by Mike Isaac and Cecilia Kang, New York Times, 9 January 2020

Election 2020

“It's disappointing that the state has this rule in place, that the voters would have to vote using the system we want to replace in order to have the system that we want to replace be replaced.”
Shelby County Tennessee Commissioner Mick Wright, on the county's inability to upgrade voting machines before the upcoming 2020 elections, Ballot Bombshell: Election Machine Issue Becomes Moot by Jackson Baker, The Memphis Flyer, 13 February 2020. Via Jennifer Cohn (@jennycohn1). As Orwellian as this sounds, Commissioner Wright, a Republican in a county known for voting rights abuses, wants to replace the current crappy system with one that uses Ballot Marking Devices — a tech @jennycohn1 has convinced me is *bad news for democracy*.

Twitter was made for trouble

On Twitter…teens saw the street code in the workings of the site. “Whoever made Twitter,” said Tiana, in September 2010, “designed Twitter for trouble.”

She explained that she could see her friends’ confrontations with people she didn’t follow. Tiana was prepared to “jump into” these conflicts and expected her friends to do the same. In the context of the [street] code, Twitter seemed provocative. It placed users before a stream of other people’s conversations, with the prompt “What’s happening?”

Tiana, from The Digital Street by Jeffrey Lane, 2018, p. 72

The Sony hack, 2014

A major cyberattack against the United States in 2014 was a clear example of how civilians can bear the brunt of such operations. Almost all cybersecurity experts and the FBI believe that the Sony Pictures hack that year originated in North Korea. A hostile country hit a U.S. civilian target with the intention of destabilizing a major corporation, and it succeeded. Sony’s estimated cleanup costs were more than $100 million. The conventional warfare equivalent might look like the physical destruction of a Texas oil field or an Appalachian coal mine. If such a valuable civilian resource had been intentionally destroyed by a foreign adversary, it would be considered an act of war.
In cyberwar, there are no rules: Why the world desperately needs digital Geneva Conventions, by Tarah M. Wheeler, Foreign Policy, 12 September 2018. Thinking about this in a new light today as we head toward war with Iran.

Et tu, Instagram?

“Facebook is notorious for allowing anti-vaxxers and other conspiracy theorists to organize and spread their messages to millions—the two most-shared news stories on Facebook in 2019 so far are both false.

“But Facebook, Twitter, and YouTube are not where young people go to socialize. Instagram is.

“[Instagram] is likely where the next great battle against misinformation will be fought…”

"Society is more than a bazaar"

Some articles and references I’ve collected regarding the dark side of “social media.”

1. “Privacy findings.” Pew Research Center. Nov 2019. https://www.pewresearch.org/fact-tank/2019/11/15/key-takeaways-on-americans-views-about-privacy-surveillance-and-data-sharing/ 

  • Americans are concerned about how much data is being collected about them, and many feel their information is less secure than it used to be.

  • Very few Americans believe they understand what is being done with the data collected about them.

  • Most Americans see more risks than benefits from personal data collection.

  • Americans say they have very little understanding of current data protection laws, and most are in favor of more government regulation.

2. “Women are harassed every 30 seconds on Twitter, major study finds.” Amnesty International, Mashable.
By Rachel Thompson. 18 December 2019.
https://mashable.com/article/amnesty-study-twitter-abuse-women/ 

The research revealed that abuse is experienced by women "across the political spectrum" in both the U.S. and UK. 

[…] Now that this dataset is in place, there's data and research to "back up what women have long been telling us," said Marin in a statement. "That Twitter is a place where racism, misogyny and homophobia are allowed to flourish basically unchecked," she added. [Amnesty International’s senior advisor for tactical research, Milena Marin.]

3. “TikTok's local moderation guidelines ban pro-LGBT content.” The Guardian. By Alex Hern. 26 September 2019
https://www.theguardian.com/technology/2019/sep/26/tiktoks-local-moderation-guidelines-ban-pro-lgbt-content

TikTok’s efforts to provide locally sensitive moderation have resulted in it banning any content that could be seen as positive to gay people or gay rights, down to same-sex couples holding hands, even in countries where homosexuality has never been illegal, the Guardian can reveal.

4. “This is how we radicalized the world.” BuzzFeed News. By Ryan Broderick. October 28-29, 2018
https://www.buzzfeednews.com/article/ryanhatesthis/brazil-jair-bolsonaro-facebook-elections 

(On the Bolsonaro election in Brazil.)

But it really doesn’t matter what country you’re in. The dance is the same everywhere you go.

Chances are, by now, your country has some, if not all, of the following. First off, you probably have some kind of local internet troll problem…

Then far-right influencers start appearing, aided by algorithms recommending content that increases user watch time. They will use Facebook, Twitter, and YouTube to transmit and amplify content and organize harassment and intimidation campaigns….

Some of these trolls and influencers will create more sophisticated far-right groups within the larger movement, like the Proud Boys, Generation Identity, or Movimento Brasil Livre…

While a far-right community is building in your country, a fake news blitz is usually raging online. It could be a rumor-based culture of misinformation, like the localized hoaxes that circulate in countries like India, Myanmar, or Brazil. Or …

Typically, large right-wing news channels or conservative tabloids will then take these stories going viral on Facebook and repackage them for older, mainstream audiences…

In most countries, reliable publications are going behind paywalls… Which will most likely leave the poor, the old, and the young to fall into an information divide. This is already happening. A study released this month from the UK found that poorer British readers got less, worse news than wealthier readers. And according to a new study by Pew Research Center, only 17% of people over the age of 65 were able to identify fact from opinion…

There are deserts of information where normal people are algorithmically served memes, poorly aggregated news articles, and YouTube videos without any editorial oversight or regulation…

5. “How YouTube Radicalized Brazil.” New York Times. By Max Fisher and Amanda Taub. 11 August 2019.
https://www.nytimes.com/2019/08/11/world/americas/youtube-brazil.html 

YouTube’s search and recommendation system appears to have systematically diverted users to far-right and conspiracy channels in Brazil…

“We need the companies to face their role,” Ms. Diniz said. “Ethically, they are responsible.”

As conspiracies spread on YouTube, video makers targeted aid groups whose work touches on controversial issues like abortion. Even some families that had long relied on such groups came to wonder if the videos might be true, and began to avoid them.

In Brazil, this is a growing online practice known as “linchamento” — lynching. Mr. Bolsonaro was an early pioneer, spreading videos in 2012 that falsely accused left-wing academics of plotting to force schools to distribute “gay kits” to convert children to homosexuality. Mr. Jordy, his tattooed Niterói protégé, was untroubled to learn that his own YouTube campaign, accusing teachers of spreading communism, had turned their lives upside down.

One of those teachers, Valeria Borges, said she and her colleagues had been overwhelmed with messages of hate, creating a climate of fear.

6. “Living in a sea of false signals: Are we being pushed from ‘trust, but verify’ to ‘verify, then trust’?” Nieman Lab. March 2018.
https://www.niemanlab.org/2018/03/living-in-a-sea-of-false-signals-are-we-being-pushed-from-trust-but-verify-to-verify-then-trust/ 

(Recap of a speech by Craig Silverman.)

For me, this example encompasses so much about the current reality of media and online misinformation. For one thing, it shows that online misinformation is about more than just information itself — it also involves economic and algorithmic incentives on a global scale. We also need to think about human factors, such as cognition and belief. […]

The cues that people have used to determine the authenticity of information are in many cases no longer sufficient. Just about everything can be fabricated. We must keep this in mind as we look to establish new signals of trust, because they too will inevitably be fabricated and gamed. …

One thing we must keep in mind is the technology and systems that enabled this reality are optimized to capture and surveil the activities of users. They therefore have the potential to become the world’s greatest censorship infrastructure. …

We have a human problem on our hands. Our cognitive abilities are in some ways overmatched by what we have created.

7. Sarah Thompson’s work, pointed to by Craig Silverman (above), investigating the harm done to First Nations peoples by fake Facebook accounts. Thompson argues that fake accounts do harm precisely because they interfere with the real good being done by First Nations’ people on Facebook. April 2019. https://exploitingtheniche.wordpress.com/2019/04/09/implants-and-extractions-part-ii/ 

In Part One I described two bogus websites that were using a variety of tricks to game both the audience and the advertisers. I showed some fake/stolen profiles that were being used to seed links, memes and to control a group with a large audience and a narrow focus. I showed an admin team of nine that had somehow positioned itself to take over a group 88,000 strong. All that together has a name now, it’s called “Coordinated inauthentic behavior.”

8. “Rebalancing Regulation of Speech: Hyper-Local Content on Global Web-Based Platforms.” By Chinmayi Arun. Berkman Center. 28 March 2018.
https://medium.com/berkman-klein-center/rebalancing-regulation-of-speech-hyper-local-content-on-global-web-based-platforms-1-386d65d86e32  

There is too much harmful speech online but too much harmless opinion is being censored. There is more big data on harmful speech than ever before but no information on what the data means. This is the conundrum of our times. […]



The web-based platforms are mostly US-based and mediate an unfathomable quantity of speech between local actors around the world. Some of this speech is harmful, which means that data on harmful speech worldwide has been accumulating with global platforms.

As public discourse moves over to web-based platforms, the usual local modes of intervention and study lose footing. Online harmful speech therefore has global and local dimensions and this piece attempts to tease these intertwined questions out so that researchers may attempt to answer them.

9. “Internal Documents Show Facebook Has Never Deserved Our Trust or Our Data.” By Jason Koebler and Joseph Cox. Motherboard/Vice News. 5 December 2018.
https://www.vice.com/en_us/article/7xyenz/internal-documents-show-facebook-has-never-deserved-our-trust-or-our-data

[N]ew internal Facebook documents that are part of a lawsuit filed by a firm called Six4Three and published Wednesday by a member of the United Kingdom’s Parliament shows once and for all that Facebook knew the potential harms of its product all along and pushed forth anyway. The answer to the question “why should we trust Facebook?” is: We shouldn’t, and we never should have.

10. “The Art of Eyeball Harvesting.” By Shengwu Li. Logic Magazine, vol 6. January 2019.
https://logicmag.io/play/shengwu-li-on-online-advertising/ 

Whenever you're accessing one of the many websites that sells advertising in this way, all of your cookies are being made public in this fashion. I think people don't realize that this is part of their everyday internet experience […] Most people don't realize how many companies have access to the cookies that are in your browser, and how much information those companies can learn about you from those cookies.

11. “Temporal Limits of Privacy in Human Behavior." Sekara et al, 2018. Arxiv.org (https://arxiv.org/pdf/1806.03615.pdf) via https://twitter.com/SarahJamieLewis/status/1041646238280679424, @SarahJamieLewis. 17 September 2018.

(This speaks to how much “risk” to privacy can arise from very little information)

Knowing of 5 downloaded, non-vendor-specific apps installed on a device is enough to uniquely identify over 90% of individuals in a sample of a million+ people. […] “This result is not surprising but it's a nice illustration of how even seemingly ‘useless’/‘unimportant’ information like ‘what apps do you have installed’ can impact total privacy.”
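The measurement behind that result is often called “unicity”: how often does a small set of attributes single out exactly one person? Here is a toy version of that computation on entirely synthetic data; the app names, user count, and install habits below are all made up for illustration, and only the method resembles the paper’s:

```python
# Toy unicity computation in the spirit of Sekara et al. (2018).
# All data is synthetic; only the method is illustrated.
import random
from itertools import combinations

random.seed(42)
APPS = [f"app{i}" for i in range(50)]
# Each synthetic user installs a random handful of apps.
users = [frozenset(random.sample(APPS, random.randint(5, 15)))
         for _ in range(200)]

def unicity(users, k):
    """Fraction of users singled out by some k apps they installed."""
    unique = 0
    for i, apps in enumerate(users):
        subsets = map(frozenset, combinations(apps, k))
        # Identified if any k-app subset belongs to no other user.
        if any(all(not s <= other for j, other in enumerate(users) if j != i)
               for s in subsets):
            unique += 1
    return unique / len(users)

print(f"{unicity(users, 5):.0%} of synthetic users identified by 5 apps")
```

Even in this tiny synthetic population, nearly every user is pinned down by five apps, which is the intuition the paper quantifies at the scale of a million-plus real devices.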

12. “Freedom on the Net 2018: The Rise of Digital Authoritarianism.” Freedom House. 2018. https://freedomhouse.org/report/freedom-net/freedom-net-2018/rise-digital-authoritarianism [New link as of April 2020: https://freedomhouse.org/report/freedom-net/2018/rise-digital-authoritarianism ]

[Note: A reader from Comparitech suggested their map of Internet censorship in 181 countries might be useful in this context, https://www.comparitech.com/blog/vpn-privacy/internet-censorship-map/ —MPE April 27, 2020]

(From the introduction by Adrian Shahbaz.)

The internet is growing less free around the world, and democracy itself is withering under its influence. […]

Securing internet freedom against the rise of digital authoritarianism is fundamental to protecting democracy as a whole. Technology should empower citizens to make their own social, economic, and political choices without coercion or hidden manipulation. The internet has become the modern public sphere, and social media and search engines have both tremendous power and a weighty responsibility to ensure that their platforms serve the public good. If antidemocratic entities effectively capture the internet, citizens will be denied a forum to articulate shared values, debate policy questions, and peacefully settle intrasocietal disputes. Democracy also requires a protected private sphere. The unrestrained and largely unexamined collection of personal data inhibits one’s right to be let alone, without which peace, prosperity, and individual freedom—the fruits of democratic governance—cannot be sustained or enjoyed.

If democracy is to survive the digital age, technology companies, governments, and civil society must work together to find real solutions to the problems of social media manipulation and abusive data collection.

13. “Freedom on the Net 2019, The Crisis of Social Media.” By Adrian Shahbaz and Allie Funk. Freedom House. November 2019. https://www.freedomonthenet.org/sites/default/files/2019-11/11042019_Report_FH_FOTN_2019_final_Public_Download.pdf  

What was once a liberating technology has become a conduit for surveillance and electoral manipulation…

While social media have at times served as a level playing field for civic discussion, they are now tilting dangerously toward illiberalism…

The expanding use of sophisticated social media surveillance tools raises the risk that constitutionally protected activities will be impaired […] Authorities in 47 countries arrested users for political, social, or religious speech—a record high.

14. “Goodbye, Chrome: Google’s Web browser has become spy software.” By Geoffrey Fowler. Washington Post. 21 June 2019.
https://www.washingtonpost.com/technology/2019/06/21/google-chrome-has-become-surveillance-software-its-time-switch/ 

In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker “cookies” that Chrome would have ushered right onto my computer but were automatically blocked by Firefox. These little files are the hooks that data firms, including Google itself, use to follow what websites you visit so they can build profiles of your interests, income and personality.

Chrome welcomed trackers even at websites you would think would be private. I watched Aetna and the Federal Student Aid website set cookies for Facebook and Google. They surreptitiously told the data giants every time I pulled up the insurance and loan service’s log-in pages.

15. Carole Cadwalladr and Emma Graham-Harrison’s reporting on Cambridge Analytica for the Guardian, such as https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election (17 March 2018)

Documents…show that by late 2015 the company had found out that information had been harvested on an unprecedented scale. However, at the time it failed to alert users and took only limited steps to recover and secure the private information of more than 50 million individuals.

16. “Content or Context Moderation? Artisanal, Community-Reliant, and Industrial Approaches.” Robyn Caplan. Data & Society. 14 November 2018. https://datasociety.net/output/content-or-context-moderation/

At the Content Moderation at Scale event, Del Harvey, vice president of trust and safety at Twitter, noted that with this kind of scale, catching 99.9 percent of bad content still means that tens of thousands of problematic tweets remain.[…]
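Harvey’s point is about base rates at scale. A sketch with purely hypothetical numbers (Twitter’s real daily volume and violation rate are assumptions here; only the 99.9 percent catch rate comes from the quote):

```python
# Illustration of moderation at scale with hypothetical numbers.
# Assumptions: ~500 million tweets/day and a 5% violation rate,
# both invented for the example; the catch rate is from the talk.
tweets_per_day = 500_000_000
violation_rate = 0.05
catch_rate = 0.999  # "catching 99.9 percent of bad content"

bad_tweets = tweets_per_day * violation_rate
slipped_through = bad_tweets * (1 - catch_rate)
print(f"{slipped_through:,.0f} problematic tweets left per day")  # 25,000
```

However well the classifier performs in percentage terms, the absolute number of misses stays enormous because the denominator is enormous.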

Companies using the industrial approach favor consistency across countries and tend to collapse contexts in favor of establishing global rules that make little sense when applied to content across vastly different cultural and political contexts around the world. This can, at times, have significant negative impact on marginalized groups. Julia Angwin criticized this type of policy practice when Facebook attempted to implement a policy that incorporated notions of intersectionality divorced from existing power arrangements, essentially protecting the hegemonic group of “white men” but not “Black children.” Her work demonstrated that attempts at universal anti-discrimination rules too often do not account for power differences along racial and gender lines.

17. “6 ways social media has become a direct threat to democracy.” By Pierre Omidyar. Washington Post. 9 October 2017.
https://www.washingtonpost.com/news/theworldpost/wp/2017/10/09/pierre-omidyar-6-ways-social-media-has-become-a-direct-threat-to-democracy/ 

For years, Facebook has been paid to distribute ads known as “dark posts,” which are only shared with highly targeted users selected by advertisers. When these ads are political or divisive in nature, their secrecy deprives those affected by the ads the opportunity to respond in a timely manner — say, before an election concludes. It also allows outsiders, such as the Russian government, to influence and manipulate U.S. citizens from the shadows.

[…] we’re still just beginning to address how social media across all platforms is being used to undermine transparency, accountability and trust in our democracy.

18. “Delay, Deny and Deflect: How Facebook’s Leaders Fought Through Crisis.” By Sheera Frenkel, Nicholas Confessore, Cecilia Kang, Matthew Rosenberg and Jack Nicas. New York Times. 14 November 2018.
https://www.nytimes.com/2018/11/14/technology/facebook-data-russia-election-racism.html

[A]s evidence accumulated that Facebook’s power could also be exploited to disrupt elections, broadcast viral propaganda and inspire deadly campaigns of hate around the globe, Mr. Zuckerberg and Ms. Sandberg stumbled. Bent on growth, the pair ignored warning signs and then sought to conceal them from public view. […]

Facebook employed a Republican opposition-research firm to discredit activist protesters, in part by linking them to the liberal financier George Soros. It also tapped its business relationships, lobbying a Jewish civil rights group to cast some criticism of the company as anti-Semitic.

19. “Two years after #Pizzagate showed the dangers of hateful conspiracies, they’re still rampant on YouTube.” By Craig Timberg, Elizabeth Dwoskin, and Andrew Ba Tran. Washington Post. 10 December 2018.
https://pb-impact.washingtonpost.com/business/technology/hateful-conspiracies-thrive-on-youtube-despite-pledge-to-clean-up-problematic-videos/2018/12/10/625730a8-f3f8-11e8-9240-e8028a62c722_story.html 

A year after YouTube’s chief executive promised to curb “problematic” videos, it continues to harbor and even recommend hateful, conspiratorial videos, allowing racists, anti-Semites and proponents of other extremist views to use the platform as an online library for spreading their ideas.

20. “On YouTube’s Digital Playground, an Open Gate for Pedophiles.” New York Times. By Max Fisher and Amanda Taub. 3 June 2019.
https://www.nytimes.com/2019/06/03/world/americas/youtube-pedophiles.html

YouTube has described its recommendation system as artificial intelligence that is constantly learning which suggestions will keep users watching. These recommendations, it says, drive 70 percent of views, but the company does not reveal details of how the system makes its choices.[…]

When [researchers] followed recommendations on sexually themed videos, they noticed something they say disturbed them: In many cases, the videos became more bizarre or extreme, and placed greater emphasis on youth. […]

Jennifer O’Connor, YouTube’s product director for trust and safety, said the company was committed to eradicating the exploitation of children on its platform and had worked nonstop since February on improving enforcement. “Protecting kids is at the top of our list,” she said.

But YouTube has not put in place the one change that researchers say would prevent this from happening again: turning off its recommendation system on videos of children, though the platform can identify such videos automatically. The company said that because recommendations are the biggest traffic driver, removing them would hurt “creators” who rely on those clicks.

21. Letter to Jack Dorsey, CEO of Twitter. David Kaye, United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. 10 December 2018 (posted to Twitter 21 December 2018).
https://twitter.com/davidakaye/status/1076018548378497024?lang=en

[Text of Tweet:] Platforms may be improving but they are often opaque to users whose accounts are suspended or posts hidden from view, esp when under pressure from governments. […]

[Letter to Jack Dorsey:] I am writing in connection with information I have received regarding account actions against Twitter users for posting or sharing Kashmir-related content. According to the information received, Twitter has withheld users’ tweets and accounts when they have participated in discussions concerning Kashmir on the platform. Affected users receive notifications that either inform them that their “account [or tweet] has been withheld in India in response to a legal demand,” or that their “account [or tweet] has been withheld based on local law(s).”

At the same time, Twitter has a responsibility to respect the human rights of its users. Indeed, in an interview with WIRED, you have stated that, “[W]e believe our purpose is to serve the public conversation. And that does take a stance around freedom of expression as a fundamental human right.” According to international standards and best practices, the responsibility to respect freedom of expression should, at a minimum, include a duty to “engage in prevention and mitigation strategies that respect principles of internationally recognized human rights to the greatest extent possible when faced with conflicting local law requirements.” In particular, legal demands should be interpreted and implemented as narrowly as possible, to ensure the least possible restriction on expression. Furthermore, when Twitter receives problematic requests, it should “seek clarification or modification; solicit the assistance of civil society, peer companies, relevant government authorities, international and regional bodies and other stakeholders; and explore all legal options for challenge.”

22. “Russian Meddling Is a Symptom, Not the Disease.” By Zeynep Tufekci. New York Times. 3 October 2018. https://www.nytimes.com/2018/10/03/opinion/midterms-facebook-foreign-meddling.html 

Traditional media outlets, of course, are frequently also cynical manipulators of sensationalistic content, but social media is better able to weaponize it. […]

For example, less than two weeks before the 2016 presidential election, a senior official with the Trump campaign told Bloomberg Businessweek that it had “three major voter suppression operations underway.” These were aimed at young women, African-Americans and white idealistic liberals. One of the key tools of the operation, according to Brad Parscale, a Trump campaign manager, was Facebook “dark posts” or “nonpublic posts,” whose viewership was controlled so that “only the people we want to see it, see it.”

23. “The Expensive Education of Mark Zuckerberg and Silicon Valley.” By Kara Swisher. New York Times. 2 August 2018.
https://www.nytimes.com/2018/08/02/opinion/the-expensive-education-of-mark-zuckerberg-and-silicon-valley.html 

All these companies began with a gauzy credo to change the world. But they have done that in ways they did not imagine — by weaponizing pretty much everything that could be weaponized. They have mutated human communication, so that connecting people has too often become about pitting them against one another, and turbocharged that discord to an unprecedented and damaging volume.

They have weaponized social media. They have weaponized the First Amendment. They have weaponized civic discourse. And they have weaponized, most of all, politics.

24. “Robotrolling 3.” NATO Strategic Communications Centre of Excellence. 2018. https://www.stratcomcoe.org/robotrolling-20183 

During the period 1 May - 31 July 2018, Russian-language bots created 49% of all Russian-language messages about NATO in the Baltic States and Poland. Of accounts posting in Russian, 36% were predominantly automated. […] The increasing proportions of anonymous accounts active during key political moments indicate that anonymity is being abused to cloak manipulation on social networks. 

25. “Where countries are tinder boxes and Facebook is a match.” By Amanda Taub and Max Fisher. New York Times. 21 April 2018.
https://www.nytimes.com/2018/04/21/world/asia/facebook-sri-lanka-riots.html  

For months we've been tracking riots and lynchings around the world linked to misinformation and hate speech on Facebook, which pushes whatever content keeps users on the site longest – a potentially damaging practice in countries with weak institutions.

One post declared, “Kill all Muslims, don’t even save an infant.” A prominent extremist urged his followers to descend on the city of Kandy to “reap without leaving an iota behind.” Desperate, the researchers flagged the video and subsequent posts using Facebook’s on-site reporting tool.

Though they and government officials had repeatedly asked Facebook to establish direct lines, the company had insisted this tool would be sufficient, they said. But nearly every report got the same response: the content did not violate Facebook’s standards.

“You report to Facebook, they do nothing,” one of the researchers, Amalini De Sayrah, said. “There’s incitements to violence against entire communities and Facebook says it doesn’t violate community standards.”

26. “A Genocide Incited on Facebook, With Posts From Myanmar’s Military.” By Paul Mozur. New York Times. 15 October 2018.
https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html 

Members of the Myanmar military were the prime operatives behind a systematic campaign on Facebook that stretched back half a decade and that targeted the country’s mostly Muslim Rohingya minority group, the people said.

The military exploited Facebook’s wide reach in Myanmar, where it is so broadly used that many of the country’s 18 million internet users confuse the Silicon Valley social media platform with the internet. Human rights groups blame the anti-Rohingya propaganda for inciting murders, rapes and the largest forced human migration in recent history.

While Facebook took down the official accounts of senior Myanmar military leaders in August, the breadth and details of the propaganda campaign — which was hidden behind fake names and sham accounts — went undetected. […]

In August, after months of reports about anti-Rohingya propaganda on Facebook, the company acknowledged that it had been too slow to act in Myanmar. By then, more than 700,000 Rohingya had fled the country in a year, in what United Nations officials called “a textbook example of ethnic cleansing.”

27. “How Facebook and YouTube help spread anti-vaxxer propaganda.” By Julia Carrie Wong. The Guardian. 1 February 2019
https://www.theguardian.com/media/2019/feb/01/facebook-youtube-anti-vaccination-misinformation-social-media 

The Guardian found that Facebook search results for groups and pages with information about vaccines were dominated by anti-vaccination propaganda, and that YouTube’s recommendation algorithm steers viewers from fact-based medical information toward anti-vaccine misinformation.

28. “Tim Berners-Lee unveils global plan to save the web.” By Ian Sample. The Guardian. 24 November 2019.
https://www.theguardian.com/technology/2019/nov/24/tim-berners-lee-unveils-global-plan-to-save-the-internet?CMP=share_btn_tw 

“I think people’s fear of bad things happening on the internet is becoming, justifiably, greater and greater,” Berners-Lee, the inventor of the web, told the Guardian. “If we leave the web as it is, there’s a very large number of things that will go wrong. We could end up with a digital dystopia if we don’t turn things around. […]

“The forces taking the web in the wrong direction have always been very strong,” Berners-Lee said. “Whether you’re a company or a government, controlling the web is a way to make huge profits, or a way of ensuring you remain in power. The people are arguably the most important part of this, because it’s only the people who will be motivated to hold the other two to account.”

29. “YouTube’s Privacy Problem.” By Natasha Singer, in the New York Times “Kids” section (print only). 12 November 2019.

Why was YouTube Fined?

[T]he Federal Trade Commission and the New York attorney general said that YouTube broke [the] law by collecting [COPPA-prohibited personal details] from children who watched kids' videos on the site — and then earned millions of dollars by using that data to target kids with ads.

Did YouTube know that the data it was collecting belonged to kids?

Yes […] YouTube told one advertising company that video channels on its site… did not have viewers under 13. But at the same time YouTube was promoting itself as the top online destination for kids … Some of YouTube's most-viewed channels are aimed at children — the videos on one channel, Cocomelon Nursery Rhymes, have been viewed more than 42 billion times.

30. “Why Tech is Starting to Make Me Uneasy.” By Farhad Manjoo. New York Times. 11 October 2017.
https://www.nytimes.com/2017/10/11/insider/tech-column-dread.html

…In 2007, when Mr. Jobs unveiled the iPhone, just about everyone greeted the new device as an unalloyed good. I was one of them. Here was a genuinely new thing that would Make Your Life Better, we all thought: It would be convenient, it would be fun, it wasn’t serious or dangerous or globe-shattering. That’s no longer true.

The State of the Art, today, is a bag of mixed emotions. Tech might improve everything. And it’s probably also terrible in ways we’re only just starting to understand.

And finally (for now), Ethan Zuckerman, https://twitter.com/EthanZ/status/1009838622449766400 (Commenting on NYT “Want to understand what ails the modern Internet? Look at eBay.” https://www.nytimes.com/2018/06/20/magazine/want-to-understand-what-ails-the-modern-internet-look-at-ebay.html

Society is more than a bazaar.


The Web We Want

"The Web We Want" Submitted remotely and screened at Ignite MCN. November 5, 2019 Music Box club, San Diego http://www.mcn.edu

I was supposed to be in San Diego this week for the Museum Computer Network conference, but business called me away. Here’s my Ignite talk, The Web We Want, composed with both the news of the day (fake news, propaganda on Facebook) and the rhythm of N.W.A.’s Express Yourself stuck in my head. I produced this video facsimile as a self-contained all-in-one production to try to give a sense of the moment — of being on stage with and for my people.

It’s a bit of a sequel to the last MCN Ignite talk, Jack the Museum, given in 2012.

Jack the Museum asked us to reach outside the constrictive idioms of traditional museum practice to seek greater impact in the world. Now, seven years later, with the humanistic vision of the Internet and the Web under threat, the Web We Want asks us to fight to reclaim the positive values of a digitally connected world.

Good luck tonight Nathan, Effie, Alison, Andrew, Beth, Koven and Nik — I’ll be with you in spirit, sending you all good vibes from somewhere over the Atlantic! Cheers!! https://conference.mcn.edu/2019/Ignite.cfm

Here’s the script (and an annotated version is here as a .pdf).

Hey I’m Mike; Cheers! Tonight I’m in absentia.
Talking to you across a digital connection.
Broadcasting from my trusty cyber station,
My code, copper, glass, and silicon creation.

Yeah I love the web — and it’s our baby.
And many-of-us are insider spiders that can ride her, maybe.
Or are we flies that come around …  get stuck and eaten?
That web’s a sticky place now, and that smilin’ spider we be greetin.

What am I talkinabout? Well you may know me, Willis.
May know the things I care about I think will kill us.
May know the scope, scale, and speed-lovin man I am.
May know the green-eggs-and-ham lovin Sam-I-am.

I look out my windows and I see a shit-show.
If you log-in, or blogging with your noggin there’s a quid-pro-quo.
So let me take you through the categoric history.
To sweep the evils of these platforms to the dustbin of our history.

Facebook, Insta, YouTube, and Twitter.
I loved them, but these days I’d use them for kitty litter.
Flush them right down the pot — yeah, I’m pretty bitter.
They stealin’ from us like a Ben Franklin counterfeiter.

Facebook? It aided a genocide. A genocide. A genocide.
Facebook? It aided a genocide. 
Yeah, they did that, and people died.

‘But we’re too big to mod-er-ate on-line activity.’
That’s a laugh! Facebook you could have the proclivity
To care about people and values and civility,
And redirect some of your vaunted corporate creativity
 
To make common sense solutions that work for all humanity.
Hate is not a fair choice ‘tween free speech, profit, and inanity.
Grow a spine and learn from those who’ve learned to love their community.
Weave a web of love and trust like RPG and Ravelry.

YouTube, oh, don’t get me started.
I love this platform but then they departed
The land of common, objective civic decency,
When they give aid and comfort to those who spread conspiracy.

Twitter, hell, it’s a travesty,
How harassment and abuse is right there for all to see.
It’s not convenient to care when your mistress is a business model,
That makes you into every troll and dictator’s mollycoddle.

Google, aw, where to start?
They’ve turned exploitation of privacy into an art.
Micro-targeting, tracking, and ruthless data aggregation,
Reduce life-changing choices to an algorithmic calculation,

Blurring our lives into a smear of ruthless averages.
When they work they work but when they don’t who pays the damages?
Not people like me, white, straight, schooled, and privileged.
The grievous harm they cause to the powerless and poor can be unlimited.

So don’t tell me it don’t affect you.
Don’t affect those you met and those who beget you.
“Come into my parlor” say we spiders to the flies outside.
Hey, everybody goes there, why not? Don’t worry ‘bout the sticky side.

I think it’s a matter of owning up to consequence.
We all ask our global family to play here, and at great expense,
We burnish the street cred’ of dot coms with our edifice,
And risk harm to our community while we’re being generous.

Microsoft, Apple, and the Amazon crew.
ISPs and the mobiles are part of this too.
They claim public good, civic virtue, in their soundbites,
But when push comes to shove will they shove the Benjamins or human rights?

Oh bruh and sis, I almost forgot.
Elections and fake news are what we begot.
Remember that thing with Cambridge Analytica?
Well how’ you feelin’ about the current situation politica’?

Not so good? Huh — well me neither.
Catastrophic atmospheric carbon’s rising in the ether,
And just when we all must be connected, fast, and democratic,
The web we need is rotting, unacceptable-ly problematic.

Are we going to let 7 billion people live and love on a Web that’s autocratic?
Where the values of decency and common good make the dot-com’s panic?
Where the captains of Silicon Valley are running manic?
Piloting our commons to an iceberg like their own Titanic?

To transcend greed. Avarice. The fecklessness of feckless pricks,
We’re going to have to work as one, renegotiate some politics.
Boycott, cajole, write those letters band together,
Take a stand, take a risk, take the streets hell bent for leather.

This heart, these beats from this spider-web practitioner.
This time, these rhymes bustin’ from this long-distance exhibitioner
This urgency, this planet, this community can get it done.
The web we want’s the dream we got if we spin our silk together, connected and strong.

[Updated 29 November 2019 to include link to annotated notes and link to official MCN version of the video.]



“It really doesn’t matter what country you’re in. The dance is the same everywhere you go.”

Chances are, by now, your country has some, if not all, of the following.

First off, you probably have some kind of local internet troll problem, like the MAGAsphere in the US, the Netto-uyoku in Japan, Fujitrolls in Peru, or AK-trolls in Turkey.

Your trolls will probably have been radicalized online via some kind of community for young men like Gamergate, Jeuxvideo.com ("videogames.com") in France, ForoCoches ("Cars Forum") in Spain, Ilbe Storehouse in South Korea, 2chan in Japan, or banter Facebook pages in the UK.

…Far-right influencers start appearing, aided by algorithms recommending content that increases user watch time. They will use Facebook, Twitter, and YouTube to transmit and amplify content and organize harassment and intimidation campaigns.

If these influencers become sophisticated enough, they will try to organize protests or rallies. The mini fascist comic cons they organize will be livestreamed and operate as an augmented reality game for the people watching at home. Violence and doxxing will follow them.

Some of these trolls and influencers will create more sophisticated far-right groups within the larger movement, like the Proud Boys, Generation Identity, or Movimento Brasil Livre. Or some will reinvigorate older, more established far-right or nationalist institutions like the Nordic Resistance Movement, the Football Lads Alliance, United Patriots Front, or PEGIDA.

While a far-right community is building in your country, a fake news blitz is usually raging online. It could be a rumor-based culture of misinformation, like the localized hoaxes that circulate in countries like India, Myanmar, or Brazil. Or it could be the more traditional “fake news” or hyperpartisan propaganda we see in predominantly English-speaking countries like the US, Australia, or the UK.

Typically, large right-wing news channels or conservative tabloids will then take these stories going viral on Facebook and repackage them for older, mainstream audiences. Depending on your country’s media landscape, the far-right trolls and influencers may try to hijack this social-media-to-newspaper-to-television pipeline. Which then creates more content to screenshot, meme, and share. It’s a feedback loop.

Populist leaders and the legions of influencers riding their wave […]create filter bubbles inside of platforms like Facebook or YouTube that promise a safer time, one that never existed in the first place, before the protests, the violence, the cascading crises, and endless news cycles. Donald Trump wants to Make America Great Again; Bolsonaro wants to bring back Brazil’s military dictatorship; Shinzo Abe wants to recapture Japan’s imperial past; Germany’s AfD performed the best with older East German voters longing for the days of authoritarianism. All of these leaders promise to close borders, to make things safe. Which will, of course, usually exacerbate the problems they’re promising to disappear. Another feedback loop.

…It really doesn’t matter what country you’re in. The dance is the same everywhere you go.

This Is How We Radicalized The World, by Ryan Broderick, BuzzFeed News, 29 October 2018 (with light editing). The subtitle of the article is “On Sunday, far-right evangelical Jair Bolsonaro was elected president of Brazil. The era of being surprised at this kind of politics is over. Now we have to live with what we've done.”