The technology they like, no matter the social cost

“I was struck by how many of the wealthiest and most powerful figures in Silicon Valley — including some I knew — were now loudly backing Mr. Trump. ... Mr. Trump appeals to some Silicon Valley elites because they identify with the man. To them, he is a fellow victim of the state, unjustly persecuted for his bold ideas. Practically, he is also the shield they need to escape accountability. Mr. Trump may threaten democratic norms and spread disinformation; he could even set off a recession, but he won’t challenge their ability to build the technology they like, no matter the social cost.”
Why Do People Like Elon Musk Love Donald Trump? It’s Not Just About Money. New York Times Opinion Guest Essay by Chris Hughes, co-founder of Facebook and chair of the Economic Security Project. September 25, 2024.

It's on us

Robert Kagan’s gut-wrenching essay in the Washington Post on Sunday about the crisis in American democracy (see below) reminded me of this 2018 piece by Zeynep Tufekci in the MIT Technology Review, How social media took us from Tahrir Square to Donald Trump.

At the end, Tufekci argues that while corporate social media and Russian election interference were a horrible influence on democratic processes, Russian trolls didn’t get us to where we are by themselves.

But we didn’t get where we are simply because of digital technologies. The Russian government may have used online platforms to remotely meddle in US elections, but Russia did not create the conditions of social distrust, weak institutions, and detached elites that made the US vulnerable to that kind of meddling.

Russia did not make the US (and its allies) initiate and then terribly mishandle a major war in the Middle East, the after-effects of which—among them the current refugee crisis—are still wreaking havoc, and for which practically nobody has been held responsible. Russia did not create the 2008 financial collapse: that happened through corrupt practices that greatly enriched financial institutions, after which all the culpable parties walked away unscathed, often even richer, while millions of Americans lost their jobs and were unable to replace them with equally good ones.

Russia did not instigate the moves that have reduced Americans’ trust in health authorities, environmental agencies, and other regulators. Russia did not create the revolving door between Congress and the lobbying firms that employ ex-politicians at handsome salaries. Russia did not defund higher education in the United States. Russia did not create the global network of tax havens in which big corporations and the rich can pile up enormous wealth while basic government services get cut.

These are the fault lines along which a few memes can play an outsize role. And not just Russian memes: whatever Russia may have done, domestic actors in the United States and Western Europe have been eager, and much bigger, participants in using digital platforms to spread viral misinformation.

Even the free-for-all environment in which these digital platforms have operated for so long can be seen as a symptom of the broader problem, a world in which the powerful have few restraints on their actions while everyone else gets squeezed. Real wages in the US and Europe are stuck and have been for decades while corporate profits have stayed high and taxes on the rich have fallen. Young people juggle multiple, often mediocre jobs, yet find it increasingly hard to take the traditional wealth-building step of buying their own home—unless they already come from privilege and inherit large sums.

If digital connectivity provided the spark, it ignited because the kindling was already everywhere. The way forward is not to cultivate nostalgia for the old-world information gatekeepers or for the idealism of the Arab Spring. It’s to figure out how our institutions, our checks and balances, and our societal safeguards should function in the 21st century—not just for digital technologies but for politics and the economy in general. This responsibility isn’t on Russia, or solely on Facebook or Google or Twitter. It’s on us.

Social media is a nuance destruction machine…
— Jeff Bezos, in testimony at an antitrust hearing of the US House Committee on the Judiciary, 29 July 2020. Via Geekwire

The full quote, in response to a question about so-called “cancel culture”, was, “What I find a little discouraging is that it appears to me that social media is a nuance destruction machine, and I don’t think that’s helpful for a democracy.”

Twitter was made for trouble

On Twitter…teens saw the street code in the workings of the site. “Whoever made Twitter,” said Tiana, in September 2010, “designed Twitter for trouble.”

She explained that she could see her friends’ confrontations with people she didn’t follow. Tiana was prepared to “jump into” these conflicts and expected her friends to do the same. In the context of the [street] code, Twitter seemed provocative. It placed users before a stream of other people’s conversations, with the prompt “What’s happening?”

Tiana, from The Digital Street by Jeffrey Lane, 2018, p. 72

A certain lack of vision

The struggle to maintain Twitter is a double referendum: first, on the sustainability of scale; second, on the pervasive belief in Silicon Valley that technology can be neutral and should be treated as such. This idea, that systems will find their own equilibrium, echoes the libertarian spirit that has long animated the Valley and fails to account for actual power imbalances that exist in the real world. In 2019, it also suggests a certain lack of vision or imagination about what social technologies can, or should, be.

"Society is more than a bazaar"

Some articles and references I’ve collected regarding the dark side of “social media.”

1. “Key takeaways on Americans’ views about privacy, surveillance and data-sharing.” Pew Research Center. 15 November 2019. https://www.pewresearch.org/fact-tank/2019/11/15/key-takeaways-on-americans-views-about-privacy-surveillance-and-data-sharing/ 

  • Americans are concerned about how much data is being collected about them, and many feel their information is less secure than it used to be.

  • Very few Americans believe they understand what is being done with the data collected about them.

  • Most Americans see more risks than benefits from personal data collection.

  • Americans say they have very little understanding of current data protection laws, and most are in favor of more government regulation.

2. “Women are harassed every 30 seconds on Twitter, major study finds.” Mashable, on a study by Amnesty International.
By Rachel Thompson. 18 December 2018.
https://mashable.com/article/amnesty-study-twitter-abuse-women/ 

The research revealed that abuse is experienced by women "across the political spectrum" in both the U.S. and UK. 

[…] Now that this dataset is in place, there's data and research to "back up what women have long been telling us," said Marin in a statement. "That Twitter is a place where racism, misogyny and homophobia are allowed to flourish basically unchecked," she added. [Amnesty International’s senior advisor for tactical research, Milena Marin.]

3. “TikTok's local moderation guidelines ban pro-LGBT content.” The Guardian. By Alex Hern. 26 September 2019
https://www.theguardian.com/technology/2019/sep/26/tiktoks-local-moderation-guidelines-ban-pro-lgbt-content

TikTok’s efforts to provide locally sensitive moderation have resulted in it banning any content that could be seen as positive to gay people or gay rights, down to same-sex couples holding hands, even in countries where homosexuality has never been illegal, the Guardian can reveal.

4. “This is how we radicalized the world.” BuzzFeed News. By Ryan Broderick. 28 October 2018
https://www.buzzfeednews.com/article/ryanhatesthis/brazil-jair-bolsonaro-facebook-elections 

(On the Bolsonaro election in Brazil.)

But it really doesn’t matter what country you’re in. The dance is the same everywhere you go.

Chances are, by now, your country has some, if not all, of the following. First off, you probably have some kind of local internet troll problem…

Then far-right influencers start appearing, aided by algorithms recommending content that increases user watch time. They will use Facebook, Twitter, and YouTube to transmit and amplify content and organize harassment and intimidation campaigns….

Some of these trolls and influencers will create more sophisticated far-right groups within the larger movement, like the Proud Boys, Generation Identity, or Movimento Brasil Livre…

While a far-right community is building in your country, a fake news blitz is usually raging online. It could be a rumor-based culture of misinformation, like the localized hoaxes that circulate in countries like India, Myanmar, or Brazil. Or …

Typically, large right-wing news channels or conservative tabloids will then take these stories going viral on Facebook and repackage them for older, mainstream audiences…

In most countries, reliable publications are going behind paywalls… Which will most likely leave the poor, the old, and the young to fall into an information divide. This is already happening. A study released this month from the UK found that poorer British readers got less, worse news than wealthier readers. And according to a new study by Pew Research Center, only 17% of people over the age of 65 were able to identify fact from opinion…

There are deserts of information where normal people are algorithmically served memes, poorly aggregated news articles, and YouTube videos without any editorial oversight or regulation…

5. “How YouTube Radicalized Brazil.” New York Times. By Max Fisher and Amanda Taub. 11 August 2019.
https://www.nytimes.com/2019/08/11/world/americas/youtube-brazil.html 

YouTube’s search and recommendation system appears to have systematically diverted users to far-right and conspiracy channels in Brazil…

“We need the companies to face their role,” Ms. Diniz said. “Ethically, they are responsible.”

As conspiracies spread on YouTube, video makers targeted aid groups whose work touches on controversial issues like abortion. Even some families that had long relied on such groups came to wonder if the videos might be true, and began to avoid them.

In Brazil, this is a growing online practice known as “linchamento” — lynching. Mr. Bolsonaro was an early pioneer, spreading videos in 2012 that falsely accused left-wing academics of plotting to force schools to distribute “gay kits” to convert children to homosexuality. Mr. Jordy, his tattooed Niterói protégé, was untroubled to learn that his own YouTube campaign, accusing teachers of spreading communism, had turned their lives upside down.

One of those teachers, Valeria Borges, said she and her colleagues had been overwhelmed with messages of hate, creating a climate of fear.

6. “Living in a sea of false signals: Are we being pushed from ‘trust, but verify’ to ‘verify, then trust’?” Nieman Lab. March 2018.
https://www.niemanlab.org/2018/03/living-in-a-sea-of-false-signals-are-we-being-pushed-from-trust-but-verify-to-verify-then-trust/ 

(Recap of a speech by Craig Silverman.)

For me, this example encompasses so much about the current reality of media and online misinformation. For one thing, it shows that online misinformation is about more than just information itself — it also involves economic and algorithmic incentives on a global scale. We also need to think about human factors, such as cognition and belief. […]

The cues that people have used to determine the authenticity of information are in many cases no longer sufficient. Just about everything can be fabricated. We must keep this in mind as we look to establish new signals of trust, because they too will inevitably be fabricated and gamed. …

One thing we must keep in mind is the technology and systems that enabled this reality are optimized to capture and surveil the activities of users. They therefore have the potential to become the world’s greatest censorship infrastructure. …

We have a human problem on our hands. Our cognitive abilities are in some ways overmatched by what we have created.

7. Sarah Thompson’s work, pointed to by Craig Silverman (above), investigating the harm done to First Nations peoples by fake Facebook accounts. Thompson argues that fake accounts do harm precisely because they interfere with the real good being done by First Nations’ people on Facebook. April 2019. https://exploitingtheniche.wordpress.com/2019/04/09/implants-and-extractions-part-ii/ 

In Part One I described two bogus websites that were using a variety of tricks to game both the audience and the advertisers. I showed some fake/stolen profiles that were being used to seed links, memes and to control a group with a large audience and a narrow focus. I showed an admin team of nine that had somehow positioned itself to take over a group 88,000 strong. All that together has a name now, it’s called “Coordinated inauthentic behavior.”

8. “Rebalancing Regulation of Speech: Hyper-Local Content on Global Web-Based Platforms.” By Chinmayi Arun. Berkman Center. 28 March 2018.
https://medium.com/berkman-klein-center/rebalancing-regulation-of-speech-hyper-local-content-on-global-web-based-platforms-1-386d65d86e32  

There is too much harmful speech online but too much harmless opinion is being censored. There is more big data on harmful speech than ever before but no information on what the data means. This is the conundrum of our times. […]

The web-based platforms are mostly US-based and mediate an unfathomable quantity of speech between local actors around the world. Some of this speech is harmful, which means that data on harmful speech worldwide has been accumulating with global platforms.

As public discourse moves over to web-based platforms, the usual local modes of intervention and study lose footing. Online harmful speech therefore has global and local dimensions and this piece attempts to tease these intertwined questions out so that researchers may attempt to answer them.

9. “Internal Documents Show Facebook Has Never Deserved Our Trust or Our Data.” By Jason Koebler and Joseph Cox. Motherboard/Vice News. 5 December 2018.
https://www.vice.com/en_us/article/7xyenz/internal-documents-show-facebook-has-never-deserved-our-trust-or-our-data

[N]ew internal Facebook documents that are part of a lawsuit filed by a firm called Six4Three and published Wednesday by a member of the United Kingdom’s Parliament shows once and for all that Facebook knew the potential harms of its product all along and pushed forth anyway. The answer to the question “why should we trust Facebook?” is: We shouldn’t, and we never should have.

10. “The Art of Eyeball Harvesting.” By Shengwu Li. Logic Magazine, vol 6. January 2019.
https://logicmag.io/play/shengwu-li-on-online-advertising/ 

Whenever you're accessing one of the many websites that sells advertising in this way, all of your cookies are being made public in this fashion. I think people don't realize that this is part of their everyday internet experience […] Most people don't realize how many companies have access to the cookies that are in your browser, and how much information those companies can learn about you from those cookies.

11. “Temporal Limits of Privacy in Human Behavior." Sekara et al, 2018. Arxiv.org (https://arxiv.org/pdf/1806.03615.pdf) via https://twitter.com/SarahJamieLewis/status/1041646238280679424, @SarahJamieLewis. 17 September 2018.

(This speaks to how much “risk” to privacy can arise from very little information)

Knowing of 5 downloaded, non-vendor specific apps installed on a device is enough to uniquely identify over 90% of individuals in a sample of a million+ people. […] “This result is not surprising but it's a nice illustration of how even seemingly ‘useless’/‘unimportant’ information like ‘what apps do you have installed’ can impact total privacy.”
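The uniqueness result above is easy to demonstrate in miniature. A minimal sketch (the users and app names are made up for illustration, not the paper's data): count how many people in a sample have an app set shared by nobody else.

```python
from collections import Counter

# Hypothetical sample: each user is represented by the set of apps
# installed on their device. Two users deliberately share a profile.
users = [
    frozenset({"maps", "chess", "runkeeper", "sketch", "birdsong"}),
    frozenset({"maps", "chess", "runkeeper", "sketch", "birdsong"}),
    frozenset({"maps", "podcasts", "banking", "sketch", "birdsong"}),
    frozenset({"news", "chess", "banking", "weather", "birdsong"}),
]

# A user is re-identifiable if no one else shares their exact app set.
counts = Counter(users)
unique = sum(1 for u in users if counts[u] == 1)
print(f"{unique}/{len(users)} users uniquely identifiable")  # 2/4
```

In a realistic population the space of possible five-app combinations is so large that, as the paper reports, most app sets end up in the "count of one" bucket.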

12. “Freedom on the Net 2018: The Rise of Digital Authoritarianism.” Freedom House. 2018. https://freedomhouse.org/report/freedom-net/freedom-net-2018/rise-digital-authoritarianism [New link as of April 2020: https://freedomhouse.org/report/freedom-net/2018/rise-digital-authoritarianism ]

[Note: A reader from Comparitech suggested their map of Internet censorship in 181 countries might be useful in this context, https://www.comparitech.com/blog/vpn-privacy/internet-censorship-map/ —MPE April 27, 2020]

(From the introduction by Adrian Shahbaz.)

The internet is growing less free around the world, and democracy itself is withering under its influence. […]

Securing internet freedom against the rise of digital authoritarianism is fundamental to protecting democracy as a whole. Technology should empower citizens to make their own social, economic, and political choices without coercion or hidden manipulation. The internet has become the modern public sphere, and social media and search engines have both tremendous power and a weighty responsibility to ensure that their platforms serve the public good. If antidemocratic entities effectively capture the internet, citizens will be denied a forum to articulate shared values, debate policy questions, and peacefully settle intrasocietal disputes. Democracy also requires a protected private sphere. The unrestrained and largely unexamined collection of personal data inhibits one’s right to be let alone, without which peace, prosperity, and individual freedom—the fruits of democratic governance—cannot be sustained or enjoyed.

If democracy is to survive the digital age, technology companies, governments, and civil society must work together to find real solutions to the problems of social media manipulation and abusive data collection.

13. “Freedom on the Net 2019, The Crisis of Social Media.” By Adrian Shahbaz and Allie Funk. Freedom House. November 2019. https://www.freedomonthenet.org/sites/default/files/2019-11/11042019_Report_FH_FOTN_2019_final_Public_Download.pdf  

What was once a liberating technology has become a conduit for surveillance and electoral manipulation…

While social media have at times served as a level playing field for civic discussion, they are now tilting dangerously toward illiberalism…

The expanding use of sophisticated social media surveillance tools raises the risk that constitutionally protected activities will be impaired […] Authorities in 47 countries arrested users for political, social, or religious speech—a record high.

14. “Goodbye, Chrome: Google’s Web browser has become spy software.” By Geoffrey Fowler. Washington Post. 21 June 2019.
https://www.washingtonpost.com/technology/2019/06/21/google-chrome-has-become-surveillance-software-its-time-switch/ 

In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker “cookies” that Chrome would have ushered right onto my computer but were automatically blocked by Firefox. These little files are the hooks that data firms, including Google itself, use to follow what websites you visit so they can build profiles of your interests, income and personality.

Chrome welcomed trackers even at websites you would think would be private. I watched Aetna and the Federal Student Aid website set cookies for Facebook and Google. They surreptitiously told the data giants every time I pulled up the insurance and loan service’s log-in pages.

15. Carole Cadwalladr and Emma Graham-Harrison’s reporting on Cambridge Analytica for the Guardian, such as https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election (17 March 2018)

Documents…show that by late 2015 the company had found out that information had been harvested on an unprecedented scale. However, at the time it failed to alert users and took only limited steps to recover and secure the private information of more than 50 million individuals.

16. “Content or Context Moderation? Artisanal, Community-Reliant, and Industrial Approaches.” By Robyn Caplan. Data & Society. 14 November 2018. https://datasociety.net/output/content-or-context-moderation/

At the Content Moderation at Scale event, Del Harvey, vice president of trust and safety at Twitter, noted that with this kind of scale, catching 99.9 percent of bad content still means that tens of thousands of problematic tweets remain.[…]

Companies using the industrial approach favor consistency across countries [and] tend to collapse contexts in favor of establishing global rules that make little sense when applied to content across vastly different cultural and political contexts around the world. This can, at times, have significant negative impact on marginalized groups. Julia Angwin criticized this type of policy practice when Facebook attempted to implement a policy that incorporated notions of intersectionality divorced from existing power arrangements, essentially protecting the hegemonic groups of White and men, but not “Black children.” Her work demonstrated that attempts at universal anti-discrimination rules too often do not account for power differences along racial and gender lines.
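Del Harvey's point about scale is worth doing the arithmetic on. A back-of-the-envelope sketch (every figure here is an illustrative assumption, not Twitter's actual numbers):

```python
# Back-of-the-envelope: why a 99.9% catch rate still leaves a lot behind.
# All figures below are assumed for illustration, not Twitter's data.
tweets_per_day = 500_000_000   # assumed overall daily volume
bad_fraction = 0.02            # assume 2% of tweets violate the rules
catch_rate = 0.999             # moderation catches 99.9% of violations

bad_tweets = tweets_per_day * bad_fraction
missed = bad_tweets * (1 - catch_rate)
print(f"{missed:,.0f} problematic tweets would slip through per day")
```

Even with near-perfect enforcement, the residue at this volume is a daily flood — which is the asymmetry the quote is pointing at.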

17. “6 ways social media has become a direct threat to democracy.” By Pierre Omidyar. Washington Post. 9 October 2017.
https://www.washingtonpost.com/news/theworldpost/wp/2017/10/09/pierre-omidyar-6-ways-social-media-has-become-a-direct-threat-to-democracy/ 

For years, Facebook has been paid to distribute ads known as “dark posts,” which are only shared with highly targeted users selected by advertisers. When these ads are political or divisive in nature, their secrecy deprives those affected by the ads the opportunity to respond in a timely manner — say, before an election concludes. It also allows outsiders, such as the Russian government, to influence and manipulate U.S. citizens from the shadows.

[…] we’re still just beginning to address how social media across all platforms is being used to undermine transparency, accountability and trust in our democracy.

18. “Delay, Deny and Deflect: How Facebook’s Leaders Fought Through Crisis.” New York Times. By Sheera Frenkel, Nicholas Confessore, Cecilia Kang, Matthew Rosenberg and Jack Nicas. 14 November 2018.
https://www.nytimes.com/2018/11/14/technology/facebook-data-russia-election-racism.html

[A]s evidence accumulated that Facebook’s power could also be exploited to disrupt elections, broadcast viral propaganda and inspire deadly campaigns of hate around the globe, Mr. Zuckerberg and Ms. Sandberg stumbled. Bent on growth, the pair ignored warning signs and then sought to conceal them from public view. […]

Facebook employed a Republican opposition-research firm to discredit activist protesters, in part by linking them to the liberal financier George Soros. It also tapped its business relationships, lobbying a Jewish civil rights group to cast some criticism of the company as anti-Semitic.

19. “Two years after #Pizzagate showed the dangers of hateful conspiracies, they’re still rampant on YouTube.” By Craig Timberg, Elizabeth Dwoskin, and Andrew Ba Tran. Washington Post. 10 December 2018.
https://pb-impact.washingtonpost.com/business/technology/hateful-conspiracies-thrive-on-youtube-despite-pledge-to-clean-up-problematic-videos/2018/12/10/625730a8-f3f8-11e8-9240-e8028a62c722_story.html 

A year after YouTube’s chief executive promised to curb “problematic” videos, it continues to harbor and even recommend hateful, conspiratorial videos, allowing racists, anti-Semites and proponents of other extremist views to use the platform as an online library for spreading their ideas.

20. “On YouTube’s Digital Playground, an Open Gate for Pedophiles.” New York Times. By Max Fisher and Amanda Taub. 3 June 2019.
https://www.nytimes.com/2019/06/03/world/americas/youtube-pedophiles.html

YouTube has described its recommendation system as artificial intelligence that is constantly learning which suggestions will keep users watching. These recommendations, it says, drive 70 percent of views, but the company does not reveal details of how the system makes its choices.[…]

When [researchers] followed recommendations on sexually themed videos, they noticed something they say disturbed them: In many cases, the videos became more bizarre or extreme, and placed greater emphasis on youth. […]

Jennifer O’Connor, YouTube’s product director for trust and safety, said the company was committed to eradicating the exploitation of children on its platform and had worked nonstop since February on improving enforcement. “Protecting kids is at the top of our list,” she said.

But YouTube has not put in place the one change that researchers say would prevent this from happening again: turning off its recommendation system on videos of children, though the platform can identify such videos automatically. The company said that because recommendations are the biggest traffic driver, removing them would hurt “creators” who rely on those clicks.

21. Letter to Jack Dorsey, CEO of Twitter. David Kaye, United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. 10 December 2018 (posted to Twitter 21 December 2018).
https://twitter.com/davidakaye/status/1076018548378497024?lang=en

[Text of Tweet:] Platforms may be improving but they are often opaque to users whose accounts are suspended or posts hidden from view, esp when under pressure from governments. […]

[Letter to Jack Dorsey:] I am writing in connection with information I have received regarding account actions against Twitter users for posting or sharing Kashmir-related content. According to the information received, Twitter has withheld users’ tweets and accounts when they have participated in discussions concerning Kashmir on the platform. Affected users receive notifications that either inform them that their “account [or tweet] has been withheld in India in response to a legal demand,” or that their “account [or tweet] has been withheld based on local law(s).”

At the same time, Twitter has a responsibility to respect the human rights of its users. Indeed, in an interview with WIRED, you have stated that, “[W]e believe our purpose is to serve the public conversation. And that does take a stance around freedom of expression as a fundamental human right.” According to international standards and best practices, the responsibility to respect freedom of expression should, at a minimum, include a duty to “engage in prevention and mitigation strategies that respect principles of internationally recognized human rights to the greatest extent possible when faced with conflicting local law requirements.” In particular, legal demands should be interpreted and implemented as narrowly as possible, to ensure the least possible restriction on expression. Furthermore, when Twitter receives problematic requests, it should “seek clarification or modification; solicit the assistance of civil society, peer companies, relevant government authorities, international and regional bodies and other stakeholders; and explore all legal options for challenge.”

22. “Russian Meddling Is a Symptom, Not the Disease.” By Zeynep Tufekci. New York Times. 3 October 2018. https://www.nytimes.com/2018/10/03/opinion/midterms-facebook-foreign-meddling.html 

Traditional media outlets, of course, are frequently also cynical manipulators of sensationalistic content, but social media is better able to weaponize it. […]

For example, less than two weeks before the 2016 presidential election, a senior official with the Trump campaign told Bloomberg Businessweek that it had “three major voter suppression operations underway.” These were aimed at young women, African-Americans and white idealistic liberals. One of the key tools of the operation, according to Brad Parscale, a Trump campaign manager, was Facebook “dark posts” or “nonpublic posts,” whose viewership was controlled so that “only the people we want to see it, see it.”

23. “The Expensive Education of Mark Zuckerberg and Silicon Valley.” By Kara Swisher. New York Times. 2 August 2018.
https://www.nytimes.com/2018/08/02/opinion/the-expensive-education-of-mark-zuckerberg-and-silicon-valley.html 

All these companies began with a gauzy credo to change the world. But they have done that in ways they did not imagine — by weaponizing pretty much everything that could be weaponized. They have mutated human communication, so that connecting people has too often become about pitting them against one another, and turbocharged that discord to an unprecedented and damaging volume.

They have weaponized social media. They have weaponized the First Amendment. They have weaponized civic discourse. And they have weaponized, most of all, politics.

24. “Robotrolling 3.” NATO Strategic Communications Centre of Excellence. 2018. https://www.stratcomcoe.org/robotrolling-20183 

During the period 1 May - 31 July 2018, Russian-language bots created 49% of all Russian-language messages about NATO in the Baltic States and Poland. Of accounts posting in Russian, 36% were predominantly automated. […] The increasing proportions of anonymous accounts active during key political moments indicate that anonymity is being abused to cloak manipulation on social networks. 

25. “Where Countries Are Tinder Boxes and Facebook Is a Match.” By Amanda Taub and Max Fisher. New York Times. 21 April 2018
https://www.nytimes.com/2018/04/21/world/asia/facebook-sri-lanka-riots.html  

For months we've been tracking riots and lynchings around the world linked to misinformation and hate speech on Facebook, which pushes whatever content keeps users on the site longest – a potentially damaging practice in countries with weak institutions.

One post declared, “Kill all Muslims, don’t even save an infant.” A prominent extremist urged his followers to descend on the city of Kandy to “reap without leaving an iota behind.” Desperate, the researchers flagged the video and subsequent posts using Facebook’s on-site reporting tool.

Though they and government officials had repeatedly asked Facebook to establish direct lines, the company had insisted this tool would be sufficient, they said. But nearly every report got the same response: the content did not violate Facebook’s standards.

“You report to Facebook, they do nothing,” one of the researchers, Amalini De Sayrah, said. “There’s incitements to violence against entire communities and Facebook says it doesn’t violate community standards.”

26. “A Genocide Incited on Facebook, With Posts From Myanmar’s Military.” By Paul Mozur. New York Times. 15 October 2018.
https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html 

Members of the Myanmar military were the prime operatives behind a systematic campaign on Facebook that stretched back half a decade and that targeted the country’s mostly Muslim Rohingya minority group, the people said.

The military exploited Facebook’s wide reach in Myanmar, where it is so broadly used that many of the country’s 18 million internet users confuse the Silicon Valley social media platform with the internet. Human rights groups blame the anti-Rohingya propaganda for inciting murders, rapes and the largest forced human migration in recent history.

While Facebook took down the official accounts of senior Myanmar military leaders in August, the breadth and details of the propaganda campaign — which was hidden behind fake names and sham accounts — went undetected. […]

In August, after months of reports about anti-Rohingya propaganda on Facebook, the company acknowledged that it had been too slow to act in Myanmar. By then, more than 700,000 Rohingya had fled the country in a year, in what United Nations officials called “a textbook example of ethnic cleansing.”

27. “How Facebook and YouTube help spread anti-vaxxer propaganda.” By Julia Carrie Wong. The Guardian. 1 February 2019
https://www.theguardian.com/media/2019/feb/01/facebook-youtube-anti-vaccination-misinformation-social-media 

The Guardian found that Facebook search results for groups and pages with information about vaccines were dominated by anti-vaccination propaganda, and that YouTube’s recommendation algorithm steers viewers from fact-based medical information toward anti-vaccine misinformation.

28. “Tim Berners-Lee unveils global plan to save the web.” By Ian Sample. The Guardian. 24 November 2019.
https://www.theguardian.com/technology/2019/nov/24/tim-berners-lee-unveils-global-plan-to-save-the-internet?CMP=share_btn_tw 

“I think people’s fear of bad things happening on the internet is becoming, justifiably, greater and greater,” Berners-Lee, the inventor of the web, told the Guardian. “If we leave the web as it is, there’s a very large number of things that will go wrong. We could end up with a digital dystopia if we don’t turn things around. […]

“The forces taking the web in the wrong direction have always been very strong,” Berners-Lee said. “Whether you’re a company or a government, controlling the web is a way to make huge profits, or a way of ensuring you remain in power. The people are arguably the most important part of this, because it’s only the people who will be motivated to hold the other two to account.”

29. “YouTube’s Privacy Problem.” By Natasha Singer, in the New York Times “Kids” section (print only). 12 November 2019.

Why was YouTube Fined?

[T]he Federal Trade Commission and the New York attorney general said that YouTube broke [the] law by collecting [COPPA-prohibited personal details] from children who watched kids' videos on the site — and then earned millions of dollars by using that data to target kids with ads.

Did YouTube know that the data it was collecting belonged to kids?

Yes […] YouTube told one advertising company that video channels on its site… did not have viewers under 13. But at the same time YouTube was promoting itself as the top online destination for kids … Some of YouTube's most-viewed channels are aimed at children — the videos on one channel, Cocomelon Nursery Rhymes, have been viewed more than 42 billion times.

30. “Why Tech is Starting to Make Me Uneasy.” By Farhad Manjoo. New York Times. 11 October 2017.
https://www.nytimes.com/2017/10/11/insider/tech-column-dread.html

…In 2007, when Mr. Jobs unveiled the iPhone, just about everyone greeted the new device as an unalloyed good. I was one of them. Here was a genuinely new thing that would Make Your Life Better, we all thought: It would be convenient, it would be fun, it wasn’t serious or dangerous or globe-shattering. That’s no longer true.

The State of the Art, today, is a bag of mixed emotions. Tech might improve everything. And it’s probably also terrible in ways we’re only just starting to understand.

And finally (for now), Ethan Zuckerman, https://twitter.com/EthanZ/status/1009838622449766400, commenting on the New York Times piece “Want to understand what ails the modern Internet? Look at eBay.” https://www.nytimes.com/2018/06/20/magazine/want-to-understand-what-ails-the-modern-internet-look-at-ebay.html

Society is more than a bazaar.

Discourse Saboteurs

[Kathleen Hall Jamieson's] case is based on a growing body of knowledge about the electronic warfare waged by Russian trolls and hackers — whom she terms “discourse saboteurs” — and on five decades’ worth of academic studies about what kinds of persuasion can influence voters, and under what circumstances. Democracies around the world, she told me, have begun to realize that subverting an election doesn’t require tampering with voting machines. Extensive studies of past campaigns, Jamieson said, have demonstrated that “you can affect people, who then change their decision, and that alters the outcome.” She continued, “I’m not arguing that Russians pulled the voting levers. I’m arguing that they persuaded enough people to either vote a certain way or not vote at all.”
How Russia Helped Swing the Election for Trump, by Jane Mayer, New Yorker, 24 September 2018. The article is a profile of Kathleen Hall Jamieson's forensic analysis of the 2016 election: Cyberwar: How Russian Hackers and Trolls Helped Elect a President — What We Don’t, Can’t, and Do Know (Oxford University Press, 2018)

A pre-Newtonian moment

“Social media is in a pre-Newtonian moment, where we all understand that it works, but not how it works,” Mr. Systrom told me, comparing this moment in the tech world to the time before man could explain gravity. “There are certain rules that govern it and we have to make it our priority to understand the rules, or we cannot control it.”

Leo Laporte: Maybe what he's thinking is, Mark Zuckerberg created Facebook to connect, and everything like that. It was used against us in our elections, by the Russians particularly, to convince people not to vote, or to stay at home mostly, or to vote for somebody in particular. To me that was the come-to-Jesus moment, where somebody figured out how to use social media in a very powerful way, and they understood it but Zuckerberg did not. It took Facebook off guard, and at first they denied it even happened. Finally, of late, they've admitted: yeah, that's what happened.

Larry Magid: I think part of the problem for consumers is that most of us don't know how it works. We know that there are algorithms…

Leo Laporte: But do you think Zuck [Mark Zuckerberg] does is the question?

Larry Magid: That's what I'm saying, I assume that Zuck does, but maybe he doesn't fully understand it.

This Week in Tech (TWIT) 606, 19 March 2019, [at 43:51]

Different ground

The dinners demonstrated a commitment from Zuckerberg to solve the hard problems that Facebook has created for itself through its relentless quest for growth. But several people who attended the dinners said they believe that they were starting the conversation on fundamentally different ground: Zuckerberg believes that Facebook’s problems can be solved. Many experts do not.
The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People, by Jason Koebler and Joseph Cox, Motherboard, 23 August 2018

The Tyranny of Analytics

In the social media age, the measurability and commoditization of content, in the form of traffic, clicks, and likes, has tethered editorial strategy to analytics like never before. The emphasis on quantifiable metrics stacks the news cycle with stories most likely to generate the highest level of engagement possible, across as many platforms as possible. Things traveling too far, too fast, with too much emotional urgency, is exactly the point, but these are also the conditions that can create harm.
From Executive Summary: The Tyranny of Analytics, in The Oxygen of Amplification: Better Practices for Reporting on Extremists, Antagonists, and Manipulators Online by Whitney Phillips, Data & Society, May 2018

"Friends and other non-professional influencers"

At its core, the social revolution allows people to consume what they want, when they want, and largely on the recommendation of friends and other non-professional influencers. Attempt to graft old models onto it and you are doomed to struggle; find models that are native to the medium and you will thrive.
— From It’s Not About You: The Truth About Social Media Marketing, by Tim O'Reilly, 2 October 2012.