Authoritarian Algorithms: Deconstructing Myths of the Future

Popular rhetoric from the likes of Yuval Noah Harari fails us in our most weary and desperate hour.

“Artificial Intelligence is no match for Natural Stupidity” by Sean Davis is licensed under CC BY-ND 2.0.

Over supper with an interviewer from the New Yorker in 2020, Yuval Noah Harari noted that, in the Middle Ages, “only what kings and queens did was important, and even then, not everything they did” (Parker, 2020). It’s easy enough to understand his point: from a certain perspective, big changes in history seem to be about one or two big figures, traipsing along, knocking over all age-old precedents in their way.

But history is less about the figures at the head than about the folk who surround them. All the Einsteins, Hitlers, Shakespeares, Stalins, and Hararis of the world have been the product of a living system of humanity.


You are every bit as important as any of Harari’s kings, every bit as valid a player on the grand human scale. The truth of history is that it changes depending on your perspective — your lens. As with all of Harari’s writing, we end up wandering through a sort of populist fiction, every bit as disengaged from reality as the pulpy television Harari prefers to novels.

That is not to say that all of Harari’s work is wrong. Many of the conclusions he brings to the fore, many of the scientific ideas, and many of the popular political issues he raises have some element of truth to them. The problem is that it’s not always easy to tell where his fiction ends and a version of reality begins. In a novel, this is to be expected. In a body of work trumpeted as humanity’s most important since the pyramids of Giza (itself a gross misrepresentation of what matters in history), this sort of fictional blurring is a serious problem.

But then, Harari knows this. “If we think about art as kind of playing on the human emotional keyboard,” he said during a conference some years ago, “then I think A.I. will very soon revolutionize art completely” (Parker, 2020). His own work is precisely the sort of emotion-modulating art that he disavows, stirring up visions of an apocalypse of artificially-intelligent machines that distract the reader from other questions. This is what concerns me most about his work.

Harari touches on matters of very real concern, like the use of technology to spy on people and control their lives, or the threats posed by the large language models so frequently termed “Artificial Intelligence” by mass media. And yet his writing on these topics makes them seem like inevitabilities, born of an ancient curse that dooms humanity to take all good things too far.

Indeed, there’s even a hint of technophilic glee in his tone.
“When the biotech revolution merges with the infotech revolution,” he writes, “it will produce Big Data algorithms that can monitor and understand my feelings much better than I can, and then authority will probably shift from humans to computers” (Harari, 2018, Listen to the Algorithm, para. 8). This idea, as it happens, is simply not true. What is true is that corporations the world over are grossly misusing algorithms as if they could offer valuable predictions, and that is the real problem (Narayanan, 2022).

And yet, according to Harari, the rationality of machines will echo how “some people are far more knowledgeable and rational than others, certainly when it comes to specific economic and political questions” (Harari, 2018, Big Data Is Watching You, para. 5). Here he has found evangelical agreement among the neoliberal elite: an interesting group of fans, considering that Harari’s work so frequently criticizes their tech-centered lives.

As Darshana Narayanan writes in a 2022 article for Current Affairs:

“Harari’s motives remain mysterious; but his descriptions of biology (and predictions about the future) are guided by an ideology prevalent among Silicon Valley technologists like Larry Page, Bill Gates, Elon Musk, and others. They may have differing opinions on whether the algorithms will save or destroy us. But they believe, all the same, in the transcendent power of digital computation.” (Narayanan, 2022)

Harari himself makes this point when he writes that once “AI makes better decisions than us about careers and perhaps even relationships, our concept of humanity and of life will have to change” (Harari, 2018, The drama of decision-making, paras. 15–16). Note that he does not need to make the point a positive one in order to present it as an inevitability.

My point is not simply that Harari’s vision of the future is pessimistically dystopian, a sort of Cyberpunk 2077 presented as future-fact. Set Harari’s views against those of “positivist” pop-figures like Steven Pinker and you end up with the same fundamental principle: the promotion of surveillance capitalism.

I am not the first to make this point, and Narayanan’s excellent 2022 article covers it extremely well, but it cannot be overstated how dangerous it is to take these new technologies as “inevitable.” Of course, that’s exactly what Harari and other “big view” pop-history writers tend to do. Looked at through a casual frame, history often does appear inevitable. But unless we subscribe to a strictly deterministic view of physics, treating it that way is pure folly.

History did not emerge the way it did solely because of plagues, or because of the farming of crops, or because of the invention of gunpowder. It happened the way it did because many groups of people communicated with one another and used their rational capacity to build different models of living. We could, in essence, return to the point found so often in Greek philosophy: a human history that exists because lots of people were having a long conversation about what it means to be a good person.

And yet, perhaps it is by examining the ancient Greeks that we can find an answer for why Harari’s work has so enraptured the neoliberal elite.

In 21 Lessons for the 21st Century, Harari writes that there “is nothing wrong with blind obedience, of course, as long as the robots happen to serve benign masters” (Harari, 2018, Digital Dictatorships, para. 2). Granted, he’s writing about the obedience of future slave-robots in his sci-fi future, but this seems to touch on something greater at the same time. He writes elsewhere that the “liberal belief in the feelings and free choices of individuals is neither natural nor very ancient,” and that for “thousands of years people believed that authority came from divine laws… Only in the last few centuries did the source of authority shift from celestial deities to flesh-and-blood humans” (2018, Listen to the Algorithm, para. 1).

This is not only a gross simplification of the concepts of freedom and free will; it is also eerily reminiscent of the argument in Plato’s Republic for a tyrannical philosopher king, with its vision of democracies as foolish masses selfishly struggling and veering the ship of state off-course (Republic, 488a–489d). But Plato, distraught by the changing times and by the death of his mentor Socrates, might not be the most unbiased person in all of history from whom to draw our political conclusions. And yet Harari’s future of artificially-intelligent supremacy is a spooky callback to Plato’s vision of a perfect state. Harari’s best and worst futures are intertwined, and he presents our salvation as a world of beneficent intelligent technology controlled by equally-beneficent technologists with the good sense to keep humanity on its right course.

When he does highlight the human problem, such as when he points out that the danger of “robots is not their own artificial intelligence, but rather the natural stupidity and cruelty of their human masters” (Harari, 2018, Digital Dictatorships, para. 4), he does so only to present an answer in the form of “a benign government [where] powerful surveillance algorithms can be the best thing that ever happened to humankind” (Harari, 2018, Digital Dictatorships, para. 8).

Perhaps it is within Harari’s childhood that we can find out more. Harari spent much of his youth playing war games of his own invention, and, as Parker recounts, “he went through a period when he was ‘a kind of stereotypical right-wing nationalist.’ … He laughed. ‘You know — the usual stuff’” (Parker, 2020). Harari may have come a long way from his nationalist childhood, to the point of quietly, if not openly, objecting to the right-wing slide of Israel’s politics, but his framework for understanding the world is still anchored in a specific class and mindset.

This less-savory aspect of Harari’s work can be summed up in a story he tells about a “tragicomic incident” from 2017.

A “Palestinian labourer posted to his private Facebook account a picture of himself in his workplace, alongside a bulldozer. Adjacent to the image he wrote ‘Good morning!’ An automatic algorithm made a small error when transliterating the Arabic letters. Instead of ‘Ysabechhum!’ (which means ‘Good morning!’), the algorithm identified the letters as ‘Ydbachhum!’ (which means ‘Kill them!’). Suspecting that the man might be a terrorist intending to use a bulldozer to run people over, Israeli security forces swiftly arrested him” (Harari, 2018, Digital Dictatorships, para. 12).

Harari has gone to some lengths to avoid condoning the conditions that the Israeli government forces upon the Palestinians, and yet his language here is emblematic of a perspective deeply insulated by privilege, and still connected to some ghost of his childhood nationalism. After all, I doubt there was anything “comic” in the experience for the Palestinian man and his family, whose terror became a source of gallows humor for Harari’s next bestseller.

A deeper failure of Harari’s focus on technology now comes into view. While Harari showcases the fallibility of the algorithm that turned the Palestinian man’s innocent Facebook post into a threat to his life, he fails to target the devil in the details. How did the algorithm come to be there in the first place? What is Facebook’s complicity in the terrorizing of a workman taking a selfie? These are questions Harari shies away from. While he posits numerous downfalls and terrible futuristic ordeals, he ultimately avoids discussing the problem in any meaningful way.

Shoshana Zuboff’s 2019 book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, offers a crystal-clear example of the problem that Harari skates around. “We worry about companies that amass our personal data,” writes Zuboff, “and we wonder why they should profit. ‘Who owns the data?’ we ask. But every discussion of data protection or data ownership omits the most important question of all: why is our experience rendered as behavioral data in the first place?” (Zuboff, 2019, Rendition: From Experience to Data, para. 1).

Harari’s writing presents a clear-cut picture of a technologist’s dream future, without once presenting it as just another fable, another fiction of the sort he makes a point of highlighting at the beginning of his earlier work, Sapiens. Indeed, Harari praises what Zuboff terms the “datafication” of human experience, and continually reminds us that our limited emotional perspective is inferior to the logic of the machines.

Are the algorithms created by the organized efforts of surveillance capitalists dangerous? Absolutely. But so is populist work like Harari’s, if it manages to do to a large population what it did to Hannah Hrabarska.

After reading Sapiens, the Ukrainian photographer found herself “more compassionate” toward people around her, although less invested in their opinions. As she told Parker, “This came from a feeling of ‘O.K., it doesn’t matter that much, I’m just a little human, no one cares.’” Hrabarska has since disengaged from politics. “I can choose to be involved, not to be involved,” she said. “No one cares, and I don’t care, too” (Parker, 2020).

What good does that compassion do if it distances you from the world you need to be involved in? If you become so certain of your own and your fellows’ complete worthlessness in the grand scheme of time, of your utter pointlessness when compared to the power of some artificial intelligence owned by Facebook, what are you left with? A self-fulfilling prophecy, one that leaves only the light of Platonic, socially elite luminaries to lead the way. And the only artificial intelligence those luminaries are selling is the sort “designed to render some tiny corner of lived experience as behavioral data” (Zuboff, 2019, Rendition: From Experience to Data, para. 17).


Hi there! I’m Odin Halvorson, a librarian, independent scholar, film fanatic, fiction author, and tech enthusiast. If you like my work and want to support me, please consider subscribing!

References
Harari, Y. N. (2018). 21 Lessons for the 21st Century. Random House.

Narayanan, D. (2022, July 6). The Dangerous Populist Science of Yuval Noah Harari. Current Affairs, March/April 2022. https://www.currentaffairs.org/2022/07/the-dangerous-populist-science-of-yuval-noah-harari

Parker, I. (2020, February 10). Yuval Noah Harari’s History of Everyone, Ever. The New Yorker. https://www.newyorker.com/magazine/2020/02/17/yuval-noah-harari-gives-the-really-big-picture

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
