In the ever-escalating ideological war of narratives, everyone agrees we need to “make the world a better place”—or claims to. In a general way, there is even agreement about how to do so: we need to make the world a better place by coming up with (and enforcing) better narratives. And regardless of the ideological (or ethical) specifics, hope is invariably the currency every narrative deals in.
Narratives that shape our identity determine our worldview and behavior. This leads to a gradual remolding of culture and society, shaping future generations, and so on. But what if the formative stories we are being exposed to as children are fictions? Are the stories we tell ourselves as adults any different? And how would we determine it, if we don’t have anything to compare them to?
Bound by Shared Aversion
In December 2018, Tim Cook, the CEO of Apple, was presented with the first ever “Courage Against Hate Award” by the Anti-Defamation League “for his work as a champion of unity, diversity, and social progress.”
Jonathan A. Greenblatt, the CEO and National Director of the ADL, described Cook as “a staunch advocate for the LGBTQ community and immigrants’ rights while denouncing racist vitriol like the events in Charlottesville.” Under Cook’s stewardship, Apple also banned Infowars, Alex Jones’ platform, from its App Store, due to “defamatory, discriminatory, or mean-spirited content, including references or commentary about religion, race, sexual orientation, gender, national/ethnic origin, or other targeted groups . . . likely to humiliate, intimidate, or place a targeted individual or group in harm’s way.”
Introducing Cook for the presentation of the award, Greenblatt promised his audience that Cook “will not only add spark to our conversation: I am certain he will inspire us.” Even as Greenblatt repeats words like “thrilled” and “excited,” however, the flatness and enervation of his speech delivers the very opposite message. And following the subdued applause, Cook steps onstage and launches into an even more soporific speech. He expostulates in muted monotones on “the stubborn and constant evils of anti-Semitism,” and describes how Apple has “always prohibited music with a message of white supremacy. Why? Because it’s the right thing to do, and as we showed this year, we won’t give a platform to violent conspiracy theorists” (Jones again).
While “Violence and hate darken the streets of Pittsburgh and so many other places,” he adds, “more and more people [are] opening their eyes and rising to their feet and speaking out in defense of a society where we are all bound together by the values we have in common.”
All bound together by the values we have in common. The statement speaks volumes, especially considering that these shared values center around a quasi-consensual condemnation of Alex Jones and his millions of followers. Is it only a question of numbers? What exactly makes values common and binding—and what makes them vitriolic populism to be expunged at all costs?
Outside the echo chamber of the ADL, Cook’s speech was not especially well-received. On YouTube it has fewer than a thousand views, and both the comments and like/dislike function are disabled. This speaks for itself: the Apple-ADL elite are not interested in popular feedback if it might interfere with their narrative. At the same time, Cook’s speech is so lifeless and dispassionate that it barely passes the Turing test (unlike Jones, no one can accuse Cook of being a rabble-rouser). If the medium is the message, who are Apple and the ADL hoping to persuade?
As far as I know, what attention the presentation did attract was mostly due to its brazen, if ineffective, attempt to reframe restrictions upon freedom of speech as evidence of courage and compassion beyond the call of duty. And less overtly, to further sell the idea of benign tech. Cook winds up his speech with the following:
You might know the phrase deus ex machina, God from the machine. [T]his idea . . . has stayed with us through the ages because it’s so comforting. Just when the world seems to be getting more dangerous, just when it seems like the challenges may be greater than our ability to solve them, it’s reassuring to think that some technological marvel, some creation of our own hands, will solve the problem for us. But what I admire so much about the ADL is that your entire history provides a lesson here. If the machines we build are going to help us solve the world’s problems, then the God part, that decency, mercy, and humanity, is going to have to come from all of us. After all, we only have one life, so why not use it to make the world a better place? [Emphasis added—there is nothing emphatic in Cook’s speech.]
While no technocratic speech is complete without that last, limp refrain, Cook’s question is of course wholly rhetorical. It is not meant to raise actual questions in our minds, such as: who decides what constitutes a better world, how to go about making it, or who it is better for—or what?
With his “hateful anti-government conspiratorial rants,” Alex Jones would surely at least claim to be driven by the same noble goal as Cook and Greenblatt. Apple sees Jones as fueling the fires of extremism and hate, and Jones sees his own personal demons of totalitarian mind control and the New World Order lurking behind globalist, neoliberal corporations like Apple. Both get to point a finger at the other, as a formidable adversary in their “shared” goal of making the world a better place.
So who represents the greater threat here? Or at least, who has the greater power and influence? With his histrionic pseudo-conviction and calculatingly uncouth white-trash bravado, Jones—like Donald Trump—is the crass antithesis of the smooth, android precision of the Apple-ADL faction, and provides the wormy counter-narrative to the technocracy’s shiny Apple.
It goes both ways: Banning Jones is a great way to legitimize his military-industrial strength soap-box and resuscitate his flagging audience, making him appear a genuine threat to the powers-that-be. Thanks to Apple, YouTube, and Twitter, Jones now has credibility for people who formerly dismissed him as a creakingly obvious meat puppet of controlled opposition. And by the same token, the ADL and Apple get to show off their compassion cojones by going after a big, bad alt-media wolf.
Like the Joker and Batman, each acts as a bulwark to validate the other’s existence. If this isn’t actually theater—or pro-wrestling—it appears to adhere to the same basic principles.
All Watched Over by Machines of Loving Grace
When she was just a toddler, my niece enjoyed a British TV show called “Teletubbies.” “Teletubbies” consists of people (possibly small ones) dressed in brightly colored costumes, prancing around a brightly colored set, making sounds in lieu of language. With its primary colors, music and sound effects, constant movement, baby talk, and rudimentary storyline, “Teletubbies” is just about sophisticated enough to entertain adults and older children, without being too complex to appeal to infants and toddlers.
Judging by the success of “Teletubbies”—and by how engrossed my one-year-old niece was in it—human beings are able to follow narratives before they even have a clear sense of identity. This suggests that narration, language, and stories are central to how identity forms. Just as every sentence needs a subject, every story requires a protagonist: whether it’s first, second, or third, there has to be a person acting for any story to unfold. The correlation between narrative and identity is clear.
Developmental psychologists (Jean Piaget, David Premack)—not to mention parents—have observed that children develop a sense of a separate self over the first four years or so, prior to which, in the jargon, they have not yet developed “theory of mind.” In this regard, it’s interesting to note that children’s stories are often about animals—perhaps because animals don’t have theory of mind either?
At the start of his monotone speech, Cook quotes a talk he heard in Brussels earlier that year, called “Is Technology Designed to Serve Humankind?” by Maria Farrell. The question of whether technology is designed to serve mankind is certainly a heavily loaded and pressing one, and during the presentation, Farrell asks several more:
Why does Elektra have to murder her mother? Or why, in the Pied Piper of Hamelin, why does the Pied Piper punish those parents of those children he leads away in such a diabolical way? We don’t know, and that’s why those stories are stories that we all know, right? They’re still working on us, so they don’t yield up their meaning in a really obvious way. No. They keep working on us. Working on our unconscious. We take them in consciously but they work on us unconsciously. Almost as if there is a piece of software executing a subroutine, basically against our will. Stories are basically malware. [Emphasis added]
Bizarrely, Farrell does not go on to correct, or even address, her contention that narratives are “basically” malignant. Her presentation is about the need to generate better stories to shape a better future and (you guessed it) create a better world. Yet in the midst of her pitch, the best metaphor she can come up with for how stories function is that of an invasive technology designed to take over our systems and redirect them towards malicious ends.
The only reassurance she offers is this: “And that’s not necessarily a bad thing if you’re a storyteller or an educator.” Surprisingly, she doesn’t raise the question of whether a storyteller is ethical, inspired, or possessed of wisdom or benevolence, but simply whether or not we are (professional) storytellers. The goodness or badness of the technology apparently depends on which side of the class divide we are on. Farrell’s proviso is the necessary lead-in to the actual message of her presentation, which is not cautionary but promissory.
“Looking to the future, what kind of stories could we tell using artificial intelligence, using an augmented reality, virtual reality, that will help us to do this weird unconscious intermingling thing even better?”
Apparently she is speaking not just for but to the technocratic class when she says this. The emphasis is mine, of course, meant to underscore that, unless I misread her badly, Farrell is proposing a more efficient form of consciousness-malware.
As confirmation, Farrell then uses the example of experiments conducted by Harry Harlow (an associate of Abraham Maslow) in the 1950s in which baby rhesus monkeys were given cloth and wire effigies in place of real mothers, after which they became demented. “I think we can actually do something much better,” she says, “to link our consciousnesses to not feel alone.” Better than wire effigies and dementia? How low is this bar? A brave new world awaits us—one of perfected effigies, and less demented monkeys.
In her ostensibly benign presentation, Farrell is advocating, more or less explicitly, using technology to create a surrogate experience of parenting—technology as wire effigies of the mother’s body, the mother’s gaze. She is suggesting replacing the organic matrix, which in optimal conditions creates a healthy and autonomous sense of self, with an artificial one. If we program A.I. correctly, she is saying, it will program us. It will become our Big Mother, telling us a never-ending bedtime story, inside of which we can all live happily ever after.
In her own words, she believes we can use A.I. in a similar way as prophesied by Neal Stephenson in The Diamond Age, to optimize human potential—“to do stories bigger, to live in our stories, to be loved by our stories and love them back.”
Most strangely of all, she seems to have a raptly receptive audience for a proposal that, to some of us, may seem openly, not to say brazenly, dystopian.
Farrell’s talk was given at the International Conference of Data Protection and Privacy Commissioners, Debating Ethics: Dignity and Respect in Data Driven Life, so it might be the time to ask, what exactly is a data-driven life, and why exactly is it in need of dignity and respect?
Based on Farrell’s talk, a data-driven life is a life discreetly run by narrative malware. It is what we get when we allow our consciousness to be controlled by algorithms, when our behavior is increasingly inspired—or parented—by our technology. At the very least, it is what happens when we have learned to operate in a similar way to our technology. Humans are mimetic creatures—so why wouldn’t we start to imitate our own creations, especially when they are associated with efficiency, productivity, and power?
Have we engineered a technology that is now engineering our consciousness to imitate it? Is it a coincidence that, the more our stories are being mediated by the evolving forms of technology, the more centrally technology is featured in them, even to the point of becoming the protagonist of them?
At least since radio and television, governments have had the ability to force-feed narratives to mass populations. Both visibly and tangibly, this is a cumulative process, a cybernetic snowball leading to an information avalanche. It can be mapped, not only via the obvious advances in technology, but through the transmutation of our own consciousnesses, personalities, society, and behaviors. We are all becoming baby monkeys in A.I.’s laboratory, passive and faceless travelers on HAL’s spaceship.
If our ability to get lost in narratives precedes even our sense of identity, stories may indeed be used—like malware—to take over our nervous systems and shape our identity in its formative stages. If corporations were to implant these sorts of preverbal narratives into us at a young age, they could coopt our development to an unknown but profound degree. I am being needlessly conservative here, since clearly this project has been underway for decades now, via more and more sophisticated forms of technology. Now that we are several generations of malware infection in, how much more susceptible are we, as adults, to influence by more “grown-up,” sophisticated, and insidious manufactured narratives?
In our data-driven lives, we have become increasingly dependent on Tim Cook’s stewed apple of programmable “knowledge.” More and more, we address social problems by referring to a collective computerized databank. There’s a certain amount of awareness that the problems we are trying to fix to some extent originate with our beliefs, which implies that the database is inherently flawed. This is why we are constantly trying to “perfect the narrative,” to come up with the correct ideological lens through which to view the world. But it seems as though that awareness is never enough for us to turn the lens on ourselves, to look at—or even acknowledge—the hidden database, the malware, that’s driving us to create these narratives.
Instead, we keep looking for flaws and anomalies within the narrative: not as a way to expose the unreliability of the narrator, but as a way to fix the stories and make them run more smoothly, so we can continue to suspend our disbelief.
It’s not as though the flaw in all of these techno-utopian narratives isn’t glaringly obvious, either—the dearth of wisdom that’s being concealed by the shiny sophistication. If we attempt to put Cook’s, the ADL’s, or Farrell’s propositions in the context of the solar system, or even just the Earth, the fact is, these are all just temporary solutions pretending at permanence. They are a materialistic reworking of religiosity. What’s worse, because they aren’t sourced in a traditionally religious mindset, they have no reference to eternals or absolutes, only to never-ending “apps” in constant need of updating. Instead of promising transcendence of space and time, they propose to extend them indefinitely—offering Hell in lieu of Heaven.
Perhaps the part of our consciousness—the malware—that’s running our behaviors is continuously creating technology that will re-present itself to us, and so make itself stronger? That would mean our technologies are reflecting back at us our own pathology—like a funhouse mirror distortion, like man-meat tornadoes such as Alex Jones or Donald Trump, flailing wildly, with their unbridled, uncouth id-ness, in the face of the Apple elite.
The problem with seeing things this way is that it is painful and despair-inducing—humbling. It’s much more appealing to carry on tweaking our techno-toys and refurbishing the narrative. But in the process, are we also being tweaked? As we create ever more sophisticated algorithms to censor those voices—inner and outer—that disrupt our dream, is our consciousness becoming increasingly algorithmic?
If we are living more and more inside—taking refuge in—our own mind, paradoxically, it’s becoming an increasingly externalized mind. This presents both a problem and a solution. If we have externalized our minds—if every problem we encounter, everything we don’t like, is mirroring the configuration of our minds—we may finally be getting to see where the problem is actually sourced. At this point, every external problem becomes, potentially, a kind of solution.
The external problems aren’t actually solutions—not yet—but they’re presenting us with the opportunity to resolve something internally, to get free of and/or to change the configuration of our minds, and so exit the story we have told ourselves about ourselves and the world. It is most definitely no lullaby.
Hope for Sale
Farrell ends her presentation with the inspirational “spark” that Tim Cook offered, to ignite his anti-defamation bonfire of internet censorship and thought control:
“Despair is unethical,” she says. “We have, as some Irish diplomats put it, a duty of hope.”
We have to imagine those futures. We have to develop that muscle. . . . hope is a muscle, we can make it stronger the more we use it. . . . it’s not a question that we can take back our future; it’s not a question that we must take back our future. It’s a question, in fact it’s a statement, that we will take back our future, the future of all of our imagination, the future for all of us.
What’s the alternative? That the Despair Police will come for us and toss us like sacrificial effigies onto the Apple bonfire, along with Jones and all other offenders? Does anyone really believe that a better world is one in which hope is a duty and despair a crime? Or is Farrell’s hope-muscle a knot of perennial dissatisfaction which she would be better off massaging out of her system forever? Isn’t the best way to enter reality—however bleakly—to let go of infantile reliance on make-believe stories?
Farrell’s idea of unethical despair presumably relates to what might result from seeing our present social situation—even our own personal psychological neuroses—as beyond our power to directly resolve. That’s considered a bummer, a killer of hope. And yet, based on all the evidence, what else are we to conclude? How can we remove malware by relying on a program that’s covertly run by the same malware? Like Thomas Anderson inside the Matrix, we need an intervention.
Farrell’s view appears to be different, however. She proposes that we need to side with the malware itself—hence her odd flip-over from cautionary to promissory without explanation. If the world is still messed up, she is suggesting, it’s because we haven’t tried hard enough to fix it yet. Our stories aren’t big enough or bold enough. Yet world dictators and sociopaths are not known for their lack of effort, commitment, or epic vision. Is it possible there’s a clue in this?
If hope depends on some pre-formatted view of the way things should be, imposed onto reality by sheer will and ideological fervor, how is that different from fanaticism or delusion? Farrell, Cook, Greenblatt, and the techno-utopians believe they know the best future for everyone, and that they have the imperative—the duty—to engineer it. Our duty then becomes to believe in the hope-offering narratives they provide. Since questioning them might invite despair, questioning becomes unethical. Talk about illegitimate persuasion! Bowman just got “checkmated” by HAL!
What Farrell is offering in her presentation is a softer version of the transhumanist dream of creating a surrogate reality to escape into. By talking about how we need to manufacture the narratives that will shape our future, she seems to be laying the groundwork for the creation of one, overriding, overarching, unified narrative that will love us back, a dream world we can totally immerse ourselves in. Is it all soft language, code, for “Welcome to Second Life”? That would certainly be one way to avoid everything we don’t like, to literally create a “better” world in which we would be bound together, forever, by common values of denial, avoidance, and wish-fulfilment fantasy.
But when it comes to racism, homophobia, white supremacy, anti-immigration, Brexit, Trump, globalism, populism, technocracy, or whatever is potentially leading us to question the reliability of our narratives, isn’t it wise to identify the source of these problems before formulating ways to eradicate them? These “glitches in the matrix” are making it painfully obvious that we’re not at the end of history, we’re not bound together by common values, and we haven’t come up with a mother-narrative to live in forever after (though this won’t stop us from trying).
If we have come up with some idea of future perfection, this is not a new idea, at all, just the same infantile fantasy at the root of every ideology: Slay the dragon, win the princess, save/convert everyone we can, and damn the rest. The solutions we’re coming up with, the future narratives we’re imagining will solve the problems of the present, keep on creating the problematic future we’re being data-driven into and trying to problem-solve our way out of!
The possibility the solutions we invent are making the problem exponentially worse is finally becoming both observable and tangible—tactile—to us. But even as the technology we build is taking over our lives, its promise is becoming ever more seductive.
 Piaget was also a pioneer in artificial intelligence; his work has recently gained new interest due to Jordan B. Peterson’s many citations.
 “Apple CEO Tim Cook to receive Anti-Defamation League award in December,” Apple Insider, November 14, 2018: https://appleinsider.com/articles/18/11/14/apple-ceo-tim-cook-to-receive-anti-defamation-league-award-in-december
 “Apple Inc bans Alex Jones app for ‘objectionable content,’” Reuters, September 7, 2018: