8 min read

The Bicameral Chatbot

AI reframes "human knowledge" as a pursuit. How will we adapt?
Written by Coleman Numbers and Ethan Webb
Published on March 10, 2023

Coleman from Mindsmith here (props to Ethan for the idea underlying this article). Today I want to talk about the future of learning in light of large-scale thinking about AI. To do that, I’m going to have to pull in two ideas from philosophy—one slightly kooky and one pretty intuitive. Buckle up.

In 1976, Princeton psychologist and consciousness researcher Julian Jaynes published The Origin of Consciousness in the Breakdown of the Bicameral Mind, an exploration of neuroscience, psychology, and ancient history that hypothesizes an enthralling but admittedly wacky and not-quite-supported picture of the genesis of subjective experience.

Now, you might be familiar with the tail end of that book title because of HBO’s Westworld, which popularized the term in its first season via sweeping Anthony Hopkins monologues. Here’s the basic idea (and I promise this is relevant to edtech, learning, and AI):

Consciousness as we know it is a relatively new development; human cognition, all the way up to the ancient world of the Iliad and the Old Testament, didn’t involve the constant, self-narratizing “I” that we experience today. How?

Jaynes makes the case that, prior to the development of more complex linguistic and intellectual concepts, the two hemispheres of the brain were fundamentally separated and disjoint. Instead of experiencing internal dialogue, introspection, and constant reflection, ancient people lived in an essentially non-conscious state, reacting to marked auditory and visual hallucinations produced in the right hemisphere and transmitted to the left.

Ancient peoples interpreted these auditory hallucinations as explicit commands coming from various gods and reflexively obeyed these commands of the right hemisphere. This interplay between disjoint hemispheres, Jaynes argues, was the basis of human thinking for thousands of years—the incessant, eternal, and demanding whisperings of the gods.

Jaynes’ book has been criticized for simplifying realities about the brain as well as ancient texts, but his framework for consciousness leads us back to a crucial question: what voices inform our internal narratives, collectively and individually? What stories do we tell ourselves, consciously or not, that help us make it through this big, scary, ambiguity-laden world?

When the AI gods come out of their machine cocoons, what will they be telling us?

To understand, obviously, we have to talk about Hegel first.


Dialectics

Georg Wilhelm Hegel was a super-duper fascinating German philosopher of the late 18th and early 19th centuries who made lots of world-changing philosophical contributions. In the pop philosophy realm that smug faux-intellectual blogs draw from liberally, he’s most well-known for his dialectical theory,* or his framework for how ideas propagate through culture and history.

Hegel posited three main “moments” or stages in the life cycle of an idea:

  1. Thesis: A concept exists stably, as a reliable, understandable element of people’s worldview.
  2. Antithesis: Something happens that challenges the stability of the concept—life events, wars, natural disasters, new Chick-fil-A menus—and the original idea is thrown into uncertainty by its antithesis, a new concept that subverts the old.
  3. Synthesis: Because thesis and antithesis are, to begin with, two sides of the same coin, people find ways to reconcile the two opposites and develop a new status quo—a synthesis of the old and the new that, itself, becomes a new thesis.

You can probably point out lots of ways that this dialectic informs our individual and collective lives—that’s what made Hegel’s theory so clever. This dynamic mediates our music tastes, our office politics, our love lives, our favorite subreddits. The world is a constant churn of thesis, antithesis, and synthesis. Even in our own heads we can’t escape this ideational vortex.

Okay, great. Hegel’s a neat guy, his dialectical theory is super cool. What does he, or Julian Jaynes, or bicameralism, have to do with AI and learning?


Observations

Observation 1: Let’s pretend Julian Jaynes’ ideas about pre-modern people are, if not true, at least metaphorically useful. We’ve been living in a post-bicameral, conscious state for something like 3,000 years. The status quo of human cognition is that of self-contained individuals with discrete subjective experience. This state of being is so ingrained that anything else sounds dubious (which, to be clear, Jaynes’ hypothesis is, according to the psychiatric community).

Observation 2: Our current state of being, including and especially how we think about knowledge and knowledge transmission, is a thesis.

Observation 3: Over the past decade, machine learning’s emerged as a new voice in the human landscape, familiar but, somehow, distinct. Artificial intelligence, especially now, offers echoes and facsimiles of human knowledge—yet it is, in idiosyncratic ways, everything we are not. AI is an antithesis.

So, the question: what happens to our cognition when the thesis of human learning collides with its imminent antithesis—machine learning?


The Human Thesis

Obviously, this isn’t the first time that we’ve completely redefined the parameters of human learning. Other technologies throughout human history have reconstituted knowledge and its transmission—and people embroiled in the reigning theses of their day resisted them, too.

Think of Plato’s scandalized reaction to the popularization of writing, or Catholic consternation around the printing press. Or the telegraph, or the personal computer, or the Internet. Every great advance is met with resistance and friction as antithesis grinds against thesis.

We’re in a similar position right now, and it seems like the honeymoon stage of our first dalliance with AI might be wearing off. In the New Yorker, science fiction writer and futurist Ted Chiang compares ChatGPT to a photocopier, or a blurry JPEG of the sum of all information on the web, and questions whether we’re ready to lionize large language models as the future of intelligence.

I’m not unsympathetic to Chiang’s appraisal of models like GPT as “lossy text-compression algorithms”, but there’s something in his cautious deflation of AI’s powers that’s reminiscent of naysayers from an earlier decade. I wonder whether that dismissal stems from thesis-defensiveness as much as from keen observation.

We can see the same sort of skepticism—understandable but reactive—in Kevin Roose’s New York Times exploration of Bing’s now-infamous chat assistant, Sydney.

I’m not arguing we don’t have reasons to be circumspect about the future of AI. We definitely do—and we’ll talk about that more below.

I am arguing that we’ve been met, already, with an unavoidable counter to our Hegelian thesis on human knowledge—and the only way forward is straight through that collision.


The AI Antithesis

It’s passé, at this point, to reference all the ways that “artificial narrow intelligence”, to borrow a term from Tim Urban (or whomever he borrowed it from), has surpassed human capabilities. So I won’t point them out.

I’ll just use this short space to link to Tim’s seminal blog post on the ramifications of AI. Tim outlines, with greater skill and more cool graphs than I’ll ever muster, how hilariously we underestimate AI’s potential to exponentially bootstrap its own smartness. Once AI reaches this point of “recursive self-improvement”, we’ll basically be incapable of understanding it on any meaningful level. The AI will be the most alien—and most consequential—entity in our world.

Especially in learning and education, AI will be our antithesis, the force that challenges us to redefine ourselves.

I won’t pretend to prognosticate about specifics in this world post-AI-antithesis. In 2015, Urban described a psychotic “intelligence explosion”** on the part of AI that the median expert in the field expects to happen around 2060. I don’t know if I buy that, but perhaps that’s just thesis-me talking.

BYU writing professor Brian K. Jackson reflected in a blog post that, “as [language models] get in the way of” teaching college students to be good human writers, “I’ll need to change my teaching game”.

We’ll all need to change our games if we’re going to exist meaningfully in the post-antithesis world. What does that look like?


Human-AI Synthesis

Jackson lays out a prospective vision for teaching in his own field:

“Students will need to learn to better collaborate with other humans to improve the clarity and force of written communication. And as someone who teaches advanced style, I believe, with all the recklessness of a language lover, that student writers will need to be even more dynamic stylists; they’ll need to marinate in the art of English sentences, developing dispositions of aesthetic attunement, to keep the information ecology from becoming a stagnant pile of bureaucratic cyborgspeak.”

I think this is a natural and profoundly human reaction. The staggering capabilities of machine learning call us to examine what it means to learn only as humans can—to write, and maybe do anything else, with “blood and fire”, as Jackson would say.

Maybe this is where our part in the synthesis lies: maybe, rather than freaking out about machines outstripping us in tasks that they inevitably will outstrip us in, we should focus on understanding the parts of ourselves that make being human worthwhile. Bing’s nascent chatbot, Sydney, certainly seems to think there’s something desirable in being a meat-vehicle.

And yeah. I know this is the sort of thing that a naive, pre-intelligence explosion human would say.

But back to bicameralism. Julian Jaynes posited a past populated by unreflective human beings who reacted instantly to the voices of gods. The status quo was a world of unconscious creatures, people unaware of their own unawareness.

Then, something magic happened: human beings invented complex, metaphorical language—an intellectual toolkit that empowered them to unify divine whisperings within a waking self.

What if something similar is happening now? How will our ability to process the world—the substrate of all our learning, our conscious experience—transform as we integrate this new, incomprehensible toolkit?

That remains to be seen. But one thing’s certain:

There’s no going back.



Notes
*This framework comes mainly from Hegel’s Phenomenology of Spirit; importantly, the terms “dialectics”, “thesis”, “antithesis”, and “synthesis” aren’t articulated in that work itself. A later thinker, Heinrich Moritz Chalybäus, applied these terms to Hegel’s work.

**The term "intelligence explosion" was actually coined in 1965 by statistician I.J. Good.
