9 min read

Infancy and AI

How do we use LLMs in learning contexts in ways that preserve authentic human experience rather than sterilize it?
Written by Coleman Numbers
Published on May 30, 2024

Introduction

How do we use LLMs in learning contexts in ways that preserve authentic human experience rather than sterilize it? I posed this question in my last post, which was mainly concerned with what “authentic human experience” means and how the digital age has made it harder to obtain. This post will be my attempt to begin answering that question, so I recommend checking out the aforementioned essay first.

My essay will, again, draw heavily on the work of Giorgio Agamben and his 1978 book Infancy and History: On the Destruction of Experience. Specifically, Agamben’s insight on how humans interact with language will play a big role in distinguishing the role of human and non-human intelligences in learning.

First, though, I’m going to appease the SEO gods (and the 90% of you who have no interest in contemporary European philosophy) by getting topical.

Two Frontiers

In the past week, two headlines have dominated the AI discourse. Neither, I imagine, is too surprising to anybody; but, as it turns out, both are important for cracking the code on today’s question.

First: Scarlett Johansson accused OpenAI of trying to reproduce her voice for GPT-4o, the company’s first natively multimodal model and the engine behind ChatGPT’s new voice assistant. While it isn’t clear whether this was actually OpenAI’s intention, the media kerfuffle raises more intriguing and fundamental questions: why would making your real-life AI sound like one of Hollywood’s recent and prominent fictional AIs be desirable? What’s the social currency in that? And what does that say about our relationship with this technology?

This new type of AI interaction—a highly capable voice assistant that can mimic human interactions and emotions—of course opens up a lot of doors for L&D. Voice assistants that can convey empathy and adapt to learners' emotional states, for example, could revolutionize the way training is delivered, making it more interactive and responsive to individual needs. Again, though, this raises ethical considerations regarding authenticity and the potential for AI to blur the lines between real and simulated interactions; do we want to rely on tools that increasingly preempt the experience of human contact?

Second news item: Anthropic announced headway on machine interpretability. In a research post last Tuesday, they revealed that they’d managed to map Claude 3 Sonnet’s internal representations of millions of concepts to manipulable “features.” Researchers were able to watch, in near real time, how the model’s internal activations respond to inputs.

Anthropic is quick to qualify the magnitude of this research, but it seems to confirm a basic intuition we’ve had about state-of-the-art models: that there is some consistent linguistic representation going on. “Features are likely to be a faithful part of how the model internally represents the world, and how it uses these representations in its behavior,” the post explains.

If we can understand how AI models internally represent and manipulate linguistic concepts, we can develop more effective and transparent AI-driven educational tools. Ideally, this transparency will help educators and developers design AI systems that align more closely with human cognitive processes, facilitating better comprehension and retention of information. For example, L&D designers might be able to revise a chatbot in a learning module to intrinsically adopt certain teaching strategies or philosophies over others—all without having to wrestle with the occasionally arcane art of prompt engineering.
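To make that idea a little more concrete, here is a minimal, purely hypothetical sketch in Python. Nothing here corresponds to a real library or to Anthropic’s actual tooling; the `Feature` class and `steer_feature` method are stand-ins for the kind of feature-level control their interpretability research points toward.

```python
# Hypothetical sketch: adjusting a learning chatbot at the "feature" level
# rather than through prompt engineering. None of these classes or methods
# exist in any real library; they illustrate the concept only.

from dataclasses import dataclass


@dataclass
class Feature:
    """A named internal representation the model has learned (e.g. 'Socratic questioning')."""
    name: str
    activation: float  # how strongly the feature fires by default


class InterpretableChat:
    """Imaginary wrapper around a model whose features can be dialed up or down."""

    def __init__(self, features: list[Feature]):
        self.features = {f.name: f for f in features}

    def steer_feature(self, name: str, strength: float) -> None:
        # Clamp the feature's activation to a new value, loosely analogous to the
        # feature manipulation described in Anthropic's interpretability work.
        self.features[name].activation = strength


# An L&D designer might bias a tutor toward guided discovery over lecturing:
tutor = InterpretableChat([
    Feature("socratic_questioning", 0.2),
    Feature("direct_explanation", 0.8),
])
tutor.steer_feature("socratic_questioning", 0.9)
tutor.steer_feature("direct_explanation", 0.3)
```

The appeal, if something like this ever materializes, is that a designer could bias a tutor toward a given teaching philosophy at the level of the model’s internal representations rather than through ever-longer system prompts.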

I mention these developments not merely for clicks, or for their pertinence to L&D, but because they represent two frontiers of our interaction with AI, two frontiers that bear directly on the question of reclaiming human experience. Frontier one is the emotional and social: how we treat AI is becoming a humanistic as well as a technical problem.

Frontier two is the linguistic. Experts might argue that this has always been an issue in machine learning, but I really mean that, in public consciousness, we’re becoming much more aware of the ramifications that large language models have for what we understand language to be and how it functions. The more we learn to interpret machine intelligence, I think, the more we’ll be compelled to examine what exactly it is we humans do with language.

And the better we can recognize these differences between human and machine uses of language, the more equipped we’ll be to responsibly, meaningfully utilize this technology in learning settings.

Infancy and Language

Which brings me back to Agamben. For Agamben, the “destruction [or expropriation] of experience”1 really began way back with Descartes and his I think, therefore I am. This assertion slammed together two classical ideas that had previously been separate: experience, obtained subjectively, within the self, and knowledge, which happens when a divine, external source “communicates with [the soul]”2. Both were forced into one rational subject.

This collision, which is now so commonplace in the West that we can’t really imagine the self any other way, “[made] experience the locus—the ‘method’; that is, the pathway—of knowledge”3. Thus empiricism and reasoned argument centered in an individual rational mind became the basis for learning things about the world. There is no more room for imagination, or spiritual intuition, or epiphany—that which communicates with the soul—as a basis for truth, because knowledge and experience have been forced into the same domain. We can only trust, then, what we directly see or measure.

But this is very bad, Agamben says. For one, “the Cartesian subject is nothing more than the subject of the verb, a purely linguistic-functional entity”4. Understanding the self this way—situated in language—creates philosophical and spiritual problems that the West has wrestled with ever since.

What does the “Cartesian I” mean, for example, for a schizophrenic patient who regularly experiences intensely religious episodes that don’t seem to correlate to any objective reality? Or what about split-brain patients who seem to be able to “see” something on one side of their visual field without consciously registering that they’re seeing anything at all?

This is also bad because subjective experience is inherently uncertain and unstable, and the only way to account for that is to outsource our experience to “instruments and numbers”5. We’re inheritors of that instrumentalized world.

Agamben devotes much of the rest of his essay to outlining what all this means—but his way out depends again on language, and how humans use it. Specifically, he distinguishes semiotic language—language as a system of pure signs, language that is “merely recognized”—from semantic language, language as communicated meaning, language that is “comprehended”6. What makes humans special is that, constantly, every day, perhaps in every moment, we move between these two domains. We’re awash in representative symbols, and we have the cultural and cognitive machinery to take those symbols and string together meaningful messages that connect us to other humans.

To put it another way: I know what the words “sleeping,” “hair clippers,” and “prank” all mean on their own. I can recognize the one-to-one correspondences between literal symbols and ideas. But what makes us human—what makes us able to really participate in experience—is that we can use those symbols to receive and convey a comprehensible story. (I’ll leave you to fill in the blanks on this three-term story.)

And at the center of that process, Agamben claims, is “human infancy”7. The idea is a little nebulous and probably the only good way to understand everything that he’s saying is to read his whole book—but the short and skinny of it is that our default condition, the way we come into the world, is outside of language. So Agamben’s infancy means literal infancy, but also something much deeper—he’s getting at the state that we all inhabit, constantly, prior to grasping for symbols to string into meaningful messages. After all, we have to have something to talk about. And this, for Agamben, is where pure experience resides—“the individual as not already speaking, as having been and still being an infant”8.

Infancy and AI

This state, it seems to me, is also how we preserve authentic human experience in the face of AI—by recognizing what we are and what AI is not.

That’s where the two frontiers I mentioned earlier come in. I think we can easily map these two frontiers—the linguistic and the emotional/social—to the semiotic and the semantic. On one frontier of AI interaction, we’re combing through the representational; we’re pushing the boundaries of how we understand LLMs to be using language.

On the other frontier, we’re pressing against the comprehensible—we’re madly trying to invent machines that can be lovers, confidantes, interlocutors, salespeople. We want to create models that can convey and receive our most precious messages.

The trap, I think, is the same one we’ve been falling into since Descartes. Because we’ve collapsed the human self into a purely linguistic-functional construct, and because we’ve displaced so much of our experience of the world onto artificial instruments, we think we can reverse engineer the self in the confines of a machine that is in actuality a language game.

What we fail to realize, though—what Agamben reveals—is that we are more than the semiotic plus the semantic. Between those two linguistic boundaries lies the infant self—a human before language, before symbols or messages. This infancy is what AI lacks, and I doubt that AI, in its current form, can ever give it to us directly.

So, vis-à-vis learning and AI, what I propose is a radical rethinking of what the human within the learning context is. We aren’t merely the sum of all the training modules completed or write-ups sent, or even the sum of a demonstrated and concrete skillset. Pure experience—the fundamental substance that makes up meaningful learning—can’t be reduced to linguistic products, and it can’t be mediated by purely linguistic processes.

Recently, L&D expert and AI-in-learning guru Philippa Hardman wrote about what a reckoning with this reality might look like—how AI can be used to expand the window for human infancy rather than contract it. After surveying some leading research in the behavioral science and learning fields, Hardman recommends two general practices that all learning professionals should adopt: “1. Shifting from content design to context design” and “2. Shifting from Short Term to Long Term Design.”

In other words, Hardman suggests, we need to focus less on the rote, momentary “presentation of new information” and more on “repetition in stable, unchanging contexts where cues trigger automatic behaviours without conscious intent.”9 Behavioral science seems to reflect the same insight that Agamben came to, and that is probably already intuitive for any L&D professional: lasting learning occurs at a level below explicit awareness, in a stage of experience before language.

Hardman gives some suggestions for how AI can help learning professionals work with this feature of human learning. If you want a full account of those, and some example ChatGPT prompts to try, I recommend you read her excellent post. Among other things, though, Hardman proposes that ChatGPT and similar chatbot-style learning models can offer learners personalized projects that help them develop and practice skills; chatbots can also offer specific and relevant feedback on demand.
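To ground this in something runnable, here is a rough sketch of what an on-demand practice-and-feedback loop might look like. This is my own illustration, not Hardman’s prompts; it assumes the official openai Python package, a valid OPENAI_API_KEY in the environment, and access to a chat model such as gpt-4o.

```python
# A rough sketch of an on-demand practice-and-feedback loop, in the spirit of
# Hardman's suggestions. The prompts and function names are illustrative only.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_practice_project(skill: str, learner_context: str) -> str:
    """Ask the model for a small, personalized project that exercises a skill."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a workplace learning coach."},
            {"role": "user", "content": (
                f"Design a short practice project for the skill '{skill}'. "
                f"The learner's context: {learner_context}. "
                "Keep it doable in under two hours and tied to their real work."
            )},
        ],
    )
    return response.choices[0].message.content


def give_feedback(skill: str, learner_work: str) -> str:
    """Return specific, on-demand feedback on a learner's submitted work."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You give concrete, encouraging feedback."},
            {"role": "user", "content": (
                f"The learner is practicing '{skill}'. Here is their work:\n\n"
                f"{learner_work}\n\nPoint out two strengths and one next step."
            )},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(generate_practice_project(
        "giving constructive feedback",
        "a new engineering team lead who runs weekly one-on-ones",
    ))
```

The point is not the specific prompts but the shape of the interaction: the learner asks for a project or for feedback at the moment they need it, rather than receiving a one-off block of content.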

One thing I appreciate about Hardman’s recommendations is that AI doesn’t become a “scalable” proxy for human contact and connection. Instead, AI becomes a jumping-off point for connection, a personal assistant that can coordinate schedules between mentors and mentees and help plan productive meetings. In this way, too, AI can become a tool to foster rather than preempt human infancy in the workplace.

This idea of infancy, of course, goes a lot deeper than a little blog post can explore. But I hope my presentation of these ideas will lead you towards productive conversations and moments of pure experience that enrich not merely your approach to L&D but your whole relationship with technology—especially that oldest and most inescapable technology, language.

Notes

  1. Giorgio Agamben, Infancy and History: On the Destruction of Experience (New York: Verso, 1993), 15.
  2. Ibid., 21.
  3. Ibid., 22.
  4. Ibid., 25.
  5. Ibid., 20.
  6. Ibid., 67.
  7. Ibid., 58.
  8. Ibid., 58.
  9. Philippa Hardman, “How Humans Do (and Don’t) Learn,” Dr Phil’s Newsletter, May 30, 2024.