
How to Build Trust in AI eLearning Companies
Written by Coleman Numbers
Published on May 17, 2024

“rant: HR-contracted online training courses are a self-serving waste of everyone's time.”

So runs the title of a post from r/elearning, a subreddit dedicated, ostensibly, “to discussion of the techniques, difficulties and joys of creating, applying and evaluating elearning of all types.”

To quote another thread in a similar vein: “Am I the only one who feels that 85% of trainings are just bollocks?”

And its response: “If you think 15% of the trainings you take are useful, that's amazing.”

Tons of joy to be had here, yeah.

Of course, I probably don’t need to spell out the ways in which online corporate trainings often constitute a specific brand of Kafkaesque hell—if you’ve spent any amount of time around the average corporate training, you already knew that. And hopefully you’re reading this because Mindsmith’s streamlined, intuitive e-learning has helped your org, or the org of someone you dearly love, escape that hell forever.

For this post, I’m less worried about the various woes of corporate e-learning and their design-level solutions and more concerned with the organizational incentives that drive bad L&D—namely, that the for-profit structure of most, if not all, ventures that supply e-learning solutions doesn’t account for the negative externalities that affect learners. Which, when you say it, sounds silly, because aren’t learners the key stakeholders for any e-learning venture? That’s the sentiment we all pay lip service to, anyway.

Accordingly, I’m interested in an alternative governance structure—the long-term learning trust (LTLT)—that might work out this incentive problem. The LTLT is a fun term I made up that riffs on a real-world governance structure at one of the world’s most famous AI firms: the Long-Term Benefit Trust. I’ll explain this in more detail later, but the skinny is that Anthropic has designed a governing body that’s financially disinterested, one whose primary job is to make sure Anthropic board members are weighing questions of public benefit alongside their fiduciary responsibilities.

So whether you’re an instructional designer, a manager, a CEO, or just the poor schmuck who has to sit through these painful trainings, I think this essay, and its attempt to propose a novel way to align stakeholder incentives, might be helpful.

Let’s jump in.

The Disconnect

As I noted above, it’s a perhaps obvious point that the way corporate trainings work (particularly the way online, module-based trainings work) isn’t serving learners. Harvard Business Review contributor Steve Glaveski noted in 2019 that only a quarter of employees surveyed by McKinsey believed training measurably improved their job performance, and in another survey, three quarters of managers from various organizations reported dissatisfaction with the results of corporate training efforts.

One reason for this discrepancy, Glaveski proposes, is that employees themselves undergo these trainings not to gain new skills, but for social and professional “signaling”: to demonstrate to their bosses that they’re high-value employees.

Likewise, Glaveski writes, “L&D staff signal their worth by meeting flawed KPIs, such as the total CPE [continuous professional education] credits employees earn, rather than focusing on the business impact created.”

Importantly, this same signaling phenomenon unfolds between the firms that buy corporate training and the companies that design corporate e-learning. Firms have strong incentives to demonstrate that, at least on paper, they’re training their employees, but those incentives have little to do with direct accountability to the learners.

The consequence is an ecosystem of e-learning platforms more interested in selling ineffective “signals” than in building genuinely enriching training experiences. The business-to-business incentive structure doesn’t account for the needs of learners; instead, it creates negative externalities like wasted time, a miserable user experience, and skill stagnation.

Anthropic’s Long-Term Benefit Trust

How do we address these negative externalities? I think it comes down to realigning the incentives that drive the suppliers of e-learning platforms and solutions. To do so, I want to draw on the AI firm Anthropic’s somewhat novel approach to corporate governance.

Elsewhere in this blog I’ve written about the many negative externalities that accompany rapid, uncritical adoption of AI. Anthropic is ostensibly concerned about the same problems, and their governance structure tentatively reflects that worry.

Nearly a year ago, Anthropic posted on their website about what they call the “Long-Term Benefit Trust” (LTBT), an independent group of trustees with the power to steer Anthropic toward serving “the long-term benefit of humanity” as well as shareholder profit. The LTBT can do this because it holds a special class of stock that lets it elect a majority of Anthropic’s board members over an extended period. Those board members, in turn, are responsible for balancing the public interest with their fiduciary responsibilities.

As a result (in theory, anyway), Anthropic is positioned to make strategic decisions that address the negative externalities its products might create. Because Anthropic is a public benefit corporation, that commitment is written directly into its charter, and the LTBT works to put it into practice.

The Long-Term Learning Trust

E-learning companies can do something similar, and arguably something more targeted: they can implement a “Long-Term Learning Trust” (LTLT).

As we’ve discussed, the main stakeholders left out of the equation for e-learning companies are, weirdly, the learners themselves. But that omission is actually intuitive once you look at who’s paying.

E-learning companies, after all, don’t sell to learners. They sell their platforms, and sometimes their bespoke products, to L&D departments at other companies, who in turn use those products to meet the often signal-based and perfunctory training goals of their organizations. Because of this, learners have no direct input on how e-learning products are designed—they aren’t part of the transaction.

A Long-Term Learning Trust would account for this by creating a group of informed individuals who represent the interests of learners, just as Anthropic’s LTBT comprises a group of informed individuals who represent different concerns about AI’s impact. And, like the LTBT, an e-learning company’s LTLT would have the power to elect board members, who in turn would be accountable for balancing the needs of learners against those of more immediate shareholders.

An LTLT might include pedagogy scholars, cognitive scientists, experienced entrepreneurs in the learning space, or seasoned teachers. Depending on the specific mission of an e-learning company, the composition of this group might be even more specific. If the company is interested in reaching underserved populations, for example, the LTLT might include sociologists who have experience with that group of people, or advocates with personal and professional backgrounds from that community.

I recognize that this all sounds somewhat utopian and abstract. Perhaps it is. This structure raises a lot of questions—how do you tune the logistics of this structure so that members of the LTLT aren’t swayed by commercial interests? How do you find an LTLT that can meet the needs of all, or even most, types of learners? How do you measure actual learner success over the long-term, especially when individuals move between organizations?

I won’t dismiss those questions, nor will I claim to have any ready answers. But I will point out that they’re far from intractable. And the gains that could be made by structuring e-learning companies this way—confidence from customers, higher participation rates from learners, and real, durable learning—make the prospect worth a second look.
