In this episode, Chris discusses his journey in AI and education. He explores the potential of teachable AI systems that learn from minimal examples, the role of AI in supporting teachers, and the need for natural interactions between humans and AI. He also addresses the challenges of transparency in AI and the limitations of large language models.

Non-linear path into human-computer interaction. Chris’s journey into human-computer interaction exemplifies what Herbert Simon called a “biased random walk”—a career trajectory shaped by both strategic choices and serendipitous opportunities. His path began in mathematics during his undergraduate years, where he first discovered artificial intelligence and became fascinated by the original goal of AI: understanding how minds work by building them. His initial graduate school experience focused on traditional AI optimization—making algorithms faster, more accurate, and more data-efficient. However, this technical focus felt disconnected from his deeper interest in understanding cognition and intelligence. This realization led to a pivotal decision to switch advisors and work with Pat Langley, a student of Herbert Simon, who introduced him to cognitive systems research. When Pat relocated to New Zealand, Chris faced another crossroads that required reapplying to graduate programs. This apparent setback became an opportunity when he found his way to Carnegie Mellon University and Ken Koedinger, who needed a student to build simulated students—computational models that could replicate human learning within tutoring systems. The project was a perfect match for Chris’s interest in understanding minds through computational modeling. The transition to Carnegie Mellon also provided an unexpected educational opportunity. Since his previous graduate credits didn’t fully transfer, Chris chose not to simply retake AI courses but instead broadened his expertise by studying human-computer interaction at one of the premier institutions in the field and by delving into cognitive science and the learning sciences. This interdisciplinary foundation, combining AI, HCI, and learning sciences, would prove instrumental in shaping his unique research perspective and approach to building more human-compatible AI systems.

Teachable AI. Chris’s concept of teachable AI addresses a fundamental limitation in current artificial intelligence systems: their inability to learn efficiently from human instruction. While large language models demonstrate impressive capabilities across many tasks, they require vastly more training data than humans would ever encounter in a lifetime. This inefficiency highlights a crucial gap between artificial and human learning that Chris aims to bridge through his research. The inspiration for teachable AI comes from observing what makes humans unique among species: cultural transmission of knowledge. Unlike evolutionary processes that require multiple generations to transmit information, humans can rapidly share ideas through direct interaction. When two people engage in conversation, they can exchange concepts that immediately become part of each other’s knowledge base, enabling rapid cultural evolution that far outpaces biological adaptation. Current AI systems, despite processing enormous amounts of human-generated data, cannot engage in this type of direct knowledge exchange. Large language models synthesize patterns from vast datasets but cannot be meaningfully updated through individual interactions. While techniques like retrieval-augmented generation provide workarounds by maintaining external knowledge stores, they don’t fundamentally change how the underlying model processes and integrates new information. Chris envisions AI systems that can engage in genuine social learning—systems that can be taught new concepts through natural interaction and can, in turn, teach humans in ways that leverage our evolved capacity for cultural knowledge transmission. This vision requires insights from multiple disciplines: learning science to understand how humans effectively share knowledge, human-computer interaction to design natural interfaces for teaching AI, and artificial intelligence to build systems capable of rapid adaptation from minimal examples. The goal is creating AI that participates in the distinctly human process of cultural learning rather than simply processing static datasets.
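
To make the “external knowledge store” workaround concrete, here is a minimal, purely illustrative sketch of the retrieval-augmented pattern: documents live outside the model, the closest ones are retrieved for each query, and they are prepended to the prompt while the model’s own weights stay fixed. The bag-of-words similarity and the store contents are toy stand-ins, not any particular system’s implementation.

```python
from collections import Counter
from math import sqrt

# External knowledge store: documents kept outside the model's weights.
knowledge_store = [
    "Cultural transmission lets humans share ideas within a single generation.",
    "Intelligent tutoring systems give structured practice and immediate feedback.",
]

def embed(text):
    """Toy bag-of-words 'embedding' (stands in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    """Return the k stored documents most similar to the query."""
    q = embed(query)
    ranked = sorted(knowledge_store, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    """Prepend retrieved context; the underlying model itself is never updated."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How do humans transmit knowledge so quickly?"))
```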

AI’s role in supporting teachers. Chris advocates for a symbiotic relationship between AI and human educators rather than viewing AI as a replacement for teachers. His perspective draws from decades of research in intelligent tutoring systems. The key insight is that teachers often find themselves overwhelmed with routine instructional tasks, leaving insufficient time for the deeper mentorship, emotional support, and social connection that represent uniquely human contributions to education. AI tutoring systems excel at providing structured practice, immediate feedback, and adaptive sequencing of learning materials—tasks that, while pedagogically important, don’t require the complex social and emotional intelligence that human teachers bring to their work. By deploying AI systems to handle what some might characterize as “drill and kill” activities—the repetitive practice necessary for skill consolidation—teachers could be freed to focus on higher-level educational goals. This includes providing the rich social and emotional support that develops lifelong learning attitudes, offering personalized mentorship that helps students navigate academic and personal challenges, and creating the human connections that make learning meaningful and engaging. This division of labor plays to the strengths of both humans and machines. AI systems can provide unlimited patience for repetitive practice, consistent application of pedagogical principles, and adaptive responses to student performance patterns. Human teachers contribute irreplaceable elements: understanding the lived experience of being human, providing emotional support during challenging learning periods, and offering the kind of inspirational guidance that shapes students’ long-term educational trajectories. Chris’s vision positions AI systems as sophisticated teaching assistants that enhance rather than diminish the fundamentally human aspects of education.
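
As a concrete, heavily simplified illustration of the adaptivity described above, the sketch below uses a Bayesian Knowledge Tracing update, a standard technique from the intelligent tutoring literature rather than a description of any specific system discussed in the episode, to estimate mastery from student responses and to decide when more practice is warranted. The parameter values are invented for the example.

```python
# Toy Bayesian Knowledge Tracing update (Corbett & Anderson, 1995), shown only to
# illustrate how a tutor can adapt practice to student performance.
# These parameter values are made up for the example.
P_INIT, P_TRANSIT, P_SLIP, P_GUESS = 0.2, 0.1, 0.1, 0.2

def update_mastery(p_mastery, correct):
    """Update the probability that the student has mastered a skill after one attempt."""
    if correct:
        evidence = p_mastery * (1 - P_SLIP)
        total = evidence + (1 - p_mastery) * P_GUESS
    else:
        evidence = p_mastery * P_SLIP
        total = evidence + (1 - p_mastery) * (1 - P_GUESS)
    posterior = evidence / total
    # Account for the chance the student learned the skill from this practice step.
    return posterior + (1 - posterior) * P_TRANSIT

def needs_more_practice(p_mastery, threshold=0.95):
    """Adaptive sequencing in miniature: keep practicing until estimated mastery is high."""
    return p_mastery < threshold

p = P_INIT
for observed_correct in [False, True, True, True]:
    p = update_mastery(p, observed_correct)
    print(f"estimated mastery: {p:.2f}, more practice needed: {needs_more_practice(p)}")
```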

Natural interactions. Chris challenges the common assumption that natural language interfaces automatically create natural interactions with AI systems. His perspective, influenced by Don Norman’s design principles and James Gibson’s concept of affordances, suggests that truly natural interaction depends more on compatibility with human behavioral schemas than on the communication modality itself. The problem with purely language-based interfaces lies in their lack of clear affordances—visual or contextual cues that communicate how users should interact with a system. Unlike physical objects such as door handles that signal whether to push or pull, language interfaces provide no inherent guidance about appropriate interaction patterns. This absence of structural cues leaves users uncertain about how to engage effectively with AI systems. Chris proposes that natural interaction emerges from designing systems that align with existing human social schemas—the learned patterns people use to interact with each other. These schemas develop through cultural experience and include established patterns for knowledge exchange: providing examples, asking clarifying questions, correcting mistakes, and building on previous exchanges. Rather than forcing users to adapt to novel interaction paradigms, effective AI systems should recognize and accommodate these evolved social patterns. His research involves studying human-human learning interactions—observing how people naturally teach and learn from each other in various contexts—and designing AI systems that can participate in these familiar patterns. This approach leverages the extensive cultural evolution that has optimized human knowledge transmission over generations, rather than requiring users to develop entirely new interaction skills for AI systems. The goal is creating AI that feels natural not because it uses language, but because it participates in the social schemas people already understand. This compatibility allows users to apply existing knowledge about human interaction to their engagement with AI, reducing cognitive load and enabling more effective collaboration between humans and artificial systems.

OpenAI Five example. The OpenAI Five experiment provides a compelling illustration of how human-AI collaboration can break down despite impressive individual AI capabilities. In this competition, five professional Dota 2 players faced five reinforcement learning agents trained through self-play. While the AI agents could defeat human teams under certain conditions, the mixed-team matches revealed fundamental challenges in human-AI cooperation. When teams combined humans and AI agents, human players reported feeling like they were solving puzzles while trying to play the game. Professional player Blitz described the experience of not understanding what the AI agents would do or why they made specific decisions. More problematically, human players felt that AI teammates would abandon them in critical moments when the AI’s decision-making process diverged from human expectations. This breakdown illustrates the crucial importance of mutual theory of mind in effective collaboration. Successful human teamwork relies on each person’s ability to model their teammates’ thinking processes, anticipate likely actions, and coordinate accordingly. Even when working with strangers, humans can draw on shared experience of being human—understanding common decision-making processes, emotional responses, and strategic thinking patterns. AI agents trained through self-play develop decision-making processes that are fundamentally alien to human cognition. They optimize for game outcomes through exploration patterns and reward structures that don’t match human learning or reasoning. This creates a black box problem where human collaborators cannot predict, understand, or effectively coordinate with AI behavior. The implications extend beyond gaming to any scenario requiring human-AI collaboration. Chris argues that effective teaming requires AI systems designed with human interpretability in mind. Rather than optimizing solely for task performance, AI systems intended for human collaboration need to make their decision-making processes accessible to human understanding, enabling the mutual theory of mind that makes genuine teamwork possible.

Transparency issues with AI. Chris expresses significant concerns about the concentration of AI development within corporate entities and the resulting lack of transparency in foundational AI systems. The enormous computational resources required for training large language models create an asymmetry where well-funded corporations can develop capabilities that remain inaccessible to academic researchers, nonprofits, and smaller organizations. This concentration of AI development raises multiple concerns beyond simple access. Even when companies release models under open licenses, the training processes, data sources, and architectural decisions remain opaque. Researchers and users have no insight into potential copyright violations in training data, biases embedded in model development, or the specific techniques used to create these systems. The dominance of large language models across AI conferences and research communities creates what Chris characterizes as technology-driven rather than human-centered research. The field increasingly focuses on finding new applications for existing models rather than developing AI capabilities that address genuine human needs. This represents a concerning shift from human-centered design principles that should prioritize user requirements and social benefits. Chris particularly worries about the sustainability and accessibility of current AI approaches. The financial models underlying large language model development often involve companies absorbing significant losses to gain market share, with plans to raise prices once dependency is established. Given the limited technology budgets in educational contexts, this pricing trajectory could create significant access barriers for institutions that cannot afford commercial AI services. The environmental costs of current AI development add another dimension to these concerns. The massive computational requirements drive expansion of data center infrastructure, increase water consumption for cooling, and may necessitate nuclear power generation to meet energy demands. Chris questions whether the benefits of current AI approaches justify these substantial environmental and social costs, particularly when alternative approaches might achieve similar educational and collaborative goals with significantly lower resource requirements.

Towards third wave AI. Chris articulates a vision for third wave AI that moves beyond the limitations of both knowledge-based systems and current data-driven approaches. Where first wave AI required extensive manual engineering of knowledge representations and second wave AI derives its capabilities from large-scale data processing, third wave AI would enable direct human interaction with AI systems to customize and improve their capabilities. The key distinction lies in the source and nature of knowledge integration. First wave systems relied on engineers to encode expert knowledge into rule-based representations. Second wave systems extract patterns from existing datasets—primarily content that humans have already created and published online. Third wave systems would learn directly from users through interactive teaching, enabling personalization and adaptation that reflects individual needs and contexts. This vision incorporates elements of neuro-symbolic computing, which aims to combine the interpretability and grounding of knowledge-based systems with the adaptability and pattern recognition capabilities of machine learning approaches. However, Chris emphasizes that the crucial differentiator is the interactive nature of knowledge acquisition, where users directly contribute the knowledge they need rather than relying on either pre-engineered representations or found data. Third wave AI would maintain human-interpretable knowledge representations that users can understand and modify. Unlike the black box nature of large language models, these systems would provide transparency about why specific decisions are made and how new knowledge integrates with existing capabilities. This interpretability enables users to correct mistakes, add new information, and understand how their teaching influences system behavior. The practical implications include AI systems that can be customized by users without requiring technical expertise, that learn from individual interactions rather than needing massive datasets, and that maintain clear connections between human knowledge and system capabilities. This approach would democratize AI customization and reduce dependence on large corporations while enabling more human-compatible learning and collaboration.
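
As a rough, hypothetical illustration of what interactive, interpretable knowledge acquisition could look like (a toy sketch, not a description of any actual TAIL system), the snippet below keeps taught knowledge as human-readable rules that a user adds through direct instruction and that the system can cite when explaining a decision.

```python
# Toy sketch of interactively taught, human-readable knowledge.
# Each rule is a condition/conclusion pair the user can read, correct, or delete.
rules = []

def teach(condition, conclusion, taught_by):
    """A user adds a rule through direct instruction; no retraining is involved."""
    rules.append({"condition": condition, "conclusion": conclusion, "taught_by": taught_by})

def decide(facts):
    """Apply the first matching rule and report which rule fired and who taught it."""
    for rule in rules:
        if all(facts.get(k) == v for k, v in rule["condition"].items()):
            return rule["conclusion"], f"via rule taught by {rule['taught_by']}: {rule['condition']}"
    return None, "no matching rule; ask the user to teach one"

# One taught example is enough for the system to act on, and the user can inspect
# exactly how that teaching influences behavior.
teach({"topic": "fractions", "denominators": "unequal"}, "find a common denominator", "teacher demo")
print(decide({"topic": "fractions", "denominators": "unequal"}))
```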

Challenges in education and learning. Chris highlights a fundamental tension in educational technology: the gap between what people perceive as effective learning and what research demonstrates actually produces learning gains. This challenge is particularly relevant when evaluating AI systems for educational applications, where subjective user satisfaction may not correlate with genuine learning outcomes. Research consistently shows that people often misjudge their own learning effectiveness. Students frequently report preferring passive learning methods like watching lecture videos over active engagement with problem-solving exercises, even though assessments demonstrate greater learning gains from the more challenging active approaches. This pattern suggests that educational technologies should not be designed primarily around user preferences or satisfaction ratings. The concern extends to current applications of large language models in education. While conversational AI interfaces may feel more natural and engaging to students, they risk creating an illusion of learning where students feel they understand concepts without actually developing independent problem-solving capabilities. When AI systems provide answers rather than scaffolding student thinking, students may experience temporary comprehension that doesn’t translate into retained knowledge or transferable skills. Chris advocates for grounding educational AI development in learning science research rather than intuitive design decisions. This means designing interactions that optimize for genuine learning outcomes rather than user satisfaction or engagement metrics. Effective educational AI might require students to engage in effortful processing, struggle with challenging concepts, and receive feedback that promotes reflection rather than immediate gratification. The challenge for educational technology developers is creating systems that feel supportive while maintaining the productive difficulty that drives real learning. This requires careful attention to pedagogical research about effective instruction, assessment methods that measure actual learning rather than perceived learning, and design decisions that prioritize long-term educational outcomes over short-term user satisfaction. The goal is developing AI that genuinely enhances human learning capacity rather than creating dependency relationships that undermine independent thinking skills.

Overreliance on LLMs. Chris expresses deep concern about the widespread adoption of large language models in education without sufficient attention to their pedagogical implications. His worries extend beyond technical limitations to fundamental questions about learning effectiveness, cost sustainability, and environmental impact of current AI approaches. The core educational concern centers on the risk that language models may create an illusion of learning while actually undermining skill development. When students can easily obtain answers from AI systems, they may avoid the effortful cognitive processing that drives genuine learning. Research suggests that this offloading of thinking to AI systems can leave students feeling more capable while actually reducing their independent problem-solving abilities. As an educator, Chris addresses these concerns by designing assignments that are difficult to complete using language models—typically problems that push beyond the boundaries of web-available information or require novel reasoning about unfamiliar scenarios. He also emphasizes discussion-based learning that provides uniquely human insights and mentorship that AI systems cannot replicate. His goal is helping students develop critical thinking skills while understanding both the capabilities and limitations of AI technologies they’ll encounter throughout their careers.

Human-AI interaction class. Chris’s Human-AI Interaction course exemplifies his interdisciplinary approach to understanding AI’s role in society. The class operates on two scales: examining both broad societal implications of AI and specific technical considerations for designing human-compatible AI systems. This dual focus helps students develop both critical thinking about AI’s social impact and practical skills for building more ethical and effective AI interfaces. The course draws from historical perspectives on human-machine collaboration, beginning with foundational works like Licklider’s vision of human-computer symbiosis and extending to more critical analyses of technology’s social implications. Students explore how AI functions as a power enhancement tool and examine the political dimensions of technological development, including how certain technologies can only emerge within specific social and economic structures. A key theme is helping students understand that AI systems embody the values and organizational structures that create them. Large language models, for example, could not exist without the corporate structures and capitalistic frameworks that enable massive resource coordination. This means that the values and priorities of these organizations become embedded in the technologies they produce, which then reinforce those same social systems. The course also zooms in on practical questions about interface design: what makes interactions feel natural, how to evaluate system usability, and why many current AI interfaces fail to provide genuinely helpful user experiences. Students analyze both successful and unsuccessful examples of human-AI interaction design, developing skills for creating more human-compatible AI systems. Perhaps most importantly, the class addresses how students can maintain ethical agency as they enter careers in AI development. Many students will work at major technology companies where they’ll face pressure to prioritize efficiency and profit over human welfare. The course explores strategies for advocating for more human-centered technology development while navigating organizational constraints and career pressures. This emphasis on ethical agency preparation reflects Chris’s commitment to training the next generation of AI developers to consider broader social implications of their work.

Connecting the dots. Chris attributes his interdisciplinary perspective less to exceptional breadth of reading than to the fortunate combination of experiences and mentors that shaped his academic journey. His path through multiple graduate programs and research areas provided exposure to core ideas across AI, learning sciences, and human-computer interaction—a combination that remains relatively rare in academic research. The apparent setback of switching graduate programs ultimately became an opportunity to gain formal training in multiple disciplines rather than deep specialization in a single area. This experience taught him that career pivots don’t waste previous investments but rather create unique combinations of knowledge and perspective that can generate novel insights and approaches to complex problems. Chris emphasizes the importance of having excellent mentors who can identify truly foundational ideas rather than getting lost in the vast volume of incremental research that characterizes much contemporary academic publishing. He suggests that reading more papers isn’t necessarily better than reading the right papers—those that introduce genuinely important concepts that shape entire fields of inquiry. His approach to maintaining focus involves setting clear goals for the kind of future he wants to help create, while remaining open to changing those goals as circumstances and understanding evolve. This balance between directed effort and adaptive flexibility reflects the “biased random walk” metaphor from the conversation’s opening—maintaining enough direction to make strategic choices while remaining open to unexpected opportunities and insights.

On the go? There’s an audio-only version too. Click here.

Christopher MacLellan is an assistant professor in the School of Interactive Computing at Georgia Tech, where he runs the Teachable AI Lab (TAIL). His work aims to improve our understanding of how people teach and learn, and to build AI systems that can teach and learn like people do.