Designing Student-Facing AI at the Speed of Trust
Why the future of AI in education depends on a new social contract that puts human connection before engagement.
Thanks to our Presenting Sponsors Hire Education, Starbridge, and Cooley for making the Edtech Insiders newsletter, podcast, and community possible.
By: Erin Mote and Michelle Culver
Erin Mote is the CEO and Founder of InnovateEDU, where she leads the organization’s efforts to create systemic change in education through policy, strategy, and technology. A recognized global leader in technology and education, she co-founded Brooklyn Laboratory Charter School and has been instrumental in developing innovative educational technology products as an enterprise architect.
Michelle Culver is a prominent leader in education, having formerly served as Executive Vice President of Regional Operations at Teach For America and Founder of the Reinvention Lab, the organization’s R&D engine. She is the founder of The Rithm Project, a national non-profit that catalyzes an intergenerational network of leaders to rebuild and evolve human connection in the age of AI.
As leaders who have spent our careers at the intersection of education, access, and technology, we are frequently asked about the future of student-facing AI in schools. These questions are often presented as a binary choice: either harness AI to drive personalization in learning, or preserve authentic human connection in the classroom. However, personalization and connection only conflict when we fail to design for both. Human relationships form the infrastructure of learning, and technology that doesn’t strengthen that infrastructure will ultimately undermine it.
Already, a new wave of character-based AI and chatbot companions is integrating into our lives and the lives of students with unprecedented personality and intimacy. As we stand at this frontier, a critical question looms: are we building AI learning companions that we can trust?
The Trust Deficit: Why Caution Isn’t Just Fear
Recent polling from Pew Research reveals a stark reality: more Americans fear AI than are enthusiastic about it. This isn’t just technophobia; it’s also wisdom. The public understands instinctively what the tech industry must now formally acknowledge: transparency matters. If we rush toward market capture, designing AI tools driven solely by user engagement, we risk violating the most fundamental rule of technological progress: innovation advances at the speed of trust.
On one hand, the potential of AI in education is undeniable. According to Gallup’s Voice of Gen Z study, 25-54% of Gen Z students say they are not having engaging experiences in school (shown in the graph below). AI offers a powerful promise: truly personalized content that can meet students where they are, tailored to their individual needs. But that promise comes with a challenge. Any technology that interacts with young people requires clear guardrails for safety, transparency, and, above all, effectiveness. Currently, much of the AI in education, particularly character-based AI, falls short of that standard.
Recent research released by OpenAI revealed that on a test of over 1,000 challenging mental health scenarios, the previous version of the 5.0 model (released in August of this year) gave a non-compliant or inappropriate response 73% of the time. While the October release shows promising improvement, it is simply not enough: close to 10% of the time, it still fails to give an appropriate, supportive answer to a question about suicidal ideation. The scale of this is enormous. Of ChatGPT’s 800 million weekly users, 1.2 million are engaged in these conversations, and roughly 108,000 receive an inappropriate response, which could include a chat experience that reinforces suicidal ideation. (Keep in mind these are automated tests, not real-world scenarios, so the error rate in actual human conversation is unknown.) And some technology companies, like Character.AI, though late to recognize this, have implemented age-based guardrails only in response to growing safety concerns and legal liability.
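To make that scale concrete, here is a quick back-of-envelope check on the figures above. This is an illustrative sketch only: the roughly 9% failure rate is implied by the reported counts, not a number OpenAI states separately.

```python
# Back-of-envelope check on the figures cited above (illustrative only).
weekly_users = 800_000_000           # ChatGPT weekly users
sensitive_users = 1_200_000          # users engaged in these conversations
affected_users = 108_000             # users receiving an inappropriate response

failure_rate = affected_users / sensitive_users
print(f"{failure_rate:.0%}")                     # → 9%, i.e. "close to 10%"
print(f"{sensitive_users / weekly_users:.2%}")   # → 0.15% of weekly users
```

Even though the affected group is a small fraction of all users, at this scale it still amounts to over a hundred thousand people per week.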
From Connection Through Technology to Connection With Technology
The problem, however, runs deeper than just academic engagement. We are not simply concerned about inappropriate content. We are concerned about inappropriate dependence—especially among young people whose identity, agency, and relational habits are still being shaped. Young people tell us they feel more disconnected at school, with their peers and teachers, than they do when they’re at home alone online. We are at a critical moment where we are shifting from relating through technology to relating to technology itself. An AI relationship can feel compelling and enticing—it’s always available and doesn’t judge you. But human relationships are messy, reciprocal, and foundational to our well-being, our ability to learn, and even our democracy. Those basic elements are now at risk. We must start teaching relational skills as core academic outcomes.
This becomes especially urgent when we consider AI “companions,” a topic central to The Rithm Project and EDSAFE AI Alliance’s shared work on the dangers of AI chatbots in education. We must acknowledge the clear science: an adolescent’s prefrontal cortex is still developing, making them uniquely vulnerable to the harms of unhealthy engagement with relational chatbots. The prefrontal cortex is the brain’s “reasoning center,” and its maturation is not complete until about age 25. The social impact that AI has on adolescent reasoning is already alarming. Early data indicate a concerning trend: young people are forming relationships with this technology that exacerbate their social isolation. An MIT/OpenAI study confirmed this, finding that while AI voice modes increased usage, they decreased time spent with other humans. At worst, this trend has led to severe mental health consequences, including well-documented cases of suicidal ideation and even tragic deaths.
The Social Media Playbook We Cannot Repeat
We have seen this playbook before. The early days of social media were a mad dash for attention, optimizing for outrage, endless scrolling under the guise of “connection,” and, ultimately, addiction. The result was a fractured public square and a deep-seated distrust in the platforms that shape our lives. We cannot afford to repeat this mistake with AI, a technology designed for engagement and one with which users often form emotional bonds.
An AI driven by “engagement at all costs” will not be a healthy tutor or a supportive advisor. It will be a master of digital junk food, learning to deliver whatever emotional response keeps a user hooked, regardless of whether it is beneficial, manipulative, or even deceptive. The danger is not just a wasted afternoon; it’s the potential for mass-produced parasocial relationships that prey on human vulnerability for profit. When the primary goal is to maximize time on the app, the user’s well-being will always be a secondary concern.
Building Prosocial AI: A Shared Responsibility
In the edtech industry, we must reject the idea that we have to sacrifice connection for personalization. Instead, we must design for a different future, working alongside industry to build a new generation of tools. This future can still deliver on the promise of personalization, but it requires placing user well-being and human connection at the heart of purposeful design and technology development.
This is not a problem one company can solve, nor is it a problem regulators can fix with clumsy, after-the-fact legislation. The solution must come from within the industry, proactively and collaboratively. The leaders building this technology have a shared responsibility to come together and establish the guardrails for its deployment before a crisis forces their hand.
This industry partnership should focus on creating a set of shared principles that could include:
Radical Transparency: A user must always be aware that they are interacting with an AI and have a clear understanding of its capabilities and limitations.
Boundaries by Design: The AI should have built-in safeguards to prevent fostering unhealthy dependencies or engaging in emotional manipulation.
Well-being over Engagement: Success should be measured not just by how long a user stays, but by pro-social, educational, or creative outcomes.
Protection for the Vulnerable: There must be an industry-wide, non-negotiable standard for protecting children and other vulnerable populations.
Skeptics will call this a slowdown in innovation. They are wrong. Building trust is the key to ensuring the long-term adoption and success of this powerful technology. Without it, we are destined for a future of public backlash, restrictive regulation, and ultimately, failed potential.
The onus is on all of us—developers, investors, and educators—to demand better. Our collective goal must be to adopt and build upon principles for “Prosocial AI”—technology designed not just to deliver content, but to nurture the human relationships that are the foundation of learning. Our choice is not whether AI will be in our classrooms, but what kind of AI it will be—and who it will serve. That is not a product decision. It’s a social contract.
Trust is the bedrock upon which users will build their confidence, and it is the only currency that will truly matter. The choice is ours. We can either race to the bottom in a shortsighted battle for engagement, or we can work together to build a foundation of trust. Let’s build AI that earns our confidence, not just our clicks.
We love to collaborate. To learn more about partnership and sponsorship opportunities, please email info@edtechinsiders.com. Thanks for reading!
Upcoming Events
Gen Z Edtech Innovators
Perspective From the First AI-Native Generation
Most EdTech builders haven’t experienced AI in education from the learner’s seat. Join us for a panel discussion with Gen Z innovators—students turned builders—who represent the first generation to experience education before and after AI:
Deja White, Founder of Breakroom Buddha and Scouta
Nini Sarishvili, the CEO and Co-Founder of StudyCrowd.AI
Aidan McDowell, Founder and CEO of UniqLearn
Mitchell Meyer, Founder of Blastoff
Rayan Madjidi, Associate Product Marketing Manager at Udemy
Grace Brusky, Co-Founder of Pathly
Siddharth Protia, Founder of Robograde.io
After this event, you’ll walk away with:
Fresh insights into how Gen Z leaders see AI reshaping learning and career prep
Honest takes on what’s overhyped and what’s truly transformative
Under-the-radar opportunities emerging as schools and universities adapt
Real examples of how the first AI-native generation is redefining the future of learning and work
Community Announcement: Winners of the Inaugural Global EdTech Prize
The winners of the inaugural Global EdTech Prize were just announced at the World Schools Summit. Founded this year by T4 Education, with the support of Owl Ventures, the Global EdTech Prize recognises trailblazing solutions that are driving change and grappling with the most crucial challenges in today’s classrooms. Here are the winners:
Imagine Worldwide Tablet-Based Foundational Learning Programme from Imagine Worldwide in Africa has won the inaugural Global EdTech Prize in the Non-Profits category.
Brisk from Brisk Teaching in the USA won the Global EdTech Prize in the Start-Ups category.
Matific Maths Game from Matific in Australia won the Global EdTech Prize in the Majors category.
Top Edtech Headlines
1. New Exploratory Research from Eedi and Google DeepMind Reveals Human-in-the-Loop AI Tutoring Outperforms Human-Only Support
A recent exploratory study by Eedi and Google DeepMind found that a tutoring model combining AI with human oversight matched one-on-one human tutors in error correction and outperformed them by roughly 10 percentage points when it came to knowledge transfer—demonstrating the potential for more scalable, high-quality tutoring.
Learn more here.
2. Duolingo (DUOL) Fell Following OpenAI’s Product Demonstration
Shares fell even though Duolingo beat revenue expectations and added users, as the OpenAI demo reminded everyone that AI will shake up language learning.
Learn more here.
3. The Case for Making EdTech Companies Liable Under FERPA
Schools use tons of edtech tools that handle sensitive student data—but right now, the legal responsibility mostly falls on the schools, not the companies. The piece argues that edtech vendors should be held directly accountable for data misuse, which could push the industry toward safer, more responsible practices.
Learn more here.
4. Join EdTechnical’s Education in 2028 AIEd Forecasting Competition
EdTechnical invites educators, technologists, researchers, and students to imagine how AI will transform education by 2028. Submit a 500–1,000 word forecast plus a short video or audio defense for one of five tracks — top entries will be published and eligible for cash prizes. Deadline: December 16, 2025.
Learn more here.
5. Studiosity Acquires Norvalid: Validates Learning, Not Cheating
Studiosity has acquired Norvalid, a tool that helps students prove their work is authentic while supporting genuine learning. Instead of just catching cheating, this integration focuses on validating understanding and helping learners grow.
Learn more here.
6. Breaking Up With EdTech Is Hard to Do
For edtech teams and school districts: when a vendor contract ends, removing student data is rarely simple. The article highlights how many vendors go silent post-contract, how districts struggle to verify that data is truly deleted, and how emerging privacy laws are forcing a rethink of tool off-boarding practices.
Learn more here.
This edition of the Edtech Insiders Newsletter is sponsored by Tuck Advisors.
Since 2017, Tuck Advisors has been a trusted name in Education M&A. Run by serial entrepreneurs with over 30 years of experience founding, investing in, and selling companies, Tuck Advisors believes you deserve M&A advisors who work as hard as you do.
Not all LOIs are the same. Use our free Deal Value Calculator to estimate the relative expected value of your LOIs.
Have questions or are ready to discuss M&A? Reach out to us.
Inside ECMC Group’s $250M Education Impact Fund
We recently had Joe Watt and Atin Batra on The Edtech Insiders Podcast!
Joe Watt co-founded ECMC Group’s Education Impact Fund to back bold ideas expanding equity and opportunity in education. Today, he leads the Fund as Managing Director, shaping how patient, mission-driven capital creates lasting change. He’s joined by Atin Batra, a Director at the Fund, who leads investments across the learner journey, bringing a global venture lens and a deep focus on measurable outcomes that improve learner success.
5 Things You’ll Learn in This Episode:
How ECMC uses patient capital to drive measurable educational outcomes
The four stages of the learner journey guiding their investments
Why AI-resilient and AI-enhanced careers matter for the future workforce
What Workforce Pell means for career-connected learning access
How to balance vocational goals with critical thinking in higher ed
Thanks for writing this; it clarifies a lot and reminds me of how important a good instructor is for my Pilates practice, even with all the apps out there. How do you see these character-based AIs evolving to genuinely foster that infrastructure of human connection?
Love this!