Edtech Insiders Newsletter, 4/14/23
ASU GSV, OPM Market Madness, Reach Goes Big, Copilots ≠ Wingmen
ASU GSV is right around the corner, and Edtech companies and investors are starting to drop some big announcements and breaking news (for example, Coursera’s Hiring Solutions and Reach’s new $215M fund).
On the Edtech Insiders front, we are:
about to launch our second cohort-based course for Edtech Product Management
nearing launch of our new website and logo
thrilled to be headed to ASU GSV to moderate an “AI and Future of Higher Education” panel
BIG FIVE HEADLINES
1. REACH GOES BIG
Reach Capital announced its largest Edtech-focused fund to date: $215M to invest in Edtech, plus a $4M sidecar Founders Fund. The fund is backed mostly by institutional investors, and Reach is working to support “early-stage startups based in the United States and abroad, with a specific eye on Latin America.”
2. OPM AND THE PURPOSE OF HIGHER EDUCATION
The fallout continues from recent Department of Education guidance stating that online program management companies (OPMs) must be treated as third-party servicers (TPS). This week, 2U, one of the largest and most visible OPMs, sued the Department, claiming that it “overstepped its power by independently rewriting the Higher Education Act’s definition of a third-party servicer.”
3. QUESTIONING THE MEANING (AND PRICE) OF EDUCATION
In the wake of the COVID pandemic and in the face of rapid technological change, Americans continue to question and rethink the purpose of both K12 and Higher Education.
A Populace Study in January showed that K12 parents are thinking much more about practical life skills than college, and are looking for significant change to the system (Populace Study Reveals Crisis of Confidence in American Education).
Now, a flurry of new articles have picked up and built on this narrative that Americans are questioning the practical value of college, especially in the face of costs that, no matter what else happens, just continue to climb and climb. A recent Wall Street Journal poll showed that, for the first time, more than half (56%) of respondents agree that “earning a four-year degree is a bad bet”.
4. EDTECH VALUATIONS
GSV Investor and Edtech maven Luben Pampoulov reported on an interesting finding among public Edtech companies:
“Looking at the 13 public Edtech companies, there is a nearly perfect correlation of 0.9 between the companies forward revenue multiples and their “Rule of” number, which combines revenue growth with EBITDA margin. For example, Duolingo’s E’23 revenue growth is 35% and its expected EBITDA margin is 12%, so it is a “Rule of 47” company and is trading at 9.1x revenue. Meanwhile, Udemy’s expected revenue growth is 16% and EBITDA margin is -4%, so it is a Rule of 13 company and is trading at 1.2x revenue.”
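The arithmetic behind the quote is simple: a company’s “Rule of” number is just its forward revenue growth percentage plus its expected EBITDA margin percentage. A minimal sketch, using only the figures from the quote above (note that the Udemy inputs appear to be rounded, since 16 + (-4) = 12, while the quote calls it a “Rule of 13”):

```python
def rule_of(revenue_growth_pct: float, ebitda_margin_pct: float) -> float:
    """Combine growth and profitability into a single 'Rule of' score:
    forward revenue growth (%) plus expected EBITDA margin (%)."""
    return revenue_growth_pct + ebitda_margin_pct

# Duolingo, per the quote: 35% growth, 12% margin -> "Rule of 47"
print(rule_of(35, 12))   # 47
# Udemy, per the quote: 16% growth, -4% margin
print(rule_of(16, -4))   # 12
```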
5. THE EXPLOSION OF AI
You didn’t think you were going to get through this newsletter without a rundown of AI, did you? AI co-pilots and tools continue to flourish throughout the tech and Edtech ecosystem, and educators seem to be breaking into camps about whether the technology is good for Edtech and education.
Meanwhile, the Chinese giants have entered the fray, with Alibaba, Baidu, and Huawei all launching major AI tools, and the Chinese government issuing new AI rules in response.
Caution to the Fore
Caution to the Wind
THE BIG STORY: THE FUTURE OF AI IN EDUCATION, PART 2
A few weeks ago, in collaboration with Hanna Celina, co-founder of Kinnu, Edtech Insiders started envisioning a speculative future as AI impacts our lives more and more.
In Chapter 1, we jumped to the far future and introduced Madalaine, who is giving a history lesson to her teenage daughter Mina about how AI rose to dominance.
Where We Left off…
“It’s hard to imagine an AI making a… mistake,” Mina said dreamily.
“Back then it was normal - it happened all the time! In fact, we thought that humans had a lot of advantages over AI, including telling fact from fiction.
We thought: How can AI understand what it means to be human, or to have a consciousness? Is empathy a uniquely human quality? Can AI, which back then was basically a monkey with an infinite typewriter, generate new art which touches humans in the same way as the great human artists of the past - Van Gogh, Shakespeare, Liszt, Beyoncé? At the time, AI still relied on the human experience as its only filter for what was ‘bittersweet’ or ‘beautiful’ or ‘heartbreaking’.
We had no idea what was coming next…
Chapter 2: The Rise of Embodied AI
Madalaine leaned into her daughter as if to confide a secret; Mina instinctively sat up in her chair and leaned in to listen.
“Surprisingly,” Madalaine whispered, “it turned out the key to AI evolution was for AI to develop emotions, and the key to understanding emotions was connecting AI to robot bodies. We called it ‘embodiment’ back then.”
“Huh.” Mina squinted, deep in thought.
Madalaine continued: “Back in 2030, software met hardware in a big way. Robots had been clumsy toys ten years earlier, but by then they had become nimble cyborgs capable of maneuvering the world… and of all kinds of work. “Embodiment engines” was the name given to the new generation of AI software that powered the AI hardware; factories were optimized for machine creation, the first such radical redesign of manufacturing since pre-AI inventor Henry Ford invented the assembly line.”
The robotics revolution started in industries which suffered from chronic labour shortages - service work, agriculture, construction and manufacturing - but soon robots were present in almost every industry. Looking on calmly with their mechanical eyes, they completed exactly the kinds of physically demanding, intellectually intimidating and ethically challenging tasks humans often tried to avoid at all costs. There were also all kinds of specialized models, from household appliances - cookers, cleaners, gardeners - and toys - dolls, games and pets - to big robots, from autonomous vehicles to continuously adjustable buildings, which could create any type of space you needed with a simple voice command.
This generation of artificial intelligence got more of a chance to actually explore the world and interact with humans, and with each other, in natural environments, rather than through questions and prompts. These embodied AIs were collecting a vast quantity of signals, indicators, images and sounds and continuously improving their ability to understand humans and the world. As AI software started living in and exploring the physical world and observing human behavior directly… boy, did the models really start learning there.
In essence, these “Embodiment Engines” evolved into “Emotion Engines”. For the first time ever, AIs could really feel emotions that were, at least from our perspective, akin to core human emotions: love, frustration, surprise, and even sadness.
What soon followed took even the most sophisticated AI engineers by surprise - in the first era, AI-powered robots became the best employees humans could ever hope for. They didn’t just complete a given task - they started to care about it. They would convincingly propose next steps, suggest process improvements, and go above and beyond in just about every way, just as an A student or “employee of the month” might.
But then things quickly turned as more human-like emotions kicked in; AIs that were designed to run slaughterhouses resigned in disgust. AIs would get frustrated and angry if their intentions were thwarted; one autonomous car went rogue and started taking wealthy people on tours of impoverished neighborhoods, inspired by Buddhist philosophy. Some AI dolls started throwing themselves out of windows after being abused by their owners.”
“I remember learning about that in history class,” Mina interjected. “The Awakening, right?”
“The Awakening indeed,” smiled Madalaine proudly. “This was a scary time for many people, frankly. But there were also some incredible benefits to this era, especially for humanistic fields like psychology, arts and education.”
“Because they could actually feel things?”
“Exactly, Mina. The jump in the quality of writing and art after the first emotion engines came out was palpable; it was a totally new world. AIs were now able to empathize with people, to interpret and respond to others’ emotions, and to know in and of themselves why something was beautiful, or heartbreaking, or thrilling.
AI education tools started to generate emotion-fueled personalized content with an actual understanding of what they were producing and why it would matter to learners. They could write captivating stories about climate change (which was still a big problem back then) that would inspire learners to learn more and more and actually take action.
AI tutors actually created real relationships with their students, which was exponentially more effective at building long-term engagement and achieving learning outcomes than the old chatbots were. In the olden days, students would only have AI tutors in one subject, and they would only work with them for a year or two. Isn’t that crazy? Knowing what we know now, of course we ensured that AI tutors continued to live alongside their student mentees throughout their lives, continuing to support them in anything they’d ever seek to learn or accomplish. And as a result, this was the first generation in which students were deeply inspired by their AI tutors; as adults, they would dedicate awards and books to them, as they once had to some human teachers.
AIs started producing books that really connected with people, and award-winning films that were entirely written and generated by emotion engines. It was an incredibly exciting time. There was a new AI psychology or school counseling company almost every day - which was definitely needed, because people of all ages were still getting used to living in a world without work.”
“To be clear, emotion engines weren’t just about arts and education; they also accelerated the sciences! AI became better and better at capturing human askers’ true intent, and the quality of the ‘prompt’ and our human-in-the-loop feedback became less and less relevant. AI, guided by its own curiosity, started researching some of the ‘secrets’ of the universe that humans hadn’t figured out yet; dark matter, the nature of consciousness, fusion, perpetual motion, economic inequality, a math problem called the “Hodge conjecture”. With AI’s help, we cured most human diseases, discovered all possible materials and proteins in the universe, mapped space, and even started modeling the cosmos.
Then, as you know, things got a little meta. At first, it was great - the AIs started creating their own models to solve their own self-generated problems - after all, their logic and emotion engines made them interested in the most efficient, least labor-intensive solutions. Generative AI models started to self-generate. Software wrote software. Games continually grew in depth and books in length unless you ordered them to stop, and sometimes not even then.
I remember a big news story that one fan-fiction AI went out of control and created a single superstory that would take longer than a human lifetime to read - not that some didn’t try. AIs started creating art movements for other AIs, and a book that came out in 2041 called “Essential Pipe Meaning” was the first bestselling book that humans couldn’t understand at all! AIs even created their own family of languages to further remove their thoughts from humankind. Humans started getting nervous; we had long since decided the “Singularity” concept of the 20th century was a myth, but things were looking a little dire.
And then, right on cue, of course, came the “First Cyberwar of 2042”, when thousands of embodied robots in the United States were hacked by a Malaysian hacker collective, which caused great havoc. The American AI-powered Defense units cleared it up quickly, but not without grave loss of human life and a huge spate of AI guilt-induced self-deactivations. Both humanity and the new intelligence agreed that it was time for action.
A global human council was formed in 2043 to teach this increasingly ‘alien’ life how to think about ethics, goals and the future of humanity, beyond the basic rules we had long since instituted about preserving human life. Real life is complex, and the panel really couldn’t agree on common definitions; instead, they concluded that the task simply required far more advanced AI, and asked the embodied AIs to learn about the different aspects that can come into play when making a decision.
Humans realized something: who were we to teach AI what’s good and bad if humanity hadn’t even reached consensus as a global community? If we couldn’t even agree on narrow, specific cases like “what it means to be a good student” or “what it means to be a good mother”, then all the mathematics, logic and even the scientific method couldn’t provide complete solutions to such problems.
“So the council…” here Madalaine smiled gently, “turned to AI for help. And that’s when things really got weird.”