I'm so glad you have a name for this and are making educators aware of it. I noticed this effect when online platforms were first introduced for math, when my daughter was in middle school. It appeared that she had learned the content, but when it came time to test her knowledge, the learning did not stick. I'm in education and believe AI has a place, but the people designing it really need to have deep knowledge of how learning works, so that it doesn't just "look good."
Excellent read. I agree with all the recommendations, especially protecting the first attempt and making prompt data available as a signal for teachers. I also agree with "Target the Right Kind of Load" as an objective, but I think we need to exercise the same caution here as with protecting the first attempt. AI may well explain a concept more efficiently or in a more versatile way, especially for the neurodivergent learners I focus on (e.g., molecular angles in organic chemistry, or activity-based costing in managerial accounting). However, asking the right questions -- initial as well as follow-up -- is a premium skill in this context, and may require appropriate, tailored scaffolding by a teacher or coach. Otherwise, we risk a two-round session, and boom: learning achieved. Or not.
What I also found implicit throughout your analysis is a call to develop executive function skills more deliberately. Thank you again for a well-written piece.
This piece articulates the design problem we've been working on at Quantum Learning Machines.
The key insight for us was the same one the research points to: the productive struggle that causes encoding is precisely what most AI tools eliminate. So we built STEM simulations where students predict before they observe — they commit a hypothesis, run the experiment, see the result, and write the explanation. No AI generates answers at any point.
We also built spaced review into the platform: when understanding begins to fade after a mission, the system re-engages the student before the debt compounds. We're early and haven't validated outcomes yet, but the architecture is designed around exactly the principles this synthesis calls for.
We wrote about our approach and where we think the evidence leads here: quantumlearningmachines.com/research/cognitive-debt
Academic integrity and citations are appreciated!
I'm open to pilots, strategic collaborations, and institutional framework licensing.
Toprani, D. (2026, March 6). Somagraphic Learning™ Framework: A human-first, AI-supported visual cognitive approach. OSF Preprints. Version 1. https://doi.org/10.35542/osf.io/fnk7z_v1
Toprani, D. (2026, April 22). Somagraphic Learning™ Framework: A human-first, AI-supported visual cognitive approach. OSF Preprints. Version 2. https://doi.org/10.35542/osf.io/fnk7z_v2
Toprani, D. (2026, May 8). Somagraphic Learning™ Framework: A human-first, AI-supported visual cognitive approach. OSF Preprints. Version 3. https://doi.org/10.35542/osf.io/fnk7z_v4
Licensed under CC BY-NC-ND 4.0. Somagraphic Learning™ is a trademark of Devika Toprani (USPTO filing active).
I'm not sure about the claim that 78% of ChatGPT users couldn't quote their own essays; it is presented without a citation or methodological detail.