An AI-powered teaching simulator with dual-layer feedback. The only tool that combines instructional strategy coaching with sustainability mindset analysis.
189
Unique scenarios
78%
Reflected on teaching in a new way
3.6/4
Likelihood to recommend
~28
Interactions per user
TL;DR
I designed and shipped an AI teaching simulator from zero: defining the product vision, content architecture, feedback system, and avatar framework. The key innovation: a second feedback layer that surfaces the unconscious mental models driving a teacher's decisions, not just the strategies they use.
01 · Context
The problem I set out to solve
Bilingual educators are chronically underserved by professional development. Most PD is lecture-based, English-only, disconnected from real classroom dynamics, and offers no space for practice. Teachers leave training sessions with notes but no muscle memory.
Meanwhile, the bilingual teacher shortage is deepening. Schools need their existing educators to grow faster, but the systems designed to support them weren't built for them.
The insight
Teachers don't fail because they lack knowledge. They struggle because they've never rehearsed applying that knowledge under realistic pressure: with a student who switches languages mid-sentence, or a parent navigating cultural differences, or a colleague resistant to collaboration.
02 · Strategy
How I framed the opportunity
Rather than building another content library, I proposed an immersive practice environment: a simulator where educators interact with AI personas representing real classroom stakeholders and receive personalized feedback on both their strategies and their underlying mindset.
Why a simulator, not a course?
IMV already had a strong course product in The Fellowship. But completion data showed educators struggled most when applying strategies in unpredictable, real-time situations. The gap wasn't knowledge: it was practice. A simulator addresses this directly: low-stakes repetition in realistic conditions.
Competitive positioning
Other AI teaching simulators exist, but none serve bilingual contexts, none are built by practitioners, and none combine instructional strategy feedback with mindset analysis. The Teaching Tank layer is a category differentiator no competitor currently offers.
Key decision
I chose to build the Coach as an extension of The Fellowship's existing content framework (4 modules, 21 subtopics, 63 strategies) rather than starting from scratch. This meant faster time-to-value, immediate credibility, and a natural upsell path from the free Fellowship into deeper practice.
03 · Architecture
How the product is structured
The content architecture creates a funnel of specificity: from broad topic down to a single scenario with a specific avatar, context, and set of goals. This ensures every interaction is focused, assessable, and repeatable.
User flow
Module → Subtopic → Strategy → Scenario → Simulation → Dual Feedback
Avatar system design
I designed three avatar types: students (individual and small groups), colleagues, and caretakers. Each comes with a detailed profile covering linguistic background, personality, communication style, strengths, and challenges. This ensures educators practice with the full spectrum of stakeholders they actually encounter, not generic stand-ins.
Avatars span elementary through middle school and include diverse linguistic, cultural, and neurodiverse profiles with multiple language pairings.
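For a sense of the profile structure, here's a minimal sketch of an avatar profile (Python for illustration; the field names and the example avatar are placeholders, not the production schema):

```python
from dataclasses import dataclass, field

@dataclass
class AvatarProfile:
    # Illustrative fields only; placeholder names, not the production schema
    name: str
    avatar_type: str                   # "student" | "colleague" | "caretaker"
    grade_band: str                    # elementary through middle school
    language_pairing: tuple[str, str]
    personality: str
    communication_style: str
    strengths: list[str] = field(default_factory=list)
    challenges: list[str] = field(default_factory=list)

# Hypothetical example instance
maya = AvatarProfile(
    name="Maya",
    avatar_type="student",
    grade_band="elementary",
    language_pairing=("Spanish", "English"),
    personality="curious, easily frustrated when misunderstood",
    communication_style="switches languages mid-sentence under stress",
    strengths=["strong oral storytelling"],
    challenges=["decoding in English", "asking for help"],
)
```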
Technical architecture decisions
Built on Laravel with OpenAI for simulation generation. The critical architectural choice: separating scenario content from the simulation engine. This means the content library (189 scenarios) can grow independently of the AI layer, and the engine can be reused across entirely different content domains, an essential decision for the platform's scaling path.
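A rough sketch of what that separation implies, assuming a thin LLM wrapper (Python for illustration; the shipped build is Laravel, and these interfaces are my own placeholders):

```python
from typing import Protocol

class ScenarioStore(Protocol):
    """Content side: 189 scenarios that can grow without touching the engine."""
    def get_scenario(self, scenario_id: str) -> dict: ...

class SimulationEngine:
    """Engine side: content-agnostic; it only consumes a scenario dict and the running transcript."""
    def __init__(self, llm_client, store: ScenarioStore):
        self.llm = llm_client   # thin wrapper around the OpenAI API (assumed interface)
        self.store = store

    def next_turn(self, scenario_id: str, transcript: list[dict]) -> str:
        scenario = self.store.get_scenario(scenario_id)
        prompt = (
            f"You are {scenario['avatar']}, in this context: {scenario['context']}. "
            f"The educator's goals: {scenario['goals']}.\n\n"
            + "\n".join(f"{turn['speaker']}: {turn['text']}" for turn in transcript)
        )
        return self.llm.complete(prompt)   # placeholder call, not the real OpenAI client signature
```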
04 · The key innovation
Dual-layer feedback: strategy + mindset
Most teaching tools stop at "did you use the right strategy?" I pushed the product further: the second feedback layer, The Teaching Tank, analyzes the mental models underneath a teacher's decisions.
Layer 1: Instructional Strategy Feedback
Performance assessed on a four-level developmental scale: Early Awareness → Emerging Practice → Consistent Application → Integrated Practice. Feedback includes what the educator did well (with transcript evidence), areas for growth with concrete "try this instead" alternatives, missed opportunities, and transparency around AI-assisted vs. independent responses.
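Roughly the shape a Layer 1 result takes (keys and wording are placeholders for illustration, not the shipped schema):

```python
# Illustrative shape of a Layer 1 result; keys and example values are placeholders.
strategy_feedback = {
    "level": "Emerging Practice",  # Early Awareness -> Emerging Practice -> Consistent Application -> Integrated Practice
    "strengths": [
        {"observation": "what the educator did well", "evidence": "quoted line from the transcript"},
    ],
    "growth_areas": [
        {"observation": "area for growth", "try_this_instead": "a concrete alternative move"},
    ],
    "missed_opportunities": ["a moment where the target strategy could have been applied"],
    "ai_assisted_turns": [3, 7],   # transparency: which responses were AI-assisted vs. independent
}
```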
Layer 2: The Teaching Tank (Mindset Analysis)
Scans the full interaction to detect which of 23 mental models (from Jaimie Cloud's Education for Sustainability framework) are driving the educator's decisions: 11 sustainable, 12 unsustainable. For every unsustainable model detected, the system surfaces a sustainable antidote, pairing the educator's own words with a reframe and sample language.
Overall mindset score on a four-level scale: Unsustainable → Reactive → Emerging → Aligned. Detection evaluates reasoning structure, not surface keywords: a critical design constraint I enforced to prevent bias.
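And the corresponding shape for a Teaching Tank result (again illustrative; the model names and keys below are placeholders, not the production schema or the framework's actual model names):

```python
# Illustrative shape of a Teaching Tank result; names and keys are placeholders.
teaching_tank_result = {
    "mindset_score": "Emerging",   # Unsustainable -> Reactive -> Emerging -> Aligned
    "detected_models": [
        {
            "model": "example unsustainable model",
            "type": "unsustainable",
            "reasoning_pattern": "the assumption the educator's choice rests on, not a keyword match",
            "evidence": "the educator's own words, quoted from the transcript",
        },
    ],
    "antidotes": [
        {
            "for_model": "example unsustainable model",
            "sustainable_counterpart": "its sustainable counterpart from the 23-model framework",
            "reframe": "a reframe written in the educator's voice",
            "sample_language": "phrasing the educator could try in the next session",
        },
    ],
}
```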
Why this matters
A teacher might use the "right" strategy but apply it from an unsustainable mental model, like a control-driven approach disguised as classroom management. Layer 2 catches what Layer 1 can't: the thinking beneath the technique.
05 · Process
How I built it
Phase 1: Foundation
Content architecture design. Mapped The Fellowship's 4 modules into 21 subtopics × 3 strategies × 3 scenarios = 189 unique simulation paths. Defined avatar profiles and feedback rubrics.
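The math behind that count, as a quick sanity check (structure illustrative; module and subtopic names omitted):

```python
# Sanity check on the content architecture described above.
MODULES = 4                      # the 21 subtopics are distributed across these 4 modules
SUBTOPICS = 21
STRATEGIES_PER_SUBTOPIC = 3
SCENARIOS_PER_STRATEGY = 3

simulation_paths = SUBTOPICS * STRATEGIES_PER_SUBTOPIC * SCENARIOS_PER_STRATEGY
assert simulation_paths == 189   # 21 x 3 x 3 unique Module -> Subtopic -> Strategy -> Scenario paths
```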
Phase 2: Prototype
Core simulation loop. Built the interaction engine (Laravel + OpenAI), avatar conversation system, and Layer 1 feedback generation. Tested with a small group of Fellowship coaches.
Phase 3: The Teaching Tank
Mindset analysis layer. Integrated Jaimie Cloud's 23 mental models framework. Designed detection logic to evaluate reasoning structure (not keywords). Built the antidote system with educator-voice reframes.
Phase 4: Beta
Launched to ~40 Fellowship educators. 25 active users, 699 requests over 14 days. Collected structured feedback (32 responses) to inform the next iteration.
Hardest problem I solved
Making the Teaching Tank detection accurate without introducing bias. Early versions flagged mental models based on surface keywords (e.g., flagging "control" as unsustainable regardless of context). I redesigned the detection to evaluate reasoning structure, the logic and assumptions behind a teacher's choices, rather than their vocabulary. This required extensive prompt engineering and iterative testing with the coaching team.
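A minimal sketch of what "reasoning structure, not keywords" looks like as a detection prompt (the wording and the helper below are my own placeholders, not the production prompt, which went through many more calibration rounds):

```python
# Placeholder prompt and helper for illustration only.
DETECTION_INSTRUCTIONS = """
Analyze the teaching simulation transcript below.
Do NOT flag a mental model because of vocabulary alone (e.g. the word "control").
For each decision the educator makes:
1. Name the assumption the decision rests on.
2. Name the outcome the educator appears to be optimizing for.
3. Only then map that reasoning to one of the 23 mental models, quoting the educator's own words.
Return JSON: a list of {"model", "type", "assumption", "evidence"} objects.
"""

def detect_mental_models(llm_client, transcript: str) -> list[dict]:
    # llm_client.complete_json is an assumed helper, not the OpenAI SDK itself
    return llm_client.complete_json(DETECTION_INSTRUCTIONS + "\n\nTranscript:\n" + transcript)
```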
Cross-functional team & my role
I led product vision, content architecture, feedback system design, and the avatar framework. Worked with a lead developer on the technical build, coaches and evaluators on rubric calibration, and the digital learning team on integration with the Fellowship LMS. Also managed the grant-aligned development process, ensuring deliverables met U.S. Department of Education requirements.
06 · Outcomes
What the data shows
3.6
Recommend (of 4)
3.53
Value for growth
3.53
Scenario relevance
3.44
Persona realism
78%
Reflected on teaching in a new way
~28
Avg. interactions per user
699
Requests in 14 days
What I'd improve next
Teaching Tank accuracy scored lowest at 3.31/4, which tells me the mindset detection needs further calibration. The strategy feedback clarity score (3.34) also has room to improve. Both are solvable with better prompt engineering and expanded rubric examples. I'd also add longitudinal tracking so educators can see their growth across sessions, not just one-off feedback.
A window into the product
Try it yourself
Create a free account and walk through an actual simulation scenario: choose your avatar, receive dual-layer feedback, and see how the coach works firsthand.
I've scoped two growth trajectories: one deepens the product for existing users, the other scales it as a platform.
07 · Growth roadmap
From beta to platform
Path 1: Deepen
User dashboard with longitudinal tracking, voice integration, video submission with AI feedback, expanded scenario library, extended language support.
Path 2: Scale as Platform
District and school customization, pre-service teacher prep, new teacher onboarding, Teaching Tank as a universal layer deployable across any content domain.
08 · Reflection
What I learned building this
The biggest lesson: the hardest product problems aren't technical; they're conceptual. Building the simulation engine was complex. But designing a feedback system that surfaces unconscious mental models without introducing bias, and without making educators feel judged, required a different kind of rigor: one rooted in practitioner empathy, not just engineering.
This product works because it was built by someone who has stood in front of a classroom. That's not a marketing line: it's a design constraint that shaped every decision, from avatar profiles to feedback tone to the four-level developmental scale. You can't fake practitioner credibility in a product like this. The users know.