LearnDash 5.0 → REST API Stabilization + Agentic AI Release

feat: stabilized REST API + built foundation for agentic AI
design: roadmap rebuild + synchronized team rollout
perf: stabilized core platform + aligned systems

By mid-2025, the ground beneath WordPress was shifting. Agentic AI wasn’t just a buzzword anymore — it was becoming the new admin.

Elementor prepared to launch Angie, its Agentic AI assistant. StellarWP chose its flagship brands — The Events Calendar, GiveWP, and LearnDash — to lead its own AI initiative.

That meant one thing: LearnDash had to speak AI fluently. Every team, every function, every workflow.

And the timeline? Tight.

Project Start: September 1, 2025
Intended Release Date: October 15, 2025

But under the hood, the REST API was logging warnings.

The risk was obvious, but so was the opportunity: stabilize the API, connect it to MCP, and LearnDash could define what an AI-ready LMS means.

MCPs, APIs, OH MY! What is this project actually all about?

T.vNext Technical Documentation

REST API
A REST API exposes endpoints and capabilities. It isn’t a programming language; it’s a set of rules developers use to read and write data, build user interfaces, and perform actions through code, like creating or deleting content.

Stabilizing the REST API wouldn’t only enable AI connections. It would also let developers integrate more deeply with LearnDash and extend it further.
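To make that concrete, here’s what one call against that surface looks like. This is a minimal sketch, not LearnDash’s official tooling: the site URL, username, and Application Password are placeholders, and the ldlms/v2/sfwd-courses route reflects LearnDash’s v2 REST namespace, so confirm the exact routes against your own site’s /wp-json index.

// Minimal sketch (TypeScript, Node 18+): list LearnDash courses over the REST API
// using a WordPress Application Password for Basic auth.
// SITE, USER, and APP_PASSWORD are placeholders.
const SITE = "https://example.com";
const USER = "admin";
const APP_PASSWORD = "xxxx xxxx xxxx xxxx xxxx xxxx"; // generated in the user profile screen

async function listCourses(): Promise<void> {
  const auth = Buffer.from(`${USER}:${APP_PASSWORD}`).toString("base64");

  // LearnDash's v2 namespace is ldlms/v2; courses are exposed as sfwd-courses.
  const res = await fetch(`${SITE}/wp-json/ldlms/v2/sfwd-courses?per_page=5`, {
    headers: { Authorization: `Basic ${auth}` },
  });
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }

  const courses = (await res.json()) as Array<{ id: number; title: { rendered: string } }>;
  for (const course of courses) {
    console.log(`${course.id}: ${course.title.rendered}`);
  }
}

listCourses().catch(console.error);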

OpenAPI
This is a “public interface” that documents the REST API in a way both humans and machines can read. It lives in the code, like the REST API, and defines each endpoint and data model. It acts as a translator between the REST API and the MCP server.
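For a feel of what that translator layer contains, here’s a sketch of one slice of an OpenAPI document, written as a TypeScript constant to keep these examples in a single language. The path and the Course schema are illustrative assumptions, not LearnDash’s published definition.

// Illustrative OpenAPI fragment held as a typed constant.
// The path and Course schema are assumptions for illustration only.
const openApiFragment = {
  openapi: "3.1.0",
  info: { title: "LearnDash REST API (sketch)", version: "2.0.0" },
  paths: {
    "/ldlms/v2/sfwd-courses": {
      get: {
        summary: "List courses",
        operationId: "listCourses",
        responses: {
          "200": {
            description: "A page of courses",
            content: {
              "application/json": {
                schema: { type: "array", items: { $ref: "#/components/schemas/Course" } },
              },
            },
          },
        },
      },
    },
  },
  components: {
    schemas: {
      Course: {
        type: "object",
        properties: {
          id: { type: "integer" },
          title: { type: "string" },
          status: { type: "string", enum: ["publish", "draft"] },
        },
      },
    },
  },
} as const;

Because both humans and machines can read this shape, the same document that serves as a developer reference can also feed the tooling that builds the MCP layer described next.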

Model Context Protocol (MCP)
MCP is a layer that sits between an API (described here by the OpenAPI definition) and LLMs (large language models). It tells the model which actions are available through that connection and how the AI agent should invoke them. The protocol requires a server to run (the MCP server), either local or hosted.

  • Local example: Non-technical users can spin up a local MCP server with Cursor through natural language prompts using a WordPress Application Password (generated in the user profile screen in the admin UI).
  • Hosted example: Elementor’s Angie uses a hosted server. When installed alongside a plugin configured for MCP, it connects automatically.
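To ground the local example, here’s a minimal sketch of an MCP server that exposes one LearnDash action as a tool, assuming the @modelcontextprotocol/sdk TypeScript SDK and zod. The list_courses tool name, site URL, and credentials are placeholders; the hosted path works the same way conceptually, just with the server running remotely.

// Minimal sketch: a local MCP server exposing one LearnDash tool over stdio.
// Assumes @modelcontextprotocol/sdk and zod are installed; names and URLs are placeholders.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "learndash-sketch", version: "0.1.0" });

// Register a tool the AI agent can discover and call; the agent sees the
// name, description, and input schema through the MCP connection.
server.tool(
  "list_courses",
  "List LearnDash courses from the connected WordPress site",
  { perPage: z.number().int().min(1).max(100).default(5) },
  async ({ perPage }) => {
    // Placeholder credentials; in practice this would be a WordPress Application Password.
    const auth = Buffer.from("admin:xxxx xxxx xxxx xxxx").toString("base64");
    const res = await fetch(
      `https://example.com/wp-json/ldlms/v2/sfwd-courses?per_page=${perPage}`,
      { headers: { Authorization: `Basic ${auth}` } }
    );
    const courses = await res.json();
    return { content: [{ type: "text", text: JSON.stringify(courses) }] };
  }
);

// Run over stdio so a local client like Cursor can launch and talk to it.
const transport = new StdioServerTransport();
await server.connect(transport);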

Large Language Models (LLMs)
LLMs like GPT-5, Claude 4, and Grok 4 are trained on massive datasets. They compete on reasoning, speed, and reliability, and each has its own strengths, architecture, and ideal use cases.

AI Agents
Agents are usually included in AI clients, like Cursor. They can execute prompts step by step for better results and perform actions directly. They’re more capable than a standard chat interface, like ChatGPT before agentic support.
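For a sense of what “perform actions directly” looks like under the hood, here’s a sketch of the client side of the same MCP connection. In practice a client like Cursor (or a hosted assistant like Angie) plays this role automatically; the server command, tool name, and arguments below are illustrative assumptions.

// Minimal sketch: an MCP client launching the local server above and calling its tool.
// In real use, the AI client (e.g. Cursor) manages this connection for the agent.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "node",
  args: ["learndash-sketch-server.js"], // placeholder path to the server sketch
});

const client = new Client({ name: "agent-sketch", version: "0.1.0" });
await client.connect(transport);

// Discover the tools the server exposes, then invoke one with arguments.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

const result = await client.callTool({
  name: "list_courses",
  arguments: { perPage: 5 },
});
console.log(result.content);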

Together, these pieces empower non-technical users to manage their LearnDash sites with natural language prompts.

hours → seconds: reporting; single-course content updates; grading (smaller user counts)
days → minutes: simple course building; multi-course content updates; QA; grading (larger user counts)
weeks → hours: setup; complex course building; debugging complex issues

The Risk: A Legacy System on the Edge

Two weeks into discovery, four from launch, the real problem surfaced.

REST API v2 wasn’t broken in one place; it was fractured everywhere.
A few endpoints mostly worked, but not reliably enough for an AI agent parsing layered instructions.

Meanwhile, version 5.0 was already packed with a major new feature and a heavy QA load. We’d planned safe fixes, not a rebuild.

The inflection point
• AI was reshaping WordPress, and MCP offered a shared language between APIs and large language models.
• If LearnDash moved first, it could define what “AI-ready LMS” means.
• Developers were already asking for API stability — trust we couldn’t afford to lose.

The call was clear: stabilize the core, design for what’s next, ship on time.

Rewriting the Roadmap

The discovery triggered an abrupt roadmap refactor — a platform-wide pivot that rippled through the system architecture. Every LearnDash team was part of that architecture, and every shift cascaded downstream.

The new roadmap clarified direction, but its success hinged on a synchronized rollout across every team.

Before

  • 4.x → MCP Support / Agentic AI feature launch (paired with MCP Server v1 release)
  • 5.0 → Beta v2 REST API stabilized and production-ready; Student Management UI + new “student” WordPress user role; early MVP database migration system
  • 5.1 → Full new database migration system

After

  • 5.0 → API stabilization + MCP foundation
  • 5.1 → AI endpoint expansion
  • 5.2 → New database migration system
  • 5.3 → New Student Management UI
  • 5.x → Incremental steps to new course builder (and new database structure).
Here’s the full evolution and how it matured over time.

Along the way, AI helped me go from assessing a bug ticket as a test for a new system to introducing a low-risk, low-lift, high-reward feature on this roadmap (covered in the AI workflow use case).

  • 5.0 → API stabilization + MCP foundation
  • 5.1 → AI endpoint expansion
  • 5.2 → Native multilingual translation support (no add-on needed to integrate with translation plugins)
  • 5.3 → New database migration system
  • 5.4 → New Student Management UI
  • 5.5 → REST API translation endpoints + MCP support
  • 5.x → Incremental steps to new course builder (and new database structure) + more product stabilization and modernization.

Suddenly, I had a product story to tell for months, and the time to creatively and strategically design a revolutionary LMS UX with our designer.

I had BIG plans for LearnDash.

AI-Assisted Execution

# Context
system: Product Management AI Integration (early iteration)
goal: Ship major refactor in 4 weeks

# Tools
- custom GPTs
- single AI context document (specs, tone, history)

# Process
generate → summarize → review → refine → ship

# Outputs
✔ developer briefs      — consistent
✔ stakeholder updates    — clear
✔ launch copy            — fast to iterate

# Principle
Not automation for automation’s sake → clarity at scale.

Cross-Functional Coordination

Roles & Requirements

  • Engineering: Clear product requirements (customer needs)
  • Marketing: Exciting use case stories (demand generation)
  • Support: Enablement specific to providing customer technical support
  • CX: Language for how to talk with customers and leads
  • Stakeholders: Clarity on the strategy and expected outcomes
  • Product: Another me, because this project was massive and had a short timeline

System Design

Built shared documentation streams connected to one AI context document —
AI-generated where possible, human-edited where it mattered.

Result

A repeatable release infrastructure — turning chaos into choreography.
(Later evolved into the Agentic AI Ops workflow.)

Impact

commit: learnDash_5.0_AI_refactor

  ├─ rest_api_v2: moved from beta → production-ready  
  ├─ mcp_integration: unlocked AI workflows (LearnDash ↔ Agentic AI)  
  ├─ roadmap_pivot: landed in <4 weeks, despite restructure  
  ├─ ai_enablement: adopted by other StellarWP teams  
  └─ LearnDash: leading AI-readiness in LMS + WordPress  

result: proof of teamwork as the foundation — and AI as the force amplifier

Early results showed that this would be the most powerful LearnDash release ever.

Peer Commits

Every release has its changelog — and every leader has theirs. The next commits come from the people who shipped beside me.

“I had the absolute pleasure of working alongside Taylor on an AI project at StellarWP, and she was an unstoppable force from start to finish. Driven, focused, and endlessly curious, Taylor has a rare ability to balance strategic vision with hands-on execution.

Her contributions to the AI project were absolutely critical, she brought the focus, structure, and follow-through that turned into real progress…”

– Justin Frydman
Senior Staff Software Engineer @ StellarWP
on LinkedIn

“I had the pleasure of working closely with Taylor on the LearnDash MCP project, where I handled QA. Her guidance, support, and leadership were truly exceptional and always spot on…”

– Muhammad Shakeel
Technical Support Specialist (Tier II)
on LinkedIn