Building an AI-Driven Product Management Workflow System in Cursor
feat: built agentic AI Ops prototype in Cursor
design: unified data with strategy + triage with automation
perf: improved efficiency by ~50%
The Challenge: Navigating Rapid Change, High Stakes
This system wasn’t conceived for novelty — it was built for survival.
“Ops” was born out of the LearnDash 5.0 pivot — a moment when everything needed to move faster, but the process couldn’t keep up.
The roadmap took a hard right. Dependencies were tight, high-stakes, and scattered across teams. Communication loops kept collapsing under the weight of context switching. I was managing simultaneous engineering calls, internal documentation and enablement, customer and integration-partner readiness, and stakeholder alignment — and every lost hour slowed recovery.
I needed a way to rebuild flow mid-disruption — something that could stabilize operations, surface clarity, and keep the roadmap alive as it evolved. That need led to an AI-driven workflow system: a lightweight structure designed to restore order without adding friction.
Build
In Cursor, I designed a lightweight AI operations system — part workflow engine, part thinking partner.
It listened for signals across documents and chat logs, surfaced what mattered, and drafted updates automatically. It didn’t automate decisions — it framed them. The structure combined prompt templates, API-assisted automations, and natural-language scripting — all woven together to keep information flowing where it belonged.
The final prototype structure looked a little like this:
/ops
├── handler.py # Dispatch, routing, guardrails
├── agents/
│ ├── triage_agent # Ticket completeness + coaching
│ ├── engineering_advisor # Code-aware analysis, feasibility
│ └── ticket_manager # Safe Jira CRUD, preview/confirm
├── docs/ # Process templates, naming, briefs
├── registry/ # Reusable prompts, rubrics, labels
├── platforms/ # Jira • GitHub • Help Scout adapters
├── shared/ # Formatting, validation, auth utils
├── deliverables/ # Reports, comments, release notes
└── logs/ # Read-only traces, cost tracking
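
To make the routing concrete, here is a minimal sketch of how a handler layer like this could dispatch requests to agents while keeping write actions behind explicit confirmation. The intent names, agent registry, and ticket ID are illustrative placeholders, not the actual implementation.

```python
# Illustrative handler sketch: route a request to the right agent and keep
# write actions behind explicit human confirmation. Names are placeholders.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Request:
    intent: str              # e.g. "triage", "feasibility", "ticket_update"
    payload: dict            # ticket text, chat excerpt, file paths, etc.
    confirmed: bool = False  # human approval flag for write actions

AGENTS: Dict[str, Callable[[dict], str]] = {
    "triage": lambda p: f"Completeness review drafted for {p.get('ticket_id')}",
    "feasibility": lambda p: f"Feasibility notes drafted for {p.get('ticket_id')}",
}

WRITE_INTENTS = {"ticket_update"}  # anything that mutates Jira or GitHub

def route_request(req: Request) -> str:
    """Dispatch to the matching agent, enforcing the human-in-the-loop guardrail."""
    if req.intent in WRITE_INTENTS and not req.confirmed:
        return "Preview only: re-run with confirmed=True to apply changes."
    agent = AGENTS.get(req.intent)
    if agent is None:
        return f"No agent registered for intent '{req.intent}'."
    return agent(req.payload)

print(route_request(Request(intent="triage", payload={"ticket_id": "LD-1234"})))
```

The guardrail mirrors the human-in-the-loop rule described below: anything that mutates an external system stays in preview mode until a person confirms it.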
The result felt less like a bot and more like a quiet collaborator: a system that noticed friction before people did.
Design Principles
Before building anything, I defined what “good” needed to mean under pressure. These became the core design rules — simple enough to follow, strong enough to scale.
- Teaching by default. The system should guide and inform, not replace human thinking.
- Context at every step. Each output should understand why it exists — not just what it is.
- Human in the loop. AI could summarize, suggest, and surface — but decisions would stay human.
- Lightweight by design. If a solution added friction, it wasn’t the solution.
Outcome
The impact was subtle but powerful:
- Fewer pings, more strategic conversations.
- Routine updates generated themselves.
- Customer insights were captured and instantly available.
- Project visibility improved without more meetings.
Within weeks, the experiment saved several hours per day — personal gains that compounded across the team. More importantly, it proved a concept I’d been chasing: that AI in product management isn’t about automation — it’s about amplification.
This build became the foundation for how I now think about human-AI systems: lightweight, contextual, and unmistakably human in intent.
Use Case: Automated Triage with Dual-Agent Coordination
By mid-2025, our Jira backlog had become a white-noise party nobody RSVP’d to. Support tickets lingered half-finished, engineers spent hours decoding them, and sprint planning turned into Slack archaeology.
I built the Agentic Triage Workflow — a dual-agent system that coached Support on writing dev-ready bug reports and equipped Engineering with code-aware context. It was designed to teach, not just automate.
How it works:
- Triage Agent checks template compliance — like a friendly editor flagging missing details and showing how to fix them.
- Engineering Advisor runs a technical feasibility review — scanning related code, identifying affected files, and generating guidance to speed up developer onboarding.
Together, they form a built-in quality-control loop: the Triage Agent ensures completeness, the Engineering Advisor ensures accuracy. Domain awareness is embedded at every step, so the output isn’t just correct — it’s actionable.
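
As a rough illustration of the completeness check, a sketch like the following captures the idea: required sections plus coaching copy, returned together so Support sees how to fix a gap, not just that one exists. The section names and tips are placeholders for the actual bug-report template.

```python
# Illustrative template-compliance check for the Triage Agent. Section names
# and coaching copy are placeholders for the actual bug-report template.
REQUIRED_SECTIONS = {
    "Steps to Reproduce": "List the exact clicks or commands that trigger the bug.",
    "Expected Result": "Describe what should have happened.",
    "Actual Result": "Describe what happened instead, including any error text.",
    "Environment": "LearnDash version, WordPress version, theme and plugins in play.",
}

def review_ticket(ticket_body: str) -> dict:
    """Return missing sections plus coaching tips, not just a pass/fail flag."""
    missing = {
        section: tip
        for section, tip in REQUIRED_SECTIONS.items()
        if section.lower() not in ticket_body.lower()
    }
    return {"sufficient": not missing, "coaching": missing}

ticket = "Expected Result: quiz saves. Actual Result: quiz resets on submit."
print(review_ticket(ticket))
# -> sufficient: False, with coaching for "Steps to Reproduce" and "Environment"
```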
Prompt Flow:
Triage Agent
New Bug → Jira Ticket → Read Ticket → [Filter: Template Compliance] → Sufficient? Y/N
Engineering Advisor
Sufficient Ticket → Read Ticket → [Filter: LearnDash contribution.md + custom context] → Reference GitHub repo → Output: Developer Guidance
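
A simplified sketch of the Engineering Advisor step might look like this, assuming a local checkout of the plugin repository. The keyword matching is a crude stand-in for the real code-aware analysis, and the checkout path is a placeholder.

```python
# Illustrative Engineering Advisor pass: scan a local checkout for files that
# mention terms from the ticket, then draft onboarding guidance for the assignee.
from pathlib import Path

def related_files(repo: Path, terms: list[str], limit: int = 5) -> list[Path]:
    """Naive stand-in for code-aware analysis: grep-style keyword matching."""
    if not repo.is_dir():
        return []
    hits: list[Path] = []
    for path in repo.rglob("*.php"):  # LearnDash is a WordPress (PHP) codebase
        text = path.read_text(errors="ignore").lower()
        if any(term.lower() in text for term in terms):
            hits.append(path)
        if len(hits) >= limit:
            break
    return hits

def draft_guidance(ticket_summary: str, repo: Path) -> str:
    terms = [w for w in ticket_summary.split() if len(w) > 4]  # crude keywording
    files = related_files(repo, terms)
    lines = [f"Ticket: {ticket_summary}", "Likely affected files:"]
    lines += [f"  - {f.relative_to(repo)}" for f in files] or ["  - (none found)"]
    lines.append("Review contribution.md conventions before opening a PR.")
    return "\n".join(lines)

# Placeholder path to a local checkout of the plugin repository.
print(draft_guidance("Quiz autosave resets on submit", Path("learndash-checkout")))
```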

Lessons Learned:
- Tone matters: coaching outperforms criticism.
- Train to trust: read-only mode built credibility before automation.
- Start simple: micro-feedback compounds fast.
Use Case: Managing Jira through Conversation
Managing Jira by hand was tedious and error-prone. Valuable insights from QA and triage often died in conversation because creating tickets took too long.
The Ticket Manager agent solved that. Living inside the same platform as Triage and Engineering Advisor, it turned natural language into structured Jira actions — safely, transparently, and always with human approval.
Now, I can simply type commands like:
“Create tickets for post-DQA follow-ups.”
“Add notes to all accessibility items.”
“Convert these insights into Jira tasks.”
Or, I can upload a .csv file with QA or DQA line items and create tickets in bulk.
Each request flows through one transparent layer: the agent surfaces its plan, waits for approval, and only then updates Jira.
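
Here is a stripped-down sketch of that preview-then-confirm pattern, assuming Jira's standard REST issue-creation endpoint (/rest/api/2/issue). The instance URL, credentials, project key, and row data are placeholders, and the row list stands in for items parsed from a natural-language request or an uploaded CSV.

```python
# Illustrative preview-then-confirm flow for the Ticket Manager agent.
import requests

JIRA_URL = "https://example.atlassian.net"  # placeholder Jira instance
AUTH = ("me@example.com", "api-token")      # placeholder credentials

def plan_from_rows(rows: list[dict]) -> list[dict]:
    """Turn QA/DQA line items (for example, parsed from a CSV) into a ticket plan."""
    return [
        {
            "fields": {
                "project": {"key": "LD"},  # placeholder project key
                "summary": row["summary"],
                "description": row.get("notes", ""),
                "issuetype": {"name": "Task"},
            }
        }
        for row in rows
    ]

def execute(plan: list[dict], confirmed: bool = False) -> None:
    """Always surface the plan first; only touch Jira after explicit approval."""
    for item in plan:
        print("PLAN: create ticket ->", item["fields"]["summary"])
    if not confirmed:
        print("Dry run only. Re-run with confirmed=True to create these tickets.")
        return
    for item in plan:
        resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=item, auth=AUTH)
        resp.raise_for_status()

execute(plan_from_rows([{"summary": "Post-DQA follow-up: accessibility labels"}]))
```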
The result: less manual work, more captured insight, and a tighter feedback loop between analysis and action.
Use Case: Translating Bug Trends into Strategy
During early testing, the triage workflow surfaced a recurring issue in a LearnDash add-on. That single signal evolved into a full product strategy — merging the add-on into core. The change reduced technical debt and created new value for customers.
Prompt Flow
Triage Workflow → User ↔ Engineering Advisor Chat (Brainstorm) → Feasibility Report → Engineering Advisor Chat (Refinement) → Strategy Doc
The generated strategy document became the basis for discussion with the Lead Engineer and Engineering Manager. After one short conversation, the work was approved for the next sprint — and the merge was nearly complete by the time of my departure.
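
The trend detection behind that initial signal can be as simple as counting recurring components across triaged tickets. The component names and threshold below are placeholders used only to illustrate the idea.

```python
# Illustrative trend check: count triaged bugs per component and flag anything
# that keeps recurring. Component names and the threshold are placeholders.
from collections import Counter

def recurring_components(triaged_tickets: list[dict], threshold: int = 3) -> list[str]:
    counts = Counter(t["component"] for t in triaged_tickets)
    return [component for component, n in counts.items() if n >= threshold]

tickets = [{"component": "Example add-on"}] * 4 + [{"component": "Core quiz"}]
print(recurring_components(tickets))  # -> ['Example add-on']
```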

Use Case: AI Diagnostics for Support Escalation
A support agent uses the Ops System to investigate a customer question about a deprecated LearnDash feature.
The diagnostic agent retrieves context from documentation and code, explains its reasoning, and validates the source before answering — cutting time to resolution and improving trust.
Prompt Input:

Teaching:
Before answering, the agent follows its custom instructions — explaining its logic and showing where each piece of information was found.


Output:
The agent verified the team’s answer: the feature the customer reported was deprecated and no longer supported.
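
Conceptually, the validation step boils down to refusing to answer without a citable source. The sketch below uses a placeholder documentation snippet and naive term matching just to show the shape of that rule.

```python
# Illustrative source-grounded answer step: answer only when a supporting
# snippet can be cited; otherwise escalate. Paths and snippets are placeholders.
def answer_with_source(question: str, knowledge: dict[str, str]) -> str:
    terms = [w.lower().strip("?") for w in question.split() if len(w) > 4]
    for source, snippet in knowledge.items():
        if any(term in snippet.lower() for term in terms):
            return f"Answer (source: {source}): {snippet}"
    return "No supporting source found; escalating to engineering for review."

docs = {"docs/deprecations.md": "The legacy reports widget is deprecated and no longer supported."}
print(answer_with_source("Is the legacy reports widget still supported?", docs))
```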

