Building an AI-Driven Product Management Workflow System in Cursor

feat: built agentic AI Ops prototype in Cursor
design: unified data with strategy + triage with automation
perf: improved efficiency by ~50%

Meta Data
Artifact: Cursor Agentic AI Product Management Workflow
Status: Archived
Build Type: Internal Tooling / AI Workflow System
Mission: Bridge analysis and action through agentic automation.
Impact: Reduced manual triage time by ~50% – Unified AI + Jira workflows
Key Systems Engaged: Cursor • Jira • Help Scout – Human-in-Loop AI
Build Environment: IndeOps @ ai-labs • Q3 2025 – vNext.experimental

The Challenge: Navigating Rapid Change, High Stakes

This system wasn’t conceived for novelty — it was built for survival.

“Ops” was born out of the LearnDash 5.0 pivot — a moment when everything needed to move faster, but the process couldn’t keep up.

The roadmap took a hard right. Dependencies were tight, high-stakes, and scattered across teams. Communication loops kept collapsing under the weight of context switching. I was managing simultaneous engineering calls, internal documentation and enablement, customer and integration-partner readiness, and stakeholder alignment — and every lost hour slowed recovery.

I needed a way to rebuild flow mid-disruption — something that could stabilize operations, surface clarity, and keep the roadmap alive as it evolved. That need led to an AI-driven workflow system: a lightweight structure designed to restore order without adding friction.

Build

In Cursor, I designed a lightweight AI operations system — part workflow engine, part thinking partner.

It listened for signals across documents and chat logs, surfaced what mattered, and drafted updates automatically. It didn’t automate decisions — it framed them. The structure combined prompt templates, API-assisted automations, and natural-language scripting — all woven together to keep information flowing where it belonged.

The final prototype structure looked a little like this:

/ops
├── handler.py              # Dispatch, routing, guardrails
├── agents/
│   ├── triage_agent        # Ticket completeness + coaching
│   ├── engineering_advisor # Code-aware analysis, feasibility
│   └── ticket_manager      # Safe Jira CRUD, preview/confirm
├── docs/                   # Process templates, naming, briefs
├── registry/               # Reusable prompts, rubrics, labels
├── platforms/              # Jira • GitHub • Help Scout adapters
├── shared/                 # Formatting, validation, auth utils
├── deliverables/           # Reports, comments, release notes
└── logs/                   # Read-only traces, cost tracking
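
To make that concrete, here is a minimal sketch of the dispatch layer. Everything in it is illustrative: the agent names mirror the tree above, but the routing rules and the guardrail flag are simplified stand-ins for the real logic.

# handler.py (sketch) -- illustrative only; the real routing was richer.

AGENTS = {
    "triage": "agents/triage_agent",
    "advisor": "agents/engineering_advisor",
    "tickets": "agents/ticket_manager",
}

READ_ONLY = True  # guardrail: mutations require explicit opt-in


def route(message: str) -> str:
    """Pick an agent from simple keyword signals in the request."""
    text = message.lower()
    if "bug" in text or "ticket quality" in text:
        return "triage"
    if "feasibility" in text or "code" in text:
        return "advisor"
    return "tickets"


def handle(message: str) -> dict:
    """Return a plan; nothing is executed without human review."""
    agent = route(message)
    plan = {"agent": AGENTS[agent], "action": "analyze", "input": message}
    if agent == "tickets" and READ_ONLY:
        plan["action"] = "preview"  # surface the plan, don't act on it
    return plan

Even in sketch form the pattern is visible: route, plan, and pause for a human before anything writes.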

The result felt less like a bot and more like a quiet collaborator: a system that noticed friction before people did.

Nerdy Details

System Capabilities

  • Summarize support-ticket trends.
  • Produce cross-tool performance reports.
  • Synchronize tasks and dependencies across tools.
  • Draft documentation, release notes & internal updates.
  • Triage software bug reports. 
  • Recommend engineering strategies and generate code scaffolding.

Technical Highlights

  • Cursor as the backbone: Modular, fast, and developer-friendly.
  • Unified registries: Central control for tone, prompts, and rubrics.
  • 5-D Feasibility Framework: Balanced engineering detail with roadmap logic.
  • Thin adapters: Lightweight integration layers for easy maintenance.
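
The "thin adapter" idea is easiest to see in code. Here is a hedged sketch, assuming a Protocol-style interface and an already-authenticated HTTP client with the base URL baked in; both are my stand-ins, not the actual implementation.

# platforms/base.py (sketch) -- hypothetical shape of a thin adapter.
from typing import Protocol


class PlatformAdapter(Protocol):
    """The minimal surface each platform module exposes to agents."""

    def fetch(self, item_id: str) -> dict: ...
    def comment(self, item_id: str, body: str) -> None: ...


class JiraAdapter:
    """Wraps transport details only; no business logic lives here."""

    def __init__(self, session):
        self.session = session  # assumed: authenticated client, base URL set

    def fetch(self, issue_key: str) -> dict:
        return self.session.get(f"/rest/api/2/issue/{issue_key}").json()

    def comment(self, issue_key: str, body: str) -> None:
        self.session.post(f"/rest/api/2/issue/{issue_key}/comment",
                          json={"body": body})

Because every adapter exposes the same few verbs, swapping Jira for Help Scout or GitHub touches one file rather than the agents.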

Safeguards and Oversight

  • Read-only by default.
  • Sensitive data scrubbed before analysis.
  • Rate limits and cost budgets per run.
  • Validation checkpoints to ensure data quality.
  • WCAG-aware tone templates for accessible communication.
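
In practice, safeguards like these collapse into a small guardrail layer that wraps every run. A rough sketch, with invented limits and scrub patterns:

# shared/guardrails.py (sketch) -- thresholds and regexes are invented.
import functools
import re

MAX_CALLS_PER_RUN = 25  # assumed per-run budget, not the real number
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def scrub(text: str) -> str:
    """Strip obvious PII before anything reaches a model."""
    return EMAIL.sub("[redacted]", text)


def guarded(fn):
    """Enforce the read-only default and a per-run call budget."""
    calls = {"n": 0}

    @functools.wraps(fn)
    def wrapper(*args, write=False, **kwargs):
        confirmed = kwargs.pop("confirmed", False)
        calls["n"] += 1
        if calls["n"] > MAX_CALLS_PER_RUN:
            raise RuntimeError("per-run budget exceeded")
        if write and not confirmed:
            raise PermissionError("writes require explicit confirmation")
        return fn(*args, **kwargs)

    return wrapper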

Design Principles

Before building anything, I defined what “good” needed to mean under pressure. These became the core design rules: simple enough to follow, strong enough to scale.

Teaching by default.
The system should guide and inform, not replace human thinking.

Context at every step.
Each output should understand why it exists — not just what it is.

Human in the loop.
AI could summarize, suggest, and surface — but decisions would stay human.

Lightweight by design.
If a solution added friction, it wasn’t the solution.

Outcome

The impact was subtle but powerful:

  • Fewer pings, more strategic conversations.
  • Routine updates generated themselves.
  • Customer insights stopped dying in conversation and became instantly available.
  • Project visibility improved without more meetings.

Within weeks, the experiment saved several hours per day — personal gains that compounded across the team. More importantly, it proved a concept I’d been chasing: that AI in product management isn’t about automation — it’s about amplification.

This build became the foundation for how I now think about human-AI systems: lightweight, contextual, and unmistakably human in intent.

Use Case: Automated Triage with Dual Agent Coordination

By mid-2025, our Jira backlog had become a white-noise party nobody RSVP’d to. Support tickets lingered half-finished, engineers spent hours decoding them, and sprint planning turned into Slack archaeology.

I built the Agentic Triage Workflow — a dual-agent system that coached Support on writing dev-ready bug reports and equipped Engineering with code-aware context. It was designed to teach, not just automate.

How it works:

  1. Triage Agent checks template compliance — like a friendly editor flagging missing details and showing how to fix them.
  2. Engineering Advisor runs a technical feasibility review — scanning related code, identifying affected files, and generating guidance to speed up developer onboarding.

Together, they form a built-in quality-control loop: the Triage Agent ensures completeness, the Engineering Advisor ensures accuracy. Domain awareness is embedded at every step, so the output isn’t just correct — it’s actionable.
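
The Triage Agent's completeness check can be rubric-driven and surprisingly small. A sketch with an invented field list; the real rubric lived in the registry.

# agents/triage_agent (sketch) -- field names are illustrative.
REQUIRED_FIELDS = ["steps to reproduce", "expected result",
                   "actual result", "environment", "version"]


def check_compliance(ticket_text: str) -> dict:
    """Flag missing template fields and coach, rather than reject."""
    text = ticket_text.lower()
    missing = [f for f in REQUIRED_FIELDS if f not in text]
    return {
        "sufficient": not missing,
        "missing": missing,
        "coaching": [f"Please add '{f}' so engineering can reproduce the issue."
                     for f in missing],
    }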

Prompt Flow:

Triage Agent
New Bug → Jira Ticket → Read Ticket → [Filter: Template Compliance] → Sufficient? Y/N

Engineering Advisor
Sufficient Ticket → Read Ticket → [Filter: LearnDash contribution.md + custom context] → Reference GitHub repo → Output: Developer Guidance

[Screenshot: validation summary report from the triage agent for ticket LDT-475, marked insufficient due to missing environment details, with sections for missing elements, issues identified, and next steps recommending customer follow-up.]
Example Output: When a ticket isn’t ready for engineering, the triage agent replies like a coach — showing what to add or why the issue might not hold up.

Lessons Learned:

  • Tone matters: coaching outperforms criticism.
  • Train to trust: read-only mode built credibility before automation.
  • Start simple: micro-feedback compounds fast.

Use Case: Managing Jira through Conversation

Managing Jira by hand was tedious and error-prone. Valuable insights from QA and triage often died in conversation because creating tickets took too long.

The Ticket Manager agent solved that. Living inside the same platform as Triage and Engineering Advisor, it turned natural language into structured Jira actions — safely, transparently, and always with human approval.

Now, I can simply type commands like:

“Create tickets for post-DQA follow-ups.”
“Add notes to all accessibility items.”
“Convert these insights into Jira tasks.”

Or, I can upload a .csv file with QA or DQA line items and create tickets in bulk.

Each request flows through one transparent layer: the agent surfaces its plan, confirms it, and then updates Jira.
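
That plan-then-confirm loop is the whole safety story, and it fits in a few lines. A hedged sketch using Jira's standard issue-creation endpoint; the project key and field values are illustrative.

# agents/ticket_manager (sketch) -- payloads and keys are examples only.
import requests


def plan_tickets(rows: list[dict]) -> list[dict]:
    """Turn parsed CSV rows (or NL requests) into draft Jira payloads."""
    return [{
        "fields": {
            "project": {"key": "LDT"},       # illustrative project key
            "summary": row["summary"],
            "description": row.get("notes", ""),
            "issuetype": {"name": "Task"},
        }
    } for row in rows]


def execute(plans: list[dict], base_url: str, auth) -> None:
    """Show the plan, wait for a human yes, then (and only then) write."""
    for p in plans:
        print("PLAN:", p["fields"]["summary"])
    if input("Create these tickets? [y/N] ").strip().lower() != "y":
        return  # human declined; nothing is written
    for p in plans:
        requests.post(f"{base_url}/rest/api/2/issue", json=p, auth=auth)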

The result: less manual work, more captured insight, and a tighter feedback loop between analysis and action.

Use Case: Translating Bug Trends into Strategy

During early testing, the triage workflow surfaced a recurring issue in a LearnDash add-on. That single signal evolved into a full product strategy — merging the add-on into core. The change reduced technical debt and created new value for customers.

Prompt Flow

Triage Workflow → User ↔ Engineering Advisor Chat (Brainstorm) → Feasibility Report → Engineering Advisor Chat (Refinement) → Strategy Doc

The generated strategy document became the basis for discussion with the Lead Engineer and Engineering Manager. After one short conversation, the work was approved for the next sprint — and the merge was nearly complete by the time of my departure.

[Screenshot: generated strategy document titled “LearnDash Product Strategy Template (for Developer + Engineering Input)”, with a project overview describing the LearnDash Multilingual Integration Merge, including goals, expected impact, and strategic alignment.]

Use Case: AI Diagnostics for Support Escalation

A support agent uses the Ops System to investigate a customer question about a deprecated LearnDash feature.

The diagnostic agent retrieves context from documentation and code, explains its reasoning, and validates the source before answering — cutting time to resolution and improving trust.
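
Conceptually, that retrieve, explain, validate loop looks something like the sketch below; the sources and matching heuristics are placeholders, not the production logic.

# diagnostic loop (sketch) -- answer only from cited sources.
def diagnose(question: str, sources: dict[str, str]) -> dict:
    """Match the question against docs/code and cite everything used."""
    terms = question.lower().split()
    findings = [{"source": name, "excerpt": text[:120]}
                for name, text in sources.items()
                if any(term in text.lower() for term in terms)]
    return {
        "reasoning": "checked documentation and code before answering",
        "citations": [f["source"] for f in findings],
        "answer": (findings[0]["excerpt"] if findings
                   else "no supporting source found; escalate to a human"),
    }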

Prompt Input:

[Screenshot: chat prompt instructing @ops and @engineering_advisor to investigate a Help Scout ticket and a GitHub repository, verifying a potential legacy template issue and questioning customer and support due diligence, with a link to the Help Scout ticket.]
Early prototype prompt — agent invocation varied by model version.

Teaching:
Before answering, the agent follows its custom instructions — explaining its logic and showing where each piece of information was found.

[Screenshot: key findings report listing the legacy template support timeline and recent changes, including the official end-of-support date of June 15, 2025, the deprecation notice, and patch notes for versions 4.21.0 and 4.25.4 with no recent breaking changes.]
[Screenshot: Markdown-style “Due Diligence Assessment” section marking customer due diligence as questionable and support-team due diligence as adequate, with bullet points detailing the reasons for each rating.]

Output:

The agent verified the team’s answer: the feature the customer reported was deprecated and no longer supported.