An AI-powered development pipeline that handles design, implementation, code review, screenshots, and Jira updates — so developers can focus on decisions, not process.
Developers waste enormous amounts of time on process overhead that surrounds actual coding. Reading Jira tickets, context-switching to design tools, manually screenshotting changes, writing comments, logging time, updating status — the actual code change might take 30 minutes, but the surrounding ceremony adds another 30–60 minutes per ticket.
Every step is manual. Context-switching is constant. And this repeats for every single ticket.
More than half of a developer's time on a ticket is spent on process overhead, not on writing or reasoning about code.
Browser to IDE to terminal to design tool to browser to Jira — each switch has a cognitive cost and invites distraction.
Time logging forgotten. Screenshots skipped. Status not updated. Code reviews postponed. The boring steps get dropped.
The developer stays in the terminal — their natural habitat. One slash command kicks off the entire workflow. The AI handles the tedious process steps (fetching context, creating mockups, screenshotting, uploading, logging time) while the human makes the decisions that matter: approving designs, reviewing implementation approaches, and deciding when something is ready.
We eliminate the 60%+ of development time that goes to process instead of code. Instead of ten manual steps across five different tools, the developer types one command and gets asked a few focused questions. The AI does the legwork; the human does the thinking.
Type /jira FO-2847 for a single ticket, or /jira sprint to process an entire sprint backlog. One command, full lifecycle.
The AI never acts without asking. Design approval, implementation approach, Jira updates — all require explicit user confirmation.
UI designer, backend dev, frontend dev, code reviewer, report generator — each agent is an expert at its role.
Single ticket: /jira FO-2847 — a Jira ticket number typed in the terminal
Whole sprint: /jira sprint — fetches all active sprint tickets, lets you pick which to process, works through them sequentially
Quick time log: /tempo addTime FO-2847 2h "Bug fix" — log time without leaving the terminal
List teams: /jira teams — fetches all available Scrum teams from Jira dynamically
Analyze a JAM: /jam https://jam.dev/c/abc123 — analyzes a JAM bug recording with video analysis, console logs, network requests, and user events
Unit test scan: /unit-test * control-backend-api — scans a project for test coverage gaps, maps all existing tests, creates missing unit tests, and runs them until green
Fix ignored tests: /unit-test --fix-ignored control-backend-api — finds all @Disabled/@Ignore/skip tests, lets you select which to unignore, then debugs and fixes them until green
Dependency audit: /deps control-backend-api — scans all dependencies for CVEs, outdated packages, and license risks with a health score (A–F grade)
Output: Implemented fix, design mockups, before/after screenshot comparison, HTML report, Jira comment + attachments, logged time, transitioned status
Calls Jira REST API to get ticket details, downloads attachments (including images for visual inspection), classifies the work as Bug/UI/Backend/Full-stack, finds relevant code files across the codebase.
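The classification step can be sketched as a simple keyword heuristic. This is an illustrative stand-in (the hint lists and the default "Bug" bucket are assumptions, not the skill's actual logic):

```python
# Sketch of the classification step, assuming tickets arrive as plain dicts
# from the Jira REST API. The keyword lists are illustrative only.

UI_HINTS = {"button", "color", "css", "layout", "screen", "style", "ui"}
BACKEND_HINTS = {"api", "database", "endpoint", "query", "service", "sql"}

def classify_ticket(summary: str, description: str = "") -> str:
    """Classify a ticket as UI / Backend / Full-stack / Bug from its text."""
    words = set((summary + " " + description).lower().split())
    ui = bool(words & UI_HINTS)
    backend = bool(words & BACKEND_HINTS)
    if ui and backend:
        return "Full-stack"
    if ui:
        return "UI"
    if backend:
        return "Backend"
    return "Bug"  # default bucket when no hint matches
```

In the real pipeline the model reads the full ticket, but a deterministic first pass like this keeps obvious cases cheap.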
Automatically scans the ticket description and comments for jam.dev links. When found, uses the JAM MCP tools to fetch the full recording: video analysis, console errors, failed network requests, and user interaction timeline. The analysis is included in the ticket summary to inform all subsequent phases.
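The link scan itself is a one-regex job. A minimal sketch, assuming recording URLs follow the `https://jam.dev/c/<id>` shape shown elsewhere in this document:

```python
import re

# Minimal sketch of the JAM link scan: pull jam.dev recording URLs out of a
# ticket description or comment body. The URL shape is an assumption.
JAM_LINK = re.compile(r"https://jam\.dev/c/[A-Za-z0-9-]+")

def find_jam_links(text: str) -> list[str]:
    """Return all unique jam.dev recording links, preserving first-seen order."""
    seen: dict[str, None] = {}
    for url in JAM_LINK.findall(text):
        seen.setdefault(url)
    return list(seen)
```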
Always asks the user which project paths to use — never assumes or auto-detects. The current working directory is the AI orchestration project, not the ticket's codebase. The user provides explicit paths (e.g., Backend: D:\Finago\control-backend-api, Frontend: D:\Finago\control-frontend). Then detects the stack for each provided path and builds a profile so all agents use the correct language and framework.
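Per-path stack detection can be sketched as a marker-file lookup. The marker-to-stack table below is illustrative, not the skill's actual mapping:

```python
from pathlib import Path

# Illustrative sketch: detect a project's stack from well-known marker files
# directly under each user-provided project root. Mapping is an assumption.
MARKERS = {
    "package.json": "Node/JavaScript",
    "pom.xml": "Java/Maven",
    "build.gradle": "Java/Gradle",
    "*.csproj": "C#/.NET",
    "requirements.txt": "Python",
    "go.mod": "Go",
    "Cargo.toml": "Rust",
}

def detect_stack(project_root: str) -> list[str]:
    """Return the stacks whose marker files exist directly under the root."""
    root = Path(project_root)
    found = []
    for pattern, stack in MARKERS.items():
        if list(root.glob(pattern)):
            found.append(stack)
    return found
```

A path that matches more than one marker (e.g. a monorepo root) returns multiple stacks, which is exactly the case where asking the user is safer than guessing.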
Presents ticket analysis, classification, affected files, and suggested approach. User reviews and approves, modifies, or requests analysis only.
UI Designer agent creates visual mockups in Paper using MCP — artboards with HTML/CSS matching the app's design language. Mandatory for any UI change, even a color tweak.
User opens Paper to review mockups. Can accept, request changes, or modify the design directly in Paper — the AI will fetch the updated design via MCP.
Playwright captures screenshots of all affected pages before any code changes are made. These become the baseline for the visual comparison.
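A baseline capture along these lines could look like the following sketch using Playwright's sync API. The `capture_baseline` helper and the file-naming scheme are our assumptions; running it requires `pip install playwright` and `playwright install chromium`:

```python
from pathlib import Path

def baseline_path(outdir: str, ticket: str, page_name: str) -> Path:
    """Deterministic file name for a 'before' screenshot (naming is an assumption)."""
    return Path(outdir) / f"{ticket}_{page_name}_before.png"

def capture_baseline(base_url: str, pages: dict[str, str], ticket: str, outdir: str) -> list[Path]:
    """Capture 'before' screenshots of each affected page with headless Chromium.

    `pages` maps a page name to its URL path, e.g. {"login": "/login"}.
    """
    from playwright.sync_api import sync_playwright  # lazy import: optional dependency

    Path(outdir).mkdir(parents=True, exist_ok=True)
    shots = []
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        for name, path in pages.items():
            page.goto(base_url + path)
            target = baseline_path(outdir, ticket, name)
            page.screenshot(path=str(target), full_page=True)
            shots.append(target)
        browser.close()
    return shots
```

Deterministic names matter: the "after" pass must produce files that pair one-to-one with these baselines for the comparison step.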
Specialized Frontend and/or Backend developer agents implement the fix following the approved design. Agents can run in parallel for full-stack tickets.
Code Analyst agent reviews only the diff for security/quality issues (auto-fixes critical ones). Playwright takes matching after screenshots of the same pages and generates a side-by-side comparison HTML with red “Before” and green “After” labels.
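The comparison page itself is plain templating. A minimal sketch of the generator, with illustrative markup:

```python
# Minimal sketch of the side-by-side comparison page: one row per affected
# screen, red "Before" / green "After" labels. Markup is illustrative.
def comparison_html(pairs: list[tuple[str, str, str]]) -> str:
    """pairs: (page name, before image path, after image path)."""
    rows = []
    for name, before, after in pairs:
        rows.append(
            f"<h2>{name}</h2>"
            f'<div style="display:flex;gap:16px">'
            f'<figure><figcaption style="color:red">Before</figcaption>'
            f'<img src="{before}" alt="{name} before"></figure>'
            f'<figure><figcaption style="color:green">After</figcaption>'
            f'<img src="{after}" alt="{name} after"></figure>'
            f"</div>"
        )
    return "<!doctype html><title>Before / After</title>" + "".join(rows)
```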
Report Generator agent compiles all findings into a professional HTML report with ticket details, changes, code analysis, and verification screenshots.
User decides: let the AI update Jira (comment + attachments + status transition + time log), handle it manually, or skip. No external action without explicit permission.
Posts a comment, uploads the HTML report and all screenshots as attachments, transitions the ticket status, and logs a realistic time estimate via the Jira worklog API.
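The comment body must be Atlassian Document Format rather than plain text. A minimal single-paragraph ADF payload, as accepted by Jira Cloud's comment endpoint, looks like this (the helper name is ours; the actual pipeline builds a richer document):

```python
# Sketch of the ADF comment payload. Jira Cloud's comment endpoint expects the
# body in Atlassian Document Format; this builds the minimal one-paragraph shape.
def adf_comment(text: str) -> dict:
    return {
        "body": {
            "type": "doc",
            "version": 1,
            "content": [
                {
                    "type": "paragraph",
                    "content": [{"type": "text", "text": text}],
                }
            ],
        }
    }
```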
The main skill (/jira) acts as a senior tech lead, coordinating all phases and spawning specialized agents via the Agent tool.
Full integration: fetch issues, download attachments, post comments in Atlassian Document Format, upload files, transition status, log worklogs.
Model Context Protocol integration with Paper.design — create artboards, write HTML/CSS designs, take screenshots, export JSX, all programmatically.
Headless browser automation for capturing real screenshots of the running application after changes are implemented.
Connects to jam.dev via Model Context Protocol — fetches video analysis, console logs, network requests, and user event timelines from bug recordings linked in Jira tickets.
Ticket FO-2872: “Change color of sign in button” — a real UI change walked through the full pipeline. Click each step to expand.
The developer is in their terminal, inside Claude Code. They type /jira FO-2872.
That single command triggers the entire pipeline. The orchestrator takes over.
The orchestrator loads .env credentials, calls the Jira REST API, downloads the ticket including any attachments (screenshots, specs), and presents a structured summary:
Ticket: FO-2872 — “Change color of sign in button”
Type: DevBug | Priority: Medium | Status: In Progress
Classification: UI Change | Requires Design: Yes
Affected Files: LoginPage.jsx, LoginPage.css
The AI asks: “How would you like to proceed? Proceed / Modify Plan / Just Analyze”
Since this is a UI change, the orchestrator spawns a UI Designer agent that connects to Paper.design via MCP and creates mockup artboards in HTML/CSS matching the app's design language.
This is mandatory even for a simple color change. The skill enforces it with explicit “MANDATORY” language.
The user opens Paper.design and sees the mockup artboards. They can accept the design, request changes, or modify it directly in Paper; once approved, the AI fetches the final design via mcp__paper__get_jsx and uses it as the implementation reference.
This is the critical human checkpoint. The user has full control of the visual direction, and can even edit directly in Paper.
A Frontend Developer agent is spawned with the full ticket context, the approved design reference, and the identified files. It modifies the CSS to change the button color, following the existing code patterns.
For a full-stack ticket, frontend and backend agents would run in parallel.
Before any code changes, Playwright automatically captures “before” screenshots of all affected pages. For this ticket, it screenshots the login page with the current button color. These baseline images are saved for comparison later.
Three things happen automatically after implementation: the Code Analyst reviews only the changed code (git diff) for security vulnerabilities, logic errors, and quality issues, auto-fixing critical ones; Playwright captures matching "after" screenshots of the same pages; and a side-by-side before/after comparison HTML is generated. The orchestrator then presents a summary of everything done and asks whether the AI should update Jira, the user will handle it manually, or the step should be skipped.
No external action without explicit permission. This is a core design principle.
If the user chose “Agent updates Jira”, the pipeline posts a structured comment, uploads the HTML report and all screenshots as attachments, transitions the ticket status, and logs the time via the Jira worklog API.
Done. One command, full lifecycle.
| Area | Before (Manual) | After (AI Pipeline) |
|---|---|---|
| Total Time | 45–90 min per ticket including all overhead steps | 5–10 min with human checkpoints; ~1 command to start |
| Manual Steps | 8–10 manual steps across 5+ different tools | 2–3 decision points; everything else automated |
| Screenshots | Manually taken, cropped, and uploaded to Jira — often skipped entirely | Playwright auto-captures all affected pages; uploaded as Jira attachments automatically |
| Design Review | Discussed in Jira comments or Slack — no visual preview before coding | Full visual mockup in Paper.design for interactive review before a single line is coded |
| Code Review | Depends on team process; often shipped without review for small changes | Automatic security and quality analysis on every change — critical issues auto-fixed |
| Time Logging | Forgotten, back-filled at end of week, or estimated loosely | AI suggests a realistic estimate based on work complexity; logs it via the Jira API immediately |
| Jira Updates | Comment written manually, attachments uploaded one by one, status changed by hand | Structured comment, all attachments, and status transition in one batch — with user approval |
| Visual Verification | Manual screenshots, no comparison — reviewers have to remember what it looked like before | Automatic before/after screenshots with side-by-side HTML comparison (red/green labels) uploaded to Jira |
| Documentation | Rarely done for small tickets; knowledge stays in the developer's head | Professional HTML report generated for every ticket with full implementation details |
| Stack Support | Hardcoded for React/Express only | Auto-detects any stack (C#, Java, Python, Go, Rust, etc.) and tailors agent prompts |
AI agents naturally try to optimize. For a simple color change, the agent would reason “this is too trivial for a design mockup” and skip straight to implementation. We had to use very explicit language — “MANDATORY for ANY UI change. Even a one-line color change gets a Paper mockup. No exceptions.” — and add multiple reinforcement points throughout the skill.
Paper's MCP integration doesn't support creating new pages — only artboards within the current page. We worked around this with a naming convention: all artboards are prefixed with the ticket key (e.g., “FO-2872 — Proposed Fix”) so designs are grouped and identifiable on a shared canvas.
When Claude Code agents spawn sub-agents that run in different working directories, the .env file for Jira credentials couldn't be found with relative paths. The fix: always use absolute paths — source d:/Kunder/247/AIComp/.env — hardcoded into the skill. Not elegant, but reliable.
Early versions of the skill would generate screenshots but forget to upload them to Jira, or would upload the report but not the images. It took multiple iterations of the skill prompt to ensure the upload step was mandatory and explicit, with separate upload loops for design screenshots and verification screenshots.
The biggest design insight: every external action must require user confirmation. The AI can analyze, design, implement, and generate reports autonomously. But the moment it touches something external — posting to Jira, transitioning status, logging time — it must ask first. This builds trust and prevents mistakes.
The AI would auto-detect the current working directory and assume it was the ticket's codebase — but the orchestration project (AIComp) is never the right target. After the AI incorrectly routed work to the wrong project, we added a hard rule: always ask the user for project paths, never assume. The skill now explicitly states the current directory is NOT the codebase and removes all "current directory" shortcut options.
Initial attempts to fetch JAM recordings via WebFetch returned only the Vite app shell — no replay data. JAM is a fully client-rendered SPA. The solution: integrate the JAM MCP server, which provides direct API access to video analysis, console logs, network requests, and user events without needing a browser.
/jira sprint fetches all active sprint tickets assigned to you or unassigned, filters out done/closed/rejected, and presents a prioritized table. Pick all tickets, specific ones, only yours, or only unassigned — then process each sequentially through the full pipeline. Between tickets you can continue, skip, or stop. Ends with a sprint summary table.
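The selection logic reduces to a small filter. A sketch under assumed field names (`status`, `assignee`) on plain ticket dicts:

```python
# Sketch of the sprint filter: keep tickets assigned to you or unassigned, and
# drop done/closed/rejected ones. Ticket shape and field names are assumptions.
CLOSED = {"done", "closed", "rejected"}

def pick_sprint_tickets(tickets: list[dict], me: str) -> list[dict]:
    return [
        t for t in tickets
        if t.get("status", "").lower() not in CLOSED
        and t.get("assignee") in (me, None)
    ]
```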
Stack detection consults the /project-index skill when detection is unclear.
/jira teams lists all available Scrum teams fetched live from Jira. /jira sprint now dynamically fetches teams and presents them as selectable options instead of hardcoded team names. Currently finds 13 teams across the FO project.
/unit-test scans projects for test coverage gaps across any tech stack (Java/JUnit, C#/xUnit, JS/Jest, Python/pytest, Go, Rust, etc.). Three modes: full scan (/unit-test *) maps all existing tests and creates missing ones, single file mode creates tests for a specific class, and project-aware mode resolves project names from prior work. All tests are run and fixed iteratively until green, with a detailed HTML report.
/deps scans project dependencies for known CVEs (with exploitability assessment), outdated packages (staleness score), and license compatibility risks. Supports any stack (npm, Maven, Gradle, NuGet, pip, Go, Cargo, Composer, Bundler). Generates a risk-scored health grade (A–F) and can auto-apply safe updates or export Dependabot/Renovate configs.
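The letter grade is just a thresholded aggregate risk score. An illustrative mapping (the thresholds are assumptions, not the skill's actual cutoffs):

```python
# Illustrative mapping from an aggregate risk score (0 = clean, 100 = worst)
# to the A-F health grade. Thresholds are assumptions.
def health_grade(risk_score: float) -> str:
    for threshold, grade in ((10, "A"), (25, "B"), (45, "C"), (70, "D")):
        if risk_score <= threshold:
            return grade
    return "F"
```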
Beyond the core /jira pipeline, the project includes a full suite of development workflow skills:
/jira — Full ticket orchestrator: 5 phases from fetch to Jira update, with before/after visual comparison. The core pipeline.
/jira sprint — Sprint batch mode: fetches all active sprint tickets, lets you pick which to process, works through them sequentially with continue/skip/stop controls.
/jira teams — Lists all available Scrum teams fetched dynamically from Jira. Shows a numbered table of teams found in the FO project.
/jam — Analyzes JAM bug recordings via MCP: fetches video analysis, console logs, network requests, and user events. Can take a URL, JAM ID, or Jira ticket key as input.
/unit-test — Scans projects for unit test coverage gaps, creates missing tests, runs and fixes them until green. Supports full scan, single file, project-name resolution, and --fix-ignored mode to rehabilitate disabled tests.
/deps — Dependency health auditor: scans for CVEs (with exploitability check), outdated packages (staleness score), and license risks. Health grade A–F. Can auto-fix or export CI configs.
/tempo — Quick time logging. /tempo addTime FO-2872 2h "Bug fix" logs time without leaving the terminal.
/new-feature — 6-phase feature pipeline: plan, screenshot, design in Paper, parallel implementation, code analysis, master report.
/code-analysis — Reviews only changed code (git diff) for security, logic, and quality. Auto-fixes critical issues.
/dev-team — 5 specialized agents in an iterative loop: scan, fix, test, verify, repeat until zero findings.
/full-pipeline — End-to-end delivery: quality loop + Playwright E2E + Docker build/deploy + integration tests + master report.