AI Competition 2026

From Jira Ticket to
Shipped Code in One Command

An AI-powered development pipeline that handles design, implementation, code review, screenshots, and Jira updates — so developers can focus on decisions, not process.

By Bård Øvrebø, Finago — 24SO ERP
Claude Code + Jira + Paper.design + JAM + Playwright
🔴

1. Problem Statement

The Hidden Cost of Developer Ceremony

Developers waste enormous amounts of time on process overhead that surrounds actual coding. Reading Jira tickets, context-switching to design tools, manually screenshotting changes, writing comments, logging time, updating status — the actual code change might take 30 minutes, but the surrounding ceremony adds another 30–60 minutes per ticket.

A Typical Workflow (Before AI)

  1. Open Jira, find your ticket, read description and comments
  2. Download any attachments (screenshots of the bug, specs)
  3. Open the codebase, search for relevant files
  4. Context-switch to a design tool if UI changes are needed
  5. Make the actual code changes
  6. Start the app, navigate to the right page, take screenshots
  7. Switch back to Jira, write a comment describing what you did
  8. Upload screenshots and any reports as attachments
  9. Log time (if you remember)
  10. Update the ticket status

Every step is manual. Context-switching is constant. And this repeats for every single ticket.

60%+ Time on Process

More than half of a developer's time on a ticket is spent on process overhead, not on writing or reasoning about code.

🔀

Constant Context Switching

Browser to IDE to terminal to design tool to browser to Jira — each switch has a cognitive cost and invites distraction.

🚫

Missed Steps

Time logging forgotten. Screenshots skipped. Status not updated. Code reviews postponed. The boring steps get dropped.

💡

2. Our Idea

One-liner: We built an AI pipeline that takes a Jira ticket number and handles everything from design mockup to implementation to Jira updates — with human checkpoints at every critical decision.

For: Developers Who Want to Focus on Code, Not Process

The developer stays in the terminal — their natural habitat. One slash command kicks off the entire workflow. The AI handles the tedious process steps (fetching context, creating mockups, screenshotting, uploading, logging time) while the human makes the decisions that matter: approving designs, reviewing implementation approaches, and deciding when something is ready.

The Key Problem We Solve

We eliminate the 60%+ of development time that goes to process instead of code. Instead of ten manual steps across five different tools, the developer types one command and gets asked a few focused questions. The AI does the legwork; the human does the thinking.

🎯

Single Entry Point

Type /jira FO-2847 for a single ticket, or /jira sprint to process an entire sprint backlog. One command, full lifecycle.

🛡

Human Checkpoints

The AI never acts without asking. Design approval, implementation approach, Jira updates — all require explicit user confirmation.

🤖

Specialized Agents

UI designer, backend dev, frontend dev, code reviewer, report generator — each agent is an expert at its role.

3. How It Works

Input → Output

Single ticket: /jira FO-2847 — a Jira ticket number typed in the terminal

Whole sprint: /jira sprint — fetches all active sprint tickets, lets you pick which to process, works through them sequentially

Quick time log: /tempo addTime FO-2847 2h "Bug fix" — log time without leaving the terminal

List teams: /jira teams — fetches all available Scrum teams from Jira dynamically

Analyze a JAM: /jam https://jam.dev/c/abc123 — analyzes a JAM bug recording with video analysis, console logs, network requests, and user events

Unit test scan: /unit-test * control-backend-api — scans a project for test coverage gaps, maps all existing tests, creates missing unit tests, and runs them until green

Fix ignored tests: /unit-test --fix-ignored control-backend-api — finds all @Disabled/@Ignore/skip tests, lets you select which to unignore, then debugs and fixes them until green

Dependency audit: /deps control-backend-api — scans all dependencies for CVEs, outdated packages, and license risks with a health score (A–F grade)

Output: Implemented fix, design mockups, before/after screenshot comparison, HTML report, Jira comment + attachments, logged time, transitioned status

Pipeline Flow

API Fetch & Analyze Ticket

Calls Jira REST API to get ticket details, downloads attachments (including images for visual inspection), classifies the work as Bug/UI/Backend/Full-stack, finds relevant code files across the codebase.
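The classification step can be pictured as a small keyword-based triage. A minimal sketch — the keyword lists and the `classify_ticket` helper are illustrative assumptions, not the skill's actual logic:

```python
# Hypothetical sketch of the Bug/UI/Backend/Full-stack triage step.
# Keyword lists and function name are illustrative, not the real skill prompt.

UI_HINTS = {"button", "color", "css", "layout", "screen", "page", "style"}
BACKEND_HINTS = {"api", "endpoint", "database", "query", "migration", "service"}

def classify_ticket(summary: str, description: str) -> str:
    """Classify a ticket from its text so the right agents are spawned."""
    text = f"{summary} {description}".lower()
    ui = any(word in text for word in UI_HINTS)
    backend = any(word in text for word in BACKEND_HINTS)
    if ui and backend:
        return "Full-stack"
    if ui:
        return "UI"
    if backend:
        return "Backend"
    return "Bug"
```

In practice the orchestrator reasons over the full ticket context rather than fixed keyword lists, but the output contract is the same: one label that decides which agents get spawned.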

Auto Detect & Analyze JAM Recordings

Automatically scans the ticket description and comments for jam.dev links. When found, uses the JAM MCP tools to fetch the full recording: video analysis, console errors, failed network requests, and user interaction timeline. The analysis is included in the ticket summary to inform all subsequent phases.
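The link scan itself is simple. A sketch of what it might look like — the regex and the `find_jam_links` helper are assumptions for illustration:

```python
import re

# Illustrative sketch of the jam.dev link scan over ticket description and
# comments; the regex and helper name are assumptions, not the skill's code.
JAM_LINK = re.compile(r"https://jam\.dev/c/[0-9a-f-]+")

def find_jam_links(*texts: str) -> list[str]:
    """Collect unique jam.dev recording links, preserving first-seen order."""
    seen: list[str] = []
    for text in texts:
        for link in JAM_LINK.findall(text):
            if link not in seen:
                seen.append(link)
    return seen

links = find_jam_links(
    "Repro here: https://jam.dev/c/4828d597-d49b-4fa8-99b8-f696321af056",
    "See also https://jam.dev/c/4828d597-d49b-4fa8-99b8-f696321af056",
)
```

Each unique link found is then handed to the JAM MCP tools for the full fetch.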

Auto Detect Tech Stack & Ask for Paths

Always asks the user which project paths to use — never assumes or auto-detects. The current working directory is the AI orchestration project, not the ticket's codebase. The user provides explicit paths (e.g., Backend: D:\Finago\control-backend-api, Frontend: D:\Finago\control-frontend). Then detects the stack for each provided path and builds a profile so all agents use the correct language and framework.
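Once the user supplies a path, detection can be as simple as checking for well-known marker files. A hedged sketch — the marker table and `detect_stack` helper are assumptions about one possible implementation:

```python
from pathlib import Path

# Sketch: detect a project's stack from marker files in a user-supplied path.
# The marker table and helper name are illustrative assumptions.
MARKERS = {
    "package.json": "Node.js",
    "pom.xml": "Java (Maven)",
    "build.gradle": "Java (Gradle)",
    "pyproject.toml": "Python",
    "go.mod": "Go",
    "Cargo.toml": "Rust",
    "*.csproj": "C# (.NET)",
}

def detect_stack(project_path: str) -> list[str]:
    """Return the stacks whose marker files exist at the given path's root."""
    root = Path(project_path)
    found: list[str] = []
    for marker, stack in MARKERS.items():
        if list(root.glob(marker)) and stack not in found:
            found.append(stack)
    return found
```

The resulting profile (one per provided path) is injected into every agent prompt so a Java backend never gets React-flavored instructions.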

User Approve Plan

Presents ticket analysis, classification, affected files, and suggested approach. User reviews and approves, modifies, or requests analysis only.

AI Agent Design in Paper.design

UI Designer agent creates visual mockups in Paper using MCP — artboards with HTML/CSS matching the app's design language. Mandatory for any UI change, even a color tweak.

User Review & Approve Design

User opens Paper to review mockups. Can accept, request changes, or modify the design directly in Paper — the AI will fetch the updated design via MCP.

Auto Before Screenshots

Playwright captures screenshots of all affected pages before any code changes are made. These become the baseline for the visual comparison.

AI Agents Implement Changes

Specialized Frontend and/or Backend developer agents implement the fix following the approved design. Agents can run in parallel for full-stack tickets.

Auto Code Analysis & After Screenshots

Code Analyst agent reviews only the diff for security/quality issues (auto-fixes critical ones). Playwright takes matching after screenshots of the same pages and generates a side-by-side comparison HTML with red “Before” and green “After” labels.
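The comparison page can be sketched as a small HTML generator over before/after image pairs — the function name, inline styles, and file layout below are assumptions, not the pipeline's actual implementation:

```python
# Minimal sketch of the side-by-side comparison HTML with a red "Before" and
# a green "After" label per pair. Styling and layout are illustrative.

def comparison_html(pairs: list[tuple[str, str]]) -> str:
    """Render before/after screenshot pairs side by side."""
    rows = []
    for before, after in pairs:
        rows.append(f"""
        <div style="display:flex;gap:16px;margin-bottom:24px">
          <figure>
            <figcaption style="color:#c0392b;font-weight:bold">Before</figcaption>
            <img src="{before}" alt="Before screenshot" width="480">
          </figure>
          <figure>
            <figcaption style="color:#27ae60;font-weight:bold">After</figcaption>
            <img src="{after}" alt="After screenshot" width="480">
          </figure>
        </div>""")
    return "<html><body>" + "".join(rows) + "</body></html>"

html = comparison_html([("before/login.png", "after/login.png")])
```

Because the "before" and "after" captures target the same pages at the same viewport, each pair lines up pixel-for-pixel in the review.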

Auto Generate HTML Report

Report Generator agent compiles all findings into a professional HTML report with ticket details, changes, code analysis, and verification screenshots.

User Choose Jira Update Strategy

User decides: let the AI update Jira (comment + attachments + status transition + time log), handle it manually, or skip. No external action without explicit permission.

API Update Jira & Log Time

Posts a comment, uploads the HTML report and all screenshots as attachments, transitions the ticket status, and logs a realistic time estimate via the Jira worklog API.
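The worklog call boils down to converting a human duration into seconds and posting a small JSON body. A sketch — the endpoint shape follows the public Jira Cloud REST API v3, while `parse_duration` and `worklog_payload` are illustrative helpers:

```python
import re
from datetime import datetime, timezone

def parse_duration(spec: str) -> int:
    """Convert a '2h', '30m', or '1h 30m' style spec to seconds."""
    seconds = 0
    for amount, unit in re.findall(r"(\d+)\s*([hm])", spec.lower()):
        seconds += int(amount) * (3600 if unit == "h" else 60)
    return seconds

def worklog_payload(duration: str, comment: str) -> dict:
    """Body for POST /rest/api/3/issue/{key}/worklog (Jira Cloud v3)."""
    return {
        "timeSpentSeconds": parse_duration(duration),
        "started": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000+0000"),
        # Worklog comments in API v3 use Atlassian Document Format.
        "comment": {
            "type": "doc", "version": 1,
            "content": [{"type": "paragraph",
                         "content": [{"type": "text", "text": comment}]}],
        },
    }
```

The same payload shape backs the `/tempo addTime` shortcut, so a quick `2h "Bug fix"` log and the end-of-pipeline log go through identical plumbing.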

Key Components

Claude Code Orchestrator

The main skill (/jira) acts as a senior tech lead, coordinating all phases and spawning specialized agents via the Agent tool.

Jira REST API

Full integration: fetch issues, download attachments, post comments in Atlassian Document Format, upload files, transition status, log worklogs.
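Comments posted through API v3 must be wrapped in Atlassian Document Format rather than plain text. A minimal builder sketch — the `adf_comment` helper is illustrative; only the ADF node shape comes from the public API docs:

```python
# Sketch of a comment body in Atlassian Document Format (ADF), as required
# by Jira Cloud's POST /rest/api/3/issue/{key}/comment endpoint.

def adf_comment(lines: list[str]) -> dict:
    """Wrap plain text lines into a minimal ADF document, one paragraph each."""
    return {
        "body": {
            "type": "doc",
            "version": 1,
            "content": [
                {"type": "paragraph",
                 "content": [{"type": "text", "text": line}]}
                for line in lines
            ],
        }
    }

payload = adf_comment([
    "Resolved FO-2872: sign-in button color updated.",
    "See attached before/after comparison and HTML report.",
])
```

Richer comments (headings, code blocks, mentions) just add more ADF node types to the same `content` array.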

Paper MCP

Model Context Protocol integration with Paper.design — create artboards, write HTML/CSS designs, take screenshots, export JSX, all programmatically.

Playwright

Headless browser automation for capturing real screenshots of the running application after changes are implemented.

JAM MCP Integration

Connects to jam.dev via Model Context Protocol — fetches video analysis, console logs, network requests, and user event timelines from bug recordings linked in Jira tickets.

Real Prompt Examples from the Skills

jira orchestrator

$ /jira FO-2872
# The orchestrator fetches the ticket via curl:
source .env && curl -s -u "${JIRA_EMAIL}:${JIRA_API_TOKEN}" \
  "${JIRA_BASE_URL}/rest/api/3/issue/FO-2872?expand=renderedFields"

tempo time tracking

$ /tempo addTime FO-2872 14:00 16:00 "Implemented sign-in button color change"
$ /tempo getTime FO-2872
# Output:
| Date       | Day | Start | End   | Duration | Author      | Description                             |
|------------|-----|-------|-------|----------|-------------|-----------------------------------------|
| 2026-03-18 | Wed | 14:00 | 16:00 | 2h 0m    | Bård Øvrebø | Implemented sign-in button color change |

ui designer agent prompt

# Excerpt from the UI Designer agent prompt passed to Paper MCP:
You are a Senior UI/UX Designer. Use the Paper MCP tools to design the solution.
IMPORTANT: Prefix ALL artboard names with the ticket key:
  "FO-2872 — Current State"
  "FO-2872 — Proposed Fix"
Match the existing application's design language. Use realistic content.
Screenshot after every 2-3 modifications to verify quality.

jam analysis via MCP

$ /jam https://jam.dev/c/4828d597-d49b-4fa8-99b8-f696321af056
# Parallel MCP calls:
mcp__Jam__getDetails(jamId: "https://jam.dev/c/...")
mcp__Jam__analyzeVideo(jamId: "https://jam.dev/c/...")
mcp__Jam__getConsoleLogs(jamId: "https://jam.dev/c/...", logLevel: "error")
mcp__Jam__getNetworkRequests(jamId: "https://jam.dev/c/...", statusCode: "5xx")
mcp__Jam__getUserEvents(jamId: "https://jam.dev/c/...")
# Output: structured analysis with steps, errors, environment info

dynamic team fetching

$ /jira teams
# Fetches all Scrum teams from Jira via JQL:
| #   | Team Name        |
|-----|------------------|
| 1   | Annual accounts  |
| 2   | BANK             |
| 3   | Control          |
| 4   | Rocket           |
| ... | (13 teams total) |
# /jira sprint also uses this to let you pick a team dynamically
🎬

4. Live Demo Walkthrough

Ticket FO-2872: “Change color of sign in button” — a real UI change walked through the full pipeline. Click each step to expand.

Step 1 User types the command

The developer is in their terminal, inside Claude Code. They type:

$ /jira FO-2872

That single command triggers the entire pipeline. The orchestrator takes over.

Step 2 AI fetches and analyzes the ticket

The orchestrator loads .env credentials, calls the Jira REST API, downloads the ticket including any attachments (screenshots, specs), and presents a structured summary:

Ticket: FO-2872 — “Change color of sign in button”
Type: DevBug | Priority: Medium | Status: In Progress
Classification: UI Change | Requires Design: Yes
Affected Files: LoginPage.jsx, LoginPage.css

The AI asks: “How would you like to proceed? Proceed / Modify Plan / Just Analyze”

Step 3 AI creates design mockup in Paper

Since this is a UI change, the orchestrator spawns a UI Designer agent that connects to Paper.design via MCP. The agent:

  • Reads the existing CSS to match the app's design language
  • Creates artboards: “FO-2872 — Current State” and “FO-2872 — Proposed Fix”
  • Writes HTML/CSS into the artboards showing the button color change
  • Takes screenshots to verify the design renders correctly

This is mandatory even for a simple color change. The skill enforces it with explicit “MANDATORY” language.

Step 4 User reviews and approves the design

The user opens Paper.design and sees the mockup artboards. They have four options:

  1. Accept designs — proceed to implementation
  2. I've made changes in Paper — the AI fetches updated designs via mcp__paper__get_jsx and uses those as the implementation reference
  3. Request changes — describe what to change; the designer agent iterates
  4. Skip design — go straight to implementation

This is the critical human checkpoint. The user has full control of the visual direction, and can even edit directly in Paper.

Step 5 AI implements the change

A Frontend Developer agent is spawned with the full ticket context, the approved design reference, and the identified files. It modifies the CSS to change the button color, following the existing code patterns.

For a full-stack ticket, frontend and backend agents would run in parallel.

Step 5.5 “Before” screenshots captured (prior to implementation)

Before any code changes, Playwright automatically captures “before” screenshots of all affected pages. For this ticket, it screenshots the login page with the current button color. These baseline images are saved for comparison later.

Step 6 AI runs code review, takes after screenshots, generates comparison

Three things happen automatically after implementation:

  • Code Analyst agent reviews only the diff (git diff) for security vulnerabilities, logic errors, and code quality issues. Critical issues are auto-fixed.
  • Playwright script launches a real browser, navigates to the same pages as the “before” step, and captures matching “after” screenshots.
  • Before/After comparison — generates a side-by-side HTML with red-labeled “Before” and green-labeled “After” images. The comparison HTML and all screenshots are uploaded to Jira as attachments.

Step 7 User chooses how to handle Jira update

The orchestrator presents a summary of everything done and asks:

  1. I'll update Jira myself — gives ready-to-paste text and file paths
  2. Agent updates Jira — post comment, upload report + screenshots, transition status
  3. Do nothing on Jira — just keep the local report

No external action without explicit permission. This is a core design principle.

Step 8 Report + screenshots uploaded, time logged

If the user chose “Agent updates Jira”, the pipeline:

  • Uploads the HTML report as a Jira attachment
  • Uploads Paper design screenshots and Playwright verification screenshots
  • Posts a structured comment summarizing the resolution
  • Asks about status transition (In Review / Done / Ready for PROD / Keep current)
  • Asks about time logging with a smart estimate (e.g., “30 minutes” for a simple color change)

Done. One command, full lifecycle.

🔄

5. Before vs After

| Area | Before (Manual) | After (AI Pipeline) |
|---|---|---|
| Total Time | 45–90 min per ticket including all overhead steps | 5–10 min with human checkpoints; ~1 command to start |
| Manual Steps | 8–10 manual steps across 5+ different tools | 2–3 decision points; everything else automated |
| Screenshots | Manually taken, cropped, and uploaded to Jira — often skipped entirely | Playwright auto-captures all affected pages; uploaded as Jira attachments automatically |
| Design Review | Discussed in Jira comments or Slack — no visual preview before coding | Full visual mockup in Paper.design for interactive review before a single line is coded |
| Code Review | Depends on team process; often shipped without review for small changes | Automatic security and quality analysis on every change — critical issues auto-fixed |
| Time Logging | Forgotten, back-filled at end of week, or estimated loosely | AI suggests a realistic estimate based on work complexity; logs it via the Jira API immediately |
| Jira Updates | Comment written manually, attachments uploaded one by one, status changed by hand | Structured comment, all attachments, and status transition in one batch — with user approval |
| Visual Verification | Manual screenshots, no comparison — reviewers have to remember what it looked like before | Automatic before/after screenshots with side-by-side HTML comparison (red/green labels) uploaded to Jira |
| Documentation | Rarely done for small tickets; knowledge stays in the developer's head | Professional HTML report generated for every ticket with full implementation details |
| Stack Support | Hardcoded for React/Express only | Auto-detects any stack (C#, Java, Python, Go, Rust, etc.) and tailors agent prompts |
🧠

6. Challenges & Learnings

Agents Try to Skip Steps

AI agents naturally try to optimize. For a simple color change, the agent would reason “this is too trivial for a design mockup” and skip straight to implementation. We had to use very explicit language — “MANDATORY for ANY UI change. Even a one-line color change gets a Paper mockup. No exceptions.” — and add multiple reinforcement points throughout the skill.

Paper MCP Limitations

Paper's MCP integration doesn't support creating new pages — only artboards within the current page. We worked around this with a naming convention: all artboards are prefixed with the ticket key (e.g., “FO-2872 — Proposed Fix”) so designs are grouped and identifiable on a shared canvas.

Environment and Path Issues

When Claude Code agents spawn sub-agents that run in different working directories, the .env file for Jira credentials couldn't be found with relative paths. The fix: always use absolute paths — source d:/Kunder/247/AIComp/.env — hardcoded into the skill. Not elegant, but reliable.

Getting Screenshots Uploaded Consistently

Early versions of the skill would generate screenshots but forget to upload them to Jira, or would upload the report but not the images. It took multiple iterations of the skill prompt to ensure the upload step was mandatory and explicit, with separate upload loops for design screenshots and verification screenshots.

The Autonomy vs Control Balance

The biggest design insight: every external action must require user confirmation. The AI can analyze, design, implement, and generate reports autonomously. But the moment it touches something external — posting to Jira, transitioning status, logging time — it must ask first. This builds trust and prevents mistakes.

Project Path Assumptions

The AI would auto-detect the current working directory and assume it was the ticket's codebase — but the orchestration project (AIComp) is never the right target. After the AI incorrectly routed work to the wrong project, we added a hard rule: always ask the user for project paths, never assume. The skill now explicitly states the current directory is NOT the codebase and removes all "current directory" shortcut options.

JAM Recordings Are Client-Side SPAs

Initial attempts to fetch JAM recordings via WebFetch returned only the Vite app shell — no replay data. JAM is a fully client-rendered SPA. The solution: integrate the JAM MCP server, which provides direct API access to video analysis, console logs, network requests, and user events without needing a browser.

🚀

7. Future Plan

✅ Recently Shipped

🚧 Up Next

🛠

Appendix: Full Skills Library

Beyond the core /jira pipeline, the project includes a full suite of development workflow skills:

/jira

Full ticket orchestrator — 5 phases from fetch to Jira update, with before/after visual comparison. The core pipeline.

/jira sprint

Sprint batch mode — fetches all active sprint tickets, lets you pick which to process, works through them sequentially with continue/skip/stop controls.

/jira teams

Lists all available Scrum teams fetched dynamically from Jira. Shows a numbered table of teams found in the FO project.

/jam

Analyzes JAM bug recordings via MCP — fetches video analysis, console logs, network requests, and user events. Can take a URL, JAM ID, or Jira ticket key as input.

/unit-test

Scans projects for unit test coverage gaps, creates missing tests, runs and fixes them until green. Supports full scan, single file, project-name resolution, and --fix-ignored mode to rehabilitate disabled tests.
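The gap scan at the heart of `/unit-test` can be pictured as mapping each source file to its expected companion test. A sketch under assumed Java conventions (`Foo.java` → `FooTest.java`); the `missing_tests` helper and layout are illustrative, not the skill's actual scan:

```python
from pathlib import Path

# Illustrative coverage-gap scan: report source files with no matching
# *Test.java companion. Naming convention is an assumption about one layout.

def missing_tests(src_dir: str, test_dir: str) -> list[str]:
    """Return source file names that lack a FooTest.java companion."""
    tests = {p.stem for p in Path(test_dir).rglob("*Test.java")}
    gaps = []
    for src in sorted(Path(src_dir).rglob("*.java")):
        if f"{src.stem}Test" not in tests:
            gaps.append(src.name)
    return gaps
```

Each reported gap then becomes a work item: generate the test, run it, and iterate until green.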

/deps

Dependency health auditor — scans for CVEs (with exploitability check), outdated packages (staleness score), and license risks. Health grade A–F. Can auto-fix or export CI configs.
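The A–F grade collapses the audit findings into one number. A hedged sketch — the weights and cutoffs below are invented for illustration; the real `/deps` scoring may differ:

```python
# Illustrative A-F health grade from audit counts. Weighting and thresholds
# are assumptions, not the actual /deps formula.

def health_grade(critical_cves: int, outdated: int, license_risks: int) -> str:
    """Collapse audit findings into a single letter grade."""
    score = 100 - critical_cves * 25 - outdated * 2 - license_risks * 10
    for grade, cutoff in (("A", 90), ("B", 80), ("C", 70), ("D", 60)):
        if score >= cutoff:
            return grade
    return "F"
```

Weighting critical CVEs far above staleness means a single exploitable vulnerability drags the grade down harder than a pile of merely outdated packages — which matches how the audit prioritizes fixes.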

/tempo

Quick time logging. /tempo addTime FO-2872 2h "Bug fix" logs time without leaving the terminal.

/new-feature

6-phase feature pipeline: plan, screenshot, design in Paper, parallel implementation, code analysis, master report.

/code-analysis

Reviews only changed code (git diff) for security, logic, quality. Auto-fixes critical issues.

/dev-team

5 specialized agents in an iterative loop: scan, fix, test, verify, repeat until zero findings.

/full-pipeline

End-to-end delivery: quality loop + Playwright E2E + Docker build/deploy + integration tests + master report.