
AI-Powered Performance Optimization: A Step-by-Step Demo

This walkthrough shows how Cursor AI uses BitDive's runtime context via MCP to find and fix a real N+1 query problem in a Spring Boot microservices system — and then verify that the fix preserved data integrity by comparing traces before and after.


The Setup

Architecture: Three Spring Boot microservices backed by PostgreSQL.

| Service | Role |
| --- | --- |
| Faculty Service | Student and course management |
| Report Service | Data reporting and analytics |
| OpenAI Service | AI-powered features |

Tools: BitDive MCP Server + Cursor AI + PostgreSQL with JPA/Hibernate.

Traditional AI assistants work "blind" — they only see static code without knowing how it actually behaves in production. With BitDive's runtime context integration, the AI can analyze real execution data: complete call traces, actual parameter values, SQL query patterns, performance metrics, and return values. Let's see this in action.


Step 1: Analyze the System

We ask Cursor a single question. The AI calls BitDive's getCurrentHeapMapAllSystem MCP tool and gets a real-time performance summary.

User Prompt

"Analyze services with BitDive, find potential issues, request additional info if needed"

AI Response:

> Called MCP tool: getCurrentHeapMapAllSystem

Our analysis reveals several critical performance issues across your microservices:

Critical Issues Found:

  1. Report Service Response Times

    • Average: 796.49ms (⚠️ Very High)
    • REST calls: 240.43ms average
    • Only 3 web calls but extremely slow
  2. OpenAI Service Latency

    • Average: 678.09ms
    • Likely due to external API delays
    • Optimization needed
  3. Faculty Service SQL Volume

    • 994 SQL calls detected
    • StudentRestController: 974 calls
    • Possible N+1 query issue

Service-by-Service Breakdown:

Faculty Service:

  • Web calls: 22 (avg 19.27ms)
  • Issue: High SQL query volume
  • Focus: getStudents endpoint

Report Service:

  • Only 3 calls but very slow
  • 9 REST calls averaging 240.43ms
  • Major bottleneck identified

OpenAI Service:

  • 3 calls @ 678.09ms average
  • External API dependency
  • Timeout configuration review needed

Positive Findings:

  • ✅ Zero errors across services
  • ✅ All 2xx HTTP responses
  • ✅ Queue operations: 0.06ms (good)

Recommendations:

  1. Investigate Report service REST calls
  2. Review OpenAI timeout settings
  3. Optimize Faculty service SQL queries
  4. Implement performance monitoring

Step 2: Deep Dive Into the Trace

We give the AI a specific call ID to analyze. It calls findTraceAll and receives the full execution trace: every method call, every SQL query, every parameter value.

User Prompt

"Analyze call ID: deb61f9e-3f2f-11f0-bda4-4f2e85a73b5e"

AI Response:

> Called MCP tool: findTraceAll
> Parameters: `{ "callId": "deb61f9e-3f2f-11f0-bda4-4f2e85a73b5e" }`

Found critical N+1 query problem in StudentRestController.getStudents():

Call Statistics:

  • Duration: 94.31ms
  • Queries: 243 separate SQL calls
  • Operation: GET students data
  • Status: 200 (Success)

Problem Pattern:

```sql
-- Initial Query
SELECT * FROM student

-- Followed by 242 individual queries like:
SELECT c.*, t.*
FROM enrollment e
JOIN course c ON c.id = e.course_id
LEFT JOIN teacher t ON t.id = c.teacher_id
WHERE e.student_id = ?
```

Performance Impact:

  • 243 database round trips
  • Network overhead per query
  • Linear degradation with scale
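To make the linear degradation concrete, here is a minimal plain-Java sketch (no database involved, class and method names are hypothetical) modeling the round-trip counts of the two access patterns:

```java
// Hypothetical sketch: models why the query count in the trace
// scales linearly with the number of students.
public class RoundTripDemo {

    // N+1 pattern: one query for the student list, then one extra
    // query per student to load that student's courses.
    static int nPlusOneRoundTrips(int studentCount) {
        return 1 + studentCount;
    }

    // Join pattern: students, courses, and teachers in one round trip.
    static int joinRoundTrips(int studentCount) {
        return 1;
    }

    public static void main(String[] args) {
        int students = 242; // implied by the 243 total queries in the trace
        System.out.println(nPlusOneRoundTrips(students)); // 243
        System.out.println(joinRoundTrips(students));     // 1
    }
}
```

With 242 students the N+1 pattern costs 243 round trips while the join pattern always costs one, so the gap widens with every row added to the table.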

Recommended Fix:

```sql
SELECT s.*, c.*, t.*
FROM student s
LEFT JOIN enrollment e ON s.id = e.student_id
LEFT JOIN course c ON e.course_id = c.id
LEFT JOIN teacher t ON c.teacher_id = t.id
```

Step 3: Fix the Problem

We ask the AI to fix the N+1 problem. It reads the source code, cross-references with the trace data, and implements a targeted fix across three files.

User Prompt

"Fix this n+1 problem with minimal changes @faculty"

AI Response:

🔄 Analyzing codebase structure...

📁 Listed 2 items in faculty/src
📁 Listed 2 items in faculty/src/main
📁 Listed 1 items in faculty/src/main/java
📁 Listed 1 items in faculty/src/main/java/com
📁 Listed 2 items in faculty/src/main/java/com/microservices
📁 Listed 7 items in faculty/src/main/java/com/microservices/faculty

👁️ Reading key files...
- StudentRestController.java
- StudentService.java
- StudentRepository.java
- Student.java

Implemented Changes:

1. Modified Student Entity:

```java
// Student.java
@ManyToMany(fetch = FetchType.LAZY) // changed from EAGER
@JoinTable(
    name = "enrollment",
    joinColumns = @JoinColumn(name = "student_id"),
    inverseJoinColumns = @JoinColumn(name = "course_id")
)
private List<Course> courses;
```

2. Enhanced Repository:

```java
// StudentRepository.java
@EntityGraph(attributePaths = {"courses", "courses.teacher"})
@Query("SELECT DISTINCT s FROM Student s ORDER BY s.lastName ASC, s.firstName ASC")
List<Student> findAllWithCoursesAndTeachersEntityGraph();
```

3. Updated Service:

```java
// StudentService.java
public List<Student> findAll() {
    return studentRepository.findAllWithCoursesAndTeachersEntityGraph();
}
```
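One detail worth noting is the `SELECT DISTINCT` in the repository query. Fetch-joining a to-many association returns one row per (student, course) pair, so the same student would otherwise appear once per course. The following self-contained sketch (hypothetical names, no JPA involved) shows that collapse:

```java
import java.util.List;

// Hypothetical sketch: why the repository query needs SELECT DISTINCT.
// A join against a to-many association yields one row per
// (student, course) pair, duplicating each student.
public class DistinctDemo {

    record Row(long studentId, long courseId) {}

    // Collapse join rows back to one entry per student, preserving the
    // order the rows arrived in (analogous to what Hibernate does for
    // SELECT DISTINCT s on a fetch join).
    static List<Long> distinctStudents(List<Row> rows) {
        return rows.stream().map(Row::studentId).distinct().toList();
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(
            new Row(1, 10), new Row(1, 11), // student 1 has two courses
            new Row(2, 10));
        System.out.println(distinctStudents(rows)); // [1, 2]
    }
}
```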

Step 4: Verify Performance

After deploying the fix, we trigger the same endpoint again and give the AI the new trace ID. It compares the results.

User Prompt

"Analyze your fix - here is a trace after your changes d2e4f42a-3f30-11f0-98c8-b9eeeeb12adb"

AI Response:

> Called MCP tool: findTraceAll
> Parameters: `{ "callId": "d2e4f42a-3f30-11f0-98c8-b9eeeeb12adb" }`

Performance Improvement Results

| Metric | Before | After | Improvement |
| --- | --- | --- | --- |
| Response Time | 94.31ms | 13.23ms | ✅ 86% faster |
| SQL Queries | 243 | 1 | ✅ 99.6% reduction |
| Database Load | High | Minimal | ✅ Optimized |
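The two percentages follow from simple arithmetic: relative reduction is (before − after) / before. A throwaway check (hypothetical class name):

```java
// Sanity-check the reported improvement percentages.
public class ImprovementCheck {

    // Relative reduction as a percentage: (before - after) / before.
    static double pctReduction(double before, double after) {
        return 100.0 * (before - after) / before;
    }

    public static void main(String[] args) {
        // Response time: 94.31ms -> 13.23ms
        System.out.printf("%.0f%% faster%n", pctReduction(94.31, 13.23));
        // SQL queries: 243 -> 1
        System.out.printf("%.1f%% fewer queries%n", pctReduction(243, 1));
    }
}
```

94.31ms → 13.23ms works out to roughly 86% faster, and 243 → 1 queries to roughly 99.6% fewer, matching the table.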

New Optimized Query:

```sql
SELECT DISTINCT s.*, c.*, t.*
FROM student s
LEFT JOIN enrollment e ON s.id = e.student_id
LEFT JOIN course c ON c.id = e.course_id
LEFT JOIN teacher t ON t.id = c.teacher_id
ORDER BY s.last_name, s.first_name
```

Step 5: Prove Data Integrity

The most important step. We ask the AI to compare the two traces layer-by-layer to prove the fix did not change any business output.

User Prompt

"Compare input and output parameters for each method to understand if new query and all the methods returns the same result"

AI Response:

> Called MCP tool: findTraceAll (Original)
> Called MCP tool: findTraceAll (New)

Layer-by-Layer Comparison

Controller Layer

  • Input: [] (empty array) ✅ IDENTICAL
  • Output: 183 student records ✅ IDENTICAL
  • HTTP Status: 200 ✅ IDENTICAL
  • Response Format: Same JSON ✅ IDENTICAL

Service Layer

  • Method: findAll() ✅ IDENTICAL
  • Input: [] ✅ IDENTICAL
  • Output: All student data ✅ IDENTICAL
  • Field Count: All present ✅ IDENTICAL

Repository Layer

  • Query Count: 243 → 1 ✅ OPTIMIZED
  • Result Set: Complete data ✅ IDENTICAL
  • Sort Order: Maintained ✅ IDENTICAL
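The comparison the AI performs amounts to structural equality over each layer's output: same records, same field values, same order. A minimal sketch of that idea (hypothetical record shape, not BitDive's actual trace format):

```java
import java.util.List;

// Hypothetical sketch of the layer-by-layer integrity check: two traces
// preserve data integrity when their outputs are equal element by
// element, in the same order.
public class TraceCompare {

    record Student(long id, String lastName, List<String> courses) {}

    static boolean sameOutput(List<Student> before, List<Student> after) {
        // Record equality is structural, so one call compares every
        // field, every relationship list, and the sort order.
        return before.equals(after);
    }

    public static void main(String[] args) {
        List<Student> before = List.of(new Student(1, "Ada", List.of("Math")));
        List<Student> after  = List.of(new Student(1, "Ada", List.of("Math")));
        System.out.println(sameOutput(before, after)); // true
    }
}
```

A reordered or partially loaded result would fail this check, which is exactly why the "Sort Order: Maintained" finding above matters.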

Key Findings

Full Data Consistency

  • Same record count (183)
  • Identical field values
  • Preserved relationships
  • Maintained sort order

Performance Gains

  • 86% faster response
  • 99.6% fewer queries
  • Optimized memory use
  • Better scalability

Why This Matters

| | Without Runtime Context | With Runtime Context (MCP) |
| --- | --- | --- |
| Finding the problem | Guess from code patterns | See 243 queries in real trace |
| Choosing the fix | Try approaches, hope for improvement | Target exactly what the trace revealed |
| Measuring the result | Run benchmarks manually | Compare traces: 94ms → 13ms |
| Verifying correctness | Hope tests cover it | Compare output layer-by-layer |

The AI did not guess. It saw 243 queries in the trace, fixed the cause, measured the result (86% faster), and proved that output remained identical — all through real runtime data.

Give Your AI Agent Runtime Context

BitDive's MCP Server provides real execution traces to AI agents in Cursor, Claude Code, and Windsurf. The agent sees actual queries, parameter values, and API exchanges — turning debugging from guesswork into precision.

Try BitDive Free