# AI-Powered Performance Optimization: A Step-by-Step Demo
This walkthrough shows how Cursor AI uses BitDive's runtime context via MCP to find and fix a real N+1 query problem in a Spring Boot microservices system — and then verify that the fix preserved data integrity by comparing traces before and after.
## The Setup
Architecture: Three Spring Boot microservices backed by PostgreSQL.
| Service | Role |
|---|---|
| Faculty Service | Student and course management |
| Report Service | Data reporting and analytics |
| OpenAI Service | AI-powered features |
Tools: BitDive MCP Server + Cursor AI + PostgreSQL with JPA/Hibernate.
Traditional AI assistants work "blind" — they only see static code without knowing how it actually behaves in production. With BitDive's runtime context integration, the AI can analyze real execution data: complete call traces, actual parameter values, SQL query patterns, performance metrics, and return values. Let's see this in action.
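Under the hood, an MCP tool invocation is a standard JSON-RPC 2.0 `tools/call` request. As a sketch, the request Cursor sends to the BitDive MCP server might look like this (the tool name comes from this demo; the exact argument shape is an assumption):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "getCurrentHeapMapAllSystem",
    "arguments": {}
  }
}
```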
## Step 1: Analyze the System
We ask Cursor a single question. The AI calls BitDive's `getCurrentHeapMapAllSystem` MCP tool and gets a real-time performance summary.
"Analyze services with BitDive, find potential issues, request additional info if needed"
AI Response:
> Called MCP tool: `getCurrentHeapMapAllSystem`
Our analysis reveals several critical performance issues across your microservices:
Critical Issues Found:

1. **Report Service Response Times**
   - Average: 796.49ms (⚠️ Very High)
   - REST calls: 240.43ms average
   - Only 3 web calls, but extremely slow
2. **OpenAI Service Latency**
   - Average: 678.09ms
   - Likely due to external API delays
   - Optimization needed
3. **Faculty Service SQL Volume**
   - 994 SQL calls detected
   - StudentRestController: 974 calls
   - Possible N+1 query issue
Service-by-Service Breakdown:
Faculty Service:
- Web calls: 22 (avg 19.27ms)
- Issue: high SQL query volume
- Focus: `getStudents` endpoint
Report Service:
- Only 3 calls but very slow
- 9 REST calls averaging 240.43ms
- Major bottleneck identified
OpenAI Service:
- 3 calls @ 678.09ms average
- External API dependency
- Timeout configuration review needed
Positive Findings:
- ✅ Zero errors across services
- ✅ All 2xx HTTP responses
- ✅ Queue operations: 0.06ms (good)
Recommendations:
- Investigate Report service REST calls
- Review OpenAI timeout settings
- Optimize Faculty service SQL queries
- Implement performance monitoring
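The "possible N+1" call above follows from simple arithmetic on the metrics: 994 SQL calls across only 22 web requests is roughly 45 queries per request, far above the handful expected for a simple read endpoint. A minimal sketch of that heuristic (plain Java, not BitDive's code; the threshold is an assumption):

```java
// Hypothetical N+1 "smell" heuristic: a high ratio of SQL calls to
// web requests hints at per-row querying rather than joined fetches.
public class NPlusOneSmell {
    static double queriesPerRequest(int sqlCalls, int webCalls) {
        return (double) sqlCalls / webCalls;
    }

    public static void main(String[] args) {
        // Figures reported for the Faculty service: 994 SQL calls over 22 web calls.
        double ratio = queriesPerRequest(994, 22);
        System.out.printf("%.1f queries/request%n", ratio); // ~45.2
        if (ratio > 10) { // threshold chosen for illustration
            System.out.println("Possible N+1 pattern: check lazy/eager fetching");
        }
    }
}
```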
## Step 2: Deep Dive Into the Trace
We give the AI a specific call ID to analyze. It calls `findTraceAll` and receives the full execution trace: every method call, every SQL query, every parameter value.
"Analyze call ID: deb61f9e-3f2f-11f0-bda4-4f2e85a73b5e"
AI Response:
> Called MCP tool: `findTraceAll`

Parameters:

```json
{
  "callId": "deb61f9e-3f2f-11f0-bda4-4f2e85a73b5e"
}
```
Found a critical N+1 query problem in `StudentRestController.getStudents()`:
Call Statistics:
- Duration: 94.31ms
- Queries: 243 separate SQL calls
- Operation: GET students data
- Status: 200 (Success)
Problem Pattern:

```sql
-- Initial query
SELECT * FROM student;

-- Followed by 242 individual queries like:
SELECT c.*, t.*
FROM enrollment e
JOIN course c ON c.id = e.course_id
LEFT JOIN teacher t ON t.id = c.teacher_id
WHERE e.student_id = ?;
```
Performance Impact:
- 243 database round trips
- Network overhead per query
- Linear degradation with scale
Recommended Fix:

```sql
SELECT s.*, c.*, t.*
FROM student s
LEFT JOIN enrollment e ON s.id = e.student_id
LEFT JOIN course c ON e.course_id = c.id
LEFT JOIN teacher t ON c.teacher_id = t.id;
```
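To make the round-trip math concrete: the trace shows one initial query plus one follow-up query per student, while the joined fetch needs a single round trip. A minimal in-memory simulation of that counting (illustrative only, not the demo app's code):

```java
import java.util.ArrayList;
import java.util.List;

// Simulates database round trips to show where 1 + N = 243 comes from,
// versus a single joined fetch.
public class RoundTripDemo {
    static int roundTrips = 0;

    static List<Integer> fetchStudentIds(int count) {
        roundTrips++; // one query: SELECT * FROM student
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < count; i++) ids.add(i);
        return ids;
    }

    static void fetchEnrollmentsFor(int studentId) {
        roundTrips++; // one query per student: ... WHERE e.student_id = ?
    }

    static void fetchAllJoined() {
        roundTrips++; // one query: the LEFT JOIN above
    }

    public static void main(String[] args) {
        // N+1 pattern: 242 follow-up queries -> 1 + 242 = 243 round trips
        roundTrips = 0;
        for (int id : fetchStudentIds(242)) fetchEnrollmentsFor(id);
        System.out.println("N+1 round trips: " + roundTrips);

        // Joined fetch: everything in a single round trip
        roundTrips = 0;
        fetchAllJoined();
        System.out.println("Joined round trips: " + roundTrips);
    }
}
```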
## Step 3: Fix the Problem
We ask the AI to fix the N+1 problem. It reads the source code, cross-references with the trace data, and implements a targeted fix across three files.
"Fix this n+1 problem with minimal changes @faculty"
AI Response:
🔄 Analyzing codebase structure...
📁 Listed 2 items in faculty/src
📁 Listed 2 items in faculty/src/main
📁 Listed 1 items in faculty/src/main/java
📁 Listed 1 items in faculty/src/main/java/com
📁 Listed 2 items in faculty/src/main/java/com/microservices
📁 Listed 7 items in faculty/src/main/java/com/microservices/faculty
👁️ Reading key files...
- StudentRestController.java
- StudentService.java
- StudentRepository.java
- Student.java
Implemented Changes:

1. **Modified Student entity** (switched the collection from eager to lazy fetching):

```java
// Student.java
@ManyToMany(fetch = FetchType.LAZY) // changed from EAGER
@JoinTable(
    name = "enrollment",
    joinColumns = @JoinColumn(name = "student_id"),
    inverseJoinColumns = @JoinColumn(name = "course_id")
)
private List<Course> courses;
```

2. **Enhanced repository** (the entity graph tells Hibernate to fetch courses and their teachers in the same query):

```java
// StudentRepository.java
@EntityGraph(attributePaths = {"courses", "courses.teacher"})
@Query("SELECT DISTINCT s FROM Student s ORDER BY s.lastName ASC, s.firstName ASC")
List<Student> findAllWithCoursesAndTeachersEntityGraph();
```

3. **Updated service** (delegates to the new repository method):

```java
// StudentService.java
public List<Student> findAll() {
    return studentRepository.findAllWithCoursesAndTeachersEntityGraph();
}
```
## Step 4: Verify Performance
After deploying the fix, we trigger the same endpoint again and give the AI the new trace ID. It compares the results.
"Analyze your fix - here is a trace after your changes d2e4f42a-3f30-11f0-98c8-b9eeeeb12adb"
AI Response:
> Called MCP tool: `findTraceAll`

Parameters:

```json
{
  "callId": "d2e4f42a-3f30-11f0-98c8-b9eeeeb12adb"
}
```
Performance Improvement Results
| Metric | Before | After | Improvement |
|---|---|---|---|
| Response Time | 94.31ms | 13.23ms | ✅ 86% faster |
| SQL Queries | 243 | 1 | ✅ 99.6% reduction |
| Database Load | High | Minimal | ✅ Optimized |
New Optimized Query:

```sql
SELECT DISTINCT s.*, c.*, t.*
FROM student s
LEFT JOIN enrollment e ON s.id = e.student_id
LEFT JOIN course c ON c.id = e.course_id
LEFT JOIN teacher t ON t.id = c.teacher_id
ORDER BY s.last_name, s.first_name;
```
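The percentages in the table follow directly from the raw trace numbers. As a quick sanity check (plain Java, illustration only):

```java
// Reproduces the improvement figures from the two traces:
// response time 94.31ms -> 13.23ms, query count 243 -> 1.
public class ImprovementCheck {
    static long percentReduction(double before, double after) {
        return Math.round((before - after) / before * 100.0);
    }

    public static void main(String[] args) {
        System.out.println(percentReduction(94.31, 13.23) + "% faster"); // 86% faster
        System.out.printf("%.1f%% fewer queries%n",
                (243.0 - 1.0) / 243.0 * 100.0);                          // 99.6% fewer queries
    }
}
```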
## Step 5: Prove Data Integrity
The most important step. We ask the AI to compare the two traces layer-by-layer to prove the fix did not change any business output.
"Compare input and output parameters for each method to understand if new query and all the methods returns the same result"
AI Response:
> Called MCP tool: findTraceAll (Original)
> Called MCP tool: findTraceAll (New)
Layer-by-Layer Comparison

**Controller Layer**
- Input: `[]` (empty array) ✅ IDENTICAL
- Output: 183 student records ✅ IDENTICAL
- HTTP Status: 200 ✅ IDENTICAL
- Response Format: same JSON ✅ IDENTICAL

**Service Layer**
- Method: `findAll()` ✅ IDENTICAL
- Input: `[]` ✅ IDENTICAL
- Output: all student data ✅ IDENTICAL
- Field Count: all present ✅ IDENTICAL

**Repository Layer**
- Query Count: 243 → 1 ✅ OPTIMIZED
- Result Set: complete data ✅ IDENTICAL
- Sort Order: maintained ✅ IDENTICAL
Key Findings
✅ Data Consistency Preserved
- Same record count (183)
- Identical field values
- Preserved relationships
- Maintained sort order
✅ Performance Gains
- 86% faster response
- 99.6% fewer queries
- Optimized memory use
- Better scalability
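The equivalence check above boils down to an ordered, field-by-field comparison of the two response payloads. A minimal sketch of that idea (the `Student` record and method names here are hypothetical, not BitDive's API):

```java
import java.util.List;

// Two responses are equivalent only if they contain the same records,
// with the same field values, in the same order.
public class TraceDiff {
    record Student(long id, String firstName, String lastName, List<String> courses) {}

    static boolean sameOutput(List<Student> before, List<Student> after) {
        // List.equals compares element by element in order, so both the
        // field values and the sort order are verified in one call.
        return before.equals(after);
    }

    public static void main(String[] args) {
        List<Student> before = List.of(new Student(1, "Ada", "Lovelace", List.of("Math")));
        List<Student> after  = List.of(new Student(1, "Ada", "Lovelace", List.of("Math")));
        System.out.println(sameOutput(before, after)); // true
    }
}
```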
## Why This Matters
| | Without Runtime Context | With Runtime Context (MCP) |
|---|---|---|
| Finding the problem | Guess from code patterns | See 243 queries in real trace |
| Choosing the fix | Try approaches, hope for improvement | Target exactly what the trace revealed |
| Measuring the result | Run benchmarks manually | Compare traces: 94ms → 13ms |
| Verifying correctness | Hope tests cover it | Compare output layer-by-layer |
The AI did not guess. It saw 243 queries in the trace, fixed the cause, measured the result (86% faster), and proved that output remained identical — all through real runtime data.
## Give Your AI Agent Runtime Context
BitDive's MCP Server provides real execution traces to AI agents in Cursor, Claude Code, and Windsurf. The agent sees actual queries, parameter values, and API exchanges — turning debugging from guesswork into precision.
Try BitDive Free

## Related
- Runtime Context for AI Agents (MCP) — How BitDive provides traces to AI agents
- BitDive MCP Server: Bringing JVM Runtime Context to AI Agents — Technical deep dive
- Testing Spring Boot Applications with BitDive — Strategy guide