Analyze Services with BitDive - Interactive Demo

This document demonstrates a complete performance analysis and optimization workflow using BitDive monitoring tools integrated with Cursor AI. Follow along with this real conversation to see how to identify and fix performance issues in a microservices architecture.

What You'll Learn

  • How to use BitDive's performance monitoring tools to identify bottlenecks
  • Step-by-step analysis of microservices performance data
  • How to diagnose and fix N+1 query problems
  • Real-time trace analysis and performance optimization
  • Measuring the impact of performance fixes
  • Understanding the underlying MCP tool calls and transaction flow

Demo Scenario

Architecture: Spring Boot microservices with three main services:

  • Faculty Service: Student and course management
  • Report Service: Data reporting and analytics
  • OpenAI Service: AI-powered features

Tools Used:

  • BitDive MCP Tools: System performance monitoring and tracing
  • Cursor AI: Code analysis and automated optimization
  • PostgreSQL: Database with JPA/Hibernate ORM

How Code Explains Itself with Runtime Context

This demo session shows how code analysis transforms when AI has access to complete runtime context. Instead of guessing based on static code, the AI assistant can see exactly how the code behaves in production, making its analysis and fixes precise and reliable.

The Power of Runtime Context

Traditional AI assistants work "blind": they see only static code, with no knowledge of how it actually behaves in production. With BitDive's runtime context integration, the AI can instead analyze real execution data, including complete call traces, actual parameter values, SQL query patterns, performance metrics, and return values. Let's see this in action through a real debugging session.

Interactive Analysis Session

Initial System Analysis

User Query

"Analyze services with BitDive, find potential issues, request additional info if needed"

Analysis Results
> Called MCP tool: getCurrentHeapMapAllSystem

Performance Analysis Results

Our analysis reveals several critical performance issues across your microservices:

Critical Issues Found:

  1. Report Service Response Times

    • Average: 796.49ms (⚠️ Very High)
    • REST calls: 240.43ms average
    • Only 3 web calls but extremely slow
  2. OpenAI Service Latency

    • Average: 678.09ms
    • Likely due to external API delays
    • Optimization needed
  3. Faculty Service SQL Volume

    • 994 SQL calls detected
    • StudentRestController: 974 calls
    • Possible N+1 query issue

Service-by-Service Breakdown:

Faculty Service:

  • Web calls: 22 (avg 19.27ms)
  • Issue: High SQL query volume
  • Focus: getStudents endpoint

Report Service:

  • Only 3 calls but very slow
  • 9 REST calls averaging 240.43ms
  • Major bottleneck identified

OpenAI Service:

  • 3 calls @ 678.09ms average
  • External API dependency
  • Timeout configuration review needed

Positive Findings:

  • ✅ Zero errors across all services
  • ✅ All HTTP responses returned 2xx
  • ✅ Queue operations: 0.06ms average (healthy)

Recommendations:

  1. Investigate Report service REST calls
  2. Review OpenAI timeout settings
  3. Optimize Faculty service SQL queries
  4. Implement performance monitoring
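For recommendation 2, one low-risk starting point is to put explicit bounds on the HTTP client the OpenAI service uses. The sketch below is an assumption, not the demo project's actual code: it supposes the service builds a Spring `RestTemplate` via `RestTemplateBuilder`, and the class, bean name, and timeout values are all hypothetical placeholders:

```java
// Hypothetical configuration class -- the demo's real client setup is not shown.
import java.time.Duration;

import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class OpenAiClientConfig {

    @Bean
    public RestTemplate openAiRestTemplate(RestTemplateBuilder builder) {
        return builder
                .setConnectTimeout(Duration.ofSeconds(2))  // fail fast if the API is unreachable
                .setReadTimeout(Duration.ofSeconds(10))    // cap slow external responses
                .build();
    }
}
```

The values are placeholders to tune against the 678.09ms average observed above; pairing explicit timeouts with retries or a circuit breaker is a common follow-up when the external API is flaky.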

Deep Dive Analysis

User Query

"Analyze call ID: deb61f9e-3f2f-11f0-bda4-4f2e85a73b5e"

Detailed Analysis
> Called MCP tool: findTraceAll
Parameters:

```json
{
  "callId": "deb61f9e-3f2f-11f0-bda4-4f2e85a73b5e"
}
```

Call Trace Analysis

Found critical N+1 query problem in StudentRestController.getStudents():

Call Statistics:

  • Duration: 94.31ms
  • Queries: 243 separate SQL calls
  • Operation: GET students data
  • Status: 200 (Success)

Problem Pattern:

```sql
-- Initial query
SELECT * FROM student

-- Followed by 242 individual queries like:
SELECT c.*, t.*
FROM enrollment e
JOIN course c ON c.id = e.course_id
LEFT JOIN teacher t ON t.id = c.teacher_id
WHERE e.student_id = ?
```

Performance Impact:

  • 243 database round trips
  • Network overhead per query
  • Linear degradation with scale
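The round-trip arithmetic behind this pattern can be sketched in plain Java (illustrative only, not the demo project's code): a counter stands in for the database, incremented once per query, reproducing the 1 + 242 = 243 round trips seen in the trace.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Illustrative sketch of the 1 + N round-trip pattern seen in the trace.
// A counter stands in for the database; no real queries are executed.
public class NPlusOneSketch {
    static int roundTrips = 0;

    // SELECT * FROM student -> one round trip, 242 lazy rows to expand
    static List<Integer> selectAllStudents() {
        roundTrips++;
        return IntStream.range(0, 242).boxed().collect(Collectors.toList());
    }

    // SELECT ... WHERE e.student_id = ? -> one round trip per student
    static void selectEnrollments(int studentId) {
        roundTrips++;
    }

    // Single LEFT JOIN query fetching students, courses, and teachers at once
    static void selectAllWithJoin() {
        roundTrips++;
    }

    public static void main(String[] args) {
        for (int id : selectAllStudents()) {
            selectEnrollments(id);          // triggered by each lazy collection access
        }
        System.out.println("N+1 pattern: " + roundTrips + " round trips");  // 243

        roundTrips = 0;
        selectAllWithJoin();
        System.out.println("JOIN fetch: " + roundTrips + " round trip");    // 1
    }
}
```

The loop is the key failure mode: each iteration silently issues its own query, so cost grows linearly with row count while a single join stays constant.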

Recommended Fix:

```sql
SELECT s.*, c.*, t.*
FROM student s
LEFT JOIN enrollment e ON s.id = e.student_id
LEFT JOIN course c ON e.course_id = c.id
LEFT JOIN teacher t ON c.teacher_id = t.id
```

Optimization Implementation

User Query

"Fix this n+1 problem with minimal changes @faculty"

Solution Implementation
🔄 Analyzing codebase structure...

📁 Listed 2 items in faculty/src
📁 Listed 2 items in faculty/src/main
📁 Listed 1 item in faculty/src/main/java
📁 Listed 1 item in faculty/src/main/java/com
📁 Listed 2 items in faculty/src/main/java/com/microservices
📁 Listed 7 items in faculty/src/main/java/com/microservices/faculty

👁️ Reading key files...
- StudentRestController.java
- StudentService.java
- StudentRepository.java
- Student.java

Implemented Changes:

  1. Modified Student Entity:

```java
// Student.java
@ManyToMany(fetch = FetchType.LAZY) // Changed from EAGER
@JoinTable(
    name = "enrollment",
    joinColumns = @JoinColumn(name = "student_id"),
    inverseJoinColumns = @JoinColumn(name = "course_id")
)
private List<Course> courses;
```

  2. Enhanced Repository:

```java
// StudentRepository.java
@EntityGraph(attributePaths = {"courses", "courses.teacher"})
@Query("SELECT DISTINCT s FROM Student s ORDER BY s.lastName ASC, s.firstName ASC")
List<Student> findAllWithCoursesAndTeachersEntityGraph();
```

  3. Updated Service:

```java
// StudentService.java
public List<Student> findAll() {
    return studentRepository.findAllWithCoursesAndTeachersEntityGraph();
}
```
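Worth noting as an alternative (not what this session applied): Hibernate's batch fetching can blunt an N+1 without touching queries, collapsing the per-student lookups into IN-list batches. A one-line sketch for application.properties, assuming default Spring Boot + Hibernate wiring:

```properties
# Alternative mitigation (not applied in this session): fetch lazy
# associations in batches of 50, turning 1 + N queries into roughly 1 + N/50.
spring.jpa.properties.hibernate.default_batch_fetch_size=50
```

The @EntityGraph fix above is still the better choice here, since one join query beats even batched lookups, but batch fetching is a useful safety net for access patterns that haven't been optimized explicitly.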

Performance Verification

User Query

"Analyze your fix - here is a trace after your changes d2e4f42a-3f30-11f0-98c8-b9eeeeb12adb"

Performance Results
> Called MCP tool: findTraceAll
Parameters:

```json
{
  "callId": "d2e4f42a-3f30-11f0-98c8-b9eeeeb12adb"
}
```

Performance Improvement Results

| Metric        | Before  | After   | Improvement        |
| ------------- | ------- | ------- | ------------------ |
| Response Time | 94.31ms | 13.23ms | ✅ 86% faster       |
| SQL Queries   | 243     | 1       | ✅ 99.6% reduction  |
| Database Load | High    | Minimal | ✅ Optimized        |
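The percentages follow directly from the two trace measurements; a quick check of the arithmetic:

```java
// Reproduces the improvement figures from the two traces above.
public class ImprovementCheck {

    // Percentage improvement, rounded to the nearest whole percent
    static long pctFaster(double beforeMs, double afterMs) {
        return Math.round((beforeMs - afterMs) / beforeMs * 100);
    }

    // Query-count reduction, to one decimal place
    static double pctFewerQueries(int before, int after) {
        return Math.round((before - after) * 1000.0 / before) / 10.0;
    }

    public static void main(String[] args) {
        System.out.println(pctFaster(94.31, 13.23) + "% faster");        // 86% faster
        System.out.println(pctFewerQueries(243, 1) + "% fewer queries"); // 99.6% fewer queries
    }
}
```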

New Optimized Query:

```sql
SELECT DISTINCT s.*, c.*, t.*
FROM student s
LEFT JOIN enrollment e ON s.id = e.student_id
LEFT JOIN course c ON c.id = e.course_id
LEFT JOIN teacher t ON t.id = c.teacher_id
ORDER BY s.last_name, s.first_name
```

Data Integrity Validation

User Query

"Compare input and output parameters for each method to verify that the new query and all the methods return the same results"

Validation Results
> Called MCP tool: findTraceAll (Original)
> Called MCP tool: findTraceAll (New)

Layer-by-Layer Comparison

Controller Layer

  • Input: [] (empty array) ✅ IDENTICAL
  • Output: 183 student records ✅ IDENTICAL
  • HTTP Status: 200 ✅ IDENTICAL
  • Response Format: Same JSON ✅ IDENTICAL

Service Layer

  • Method: findAll() ✅ IDENTICAL
  • Input: [] ✅ IDENTICAL
  • Output: All student data ✅ IDENTICAL
  • Field Count: All present ✅ IDENTICAL

Repository Layer

  • Query Count: 243 → 1 ✅ OPTIMIZED
  • Result Set: Complete data ✅ IDENTICAL
  • Sort Order: Maintained ✅ IDENTICAL

Key Findings

100% Data Consistency

  • Same record count (183)
  • Identical field values
  • Preserved relationships
  • Maintained sort order

Performance Gains

  • 86% faster response
  • 99.6% fewer queries
  • Optimized memory use
  • Better scalability

Why Runtime Context Matters

With traditional static analysis, AI can only guess based on code patterns with no visibility into actual behavior, making the impact of changes uncertain and optimization a matter of trial and error.

With runtime context, the AI can identify problems precisely, optimize based on facts rather than guesses, measure improvements accurately, and verify solutions against real behavior.

Key Benefits Demonstrated

This session demonstrated how runtime context enabled precise analysis that reduced 243 queries to just 1, improved response time from 94.31ms to 13.23ms, and did so without any functional changes. The results were verified with identical data returns, maintained API contracts, and improved scalability.

The combination of BitDive's runtime context and Cursor's AI capabilities transforms debugging from guesswork into precise, data-driven optimization.


This demo uses BitDive MCP Tools with Cursor AI - Making performance optimization precise and reliable.