BitDive MCP Tools Reference

This page is the practical reference for BitDive MCP tools. Use it when you already know you want to work through MCP and need to choose the right tool for the next step.

If you need the high-level setup and workflow first, start with BitDive MCP Integration.


Core Concepts

Most BitDive MCP work revolves around a few identifiers:

  • module_name: a logical application or deployment group
  • service_name: the concrete service inside that module
  • call_id: one recorded execution trace
  • test group / test script: a saved regression suite built from traces

The usual sequence is:

  1. discover the right service
  2. find the right call_id
  3. inspect and compare traces
  4. update the regression baseline only after confirming intended behavior
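The sequence above can be sketched as a series of MCP `tools/call` requests. In this minimal illustration the JSON-RPC envelope follows the MCP specification and the tool names come from this page, but the argument names and values (`module_name`, `call_id`, `before_call_id`, and so on) are assumptions, not the exact BitDive schema:

```python
# The discover -> trace -> compare sequence expressed as MCP "tools/call"
# request bodies. The JSON-RPC envelope is standard MCP; the argument
# names for each tool are illustrative, not the exact BitDive schema.

def tool_call(request_id, name, arguments):
    """Build a JSON-RPC 2.0 request body for an MCP tools/call."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

steps = [
    tool_call(1, "get_heatmap_for_module", {"module_name": "orders"}),
    tool_call(2, "get_last_calls", {"module_name": "orders",
                                    "service_name": "order-service"}),
    tool_call(3, "find_trace_summary", {"call_id": "c-101"}),
    tool_call(4, "compare_traces", {"before_call_id": "c-101",
                                    "after_call_id": "c-205"}),
]

for step in steps:
    print(step["params"]["name"])
```

Each step's output supplies the identifier the next step needs: the heatmap names a service, the recent calls yield a call_id, and the summary confirms which traces are worth comparing.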

1. Discovery and Monitoring

Use these tools first when you do not yet know where to focus.

  • get_heatmap_all_system — use it when you need a system-wide view of hot or failing services; returns performance and activity metrics across modules and services
  • get_heatmap_for_module — use it when you already know the module; returns a filtered heatmap for one module
  • get_heatmap_for_service — use it when you already know the module and service; returns service-level metrics for the target service
  • get_last_calls — use it when you want recent traces for a service; returns recent call_id values and recent executions

Typical use: identify a slow or failing service, then move from the heatmap to a concrete recent trace.
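Moving from the heatmap to a trace usually means ranking services by pain first. A minimal sketch under an assumed response shape (the real heatmap fields may differ):

```python
# Hypothetical shape for a heatmap response; BitDive's real field names
# may differ. The goal is only to show the narrowing step: rank services,
# then take the worst one forward to get_last_calls.
heatmap = [
    {"service_name": "order-service",   "avg_ms": 120, "error_rate": 0.00},
    {"service_name": "billing-service", "avg_ms": 950, "error_rate": 0.04},
    {"service_name": "user-service",    "avg_ms": 80,  "error_rate": 0.01},
]

def hottest(rows):
    """Rank first by error rate, then by average latency."""
    return max(rows, key=lambda r: (r["error_rate"], r["avg_ms"]))

target = hottest(heatmap)["service_name"]
print(target)  # billing-service
```

Whatever ranking you use, the result of this step is a single service_name to pass to get_last_calls.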


2. Deep Trace Analysis

These tools help you understand one execution in detail.

  • find_trace_summary — use it when you want the fastest readable view of a trace; returns the execution tree with timings, SQL, downstream calls, return values, and errors
  • find_trace_for_method — use it when you need to zoom into one method inside a trace; returns method-level execution details within a given call_id
  • find_trace_between_time — use it when you need historical traces from a precise time window; returns matching traces for a class and method between two timestamps
  • get_reproduction_command — use it when you want to replay the exact request locally; returns a ready-to-run curl or PowerShell command

Best starting point: find_trace_summary. It gives the most value without making you parse raw payloads.
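Reading the summary mostly means walking its execution tree. This sketch assumes an illustrative tree shape (the field names are not the exact BitDive response schema) and shows the kind of pass you make over it, here collecting where errors occurred:

```python
# Illustrative execution-tree shape for a trace summary; the field names
# ("method", "sql", "error", "children") are assumptions, not the exact
# BitDive response schema.
trace = {
    "method": "OrderController.create",
    "duration_ms": 420,
    "sql": ["INSERT INTO orders ..."],
    "error": None,
    "children": [
        {"method": "InventoryClient.reserve", "duration_ms": 300,
         "sql": [], "error": "timeout", "children": []},
    ],
}

def collect_errors(node, path=""):
    """Depth-first walk that returns (method path, error) pairs."""
    here = f"{path}/{node['method']}"
    found = [(here, node["error"])] if node["error"] else []
    for child in node["children"]:
        found += collect_errors(child, here)
    return found

print(collect_errors(trace))
# [('/OrderController.create/InventoryClient.reserve', 'timeout')]
```

The same walk works for timings or SQL counts: the summary already carries the data, so you rarely need the raw payloads.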


3. Comparison and Verification

These tools answer the question: "What changed?"

  • compare_traces — use it when you have one BEFORE trace and one AFTER trace; returns a behavioral diff: timing, SQL count, child calls, errors, contract changes
  • compare_trace_evolution — use it when you want to compare multiple versions chronologically; returns the evolution across several traces ordered oldest to newest

Use comparison after a bug fix, refactor, performance optimization, dependency upgrade, or API change.

Good questions to ask:

  • Did the intended error disappear?
  • Did query counts improve or regress?
  • Did the response contract drift?
  • Did the service start calling a new downstream dependency?
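The questions above can be expressed as a small behavioral diff. This is a sketch of the idea, not BitDive's compare_traces output: the summary fields (`error`, `sql_count`, `response_keys`, `downstream`) are illustrative assumptions:

```python
# Minimal behavioral diff between two trace summaries, mirroring the four
# questions above. The field names are illustrative, not BitDive's schema.
def behavior_diff(before, after):
    return {
        "error_fixed": before["error"] is not None and after["error"] is None,
        "sql_delta": after["sql_count"] - before["sql_count"],
        "contract_drift": sorted(before["response_keys"])
                          != sorted(after["response_keys"]),
        "new_dependencies": sorted(set(after["downstream"])
                                   - set(before["downstream"])),
    }

before = {"error": "NullPointerException", "sql_count": 12,
          "response_keys": ["id", "status"], "downstream": ["billing"]}
after  = {"error": None, "sql_count": 4,
          "response_keys": ["id", "status"], "downstream": ["billing", "audit"]}

print(behavior_diff(before, after))
```

Here the fix worked (error gone, eight fewer queries, contract stable), but the new "audit" dependency is exactly the kind of unplanned change the diff exists to surface.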

4. Regression Management

These tools turn validated behavior into deterministic replay protection.

  • auto_generate_tests_for_service — use it when you want to create a fresh suite for a service; returns a new test group built from the latest successful calls
  • get_all_test_scripts — use it when you need to inventory existing suites; returns all registered test groups
  • get_script_data — use it when you need to inspect one test group in detail; returns the structure and entries of a single test group
  • get_test_failure_details — use it when you need to understand why a suite failed; returns a failure summary for each failing method
  • update_existing_test_group — use it when many expected behaviors changed; performs a full refresh of a test group from the latest successful traces
  • update_failed_tests_in_group — use it when only the failed methods need refreshing; performs a targeted refresh for failed entries only
  • replace_test_with_latest_trace — use it when one specific method should be replaced surgically; replaces one test entry with the latest good trace

Important rule: update baselines only after trace comparison confirms the new behavior is correct.
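The rule above can be made mechanical with a small guard: allow a baseline update only when every observed change was anticipated. A sketch, with illustrative diff fields rather than BitDive's actual output:

```python
# Guard for the rule above: refresh the baseline only when every observed
# change in the trace diff was expected. The diff field names are
# illustrative assumptions.
def may_update_baseline(diff, expected_changes):
    """Allow a baseline update only if each changed field was anticipated."""
    changed = {key for key, value in diff.items() if value}
    return changed <= set(expected_changes)

diff = {"error_fixed": True, "sql_delta": -8, "contract_drift": False}

print(may_update_baseline(diff, ["error_fixed", "sql_delta"]))  # True
print(may_update_baseline(diff, ["error_fixed"]))               # False
```

In the second call the query-count change was not on the expected list, so the baseline stays untouched until someone explains it.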


Typical Workflows

Diagnose a Bug

  1. get_heatmap_for_service
  2. get_last_calls
  3. find_trace_summary
  4. find_trace_for_method
  5. get_reproduction_command
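Steps 2 and 3 of this recipe chain naturally: pick a failing call_id out of the recent calls, then feed it to find_trace_summary. A sketch in which the get_last_calls response shape is an assumption:

```python
# Steps 2 -> 3 of the diagnose recipe: pick a failing call_id from the
# recent calls, then build the follow-up find_trace_summary request.
# The response shape shown for get_last_calls is an assumption.
recent_calls = [
    {"call_id": "c-101", "status": "SUCCESS"},
    {"call_id": "c-102", "status": "ERROR"},
    {"call_id": "c-103", "status": "SUCCESS"},
]

failing = next(c["call_id"] for c in recent_calls if c["status"] == "ERROR")

next_request = {
    "method": "tools/call",
    "params": {"name": "find_trace_summary",
               "arguments": {"call_id": failing}},
}

print(failing)  # c-102
```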

Verify a Code Change

  1. get_last_calls to retrieve the BEFORE and AFTER call_id values
  2. find_trace_summary for each side if you need context
  3. compare_traces
  4. rerun the regression suite (for example, mvn test)
  5. update the regression baseline only if the diff is expected

Refresh a Failing Regression Suite

  1. get_all_test_scripts
  2. get_test_failure_details
  3. update_failed_tests_in_group or replace_test_with_latest_trace
  4. rerun the suite
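Step 3 is a choice between three refresh tools. The tool names below come from this page; the selection heuristic and the failure-detail shape are illustrative, not a BitDive rule:

```python
# Choosing a refresh strategy from failure details (step 3 above).
# The tool names are BitDive's; the thresholds are an illustrative
# heuristic, not an official rule.
def refresh_strategy(failed_methods, total_methods):
    """Replace one entry surgically, refresh failures, or rebuild the group."""
    if len(failed_methods) == 1:
        return ("replace_test_with_latest_trace", failed_methods[0])
    if len(failed_methods) < total_methods:
        return ("update_failed_tests_in_group", failed_methods)
    return ("update_existing_test_group", None)

print(refresh_strategy(["OrderService.create"], 10))
print(refresh_strategy(["a", "b", "c"], 10))
```

Whatever the heuristic, the point stands: reach for the narrowest tool that fixes the suite, and rebuild the whole group only when everything changed.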

Build a New Service-Level Baseline

  1. get_last_calls
  2. validate the recent calls you want to use
  3. auto_generate_tests_for_service
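Step 2 of this recipe is a filter: keep only calls that are both successful and recent enough to represent current behavior. A sketch in which the call shape and the 24-hour cutoff are illustrative assumptions:

```python
# Step 2 above as a filter: keep only successful, recent calls as
# candidates for a new baseline. The call shape and the 24-hour cutoff
# are illustrative assumptions.
from datetime import datetime, timedelta, timezone

now = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)
calls = [
    {"call_id": "c-1", "status": "SUCCESS",
     "started_at": datetime(2024, 6, 1, 9, 0, tzinfo=timezone.utc)},
    {"call_id": "c-2", "status": "ERROR",
     "started_at": datetime(2024, 6, 1, 10, 0, tzinfo=timezone.utc)},
    {"call_id": "c-3", "status": "SUCCESS",
     "started_at": datetime(2024, 5, 20, 9, 0, tzinfo=timezone.utc)},
]

def baseline_candidates(calls, now, max_age=timedelta(hours=24)):
    """Only successful, recent calls should seed a new baseline."""
    return [c["call_id"] for c in calls
            if c["status"] == "SUCCESS" and now - c["started_at"] <= max_age]

print(baseline_candidates(calls, now))  # ['c-1']
```

Failed calls and stale successes are both excluded, which keeps auto_generate_tests_for_service from baking an error or obsolete behavior into the new suite.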

Practical Guidance

Start Broad, Then Narrow

Do not begin with method-level tools if you are not yet sure which service or path matters. Heatmap first, trace second, method third.

Prefer Summary Over Raw Data

In most debugging sessions, find_trace_summary is enough. Drop to method-level tools only when you need exact local detail for one class or method.

Treat call_id as Evidence

Once you identify a useful call_id, keep it. It becomes the anchor for explanation, reproduction, and comparison.

Comparison Is the Proof Step

A successful HTTP response is not enough. Use compare_traces to confirm that internal behavior also changed only as intended.

Replay Tests and Trace Capture Are Different

Refreshing a JUnit suite does not create a new runtime trace. To produce an AFTER trace, trigger a real request against the updated service.