# BitDive MCP Tools Reference
This page is the practical reference for BitDive MCP tools. Use it when you already know you want to work through MCP and need to choose the right tool for the next step.
If you need the high-level setup and workflow first, start with BitDive MCP Integration.
## Core Concepts
Most BitDive MCP work revolves around a few identifiers:
- `module_name`: a logical application or deployment group
- `service_name`: the concrete service inside that module
- `call_id`: one recorded execution trace
- test group / test script: a saved regression suite built from traces
The usual sequence is:
- discover the right service
- find the right `call_id`
- inspect and compare traces
- update the regression baseline only after confirming intended behavior
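The usual sequence can be sketched as pseudo-calls. The `call_tool` helper and all response field names below are illustrative assumptions, not the BitDive API; a real MCP client sends these tool names over the MCP protocol.

```python
def call_tool(name: str, args: dict) -> dict:
    """Stand-in for an MCP tool invocation; returns canned example data."""
    canned = {
        "get_heatmap_for_module": {
            "services": [
                {"service_name": "orders", "error_rate": 0.12},
                {"service_name": "catalog", "error_rate": 0.01},
            ]
        },
        "get_last_calls": {"calls": [{"call_id": "c-1042"}]},
        "find_trace_summary": {"call_id": "c-1042", "errors": ["TimeoutException"]},
    }
    return canned[name]

# 1. Discover the right service inside a known module.
heatmap = call_tool("get_heatmap_for_module", {"module_name": "shop"})
service = max(heatmap["services"], key=lambda s: s["error_rate"])["service_name"]

# 2. Find a recent call_id for that service.
call_id = call_tool("get_last_calls", {"service_name": service})["calls"][0]["call_id"]

# 3. Inspect the trace before touching any regression baseline.
summary = call_tool("find_trace_summary", {"call_id": call_id})
```

The point is the identifier flow: module narrows to service, service yields a `call_id`, and the `call_id` anchors everything after.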
## 1. Discovery and Monitoring
Use these tools first when you do not yet know where to focus.
| Tool | Use It When | What It Returns |
|---|---|---|
| `get_heatmap_all_system` | You need a system-wide view of hot or failing services | Performance and activity metrics across modules and services |
| `get_heatmap_for_module` | You already know the module | Filtered heatmap for one module |
| `get_heatmap_for_service` | You already know the module and service | Service-level metrics for the target service |
| `get_last_calls` | You want recent traces for a service | Recent `call_id` values and recent executions |
Typical use: identify a slow or failing service, then move from the heatmap to a concrete recent trace.
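That narrowing step can be sketched as ranking heatmap entries. The field names (`service_name`, `p95_ms`, `error_rate`) are illustrative assumptions about what the heatmap metrics contain.

```python
heatmap = [
    {"service_name": "payments", "p95_ms": 180, "error_rate": 0.00},
    {"service_name": "orders",   "p95_ms": 950, "error_rate": 0.07},
    {"service_name": "catalog",  "p95_ms": 210, "error_rate": 0.01},
]

# Rank by error rate first, then latency, and take the worst offender.
hot = sorted(heatmap, key=lambda s: (s["error_rate"], s["p95_ms"]), reverse=True)
target = hot[0]["service_name"]  # feed this service into get_last_calls next
```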
## 2. Deep Trace Analysis
These tools help you understand one execution in detail.
| Tool | Use It When | What It Returns |
|---|---|---|
| `find_trace_summary` | You want the fastest readable view of a trace | Execution tree with timings, SQL, downstream calls, return values, and errors |
| `find_trace_for_method` | You need to zoom into one method inside a trace | Method-level execution details within a given `call_id` |
| `find_trace_between_time` | You need historical traces from a precise time window | Matching traces for a class and method between two timestamps |
| `get_reproduction_command` | You want to replay the exact request locally | A ready-to-run curl or PowerShell command |
Best starting point: `find_trace_summary`. It gives the most value without making you parse raw payloads.
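A sketch of what you typically do with a trace summary: walk the execution tree to find the dominant cost and count the SQL. The nested shape below (`method`, `duration_ms`, `sql`, `children`) is an assumption for illustration, not the documented response format.

```python
trace = {
    "method": "OrderController.place", "duration_ms": 480, "sql": [],
    "children": [
        {"method": "OrderService.create", "duration_ms": 460,
         "sql": ["INSERT INTO orders ..."],
         "children": [
             {"method": "StockClient.reserve", "duration_ms": 390,
              "sql": [], "children": []},
         ]},
    ],
}

def slowest_path(node: dict) -> str:
    """Descend into whichever child dominates the parent's duration."""
    if not node["children"]:
        return node["method"]
    return slowest_path(max(node["children"], key=lambda c: c["duration_ms"]))

def sql_count(node: dict) -> int:
    """Total SQL statements recorded anywhere in the subtree."""
    return len(node["sql"]) + sum(sql_count(c) for c in node["children"])
```

Once this kind of walk points at one method, `find_trace_for_method` is the tool for the zoomed-in view.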
## 3. Comparison and Verification
These tools answer the question: "What changed?"
| Tool | Use It When | What It Returns |
|---|---|---|
| `compare_traces` | You have one BEFORE trace and one AFTER trace | Behavioral diff: timing, SQL count, child calls, errors, contract changes |
| `compare_trace_evolution` | You want to compare multiple versions chronologically | Evolution across several traces ordered oldest to newest |
Use comparison after a bug fix, refactor, performance optimization, dependency upgrade, or API change.
Good questions to ask:
- Did the intended error disappear?
- Did query counts improve or regress?
- Did the response contract drift?
- Did the service start calling a new downstream dependency?
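The questions above can be sketched as a manual diff of two trace summaries. `compare_traces` produces this kind of verdict for you; the field names here are illustrative assumptions.

```python
def behavior_diff(before: dict, after: dict) -> dict:
    """Answer the four review questions for a BEFORE/AFTER pair."""
    return {
        "error_fixed": bool(before["errors"]) and not after["errors"],
        "sql_delta": after["sql_count"] - before["sql_count"],
        "contract_drift": sorted(set(before["response_keys"]) ^ set(after["response_keys"])),
        "new_downstream": sorted(set(after["downstream"]) - set(before["downstream"])),
    }

before = {"errors": ["TimeoutException"], "sql_count": 12,
          "response_keys": ["id", "status"], "downstream": ["stock"]}
after = {"errors": [], "sql_count": 3,
         "response_keys": ["id", "status", "eta"], "downstream": ["stock", "audit"]}

verdict = behavior_diff(before, after)
```

In this example the error is gone and nine queries disappeared, but the response contract grew a field and a new downstream dependency appeared; the last two still need a human to confirm they were intended.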
## 4. Regression Management
These tools turn validated behavior into deterministic replay protection.
| Tool | Use It When | What It Returns |
|---|---|---|
| `auto_generate_tests_for_service` | You want to create a fresh suite for a service | A new test group built from the latest successful calls |
| `get_all_test_scripts` | You need to inventory existing suites | All registered test groups |
| `get_script_data` | You need to inspect one test group in detail | The structure and entries of a single test group |
| `get_test_failure_details` | You need to understand why a suite failed | Failure summary for each failing method |
| `update_existing_test_group` | Many expected behaviors changed | Full refresh of a test group from the latest successful traces |
| `update_failed_tests_in_group` | Only the failed methods need refreshing | Targeted refresh for failed entries only |
| `replace_test_with_latest_trace` | One specific method should be replaced surgically | Replacement of one test entry with the latest good trace |
Important rule: update baselines only after trace comparison confirms the new behavior is correct.
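That rule can be sketched as a guard: allow a baseline update only when every observed change was expected. The change categories below are illustrative assumptions mirroring what a trace comparison would report.

```python
def may_update_baseline(changes: dict, expected: set) -> bool:
    """True only if the set of observed changes is a subset of expected ones."""
    observed = {name for name, changed in changes.items() if changed}
    return observed <= expected

# Timing and SQL count were expected to change; contract drift was not,
# so this update should be refused and investigated first.
changes = {"timing": True, "sql_count": True, "contract": True}
ok = may_update_baseline(changes, expected={"timing", "sql_count"})
```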
## Recommended Tool Sequences
### Diagnose a Bug
1. `get_heatmap_for_service`
2. `get_last_calls`
3. `find_trace_summary`
4. `find_trace_for_method`
5. `get_reproduction_command`
### Verify a Code Change
1. `get_last_calls` to retrieve the BEFORE and AFTER `call_id`
2. `find_trace_summary` for each side if you need context
3. `compare_traces`
4. `mvn test`
5. update the regression baseline only if the diff is expected
### Refresh a Failing Regression Suite
1. `get_all_test_scripts`
2. `get_test_failure_details`
3. `update_failed_tests_in_group` or `replace_test_with_latest_trace`
4. rerun the suite
### Build a New Service-Level Baseline
1. `get_last_calls`
2. validate the recent calls you want to use
3. `auto_generate_tests_for_service`
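The validation step can be sketched as filtering recent calls down to successes before generating the suite. The `status` and `call_id` fields are illustrative assumptions about what the recent-calls data contains.

```python
recent = [
    {"call_id": "c-1", "status": 200},
    {"call_id": "c-2", "status": 500},
    {"call_id": "c-3", "status": 200},
]

# Only successful executions should become baseline candidates;
# baking a failing call into the baseline would lock in the bug.
candidates = [c["call_id"] for c in recent if c["status"] == 200]
```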
## Practical Guidance
### Start Broad, Then Narrow
Do not begin with method-level tools if you are not yet sure which service or path matters. Heatmap first, trace second, method third.
### Prefer Summary Over Raw Data
In most debugging sessions, `find_trace_summary` is enough. Drop to method-level tools only when you need exact local detail for one class or method.
### Treat `call_id` as Evidence
Once you identify a useful `call_id`, keep it. It becomes the anchor for explanation, reproduction, and comparison.
### Comparison Is the Proof Step
A successful HTTP response is not enough. Use `compare_traces` to confirm that internal behavior also changed only as intended.
### Replay Tests and Trace Capture Are Different
Refreshing a JUnit suite does not create a new runtime trace. To produce an AFTER trace, trigger a real request against the updated service.