The BitDive Developer Workflow
BitDive is most valuable when you use it as part of a repeatable change cycle, not as a dashboard you open only after something breaks.
The workflow below is the recommended default for feature work, bug fixes, refactors, and performance improvements in services monitored by BitDive.
The 4-Stage Change Cycle
Stage 1: Prep and Behavioral Baseline
Before changing code, establish what the system does today.
1. Identify the target path
Start with the real target:
- the module and service you are about to touch
- the endpoint, message flow, or scheduled job involved
- the methods, SQL, or downstream calls you expect to affect
Use BitDive's HeatMap and recent calls views to confirm that you are looking at the right execution path, not a guessed one.
2. Run the current test suite
Capture the current baseline before your change:
mvn test
Record:
- which tests are already green
- which tests are already failing
This matters later. Existing failures are not evidence that your change introduced a regression.
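If you want a durable record of this baseline, one lightweight option (assuming a POSIX shell; the log file name is just a convention used in this guide) is to capture the test output to a file you can diff in Stage 3:

mvn test 2>&1 | tee before-tests.log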
3. Capture or locate the BEFORE trace
Trigger the current behavior or fetch a recent matching call from BitDive. Then inspect the trace summary:
- execution tree
- SQL queries and counts
- downstream HTTP or Kafka calls
- timings and errors
The BEFORE trace becomes your behavioral baseline.
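If the flow is a plain HTTP endpoint, triggering it can be a single command. The URL and payload file below are hypothetical placeholders for your own service, not BitDive features:

curl -X POST http://localhost:8080/api/orders -H "Content-Type: application/json" -d @order-payload.json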
Output of Stage 1
By the end of this stage, you should know:
- what the system currently does
- which failures already existed
- what evidence will prove your change worked
Stage 2: Precise Code Change
Make the smallest change that addresses the observed problem.
BitDive helps here because the trace narrows the scope:
- you know which method chain is involved
- you know which SQL or external interaction is wrong
- you know the real inputs that trigger the behavior
This is the moment to avoid speculative refactors. If the trace isolates the issue, prefer a focused change over a broad rewrite.
If you need local reproduction, use the same request shape and payload that produced the original trace. Reproducing the real case is much more reliable than inventing a synthetic scenario.
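For example, a throwaway JUnit test can replay the exact request from the trace. This is a sketch, not a BitDive API: it assumes Java 17+, JUnit 5, and a service running locally on port 8080, and the endpoint and payload are placeholders standing in for values copied from the BEFORE trace.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ReproduceBeforeTraceTest {

    @Test
    void replaysTheExactRequestFromTheBeforeTrace() throws Exception {
        // Payload copied verbatim from the BEFORE trace, not invented by hand.
        String payload = """
                {"orderId": 4211, "items": [{"sku": "A-100", "qty": 3}]}""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/orders")) // placeholder endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The status code is only a smoke signal; the trace diff in Stage 3 is the real check.
        assertEquals(200, response.statusCode());
    }
}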
Output of Stage 2
You now have:
- a concrete code change
- a clear expectation of what should change in runtime behavior
- a clear expectation of what should stay identical
Stage 3: Verification and Reflection
After the code change, verify behavior with runtime evidence, not just an HTTP status code.
1. Capture the AFTER trace
Run the updated service and trigger the same business flow again. Wait for BitDive to record the new execution.
2. Compare BEFORE and AFTER
Use trace comparison to answer:
- Did the intended error disappear?
- Did SQL counts improve or unexpectedly increase?
- Did the request or response contract change?
- Did new downstream calls appear?
- Did the call path shift in places you did not intend?
A correct result does not automatically mean correct behavior. The diff is the proof.
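If your BitDive setup lets you export per-query counts from each trace (check what your version supports; the map-of-counts shape here is an assumption, not BitDive's actual export format), even a tiny helper makes the comparison mechanical:

import java.util.Map;

class TraceDiff {

    // Print every query whose execution count changed between the two traces.
    static void printSqlCountDiff(Map<String, Long> before, Map<String, Long> after) {
        after.forEach((query, now) -> {
            long was = before.getOrDefault(query, 0L);
            if (now != was) {
                System.out.printf("%s: %d -> %d%n", query, was, now);
            }
        });
        // Queries that disappeared entirely after the change.
        before.forEach((query, was) -> {
            if (!after.containsKey(query)) {
                System.out.printf("%s: %d -> gone%n", query, was);
            }
        });
    }
}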
3. Run tests after the change
mvn test
Then classify every failure:
- Already failing before the change: pre-existing noise, not caused by your work.
- Changed method, expected difference: likely intentional. Update the baseline only after confirming the new runtime behavior is correct.
- Unchanged area, new failure: unexpected regression. Fix the code. Do not bless the test.
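If you captured the Stage 1 run with tee as sketched earlier, rerunning with the same trick makes the first classification pass mechanical:

mvn test 2>&1 | tee after-tests.log
diff before-tests.log after-tests.log

Anything in the diff that is not explained by your change deserves investigation before you touch any baseline.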
Output of Stage 3
You should now be able to explain:
- what changed
- why it changed
- why the rest of the system still behaves as expected
Stage 4: Regression Management
Once the behavior is confirmed, lock it into the regression suite.
Use the smallest update that matches the change:
- replace one method entry if the change was surgical
- refresh only failed methods if a few known expectations changed
- update the whole test group only when a broader baseline legitimately moved
The rule is strict:
- update tests for intended, verified behavior changes
- never update tests to hide an unexplained regression
Final verification
Run the suite again:
mvn test
Your final state should match the Stage 1 baseline except for the intentional changes you just verified and refreshed.
Quick Playbooks
Bug Fix
Use the workflow exactly as written:
- capture the failing BEFORE trace
- make one focused fix
- compare traces after the fix
- update only the methods whose expected behavior changed
Performance Optimization
Pay special attention to:
- SQL count
- repeated query patterns
- sequential downstream calls
- latency blocks inside the trace tree
For N+1 fixes or batching work, trace comparison is often more informative than the final endpoint response.
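As a concrete illustration of what the trace usually reveals, here is the classic N+1 shape and its batched replacement in plain JDBC; the table and column names are invented for the example:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;
import java.util.stream.Collectors;

class OrderItemLoader {

    // BEFORE: one SELECT per order id, visible in the trace as N identical queries.
    void loadOneByOne(Connection conn, List<Long> orderIds) throws SQLException {
        for (Long id : orderIds) {
            try (PreparedStatement ps =
                    conn.prepareStatement("SELECT * FROM order_items WHERE order_id = ?")) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    // map rows
                }
            }
        }
    }

    // AFTER: a single SELECT with an IN list; the trace should now show one query.
    void loadBatched(Connection conn, List<Long> orderIds) throws SQLException {
        String placeholders = orderIds.stream()
                .map(id -> "?")
                .collect(Collectors.joining(", "));
        String sql = "SELECT * FROM order_items WHERE order_id IN (" + placeholders + ")";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 0; i < orderIds.size(); i++) {
                ps.setLong(i + 1, orderIds.get(i));
            }
            try (ResultSet rs = ps.executeQuery()) {
                // map rows
            }
        }
    }
}

The drop from N queries to one in the BEFORE/AFTER SQL counts is the evidence that the batching worked.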
API Regression Check
When changing DTOs, serializers, error handling, or client code, compare traces specifically for:
- request body shape
- response body shape
- headers
- status codes
- call ordering between services
BitDive's runtime contract view is often the fastest way to catch changes that static reviews miss.
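One way to pin the response shape once it has been verified is a strict JSON comparison in a test. This sketch assumes the JSONAssert library (org.skyscreamer:jsonassert) and a helper that calls the live endpoint; both are illustrative choices, not BitDive features, and the body shown is a placeholder for one copied from the verified AFTER trace.

import org.junit.jupiter.api.Test;
import org.skyscreamer.jsonassert.JSONAssert;

class OrderContractTest {

    @Test
    void responseShapeMatchesTheVerifiedAfterTrace() throws Exception {
        // Expected body copied from the verified AFTER trace.
        String expected = """
                {"orderId": 4211, "status": "CONFIRMED", "items": [{"sku": "A-100", "qty": 3}]}""";

        // fetchOrderResponse is a hypothetical helper; wire it to your own test client.
        String actual = fetchOrderResponse(4211);

        // Strict mode fails on missing fields, unexpected fields, and reordered array elements.
        JSONAssert.assertEquals(expected, actual, true);
    }

    private String fetchOrderResponse(long orderId) {
        throw new UnsupportedOperationException("replace with a real call to the service");
    }
}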
Related Guides
- How Trace-Based Testing Works
- BitDive MCP Integration
- Autonomous Quality Loop for AI Agents
- Inter-Service API Verification
- Regression Management in BitDive
The workflow is intentionally simple: establish the baseline, change with evidence, verify with traces, then refresh only the expected regression baseline.