
Fix manual failover with CLIENT PAUSE/UNPAUSE #389

Open
Paragrf wants to merge 4 commits into apache:unstable from Paragrf:pause

Conversation

Contributor

@Paragrf Paragrf commented Apr 23, 2026

Motivation

Implement Controller modifications to ensure master-replica data consistency during failover, aligning with the server-side changes in apache/kvrocks#3377.

Solution

Step 1 (Pause): Send CLIENT PAUSE WRITE from the controller to the current master.

Step 2 (Wait): Monitor the master-replica sequence gap until it hits zero, ensuring no data loss.

Step 3 (Metadata): Update the global topology metadata for the switchover.

Step 4 (Switch & Unpause): Promote the target and demote the old master; then explicitly call CLIENT UNPAUSE on the old master to restore its write availability.

Step 5 (Replicate): Reconfigure all other followers to sync from the new master.

Configuration Options

To prevent excessive blocking during periods of high write traffic, a maximum pause timeout has been introduced; the failover fails if synchronization does not complete in time. The following parameters are added to the Controller's failover configuration:

  • "force_on_timeout": false,

  • "sync_timeout_ms": 100,

  • "pause_timeout_ms": 500

Related Issues

Fixes #384

@Paragrf Paragrf changed the title fix(cli): fix manual failover with CLIENT PAUSE/UNPAUSE Fix manual failover with CLIENT PAUSE/UNPAUSE Apr 23, 2026
Contributor Author

Paragrf commented Apr 23, 2026

In actual testing with the Controller and Kvrocks deployed in the same IDC, the system sustained a single-node write QPS of 10k with 20 MB/s throughput. The write-stop duration (stall time) remained consistently under 10 ms.

@git-hulk git-hulk self-requested a review April 24, 2026 08:42

codecov-commenter commented Apr 28, 2026

Codecov Report

❌ Patch coverage is 43.30357% with 127 lines in your changes missing coverage. Please review.
✅ Project coverage is 49.67%. Comparing base (6c56470) to head (8723839).
⚠️ Report is 106 commits behind head on unstable.

Files with missing lines Patch % Lines
store/cluster_node.go 10.00% 54 Missing ⚠️
store/cluster_shard.go 55.81% 32 Missing and 6 partials ⚠️
server/api/shard.go 45.94% 14 Missing and 6 partials ⚠️
controller/cluster.go 36.36% 5 Missing and 2 partials ⚠️
store/cluster_mock_node.go 80.95% 4 Missing ⚠️
server/api/handler.go 0.00% 2 Missing ⚠️
server/route.go 0.00% 2 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff              @@
##           unstable     #389      +/-   ##
============================================
+ Coverage     43.38%   49.67%   +6.28%     
============================================
  Files            37       45       +8     
  Lines          2971     4103    +1132     
============================================
+ Hits           1289     2038     +749     
- Misses         1544     1839     +295     
- Partials        138      226      +88     
Flag Coverage Δ
unittests 49.67% <43.30%> (+6.28%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.


@jihuayu jihuayu self-requested a review May 12, 2026 09:19
Member

@jihuayu jihuayu left a comment


Hi @Paragrf. Thanks for your contribution. I think the changes for manual failover are necessary, but why do we need this for automatic failover?
Are there any other cases that do it this way?

Comment thread config/config.go
Comment thread store/cluster_shard.go
Contributor Author

Paragrf commented May 12, 2026

@jihuayu All issues are fixed. Ready for another round of review. Thanks!

Contributor Author

Paragrf commented May 12, 2026

@jihuayu @git-hulk By the way, I initially planned to use the slave lag stats for the sync check, but found they add up to a 1s delay because the master only updates them once per second. To get real-time offsets, I had to query both the master and the slaves, which unfortunately complicates the logic. Why does Kvrocks cap the slave-lag update frequency at 1s rather than providing real-time updates? Is it a performance trade-off to avoid overhead?
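
The real-time check described here, querying the master and each replica directly instead of relying on the once-per-second lag stats, amounts to parsing the replication offsets out of each node's INFO output. A sketch, assuming Redis-style INFO replication field names (whether Kvrocks exposes exactly these fields is an assumption):

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// replOffset extracts a numeric field from an INFO replication payload.
// The field names follow the Redis-style INFO format; this helper is
// illustrative, not part of the controller.
func replOffset(info, field string) (int64, bool) {
	sc := bufio.NewScanner(strings.NewReader(info))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if v, ok := strings.CutPrefix(line, field+":"); ok {
			n, err := strconv.ParseInt(strings.TrimSpace(v), 10, 64)
			return n, err == nil
		}
	}
	return 0, false
}

func main() {
	// Sample payloads standing in for one INFO call per node.
	masterInfo := "# Replication\r\nrole:master\r\nmaster_repl_offset:1500\r\n"
	replicaInfo := "# Replication\r\nrole:slave\r\nslave_repl_offset:1400\r\n"

	m, _ := replOffset(masterInfo, "master_repl_offset")
	s, _ := replOffset(replicaInfo, "slave_repl_offset")
	fmt.Println("gap:", m-s)
	// prints: gap: 100
}
```

This is the source of the extra complexity mentioned above: one round trip per node, instead of a single read of the master's cached lag stats.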



Development

Successfully merging this pull request may close these issues.

Implement a write-stop during failover to ensure data consistency during a planned primary-secondary switch
