TL;DR: The XDCValidator contract (0x88) was deployed with Solidity 0.4.21 and has never received a major upgrade. The ecosystem has matured significantly since then. Liquid staking protocols, institutional staking services, and DeFi composability all demand a more flexible validator lifecycle. I'm proposing we use a contract upgrade as an opportunity to modernize the staking/unstaking mechanism, fix accumulated state inefficiencies, and unlock new use cases. Below are three concrete scenarios for community discussion.
Why Now?
The current XDCValidator contract has served the network well, but several realities make an upgrade worth discussing:
Compiler vintage. Solidity 0.4.21 predates significant compiler improvements (SafeMath built-ins, custom errors, overflow checks, ABI encoder v2). Newer Solidity versions offer better gas efficiency, security defaults, and developer ergonomics.
State hygiene. Over the network's lifetime the contract has accumulated storage artifacts (zeroed-out array entries, stale mappings) that waste gas and complicate on-chain queries. A migration is an opportunity for a clean slate.
The "30-day" wall that's actually 35-42 days. The contract uses block-number-based delays set at genesis: `candidateWithdrawDelay = 1,296,000` blocks for masternode owners resigning, and `voterWithdrawDelay = 432,000` blocks for voters unvoting. These are hardcoded constructor parameters with no governance mechanism to adjust them.
The 2-second block time is a theoretical target, not a guarantee. Real-world data from xdcscan tells a different story:
| Period | Avg block time | Effective owner withdrawal delay |
|---|---|---|
| Theoretical | 2.00s | 30.0 days |
| Last 30 days (Mar 2026) | 2.33s | ~35 days |
| Last 365 days | 2.33s | ~35 days |
| Last 90 days (includes Jan 2026 spikes) | 2.83s | ~42 days |
| Worst days (Jan 11-12, 2026) | 13.6s | Would imply ~204 days if sustained |
In practice, masternode owners resigning today should expect to wait ~35 days under normal conditions, and potentially longer during network congestion events. The block time has never consistently held at 2.00s. The network regularly averages 2.2-2.8s, and periodic spikes (Dec 26: 8.4s, Dec 28-30: 4-5s, Jan 10-12: 5-14s) push the effective delay even further.
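The drift is easy to quantify: the effective delay is just the block count multiplied by the observed average block time. A quick sanity check in Python, using the block counts from genesis and the averages from the table above:

```python
SECONDS_PER_DAY = 86_400

# Delays hardcoded in the XDCValidator constructor (in blocks).
CANDIDATE_WITHDRAW_DELAY = 1_296_000
VOTER_WITHDRAW_DELAY = 432_000

def effective_days(blocks: int, avg_block_time_s: float) -> float:
    """Convert a block-count delay into wall-clock days at a given average block time."""
    return blocks * avg_block_time_s / SECONDS_PER_DAY

print(round(effective_days(CANDIDATE_WITHDRAW_DELAY, 2.00)))  # theoretical target: 30
print(round(effective_days(CANDIDATE_WITHDRAW_DELAY, 2.33)))  # observed avg: 35
print(round(effective_days(CANDIDATE_WITHDRAW_DELAY, 2.83)))  # 90-day avg: 42
print(round(effective_days(VOTER_WITHDRAW_DELAY, 2.33)))      # voter delay: 12
```

The same arithmetic at the worst observed average (13.6s/block) yields the ~204-day figure in the last table row.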
For liquid staking protocols, this unpredictability on top of an already long delay is the binding constraint. Protocols must either (a) keep a large idle reserve buffer sized for worst-case timing, (b) force users to wait 35+ days, or (c) route through DEX secondary markets at a discount. All three options erode capital efficiency and hurt adoption.
Important detail: `resign()` only unlocks the owner's own voter stake, not the full candidacy cap. Other voters who staked on that candidate must separately call `unvote()` (with a ~12-day effective delay at current block times) to retrieve their funds. `withdraw()` must be called individually per withdrawal entry (specific `blockNumber` + `index`); there is no batch withdrawal.
Ecosystem competitiveness. Other networks have already moved to dynamic or queue-based unstaking:
- Ethereum: epoch-based churn limit (~8 validators/epoch), exit times range from hours to weeks depending on demand
- Polkadot: RFC-0097 introduces dynamic unbonding from 2 days (empty queue) to 28 days (high demand)
- Cosmos: 21-day unbonding with per-block queue maturation
- Avalanche: ACP-273 proposes reducing minimum staking from 14 days to 48 hours
XDC's flat 30 days with zero flexibility is an outlier.
Current Mechanism (Deployed at 0x88)
For context, here's the live on-chain state of the contract, queried via eth_getStorageAt against mainnet RPC and cross-referenced with the verified source on xdcscan:
Contract Parameters (immutable since genesis)
| Parameter | Storage slot | On-chain value | Real-world effect |
|---|---|---|---|
| `candidateWithdrawDelay` | 0x0e | 1,296,000 blocks | ~35 days at current avg 2.33s/block |
| `voterWithdrawDelay` | 0x0f | 432,000 blocks | ~12 days at current avg 2.33s/block |
| `minCandidateCap` | 0x0b | 10,000,000 XDC | Stake required to propose a masternode |
| `minVoterCap` | 0x0c | 25,000 XDC | Minimum vote amount |
| `maxValidatorNumber` | 0x0d | 18 | Genesis relic; consensus layer overrides to 108 |
| Compiler | - | Solidity 0.4.21 | No overflow protection, known compiler bugs |
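These values can be reproduced against any mainnet RPC endpoint. A minimal sketch of the query and the decoding step (the endpoint URL is illustrative; the contract address and slot numbers are from the table above, and the hex words shown are the values the table reports):

```python
import json

# The XDCValidator contract address on mainnet; storage slots per the table above.
VALIDATOR_CONTRACT = "0x0000000000000000000000000000000000000088"

def storage_at_payload(address: str, slot: str) -> str:
    """Build the JSON-RPC request body for eth_getStorageAt."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "eth_getStorageAt",
        "params": [address, slot, "latest"],
        "id": 1,
    })

def decode_uint(storage_word: str) -> int:
    """Decode a 32-byte hex storage word into an unsigned integer."""
    return int(storage_word, 16)

# POST the payload to an RPC endpoint (URL is a placeholder), e.g.:
#   curl -X POST -H 'Content-Type: application/json' \
#        -d "$(python -c 'print(storage_at_payload(...))')" https://rpc.example
# The returned 32-byte words decode as follows:
print(decode_uint("0x" + "13c680".zfill(64)))  # slot 0x0e -> 1296000 blocks
print(decode_uint("0x" + "69780".zfill(64)))   # slot 0x0f -> 432000 blocks
```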
Live Network State (queried March 2026)
| Metric | Value | Notes |
|---|---|---|
| Contract balance | 2,580,878,498 XDC (~$80M) | All staked funds held here |
| `candidateCount` | 253 | Active candidates registered |
| `ownerCount` | 219 | Includes resigned owners who are still counted |
| `candidates[]` array length | 529 | Entries ever created (never shrinks) |
| Active (non-zero) entries | 253 | 47.8% of the array |
| Ghost (zeroed-out) entries | 276 | 52.2% waste; gas overhead on every enumeration |
| Consensus active set | 108 masternodes | Set by XDPoS v2, not the contract's `maxValidatorNumber` |
| Current block | ~100,783,000 | ~7.4 years of blocks at avg 2.33s |
Withdrawal Flow Today
- Owner calls `resign(candidateAddress)` → `isCandidate` is set to false and the owner's own stake is scheduled for release at `block.number + 1,296,000`.
- Owner waits ~35 days in practice (1.296M blocks at the real-world avg of 2.33s/block; can stretch to 40+ days during congestion spikes).
- Owner calls `withdraw(blockNumber, index)`. Must specify the exact block number and array index. No batch withdrawal.
- Voters on the same candidate must independently call `unvote()` → their stake is released after `block.number + 432,000` (~12 days actual).
Key Limitations
- Block-count based delays, not timestamp-based. The effective wait drifts with block time fluctuations. At the recent 90-day average of 2.83s/block, the owner delay stretches to ~42 days.
- No parameter governance. `candidateWithdrawDelay` and `voterWithdrawDelay` are constructor-set with no admin function to adjust them.
- `maxValidatorNumber` is stale. The contract says 18, the consensus layer uses 108. The parameter serves no active purpose.
- `ownerCount` only increments. `resign()` does not decrement `ownerCount`, meaning it includes all historical owners. This inflates the denominator in any governance calculation that uses it.
The Three Scenarios
Scenario A: Three-Tier Exit Queue (Simple & Predictable)
Concept: The withdrawal lock duration depends on how many masternodes are currently pending withdrawal at the time you resign. Three tiers create a clear incentive structure: exits are fast when the queue is light, moderate at normal load, and fall back to the full 30 days only under heavy exit pressure.
| Queue depth (pending withdrawals) | Lock duration | Rationale |
|---|---|---|
| < 5 nodes | 3 days | Low pressure, network is healthy, fast exits are safe |
| 5 - 10 nodes | 14 days | Moderate pressure, allow exits but with a buffer |
| > 10 nodes | 30 days | High pressure, full lock to protect network stability |
| Parameter | Value |
|---|---|
| Queue model | Real-time count of pending withdrawals |
| Tier boundaries | 5 and 10 (tunable via governance) |
| Lock assignment | Snapshot at time of `resign()` |
How it works:
- Validator calls `resign()`.
- Contract reads `pendingWithdrawalCount` (number of validators currently in the withdrawal pipeline).
- If < 5 → assign 3-day lock.
- If 5-10 → assign 14-day lock.
- If > 10 → assign 30-day lock.
- Withdraw after lock expires (unchanged).
Why liquid staking loves this: Under normal network conditions, the exit queue rarely has more than a handful of nodes pending at once. A liquid staking protocol operating 20 masternodes could rotate nodes with just a 3-day turnaround, small enough to serve redemptions from a thin buffer without sacrificing yield. When the queue gets crowded, the longer delays kick in automatically, giving the protocol a clear signal to pause redemptions or route to secondary markets.
Trade-offs:
- Simple to implement and reason about: just two `if` checks.
- Tier boundaries (5 / 10) need tuning based on network size; could be made governance-adjustable.
- Lock is fixed at resign time, so a validator who resigns at queue depth 4 gets 3 days even if 20 more resign in the next block.
Scenario B: Continuous Dynamic Duration Based on Queue Depth (Polkadot-Inspired)
Concept: Instead of hard tiers, the lock duration scales smoothly between a floor and ceiling based on the number of masternodes currently pending withdrawal. Below 5 nodes the lock approaches its minimum; above 10 it ramps toward the maximum; and it caps at 30 days.
| Parameter | Value |
|---|---|
| Minimum lock (0 nodes pending) | 2 days |
| Maximum lock (≥ 10 nodes pending) | 30 days |
| Fast zone | < 5 nodes pending → 2-7 days |
| Transition zone | 5-10 nodes pending → 7-30 days |
| Scaling curve | Linear between breakpoints (or sigmoid for smoother UX) |
Formula:
```
if pendingNodes < 5:
    lockDays = 2 + (pendingNodes * 1)          // 2d → 6d
else if pendingNodes <= 10:
    lockDays = 7 + (pendingNodes - 5) * 4.6    // 7d → 30d
else:
    lockDays = 30                              // hard cap
```
How it works:
- Validator calls `resign()`.
- Contract reads `pendingWithdrawalCount`.
- Lock duration is computed from the formula above.
- Assign `lockDuration` to this withdrawal entry.
- Withdraw after lock expires (unchanged).
Example at current network size (~108 active masternodes):
| Nodes currently pending withdrawal | Lock duration | Zone |
|---|---|---|
| 0 | 2 days | Fast |
| 1 | 3 days | Fast |
| 2 | 4 days | Fast |
| 4 | 6 days | Fast |
| 5 | 7 days | Transition |
| 7 | 16 days | Transition |
| 10 | 30 days | Cap |
| 15 | 30 days | Cap |
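The curve can be simulated directly; a sketch that reproduces the example table (note the on-chain version would need integer math, e.g. tenths of a day, since Solidity has no floats):

```python
def dynamic_lock_days(pending_nodes: int) -> int:
    """Lock duration in days as a function of current queue depth (Scenario B)."""
    if pending_nodes < 5:
        return 2 + pending_nodes                      # fast zone: 2d -> 6d
    elif pending_nodes <= 10:
        return round(7 + (pending_nodes - 5) * 4.6)   # transition zone: 7d -> 30d
    else:
        return 30                                     # hard cap

for n in (0, 1, 2, 4, 5, 7, 10, 15):
    print(n, dynamic_lock_days(n))
```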
Why liquid staking loves this: In the common case (1-3 nodes cycling out), the lock is 2-5 days. A liquid staking protocol can keep a minimal reserve buffer and serve redemptions almost in real-time. The smooth curve also means there's no "cliff" at a tier boundary. Protocols can forecast lock times with a simple view function call before deciding whether to initiate a withdrawal.
Trade-offs:
- Slightly more complex than tiered (requires on-chain math), but still a single storage read + arithmetic.
- Lock duration isn't known until the moment of resignation (though a `previewLockDuration()` view function makes this transparent).
- Elegant security guarantee: if many nodes rush to exit, the lock automatically stretches to 30 days, protecting the network without any governance intervention.
Scenario C: Adaptive Churn Limit with FIFO Queue (Ethereum-Inspired)
Concept: Validators enter a FIFO exit queue on resign(). The network processes exits at each epoch boundary, but the processing speed adapts based on queue depth: fast when the queue is short, throttled when it's deep.
| Queue depth | Exits processed per epoch | Post-queue lock | Effective behavior |
|---|---|---|---|
| < 5 nodes | 5 per epoch (drain immediately) | 48 hours | Near-instant: queue clears in one epoch |
| 5 - 10 nodes | 2 per epoch | 48 hours | Moderate: queue drains over several epochs |
| > 10 nodes | 1 per epoch | 7 days | Slow: deliberate pacing + extended lock |
| Parameter | Value |
|---|---|
| Queue model | FIFO, processed at epoch boundary |
| Epoch duration | 900 blocks (~30 min at 2s blocks) |
| Churn rate adapts every | Epoch (re-evaluated based on current queue depth) |
How it works:
- Validator calls `resign()` → enters the exit queue with a sequence number. Post-lock duration is assigned at this point based on current queue depth.
- At each epoch boundary, the contract reads the current queue depth and processes exits accordingly.
- If queue < 5 → process up to 5 exits this epoch. Post-lock for new resignations: 48 hours.
- If queue 5-10 → process up to 2 exits this epoch. Post-lock for new resignations: 48 hours.
- If queue > 10 → process only 1 exit this epoch. Post-lock for new resignations: 7 days.
- Withdraw after post-lock expires.
Note: the churn rate adapts as the queue drains. A queue of 25 doesn't process at 1/epoch the whole time. Once it drops to 10, it speeds up to 2/epoch, then to 5/epoch below 5.
Example queue scenarios (traced through adaptive processing):
| Queue depth at resign | Post-lock (fixed at resign) | Queue drain breakdown | Total estimated time |
|---|---|---|---|
| 0 (you're the only one) | 48h | 1 epoch (~30min) | ~2 days |
| 3 (light traffic) | 48h | 1 epoch, all 4 clear at 5/epoch | ~2 days |
| 7 (moderate) | 48h | 2 epochs at 2/epoch, then 1 at 5/epoch = 3 epochs (~1.5h) | ~2 days |
| 12 (heavy) | 7 days | 3 at 1/epoch → 3 at 2/epoch → 1 at 5/epoch = 7 epochs (~3.5h) | ~7 days |
| 25 (extreme) | 7 days | 16 at 1/epoch → 3 at 2/epoch → 1 at 5/epoch = 20 epochs (~10h) | ~7.5 days |
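The drain traces follow from a direct simulation of the adaptive rule (a sketch; the churn rate is re-evaluated from the remaining queue depth at each epoch boundary, and the resigning validator joins at the back):

```python
def epochs_until_exit(others_ahead: int) -> int:
    """Epochs until the newest joiner clears the FIFO exit queue under Scenario C's
    adaptive churn: 5/epoch below 5 pending, 2/epoch at 5-10, 1/epoch above 10."""
    queue = others_ahead + 1  # queue depth at resign, plus you at the back
    epochs = 0
    while queue > 0:
        rate = 5 if queue < 5 else 2 if queue <= 10 else 1
        queue -= min(rate, queue)
        epochs += 1
    return epochs

for depth in (0, 3, 7, 12, 25):
    print(depth, epochs_until_exit(depth))
```

At ~30-minute epochs, even the extreme case (25 pending) clears its queue wait in roughly 10 hours; the post-lock dominates the total exit time.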
Why liquid staking loves this: Under normal conditions (1-4 nodes exiting), the entire queue flushes in a single epoch and exit time is just 2 days. The protocol can predict its exact queue position and calculate withdrawal timing deterministically. When exit pressure rises, the system automatically shifts to a more conservative pace, giving protocols a clear signal to adjust their redemption flow.
Trade-offs:
- Most complex to implement (requires epoch hooks or a keeper-based trigger + adaptive rate logic).
- The dual lever (churn rate + post-lock duration) gives fine-grained control but adds cognitive overhead.
- When the queue is deep (> 10), exit times are still dramatically better than the current 30 days, but the 7-day post-lock provides a meaningful security buffer.
- Churn rate boundaries (5 / 10) should be governance-adjustable as the network grows.
Additional Upgrade Opportunities
Since we'd be touching the contract anyway, it would be worth bundling other improvements:
- Compiler upgrade to Solidity ≥0.8.x (built-in overflow protection, custom errors, gas optimizations)
- Array compaction for the candidates list (eliminate zeroed entries, reduce gas for enumeration)
- Event coverage for all state-changing operations (currently some paths emit no events)
- ownerCount bookkeeping to accurately reflect the active validator set
- Withdrawal batching. Currently `withdraw(blockNumber, index)` requires one call per entry; a `withdrawAll()` that claims all mature entries in a single transaction would drastically improve UX
- View functions for queue position, estimated lock duration, pending withdrawal count, and total exiting stake
- Timestamp-based delays instead of block-count, decoupling the lock period from block time fluctuations
- KYC flow improvements with proper access control and state management
- SafeMath / overflow protection for all arithmetic (native with 0.8.x)
Comparison Matrix
| Feature | Current XDC | Scenario A (3-Tier) | Scenario B (Dynamic Curve) | Scenario C (Adaptive Churn) |
|---|---|---|---|---|
| Min unlock (< 5 pending) | ~35d actual (1.296M blocks) | 3 days | 2 days | ~2 days |
| Moderate unlock (5-10 pending) | ~35d actual (1.296M blocks) | 14 days | 7-30 days | ~2 days (queue wait + 48h) |
| Max unlock (> 10 pending) | ~35d actual (1.296M blocks) | 30 days | 30 days | ~7-7.5 days (queue wait + 7d) |
| Delay type | Block-count (drifts with block time) | Timestamp-based | Timestamp-based | Epoch + timestamp |
| Voter delay | ~12d actual (432K blocks) | Unified with owner | Unified with owner | Unified with owner |
| Batch withdrawal | No (per-entry) | Yes | Yes | Yes |
| Predictability | Fixed | Very high (3 tiers) | Medium (continuous) | High (position-based) |
| Liquid staking friendly | Poor | Good | Very good | Excellent |
| Implementation complexity | - | Low | Low-Medium | Medium-High |
| Security under mass exit | Same 30d always | Full 30d fallback | Scales to 30d automatically | Throttled churn + 7d post-lock |
| Consensus layer changes | No | No | No | Possibly (epoch hooks) |
What I'd Like to Hear From the Community
- Which scenario resonates most with how you think XDC should evolve?
- Are there hybrid approaches? For example, Scenario B (dynamic duration) + Scenario C (churn limit) combined.
- What's the right security floor? Is 48 hours too aggressive for the minimum lock? Should it be 7 days?
- Upgrade mechanism. Should this be a full contract migration with state transfer, or an upgradeable proxy pattern going forward?
- Backwards compatibility. How do we handle existing stakers and pending withdrawals during migration?
The 30-day fixed lock was a reasonable default at launch, but the XDC ecosystem has outgrown it. Liquid staking, institutional staking services, and DeFi protocols all need a more dynamic validator lifecycle. Let's design it together.