The vector clock on an object will grow by one entry for every new `vnode_id` that coordinates a PUT for that object. The growth is bounded by the pruning limits: `young_vclock`, `old_vclock`, `small_vclock` and `big_vclock`. The primary limit is `small_vclock`, which causes an object hitting this limit (50 unique vnodes by default) to be pruned.
The pruning will normally cut back by one entry only, i.e. a clock that has reached a size of 51 will be pruned back to 50. Two controls may prevent this pruning, `young_vclock` and `old_vclock`, but with `big_vclock` also set to 50 by default, clocks will normally sit at this limit.
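To make the interplay of the four limits concrete, the following is a minimal Python sketch of the pruning rules as described above - a simplification, not the actual riak_core implementation. The clock is modelled as a list of `(vnode_id, timestamp)` entries; the `small_vclock`/`big_vclock` defaults match the text above, while the `young_vclock`/`old_vclock` values shown are assumptions.

```python
import time

# A clock is modelled as a list of (vnode_id, timestamp) entries.
# Property names mirror the Riak bucket properties; the young_vclock
# and old_vclock defaults here are assumptions.
DEFAULTS = {
    "small_vclock": 50,   # prune only once the clock exceeds this length
    "big_vclock": 50,     # above this length, the oldest entry is always droppable
    "young_vclock": 20,   # never drop an entry younger than this (seconds)
    "old_vclock": 86400,  # entries older than this are always droppable
}

def prune(clock, now=None, props=DEFAULTS):
    """Trim the oldest entries until the clock satisfies the limits."""
    now = time.time() if now is None else now
    clock = sorted(clock, key=lambda entry: entry[1])  # oldest entry first
    while len(clock) > props["small_vclock"]:
        oldest_ts = clock[0][1]
        if now - oldest_ts < props["young_vclock"]:
            break  # the oldest entry is still too young to prune
        if len(clock) > props["big_vclock"] or now - oldest_ts > props["old_vclock"]:
            clock = clock[1:]  # drop the oldest entry and re-check
        else:
            break
    return clock

# With the defaults above, a clock of 51 entries is cut back to 50:
clock = [(f"vnode_{i}", i) for i in range(51)]  # timestamps far in the past
assert len(prune(clock, now=10**9)) == 50
```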
In one key situation, pruning is disabled: when an update is a consequence of read repair, the clock will not be pruned, so objects can be persisted with clocks longer than 50.
In a multi-cluster scenario, read repairs can therefore leave some objects with clocks of length > `small_vclock` on one cluster, but of length == `small_vclock` on another.
If there is now an attempt to full-sync between the clusters, the delta will be detected - but when the object is dispatched from the cluster with the dominant (longer) clock, the receiving cluster prunes the inbound clock, and the delta persists. The objects are in fact the same, but the full-sync mechanism cannot determine that they are, and so it continuously rediscovers the same difference.
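The resulting loop can be illustrated with a toy model (all names hypothetical). Full-sync here flags any clock mismatch as a delta; the real mechanism compares hashed object state, but a clock mismatch is enough to register a difference. The longer-clocked object is re-sent, and the receiver prunes it straight back to the state that caused the mismatch:

```python
SMALL_VCLOCK = 50  # assumed default pruning limit

def prune(clock, limit=SMALL_VCLOCK):
    """Drop the oldest entries until the clock is within the limit."""
    clock = sorted(clock, key=lambda entry: entry[1])  # oldest first
    return clock[max(0, len(clock) - limit):]

def full_sync_round(src_clock, dst_clock):
    """One sync round: detect a delta and, if found, re-send the object."""
    if src_clock == dst_clock:
        return dst_clock, False  # no delta detected
    # The source dominates with the longer clock, so the object is sent;
    # the receiving cluster prunes the inbound clock before persisting it.
    return prune(src_clock), True

# Cluster A persisted a 51-entry clock via read repair; cluster B pruned.
cluster_a = [(f"vnode_{i}", i) for i in range(51)]
cluster_b = prune(cluster_a)

for round_no in range(3):
    cluster_b, delta = full_sync_round(cluster_a, cluster_b)
    print(f"round {round_no}: delta detected = {delta}")  # True every round
```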
Clocks sitting at the pruning limit are more likely on very long-lived clusters that have been subject to many leaves and joins - and as many users have been running Riak for more than 10 years, this is increasingly probable.
It should be noted that read repair will also override any sibling limit, so that too can lead to full-sync discrepancies. Hitting the sibling limit, however, is an indication of a broader fault, whereas pruning of clocks is eventually inevitable, so the clock case is the primary concern.