feat: add whitelist for reorg handling #88
base: main
Conversation
```rust
@@ -303,6 +305,29 @@ pub(crate) fn verify_finality_signature(
    Ok(())
}

/// `whitelist_forked_blocks` adds a set of forked blocks to the whitelist. These blocks will be skipped by FG
/// (ie. treated as finalized) duringconsecutive quorum checks to unblock the OP derivation pipeline.
```
Typo: "duringconsecutive" should be "during consecutive".
```rust
check_admin(&deps, info)?;

// Check array is non-empty
if forked_blocks.is_empty() {
```
Should we also check that the first element is less than or equal to the second element in each tuple?
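A small sketch of what that check could look like, assuming `forked_blocks` is the `Vec<(u64, u64)>` from the message (the helper name and error text are made up for illustration):

```rust
use cosmwasm_std::StdError;

/// Hypothetical helper: each (start, end) range must be well-formed,
/// i.e. start <= end.
fn check_ranges_well_formed(forked_blocks: &[(u64, u64)]) -> Result<(), StdError> {
    for &(start, end) in forked_blocks {
        if start > end {
            return Err(StdError::generic_err(format!(
                "invalid forked block range: start {start} > end {end}"
            )));
        }
    }
    Ok(())
}
```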
```rust
FORKED_BLOCKS.update::<_, ContractError>(deps.storage, |mut blocks| {
    blocks.extend(forked_blocks);
    Ok(blocks)
})?;
```
Do we want to allow overlap? e.g. [(1, 10), (5, 8), (3, 20)]

Should we enforce
- no overlap
- always incrementing

so it can only be e.g. [(1, 10), (11, 18), (300, 400)]?
Agreed, would suggest we do not allow overlap. This simplifies some CRUD around the forked blocks.
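One way to enforce that, sketched under the assumption that new ranges must also come strictly after the last range already stored (helper name and error text are illustrative):

```rust
use cosmwasm_std::StdError;

/// Hypothetical helper: ranges must be strictly increasing and
/// non-overlapping, both within the new batch and relative to the end
/// of the last range already stored (if any).
fn check_ranges_ordered(
    last_stored_end: Option<u64>,
    forked_blocks: &[(u64, u64)],
) -> Result<(), StdError> {
    let mut prev_end = last_stored_end;
    for &(start, end) in forked_blocks {
        if let Some(prev) = prev_end {
            if start <= prev {
                return Err(StdError::generic_err(
                    "forked block ranges must be increasing and non-overlapping",
                ));
            }
        }
        prev_end = Some(end);
    }
    Ok(())
}
```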
```rust
if height < *start {
    break;
}
```
I don't understand the logic here. Shouldn't we have a `continue` statement somewhere? Let's say we have [(1, 5), (8, 13)] and height is 3: the function will return false because of the break here, but it should return true.
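For reference, a minimal sketch of a membership check that keeps the early break but handles this case, assuming the list is sorted by start height and non-overlapping (the function name is illustrative):

```rust
/// Hypothetical check: returns true if `height` falls inside any of the
/// sorted, non-overlapping (start, end) intervals.
fn is_forked(blocks: &[(u64, u64)], height: u64) -> bool {
    for &(start, end) in blocks {
        // Intervals are sorted by start height, so once an interval starts
        // above `height`, no later interval can contain it.
        if height < start {
            break;
        }
        if height <= end {
            return true;
        }
    }
    false
}
```

With [(1, 5), (8, 13)] and height 3 this returns true, since the first interval is checked before the break can trigger.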
Great work!
```rust
use std::collections::HashSet;

/// Map of signatures by block height and fp
pub(crate) const SIGNATURES: Map<(u64, &str), Vec<u8>> = Map::new("fp_sigs");

/// Map of (block height, block hash) tuples to the list of fps that voted for this combination
pub(crate) const BLOCK_VOTES: Map<(u64, &[u8]), HashSet<String>> = Map::new("block_hashes");

/// Ordered list of forked blocks [(start_height_1, end_height_1), (start_height_2, end_height_2), ...]
pub(crate) const FORKED_BLOCKS: Item<Vec<(u64, u64)>> = Item::new("forked_blocks");
```
This structure means that the whole vector of start/end heights is treated as a single item. If we track a lot of forks (which might be the case), the DB ops over it could become a bottleneck. Would we consider other structures like `Map<u64, u64>`, where the key is the start height and the value is the end height?
> the DB ops over it could become a bottleneck

I think if it's ordered with no overlap, in practice it's going to be efficient, because we can just use a binary search.
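A sketch of that binary search over the sorted, non-overlapping `Vec<(u64, u64)>` (the function name is illustrative; `partition_point` is the standard-library slice method):

```rust
/// Hypothetical binary-search variant: find the last interval whose start
/// is <= height and check whether height is within its end.
fn is_forked(blocks: &[(u64, u64)], height: u64) -> bool {
    // Index of the first interval whose start is > height; the only
    // candidate interval is the one just before it.
    let idx = blocks.partition_point(|&(start, _)| start <= height);
    match idx.checked_sub(1).and_then(|i| blocks.get(i)) {
        Some(&(_, end)) => height <= end,
        None => false,
    }
}
```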
If we store it as a map, I don't know how to design an efficient algorithm to decide whether a block is reorged or not.
I think `prefix_range` or other iterator functions can be useful here. Check out `prefixed_range_works` under the `cw-storage-plus` repo.
Actually there is a much simpler way: given a height, use `prefix_range` to find the first key that is smaller than the height, and then check if the height is between the key and the value.
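Sketching that idea with cw-storage-plus, using `range` with descending order and an inclusive upper bound (which plays the same role as `prefix_range` here); the map layout and function name are assumptions:

```rust
use cosmwasm_std::{Order, StdResult, Storage};
use cw_storage_plus::{Bound, Map};

/// Hypothetical alternative storage: start height -> end height.
pub(crate) const FORKED_BLOCKS: Map<u64, u64> = Map::new("forked_blocks");

/// Returns true if `height` falls inside a whitelisted fork range.
/// Assumes stored ranges do not overlap.
fn is_forked(storage: &dyn Storage, height: u64) -> StdResult<bool> {
    // Walk the map downwards starting at `height` (inclusive); the first
    // entry returned is the range with the largest start <= height.
    let nearest = FORKED_BLOCKS
        .range(
            storage,
            None,
            Some(Bound::inclusive(height)),
            Order::Descending,
        )
        .next()
        .transpose()?;

    Ok(matches!(nearest, Some((start, end)) if start <= height && height <= end))
}
```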
OK, we discussed offline. The issue was unlikely caused by an L1 reorg; instead, it was likely caused by L1 RPC rate limiting, which in turn caused the L2 reorgs. So this can still be useful, but it is probably not high priority for now, because L1 reorg depth is usually just 1 block and it would have to get very unlucky for a batch tx to hit the exact L1 reorg block.
Summary
This PR adds whitelisting for reorg handling.
The current FG cannot handle L2 block reorgs because the new block hash breaks the finality votes query. This causes the finalized head to become stuck.
We implement a new whitelist in the CW contract that allows:
- the admin to whitelist ranges of forked blocks
- FG to skip these blocks (i.e. treat them as finalized) during consecutive quorum checks, unblocking the OP derivation pipeline
Test plan