* replace bitvec with a more efficient bitpack algorithm (see the packing sketch after this group of commits)
* optimise proof_unpack_len
* move proof pack length calculation
* small refactor
* first pass attempt at not deserializing proof nonces in difficulty iter
* another 10 seconds gained by not deserializing the proof from the difficulty iterator
* add new deser parameters to tests where needed
* add skip_proof variants to store
* remove hash from difficulty iterator struct, rename HeaderInfo to HeaderDifficultyInfo
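A minimal sketch of the bit-packing idea behind the bitpack and proof_unpack_len commits above, assuming fixed-width LSB-first packing of proof nonces (names and layout are illustrative, not the crate's exact code):

```rust
/// Pack each nonce into `bit_width` bits of a flat byte buffer, LSB first.
/// Replaces a bitvec-based approach with plain shifts and masks.
fn pack_nonces(nonces: &[u64], bit_width: usize) -> Vec<u8> {
    let total_bits = nonces.len() * bit_width;
    let mut out = vec![0u8; (total_bits + 7) / 8];
    let mut bit_pos = 0usize;
    for &nonce in nonces {
        for i in 0..bit_width {
            if (nonce >> i) & 1 == 1 {
                out[(bit_pos + i) / 8] |= 1 << ((bit_pos + i) % 8);
            }
        }
        bit_pos += bit_width;
    }
    out
}
```

With this layout the packed length in bytes falls out of the same arithmetic, `(nonces.len() * bit_width + 7) / 8`, which is presumably what the proof_unpack_len optimisation and the moved length calculation exploit instead of walking the bits.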
* use 0-based positions in methods pmmr_leaf_to_insertion_index and bintree_postorder_height; add round_up_to_leaf_pos method (see the height sketch after this run of commits)
* use 0-based positions in method insertion_to_pmmr_index
* use 0-based positions in method is_leaf
* use 0-based positions in method family()
* use 0-based positions in method is_left_sibling
* use 0-based positions in method family_branch
* use 0-based positions in methods bintree_{left,right}most
* use 0-based positions in method bintree_pos_iter
* use 0-based positions in method bintree_range
* use 0-based positions in method bintree_leaf_pos_iter
* rename last_pos in MMR related structs to size
* use 0-based positions in method prune
* use 0-based positions in method push and apply_output return value
* use 0-based position argument of method merkle_proof
* use 0-based outputs in method pmmr::peaks
* fix peaks() code comments
* refix peaks() code comments
* use 0-based positions in method get_peak_from_file
* use 0-based positions in methods get_data_from_file
* use 0-based positions in methods get_from_file
* use 0-based positions in methods get_data
* use 0-based positions in methods get_hash
* use 0-based positions in method peak_path
* use 0-based positions in method bag_the_rhs
* use 0-based positions in method Backend::remove
* use 0-based positions in method leaf_pos_iter
* use 0-based positions in method self.LeafSet::includes
* use 0-based positions in methods self.LeafSet::{add,remove}
* use 0-based positions in methods is_pruned,is_pruned_root,is_compacted
* use 0-based positions in methods PruneList::append
* use 0-based positions in methods append_pruned_subtree
* use 0-based positions in method calculate_next_leaf_shift
* use 0-based positions in method append_single
* use 0-based positions in method calculate_next_shift
* use 0-based positions in method segment_pos_range
* use 0-based positions in method reconstruct_root
* use 0-based positions in method validate_with
* use 0-based positions in method validate
* rename size (formerly last_pos) to mmr_size
* use 0-based positions in Segment's hash_pos and leaf_pos
* minimize use of saturating_sub(1) and rename some pos/idx to size
* use 0-based positions in methods get_output_pos
* use 0-based positions in method get_unspent_output_at
* use 0-based positions in method get_header_hash
* use 0-based positions in methods MerkleProof::verify{,_consume}
* use 0-based positions in method cleanup_subtree
* don't allow 0 in prunelist bitmap
* use 0-based positions in methods get_{,leaf_}shift
* rename some 1-based pos to pos1; identify TODO
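The long run of 0-based-position commits above is one convention change applied method by method. A minimal sketch of what a 0-based helper looks like, using bintree_postorder_height as the example (an illustration of the convention, not necessarily the crate's implementation):

```rust
/// Height (0-based) of the node at 0-based postorder position `pos0`
/// in an MMR.
fn bintree_postorder_height(pos0: u64) -> u64 {
    // Work on the 1-based position internally: a node is the root of a
    // perfect binary subtree exactly when its 1-based postorder index is
    // all ones in binary, i.e. 2^(h+1) - 1.
    let mut pos = pos0 + 1;
    while pos.count_ones() != 64 - pos.leading_zeros() {
        // Not all ones yet: jump left past a full perfect subtree.
        let bits = 64 - pos.leading_zeros();
        pos -= (1u64 << (bits - 1)) - 1;
    }
    // pos == 2^(h+1) - 1, so the height is its bit length minus one.
    (64 - pos.leading_zeros() - 1) as u64
}
```

Keeping every position 0-based at the API boundary is also what lets the later commits strip out the saturating_sub(1) noise at call sites.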
* Address yeastplume's PR review comments
* refactor prune_list with aim of allowing pruned subtree appending
* add test coverage around pmmr::is_leaf() and pmmr::bintree_leaf_pos_iter()
* comments
* cleanup
* implement append pruned subtree for prune_list
* commit
* we can now append to prune_list
* fix our prune_list corruption...
* rework how we rewrite the prune list during compaction
* test coverage for improved prune list api
* continuing to merge
* finish merge, tests passing again
* add function pmmr_leaf_to_insertion_index and modify bintree_leaf_pos_iter to use it. Note there's still an unwrap that needs to be handled sanely
* change pmmr_leaf_to_insertion_index to a simpler version + handle conversion between 1-based and 0-based positions in bintree_leaf_pos_iter
Co-authored-by: antiochp <30642645+antiochp@users.noreply.github.com>
* wip
* use range beneath subtree for efficient is_pruned check
* various iterators over the prune list
* improved prune list iter and subtree handling
* use take_while so unpruned iterators are not infinite
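A hedged sketch of the range-based is_pruned check mentioned above: each pruned subtree root covers a contiguous postorder range beneath it, so membership becomes a range test rather than a tree walk (helper names are assumptions; this reuses the bintree_postorder_height sketch from earlier):

```rust
use std::ops::RangeInclusive;

/// 0-based postorder range spanned by the subtree rooted at `root0`.
fn bintree_range(root0: u64) -> RangeInclusive<u64> {
    let height = bintree_postorder_height(root0);
    let size = (1u64 << (height + 1)) - 1; // nodes in a perfect subtree
    (root0 + 1 - size)..=root0
}

/// A position is pruned if it lies at or beneath any pruned subtree root.
fn is_pruned(pruned_roots: &[u64], pos0: u64) -> bool {
    pruned_roots
        .iter()
        .any(|&root0| bintree_range(root0).contains(&pos0))
}
```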
* add segmenter for generating segments from txhashset with consistent rewind
* rework segmenter to take a txhashset wrapped in rwlock
rework our rewindable pmmr so we can convert to readonly easily
* placeholder code for rewinding readonly txhashset extension to build a rangeproof segment
* segment creation for outputs/rangeproofs/kernels/bitmaps
* placeholder segment impl
* commit
* rework segmenter to use a cached bitmap (rewind is expensive)
* cache segmenter instance based on current archive header
* integrate the real segment and segment identifier with our segmenter
* exercise the segmenter code on chain init
* wrap accumulator in an arc, no need to clone each time
* Chunk generation and validation
* Rename chunk -> segment
* Missed a few
* Generate and validate merkle proof
* Fix bugs in generation and validation
* Add test for unprunable MMR of various sizes
* Add missing docs
* Remove unused functions
* Remove segment error variant on chain error type
* Simplify calculation by using a Vec instead of HashMap
* Use vectors in segment definition
* Compare subtree root during tests
* Add test of segments for a prunable mmr
* Remove assertion
* Only send intermediary hashes for prunable MMRs
* Get hash from file directly
* Require both leaves if one of them is not pruned
* More pruning tests
* Add segment (de)serialization
* Require sorted vectors in segment deser
* Store pos and data separately in segment
* Rename log_size -> height
* Fix bitmap index in root calculation
* Add validation function for output (bitmap) MMRs
* Remove left over debug statements
* Fix test
* Edge case: final segment with uneven number of leaves
* Use last_pos instead of segment_last_pos
* Simplify pruning in test
* Add leaf and hash iterators
* Support fully pruned segments
* Drop backend before deleting dir in pruned_segment test
* Simplify output of first_unpruned_parent
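A rough sketch of the segment shape these commits converge on: positions and data in separate sorted vectors, intermediary hashes only for prunable MMRs, and a proof tying the segment to the MMR root (field and type names are assumptions, not the crate's exact definition):

```rust
type Hash = [u8; 32]; // placeholder hash type for the sketch

struct SegmentIdentifier {
    height: u8, // log2 of the segment capacity in leaves (renamed from log_size)
    idx: u64,   // index of this segment within the MMR
}

struct Segment<T> {
    identifier: SegmentIdentifier,
    hash_pos: Vec<u64>, // positions of intermediary hashes (prunable MMRs only)
    hashes: Vec<Hash>,
    leaf_pos: Vec<u64>, // positions of the unpruned leaves
    leaf_data: Vec<T>,
    proof: Vec<Hash>,   // hashes needed to get from the segment to the MMR root
}
```

Per the commits above, deserialization rejects unsorted position vectors, and a fully pruned segment can be represented with empty leaf vectors.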
* Introduce CommitOnly variant of Inputs.
Introduce CommitWrapper so we can sort commit-only inputs correctly (sketched after this group of commits).
* remember to re-sort if converting
* write inputs based on variant and protocol version
* read and write protocol version specific inputs
* store full blocks in local db in v3
convert to v2 when relaying to v2 peers
* add debug version_str for inputs
* no assumptions about spent index sort order
* add additional version debug logs
* fix ser/deser tests for proto v3
* cleanup coinbase maturity
* rework pool to better handle v2 conversion robustly
* cleanup txpool add_to_pool
* fix nrd kernel test
* move init conversion earlier
* cleanup
* cleanup based on PR feedback
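A minimal sketch of the two input variants and the sort-by-commitment wrapper described above (the Commitment and Input stand-ins and exact shapes are assumptions):

```rust
use std::cmp::Ordering;

// Stand-in for the sketch: a Pedersen commitment is 33 bytes in practice.
#[derive(Clone, Copy, PartialEq, Eq)]
struct Commitment([u8; 33]);

struct Input {
    features: u8,
    commit: Commitment,
}

#[derive(Clone, Copy, PartialEq, Eq)]
struct CommitWrapper {
    commit: Commitment,
}

// Sort commit-only inputs by raw commitment bytes so both serialization
// variants agree on a canonical order (and re-sort when converting).
impl Ord for CommitWrapper {
    fn cmp(&self, other: &Self) -> Ordering {
        self.commit.0.cmp(&other.commit.0)
    }
}

impl PartialOrd for CommitWrapper {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

// v2 serialization carries full input data; v3 carries commitments only.
enum Inputs {
    FeaturesAndCommit(Vec<Input>),
    CommitOnly(Vec<CommitWrapper>),
}
```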
* variants for output_pos linked list entries (head/tail/middle/unique)
next and prev as Vec<u8> lmdb keys (entry shapes sketched after this group of commits)
* get_pos on enum
* break list and list entries out into separate enums
* track output features in the new output_pos index, so we can determine coinbase maturity
* push entry impl for none and unique
* some test coverage for output_pos_list
* commit
* wip - FooListEntry
* use instance of the index
* linked list of output_pos and commit_pos both now supported
* linked_list
* cleanup and rename
* rename
* peek_pos
* push some, peek some, pop some
* cleanup
* commit pos
cleanup
* split list and entry out into separate db prefixes
* cleanup and add placeholder for pop_back
* pop_pos_back (for popping off the back of the linked list)
test coverage for pop_pos_back
* wip
* placeholder for prune via a trait
pos must always increase in the index
* rewind kernel_pos_idx when calling rewind_single_block
* RewindableListIndex with rewind support.
* test coverage for rewindable list index
* test coverage for rewind back to 0
* rewind past end of list
* add tests for kernel_pos_idx with multiple commits
* commit
* cleanup
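A hedged sketch of how the head/tail/middle/unique variants might map onto two enums, per the commits above (names and fields are assumptions):

```rust
/// Position of a commitment in the MMR plus the block height, so coinbase
/// maturity can be checked from the index alone.
#[derive(Clone, Copy)]
struct CommitPos {
    pos: u64,
    height: u64,
}

/// The list itself: a unique entry, or head/tail pointers into the entries.
enum ListWrapper {
    Single { pos: CommitPos },
    Multi { head: u64, tail: u64 },
}

/// Entries live under a separate db prefix; next/prev link entries by pos.
enum ListEntry {
    Head { pos: CommitPos, next: u64 },
    Tail { pos: CommitPos, prev: u64 },
    Middle { pos: CommitPos, next: u64, prev: u64 },
}
```

Since positions must always increase in the index, rewinding a single block reduces to popping entries whose pos lies beyond the rewind point.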
* hook NRD relative lock height validation into block processing and tx validation (rule sketched after these commits)
* cleanup
* set local chain type for kernel_idx tests
* add test coverage for NRD rules in block processing
* NRD test coverage and cleanup
* NRD relative height 1 test
* test coverage for NRD kernels in block processing
* cleanup
* start of test coverage for txpool NRD kernel rules
* wip
* rework pool tests to use real chain (was mock chain) to better reflect reality (tx/block validation rules etc.)
* cleanup
* cleanup pruneable trait for kernel pos index
* add clear() to kernel_pos idx and test coverage
* hook kernel_pos rebuild into node startup, compaction and fast sync
* verify full NRD history on fast sync
* return early if nrd disabled
* fix header sync issue
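A minimal sketch of the NRD relative lock height rule being hooked in above (the function shape is an assumption):

```rust
/// An NRD kernel at `height` is valid only if the nearest previous instance
/// of the same kernel excess is at least `relative_height` blocks old, or
/// there is no previous instance in the verified history at all.
fn verify_nrd(prev_height: Option<u64>, height: u64, relative_height: u64) -> bool {
    match prev_height {
        Some(prev) => prev + relative_height <= height,
        None => true,
    }
}
```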
We have a pattern in the code: return a Vec, turn it into an iterator, filter, and collect into another Vec. The idea is to return an iterator where possible to avoid allocating intermediate Vecs.
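An illustrative before/after of that pattern:

```rust
struct Block {
    outputs: Vec<u64>,
}

impl Block {
    // Before: an intermediate Vec is allocated even when the caller
    // immediately filters and collects into another Vec.
    fn outputs_vec(&self) -> Vec<u64> {
        self.outputs.clone()
    }

    // After: return an iterator; nothing is allocated until the caller
    // actually needs a Vec.
    fn outputs(&self) -> impl Iterator<Item = &u64> {
        self.outputs.iter()
    }
}
```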
* Introduce GLOBAL_CHAIN_TYPE and make CHAIN_TYPE thread_local.
This makes testing more explicit and significantly more robust (see the sketch below).
* set_local_chain_type() in tests
* cleanup - weird
* get pool tests working with explicit local chain_type config
* core tests working with explicit local chain_type
* p2p tests working with explicit local chain_type
* store tests working
* cleanup, feedback
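A minimal sketch of the GLOBAL_CHAIN_TYPE / thread-local pattern, assuming a OnceLock-style global (the real code may use different primitives):

```rust
use std::cell::Cell;
use std::sync::OnceLock;

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum ChainTypes {
    Mainnet,
    AutomatedTesting,
}

// Set once for the whole process (e.g. at node startup).
static GLOBAL_CHAIN_TYPE: OnceLock<ChainTypes> = OnceLock::new();

thread_local! {
    // Each test thread sets its own value explicitly.
    static CHAIN_TYPE: Cell<Option<ChainTypes>> = Cell::new(None);
}

fn set_local_chain_type(chain_type: ChainTypes) {
    CHAIN_TYPE.with(|c| c.set(Some(chain_type)));
}

fn get_chain_type() -> ChainTypes {
    // Prefer the thread-local override, fall back to the global, and fail
    // loudly if neither is set: tests must be explicit about chain type.
    CHAIN_TYPE
        .with(|c| c.get())
        .or_else(|| GLOBAL_CHAIN_TYPE.get().copied())
        .expect("chain type not set")
}
```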
We have a method get which returns a Vec; it's used in cases where we can't implement FromLmdbBytes or Readable on a desired type, e.g. when both the trait and the type are external to the current crate. This approach requires an extra allocation of a Vec, so this PR introduces a method which accepts a deserializer as a closure, allowing deserialization in place without an intermediate heap allocation.
It's still possible to get a Vec if needed; just pass a Vec constructor to get_with.
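A sketch of that closure-based accessor, with a HashMap standing in for lmdb (the real signature may differ):

```rust
use std::collections::HashMap;

struct Store {
    db: HashMap<Vec<u8>, Vec<u8>>, // stand-in for the lmdb environment
}

impl Store {
    /// Deserialize directly from the borrowed value bytes; no intermediate
    /// Vec is allocated unless the closure itself builds one.
    fn get_with<T, F: FnOnce(&[u8]) -> T>(&self, key: &[u8], deser: F) -> Option<T> {
        self.db.get(key).map(|bytes| deser(bytes.as_slice()))
    }
}

fn main() {
    let mut store = Store { db: HashMap::new() };
    store.db.insert(b"height".to_vec(), 42u64.to_le_bytes().to_vec());

    // Deserialize in place...
    let height: Option<u64> = store.get_with(b"height", |b| {
        u64::from_le_bytes(b.try_into().expect("8 bytes"))
    });
    assert_eq!(height, Some(42));

    // ...or still get a Vec if needed: pass a Vec constructor.
    let raw: Option<Vec<u8>> = store.get_with(b"height", |b| b.to_vec());
    assert!(raw.is_some());
}
```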
We have to make an extra allocation per db get request because the key generation function to_key takes a Vec. Taking a byte slice (AsRef<[u8]> to be precise) avoids this and also simplifies the code a bit.
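A sketch of the slice-based key builder, assuming a one-byte prefix (the exact key layout is an assumption):

```rust
/// Build a db key from a prefix byte and anything byte-slice-like:
/// &[u8], &str, Vec<u8>, [u8; N], ... all work without an extra
/// allocation on the caller's side.
fn to_key<K: AsRef<[u8]>>(prefix: u8, k: K) -> Vec<u8> {
    let k = k.as_ref();
    let mut key = Vec::with_capacity(k.len() + 1);
    key.push(prefix);
    key.extend_from_slice(k);
    key
}
```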
Currently Writeable accepts the Write trait as a type parameter, but Readable takes Read as a trait object, which is not symmetrical and also less performant. This PR changes the Readable trait to take a type parameter as well and updates all places where it's used.
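A sketch of the symmetrical shape after the change (the minimal Read/Write bounds here are stand-ins for the crate's own Reader/Writer traits):

```rust
use std::io::{Read, Write};

// Placeholder error type for the sketch.
struct Error;

trait Writeable {
    // Already generic: monomorphized per concrete writer.
    fn write<W: Write>(&self, writer: &mut W) -> Result<(), Error>;
}

trait Readable: Sized {
    // Before: fn read(reader: &mut dyn Read) -> Result<Self, Error>;
    // After: generic like Writeable, avoiding dynamic dispatch per call.
    fn read<R: Read>(reader: &mut R) -> Result<Self, Error>;
}
```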