* Update versioning on master to 5.4.0-alpha.0
* updates for 1.80 and other accumulated warnings
* further warning cleanups
* move dead code tag to function defn rather than module
* first pass compilation of croaring update
* cargo.lock
* add roaring arch flag into CI build scripts
* revert CI to use windows 2019 image
* add more debug
* more debug info
* update range arguments to bitmap remove_range function calls
* remove unnecessary cast
* [PIBD_IMPL] Introduce PIBD state into sync workflow (#3685)
* experimental addition of pibd download state for testnet only
* fixes to bitmap number of segments calculation + conversion of bitmap accumulator to bitmap
* attempt to call a test message
* add p2p methods for receiving bitmap segment and applying to desegmenter associated with chain
* fixes to state sync
* add pibd receive messages to network, and basic calls to desegmenter from each (#3686)
* [PIBD_IMPL] PIBD Desegmenter State (#3688)
* add functions to desegmenter to report next desired segments, begin to add state to determine which segments have been requested
* add SegmentIdentifier type to uniquely identify requested segments
* make a call on where to keep track of which PIBD segments have been requested
* move SegmentType definition, add functions to manipulate peer segment list
* remove desegmenter state enum
* change chain desegmenter function to provide rwlock
* trace, warning cleanup
* update to test compilation
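To illustrate the SegmentIdentifier and request tracking mentioned in the items above: a segment can be addressed by the height of the subtree it covers and its index at that height, and the desegmenter keeps a set of outstanding identifiers so the same segment is not requested from several peers at once. A minimal sketch; `SegmentRequests` and its methods are hypothetical names, not grin's actual API.

```rust
use std::collections::HashSet;

/// Illustrative segment address: `height` is the subtree height each segment
/// covers (so 2^height leaves per segment), `idx` selects which segment.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct SegmentIdentifier {
    height: u8,
    idx: u64,
}

/// Hypothetical tracker for segments that are currently in flight.
#[derive(Default)]
struct SegmentRequests {
    outstanding: HashSet<SegmentIdentifier>,
}

impl SegmentRequests {
    /// Record a request; returns false if this segment was already requested.
    fn request(&mut self, id: SegmentIdentifier) -> bool {
        self.outstanding.insert(id)
    }

    /// Clear the entry once the segment has been received and applied.
    fn received(&mut self, id: &SegmentIdentifier) -> bool {
        self.outstanding.remove(id)
    }
}
```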
* [PIBD_IMPL] Bitmap accumulator reconstruction + TxHashset set reconstruction (#3689)
* application of received bitmap segments to local accumulator
* add all required elements to send/receive output segment requests and responses
* testing of output sync
* add special cases to pmmr segment request
* [PIBD_IMPL] PMMR Reassembly from Segments (#3690)
* update pibd copy test to use new desegmenter structure
* begin reconstruction of output pmmr
* clean up hash/leaf insertion logic
* push pruned subtree appears to be working, now also calculates left hand hashes correctly
* factor out ordering of segment/hash order array
* refactor for pmmr application code
* test of chain copy appears to be working
* add rangeproof functions to desegmenter
* add kernel functions, attempt refactor
* small test cleanup, reconstruction of live chain working in manual copy test
* [PIBD_IMPL] PIBD tree sync via network and kill/resume functionality (#3691)
* add functions to determine latest verifiable block height for the given pibd state
* attempting to allow for pibd to resume after killing process
* fix to ensure prune list is properly flushed during pibd sync
* removal of unneeded code
* ignore test for now (fix before full merge)
* [PIBD_IMPL] Finalize PIBD download and move state to chain validation (#3692)
* investigations as to why a slight rewind is needed on startup during PIBD
* move validation code into desegmenter validation thread (for now)
* ensure genesis entries in pmmrs are removed if they're removed in the first segment
* validation all working except for verifying kernel sums
* remove unneeded pmmr rollbacks on resume now root cause was found
* updates to remove unpruned leaves from leaf set when rebuilding pmmr
* remove + 1 from segment traversal iter length
* [PIBD_IMPL] PIBD Stats + Retry on validation errors (#3694)
* start to add stats and reset chain state after errors detected
* add functions to reset prune list when resetting chain pibd state
* debug statement
* remove test function
* [PIBD_IMPL] Update number of simultaneous peer requests for segments (#3696)
* cleanup of segment request list
* allow for more simultaneous requests during state sync
* up number of simultaneous peer requests for segments
* [PIBD_IMPL] Thread simplification + More TUI Updates + Stop State Propagation (#3698)
* change pibd stat display to show progress as a percentage of downloaded leaves
* attempt some inline rp validation
* propagate shutdown state through kernel validation
* change validation loop timing
* simplify validator threading
* add more detailed tracking of kernel history validation to tui, allow stop state during
* adding more stop state + tui progress indication
* remove progressive validate
* test fix
* revert to previous method of applying segments (#3699)
* fix for deadlock issue (#3700)
* update Cargo.lock for next release
* [PIBD_IMPL] Catch-Up functionality + Fixes based on testing (#3702)
* ensure desegmenter attempts to apply correct block after a resume
* ensure txhashset's committed implementation takes into account output bitmap for summing purposes
* remove check to de-apply outputs during segment application
* return removal of spent outputs during pibd
* remove unneeded status
* remove unneeded change to rewind function
* documentation updates + todo fixes (#3703)
* add pibd abort timeout case (#3704)
* [PIBD_IMPL] BitmapAccumulator Serialization Fix (#3705)
* fix for writing / calculating incorrect length for negative indices
* update capabilities with new version of PIBD hist
* remove incorrect comment
* fix capabilities flag, trace output
* test fix
* Merge DNSSeed scope changes into pibd impl branch (#3708)
* update Cargo.lock for next release
* visibility scope tweaks to aid seed test utilities (#3707)
* move all PIBD-related constants into pibd_params modules (#3711)
* remove potential double read lock during compaction
* WIP remove failure from all `Cargo.toml`
* WIP remove `extern crate failure_derive`
* Use `thiserror` to fix all errors
* StoreErr is still a tuple
* Remove another set of unnecessary `.into()`s
* update fuzz tests
* update pool/fuzz dependencies in cargo.lock
* small changes based on feedback
Co-authored-by: trevyn <trevyn-git@protonmail.com>
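As a sketch of what the `failure` → `thiserror` migration above looks like in practice (an illustrative error enum, not grin's actual error types):

```rust
// Before, with failure / failure_derive:
//
//   #[derive(Debug, Fail)]
//   enum Error {
//       #[fail(display = "I/O error: {}", _0)]
//       IO(std::io::Error),
//   }
//
// After, with thiserror:
use thiserror::Error;

#[derive(Error, Debug)]
enum Error {
    /// Wrapped I/O error; `#[from]` generates the conversion, so most of the
    /// manual `.into()` calls become unnecessary.
    #[error("I/O error: {0}")]
    IO(#[from] std::io::Error),

    /// Store error carrying its message as data.
    #[error("store error: {0}")]
    Store(String),
}

fn open(path: &str) -> Result<std::fs::File, Error> {
    // `?` works directly thanks to the generated `From<std::io::Error>` impl.
    Ok(std::fs::File::open(path)?)
}
```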
* replace bitvec with more efficient bitpack algorithm
* optimise proof_unpack_len
* move proof pack length calculation
* small refactor
* first pass attempt at not deserializing proof nonces in difficulty iter
* another 10 seconds gained by not deserialising the proof from the difficulty iterator
* add new deser parameters to tests where needed
* add skip_proof variants to store
* remove hash from difficulty iterator struct, rename HeaderInfo to HeaderDifficultyInfo
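The bitpack change above amounts to writing each proof nonce with exactly its bit width (the proof's edge bits) instead of going through a per-bit `bitvec` representation. A minimal sketch of the idea, not grin's actual serialization code:

```rust
/// Pack each nonce into exactly `bit_width` bits, least significant bit first.
fn pack_nonces(nonces: &[u64], bit_width: u8) -> Vec<u8> {
    let total_bits = nonces.len() * bit_width as usize;
    let mut out = vec![0u8; (total_bits + 7) / 8];
    let mut bit_pos = 0usize;
    for &nonce in nonces {
        for i in 0..bit_width as usize {
            if ((nonce >> i) & 1) == 1 {
                out[bit_pos / 8] |= 1u8 << (bit_pos % 8);
            }
            bit_pos += 1;
        }
    }
    out
}

/// Bytes needed for `count` nonces of `bit_width` bits each; this is the kind
/// of calculation a `proof_unpack_len`-style helper has to perform.
fn packed_len(count: usize, bit_width: u8) -> usize {
    (count * bit_width as usize + 7) / 8
}
```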
* use 0-based positions in methods pmmr_leaf_to_insertion_index and bintree_postorder_height; add round_up_to_leaf_pos method
* use 0-based positions in method insertion_to_pmmr_index
* use 0-based positions in method is_leaf
* use 0-based positions in method family()
* use 0-based positions in method is_left_sibling
* use 0-based positions in method family_branch
* use 0-based positions in methods bintree_{left,right}most
* use 0-based positions in method bintree_pos_iter
* use 0-based positions in method bintree_range
* use 0-based positions in method bintree_leaf_pos_iter
* rename last_pos in MMR related structs to size
* use 0-based positions in method prune
* use 0-based positions in method push and apply_output return value
* use 0-based position argument of method merkle_proof
* use 0-based outputs in method pmmr::peaks
* fix peaks() code comments
* refix peaks() code comments
* use 0-based positions in method get_peak_from_file
* use 0-based positions in methods get_data_from_file
* use 0-based positions in methods get_from_file
* use 0-based positions in methods get_data
* use 0-based positions in methods get_hash
* use 0-based positions in method peak_path
* use 0-based positions in method bag_the_rhs
* use 0-based positions in method Backend::remove
* use 0-based positions in method leaf_pos_iter
* use 0-based positions in method LeafSet::includes
* use 0-based positions in methods LeafSet::{add,remove}
* use 0-based positions in methods is_pruned,is_pruned_root,is_compacted
* use 0-based positions in methods PruneList::append
* use 0-based positions in methods append_pruned_subtree
* use 0-based positions in method calculate_next_leaf_shift
* use 0-based positions in method append_single
* use 0-based positions in method calculate_next_shift
* use 0-based positions in method segment_pos_range
* use 0-based positions in method reconstruct_root
* use 0-based positions in method validate_with
* use 0-based positions in method validate
* rename size (formerly last_pos) to mmr_size
* use 0-based positions in Segment's hash_pos and leaf_pos
* minimize use of saturating_sub(1) and rename some pos/idx to size
* use 0-based positions in methods get_output_pos
* use 0-based positions in method get_unspent_output_at
* use 0-based positions in method get_header_hash
* use 0-based positions in methods MerkleProof::verify{,_consume}
* use 0-based positions in method cleanup_subtree
* don't allow 0 in prunelist bitmap
* use 0-based positions in methods get_{,leaf_}shift
* rename some 1-based pos to pos1; identify TODO
* Address yeastplume's PR review comments
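For readers following the 0-based switch above: under the new convention the first MMR node sits at position 0 and a node is a leaf exactly when its height is 0. A self-contained sketch of the height calculation using the classic jump-left formulation (grin's own implementation is structured differently but computes the same mapping):

```rust
/// Height of the node at 0-based postorder position `pos0` in an MMR.
fn bintree_postorder_height(pos0: u64) -> u64 {
    let mut pos = pos0 + 1; // work in the familiar 1-based form internally
    // Jump to the left sibling subtree until `pos` has the all-ones form
    // 2^h - 1, which only positions on the leftmost path of the tree have.
    while ((pos + 1) & pos) != 0 {
        let msb = 63 - pos.leading_zeros() as u64;
        pos -= (1u64 << msb) - 1;
    }
    // pos == 2^(h+1) - 1 for a node of height h.
    (64 - (pos + 1).leading_zeros() as u64) - 2
}

/// With 0-based positions, leaves sit at 0, 1, 3, 4, 7, 8, 10, 11, ...
fn is_leaf(pos0: u64) -> bool {
    bintree_postorder_height(pos0) == 0
}
```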
* refactor prune_list with aim of allowing pruned subtree appending
* add test coverage around pmmr::is_leaf() and pmmr::bintree_leaf_pos_iter()
* comments
* cleanup
* implement append pruned subtree for prune_list
* commit
* we can now append to prune_list
* fix our prune_list corruption...
* rework how we rewrite the prune list during compaction
* test coverage for improved prune list api
* continuing to merge
* finish merge, tests passing again
* add function pmmr_leaf_to_insertion_index, and modify bintree_leaf_pos_iter to use it. Note there's still an unwrap that needs to be dealt with sanely
* change pmmr_leaf_to_insertion_index to a simpler version + handle conversion between 1-based and 0-based in bintree_leaf_pos_iter
Co-authored-by: antiochp <30642645+antiochp@users.noreply.github.com>
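The insertion_to_pmmr_index / pmmr_leaf_to_insertion_index pair discussed above encodes the mapping between leaf numbers and MMR positions. In 0-based terms the forward direction has a simple closed form; the reverse direction below is a naive scan purely for clarity (a real helper inverts the formula directly), and the function names here are illustrative:

```rust
/// 0-based MMR position at which the n-th leaf (n is 0-based) is inserted.
/// Each set bit of `n` marks a completed subtree whose parent nodes push
/// later insertions further to the right.
fn leaf_index_to_pmmr_pos(n: u64) -> u64 {
    2 * n - u64::from(n.count_ones())
}

/// Reverse lookup: leaf number of a position that is known to be a leaf,
/// or None if `pos0` is an interior (parent) node.
fn pmmr_pos_to_leaf_index(pos0: u64) -> Option<u64> {
    (0..=pos0).find(|&n| leaf_index_to_pmmr_pos(n) == pos0)
}
```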
* wip
* use range beneath subtree for efficient is_pruned check
* various iterators over the prune list
* improved prune list iter and subtree handling
* use take_while so unpruned iterators are not infinite
* add segmenter for generating segments from txhashset with consistent rewind
* rework segmenter to take a txhashset wrapped in rwlock
rework our rewindable pmmr so we can convert to readonly easily
* placeholder code for rewinding readonly txhashset extension to build a rangeproof segment
* segment creation for outputs/rangeproofs/kernels/bitmaps
* placeholder segment impl
* commit
* rework segmenter to use a cached bitmap (rewind is expensive)
* cache segmenter instance based on current archive header
* integrate the real segment and segment identifier with our segmenter
* exercise the segmenter code on chain init
* wrap accumulator in an arc, no need to clone each time
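One way to read the "cache segmenter instance based on current archive header" item above: keep the expensively rewound view alive together with the header hash it was built for, and rebuild only when that hash changes. A sketch with hypothetical stand-in types (`Segmenter` and `SegmenterCache` here are placeholders, not the chain's real API):

```rust
use std::sync::{Arc, RwLock};

/// Stand-ins for the real chain types, for illustration only.
#[derive(Clone, Copy, PartialEq, Eq)]
struct Hash([u8; 32]);
struct Segmenter { /* rewound txhashset view, cached bitmap, ... */ }

/// Caches the segmenter keyed by the archive header it was built against.
struct SegmenterCache {
    cached: RwLock<Option<(Hash, Arc<Segmenter>)>>,
}

impl SegmenterCache {
    fn get_or_build<F>(&self, archive_header: Hash, build: F) -> Arc<Segmenter>
    where
        F: FnOnce() -> Segmenter,
    {
        // Reuse the existing instance if the archive header has not moved.
        if let Some((h, s)) = self.cached.read().unwrap().as_ref() {
            if *h == archive_header {
                return s.clone();
            }
        }
        // Otherwise rebuild (the expensive rewind) and cache the result.
        let segmenter = Arc::new(build());
        *self.cached.write().unwrap() = Some((archive_header, segmenter.clone()));
        segmenter
    }
}
```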
* Chunk generation and validation
* Rename chunk -> segment
* Missed a few
* Generate and validate merkle proof
* Fix bugs in generation and validation
* Add test for unprunable MMR of various sizes
* Add missing docs
* Remove unused functions
* Remove segment error variant on chain error type
* Simplify calculation by using a Vec instead of HashMap
* Use vectors in segment definition
* Compare subtree root during tests
* Add test of segments for a prunable mmr
* Remove assertion
* Only send intermediary hashes for prunable MMRs
* Get hash from file directly
* Require both leaves if one of them is not pruned
* More pruning tests
* Add segment (de)serialization
* Require sorted vectors in segment deser
* Store pos and data separately in segment
* Rename log_size -> height
* Fix bitmap index in root calculation
* Add validation function for output (bitmap) MMRs
* Remove left over debug statements
* Fix test
* Edge case: final segment with uneven number of leaves
* Use last_pos instead of segment_last_pos
* Simplify pruning in test
* Add leaf and hash iterators
* Support fully pruned segments
* Drop backend before deleting dir in pruned_segment test
* Simplify output of first_unpruned_parent
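Pulling the segment items above together, a segment payload roughly carries positions and data in separate sorted vectors plus the hashes needed for validation. An indicative sketch only; field names follow the changelog wording, not grin's exact definition:

```rust
/// Illustrative shape of a segment of a (possibly prunable) MMR.
struct Segment<T> {
    /// (height, idx): each segment covers 2^height leaves, idx selects which.
    identifier: (u8, u64),
    /// Positions of intermediary hashes supplied for pruned subtrees
    /// (only needed for prunable MMRs).
    hash_pos: Vec<u64>,
    hashes: Vec<[u8; 32]>,
    /// Positions of the unpruned leaves in this segment, and their data,
    /// stored separately rather than as (pos, data) pairs.
    leaf_pos: Vec<u64>,
    leaf_data: Vec<T>,
    /// Merkle proof connecting the segment root to the MMR root.
    proof: Vec<[u8; 32]>,
}

/// The "require sorted vectors in segment deser" check is cheap to enforce:
fn is_strictly_sorted(v: &[u64]) -> bool {
    v.windows(2).all(|w| w[0] < w[1])
}
```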
* Introduce CommitOnly variant of Inputs.
Introduce CommitWrapper so we can sort commit-only inputs correctly.
* remember to re-sort if converting
* write inputs based on variant and protocol version
* read and write protocol version specific inputs
* store full blocks in local db in v3
convert to v2 when relaying to v2 peers
* add debug version_str for inputs
* no assumptions about spent index sort order
* add additional version debug logs
* fix ser/deser tests for proto v3
* cleanup coinbase maturity
* rework pool to better handle v2 conversion robustly
* cleanup txpool add_to_pool
* fix nrd kernel test
* move init conversion earlier
* cleanup
* cleanup based on PR feedback
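A rough sketch of the two input representations described in this last group; the names follow the changelog but this is not claimed to be grin's exact definition. Full inputs carry features plus a commitment, the commit-only form wraps just the commitment, and a conversion must re-sort by commitment to keep the serialized order canonical:

```rust
/// Placeholder commitment type (a Pedersen commitment serializes to 33 bytes).
#[derive(Clone, PartialEq, Eq, PartialOrd, Ord)]
struct Commitment([u8; 33]);

/// Input as written for older peers: features alongside the commitment.
#[derive(Clone)]
struct Input {
    features: u8,
    commit: Commitment,
}

/// Commit-only input used by the newer protocol version.
#[derive(Clone)]
struct CommitWrapper {
    commit: Commitment,
}

enum Inputs {
    CommitOnly(Vec<CommitWrapper>),
    FeaturesAndCommit(Vec<Input>),
}

impl Inputs {
    /// Convert to the commit-only form; going back the other way (for relay
    /// to older peers) would need the features looked up from the local db.
    fn to_commit_only(&self) -> Inputs {
        match self {
            Inputs::CommitOnly(v) => Inputs::CommitOnly(v.clone()),
            Inputs::FeaturesAndCommit(v) => {
                let mut wrapped: Vec<CommitWrapper> = v
                    .iter()
                    .map(|i| CommitWrapper { commit: i.commit.clone() })
                    .collect();
                // remember to re-sort if converting
                wrapped.sort_by(|a, b| a.commit.cmp(&b.commit));
                Inputs::CommitOnly(wrapped)
            }
        }
    }
}
```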