* make sure header PMMR is init correctly based on header_head from db
* fix to ensure header PMMR is consistent with header_head in db on chain init
* change order of operations - validate header earlier
and avoid applying header to header PMMR before validation
as this potentially leaves the PMMR in a tricky state to rewind from
* variants for output_pos linked list entries (head/tail/middle/unique)
next and prev stored as Vec<u8> lmdb keys
* get_pos on enum
* break list and list entries out into separate enums
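A minimal sketch of the entry variants described above, assuming illustrative names rather than the actual grin types: every entry exposes its MMR position via get_pos, and the head/tail/middle variants carry the next/prev pointers that get serialized into Vec<u8> lmdb keys.

```rust
// Hypothetical sketch of the output_pos linked list entries (names are
// illustrative). Head/Tail/Middle form a doubly linked list spanning
// multiple entries; Unique represents a single-entry list.
enum ListEntry {
    Head { pos: u64, next: u64 },
    Tail { pos: u64, prev: u64 },
    Middle { pos: u64, next: u64, prev: u64 },
    Unique { pos: u64 },
}

impl ListEntry {
    // get_pos on the enum: every variant exposes its underlying MMR position.
    fn get_pos(&self) -> u64 {
        match self {
            ListEntry::Head { pos, .. }
            | ListEntry::Tail { pos, .. }
            | ListEntry::Middle { pos, .. }
            | ListEntry::Unique { pos } => *pos,
        }
    }
}
```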
* track output features in the new output_pos index, so we can determine coinbase maturity
* push entry impl for none and unique
* some test coverage for output_pos_list
* commit
* wip - FooListEntry
* use instance of the index
* linked list of output_pos and commit_pos both now supported
* linked_list
* cleanup and rename
* rename
* peek_pos
* push some, peek some, pop some
* cleanup
* commit pos
cleanup
* split list and entry out into separate db prefixes
* cleanup and add placeholder for pop_back
* pop_pos_back (for popping off the back of the linked list)
test coverage for pop_pos_back
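A hedged sketch of the list operations from the commits above, using an in-memory Vec as a stand-in for the lmdb-backed list (names and signatures are assumptions): push adds to the front, peek/pop operate on the front, and pop_pos_back pops off the back.

```rust
// Hypothetical in-memory stand-in for the lmdb-backed output_pos list.
struct PosList {
    positions: Vec<u64>, // front of the linked list is index 0
}

impl PosList {
    fn push_pos(&mut self, pos: u64) {
        // pos must always increase in the index
        assert!(self.positions.first().map_or(true, |&head| pos > head));
        self.positions.insert(0, pos);
    }

    fn peek_pos(&self) -> Option<u64> {
        self.positions.first().copied()
    }

    fn pop_pos(&mut self) -> Option<u64> {
        if self.positions.is_empty() {
            None
        } else {
            Some(self.positions.remove(0))
        }
    }

    // Pop off the back (oldest entry) of the linked list.
    fn pop_pos_back(&mut self) -> Option<u64> {
        self.positions.pop()
    }
}
```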
* wip
* placeholder for prune via a trait
pos must always increase in the index
* rewind kernel_pos_idx when calling rewind_single_block
* RewindableListIndex with rewind support.
* test coverage for rewindable list index
* test coverage for rewind back to 0
* rewind past end of list
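Building on the PosList stand-in above, a sketch of what rewind support might look like (the trait name is taken from the commit; the signature is an assumption): rewinding drops every position past the rewind point, and rewinding back to 0 clears the list entirely.

```rust
// Hypothetical rewind support: drop all positions pushed after `rewind_pos`.
trait RewindableListIndex {
    fn rewind(&mut self, rewind_pos: u64);
}

impl RewindableListIndex for PosList {
    fn rewind(&mut self, rewind_pos: u64) {
        // Positions only ever increase, so retaining entries at or below the
        // rewind point is sufficient; rewinding to 0 empties the list.
        self.positions.retain(|&pos| pos <= rewind_pos);
    }
}
```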
* add tests for kernel_pos_idx with multiple commits
* commit
* cleanup
* hook NRD relative lock height validation into block processing and tx validation
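The NRD rule being hooked in here, roughly: a kernel with a relative lock height is only valid if the previous instance of the same kernel excess landed at least that many blocks earlier. A minimal sketch, assuming illustrative names (the previous height would come from the kernel_pos index):

```rust
// Hypothetical sketch of NRD relative lock height validation.
fn check_nrd_relative_height(
    block_height: u64,
    prev_kernel_height: Option<u64>, // from the kernel_pos index
    relative_height: u64,
) -> Result<(), String> {
    if let Some(prev) = prev_kernel_height {
        // A duplicate kernel must be at least `relative_height` blocks old.
        if block_height.saturating_sub(prev) < relative_height {
            return Err("NRD relative lock height not met".to_string());
        }
    }
    Ok(())
}
```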
* cleanup
* set local chain type for kernel_idx tests
* add test coverage for NRD rules in block processing
* NRD test coverage and cleanup
* NRD relative height 1 test
* test coverage for NRD kernels in block processing
* cleanup
* start of test coverage for txpool NRD kernel rules
* wip
* rework pool tests to use real chain (was mock chain) to better reflect reality (tx/block validation rules etc.)
* cleanup
* cleanup pruneable trait for kernel pos index
* add clear() to kernel_pos idx and test coverage
* hook kernel_pos rebuild into node startup, compaction and fast sync
* verify full NRD history on fast sync
* return early if nrd disabled
* fix header sync issue
* wip
* commit
* add block level validation rule around NRD kernels and HF3 block height
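A minimal sketch of the block level rule, assuming illustrative names and that the NRD-ness of each kernel is already known: blocks below the HF3 height must not contain NRD kernels at all.

```rust
// Hypothetical sketch: reject any NRD kernel in a block prior to HF3.
fn validate_no_nrd_before_hf3(
    block_height: u64,
    hf3_height: u64,
    kernel_is_nrd: &[bool],
) -> Result<(), String> {
    if block_height < hf3_height && kernel_is_nrd.iter().any(|&nrd| nrd) {
        return Err("NRD kernels are not valid before HF3".to_string());
    }
    Ok(())
}
```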
* wip
* test coverage for kernel ser/deser for NRD kernels
* add some type safety around ser/deser of NRD relative height
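A sketch of the kind of type safety meant here: a newtype that can only be constructed from an in-range value, so ser/deser cannot smuggle an invalid relative height through. The u16 representation and the one-week upper bound are assumptions.

```rust
// Hypothetical validated newtype for the NRD relative height.
#[derive(Debug, Clone, Copy, PartialEq)]
struct NRDRelativeHeight(u16);

impl NRDRelativeHeight {
    // Assumed upper bound: one week of blocks at one block per minute.
    const MAX: u16 = 7 * 24 * 60;

    fn new(height: u16) -> Result<Self, String> {
        if height == 0 || height > Self::MAX {
            Err(format!("invalid NRD relative height: {}", height))
        } else {
            Ok(NRDRelativeHeight(height))
        }
    }
}
```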
* cleanup
* cleanup tx sig handling and add tests around NRD kernel sig
* test coverage for chain apply block containing NRD kernel
* verify kernel variants when adding tx to pool
NRD kernels will not be accepted until HF3
* add txpool test for NRD kernel handling
* fix merge
* fix api for NRD kernels
* commit
* wip - feature flag for NRD kernel support
* wip
* global config and thread local feature flag for NRD kernel support
* add feature flag support for NRD kernels
default to false when starting up the node
allow it to be set to true in the context of individual tests
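A hedged sketch of the global-plus-thread-local flag pattern described above (names are illustrative): the node sets the global once at startup, while individual tests flip a thread-local override without affecting tests on other threads.

```rust
use std::cell::Cell;
use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical NRD feature flag; defaults to false on node startup.
static GLOBAL_NRD_ENABLED: AtomicBool = AtomicBool::new(false);

thread_local! {
    // None means "fall back to the global flag".
    static LOCAL_NRD_ENABLED: Cell<Option<bool>> = Cell::new(None);
}

fn set_global_nrd_enabled(enabled: bool) {
    GLOBAL_NRD_ENABLED.store(enabled, Ordering::Relaxed);
}

// Used in the context of individual tests.
fn set_local_nrd_enabled(enabled: bool) {
    LOCAL_NRD_ENABLED.with(|flag| flag.set(Some(enabled)));
}

fn is_nrd_enabled() -> bool {
    LOCAL_NRD_ENABLED
        .with(|flag| flag.get())
        .unwrap_or_else(|| GLOBAL_NRD_ENABLED.load(Ordering::Relaxed))
}
```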
* feature flag
* explicit testing of NRD kernel via feature flag
* add chain_type and feature flag logging on startup
* PR feedback and cleanup
* PR feedback
* Do not allocate a String for a static status, use Cow instead, e.g. we allocate a "Running" string at every refresh
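A sketch of the Cow idea with an illustrative status type: static statuses borrow a &'static str and allocate nothing per refresh; only genuinely dynamic statuses build a String.

```rust
use std::borrow::Cow;

// Hypothetical status type; only the dynamic variant allocates.
enum Status {
    Running,
    Syncing { height: u64 },
}

fn status_text(status: &Status) -> Cow<'static, str> {
    match status {
        Status::Running => Cow::Borrowed("Running"), // no per-refresh String
        Status::Syncing { height } => Cow::Owned(format!("Syncing at {}", height)),
    }
}
```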
* Use static dispatch for Views instead of Box<dyn View>
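One way to get static dispatch for a closed set of views, sketched with illustrative types (not necessarily the exact approach taken): an enum over the concrete view types instead of Box<dyn View>.

```rust
trait View {
    fn draw(&self);
}

struct StatusView;
struct PeersView;

impl View for StatusView {
    fn draw(&self) { /* render status */ }
}
impl View for PeersView {
    fn draw(&self) { /* render peers */ }
}

// The match compiles to direct calls: no heap allocation, no vtable.
enum Views {
    Status(StatusView),
    Peers(PeersView),
}

impl Views {
    fn draw(&self) {
        match self {
            Views::Status(v) => v.draw(),
            Views::Peers(v) => v.draw(),
        }
    }
}
```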
* Update only the current view
* Simplify Listener trait
* cleanup how we handle key splitting for transaction offset
add test to demonstrate a pair of transaction halves sharing the same kernel excess
* cleanup
* cleanup
We have a pattern in the code: return a Vec, turn it into an iterator, filter, and collect into another Vec. The idea was to return an iterator where possible to avoid allocating intermediate Vecs.
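A sketch of the pattern change: returning impl Iterator lets the caller filter and collect exactly once, instead of materializing an intermediate Vec.

```rust
// Before (illustrative): allocate a Vec, then filter into another Vec.
fn even_squares_vec(limit: u64) -> Vec<u64> {
    let squares: Vec<u64> = (0..limit).map(|n| n * n).collect();
    squares.into_iter().filter(|n| n % 2 == 0).collect()
}

// After: return an iterator and let the caller decide if/when to collect.
fn even_squares(limit: u64) -> impl Iterator<Item = u64> {
    (0..limit).map(|n| n * n).filter(|n| n % 2 == 0)
}
```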
* Introduce GLOBAL_CHAIN_TYPE and make CHAIN_TYPE thread_local.
This makes testing more explicit and significantly more robust.
* set_local_chain_type() in tests
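The global side mirrors the feature-flag sketch above; here is a sketch of the thread-local, test-facing half (the ChainType variants are illustrative): each test sets its chain type explicitly on its own thread, so parallel tests cannot trample shared state.

```rust
use std::cell::Cell;

// Hypothetical chain type enum; variants are illustrative.
#[derive(Clone, Copy, PartialEq, Debug)]
enum ChainType {
    Mainnet,
    AutomatedTesting,
}

thread_local! {
    static CHAIN_TYPE: Cell<Option<ChainType>> = Cell::new(None);
}

fn set_local_chain_type(chain_type: ChainType) {
    CHAIN_TYPE.with(|t| t.set(Some(chain_type)));
}

#[test]
fn some_chain_specific_test() {
    // Explicit per-test, per-thread configuration.
    set_local_chain_type(ChainType::AutomatedTesting);
    // ... test body ...
}
```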
* cleanup - weird
* get pool tests working with explicit local chain_type config
* core tests working with explicit local chain_type
* p2p tests working with explicit local chain_type
* store tests working
* cleanup, feedback
Currently we call it in a loop without any delay, which burns CPU cycles for little value.
On my machine I see a decrease in CPU usage from 12% to 6% on a synced node after applying this fix.
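The fix, in essence (the polling shape and duration are illustrative): back off briefly when there is nothing to do, instead of spinning.

```rust
use std::thread;
use std::time::Duration;

// Illustrative sketch: without the sleep this loop spins and burns CPU
// even when there is no work pending.
fn run_loop(mut poll_for_work: impl FnMut() -> bool) {
    loop {
        if !poll_for_work() {
            thread::sleep(Duration::from_millis(10));
        }
    }
}
```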
We have a method get which returns a Vec; it's used in cases where we can't implement FromLmdbBytes or Readable on a desired type, e.g. when both the traits and the type are external to the current crate. This approach requires an extra allocation of a Vec, so this PR introduces a method which accepts a deserializer as a closure, allowing us to deserialize in place without an intermediate heap allocation.
It's still possible to get a Vec if needed, just pass a Vec constructor to get_with.
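A hedged sketch of the get_with idea (signatures are assumptions): the caller passes a closure that deserializes straight from the borrowed bytes, and getting a Vec back is just a matter of passing a Vec constructor.

```rust
// Hypothetical lmdb-style get that deserializes in place. `lookup` stands in
// for the actual db read returning borrowed bytes.
fn get_with<T>(
    key: &[u8],
    lookup: impl Fn(&[u8]) -> Option<&[u8]>,
    deserialize: impl Fn(&[u8]) -> T,
) -> Option<T> {
    lookup(key).map(|bytes| deserialize(bytes))
}

// Deserialize in place, no intermediate Vec:
//   let n: Option<u64> =
//       get_with(key, db_lookup, |b| u64::from_le_bytes(b.try_into().unwrap()));
// Or still get a Vec by passing a Vec constructor:
//   let raw: Option<Vec<u8>> = get_with(key, db_lookup, |b| b.to_vec());
```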
We have to make an extra allocation per db get request because the key generation function to_key takes a Vec. Taking a byte slice (AsRef<[u8]> to be precise) also simplifies the code a bit.
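A sketch of the to_key shape (signatures are assumptions): taking AsRef<[u8]> lets callers pass a &str, a byte slice, or an existing Vec without allocating on their side.

```rust
// Hypothetical prefixed-key builder over anything byte-like.
fn to_key<K: AsRef<[u8]>>(prefix: u8, k: K) -> Vec<u8> {
    let k = k.as_ref();
    let mut key = Vec::with_capacity(k.len() + 1);
    key.push(prefix);
    key.extend_from_slice(k);
    key
}

// All of these work without the caller building a Vec first:
//   to_key(b'H', "header_head");
//   to_key(b'H', &hash_bytes[..]);
```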
Currently Writable accepts the Write trait as a type parameter but Readable takes Read as a trait object, which is asymmetrical and also less performant. This PR changes the Readable trait and all places where it's used.
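A sketch of the symmetry being restored; the trait shapes below are assumptions based on the description.

```rust
use std::io;

trait Writable {
    // Writable was already generic over the writer...
    fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()>;
}

trait Readable: Sized {
    // ...and after this change Readable is generic over the reader too,
    // rather than taking `&mut dyn Read` (a trait object).
    fn read<R: io::Read>(reader: &mut R) -> io::Result<Self>;
}
```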
The tx pool takes some parameters as trait objects. This is not idiomatic Rust; in this particular case we should use generic types. A trait object makes sense when we accept, at runtime, different concrete types implementing the trait as values of the same field. That is not the case here. Trait objects come with a price: instead of method dispatch at compile time we get runtime dispatch. My guess is we did it to avoid cluttering the code with type parameters, which is understandable but still suboptimal.
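A sketch of the difference with illustrative pool types (not the actual ones):

```rust
// Hypothetical chain adapter trait used by the pool.
trait BlockChain {
    fn chain_head_height(&self) -> u64;
}

// Before (illustrative): runtime dispatch through a trait object.
struct DynPool {
    chain: Box<dyn BlockChain>,
}

// After: compile-time dispatch via a generic type parameter. The concrete
// chain type is fixed for a given pool instance, so the trait object's
// runtime flexibility buys nothing here.
struct Pool<B: BlockChain> {
    chain: B,
}

impl<B: BlockChain> Pool<B> {
    fn head_height(&self) -> u64 {
        // Statically dispatched, inlinable call.
        self.chain.chain_head_height()
    }
}
```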
Currently we pass a Vec. This requires an extra allocation and a copy of all elements whenever the caller doesn't already have a Vec, which is at least 95% of cases.
Another, smaller issue: we have a function util::to_hex, and some structs implement to_hex() on top of it, so the code has a mix of both. This PR introduces a trait and a blanket impl for AsRef<[u8]>, which brings a uniform API (obj.to_hex()). One unfortunate case is arrays larger than 32 elements: Rust doesn't implement AsRef for them, so they require an ugly hack, (&array[..]).to_hex().
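A sketch of the blanket impl (the trait name matches the description; the encoding details are illustrative):

```rust
trait ToHex {
    fn to_hex(&self) -> String;
}

// Blanket impl: anything byte-like gets a uniform obj.to_hex().
impl<T: AsRef<[u8]>> ToHex for T {
    fn to_hex(&self) -> String {
        self.as_ref().iter().map(|b| format!("{:02x}", b)).collect()
    }
}

// Usage:
//   let h = [0u8; 32].to_hex();            // AsRef exists for arrays up to 32
//   let h = (&bigger_array[..]).to_hex();  // the >32 workaround noted above
```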