When we send a txhashset archive, the peer's thread is busy sending it and can't send other messages, e.g. pings. If the network connection is slow, a buffer capacity of 10 may not be enough, hence the peer gets dropped.
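A minimal sketch of the failure mode, using a bounded std::sync::mpsc queue as a stand-in for grin's real per-peer writer (all names here are illustrative, not grin's API):

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // Bounded queue: once it holds 10 undelivered messages, try_send fails.
    let (tx, rx) = sync_channel::<Vec<u8>>(10);

    // Simulate a slow connection: nothing is drained from `rx` while a
    // large txhashset archive occupies the writer thread.
    for i in 0..12 {
        match tx.try_send(vec![0u8; 32]) {
            Ok(()) => println!("queued message {}", i),
            // With capacity 10, messages 10 and 11 land here; treating a
            // full buffer as fatal is what caused the peer drop.
            Err(TrySendError::Full(_)) => println!("buffer full at {}", i),
            Err(TrySendError::Disconnected(_)) => break,
        }
    }
    drop(rx);
}
```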
Safer attempt to address #2929 in 2.0.0
* introduce protocol version to deserialize and read
* thread protocol version through our reader
* example protocol version access in kernel read
* fix our StreamingReader impl (WouldBlock woes)
* debug log progress of txhashset download
* create 2.0.0 branch
* fix humansize version
* update grin.yml version
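A hedged sketch of the protocol-version threading described in the items above; the types are simplified stand-ins for grin's ser/deser traits, and the v1/v2 field difference is purely hypothetical:

```rust
// A version travels with the reader so deserialization can branch on it.
#[derive(Clone, Copy, PartialEq, PartialOrd)]
struct ProtocolVersion(u32);

trait Reader {
    fn protocol_version(&self) -> ProtocolVersion;
    fn read_u64(&mut self) -> u64;
}

struct VecReader {
    version: ProtocolVersion,
    words: Vec<u64>,
}

impl Reader for VecReader {
    fn protocol_version(&self) -> ProtocolVersion {
        self.version
    }
    fn read_u64(&mut self) -> u64 {
        self.words.remove(0)
    }
}

// "example protocol version access in kernel read", in miniature.
fn read_kernel_fields<R: Reader>(r: &mut R) -> (u64, u64) {
    let fee = r.read_u64();
    let lock_height = if r.protocol_version() >= ProtocolVersion(2) {
        r.read_u64()
    } else {
        0 // hypothetical: pretend v1 never carried this field on the wire
    };
    (fee, lock_height)
}

fn main() {
    let mut r = VecReader { version: ProtocolVersion(2), words: vec![7, 100] };
    assert_eq!(read_kernel_fields(&mut r), (7, 100));
}
```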
* PoW HardFork (#2866)
* allow version 2 blocks for next 6 months
* add cuckarood.rs with working tests
* switch cuckaroo to cuckarood at right heights
* reorder to reduce conditions
* remove _ prefix on used args; fix typo
* Make Valid Header Version dependent on ChainType
* Rustfmt
* Add tests, uncomment header v2
* Rustfmt
* Add FLOONET_FIRST_HARD_FORK height and simplify logic
* assume floonet stays closer to avg 60s block time
* move floonet hf forward by half a day
* update version in new block when previous no longer valid
* my next commit:-)
* micro optimization
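A hedged sketch of the height-gated header versions behind the hard-fork commits above; the interval matches grin's usual 6 months of 60s blocks, but the real consensus code also branches on ChainType (mainnet vs. floonet heights), which is omitted here:

```rust
const HARD_FORK_INTERVAL: u64 = 262_080; // ~6 months of 60s blocks

fn valid_header_version(height: u64, version: u16) -> bool {
    // "allow version 2 blocks for next 6 months": each 6-month window
    // expects the next header version, forcing miners to upgrade.
    if height < HARD_FORK_INTERVAL {
        version == 1
    } else if height < 2 * HARD_FORK_INTERVAL {
        version == 2
    } else {
        false // later forks not yet defined at this point
    }
}

fn main() {
    assert!(valid_header_version(0, 1));
    assert!(!valid_header_version(262_080, 1));
    assert!(valid_header_version(262_080, 2));
}
```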
* Support new Bulletproof rewind scheme (#2848)
* Update keychain with new rewind scheme
* Refactor: proof builder trait
* Update tests, cleanup
* rustfmt
* Move conversion of SwitchCommitmentType
* Add proof build trait to tx builders
* Cache hashes in proof builders
* Proof builder tests
* Add ViewKey struct
* Fix some warnings
* Zeroize proof builder secrets on drop
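A hedged sketch of the proof-builder idea from the commits above: a per-output rewind nonce is derived from a cached keyed hash plus the output commitment, so a holder of the view key can later rewind the bulletproof. Types and the hash are simplified stand-ins (the real code hashes into a secp256k1 secret key):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

trait ProofBuild {
    // Nonce a wallet (or auditor with a view key) can re-derive per output.
    fn rewind_nonce(&self, commit: &[u8]) -> u64;
}

struct ProofBuilder {
    // "Cache hashes in proof builders": the keyed part is computed once.
    rewind_hash: u64,
}

impl ProofBuild for ProofBuilder {
    fn rewind_nonce(&self, commit: &[u8]) -> u64 {
        let mut h = DefaultHasher::new();
        self.rewind_hash.hash(&mut h);
        commit.hash(&mut h);
        h.finish() // stand-in for a BLAKE2b-derived secret key
    }
}

fn main() {
    let b = ProofBuilder { rewind_hash: 42 };
    println!("per-output rewind nonce: {}", b.rewind_nonce(&[7u8; 33]));
}
```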
* Modify mine_block to use wallet V2 API (#2892)
* update mine_block to use V2 wallet API
* rustfmt
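Roughly what "use V2 wallet API" means here, sketched with serde_json: the node talks JSON-RPC 2.0 to the wallet instead of bespoke endpoints. The method and params below follow grin-wallet's foreign API loosely and should be treated as an assumption:

```rust
use serde_json::json;

fn main() {
    // Hypothetical request body mine_block would POST to the wallet
    // listener (e.g. the usual default http://127.0.0.1:3415/v2/foreign).
    let req = json!({
        "jsonrpc": "2.0",
        "method": "build_coinbase",
        "params": { "block_fees": { "fees": 0, "height": 123, "key_id": null } },
        "id": 1
    });
    println!("{}", serde_json::to_string_pretty(&req).unwrap());
}
```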
* Add version endpoint to node API, rename pool/push (#2897)
* add node version API, tweak pool/push parameter
* rustfmt
* Update version API call (#2899)
* Update version number for next (potential) release
* zeroize: Upgrade to v0.9 (#2914)
* zeroize: Upgrade to v0.9
* missed Cargo.lock
* [PENDING APPROVAL] put phase outs of C32 and beyond on hold (#2714)
* put phase outs of C32 and beyond on hold
* update tests for phaseouts on hold
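A hedged reconstruction of the consensus change above: only C31 keeps its graph-weight phase-out schedule, while C32 and beyond stay at full weight. Constants are grin's usual values, but the authoritative code lives in consensus.rs:

```rust
const YEAR_HEIGHT: u64 = 524_160; // 60s blocks
const WEEK_HEIGHT: u64 = YEAR_HEIGHT / 52;
const BASE_EDGE_BITS: u8 = 24;
const MIN_EDGE_BITS: u8 = 31;

fn graph_weight(height: u64, edge_bits: u8) -> u64 {
    let mut xpr_edge_bits = edge_bits as u64;
    let bits_over_min = edge_bits.saturating_sub(MIN_EDGE_BITS);
    let expiry_height = (1u64 << bits_over_min) * YEAR_HEIGHT;
    // "on hold": only edge_bits < 32 (i.e. C31) keeps a phase-out schedule.
    if edge_bits < 32 && height >= expiry_height {
        xpr_edge_bits =
            xpr_edge_bits.saturating_sub(1 + (height - expiry_height) / WEEK_HEIGHT);
    }
    (2u64 << (edge_bits - BASE_EDGE_BITS)) * xpr_edge_bits
}

fn main() {
    println!("C31 weight at height 0: {}", graph_weight(0, 31));
}
```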
* Don't wait for p2p-server thread (#2917)
Currently p2p.stop() stops and waits for all peers to exit, which is basically all we need. However, we also run a TCP listener in this thread, and it is blocked on `accept` most of the time. We make an attempt to stop it, but that only works if an incoming connection arrives during shutdown, which is a weak guarantee.
This fix removes joining on the p2p-server thread; it stops all peers and makes an attempt to stop the listener.
Fixes [#2906]
* rustfmt
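A minimal sketch of the accept-loop problem, assuming std::net types: a blocking accept only returns when a connection arrives, so polling a non-blocking listener against a stop flag is one way to make shutdown reliable:

```rust
use std::io;
use std::net::TcpListener;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

fn main() -> io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    listener.set_nonblocking(true)?;
    let stop = Arc::new(AtomicBool::new(false));

    let stop2 = Arc::clone(&stop);
    let handle = thread::spawn(move || {
        while !stop2.load(Ordering::Relaxed) {
            match listener.accept() {
                Ok((_stream, addr)) => println!("peer connected: {}", addr),
                // No pending connection: sleep briefly, then re-check the flag.
                Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => {
                    thread::sleep(Duration::from_millis(50));
                }
                Err(e) => eprintln!("accept error: {}", e),
            }
        }
    });

    stop.store(true, Ordering::Relaxed);
    handle.join().unwrap(); // joining is safe here; grin chose not to wait
    Ok(())
}
```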
* generate txhashset archives on 250 block intervals.
* moved txhashset_archive_interval to global and added a simple test.
* cleaning up the tests and adding license.
* increasing cleanup duration to 24 hours to prevent premature deletion of the current txhashset archive
* bug fixes and changing request_state to request height using archive_interval.
* removing stopstate from chain_test_helper to fix compile issue
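A small sketch of the interval arithmetic, assuming the rounding-down behavior implied by "request height using archive_interval": archives land on fixed 250-block boundaries so one archive can serve many requests.

```rust
const TXHASHSET_ARCHIVE_INTERVAL: u64 = 250;

// Round a requested height down to the nearest archive boundary.
fn archive_height(requested: u64) -> u64 {
    requested - (requested % TXHASHSET_ARCHIVE_INTERVAL)
}

fn main() {
    assert_eq!(archive_height(1_234), 1_000);
    assert_eq!(archive_height(250), 250);
    assert_eq!(archive_height(249), 0);
}
```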
* Implement simple zeroing of BlindingFactor in Drop
* rustfmt
* Make Debug implementation for BlindingFactor empty
* Implement BlindingFactor zeroing unit test
* mnemonic.rs: fix deprecated warning in test_bip39_random test
* Use zeroize crate to clear BlindingFactor
* Fix comment and implement dummy Debug trait for BlindingFactor
* Fix formatter
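A hedged sketch of the zeroize pattern from the commits above; grin's BlindingFactor wraps a 32-byte secret much like this:

```rust
use zeroize::Zeroize;

struct BlindingFactor([u8; 32]);

impl Drop for BlindingFactor {
    fn drop(&mut self) {
        // Overwrites the bytes in place; zeroize uses volatile writes so
        // the compiler cannot optimize the wipe away.
        self.0.zeroize();
    }
}

// A deliberately uninformative Debug impl, so secrets never leak into logs.
impl std::fmt::Debug for BlindingFactor {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "BlindingFactor(<secret>)")
    }
}

fn main() {
    let bf = BlindingFactor([3u8; 32]);
    println!("{:?}", bf); // prints the redacted form; zeroed on drop
}
```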
I made a suboptimal (aka stupid) decision to stop and wait for peers one by one, which makes shutdown very slow - O(n). This PR decouples sending the stop signal from waiting for a thread to exit. On top of that, in Peers we first send the stop signal to all peers and only then start waiting for them to exit. This gives us constant-time shutdown in most cases.
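A minimal sketch of the two-phase shutdown, with toy peers on OS threads: signal everyone first, then join, so the total wait tracks the slowest peer rather than the sum:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread::{self, JoinHandle};
use std::time::Duration;

struct Peer {
    stop: Arc<AtomicBool>,
    handle: JoinHandle<()>,
}

fn spawn_peer(i: usize) -> Peer {
    let stop = Arc::new(AtomicBool::new(false));
    let s = Arc::clone(&stop);
    let handle = thread::spawn(move || {
        while !s.load(Ordering::Relaxed) {
            thread::sleep(Duration::from_millis(100)); // pretend to work
        }
        println!("peer {} exited", i);
    });
    Peer { stop, handle }
}

fn main() {
    let peers: Vec<Peer> = (0..8).map(spawn_peer).collect();
    // Phase 1: broadcast stop signals without waiting on anyone.
    for p in &peers {
        p.stop.store(true, Ordering::Relaxed);
    }
    // Phase 2: join; total wait is roughly the slowest peer, not the sum.
    for p in peers {
        p.handle.join().unwrap();
    }
}
```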
* Calculate reorg depth in BlockStatus::Reorg enum member
* rustfmt
* Fix reorg height calculation and implement reorg test
* rustfmt
* Report reorg depth in webhook payload
* Add optional depth field to the block webhook JSON reply
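A hedged sketch of surfacing reorg depth, with illustrative names: the status enum carries the depth and the webhook payload gains an optional field only present for reorgs.

```rust
use serde_json::json;

#[allow(dead_code)]
enum BlockStatus {
    Next,
    Fork,
    Reorg(u64), // depth: how many blocks of the old chain were replaced
}

fn webhook_payload(hash: &str, status: &BlockStatus) -> serde_json::Value {
    let mut payload = json!({ "hash": hash });
    if let BlockStatus::Reorg(depth) = status {
        // "Add optional depth field": omitted for non-reorg statuses.
        payload["depth"] = json!(depth);
    }
    payload
}

fn main() {
    // e.g. a new head at height 100 replacing a fork point at height 98
    // gives depth 100 - 98 = 2.
    println!("{}", webhook_payload("000000abc...", &BlockStatus::Reorg(2)));
}
```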
* fix: try to fix issue #2585 by adding block cleanup from db directly.
Signed-off-by: Mike Tang <daogangtang@gmail.com>
* use another, more effective algorithm to clean up old blocks and short-lived blocks.
Signed-off-by: Mike Tang <daogangtang@gmail.com>
* 1. rename iter_lived_blocks to blocks_iter;
2. comments and iterator-call optimizations.
Signed-off-by: Mike Tang <daogangtang@gmail.com>
* Fix locking bug when calling is_on_current_chain() in batch.blocks_iter simply by removing the call, because "we want to delete blocks older (i.e. lower height) than tail.height".
Signed-off-by: Mike Tang <daogangtang@gmail.com>
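A small sketch of the cleanup rule from the last commit, assuming an in-memory stand-in for the block db: delete everything below the tail height, with no per-block is_on_current_chain() check (and thus no lock):

```rust
#[derive(Debug)]
struct Block {
    hash: u64,
    height: u64,
}

fn clean_old_blocks(db: &mut Vec<Block>, tail_height: u64) -> usize {
    let before = db.len();
    // "we want to delete blocks older (i.e. lower height) than tail.height"
    db.retain(|b| b.height >= tail_height);
    before - db.len()
}

fn main() {
    let mut db = vec![
        Block { hash: 1, height: 10 },
        Block { hash: 2, height: 99 },
        Block { hash: 3, height: 100 },
    ];
    let removed = clean_old_blocks(&mut db, 100);
    println!("removed {} old blocks, kept {:?}", removed, db);
}
```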