* Chain init now handles the genesis body properly; the related unit test creates the genesis block with a reward
* Avoid making the block body public by adding a with_reward method
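A minimal sketch of the with_reward idea, with stub types standing in for grin_core's real ones: the block consumes itself and returns a copy with the reward appended, so the body field never needs to be public.

```rust
struct Output;
struct TxKernel;

#[derive(Default)]
struct TransactionBody {
    outputs: Vec<Output>,
    kernels: Vec<TxKernel>,
}

struct Block {
    body: TransactionBody, // stays private to the module
}

impl Block {
    // Consume the block and return one with the reward output and kernel
    // appended; callers never touch `body` directly.
    fn with_reward(mut self, reward_out: Output, reward_kern: TxKernel) -> Block {
        self.body.outputs.push(reward_out);
        self.body.kernels.push(reward_kern);
        self
    }
}
```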
* apply_block now works in all genesis cases
* start wallet command refactoring
* another re-structuring attempt
* rustfmt
* begin splitting up wallet commands
* rustfmt
* clean up wallet arg checking
* rustfmt
* macro for arg parsing
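A hedged sketch of the kind of macro this refers to (macro and error-type names are illustrative, not the actual ones): turn a missing clap argument into a wallet parse error instead of repeating the same unwrap-and-complain code in every command handler.

```rust
#[derive(Debug)]
enum ParseError {
    ArgumentError(String),
}

// Extract a required argument from a clap::ArgMatches, or produce a
// uniform error; each command handler can then just use `?`.
macro_rules! parse_required {
    ($args:expr, $name:expr) => {
        match $args.value_of($name) {
            Some(v) => Ok(v),
            None => Err(ParseError::ArgumentError(format!(
                "required argument missing: {}",
                $name
            ))),
        }
    };
}
```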
* rustfmt
* factor out init commands
* rustfmt
* move recover to new format
* rustfmt
* add listen command to new format
* rustfmt
* Finish moving commands to new format
* rustfmt
* rustfmt
* propagate errors more cleanly
* rustfmt
* error handling cleanup
* replace header_by_height index with reads into the header MMR
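Roughly the arithmetic that makes this possible, hedged (grin_core's pmmr module has the real helper): the header at height h is simply the (h+1)-th leaf inserted into the header MMR, so a height lookup becomes a position computation plus an MMR read, and no separate index has to be kept in sync.

```rust
// Position (1-based) of the n-th inserted leaf in an MMR's flat layout:
// after k leaves an MMR holds 2k - popcount(k) nodes, so the n-th leaf
// lands right after the 2(n-1) - popcount(n-1) nodes before it.
fn insertion_to_pmmr_index(n: u64) -> u64 {
    let m = n - 1;
    2 * m - u64::from(m.count_ones()) + 1
}

// Height h therefore maps straight to a header MMR position.
fn header_pos_at_height(height: u64) -> u64 {
    insertion_to_pmmr_index(height + 1)
}
```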
* rustfmt
* cleanup
* cleanup chain tests
* fix locate_headers to stop on our max header
* fix the deadlock in compact_blocks_db...
* cleanup and docs/comments
* Cuckatoo size shift upgrade schedule
* Move the schedule into graph_weight instead of messing with min edge bits
* Cleanup and fixes now that we have an agreed upon schedule
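A hedged sketch of what moving the schedule into graph_weight means (constants and the exact decay are illustrative, not the agreed-upon numbers): a proof's weight scales with its graph size, and past an expiry height the smallest Cuckatoo size gradually loses weight, rather than bumping a global minimum edge bits.

```rust
const BASE_EDGE_BITS: u8 = 24;
const WEEK_HEIGHT: u64 = 7 * 24 * 60; // one-minute blocks
const YEAR_HEIGHT: u64 = 52 * WEEK_HEIGHT;

// Weight a proof carries towards total difficulty, as a function of both
// block height and the proof's edge bits.
fn graph_weight(height: u64, edge_bits: u8) -> u64 {
    let mut scaled_edge_bits = u64::from(edge_bits);
    // Past the expiry height, the smallest Cuckatoo size slowly loses
    // weight week by week instead of being forbidden outright.
    let expiry_height = YEAR_HEIGHT;
    if edge_bits == 31 && height >= expiry_height {
        scaled_edge_bits = scaled_edge_bits
            .saturating_sub(1 + (height - expiry_height) / WEEK_HEIGHT);
    }
    (2u64 << (edge_bits - BASE_EDGE_BITS)) * scaled_edge_bits
}
```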
* PoW context is now picked properly depending on the chain type,
edge bits and block height. Added a height constant for the T4 hard fork,
leaving a couple of weeks to get miners in place. Removed the now-unused
Cuckoo context.
* Simplified block siphash
* Fix servers crate compilation
* Tiny bit cleaner block siphash. Maybe.
* Cuckatoo min edge bits update for T4 and mainnet
* Fix header size tests, Cuckatoo31 default means one more bit per edge
* Remove redundant param from verify_size
* block_accepted via adapter is now reorg aware
we skip reconciling the txpool if the block is not most work
we skip reconciling the reorg_cache if it is not a reorg
* rustfmt
* logging tweaks
* rework block_accepted interface
* rustfmt
* rework reorg determination
* introduce BlockStatus to represent next vs reorg vs fork
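A hedged sketch of how the BlockStatus item and the reorg-aware block_accepted fit together (handler names illustrative; only the next/fork/reorg split is from the change itself):

```rust
enum BlockStatus {
    Next,  // extends the current most-work chain
    Fork,  // belongs to a fork that is not most work
    Reorg, // the most-work chain switched to this block's branch
}

fn on_block_accepted(status: BlockStatus) {
    match status {
        // Only reconcile the txpool against most-work blocks.
        BlockStatus::Next | BlockStatus::Reorg => reconcile_txpool(),
        BlockStatus::Fork => {}
    }
    // Only touch the reorg_cache when an actual reorg happened.
    if let BlockStatus::Reorg = status {
        reconcile_reorg_cache();
    }
}

fn reconcile_txpool() { /* rebuild pool against the new chain head */ }
fn reconcile_reorg_cache() { /* replay cached txs from the losing branch */ }
```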
* rustfmt
* cleanup logging
* log from adapter, even during sync
* split horizon into two explicit values for cut through and txhashset request
* let a node which has 2-7 days of history handle forks larger than 2 days
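Illustrative constants only (names and values here are assumptions, not the project's actual ones): the point is that the horizon used for local cut-through/compaction and the horizon used for txhashset requests are no longer the same number.

```rust
const BLOCK_TIME_SEC: u64 = 60;
const DAY_HEIGHT: u64 = 24 * 3600 / BLOCK_TIME_SEC;

// Blocks older than this may be compacted / cut through locally.
const CUT_THROUGH_HORIZON: u64 = 2 * DAY_HEIGHT;
// Txhashset requests reach further back, so a node keeping 2-7 days of
// history can serve (and survive) forks deeper than the cut-through line.
const TXHASHSET_REQUEST_HORIZON: u64 = 7 * DAY_HEIGHT;
```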
* add test simulate_long_fork
* add pause/resume feature on p2p for tests
* refactor the state_sync
* ignore the test case simulate_long_fork for normal Travis-CI
* refactor function check_txhashset_needed to be shared with body_sync
* fix: state TxHashsetDone should allow header sync
* Cleanup syncer and sync header (locator)
* Simplify body sync
* Remove duplicate head in locator, add greater case in close_enough
* Various sync small fixes and tuning after testing
* More close_enough tests and related minor fixes
* Add a struct to encapsulate common references and avoid passing
them around to every function.
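A minimal sketch of that struct, with stub types and illustrative names:

```rust
use std::sync::Arc;

// Stubs standing in for the node's real types.
struct Chain;
struct Peers;
struct SyncState;

// Built once when the syncer starts; every sync routine becomes a method
// with the shared handles already in scope.
struct SyncContext {
    chain: Arc<Chain>,
    peers: Arc<Peers>,
    sync_state: Arc<SyncState>,
}

impl SyncContext {
    fn header_sync(&self) { /* self.chain, self.peers, self.sync_state */ }
    fn body_sync(&self) { /* likewise, no references passed around */ }
}
```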
* Consolidate `skip_sync_wait` and `awaiting_peers` into an
additional sync status.
* New awaiting peer status is initial too
* Initial expired peers removal
* Stop expired peers
* Simplify peer removal and remove only Defunct peers
* Make seed check for expired peers every hour
* Get rid of unused vector of peers to remove
* Make peer deletion predicate closure immutable
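Roughly the shape of the final version, hedged (field and variant names illustrative): an hourly tick from the seed loop drops only peers marked Defunct, and the predicate needs no more than an immutable borrow.

```rust
#[derive(PartialEq)]
enum State {
    Healthy,
    Banned,
    Defunct,
}

struct PeerData {
    addr: String,
    flags: State,
}

// Called on the hourly tick: keep everything that is not Defunct. The
// closure borrows each entry immutably; no write access is required to
// decide who goes.
fn remove_expired_peers(peers: &mut Vec<PeerData>) {
    peers.retain(|p| p.flags != State::Defunct);
}
```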
* Replace logging backend to flexi-logger and add log rotation
* Changed flexi_logger to log4rs
* Disable logging level filtering in Root logger
* Support different logging levels for file and stdout
* Don't log messages from modules other than Grin-related ones
* Fix formatting
* Place backed up compressed log copies into log file directory
* Increase default log file size to 16 MiB
* Add comment to config file on log_max_size option
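A hedged sketch of the log4rs wiring the items above describe (appender names, paths and levels illustrative): the Root logger does no filtering of its own; per-appender ThresholdFilters give stdout and the file different levels, and the file appender rolls at 16 MiB into compressed copies kept in the same directory.

```rust
use log::LevelFilter;
use log4rs::append::console::ConsoleAppender;
use log4rs::append::rolling_file::policy::compound::{
    roll::fixed_window::FixedWindowRoller, trigger::size::SizeTrigger, CompoundPolicy,
};
use log4rs::append::rolling_file::RollingFileAppender;
use log4rs::config::{Appender, Config, Root};
use log4rs::filter::threshold::ThresholdFilter;

fn init_logging() {
    // Roll at 16 MiB, keeping 3 gzipped backups next to the live file.
    let trigger = SizeTrigger::new(16 * 1024 * 1024);
    let roller = FixedWindowRoller::builder()
        .build("grin.log.{}.gz", 3)
        .unwrap();
    let policy = CompoundPolicy::new(Box::new(trigger), Box::new(roller));
    let file = RollingFileAppender::builder()
        .build("grin.log", Box::new(policy))
        .unwrap();
    let stdout = ConsoleAppender::builder().build();

    let config = Config::builder()
        // Per-appender thresholds: terse on stdout, verbose in the file.
        .appender(
            Appender::builder()
                .filter(Box::new(ThresholdFilter::new(LevelFilter::Warn)))
                .build("stdout", Box::new(stdout)),
        )
        .appender(
            Appender::builder()
                .filter(Box::new(ThresholdFilter::new(LevelFilter::Debug)))
                .build("file", Box::new(file)),
        )
        // Root stays wide open; the appender filters do the level work.
        .build(
            Root::builder()
                .appender("stdout")
                .appender("file")
                .build(LevelFilter::Trace),
        )
        .unwrap();
    log4rs::init_config(config).unwrap();
}
```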
* Add peers used bandwidth calculation and display in TUI
* Fix formatting
* Change Mutex to RwLock from peer's used bandwidth statistics in Tracker
* Make used bandwidth column in TUI peers list sort by sum of bytes
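A hedged sketch of the RwLock change (struct shape illustrative): the TUI reads the totals often and concurrently, while writes happen only as messages flow, so readers take the shared lock.

```rust
use std::sync::RwLock;

struct Tracker {
    sent_bytes: RwLock<u64>,
    received_bytes: RwLock<u64>,
}

impl Tracker {
    fn inc_sent(&self, bytes: u64) {
        // Writers are the message handlers, briefly.
        *self.sent_bytes.write().unwrap() += bytes;
    }
    fn inc_received(&self, bytes: u64) {
        *self.received_bytes.write().unwrap() += bytes;
    }
    // Sort key for the TUI column: sum of both directions. Readers only
    // take the shared read lock, so concurrent reads never block.
    fn total_bytes(&self) -> u64 {
        *self.sent_bytes.read().unwrap() + *self.received_bytes.read().unwrap()
    }
}
```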
We select a peer to ask for a block at random. A peer's send channel has
capacity 10. If we need too many blocks, we limit the number of blocks to
ask for to the number of peers * 10, which means there is some (pretty
high) probability that we will overflow the send buffer capacity.
This fix freezes the peer list (which also gives some performance boost)
and creates a cycling iterator to distribute requests equally among the
peers, as sketched below.
There is a risk that a peer may be disconnected while we are sending a
request to its channel, but strictly speaking that was possible in the old
code too, perhaps with a lower probability.
Fixes #1748
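A hedged sketch of the fix described above (types and the send method are stubs): freeze the peer list once, then hand out requests round-robin with a cycling iterator so no single peer's 10-slot channel is flooded.

```rust
struct Hash;
struct Peer;

impl Peer {
    fn send_block_request(&self, _h: &Hash) -> Result<(), ()> {
        Ok(()) // stub; the real send writes into a bounded channel
    }
}

fn request_blocks(hashes: &[Hash], peers: &[Peer]) {
    if peers.is_empty() {
        return;
    }
    // `cycle()` walks the frozen snapshot forever, so requests are spread
    // evenly instead of piling onto one randomly chosen peer.
    let mut it = peers.iter().cycle();
    for hash in hashes {
        // A peer may have disconnected since the snapshot was taken; a
        // failed send is simply ignored, as it effectively was before.
        let _ = it.next().unwrap().send_block_request(hash);
    }
}
```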