* verify a tx like we verify a block (experimental)
* first minimal_pool test up and running but not testing what we need to
* rework tx_pool validation to use txhashset extension
* minimal tx pool wired up but rough
* works locally (rough state though)
delete "legacy" pool and graph code
* rework the new pool into TransactionPool and Pool impls
* rework pool to store pool entries
with associated timer and source etc.
* all_transactions
* extra_txs so we can validate stempool against existing txpool
* rework reconcile_block
* txhashset apply_raw_tx can now rewind to a checkpoint (prev raw tx)
* wip - txhashset tx tests
* more flexible rewind on MMRs
* add tests to cover apply_raw_txs on txhashset extension
* add_to_stempool and add_to_txpool
* deaggregate multi kernel tx when adding to txpool
* handle freshness in stempool
handle propagation of stempool txs via dandelion monitor
* patience timer and fluff if we cannot propagate
to next relay
* aggregate and fluff stempool if we have no relay
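A minimal sketch of the stem/fluff decision the entries above describe, assuming a simple patience timer; the names, the timer value and the exact rules are illustrative assumptions, not the actual dandelion_monitor logic:

```rust
use std::time::{Duration, Instant};

#[derive(Debug)]
enum StemAction {
    /// Forward the stem transaction to the next relay peer.
    StemToRelay,
    /// No relay available: aggregate the stempool and broadcast.
    AggregateAndFluff,
    /// Patience timer expired before propagation succeeded: broadcast.
    Fluff,
}

fn next_action(has_relay: bool, in_stempool_since: Instant, patience: Duration) -> StemAction {
    if !has_relay {
        return StemAction::AggregateAndFluff;
    }
    if in_stempool_since.elapsed() > patience {
        return StemAction::Fluff;
    }
    StemAction::StemToRelay
}

fn main() {
    // Hypothetical 30s patience window, purely for illustration.
    let action = next_action(true, Instant::now(), Duration::from_secs(30));
    println!("{:?}", action);
}
```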
* refactor coinbase maturity
* rewrote basic tx pool tests to use a real txhashset via chain adapter
* rework dandelion monitor to reflect recent discussion
works locally but needs a cleanup
* refactor dandelion_monitor - split out phases
* more pool test coverage
* remove old test code from pool (still wip)
* block_building and block_reconciliation tests
* tracked down chain test failure...
* fix test_coinbase_maturity
* dandelion_monitor now runs...
* refactor dandelion config, shared across p2p and pool components
* fix pool tests with new config
* fix p2p tests
* rework tx pool to deal with duplicate commitments (testnet2 limitation)
* cleanup and address some PR feedback
* add big comment about pre_tx...
* beginning to refactor keychain into wallet lib
* rustfmt
* more refactor of aggsig lib, simplify aggsig context manager, hold instance statically for now
* clean some warnings
* clean some warnings
* fix wallet send test a bit
* fix core tests, move wallet dependent tests into integration tests
* repair chain tests
* refactor/fix pool tests
* fix wallet tests, moved from keychain
* add wallet tests
Adds a `BufWriter` around each file that gets incrementally written
to in the MMR store's main types. Should speed up a few operations
quite a bit, particularly flushing the remove log. Saving
txhashsets on new blocks is now 10x to 20x faster, making full
block syncs a lot faster.
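A minimal sketch of the buffering idea, using std::io::BufWriter around a plain append-only file; the file name and record size below are made up and are not the actual grin_store files:

```rust
use std::fs::OpenOptions;
use std::io::{BufWriter, Write};

fn main() -> std::io::Result<()> {
    // Hypothetical data file; the real store has one per MMR file type.
    let file = OpenOptions::new()
        .create(true)
        .append(true)
        .open("pmmr_hash.bin")?;

    // BufWriter batches many small appends into fewer syscalls,
    // which is what makes frequent flushing cheaper.
    let mut writer = BufWriter::new(file);
    for hash in &[[0u8; 32]; 4] {
        writer.write_all(&hash[..])?;
    }
    writer.flush()?; // persist everything buffered in one go
    Ok(())
}
```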
* wip BlockMarker struct, get rid of PMMRMetadata
* use rewind to init the txhashset correctly on startup; we do not need to track pos via metadata (we have block markers), and we do not need to open the txhashset at a specific pos (we have rewind)
* better logging on init
* keep rewinding and validating on init, to find a good block
* use validate_roots on chain init
* Bump up crates versions
* Finally add a Cargo.lock to avoid dependency breakages
* Build doc update for testnet2
* Fix test framework not really using its mining config
* Testnet2 genesis, best so far at 128 difficulty (a nice number)
* Minor build doc update
* speed up Travis startup
- keep all test tempdata in target/tmp
- purge before saving target/* to S3 cache
* allow 4 minutes for cache up/download from Travis' S3 cache stores
* rewind to header as part of txhashset validation
otherwise we risk including a new block and the roots do not match
* fix bug in rm_log rewind (wants to be inclusive of provided index)
* put block marker in the index so we can rewind correctly
during validation of the new txhashset
* rustfmt
* hash_with_index on non-leaf nodes
rework the Merkle proof to include the pos of each sibling in the path
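A toy sketch of what hashing a node together with its position achieves; the real code uses its own Hash/Writeable machinery rather than std's DefaultHasher, which stands in here purely for illustration:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hash a node's bytes together with its MMR position, so identical
/// subtrees at different positions produce different hashes.
fn hash_with_index(data: &[u8], pos: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    pos.hash(&mut hasher);
    data.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Same data, different position: the resulting hashes differ.
    assert_ne!(hash_with_index(b"node", 3), hash_with_index(b"node", 7));
}
```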
* rustfmt
* cleanup
* cleanup
* use get_from_file in validate (children may have been "removed")
* rustfmt
* fixup store tests
* wip
* failing test for being too eager when pruning a sibling
* commit
* rustfmt
* [WIP] modified get_shift and get_leaf_shift to account for leaving "pruned but not compacted" leaves in place
Note: this currently breaks check_compact as nothing else is aware of the modified behavior
* rustfmt
* commit
* rustfmt
* basic prune/compact/shift working
* rustfmt
* commit
* rustfmt
* next_pruned_idx working (I think)
* commit
* horizon test uncovered some subtle issues - wip
* rustfmt
* cleanup
* rustfmt
* commit
* cleanup
* cleanup
* commit
* rustfmt
* contains -> binary_search
* rustfmt
* no need for height==0 special case
* wip - works for single compact, 2nd one breaks the mmr hashes
* commit
* rustfmt
* fixed it (needs a lot of cleanup)
we were not traversing all the way up to the peak if we pruned an entire tree
so rm_log and prune list were inconsistent
* multiple compact steps are working
data file not being compacted currently (still to investigate)
* cleanup store tests
* cleanup
* clean up debug
* rustfmt
* take kernel offsets into account when summing kernels and outputs for full txhashset validation
validate chain state pre and post compaction
* rustfmt
* fix wallet refresh (we need block height to be refreshed on non-coinbase outputs)
otherwise we cannot spend them...
* rustfmt
* Implementation of compaction for the chain. Single entry point on the chain triggers compaction of all MMRs as well as the cleanup of the positional index and full blocks.
* API endpoint, additional tests and more fixes for compaction
* Also prune PMMR metadata, minor bug fix
* PMMR store tests fix
* adding file position index data accessible to the chain, and allowing it to be stored within the db
* missing file
* restart files at last recorded position in stored file metadata
* just use tip to store last pmmr index information
* error handling
* test fix
* family_branch() to recursively call family() up the branch
todo
- once we hit a peak, we need to get to the root somehow
- actually get the hashes to build the proof
* wip
* some additional testing around merkle tree branches
* track left/right branch for each sibling as we build the merkle path up
* MerkleProof and basic (incomplete) verify fn
* I think a MerkleProof verifies correctly now
need to test on test case with multiple peaks
* basic pmmr merkle proof working
* MerkleProof now serializable/deserializable
* coinbase maturity via merkle proof basically working
* ser/deser merkle proof into hex in api and wallet.dat
* cleanup
* wip - temporarily saving merkle proofs to the commit index
* assert merkle proof in store matches the rewound version
there are cases where it does not...
* commit
* commit
* can successfully rewind the output PMMR and generate a Merkle proof
need to fix the tests up now
and cleanup the code
and add docs for functions etc.
* core tests passing
* fixup chain tests using merkle proofs
* pool tests working with merkle proofs
* api tests working with merkle proof
* fix the broken compact block hashing behavior
made nonce for short_ids explicit to help with this
* cleanup and comment as necessary
* cleanup variety of TODOs
* beginning to remove sum
* continuing to remove sumtree sums
* finished removing sums from pmmr core
* renamed sumtree files, and completed changes+test updates in core and store
* updating grin/chain to include removelogs
* integration of flatfile structure, changes to chain/sumtree to start using them
* tests on chain, core and store passing
* cleaning up api and tests
* formatting
* flatfiles stored as part of PMMR backend instead
* all compiling and tests running
* documentation
* added remove + pruning to flatfiles
* remove unneeded enum
* adding sumtree root struct
* Util to zip and unzip directories
* First pass at sumtree request/response. Add message types, implement the exchange in the protocol, zip up the sumtree directory and stream the file over, with necessary adapter hooks.
* Implement the sumtree archive receive logic. Gets the sumtree archive data stream from the network and writes it to a file. Unzip the file, place it at the right spot and reconstruct the sumtree data structure, rewinding to the right spot.
* Sumtree hash structure validation
* Simplify sumtree backend buffering logic. The backend for a sumtree has to implement some in-memory buffering logic to provide a commit/rollback interface. The backend itself is an aggregate of 3 underlying storages (an append only file, a remove log and a skip list). The buffering was previously implemented both by the backend and some of the underlying storages. Now pushing back all buffering logic to the storages to keep the backend simpler.
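A minimal sketch of the commit/rollback buffering pushed down into a storage, with a Vec standing in for the on-disk file; the names and shapes are assumptions, not the actual backend API:

```rust
struct BufferedStore {
    flushed: Vec<u8>, // stands in for the append-only file on disk
    buffer: Vec<u8>,  // appends since the last commit
}

impl BufferedStore {
    fn append(&mut self, bytes: &[u8]) {
        self.buffer.extend_from_slice(bytes);
    }
    /// Reads see the flushed data plus anything still buffered.
    fn get(&self, offset: usize) -> Option<u8> {
        self.flushed
            .iter()
            .chain(self.buffer.iter())
            .nth(offset)
            .copied()
    }
    /// Make the buffered appends permanent.
    fn commit(&mut self) {
        self.flushed.append(&mut self.buffer);
    }
    /// Discard buffered appends without side effects.
    fn rollback(&mut self) {
        self.buffer.clear();
    }
}

fn main() {
    let mut store = BufferedStore { flushed: vec![], buffer: vec![] };
    store.append(&[1, 2, 3]);
    assert_eq!(store.get(1), Some(2)); // visible before the commit
    store.rollback();
    assert_eq!(store.get(0), None);    // discarded cleanly
}
```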
* Add kernel append only store file to sumtrees. The chain sumtrees structure now also saves all kernels to a dedicated file. As that storage is implemented by the append only file wrapper, it's also rewind-aware.
* Full state validation. Checks that:
- MMRs are sane (hash and sum each node)
- Tree roots match the corresponding header
- Kernel signatures are valid
- Sum of all kernel excesses equals the sum of UTXO commitments
minus the supply
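Glossing over fee accounting, that last kernel check amounts roughly to the balance below, where the C_i are the unspent output commitments, the E_j are the kernel excess commitments, and the offset term comes from the kernel offsets mentioned earlier (the notation is illustrative only):

```latex
\sum_i C_i \;-\; \text{supply} \cdot H \;=\; \sum_j E_j \;+\; \text{total\_offset} \cdot G
```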
* Fast sync handoff to body sync. Once the fast-sync state is fully set up, get back into body sync
mode to get the full bodies of the last blocks we're missing.
* First fully working fast sync
* Facility in p2p conn to deal with attachments (raw binary after message).
* Re-introduced sumtree send and receive message handling using the above.
* Fixed test and finished updating all required db state after sumtree validation.
* Massaged a little bit the pipeline orphan check to still work after the new sumtrees have been setup.
* Various cleanup. Consolidated fast sync and full sync into a single function as they're very similar. Proper conditions to trigger a sumtree request and some checks on receiving it.
* Tried but failed to fix `cargo build` complaint about unused #[macro_use]. See also 7a803a8dc1
* Compiler complaints be-gone
* Give sumtree tests method-tagged folder names so they don't overwrite each other's files
Fixes #658
Without a backup, the in-memory data structure stays truncated
even if the rewind is abandoned. Also add some logging on our
most problematic block in case it still is problematic.
Fixes buffer misalignment when reading back pruning data.
Everything after the buffer length (8000 bytes) was badly read on
restart, leading to accepting double spends.
The MMR storage has a buffer for all changes so they can either
be discarded without side effects or committed to disk. Pruning
can also happen in the buffer if an input directly spends an
output in a block (not likely), or a fork has an input spending
a previous output within it.
When the buffer gets flushed to storage, the pruned element was
just skipped, causing an offset within the underlying storage and
a shorter file than expected. This fix writes zeroes instead, so
the size is consistent. Note that the zeroes will be removed with
all other pruned elements on next compaction.
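A minimal sketch of the fix described above, assuming fixed-size records: pruned slots are written as zeroes on flush instead of being skipped, so offsets in the file stay consistent (the names and sizes here are made up):

```rust
use std::io::{Cursor, Write};

// Hypothetical fixed record size; the real files store 32-byte hashes
// among other things.
const RECORD_SIZE: usize = 32;

fn flush(buffer: &[Option<[u8; RECORD_SIZE]>], out: &mut impl Write) -> std::io::Result<()> {
    for record in buffer {
        match record {
            Some(bytes) => out.write_all(&bytes[..])?,
            // Pruned entry: write zeroes rather than skipping it, so every
            // later record keeps its expected offset; compaction strips the
            // zeroed entries out later.
            None => out.write_all(&[0u8; RECORD_SIZE][..])?,
        }
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let buffer = vec![Some([1u8; RECORD_SIZE]), None, Some([2u8; RECORD_SIZE])];
    let mut file = Cursor::new(Vec::new());
    flush(&buffer, &mut file)?;
    // Three slots written, including the pruned one, so offsets line up.
    assert_eq!(file.get_ref().len(), 3 * RECORD_SIZE);
    Ok(())
}
```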
Fixes #444
Only the temporary remove log for the last block was written to
disk, and then overwritten for every block. The fix writes the
temp remove log (just last block) to the permanent remove log,
and saves that to disk.
Moved the HTTP APIs away from the REST endpoint abstraction and
to simpler Hyper handlers. Re-established all routes as v1.
Changed wallet receiver port to 13415 to avoid a gap in port
numbers.
Finally, rustfmt seems to have ignored the specific file arguments
and ran on everything.
* Adding switch commit to grin outputs
* logging output fix
* adding switch commitment hash to sum tree node
* added hash_with to Hashed trait, to allow for hashing to include another writeable element
* adding hash_with as method in hashed trait
* added global slog instance, changed all logging macro formats to include logger instance
* adding configuration to logging, allowing for multiple log outputs
* updates to test, changes to build docs
* rustfmt
* moving logging functions into util crate
* Integrate PMMR and its persistent backend with the Chain
* Chain can set tree roots; PMMR backend discard
* Check spent and prune for each input in new block
* Handling of forks by rewinding the state
* More PMMR tests and fixes, mostly around rewind
* Rewrite get_unspent to use the sumtrees, fix remaining compilation issues
* Base MMR storage structures
Implementations of the MMR append-only file structure and its
remove log. The append-only file is backed by a mmap for read
access. The remove log is stored in memory for quick checking
and backed by a simple file to persist it.
* Add PMMR backend buffer, make PMMR Backend mutable
* The Backend trait now has &mut self methods, and an &mut
reference in PMMR. This simplifies the implementation of all
backends by not forcing them to be interior mutable. Slight
drawback is that a backend can't be used directly as long as it's
used by a PMMR instance.
* Introduced a buffer in the PMMR persistent backend to allow
reads before the underlying files are fully flushed. Implemented
with a temporary VecBackend.
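A rough sketch of the &mut backend shape and an in-memory VecBackend-style buffer described above; the method names and error type are assumptions, not the exact grin_core trait:

```rust
// Sketch only: the real Backend trait has its own methods and error type.
trait Backend<T> {
    /// Append new elements; &mut self means no interior mutability is needed.
    fn append(&mut self, data: Vec<T>) -> Result<(), String>;
    /// Read an element back by (1-indexed) MMR position.
    fn get(&self, position: u64) -> Option<T>;
    /// Remove (prune) the given positions.
    fn remove(&mut self, positions: Vec<u64>) -> Result<(), String>;
}

/// Simple in-memory backend, similar in spirit to the temporary
/// VecBackend used to buffer reads before files are flushed.
struct VecBackend<T> {
    elems: Vec<Option<T>>,
}

impl<T: Clone> Backend<T> for VecBackend<T> {
    fn append(&mut self, data: Vec<T>) -> Result<(), String> {
        self.elems.extend(data.into_iter().map(Some));
        Ok(())
    }
    fn get(&self, position: u64) -> Option<T> {
        self.elems.get(position as usize - 1).cloned().flatten()
    }
    fn remove(&mut self, positions: Vec<u64>) -> Result<(), String> {
        for pos in positions {
            if let Some(slot) = self.elems.get_mut(pos as usize - 1) {
                *slot = None;
            }
        }
        Ok(())
    }
}

fn main() {
    let mut be = VecBackend { elems: vec![] };
    be.append(vec![10u32, 20, 30]).unwrap();
    be.remove(vec![2]).unwrap();
    assert_eq!(be.get(1), Some(10));
    assert_eq!(be.get(2), None); // pruned
}
```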
* Implement a prune list to use with dense backends
The PruneList is useful when implementing compact backends for a PMMR (for
example a single large byte array or a file). As nodes get pruned and
removed from the backend to free space, the backend will get more compact
but positions of a node within the PMMR will not match positions in the
backend storage anymore. The PruneList accounts for that mismatch and does
the position translation.
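A greatly simplified sketch of the position translation idea; the real PruneList tracks whole pruned subtrees, while this version just counts how many removed positions precede a given one:

```rust
struct PruneList {
    /// Removed positions, kept sorted.
    removed: Vec<u64>,
}

impl PruneList {
    fn add(&mut self, pos: u64) {
        if let Err(idx) = self.removed.binary_search(&pos) {
            self.removed.insert(idx, pos);
        }
    }

    /// How far a node has shifted left in backend storage, i.e. the number
    /// of removed positions before it; None if the node itself was pruned.
    fn get_shift(&self, pos: u64) -> Option<u64> {
        match self.removed.binary_search(&pos) {
            Ok(_) => None,
            Err(idx) => Some(idx as u64),
        }
    }
}

fn main() {
    let mut pl = PruneList { removed: vec![] };
    pl.add(2);
    pl.add(5);
    assert_eq!(pl.get_shift(1), Some(0));
    assert_eq!(pl.get_shift(7), Some(2)); // two earlier prunes, shift by 2
    assert_eq!(pl.get_shift(5), None);    // pruned itself
}
```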
* PMMR store compaction
Implement actual pruning of the underlying PMMR storage by
flushing the remove log. This triggers a rewrite of the PMMR nodes
data (hashes and sums), removing pruned nodes. The information of
what has been removed is kept in a prune list and the remove log
is truncated.
* PMMR store pruning tests and fixes
* minor cleanup - unused imports
* cleanup build warnings - unused vars
* make structs pub to get rid of the private_in_public lint warning
* missing docs on RangeProof
* add missing docs to store delete function
* cleaned up deprecation warning -
tokio_core -> tokio_io
complete() -> send()
* Adding ability to serialise parts of the header, pre-nonce and post-nonce
* Some test integration of queueing functions in cuckoo-miner
* more cuckoo-miner async mode integration, now more or less working
* integrating async miner workflow
* rocksdb update
* u64 internal difficulty representation, and integration of latest Cuckoo-miner API
* change to cuckoo-miner notify function
* Fix an issue in testing and when the use_async value is None in grin.toml
* making async mode explicit in tests - 2
* fiddle with port numbers for CI
* update tag to ensure cuckoo-miner build doesn't fail on windows
* change the order in which tests are run
* Starting to refactor test, adding http post to wallet sender
* Implemented ability to run servers on different ports (mostly for testing), and implemented ability to post http requests directly to receiving wallets
* Adding detailed instructions on running multiple servers on the same machine
* Changes to build.doc to outline server running process
* Removed unwanted debug statements
* WIP Local server testing framework evolution
* More refactoring of server pool, checked in because there's a problem
* Added server reference structure, and ability to return server references from tests cleanly
* Added simulate_parallel_mining test, which puts some artificial slowdown into test mining loops, and the difficulty can currently be watched in the log
* Removed the ServerRef structure added earlier and replaced it with a ServerStats structure that just returns relevant info about the Server state without exposing it to the world
* Transactions coming from the network are now pushed to the pool
through the net adapter.
* New blocks accepted by the chain are sent to the pool for
eviction.
* The miner requests transactions from the pool to build its
blocks.
* The push API adds to the pool, removing the mock.
* Implementation of the adapter to the chain required by the pool
to get consistent UTXOs. Grossly unoptimized until we have the UTXO
MMR ready.
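A rough sketch of that chain adapter boundary; the trait and method names are assumptions. The point is that the pool only depends on the trait, so tests can hand it a mock while the node hands it the real chain:

```rust
/// A 33-byte Pedersen commitment identifying an output (illustrative alias).
type Commitment = [u8; 33];

trait BlockChain {
    /// Is this output currently unspent according to the chain?
    fn is_unspent(&self, output: &Commitment) -> bool;
    /// Height of the current chain head, e.g. for coinbase maturity checks.
    fn head_height(&self) -> u64;
}

/// Trivial in-memory implementation, standing in for a test mock.
struct DummyChain {
    utxos: Vec<Commitment>,
    height: u64,
}

impl BlockChain for DummyChain {
    fn is_unspent(&self, output: &Commitment) -> bool {
        self.utxos.iter().any(|c| c[..] == output[..])
    }
    fn head_height(&self) -> u64 {
        self.height
    }
}

fn main() {
    let chain = DummyChain { utxos: vec![[1u8; 33]], height: 10 };
    assert!(chain.is_unspent(&[1u8; 33]));
    assert_eq!(chain.head_height(), 10);
}
```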
* Sample Signatures for put_enc and get_dec
* Implement put_enc and get_dec
* Implement ChainCodec in grin_chain
* Truncate src only on complete Blocks
* Truncate src only on complete Tip + Check Len
* Move BlockHeader Encoding to BlockHeaderCodec
* Define put_enc for store::Batch
* Replace BlockCodec and BlockHeaderCodec with generic BlockCodec<T>
* Implement Default for BlockCodec Manually
* Replace get_ser/put_ser with get_enc/get_dec for chain::ChainKVStore
* Remove Writeable/Readable for chain::Tip
* Add Tokio-io and Bytes to grin_p2p
* Additional Setup for Message enum + Msg{Encode,Decode} traits
* base msg ping pong encoding and test
* fill out msg-codec tests
* Implement Hand Encoding/Decoding
* msg-encode shake
* msg-encode getpeeraddr
* codec peer-addrs message, SockAddr struct weirdness
* header message codec
* msg encoding finished prelim
* Implement PeerCodec Encoding/Decoding
* Set PeerStore to use PeerCodec for Encoding/Decoding
* Add a DecIterator
* Prune PeerStore
* Replace Decoding and Encoding in handle_payload
* Prune Writeable/Readable methods in store::Store
* Remove Incomplete Frame Testing (Not Necessary right now)
* separate block and tx codec tests
* Refactor {Tx,Block}Codec Tests
Builds codecs to encode and decode blocks, block headers, transactions, kernels, etc. Will be used by the store and peer-to-peer layer for serialization, but also to compute hashes. Separates out serialization from core.
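A minimal sketch of the encode/decode split that paragraph describes, with a made-up header shape and framing; the real codecs plug into tokio-io and the core ser/deser machinery:

```rust
#[derive(Debug, PartialEq)]
struct BlockHeader {
    height: u64,
    prev_hash: [u8; 32],
}

/// Serialization lives in the codec, separate from the core type.
struct BlockHeaderCodec;

impl BlockHeaderCodec {
    fn encode(&self, h: &BlockHeader, dst: &mut Vec<u8>) {
        dst.extend_from_slice(&h.height.to_be_bytes());
        dst.extend_from_slice(&h.prev_hash);
    }

    fn decode(&self, src: &[u8]) -> Option<BlockHeader> {
        // Not enough bytes yet: a framed decoder would wait for more input.
        if src.len() < 40 {
            return None;
        }
        let mut height = [0u8; 8];
        height.copy_from_slice(&src[..8]);
        let mut prev_hash = [0u8; 32];
        prev_hash.copy_from_slice(&src[8..40]);
        Some(BlockHeader { height: u64::from_be_bytes(height), prev_hash })
    }
}

fn main() {
    let codec = BlockHeaderCodec;
    let header = BlockHeader { height: 42, prev_hash: [7u8; 32] };
    let mut buf = Vec::new();
    codec.encode(&header, &mut buf);
    assert_eq!(codec.decode(&buf), Some(header)); // round-trips cleanly
}
```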
* core: cleanup slicing impls for Hash
* core: clean up Readable trait, implement Readable/Writeable for various integers
* core: change Hash debug output to hex
* core: correct warnings in all modules
* Replace AsFixedBytes with Sized + AsRef<[u8]>
* Add AsRef<u8> to impl_array_newtype!
* Include AsFixedBytes as marker trait
* Related fixes
* Remove Deref