grin/servers/src/common/adapters.rs

// Copyright 2018 The Grin Developers
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Adapters connecting new block, new transaction, and accepted transaction
//! events to consumers of those events.
use crate::util::RwLock;
use std::fs::File;
use std::sync::{Arc, Weak};
use std::thread;
use std::time::Instant;
use crate::chain::{self, BlockStatus, ChainAdapter, Options};
use crate::common::hooks::{ChainEvents, NetEvents};
use crate::common::types::{
self, ChainValidationMode, DandelionEpoch, ServerConfig, SyncState, SyncStatus,
};
use crate::core::core::hash::{Hash, Hashed};
use crate::core::core::transaction::Transaction;
use crate::core::core::verifier_cache::VerifierCache;
use crate::core::core::{BlockHeader, BlockSums, CompactBlock};
use crate::core::pow::Difficulty;
use crate::core::{core, global};
use crate::p2p;
use crate::p2p::types::PeerInfo;
use crate::pool;
use crate::pool::types::DandelionConfig;
use crate::util::OneTime;
use chrono::prelude::*;
use chrono::Duration;
use rand::prelude::*;
/// Implementation of the NetAdapter for the blockchain. Gets notified when new
/// blocks and transactions are received and forwards to the chain and pool
/// implementations.
pub struct NetToChainAdapter {
sync_state: Arc<SyncState>,
chain: Weak<chain::Chain>,
tx_pool: Arc<RwLock<pool::TransactionPool>>,
verifier_cache: Arc<RwLock<dyn VerifierCache>>,
peers: OneTime<Weak<p2p::Peers>>,
config: ServerConfig,
hooks: Vec<Box<dyn NetEvents + Send + Sync>>,
}
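The adapter holds the chain behind a `Weak` reference to avoid a reference cycle: the server owns the chain with an `Arc`, and the adapter only upgrades its weak handle on each use. A minimal std-only sketch of that pattern (the `Chain`/`Adapter` types here are illustrative stand-ins, not grin's API):

```rust
use std::sync::{Arc, Weak};

struct Chain {
    height: u64,
}

struct Adapter {
    // Weak so the adapter never keeps the chain alive on its own.
    chain: Weak<Chain>,
}

impl Adapter {
    // Upgrade the weak reference on every use; the owning server is
    // expected to keep the chain alive for the adapter's lifetime.
    fn chain(&self) -> Arc<Chain> {
        self.chain
            .upgrade()
            .expect("failed to upgrade weak ref to chain")
    }
}
```

If the chain has been dropped the upgrade panics, which is acceptable here because the adapter is only ever used while the server (and thus the chain) is running.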
impl p2p::ChainAdapter for NetToChainAdapter {
fn total_difficulty(&self) -> Result<Difficulty, chain::Error> {
Ok(self.chain().head()?.total_difficulty)
}
fn total_height(&self) -> Result<u64, chain::Error> {
Ok(self.chain().head()?.height)
}
fn get_transaction(&self, kernel_hash: Hash) -> Option<core::Transaction> {
self.tx_pool.read().retrieve_tx_by_kernel_hash(kernel_hash)
}
fn tx_kernel_received(
&self,
kernel_hash: Hash,
peer_info: &PeerInfo,
) -> Result<bool, chain::Error> {
// nothing much we can do with a new transaction while syncing
if self.sync_state.is_syncing() {
return Ok(true);
}
let tx = self.tx_pool.read().retrieve_tx_by_kernel_hash(kernel_hash);
if tx.is_none() {
self.request_transaction(kernel_hash, peer_info);
}
Ok(true)
}
fn transaction_received(
&self,
tx: core::Transaction,
stem: bool,
) -> Result<bool, chain::Error> {
// nothing much we can do with a new transaction while syncing
if self.sync_state.is_syncing() {
return Ok(true);
}
let source = pool::TxSource {
debug_name: "p2p".to_string(),
identifier: "?.?.?.?".to_string(),
};
let header = self.chain().head_header()?;
for hook in &self.hooks {
hook.on_transaction_received(&tx);
}
let tx_hash = tx.hash();
let mut tx_pool = self.tx_pool.write();
match tx_pool.add_to_pool(source, tx, stem, &header) {
Ok(_) => Ok(true),
Err(e) => {
debug!("Transaction {} rejected: {:?}", tx_hash, e);
Ok(false)
}
}
}
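Note the convention above: a pool rejection is logged and reported as `Ok(false)` rather than propagated as an error, so an unwanted transaction does not look like peer misbehavior. A hedged std-only sketch of that shape, using a duplicate-rejecting set as a hypothetical stand-in for the pool:

```rust
use std::collections::HashSet;
use std::sync::RwLock;

// Hypothetical stand-in for the transaction pool: accepts a tx hash
// once, rejects duplicates. Grin's real pool does far more validation.
struct Pool {
    seen: RwLock<HashSet<u64>>,
}

impl Pool {
    // Ok(true) = accepted, Ok(false) = rejected but not a peer fault,
    // mirroring the adapter's return convention.
    fn add(&self, tx_hash: u64) -> Result<bool, String> {
        let mut seen = self.seen.write().map_err(|e| e.to_string())?;
        Ok(seen.insert(tx_hash))
    }
}
```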
fn block_received(
&self,
b: core::Block,
peer_info: &PeerInfo,
was_requested: bool,
) -> Result<bool, chain::Error> {
debug!(
"Received block {} at {} from {} [in/out/kern: {}/{}/{}] going to process.",
b.hash(),
b.header.height,
peer_info.addr,
b.inputs().len(),
b.outputs().len(),
b.kernels().len(),
);
self.process_block(b, peer_info, was_requested)
}
fn compact_block_received(
&self,
cb: core::CompactBlock,
peer_info: &PeerInfo,
) -> Result<bool, chain::Error> {
let bhash = cb.hash();
debug!(
"Received compact_block {} at {} from {} [out/kern/kern_ids: {}/{}/{}] going to process.",
bhash,
cb.header.height,
peer_info.addr,
cb.out_full().len(),
cb.kern_full().len(),
cb.kern_ids().len(),
);
let cb_hash = cb.hash();
if cb.kern_ids().is_empty() {
// push the freshly hydrated block through the chain pipeline
match core::Block::hydrate_from(cb, vec![]) {
Ok(block) => {
if !self.sync_state.is_syncing() {
for hook in &self.hooks {
hook.on_block_received(&block, &peer_info.addr);
}
}
self.process_block(block, peer_info, false)
}
Err(e) => {
debug!("Invalid hydrated block {}: {:?}", cb_hash, e);
return Ok(false);
}
}
} else {
// check at least the header is valid before hydrating
if let Err(e) = self
.chain()
.process_block_header(&cb.header, self.chain_opts(false))
{
debug!("Invalid compact block header {}: {:?}", cb_hash, e.kind());
return Ok(!e.is_bad_data());
}
let (txs, missing_short_ids) = {
self.tx_pool
.read()
.retrieve_transactions(cb.hash(), cb.nonce, cb.kern_ids())
};
debug!(
"adapter: txs from tx pool - {}, (unknown kern_ids: {})",
txs.len(),
missing_short_ids.len(),
);
// TODO - 3 scenarios here -
// 1) we hydrate a valid block (good to go)
// 2) we hydrate an invalid block (txs legit missing from our pool)
// 3) we hydrate an invalid block (peer sent us a "bad" compact block) - [TBD]
let block = match core::Block::hydrate_from(cb.clone(), txs) {
Ok(block) => {
if !self.sync_state.is_syncing() {
for hook in &self.hooks {
hook.on_block_received(&block, &peer_info.addr);
}
}
block
}
Err(e) => {
debug!("Invalid hydrated block {}: {:?}", cb.hash(), e);
return Ok(false);
}
};
if let Ok(prev) = self.chain().get_previous_header(&cb.header) {
if block
.validate(&prev.total_kernel_offset, self.verifier_cache.clone())
.is_ok()
{
debug!("successfully hydrated block from tx pool!");
self.process_block(block, peer_info, false)
} else {
if self.sync_state.status() == SyncStatus::NoSync {
debug!("adapter: block invalid after hydration, requesting full block");
self.request_block(&cb.header, peer_info);
Ok(true)
} else {
debug!("block invalid after hydration, ignoring it since we are still syncing");
Ok(true)
}
}
} else {
debug!("failed to retrieve previous block header (still syncing?)");
Ok(true)
}
}
}
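The hydration step above pulls transactions out of the pool by short kernel id and reports any ids it could not resolve, which decides whether the hydrated block can be validated locally or the full block must be requested. A simplified std-only sketch of that lookup (plain `u16`/`String` stand in for grin's `ShortId` and `Transaction`):

```rust
use std::collections::HashMap;

// For each short kernel id, either pull the matching tx from the pool
// or record the id as missing. (Illustrative types, not grin's API.)
fn retrieve_transactions(
    pool: &HashMap<u16, String>,
    kern_ids: &[u16],
) -> (Vec<String>, Vec<u16>) {
    let mut txs = Vec::new();
    let mut missing = Vec::new();
    for id in kern_ids {
        match pool.get(id) {
            Some(tx) => txs.push(tx.clone()),
            None => missing.push(*id),
        }
    }
    (txs, missing)
}
```

An empty `missing` list is no guarantee of success: short-id collisions or a malicious compact block can still produce a block that fails validation, hence the full-block fallback above.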
fn header_received(
&self,
bh: core::BlockHeader,
peer_info: &PeerInfo,
) -> Result<bool, chain::Error> {
let bhash = bh.hash();
debug!(
"Received block header {} at {} from {}, going to process.",
bhash, bh.height, peer_info.addr,
);
// pushing the new block header through the header chain pipeline
// we will go ask for the block if this is a new header
let res = self
.chain()
.process_block_header(&bh, self.chain_opts(false));
if let Err(e) = res {
debug!("Block header {} refused by chain: {:?}", bhash, e.kind());
if e.is_bad_data() {
return Ok(false);
} else {
// we got an error when trying to process the block header
// but nothing serious enough to need to ban the peer upstream
return Err(e);
}
}
// we have successfully processed a block header
// so we can go request the block itself
self.request_compact_block(&bh, peer_info);
// done receiving the header
Ok(true)
}
fn headers_received(
&self,
bhs: &[core::BlockHeader],
peer_info: &PeerInfo,
) -> Result<bool, chain::Error> {
info!(
"Received {} block headers from {}",
bhs.len(),
peer_info.addr
);
if bhs.is_empty() {
return Ok(false);
}
// try to add headers to our header chain
match self.chain().sync_block_headers(bhs, self.chain_opts(true)) {
Ok(_) => Ok(true),
Err(e) => {
debug!("Block headers refused by chain: {:?}", e);
if e.is_bad_data() {
return Ok(false);
} else {
Err(e)
}
}
}
}
fn locate_headers(&self, locator: &[Hash]) -> Result<Vec<core::BlockHeader>, chain::Error> {
debug!("locator: {:?}", locator);
let header = match self.find_common_header(locator) {
Some(header) => header,
None => return Ok(vec![]),
};
let max_height = self.chain().header_head()?.height;
let txhashset = self.chain().txhashset();
let txhashset = txhashset.read();
// looks like we know one, getting as many following headers as allowed
let hh = header.height;
let mut headers = vec![];
for h in (hh + 1)..=(hh + (p2p::MAX_BLOCK_HEADERS as u64)) {
if h > max_height {
break;
}
if let Ok(header) = txhashset.get_header_by_height(h) {
headers.push(header);
} else {
error!("Failed to locate headers.");
break;
}
}
debug!("returning headers: {}", headers.len());
Ok(headers)
}
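`locate_headers` walks forward from the best header it shares with the peer's locator, returning at most `MAX_BLOCK_HEADERS` headers and stopping at the current header head. The height range it covers can be sketched in isolation (constant and names are illustrative):

```rust
// Starting just past the common header, yield at most `max_headers`
// heights, capped at the current header head.
fn header_heights(common: u64, max_height: u64, max_headers: u64) -> Vec<u64> {
    ((common + 1)..=(common + max_headers))
        .take_while(|h| *h <= max_height)
        .collect()
}
```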
/// Gets a full block by its hash.
fn get_block(&self, h: Hash) -> Option<core::Block> {
let b = self.chain().get_block(&h);
match b {
Ok(b) => Some(b),
_ => None,
}
}
/// Provides a reading view into the current txhashset state as well as
/// the required indexes for a consumer to rewind to a consistent state
/// at the provided block hash.
fn txhashset_read(&self, h: Hash) -> Option<p2p::TxHashSetRead> {
match self.chain().txhashset_read(h.clone()) {
Ok((out_index, kernel_index, read)) => Some(p2p::TxHashSetRead {
output_index: out_index,
kernel_index: kernel_index,
reader: read,
}),
Err(e) => {
warn!("Couldn't produce txhashset data for block {}: {:?}", h, e);
None
}
}
}
fn txhashset_receive_ready(&self) -> bool {
match self.sync_state.status() {
SyncStatus::TxHashsetDownload { .. } => true,
_ => false,
}
}
fn txhashset_download_update(
&self,
start_time: DateTime<Utc>,
downloaded_size: u64,
total_size: u64,
) -> bool {
match self.sync_state.status() {
SyncStatus::TxHashsetDownload {
update_time: old_update_time,
downloaded_size: old_downloaded_size,
..
} => self
.sync_state
.update_txhashset_download(SyncStatus::TxHashsetDownload {
start_time,
prev_update_time: old_update_time,
update_time: Utc::now(),
prev_downloaded_size: old_downloaded_size,
downloaded_size,
total_size,
}),
_ => false,
}
}
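`TxHashsetDownload` carries both the previous and current `(update_time, downloaded_size)` pairs so a consumer (e.g. the TUI) can display the current download rate rather than a lifetime average. A hedged sketch of that rate calculation, assuming hypothetical field names:

```rust
// Current rate between two progress snapshots, in bytes per second.
// Saturates on a shrinking size and guards against a zero interval.
fn current_rate_bytes_per_sec(
    prev_downloaded_size: u64,
    downloaded_size: u64,
    elapsed_secs: u64,
) -> u64 {
    if elapsed_secs == 0 {
        return 0;
    }
    downloaded_size.saturating_sub(prev_downloaded_size) / elapsed_secs
}
```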
/// Writes a reading view on a txhashset state that's been provided to us.
/// If we're willing to accept that new state, the data stream will be
/// read as a zip file, unzipped and the resulting state files should be
/// rewound to the provided indexes.
fn txhashset_write(
&self,
h: Hash,
txhashset_data: File,
_peer_info: &PeerInfo,
) -> Result<bool, chain::Error> {
// check status again after download, in case 2 txhashsets made it somehow
if let SyncStatus::TxHashsetDownload { .. } = self.sync_state.status() {
} else {
return Ok(true);
}
if let Err(e) = self
.chain()
.txhashset_write(h, txhashset_data, self.sync_state.as_ref())
{
self.chain().clean_txhashset_sandbox();
error!("Failed to save txhashset archive: {}", e);
let is_good_data = !e.is_bad_data();
self.sync_state.set_sync_error(types::Error::Chain(e));
Ok(is_good_data)
} else {
info!("Received valid txhashset data for {}.", h);
Ok(true)
}
}
}
impl NetToChainAdapter {
/// Construct a new NetToChainAdapter instance
pub fn new(
sync_state: Arc<SyncState>,
chain: Arc<chain::Chain>,
tx_pool: Arc<RwLock<pool::TransactionPool>>,
verifier_cache: Arc<RwLock<dyn VerifierCache>>,
config: ServerConfig,
hooks: Vec<Box<dyn NetEvents + Send + Sync>>,
) -> NetToChainAdapter {
NetToChainAdapter {
sync_state,
chain: Arc::downgrade(&chain),
tx_pool,
verifier_cache,
peers: OneTime::new(),
config,
hooks,
}
}
/// Initialize a NetToChainAdapter with a reference to a Peers object.
/// Should only be called once.
pub fn init(&self, peers: Arc<p2p::Peers>) {
self.peers.init(Arc::downgrade(&peers));
}
fn peers(&self) -> Arc<p2p::Peers> {
self.peers
.borrow()
.upgrade()
.expect("Failed to upgrade weak ref to our peers.")
}
fn chain(&self) -> Arc<chain::Chain> {
self.chain
.upgrade()
.expect("Failed to upgrade weak ref to our chain.")
}
// Find the first locator hash that refers to a known header on our main chain.
fn find_common_header(&self, locator: &[Hash]) -> Option<BlockHeader> {
let txhashset = self.chain().txhashset();
let txhashset = txhashset.read();
for hash in locator {
if let Ok(header) = self.chain().get_block_header(hash) {
if let Ok(header_at_height) = txhashset.get_header_by_height(header.height) {
if header.hash() == header_at_height.hash() {
return Some(header);
}
}
}
}
None
}
// Push the new block through the chain pipeline,
// remembering to reset the head if we have a bad block.
fn process_block(
&self,
b: core::Block,
peer_info: &PeerInfo,
was_requested: bool,
) -> Result<bool, chain::Error> {
// We cannot process blocks earlier than the horizon so check for this here.
{
let head = self.chain().head()?;
let horizon = head
.height
.saturating_sub(global::cut_through_horizon() as u64);
if b.header.height < horizon {
return Ok(true);
}
}
let bhash = b.hash();
let previous = self.chain().get_previous_header(&b.header);
match self
.chain()
.process_block(b, self.chain_opts(was_requested))
{
Ok(_) => {
self.validate_chain(bhash);
self.check_compact();
Ok(true)
}
Err(ref e) if e.is_bad_data() => {
self.validate_chain(bhash);
Ok(false)
}
Err(e) => {
match e.kind() {
chain::ErrorKind::Orphan => {
if let Ok(previous) = previous {
// make sure we did not miss the parent block
if !self.chain().is_orphan(&previous.hash())
&& !self.sync_state.is_syncing()
{
debug!("process_block: received an orphan block, checking the parent: {}", previous.hash());
self.request_block_by_hash(previous.hash(), peer_info)
}
}
Ok(true)
}
_ => {
debug!(
"process_block: block {} refused by chain: {}",
bhash,
e.kind()
);
Ok(true)
}
}
}
}
}
fn validate_chain(&self, bhash: Hash) {
// If we are running in "validate the full chain every block" then
// panic here if validation fails for any reason.
// We are out of consensus at this point and want to track the problem
// down as soon as possible.
// Skip this if we are currently syncing (too slow).
if self.config.chain_validation_mode == ChainValidationMode::EveryBlock
&& self.chain().head().unwrap().height > 0
&& !self.sync_state.is_syncing()
{
let now = Instant::now();
debug!(
"process_block: ***** validating full chain state at {}",
bhash,
);
self.chain()
.validate(true)
.expect("chain validation failed, hard stop");
debug!(
"process_block: ***** done validating full chain state, took {}s",
now.elapsed().as_secs(),
);
}
}
fn check_compact(&self) {
// Skip compaction if we are syncing.
if self.sync_state.is_syncing() {
return;
}
// Roll the dice to trigger compaction at a 1/COMPACTION_CHECK chance per block,
// using a separate thread to avoid blocking the caller thread (likely a peer).
let mut rng = thread_rng();
if 0 == rng.gen_range(0, global::COMPACTION_CHECK) {
let chain = self.chain().clone();
let _ = thread::Builder::new()
.name("compactor".to_string())
.spawn(move || {
if let Err(e) = chain.compact() {
error!("Could not compact chain: {:?}", e);
}
});
}
}
fn request_transaction(&self, h: Hash, peer_info: &PeerInfo) {
self.send_tx_request_to_peer(h, peer_info, |peer, h| peer.send_tx_request(h))
}
// After receiving a compact block, if we cannot successfully hydrate
// it into a full block then fall back to requesting the full block
// from the same peer that gave us the compact block.
// Consider additional peers for redundancy?
fn request_block(&self, bh: &BlockHeader, peer_info: &PeerInfo) {
self.request_block_by_hash(bh.hash(), peer_info)
}
fn request_block_by_hash(&self, h: Hash, peer_info: &PeerInfo) {
self.send_block_request_to_peer(h, peer_info, |peer, h| peer.send_block_request(h))
}
// After we have received a block header in "header first" propagation
// we need to go request the block (compact representation) from the
// same peer that gave us the header (unless we have already accepted the block)
fn request_compact_block(&self, bh: &BlockHeader, peer_info: &PeerInfo) {
self.send_block_request_to_peer(bh.hash(), peer_info, |peer, h| {
peer.send_compact_block_request(h)
})
}
fn send_tx_request_to_peer<F>(&self, h: Hash, peer_info: &PeerInfo, f: F)
where
F: Fn(&p2p::Peer, Hash) -> Result<(), p2p::Error>,
{
match self.peers().get_connected_peer(peer_info.addr) {
None => debug!(
"send_tx_request_to_peer: can't send request to peer {:?}, not connected",
peer_info.addr
),
Some(peer) => {
if let Err(e) = f(&peer, h) {
error!("send_tx_request_to_peer: failed: {:?}", e)
}
}
}
}
fn send_block_request_to_peer<F>(&self, h: Hash, peer_info: &PeerInfo, f: F)
where
F: Fn(&p2p::Peer, Hash) -> Result<(), p2p::Error>,
{
match self.chain().block_exists(h) {
Ok(false) => match self.peers().get_connected_peer(peer_info.addr) {
None => debug!(
"send_block_request_to_peer: can't send request to peer {:?}, not connected",
peer_info.addr
),
Some(peer) => {
if let Err(e) = f(&peer, h) {
error!("send_block_request_to_peer: failed: {:?}", e)
}
}
},
Ok(true) => debug!("send_block_request_to_peer: block {} already known", h),
Err(e) => error!(
"send_block_request_to_peer: failed to check block exists: {:?}",
e
),
}
}
/// Prepare options for the chain pipeline
fn chain_opts(&self, was_requested: bool) -> chain::Options {
if was_requested {
chain::Options::SYNC
} else {
chain::Options::NONE
}
}
}
/// Implementation of the ChainAdapter for the network. Gets notified when the
/// chain accepts a new block, asking the pool to update its state and the
/// network to broadcast the block.
pub struct ChainToPoolAndNetAdapter {
tx_pool: Arc<RwLock<pool::TransactionPool>>,
peers: OneTime<Weak<p2p::Peers>>,
hooks: Vec<Box<dyn ChainEvents + Send + Sync>>,
}
impl ChainAdapter for ChainToPoolAndNetAdapter {
fn block_accepted(&self, b: &core::Block, status: BlockStatus, opts: Options) {
// not broadcasting blocks received through sync
if !opts.contains(chain::Options::SYNC) {
for hook in &self.hooks {
hook.on_block_accepted(b, &status);
}
// If we mined the block then we want to broadcast the compact block.
// If we received the block from another node then broadcast "header first"
// to minimize network traffic.
if opts.contains(Options::MINE) {
// propagate compact block out if we mined the block
let cb: CompactBlock = b.clone().into();
self.peers().broadcast_compact_block(&cb);
} else {
// "header first" propagation if we are not the originator of this block
self.peers().broadcast_header(&b.header);
}
}
// Reconcile the txpool against the new block *after* we have broadcast it to our peers.
// This may be slow and we do not want to delay block propagation.
// We only want to reconcile the txpool against the new block *if* total work has increased.
if status == BlockStatus::Next || status == BlockStatus::Reorg {
let mut tx_pool = self.tx_pool.write();
let _ = tx_pool.reconcile_block(b);
// First "age out" any old txs in the reorg_cache.
let cutoff = Utc::now() - Duration::minutes(30);
tx_pool.truncate_reorg_cache(cutoff);
}
if status == BlockStatus::Reorg {
let _ = self.tx_pool.write().reconcile_reorg_cache(&b.header);
}
}
}
impl ChainToPoolAndNetAdapter {
/// Construct a ChainToPoolAndNetAdapter instance.
pub fn new(
tx_pool: Arc<RwLock<pool::TransactionPool>>,
hooks: Vec<Box<dyn ChainEvents + Send + Sync>>,
) -> ChainToPoolAndNetAdapter {
ChainToPoolAndNetAdapter {
tx_pool,
peers: OneTime::new(),
hooks,
}
}
/// Initialize a ChainToPoolAndNetAdapter instance with handle to a Peers
/// object. Should only be called once.
pub fn init(&self, peers: Arc<p2p::Peers>) {
self.peers.init(Arc::downgrade(&peers));
}
fn peers(&self) -> Arc<p2p::Peers> {
self.peers
.borrow()
.upgrade()
.expect("Failed to upgrade weak ref to our peers.")
}
}
/// Adapter between the transaction pool and the network, to relay
/// transactions that have been accepted.
pub struct PoolToNetAdapter {
peers: OneTime<Weak<p2p::Peers>>,
dandelion_epoch: Arc<RwLock<DandelionEpoch>>,
}
/// Adapter between the Dandelion monitor and the current Dandelion "epoch".
pub trait DandelionAdapter: Send + Sync {
/// Is the node stemming (or fluffing) transactions in the current epoch?
fn is_stem(&self) -> bool;
/// Is the current Dandelion epoch expired?
fn is_expired(&self) -> bool;
/// Transition to the next Dandelion epoch (new stem/fluff state, select new relay peer).
fn next_epoch(&self);
}
impl DandelionAdapter for PoolToNetAdapter {
fn is_stem(&self) -> bool {
self.dandelion_epoch.read().is_stem()
}
fn is_expired(&self) -> bool {
self.dandelion_epoch.read().is_expired()
}
fn next_epoch(&self) {
self.dandelion_epoch.write().next_epoch(&self.peers());
}
}
impl pool::PoolAdapter for PoolToNetAdapter {
fn tx_accepted(&self, tx: &core::Transaction) {
self.peers().broadcast_transaction(tx);
}
fn stem_tx_accepted(&self, tx: &core::Transaction) -> Result<(), pool::PoolError> {
// Take write lock on the current epoch.
// We need to be able to update the current relay peer if not currently connected.
let mut epoch = self.dandelion_epoch.write();
// If "stem" epoch attempt to relay the tx to the next Dandelion relay.
// Fallback to immediately fluffing the tx if we cannot stem for any reason.
// If "fluff" epoch then nothing to do right now (fluff via Dandelion monitor).
if epoch.is_stem() {
if let Some(peer) = epoch.relay_peer(&self.peers()) {
match peer.send_stem_transaction(tx) {
Ok(_) => {
info!("Stemming this epoch, relaying to next peer.");
Ok(())
}
Err(e) => {
error!("Stemming tx failed. Fluffing. {:?}", e);
Err(pool::PoolError::DandelionError)
}
}
} else {
error!("No relay peer. Fluffing.");
Err(pool::PoolError::DandelionError)
}
} else {
info!("Fluff epoch. Aggregating stem tx(s). Will fluff via Dandelion monitor.");
Ok(())
}
}
}
impl PoolToNetAdapter {
/// Create a new pool to net adapter
pub fn new(config: DandelionConfig) -> PoolToNetAdapter {
PoolToNetAdapter {
peers: OneTime::new(),
dandelion_epoch: Arc::new(RwLock::new(DandelionEpoch::new(config))),
}
}
/// Setup the p2p server on the adapter
pub fn init(&self, peers: Arc<p2p::Peers>) {
self.peers.init(Arc::downgrade(&peers));
}
fn peers(&self) -> Arc<p2p::Peers> {
self.peers
.borrow()
.upgrade()
.expect("Failed to upgrade weak ref to our peers.")
}
}
/// Implements the view of the blockchain required by the TransactionPool to
/// operate. Mostly needed to break any direct lifecycle or implementation
/// dependency between the pool and the chain.
#[derive(Clone)]
pub struct PoolToChainAdapter {
chain: OneTime<Weak<chain::Chain>>,
}
impl PoolToChainAdapter {
/// Create a new pool adapter
pub fn new() -> PoolToChainAdapter {
PoolToChainAdapter {
chain: OneTime::new(),
}
}
/// Set the pool adapter's chain. Should only be called once.
pub fn set_chain(&self, chain_ref: Arc<chain::Chain>) {
self.chain.init(Arc::downgrade(&chain_ref));
}
fn chain(&self) -> Arc<chain::Chain> {
self.chain
.borrow()
.upgrade()
.expect("Failed to upgrade the weak ref to our chain.")
}
}
impl pool::BlockChain for PoolToChainAdapter {
fn chain_head(&self) -> Result<BlockHeader, pool::PoolError> {
self.chain()
.head_header()
.map_err(|_| pool::PoolError::Other("failed to get head_header".to_string()))
}
fn get_block_header(&self, hash: &Hash) -> Result<BlockHeader, pool::PoolError> {
self.chain()
.get_block_header(hash)
.map_err(|_| pool::PoolError::Other("failed to get block_header".to_string()))
}
fn get_block_sums(&self, hash: &Hash) -> Result<BlockSums, pool::PoolError> {
self.chain()
.get_block_sums(hash)
.map_err(|_| pool::PoolError::Other("failed to get block_sums".to_string()))
}
fn validate_tx(&self, tx: &Transaction) -> Result<(), pool::PoolError> {
self.chain()
.validate_tx(tx)
.map_err(|_| pool::PoolError::Other("failed to validate tx".to_string()))
}
fn verify_coinbase_maturity(&self, tx: &Transaction) -> Result<(), pool::PoolError> {
self.chain()
.verify_coinbase_maturity(tx)
.map_err(|_| pool::PoolError::ImmatureCoinbase)
}
fn verify_tx_lock_height(&self, tx: &Transaction) -> Result<(), pool::PoolError> {
self.chain()
.verify_tx_lock_height(tx)
.map_err(|_| pool::PoolError::ImmatureTransaction)
}
}