mirror of
https://github.com/mimblewimble/grin.git
synced 2025-01-20 19:11:08 +03:00
WIP: Tracking Transaction Pool Implementation (#48)
* Beginning work on pool design doc
* Refining data structures; adding connect capability
* Fleshing out the connectivity paths for the tx pool
* Bringing tx pool and orphan set add logic up into parent TransactionPool
* Use output's commitment as identifier in graph structures
* Breaking a bunch of stuff to start migration to output commitment as id instead of hash
* Wrapping up updates to pool using commitment keys, dummy blockchain. Contains lots of cleanup on the internal flow.
* Beginning work on new block reconciliation
* WIP: Replacing monolithic pool cleanup with mark-and-sweep, which greatly simplifies the logic.
* Laying the groundwork for pool tests; test tx generator
* WIP: More elaborate test helpers; starting work on more elaborate block acceptance test.
* Need DummyUtxoSet to actually apply blocks now
* Using search_for_best_output to validate output status in test_basic_pool_add
* Enable modification of chain while under shared pool ownership. Cleanup pending
* WIP: Beginning to untangle the TransactionPool impl from Pool and Orphans data structures
* Finishing refactoring of pool block reconciliation; getting tests working again
* Add metrics for graph sizes; prereq to pool size throttling
* Remove redundant search_for_available_output from pool graph container
* Minimum viable block builder: return all fully rooted txs
* Tests for block building procedure
* Delegate duplicate output checking to check_duplicate_outputs
* Delegate orphan reference resolution to resolve_orphan_refs
This commit is contained in:
parent
eb1e49094b
commit
23fd07be60
10 changed files with 1786 additions and 2 deletions
@@ -4,7 +4,7 @@ version = "0.1.0"
authors = ["Ignotus Peverell <igno.peverell@protonmail.com>"]

[workspace]
-members = ["api", "chain", "core", "grin", "p2p", "store", "util"]
+members = ["api", "chain", "core", "grin", "p2p", "store", "util", "pool"]

[dependencies]
grin_grin = { path = "./grin" }
@@ -192,7 +192,7 @@ fn u64_to_32bytes(n: u64) -> [u8; 32] {
mod test {
	use super::*;

-	use secp::{self, Secp256k1};
+	use secp::{self, key, Secp256k1};

	#[test]
	fn blind_simple_tx() {
@@ -202,4 +202,12 @@ mod test {
			.unwrap();
		tx.verify_sig(&secp).unwrap();
	}
+
+	#[test]
+	fn blind_simpler_tx() {
+		let secp = Secp256k1::with_caps(secp::ContextFlag::Commit);
+		let (tx, _) =
+			transaction(vec![input_rand(6), output(2, key::ONE_KEY), with_fee(4)])
+			.unwrap();
+		tx.verify_sig(&secp).unwrap();
+	}
}
61
doc/internal/pool.md
Normal file
@@ -0,0 +1,61 @@
Transaction Pool
==================

This document describes some of the basic functionality and requirements of grin's transaction pool.

## Overview of Required Capabilities

The primary purpose of the memory pool is to maintain a list of mineable transactions to be supplied to the miner service while building new blocks. The design centers on ensuring correct behavior here, especially around tricky conditions such as head switching.

For standard (non-mining) nodes, the primary purpose of the memory pool is to serve as a moderator for transaction broadcasts by requiring connectivity to the blockchain. Secondary uses include monitoring incoming transactions, for example to give early notice of an unconfirmed transaction to the user's wallet.

Given the focus of grin (and mimblewimble) on reduced resource consumption, the memory pool should be an optional but recommended component for non-mining nodes.

## Design Overview

The primary structure of the transaction pool is a pair of Directed Acyclic Graphs. Since each transaction is rooted directly by its inputs in a non-cyclic way, this structure naturally captures the directionality of chains of unconfirmed transactions. It also has a few other nice properties: descendant invalidation (when a conflicting transaction is accepted for a given input) is nearly free, and the mineability of a given transaction is clearly depicted by its location in the hierarchy.

Another, non-obvious reason for choosing a DAG is that the acyclic nature of transactions is a necessary property, but one that must be explicitly verified in a way that is not true of other UTXO-based cryptocurrencies. Consider the following loop of single-input single-output transactions in BTC:

A->B->C->A

Because each input in Bitcoin specifically references the hash and output index of the output in a preceding transaction, for a loop to exist a transaction must reference (and know the hash of) a transaction that does not yet exist (C, in the trivial example). Furthermore, the hash and output index pair (called an "outpoint" in Bitcoin) is covered by the transaction hash of A, such that any change to either causes the hash of A to change. Therefore, attempting to build such a loop by amending A with the proper outpoint in C after C has been built causes A's hash to change, invalidating B, and so forth.

In grin, an input references an output by the output's own hash. Thus, the back reference does not capture the context in which the output was generated, which allows (from a purely mechanical point of view) the creation of a loop without the ability to generate a specific hash from a tightly constrained preimage.

The pair of graphs represents the connected graph and the orphans graph. (While it is possible to represent both groups of transactions in a single graph, doing so makes determining the orphan status of a given transaction non-trivial, requiring either the maintenance of a flag or upward traversal of potentially many inputs.)

A transaction reference in the pool has one parent for each input. Each parent falls into one of four states:

* Unknown
* Blockchain transaction
* Pool transaction
* Orphan transaction
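One illustrative way to encode these four states is an enum; this is a sketch only, under assumptions: the `Hash` alias and the variant payloads below stand in for the pool's real transaction identifier and are not the actual types in this change.

```rust
/// Illustrative only: `Hash` stands in for the pool's real transaction
/// identifier type; the variant payloads are assumptions.
type Hash = [u8; 32];

#[derive(Debug)]
enum Parent {
    Unknown,
    BlockTransaction,
    PoolTransaction { tx_ref: Hash },
    OrphanTransaction { tx_ref: Hash },
}

/// A parent is acceptable for mining only when it is already in the
/// blockchain or in the pool. A pool parent must itself be mineable, a
/// check this sketch leaves out (the real check recurses up the graph).
fn parent_may_be_mineable(p: &Parent) -> bool {
    match p {
        Parent::BlockTransaction | Parent::PoolTransaction { .. } => true,
        Parent::Unknown | Parent::OrphanTransaction { .. } => false,
    }
}
```
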

A mineable transaction is defined as one that has met all of its locktime requirements and whose parents are all either blockchain transactions or mineable pool transactions. One such requirement is the maturity requirement for spending newly generated coins. This will also include the explicit per-transaction locktime, if adopted.

## Transaction Selection

Preference should be given to older transactions; beyond this, it seems beneficial to target transactions that reduce the maximum depth of the transaction graph, as this reduces the computational complexity of traversing the graph and making changes to it. Since fees are largely static, there is no need for fee preference.

Kahn's algorithm, with the parameters above used to break ties, could provide an efficient mechanism for producing a correctly ordered transaction list while providing hooks for limited customization.
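The selection procedure can be sketched with a minimal, self-contained version of Kahn's algorithm over a toy dependency list; plain string ids stand in for transaction hashes here, and the names are illustrative, not the pool's API.

```rust
use std::collections::{HashMap, VecDeque};

/// Kahn's algorithm over a toy dependency list. `deps` holds
/// (parent, child) pairs: `child` spends an output of `parent`.
/// Returns an ordering in which every parent precedes its children,
/// or None if the graph contains a cycle.
fn kahn_order(
    txs: &[&'static str],
    deps: &[(&'static str, &'static str)],
) -> Option<Vec<&'static str>> {
    let mut in_degree: HashMap<&'static str, usize> =
        txs.iter().map(|t| (*t, 0)).collect();
    let mut children: HashMap<&'static str, Vec<&'static str>> = HashMap::new();
    for &(parent, child) in deps {
        *in_degree.entry(child).or_insert(0) += 1;
        children.entry(parent).or_insert_with(Vec::new).push(child);
    }
    // Roots (in-degree 0) seed the queue. Tie-breaking is FIFO here; the
    // pool could instead prefer older transactions or shallower vertices.
    let mut queue: VecDeque<&'static str> =
        txs.iter().cloned().filter(|t| in_degree[t] == 0).collect();
    let mut order = Vec::new();
    while let Some(tx) = queue.pop_front() {
        order.push(tx);
        if let Some(kids) = children.get(tx) {
            for &child in kids {
                let d = in_degree.get_mut(child).unwrap();
                *d -= 1;
                if *d == 0 {
                    queue.push_back(child);
                }
            }
        }
    }
    // Any vertex never emitted participates in a cycle.
    if order.len() == txs.len() { Some(order) } else { None }
}
```

Keeping roots in a separate list, as the graph container below does, makes seeding the queue O(roots) instead of a scan over all vertices.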

## Summary of Common Operations

### Adding a Transaction

The most basic task of the transaction pool is to add an incoming transaction to the graph.

The first step is validation of the transaction itself. This involves enforcement of all consensus rules surrounding the construction of the transaction, and verification of all relevant signatures and proofs.

The next step is enforcement of node-level transaction acceptability policy. These are generally weaker restrictions governing relay and inclusion that may be adjusted without hard- or soft-forking mechanisms. This will additionally include toggles and customizations made by operators or fork maintainers. Bitcoin's "standardness" language is adopted here.

Note that some elements of node-level policy are not enforced here, for example the maximum in-memory size of the pool.

Next, the state of the transaction and where it would be located in the graph is determined. Each of the transaction's inputs is resolved against the current blockchain UTXO set and the additional set of outputs generated by pool transactions.

## Adversarial Conditions

Under adversarial conditions, the primary concerns for the transaction pool are denial-of-service attacks. The greatest concern should be maintaining the node's ability to serve miners by supplying ready-made transactions to the mining service for inclusion in blocks. Resource consumption should be constrained as well. As we've seen on other chains, miners often have little incentive to include transactions if doing so impacts their ability to collect their primary reward.

###
15
pool/Cargo.toml
Normal file
@@ -0,0 +1,15 @@
[package]
name = "grin_pool"
version = "0.1.0"
authors = ["Grin Authors <mimblewimble@lists.launchpad.net>"]

[dependencies]
grin_core = { path = "../core" }
grin_store = { path = "../store" }
grin_p2p = { path = "../p2p" }
secp256k1zkp = { path = "../secp256k1zkp" }
time = "^0.1"
rand = "0.3"
log = "0.3"

[dev-dependencies]
3
pool/rustfmt.toml
Normal file
@@ -0,0 +1,3 @@
hard_tabs = true
wrap_comments = true
write_mode = "Overwrite"
103
pool/src/blockchain.rs
Normal file
@@ -0,0 +1,103 @@
// This file is (hopefully) temporary.
//
// It contains a trait based on (but not exactly equal to) the trait defined
// for the blockchain UTXO set, discussed at
// https://github.com/ignopeverell/grin/issues/29, and a dummy implementation
// of said trait.
// Notably, UtxoDiff has been left off, and the question of how to handle
// abstract return types has been deferred.

use core::core::hash;
use core::core::block;
use core::core::transaction;

use std::collections::HashMap;
use std::clone::Clone;

use secp::pedersen::Commitment;

use std::sync::RwLock;

/// A DummyUtxoSet for mocking up the chain
pub struct DummyUtxoSet {
	outputs: HashMap<Commitment, transaction::Output>
}

impl DummyUtxoSet {
	pub fn empty() -> DummyUtxoSet {
		DummyUtxoSet{outputs: HashMap::new()}
	}
	pub fn root(&self) -> hash::Hash {
		hash::ZERO_HASH
	}
	pub fn apply(&self, b: &block::Block) -> DummyUtxoSet {
		let mut new_hashmap = self.outputs.clone();
		for input in &b.inputs {
			new_hashmap.remove(&input.commitment());
		}
		for output in &b.outputs {
			new_hashmap.insert(output.commitment(), output.clone());
		}
		DummyUtxoSet{outputs: new_hashmap}
	}
	pub fn with_block(&mut self, b: &block::Block) {
		for input in &b.inputs {
			self.outputs.remove(&input.commitment());
		}
		for output in &b.outputs {
			self.outputs.insert(output.commitment(), output.clone());
		}
	}
	pub fn rewind(&self, _b: &block::Block) -> DummyUtxoSet {
		DummyUtxoSet{outputs: HashMap::new()}
	}
	pub fn get_output(&self, output_ref: &Commitment) -> Option<&transaction::Output> {
		self.outputs.get(output_ref)
	}

	fn clone(&self) -> DummyUtxoSet {
		DummyUtxoSet{outputs: self.outputs.clone()}
	}

	// only for testing: add an output to the map
	pub fn add_output(&mut self, output: transaction::Output) {
		self.outputs.insert(output.commitment(), output);
	}
	// like above, but doesn't modify in-place so no mut ref needed
	pub fn with_output(&self, output: transaction::Output) -> DummyUtxoSet {
		let mut new_map = self.outputs.clone();
		new_map.insert(output.commitment(), output);
		DummyUtxoSet{outputs: new_map}
	}
}

/// A DummyChain is the mocked chain for playing with what methods we would
/// need
pub struct DummyChainImpl {
	utxo: RwLock<DummyUtxoSet>
}

impl DummyChainImpl {
	pub fn new() -> DummyChainImpl {
		DummyChainImpl{
			utxo: RwLock::new(DummyUtxoSet{outputs: HashMap::new()})}
	}
}

impl DummyChain for DummyChainImpl {
	fn get_best_utxo_set(&self) -> DummyUtxoSet {
		self.utxo.read().unwrap().clone()
	}
	fn update_utxo_set(&mut self, new_utxo: DummyUtxoSet) {
		self.utxo = RwLock::new(new_utxo);
	}
	fn apply_block(&self, b: &block::Block) {
		self.utxo.write().unwrap().with_block(b);
	}
}

pub trait DummyChain {
	fn get_best_utxo_set(&self) -> DummyUtxoSet;
	fn update_utxo_set(&mut self, new_utxo: DummyUtxoSet);
	fn apply_block(&self, b: &block::Block);
}
249
pool/src/graph.rs
Normal file
@@ -0,0 +1,249 @@
// Copyright 2017 The Grin Developers
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

//! Base types for the transaction pool's Directed Acyclic Graphs

use std::vec::Vec;
use std::sync::Arc;
use std::sync::RwLock;
use std::sync::Weak;
use std::cell::RefCell;
use std::collections::HashMap;

use secp::pedersen::Commitment;
use secp::{Secp256k1, ContextFlag};
use secp::key;

use time;
use rand;

use std::fmt;

use core::core;

/// An entry in the transaction pool.
/// These are the vertices of both of the graph structures.
pub struct PoolEntry {
	// Core data
	// Unique identifier of this pool entry and the corresponding transaction
	pub transaction_hash: core::hash::Hash,

	// Metadata
	size_estimate: u64,
	pub receive_ts: time::Tm,
}

impl PoolEntry {
	pub fn new(tx: &core::transaction::Transaction) -> PoolEntry {
		PoolEntry{
			transaction_hash: transaction_identifier(tx),
			size_estimate: estimate_transaction_size(tx),
			receive_ts: time::now()}
	}
}

// TODO: real size estimation; placeholder for now
fn estimate_transaction_size(_tx: &core::transaction::Transaction) -> u64 {
	0
}

/// An edge connecting graph vertices.
/// For various use cases, one of either the source or destination may be
/// unpopulated.
pub struct Edge {
	// Source and destination are the vertex ids, the transaction (kernel)
	// hash.
	source: Option<core::hash::Hash>,
	destination: Option<core::hash::Hash>,

	// The commitment of the output this input/output pairing corresponds to.
	output: Commitment,
}

impl Edge {
	pub fn new(source: Option<core::hash::Hash>, destination: Option<core::hash::Hash>, output: Commitment) -> Edge {
		Edge{source: source, destination: destination, output: output}
	}

	pub fn with_source(&self, src: Option<core::hash::Hash>) -> Edge {
		Edge{source: src, destination: self.destination, output: self.output}
	}

	pub fn with_destination(&self, dst: Option<core::hash::Hash>) -> Edge {
		Edge{source: self.source, destination: dst, output: self.output}
	}

	pub fn output_commitment(&self) -> Commitment {
		self.output
	}
	pub fn destination_hash(&self) -> Option<core::hash::Hash> {
		self.destination
	}
	pub fn source_hash(&self) -> Option<core::hash::Hash> {
		self.source
	}
}

impl fmt::Debug for Edge {
	fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
		write!(f, "Edge {{source: {:?}, destination: {:?}, commitment: {:?}}}",
			self.source, self.destination, self.output)
	}
}

/// The generic graph container. Both graphs, the pool and orphans, embed this
/// structure and add additional capability on top of it.
pub struct DirectedGraph {
	edges: HashMap<Commitment, Edge>,
	vertices: Vec<PoolEntry>,

	// A small optimization: keeping roots (vertices with in-degree 0) in a
	// separate list makes topological sort a bit faster. (This is true for
	// Kahn's, not sure about other implementations.)
	roots: Vec<PoolEntry>,
}

impl DirectedGraph {
	pub fn empty() -> DirectedGraph {
		DirectedGraph{
			edges: HashMap::new(),
			vertices: Vec::new(),
			roots: Vec::new(),
		}
	}

	pub fn get_edge_by_commitment(&self, output_commitment: &Commitment) -> Option<&Edge> {
		self.edges.get(output_commitment)
	}

	pub fn remove_edge_by_commitment(&mut self, output_commitment: &Commitment) -> Option<Edge> {
		self.edges.remove(output_commitment)
	}

	pub fn remove_vertex(&mut self, tx_hash: core::hash::Hash) -> Option<PoolEntry> {
		match self.roots.iter().position(|x| x.transaction_hash == tx_hash) {
			Some(i) => Some(self.roots.swap_remove(i)),
			None => {
				match self.vertices.iter().position(|x| x.transaction_hash == tx_hash) {
					Some(i) => Some(self.vertices.swap_remove(i)),
					None => None,
				}
			}
		}
	}

	/// Adds a vertex and a set of incoming edges to the graph.
	///
	/// The PoolEntry at vertex is added to the graph; depending on the
	/// number of incoming edges, the vertex is either added to the vertices
	/// or to the roots.
	///
	/// Outgoing edges must not be included in edges; this method is designed
	/// for adding vertices one at a time and only accepts incoming edges as
	/// internal edges.
	pub fn add_entry(&mut self, vertex: PoolEntry, mut edges: Vec<Edge>) {
		if edges.is_empty() {
			self.roots.push(vertex);
		} else {
			self.vertices.push(vertex);
			for edge in edges.drain(..) {
				self.edges.insert(edge.output_commitment(), edge);
			}
		}
	}

	// add_vertex_only adds a vertex, meant to be complemented by add_edge_only
	// in cases where delivering a vector of edges is not feasible or efficient
	pub fn add_vertex_only(&mut self, vertex: PoolEntry, is_root: bool) {
		if is_root {
			self.roots.push(vertex);
		} else {
			self.vertices.push(vertex);
		}
	}

	pub fn add_edge_only(&mut self, edge: Edge) {
		self.edges.insert(edge.output_commitment(), edge);
	}

	/// Number of vertices (root + internal)
	pub fn len_vertices(&self) -> usize {
		self.vertices.len() + self.roots.len()
	}

	/// Number of root vertices only
	pub fn len_roots(&self) -> usize {
		self.roots.len()
	}

	/// Number of edges
	pub fn len_edges(&self) -> usize {
		self.edges.len()
	}

	/// Get the current list of roots
	pub fn get_roots(&self) -> Vec<core::hash::Hash> {
		self.roots.iter().map(|x| x.transaction_hash).collect()
	}
}

/// Using transaction merkle_inputs_outputs to calculate a deterministic hash;
/// this hashing mechanism has some ambiguity issues especially around range
/// proofs and any extra data the kernel may cover, but it is used initially
/// for testing purposes.
pub fn transaction_identifier(tx: &core::transaction::Transaction) -> core::hash::Hash {
	core::transaction::merkle_inputs_outputs(&tx.inputs, &tx.outputs)
}

#[cfg(test)]
mod tests {
	use super::*;

	#[test]
	fn test_add_entry() {
		let ec = Secp256k1::with_caps(ContextFlag::Commit);

		let output_commit = ec.commit_value(70).unwrap();
		let inputs = vec![core::transaction::Input(ec.commit_value(50).unwrap()),
			core::transaction::Input(ec.commit_value(25).unwrap())];
		let outputs = vec![core::transaction::Output{
			features: core::transaction::DEFAULT_OUTPUT,
			commit: output_commit,
			proof: ec.range_proof(0, 100, key::ZERO_KEY, output_commit)}];
		let test_transaction = core::transaction::Transaction::new(inputs,
			outputs, 5);

		let test_pool_entry = PoolEntry::new(&test_transaction);

		let incoming_edge_1 = Edge::new(Some(random_hash()),
			Some(core::hash::ZERO_HASH), output_commit);

		let mut test_graph = DirectedGraph::empty();

		test_graph.add_entry(test_pool_entry, vec![incoming_edge_1]);

		assert_eq!(test_graph.vertices.len(), 1);
		assert_eq!(test_graph.roots.len(), 0);
		assert_eq!(test_graph.edges.len(), 1);
	}
}

/// For testing/debugging: a random tx hash
fn random_hash() -> core::hash::Hash {
	let hash_bytes: [u8; 32] = rand::random();
	core::hash::Hash(hash_bytes)
}
35
pool/src/lib.rs
Normal file
@@ -0,0 +1,35 @@
// Copyright 2017 The Grin Developers
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

//! The transaction pool, keeping a view of currently-valid transactions that
//! may be confirmed soon.

#![deny(non_upper_case_globals)]
#![deny(non_camel_case_types)]
#![deny(non_snake_case)]
#![deny(unused_mut)]
#![warn(missing_docs)]

pub mod graph;
pub mod types;
pub mod blockchain;
pub mod pool;

extern crate time;
extern crate rand;
#[macro_use]
extern crate log;

extern crate grin_core as core;
extern crate secp256k1zkp as secp;
924
pool/src/pool.rs
Normal file
@@ -0,0 +1,924 @@
|
|||
// Copyright 2017 The Grin Developers
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
//! Top-level Pool type, methods, and tests
|
||||
|
||||
use types::{Pool, Orphans, Parent, PoolError, TxSource, TransactionGraphContainer};
|
||||
pub use graph;
|
||||
|
||||
use core::core::transaction;
|
||||
use core::core::block;
|
||||
use core::core::hash;
|
||||
// Temporary blockchain dummy impls
|
||||
use blockchain::{DummyChain, DummyChainImpl, DummyUtxoSet};
|
||||
|
||||
use secp::pedersen::Commitment;
|
||||
|
||||
use std::sync::{Arc, RwLock, Weak};
|
||||
use std::collections::HashMap;
|
||||
|
||||
/// The pool itself.
|
||||
/// The transactions HashMap holds ownership of all transactions in the pool,
|
||||
/// keyed by their transaction hash.
|
||||
struct TransactionPool {
|
||||
pub transactions: HashMap<hash::Hash, Box<transaction::Transaction>>,
|
||||
|
||||
pub pool : Pool,
|
||||
pub orphans: Orphans,
|
||||
|
||||
// blockchain is a DummyChain, for now, which mimics what the future
|
||||
// chain will offer to the pool
|
||||
blockchain: Arc<Box<DummyChain>>,
|
||||
}
|
||||
|
||||
|
||||
impl TransactionPool {
|
||||
/// Searches for an output, designated by its commitment, from the current
|
||||
/// best UTXO view, presented by taking the best blockchain UTXO set (as
|
||||
/// determined by the blockchain component) and rectifying pool spent and
|
||||
/// unspents.
|
||||
/// Detects double spends and unknown references from the pool and
|
||||
/// blockchain only; any conflicts with entries in the orphans set must
|
||||
/// be accounted for separately, if relevant.
|
||||
pub fn search_for_best_output(&self, output_commitment: &Commitment) -> Parent {
|
||||
// The current best unspent set is:
|
||||
// Pool unspent + (blockchain unspent - pool->blockchain spent)
|
||||
// Pool unspents are unconditional so we check those first
|
||||
self.pool.get_available_output(output_commitment).
|
||||
map(|x| Parent::PoolTransaction{tx_ref: x.source_hash().unwrap()}).
|
||||
or(self.search_blockchain_unspents(output_commitment)).
|
||||
or(self.search_pool_spents(output_commitment)).
|
||||
unwrap_or(Parent::Unknown)
|
||||
}
|
||||
|
||||
// search_blockchain_unspents searches the current view of the blockchain
|
||||
// unspent set, represented by blockchain unspents - pool spents, for an
|
||||
// output designated by output_commitment.
|
||||
fn search_blockchain_unspents(&self, output_commitment: &Commitment) -> Option<Parent> {
|
||||
self.blockchain.get_best_utxo_set().get_output(output_commitment).
|
||||
map(|o| match self.pool.get_blockchain_spent(output_commitment) {
|
||||
Some(x) => Parent::AlreadySpent{other_tx: x.destination_hash().unwrap()},
|
||||
None => Parent::BlockTransaction,
|
||||
})
|
||||
}
|
||||
|
||||
// search_pool_spents is the second half of pool input detection, after the
|
||||
// available_outputs have been checked. This returns either a
|
||||
// Parent::AlreadySpent or None.
|
||||
fn search_pool_spents(&self, output_commitment: &Commitment) -> Option<Parent> {
|
||||
self.pool.get_internal_spent(output_commitment).
|
||||
map(|x| Parent::AlreadySpent{other_tx: x.destination_hash().unwrap()})
|
||||
|
||||
}
|
||||
|
||||
/// Get the number of transactions in the pool
|
||||
pub fn pool_size(&self) -> usize {
|
||||
self.pool.num_transactions()
|
||||
}
|
||||
|
||||
pub fn orphans_size(&self) -> usize {
|
||||
self.orphans.num_transactions()
|
||||
}
|
||||
|
||||
pub fn total_size(&self) -> usize {
|
||||
self.pool.num_transactions() + self.orphans.num_transactions()
|
||||
}
|
||||
|
||||
/// Attempts to add a transaction to the pool.
|
||||
///
|
||||
/// Adds a transation to the memory pool, deferring to the orphans pool
|
||||
/// if necessary, and performing any connection-related validity checks.
|
||||
/// Happens under an exclusive mutable reference gated by the write portion
|
||||
/// of a RWLock.
|
||||
///
|
||||
pub fn add_to_memory_pool(&mut self, source: TxSource, tx: transaction::Transaction) -> Result<(), PoolError> {
|
||||
// The first check invovles ensuring that an identical transaction is
|
||||
// not already in the pool's transaction set.
|
||||
// A non-authoritative similar check should be performed under the
|
||||
// pool's read lock before we get to this point, which would catch the
|
||||
// majority of duplicate cases. The race condition is caught here.
|
||||
// TODO: When the transaction identifier is finalized, the assumptions
|
||||
// here may change depending on the exact coverage of the identifier.
|
||||
// The current tx.hash() method, for example, does not cover changes
|
||||
// to fees or other elements of the signature preimage.
|
||||
let tx_hash = graph::transaction_identifier(&tx);
|
||||
if self.transactions.contains_key(&tx_hash) {
|
||||
return Err(PoolError::AlreadyInPool)
|
||||
}
|
||||
|
||||
|
||||
// The next issue is to identify all unspent outputs that
|
||||
// this transaction will consume and make sure they exist in the set.
|
||||
let mut pool_refs: Vec<graph::Edge> = Vec::new();
|
||||
let mut orphan_refs: Vec<graph::Edge> = Vec::new();
|
||||
let mut blockchain_refs: Vec<graph::Edge> = Vec::new();
|
||||
|
||||
|
||||
for input in &tx.inputs {
|
||||
let base = graph::Edge::new(None, Some(tx_hash),
|
||||
input.commitment());
|
||||
|
||||
// Note that search_for_best_output does not examine orphans, by
|
||||
// design. If an incoming transaction consumes pool outputs already
|
||||
// spent by the orphans set, this does not preclude its inclusion
|
||||
// into the pool.
|
||||
match self.search_for_best_output(&input.commitment()) {
|
||||
Parent::PoolTransaction{tx_ref: x} => pool_refs.push(base.with_source(Some(x))),
|
||||
Parent::BlockTransaction => blockchain_refs.push(base),
|
||||
Parent::Unknown => orphan_refs.push(base),
|
                Parent::AlreadySpent{other_tx: x} => return Err(
                    PoolError::DoubleSpend{other_tx: x,
                        spent_output: input.commitment()}),
            }
        }

        let is_orphan = !orphan_refs.is_empty();

        // Next we examine the outputs this transaction creates and ensure
        // that they do not already exist.
        // It is worth preventing duplicate outputs from being accepted,
        // even though it is possible for them to be mined with strict
        // ordering. In the future, if desirable, this could be node policy
        // config or made more intelligent.
        for output in &tx.outputs {
            self.check_duplicate_outputs(output, is_orphan)?
        }

        // Assertion: we have exactly as many resolved spending references as
        // inputs to the transaction.
        assert_eq!(tx.inputs.len(),
            blockchain_refs.len() + pool_refs.len() + orphan_refs.len());

        // At this point we know if we're spending all known unspents and not
        // creating any duplicate unspents.
        let pool_entry = graph::PoolEntry::new(&tx);
        let new_unspents = tx.outputs.iter()
            .map(|x| graph::Edge::new(Some(tx_hash), None, x.commitment()))
            .collect();

        if !is_orphan {
            // In the non-orphan (pool) case, we've ensured that every input
            // maps one-to-one with an unspent (available) output, and each
            // output is unique. No further checks are necessary.
            self.pool.add_pool_transaction(pool_entry, blockchain_refs,
                pool_refs, new_unspents);

            self.reconcile_orphans();
            self.transactions.insert(tx_hash, Box::new(tx));
            Ok(())
        } else {
            // At this point, we're pretty sure the transaction is an orphan,
            // but we have to explicitly check for double spends against the
            // orphans set; we do not check this as part of the connectivity
            // checking above.
            // First, any references resolved to the pool need to be compared
            // against active orphan pool_connections.
            // Note that pool_connections here also does double duty to
            // account for blockchain connections.
            for pool_ref in pool_refs.iter().chain(blockchain_refs.iter()) {
                match self.orphans.get_external_spent_output(&pool_ref.output_commitment()) {
                    // Should the below err be subtyped to orphans somehow?
                    Some(x) => return Err(PoolError::DoubleSpend{
                        other_tx: x.destination_hash().unwrap(),
                        spent_output: x.output_commitment()}),
                    None => {},
                }
            }

            // Next, we have to consider the possibility of double spends
            // within the orphans set.
            // We also have to distinguish now between missing and internal
            // references.
            let missing_refs = self.resolve_orphan_refs(tx_hash, &mut orphan_refs)?;

            // We have passed all failure modes.
            pool_refs.append(&mut blockchain_refs);
            self.orphans.add_orphan_transaction(pool_entry,
                pool_refs, orphan_refs, missing_refs, new_unspents);

            Err(PoolError::OrphanTransaction)
        }
    }

    /// Check the output for a conflict with an existing output.
    ///
    /// Checks the output (by commitment) against outputs in the blockchain
    /// or in the pool. If the transaction is destined for orphans, the
    /// orphans set is checked as well.
    fn check_duplicate_outputs(&self, output: &transaction::Output, is_orphan: bool) -> Result<(), PoolError> {
        // Checking against current blockchain unspent outputs.
        // We want outputs even if they're spent by pool txs, so we ignore
        // consumed_blockchain_outputs.
        if self.blockchain.get_best_utxo_set().get_output(&output.commitment()).is_some() {
            return Err(PoolError::DuplicateOutput{
                other_tx: None,
                in_chain: true,
                output: output.commitment()})
        }

        // Check for existence of this output in the pool
        match self.pool.find_output(&output.commitment()) {
            Some(x) => {
                return Err(PoolError::DuplicateOutput{
                    other_tx: Some(x),
                    in_chain: false,
                    output: output.commitment()})
            },
            None => {},
        };

        // If the transaction might go into orphans, perform the same
        // checks as above but against the orphan set instead.
        if is_orphan {
            // Checking against orphan outputs
            match self.orphans.find_output(&output.commitment()) {
                Some(x) => {
                    return Err(PoolError::DuplicateOutput{
                        other_tx: Some(x),
                        in_chain: false,
                        output: output.commitment()})
                },
                None => {},
            };
            // No need to check pool connections since those are covered
            // by pool unspents and blockchain connections.
        }
        Ok(())
    }

    /// Distinguish between missing, unspent, and spent orphan refs.
    ///
    /// Takes the set of orphan_refs produced during transaction connectivity
    /// validation, which do not point at valid unspents in the blockchain or
    /// pool. These references point at either a missing (orphaned) commitment,
    /// an unspent output of the orphans set, or a spent output either within
    /// the orphans set or externally from orphans to the pool or blockchain.
    /// The last case results in a failure condition and transaction acceptance
    /// is aborted.
    fn resolve_orphan_refs(&self, tx_hash: hash::Hash, orphan_refs: &mut Vec<graph::Edge>) -> Result<HashMap<usize, ()>, PoolError> {
        let mut missing_refs: HashMap<usize, ()> = HashMap::new();
        for (i, orphan_ref) in orphan_refs.iter_mut().enumerate() {
            let orphan_commitment = &orphan_ref.output_commitment();
            match self.orphans.get_available_output(&orphan_commitment) {
                // If the edge is an available output of orphans,
                // update the prepared edge.
                Some(x) => *orphan_ref = x.with_destination(Some(tx_hash)),
                // If the edge is not an available output, it is either
                // already consumed or it belongs in missing_refs.
                None => {
                    match self.orphans.get_internal_spent(&orphan_commitment) {
                        Some(x) => return Err(PoolError::DoubleSpend{
                            other_tx: x.destination_hash().unwrap(),
                            spent_output: x.output_commitment()}),
                        None => {
                            // The reference does not resolve to anything.
                            // Make sure this missing output has not already
                            // been claimed, then add this entry to
                            // missing_refs.
                            match self.orphans.get_unknown_output(&orphan_commitment) {
                                Some(x) => return Err(PoolError::DoubleSpend{
                                    other_tx: x.destination_hash().unwrap(),
                                    spent_output: x.output_commitment()}),
                                None => missing_refs.insert(i, ()),
                            };
                        },
                    };
                },
            };
        }
        Ok(missing_refs)
    }

    /// The primary goal of the reconcile_orphans method is to eliminate any
    /// orphans that conflict with the recently accepted pool transaction.
    /// TODO: How do we handle fishing orphans out that look like they could
    /// be freed? Current thought is to do so under a different lock domain
    /// so that we don't have the potential for long recursion under the write
    /// lock.
    pub fn reconcile_orphans(&self) -> Result<(), PoolError> {
        Ok(())
    }

    /// Updates the pool with the details of a new block.
    ///
    /// Along with add_to_memory_pool, reconcile_block is the other major entry
    /// point for the transaction pool. This method reconciles the records in
    /// the transaction pool with the updated view presented by the incoming
    /// block. This involves removing any transactions which appear to conflict
    /// with inputs and outputs consumed in the block, and invalidating any
    /// descendants or parents of the removed transaction, where relevant.
    ///
    /// Returns a list of transactions which have been evicted from the pool
    /// due to the recent block. Because transaction association information is
    /// irreversibly lost in the blockchain, we must keep track of these
    /// evicted transactions elsewhere so that we can make a best effort at
    /// returning them to the pool in the event of a reorg that invalidates
    /// this block.
    pub fn reconcile_block(&mut self, block: &block::Block) -> Result<Vec<Box<transaction::Transaction>>, PoolError> {
        // Prepare the new blockchain-only UTXO view for this process.
        let updated_blockchain_utxo = self.blockchain.get_best_utxo_set();

        // If this pool has been kept in sync correctly, serializing all
        // updates, then the inputs must consume only members of the blockchain
        // utxo set.
        // If the block has been resolved properly and reduced fully to its
        // canonical form, no inputs may consume outputs generated by previous
        // transactions in the block; they would be cut-through. TODO: If this
        // is not consensus enforced, then logic must be added here to account
        // for that.
        // Based on this, we operate under the following algorithm:
        // For each block input, we examine the pool transaction, if any, that
        // consumes the same blockchain output.
        // If one exists, we mark the transaction and then examine its
        // children. Recursively, we mark each child until a child is
        // fully satisfied by outputs in the updated utxo view (after
        // reconciliation of the block), or there are no more children.
        //
        // Additionally, to protect our invariant dictating no duplicate
        // outputs, each output generated by the new utxo set is checked
        // against outputs generated by the pool and the corresponding
        // transactions are also marked.
        //
        // After marking concludes, sweeping begins. In order, the marked
        // transactions are removed, the vertexes corresponding to the
        // transactions are removed, all the marked transactions' outputs are
        // removed, and all remaining non-blockchain inputs are returned to the
        // unspent_outputs set.
        //
        // After the pool has been successfully processed, an orphans
        // reconciliation job is triggered.
        let mut marked_transactions: HashMap<hash::Hash, ()> = HashMap::new();
        {
            let mut conflicting_txs: Vec<hash::Hash> = block.inputs.iter()
                .filter_map(|x|
                    self.pool.get_external_spent_output(&x.commitment()))
                .map(|x| x.destination_hash().unwrap())
                .collect();

            let mut conflicting_outputs: Vec<hash::Hash> = block.outputs.iter()
                .filter_map(|x: &transaction::Output|
                    self.pool.get_internal_spent_output(&x.commitment())
                        .or(self.pool.get_available_output(&x.commitment())))
                .map(|x| x.source_hash().unwrap())
                .collect();

            conflicting_txs.append(&mut conflicting_outputs);

            println!("Conflicting txs: {:?}", conflicting_txs);

            for txh in conflicting_txs {
                self.mark_transaction(&updated_blockchain_utxo,
                    txh, &mut marked_transactions);
            }
        }
        let freed_txs = self.sweep_transactions(marked_transactions,
            &updated_blockchain_utxo);

        self.reconcile_orphans();

        Ok(freed_txs)
    }
    /// The mark portion of our mark-and-sweep pool cleanup.
    ///
    /// The transaction designated by conflicting_tx is immediately marked.
    /// Each output of this transaction is then examined; if a transaction in
    /// the pool spends this output and the output is not replaced by an
    /// identical output included in the updated UTXO set, the child is marked
    /// as well and the process continues recursively.
    ///
    /// Marked transactions are added to the mutable marked_txs HashMap which
    /// is supplied by the calling function.
    fn mark_transaction(&self, updated_utxo: &DummyUtxoSet,
        conflicting_tx: hash::Hash,
        marked_txs: &mut HashMap<hash::Hash, ()>) {

        marked_txs.insert(conflicting_tx, ());

        let tx_ref = self.transactions.get(&conflicting_tx);

        for output in &tx_ref.unwrap().outputs {
            match self.pool.get_internal_spent_output(&output.commitment()) {
                Some(x) => {
                    if updated_utxo.get_output(&x.output_commitment()).is_none() {
                        self.mark_transaction(updated_utxo,
                            x.destination_hash().unwrap(), marked_txs);
                    }
                },
                None => {},
            };
        }
    }
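
    // Illustration only (toy u64 transaction ids and a hypothetical helper
    // name, not the pool's graph types): the mark phase above, reduced to its
    // essence. Starting from a conflicting transaction, each pool descendant
    // is marked recursively unless the output it spends is re-created
    // ("replaced") by the incoming block.
    fn mark_sketch(tx: u64,
        children: &std::collections::HashMap<u64, Vec<u64>>,
        replaced_by_block: &std::collections::HashSet<u64>,
        marked: &mut std::collections::HashSet<u64>) {
        if !marked.insert(tx) {
            return; // already marked
        }
        for &child in children.get(&tx).map(|v| v.as_slice()).unwrap_or(&[]) {
            if !replaced_by_block.contains(&child) {
                mark_sketch(child, children, replaced_by_block, marked);
            }
        }
    }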

    /// The sweep portion of mark-and-sweep pool cleanup.
    ///
    /// The transactions that exist in the hashmap are removed from the
    /// heap storage as well as the vertex set. Any incoming edges are removed
    /// and added to a list of freed edges. Any outbound edges are removed from
    /// both the graph and the list of freed edges. It is the responsibility of
    /// this method to ensure that the list of freed edges (inputs) is
    /// consistent.
    ///
    /// TODO: There's some iteration overlap between this and the mark step.
    /// Additional bookkeeping in the mark step could optimize that away.
    fn sweep_transactions(&mut self,
        marked_transactions: HashMap<hash::Hash, ()>,
        updated_utxo: &DummyUtxoSet) -> Vec<Box<transaction::Transaction>> {

        println!("marked_txs: {:?}", marked_transactions);
        let mut removed_txs = Vec::new();

        for tx_hash in marked_transactions.keys() {
            let removed_tx = self.transactions.remove(tx_hash).unwrap();

            self.pool.remove_pool_transaction(&removed_tx,
                &marked_transactions);

            removed_txs.push(removed_tx);
        }
        removed_txs
    }

    /// Fetch mineable transactions.
    ///
    /// Select a set of mineable transactions for block building.
    pub fn prepare_mineable_transactions(&self, num_to_fetch: u32) -> Vec<Box<transaction::Transaction>> {
        self.pool.get_mineable_transactions(num_to_fetch).iter()
            .map(|x| self.transactions.get(x).unwrap().clone()).collect()
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use types::*;
    use secp::{Secp256k1, ContextFlag, constants};
    use secp::key;
    use core::core::build;

    macro_rules! expect_output_parent {
        ($pool:expr, $expected:pat, $( $output:expr ),+ ) => {
            $(
                match $pool.search_for_best_output(&test_output($output).commitment()) {
                    $expected => {},
                    x => panic!("Unexpected result from output search for {:?}, got {:?}", $output, x),
                };
            )*
        }
    }
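
    // Illustration only (macro name is hypothetical): expect_output_parent!
    // above expands each listed output value into a match on the pool's
    // search result, panicking on any non-matching Parent variant. The same
    // matcher shape, self-contained:
    macro_rules! expect_variant {
        ($value:expr, $expected:pat) => {
            match $value {
                $expected => {},
                x => panic!("unexpected variant: {:?}", x),
            }
        }
    }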

    #[test]
    /// A basic test; add a pair of transactions to the pool.
    fn test_basic_pool_add() {
        let mut dummy_chain = DummyChainImpl::new();

        let parent_transaction = test_transaction(vec![5,6,7], vec![11,4]);
        // We want this transaction to be rooted in the blockchain.
        let new_utxo = DummyUtxoSet::empty()
            .with_output(test_output(5))
            .with_output(test_output(6))
            .with_output(test_output(7))
            .with_output(test_output(8));

        // Prepare a second transaction, connected to the first.
        let child_transaction = test_transaction(vec![11,4], vec![12]);

        dummy_chain.update_utxo_set(new_utxo);

        // To mirror how this construction is intended to be used, the pool
        // is placed inside a RwLock.
        let pool = RwLock::new(test_setup(&Arc::new(Box::new(dummy_chain))));

        // Take the write lock and add a pool entry
        {
            let mut write_pool = pool.write().unwrap();
            assert_eq!(write_pool.total_size(), 0);

            // First, add the transaction rooted in the blockchain
            let result = write_pool.add_to_memory_pool(test_source(),
                parent_transaction);
            if result.is_err() {
                panic!("got an error adding parent tx: {:?}",
                    result.err().unwrap());
            }

            // Now, add the transaction connected as a child to the first
            let child_result = write_pool.add_to_memory_pool(test_source(),
                child_transaction);

            if child_result.is_err() {
                panic!("got an error adding child tx: {:?}",
                    child_result.err().unwrap());
            }
        }

        // Now take the read lock and use a few exposed methods to check
        // consistency
        {
            let read_pool = pool.read().unwrap();
            assert_eq!(read_pool.total_size(), 2);

            expect_output_parent!(read_pool,
                Parent::PoolTransaction{tx_ref: _}, 12);
            expect_output_parent!(read_pool,
                Parent::AlreadySpent{other_tx: _}, 11, 5);
            expect_output_parent!(read_pool,
                Parent::BlockTransaction, 8);
            expect_output_parent!(read_pool,
                Parent::Unknown, 20);
        }
    }
    #[test]
    /// Testing various expected error conditions
    pub fn test_pool_add_error() {
        let mut dummy_chain = DummyChainImpl::new();

        let new_utxo = DummyUtxoSet::empty()
            .with_output(test_output(5))
            .with_output(test_output(6))
            .with_output(test_output(7));

        dummy_chain.update_utxo_set(new_utxo);

        let pool = RwLock::new(test_setup(&Arc::new(Box::new(dummy_chain))));
        {
            let mut write_pool = pool.write().unwrap();
            assert_eq!(write_pool.total_size(), 0);

            // First expected failure: duplicate output
            let duplicate_tx = test_transaction(vec![5,6], vec![7]);

            match write_pool.add_to_memory_pool(test_source(),
                duplicate_tx) {
                Ok(_) => panic!("Got OK from add_to_memory_pool when dup was expected"),
                Err(x) => { match x {
                    PoolError::DuplicateOutput{other_tx, in_chain, output} => {
                        if other_tx.is_some() || !in_chain || output != test_output(7).commitment() {
                            panic!("Unexpected parameter in DuplicateOutput: {:?}", x);
                        }
                    },
                    _ => panic!("Unexpected error when adding duplicate output transaction: {:?}", x),
                };},
            };

            // To test DoubleSpend and AlreadyInPool conditions, we need to add
            // a valid transaction.
            let valid_transaction = test_transaction(vec![5,6], vec![8]);

            match write_pool.add_to_memory_pool(test_source(),
                valid_transaction) {
                Ok(_) => {},
                Err(x) => panic!("Unexpected error while adding a valid transaction: {:?}", x),
            };

            // Now, test a DoubleSpend by consuming the same blockchain unspent
            // as valid_transaction:
            let double_spend_transaction = test_transaction(vec![6], vec![2]);

            match write_pool.add_to_memory_pool(test_source(),
                double_spend_transaction) {
                Ok(_) => panic!("Expected error when adding double spend, got Ok"),
                Err(x) => {
                    match x {
                        PoolError::DoubleSpend{other_tx, spent_output} => {
                            if spent_output != test_output(6).commitment() {
                                panic!("Unexpected parameter in DoubleSpend: {:?}", x);
                            }
                        },
                        _ => panic!("Unexpected error when adding double spend transaction: {:?}", x),
                    };
                },
            };

            let already_in_pool = test_transaction(vec![5,6], vec![8]);

            match write_pool.add_to_memory_pool(test_source(),
                already_in_pool) {
                Ok(_) => panic!("Expected error when adding already in pool, got Ok"),
                Err(x) => {
                    match x {
                        PoolError::AlreadyInPool => {},
                        _ => panic!("Unexpected error when adding already in pool tx: {:?}",
                            x),
                    };
                }
            };

            assert_eq!(write_pool.total_size(), 1);
        }
    }

    #[test]
    /// Testing an expected orphan
    fn test_add_orphan() {
    }
    #[test]
    /// Testing block reconciliation
    fn test_block_reconciliation() {
        let mut dummy_chain = DummyChainImpl::new();

        let new_utxo = DummyUtxoSet::empty()
            .with_output(test_output(10))
            .with_output(test_output(20))
            .with_output(test_output(30))
            .with_output(test_output(40));

        dummy_chain.update_utxo_set(new_utxo);

        let chain_ref = Arc::new(Box::new(dummy_chain) as Box<DummyChain>);

        let pool = RwLock::new(test_setup(&chain_ref));

        // Preparation: We will introduce three root pool transactions.
        // 1. A transaction that should be invalidated because it is exactly
        //    contained in the block.
        // 2. A transaction that should be invalidated because the input is
        //    consumed in the block, although it is not exactly consumed.
        // 3. A transaction that should remain after block reconciliation.
        let block_transaction = test_transaction(vec![10], vec![8]);
        let conflict_transaction = test_transaction(vec![20], vec![12,7]);
        let valid_transaction = test_transaction(vec![30], vec![14,15]);

        // We will also introduce a few children:
        // 4. A transaction that descends from transaction 1, that is in
        //    turn exactly contained in the block.
        let block_child = test_transaction(vec![8], vec![4,3]);
        // 5. A transaction that descends from transaction 4, that is not
        //    contained in the block at all and should be valid after
        //    reconciliation.
        let pool_child = test_transaction(vec![4], vec![1]);
        // 6. A transaction that descends from transaction 2 that does not
        //    conflict with anything in the block in any way, but should be
        //    invalidated (orphaned).
        let conflict_child = test_transaction(vec![12], vec![11]);
        // 7. A transaction that descends from transaction 2 that should be
        //    valid due to its inputs being satisfied by the block.
        let conflict_valid_child = test_transaction(vec![7], vec![5]);
        // 8. A transaction that descends from transaction 3 that should be
        //    invalidated due to an output conflict.
        let valid_child_conflict = test_transaction(vec![14], vec![9]);
        // 9. A transaction that descends from transaction 3 that should remain
        //    valid after reconciliation.
        let valid_child_valid = test_transaction(vec![15], vec![13]);
        // 10. A transaction that descends from both transaction 6 and
        //    transaction 9.
        let mixed_child = test_transaction(vec![11,13], vec![2]);

        // Add transactions.
        // Note: There are some ordering constraints that must be followed here
        // until orphans is 100% implemented. Once the orphans process has
        // stabilized, we can mix these up to exercise that path a bit.
        let mut txs_to_add = vec![block_transaction, conflict_transaction,
            valid_transaction, block_child, pool_child, conflict_child,
            conflict_valid_child, valid_child_conflict, valid_child_valid,
            mixed_child];

        let expected_pool_size = txs_to_add.len();

        // First we add the above transactions to the pool; all should be
        // accepted.
        {
            let mut write_pool = pool.write().unwrap();
            assert_eq!(write_pool.total_size(), 0);

            for tx in txs_to_add.drain(..) {
                assert!(write_pool.add_to_memory_pool(test_source(),
                    tx).is_ok());
            }

            assert_eq!(write_pool.total_size(), expected_pool_size);
        }
        // Now we prepare the block that will cause the above condition.
        // First, the transactions we want in the block:
        // - Copy of 1
        let mut block_tx_1 = test_transaction(vec![10], vec![8]);
        // - Conflict w/ 2, satisfies 7
        let mut block_tx_2 = test_transaction(vec![20], vec![7]);
        // - Copy of 4
        let mut block_tx_3 = test_transaction(vec![8], vec![4,3]);
        // - Output conflict w/ 8
        let mut block_tx_4 = test_transaction(vec![40], vec![9]);
        let block_transactions = vec![&mut block_tx_1, &mut block_tx_2,
            &mut block_tx_3, &mut block_tx_4];

        let block = block::Block::new(&block::BlockHeader::default(),
            block_transactions, key::ONE_KEY).unwrap();

        chain_ref.apply_block(&block);

        // Block reconciliation
        {
            let mut write_pool = pool.write().unwrap();

            let evicted_transactions = write_pool.reconcile_block(&block);

            assert!(evicted_transactions.is_ok());

            assert_eq!(evicted_transactions.unwrap().len(), 6);

            // TODO: Txids are not yet deterministic. When they are, we should
            // check the specific transactions that were evicted.
        }

        // Using the pool's methods to validate a few end conditions.
        {
            let read_pool = pool.read().unwrap();

            assert_eq!(read_pool.total_size(), 4);

            // We should have available blockchain outputs at 9 and 3
            expect_output_parent!(read_pool, Parent::BlockTransaction, 9, 3);

            // We should have spent blockchain outputs at 4 and 7
            expect_output_parent!(read_pool,
                Parent::AlreadySpent{other_tx: _}, 4, 7);

            // We should have a spent pool reference at 15
            expect_output_parent!(read_pool,
                Parent::AlreadySpent{other_tx: _}, 15);

            // We should have unspent pool references at 1, 13, 14
            expect_output_parent!(read_pool,
                Parent::PoolTransaction{tx_ref: _}, 1, 13, 14);

            // References internal to the block should be unknown
            expect_output_parent!(read_pool, Parent::Unknown, 8);

            // Evicted transactions should have unknown outputs
            expect_output_parent!(read_pool, Parent::Unknown, 2, 11);
        }
    }
    #[test]
    /// Test transaction selection and block building.
    fn test_block_building() {
        // Add a handful of transactions
        let mut dummy_chain = DummyChainImpl::new();

        let new_utxo = DummyUtxoSet::empty()
            .with_output(test_output(10))
            .with_output(test_output(20))
            .with_output(test_output(30))
            .with_output(test_output(40));

        dummy_chain.update_utxo_set(new_utxo);

        let chain_ref = Arc::new(Box::new(dummy_chain) as Box<DummyChain>);

        let pool = RwLock::new(test_setup(&chain_ref));

        let root_tx_1 = test_transaction(vec![10,20], vec![25]);
        let root_tx_2 = test_transaction(vec![30], vec![28]);
        let root_tx_3 = test_transaction(vec![40], vec![38]);

        let child_tx_1 = test_transaction(vec![25], vec![23]);
        let child_tx_2 = test_transaction(vec![38], vec![32]);

        {
            let mut write_pool = pool.write().unwrap();
            assert_eq!(write_pool.total_size(), 0);

            assert!(write_pool.add_to_memory_pool(test_source(),
                root_tx_1).is_ok());
            assert!(write_pool.add_to_memory_pool(test_source(),
                root_tx_2).is_ok());
            assert!(write_pool.add_to_memory_pool(test_source(),
                root_tx_3).is_ok());
            assert!(write_pool.add_to_memory_pool(test_source(),
                child_tx_1).is_ok());
            assert!(write_pool.add_to_memory_pool(test_source(),
                child_tx_2).is_ok());

            assert_eq!(write_pool.total_size(), 5);
        }

        // Request blocks
        let block: block::Block;
        let mut txs: Vec<Box<transaction::Transaction>>;
        {
            let read_pool = pool.read().unwrap();
            txs = read_pool.prepare_mineable_transactions(3);
            assert_eq!(txs.len(), 3);
            // TODO: This is ugly; either make block::new take owned
            // txs instead of mut refs, or change
            // prepare_mineable_transactions to return mut refs.
            let mut block_txs: Vec<transaction::Transaction> = txs.drain(..).map(|x| *x).collect();
            let tx_refs = block_txs.iter_mut().collect();
            block = block::Block::new(&block::BlockHeader::default(),
                tx_refs, key::ONE_KEY).unwrap();
        }

        chain_ref.apply_block(&block);
        // Reconcile block
        {
            let mut write_pool = pool.write().unwrap();

            let evicted_transactions = write_pool.reconcile_block(&block);

            assert!(evicted_transactions.is_ok());

            assert_eq!(evicted_transactions.unwrap().len(), 3);
            assert_eq!(write_pool.total_size(), 2);
        }
    }

    fn test_setup(dummy_chain: &Arc<Box<DummyChain>>) -> TransactionPool {
        TransactionPool{
            transactions: HashMap::new(),
            pool: Pool::empty(),
            orphans: Orphans::empty(),
            blockchain: dummy_chain.clone(),
        }
    }

    /// Cobble together a test transaction for testing the transaction pool.
    ///
    /// Connectivity here is the most important element.
    /// Every output is given a blinding key equal to its value, so that the
    /// entire commitment can be derived deterministically from just the value.
    ///
    /// Fees are the remainder between input and output values, so the numbers
    /// should make sense.
    fn test_transaction(input_values: Vec<u64>, output_values: Vec<u64>) -> transaction::Transaction {
        let fees: i64 = input_values.iter().sum::<u64>() as i64 - output_values.iter().sum::<u64>() as i64;
        assert!(fees >= 0);

        let mut tx_elements = Vec::new();

        for input_value in input_values {
            tx_elements.push(build::input(input_value, test_key(input_value)));
        }

        for output_value in output_values {
            tx_elements.push(build::output(output_value, test_key(output_value)));
        }
        tx_elements.push(build::with_fee(fees as u64));

        println!("Fee was {}", fees as u64);

        let (tx, _) = build::transaction(tx_elements).unwrap();
        tx
    }
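
    // Illustration only (helper name is hypothetical, not part of the pool
    // code): the fee computed by test_transaction above is simply the value
    // difference between inputs and outputs, which must be non-negative.
    fn test_fee_sketch(input_values: &[u64], output_values: &[u64]) -> u64 {
        let fee: i64 = input_values.iter().sum::<u64>() as i64
            - output_values.iter().sum::<u64>() as i64;
        assert!(fee >= 0);
        fee as u64
    }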

    /// Deterministically generate an output defined by our test scheme
    fn test_output(value: u64) -> transaction::Output {
        let ec = Secp256k1::with_caps(ContextFlag::Commit);
        let output_key = test_key(value);
        let output_commitment = ec.commit(value, output_key).unwrap();
        transaction::Output{
            features: transaction::DEFAULT_OUTPUT,
            commit: output_commitment,
            proof: ec.range_proof(0, value, output_key, output_commitment)}
    }

    /// Makes a SecretKey from a single u64
    fn test_key(value: u64) -> key::SecretKey {
        let ec = Secp256k1::with_caps(ContextFlag::Commit);
        // SecretKey takes a SECRET_KEY_SIZE slice of u8.
        assert!(constants::SECRET_KEY_SIZE > 8);

        // (SECRET_KEY_SIZE - 8) zeros, followed by value as a big-endian byte
        // sequence
        let mut key_slice = vec![0; constants::SECRET_KEY_SIZE - 8];

        key_slice.push((value >> 56) as u8);
        key_slice.push((value >> 48) as u8);
        key_slice.push((value >> 40) as u8);
        key_slice.push((value >> 32) as u8);
        key_slice.push((value >> 24) as u8);
        key_slice.push((value >> 16) as u8);
        key_slice.push((value >> 8) as u8);
        key_slice.push(value as u8);

        key::SecretKey::from_slice(&ec, &key_slice).unwrap()
    }
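
    // Illustration only (helper name is hypothetical): the byte layout built
    // by test_key above is (key_size - 8) zero bytes followed by the value in
    // big-endian order, which a loop can express compactly.
    fn test_key_bytes_sketch(value: u64, key_size: usize) -> Vec<u8> {
        let mut key_slice = vec![0u8; key_size - 8];
        for shift in (0..8).rev() {
            key_slice.push((value >> (shift * 8)) as u8);
        }
        key_slice
    }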

    /// A generic TxSource representing a test
    fn test_source() -> TxSource {
        TxSource{
            debug_name: "test".to_string(),
            identifier: "127.0.0.1".to_string(),
        }
    }
}
pool/src/types.rs (386 lines, new file)
@@ -0,0 +1,386 @@
// Copyright 2017 The Grin Developers
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
//! The primary module containing the implementations of the transaction pool
|
||||
//! and its top-level members.
|
||||
|
||||
use std::vec::Vec;
|
||||
use std::sync::Arc;
|
||||
use std::sync::RwLock;
|
||||
use std::sync::Weak;
|
||||
use std::cell::RefCell;
|
||||
use std::collections::HashMap;
|
||||
use std::iter::Iterator;
|
||||
use std::fmt;
|
||||
|
||||
use secp::pedersen::Commitment;
|
||||
|
||||
pub use graph;
|
||||
|
||||
use time;
|
||||
|
||||
use core::core::transaction;
|
||||
use core::core::block;
|
||||
use core::core::hash;
|
||||
|
||||
|
||||
|
||||
/// Placeholder: the data representing where we heard about a tx from.
///
/// Used to make decisions based on transaction acceptance priority from
/// various sources. For example, a node may want to bypass pool size
/// restrictions when accepting a transaction from a local wallet.
///
/// Most likely this will evolve to contain some sort of network identifier,
/// once we get a better sense of what transaction building might look like.
pub struct TxSource {
    /// Human-readable name used for logging and errors.
    pub debug_name: String,
    /// Unique identifier used to distinguish this peer from others.
    pub identifier: String,
}

/// This enum describes the parent for a given input of a transaction.
#[derive(Clone)]
pub enum Parent {
    Unknown,
    BlockTransaction,
    PoolTransaction{tx_ref: hash::Hash},
    AlreadySpent{other_tx: hash::Hash},
}

impl fmt::Debug for Parent {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            &Parent::Unknown => write!(f, "Parent: Unknown"),
            &Parent::BlockTransaction => write!(f, "Parent: Block Transaction"),
            &Parent::PoolTransaction{tx_ref: x} => write!(f,
                "Parent: Pool Transaction ({:?})", x),
            &Parent::AlreadySpent{other_tx: x} => write!(f,
                "Parent: Already Spent By {:?}", x),
        }
    }
}

#[derive(Debug)]
pub enum PoolError {
    Invalid,
    AlreadyInPool,
    DuplicateOutput{other_tx: Option<hash::Hash>, in_chain: bool,
        output: Commitment},
    DoubleSpend{other_tx: hash::Hash, spent_output: Commitment},
    // An orphan successfully added to the orphans set
    OrphanTransaction,
}
/// Pool contains the elements of the graph that are connected, in full, to
/// the blockchain.
/// Reservations of outputs by orphan transactions (not fully connected) are
/// not respected.
/// Spending references (input -> output) exist in two structures: internal
/// graph references are contained in the pool edge sets, while references
/// sourced from the blockchain's UTXO set are contained in the
/// blockchain_connections set.
/// Spent-by references (output -> input) exist in two structures: pool-pool
/// connections are in the pool edge set, while unspent (dangling) references
/// exist in the available_outputs set.
pub struct Pool {
    graph : graph::DirectedGraph,

    // available_outputs are unspent outputs of the current pool set,
    // maintained as edges with empty destinations, keyed by the
    // output's commitment.
    available_outputs: HashMap<Commitment, graph::Edge>,

    // Consumed blockchain UTXOs are kept in a separate map.
    consumed_blockchain_outputs: HashMap<Commitment, graph::Edge>
}
impl Pool {
    pub fn empty() -> Pool {
        Pool{
            graph: graph::DirectedGraph::empty(),
            available_outputs: HashMap::new(),
            consumed_blockchain_outputs: HashMap::new(),
        }
    }

    /// Given an output, check if a spending reference (input -> output)
    /// already exists in the pool.
    /// Returns the transaction (kernel) hash corresponding to the conflicting
    /// transaction.
    pub fn check_double_spend(&self, o: &transaction::Output) -> Option<hash::Hash> {
        self.graph.get_edge_by_commitment(&o.commitment())
            .or(self.consumed_blockchain_outputs.get(&o.commitment()))
            .map(|x| x.destination_hash().unwrap())
    }

    pub fn get_blockchain_spent(&self, c: &Commitment) -> Option<&graph::Edge> {
        self.consumed_blockchain_outputs.get(c)
    }

    pub fn add_pool_transaction(&mut self, pool_entry: graph::PoolEntry,
        mut blockchain_refs: Vec<graph::Edge>, pool_refs: Vec<graph::Edge>,
        mut new_unspents: Vec<graph::Edge>) {

        // Removing consumed available_outputs
        for new_edge in &pool_refs {
            // All of these should correspond to an existing unspent
            assert!(self.available_outputs.remove(&new_edge.output_commitment()).is_some());
        }

        // Accounting for consumed blockchain outputs
        for new_blockchain_edge in blockchain_refs.drain(..) {
            self.consumed_blockchain_outputs.insert(
                new_blockchain_edge.output_commitment(),
                new_blockchain_edge);
        }

        // Adding the transaction to the vertices list along with internal
        // pool edges
        self.graph.add_entry(pool_entry, pool_refs);

        // Adding the new unspents to the unspent map
        for unspent_output in new_unspents.drain(..) {
            self.available_outputs.insert(
                unspent_output.output_commitment(), unspent_output);
        }
    }
    pub fn remove_pool_transaction(&mut self, tx: &transaction::Transaction,
        marked_txs: &HashMap<hash::Hash, ()>) {

        self.graph.remove_vertex(graph::transaction_identifier(tx));

        for input in tx.inputs.iter().map(|x| x.commitment()) {
            match self.graph.remove_edge_by_commitment(&input) {
                Some(x) => {
                    if !marked_txs.contains_key(&x.source_hash().unwrap()) {
                        self.available_outputs.insert(x.output_commitment(),
                            x.with_destination(None));
                    }
                },
                None => {
                    self.consumed_blockchain_outputs.remove(&input);
                },
            };
        }

        for output in tx.outputs.iter().map(|x| x.commitment()) {
            match self.graph.remove_edge_by_commitment(&output) {
                Some(x) => {
                    if !marked_txs.contains_key(
                        &x.destination_hash().unwrap()) {

                        self.consumed_blockchain_outputs.insert(
                            x.output_commitment(),
                            x.with_source(None));
                    }
                },
                None => {
                    self.available_outputs.remove(&output);
                }
            };
        }
    }

    /// Simplest possible implementation: just return the roots
    pub fn get_mineable_transactions(&self, num_to_fetch: u32) -> Vec<hash::Hash> {
        let mut roots = self.graph.get_roots();
        roots.truncate(num_to_fetch as usize);
        roots
    }
}
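The add/remove pair above maintains one invariant: every pool reference a new transaction consumes must match a currently available (unspent) output, and the transaction's own outputs then become available. A minimal standalone sketch of that bookkeeping, using hypothetical `u64` stand-ins for commitments and edges (not the real grin types):

```rust
use std::collections::HashMap;

// Simplified model of the pool's unspent-output bookkeeping. Keys stand in
// for commitments; values record the producing transaction's id.
pub struct ToyPool {
    pub available_outputs: HashMap<u64, u64>, // commitment -> producing tx id
}

impl ToyPool {
    pub fn new() -> ToyPool {
        ToyPool { available_outputs: HashMap::new() }
    }

    // Consume existing unspents, then record the tx's own outputs as unspent.
    pub fn add_transaction(&mut self, tx_id: u64, consumed: &[u64], produced: &[u64]) {
        for c in consumed {
            // Mirrors the pool's assert: every pool ref must point at a
            // live available output.
            assert!(self.available_outputs.remove(c).is_some());
        }
        for p in produced {
            self.available_outputs.insert(*p, tx_id);
        }
    }
}
```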
impl TransactionGraphContainer for Pool {
    fn get_graph(&self) -> &graph::DirectedGraph {
        &self.graph
    }
    fn get_available_output(&self, output: &Commitment) -> Option<&graph::Edge> {
        self.available_outputs.get(output)
    }
    fn get_external_spent_output(&self, output: &Commitment) -> Option<&graph::Edge> {
        self.consumed_blockchain_outputs.get(output)
    }
    fn get_internal_spent_output(&self, output: &Commitment) -> Option<&graph::Edge> {
        self.graph.get_edge_by_commitment(output)
    }
}
/// Orphans contains the elements of the transaction graph that have not been
/// connected in full to the blockchain.
pub struct Orphans {
    graph : graph::DirectedGraph,

    // available_outputs are unspent outputs of the current orphan set,
    // maintained as edges with empty destinations.
    available_outputs: HashMap<Commitment, graph::Edge>,

    // missing_outputs are spending references (inputs) with missing
    // corresponding outputs, maintained as edges with empty sources.
    missing_outputs: HashMap<Commitment, graph::Edge>,

    // pool_connections are bidirectional edges which connect to the pool
    // graph. They should map one-to-one to pool graph available_outputs.
    // pool_connections should not be viewed authoritatively; they are
    // merely informational until the transaction is officially connected to
    // the pool.
    pool_connections: HashMap<Commitment, graph::Edge>,
}
impl Orphans {
    pub fn empty() -> Orphans {
        Orphans{
            graph: graph::DirectedGraph::empty(),
            available_outputs : HashMap::new(),
            missing_outputs: HashMap::new(),
            pool_connections: HashMap::new(),
        }
    }

    /// Checks for a double spend of the given output, ONLY in the data
    /// maintained by the orphans set. This includes links to the pool as
    /// well as links internal to orphan transactions.
    /// Returns the transaction hash corresponding to the conflicting
    /// transaction.
    fn check_double_spend(&self, o: &transaction::Output) -> Option<hash::Hash> {
        self.graph.get_edge_by_commitment(&o.commitment())
            .or(self.pool_connections.get(&o.commitment()))
            .map(|x| x.destination_hash().unwrap())
    }

    pub fn get_unknown_output(&self, output: &Commitment) -> Option<&graph::Edge> {
        self.missing_outputs.get(output)
    }

    /// Add an orphan transaction to the orphans set.
    ///
    /// This method adds a given transaction (represented by the PoolEntry at
    /// orphan_entry) to the orphans set.
    ///
    /// This method has no failure modes. All checks should be passed before
    /// entry.
    ///
    /// Expects a HashMap at is_missing describing the indices of orphan_refs
    /// which correspond to missing (vs orphan-to-orphan) links.
    pub fn add_orphan_transaction(&mut self, orphan_entry: graph::PoolEntry,
        mut pool_refs: Vec<graph::Edge>, mut orphan_refs: Vec<graph::Edge>,
        is_missing: HashMap<usize, ()>, mut new_unspents: Vec<graph::Edge>) {

        // Capture the length before draining; orphan_refs is empty afterward.
        let num_orphan_refs = orphan_refs.len();

        // Route each orphan reference: missing outputs are tracked as
        // dangling edges, while orphan-to-orphan links consume an
        // available output.
        for (i, new_edge) in orphan_refs.drain(..).enumerate() {
            if is_missing.contains_key(&i) {
                self.missing_outputs.insert(new_edge.output_commitment(),
                    new_edge);
            } else {
                assert!(self.available_outputs.remove(&new_edge.output_commitment()).is_some());
                self.graph.add_edge_only(new_edge);
            }
        }

        // Accounting for consumed blockchain and pool outputs
        for external_edge in pool_refs.drain(..) {
            self.pool_connections.insert(
                external_edge.output_commitment(), external_edge);
        }

        // If is_missing is the same length as orphan_refs was, we have no
        // orphan-orphan links for this transaction and it is a root
        // transaction of the orphans set.
        self.graph.add_vertex_only(orphan_entry,
            is_missing.len() == num_orphan_refs);

        // Adding the new unspents to the unspent map
        for unspent_output in new_unspents.drain(..) {
            self.available_outputs.insert(
                unspent_output.output_commitment(), unspent_output);
        }
    }
}
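add_orphan_transaction routes each orphan reference by whether its index appears in is_missing. A standalone sketch of that partition step, with hypothetical `u64` stand-ins for edges:

```rust
use std::collections::HashSet;

// Split orphan input references into truly-missing outputs and
// orphan-to-orphan links, by index, the way add_orphan_transaction
// consults its is_missing map. Types are simplified stand-ins.
pub fn partition_orphan_refs(refs: Vec<u64>, is_missing: &HashSet<usize>)
        -> (Vec<u64>, Vec<u64>) {
    let mut missing = Vec::new();
    let mut connected = Vec::new();
    for (i, r) in refs.into_iter().enumerate() {
        if is_missing.contains(&i) {
            missing.push(r);
        } else {
            connected.push(r);
        }
    }
    (missing, connected)
}
```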
impl TransactionGraphContainer for Orphans {
    fn get_graph(&self) -> &graph::DirectedGraph {
        &self.graph
    }
    fn get_available_output(&self, output: &Commitment) -> Option<&graph::Edge> {
        self.available_outputs.get(output)
    }
    fn get_external_spent_output(&self, output: &Commitment) -> Option<&graph::Edge> {
        self.pool_connections.get(output)
    }
    fn get_internal_spent_output(&self, output: &Commitment) -> Option<&graph::Edge> {
        self.graph.get_edge_by_commitment(output)
    }
}
/// Trait for types that embed a graph and connect to external state.
///
/// The types implementing this trait consist of a graph with nodes and edges
/// representing transactions and outputs, respectively. Outputs fall into one
/// of three categories:
/// 1) External spent: An output sourced externally consumed by a transaction
///    in this graph,
/// 2) Internal spent: An output produced by a transaction in this graph and
///    consumed by another transaction in this graph,
/// 3) [External] Unspent: An output produced by a transaction in this graph
///    that is not yet spent.
///
/// There is no concept of an external "spent by" reference (an output
/// produced by a transaction in the graph but spent by a transaction in
/// another source), as these references are expected to be maintained by the
/// descendant graph. Outputs follow a hierarchy (Blockchain -> Pool ->
/// Orphans) where each descendant exists at a lower priority than its
/// parent. An output consumed by a child graph is marked as unspent in the
/// parent graph and as an external spent in the child. This ensures that no
/// descendant set must modify state in a set of higher priority.
pub trait TransactionGraphContainer {
    /// Accessor for the graph object
    fn get_graph(&self) -> &graph::DirectedGraph;
    /// Accessor for internal spents
    fn get_internal_spent_output(&self, output: &Commitment) -> Option<&graph::Edge>;
    /// Accessor for external unspents
    fn get_available_output(&self, output: &Commitment) -> Option<&graph::Edge>;
    /// Accessor for external spents
    fn get_external_spent_output(&self, output: &Commitment) -> Option<&graph::Edge>;

    /// Checks if the available_output set has the output at the given
    /// commitment.
    fn has_available_output(&self, c: &Commitment) -> bool {
        self.get_available_output(c).is_some()
    }

    /// Checks if the pool has anything by this output already, between
    /// available outputs and internal ones.
    fn find_output(&self, c: &Commitment) -> Option<hash::Hash> {
        self.get_available_output(c)
            .or(self.get_internal_spent_output(c))
            .map(|x| x.source_hash().unwrap())
    }
    /// Search for a spent reference internal to the graph
    fn get_internal_spent(&self, c: &Commitment) -> Option<&graph::Edge> {
        self.get_internal_spent_output(c)
    }

    fn num_root_transactions(&self) -> usize {
        self.get_graph().len_roots()
    }

    fn num_transactions(&self) -> usize {
        self.get_graph().len_vertices()
    }

    fn num_output_edges(&self) -> usize {
        self.get_graph().len_edges()
    }
}
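find_output's default implementation chains two lookups with Option::or: the available (unspent) map is consulted first, then the internally-spent edges. A standalone sketch of that lookup order, with hypothetical `u64` stand-ins for commitments and hashes:

```rust
use std::collections::HashMap;

// An output is known to a container if it is still available (unspent) or
// already spent internally; the available map wins when both could match.
pub fn find_output(available: &HashMap<u64, u64>,
                   internal_spent: &HashMap<u64, u64>,
                   commitment: &u64) -> Option<u64> {
    available.get(commitment)
        .or(internal_spent.get(commitment))
        .copied()
}
```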