
txmempool.h 20KB

estimatefee / estimatepriority RPC methods

New RPC methods: return an estimate of the fee (or priority) a transaction needs to be likely to confirm in a given number of blocks.

Mike Hearn created the first version of this method for estimating fees. It works as follows: for transactions that took 1 to N (I picked N=25) blocks to confirm, keep N buckets with at most 100 entries in each, recording the fees-per-kilobyte paid by those transactions. (Separate buckets are kept for transactions that confirmed because they are high-priority.) The buckets are filled as blocks are found, and are saved/restored in a new fee_estimates.dat file in the data directory.

A few variations on Mike's initial scheme: to estimate the fee needed for a transaction to confirm in X blocks, all of the samples in all of the buckets are used, and a median of all of the data is used to make the estimate. For example, imagine 25 buckets each containing the full 100 entries. Those 2,500 samples are sorted, and the estimate of the fee needed to confirm in the very next block is the 50th-highest-fee entry in that sorted list; the estimate of the fee needed to confirm in the next two blocks is the 150th-highest-fee entry, etc.

That algorithm has the nice property that estimates of how much fee you need to pay to get confirmed in block N will always be greater than or equal to the estimate for block N+1. It would clearly be wrong to say "pay 11 uBTC and you'll get confirmed in 3 blocks, but pay 12 uBTC and it will take LONGER".

A single block will not contribute more than 10 entries to any one bucket, so a single miner and a large block cannot overwhelm the estimates.
7 years ago
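The median-of-all-buckets estimate described in the commit message above can be sketched as follows. This is a simplified illustration, not Bitcoin Core's actual CBlockPolicyEstimator code; the function name, the flat sample vector, and the assumption of full 100-entry buckets are all invented for the example.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

// Sketch of the median-based estimate: all fee-per-kB samples from all
// buckets are pooled and sorted from highest fee to lowest. With full
// 100-entry buckets, the confirm-in-1-block estimate is the 50th-highest
// sample, confirm-in-2-blocks is the 150th-highest, and in general
// confirm-in-X-blocks is the (100*X - 50)th-highest sample.
// Assumes nBlocks >= 1. Returns -1.0 when there is too little data.
double EstimateFeePerKB(std::vector<double> samples, int nBlocks)
{
    std::sort(samples.begin(), samples.end(), std::greater<double>());
    std::size_t rank = 100 * nBlocks - 50; // 1-based rank in sorted order
    if (rank > samples.size()) return -1.0;
    return samples[rank - 1]; // convert 1-based rank to 0-based index
}
```

Since the rank 100*X - 50 grows with X while the list is sorted in descending order, the estimate for X+1 blocks can never exceed the estimate for X blocks, which is exactly the monotonicity property the commit message points out.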
// Copyright (c) 2009-2010 Satoshi Nakamoto
// Copyright (c) 2009-2014 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.

#ifndef BITCOIN_TXMEMPOOL_H
#define BITCOIN_TXMEMPOOL_H

#include <list>
#include <set>

#include "amount.h"
#include "coins.h"
#include "primitives/transaction.h"
#include "sync.h"

#undef foreach
#include "boost/multi_index_container.hpp"
#include "boost/multi_index/ordered_index.hpp"

class CAutoFile;

inline double AllowFreeThreshold()
{
    return COIN * 144 / 250;
}

inline bool AllowFree(double dPriority)
{
    // Large (in bytes) low-priority (new, small-coin) transactions
    // need a fee.
    return dPriority > AllowFreeThreshold();
}

/** Fake height value used in CCoins to signify they are only in the memory pool (since 0.8) */
static const unsigned int MEMPOOL_HEIGHT = 0x7FFFFFFF;

class CTxMemPool;
/** \class CTxMemPoolEntry
 *
 * CTxMemPoolEntry stores data about the corresponding transaction, as well
 * as data about all in-mempool transactions that depend on the transaction
 * ("descendant" transactions).
 *
 * When a new entry is added to the mempool, we update the descendant state
 * (nCountWithDescendants, nSizeWithDescendants, and nFeesWithDescendants) for
 * all ancestors of the newly added transaction.
 *
 * If updating the descendant state is skipped, we can mark the entry as
 * "dirty", and set nSizeWithDescendants/nFeesWithDescendants to equal nTxSize/
 * nTxFee. (This can potentially happen during a reorg, where we limit the
 * amount of work we're willing to do to avoid consuming too much CPU.)
 */
class CTxMemPoolEntry
{
private:
    CTransaction tx;
    CAmount nFee;           //! Cached to avoid expensive parent-transaction lookups
    size_t nTxSize;         //! ... and avoid recomputing tx size
    size_t nModSize;        //! ... and modified size for priority
    size_t nUsageSize;      //! ... and total memory usage
    int64_t nTime;          //! Local time when entering the mempool
    double dPriority;       //! Priority when entering the mempool
    unsigned int nHeight;   //! Chain height when entering the mempool
    bool hadNoDependencies; //! Not dependent on any other txs when it entered the mempool

    // Information about descendants of this transaction that are in the
    // mempool; if we remove this transaction we must remove all of these
    // descendants as well. if nCountWithDescendants is 0, treat this entry as
    // dirty, and nSizeWithDescendants and nFeesWithDescendants will not be
    // correct.
    uint64_t nCountWithDescendants; //! number of descendant transactions
    uint64_t nSizeWithDescendants;  //! ... and size
    CAmount nFeesWithDescendants;   //! ... and total fees (all including us)

public:
    CTxMemPoolEntry(const CTransaction& _tx, const CAmount& _nFee,
                    int64_t _nTime, double _dPriority, unsigned int _nHeight, bool poolHasNoInputsOf = false);
    CTxMemPoolEntry(const CTxMemPoolEntry& other);

    const CTransaction& GetTx() const { return this->tx; }
    double GetPriority(unsigned int currentHeight) const;
    CAmount GetFee() const { return nFee; }
    size_t GetTxSize() const { return nTxSize; }
    int64_t GetTime() const { return nTime; }
    unsigned int GetHeight() const { return nHeight; }
    bool WasClearAtEntry() const { return hadNoDependencies; }
    size_t DynamicMemoryUsage() const { return nUsageSize; }

    // Adjusts the descendant state, if this entry is not dirty.
    void UpdateState(int64_t modifySize, CAmount modifyFee, int64_t modifyCount);

    /** We can set the entry to be dirty if doing the full calculation of in-
     * mempool descendants will be too expensive, which can potentially happen
     * when re-adding transactions from a block back to the mempool.
     */
    void SetDirty();
    bool IsDirty() const { return nCountWithDescendants == 0; }

    uint64_t GetCountWithDescendants() const { return nCountWithDescendants; }
    uint64_t GetSizeWithDescendants() const { return nSizeWithDescendants; }
    CAmount GetFeesWithDescendants() const { return nFeesWithDescendants; }
};
// Helpers for modifying CTxMemPool::mapTx, which is a boost multi_index.
struct update_descendant_state
{
    update_descendant_state(int64_t _modifySize, CAmount _modifyFee, int64_t _modifyCount) :
        modifySize(_modifySize), modifyFee(_modifyFee), modifyCount(_modifyCount)
    {}

    void operator() (CTxMemPoolEntry &e)
    { e.UpdateState(modifySize, modifyFee, modifyCount); }

private:
    int64_t modifySize;
    CAmount modifyFee;
    int64_t modifyCount;
};

struct set_dirty
{
    void operator() (CTxMemPoolEntry &e)
    { e.SetDirty(); }
};

// extracts a TxMemPoolEntry's transaction hash
struct mempoolentry_txid
{
    typedef uint256 result_type;
    result_type operator() (const CTxMemPoolEntry &entry) const
    {
        return entry.GetTx().GetHash();
    }
};
/** \class CompareTxMemPoolEntryByFee
 *
 * Sort an entry by max(feerate of entry's tx, feerate with all descendants).
 */
class CompareTxMemPoolEntryByFee
{
public:
    bool operator()(const CTxMemPoolEntry& a, const CTxMemPoolEntry& b)
    {
        bool fUseADescendants = UseDescendantFeeRate(a);
        bool fUseBDescendants = UseDescendantFeeRate(b);

        double aFees = fUseADescendants ? a.GetFeesWithDescendants() : a.GetFee();
        double aSize = fUseADescendants ? a.GetSizeWithDescendants() : a.GetTxSize();
        double bFees = fUseBDescendants ? b.GetFeesWithDescendants() : b.GetFee();
        double bSize = fUseBDescendants ? b.GetSizeWithDescendants() : b.GetTxSize();

        // Avoid division by rewriting (a/b > c/d) as (a*d > c*b).
        double f1 = aFees * bSize;
        double f2 = aSize * bFees;
        if (f1 == f2) {
            return a.GetTime() < b.GetTime();
        }
        return f1 > f2;
    }

    // Calculate which feerate to use for an entry (avoiding division).
    bool UseDescendantFeeRate(const CTxMemPoolEntry &a)
    {
        double f1 = (double)a.GetFee() * a.GetSizeWithDescendants();
        double f2 = (double)a.GetFeesWithDescendants() * a.GetTxSize();
        return f2 > f1;
    }
};

class CompareTxMemPoolEntryByEntryTime
{
public:
    bool operator()(const CTxMemPoolEntry& a, const CTxMemPoolEntry& b)
    {
        return a.GetTime() < b.GetTime();
    }
};
class CBlockPolicyEstimator;

/** An inpoint - a combination of a transaction and an index n into its vin */
class CInPoint
{
public:
    const CTransaction* ptx;
    uint32_t n;

    CInPoint() { SetNull(); }
    CInPoint(const CTransaction* ptxIn, uint32_t nIn) { ptx = ptxIn; n = nIn; }
    void SetNull() { ptx = NULL; n = (uint32_t) -1; }
    bool IsNull() const { return (ptx == NULL && n == (uint32_t) -1); }
    size_t DynamicMemoryUsage() const { return 0; }
};
/**
 * CTxMemPool stores valid-according-to-the-current-best-chain
 * transactions that may be included in the next block.
 *
 * Transactions are added when they are seen on the network
 * (or created by the local node), but not all transactions seen
 * are added to the pool: if a new transaction double-spends
 * an input of a transaction in the pool, it is dropped,
 * as are non-standard transactions.
 *
 * CTxMemPool::mapTx, and CTxMemPoolEntry bookkeeping:
 *
 * mapTx is a boost::multi_index that sorts the mempool on 2 criteria:
 * - transaction hash
 * - feerate [we use max(feerate of tx, feerate of tx with all descendants)]
 *
 * Note: the term "descendant" refers to in-mempool transactions that depend on
 * this one, while "ancestor" refers to in-mempool transactions that a given
 * transaction depends on.
 *
 * In order for the feerate sort to remain correct, we must update transactions
 * in the mempool when new descendants arrive. To facilitate this, we track
 * the set of in-mempool direct parents and direct children in mapLinks. Within
 * each CTxMemPoolEntry, we track the size and fees of all descendants.
 *
 * Usually when a new transaction is added to the mempool, it has no in-mempool
 * children (because any such children would be an orphan). So in
 * addUnchecked(), we:
 * - update a new entry's setMemPoolParents to include all in-mempool parents
 * - update the new entry's direct parents to include the new tx as a child
 * - update all ancestors of the transaction to include the new tx's size/fee
 *
 * When a transaction is removed from the mempool, we must:
 * - update all in-mempool parents to not track the tx in setMemPoolChildren
 * - update all ancestors to not include the tx's size/fees in descendant state
 * - update all in-mempool children to not include it as a parent
 *
 * These happen in UpdateForRemoveFromMempool(). (Note that when removing a
 * transaction along with its descendants, we must calculate that set of
 * transactions to be removed before doing the removal, or else the mempool can
 * be in an inconsistent state where it's impossible to walk the ancestors of
 * a transaction.)
 *
 * In the event of a reorg, the assumption that a newly added tx has no
 * in-mempool children is false. In particular, the mempool is in an
 * inconsistent state while new transactions are being added, because there may
 * be descendant transactions of a tx coming from a disconnected block that are
 * unreachable from just looking at transactions in the mempool (the linking
 * transactions may also be in the disconnected block, waiting to be added).
 * Because of this, there's not much benefit in trying to search for in-mempool
 * children in addUnchecked(). Instead, in the special case of transactions
 * being added from a disconnected block, we require the caller to clean up the
 * state, to account for in-mempool, out-of-block descendants for all the
 * in-block transactions by calling UpdateTransactionsFromBlock(). Note that
 * until this is called, the mempool state is not consistent, and in particular
 * mapLinks may not be correct (and therefore functions like
 * CalculateMemPoolAncestors() and CalculateDescendants() that rely
 * on them to walk the mempool are not generally safe to use).
 *
 * Computational limits:
 *
 * Updating all in-mempool ancestors of a newly added transaction can be slow,
 * if no bound exists on how many in-mempool ancestors there may be.
 * CalculateMemPoolAncestors() takes configurable limits that are designed to
 * prevent these calculations from being too CPU intensive.
 *
 * Adding transactions from a disconnected block can be very time consuming,
 * because we don't have a way to limit the number of in-mempool descendants.
 * To bound CPU processing, we limit the amount of work we're willing to do
 * to properly update the descendant information for a tx being added from
 * a disconnected block. If we would exceed the limit, then we instead mark
 * the entry as "dirty", and set the feerate for sorting purposes to be equal
 * to the feerate of the transaction without any descendants.
 */
class CTxMemPool
{
private:
    bool fSanityCheck; //! Normally false, true if -checkmempool or -regtest
    unsigned int nTransactionsUpdated;
    CBlockPolicyEstimator* minerPolicyEstimator;

    uint64_t totalTxSize;      //! sum of all mempool tx' byte sizes
    uint64_t cachedInnerUsage; //! sum of dynamic memory usage of all the map elements (NOT the maps themselves)

public:
    typedef boost::multi_index_container<
        CTxMemPoolEntry,
        boost::multi_index::indexed_by<
            // sorted by txid
            boost::multi_index::ordered_unique<mempoolentry_txid>,
            // sorted by fee rate
            boost::multi_index::ordered_non_unique<
                boost::multi_index::identity<CTxMemPoolEntry>,
                CompareTxMemPoolEntryByFee
            >
        >
    > indexed_transaction_set;

    mutable CCriticalSection cs;
    indexed_transaction_set mapTx;
    typedef indexed_transaction_set::nth_index<0>::type::iterator txiter;

    struct CompareIteratorByHash {
        bool operator()(const txiter &a, const txiter &b) const {
            return a->GetTx().GetHash() < b->GetTx().GetHash();
        }
    };
    typedef std::set<txiter, CompareIteratorByHash> setEntries;

private:
    typedef std::map<txiter, setEntries, CompareIteratorByHash> cacheMap;

    struct TxLinks {
        setEntries parents;
        setEntries children;
    };

    typedef std::map<txiter, TxLinks, CompareIteratorByHash> txlinksMap;
    txlinksMap mapLinks;

    const setEntries & GetMemPoolParents(txiter entry) const;
    const setEntries & GetMemPoolChildren(txiter entry) const;
    void UpdateParent(txiter entry, txiter parent, bool add);
    void UpdateChild(txiter entry, txiter child, bool add);

public:
    std::map<COutPoint, CInPoint> mapNextTx;
    std::map<uint256, std::pair<double, CAmount> > mapDeltas;

    CTxMemPool(const CFeeRate& _minRelayFee);
    ~CTxMemPool();

    /**
     * If sanity-checking is turned on, check makes sure the pool is
     * consistent (does not contain two transactions that spend the same inputs,
     * all inputs are in the mapNextTx array). If sanity-checking is turned off,
     * check does nothing.
     */
    void check(const CCoinsViewCache *pcoins) const;
    void setSanityCheck(bool _fSanityCheck) { fSanityCheck = _fSanityCheck; }
  299. // addUnchecked must updated state for all ancestors of a given transaction,
  300. // to track size/count of descendant transactions. First version of
  301. // addUnchecked can be used to have it call CalculateMemPoolAncestors(), and
  302. // then invoke the second version.
  303. bool addUnchecked(const uint256& hash, const CTxMemPoolEntry &entry, bool fCurrentEstimate = true);
  304. bool addUnchecked(const uint256& hash, const CTxMemPoolEntry &entry, setEntries &setAncestors, bool fCurrentEstimate = true);
    void remove(const CTransaction &tx, std::list<CTransaction>& removed, bool fRecursive = false);
    void removeCoinbaseSpends(const CCoinsViewCache *pcoins, unsigned int nMemPoolHeight);
    void removeConflicts(const CTransaction &tx, std::list<CTransaction>& removed);
    void removeForBlock(const std::vector<CTransaction>& vtx, unsigned int nBlockHeight,
                        std::list<CTransaction>& conflicts, bool fCurrentEstimate = true);
    void clear();
    void queryHashes(std::vector<uint256>& vtxid);
    void pruneSpent(const uint256& hash, CCoins &coins);
    unsigned int GetTransactionsUpdated() const;
    void AddTransactionsUpdated(unsigned int n);
    /**
     * Check that none of this transaction's inputs are in the mempool, and thus
     * the tx is not dependent on other mempool transactions to be included in a block.
     */
    bool HasNoInputsOf(const CTransaction& tx) const;

    /** Affect CreateNewBlock prioritisation of transactions */
    void PrioritiseTransaction(const uint256 hash, const std::string strHash, double dPriorityDelta, const CAmount& nFeeDelta);
    void ApplyDeltas(const uint256 hash, double &dPriorityDelta, CAmount &nFeeDelta);
    void ClearPrioritisation(const uint256 hash);
public:
    /** Remove a set of transactions from the mempool.
     *  If a transaction is in this set, then all in-mempool descendants must
     *  also be in the set. */
    void RemoveStaged(setEntries &stage);

    /** When adding transactions from a disconnected block back to the mempool,
     *  new mempool entries may have children in the mempool (which is generally
     *  not the case when otherwise adding transactions).
     *  UpdateTransactionsFromBlock() will find child transactions and update the
     *  descendant state for each transaction in hashesToUpdate (excluding any
     *  child transactions present in hashesToUpdate, which are already accounted
     *  for). Note: hashesToUpdate should be the set of transactions from the
     *  disconnected block that have been accepted back into the mempool.
     */
    void UpdateTransactionsFromBlock(const std::vector<uint256> &hashesToUpdate);
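    // Illustrative reorg-handling sketch (assumed caller-side code, not part
    // of this header): gather the hashes of disconnected-block transactions
    // that were re-accepted into the mempool, then repair descendant state in
    // a single pass.
    //
    //     std::vector<uint256> vHashUpdate;
    //     // for each disconnected-block tx accepted back into the mempool:
    //     //     vHashUpdate.push_back(tx.GetHash());
    //     mempool.UpdateTransactionsFromBlock(vHashUpdate);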
    /** Try to calculate all in-mempool ancestors of entry.
     *  (these are all calculated including the tx itself)
     *  limitAncestorCount = max number of ancestors
     *  limitAncestorSize = max size of ancestors
     *  limitDescendantCount = max number of descendants any ancestor can have
     *  limitDescendantSize = max size of descendants any ancestor can have
     *  errString = populated with error reason if any limits are hit
     */
    bool CalculateMemPoolAncestors(const CTxMemPoolEntry &entry, setEntries &setAncestors, uint64_t limitAncestorCount, uint64_t limitAncestorSize, uint64_t limitDescendantCount, uint64_t limitDescendantSize, std::string &errString);
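    // Illustrative call sequence (caller-side sketch, not part of this
    // header); the specific limit values below are assumptions, not constants
    // defined here:
    //
    //     CTxMemPool::setEntries setAncestors;
    //     std::string errString;
    //     if (!mempool.CalculateMemPoolAncestors(entry, setAncestors,
    //             25, 101 * 1000, 25, 101 * 1000, errString)) {
    //         // a package limit was hit; errString describes which one
    //     } else {
    //         // the precomputed ancestor set can be reused by the second
    //         // addUnchecked overload, avoiding a second ancestor walk
    //         mempool.addUnchecked(hash, entry, setAncestors);
    //     }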
    unsigned long size()
    {
        LOCK(cs);
        return mapTx.size();
    }

    uint64_t GetTotalTxSize()
    {
        LOCK(cs);
        return totalTxSize;
    }

    bool exists(uint256 hash) const
    {
        LOCK(cs);
        return (mapTx.count(hash) != 0);
    }
    bool lookup(uint256 hash, CTransaction& result) const;

    /** Estimate fee rate needed to get into the next nBlocks */
    CFeeRate estimateFee(int nBlocks) const;
    /** Estimate priority needed to get into the next nBlocks */
    double estimatePriority(int nBlocks) const;
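    // Illustrative usage (caller-side sketch): ask for the fee rate or
    // priority estimated to make a transaction likely to confirm within the
    // next two blocks:
    //
    //     CFeeRate feeRate = mempool.estimateFee(2);
    //     double dPriority = mempool.estimatePriority(2);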
    /** Write/Read estimates to disk */
    bool WriteFeeEstimates(CAutoFile& fileout) const;
    bool ReadFeeEstimates(CAutoFile& filein);

    size_t DynamicMemoryUsage() const;
private:
    /** UpdateForDescendants is used by UpdateTransactionsFromBlock to update
     *  the descendants for a single transaction that has been added to the
     *  mempool but may have child transactions in the mempool, e.g. during a
     *  chain reorg. setExclude is the set of descendant transactions in the
     *  mempool that must not be accounted for (because any descendants in
     *  setExclude were added to the mempool after the transaction being
     *  updated and hence their state is already reflected in the parent
     *  state).
     *
     *  If updating an entry requires looking at more than maxDescendantsToVisit
     *  transactions, outside of the ones in setExclude, then give up.
     *
     *  cachedDescendants will be updated with the descendants of the transaction
     *  being updated, so that future invocations don't need to walk the
     *  same transaction again, if encountered in another transaction chain.
     */
    bool UpdateForDescendants(txiter updateIt,
                              int maxDescendantsToVisit,
                              cacheMap &cachedDescendants,
                              const std::set<uint256> &setExclude);
    /** Update ancestors of hash to add/remove it as a descendant transaction. */
    void UpdateAncestorsOf(bool add, txiter hash, setEntries &setAncestors);
    /** For each transaction being removed, update ancestors and any direct children. */
    void UpdateForRemoveFromMempool(const setEntries &entriesToRemove);
    /** Sever link between specified transaction and direct children. */
    void UpdateChildrenForRemoval(txiter entry);

    /** Populate setDescendants with all in-mempool descendants of hash.
     *  Assumes that setDescendants includes all in-mempool descendants of anything
     *  already in it. */
    void CalculateDescendants(txiter it, setEntries &setDescendants);

    /** Before calling removeUnchecked for a given transaction,
     *  UpdateForRemoveFromMempool must be called on the entire (dependent) set
     *  of transactions being removed at the same time. We use each
     *  CTxMemPoolEntry's setMemPoolParents in order to walk ancestors of a
     *  given transaction that is removed, so we can't remove intermediate
     *  transactions in a chain before we've updated all the state for the
     *  removal.
     */
    void removeUnchecked(txiter entry);
};
/**
 * CCoinsView that brings transactions from a mempool into view.
 * It does not check for spends by mempool transactions.
 */
class CCoinsViewMemPool : public CCoinsViewBacked
{
protected:
    CTxMemPool &mempool;

public:
    CCoinsViewMemPool(CCoinsView *baseIn, CTxMemPool &mempoolIn);
    bool GetCoins(const uint256 &txid, CCoins &coins) const;
    bool HaveCoins(const uint256 &txid) const;
};
#endif // BITCOIN_TXMEMPOOL_H