
txmempool.cpp 41KB

estimatefee / estimatepriority RPC methods

New RPC methods: return an estimate of the fee (or priority) a transaction needs to be likely to confirm in a given number of blocks.

Mike Hearn created the first version of this method for estimating fees. It works as follows: for transactions that took 1 to N (I picked N=25) blocks to confirm, keep N buckets with at most 100 entries in each, recording the fees-per-kilobyte paid by those transactions. (Separate buckets are kept for transactions that confirmed because they are high-priority.) The buckets are filled as blocks are found, and are saved/restored in a new fee_estimates.dat file in the data directory.

A few variations on Mike's initial scheme: to estimate the fee needed for a transaction to confirm in X blocks, all of the samples in all of the buckets are used and a median of all of the data is used to make the estimate. For example, imagine 25 buckets each containing the full 100 entries. Those 2,500 samples are sorted, and the estimate of the fee needed to confirm in the very next block is the 50th-highest-fee entry in that sorted list; the estimate of the fee needed to confirm in the next two blocks is the 150th-highest-fee entry, etc.

That algorithm has the nice property that the estimate of how much fee you need to pay to get confirmed in block N will always be greater than or equal to the estimate for block N+1. It would clearly be wrong to say "pay 11 uBTC and you'll get confirmed in 3 blocks, but pay 12 uBTC and it will take LONGER".

A single block will not contribute more than 10 entries to any one bucket, so a single miner and a large block cannot overwhelm the estimates.
7 years ago
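The pooled-median scheme described in the commit message can be sketched as follows. This is an illustrative model, not the actual `CBlockPolicyEstimator` code; the function name `EstimateMedianFee` and the use of `double` fee values are assumptions made for the example.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Illustrative sketch (not the real implementation): buckets[i] holds the
// fee-per-kilobyte samples of transactions that confirmed within i+1 blocks,
// with at most 100 entries per bucket.
double EstimateMedianFee(const std::vector<std::vector<double>>& buckets, int nBlocks)
{
    if (nBlocks < 1) return -1.0;

    // Pool every sample from every bucket into one list...
    std::vector<double> all;
    for (const auto& bucket : buckets)
        all.insert(all.end(), bucket.begin(), bucket.end());
    if (all.empty()) return -1.0; // no data to estimate from

    // ...sort highest fee first...
    std::sort(all.begin(), all.end(), std::greater<double>());

    // ...and take the (nBlocks*100 - 50)'th-highest fee: the 50th entry for a
    // 1-block estimate, the 150th for a 2-block estimate, etc. Because this
    // rank only grows with nBlocks, the estimates are monotonically
    // non-increasing — the property the commit message calls out.
    size_t rank = static_cast<size_t>(nBlocks) * 100 - 50;
    size_t idx = std::min(all.size(), rank) - 1; // convert 1-based rank to index
    return all[idx];
}
```

With 25 full buckets this walks down one sorted list of 2,500 samples, so a request for a longer confirmation target can never return a higher fee than a shorter one.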
// Copyright (c) 2009-2010 Satoshi Nakamoto
// Copyright (c) 2009-2016 The Starwels developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.

#include "txmempool.h"

#include "consensus/consensus.h"
#include "consensus/tx_verify.h"
#include "consensus/validation.h"
#include "validation.h"
#include "policy/policy.h"
#include "policy/fees.h"
#include "reverse_iterator.h"
#include "streams.h"
#include "timedata.h"
#include "util.h"
#include "utilmoneystr.h"
#include "utiltime.h"

CTxMemPoolEntry::CTxMemPoolEntry(const CTransactionRef& _tx, const CAmount& _nFee,
                                 int64_t _nTime, unsigned int _entryHeight,
                                 bool _spendsCoinbase, int64_t _sigOpsCost, LockPoints lp) :
    tx(_tx), nFee(_nFee), nTime(_nTime), entryHeight(_entryHeight),
    spendsCoinbase(_spendsCoinbase), sigOpCost(_sigOpsCost), lockPoints(lp)
{
    nTxWeight = GetTransactionWeight(*tx);
    nUsageSize = RecursiveDynamicUsage(tx);

    nCountWithDescendants = 1;
    nSizeWithDescendants = GetTxSize();
    nModFeesWithDescendants = nFee;

    feeDelta = 0;

    nCountWithAncestors = 1;
    nSizeWithAncestors = GetTxSize();
    nModFeesWithAncestors = nFee;
    nSigOpCostWithAncestors = sigOpCost;
}

CTxMemPoolEntry::CTxMemPoolEntry(const CTxMemPoolEntry& other)
{
    *this = other;
}

void CTxMemPoolEntry::UpdateFeeDelta(int64_t newFeeDelta)
{
    nModFeesWithDescendants += newFeeDelta - feeDelta;
    nModFeesWithAncestors += newFeeDelta - feeDelta;
    feeDelta = newFeeDelta;
}

void CTxMemPoolEntry::UpdateLockPoints(const LockPoints& lp)
{
    lockPoints = lp;
}

size_t CTxMemPoolEntry::GetTxSize() const
{
    return GetVirtualTransactionSize(nTxWeight, sigOpCost);
}

// Update the given tx for any in-mempool descendants.
// Assumes that setMemPoolChildren is correct for the given tx and all
// descendants.
void CTxMemPool::UpdateForDescendants(txiter updateIt, cacheMap &cachedDescendants, const std::set<uint256> &setExclude)
{
    setEntries stageEntries, setAllDescendants;
    stageEntries = GetMemPoolChildren(updateIt);

    while (!stageEntries.empty()) {
        const txiter cit = *stageEntries.begin();
        setAllDescendants.insert(cit);
        stageEntries.erase(cit);
        const setEntries &setChildren = GetMemPoolChildren(cit);
        for (const txiter childEntry : setChildren) {
            cacheMap::iterator cacheIt = cachedDescendants.find(childEntry);
            if (cacheIt != cachedDescendants.end()) {
                // We've already calculated this one, just add the entries for this set
                // but don't traverse again.
                for (const txiter cacheEntry : cacheIt->second) {
                    setAllDescendants.insert(cacheEntry);
                }
            } else if (!setAllDescendants.count(childEntry)) {
                // Schedule for later processing
                stageEntries.insert(childEntry);
            }
        }
    }
    // setAllDescendants now contains all in-mempool descendants of updateIt.
    // Update and add to cached descendant map
    int64_t modifySize = 0;
    CAmount modifyFee = 0;
    int64_t modifyCount = 0;
    for (txiter cit : setAllDescendants) {
        if (!setExclude.count(cit->GetTx().GetHash())) {
            modifySize += cit->GetTxSize();
            modifyFee += cit->GetModifiedFee();
            modifyCount++;
            cachedDescendants[updateIt].insert(cit);
            // Update ancestor state for each descendant
            mapTx.modify(cit, update_ancestor_state(updateIt->GetTxSize(), updateIt->GetModifiedFee(), 1, updateIt->GetSigOpCost()));
        }
    }
    mapTx.modify(updateIt, update_descendant_state(modifySize, modifyFee, modifyCount));
}
// vHashesToUpdate is the set of transaction hashes from a disconnected block
// which has been re-added to the mempool.
// for each entry, look for descendants that are outside vHashesToUpdate, and
// add fee/size information for such descendants to the parent.
// for each such descendant, also update the ancestor state to include the parent.
void CTxMemPool::UpdateTransactionsFromBlock(const std::vector<uint256> &vHashesToUpdate)
{
    LOCK(cs);
    // For each entry in vHashesToUpdate, store the set of in-mempool, but not
    // in-vHashesToUpdate transactions, so that we don't have to recalculate
    // descendants when we come across a previously seen entry.
    cacheMap mapMemPoolDescendantsToUpdate;

    // Use a set for lookups into vHashesToUpdate (these entries are already
    // accounted for in the state of their ancestors)
    std::set<uint256> setAlreadyIncluded(vHashesToUpdate.begin(), vHashesToUpdate.end());

    // Iterate in reverse, so that whenever we are looking at a transaction
    // we are sure that all in-mempool descendants have already been processed.
    // This maximizes the benefit of the descendant cache and guarantees that
    // setMemPoolChildren will be updated, an assumption made in
    // UpdateForDescendants.
    for (const uint256 &hash : reverse_iterate(vHashesToUpdate)) {
        // we cache the in-mempool children to avoid duplicate updates
        setEntries setChildren;
        // calculate children from mapNextTx
        txiter it = mapTx.find(hash);
        if (it == mapTx.end()) {
            continue;
        }
        auto iter = mapNextTx.lower_bound(COutPoint(hash, 0));
        // First calculate the children, and update setMemPoolChildren to
        // include them, and update their setMemPoolParents to include this tx.
        for (; iter != mapNextTx.end() && iter->first->hash == hash; ++iter) {
            const uint256 &childHash = iter->second->GetHash();
            txiter childIter = mapTx.find(childHash);
            assert(childIter != mapTx.end());
            // We can skip updating entries we've encountered before or that
            // are in the block (which are already accounted for).
            if (setChildren.insert(childIter).second && !setAlreadyIncluded.count(childHash)) {
                UpdateChild(it, childIter, true);
                UpdateParent(childIter, it, true);
            }
        }
        UpdateForDescendants(it, mapMemPoolDescendantsToUpdate, setAlreadyIncluded);
    }
}

bool CTxMemPool::CalculateMemPoolAncestors(const CTxMemPoolEntry &entry, setEntries &setAncestors, uint64_t limitAncestorCount, uint64_t limitAncestorSize, uint64_t limitDescendantCount, uint64_t limitDescendantSize, std::string &errString, bool fSearchForParents /* = true */) const
{
    LOCK(cs);

    setEntries parentHashes;
    const CTransaction &tx = entry.GetTx();

    if (fSearchForParents) {
        // Get parents of this transaction that are in the mempool
        // GetMemPoolParents() is only valid for entries in the mempool, so we
        // iterate mapTx to find parents.
        for (unsigned int i = 0; i < tx.vin.size(); i++) {
            txiter piter = mapTx.find(tx.vin[i].prevout.hash);
            if (piter != mapTx.end()) {
                parentHashes.insert(piter);
                if (parentHashes.size() + 1 > limitAncestorCount) {
                    errString = strprintf("too many unconfirmed parents [limit: %u]", limitAncestorCount);
                    return false;
                }
            }
        }
    } else {
        // If we're not searching for parents, we require this to be an
        // entry in the mempool already.
        txiter it = mapTx.iterator_to(entry);
        parentHashes = GetMemPoolParents(it);
    }

    size_t totalSizeWithAncestors = entry.GetTxSize();

    while (!parentHashes.empty()) {
        txiter stageit = *parentHashes.begin();

        setAncestors.insert(stageit);
        parentHashes.erase(stageit);
        totalSizeWithAncestors += stageit->GetTxSize();

        if (stageit->GetSizeWithDescendants() + entry.GetTxSize() > limitDescendantSize) {
            errString = strprintf("exceeds descendant size limit for tx %s [limit: %u]", stageit->GetTx().GetHash().ToString(), limitDescendantSize);
            return false;
        } else if (stageit->GetCountWithDescendants() + 1 > limitDescendantCount) {
            errString = strprintf("too many descendants for tx %s [limit: %u]", stageit->GetTx().GetHash().ToString(), limitDescendantCount);
            return false;
        } else if (totalSizeWithAncestors > limitAncestorSize) {
            errString = strprintf("exceeds ancestor size limit [limit: %u]", limitAncestorSize);
            return false;
        }

        const setEntries & setMemPoolParents = GetMemPoolParents(stageit);
        for (const txiter &phash : setMemPoolParents) {
            // If this is a new ancestor, add it.
            if (setAncestors.count(phash) == 0) {
                parentHashes.insert(phash);
            }
            if (parentHashes.size() + setAncestors.size() + 1 > limitAncestorCount) {
                errString = strprintf("too many unconfirmed ancestors [limit: %u]", limitAncestorCount);
                return false;
            }
        }
    }

    return true;
}
void CTxMemPool::UpdateAncestorsOf(bool add, txiter it, setEntries &setAncestors)
{
    setEntries parentIters = GetMemPoolParents(it);
    // add or remove this tx as a child of each parent
    for (txiter piter : parentIters) {
        UpdateChild(piter, it, add);
    }
    const int64_t updateCount = (add ? 1 : -1);
    const int64_t updateSize = updateCount * it->GetTxSize();
    const CAmount updateFee = updateCount * it->GetModifiedFee();
    for (txiter ancestorIt : setAncestors) {
        mapTx.modify(ancestorIt, update_descendant_state(updateSize, updateFee, updateCount));
    }
}

void CTxMemPool::UpdateEntryForAncestors(txiter it, const setEntries &setAncestors)
{
    int64_t updateCount = setAncestors.size();
    int64_t updateSize = 0;
    CAmount updateFee = 0;
    int64_t updateSigOpsCost = 0;
    for (txiter ancestorIt : setAncestors) {
        updateSize += ancestorIt->GetTxSize();
        updateFee += ancestorIt->GetModifiedFee();
        updateSigOpsCost += ancestorIt->GetSigOpCost();
    }
    mapTx.modify(it, update_ancestor_state(updateSize, updateFee, updateCount, updateSigOpsCost));
}

void CTxMemPool::UpdateChildrenForRemoval(txiter it)
{
    const setEntries &setMemPoolChildren = GetMemPoolChildren(it);
    for (txiter updateIt : setMemPoolChildren) {
        UpdateParent(updateIt, it, false);
    }
}

void CTxMemPool::UpdateForRemoveFromMempool(const setEntries &entriesToRemove, bool updateDescendants)
{
    // For each entry, walk back all ancestors and decrement size associated with this
    // transaction
    const uint64_t nNoLimit = std::numeric_limits<uint64_t>::max();
    if (updateDescendants) {
        // updateDescendants should be true whenever we're not recursively
        // removing a tx and all its descendants, eg when a transaction is
        // confirmed in a block.
        // Here we only update statistics and not data in mapLinks (which
        // we need to preserve until we're finished with all operations that
        // need to traverse the mempool).
        for (txiter removeIt : entriesToRemove) {
            setEntries setDescendants;
            CalculateDescendants(removeIt, setDescendants);
            setDescendants.erase(removeIt); // don't update state for self
            int64_t modifySize = -((int64_t)removeIt->GetTxSize());
            CAmount modifyFee = -removeIt->GetModifiedFee();
            int modifySigOps = -removeIt->GetSigOpCost();
            for (txiter dit : setDescendants) {
                mapTx.modify(dit, update_ancestor_state(modifySize, modifyFee, -1, modifySigOps));
            }
        }
    }
    for (txiter removeIt : entriesToRemove) {
        setEntries setAncestors;
        const CTxMemPoolEntry &entry = *removeIt;
        std::string dummy;
        // Since this is a tx that is already in the mempool, we can call CMPA
        // with fSearchForParents = false. If the mempool is in a consistent
        // state, then using true or false should both be correct, though false
        // should be a bit faster.
        // However, if we happen to be in the middle of processing a reorg, then
        // the mempool can be in an inconsistent state. In this case, the set
        // of ancestors reachable via mapLinks will be the same as the set of
        // ancestors whose packages include this transaction, because when we
        // add a new transaction to the mempool in addUnchecked(), we assume it
        // has no children, and in the case of a reorg where that assumption is
        // false, the in-mempool children aren't linked to the in-block tx's
        // until UpdateTransactionsFromBlock() is called.
        // So if we're being called during a reorg, ie before
        // UpdateTransactionsFromBlock() has been called, then mapLinks[] will
        // differ from the set of mempool parents we'd calculate by searching,
        // and it's important that we use the mapLinks[] notion of ancestor
        // transactions as the set of things to update for removal.
        CalculateMemPoolAncestors(entry, setAncestors, nNoLimit, nNoLimit, nNoLimit, nNoLimit, dummy, false);
        // Note that UpdateAncestorsOf severs the child links that point to
        // removeIt in the entries for the parents of removeIt.
        UpdateAncestorsOf(false, removeIt, setAncestors);
    }
    // After updating all the ancestor sizes, we can now sever the link between each
    // transaction being removed and any mempool children (ie, update setMemPoolParents
    // for each direct child of a transaction being removed).
    for (txiter removeIt : entriesToRemove) {
        UpdateChildrenForRemoval(removeIt);
    }
}

void CTxMemPoolEntry::UpdateDescendantState(int64_t modifySize, CAmount modifyFee, int64_t modifyCount)
{
    nSizeWithDescendants += modifySize;
    assert(int64_t(nSizeWithDescendants) > 0);
    nModFeesWithDescendants += modifyFee;
    nCountWithDescendants += modifyCount;
    assert(int64_t(nCountWithDescendants) > 0);
}

void CTxMemPoolEntry::UpdateAncestorState(int64_t modifySize, CAmount modifyFee, int64_t modifyCount, int modifySigOps)
{
    nSizeWithAncestors += modifySize;
    assert(int64_t(nSizeWithAncestors) > 0);
    nModFeesWithAncestors += modifyFee;
    nCountWithAncestors += modifyCount;
    assert(int64_t(nCountWithAncestors) > 0);
    nSigOpCostWithAncestors += modifySigOps;
    assert(int(nSigOpCostWithAncestors) >= 0);
}
CTxMemPool::CTxMemPool(CBlockPolicyEstimator* estimator) :
    nTransactionsUpdated(0), minerPolicyEstimator(estimator)
{
    _clear(); //lock free clear

    // Sanity checks off by default for performance, because otherwise
    // accepting transactions becomes O(N^2) where N is the number
    // of transactions in the pool
    nCheckFrequency = 0;
}

bool CTxMemPool::isSpent(const COutPoint& outpoint)
{
    LOCK(cs);
    return mapNextTx.count(outpoint);
}

unsigned int CTxMemPool::GetTransactionsUpdated() const
{
    LOCK(cs);
    return nTransactionsUpdated;
}

void CTxMemPool::AddTransactionsUpdated(unsigned int n)
{
    LOCK(cs);
    nTransactionsUpdated += n;
}

bool CTxMemPool::addUnchecked(const uint256& hash, const CTxMemPoolEntry &entry, setEntries &setAncestors, bool validFeeEstimate)
{
    NotifyEntryAdded(entry.GetSharedTx());
    // Add to memory pool without checking anything.
    // Used by AcceptToMemoryPool(), which DOES do
    // all the appropriate checks.
    LOCK(cs);
    indexed_transaction_set::iterator newit = mapTx.insert(entry).first;
    mapLinks.insert(make_pair(newit, TxLinks()));

    // Update transaction for any feeDelta created by PrioritiseTransaction
    // TODO: refactor so that the fee delta is calculated before inserting
    // into mapTx.
    std::map<uint256, CAmount>::const_iterator pos = mapDeltas.find(hash);
    if (pos != mapDeltas.end()) {
        const CAmount &delta = pos->second;
        if (delta) {
            mapTx.modify(newit, update_fee_delta(delta));
        }
    }

    // Update cachedInnerUsage to include contained transaction's usage.
    // (When we update the entry for in-mempool parents, memory usage will be
    // further updated.)
    cachedInnerUsage += entry.DynamicMemoryUsage();

    const CTransaction& tx = newit->GetTx();
    std::set<uint256> setParentTransactions;
    for (unsigned int i = 0; i < tx.vin.size(); i++) {
        mapNextTx.insert(std::make_pair(&tx.vin[i].prevout, &tx));
        setParentTransactions.insert(tx.vin[i].prevout.hash);
    }
    // Don't bother worrying about child transactions of this one.
    // Normal case of a new transaction arriving is that there can't be any
    // children, because such children would be orphans.
    // An exception to that is if a transaction enters that used to be in a block.
    // In that case, our disconnect block logic will call UpdateTransactionsFromBlock
    // to clean up the mess we're leaving here.

    // Update ancestors with information about this tx
    for (const uint256 &phash : setParentTransactions) {
        txiter pit = mapTx.find(phash);
        if (pit != mapTx.end()) {
            UpdateParent(newit, pit, true);
        }
    }
    UpdateAncestorsOf(true, newit, setAncestors);
    UpdateEntryForAncestors(newit, setAncestors);

    nTransactionsUpdated++;
    totalTxSize += entry.GetTxSize();
    if (minerPolicyEstimator) {minerPolicyEstimator->processTransaction(entry, validFeeEstimate);}

    vTxHashes.emplace_back(tx.GetWitnessHash(), newit);
    newit->vTxHashesIdx = vTxHashes.size() - 1;

    return true;
}

void CTxMemPool::removeUnchecked(txiter it, MemPoolRemovalReason reason)
{
    NotifyEntryRemoved(it->GetSharedTx(), reason);
    const uint256 hash = it->GetTx().GetHash();
    for (const CTxIn& txin : it->GetTx().vin)
        mapNextTx.erase(txin.prevout);

    if (vTxHashes.size() > 1) {
        vTxHashes[it->vTxHashesIdx] = std::move(vTxHashes.back());
        vTxHashes[it->vTxHashesIdx].second->vTxHashesIdx = it->vTxHashesIdx;
        vTxHashes.pop_back();
        if (vTxHashes.size() * 2 < vTxHashes.capacity())
            vTxHashes.shrink_to_fit();
    } else
        vTxHashes.clear();

    totalTxSize -= it->GetTxSize();
    cachedInnerUsage -= it->DynamicMemoryUsage();
    cachedInnerUsage -= memusage::DynamicUsage(mapLinks[it].parents) + memusage::DynamicUsage(mapLinks[it].children);
    mapLinks.erase(it);
    mapTx.erase(it);
    nTransactionsUpdated++;
    if (minerPolicyEstimator) {minerPolicyEstimator->removeTx(hash, false);}
  401. }
// Calculates descendants of entry that are not already in setDescendants, and adds to
// setDescendants. Assumes entryit is already a tx in the mempool and setMemPoolChildren
// is correct for tx and all descendants.
// Also assumes that if an entry is in setDescendants already, then all
// in-mempool descendants of it are already in setDescendants as well, so that we
// can save time by not iterating over those entries.
void CTxMemPool::CalculateDescendants(txiter entryit, setEntries &setDescendants)
{
    setEntries stage;
    if (setDescendants.count(entryit) == 0) {
        stage.insert(entryit);
    }
    // Traverse down the children of entry, only adding children that are not
    // accounted for in setDescendants already (because those children have either
    // already been walked, or will be walked in this iteration).
    while (!stage.empty()) {
        txiter it = *stage.begin();
        setDescendants.insert(it);
        stage.erase(it);

        const setEntries &setChildren = GetMemPoolChildren(it);
        for (const txiter &childiter : setChildren) {
            if (!setDescendants.count(childiter)) {
                stage.insert(childiter);
            }
        }
    }
}
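The traversal above is a set-based worklist walk over the child graph: pull an entry from `stage`, commit it to the result set, and stage any children not yet seen. The same pattern, reduced to a toy child map keyed by integer ids instead of mempool iterators (`ChildMap` and `CollectDescendants` are illustrative names only):

```cpp
#include <cassert>
#include <map>
#include <set>

// Toy sketch of the staged traversal in CalculateDescendants. Like the real
// function, it includes the starting entry itself in the result set, and it
// never re-stages an id that is already in `descendants`.
using ChildMap = std::map<int, std::set<int>>;

void CollectDescendants(const ChildMap& children, int entry, std::set<int>& descendants)
{
    std::set<int> stage;
    if (descendants.count(entry) == 0)
        stage.insert(entry);
    while (!stage.empty()) {
        int it = *stage.begin();
        descendants.insert(it);
        stage.erase(it);

        auto found = children.find(it);
        if (found == children.end())
            continue;
        for (int child : found->second) {
            if (!descendants.count(child))
                stage.insert(child);
        }
    }
}
```

Because membership in `descendants` is checked before staging, each entry is visited at most once even when the child graph has diamond-shaped ancestry.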
void CTxMemPool::removeRecursive(const CTransaction &origTx, MemPoolRemovalReason reason)
{
    // Remove transaction from memory pool
    {
        LOCK(cs);
        setEntries txToRemove;
        txiter origit = mapTx.find(origTx.GetHash());
        if (origit != mapTx.end()) {
            txToRemove.insert(origit);
        } else {
            // When recursively removing but origTx isn't in the mempool
            // be sure to remove any children that are in the pool. This can
            // happen during chain re-orgs if origTx isn't re-accepted into
            // the mempool for any reason.
            for (unsigned int i = 0; i < origTx.vout.size(); i++) {
                auto it = mapNextTx.find(COutPoint(origTx.GetHash(), i));
                if (it == mapNextTx.end())
                    continue;
                txiter nextit = mapTx.find(it->second->GetHash());
                assert(nextit != mapTx.end());
                txToRemove.insert(nextit);
            }
        }
        setEntries setAllRemoves;
        for (txiter it : txToRemove) {
            CalculateDescendants(it, setAllRemoves);
        }
        RemoveStaged(setAllRemoves, false, reason);
    }
}

void CTxMemPool::removeForReorg(const CCoinsViewCache *pcoins, unsigned int nMemPoolHeight, int flags)
{
    // Remove transactions spending a coinbase which are now immature and no-longer-final transactions
    LOCK(cs);
    setEntries txToRemove;
    for (indexed_transaction_set::const_iterator it = mapTx.begin(); it != mapTx.end(); it++) {
        const CTransaction& tx = it->GetTx();
        LockPoints lp = it->GetLockPoints();
        bool validLP = TestLockPointValidity(&lp);
        if (!CheckFinalTx(tx, flags) || !CheckSequenceLocks(tx, flags, &lp, validLP)) {
            // Note if CheckSequenceLocks fails the LockPoints may still be invalid
            // So it's critical that we remove the tx and not depend on the LockPoints.
            txToRemove.insert(it);
        } else if (it->GetSpendsCoinbase()) {
            for (const CTxIn& txin : tx.vin) {
                indexed_transaction_set::const_iterator it2 = mapTx.find(txin.prevout.hash);
                if (it2 != mapTx.end())
                    continue;
                const Coin &coin = pcoins->AccessCoin(txin.prevout);
                if (nCheckFrequency != 0) assert(!coin.IsSpent());
                if (coin.IsSpent() || (coin.IsCoinBase() && ((signed long)nMemPoolHeight) - coin.nHeight < COINBASE_MATURITY)) {
                    txToRemove.insert(it);
                    break;
                }
            }
        }
        if (!validLP) {
            mapTx.modify(it, update_lock_points(lp));
        }
    }
    setEntries setAllRemoves;
    for (txiter it : txToRemove) {
        CalculateDescendants(it, setAllRemoves);
    }
    RemoveStaged(setAllRemoves, false, MemPoolRemovalReason::REORG);
}

void CTxMemPool::removeConflicts(const CTransaction &tx)
{
    // Remove transactions which depend on inputs of tx, recursively
    LOCK(cs);
    for (const CTxIn &txin : tx.vin) {
        auto it = mapNextTx.find(txin.prevout);
        if (it != mapNextTx.end()) {
            const CTransaction &txConflict = *it->second;
            if (txConflict != tx) {
                ClearPrioritisation(txConflict.GetHash());
                removeRecursive(txConflict, MemPoolRemovalReason::CONFLICT);
            }
        }
    }
}
/**
 * Called when a block is connected. Removes from mempool and updates the miner fee estimator.
 */
void CTxMemPool::removeForBlock(const std::vector<CTransactionRef>& vtx, unsigned int nBlockHeight)
{
    LOCK(cs);
    std::vector<const CTxMemPoolEntry*> entries;
    for (const auto& tx : vtx)
    {
        uint256 hash = tx->GetHash();

        indexed_transaction_set::iterator i = mapTx.find(hash);
        if (i != mapTx.end())
            entries.push_back(&*i);
    }
    // Before the txs in the new block have been removed from the mempool, update policy estimates
    if (minerPolicyEstimator) {minerPolicyEstimator->processBlock(nBlockHeight, entries);}
    for (const auto& tx : vtx)
    {
        txiter it = mapTx.find(tx->GetHash());
        if (it != mapTx.end()) {
            setEntries stage;
            stage.insert(it);
            RemoveStaged(stage, true, MemPoolRemovalReason::BLOCK);
        }
        removeConflicts(*tx);
        ClearPrioritisation(tx->GetHash());
    }
    lastRollingFeeUpdate = GetTime();
    blockSinceLastRollingFeeBump = true;
}
void CTxMemPool::_clear()
{
    mapLinks.clear();
    mapTx.clear();
    mapNextTx.clear();
    totalTxSize = 0;
    cachedInnerUsage = 0;
    lastRollingFeeUpdate = GetTime();
    blockSinceLastRollingFeeBump = false;
    rollingMinimumFeeRate = 0;
    ++nTransactionsUpdated;
}

void CTxMemPool::clear()
{
    LOCK(cs);
    _clear();
}
void CTxMemPool::check(const CCoinsViewCache *pcoins) const
{
    if (nCheckFrequency == 0)
        return;

    if (GetRand(std::numeric_limits<uint32_t>::max()) >= nCheckFrequency)
        return;

    LogPrint(BCLog::MEMPOOL, "Checking mempool with %u transactions and %u inputs\n", (unsigned int)mapTx.size(), (unsigned int)mapNextTx.size());

    uint64_t checkTotal = 0;
    uint64_t innerUsage = 0;

    CCoinsViewCache mempoolDuplicate(const_cast<CCoinsViewCache*>(pcoins));
    const int64_t nSpendHeight = GetSpendHeight(mempoolDuplicate);

    LOCK(cs);
    std::list<const CTxMemPoolEntry*> waitingOnDependants;
    for (indexed_transaction_set::const_iterator it = mapTx.begin(); it != mapTx.end(); it++) {
        unsigned int i = 0;
        checkTotal += it->GetTxSize();
        innerUsage += it->DynamicMemoryUsage();
        const CTransaction& tx = it->GetTx();
        txlinksMap::const_iterator linksiter = mapLinks.find(it);
        assert(linksiter != mapLinks.end());
        const TxLinks &links = linksiter->second;
        innerUsage += memusage::DynamicUsage(links.parents) + memusage::DynamicUsage(links.children);
        bool fDependsWait = false;
        setEntries setParentCheck;
        int64_t parentSizes = 0;
        int64_t parentSigOpCost = 0;
        for (const CTxIn &txin : tx.vin) {
            // Check that every mempool transaction's inputs refer to available coins, or other mempool tx's.
            indexed_transaction_set::const_iterator it2 = mapTx.find(txin.prevout.hash);
            if (it2 != mapTx.end()) {
                const CTransaction& tx2 = it2->GetTx();
                assert(tx2.vout.size() > txin.prevout.n && !tx2.vout[txin.prevout.n].IsNull());
                fDependsWait = true;
                if (setParentCheck.insert(it2).second) {
                    parentSizes += it2->GetTxSize();
                    parentSigOpCost += it2->GetSigOpCost();
                }
            } else {
                assert(pcoins->HaveCoin(txin.prevout));
            }
            // Check whether its inputs are marked in mapNextTx.
            auto it3 = mapNextTx.find(txin.prevout);
            assert(it3 != mapNextTx.end());
            assert(it3->first == &txin.prevout);
            assert(it3->second == &tx);
            i++;
        }
        assert(setParentCheck == GetMemPoolParents(it));
        // Verify ancestor state is correct.
        setEntries setAncestors;
        uint64_t nNoLimit = std::numeric_limits<uint64_t>::max();
        std::string dummy;
        CalculateMemPoolAncestors(*it, setAncestors, nNoLimit, nNoLimit, nNoLimit, nNoLimit, dummy);
        uint64_t nCountCheck = setAncestors.size() + 1;
        uint64_t nSizeCheck = it->GetTxSize();
        CAmount nFeesCheck = it->GetModifiedFee();
        int64_t nSigOpCheck = it->GetSigOpCost();

        for (txiter ancestorIt : setAncestors) {
            nSizeCheck += ancestorIt->GetTxSize();
            nFeesCheck += ancestorIt->GetModifiedFee();
            nSigOpCheck += ancestorIt->GetSigOpCost();
        }

        assert(it->GetCountWithAncestors() == nCountCheck);
        assert(it->GetSizeWithAncestors() == nSizeCheck);
        assert(it->GetSigOpCostWithAncestors() == nSigOpCheck);
        assert(it->GetModFeesWithAncestors() == nFeesCheck);

        // Check children against mapNextTx
        CTxMemPool::setEntries setChildrenCheck;
        auto iter = mapNextTx.lower_bound(COutPoint(it->GetTx().GetHash(), 0));
        int64_t childSizes = 0;
        for (; iter != mapNextTx.end() && iter->first->hash == it->GetTx().GetHash(); ++iter) {
            txiter childit = mapTx.find(iter->second->GetHash());
            assert(childit != mapTx.end()); // mapNextTx points to in-mempool transactions
            if (setChildrenCheck.insert(childit).second) {
                childSizes += childit->GetTxSize();
            }
        }
        assert(setChildrenCheck == GetMemPoolChildren(it));
        // Also check to make sure size is greater than sum with immediate children.
        // Just a sanity check, not definitive that this calculation is correct...
        assert(it->GetSizeWithDescendants() >= childSizes + it->GetTxSize());

        if (fDependsWait)
            waitingOnDependants.push_back(&(*it));
        else {
            CValidationState state;
            bool fCheckResult = tx.IsCoinBase() ||
                Consensus::CheckTxInputs(tx, state, mempoolDuplicate, nSpendHeight);
            assert(fCheckResult);
            UpdateCoins(tx, mempoolDuplicate, 1000000);
        }
    }
    unsigned int stepsSinceLastRemove = 0;
    while (!waitingOnDependants.empty()) {
        const CTxMemPoolEntry* entry = waitingOnDependants.front();
        waitingOnDependants.pop_front();
        CValidationState state;
        if (!mempoolDuplicate.HaveInputs(entry->GetTx())) {
            waitingOnDependants.push_back(entry);
            stepsSinceLastRemove++;
            assert(stepsSinceLastRemove < waitingOnDependants.size());
        } else {
            bool fCheckResult = entry->GetTx().IsCoinBase() ||
                Consensus::CheckTxInputs(entry->GetTx(), state, mempoolDuplicate, nSpendHeight);
            assert(fCheckResult);
            UpdateCoins(entry->GetTx(), mempoolDuplicate, 1000000);
            stepsSinceLastRemove = 0;
        }
    }
    for (auto it = mapNextTx.cbegin(); it != mapNextTx.cend(); it++) {
        uint256 hash = it->second->GetHash();
        indexed_transaction_set::const_iterator it2 = mapTx.find(hash);
        assert(it2 != mapTx.end()); // check before dereferencing the iterator
        const CTransaction& tx = it2->GetTx();
        assert(&tx == it->second);
    }

    assert(totalTxSize == checkTotal);
    assert(innerUsage == cachedInnerUsage);
}
bool CTxMemPool::CompareDepthAndScore(const uint256& hasha, const uint256& hashb)
{
    LOCK(cs);
    indexed_transaction_set::const_iterator i = mapTx.find(hasha);
    if (i == mapTx.end()) return false;
    indexed_transaction_set::const_iterator j = mapTx.find(hashb);
    if (j == mapTx.end()) return true;
    uint64_t counta = i->GetCountWithAncestors();
    uint64_t countb = j->GetCountWithAncestors();
    if (counta == countb) {
        return CompareTxMemPoolEntryByScore()(*i, *j);
    }
    return counta < countb;
}

namespace {
class DepthAndScoreComparator
{
public:
    bool operator()(const CTxMemPool::indexed_transaction_set::const_iterator& a, const CTxMemPool::indexed_transaction_set::const_iterator& b)
    {
        uint64_t counta = a->GetCountWithAncestors();
        uint64_t countb = b->GetCountWithAncestors();
        if (counta == countb) {
            return CompareTxMemPoolEntryByScore()(*a, *b);
        }
        return counta < countb;
    }
};
} // namespace
std::vector<CTxMemPool::indexed_transaction_set::const_iterator> CTxMemPool::GetSortedDepthAndScore() const
{
    std::vector<indexed_transaction_set::const_iterator> iters;
    AssertLockHeld(cs);

    iters.reserve(mapTx.size());

    for (indexed_transaction_set::iterator mi = mapTx.begin(); mi != mapTx.end(); ++mi) {
        iters.push_back(mi);
    }
    std::sort(iters.begin(), iters.end(), DepthAndScoreComparator());
    return iters;
}

void CTxMemPool::queryHashes(std::vector<uint256>& vtxid)
{
    LOCK(cs);
    auto iters = GetSortedDepthAndScore();

    vtxid.clear();
    vtxid.reserve(mapTx.size());

    for (auto it : iters) {
        vtxid.push_back(it->GetTx().GetHash());
    }
}
static TxMempoolInfo GetInfo(CTxMemPool::indexed_transaction_set::const_iterator it) {
    return TxMempoolInfo{it->GetSharedTx(), it->GetTime(), CFeeRate(it->GetFee(), it->GetTxSize()), it->GetModifiedFee() - it->GetFee()};
}

std::vector<TxMempoolInfo> CTxMemPool::infoAll() const
{
    LOCK(cs);
    auto iters = GetSortedDepthAndScore();

    std::vector<TxMempoolInfo> ret;
    ret.reserve(mapTx.size());
    for (auto it : iters) {
        ret.push_back(GetInfo(it));
    }

    return ret;
}

CTransactionRef CTxMemPool::get(const uint256& hash) const
{
    LOCK(cs);
    indexed_transaction_set::const_iterator i = mapTx.find(hash);
    if (i == mapTx.end())
        return nullptr;
    return i->GetSharedTx();
}

TxMempoolInfo CTxMemPool::info(const uint256& hash) const
{
    LOCK(cs);
    indexed_transaction_set::const_iterator i = mapTx.find(hash);
    if (i == mapTx.end())
        return TxMempoolInfo();
    return GetInfo(i);
}
void CTxMemPool::PrioritiseTransaction(const uint256& hash, const CAmount& nFeeDelta)
{
    {
        LOCK(cs);
        CAmount &delta = mapDeltas[hash];
        delta += nFeeDelta;
        txiter it = mapTx.find(hash);
        if (it != mapTx.end()) {
            mapTx.modify(it, update_fee_delta(delta));
            // Now update all ancestors' modified fees with descendants
            setEntries setAncestors;
            uint64_t nNoLimit = std::numeric_limits<uint64_t>::max();
            std::string dummy;
            CalculateMemPoolAncestors(*it, setAncestors, nNoLimit, nNoLimit, nNoLimit, nNoLimit, dummy, false);
            for (txiter ancestorIt : setAncestors) {
                mapTx.modify(ancestorIt, update_descendant_state(0, nFeeDelta, 0));
            }
            // Now update all descendants' modified fees with ancestors
            setEntries setDescendants;
            CalculateDescendants(it, setDescendants);
            setDescendants.erase(it);
            for (txiter descendantIt : setDescendants) {
                mapTx.modify(descendantIt, update_ancestor_state(0, nFeeDelta, 0, 0));
            }
            ++nTransactionsUpdated;
        }
    }
    LogPrintf("PrioritiseTransaction: %s feerate += %s\n", hash.ToString(), FormatMoney(nFeeDelta));
}

void CTxMemPool::ApplyDelta(const uint256 hash, CAmount &nFeeDelta) const
{
    LOCK(cs);
    std::map<uint256, CAmount>::const_iterator pos = mapDeltas.find(hash);
    if (pos == mapDeltas.end())
        return;
    const CAmount &delta = pos->second;
    nFeeDelta += delta;
}

void CTxMemPool::ClearPrioritisation(const uint256 hash)
{
    LOCK(cs);
    mapDeltas.erase(hash);
}

bool CTxMemPool::HasNoInputsOf(const CTransaction &tx) const
{
    for (unsigned int i = 0; i < tx.vin.size(); i++)
        if (exists(tx.vin[i].prevout.hash))
            return false;
    return true;
}
CCoinsViewMemPool::CCoinsViewMemPool(CCoinsView* baseIn, const CTxMemPool& mempoolIn) : CCoinsViewBacked(baseIn), mempool(mempoolIn) { }

bool CCoinsViewMemPool::GetCoin(const COutPoint &outpoint, Coin &coin) const {
    // If an entry in the mempool exists, always return that one, as it's guaranteed to never
    // conflict with the underlying cache, and it cannot have pruned entries (as it contains full
    // transactions). First checking the underlying cache risks returning a pruned entry instead.
    CTransactionRef ptx = mempool.get(outpoint.hash);
    if (ptx) {
        if (outpoint.n < ptx->vout.size()) {
            coin = Coin(ptx->vout[outpoint.n], MEMPOOL_HEIGHT, false);
            return true;
        } else {
            return false;
        }
    }
    return base->GetCoin(outpoint, coin);
}

size_t CTxMemPool::DynamicMemoryUsage() const {
    LOCK(cs);
    // Estimate the overhead of mapTx to be 15 pointers + an allocation, as no exact formula for boost::multi_index_container is implemented.
    return memusage::MallocUsage(sizeof(CTxMemPoolEntry) + 15 * sizeof(void*)) * mapTx.size() + memusage::DynamicUsage(mapNextTx) + memusage::DynamicUsage(mapDeltas) + memusage::DynamicUsage(mapLinks) + memusage::DynamicUsage(vTxHashes) + cachedInnerUsage;
}
void CTxMemPool::RemoveStaged(setEntries &stage, bool updateDescendants, MemPoolRemovalReason reason) {
    AssertLockHeld(cs);
    UpdateForRemoveFromMempool(stage, updateDescendants);
    for (const txiter& it : stage) {
        removeUnchecked(it, reason);
    }
}

int CTxMemPool::Expire(int64_t time) {
    LOCK(cs);
    indexed_transaction_set::index<entry_time>::type::iterator it = mapTx.get<entry_time>().begin();
    setEntries toremove;
    while (it != mapTx.get<entry_time>().end() && it->GetTime() < time) {
        toremove.insert(mapTx.project<0>(it));
        it++;
    }
    setEntries stage;
    for (txiter removeit : toremove) {
        CalculateDescendants(removeit, stage);
    }
    RemoveStaged(stage, false, MemPoolRemovalReason::EXPIRY);
    return stage.size();
}

bool CTxMemPool::addUnchecked(const uint256& hash, const CTxMemPoolEntry &entry, bool validFeeEstimate)
{
    LOCK(cs);
    setEntries setAncestors;
    uint64_t nNoLimit = std::numeric_limits<uint64_t>::max();
    std::string dummy;
    CalculateMemPoolAncestors(entry, setAncestors, nNoLimit, nNoLimit, nNoLimit, nNoLimit, dummy);
    return addUnchecked(hash, entry, setAncestors, validFeeEstimate);
}
void CTxMemPool::UpdateChild(txiter entry, txiter child, bool add)
{
    setEntries s;
    if (add && mapLinks[entry].children.insert(child).second) {
        cachedInnerUsage += memusage::IncrementalDynamicUsage(s);
    } else if (!add && mapLinks[entry].children.erase(child)) {
        cachedInnerUsage -= memusage::IncrementalDynamicUsage(s);
    }
}

void CTxMemPool::UpdateParent(txiter entry, txiter parent, bool add)
{
    setEntries s;
    if (add && mapLinks[entry].parents.insert(parent).second) {
        cachedInnerUsage += memusage::IncrementalDynamicUsage(s);
    } else if (!add && mapLinks[entry].parents.erase(parent)) {
        cachedInnerUsage -= memusage::IncrementalDynamicUsage(s);
    }
}

const CTxMemPool::setEntries & CTxMemPool::GetMemPoolParents(txiter entry) const
{
    assert(entry != mapTx.end());
    txlinksMap::const_iterator it = mapLinks.find(entry);
    assert(it != mapLinks.end());
    return it->second.parents;
}

const CTxMemPool::setEntries & CTxMemPool::GetMemPoolChildren(txiter entry) const
{
    assert(entry != mapTx.end());
    txlinksMap::const_iterator it = mapLinks.find(entry);
    assert(it != mapLinks.end());
    return it->second.children;
}
CFeeRate CTxMemPool::GetMinFee(size_t sizelimit) const {
    LOCK(cs);
    if (!blockSinceLastRollingFeeBump || rollingMinimumFeeRate == 0)
        return CFeeRate(rollingMinimumFeeRate);

    int64_t time = GetTime();
    if (time > lastRollingFeeUpdate + 10) {
        double halflife = ROLLING_FEE_HALFLIFE;
        if (DynamicMemoryUsage() < sizelimit / 4)
            halflife /= 4;
        else if (DynamicMemoryUsage() < sizelimit / 2)
            halflife /= 2;

        rollingMinimumFeeRate = rollingMinimumFeeRate / pow(2.0, (time - lastRollingFeeUpdate) / halflife);
        lastRollingFeeUpdate = time;

        if (rollingMinimumFeeRate < (double)incrementalRelayFee.GetFeePerK() / 2) {
            rollingMinimumFeeRate = 0;
            return CFeeRate(0);
        }
    }
    return std::max(CFeeRate(rollingMinimumFeeRate), incrementalRelayFee);
}
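`GetMinFee` decays the rolling minimum fee rate exponentially: after every elapsed halflife the rate halves, i.e. the new rate is `rate / 2^(elapsed / halflife)`, with the halflife shortened when the pool is well under its size limit so the floor relaxes faster. A standalone sketch of just the decay step (`DecayFeeRate` is an illustrative name, not from this file):

```cpp
#include <cassert>
#include <cmath>

// Sketch of the exponential decay used by GetMinFee: the rolling minimum
// fee rate halves once per elapsed halflife, so after `elapsedSeconds`
// the rate becomes rate / 2^(elapsedSeconds / halflifeSeconds).
double DecayFeeRate(double rate, double elapsedSeconds, double halflifeSeconds)
{
    return rate / std::pow(2.0, elapsedSeconds / halflifeSeconds);
}
```

With a 1000 sat/kB floor and a 600 s halflife, the floor falls to 500 after 10 minutes and 250 after 20, which is why the real function can eventually snap the rate to zero once it drops below half the incremental relay fee.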
void CTxMemPool::trackPackageRemoved(const CFeeRate& rate) {
    AssertLockHeld(cs);
    if (rate.GetFeePerK() > rollingMinimumFeeRate) {
        rollingMinimumFeeRate = rate.GetFeePerK();
        blockSinceLastRollingFeeBump = false;
    }
}

void CTxMemPool::TrimToSize(size_t sizelimit, std::vector<COutPoint>* pvNoSpendsRemaining) {
    LOCK(cs);

    unsigned nTxnRemoved = 0;
    CFeeRate maxFeeRateRemoved(0);
    while (!mapTx.empty() && DynamicMemoryUsage() > sizelimit) {
        indexed_transaction_set::index<descendant_score>::type::iterator it = mapTx.get<descendant_score>().begin();

        // We set the new mempool min fee to the feerate of the removed set, plus the
        // "minimum reasonable fee rate" (ie some value under which we consider txn
        // to have 0 fee). This way, we don't allow txn to enter mempool with feerate
        // equal to txn which were removed with no block in between.
        CFeeRate removed(it->GetModFeesWithDescendants(), it->GetSizeWithDescendants());
        removed += incrementalRelayFee;
        trackPackageRemoved(removed);
        maxFeeRateRemoved = std::max(maxFeeRateRemoved, removed);

        setEntries stage;
        CalculateDescendants(mapTx.project<0>(it), stage);
        nTxnRemoved += stage.size();

        std::vector<CTransaction> txn;
        if (pvNoSpendsRemaining) {
            txn.reserve(stage.size());
            for (txiter iter : stage)
                txn.push_back(iter->GetTx());
        }
        RemoveStaged(stage, false, MemPoolRemovalReason::SIZELIMIT);
        if (pvNoSpendsRemaining) {
            for (const CTransaction& tx : txn) {
                for (const CTxIn& txin : tx.vin) {
                    if (exists(txin.prevout.hash)) continue;
                    pvNoSpendsRemaining->push_back(txin.prevout);
                }
            }
        }
    }

    if (maxFeeRateRemoved > CFeeRate(0)) {
        LogPrint(BCLog::MEMPOOL, "Removed %u txn, rolling minimum fee bumped to %s\n", nTxnRemoved, maxFeeRateRemoved.ToString());
    }
}
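The eviction loop above can be reduced to its core policy: while the pool is over its size limit, evict the lowest-feerate package and remember the highest evicted feerate plus an increment as the new admission floor. A toy sketch under those assumptions (`Package`, `TrimToLimit`, and `kIncrement` are illustrative names only, and a linear scan stands in for the `descendant_score` index):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Toy sketch of the TrimToSize eviction policy.
struct Package {
    long long fee;  // satoshis, including descendants
    long long size; // bytes, including descendants
};

// Evicts lowest-feerate packages until the pool fits in `sizelimit`.
// Returns the new minimum admission feerate (sat/byte, as a double).
double TrimToLimit(std::vector<Package>& pool, long long sizelimit, double kIncrement)
{
    auto totalSize = [&pool]() {
        long long s = 0;
        for (const Package& p : pool) s += p.size;
        return s;
    };
    double newMinFeeRate = 0.0;
    while (!pool.empty() && totalSize() > sizelimit) {
        // Find the lowest-feerate package; cross-multiplication compares
        // a.fee/a.size < b.fee/b.size without division.
        auto worst = std::min_element(pool.begin(), pool.end(),
            [](const Package& a, const Package& b) {
                return a.fee * b.size < b.fee * a.size;
            });
        // New floor: evicted feerate plus an increment, so a replacement
        // must pay strictly more than what was just thrown out.
        double evictedRate = double(worst->fee) / worst->size + kIncrement;
        newMinFeeRate = std::max(newMinFeeRate, evictedRate);
        pool.erase(worst);
    }
    return newMinFeeRate;
}
```

The increment plays the role of `incrementalRelayFee` in the real code: without it, a transaction paying exactly the evicted feerate could immediately re-enter and force another eviction cycle.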
bool CTxMemPool::TransactionWithinChainLimit(const uint256& txid, size_t chainLimit) const {
    LOCK(cs);
    auto it = mapTx.find(txid);
    return it == mapTx.end() || (it->GetCountWithAncestors() < chainLimit &&
            it->GetCountWithDescendants() < chainLimit);
}

SaltedTxidHasher::SaltedTxidHasher() : k0(GetRand(std::numeric_limits<uint64_t>::max())), k1(GetRand(std::numeric_limits<uint64_t>::max())) {}