
txmempool.cpp 15KB

estimatefee / estimatepriority RPC methods

New RPC methods: return an estimate of the fee (or priority) a transaction needs to be likely to confirm in a given number of blocks.

Mike Hearn created the first version of this method for estimating fees. It works as follows: for transactions that took 1 to N (I picked N=25) blocks to confirm, keep N buckets with at most 100 entries in each, recording the fees-per-kilobyte paid by those transactions. (Separate buckets are kept for transactions that confirmed because they are high-priority.) The buckets are filled as blocks are found, and are saved/restored in a new fee_estimates.dat file in the data directory.

A few variations on Mike's initial scheme: to estimate the fee needed for a transaction to confirm in X blocks, all of the samples in all of the buckets are used, and a median of all of the data is used to make the estimate. For example, imagine 25 buckets each containing the full 100 entries. Those 2,500 samples are sorted, and the estimate of the fee needed to confirm in the very next block is the 50th-highest-fee entry in that sorted list; the estimate of the fee needed to confirm in the next two blocks is the 150th-highest-fee entry, etc.

That algorithm has the nice property that the estimate of how much fee you need to pay to get confirmed in block N will always be greater than or equal to the estimate for block N+1. It would clearly be wrong to say "pay 11 uBTC and you'll get confirmed in 3 blocks, but pay 12 uBTC and it will take LONGER". A single block will not contribute more than 10 entries to any one bucket, so a single miner and a large block cannot overwhelm the estimates.
7 years ago
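The median-over-sorted-samples scheme described in the commit message can be sketched as follows. This is an illustrative standalone sketch, not the actual `CBlockPolicyEstimator` code; the function name, the `double` fee type, and the clamping for partially filled buckets are assumptions for the example.

```cpp
#include <algorithm>
#include <vector>

// Sketch of the estimate described above: buckets[i] holds fee-per-kB
// samples for transactions that took i+1 blocks to confirm (up to 100
// entries per bucket). All samples are pooled and sorted highest-first;
// the estimate for confirming within nBlocks is the (100*(nBlocks-1)+50)'th
// highest sample, i.e. the 50th for 1 block, the 150th for 2 blocks, etc.
double EstimateFeeForBlocks(const std::vector<std::vector<double>>& buckets,
                            int nBlocks)
{
    std::vector<double> all;
    for (const auto& bucket : buckets)
        all.insert(all.end(), bucket.begin(), bucket.end());
    if (all.empty())
        return -1.0; // no data to estimate from

    std::sort(all.begin(), all.end(), std::greater<double>());

    // Index of the target entry, counted from the highest fee (0-based).
    // Clamp to the last sample when buckets are not full (an assumption
    // of this sketch; the real estimator handles sparse data differently).
    size_t idx = 100u * (nBlocks - 1) + 50 - 1;
    if (idx >= all.size())
        idx = all.size() - 1;
    return all[idx];
}
```

Because the index grows with the confirmation target while the list is sorted descending, the returned estimate is non-increasing in `nBlocks`, which is exactly the monotonicity property the commit message calls out.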
// Copyright (c) 2009-2010 Satoshi Nakamoto
// Copyright (c) 2009-2014 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.

#include "txmempool.h"

#include "clientversion.h"
#include "consensus/consensus.h"
#include "consensus/validation.h"
#include "main.h"
#include "policy/fees.h"
#include "streams.h"
#include "util.h"
#include "utilmoneystr.h"
#include "version.h"

using namespace std;
CTxMemPoolEntry::CTxMemPoolEntry():
    nFee(0), nTxSize(0), nModSize(0), nUsageSize(0), nTime(0), dPriority(0.0), hadNoDependencies(false)
{
    nHeight = MEMPOOL_HEIGHT;
}

CTxMemPoolEntry::CTxMemPoolEntry(const CTransaction& _tx, const CAmount& _nFee,
                                 int64_t _nTime, double _dPriority,
                                 unsigned int _nHeight, bool poolHasNoInputsOf):
    tx(_tx), nFee(_nFee), nTime(_nTime), dPriority(_dPriority), nHeight(_nHeight),
    hadNoDependencies(poolHasNoInputsOf)
{
    nTxSize = ::GetSerializeSize(tx, SER_NETWORK, PROTOCOL_VERSION);
    nModSize = tx.CalculateModifiedSize(nTxSize);
    nUsageSize = tx.DynamicMemoryUsage();
}

CTxMemPoolEntry::CTxMemPoolEntry(const CTxMemPoolEntry& other)
{
    *this = other;
}

double
CTxMemPoolEntry::GetPriority(unsigned int currentHeight) const
{
    // Priority grows with the age (in blocks) of the coins being spent,
    // weighted by their value and divided by the modified tx size.
    CAmount nValueIn = tx.GetValueOut()+nFee;
    double deltaPriority = ((double)(currentHeight-nHeight)*nValueIn)/nModSize;
    double dResult = dPriority + deltaPriority;
    return dResult;
}
CTxMemPool::CTxMemPool(const CFeeRate& _minRelayFee) :
    nTransactionsUpdated(0)
{
    // Sanity checks off by default for performance, because otherwise
    // accepting transactions becomes O(N^2) where N is the number
    // of transactions in the pool
    fSanityCheck = false;

    minerPolicyEstimator = new CBlockPolicyEstimator(_minRelayFee);
}

CTxMemPool::~CTxMemPool()
{
    delete minerPolicyEstimator;
}

void CTxMemPool::pruneSpent(const uint256 &hashTx, CCoins &coins)
{
    LOCK(cs);

    std::map<COutPoint, CInPoint>::iterator it = mapNextTx.lower_bound(COutPoint(hashTx, 0));

    // iterate over all COutPoints in mapNextTx whose hash equals the provided hashTx
    while (it != mapNextTx.end() && it->first.hash == hashTx) {
        coins.Spend(it->first.n); // and remove those outputs from coins
        it++;
    }
}

unsigned int CTxMemPool::GetTransactionsUpdated() const
{
    LOCK(cs);
    return nTransactionsUpdated;
}

void CTxMemPool::AddTransactionsUpdated(unsigned int n)
{
    LOCK(cs);
    nTransactionsUpdated += n;
}
bool CTxMemPool::addUnchecked(const uint256& hash, const CTxMemPoolEntry &entry, bool fCurrentEstimate)
{
    // Add to memory pool without checking anything.
    // Used by main.cpp AcceptToMemoryPool(), which DOES do
    // all the appropriate checks.
    LOCK(cs);
    mapTx[hash] = entry;
    const CTransaction& tx = mapTx[hash].GetTx();
    for (unsigned int i = 0; i < tx.vin.size(); i++)
        mapNextTx[tx.vin[i].prevout] = CInPoint(&tx, i);
    nTransactionsUpdated++;
    totalTxSize += entry.GetTxSize();
    cachedInnerUsage += entry.DynamicMemoryUsage();
    minerPolicyEstimator->processTransaction(entry, fCurrentEstimate);
    return true;
}
void CTxMemPool::remove(const CTransaction &origTx, std::list<CTransaction>& removed, bool fRecursive)
{
    // Remove transaction from memory pool
    {
        LOCK(cs);
        std::deque<uint256> txToRemove;
        txToRemove.push_back(origTx.GetHash());
        if (fRecursive && !mapTx.count(origTx.GetHash())) {
            // If recursively removing but origTx isn't in the mempool
            // be sure to remove any children that are in the pool. This can
            // happen during chain re-orgs if origTx isn't re-accepted into
            // the mempool for any reason.
            for (unsigned int i = 0; i < origTx.vout.size(); i++) {
                std::map<COutPoint, CInPoint>::iterator it = mapNextTx.find(COutPoint(origTx.GetHash(), i));
                if (it == mapNextTx.end())
                    continue;
                txToRemove.push_back(it->second.ptx->GetHash());
            }
        }
        while (!txToRemove.empty())
        {
            uint256 hash = txToRemove.front();
            txToRemove.pop_front();
            if (!mapTx.count(hash))
                continue;
            const CTransaction& tx = mapTx[hash].GetTx();
            if (fRecursive) {
                for (unsigned int i = 0; i < tx.vout.size(); i++) {
                    std::map<COutPoint, CInPoint>::iterator it = mapNextTx.find(COutPoint(hash, i));
                    if (it == mapNextTx.end())
                        continue;
                    txToRemove.push_back(it->second.ptx->GetHash());
                }
            }
            BOOST_FOREACH(const CTxIn& txin, tx.vin)
                mapNextTx.erase(txin.prevout);

            removed.push_back(tx);
            totalTxSize -= mapTx[hash].GetTxSize();
            cachedInnerUsage -= mapTx[hash].DynamicMemoryUsage();
            mapTx.erase(hash);
            nTransactionsUpdated++;
            minerPolicyEstimator->removeTx(hash);
        }
    }
}
void CTxMemPool::removeCoinbaseSpends(const CCoinsViewCache *pcoins, unsigned int nMemPoolHeight)
{
    // Remove transactions spending a coinbase which are now immature
    LOCK(cs);
    list<CTransaction> transactionsToRemove;
    for (std::map<uint256, CTxMemPoolEntry>::const_iterator it = mapTx.begin(); it != mapTx.end(); it++) {
        const CTransaction& tx = it->second.GetTx();
        BOOST_FOREACH(const CTxIn& txin, tx.vin) {
            std::map<uint256, CTxMemPoolEntry>::const_iterator it2 = mapTx.find(txin.prevout.hash);
            if (it2 != mapTx.end())
                continue;
            const CCoins *coins = pcoins->AccessCoins(txin.prevout.hash);
            if (fSanityCheck) assert(coins);
            if (!coins || (coins->IsCoinBase() && ((signed long)nMemPoolHeight) - coins->nHeight < COINBASE_MATURITY)) {
                transactionsToRemove.push_back(tx);
                break;
            }
        }
    }
    BOOST_FOREACH(const CTransaction& tx, transactionsToRemove) {
        list<CTransaction> removed;
        remove(tx, removed, true);
    }
}
void CTxMemPool::removeConflicts(const CTransaction &tx, std::list<CTransaction>& removed)
{
    // Remove transactions which depend on inputs of tx, recursively
    list<CTransaction> result;
    LOCK(cs);
    BOOST_FOREACH(const CTxIn &txin, tx.vin) {
        std::map<COutPoint, CInPoint>::iterator it = mapNextTx.find(txin.prevout);
        if (it != mapNextTx.end()) {
            const CTransaction &txConflict = *it->second.ptx;
            if (txConflict != tx)
            {
                remove(txConflict, removed, true);
            }
        }
    }
}
/**
 * Called when a block is connected. Removes from mempool and updates the miner fee estimator.
 */
void CTxMemPool::removeForBlock(const std::vector<CTransaction>& vtx, unsigned int nBlockHeight,
                                std::list<CTransaction>& conflicts, bool fCurrentEstimate)
{
    LOCK(cs);
    std::vector<CTxMemPoolEntry> entries;
    BOOST_FOREACH(const CTransaction& tx, vtx)
    {
        uint256 hash = tx.GetHash();
        if (mapTx.count(hash))
            entries.push_back(mapTx[hash]);
    }
    BOOST_FOREACH(const CTransaction& tx, vtx)
    {
        std::list<CTransaction> dummy;
        remove(tx, dummy, false);
        removeConflicts(tx, conflicts);
        ClearPrioritisation(tx.GetHash());
    }
    // After the txs in the new block have been removed from the mempool, update policy estimates
    minerPolicyEstimator->processBlock(nBlockHeight, entries, fCurrentEstimate);
}
void CTxMemPool::clear()
{
    LOCK(cs);
    mapTx.clear();
    mapNextTx.clear();
    totalTxSize = 0;
    cachedInnerUsage = 0;
    ++nTransactionsUpdated;
}

void CTxMemPool::check(const CCoinsViewCache *pcoins) const
{
    if (!fSanityCheck)
        return;

    LogPrint("mempool", "Checking mempool with %u transactions and %u inputs\n", (unsigned int)mapTx.size(), (unsigned int)mapNextTx.size());

    uint64_t checkTotal = 0;
    uint64_t innerUsage = 0;

    CCoinsViewCache mempoolDuplicate(const_cast<CCoinsViewCache*>(pcoins));

    LOCK(cs);
    list<const CTxMemPoolEntry*> waitingOnDependants;
    for (std::map<uint256, CTxMemPoolEntry>::const_iterator it = mapTx.begin(); it != mapTx.end(); it++) {
        unsigned int i = 0;
        checkTotal += it->second.GetTxSize();
        innerUsage += it->second.DynamicMemoryUsage();
        const CTransaction& tx = it->second.GetTx();
        bool fDependsWait = false;
        BOOST_FOREACH(const CTxIn &txin, tx.vin) {
            // Check that every mempool transaction's inputs refer to available coins, or other mempool tx's.
            std::map<uint256, CTxMemPoolEntry>::const_iterator it2 = mapTx.find(txin.prevout.hash);
            if (it2 != mapTx.end()) {
                const CTransaction& tx2 = it2->second.GetTx();
                assert(tx2.vout.size() > txin.prevout.n && !tx2.vout[txin.prevout.n].IsNull());
                fDependsWait = true;
            } else {
                const CCoins* coins = pcoins->AccessCoins(txin.prevout.hash);
                assert(coins && coins->IsAvailable(txin.prevout.n));
            }
            // Check whether its inputs are marked in mapNextTx.
            std::map<COutPoint, CInPoint>::const_iterator it3 = mapNextTx.find(txin.prevout);
            assert(it3 != mapNextTx.end());
            assert(it3->second.ptx == &tx);
            assert(it3->second.n == i);
            i++;
        }
        if (fDependsWait)
            waitingOnDependants.push_back(&it->second);
        else {
            CValidationState state;
            assert(CheckInputs(tx, state, mempoolDuplicate, false, 0, false, NULL));
            UpdateCoins(tx, state, mempoolDuplicate, 1000000);
        }
    }
    // Drain waitingOnDependants: an entry whose inputs are all available in
    // mempoolDuplicate is checked and applied; otherwise it is re-queued. If a
    // full pass removes nothing, the remaining entries form a dependency cycle
    // and the assert below fires.
    unsigned int stepsSinceLastRemove = 0;
    while (!waitingOnDependants.empty()) {
        const CTxMemPoolEntry* entry = waitingOnDependants.front();
        waitingOnDependants.pop_front();
        CValidationState state;
        if (!mempoolDuplicate.HaveInputs(entry->GetTx())) {
            waitingOnDependants.push_back(entry);
            stepsSinceLastRemove++;
            assert(stepsSinceLastRemove < waitingOnDependants.size());
        } else {
            assert(CheckInputs(entry->GetTx(), state, mempoolDuplicate, false, 0, false, NULL));
            UpdateCoins(entry->GetTx(), state, mempoolDuplicate, 1000000);
            stepsSinceLastRemove = 0;
        }
    }
    for (std::map<COutPoint, CInPoint>::const_iterator it = mapNextTx.begin(); it != mapNextTx.end(); it++) {
        uint256 hash = it->second.ptx->GetHash();
        map<uint256, CTxMemPoolEntry>::const_iterator it2 = mapTx.find(hash);
        assert(it2 != mapTx.end()); // check before dereferencing the iterator
        const CTransaction& tx = it2->second.GetTx();
        assert(&tx == it->second.ptx);
        assert(tx.vin.size() > it->second.n);
        assert(it->first == it->second.ptx->vin[it->second.n].prevout);
    }

    assert(totalTxSize == checkTotal);
    assert(innerUsage == cachedInnerUsage);
}

void CTxMemPool::queryHashes(vector<uint256>& vtxid)
{
    vtxid.clear();

    LOCK(cs);
    vtxid.reserve(mapTx.size());
    for (map<uint256, CTxMemPoolEntry>::iterator mi = mapTx.begin(); mi != mapTx.end(); ++mi)
        vtxid.push_back((*mi).first);
}

bool CTxMemPool::lookup(uint256 hash, CTransaction& result) const
{
    LOCK(cs);
    map<uint256, CTxMemPoolEntry>::const_iterator i = mapTx.find(hash);
    if (i == mapTx.end()) return false;
    result = i->second.GetTx();
    return true;
}

CFeeRate CTxMemPool::estimateFee(int nBlocks) const
{
    LOCK(cs);
    return minerPolicyEstimator->estimateFee(nBlocks);
}

double CTxMemPool::estimatePriority(int nBlocks) const
{
    LOCK(cs);
    return minerPolicyEstimator->estimatePriority(nBlocks);
}

bool
CTxMemPool::WriteFeeEstimates(CAutoFile& fileout) const
{
    try {
        LOCK(cs);
        fileout << 109900; // version required to read: 0.10.99 or later
        fileout << CLIENT_VERSION; // version that wrote the file
        minerPolicyEstimator->Write(fileout);
    }
    catch (const std::exception&) {
        LogPrintf("CTxMemPool::WriteFeeEstimates(): unable to write policy estimator data (non-fatal)\n");
        return false;
    }
    return true;
}
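
// Sketch of the resulting fee_estimates.dat stream, inferred from the write
// order above (the exact byte layout is whatever CAutoFile's serializers emit):
//
//     int  nVersionRequired   // 109900 = 0.10.99, minimum version able to read
//     int  nVersionThatWrote  // CLIENT_VERSION of the writing node
//     ...                     // estimator state from minerPolicyEstimator->Write()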

bool
CTxMemPool::ReadFeeEstimates(CAutoFile& filein)
{
    try {
        int nVersionRequired, nVersionThatWrote;
        filein >> nVersionRequired >> nVersionThatWrote;
        if (nVersionRequired > CLIENT_VERSION)
            return error("CTxMemPool::ReadFeeEstimates(): up-version (%d) fee estimate file", nVersionRequired);

        LOCK(cs);
        minerPolicyEstimator->Read(filein);
    }
    catch (const std::exception&) {
        LogPrintf("CTxMemPool::ReadFeeEstimates(): unable to read policy estimator data (non-fatal)\n");
        return false;
    }
    return true;
}

void CTxMemPool::PrioritiseTransaction(const uint256 hash, const string strHash, double dPriorityDelta, const CAmount& nFeeDelta)
{
    {
        LOCK(cs);
        std::pair<double, CAmount> &deltas = mapDeltas[hash];
        deltas.first += dPriorityDelta;
        deltas.second += nFeeDelta;
    }
    // %s for the fee delta: FormatMoney returns a std::string, not an integer.
    LogPrintf("PrioritiseTransaction: %s priority += %f, fee += %s\n", strHash, dPriorityDelta, FormatMoney(nFeeDelta));
}

void CTxMemPool::ApplyDeltas(const uint256 hash, double &dPriorityDelta, CAmount &nFeeDelta)
{
    LOCK(cs);
    std::map<uint256, std::pair<double, CAmount> >::iterator pos = mapDeltas.find(hash);
    if (pos == mapDeltas.end())
        return;
    const std::pair<double, CAmount> &deltas = pos->second;
    dPriorityDelta += deltas.first;
    nFeeDelta += deltas.second;
}

void CTxMemPool::ClearPrioritisation(const uint256 hash)
{
    LOCK(cs);
    mapDeltas.erase(hash);
}

bool CTxMemPool::HasNoInputsOf(const CTransaction &tx) const
{
    for (unsigned int i = 0; i < tx.vin.size(); i++)
        if (exists(tx.vin[i].prevout.hash))
            return false;
    return true;
}

CCoinsViewMemPool::CCoinsViewMemPool(CCoinsView *baseIn, CTxMemPool &mempoolIn) : CCoinsViewBacked(baseIn), mempool(mempoolIn) { }

bool CCoinsViewMemPool::GetCoins(const uint256 &txid, CCoins &coins) const {
    // If an entry in the mempool exists, always return that one, as it's guaranteed to never
    // conflict with the underlying cache, and it cannot have pruned entries (as it contains
    // full transactions). First checking the underlying cache risks returning a pruned entry instead.
    CTransaction tx;
    if (mempool.lookup(txid, tx)) {
        coins = CCoins(tx, MEMPOOL_HEIGHT);
        return true;
    }
    return (base->GetCoins(txid, coins) && !coins.IsPruned());
}

bool CCoinsViewMemPool::HaveCoins(const uint256 &txid) const {
    return mempool.exists(txid) || base->HaveCoins(txid);
}

size_t CTxMemPool::DynamicMemoryUsage() const {
    LOCK(cs);
    return memusage::DynamicUsage(mapTx) + memusage::DynamicUsage(mapNextTx) + memusage::DynamicUsage(mapDeltas) + cachedInnerUsage;
}
  387. }