Compare commits

...

11 commits

Author SHA1 Message Date
l0rinc
32821f3606
Merge 1d99994da8 into 66aa6a47bd 2025-01-08 20:41:49 +01:00
glozow
66aa6a47bd
Merge bitcoin/bitcoin#30391: BlockAssembler: return selected packages virtual size and fee
7c123c08dd  miner: add package feerate vector to CBlockTemplate (ismaelsadeeq)

Pull request description:

  This PR enables `BlockAssembler` to add all selected packages' fee and virtual size to a vector, and then return the vector as a member of `CBlockTemplate` struct.

  This PR is the first step in the https://github.com/bitcoin/bitcoin/issues/30392 project.

  The packages' vsize and fee are used in #30157 to select a percentile fee rate of the top block in the mempool.
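A rough illustration of how such a vector can drive percentile selection (a hypothetical standalone sketch, not code from this PR — `PackageFeeFrac` stands in for the `FeeFrac` entries of `m_package_feerates`, and `PercentileFeerate` is an invented helper):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for the (fee, vsize) pairs collected per selected package.
struct PackageFeeFrac {
    int64_t fee;   // satoshis
    int32_t size;  // virtual bytes
};

// Walk the packages in selection order (best feerate first) and return the
// feerate (sat/vB) of the package that straddles the requested percentile of
// the template's cumulative virtual size.
double PercentileFeerate(const std::vector<PackageFeeFrac>& packages, double percentile)
{
    int64_t total_size{0};
    for (const auto& p : packages) total_size += p.size;
    const double target{total_size * percentile};
    int64_t accumulated{0};
    for (const auto& p : packages) {
        accumulated += p.size;
        if (accumulated >= target) return static_cast<double>(p.fee) / p.size;
    }
    return 0.0; // empty input
}
```

Because the vector is ordered by selection, a low percentile corresponds to the highest-feerate packages near the top of the block template.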

ACKs for top commit:
  rkrux:
    tACK 7c123c08dd
  ryanofsky:
    Code review ACK 7c123c08dd. Changes since last review are rebasing due to a test conflict, giving the new field a better name and description, resolving the test conflict, and renaming a lot of test variables. The actual code change is still a one-line change.
  glozow:
    reACK 7c123c08dd

Tree-SHA512: 767b0b3d4273cf1589fd2068d729a66c7414c0f9574b15989fbe293f8c85cd6c641dd783cde55bfabab32cd047d7d8a071d6897b06ed4295c0d071e588de0861
2025-01-08 13:01:23 -05:00
ismaelsadeeq
7c123c08dd
miner: add package feerate vector to CBlockTemplate
- The package feerates are ordered by the sequence in which
  packages are selected for inclusion in the block template.

- The commit also tests this new behaviour.

Co-authored-by: willcl-ark <will@256k1.dev>
2025-01-07 15:29:17 -05:00
Lőrinc
1d99994da8 optimization: Buffer serialization writes in SaveBlockUndo and SaveBlock
Similarly to the serialization reads, buffered writes will enable batched XOR calculations. This matters especially because we currently need to copy the write input Span to do the obfuscation on it; batching will enable doing the XOR on the internal buffer instead.

All write operations are delegated to `AutoFile`.
Xor key offsets are also calculated based on where we are in the underlying file.
To avoid the 4096-byte buffered writes inside `AutoFile::write`, we're disabling obfuscation for the underlying `m_dest` and doing it ourselves for the whole buffer (instead of byte-by-byte) before writing it to file.
We can't obfuscate the write's source directly (even though for very big writes that would avoid copying), since it would mutate the method's input, but we can fill our own buffer and XOR all of that safely.
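The batched obfuscation described above amounts to XOR-ing the whole staged buffer against the rolling key at the correct absolute file offset. A minimal standalone sketch of that idea (not the PR's actual `util::Xor`; names are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// XOR an entire buffer in place against a repeating key. key_offset is the
// absolute file position the buffer will be written at, so the key phase
// stays aligned with the underlying file even across multiple flushes.
void XorBuffer(std::vector<uint8_t>& buf, const std::vector<uint8_t>& key, size_t key_offset)
{
    if (key.empty()) return; // no obfuscation configured
    for (size_t i{0}; i < buf.size(); ++i) {
        buf[i] ^= key[(key_offset + i) % key.size()];
    }
}
```

Applying the same XOR twice with the same offset restores the original bytes, which is what keeps reads and writes symmetric.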

`BufferedFile` wasn't used here because it does too many unrelated operations (and might be removed in the future).

Before:

|               ns/op |                op/s |    err% |     total | benchmark
|--------------------:|--------------------:|--------:|----------:|:----------
|        5,260,934.43 |              190.08 |    1.2% |     11.08 | `SaveBlockToDiskBench`

After:
|               ns/op |                op/s |    err% |     total | benchmark
|--------------------:|--------------------:|--------:|----------:|:----------
|        1,804,208.61 |              554.26 |    1.4% |     10.89 | `SaveBlockToDiskBench`
2024-12-21 15:51:50 +01:00
Lőrinc
d3d3955c99 optimization: Buffer serialization reads in UndoReadFromDisk and ReadBlockFromDisk
The obfuscation (XOR) operations are currently done byte-by-byte during serialization. Buffering the reads will enable batching those obfuscation operations later (not yet done here).

Also, different operating systems seem to handle file caching differently, so reading bigger batches (and processing those from memory) is also a bit faster (likely because of fewer native fread calls or less locking).

Buffered reading works by first exhausting the buffer; if more data is still needed, reading directly into the destination (to avoid copying into the buffer when we have huge destination Spans, such as the many-megabyte vectors in later blocks); and lastly refilling the buffer completely.
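That read strategy can be sketched with an in-memory stand-in for the underlying file (illustrative only — the real `BufferedReadOnlyFile` wraps `AutoFile` and also handles obfuscation):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// In-memory sketch of the described strategy: drain the internal buffer first,
// read any remainder straight into the destination (no intermediate copy for
// huge reads), then refill the buffer completely for subsequent small reads.
class BufferedReader
{
    const std::vector<uint8_t>& m_src; // stands in for the underlying file
    size_t m_src_pos{0};
    std::vector<uint8_t> m_buf;
    size_t m_buf_start{0}, m_buf_end{0};

    size_t ReadFromSource(uint8_t* dst, size_t len)
    {
        const size_t n{std::min(len, m_src.size() - m_src_pos)};
        std::memcpy(dst, m_src.data() + m_src_pos, n);
        m_src_pos += n;
        return n;
    }

public:
    explicit BufferedReader(const std::vector<uint8_t>& src, size_t buf_size = 16 << 10)
        : m_src{src}, m_buf(buf_size) {}

    // Fill dst completely; returns false if the source is exhausted first.
    bool Read(uint8_t* dst, size_t len)
    {
        // 1) Exhaust whatever the buffer still holds.
        const size_t from_buf{std::min(len, m_buf_end - m_buf_start)};
        std::memcpy(dst, m_buf.data() + m_buf_start, from_buf);
        m_buf_start += from_buf;
        dst += from_buf;
        len -= from_buf;
        if (len == 0) return true;
        // 2) Read the (possibly huge) remainder directly into the destination.
        if (ReadFromSource(dst, len) != len) return false;
        // 3) Refill the buffer completely.
        m_buf_start = 0;
        m_buf_end = ReadFromSource(m_buf.data(), m_buf.size());
        return true;
    }
};
```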

Running the `ReadBlockFromDiskTest` benchmarks with different buffer sizes indicated that 16 KiB is the optimal buffer size (roughly 25% faster than master).

Testing was done by randomly reading data from the same file with `AutoFile` and `BufferedReadOnlyFile` and making sure that the exact same data is read.

`BufferedFile` wasn't used here because it does too many unrelated operations (and might be removed in the future).

Before:

|               ns/op |                op/s |    err% |     total | benchmark
|--------------------:|--------------------:|--------:|----------:|:----------
|        2,288,264.16 |              437.01 |    0.2% |     11.00 | `ReadBlockFromDiskBench`

After:
|               ns/op |                op/s |    err% |     total | benchmark
|--------------------:|--------------------:|--------:|----------:|:----------
|        1,847,259.64 |              541.34 |    0.2% |     11.03 | `ReadBlockFromDiskBench`
2024-12-21 15:41:55 +01:00
Lőrinc
ed8ba94b86 scripted-diff: rename block and undo functions for consistency
Co-authored-by: Ryan Ofsky <ryan@ofsky.org>

-BEGIN VERIFY SCRIPT-
sed -i \
    -e 's/\bSaveBlockToDisk\b/SaveBlock/g' \
    -e 's/\bWriteUndoDataForBlock\b/SaveBlockUndo/g' \
    $(git ls-files)
-END VERIFY SCRIPT-
2024-12-21 15:41:55 +01:00
Lőrinc
c509c62db0 refactor,blocks: remove costly asserts
When the behavior was changed in a previous commit (caching `GetSerializeSize` and avoiding `AutoFile::tell`), asserts were added so that reviewers and CI could validate that the behavior was preserved.
We can safely remove them now.

Co-authored-by: Anthony Towns <aj@erisian.com.au>
2024-12-21 15:41:55 +01:00
Lőrinc
85ef8558b0 refactor,blocks: cache block serialized size for consecutive calls
For consistency, `UNDO_DATA_DISK_OVERHEAD` was also extracted, avoiding the ambiguity of a bare magic constant.
2024-12-21 15:41:55 +01:00
Lőrinc
e76130d635 refactor,blocks: inline WriteBlockToDisk
`WriteBlockToDisk` wasn't really extracting a meaningful subset of the `SaveBlockToDisk` functionality; it was tied closely to its only caller (it needs the header size twice, recalculates the block's serialized size, returns from multiple branches, and mutates a parameter).

The inlined code should only differ in these parts (modernization will be done in other commits):
* renamed `blockPos` to `pos` in `SaveBlockToDisk` to match the parameter name;
* changed `return false` to `return FlatFilePos()`.

Also removed remaining references to `SaveBlockToDisk`.

Co-authored-by: Ryan Ofsky <ryan@ofsky.org>
2024-12-21 15:41:55 +01:00
Lőrinc
f259d14cca refactor,blocks: inline UndoWriteToDisk
`UndoWriteToDisk` wasn't really extracting a meaningful subset of the `WriteUndoDataForBlock` functionality; it was tied closely to its only caller (it needs the header size twice, recalculates the undo data's serialized size, returns from multiple branches, modifies a parameter, and needs documentation).

The inlined code should only differ in these parts (modernization will be done in other commits):
* renamed `_pos` to `pos` in `WriteUndoDataForBlock` to match the parameter name;
* inlined `hashBlock` parameter usage into `hasher << block.pprev->GetBlockHash()`;
* changed `return false` to `return FatalError`.

Co-authored-by: Ryan Ofsky <ryan@ofsky.org>
2024-12-21 15:41:55 +01:00
Lőrinc
a20191c567 bench: add SaveBlockToDiskBench 2024-12-21 15:41:55 +01:00
13 changed files with 338 additions and 160 deletions


@@ -44,7 +44,7 @@ add_executable(bench_bitcoin
pool.cpp
prevector.cpp
random.cpp
readblock.cpp
readwriteblock.cpp
rollingbloom.cpp
rpc_blockchain.cpp
rpc_mempool.cpp


@@ -1,60 +0,0 @@
// Copyright (c) 2023 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include <bench/bench.h>
#include <bench/data/block413567.raw.h>
#include <flatfile.h>
#include <node/blockstorage.h>
#include <primitives/block.h>
#include <primitives/transaction.h>
#include <serialize.h>
#include <span.h>
#include <streams.h>
#include <test/util/setup_common.h>
#include <validation.h>
#include <cassert>
#include <cstdint>
#include <memory>
#include <vector>
static FlatFilePos WriteBlockToDisk(ChainstateManager& chainman)
{
DataStream stream{benchmark::data::block413567};
CBlock block;
stream >> TX_WITH_WITNESS(block);
return chainman.m_blockman.SaveBlockToDisk(block, 0);
}
static void ReadBlockFromDiskTest(benchmark::Bench& bench)
{
const auto testing_setup{MakeNoLogFileContext<const TestingSetup>(ChainType::MAIN)};
ChainstateManager& chainman{*testing_setup->m_node.chainman};
CBlock block;
const auto pos{WriteBlockToDisk(chainman)};
bench.run([&] {
const auto success{chainman.m_blockman.ReadBlockFromDisk(block, pos)};
assert(success);
});
}
static void ReadRawBlockFromDiskTest(benchmark::Bench& bench)
{
const auto testing_setup{MakeNoLogFileContext<const TestingSetup>(ChainType::MAIN)};
ChainstateManager& chainman{*testing_setup->m_node.chainman};
std::vector<uint8_t> block_data;
const auto pos{WriteBlockToDisk(chainman)};
bench.run([&] {
const auto success{chainman.m_blockman.ReadRawBlockFromDisk(block_data, pos)};
assert(success);
});
}
BENCHMARK(ReadBlockFromDiskTest, benchmark::PriorityLevel::HIGH);
BENCHMARK(ReadRawBlockFromDiskTest, benchmark::PriorityLevel::HIGH);


@@ -0,0 +1,79 @@
// Copyright (c) 2023 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include <bench/bench.h>
#include <bench/data/block413567.raw.h>
#include <flatfile.h>
#include <node/blockstorage.h>
#include <primitives/block.h>
#include <primitives/transaction.h>
#include <serialize.h>
#include <span.h>
#include <streams.h>
#include <test/util/setup_common.h>
#include <validation.h>
#include <cassert>
#include <cstdint>
#include <memory>
#include <vector>
CBlock CreateTestBlock()
{
DataStream stream{benchmark::data::block413567};
CBlock block;
stream >> TX_WITH_WITNESS(block);
return block;
}
static void GetSerializeSizeBench(benchmark::Bench& bench)
{
const auto testing_setup{MakeNoLogFileContext<const TestingSetup>(ChainType::MAIN)};
const CBlock block{CreateTestBlock()};
bench.run([&] {
const uint32_t block_size{static_cast<uint32_t>(GetSerializeSize(TX_WITH_WITNESS(block)))};
assert(block_size == benchmark::data::block413567.size());
});
}
static void SaveBlockToDiskBench(benchmark::Bench& bench)
{
const auto testing_setup{MakeNoLogFileContext<const TestingSetup>(ChainType::MAIN)};
auto& blockman{testing_setup->m_node.chainman->m_blockman};
const CBlock block{CreateTestBlock()};
bench.run([&] {
const auto pos{blockman.SaveBlock(block, 413'567)};
assert(!pos.IsNull());
});
}
static void ReadBlockFromDiskBench(benchmark::Bench& bench)
{
const auto testing_setup{MakeNoLogFileContext<const TestingSetup>(ChainType::MAIN)};
auto& blockman{testing_setup->m_node.chainman->m_blockman};
const auto pos{blockman.SaveBlock(CreateTestBlock(), 413'567)};
CBlock block;
bench.run([&] {
const auto success{blockman.ReadBlockFromDisk(block, pos)};
assert(success);
});
}
static void ReadRawBlockFromDiskBench(benchmark::Bench& bench)
{
const auto testing_setup{MakeNoLogFileContext<const TestingSetup>(ChainType::MAIN)};
auto& blockman{testing_setup->m_node.chainman->m_blockman};
const auto pos{blockman.SaveBlock(CreateTestBlock(), 413'567)};
std::vector<uint8_t> block_data;
blockman.ReadRawBlockFromDisk(block_data, pos); // warmup
bench.run([&] {
const auto success{blockman.ReadRawBlockFromDisk(block_data, pos)};
assert(success);
});
}
BENCHMARK(GetSerializeSizeBench, benchmark::PriorityLevel::HIGH);
BENCHMARK(SaveBlockToDiskBench, benchmark::PriorityLevel::HIGH);
BENCHMARK(ReadBlockFromDiskBench, benchmark::PriorityLevel::HIGH);
BENCHMARK(ReadRawBlockFromDiskBench, benchmark::PriorityLevel::HIGH);


@@ -669,39 +669,12 @@ CBlockFileInfo* BlockManager::GetBlockFileInfo(size_t n)
return &m_blockfile_info.at(n);
}
bool BlockManager::UndoWriteToDisk(const CBlockUndo& blockundo, FlatFilePos& pos, const uint256& hashBlock) const
{
// Open history file to append
AutoFile fileout{OpenUndoFile(pos)};
if (fileout.IsNull()) {
LogError("%s: OpenUndoFile failed\n", __func__);
return false;
}
// Write index header
unsigned int nSize = GetSerializeSize(blockundo);
fileout << GetParams().MessageStart() << nSize;
// Write undo data
long fileOutPos = fileout.tell();
pos.nPos = (unsigned int)fileOutPos;
fileout << blockundo;
// calculate & write checksum
HashWriter hasher{};
hasher << hashBlock;
hasher << blockundo;
fileout << hasher.GetHash();
return true;
}
bool BlockManager::UndoReadFromDisk(CBlockUndo& blockundo, const CBlockIndex& index) const
{
const FlatFilePos pos{WITH_LOCK(::cs_main, return index.GetUndoPos())};
// Open history file to read
AutoFile filein{OpenUndoFile(pos, true)};
BufferedReadOnlyFile filein{m_undo_file_seq, pos, m_xor_key};
if (filein.IsNull()) {
LogError("%s: OpenUndoFile failed for %s\n", __func__, pos.ToString());
return false;
@@ -963,62 +936,55 @@ bool BlockManager::FindUndoPos(BlockValidationState& state, int nFile, FlatFileP
return true;
}
bool BlockManager::WriteBlockToDisk(const CBlock& block, FlatFilePos& pos) const
{
// Open history file to append
AutoFile fileout{OpenBlockFile(pos)};
if (fileout.IsNull()) {
LogError("%s: OpenBlockFile failed\n", __func__);
return false;
}
// Write index header
unsigned int nSize = GetSerializeSize(TX_WITH_WITNESS(block));
fileout << GetParams().MessageStart() << nSize;
// Write block
long fileOutPos = fileout.tell();
pos.nPos = (unsigned int)fileOutPos;
fileout << TX_WITH_WITNESS(block);
return true;
}
bool BlockManager::WriteUndoDataForBlock(const CBlockUndo& blockundo, BlockValidationState& state, CBlockIndex& block)
bool BlockManager::SaveBlockUndo(const CBlockUndo& blockundo, BlockValidationState& state, CBlockIndex& block)
{
AssertLockHeld(::cs_main);
const BlockfileType type = BlockfileTypeForHeight(block.nHeight);
auto& cursor = *Assert(WITH_LOCK(cs_LastBlockFile, return m_blockfile_cursors[type]));
// Write undo information to disk
if (block.GetUndoPos().IsNull()) {
FlatFilePos _pos;
if (!FindUndoPos(state, block.nFile, _pos, ::GetSerializeSize(blockundo) + 40)) {
FlatFilePos pos;
const uint32_t blockundo_size{static_cast<uint32_t>(GetSerializeSize(blockundo))};
if (!FindUndoPos(state, block.nFile, pos, UNDO_DATA_DISK_OVERHEAD + blockundo_size)) {
LogError("%s: FindUndoPos failed\n", __func__);
return false;
}
if (!UndoWriteToDisk(blockundo, _pos, block.pprev->GetBlockHash())) {
BufferedWriteOnlyFile fileout{m_undo_file_seq, pos, m_xor_key};
if (fileout.IsNull()) {
LogError("%s: OpenUndoFile failed\n", __func__);
return FatalError(m_opts.notifications, state, _("Failed to write undo data."));
}
// Write index header
fileout << GetParams().MessageStart() << blockundo_size;
pos.nPos += BLOCK_SERIALIZATION_HEADER_SIZE;
fileout << blockundo;
// calculate & write checksum
HashWriter hasher{};
hasher << block.pprev->GetBlockHash();
hasher << blockundo;
fileout << hasher.GetHash();
// rev files are written in block height order, whereas blk files are written as blocks come in (often out of order)
// we want to flush the rev (undo) file once we've written the last block, which is indicated by the last height
// in the block file info as below; note that this does not catch the case where the undo writes are keeping up
// with the block writes (usually when a synced up node is getting newly mined blocks) -- this case is caught in
// the FindNextBlockPos function
if (_pos.nFile < cursor.file_num && static_cast<uint32_t>(block.nHeight) == m_blockfile_info[_pos.nFile].nHeightLast) {
if (pos.nFile < cursor.file_num && static_cast<uint32_t>(block.nHeight) == m_blockfile_info[pos.nFile].nHeightLast) {
// Do not propagate the return code, a failed flush here should not
// be an indication for a failed write. If it were propagated here,
// the caller would assume the undo data not to be written, when in
// fact it is. Note though, that a failed flush might leave the data
// file untrimmed.
if (!FlushUndoFile(_pos.nFile, true)) {
LogPrintLevel(BCLog::BLOCKSTORAGE, BCLog::Level::Warning, "Failed to flush undo file %05i\n", _pos.nFile);
if (!FlushUndoFile(pos.nFile, true)) {
LogPrintLevel(BCLog::BLOCKSTORAGE, BCLog::Level::Warning, "Failed to flush undo file %05i\n", pos.nFile);
}
} else if (_pos.nFile == cursor.file_num && block.nHeight > cursor.undo_height) {
} else if (pos.nFile == cursor.file_num && block.nHeight > cursor.undo_height) {
cursor.undo_height = block.nHeight;
}
// update nUndoPos in block index
block.nUndoPos = _pos.nPos;
block.nUndoPos = pos.nPos;
block.nStatus |= BLOCK_HAVE_UNDO;
m_dirty_blockindex.insert(&block);
}
@@ -1031,7 +997,7 @@ bool BlockManager::ReadBlockFromDisk(CBlock& block, const FlatFilePos& pos) cons
block.SetNull();
// Open history file to read
AutoFile filein{OpenBlockFile(pos, true)};
BufferedReadOnlyFile filein{m_block_file_seq, pos, m_xor_key};
if (filein.IsNull()) {
LogError("%s: OpenBlockFile failed for %s\n", __func__, pos.ToString());
return false;
@@ -1119,22 +1085,25 @@ bool BlockManager::ReadRawBlockFromDisk(std::vector<uint8_t>& block, const FlatF
return true;
}
FlatFilePos BlockManager::SaveBlockToDisk(const CBlock& block, int nHeight)
FlatFilePos BlockManager::SaveBlock(const CBlock& block, int nHeight)
{
unsigned int nBlockSize = ::GetSerializeSize(TX_WITH_WITNESS(block));
// Account for the 4 magic message start bytes + the 4 length bytes (8 bytes total,
// defined as BLOCK_SERIALIZATION_HEADER_SIZE)
nBlockSize += static_cast<unsigned int>(BLOCK_SERIALIZATION_HEADER_SIZE);
FlatFilePos blockPos{FindNextBlockPos(nBlockSize, nHeight, block.GetBlockTime())};
if (blockPos.IsNull()) {
const uint32_t block_size{static_cast<uint32_t>(GetSerializeSize(TX_WITH_WITNESS(block)))};
FlatFilePos pos{FindNextBlockPos(BLOCK_SERIALIZATION_HEADER_SIZE + block_size, nHeight, block.GetBlockTime())};
if (pos.IsNull()) {
LogError("%s: FindNextBlockPos failed\n", __func__);
return FlatFilePos();
}
if (!WriteBlockToDisk(block, blockPos)) {
BufferedWriteOnlyFile fileout{m_block_file_seq, pos, m_xor_key};
if (fileout.IsNull()) {
LogError("%s: OpenBlockFile failed\n", __func__);
m_opts.notifications.fatalError(_("Failed to write block."));
return FlatFilePos();
}
return blockPos;
fileout << GetParams().MessageStart() << block_size;
pos.nPos += BLOCK_SERIALIZATION_HEADER_SIZE;
fileout << TX_WITH_WITNESS(block);
return pos;
}
static auto InitBlocksdirXorKey(const BlockManager::Options& opts)


@@ -74,8 +74,11 @@ static const unsigned int UNDOFILE_CHUNK_SIZE = 0x100000; // 1 MiB
/** The maximum size of a blk?????.dat file (since 0.8) */
static const unsigned int MAX_BLOCKFILE_SIZE = 0x8000000; // 128 MiB
/** Size of header written by WriteBlockToDisk before a serialized CBlock */
static constexpr size_t BLOCK_SERIALIZATION_HEADER_SIZE = std::tuple_size_v<MessageStartChars> + sizeof(unsigned int);
/** Size of header written by SaveBlock before a serialized CBlock (8 bytes) */
static constexpr uint32_t BLOCK_SERIALIZATION_HEADER_SIZE = std::tuple_size_v<MessageStartChars> + sizeof(uint32_t);
/** Total overhead when writing undo data: header (8 bytes) plus checksum (32 bytes) */
static constexpr uint32_t UNDO_DATA_DISK_OVERHEAD = BLOCK_SERIALIZATION_HEADER_SIZE + uint256::size();
// Because validation code takes pointers to the map's CBlockIndex objects, if
// we ever switch to another associative container, we need to either use a
@@ -161,7 +164,7 @@ private:
* blockfile info, and checks if there is enough disk space to save the block.
*
* The nAddSize argument passed to this function should include not just the size of the serialized CBlock, but also the size of
* separator fields which are written before it by WriteBlockToDisk (BLOCK_SERIALIZATION_HEADER_SIZE).
* separator fields (BLOCK_SERIALIZATION_HEADER_SIZE).
*/
[[nodiscard]] FlatFilePos FindNextBlockPos(unsigned int nAddSize, unsigned int nHeight, uint64_t nTime);
[[nodiscard]] bool FlushChainstateBlockFile(int tip_height);
@@ -169,15 +172,6 @@ private:
AutoFile OpenUndoFile(const FlatFilePos& pos, bool fReadOnly = false) const;
/**
* Write a block to disk. The pos argument passed to this function is modified by this call. Before this call, it should
* point to an unused file location where separator fields will be written, followed by the serialized CBlock data.
* After this call, it will point to the beginning of the serialized CBlock data, after the separator fields
* (BLOCK_SERIALIZATION_HEADER_SIZE)
*/
bool WriteBlockToDisk(const CBlock& block, FlatFilePos& pos) const;
bool UndoWriteToDisk(const CBlockUndo& blockundo, FlatFilePos& pos, const uint256& hashBlock) const;
/* Calculate the block/rev files to delete based on height specified by user with RPC command pruneblockchain */
void FindFilesToPruneManual(
std::set<int>& setFilesToPrune,
@@ -330,7 +324,7 @@ public:
/** Get block file info entry for one block file */
CBlockFileInfo* GetBlockFileInfo(size_t n);
bool WriteUndoDataForBlock(const CBlockUndo& blockundo, BlockValidationState& state, CBlockIndex& block)
bool SaveBlockUndo(const CBlockUndo& blockundo, BlockValidationState& state, CBlockIndex& block)
EXCLUSIVE_LOCKS_REQUIRED(::cs_main);
/** Store block on disk and update block file statistics.
@@ -341,14 +335,13 @@ public:
* @returns in case of success, the position to which the block was written to
* in case of an error, an empty FlatFilePos
*/
FlatFilePos SaveBlockToDisk(const CBlock& block, int nHeight);
FlatFilePos SaveBlock(const CBlock& block, int nHeight);
/** Update blockfile info while processing a block during reindex. The block must be available on disk.
*
* @param[in] block the block being processed
* @param[in] nHeight the height of the block
* @param[in] pos the position of the serialized CBlock on disk. This is the position returned
* by WriteBlockToDisk pointing at the CBlock, not the separator fields before it
* @param[in] pos the position of the serialized CBlock on disk
*/
void UpdateBlockInfo(const CBlock& block, unsigned int nHeight, const FlatFilePos& pos);


@@ -421,6 +421,7 @@ void BlockAssembler::addPackageTxs(int& nPackagesSelected, int& nDescendantsUpda
}
++nPackagesSelected;
pblocktemplate->m_package_feerates.emplace_back(packageFees, static_cast<int32_t>(packageSize));
// Update transactions that depend on each of these
nDescendantsUpdated += UpdatePackagesForAdded(mempool, ancestors, mapModifiedTx);


@@ -10,6 +10,7 @@
#include <policy/policy.h>
#include <primitives/block.h>
#include <txmempool.h>
#include <util/feefrac.h>
#include <memory>
#include <optional>
@@ -39,6 +40,9 @@ struct CBlockTemplate
std::vector<CAmount> vTxFees;
std::vector<int64_t> vTxSigOpsCost;
std::vector<unsigned char> vchCoinbaseCommitment;
/* A vector of package fee rates, ordered by the sequence in which
* packages are selected for inclusion in the block template.*/
std::vector<FeeFrac> m_package_feerates;
};
// Container for tracking updates to ancestor feerate as we include (parent)


@@ -51,7 +51,7 @@ void AutoFile::seek(int64_t offset, int origin)
}
}
int64_t AutoFile::tell()
int64_t AutoFile::tell() const
{
if (!m_position.has_value()) throw std::ios_base::failure("AutoFile::tell: position unknown");
return *m_position;


@@ -15,6 +15,7 @@
#include <assert.h>
#include <cstddef>
#include <cstdio>
#include <flatfile.h>
#include <ios>
#include <limits>
#include <optional>
@@ -23,6 +24,7 @@
#include <string>
#include <utility>
#include <vector>
#include <util/check.h>
namespace util {
inline void Xor(Span<std::byte> write, Span<const std::byte> key, size_t key_offset = 0)
@@ -428,7 +430,7 @@ public:
bool IsNull() const { return m_file == nullptr; }
/** Continue with a different XOR key */
void SetXor(std::vector<std::byte> data_xor) { m_xor = data_xor; }
void SetXor(const std::vector<std::byte>& data_xor) { m_xor = data_xor; }
/** Implementation detail, only used internally. */
std::size_t detail_fread(Span<std::byte> dst);
@@ -437,7 +439,7 @@ public:
void seek(int64_t offset, int origin);
/** Find position within the file. Will throw if unknown. */
int64_t tell();
int64_t tell() const;
/** Wrapper around FileCommit(). */
bool Commit();
@@ -614,4 +616,87 @@ public:
}
};
class BufferedReadOnlyFile
{
AutoFile m_src;
std::vector<std::byte> m_buf;
size_t m_buf_start{0}, m_buf_end{0};
public:
explicit BufferedReadOnlyFile(const FlatFileSeq& block_file_seq,
const FlatFilePos& pos,
const std::vector<std::byte>& m_xor_key,
const size_t buf_size = 16 << 10)
: m_src{block_file_seq.Open(pos, /*read_only=*/true), m_xor_key},
m_buf{buf_size} {}
void read(Span<std::byte> dst)
{
if (m_buf_start < m_buf_end) {
const size_t chunk = Assert(std::min(dst.size(), m_buf_end - m_buf_start));
std::memcpy(dst.data(), m_buf.data() + m_buf_start, chunk);
m_buf_start += chunk;
dst = dst.subspan(chunk);
}
if (!dst.empty()) {
Assume(m_buf_start == m_buf_end);
m_src.read(dst);
m_buf_start = 0;
m_buf_end = m_src.detail_fread(m_buf);
}
}
bool IsNull() const { return m_src.IsNull(); }
template <typename T> void operator>>(T&& obj) { ::Unserialize(*this, obj); }
};
class BufferedWriteOnlyFile {
const std::vector<std::byte>& m_xor_key;
AutoFile m_dest;
std::vector<std::byte> m_buf;
size_t m_buf_pos{0};
void flush() {
if (m_buf_pos == 0) return;
const auto bytes = (m_buf_pos == m_buf.size()) ? m_buf : Span{m_buf}.first(m_buf_pos);
util::Xor(bytes, m_xor_key, m_dest.tell());
m_dest.write(bytes);
m_buf_pos = 0;
}
public:
explicit BufferedWriteOnlyFile(const FlatFileSeq& block_file_seq,
const FlatFilePos& pos,
const std::vector<std::byte>& m_xor_key,
const size_t buf_size = 1 << 20)
: m_xor_key{m_xor_key},
m_dest{block_file_seq.Open(pos, /*read_only=*/false), {}}, // We'll handle obfuscation internally
m_buf{buf_size} {}
~BufferedWriteOnlyFile() { flush(); }
void write(Span<const std::byte> src) {
while (!src.empty()) {
if (m_buf_pos == m_buf.size()) flush();
const size_t chunk = Assert(std::min(src.size(), m_buf.size() - m_buf_pos));
std::memcpy(m_buf.data() + m_buf_pos, src.data(), chunk);
m_buf_pos += chunk;
src = src.subspan(chunk);
}
}
bool IsNull() const { return m_dest.IsNull(); }
int64_t tell() const { return m_dest.tell() + m_buf_pos; }
template<typename T> BufferedWriteOnlyFile& operator<<(const T& obj) {
::Serialize(*this, obj);
return *this;
}
};
#endif // BITCOIN_STREAMS_H


@@ -36,7 +36,7 @@ BOOST_AUTO_TEST_CASE(blockmanager_find_block_pos)
};
BlockManager blockman{*Assert(m_node.shutdown_signal), blockman_opts};
// simulate adding a genesis block normally
BOOST_CHECK_EQUAL(blockman.SaveBlockToDisk(params->GenesisBlock(), 0).nPos, BLOCK_SERIALIZATION_HEADER_SIZE);
BOOST_CHECK_EQUAL(blockman.SaveBlock(params->GenesisBlock(), 0).nPos, BLOCK_SERIALIZATION_HEADER_SIZE);
// simulate what happens during reindex
// simulate a well-formed genesis block being found at offset 8 in the blk00000.dat file
// the block is found at offset 8 because there is an 8 byte serialization header
@@ -49,7 +49,7 @@ BOOST_AUTO_TEST_CASE(blockmanager_find_block_pos)
// this is a check to make sure that https://github.com/bitcoin/bitcoin/issues/21379 does not recur
// 8 bytes (for serialization header) + 285 (for serialized genesis block) = 293
// add another 8 bytes for the second block's serialization header and we get 293 + 8 = 301
FlatFilePos actual{blockman.SaveBlockToDisk(params->GenesisBlock(), 1)};
FlatFilePos actual{blockman.SaveBlock(params->GenesisBlock(), 1)};
BOOST_CHECK_EQUAL(actual.nPos, BLOCK_SERIALIZATION_HEADER_SIZE + ::GetSerializeSize(TX_WITH_WITNESS(params->GenesisBlock())) + BLOCK_SERIALIZATION_HEADER_SIZE);
}
@@ -158,10 +158,10 @@ BOOST_AUTO_TEST_CASE(blockmanager_flush_block_file)
BOOST_CHECK_EQUAL(blockman.CalculateCurrentUsage(), 0);
// Write the first block to a new location.
FlatFilePos pos1{blockman.SaveBlockToDisk(block1, /*nHeight=*/1)};
FlatFilePos pos1{blockman.SaveBlock(block1, /*nHeight=*/1)};
// Write second block
FlatFilePos pos2{blockman.SaveBlockToDisk(block2, /*nHeight=*/2)};
FlatFilePos pos2{blockman.SaveBlock(block2, /*nHeight=*/2)};
// Two blocks in the file
BOOST_CHECK_EQUAL(blockman.CalculateCurrentUsage(), (TEST_BLOCK_SIZE + BLOCK_SERIALIZATION_HEADER_SIZE) * 2);


@@ -16,6 +16,7 @@
#include <txmempool.h>
#include <uint256.h>
#include <util/check.h>
#include <util/feefrac.h>
#include <util/strencodings.h>
#include <util/time.h>
#include <util/translation.h>
@@ -25,6 +26,7 @@
#include <test/util/setup_common.h>
#include <memory>
#include <vector>
#include <boost/test/unit_test.hpp>
@@ -123,19 +125,22 @@ void MinerTestingSetup::TestPackageSelection(const CScript& scriptPubKey, const
tx.vout[0].nValue = 5000000000LL - 1000;
// This tx has a low fee: 1000 satoshis
Txid hashParentTx = tx.GetHash(); // save this txid for later use
AddToMempool(tx_mempool, entry.Fee(1000).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
const auto parent_tx{entry.Fee(1000).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx)};
AddToMempool(tx_mempool, parent_tx);
// This tx has a medium fee: 10000 satoshis
tx.vin[0].prevout.hash = txFirst[1]->GetHash();
tx.vout[0].nValue = 5000000000LL - 10000;
Txid hashMediumFeeTx = tx.GetHash();
AddToMempool(tx_mempool, entry.Fee(10000).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx));
const auto medium_fee_tx{entry.Fee(10000).Time(Now<NodeSeconds>()).SpendsCoinbase(true).FromTx(tx)};
AddToMempool(tx_mempool, medium_fee_tx);
// This tx has a high fee, but depends on the first transaction
tx.vin[0].prevout.hash = hashParentTx;
tx.vout[0].nValue = 5000000000LL - 1000 - 50000; // 50k satoshi fee
Txid hashHighFeeTx = tx.GetHash();
AddToMempool(tx_mempool, entry.Fee(50000).Time(Now<NodeSeconds>()).SpendsCoinbase(false).FromTx(tx));
const auto high_fee_tx{entry.Fee(50000).Time(Now<NodeSeconds>()).SpendsCoinbase(false).FromTx(tx)};
AddToMempool(tx_mempool, high_fee_tx);
std::unique_ptr<BlockTemplate> block_template = mining->createNewBlock(options);
BOOST_REQUIRE(block_template);
@@ -145,6 +150,21 @@ void MinerTestingSetup::TestPackageSelection(const CScript& scriptPubKey, const
BOOST_CHECK(block.vtx[2]->GetHash() == hashHighFeeTx);
BOOST_CHECK(block.vtx[3]->GetHash() == hashMediumFeeTx);
// Test the inclusion of package feerates in the block template and ensure they are sequential.
const auto block_package_feerates = BlockAssembler{m_node.chainman->ActiveChainstate(), &tx_mempool, options}.CreateNewBlock()->m_package_feerates;
BOOST_CHECK(block_package_feerates.size() == 2);
// parent_tx and high_fee_tx are added to the block as a package.
const auto combined_txs_fee = parent_tx.GetFee() + high_fee_tx.GetFee();
const auto combined_txs_size = parent_tx.GetTxSize() + high_fee_tx.GetTxSize();
FeeFrac package_feefrac{combined_txs_fee, combined_txs_size};
// The package should be added first.
BOOST_CHECK(block_package_feerates[0] == package_feefrac);
// The medium_fee_tx should be added next.
FeeFrac medium_tx_feefrac{medium_fee_tx.GetFee(), medium_fee_tx.GetTxSize()};
BOOST_CHECK(block_package_feerates[1] == medium_tx_feefrac);
// Test that a package below the block min tx fee doesn't get included
tx.vin[0].prevout.hash = hashHighFeeTx;
tx.vout[0].nValue = 5000000000LL - 1000 - 50000; // 0 fee


@@ -9,6 +9,7 @@
#include <util/strencodings.h>
#include <boost/test/unit_test.hpp>
#include <node/blockstorage.h>
using namespace std::string_literals;
@@ -553,6 +554,92 @@ BOOST_AUTO_TEST_CASE(streams_buffered_file_rand)
fs::remove(streams_test_filename);
}
BOOST_AUTO_TEST_CASE(buffered_read_only_file_matches_autofile_random_content)
{
const FlatFileSeq test_file{m_args.GetDataDirBase(), "buffered_file_test_random", node::BLOCKFILE_CHUNK_SIZE};
constexpr size_t file_size{1 << 20};
constexpr size_t max_read_length{100};
const FlatFilePos pos{0, 0};
FastRandomContext rng{/*fDeterministic=*/false};
const std::vector obfuscation{rng.randbytes<std::byte>(8)};
AutoFile{test_file.Open(pos, false), obfuscation}.write(rng.randbytes<std::byte>(file_size));
AutoFile auto_file{test_file.Open(pos, true), obfuscation};
BufferedReadOnlyFile buffered{test_file, pos, obfuscation};
for (size_t total_read{0}; total_read < file_size;) {
const size_t read{Assert(std::min(rng.randrange(max_read_length) + 1, file_size - total_read))};
std::vector<std::byte> auto_file_buffer{read};
auto_file.read(auto_file_buffer);
std::vector<std::byte> buffered_buffer{read};
buffered.read(buffered_buffer);
BOOST_CHECK_EQUAL_COLLECTIONS(
auto_file_buffer.begin(), auto_file_buffer.end(),
buffered_buffer.begin(), buffered_buffer.end()
);
total_read += read;
}
std::vector<std::byte> excess{1};
BOOST_CHECK_EXCEPTION(auto_file.read(excess), std::ios_base::failure, HasReason{"end of file"});
BOOST_CHECK_EXCEPTION(buffered.read(excess), std::ios_base::failure, HasReason{"end of file"});
try { fs::remove(test_file.FileName(pos)); } catch (...) {}
}
BOOST_AUTO_TEST_CASE(buffered_write_only_file_matches_autofile_random_content)
{
const FlatFileSeq test_buffered{m_args.GetDataDirBase(), "buffered_write_test", node::BLOCKFILE_CHUNK_SIZE};
const FlatFileSeq test_direct{m_args.GetDataDirBase(), "direct_write_test", node::BLOCKFILE_CHUNK_SIZE};
constexpr size_t file_size{1 << 20};
constexpr size_t max_write_length{100};
const FlatFilePos pos{0, 0};
FastRandomContext rng{/*fDeterministic=*/false};
const std::vector obfuscation{rng.randbytes<std::byte>(8)};
{
std::vector test_data{rng.randbytes<std::byte>(file_size)};
AutoFile direct_file{test_direct.Open(pos, false), obfuscation};
BufferedWriteOnlyFile buffered{test_buffered, pos, obfuscation};
BOOST_CHECK_EQUAL(direct_file.tell(), buffered.tell());
for (size_t total_written{0}; total_written < file_size;) {
const size_t write_size{Assert(std::min(rng.randrange(max_write_length) + 1, file_size - total_written))};
auto current_span = Span{test_data}.subspan(total_written, write_size);
direct_file.write(current_span);
buffered.write(current_span);
BOOST_CHECK_EQUAL(direct_file.tell(), buffered.tell());
total_written += write_size;
}
}
// Compare the resulting files
AutoFile verify_direct{test_direct.Open(pos, true), obfuscation};
std::vector<std::byte> direct_result{file_size};
verify_direct.read(direct_result);
AutoFile verify_buffered{test_buffered.Open(pos, true), obfuscation};
std::vector<std::byte> buffered_result{file_size};
verify_buffered.read(buffered_result);
BOOST_CHECK_EQUAL_COLLECTIONS(
direct_result.begin(), direct_result.end(),
buffered_result.begin(), buffered_result.end()
);
try {
fs::remove(test_direct.FileName(pos));
fs::remove(test_buffered.FileName(pos));
} catch (...) {}
}
BOOST_AUTO_TEST_CASE(streams_hashed)
{
DataStream stream{};
@@ -567,4 +654,4 @@ BOOST_AUTO_TEST_CASE(streams_hashed)
BOOST_CHECK_EQUAL(hash_writer.GetHash(), hash_verifier.GetHash());
}
BOOST_AUTO_TEST_SUITE_END()


@@ -2747,7 +2747,7 @@ bool Chainstate::ConnectBlock(const CBlock& block, BlockValidationState& state,
return true;
}
-if (!m_blockman.WriteUndoDataForBlock(blockundo, state, *pindex)) {
+if (!m_blockman.SaveBlockUndo(blockundo, state, *pindex)) {
return false;
}
@@ -4564,7 +4564,7 @@ bool ChainstateManager::AcceptBlock(const std::shared_ptr<const CBlock>& pblock,
blockPos = *dbp;
m_blockman.UpdateBlockInfo(block, pindex->nHeight, blockPos);
} else {
-blockPos = m_blockman.SaveBlockToDisk(block, pindex->nHeight);
+blockPos = m_blockman.SaveBlock(block, pindex->nHeight);
if (blockPos.IsNull()) {
state.Error(strprintf("%s: Failed to find position to write new block to disk", __func__));
return false;
@@ -5062,7 +5062,7 @@ bool Chainstate::LoadGenesisBlock()
try {
const CBlock& block = params.GenesisBlock();
-FlatFilePos blockPos{m_blockman.SaveBlockToDisk(block, 0)};
+FlatFilePos blockPos{m_blockman.SaveBlock(block, 0)};
if (blockPos.IsNull()) {
LogError("%s: writing genesis block to disk failed\n", __func__);
return false;