This was broken by the reorg fix, since we now have to add blocks
regardless of their starting height. We now check whether we know
the parent for the first block in the next span, or whether it was
requested. If neither, it's an orphan. If it is not known, but was
requested, we wait to get that block.
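As a rough sketch of that check (type and function names here are
hypothetical, not the actual cryptonote_protocol code):

  #include <string>
  #include <unordered_set>

  // Hypothetical view of the sync state, for illustration only.
  struct sync_state {
    std::unordered_set<std::string> known_blocks;      // hashes we already have
    std::unordered_set<std::string> requested_blocks;  // hashes still in flight
  };

  enum class span_action { add, wait, orphan };

  // Decide what to do with the first block of the next span, based on the
  // hash of its parent: add it if the parent is known, wait if the parent
  // was requested but not received yet, otherwise treat it as an orphan.
  span_action classify_next_span(const sync_state &s, const std::string &parent_hash)
  {
    if (s.known_blocks.count(parent_hash))
      return span_action::add;
    if (s.requested_blocks.count(parent_hash))
      return span_action::wait;
    return span_action::orphan;
  }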
Add get_fork_version and add_ideal_fork_version to core so
cryptonote_protocol does not need to use the Blockchain
class directly, as it's not in its dependencies, and add
those to the fake core classes in tests too.
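The change amounts to thin forwarders on core; a minimal sketch, assuming
the Blockchain class exposes matching getters (the signatures here are
assumptions, not the real interfaces):

  #include <cstdint>

  // Illustration only: core forwards fork-version queries to the Blockchain
  // so callers such as cryptonote_protocol never need the Blockchain header.
  class Blockchain {
  public:
    uint8_t get_fork_version(uint64_t /*height*/) const { return 1; }        // stub
    uint8_t get_ideal_fork_version(uint64_t /*height*/) const { return 1; }  // stub
  };

  class core {
  public:
    uint8_t get_fork_version(uint64_t height) const {
      return m_blockchain.get_fork_version(height);
    }
    uint8_t get_ideal_fork_version(uint64_t height) const {
      return m_blockchain.get_ideal_fork_version(height);
    }
  private:
    Blockchain m_blockchain;
  };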
When a node is dropped, we stop considering its claimed blockchain
height as a factor in the target height calculation. This prevents
a runaway chain from still being considered the target even after
the nodes carrying it are dropped.
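A minimal sketch of the idea, assuming the target is simply the largest
height claimed by a still-connected peer (the real calculation may weigh
things differently):

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  // Recompute the sync target from the heights claimed by peers that are
  // still connected; a dropped peer's claim no longer pins the target.
  uint64_t recompute_target_height(const std::vector<uint64_t> &connected_peer_heights,
                                   uint64_t our_height)
  {
    uint64_t target = our_height;
    for (uint64_t h : connected_peer_heights)
      target = std::max(target, h);
    return target;
  }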
We won't even talk to a peer which claims a wrong version
for its top block. This will avoid syncing to known bad
peers in the first place.
Also count an IP fail when a block fails to verify.
Connections can be dropped by the net_node layer,
unbeknownst to cryptonote_protocol, which would then
not flush any spans scheduled to that connection.
Those spans would then only be downloaded again
once they became the next span (possibly after a small
delay if they had been requested less than 5 seconds
earlier).
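A rough sketch of the fix's shape (names and types are hypothetical): the
connection-close path now tells the protocol layer, which releases the
spans scheduled to that connection so they can be re-requested at once.

  #include <cstdint>
  #include <map>
  #include <string>
  #include <utility>
  #include <vector>

  struct span { uint64_t start_height; uint64_t num_blocks; };

  class block_queue_sketch {
  public:
    void schedule(const std::string &connection_id, const span &s) {
      m_scheduled[connection_id].push_back(s);
    }
    // Called when net_node drops the connection: return the spans that were
    // scheduled to it so the caller can queue them for download again.
    std::vector<span> flush_spans(const std::string &connection_id) {
      std::vector<span> released = std::move(m_scheduled[connection_id]);
      m_scheduled.erase(connection_id);
      return released;
    }
  private:
    std::map<std::string, std::vector<span>> m_scheduled;
  };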
A block queue is now placed between block download and
block processing. Blocks are now requested only from one
peer (unless starved).
Includes a new sync_info command.
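A sketch of the "one peer unless starved" policy (the threshold and names
are made up for illustration, not taken from the code):

  #include <cstddef>
  #include <string>

  struct downloader_state {
    std::string current_peer;       // the peer we currently pull spans from
    std::size_t queued_blocks = 0;  // blocks queued but not yet processed
  };

  // Keep requesting from the chosen peer; only fan out to another peer when
  // the queue runs low (i.e. we are starved for blocks to process).
  bool should_request_from(const downloader_state &st, const std::string &candidate_peer,
                           std::size_t starved_threshold = 10)
  {
    if (candidate_peer == st.current_peer)
      return true;
    return st.queued_blocks < starved_threshold;
  }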
All code which was using an IP and port now uses a new IPv4 object,
a subclass of a new network_address class. This will allow easy
addition of I2P addresses later (and also IPv6, etc).
Both old style and new style peer lists are now sent in the P2P
protocol, which is inefficient but allows peers using both
codebases to talk to each other. This will be removed in the
future. No other subclasses than IPv4 exist yet.
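A minimal sketch of that class shape (not the actual epee/net_node
interface, just the abstraction described above):

  #include <cstdint>
  #include <sstream>
  #include <string>

  class network_address {
  public:
    virtual ~network_address() = default;
    virtual std::string str() const = 0;  // human readable form
  };

  // The only concrete subclass so far; I2P, IPv6, etc. can be added later.
  class ipv4_network_address : public network_address {
  public:
    ipv4_network_address(uint32_t ip, uint16_t port) : m_ip(ip), m_port(port) {}
    std::string str() const override {
      std::ostringstream s;
      s << ((m_ip >> 24) & 0xff) << '.' << ((m_ip >> 16) & 0xff) << '.'
        << ((m_ip >> 8) & 0xff) << '.' << (m_ip & 0xff) << ':' << m_port;
      return s.str();
    }
  private:
    uint32_t m_ip;
    uint16_t m_port;
  };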
- only pause mining once we've got the lock (in practice, it'll
already be paused by another thread if we can't get the lock
at once though)
- do not call prepare_handle_incoming_blocks when we have dismissed
all the blocks, as it only causes cleanup_handle_incoming_blocks
to complain afterwards (see the sketch below)
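Both points boil down to the shape below (a sketch with hypothetical
names, not the real core/miner interfaces):

  #include <mutex>
  #include <vector>

  struct miner_sketch { void pause() {} void resume() {} };
  struct block_blob {};

  void handle_block_batch(std::mutex &incoming_lock, miner_sketch &miner,
                          const std::vector<block_blob> &blocks_to_process)
  {
    std::lock_guard<std::mutex> lock(incoming_lock);  // take the lock first
    if (blocks_to_process.empty())
      return;          // everything dismissed: skip prepare_handle_incoming_blocks
    miner.pause();     // pause mining only once we actually hold the lock
    // ... prepare_handle_incoming_blocks / add blocks / cleanup ...
    miner.resume();
  }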
8bdc86be protocol: speed up sync by minimizing duplicate work (moneromooo-monero)
61dfa310 epee: fix some log macros not printing context nicely (moneromooo-monero)
In particular, the prepare_handle_incoming_blocks call
is pretty lengthy, and entirely pointless in the common
case where several different connections will prepare
the exact same blocks.
- fix wrong block being used when a new block is received between
a node relaying a fluffy block and sending a new fluffy block
with txes a peer did not have
- fix a neverending ping pong requesting the same missing txids
when a new block is received in the meantime, causing the top
block to not be the one we need
- send the original fluffy block message block height when sending
a new fluffy block, not the current top height, which might
have been updated since
- avoid sending back the whole block blob when asking for txes,
send only the hash instead (see the sketch below)
- plus misc cleanup and additional debugging logs
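A rough sketch of such a request for the missing transactions of a fluffy
block; the field names are hypothetical, not the actual protocol
definitions:

  #include <cstdint>
  #include <string>
  #include <vector>

  // Only the block hash plus the indices of the missing txes travel on the
  // wire, instead of echoing the whole block blob back to the relaying peer.
  struct request_fluffy_missing_tx_sketch {
    std::string block_hash;
    uint64_t current_blockchain_height;
    std::vector<uint64_t> missing_tx_indices;
  };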
0644eed7 Remove boost/foreach.cpp includes (Miguel Herranz)
36dd3e23 Replace BOOST_REVERSE_FOREACH with ranged for (Miguel Herranz)
629e3101 Replace BOOST_FOREACH with C++11 ranged for (Miguel Herranz)
This replaces the epee and data_loggers logging systems with
a single one, and also adds filename:line and explicit severity
levels. Categories may be defined, and logging severity set
by category (or set of categories). epee style 0-4 log level
maps to a sensible severity configuration. Log files now also
rotate when reaching 100 MB.
To select which logs to output, use the MONERO_LOGS environment
variable, with a comma separated list of categories (globs are
supported), with their requested severity level after a colon.
If a log matches more than one such setting, the last one in
the configuration string applies. A few examples:
This one is (mostly) silent, only outputting fatal errors:
MONERO_LOGS=*:FATAL
This one is very verbose:
MONERO_LOGS=*:TRACE
This one is totally silent (logwise):
MONERO_LOGS=""
This one outputs all errors and warnings, except for the
"verify" category, which prints just fatal errors (the verify
category is used for logs about incoming transactions and
blocks, and it is expected that some/many will fail to verify,
hence we don't want the spam):
MONERO_LOGS=*:WARNING,verify:FATAL
Log levels are, in decreasing order of priority:
FATAL, ERROR, WARNING, INFO, DEBUG, TRACE
Subcategories may be added using prefixes and globs. This
example will output net.p2p logs at the TRACE level, but all
other net* logs only at INFO:
MONERO_LOGS=*:ERROR,net*:INFO,net.p2p:TRACE
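The resolution rule ("the last matching entry wins") can be pictured with
a small sketch; this is an illustration of the behaviour described above,
not the actual configuration code, and only "*"-style prefix globs are
handled:

  #include <string>
  #include <utility>
  #include <vector>

  // True if a category name matches a pattern such as "*", "net*" or "net.p2p".
  bool matches(const std::string &pattern, const std::string &category)
  {
    if (pattern == "*")
      return true;
    if (!pattern.empty() && pattern.back() == '*')  // prefix glob, e.g. "net*"
      return category.compare(0, pattern.size() - 1, pattern, 0, pattern.size() - 1) == 0;
    return pattern == category;                     // exact match, e.g. "net.p2p"
  }

  // Walk the parsed "category:SEVERITY" pairs in order; the last match wins.
  std::string severity_for(const std::vector<std::pair<std::string, std::string>> &settings,
                           const std::string &category)
  {
    std::string result;  // empty: no setting applies to this category
    for (const auto &s : settings)
      if (matches(s.first, category))
        result = s.second;
    return result;
  }

  // With {{"*","ERROR"}, {"net*","INFO"}, {"net.p2p","TRACE"}} (the example
  // above): "net.p2p" -> TRACE, "net.dns" -> INFO, "blockchain" -> ERROR.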
Logs which are intended for the user (which Monero was using
a lot through epee, but really isn't a nice way to do things)
should use the "global" category. There are a few helper macros
for using this category, eg: MGINFO("this shows up by default")
or MGINFO_RED("this is red"), to try to keep a similar look
and feel for now.
Existing epee log macros still exist, and map to the new log
levels, but since they're used as a "user facing" UI element
as much as a logging system, they often don't map well to log
severities (ie, a log level 0 log may be an error, or may be
something we want the user to see, such as important info).
In those cases, I tried to use the new macros. In other cases,
I left the existing macros in. When modifying logs, it is
probably best to switch to the new macros with explicit levels.
The --log-level options and set_log commands now also accept
category settings, in addition to the epee style log levels.
Added a new command to the P2P protocol definitions to allow querying for support flags.
Implemented handling of new support flags command in net_node. Changed for_each callback template to include support flags. Updated print_connections command to show peer support flags.
Added p2p constant for signaling fluffy block support.
Added get_pool_transaction function to cryptonote_core.
Added new commands to cryptonote protocol for relaying fluffy blocks.
Implemented handling of fluffy block command in cryptonote protocol.
Enabled fluffy block support in node initial configuration.
Implemented get_testnet function in cryptonote_core.
Made it so that fluffy blocks only run on testnet.
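The support flags are just a bitmask; a minimal sketch of the check (the
constant's name and value are assumptions, not the real p2p definitions):

  #include <cstdint>

  constexpr uint32_t P2P_SUPPORT_FLAG_FLUFFY_BLOCKS_SKETCH = 0x01;

  // Only relay fluffy blocks to a peer whose advertised support flags
  // include the corresponding bit.
  bool peer_supports_fluffy_blocks(uint32_t peer_support_flags)
  {
    return (peer_support_flags & P2P_SUPPORT_FLAG_FLUFFY_BLOCKS_SKETCH) != 0;
  }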
This will be when we can't find common ground between the peer's
short chain history and our blockchain.
This fixes bad peers claiming a higher blockchain height never
being dropped and keeping the node in a synchronizing state forever,
since we will never get blocks from that peer.
The last relayed time of a transaction is maintained, and
transactions will be relayed again if they are still in the
pool after a certain amount of time, which increases with
the transaction's age. All such transactions are resent,
whether or not they originated on the local node.
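A minimal sketch of such a rule; the base delay and scaling are made-up
numbers, not the values used by the pool:

  #include <cstdint>

  // Relay a pool transaction again once enough time has passed since its
  // last relay, with the required gap growing with the transaction's age.
  bool should_relay_again(uint64_t now, uint64_t received_time, uint64_t last_relayed_time)
  {
    const uint64_t base_delay = 5 * 60;           // hypothetical 5 minute base
    const uint64_t age = now > received_time ? now - received_time : 0;
    const uint64_t delay = base_delay + age / 4;  // backoff grows with age
    return now - last_relayed_time >= delay;
  }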
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks.
2. Optim: Cache verified txs and return result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk querying database and multi-thread when possible.
4. Optim: Disable double spend check on block verification, double spend is already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on called functions from check_tx_inputs which causes slowdowns (only seems to happen on ubuntu/VMs??? Reason: TBD)
7. Optim: Removed looped full-tx hash computation when retrieving transactions from pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads per new block are needed when a new block arrives (instead of 1470 reads); see the sketch after this list.
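The sketch mentioned in item 8, assuming a simple rolling window (names
and the exact bookkeeping are illustrative, not the actual Blockchain
code):

  #include <cstddef>
  #include <cstdint>
  #include <deque>
  #include <utility>

  // Keep the last 735 (timestamp, cumulative difficulty) pairs in memory so
  // the next-difficulty calculation reads only the new block's values from
  // the DB instead of re-reading the whole window.
  class difficulty_cache_sketch {
  public:
    explicit difficulty_cache_sketch(std::size_t window = 735) : m_window(window) {}
    void push(uint64_t timestamp, uint64_t cumulative_difficulty) {
      m_entries.emplace_back(timestamp, cumulative_difficulty);
      if (m_entries.size() > m_window)
        m_entries.pop_front();  // discard the oldest entry
    }
    const std::deque<std::pair<uint64_t, uint64_t>> &window() const { return m_entries; }
  private:
    std::size_t m_window;
    std::deque<std::pair<uint64_t, uint64_t>> m_entries;
  };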
Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
8. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details)
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.
LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up, TBD: get actual effect)
2. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
3. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details)
5. Mod: Auto resize to +1GB instead of multiplier x1.5
ETC:
1. Minor optimizations for slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.
[PENDING ISSUES]
1. Berkeley db has a very slow "pop-block" operation. This is very noticeable on the RPI2 as it sometimes takes > 10 MINUTES to pop a block during reorganization.
This does not happen very often, however; most reorgs seem to take a few seconds, but it possibly depends on the number of outputs present. TBD.
2. Berkeley db, possible bug "unable to allocate memory". TBD.
[NEW OPTIONS] (*Currently all enabled for testing purposes; a combined usage example follows the notes below)
1. --fast-block-sync arg=[0:1] (default: 1)
a. 0 = Compute long hash per block (may take a while depending on CPU)
b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but safest option to protect against power-out/crash conditions.
b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery operated devices or STABLE systems with UPS and/or systems with battery backed write cache/solid state cache.
Fast - Write meta-data but defer data flush.
Fastest - Defer meta-data and data flush.
Sync - Flush data after nblocks_per_sync and wait.
Async - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
For berkeley-db only. Auto remove logs if enabled.
**Note: lmdb and berkeley-db have changes to the tables and are not compatible with official git head version.
At the moment, you need a full resync to use this optimized version.
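An illustrative combination of the options above (values picked for speed;
adjust to taste):
--fast-block-sync=1 --db-sync-mode=fast:async:1000 --prep-blocks-threads=4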
[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K+SSD+(with full pow computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with the in-memory db can process blocks up to 585K in ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours + download time and usually takes 36 hours to sync the full chain.
Average processing times (with full pow computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437)
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block
**Note: The following data denotes processing times only (does not include p2p download time)
lmdb-optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, Dual-core / 4-threads U4200 (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).
lmdb-optimized processing times (with per-block-checkpoint)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with full pow computation)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2. Improved from an estimated 3 months (???) to 2.5 days (*Need 2AMP supply + Clock:1Ghz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with per-block-checkpoint)
1. RPI2. 12-15 hours (*Need 2AMP supply + Clock:1Ghz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
New update of the PR with network limits.
More debug options:
discarding downloaded blocks, either all of them or those after a given height.
trying to trigger the locking errors.
debug levels polished/tuned to sane values.
debug/logging improved.
Warning: this PR should be correct code, but it could make
an existing (in the master version) locking error appear more often.
It's a race on the list (map) of peers, e.g. between closing/deleting
them versus working on them in the net-limit sleep while sending a chunk.
The bug is not in this code/this PR, but in the master version.
The locking problem of master will be fixed in another PR.
The problem is UB, and in practice it seems to usually cause a program abort
(tested on Debian stable with updated gcc). See --help for the option
to add a sleep to trigger the error faster.
Update of the PR with network limits
works very well for all speeds
(but remember that low download speed can stop upload
because we then slow down downloading of blockchain
requests too)
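As a rough illustration of per-direction bandwidth limiting in general
(a token-bucket style sketch, not the throttling algorithm this PR
actually uses):

  #include <algorithm>
  #include <cstdint>

  class rate_limiter_sketch {
  public:
    explicit rate_limiter_sketch(uint64_t bytes_per_second)
      : m_rate(bytes_per_second), m_tokens(bytes_per_second) {}
    // Call roughly once per second to refill the bucket (capped at 2x rate).
    void tick() { m_tokens = std::min(m_tokens + m_rate, 2 * m_rate); }
    // Returns true if 'bytes' may be sent/received now, consuming tokens;
    // on false the caller should sleep briefly and retry.
    bool allow(uint64_t bytes) {
      if (bytes > m_tokens)
        return false;
      m_tokens -= bytes;
      return true;
    }
  private:
    uint64_t m_rate;
    uint64_t m_tokens;
  };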
more debug options
fixed pedantic warnings in our code
should work again on Mac OS X and FreeBSD
fixed warning about size_t
tested on Debian, Ubuntu, Windows (testing now)
TCP options and ToS (QoS) flag
FIXED peer number limit
FIXED some spikes in ingress/download
FIXED problems when up and down limits differ
commands and options for network limiting
works very well e.g. for 50 KiB/sec up and down
ToS (QoS) flag
peer number limit
TODO some spikes in ingress/download
TODO problems when up and down limits differ
added "otshell utils" - simple logging (with colors, text files channels)