8231c7cd rpc: fix bootstrap RPC payment RPC being made in raw JSON, not JSON RPC (moneromooo-monero)
81c26589 rpc: don't auto fail RPC needing payment in bootstrap mode (moneromooo-monero)
The backward compatibility code was always setting it to 1
in modern wallets since store_tx_keys was not present and thus
assumed to be 1 by default.
Reported by SeventhAlpaca
Adding a new `amounts` field to the output of `get_transfers` RPC
method. This field specifies individual payments made to a single
subaddress in a single transaction, e.g., made by this command:
transfer <addr1> <amount1> <addr1> <amount2>
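For illustration, a minimal sketch of reading the new field over the wallet JSON RPC
(assuming monero-wallet-rpc listening on localhost:18082); the exact response layout
shown here is indicative only:

  # Illustrative sketch: list outgoing transfers and print the new "amounts" field.
  import requests

  resp = requests.post("http://localhost:18082/json_rpc", json={
      "jsonrpc": "2.0", "id": "0", "method": "get_transfers",
      "params": {"out": True},
  }).json()

  for entry in resp.get("result", {}).get("out", []):
      # "amount" is the total paid to the destination; "amounts" (new) lists the
      # individual payments made to the same subaddress within that transaction.
      print(entry["txid"], entry["amount"], entry.get("amounts"))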
The added condition "hshd.current_height >= target" guards against
reporting "synchronized" too early in the special situation that the
very first peer sending us data is synced to a lower height than
ourselves.
This is technically a record encrypted in two pieces,
so the iv needs to be different.
Some backward compatibility is added to read data written
by existing code, but new data is written with the new code.
M100 = max{300kB, min{median weight of the last 100 blocks, m_long_term_effective_median_block_weight}}
not
M100 = max{300kB, m_long_term_effective_median_block_weight}
Fix base reward in get_dynamic_base_fee_estimate().
get_dynamic_base_fee_estimate() should match check_fee()
Fee is calculated based on block reward, and the reward penalty takes into account 0.5*max_block_weight (both before and after HF_VERSION_EFFECTIVE_SHORT_TERM_MEDIAN_IN_PENALTY).
Moved median calculation according to best practice of 'keep definitions close to where they are used'.
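A small sketch of the corrected selection, where median_100 stands for the median
weight of the last 100 blocks and long_term_median for
m_long_term_effective_median_block_weight:

  MIN_PENALTY_FREE_WEIGHT = 300 * 1000  # the 300kB floor

  def fee_weight(median_100: int, long_term_median: int) -> int:
      # Previously the min() was missing, i.e. the result was
      # max(MIN_PENALTY_FREE_WEIGHT, long_term_median).
      return max(MIN_PENALTY_FREE_WEIGHT, min(median_100, long_term_median))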
If more than one thread wants to make use of the spend secret key,
then we decrypt on the first caller and reencrypt on the last caller,
otherwise we could use an invalid secret key.
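A minimal sketch of the idea (not the wallet's actual code): a lock and a use count,
decrypting on the first caller and re-encrypting on the last, so no caller ever sees
a key another thread has already re-encrypted:

  import threading

  class SpendKeyGuard:
      def __init__(self, keystore):
          self._keystore = keystore   # hypothetical object with decrypt()/encrypt()
          self._lock = threading.Lock()
          self._users = 0

      def __enter__(self):
          with self._lock:
              if self._users == 0:
                  self._keystore.decrypt()   # first caller: decrypt in place
              self._users += 1
          return self._keystore

      def __exit__(self, *exc):
          with self._lock:
              self._users -= 1
              if self._users == 0:
                  self._keystore.encrypt()   # last caller: re-encrypt in place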
If the hashes received would move the current blockchain past the
stop point, the short history would not be updated, since we do
not expect another loop, but the daemon might return earlier hashes,
causing the end index to not be enough to reach the threshold, thus
requiring another loop, which will download the same hashes and
cause an infinite loop.
Dividing `dt` here by 1e6 converts it to seconds, but that is clearly
wrong since `REQUEST_NEXT_SCHEDULED_SPAN_THRESHOLD_STANDBY` is measured
in microseconds. As a result, this if statement was effectively never
used.
The highlight check was based on height, so would highlight
any output at that height, resulting in several matches if
a fake out was picked at the same height as the real spend
Avoids a DB error (leading to an assert) where a thread uses
a read txn previously created with an environment that was
since closed and reopened. While this usually works since
BlockchainLMDB renews txns if it detects the environment has
changed, this will not work if objects end up being allocated
at the same address as the previous instance, leading to stale
data usage.
Thanks hyc for the LMDB debugging.
38f691048 simplewallet: plug a timing leak (moneromooo-monero)
dcff02e4c epee: allow a random component in once_a_time timeouts (moneromooo-monero)
e10833024 wallet: reuse cached height when set after refresh (moneromooo-monero)
5956beaa1 wallet2: fix is_synced checking target height, not height (moneromooo-monero)
fd35e2304 wallet: fix another facet of "did I get some monero" information leak (moneromooo-monero)
d5472bd87 wallet2: do not send an unnecessary last getblocks.bin call on refresh (moneromooo-monero)
97ae7bb5c wallet2: do not repeatedly ask for pool txes sent to us (moneromooo-monero)
As reported by Tramèr et al, timing of refresh requests can be used
to see whether a password was requested (and thus at least one output
received) since this will induce a delay in subsequent calls.
To avoid this, we schedule calls at a given time instead of sleeping
for a set time (which would make delays additive).
To further avoid a scheduled call being during the time in which a
password is prompted, the actual scheduled time is now randomized.
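A rough sketch of the scheduling idea, with made-up interval and jitter values:

  import random
  import time

  REFRESH_INTERVAL = 90.0   # seconds, illustrative
  MAX_JITTER = 15.0         # seconds of random slack, illustrative

  def refresh_loop(refresh):
      next_call = time.monotonic()
      while True:
          refresh()   # may stall on a password prompt; the stall is not added below
          # schedule relative to the previous scheduled time rather than "now",
          # and randomize so a prompt cannot be lined up against the timer
          next_call += REFRESH_INTERVAL + random.uniform(0.0, MAX_JITTER)
          time.sleep(max(0.0, next_call - time.monotonic()))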
Refreshing sets the cached height, which is otherwise obtained by calling
get_info. Since get_info is called upon needing to display a prompt
after a command has finished, it can be used to determine how much
time a given command took to run if the cache timeout lapses while
the command runs. Refreshing caches the height as a side effect, so
get_info will never be called as a result of displaying a prompt
after refreshing (and potentially leaking how much time it took to
process a set of transactions, therefore leaking whether we got
some monero in them).
Target height would be appropriate for the daemon, which syncs
off other daemons, but the wallet syncs off the daemon it's
connected to, and its target is the daemon's current height.
We get new pool txes before processing any tx, pool or not.
This ensures that if we're asked for a password, this does not
cause a measurable delay in the txpool query after the last
block query.
The "everything refreshed" state was detected when a refresh call did
not return any new blocks. This can be detected without that extra
"empty" call by comparing the claimed node height to the height of
the last block retrieved. Doing this avoids that last call, saves
some bandwidth, and makes the common refresh case use only one call
rather than two.
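A rough sketch of the check, with hypothetical wallet and daemon helpers:

  def refresh(wallet, daemon):
      while True:
          # get_blocks() returns the next batch plus the height the daemon claims
          blocks, daemon_height = daemon.get_blocks(start=wallet.height)
          wallet.process(blocks)
          # stop once we have caught up with the claimed height: no extra call
          # that would only come back empty
          if wallet.height >= daemon_height:
              break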
As a side effect, it prevents an information leak reported by
Tramèr et al: if the wallet retrieves a set of blocks which includes
an output sent to the refreshing wallet, the wallet will prompt the
user for the password to decode the amount and calculate the key
image for the new output, and this will delay subsequent calls to
getblocks.bin, allowing a passive adversary to note the delay and
deduce when the wallet receives at least one output.
This can still happen if the wallet downloads more than 1000 blocks,
since this will be split in several calls, but then the most the
adversary can tell is which 1000 block section the user received
some monero (the adversary can estimate the heights of the blocks
by calculating how many "large" transfers are done, which will be
sections of blocks, the last of which will usually be below 1000,
but the size of the data should allow the actual number of blocks
sent to be determined fairly accurately).
This timing trick can still be used via the subsequent scan for incoming
txes in the txpool, which will be fixed later.
This lets a passive attacker with access to the network link
between node and wallet perform traffic analysis to deduce
when an idle wallet receives a transaction.
Reported by Tramèr et al.
This allows flushing internal caches (for now, the bad tx cache,
which will allow debugging a stuck monerod after it has failed to
verify a transaction in a block, since it would otherwise not try
again, making subsequent log changes pointless)
b3a9a4d add a quick early out to get_blocks.bin when up to date (moneromooo-monero)
2899379 daemon, wallet: new pay for RPC use system (moneromooo-monero)
ffa4602 simplewallet: add public_nodes command (moneromooo-monero)
Daemons intended for public use can be set up to require payment
in the form of hashes in exchange for RPC service. This enables
public daemons to receive payment for their work over a large
number of calls. This system behaves similarly to a pool, so
payment takes the form of valid blocks every so often, yielding
a large one-off payment, rather than constant micropayments.
This system can also be used by third parties as a "paywall"
layer, where users of a service can pay for use by mining Monero
to the service provider's address. An example of this for web
site access is Primo, a Monero mining based website "paywall":
https://github.com/selene-kovri/primo
This has some advantages:
- incentive to run a node providing RPC services, thereby promoting the availability of third party nodes for those who can't run their own
- incentive to run your own node instead of using a third party's, thereby promoting decentralization
- decentralized: payment is done between a client and server, with no third party needed
- private: since the system is "pay as you go", you don't need to identify yourself to claim a long lived balance
- no payment occurs on the blockchain, so there is no extra transactional load
- one may mine with a beefy server, and use those credits from a phone, by reusing the client ID (at the cost of some privacy)
- no barrier to entry: anyone may run an RPC node, and your expected revenue depends on how much work you do
- Sybil resistant: if you run 1000 idle RPC nodes, you don't magically get more revenue
- no large credit balance maintained on servers, so they have no incentive to exit scam
- you can use any/many node(s), since there's little cost in switching servers
- market based prices: competition between servers to lower costs
- incentive for a distributed third party node system: if some public nodes are overused/slow, traffic can move to others
- increases network security
- helps counteract mining pools' share of the network hash rate
- zero incentive for a payer to "double spend" since a reorg does not give any money back to the miner
And some disadvantages:
- low power clients will have difficulty mining (but one can optionally mine in advance and/or with a faster machine)
- payment is "random", so a server might go a long time without a block before getting one
- a public node's overall expected payment may be small
Public nodes are expected to compete to find a suitable level for
cost of service.
The daemon can be set up this way to require payment for RPC services:
monerod --rpc-payment-address 4xxxxxx \
--rpc-payment-credits 250 --rpc-payment-difficulty 1000
These values are an example only.
The --rpc-payment-difficulty switch selects how hard each "share" should
be, similar to a mining pool. The higher the difficulty, the fewer
shares a client will find.
The --rpc-payment-credits switch selects how many credits are awarded
for each share a client finds.
Considering both options, clients will be awarded credits/difficulty
credits for every hash they calculate. For example, in the command line
above, 0.25 credits per hash. A client mining at 100 H/s will therefore
get an average of 25 credits per second.
For reference, in the current implementation, a credit is enough to
sync 20 blocks, so a 100 H/s client that's just starting to use Monero
and uses this daemon will be able to sync 500 blocks per second.
The wallet can be set to automatically mine if connected to a daemon
which requires payment for RPC usage. It will try to keep a balance
of 50000 credits, stopping mining when it's at this level, and starting
again as credits are spent. With the example above, a new client will
mine this many credits in about half an hour, and this target is enough
to sync 500000 blocks (currently about a third of the monero blockchain).
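The arithmetic behind the example values above, spelled out:

  credits_per_share = 250        # --rpc-payment-credits
  difficulty = 1000              # --rpc-payment-difficulty
  hashrate = 100                 # H/s, the example client

  credits_per_hash = credits_per_share / difficulty     # 0.25
  credits_per_second = hashrate * credits_per_hash      # 25.0
  blocks_per_credit = 20                                # current implementation
  print(credits_per_second * blocks_per_credit)         # 500 blocks synced per second

  credits_target = 50000                                # wallet default
  print(credits_target / credits_per_second / 60)       # ~33 minutes to reach the target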
There are three new settings in the wallet:
- credits-target: this is the amount of credits a wallet will try to
reach before stopping mining. The default of 0 means 50000 credits.
- auto-mine-for-rpc-payment-threshold: this controls the minimum
credit rate which the wallet considers worth mining for. If the
daemon credits less than this ratio, the wallet will consider mining
to be not worth it. In the example above, the rate is 0.25
- persistent-rpc-client-id: if set, this allows the wallet to reuse
a client ID across runs. This means a public node can tell that a
wallet connecting now is the same one that connected previously, but
it allows the wallet to keep its credit balance from one run to the
next. Since the wallet only mines to keep a small credit balance,
this is not normally worth doing. However, someone may want to mine
on a fast server, and use that credit balance on a low power device
such as a phone. If left unset, a new client ID is generated at
each wallet start, for privacy reasons.
To mine and use a credit balance on two different devices, you can
use the --rpc-client-secret-key switch. A wallet's client secret key
can be found using the new rpc_payments command in the wallet.
Note: anyone knowing your RPC client secret key is able to use your
credit balance.
The wallet has a few new commands too:
- start_mining_for_rpc: start mining to acquire more credits,
regardless of the auto mining settings
- stop_mining_for_rpc: stop mining to acquire more credits
- rpc_payments: display information about current credits with
the currently selected daemon
The node has an extra command:
- rpc_payments: display information about clients and their
balances
The node will forget about any balance for clients which have
been inactive for 6 months. Balances carry over on node restart.
Added for mainnet, testnet, and stagenet.
The server is owned by snipa; both snipa and I have access to it. No idea where it's hosted.
xmrchain.net is a block explorer that's been around a while.
* Faster cache initialization with SSSE3/AVX2
* Automatic detection of CPU capabilities in RandomX
* Fixed a possible out-of-bounds access in superscalar program generator
* Use MONERO_RANDOMX_UMASK to manually disable RandomX flags in monerod
In case of a 0 tx weight, we use a placeholder value to insert in the
fee-per-byte set. This is used for pruning and mining, and those txes
are pruned, so will not be too large, nor added to the block template
if mining, so this is safe.
CID 204465
Use the lesser of the short and long term medians, rather than
the long term median alone
From ArticMine:
I found a bug in the new fee calculation formula when using only the long term median.
It actually needs to be the lesser of the long term median and the old (modified)
short term median, i.e. the short term median calculated with the last 10 blocks treated as empty.
The issue occurs if there is a large long term median and the short term median then
falls and tries to rise again: the fees could then be too low.
For example, the LTM and STM rise to say 2000000 bytes,
then the STM falls back to 300000 bytes.
Fees are now based on 2000000 bytes until the LTM also falls,
so the STM could be prevented from rising back up.
(STM = short term median, LTM = long term median)
If the peer (whether itself pruned or not) supports sending pruned blocks
to syncing nodes, the pruned version will be sent along with the hash
of the pruned data and the block weight. The original tx hashes can be
reconstructed from the pruned txes and their prunable data hash. Those
hashes and the block weights are hashed and checked against the set of
precompiled hashes, ensuring the data we received is the original data.
It is currently not possible to use this system when not using the set
of precompiled hashes, since block weights can not otherwise be checked
for validity.
This is off by default for now, and is enabled by --sync-pruned-blocks
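A rough sketch of the tx hash reconstruction described above; Monero's actual hash is
cn_fast_hash (Keccak), for which sha3_256 merely stands in here so the sketch runs:

  import hashlib

  def h(data: bytes) -> bytes:
      return hashlib.sha3_256(data).digest()   # stand-in for cn_fast_hash

  def reconstruct_txid(prefix_hash: bytes, base_hash: bytes, prunable_hash: bytes) -> bytes:
      # A v2 tx hash is a hash over the hashes of its parts: the first two can be
      # computed from the pruned tx we received, the third is sent by the peer,
      # so the original txid can be recovered and then checked, together with the
      # block weight, against the precompiled hashes.
      return h(prefix_hash + base_hash + prunable_hash)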