Return `EXIT_FAILURE` from `main()` on error, not the bool `false`
(introduced in #13112). `EXIT_FAILURE` is the correct value to return on
error, and using it also shuts up a clang warning.
Also add a final return for clarity.
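A minimal sketch of the resulting pattern, assuming a hypothetical
bool-returning `AppInit()` that `main()` delegates to; the point is that
`false` converts to 0, which the shell reads as success:

```cpp
#include <cstdlib> // EXIT_SUCCESS, EXIT_FAILURE

// Illustrative stub, not the actual init function touched by the commit.
static bool AppInit(int argc, char* argv[]) { return argc > 0 && argv != nullptr; }

int main(int argc, char* argv[])
{
    if (!AppInit(argc, argv)) {
        return EXIT_FAILURE; // `return false` would convert to 0, i.e. success
    }
    return EXIT_SUCCESS; // explicit final return for clarity
}
```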
If an unknown option is given via either the command line or the
config file, throw an error and exit.
Update tests for ArgsManager knowing args
Ignore unknown options in the config file for bitcoin-cli
Fix tests and bitcoin-cli to match actual options used
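A minimal sketch of the new behaviour, using a hypothetical
`CheckArgs()` helper; the real ArgsManager logic differs in detail, but
the effect is the same for command-line and bitcoin.conf options:

```cpp
#include <set>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical helper: reject any option that was not registered up front.
static void CheckArgs(const std::set<std::string>& known_args,
                      const std::vector<std::string>& given_args)
{
    for (const std::string& arg : given_args) {
        if (known_args.count(arg) == 0) {
            // Unknown options are now fatal, except that bitcoin-cli
            // still ignores unknown options found in the config file.
            throw std::runtime_error("Invalid parameter -" + arg);
        }
    }
}
```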
Many options are extremely technical and refer to internals, making them
difficult to translate usefully. This came up in discussion of e.g.
#10949. If a message is not understood by translators (who are
typically end-users, not developers), they'll translate it literally,
making it harder to understand instead of easier, with the added
drawback that the user is no longer able to google it.
Also, the translation was only working for bitcoin-qt; the console
programs have no translation backend, so it was injecting never-used
translation messages for bitcoin-cli and bitcoin-tx.
For these reasons, stop translating the options help completely. This should
not affect the output **in any way** except for bitcoin-qt when a
non-English language is configured in the locale.
This implements #10962.
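A hypothetical before/after illustrating the change; `-par` and the
stubbed helpers are examples kept self-contained, not the exact code
touched by the commit:

```cpp
#include <string>

// Stand-ins for the real translation wrapper and help formatter.
static std::string _(const std::string& s) { return s; }
static std::string HelpMessageOpt(const std::string& opt, const std::string& help)
{
    return opt + "  " + help + "\n";
}

// Before: help text wrapped in _() was extracted for translation.
static std::string OldHelp()
{
    return HelpMessageOpt("-par=<n>", _("Set the number of script verification threads"));
}

// After: a plain untranslated string; the output is identical except
// for bitcoin-qt running with a non-English locale.
static std::string NewHelp()
{
    return HelpMessageOpt("-par=<n>", "Set the number of script verification threads");
}
```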
* inline performance-critical code
* An average runtime is specified for each benchmark and used to calculate its number of iterations.
* Console: show median of multiple runs
* plot: show box plot
* filter benchmarks (example invocations after this list)
* specify scaling factor
* ignore src/test and src/bench in the command-line check script
* number of iterations instead of time
* Replaced runtime in the BENCHMARK macro with a number of iterations.
* Added -? to bench_bitcoin
* Benchmark plotly.js URL, width, and height can be customized
* Fixed incorrect precision warning
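The invocations below illustrate the interface the list above describes;
the exact flag names are assumptions here, so check bench_bitcoin -? for
the authoritative set:

bench_bitcoin -?                  (show usage)
bench_bitcoin -filter=Sleep100ms  (run only the matching benchmarks)
bench_bitcoin -scaling=2          (double the number of iterations)
bench_bitcoin -printer=plot       (emit a plotly.js box plot instead of console output)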
Benchmarking framework, loosely based on Google's micro-benchmarking
library (https://github.com/google/benchmark).
Why not use the Google Benchmark framework? Because adding Even More Dependencies
isn't worth it. If we get a dozen or three benchmarks and need nanosecond-accurate
timings of threaded code then switching to the full-blown Google Benchmark library
should be considered.
The benchmark framework is hard-coded to run each benchmark for one wall-clock second,
and then spits out .csv-format timing information to stdout. It is left as an
exercise for later (or maybe never) to add command-line arguments to specify which
benchmark(s) to run, how long to run them for, how to format results, etc etc etc.
Again, see the Google Benchmark framework for where that might end up.
See src/bench/MilliSleep.cpp for a sanity-test benchmark that just benchmarks
'sleep 100 milliseconds.'
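A sketch of what that sanity benchmark might look like, assuming the
Google-Benchmark-style State/KeepRunning loop the framework borrows;
see the file itself for the real code:

```cpp
#include "bench.h"
#include "utiltime.h" // MilliSleep

// Each iteration just sleeps for 100 milliseconds, so min/max/average
// in the CSV output should all come out close to 0.1 seconds.
static void Sleep100ms(benchmark::State& state)
{
    while (state.KeepRunning()) {
        MilliSleep(100);
    }
}
BENCHMARK(Sleep100ms);
```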
To compile and run benchmarks:
cd src; make bench
Sample output:
Benchmark,count,min,max,average
Sleep100ms,10,0.101854,0.105059,0.103881