grass.benchmark package

Benchmarking for GRASS GIS modules

This subpackage of the grass package is experimental and the API can change anytime. The API of the package is defined by what is imported in the top-level __init__.py file of the subpackage.

The functions in the Python API raise exceptions, although calls to other functions from the grass package may call grass.script.fatal and exit (see grass.script.core.set_raise_on_error() for changing this behavior). The same applies to the CLI interface of this subpackage, except that usage exceptions raised in the CLI code result in sys.exit with an error message rather than a traceback. Messages and other user-visible texts in this package are not translatable.

Submodules

grass.benchmark.app module

CLI for the benchmark package

exception grass.benchmark.app.CliUsageError[source]

Bases: ValueError

Raised when there is an error in the command line arguments.

Used when the error is discovered only after argparse has parsed the arguments.

class grass.benchmark.app.ExtendAction(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]

Bases: argparse.Action

Support for argparse action=“extend” before Python 3.8

Each parser instance needs the action to be registered.
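The documented behavior can be sketched with a small argparse.Action subclass; the class body and option names below are illustrative, not the actual implementation:

```python
import argparse


class ExtendAction(argparse.Action):
    """Append all values of a repeated option to one list.

    The built-in action="extend" exists only from Python 3.8 on,
    so older versions need a custom action like this.
    """

    def __call__(self, parser, namespace, values, option_string=None):
        # Copy the current list (or start a new one) and extend it,
        # so a shared default list is never mutated in place.
        items = list(getattr(namespace, self.dest) or [])
        items.extend(values)
        setattr(namespace, self.dest, items)


parser = argparse.ArgumentParser()
# Each parser instance needs the action registered before use.
parser.register("action", "extend", ExtendAction)
parser.add_argument("--input", action="extend", nargs="+")

args = parser.parse_args(["--input", "a.json", "--input", "b.json", "c.json"])
print(args.input)  # ['a.json', 'b.json', 'c.json']
```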

grass.benchmark.app.add_plot_io_arguments(parser)[source]

Add input and output arguments to parser.

grass.benchmark.app.add_plot_subcommand(parent_subparsers)[source]

Add plot subcommand.

grass.benchmark.app.add_plot_title_argument(parser)[source]

Add title argument to parser.

grass.benchmark.app.add_results_subcommand(parent_subparsers)[source]

Add results subcommand.

grass.benchmark.app.add_subcommand_parser(subparsers, name, description)[source]

Add parser for a subcommand into subparsers.

grass.benchmark.app.add_subparsers(parser, dest)[source]

Add subparsers in a unified way.

Uses title ‘subcommands’ for the list of commands (instead of the ‘positional’ which is the default).

The dest should be ‘command’, ‘subcommand’, etc. with appropriate nesting.
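A minimal sketch of this setup, assuming only the behavior described above (the helper name and prog value are illustrative):

```python
import argparse


def add_subparsers(parser, dest):
    # Use title 'subcommands' instead of the default 'positional'
    # grouping, and record the chosen command under dest.
    return parser.add_subparsers(title="subcommands", dest=dest, required=True)


parser = argparse.ArgumentParser(prog="benchmark")
subparsers = add_subparsers(parser, dest="command")
subparsers.add_parser("plot", description="Plot results")
subparsers.add_parser("results", description="Manage results")

args = parser.parse_args(["plot"])
print(args.command)  # plot
```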

grass.benchmark.app.define_arguments()[source]

Define top level parser and create subparsers.

grass.benchmark.app.get_executable_name()[source]

Get name of the executable and module.

This is a workaround for Python issue: argparse support for “python -m module” in help https://bugs.python.org/issue22240

grass.benchmark.app.join_results_cli(args)[source]

Translate CLI parser result to API calls.

grass.benchmark.app.main(args=None)[source]

Define and parse command line parameters then run the appropriate handler.

grass.benchmark.app.plot_cells_cli(args)[source]

Translate CLI parser result to API calls.

grass.benchmark.app.plot_nprocs_cli(args)[source]

Translate CLI parser result to API calls.

grass.benchmark.plots module

Plotting functionality for benchmark results

grass.benchmark.plots.get_pyplot(to_file)[source]

Get pyplot from matplotlib

The lazy import allows code which imports this function to run on limited installations. Only an actual call to this function requires matplotlib.

The to_file parameter can be set to True to avoid tkinter dependency if the interactive show method is not needed.

grass.benchmark.plots.nprocs_plot(results, filename=None, title=None)[source]

Plot results from multiple nprocs (thread) benchmarks.

results is a list of individual results from separate benchmarks. One result is required to have attributes: nprocs, times, label. The nprocs attribute is a list of all processing elements (cores, threads, processes) used in the benchmark. The times attribute is a list of corresponding times for each value from the nprocs list. The label attribute identifies the benchmark in the legend.

Optionally, result can have an all_times attribute which is a list of lists. One sublist is all times recorded for each value of nprocs.

Each result can come with a different list of nprocs, i.e., benchmarks which used different values for nprocs can be combined in one plot.
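Any objects with the documented attributes work as results, for example plain SimpleNamespace instances. A sketch of the expected shape (labels and timings are made up; the plotting call itself needs matplotlib and is therefore only shown as a comment):

```python
from types import SimpleNamespace

# Two benchmarks with different nprocs lists can be combined in one plot.
serial_heavy = SimpleNamespace(
    label="r.example v1",  # hypothetical module name
    nprocs=[1, 2, 4],
    times=[20.0, 12.5, 8.1],
    # Optional: raw repetitions behind each averaged time.
    all_times=[[19.8, 20.2], [12.4, 12.6], [8.0, 8.2]],
)
parallel = SimpleNamespace(
    label="r.example v2",
    nprocs=[1, 2, 4, 8],
    times=[20.0, 10.4, 5.6, 3.2],
)

# One sublist of all_times per value of nprocs:
print(len(serial_heavy.all_times) == len(serial_heavy.nprocs))  # True

# With matplotlib installed, these objects can be plotted directly:
# from grass.benchmark import nprocs_plot
# nprocs_plot([serial_heavy, parallel], filename="speedup.png")
```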

grass.benchmark.plots.num_cells_plot(results, filename=None, title=None, show_resolution=False)[source]

Plot results from multiple raster grid size benchmarks.

results is a list of individual results from separate benchmarks, with each result structured as described for the nprocs_plot() function. Each result is required to have times and label attributes and may have an all_times attribute. Further, it is required to have a cells attribute, or, when show_resolution=True, a resolutions attribute.

Each result can come with a different list of cells, i.e., benchmarks which used different grid sizes can be combined in one plot.

grass.benchmark.results module

Handling of raw results from benchmarking

class grass.benchmark.results.ResultsEncoder(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)[source]

Bases: json.encoder.JSONEncoder

Results encoder for JSON which handles SimpleNamespace objects

Constructor for JSONEncoder, with sensible defaults.

If skipkeys is false, then it is a TypeError to attempt encoding of keys that are not str, int, float or None. If skipkeys is True, such items are simply skipped.

If ensure_ascii is true, the output is guaranteed to be str objects with all incoming non-ASCII characters escaped. If ensure_ascii is false, the output can contain non-ASCII characters.

If check_circular is true, then lists, dicts, and custom encoded objects will be checked for circular references during encoding to prevent an infinite recursion (which would cause an OverflowError). Otherwise, no such check takes place.

If allow_nan is true, then NaN, Infinity, and -Infinity will be encoded as such. This behavior is not JSON specification compliant, but is consistent with most JavaScript based encoders and decoders. Otherwise, it will be a ValueError to encode such floats.

If sort_keys is true, then the output of dictionaries will be sorted by key; this is useful for regression tests to ensure that JSON serializations can be compared on a day-to-day basis.

If indent is a non-negative integer, then JSON array elements and object members will be pretty-printed with that indent level. An indent level of 0 will only insert newlines. None is the most compact representation.

If specified, separators should be an (item_separator, key_separator) tuple. The default is (‘, ‘, ‘: ‘) if indent is None and (‘,’, ‘: ‘) otherwise. To get the most compact JSON representation, you should specify (‘,’, ‘:’) to eliminate whitespace.

If specified, default is a function that gets called for objects that can’t otherwise be serialized. It should return a JSON encodable version of the object or raise a TypeError.

default(o)[source]

Handle additional types
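The SimpleNamespace handling can be sketched with a small JSONEncoder subclass; the class name here is illustrative, not the actual implementation:

```python
import json
from types import SimpleNamespace


class NamespaceEncoder(json.JSONEncoder):
    """Serialize SimpleNamespace objects via their attribute dict."""

    def default(self, o):
        # Handle the additional type; defer everything else to the base
        # class, which raises TypeError for unknown types.
        if isinstance(o, SimpleNamespace):
            return vars(o)
        return super().default(o)


result = SimpleNamespace(label="test", times=[1.0, 2.0])
text = json.dumps({"results": [result]}, cls=NamespaceEncoder)
print(text)  # {"results": [{"label": "test", "times": [1.0, 2.0]}]}
```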

grass.benchmark.results.join_results(results, prefixes=None, select=None, prefixes_as_labels=False)[source]

Join multiple lists of results together

The results argument either needs to be a list of result objects or an object with attribute results which is the list of result objects. This allows for results loaded from a file to be combined with a simple list.

The function always returns just a simple list of result objects.
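The accepted input shapes can be sketched with a hypothetical normalization helper (as_result_list is not part of the package; it only mirrors the documented behavior):

```python
from types import SimpleNamespace


def as_result_list(results):
    # Accept either a plain list of results or an object (such as
    # loaded file contents) carrying them in a results attribute;
    # always return a plain list.
    return list(getattr(results, "results", results))


plain = [SimpleNamespace(label="a"), SimpleNamespace(label="b")]
loaded = SimpleNamespace(results=[SimpleNamespace(label="c")])

combined = as_result_list(plain) + as_result_list(loaded)
print([r.label for r in combined])  # ['a', 'b', 'c']
```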

grass.benchmark.results.join_results_from_files(source_filenames, prefixes=None, select=None, prefixes_as_labels=False)[source]

Join multiple files into one results object.

grass.benchmark.results.load_results(data)[source]

Load results structure from JSON.

Takes str, returns nested structure with SimpleNamespace instead of the default dictionary object. Use attribute access to access by key (not dict-like syntax).
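An equivalent of this loading behavior, assuming only the standard library (the JSON text is made up for illustration):

```python
import json
from types import SimpleNamespace

text = '{"results": [{"label": "test", "times": [1.0, 2.0, 1.5]}]}'

# Decode every JSON object as a SimpleNamespace instead of a dict.
data = json.loads(text, object_hook=lambda d: SimpleNamespace(**d))

# Attribute access instead of dict-like syntax:
first = data.results[0]
print(first.label, first.times)  # test [1.0, 2.0, 1.5]
```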

grass.benchmark.results.load_results_from_file(filename)[source]

Loads results from a JSON file.

See load_results() for details.

grass.benchmark.results.save_results(data)[source]

Save results structure to JSON.

If the provided object does not have a results attribute, it is assumed that the provided list is what the results attribute should be, so the object is saved under a new results key.

Returns JSON as str.

grass.benchmark.results.save_results_to_file(results, filename)[source]

Saves results to a file as JSON.

See save_results() for details.

grass.benchmark.runners module

Basic functions for benchmarking modules

grass.benchmark.runners.benchmark_nprocs(module, label, max_nprocs, repeat=5, shuffle=True)[source]

Benchmark module using values of nprocs up to max_nprocs.

module is an instance of the PyGRASS Module class or any object which has an update method taking nprocs as a keyword argument, a run method which takes no arguments and executes the benchmarked code, and a time attribute which is set to the execution time after the run method returns. Additionally, the object should be convertible to str for printing.

The module is executed for each generated value of nprocs. max_nprocs is used to generate a continuous range of integer values from 1 up to max_nprocs. repeat sets how many times each run is repeated, so the module will run max_nprocs * repeat times. Runs are executed in random order; set shuffle to False if they need to be executed in order based on the number of threads.

label is a text to add to the result (for user-facing display). Optional nprocs is passed to the module if present.

Returns an object with attributes times (list of average execution times), all_times (list of lists of measured execution times), efficiency (parallel efficiency), nprocs (list of nprocs values used), and label (the provided parameter as is).
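Any object following this protocol works in place of a PyGRASS Module. A minimal stand-in, assuming only the documented protocol (the class and its fake workload are illustrative; the actual benchmark call needs a GRASS session and is shown as a comment):

```python
import time


class SleepModule:
    """Minimal module-like object: update() accepts nprocs as a keyword
    argument, run() executes the work, time holds the execution time."""

    def __init__(self):
        self.nprocs = 1
        self.time = None

    def update(self, **kwargs):
        self.nprocs = kwargs.get("nprocs", self.nprocs)

    def run(self):
        start = time.perf_counter()
        time.sleep(0.01 / self.nprocs)  # pretend work parallelizes perfectly
        self.time = time.perf_counter() - start

    def __str__(self):
        return f"sleep(nprocs={self.nprocs})"


module = SleepModule()
module.update(nprocs=4)
module.run()
print(module.time is not None)  # True

# In a GRASS session this object could be benchmarked directly:
# from grass.benchmark import benchmark_nprocs
# result = benchmark_nprocs(module, label="sleep", max_nprocs=4, repeat=3)
```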

grass.benchmark.runners.benchmark_resolutions(module, resolutions, label, repeat=5, nprocs=None)[source]

Benchmark module using different resolutions.

module is an instance of PyGRASS Module class or any object with attributes as specified in benchmark_nprocs() except that the update method is required only when nprocs is set.

resolutions is a list of resolutions to set (the current region is currently used and changed, but that may change in the future). repeat sets how many times each run is repeated, so the module will run len(resolutions) * repeat times.

label is a text to add to the result (for user-facing display). Optional nprocs is passed to the module if present (the called module does not have to support nprocs parameter).

Returns an object with attributes times (list of average execution times), all_times (list of lists of measured execution times), resolutions (the provided parameter as is), cells (number of cells in the region), and label (the provided parameter as is).

grass.benchmark.runners.benchmark_single(module, label, repeat=5)[source]

Benchmark module as is without changing anything.

module is an instance of the PyGRASS Module class or any object which has a run method which takes no arguments and executes the benchmarked code, and a time attribute which is set to the execution time after the run method returns. Additionally, the object should be convertible to str for printing.

repeat sets how many times each run is repeated. label is a text to add to the result (for user-facing display).

Returns an object with attributes time (an average execution time), all_times (list of measured execution times), and label (the provided parameter as is).
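The shape of the returned object can be sketched with a SimpleNamespace; the values below are made up for illustration, and time is simply the mean of all_times:

```python
from statistics import mean
from types import SimpleNamespace

# Shape of a benchmark_single result (timings are invented):
result = SimpleNamespace(
    label="r.example",  # hypothetical module name
    all_times=[2.1, 2.0, 2.2, 1.9, 2.0],  # one entry per repeated run
)
result.time = mean(result.all_times)  # average execution time
print(round(result.time, 2))  # 2.04
```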