Analysis test

Support for testing analysis phase logic, such as rules.

analysis_test

analysis_test(name, target=None, targets=None, impl, expect_failure=False, attrs={}, attr_values={}, fragments=[], config_settings={}, extra_target_under_test_aspects=[], collect_actions_recursively=False, provider_subject_factories=[])

Creates an analysis test from its implementation function.

An analysis test verifies the behavior of a “real” rule target by examining and asserting on the providers given by the real target.

Each analysis test is defined in an implementation function. This function handles the boilerplate to create and return a test target and captures the implementation function’s name so that it can be printed in test feedback.

An example of an analysis test:

def basic_test(name):
    my_rule(name = name + "_subject", ...)

    analysis_test(name = name, target = name + "_subject", impl = _your_test)

def _your_test(env, target, actions):
    env.assert_that(target).runfiles().contains_at_least("foo.txt")
    env.assert_that(find_action(actions, generating="foo.txt")).argv().contains("--a")

PARAMETERS

name:

Name of the target. It should be a Starlark identifier, matching pattern ‘[A-Za-z_][A-Za-z0-9_]*’.

target:

(default None) The value for the singular target attribute under test. Mutually exclusive with the targets arg. The type of value passed determines the type of attribute created:

* A list creates an `attr.label_list`
* A dict creates an `attr.label_keyed_string_dict`
* Other values (string and Label) create an `attr.label`.

When set, the impl function is passed the value for the attribute under test, e.g. passing a list here will pass a list to the impl function. These targets all have the target under test config applied.
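For example, a minimal sketch (reusing the hypothetical my_rule, foo.txt runfile, and assertion idiom from the example above) that passes a list, so the impl function receives a list of analyzed targets:

def list_target_test(name):
    my_rule(name = name + "_subject_a")
    my_rule(name = name + "_subject_b")

    analysis_test(
        name = name,
        # A list creates an attr.label_list, so the impl receives a list.
        target = [name + "_subject_a", name + "_subject_b"],
        impl = _list_target_test,
    )

def _list_target_test(env, targets):
    # `targets` is the list of analyzed targets, each with the target under
    # test config applied.
    for target in targets:
        env.assert_that(target).runfiles().contains_at_least("foo.txt")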

targets:

(default None) dict of attribute names and their values to test. Mutually exclusive with target. Each key must be a valid attribute name and Starlark identifier. When set, the impl function is passed a struct of the targets under test, where each attribute corresponds to this dict’s keys. The attributes have the target under test config applied and can be customized using attrs.
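For example, a minimal sketch (again with the hypothetical my_rule) that analyzes two named targets; the impl function receives a struct keyed by this dict's keys:

def multi_target_test(name):
    my_rule(name = name + "_subject")
    my_rule(name = name + "_dep")

    analysis_test(
        name = name,
        # Keys become attribute names on the test; values are the labels to
        # analyze under the target under test config.
        targets = {
            "subject": name + "_subject",
            "dep": name + "_dep",
        },
        impl = _multi_target_test,
    )

def _multi_target_test(env, targets):
    # `targets` is a struct whose fields match the dict keys above.
    env.assert_that(targets.subject).runfiles().contains_at_least("foo.txt")
    env.assert_that(targets.dep).runfiles().contains_at_least("foo.txt")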

impl:

The implementation function of the analysis test.

expect_failure:

(default False) If true, the analysis test will expect the target under test to fail. Assertions can be made on the underlying failure using truth.expect_failure.

attrs:

(default {}) An optional dictionary to supplement the attrs passed to the unit test’s rule() constructor. Each value can be one of two types:

  1. An attribute object, e.g. from attr.string().

  2. A dict that describes how to create the attribute; such objects have the target under test settings from other args applied. The dict supports the following keys:

    • @attr: A function to create an attribute, e.g. attr.label. If unset, attr.label is used.

    • @config_settings: A dict of config settings; see the config_settings argument. These are merged with the config_settings arg, with these taking precedence.

    • aspects: additional aspects to apply in addition to the regular target under test aspects.

    • cfg: Not supported; replaced with the target-under-test transition.

    • All other keys are treated as kwargs for the @attr function.
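For example, a hedged sketch of both forms (the custom_flag attribute, tool label, and config setting below are hypothetical):

_ATTRS = {
    # An attribute object, used as-is.
    "custom_flag": attr.string(default = "on"),
    # A dict describing how to create the attribute; the target under test
    # settings from the other args are applied to it.
    "tool": {
        "@attr": attr.label,
        # Merged with the top-level config_settings arg; these win on conflict.
        "@config_settings": {"//command_line_option:compilation_mode": "opt"},
        # Remaining keys become kwargs for the @attr function.
        "default": Label("//tools:my_tool"),
    },
}

def attrs_test(name):
    my_rule(name = name + "_subject")

    analysis_test(
        name = name,
        target = name + "_subject",
        impl = _attrs_test,
        attrs = _ATTRS,
    )

def _attrs_test(env, target):
    env.assert_that(target).runfiles().contains_at_least("foo.txt")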

attr_values:

(default {}) An optional dictionary of kwargs to pass onto the analysis test target itself (e.g. common attributes like tags, target_compatible_with, or attributes from attrs). Note that these are for the analysis test target itself, not the target under test.
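For example, tagging the analysis test itself as manual (the target under test is unaffected):

def tagged_test(name):
    my_rule(name = name + "_subject")

    analysis_test(
        name = name,
        target = name + "_subject",
        impl = _tagged_test,
        # Applied to the analysis test target, not the target under test.
        attr_values = {"tags": ["manual"]},
    )

def _tagged_test(env, target):
    env.assert_that(target).runfiles().contains_at_least("foo.txt")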

fragments:

(default []) An optional list of fragment names that can be used to give rules access to language-specific parts of configuration.

config_settings:

(default {}) A dictionary of configuration settings to change for the target under test and its dependencies. This may be used to essentially change ‘build flags’ for the target under test, and may thus be utilized to test multiple targets with different flags in a single build. NOTE: When a value is a label (e.g. for the --platforms flag), it’s suggested to always explicitly call Label() on the value before passing it in. This ensures the label is resolved in your repository’s context, not rules_testing’s.
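For example, a hedged sketch that re-analyzes the target under test with different flags (the user-defined flag and tool labels are hypothetical):

def opt_mode_test(name):
    my_rule(name = name + "_subject")

    analysis_test(
        name = name,
        target = name + "_subject",
        impl = _opt_mode_test,
        config_settings = {
            # Build the target under test (and its deps) in opt mode.
            "//command_line_option:compilation_mode": "opt",
            # For label values, wrap in Label() so they resolve relative to
            # this repository rather than rules_testing's.
            "//my/flags:tool": Label("//tools:my_tool"),
        },
    )

def _opt_mode_test(env, target):
    env.assert_that(target).runfiles().contains_at_least("foo.txt")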

extra_target_under_test_aspects:

(default []) An optional list of aspects to apply to the target_under_test in addition to those set up by default for the test harness itself.

collect_actions_recursively:

(default False) If true, runs testing_aspect over all attributes, otherwise it is only applied to the target under test.

provider_subject_factories:

(default []) Optional list of ProviderSubjectFactory structs; these are additional provider factories on top of the built-in ones. A ProviderSubjectFactory is a struct with the following fields:

  • type: A provider object, e.g. the callable FooInfo object

  • name: A human-friendly name of the provider (e.g. “FooInfo”)

  • factory: A callable to convert an instance of the provider to a subject; see TargetSubject.provider()’s factory arg for the signature.
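For example, a hedged sketch registering a factory for a hypothetical FooInfo provider (see TargetSubject.provider()’s factory arg for the exact callable signature; the one below is only illustrative):

load(":foo_info.bzl", "FooInfo")  # hypothetical provider under test

def _foo_info_subject(info, *, meta):
    # Illustrative factory: wraps the FooInfo instance so tests can assert on
    # its fields. The real signature is defined by TargetSubject.provider().
    return struct(actual = info, meta = meta)

def foo_provider_test(name):
    my_rule(name = name + "_subject")

    analysis_test(
        name = name,
        target = name + "_subject",
        impl = _foo_provider_test,
        provider_subject_factories = [
            struct(
                type = FooInfo,
                name = "FooInfo",
                factory = _foo_info_subject,
            ),
        ],
    )

def _foo_provider_test(env, target):
    env.assert_that(target).runfiles().contains_at_least("foo.txt")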

RETURNS

(None)

test_suite

test_suite(kwargs)

This is an alias to lib/test_suite.bzl#test_suite.

PARAMETERS

kwargs:

Args passed through to test_suite
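For example, a hedged sketch of the typical pattern (the load path and the tests parameter reflect common rules_testing usage; see lib/test_suite.bzl#test_suite for the authoritative signature):

load("@rules_testing//lib:analysis_test.bzl", "analysis_test", "test_suite")

def my_rule_test_suite(name):
    # Assumed usage: each entry is a test macro like basic_test above;
    # test_suite instantiates them and groups them under one suite target.
    test_suite(
        name = name,
        tests = [
            basic_test,
            multi_target_test,
        ],
    )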