pytest
This page outlines how the Launchable CLI interfaces with pytest.
This is a reference page. See Getting started, Sending data to Launchable, and Subsetting your test runs for more comprehensive usage guidelines.

Native pytest plugin

We offer a new way to integrate Launchable, a native pytest plugin.

Installing the plugin

The Launchable pytest plugin is a Python3 package that you can install from PyPI.
The plugin requires Python 3.7+, Pytest 4.2.0+, and Java 8+.
If you use Pipenv, you can install the plugin into your repository:

pipenv install --dev pytest-launchable

Or, you can install the CLI in your CI pipeline by adding this to the part of your CI script where you install dependencies:

pip3 install pytest-launchable
You don't need to install the Launchable CLI separately, because the plugin automatically installs the CLI and uses it internally.

Setting your API key

First, create an API key for your workspace at app.launchableinc.com. This authentication token allows the pytest plugin to talk to Launchable.
Then, make this API key available as the LAUNCHABLE_TOKEN environment variable in your CI process. How you do this depends on your CI system:
  • Azure DevOps Pipelines
  • Bitbucket Pipelines
  • GitHub Actions
  • Jenkins (create a global "secret text" credential to use in your job)
  • Travis CI
See your CI system's documentation for how to define secret environment variables.
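
For example, in GitHub Actions you might expose the key from an encrypted repository secret. This is a sketch, not the only way to do it; the secret name LAUNCHABLE_TOKEN is an assumption, so use whatever name you stored the key under:

```yaml
# .github/workflows/test.yml (fragment)
# Assumes the API key is stored as a repository secret named LAUNCHABLE_TOKEN.
jobs:
  test:
    runs-on: ubuntu-latest
    env:
      LAUNCHABLE_TOKEN: ${{ secrets.LAUNCHABLE_TOKEN }}
    steps:
      - uses: actions/checkout@v4
      - run: pip3 install pytest-launchable
```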

Generate a config file

launchable-config is a command-line tool to generate and validate configuration files. The Launchable pytest plugin uses this config.
First, generate a new config file:
# via pipenv
pipenv run launchable-config --create

# via pip
launchable-config --create
This generates a template .launchable.d/config.yml file in the current directory that looks like this:
# Launchable test session configuration file
# See https://docs.launchableinc.com/resources/cli-reference for detailed usage of these options
#
schema-version: 1.0
build-name: commit_hash
record-build:
  # Put your git repository location here
  source: .
  max_days: 30
record-session:
subset:
  # mode is subset, subset-and-rest, or record-only
  mode: subset
  # you must specify one of target/confidence/time
  # examples:
  #   target: 30%      # Create a variable time-based subset of the given percentage. (0%-100%)
  #   confidence: 30%  # Create a confidence-based subset of the given percentage. (0%-100%)
  #   time: 30m        # Create a fixed time-based subset. Select the best set of tests that run within the given time bound. (e.g. 10m for 10 minutes, 2h30m for 2.5 hours, 1w3d for 7+3=10 days.)
  target: 30%
record-tests:
  # The test results are placed here in JUnit XML format
  result_dir: launchable-test-result
You can then edit the config file per the directions below.

Recording test results (pytest plugin)

Update your config file

In .launchable.d/config.yml:
  1. Check that the source option in the record-build section points to your Git repository (the default is ., the current directory).
  2. Check that the mode option in the subset section is set to record-only.
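
Put together, the relevant part of .launchable.d/config.yml for recording only looks like this (other keys stay as generated):

```yaml
# .launchable.d/config.yml (fragment)
record-build:
  # Path to your Git repository
  source: .
subset:
  mode: record-only
```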

Verify your config file

Verify the contents of the .launchable.d/config.yml file:
# via pipenv
pipenv run launchable-config --verify

# via pip
launchable-config --verify
If any problems are reported, edit the file accordingly.

Use the plugin with pytest

Then, just add the --launchable option to your pytest command:
pytest --launchable <your-pytest-project>
If the configuration file is not in the current directory, use the --launchable-conf-path option:
pytest --launchable --launchable-conf-path <path-to-launchable-configuration-file> <your-pytest-project>
This will:
  1. Create a build in your Launchable workspace
  2. Run your tests
  3. Submit your test reports to Launchable
  4. Leave XML reports in the launchable-test-result directory by default

Subsetting your test runs (pytest plugin)

Update your config file

In .launchable.d/config.yml:
  1. Check that the source option in the record-build section points to your Git repository (the default is ., the current directory).
  2. Check that the mode option in the subset section is set to subset or subset_and_rest based on your needs.
  3. Check that one of the three optimization target options (target, confidence, or time) is set.
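
For subsetting, the relevant part of the config might look like this (the 30% target is just an example starting point):

```yaml
# .launchable.d/config.yml (fragment)
subset:
  mode: subset   # or subset_and_rest to also run the remaining tests
  # exactly one of target/confidence/time
  target: 30%
```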

Verify your config file

Verify the contents of the .launchable.d/config.yml file:
# via pipenv
pipenv run launchable-config --verify

# via pip
launchable-config --verify
If any problems are reported, edit the file accordingly.

Use the plugin with pytest

Then, just add the --launchable option to your pytest command:
pytest --launchable <your-pytest-project>
If the configuration file is not in the current directory, use the --launchable-conf-path option:
pytest --launchable --launchable-conf-path <path-to-launchable-configuration-file> <your-pytest-project>
This will:
  1. Create a build in your Launchable workspace
  2. Request a subset of tests based on your optimization target
  3. Run those tests (or run all the tests if subset_and_rest mode is chosen)
  4. Submit your test reports to Launchable
  5. Leave XML reports in the launchable-test-result directory by default

Legacy CLI profile

Recording test results

When you run tests, create a JUnit XML test report using the --junit-xml option, e.g.:
pytest --junit-xml=test-results/results.xml
If you are using pytest 6 or later, specify junit_family=legacy as the report format: pytest changed its default report format from xunit1 to xunit2 in version 6 (see Deprecations and Removals in the pytest documentation). The xunit2 format does not include the file name in the report, and Launchable requires the file name.
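
For example, you can pin the legacy format in your pytest configuration file instead of passing it on every run:

```ini
# pytest.ini
[pytest]
junit_family = legacy
```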
Then, after running tests, point the CLI to your test report file(s) to collect test results and train the model:
launchable record tests --build <BUILD NAME> pytest ./test-results/

--json option

If you produce report files with the pytest-reportlog plugin, you can use the --json option:
pytest --report-log=test-results/results.json
launchable record tests --build <BUILD NAME> pytest --json ./test-results/
You might need to take extra steps to make sure that launchable record tests always runs even if the build fails. See Always record tests.

Subsetting your test runs

The high level flow for subsetting is:
  1. Get the full list of test paths and pass that to launchable subset with an optimization target for the subset
  2. launchable subset will get a subset from the Launchable platform and output that list to a text file
  3. Pass the text file into your test runner to run only those tests
To retrieve a subset of tests, first list all the tests you would normally run and pass that to launchable subset:
pytest --collect-only -q | launchable subset \
  --build <BUILD NAME> \
  --confidence <TARGET> \
  pytest > launchable-subset.txt
  • The --build should use the same <BUILD NAME> value that you used before in launchable record build.
  • The --confidence option should be a percentage; we suggest 90% to start. You can also use --time or --target; see Subsetting your test runs for more info.
This creates a file called launchable-subset.txt that you can pass into your command to run tests:
pytest --junit-xml=test-results/subset.xml $(cat launchable-subset.txt)
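
To see how the subset file feeds pytest, here is a minimal sketch with hypothetical file contents (in a real run, launchable subset writes this file for you; the test paths below are made up):

```shell
# Hypothetical contents; in practice `launchable subset` writes this file.
printf 'tests/test_login.py::test_ok\ntests/test_cart.py::test_add\n' > launchable-subset.txt

# $(cat launchable-subset.txt) expands each line into a separate pytest
# argument, so pytest runs only the listed tests.
echo pytest --junit-xml=test-results/subset.xml $(cat launchable-subset.txt)
```

Note that this relies on shell word splitting, so it assumes your test paths contain no spaces.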