Soon after you've started sending data, you can start using Launchable to subset your test runs and save time.
The high-level flow for subsetting is:

1. Get the full list of tests (or test files, targets, etc.) and pass that to `launchable subset` along with:
   - an optimization target for the subset
   - a build name, so Launchable can choose the best tests for the changes in the build being tested
2. `launchable subset` gets a subset from the Launchable platform and outputs that list to a text file
3. Pass the text file into your test runner to run only those tests
The diagram below illustrates the interactions between your tools, the Launchable CLI, and the Launchable platform:
Depending on your goal, you might need to make a few changes to your pipeline to adopt subsetting.
After subsetting your tests, you should make sure to run the full suite of tests at some point later in your pipeline.
For example, once you start running a subset of an integration test suite that runs on pull requests, you should make sure to run the full integration test suite after a PR is merged (and record the outcome of those runs with `launchable record tests`).
If your goal is to run a short subset of a long test suite earlier in the development process, then you may need to set up a new pipeline to run tests in that development phase. For example, if you currently run a long nightly test suite, and you want to run a subset of that suite every hour, you may need to create a pipeline to build, deploy, and run the subset if one doesn't exist already.
You'll also want to continue running the full test suite every night (and recording the outcome of those runs with `launchable record tests`).
The optimization target you choose determines how Launchable populates a subset with tests. You can use the confidence curve shown at app.launchableinc.com to choose an optimization target. "Confidence" is defined as the likelihood that an entire test run will pass or fail.
Confidence is shown on the y-axis of a confidence curve. When you request a subset using `--confidence 90%`, Launchable will populate the subset with relevant tests up to the corresponding expected duration value on the x-axis. For example, if the corresponding duration value for 90% confidence is 3 minutes, Launchable will populate the subset with up to 3 minutes of the most relevant tests for the changes in that build. This is useful to start with because the duration should decrease over time as Launchable learns more about your changes and tests.
Time is shown on the x-axis of a confidence curve. When you request a subset using `--time 600`, Launchable will populate the subset with up to 10 minutes (600 seconds) of the most relevant tests for the changes in that build. This is useful if you have a maximum test runtime in mind.
Percentage time is not yet shown in any charts at app.launchableinc.com. When you request a subset using `--target 20%`, Launchable will populate the subset with 20% of the expected duration of the most relevant tests. For example, if the expected duration of the full list of tests passed to `launchable subset` is 100 minutes, Launchable will return up to 20 minutes of the most relevant tests for the changes in that build. This is useful if your test runs vary in duration.
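To compare the three targets side by side, here's a sketch of the alternative invocations. `<BUILD NAME>` and the Minitest file list are placeholders, a real run would also redirect stdout (e.g. `> launchable-subset.txt`), and the `show` helper just echoes each command so the snippet is runnable without the Launchable CLI installed:

```shell
# Sketch: the three optimization targets, shown as alternative invocations.
# Pick exactly one target per subset request.

# Echo the command instead of executing it, so this runs anywhere.
show() { echo "$@"; }

# Confidence target: tests up to the duration that corresponds to 90% confidence
show launchable subset --build '<BUILD NAME>' --confidence 90% minitest 'test/**/*.rb'

# Fixed-time target: up to 600 seconds (10 minutes) of tests
show launchable subset --build '<BUILD NAME>' --time 600 minitest 'test/**/*.rb'

# Percentage target: up to 20% of the full suite's expected duration
show launchable subset --build '<BUILD NAME>' --target 20% minitest 'test/**/*.rb'
```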
To retrieve a subset of tests, first list all the tests you would normally run and pass that to `launchable subset`. Here's an example using Ruby/Minitest:

```shell
launchable subset \
  --build <BUILD NAME> \
  --confidence 90% \
  minitest test/**/*.rb > launchable-subset.txt
```
The `--build` option should use the same `<BUILD NAME>` value that you used in `launchable record build`.
This creates a file called `launchable-subset.txt` that you can pass into your command to run tests:

```shell
bundle exec rails test $(cat launchable-subset.txt)
```
Subsetting instructions depend on the test runner or build tool you use to run tests. Click the appropriate link below to get started:
You can start subsetting by just splitting your existing suite into an intelligent subset and then the rest of the tests. After you've dialed in the right subset target, you can then remove the remainder and run the full suite less frequently. See the diagram below for a visual explanation.
The middle row of the diagram shows how you can start by splitting your existing test run into two parts:

- a subset of dynamically selected tests, and
- the rest of the tests
The example below shows how you can generate a subset (`launchable-subset.txt`) and the remainder (`launchable-remainder.txt`) using the `--rest` option. Here we're using Ruby and Minitest:

```shell
launchable subset \
  --build <BUILD NAME> \
  --confidence 90% \
  --rest launchable-remainder.txt \
  minitest test/**/*.rb > launchable-subset.txt
```
This creates two files, `launchable-subset.txt` and `launchable-remainder.txt`, that you can pass into your commands to run tests in two stages. Again, using Ruby as an example:

```shell
bundle exec rails test $(cat launchable-subset.txt)
bundle exec rails test $(cat launchable-remainder.txt)
```
You can remove the second part after you're happy with the subset's performance. Once you do this, make sure to continue running the full test suite at some stage as described in Preparing your pipeline.
Some teams manually split their test suites into several "bins" to run them in parallel. This presents a challenge when adopting Launchable, because you don't want to lose the benefit of parallelization.
Luckily, with split subsets you can replace your manually selected bins with automatically populated bins from a Launchable subset.
For example, let's say you currently run ~80 minutes of tests split coarsely into four bins and run in parallel across four workers:
- Worker 1: ~20 minutes of tests
- Worker 2: ~15 minutes of tests
- Worker 3: ~20 minutes of tests
- Worker 4: ~25 minutes of tests
With a split subset, you can generate a subset of the full 80 minutes of tests and then call Launchable once in each worker to get the bin of tests for that runner.
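Applied to a setup like the four-worker example above, the per-worker requests might look like the following sketch. `subset/12345` stands in for the subset ID printed by `launchable subset --split`, `bazel` is just an example test runner, and the `show` helper echoes the commands so the snippet runs without the Launchable CLI installed:

```shell
# Sketch: one Launchable subset split into four bins, one per worker.

# Echo the command instead of executing it, so this runs anywhere.
show() { echo "$@"; }

for i in 1 2 3 4; do
  # Each worker requests its own bin: --bin <bin number>/<total bins>
  show launchable split-subset --subset-id subset/12345 --bin "$i/4" bazel
done
```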
The high-level flow is:

1. Request a subset of tests to run from Launchable by running `launchable subset` with the `--split` option. Instead of outputting a list of tests, the command will output a subset ID that you should save and pass into each worker.
2. Start up your parallel test workers, e.g. four runners from the example above.
3. In each worker, request the bin of tests that worker should run. To do this, run `launchable split-subset` with:
   - the `--subset-id` option set to the ID you saved earlier, and
   - the `--bin` value set to `<bin number>/<total bins>` for that worker
4. Run the tests in each worker.
5. After each run finishes in each worker, record test results using `launchable record tests` with the `--subset-id` option set to the ID you saved earlier.
```shell
# main
$ launchable record build --name $BUILD_ID --source src=.
$ launchable subset --split --confidence 90% --build $BUILD_ID bazel .
subset/12345

# worker 1
$ launchable split-subset --subset-id subset/12345 --bin 1/3 --rest rest.txt bazel > subset.txt
$ bazel test $(cat subset.txt)
$ launchable record tests --subset-id subset/12345 bazel .

# worker 2
$ launchable split-subset --subset-id subset/12345 --bin 2/3 --rest rest.txt bazel > subset.txt
$ bazel test $(cat subset.txt)
$ launchable record tests --subset-id subset/12345 bazel .

# worker 3
$ launchable split-subset --subset-id subset/12345 --bin 3/3 --rest rest.txt bazel > subset.txt
$ bazel test $(cat subset.txt)
$ launchable record tests --subset-id subset/12345 bazel .
```