Examples

Android

Cirrus CI has a set of Docker images ready for Android development. If these images are not the right fit for your project, you can always use any custom Docker image with Cirrus CI. For these images, a .cirrus.yml configuration file can look like this:

container:
  image: cirrusci/android-sdk:27

check_android_task:
  check_script: ./gradlew check connectedCheck

Or like this if a running emulator is needed for the tests:

container:
  image: cirrusci/android-sdk:18
  cpu: 4
  memory: 10G

check_android_task:
  create_device_script:
    echo no | avdmanager create avd --force
        -n test
        -k "system-images;android-18;default;armeabi-v7a"
  start_emulator_background_script:
    $ANDROID_HOME/emulator/emulator
        -avd test
        -no-audio
        -no-window
  wait_for_emulator_script:
    - adb wait-for-device
    - adb shell input keyevent 82
  check_script: ./gradlew check connectedCheck

Info

Please don't forget to set up a Remote Build Cache for your Gradle project, or at least simple folder caching.
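As a minimal sketch of folder caching for the Android task above (mirroring the Gradle caching example later on this page; the cache name gradle_cache is just a label), the task could look like this:

check_android_task:
  gradle_cache:
    folder: ~/.gradle/caches
  check_script: ./gradlew check connectedCheck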

Bazel

The Bazel team provides a set of official Docker images with Bazel pre-installed. Here is an example of how a .cirrus.yml for Bazel can look:

container:
  image: l.gcr.io/google/bazel:latest
task:
  build_script: bazel build //...

If these images are not the right fit for your project you can always use any custom Docker image with Cirrus CI.

Remote Cache

Cirrus CI has a built-in HTTP Cache which is compatible with Bazel's remote cache.

Here is an example of how Cirrus CI HTTP Cache can be used with Bazel:

container:
  image: l.gcr.io/google/bazel:latest
task:
  build_script:
    bazel build
      --spawn_strategy=sandboxed
      --strategy=Javac=sandboxed
      --genrule_strategy=sandboxed
      --remote_http_cache=http://$CIRRUS_HTTP_CACHE_HOST
      //...

C++

Official GCC Docker images can be used for builds. Here is an example of a .cirrus.yml that runs tests:

container:
  image: gcc:latest
task:
  tests_script: make tests

Elixir

Official Elixir Docker images can be used for builds. Here is an example of a .cirrus.yml that runs tests:

test_task:
  container:
    image: elixir:latest
  mix_cache:
    folder: deps
    fingerprint_script: cat mix.lock
    populate_script: mix deps.get
  compile_script: mix compile
  test_script: mix test

Erlang

Official Erlang Docker images can be used for builds. Here is an example of a .cirrus.yml that runs tests:

test_task:
  container:
    image: erlang:latest
  rebar3_cache:
    folder: _build
    fingerprint_script: cat rebar.lock
    populate_script: rebar3 compile --deps_only
  compile_script: rebar3 compile
  test_script: rebar3 ct

Flutter

Cirrus CI provides a set of Docker images with the Flutter and Dart SDK pre-installed. Here is an example of how a .cirrus.yml for Flutter can look:

container:
  image: cirrusci/flutter:latest

test_task:
  pub_cache:
    folder: ~/.pub-cache
  test_script: flutter test

If these images are not the right fit for your project you can always use any custom Docker image with Cirrus CI.

Flutter Web

Our Docker images with the Flutter and Dart SDK pre-installed have special *-web tags with Chromium pre-installed. You can use these tags to run Flutter Web tests.

First, define a new chromium platform in your dart_test.yaml:

define_platforms:
  chromium:
    name: Chromium
    extends: chrome
    settings:
      arguments: --no-sandbox
      executable:
        linux: chromium

Now you'll be able to run tests targeting the web via pub run test test -p chromium.
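Putting the pieces together, a .cirrus.yml for web tests might look like the following sketch (latest-web is one example of the *-web tags mentioned above; pick the tag that matches your channel):

container:
  image: cirrusci/flutter:latest-web

test_task:
  pub_cache:
    folder: ~/.pub-cache
  test_script: pub run test test -p chromium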

Go

The best way to test Go projects is by using the official Go Docker images. Here is an example of how a .cirrus.yml can look for a project using Go Modules:

container:
  image: golang:latest

env:
  GOPROXY: https://proxy.golang.org

test_task:
  modules_cache:
    fingerprint_script: cat go.sum
    folder: $GOPATH/pkg/mod
  get_script: go get ./...
  build_script: go build ./...
  test_script: go test ./...

Gradle

We recommend using the official Gradle Docker containers since they have Gradle-specific configuration already set up. For example, standard Java containers don't have a pre-configured user and as a result don't have the HOME environment variable set, which makes Gradle complain.

Caching

To preserve caches between Gradle runs, simply add a cache instruction as shown below. The trick here is to clean up the ~/.gradle/caches folder at the very end of a build. Gradle creates some unique nondeterministic files in the ~/.gradle/caches folder on every run, which would make Cirrus CI re-upload the cache every time. This way, you get faster builds!

container:
  image: gradle:jdk8

check_task:
  gradle_cache:
    folder: ~/.gradle/caches
  check_script: gradle check
  cleanup_before_cache_script:
    - rm -rf ~/.gradle/caches/$GRADLE_VERSION/
    - rm -rf ~/.gradle/caches/transforms-1
    - rm -rf ~/.gradle/caches/journal-1
    - find ~/.gradle/caches/ -name "*.lock" -type f -delete

Build Cache

Here is how the HTTP Cache can be used with Gradle, simply by adding the following lines to settings.gradle:

ext.isCiServer = System.getenv().containsKey("CIRRUS_CI")
ext.isMasterBranch = System.getenv()["CIRRUS_BRANCH"] == "master"

buildCache {
  local {
    enabled = !isCiServer
  }
  remote(HttpBuildCache) {
    url = 'http://' + System.getenv().getOrDefault("CIRRUS_HTTP_CACHE_HOST", "localhost:12321") + "/"
    enabled = isCiServer
    push = isMasterBranch
  }
}

Please make sure you are running Gradle commands with the --build-cache flag or have org.gradle.caching enabled in your gradle.properties file. Here is an example of a gradle.properties file that we use internally for all Gradle projects:

org.gradle.daemon=true
org.gradle.caching=true
org.gradle.parallel=true
org.gradle.configureondemand=true
org.gradle.jvmargs=-Dfile.encoding=UTF-8

JUnit

Here is a .cirrus.yml that, regardless of whether the tests succeed or fail, parses and uploads JUnit reports:

junit_test_task:
  junit_script: <replace this comment with instructions to run the test suites>
  always:
    junit_result_artifacts:
      path: "**/test-results/**/*.xml"
      format: junit

If it is running on a pull request, annotations will also be displayed in-line.

Maven

Official Maven Docker images can be used for building and testing Maven projects:

task:
  name: Cirrus CI
  container:
    image: maven:latest
  maven_cache:
    folder: ~/.m2
  test_script: mvn test -B

MySQL

The Additional Containers feature makes it super simple to run the same Docker MySQL image you might be running in production for your application. Getting a running instance of the latest GA version of MySQL can be as simple as the following few lines in your .cirrus.yml:

container:
  image: golang:latest
  additional_containers:
    - name: mysql
      image: mysql:latest
      port: 3306
      env:
        MYSQL_ROOT_PASSWORD: ""

With the configuration above, MySQL will be available on localhost:3306. Use an empty password to log in as the root user.
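Since the additional container may still be initializing when your task starts, it can help to wait for the port to accept connections before running tests. A minimal sketch (the wait step is our own addition, not something Cirrus CI provides, and uses bash's built-in /dev/tcp check):

test_task:
  wait_for_mysql_script:
    # retry for up to 60 seconds until MySQL accepts TCP connections
    - timeout 60 bash -c 'until echo > /dev/tcp/localhost/3306; do sleep 1; done'
  test_script: go test ./...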

Node

Official Node.js Docker images can be used for building and testing Node.js applications.

npm

Here is an example of a .cirrus.yml that caches node_modules based on contents of package-lock.json file and runs tests:

container:
  image: node:latest

test_task:
  node_modules_cache:
    folder: node_modules
    fingerprint_script: cat package-lock.json
    populate_script: npm ci
  test_script: npm test

Yarn

Here is an example of a .cirrus.yml that caches node_modules based on the contents of a yarn.lock file and runs tests:

container:
  image: node:latest

test_task:
  node_modules_cache:
    folder: node_modules
    fingerprint_script: cat yarn.lock
    populate_script: yarn install
  test_script: yarn run test

Python

Official Python Docker images can be used for builds. Here is an example of a .cirrus.yml that caches installed packages based on contents of requirements.txt and runs pytest:

container:
  image: python:slim

test_task:
  pip_cache:
    folder: ~/.cache/pip
    fingerprint_script: echo $PYTHON_VERSION && cat requirements.txt
    populate_script: pip install -r requirements.txt
  test_script: pytest

Building PyPI Packages

Using the Python Docker images, you can also test package builds if you are making packages for PyPI. Here is an example .cirrus.yml for doing so:

container:
  image: python:slim

build_package_test_task:
  pip_cache:
    folder: ~/.cache/pip
    fingerprint_script: echo $PYTHON_VERSION
    populate_script: python3 -m pip install --upgrade setuptools wheel twine
  build_package_test_script: python3 setup.py sdist bdist_wheel bdist_egg

Linting

You can easily set up linting with Cirrus CI and flake8. Here is an example:

lint_task:
  container:
    image: alpine/flake8:latest
  script: flake8 *.py

Unittest Annotations

Python Unittest reports are supported by Cirrus CI Annotations. This way you can see which tests are failing without leaving the pull request you are reviewing! Here is an example of a .cirrus.yml that produces and stores Unittest reports:

unittest_task:
  container:
    image: python:slim
  install_dependencies_script: |
    pip3 install unittest_xml_reporting
  run_tests_script: python3 -m xmlrunner tests
  # replace 'tests' with the module,
  # unittest.TestCase, or unittest.TestSuite
  # that the tests are in
  always:
    upload_results_artifacts:
      path: ./*.xml
      format: junit

Now you should get annotations for your test results.

Release Assets

Cirrus CI doesn't provide built-in functionality to upload artifacts to a GitHub release, but this functionality can be added via a simple script. For a release, Cirrus CI provides the CIRRUS_RELEASE environment variable along with the CIRRUS_TAG environment variable. CIRRUS_RELEASE contains the release id, which can be used to upload assets.

For security reasons, Cirrus CI only requires write access to the Checks API and doesn't require write access to repository contents. That's why you need to create a personal access token with full access to the repo scope. Once an access token is created, please create an encrypted variable from it and save it in .cirrus.yml:

env:
  GITHUB_TOKEN: ENCRYPTED[qwerty]

Now you can use a simple script to upload your assets:

#!/usr/bin/env bash

if [[ "$CIRRUS_RELEASE" == "" ]]; then
  echo "Not a release. No need to deploy!"
  exit 0
fi

if [[ "$GITHUB_TOKEN" == "" ]]; then
  echo "Please provide GitHub access token via GITHUB_TOKEN environment variable!"
  exit 1
fi

file_content_type="application/octet-stream"
files_to_upload=(
  # relative paths of assets to upload
)

for fpath in "${files_to_upload[@]}"
do
  echo "Uploading $fpath..."
  name=$(basename "$fpath")
  url_to_upload="https://uploads.github.com/repos/$CIRRUS_REPO_FULL_NAME/releases/$CIRRUS_RELEASE/assets?name=$name"
  curl -X POST \
    --data-binary "@$fpath" \
    --header "Authorization: token $GITHUB_TOKEN" \
    --header "Content-Type: $file_content_type" \
    "$url_to_upload"
done
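The script can then be wired into a task in .cirrus.yml, for example like this (the script path and task name here are illustrative; only_if on CIRRUS_TAG skips the task for non-tag builds):

release_task:
  only_if: $CIRRUS_TAG != ''
  env:
    GITHUB_TOKEN: ENCRYPTED[qwerty]
  upload_assets_script: ./upload-release-assets.sh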

Ruby

Official Ruby Docker images can be used for builds. Here is an example of a .cirrus.yml that caches installed gems based on contents of Gemfile.lock and runs rspec:

container:
  image: ruby:latest

rspec_task:
  install_bundler_script: gem install bundler
  bundle_cache:
    folder: /usr/local/bundle
    fingerprint_script: echo $RUBY_VERSION && cat Gemfile && cat Gemfile.lock
    populate_script: bundle install
  rspec_script: bundle exec rspec

Test Parallelization

It's super easy to add intelligent test splitting by using Knapsack Pro and a matrix modification. After setting up the Knapsack Pro gem, simply add sharding like this:

task:
  matrix:
    name: rspec (shard 1)
    name: rspec (shard 2)
    name: rspec (shard 3)
    name: rspec (shard 4)
  bundle_cache:
    folder: /usr/local/bundle
    fingerprint_script: cat Gemfile.lock
    populate_script: bundle install
  rspec_script: bundle exec rake knapsack_pro:rspec

This will create four shards that will theoretically run tests 4x faster by equally splitting all tests between these four shards.

Rust

Official Rust Docker images can be used for builds. Here is a simple example of a .cirrus.yml that caches crates in $CARGO_HOME based on the contents of Cargo.lock:

container:
  image: rust:latest

test_task:
  cargo_cache:
    folder: $CARGO_HOME/registry
    fingerprint_script: cat Cargo.lock
  build_script: cargo build
  test_script: cargo test
  before_cache_script: rm -rf $CARGO_HOME/registry/index

Caching Cleanup

Please note the before_cache_script that removes the registry index from the cache before uploading it at the end of a successful task. The registry index changes very rapidly, which would make the cache invalid. before_cache_script deletes the index and leaves just the required crates for caching.

Rust Nightly

It is possible to use nightly builds of Rust via the official rustlang/rust:nightly container. Here is an example of a .cirrus.yml that runs tests against the latest stable and nightly versions of Rust:

test_task:
  matrix:
    - container:
        image: rust:latest
    - allow_failures: true
      container:
        image: rustlang/rust:nightly
  cargo_cache:
    folder: $CARGO_HOME/registry
    fingerprint_script: cat Cargo.lock
  build_script: cargo build
  test_script: cargo test
  before_cache_script: rm -rf $CARGO_HOME/registry/index