Supported Computing Services

For every task, Cirrus CI starts a new virtual machine or a new Docker container on a given compute service. Using a new VM or a new Docker container each time for running tasks has many benefits:

  • Atomic changes to the environment where tasks are executed. Everything about a task is configured in the .cirrus.yml file, including the VM image version and the Docker container image version. After committing changes to .cirrus.yml, new tasks will use the new environment, while older branches will continue using their old configuration.
  • Reproducibility. A fresh environment guarantees that no corrupted artifacts or caches are left over from previous tasks.
  • Cost efficiency. Most compute services offer per-second pricing, which makes them ideal for use with Cirrus CI. In addition, each task can request the amount of CPU and memory that fits the nature of that task. There is no need to manage pools of similar VMs or to squeeze workloads into the limits of a given continuous integration system.

To be fair, there are of course some disadvantages to starting a new VM or a container for every task:

  • Virtual Machine Startup Speed. Starting a VM can take anywhere from a few dozen seconds to a minute or two, depending on the cloud provider and the particular VM image. Starting a container, on the other hand, takes just a few hundred milliseconds! But even a minute of startup time on average is a small price to pay for more stable, reliable and reproducible CI.
  • Cold local caches for every task execution. Many tools store caches, such as downloaded dependencies, locally to avoid downloading them again in the future. Since Cirrus CI always uses fresh VMs and containers, such local caches will always be empty. The performance impact of empty local caches can be avoided by using Cirrus CI features like the built-in caching mechanism (see the sketch below). Some tools, like Gradle, can even take advantage of the built-in HTTP cache!

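As a minimal sketch of that built-in caching mechanism (the folder, scripts and package manager below are illustrative and should be adjusted for your project), a cache instruction restores a folder at the start of a task and persists it afterwards:

task:
  node_modules_cache:
    folder: node_modules
    fingerprint_script: cat package-lock.json
    populate_script: npm install
  test_script: npm test
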
Please check the list of currently supported cloud compute services below and see what's coming next.

Google Cloud

Cirrus CI can schedule tasks on several Google Cloud compute services. In order to interact with Google Cloud APIs, Cirrus CI needs permissions. Creating a service account is a common way to safely grant granular access to parts of a Google Cloud project.

Isolation

We recommend creating a separate Google Cloud project for running CI builds to make sure tests are isolated from production data. A separate project also makes it easy to see how much money is spent on CI and how efficient Cirrus CI is 😉

Once you have a Google Cloud project for Cirrus CI, please create a service account by running the following command:

gcloud iam service-accounts create cirrus-ci \
    --project $PROJECT_ID

Depending on the compute service, Cirrus CI will need different roles assigned to the service account. But Cirrus CI will always need permission to act as the service account:

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:cirrus-ci@$PROJECT_ID.iam.gserviceaccount.com \
    --role roles/iam.serviceAccountUser

Cirrus CI uses Google Cloud Storage to store logs and caches. To grant the service account Google Cloud Storage permissions, please run:

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:cirrus-ci@$PROJECT_ID.iam.gserviceaccount.com \
    --role roles/storage.admin

Default Logs Retention Period

By default, Cirrus CI will store logs and caches for 30 days, but this can be changed by manually configuring a lifecycle rule for the Google Cloud Storage bucket that Cirrus CI uses.
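
For example, such a lifecycle rule can be applied with gsutil. The sketch below deletes objects older than 90 days; the bucket name is only a placeholder, so substitute the bucket that Cirrus CI actually uses for your project:

# Write a lifecycle configuration that deletes objects older than 90 days
cat > lifecycle.json <<EOF
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 90}}
  ]
}
EOF

# Apply it to the bucket used by Cirrus CI (placeholder name)
gsutil lifecycle set lifecycle.json gs://$CIRRUS_CI_BUCKET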

Now we have a service account that Cirrus CI can use! It's time to let Cirrus CI know about it by securely providing a private key for the service account. A private key can be created by running the following command:

gcloud iam service-accounts keys create service-account-credentials.json \
  --iam-account cirrus-ci@$PROJECT_ID.iam.gserviceaccount.com

Finally, create an encrypted variable from the contents of the service-account-credentials.json file and add it to the top of your .cirrus.yml file:

gcp_credentials: ENCRYPTED[qwerty239abc]

Now Cirrus CI can store logs and caches for scheduled tasks in Google Cloud Storage. Please check the following sections for additional instructions about Compute Engine or Kubernetes Engine.

Compute Engine

In order to schedule tasks on Google Compute Engine, the service account that Cirrus CI operates with should have the necessary role assigned. This can be done by running a gcloud command:

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:cirrus-ci@$PROJECT_ID.iam.gserviceaccount.com \
    --role roles/compute.admin

Now tasks can be scheduled on Compute Engine within the $PROJECT_ID project by configuring gce_instance something like this:

gce_instance:
  image_project: ubuntu-os-cloud
  image_name: ubuntu-1604-xenial-v20171121a
  zone: us-central1-a
  cpu: 8
  memory: 40GB
  disk: 20

task:
  script: ./run-ci.sh

Specify Machine Type

It is possible to specify a predefined machine type via the type field:

gce_instance:
  image_project: ubuntu-os-cloud
  image_name: ubuntu-1604-xenial-v20171121a
  zone: us-central1-a
  type: n1-standard-8
  disk: 20

Custom VM images

Building an immutable VM image with all necessary software pre-configured is a well-known best practice with many benefits. It ensures that the environment where a task is executed is always the same and that no time is wasted on repetitive work like installing the same packages over and over again for every single task.

There are many ways to create a custom image for Google Compute Engine. Please refer to the official documentation. At Cirrus Labs we use Packer to automate building such images. An example of how we use it can be found in our public GitHub repository.
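
As a rough sketch (the project, image and package names below are placeholders, not the exact template we use), a Packer template with the googlecompute builder might look like this:

{
  "builders": [
    {
      "type": "googlecompute",
      "project_id": "my-project",
      "source_image_family": "ubuntu-1604-lts",
      "zone": "us-central1-a",
      "image_name": "my-custom-image-with-docker",
      "ssh_username": "packer"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo apt-get update",
        "sudo apt-get install -y docker.io"
      ]
    }
  ]
}

Running packer build against such a template produces an image that can then be referenced from gce_instance via image_project and image_name.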

Windows Support

Google Compute Engine supports Windows images, and Cirrus CI can take full advantage of them by simply specifying the platform of an image explicitly, like this:

gce_instance:
  image_project: windows-cloud
  image_name: windows-server-2016-dc-core-v20170913
  platform: windows
  zone: us-central1-a
  cpu: 8
  memory: 40GB
  disk: 20

task:
  script: run-ci.bat

FreeBSD Support

Google Compute Engine supports FreeBSD images, and Cirrus CI can take full advantage of them by simply specifying the platform of an image explicitly, like this:

gce_instance:
  image_project: freebsd-org-cloud-dev
  image_name: freebsd-11-2-release-amd64
  platform: FreeBSD
  zone: us-central1-a
  cpu: 8
  memory: 40GB
  disk: 50

task:
  script: printenv

Instance Scopes

By default, Cirrus CI creates Google Compute instances without any scopes, so an instance can't access Google Cloud Storage, for example. But sometimes it can be useful to give an instance some permissions by using the scopes key of gce_instance. For example, if a particular task builds Docker images and then pushes them to Container Registry, its configuration file can look something like this:

gcp_credentials: ENCRYPTED[qwerty239abc]

gce_instance:
  image_project: my-project
  image_name: my-custom-image-with-docker
  zone: us-central1-a
  cpu: 8
  memory: 40GB
  disk: 20

test_task:
  test_script: ./scripts/test.sh

push_docker_task:
  depends_on: test
  only_if: $CIRRUS_BRANCH == "master"
  gce_instance:
    scopes: cloud-platform
  push_script: ./scripts/push_docker.sh

Preemptible Instances

Cirrus CI can schedule preemptible instances, with all of their price benefits and stability risks. Sometimes the risk of an instance being preempted at any time can be tolerated. For example, gce_instance can be configured to schedule a preemptible instance for non-master branches like this:

gce_instance:
  image_project: my-project
  image_name: my-custom-image-with-docker
  zone: us-central1-a
  preemptible: $CIRRUS_BRANCH != "master"

Kubernetes Engine

Scheduling tasks on Compute Engine has one big disadvantage: waiting for an instance to start, which usually takes around a minute. One minute is not that long, but it can't compete with the few hundred milliseconds it takes a container cluster on GKE to start a container.

To start scheduling tasks on a container cluster, we first need to create one using gcloud. Here is a command to create an auto-scaling cluster that will scale down to zero nodes when there is no load for some time and quickly scale up during peak hours:

gcloud container clusters create cirrus-ci-cluster \
  --project cirruslabs-ci \
  --zone us-central1-a \
  --num-nodes 1 --machine-type n1-standard-8 \
  --enable-autoscaling --min-nodes=0 --max-nodes=10

The service account that Cirrus CI operates with should be assigned the container.admin role, which allows it to administer GKE clusters:

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:cirrus-ci@$PROJECT_ID.iam.gserviceaccount.com \
    --role roles/container.admin

Done! Now, after creating the cirrus-ci-cluster cluster and configuring gcp_credentials, tasks can be scheduled on the newly created cluster like this:

gcp_credentials: ENCRYPTED[qwerty239abc]

gke_container:
  image: gradle:jdk8
  cluster_name: cirrus-ci-cluster
  zone: us-central1-a
  namespace: default
  cpu: 6
  memory: 24GB

Using in-memory disk

By default, Cirrus CI mounts a simple emptyDir volume at the /tmp path to protect the pod from unnecessary eviction by the autoscaler. It is possible to switch the emptyDir's medium to in-memory tmpfs storage instead of the default one by setting the use_in_memory_disk field of gke_container to true or to any other expression that uses environment variables.
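
For example, extending the gke_container configuration from above (the image, cluster and zone values are just the ones used earlier, for illustration):

gke_container:
  image: gradle:jdk8
  cluster_name: cirrus-ci-cluster
  zone: us-central1-a
  use_in_memory_disk: true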

Azure

Cirrus CI can schedule tasks on several Azure services. In order to interact with Azure APIs, Cirrus CI needs permissions. First, please choose a subscription you want to use for scheduling CI tasks. Navigate to the Subscriptions blade within the Azure Portal and save the $SUBSCRIPTION_ID that we'll use below for setting up a service principal.

Creating a service principal is a common way to safely give granular access to parts of Azure:

az ad sp create-for-rbac --name CirrusCI --sdk-auth \
  --scopes "/subscriptions/$SUBSCRIPTION_ID"

The command above will create a new service principal and print something like this:

{
  "appId": "...",
  "displayName": "CirrusCI",
  "name": "http://CirrusCI",
  "password": "...",
  "tenant": "..."
}

Please create an encrypted variable from this output and add it to the top of your .cirrus.yml file:

azure_credentials: ENCRYPTED[qwerty239abc]

Now Cirrus CI can interact with Azure APIs.

Azure Container Instances

Azure Container Instances (ACI) is an ideal candidate for running modern CI workloads. ACI makes it possible to simply run Linux and Windows containers without thinking about the underlying infrastructure.

Once azure_credentials is configured as described above, tasks can be scheduled on ACI by configuring azure_container_instance like this:

azure_container_instance:
  image: cirrusci/windowsservercore:2016
  resource_group: CirrusCI
  region: westus
  platform: windows
  cpu: 4
  memory: 12G

About Docker Images to use with ACI

Linux-based images are usually pretty small and don't require much tweaking. For Windows containers, ACI recommends following a few simple recommendations in order to reduce startup time.

Anka

Anka Build by Veertu is a solution for creating private macOS clouds for iOS CI infrastructure. The Anka hypervisor leverages Apple's Hypervisor.framework, which provides lightweight but powerful macOS VMs that act almost like containers. Overall, Anka is an excellent solution for a modern continuous integration system.

MacStadium is the leading provider of hosted Mac infrastructure, and it recently partnered with Veertu to provide a hosted Anka Cloud solution. CI infrastructure for macOS has never been this accessible.

Cirrus CI supports Anka Build as a computing service to schedule tasks on. In order to connect an Anka Cloud to Cirrus CI, Cirrus Labs created Anka Controller Extended, which can connect to the Anka Cloud's private network and securely expose an API for Cirrus CI to connect to.

Please check the Anka Controller Extended documentation for details and don't hesitate to reach out to support with any questions.

Once Anka Controller Extended is up and running, Cirrus CI can use its API to schedule tasks. Simply use anka_instance in your .cirrus.yml file like this:

anka_instance:
  controller_endpoint: <anka-controller-extended-IP>:<PORT>
  access_token: ENCRYPTED[qwerty239]
  template: high-sierra
  tag: xcode-9.4

Custom Anka VM Templates

Anka makes it easy to build a hierarchy of VMs, much like containers with their layers. Please check our example repository.

Hosted Anka Cloud on MacStadium

If you choose to use the hosted Anka Cloud solution from MacStadium, please mention Cirrus CI during registration for a quicker installation process.

Coming Soon

We are actively working on supporting AWS.