Computing Services

For every task, Cirrus CI starts a new virtual machine or a new Docker container on a given compute service. Using a fresh VM or container for each task has many benefits:

  • Atomic changes to the environment where tasks are executed. Everything about a task is configured in the .cirrus.yml file, including the VM image version and the Docker container image version. After committing changes to .cirrus.yml, not only will new tasks use the new environment, but outdated branches will also continue to use the old configuration.
  • Reproducibility. A fresh environment guarantees that no corrupted artifacts or caches are left over from previous tasks.
  • Cost efficiency. Most compute services offer per-second pricing, which makes them ideal for use with Cirrus CI. Each task in a repository can also request the ideal amount of CPU and memory for the nature of that task. There is no need to manage pools of similar VMs or to fit workloads within the limits of a given continuous integration system.

To be fair, there are of course some disadvantages to starting a new VM or container for every task:

  • Virtual machine startup speed. Starting a VM can take anywhere from a few dozen seconds to a minute or two, depending on the cloud provider and the particular VM image. Starting a container, on the other hand, takes just a few hundred milliseconds! But even a minute on average to start a VM is a small price to pay for more stable, reliable, and reproducible CI.
  • Cold local caches for every task execution. Many tools keep local caches of things like downloaded dependencies to avoid fetching them again in the future. Since Cirrus CI always uses fresh VMs and containers, such local caches will always be empty. The performance impact of empty local caches can be avoided by using Cirrus CI features like the built-in caching mechanism. Some tools, like Gradle, can even take advantage of the built-in HTTP cache!
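
As an illustration, the built-in caching mechanism can persist a local Gradle cache between tasks. Here is a minimal .cirrus.yml sketch (the folder is Gradle's default cache location; the build_script command is an assumption about your project):

```yaml
task:
  # Named cache instruction: the folder is archived after the task
  # and restored into fresh containers/VMs for subsequent tasks.
  gradle_cache:
    folder: ~/.gradle/caches
  build_script: gradle build
```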

Please check the list of currently supported cloud compute services below.

Google Cloud

Cirrus CI can schedule tasks on several Google Cloud Compute services. In order to interact with Google Cloud APIs Cirrus CI needs permissions. Creating a service account is a common way to safely give granular access to parts of Google Cloud Projects.

Isolation

We recommend creating a separate Google Cloud project for running CI builds to make sure tests are isolated from production data. A separate project also shows how much money is spent on CI and how efficient Cirrus CI is 😉

Once you have a Google Cloud project for Cirrus CI please create a service account by running the following command:

gcloud iam service-accounts create cirrus-ci \
    --project $PROJECT_ID

Depending on the compute service, Cirrus CI will need different roles assigned to the service account. But Cirrus CI will always need permission to act as the service account and to view monitoring:

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:cirrus-ci@$PROJECT_ID.iam.gserviceaccount.com \
    --role roles/iam.serviceAccountUser

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:cirrus-ci@$PROJECT_ID.iam.gserviceaccount.com \
    --role roles/monitoring.viewer

Cirrus CI uses Google Cloud Storage to store logs and caches. In order to give Google Cloud Storage permissions to the service account please run:

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:cirrus-ci@$PROJECT_ID.iam.gserviceaccount.com \
    --role roles/storage.admin

Default Logs Retention Period

By default, Cirrus CI will store logs and caches for 90 days, but this can be changed by manually configuring a lifecycle rule for the Google Cloud Storage bucket that Cirrus CI is using.
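
For instance, to shorten retention to 30 days, a lifecycle configuration like the following could be applied with `gsutil lifecycle set lifecycle.json gs://$BUCKET` (the bucket name here is a placeholder for the bucket Cirrus CI created in your project):

```json
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 30}
    }
  ]
}
```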

Now we have a service account that Cirrus CI can use! It's time to let Cirrus CI know about that fact by securely providing a private key for the service account. A private key can be created by running the following command:

gcloud iam service-accounts keys create service-account-credentials.json \
  --iam-account cirrus-ci@$PROJECT_ID.iam.gserviceaccount.com

Finally, create an encrypted variable from the contents of the service-account-credentials.json file and add it to the top of your .cirrus.yml file:

gcp_credentials: ENCRYPTED[qwerty239abc]

Now Cirrus CI can store logs and caches in Google Cloud Storage for tasks scheduled on either GCE or GKE. Please check the following sections for additional instructions about Compute Engine or Kubernetes Engine.

Supported Regions

Cirrus CI currently supports the following GCP regions: us-central1, us-east1, us-east4, us-west1, us-west2, europe-west1, europe-west2, europe-west3 and europe-west4.

Please contact support if you are interested in support for other regions.

Compute Engine

In order to schedule tasks on Google Compute Engine, the service account that Cirrus CI operates via should have the necessary role assigned. This can be done by running a gcloud command:

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:cirrus-ci@$PROJECT_ID.iam.gserviceaccount.com \
    --role roles/compute.admin

Now tasks can be scheduled on Compute Engine within the $PROJECT_ID project by configuring gce_instance like this:

gce_instance:
  image_project: ubuntu-os-cloud
  image_name: ubuntu-1904-disco-v20190417
  zone: us-central1-a
  cpu: 8
  memory: 40GB
  disk: 60
  use_ssd: true # defaults to false

task:
  script: ./run-ci.sh

Specify Machine Type

It is possible to specify a predefined machine type via type field:

gce_instance:
  image_project: ubuntu-os-cloud
  image_name: ubuntu-1604-xenial-v20171121a
  zone: us-central1-a
  type: n1-standard-8
  disk: 20

Specify Image Family

It's also possible to specify an image family instead of a concrete image name. Simply use the image_family field instead of image_name:

gce_instance:
  image_project: ubuntu-os-cloud
  image_family: ubuntu-1904

Custom VM images

Building an immutable VM image with all necessary software pre-configured is a well-known best practice with many benefits. It ensures that the environment a task executes in is always the same, and that no time is wasted on redundant work like installing the same packages over and over again for every single task.

There are many ways to create a custom image for Google Compute Engine. Please refer to the official documentation. At Cirrus Labs we use Packer to automate building such images. An example of how we use it can be found in our public GitHub repository.

Windows Support

Google Compute Engine supports Windows images, and Cirrus CI can take full advantage of them by explicitly specifying the platform of the image like this:

gce_instance:
  image_project: windows-cloud
  image_name: windows-server-2016-dc-core-v20170913
  platform: windows
  zone: us-central1-a
  cpu: 8
  memory: 40GB
  disk: 20

task:
  script: run-ci.bat

FreeBSD Support

Google Compute Engine supports FreeBSD images, and Cirrus CI can take full advantage of them by explicitly specifying the platform of the image like this:

gce_instance:
  image_project: freebsd-org-cloud-dev
  image_family: freebsd-12-1
  platform: FreeBSD
  zone: us-central1-a
  cpu: 8
  memory: 40GB
  disk: 50

task:
  script: printenv

Docker Containers on Dedicated VMs

It is possible to run a container directly on a Compute Engine VM with pre-installed Docker. Simply use gce_container to specify a VM image and a Docker container to execute on the VM (gce_container simply extends gce_instance definition with a few additional fields):

gce_container:
  image_project: my-project
  image_name: my-custom-ubuntu-with-docker
  container: golang:latest
  additional_containers:
    - name: redis
      image: redis:3.2-alpine
      port: 6379

Note that gce_container always runs containers in privileged mode.

If your VM image has nested virtualization enabled, it's possible to use KVM from the container by specifying the enable_nested_virtualization flag. Here is an example of using a KVM-enabled container to run a hardware-accelerated Android emulator:

gce_container:
  image_project: my-project
  image_name: my-custom-ubuntu-with-docker-and-KVM
  container: cirrusci/android-sdk:29
  enable_nested_virtualization: true
  accel_check_script:
    - sudo chown cirrus:cirrus /dev/kvm
    - emulator -accel-check

Instance Scopes

By default, Cirrus CI creates Google Compute instances without any scopes, so an instance can't access Google Cloud Storage, for example. But sometimes it can be useful to give an instance some permissions by using the scopes key of gce_instance. For example, if a particular task builds Docker images and then pushes them to Container Registry, its configuration file can look something like this:

gcp_credentials: ENCRYPTED[qwerty239abc]

gce_instance:
  image_project: my-project
  image_name: my-custom-image-with-docker
  zone: us-central1-a
  cpu: 8
  memory: 40GB
  disk: 20

test_task:
  test_script: ./scripts/test.sh

push_docker_task:
  depends_on: test
  only_if: $CIRRUS_BRANCH == "master"
  gce_instance:
    scopes: cloud-platform
  push_script: ./scripts/push_docker.sh

Preemptible Instances

Cirrus CI can schedule preemptible instances, with all their price benefits and stability risks. Sometimes the risk of an instance being preempted at any time can be tolerated. For example, gce_instance can be configured to schedule a preemptible instance for non-master branches like this:

gce_instance:
  image_project: my-project
  image_name: my-custom-image-with-docker
  zone: us-central1-a
  preemptible: $CIRRUS_BRANCH != "master"

Kubernetes Engine

Scheduling tasks on Compute Engine has one big disadvantage: waiting for an instance to start, which usually takes around a minute. One minute is not that long, but it can't compete with the few hundred milliseconds it takes a container cluster on GKE to start a container.

To start scheduling tasks on a container cluster, we first need to create one using gcloud. Here is a recommended configuration of a cluster that is very similar to what is used for the managed container instances. We recommend creating a cluster with two node pools:

  • default-pool with a single node and no autoscaling, for the system pods required by Kubernetes.
  • workers-pool that uses compute-optimized instances and SSD storage for better performance. This pool will also be able to scale down to 0 when there are no tasks to run.

gcloud container clusters create cirrus-ci-cluster \
  --autoscaling-profile optimize-utilization \
  --zone us-central1-a \
  --num-nodes "1" \
  --machine-type "e2-standard-2" \
  --disk-type "pd-standard" --disk-size "100"

gcloud container node-pools create "workers-pool" \
  --cluster cirrus-ci-cluster \
  --zone "us-central1-a" \
  --num-nodes "0" \
  --enable-autoscaling --min-nodes "0" --max-nodes "8" \
  --node-taints dedicated=system:PreferNoSchedule \
  --machine-type "c2-standard-30" \
  --disk-type "pd-ssd" --disk-size "500"

The service account that Cirrus CI operates via should be assigned the container.admin role, which allows administrating GKE clusters:

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:cirrus-ci@$PROJECT_ID.iam.gserviceaccount.com \
    --role roles/container.admin

Done! Now, after creating the cirrus-ci-cluster cluster and configuring gcp_credentials, tasks can be scheduled on the newly created cluster like this:

gcp_credentials: ENCRYPTED[qwerty239abc]

gke_container:
  image: gradle:jdk8
  cluster_name: cirrus-ci-cluster
  location: us-central1-a # cluster zone or region for multi-zone clusters
  namespace: default # Kubernetes namespace to create pods in
  cpu: 6
  memory: 24GB

Using in-memory disk

By default, Cirrus CI mounts an emptyDir volume into the /tmp path to protect the pod from unnecessary eviction by the autoscaler. It is possible to switch the emptyDir's medium to in-memory tmpfs storage by setting the use_in_memory_disk field of gke_container to true, or to any other expression that uses environment variables.
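
For example (cluster details reused from the example above):

```yaml
gke_container:
  image: gradle:jdk8
  cluster_name: cirrus-ci-cluster
  location: us-central1-a
  use_in_memory_disk: true # mount /tmp as tmpfs instead of a disk-backed emptyDir
```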

Running privileged containers

You can run privileged containers on your private GKE cluster by setting the privileged field of gke_container to true, or to any other expression that uses environment variables. The privileged field is also available for any additional container.

Here is an example of how to run Docker-in-Docker:

gke_container:
  image: my-docker-client:latest
  cluster_name: my-gke-cluster
  location: us-west1-c
  namespace: cirrus-ci
  additional_containers:
    - name: docker
      image: docker:dind
      privileged: true
      cpu: 2
      memory: 6G
      port: 2375

For a full example of leveraging this to do Docker-in-Docker builds on Kubernetes, check out Docker Builds on Kubernetes.

Greedy instances

Greedy instances can potentially use more CPU resources if available. Please check this blog post for more details.

AWS

Cirrus CI can schedule tasks on several AWS services. In order to interact with AWS APIs, Cirrus CI needs permissions. Creating an IAM user for programmatic access is a common way to safely grant granular access to parts of your AWS account.

Once you have created a user for Cirrus CI, you'll need to provide the key id and the access key itself. In order to do so, please create an encrypted variable with the following content:

[default]
aws_access_key_id=...
aws_secret_access_key=...

Then you'll be able to use the encrypted variable in your .cirrus.yml file like this:

aws_credentials: ENCRYPTED[...]

task:  
  ec2_instance:
    ...

task:  
  eks_instance:
    ...

Permissions

The user that Cirrus CI uses for orchestrating tasks on AWS should at least have access to S3 in order to store logs and cache artifacts. Here is the list of actions that Cirrus CI requires to store logs and artifacts:

"Action": [
  "s3:CreateBucket",
  "s3:GetObject",
  "s3:PutObject",
  "s3:DeleteObject",
  "s3:PutLifecycleConfiguration"
]
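
These actions would typically be granted as part of an IAM policy document attached to the user; a sketch (the wildcard Resource is illustrative and can be scoped down to the buckets Cirrus CI creates):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:PutLifecycleConfiguration"
      ],
      "Resource": "*"
    }
  ]
}
```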

EC2

In order to schedule tasks on EC2, please make sure the IAM user that Cirrus CI is using has the following permissions:

"Action": [
  "ec2:DescribeInstances",
  "ec2:RunInstances",
  "ec2:TerminateInstances"
]

Now tasks can be scheduled on EC2 by configuring ec2_instance like this:

task:
  ec2_instance:
    image: ami-03790f6959fc34ef3
    type: t2.micro
    region: us-east-1
  script: ./run-ci.sh

EKS

Please follow the instructions on how to create an EKS cluster and add worker nodes to it. And don't forget to add the necessary permissions for the IAM user that Cirrus CI is using:

"Action": [
  "iam:PassRole",
  "eks:DescribeCluster",
  "eks:CreateCluster",
  "eks:DeleteCluster",
  "eks:UpdateClusterVersion"
]

To verify that Cirrus CI will be able to communicate with your cluster, log in locally as the user that Cirrus CI acts as and make sure you can successfully run the following commands and see your worker nodes up and running:

$: aws sts get-caller-identity
{
    "UserId": "...",
    "Account": "...",
    "Arn": "USER_USED_BY_CIRRUS_CI"
}
$: aws eks --region $REGION update-kubeconfig --name $CLUSTER_NAME
$: kubectl get nodes

EKS Access Denied

If you have an issue accessing your EKS cluster via kubectl, most likely you did not create the cluster as the user that Cirrus CI is using. The easiest way to do so is to create the cluster through the AWS CLI with the following command:

$: aws sts get-caller-identity
{
    "UserId": "...",
    "Account": "...",
    "Arn": "USER_USED_BY_CIRRUS_CI"
}
$: aws eks --region $REGION \
    create-cluster --name cirrus-ci \
    --role-arn ... \
    --resources-vpc-config subnetIds=...,securityGroupIds=...

Now tasks can be scheduled on EKS by configuring eks_container like this:

task:
  eks_container:
    image: node:latest
    region: us-east-1
    cluster_name: cirrus-ci
  script: ./run-ci.sh

S3 Access for Caching

Please add the AmazonS3FullAccess policy to the role used for the creation of EKS workers (the same role you put in aws-auth-cm.yaml when you enabled worker nodes to join the cluster).

Greedy instances

Greedy instances can potentially use more CPU resources if available. Please check this blog post for more details.

Azure

Cirrus CI can schedule tasks on several Azure services. In order to interact with Azure APIs, Cirrus CI needs permissions. First, please choose a subscription you want to use for scheduling CI tasks. Navigate to the Subscriptions blade within the Azure Portal and save the $SUBSCRIPTION_ID that we'll use below for setting up a service principal.

Creating a service principal is a common way to safely give granular access to parts of Azure:

az ad sp create-for-rbac --name CirrusCI --sdk-auth \
  --scopes "/subscriptions/$SUBSCRIPTION_ID"

The command above will create a new service principal and print something like:

{
  "clientId": "...",
  "clientSecret": "...",
  "subscriptionId": "...",
  "tenantId": "...",
  ...
}

Please also remember clientId from the JSON as $CIRRUS_CLIENT_ID. It will be used later for configuring blob storage access.

Please create an encrypted variable from this output and add it to the top of .cirrus.yml file:

azure_credentials: ENCRYPTED[qwerty239abc]

You also need to create a resource group that Cirrus CI will use for scheduling tasks:

az group create --location eastus --name CirrusCI

Please also allow the newly created CirrusCI principal to access blob storage in order to manage logs and caches:

az role assignment create \
    --role "Storage Blob Data Contributor" \
    --assignee $CIRRUS_CLIENT_ID \
    --scope "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/CirrusCI"

Now Cirrus CI can interact with Azure APIs.

Azure Container Instances

Azure Container Instances (ACI) is an ideal candidate for running modern CI workloads. ACI lets you simply run Linux and Windows containers without thinking about the underlying infrastructure.

Once azure_credentials is configured as described above, tasks can be scheduled on ACI by configuring azure_container_instance like this:

azure_container_instance:
  image: cirrusci/windowsservercore:2016
  resource_group: CirrusCI
  region: westus
  platform: windows
  cpu: 4
  memory: 12G

About Docker Images to use with ACI

Linux-based images are usually pretty small and don't require much tweaking. For Windows containers, ACI recommends following a few basic tips in order to reduce startup time.

Oracle Cloud

Cirrus CI can schedule tasks on several Oracle Cloud services. In order to interact with OCI APIs, Cirrus CI needs permissions. Please create a user that Cirrus CI will act on behalf of:

oci iam user create --name cirrus --description "Cirrus CI Orchestrator"

Please configure the cirrus user to be able to access storage, launch instances, and access Kubernetes clusters. The easiest way is to add the cirrus user to the Administrators group, but this is not as secure as a granular access configuration.

By default, for every repository you start using Cirrus CI with, Cirrus will create a bucket with a 90-day lifetime policy. In order to allow Cirrus to configure lifecycle policies, please add the following policy as described in the documentation. Here is an example of the policy for the us-ashburn-1 region:

Allow service objectstorage-us-ashburn-1 to manage object-family in tenancy

Once you have created and configured the cirrus user, you'll need to provide its API key. When you generate an API key, you should get a *.pem file with the private key that will be used by Cirrus CI.

Normally your config file for local use looks like this:

[DEFAULT]
user=ocid1.user.oc1..XXX
fingerprint=11:22:...:99
tenancy=ocid1.tenancy.oc1..YYY
region=us-ashburn-1
key_file=<path to your *.pem private keyfile>

For Cirrus to use it, you'll need to use a different format:

<user value>
<fingerprint value>
<tenancy value>
<region value>
<content of your *.pem private keyfile>
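
The reordering can be scripted; here is an illustrative sketch (the oci_profile_to_cirrus helper is hypothetical, not part of any official tooling):

```python
import configparser

def oci_profile_to_cirrus(config_text, profile="DEFAULT"):
    """Turn an OCI CLI config profile into the five-line format shown above:
    user, fingerprint, tenancy, region, then the private key itself."""
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    section = parser[profile]
    # Inline the key file referenced by key_file instead of its path.
    with open(section["key_file"]) as key_file:
        key = key_file.read().rstrip("\n")
    return "\n".join([section["user"], section["fingerprint"],
                      section["tenancy"], section["region"], key])
```

Running it against your OCI CLI config and pasting the output into the encrypted-variable dialog produces the credentials in the format Cirrus expects.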

This way you'll be able to create a single encrypted variable with the contents of the Cirrus-specific credentials above:

oracle_credentials: ENCRYPTED[qwerty239abc]

Kubernetes Cluster

Please create a Kubernetes cluster and make sure the Kubernetes API Public Endpoint is enabled for the cluster so Cirrus can access it. Then copy the cluster id, which is used to configure oke_container:

task:
  oke_container:
    cluster_id: ocid1.cluster.oc1.iad.xxxxxx
    image: golang:latest
  script: ./run-ci.sh

Ampere A1 Support

The cluster can utilize Oracle's Ampere A1 Arm instances in order to run arm64 CI workloads!

Greedy instances

Greedy instances can potentially use more CPU resources if available. Please check this blog post for more details.