Build and Push to ACR
This topic explains how to configure the Build and Push to ACR step in a Harness CI pipeline. Use this step to build an image and push it to Azure Container Registry (ACR).
You need:
- Access to ACR and an ACR repo where you can upload your image.
- An Azure Cloud Provider connector.
- A Harness CI pipeline with a Build stage that uses a Linux platform on a Kubernetes cluster build infrastructure.
Kubernetes cluster build infrastructure is required
The Build and Push to ACR step is supported for Linux platforms on Kubernetes cluster build infrastructures only. For other platforms and build infrastructures, use the Build and Push to Docker Registry step to push to ACR.
Root access is required
With Kubernetes cluster build infrastructures, all Build and Push steps use kaniko. This tool requires root access to build the Docker image, and it doesn't support non-root users.
If your build runs as non-root (runAsNonRoot: true), and you want to run the Build and Push step as root, you can set Run as User to 0 on the Build and Push step to use the root user for that individual step only.
If your security policy doesn't allow running as root, go to Build and push with non-root users.
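If you use the YAML editor, this corresponds to a runAsUser field on the step. The following is a sketch only; the field placement and quoting shown here are assumptions based on common Harness step specs:

```yaml
- step:
    type: BuildAndPushACR
    name: BuildAndPushACR_1
    identifier: BuildAndPushACR_1
    spec:
      connectorRef: YOUR_AZURE_CONNECTOR_ID
      repository: CONTAINER-REGISTRY-NAME.azurecr.io/IMAGE-NAME
      tags:
        - <+pipeline.sequenceId>
      # Run only this step as the root user.
      runAsUser: "0"
```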
Add a Build and Push to ACR step
In your pipeline's Build stage, add a Build and Push to ACR step and configure the settings accordingly.
Here is a YAML example of a minimum Build and Push to ACR step.
- step:
    type: BuildAndPushACR
    name: BuildAndPushACR_1
    identifier: BuildAndPushACR_1
    spec:
      connectorRef: YOUR_AZURE_CONNECTOR_ID
      repository: CONTAINER-REGISTRY-NAME.azurecr.io/IMAGE-NAME
      tags:
        - <+pipeline.sequenceId>
When you run a pipeline, you can observe the step logs on the build details page. If the Build and Push to ACR step succeeds, you can find the uploaded image on ACR.
Build and Push to ACR step settings
The Build and Push to ACR step has the following settings. Some settings are located under Optional Configuration in the visual pipeline editor.
Name
Enter a name summarizing the step's purpose. Harness automatically assigns an Id (Entity Identifier Reference) based on the Name. You can change the Id.
Azure Connector
The Harness Azure Cloud connector to use to connect to your ACR. This step supports Azure Cloud connectors that use access key authentication. This step doesn't support Azure Cloud connectors that inherit delegate credentials.
For more information about Azure connectors, including details about required permissions, go to Add a Microsoft Azure Cloud Provider connector.
Repository
The URL for the target ACR repository where you want to push your artifact. You must use this format: CONTAINER-REGISTRY-NAME.azurecr.io/IMAGE-NAME.
Subscription Id
Name or ID of an ACR subscription. This field is required for artifacts to appear in the build's Artifacts tab.
For more information, go to the Microsoft documentation about How to manage Azure subscriptions with the Azure CLI.
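In YAML, the subscription is set on the step spec. This sketch assumes the field name subscriptionId; check your pipeline's YAML editor for the exact key:

```yaml
- step:
    type: BuildAndPushACR
    name: BuildAndPushACR_1
    identifier: BuildAndPushACR_1
    spec:
      connectorRef: YOUR_AZURE_CONNECTOR_ID
      repository: CONTAINER-REGISTRY-NAME.azurecr.io/IMAGE-NAME
      # Assumed field name for the Subscription Id setting.
      subscriptionId: YOUR_SUBSCRIPTION_ID
      tags:
        - <+pipeline.sequenceId>
```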
Tags
Add Docker build tags. This is equivalent to the -t flag. Add each tag separately.
When you push an image to a repo, you tag the image so you can identify it later. For example, in one pipeline stage, you push the image, and, in a later stage, you use the image name and tag to pull it and run integration tests on it.
Harness expressions are a useful way to define tags. For example, you can use the expression <+pipeline.sequenceId> as a tag. This expression represents the incremental build identifier, such as 9. By using a variable expression, rather than a fixed value, you don't have to use the same image name every time.
For example, if you use <+pipeline.sequenceId> as a tag, after the pipeline runs, you can see the Build Id in the output, and you can see where the Build Id is used to tag your image.
Later in the pipeline, you can use the same expression to pull the tagged image, such as myrepo/myimage:<+pipeline.sequenceId>.
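For example, a later step could pull the tagged image before testing it. This is a hypothetical sketch: the repository name is a placeholder, and it assumes Docker is available in the Run step's environment:

```yaml
- step:
    type: Run
    name: pull_tagged_image
    identifier: pull_tagged_image
    spec:
      shell: Sh
      command: |
        # Pull the image tagged with this build's sequence ID
        # by the earlier Build and Push to ACR step.
        docker pull myrepo/myimage:<+pipeline.sequenceId>
```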
Optimize
Select this option to enable --snapshotMode=redo. This setting causes file metadata to be considered when creating snapshots, and it can reduce the time it takes to create snapshots. For more information, go to the kaniko documentation for the snapshotMode flag.
For information about setting other kaniko runtime flags, go to Set plugin runtime flags.
Dockerfile
The name of the Dockerfile. If you don't provide a name, Harness assumes that the Dockerfile is in the root folder of the codebase.
Context
Enter a path to a directory containing files that make up the build's context. When the pipeline runs, the build process can refer to any files found in the context. For example, a Dockerfile can use a COPY instruction to reference a file in the context.
Labels
Specify Docker object labels to add metadata to the Docker image.
Build Arguments
The Docker build-time variables. This is equivalent to the --build-arg flag.
Target
The Docker target build stage, equivalent to the --target flag, such as build-env.
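The Dockerfile, Context, Labels, Build Arguments, and Target settings map to fields on the step spec. This sketch assumes the field names dockerfile, context, labels, buildArgs, and target, and uses placeholder values throughout:

```yaml
- step:
    type: BuildAndPushACR
    name: BuildAndPushACR_1
    identifier: BuildAndPushACR_1
    spec:
      connectorRef: YOUR_AZURE_CONNECTOR_ID
      repository: CONTAINER-REGISTRY-NAME.azurecr.io/IMAGE-NAME
      tags:
        - <+pipeline.sequenceId>
      dockerfile: docker/Dockerfile   # path relative to the codebase root
      context: .                      # directory providing the build context
      labels:
        maintainer: example-team      # arbitrary metadata labels
      buildArgs:
        NODE_VERSION: "20"            # passed as --build-arg NODE_VERSION=20
      target: build-env               # passed as --target build-env
```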
Remote Cache Image
Use this setting to enable remote Docker layer caching where each Docker layer is uploaded as an image to a Docker repo you identify. If the same layer is used in later builds, Harness downloads the layer from the Docker repo. You can also specify the same Docker repo for multiple Build and Push steps, enabling these steps to share the same remote cache. This can dramatically improve build time by sharing layers across pipelines, stages, and steps.
For Remote Cache Image, enter the name of the remote cache registry and image, such as <container-registry-name>.azurecr.io/<image-name>.
The remote cache repository must be in the same account and organization as the build image. For caching to work, the entered image name must exist.
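In YAML, this sketch assumes the field name remoteCacheImage for the Remote Cache Image setting:

```yaml
- step:
    type: BuildAndPushACR
    name: BuildAndPushACR_1
    identifier: BuildAndPushACR_1
    spec:
      connectorRef: YOUR_AZURE_CONNECTOR_ID
      repository: CONTAINER-REGISTRY-NAME.azurecr.io/IMAGE-NAME
      tags:
        - <+pipeline.sequenceId>
      # Dedicated cache image in the same account and organization as the
      # build image. Point multiple Build and Push steps at the same cache
      # image to share layers across pipelines, stages, and steps.
      remoteCacheImage: CONTAINER-REGISTRY-NAME.azurecr.io/IMAGE-NAME-cache
```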
Run as User
Specify the user ID to use to run all processes in the pod if running in containers. For more information, go to Set the security context for a pod.
Because the Build and Push to ACR step requires root access, use the Run as User setting if your build runs as non-root (runAsNonRoot: true) and you can run the Build and Push to ACR step as root. To do this, set Run as User to 0 on the Build and Push to ACR step to use the root user for this individual step only.
If your security policy doesn't allow running as root, go to Build and push with non-root users.
Set Container Resources
Set maximum resource limits for the resources used by the container at runtime:
- Limit Memory: The maximum memory that the container can use. You can express memory as a plain integer or as a fixed-point number using the suffixes G or M. You can also use the power-of-two equivalents Gi and Mi. The default is 500Mi.
- Limit CPU: The maximum number of cores that the container can use. CPU limits are measured in CPU units. Fractional requests are allowed; for example, you can specify one hundred millicpu as 0.1 or 100m. The default is 400m.
For more information, go to Resource units in Kubernetes.
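In YAML, container resource limits are typically set under the step spec's resources.limits; treat this as a sketch rather than a definitive schema:

```yaml
- step:
    type: BuildAndPushACR
    name: BuildAndPushACR_1
    identifier: BuildAndPushACR_1
    spec:
      connectorRef: YOUR_AZURE_CONNECTOR_ID
      repository: CONTAINER-REGISTRY-NAME.azurecr.io/IMAGE-NAME
      tags:
        - <+pipeline.sequenceId>
      resources:
        limits:
          memory: 1Gi   # raises the 500Mi default
          cpu: "1"      # raises the 400m default
```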
Timeout
Set the timeout limit for the step. Once the timeout limit is reached, the step fails and pipeline execution continues. To set skip conditions or failure handling for steps, use the settings described in the next section.
Conditions, looping, and failure strategies
You can find the following settings on the Advanced tab in the step settings pane:
- Conditional Execution: Set conditions to determine when/if the step should run.
- Failure Strategy: Control what happens to your pipeline when a step fails.
- Use looping strategies: Define a matrix, repeat, or parallelism strategy for an individual step.
Set plugin runtime flags
Build and Push steps use plugins to complete build and push operations. With Kubernetes cluster build infrastructures, these steps use kaniko, and, with other build infrastructures, these steps use drone-docker.
These plugins have a number of additional runtime flags that you might need for certain use cases. For information about the flags, go to the kaniko plugin documentation and the drone-docker plugin documentation. Currently, Harness supports the following flags:
- expand-tag: Enable semver tagging.
- auto-tag: Enable auto-generated build tags.
- auto-tag-suffix: Auto-generated build tag suffix.
- create-repository: Creates an ECR repository.
- custom-labels: Additional arbitrary key-value labels.
- registry-mirrors: Docker registry mirrors.
- snapshot-mode: Specify snapshot mode as full, redo, or time.
- lifecycle-policy: Provide the path to a lifecycle policy file.
- repository-policy: Provide the path to a repository policy file.
- artifact-file: Harness uses this to show links to uploaded artifacts on the Artifacts tab.
- no-push: Disables pushing to the registry. Configures the Build and Push step to only build the image.
- verbosity: Set the log level as panic, fatal, error, warn, info, debug, or trace. The default is info.
- tar-path: Use this flag to save the image as a tarball at a specified path. Set this flag's value to the desired path.
- skip-tls-verify: Set to true to skip TLS verification.
- custom_dns (for drone-docker only): Provide your custom DNS address.
To set these flags in your Build and Push steps, add stage variables formatted as PLUGIN_FLAG_NAME.
For example, to set --skip-tls-verify for kaniko, add a stage variable named PLUGIN_SKIP_TLS_VERIFY and set the variable value to true.
variables:
  - name: PLUGIN_SKIP_TLS_VERIFY
    type: String
    description: ""
    required: false
    value: "true"
To set custom_dns for drone-docker, add a stage variable named PLUGIN_CUSTOM_DNS and set the variable value to your custom DNS address.
variables:
  - name: PLUGIN_CUSTOM_DNS
    type: String
    description: ""
    required: false
    value: "vvv.xxx.yyy.zzz"
Plugin runtime flags are also used to build without pushing.
Troubleshoot Build and Push steps
Go to the CI Knowledge Base for questions and issues related to building and pushing images, such as:
- What drives the Build and Push steps? What is kaniko?
- Does a kaniko build use images cached locally on the node? Can I enable caching for kaniko?
- Can I run Build and Push steps as root if my build infrastructure runs as non-root? What if my security policy doesn't allow running as root?
- Can I set kaniko and drone-docker runtime flags, such as skip-tls-verify or custom-dns?
- Can I push without building?
- Can I build without pushing?
- Is remote caching supported in Build and Push steps?
- Why doesn't the Build and Push step include the content of VOLUMES from my Dockerfile in the final image?