Surendhar Reddy

Container deployment on Google Cloud

Earlier this month, I wrote a piece about how we started moving our operations to containers on the Google Cloud Platform1. After reading it, a few friends got back to me with questions about how continuous deployment and orchestration work on the platform. I realized that I had emphasized infrastructure choices and discussed the implementation only at a high level, which left them with questions and me with an urge to document it.

So here’s my attempt to walk through the inner workings of containers on the Google Cloud Platform. I’m breaking this down into two parts to discuss build, distribution, and deployment separately.

. . .

Building and publishing images (continuously)

We used google/cloud-sdk to make things easy; it’s a Docker image with the Cloud SDK installed on a Debian-based OS image. Depending on the configuration, we can trigger Cloud Build locally and/or from the CI tool.

Cloud Build can be triggered using:

gcloud builds submit --tag <hostname>/<project-id>/<image-name>:<tag>

For the app, the command looks something like this (the project and image names below are illustrative):

gcloud builds submit --tag gcr.io/my-project/awesome-app:latest

The command above builds images from the source (Dockerfile) and submits them to the container registry. Build logs are available on the console for debugging (Console > Cloud Build > History), and all the images are stored in the container registry, backed by Cloud Storage. One thing to take note of here is the container registry host; Google offers multiple hosts2, which determine the location where your images are stored.
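To make the host choice concrete, here is a small shell sketch (a hypothetical helper; the project and image names are illustrative, not from our setup) that composes fully qualified image references for the different registry hosts:

```shell
#!/bin/bash
# Hypothetical helper: compose a fully qualified container registry image
# reference from a host, project, image name, and tag.
image_ref() {
  local host="$1" project="$2" image="$3" tag="$4"
  printf '%s/%s/%s:%s\n' "$host" "$project" "$image" "$tag"
}

image_ref gcr.io      my-project awesome-app latest   # stored in the US
image_ref eu.gcr.io   my-project awesome-app latest   # stored in the EU
image_ref asia.gcr.io my-project awesome-app latest   # stored in Asia
```

The host prefix is the only part that changes; the resulting reference is what you pass to `gcloud builds submit --tag`.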

Using Cloud Build is straightforward, but it has some limitations3 that take extra work to get around. For instance, we can’t build an image with multiple tags using the CLI alone; we have to use a configuration file (YAML)4 to define the multiple tags, and the tags can be dynamic.

Here’s a simple configuration file that creates an image with two tags: latest and _COMMIT_ID, which is derived dynamically.

  - name: ""
    id: "Build"
      ["build", "-t", "${_GOOGLE_PROJECT_ID}/${_APP_IMAGE_NAME}", "."]
  - name: ""
    id: "Tag"
    entrypoint: "/bin/bash"
        "docker tag${_GOOGLE_PROJECT_ID}/${_APP_IMAGE_NAME}:latest",
        "docker tag${_GOOGLE_PROJECT_ID}/${_APP_IMAGE_NAME}:${_COMMIT_ID}",

Configuration and substitutions can be passed as arguments to the CLI command to generate images with multiple tags.

gcloud builds submit --config=cloudbuild.yml --substitutions=_GOOGLE_PROJECT_ID=$GOOGLE_STAGING_PROJECT_ID,_APP_IMAGE_NAME=$APP_IMAGE_NAME,_COMMIT_ID=$_COMMIT_ID
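For context, here is a sketch of how those substitution values might be assembled in CI. The variable values are illustrative; the commit id would normally come from the CI environment (e.g. git rev-parse --short HEAD):

```shell
#!/bin/bash
# Illustrative values; in CI these come from project config and the
# build environment.
GOOGLE_STAGING_PROJECT_ID="my-staging-project"
APP_IMAGE_NAME="awesome-app"
COMMIT_ID="abc1234"   # e.g. from: git rev-parse --short HEAD

# Assemble the comma-separated substitutions string Cloud Build expects.
SUBSTITUTIONS="_GOOGLE_PROJECT_ID=${GOOGLE_STAGING_PROJECT_ID},_APP_IMAGE_NAME=${APP_IMAGE_NAME},_COMMIT_ID=${COMMIT_ID}"

echo "$SUBSTITUTIONS"
# gcloud builds submit --config=cloudbuild.yml --substitutions="$SUBSTITUTIONS"
```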

If that feels like too much configuration, you can always configure Docker to use gcloud by registering it as a Docker credential helper5 with gcloud auth configure-docker and continue using docker to publish images. This incurs some additional configuration on your CI, like setting up the Docker daemon and gcloud to build and publish images, but it helps retain the implementation if we ever want to port the setup to another platform.
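For reference, the credential helper registration works by adding entries like the following to Docker’s config file (typically ~/.docker/config.json), telling docker to fetch credentials from gcloud for each registry host:

```json
{
  "credHelpers": {
    "gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud",
    "asia.gcr.io": "gcloud"
  }
}
```

After that, a regular docker push to any of these hosts authenticates through gcloud transparently.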

. . .

Distributing the images

Google offers multiple solutions for deploying containers: GKE, Cloud Run, and Container-Optimized OS6.

We opted for Container-Optimized OS as it was a safe bet that wouldn’t lock us into any one vendor or solution. It improves portability, since we can replicate the setup on any system that supports the Docker runtime. Container-Optimized OS is not as powerful as a regular Linux distribution; it lacks a package manager, but it’s light and secure.

Our deployments are designed to leverage the instance templates feature on the Google platform. We create templates with app images and relevant tags during resource provisioning and use them to create new instances for distribution.

Here’s how to create an instance template with an image and associated environment variables using the gcloud CLI.

gcloud compute instance-templates create-with-container awesome-app-template \
    --container-image <container-image> \
    --machine-type n1-standard-2 \
    --tags "http-server" \
    --container-env SECRET=ssssh \
    --metadata startup-script="
    #! /bin/bash
    # Forward port 80 traffic to port 8000
    iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8000"

If you’re using Terraform, you can use the terraform-google-modules/container-vm module to generate the container declaration and create a template with it.

module "gce-container" {
  source = "terraform-google-modules/container-vm/google"
  version = "~> 2.0"

  container = {
    env = [
        name = "SECRET"
        value = "ssssh"

  metadata_startup_script = "iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8000"

resource "google_compute_instance_template" "default" {
  disk {
    source_image = module.gce-container.source_image

This will create an instance template that can be used to provision VMs. A couple of things to take note of here:

  1. Make sure the default compute service account has access to read images from the container registry.
  2. You have to write an iptables rule to map the VM’s ports to the container’s ports7.
. . .

I’m sure there are many other ways to achieve the same outcome, and it all depends on the trade-offs we make to choose one. We currently run this setup to build and distribute our images; it lets us leverage the platform’s features while keeping the infrastructure portable.

Thanks for reading.

Please feel free to reach out to me (email) if you have any questions. I’ll be happy to hear your feedback, help out, and discuss further.


  1. Our journey with Docker and Terraform on the Google cloud platform