Automated deployments of a Phoenix application to Google Cloud Run and Cloud SQL

Published: Friday, 26 Jan 2024

Updated: Friday, 02 Feb 2024

This post explains how to set up automated deployments and migrations for a Phoenix project on Google Cloud’s managed services using the Google Cloud CLI (mostly). The Phoenix app will be hosted on Google Cloud Run and the PostgreSQL database will be hosted on Cloud SQL. Deployments will be automatically triggered when changes are pushed to the main branch of your git repository (GitHub specifically in this post).

This post allows you to input your own specific values throughout the journey to make following along considerably easier. Look out for ⭐ INPUT.

At a high level we will:

  1. Prepare your application
  2. Create a GCP project
  3. Enable the services we need
  4. Create an Artifact Registry repository to store our compiled app
  5. Create a service account
  6. Create a Cloud SQL database instance
  7. Create environment variables in Secret Manager
  8. Connect a GitHub repository to Cloud Build
  9. Create a Cloud Build trigger
  10. Create a build configuration file
  11. Trigger a deploy to Cloud Run
  12. (OPTIONAL) psql into Cloud SQL


Prerequisites

  • A Google account
  • The Google Cloud CLI installed and logged in
  • A billing account set up on your Google Cloud organisation

Note: This was written using macOS on an M1 MacBook. Some commands and steps may require variations if you are on a different OS/architecture.

1. Prepare your application

If you don’t have an app ready to go but want to follow along, I suggest generating a basic project with the following series of commands:

⭐ INPUT your product name:
mix phx.new insight
cd insight
# create something for us to test DB interaction with e.g.,
mix phx.gen.live Products Product products name brand
# remember to update lib/insight_web/router.ex

In your existing app (or newly generated app), generate a Dockerfile and other useful release helpers with the following command:

mix phx.gen.release --docker

Next we update our runtime config to delete the production database environment variables because we will leverage PostgreSQL environment variables (e.g., PGHOST, PGDATABASE, PGUSER, PGPASSWORD), which is conveniently what Postgrex.start_link/1 defaults to under the hood if you do not specify database connection details in your code.

# config/runtime.exs
if config_env() == :prod do
  # removed database_url block

  maybe_ipv6 = if System.get_env("ECTO_IPV6") in ~w(true 1), do: [:inet6], else: []

  config :insight, Insight.Repo,
    # removed url: database_url
    pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10"),
    socket_options: maybe_ipv6

  # nothing changed beyond this

Cloud Run will automatically generate a semi-randomised URL for your app once deployed. It will be in the form of https://[SERVICE NAME]-[RANDOM NUMBERS].[REGION].run.app. To prevent infinite reloading behaviour in LiveView we need to update config/prod.exs to allow-list the Cloud Run origin.

# config/prod.exs
config :insight, InsightWeb.Endpoint,
  cache_static_manifest: "priv/static/cache_manifest.json",
  check_origin: ["https://*"] # add this
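The "https://*" wildcard accepts any HTTPS origin, which is convenient while you don’t yet know the generated URL. Once the service is deployed you can tighten this to the exact origin; the URL below is a hypothetical example:

```elixir
# config/prod.exs
# replace the wildcard with your actual Cloud Run URL once you know it
config :insight, InsightWeb.Endpoint,
  cache_static_manifest: "priv/static/cache_manifest.json",
  check_origin: ["https://insight-dev-123456789.australia-southeast1.run.app"]
```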

2. Create a GCP project

Create a new project with the name of your product/service. Please note that project IDs on GCP must be globally unique.

⭐ INPUT your GCP Project ID:
gcloud projects create insight-098765

Set the Google Cloud CLI to use the newly created project.

gcloud config set project insight-098765

Find the billing account you set up (refer to prerequisites).

gcloud billing accounts list

Link the billing account to the new project.

⭐ INPUT your Billing ID:
gcloud billing projects link insight-098765 --billing-account 000000-000000-000000

3. Enable the services we need

Google Cloud disables all cloud products/services on a new project by default so we will need to enable all the services we will use for this deployment: Artifact Registry, Cloud Build, Cloud SQL, Secret Manager, Cloud Run, and the IAM API.

The following command will enable all the services we need.

gcloud services enable \
  artifactregistry.googleapis.com \
  cloudbuild.googleapis.com \
  sqladmin.googleapis.com \
  secretmanager.googleapis.com \
  run.googleapis.com \
  iam.googleapis.com

4. Create an Artifact Registry repository to store our compiled app

Create a new repository with an identifier (I generally align this with my Elixir app name), specifying the format and region.

⭐ INPUT your desired GCP Region:
gcloud artifacts repositories create insight \
	--repository-format=docker \
	--location=australia-southeast1 \
	--description="insight application"

Once that is created we need to retrieve the repository’s Registry URL with the following command:

gcloud artifacts repositories describe insight \
  --location australia-southeast1

It will look something like australia-southeast1-docker.pkg.dev/insight-098765/insight.

⭐ INPUT the full Registry URL:

We won’t use these until later, but let’s define what we want to call our compiled artifact:

⭐ INPUT your desired compiled artifact name:

Note: Later on your compiled image will look something like australia-southeast1-docker.pkg.dev/insight-098765/insight/insight:latest. At build time we tag it with latest for easy reference.
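If you want to sanity-check the registry before wiring up Cloud Build, you can build and push an image by hand. This sketch assumes the example project, repository, and app names used in this post; substitute your own registry URL:

```shell
# let the local docker client authenticate against Artifact Registry
gcloud auth configure-docker australia-southeast1-docker.pkg.dev

# build the release image from the generated Dockerfile, then push it
docker build -t australia-southeast1-docker.pkg.dev/insight-098765/insight/insight:latest .
docker push australia-southeast1-docker.pkg.dev/insight-098765/insight/insight:latest
```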

5. Create a service account

This service account will own our Cloud Run app and will need various permissions to services and secrets.

Create the service account with a useful identifier.

⭐ INPUT your desired Service Account name:
gcloud iam service-accounts create insight-sa \
  --description="insight app service account"

Service accounts are referenced using a fully qualified email address, not just a name. To retrieve the full email address for the service account we just created run:

gcloud iam service-accounts list

It will look something like insight-sa@insight-098765.iam.gserviceaccount.com.

⭐ INPUT your full Service Account email:

We will also provide some IAM permissions to the Service Account that will be needed later:

  • roles/logging.logWriter permissions are required by Cloud Build
  • roles/cloudsql.client permissions are required to interact with Cloud SQL
  • roles/artifactregistry.writer permissions are required to read/write to Artifact Registry
  • roles/run.developer permissions are required to deploy on Cloud Run
  • roles/iam.serviceAccountUser permissions are required to allow the Service Account to “act as” another service account and assign ownership of services (such as Cloud Run). In this case the account is acting as itself, but it is still required despite being self-referential

These can be added with the following commands:

gcloud projects add-iam-policy-binding insight-098765 \
  --member="serviceAccount:insight-sa@insight-098765.iam.gserviceaccount.com" \
  --role="roles/logging.logWriter" \
  --condition None
gcloud projects add-iam-policy-binding insight-098765 \
  --member="serviceAccount:insight-sa@insight-098765.iam.gserviceaccount.com" \
  --role="roles/cloudsql.client" \
  --condition None
gcloud projects add-iam-policy-binding insight-098765 \
  --member="serviceAccount:insight-sa@insight-098765.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.writer" \
  --condition None

gcloud projects add-iam-policy-binding insight-098765 \
  --member="serviceAccount:insight-sa@insight-098765.iam.gserviceaccount.com" \
  --role="roles/run.developer" \
  --condition None
gcloud iam service-accounts add-iam-policy-binding "insight-sa@insight-098765.iam.gserviceaccount.com" \
    --member "serviceAccount:insight-sa@insight-098765.iam.gserviceaccount.com" \
    --role "roles/iam.serviceAccountUser"

6. Create a Cloud SQL database instance

Create a new PostgreSQL instance specifying your desired region, type of DB, and compute tier. We’ve used the cheapest tier for this example.

⭐ INPUT your desired database instance name:
gcloud sql instances create insight \
  --region=australia-southeast1 \
  --database-version=POSTGRES_14 \
  --tier=db-f1-micro
Now we will create a user for our application to use when interacting with the database.

⭐ INPUT your desired database user name: ⭐ INPUT your desired database user's password:
gcloud sql users create insight_admin \
  --instance=insight \
  --password=pa55w0rd
Next we will create our database.

⭐ INPUT your desired database name:
gcloud sql databases create insight_dev --instance insight

We also need to retrieve our instance connectionName for later:

gcloud sql instances describe insight --format='value(connectionName)'

The connection name will look something like PROJECT:REGION:INSTANCE-NAME.

⭐ INPUT the full Connection Name:

7. Create environment variables in Secret Manager

Now we need to create the secrets on GCP that our Phoenix app will use (on Cloud Run). We will create these in Secret Manager:

  • DEV_SECRET_KEY_BASE (mapped to SECRET_KEY_BASE in deploy step)
  • DB_USER (mapped to PGUSER in deploy step)
  • DB_PASS (mapped to PGPASSWORD in deploy step)
  • DB_HOST (mapped to PGHOST in deploy step)

Create each of these txt files and populate with your relevant secrets:

  • db-user.txt contains insight_admin
  • db-pass.txt contains pa55w0rd
  • db-host.txt contains /cloudsql/insight-098765:australia-southeast1:insight (this is your connection name prepended with /cloudsql/)

Once the txt files are created, run each of the following commands to create the secrets:

# string payload, pipe the secret value into the gcloud command
mix phx.gen.secret | gcloud secrets create DEV_SECRET_KEY_BASE --data-file=-

# file payload, considered the safer way
gcloud secrets create DB_USER --data-file=db-user.txt
gcloud secrets create DB_PASS --data-file=db-pass.txt
gcloud secrets create DB_HOST --data-file=db-host.txt


  • The name of a secret in Secret Manager does not have to match the application environment variable name because there is a mapping exercise during the final deployment step.
  • Do not commit the txt files to your git repository
  • Secret Manager expects a file (or string) payload. If sending a string, --data-file must be set to -. I’ve used both methods above for demonstration purposes
  • You can retrieve the value of the secrets by running either of the following:
    • gcloud secrets versions access 1 --secret="DB_HOST"
    • gcloud secrets versions access latest --secret="DEV_SECRET_KEY_BASE"
  • Google encourages use of data files for secrets instead of sending strings directly on the command line. This is because direct command line creations are stored in plaintext in your processes and shell history

Next we need to provide the Service Account with permission to access all of these secrets.

gcloud secrets add-iam-policy-binding DEV_SECRET_KEY_BASE \
  --member="serviceAccount:insight-sa@insight-098765.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"

gcloud secrets add-iam-policy-binding DB_USER \
  --member="serviceAccount:insight-sa@insight-098765.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"

gcloud secrets add-iam-policy-binding DB_PASS \
  --member="serviceAccount:insight-sa@insight-098765.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"

gcloud secrets add-iam-policy-binding DB_HOST \
  --member="serviceAccount:insight-sa@insight-098765.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"

We also need to retrieve the paths for the DB_USER and DB_PASS for use later:

gcloud secrets describe DB_USER
gcloud secrets describe DB_PASS
⭐ INPUT the path to the DB_USER: ⭐ INPUT the path to the DB_PASS:

The paths will look something like projects/123456789/secrets/DB_USER; the project number will differ for your project.

8. Connect a GitHub repository to Cloud Build

This step and the next step are easier via Google Cloud Console > Cloud Build > Repositories.

Click “CREATE HOST CONNECTION” and populate the fields (E.g., region australia-southeast1).

It will then take you through authentication with GitHub. You will have an option to provide access to all of your GitHub repositories or just a selection. Pick whatever makes sense for your needs.

After you have successfully created a connection, click “LINK A REPOSITORY”. Select the connection we just created, and your Phoenix app repository. You can accept the generated repository name.

9. Create a Cloud Build trigger

Now we create a trigger via Google Cloud Console > Cloud Build > Triggers.

Click “CREATE TRIGGER” and populate with your desired details:

  • Name: Can be anything (e.g., main-trunk)
  • Region: australia-southeast1
  • Event: Push to a branch
  • Source: 2nd gen
  • Repository: Select the one you linked in prior step
  • Branch: Will auto populate with a regular expression to match the main branch ^main$
  • Type: Cloud Build configuration file
  • Location: Repository
  • Cloud Build configuration file location: /cloudbuild.yaml
  • Service account: Select the service account created earlier (e.g., insight-sa@insight-098765.iam.gserviceaccount.com)

10. Create a build configuration file

In your Phoenix project’s root directory create a cloudbuild.yaml file and populate it with the below codeblock.

⭐ INPUT your desired Cloud Run service name:
steps:
- name: 'gcr.io/cloud-builders/docker'
  id: Build and Push Docker Image
  script: |
    docker build -t ${_IMAGE_NAME}:latest .
    docker push ${_IMAGE_NAME}:latest

# runs the proxy as a detached container named "cloudsql" on the shared
# cloudbuild network so later steps can reach it by hostname
- name: 'gcr.io/cloud-builders/docker'
  id: Start Cloud SQL Proxy to Postgres
  args: [
    'run', '-d',
    '--network=cloudbuild',
    '--name=cloudsql',
    'gcr.io/cloud-sql-connectors/cloud-sql-proxy:latest',
    '--address', '0.0.0.0',
    '${_INSTANCE_CONNECTION_NAME}'
  ]

- name: 'postgres'
  id: Wait for Cloud SQL Proxy to be available
  script: |
    until pg_isready -h cloudsql ; do sleep 1; done

- name: ${_IMAGE_NAME}:latest
  id: Run migrations
  env:
  - MIX_ENV=prod
  - SECRET_KEY_BASE=fake-key
  - PGHOST=cloudsql
  - PGDATABASE=${_DATABASE_NAME}
  secretEnv: ['PGUSER', 'PGPASSWORD']
  script: |
    /app/bin/insight eval "Insight.Release.migrate"

- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  id: Deploy to Cloud Run
  script: |
    gcloud run deploy ${_SERVICE_NAME} \
      --image ${_IMAGE_NAME}:latest \
      --region ${LOCATION} \
      --platform managed \
      --allow-unauthenticated \
      --set-secrets=SECRET_KEY_BASE=DEV_SECRET_KEY_BASE:latest \
      --set-secrets=PGHOST=DB_HOST:latest \
      --set-secrets=PGUSER=DB_USER:latest \
      --set-secrets=PGPASSWORD=DB_PASS:latest \
      --set-env-vars=PGDATABASE=${_DATABASE_NAME} \
      --add-cloudsql-instances=${_INSTANCE_CONNECTION_NAME} \
      --service-account=insight-sa@insight-098765.iam.gserviceaccount.com

availableSecrets:
  secretManager:
  - versionName: projects/123456789/secrets/DB_USER/versions/latest
    env: 'PGUSER'
  - versionName: projects/123456789/secrets/DB_PASS/versions/latest
    env: 'PGPASSWORD'

images:
  - ${_IMAGE_NAME}:latest

options:
  automapSubstitutions: true

substitutions:
  _IMAGE_NAME: australia-southeast1-docker.pkg.dev/insight-098765/insight/insight
  _DATABASE_NAME: insight_dev
  _INSTANCE_CONNECTION_NAME: insight-098765:australia-southeast1:insight
  _SERVICE_NAME: insight-dev


  • To summarise the above script it:
    • Builds our application image and pushes it to Artifact Registry
    • Starts a Cloud SQL Proxy within the Cloud Build environment
    • Waits to ensure the proxy is functional
    • Runs the up migrations against the database
      • Despite using MIX_ENV=prod we are still interacting with insight_dev via the PGDATABASE environment variable
      • The migrations are run using the scripts generated by mix phx.gen.release --docker
      • Uses our freshly built image and utilises the PostgreSQL environment variables (PGHOST, PGDATABASE, PGUSER, PGPASSWORD)
    • Deploys our Cloud Run service
      • Uses our freshly built image
      • Maps the secrets and environment variables
      • Assigns our service account as the owner
      • Links our Cloud SQL instance
  • We make use of substitution variables to make the file easier to work with. Because we are using a mix of script: and args: approaches we need to set the automapSubstitutions: true option, otherwise our builds will fail
  • To learn more about the elements of the above script refer to the cloudbuild.yaml structure docs.
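For reference, the migration entry point invoked in the "Run migrations" step comes from the lib/insight/release.ex file generated by mix phx.gen.release. Trimmed to the migrate/0 path, it looks roughly like this:

```elixir
defmodule Insight.Release do
  @moduledoc """
  Tasks that can be run inside a release, e.g.
  `/app/bin/insight eval "Insight.Release.migrate"`.
  """
  @app :insight

  def migrate do
    load_app()

    # run all pending up migrations for every configured repo
    for repo <- Application.fetch_env!(@app, :ecto_repos) do
      {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end

  defp load_app do
    Application.load(@app)
  end
end
```

Because the release connects via Postgrex's PG* environment variable defaults, this works unchanged against the proxied database in Cloud Build.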

11. Trigger a deploy to Cloud Run

Commit the cloudbuild.yaml file (or any other change) and push it to your GitHub repository and watch it build. You can manually trigger builds via Google Cloud Console > Cloud Build > Triggers.

You can view previous builds and stream in-progress builds on the Cloud Build History tab.

You should now have a fully deployed application on GCP!

If at any time you need to retrieve details of this service you can do so with the following command

gcloud run services list
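To fetch just the public URL (assuming the insight-dev service name and region used throughout this post):

```shell
gcloud run services describe insight-dev \
  --region australia-southeast1 \
  --format='value(status.url)'
```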

12. (OPTIONAL) psql into Cloud SQL

If we want to remotely connect to our Cloud SQL database we can use a tool called Cloud SQL Proxy. This allows us to securely connect via API to our database using our Google Cloud SDK credentials.

Download and install the Cloud SQL Proxy. Follow the instructions at the link.

Cloud SQL Proxy utilises your Google Cloud SDK credentials for auth. You can set them with:

gcloud auth application-default login

Start the proxy using our connectionName. The port must not already be in use.

./cloud-sql-proxy --port 54321 insight-098765:australia-southeast1:insight

If successful you will see output similar to:

Authorizing with Application Default Credentials
Listening on 127.0.0.1:54321

Now we can psql in!

psql "host=127.0.0.1 port=54321 sslmode=disable user=insight_admin dbname=insight_dev"
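Once connected, a quick query can confirm the migrations created your schema (the products table assumes the generator example from step 1):

```shell
psql "host=127.0.0.1 port=54321 sslmode=disable user=insight_admin dbname=insight_dev" \
  -c 'select count(*) from products;'
```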