Configuring a pipeline to deploy to Kubernetes automatically doesn’t have to be complex, but the right approach depends on your workload and how it needs to integrate with the rest of your cluster. In this post I’ll explain one of the simplest ways to automate Kubernetes deployment: using Microsoft Tye.

This post is part of a series on Continuous Deployment to Kubernetes:

Simple Workload, Simple Deployment

Microsoft Tye makes deployment to Kubernetes so simple that you only need to know how to authenticate with your cluster - Tye takes care of the rest. Previously I’ve blogged about running containers and deploying to Kubernetes with Tye, and I’ve also presented on deploying from your local machine to Kubernetes with Tye.

In this post we’ll deploy a single workload - the Digital Icebreakers web application - as a single container with no integrations with the rest of the cluster. At a high level, the pipeline needs to perform the following steps (a manual sketch of the deployment steps follows the list):

  • Install NodeJS and dotnet build tools
  • Build
  • Run tests
  • Publish into a container image
  • Authenticate to container registry
  • Push image to container registry
  • Authenticate to Kubernetes
  • Deploy to Kubernetes
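
For context, doing the deployment half of those steps by hand - without Tye - would look roughly like the sketch below, plus a hand-written Dockerfile and Kubernetes manifests (the registry, tag and manifest path are placeholders, not the real Digital Icebreakers values):

dotnet publish --configuration Release
docker build --tag <registry>/digitalicebreakers:<version> .
docker push <registry>/digitalicebreakers:<version>
kubectl apply -f <path-to-manifests>.yaml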

Microsoft Tye simplifies much of this: it will build and publish our csproj, create the Docker image and push it to the registry, and generate our Kubernetes manifests and apply them to our cluster. That’s a lot of work performed by a single command: dotnet tye deploy.
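
One thing Tye does need for a non-interactive deploy is the name of the container registry to push to. A simple way to supply it is a tye.yaml at the repository root - the registry name and project path below are illustrative, not the real Digital Icebreakers values:

name: digitalicebreakers
registry: mydockerhubuser
services:
- name: digitalicebreakers
  project: DigitalIcebreakers/DigitalIcebreakers.csproj

With the registry recorded there, dotnet tye deploy can run unattended on a build agent.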

Azure Pipelines

The listing below shows the relevant parts of the pipeline I previously used to deploy Digital Icebreakers to my Kubernetes cluster. Every commit or merge to the master branch triggers the pipeline and, provided no errors occur, the application is ultimately deployed to the cluster. The first steps will probably look familiar if you’ve configured CI/CD before: they build the code and run the tests - the continuous integration part of CI/CD.

dotnet tool restore is an important step that might be new - it installs the tools we’ve declared in our source onto the pipeline agent. The Digital Icebreakers source specifies two local dotnet tools: microsoft.tye and nbgv. Tye does the deployment heavy lifting, as mentioned above. The second tool is Nerdbank.GitVersioning, which at build time stamps version information onto our csproj based on git history. This is important because Tye tags the image and manifests with that version, which in turn ensures the Kubernetes deployment pulls the correct revision of our build artifacts.
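
If a repository doesn’t have a tool manifest yet, creating one looks like this - the Tye version is only an example, since Tye was prerelease at the time and an explicit prerelease version usually needs to be specified:

dotnet new tool-manifest
dotnet tool install microsoft.tye --version 0.4.0-alpha.20371.1
dotnet tool install nbgv

These commands write .config/dotnet-tools.json, which is exactly what dotnet tool restore reads on the pipeline agent.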

trigger:
- master
pr: none # without this, deployment will run on PRs

pool:
  vmImage: 'ubuntu-16.04'

steps:

# Build and test

- displayName: Install .NET Core
  task: UseDotNet@2
  inputs:
    packageType: sdk
    version: 3.1.201

- displayName: Install NodeJS
  task: NodeTool@0
  inputs:
    versionSpec: '12.16.1'

- displayName: Restore tools
  script: dotnet tool restore

- displayName: Restore dotnet project dependencies
  script: dotnet restore

- displayName: Run dotnet tests 
  task: DotNetCoreCLI@2
  inputs:
    command: test
    projects: '**/*.Test.csproj'

- displayName: Run Front-end tests 
  script: npm run test-ci
  workingDirectory: ./DigitalIcebreakers/ClientApp

# Deploy

- displayName: Docker Hub authenticate
  task: Docker@2
  inputs:
    command: login
    containerRegistry: $(docker-service-connection)

- displayName: Cluster authenticate
  task: Kubernetes@1
  inputs:
    connectionType: Kubernetes Service Connection
    kubernetesServiceEndpoint: $(k8s-service-connection)
    command: login

- displayName: Deploy 
  script: dotnet tye deploy -v Debug

The last three steps form the deployment section of the pipeline and run only if the previous steps succeed. Authentication to Docker Hub and Kubernetes is configured via Azure DevOps Service Connections, and the actual deployment happens in the final step once both logins have completed.
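
$(docker-service-connection) and $(k8s-service-connection) are pipeline variables holding the names of those service connections; they can be set in the pipeline’s variables UI or in a variables block like the one below (the connection names are placeholders):

variables:
  docker-service-connection: 'docker-hub-connection'
  k8s-service-connection: 'k8s-cluster-connection'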

That’s It!

Aside from authentication, nothing else in the pipeline is specific to Kubernetes or Docker - this is the power of Microsoft Tye, which abstracts those concerns and lets us build and deploy containers to Kubernetes with little up-front knowledge of either.
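
Because Tye owns all of the Docker and Kubernetes mechanics, you can run essentially the same deployment from your own machine - assuming you’re already logged in to Docker Hub and your kubeconfig points at the cluster:

dotnet tool restore
dotnet tye deploy --interactive

The --interactive flag makes Tye prompt for anything that isn’t configured, such as the container registry to push to.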

This abstraction comes at a cost: our ability to fine-tune the makeup of our container images and of Kubernetes resources like Deployments and Services is heavily restricted by the automation. In Part 2 of this series, we’ll move this pipeline to GitHub Actions and deploy our own Kubernetes Service and Deployment manifests.
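
To give a sense of what that buys us: a hand-written Deployment manifest like the sketch below (everything in it is illustrative, not the real Digital Icebreakers manifest) gives direct control over details such as replica count, image tag and resource limits.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: digitalicebreakers
spec:
  replicas: 2
  selector:
    matchLabels:
      app: digitalicebreakers
  template:
    metadata:
      labels:
        app: digitalicebreakers
    spec:
      containers:
      - name: digitalicebreakers
        image: mydockerhubuser/digitalicebreakers:1.0.0
        resources:
          limits:
            cpu: 500m
            memory: 256Mi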
