Author : HASSAN MD TAREQ | Updated : 2021/04/13
AKS as PaaS
- Azure takes care of the master node and the user takes care of the worker nodes (the master node runs the control plane components; the worker nodes run the node components)
- As a PaaS service, Azure manages the master node and the underlying scale set of worker nodes
- You just need to add VMs (each VM is added as a node to that underlying scale set)
Kubernetes Service object
The Service object provides a set of capabilities that match microservices' requirements for service discoverability:
- IP address: The Service object provides a static internal IP address for a group of pods (ReplicaSet). As pods are created or moved around, the service is always reachable at this internal IP address.
- Load balancing: Traffic sent to the service’s IP address is load balanced to the pods.
- Service discovery: Services are assigned internal DNS entries by the Kubernetes DNS service
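The three capabilities above map onto a plain Service manifest; a minimal sketch (the names `my-service` and `my-app` and the ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # internal DNS entry: my-service.<namespace>.svc.cluster.local
spec:
  type: ClusterIP         # static internal IP address for the pod group
  selector:
    app: my-app           # traffic is load balanced across pods carrying this label
  ports:
    - port: 80            # port the service listens on
      targetPort: 8080    # port the pods listen on
```

Because the selector matches labels rather than specific pods, the service keeps working as the ReplicaSet creates, deletes, or moves pods.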
Azure Container Registry
- Azure based private container registry for Docker container images
- Azure Container Registry -> ACR
- AKS can authenticate with ACR using its Azure AD identity
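One way to set up that authentication is to attach the registry to the cluster with the Azure CLI; a sketch with placeholder resource names:

```shell
# Grant the AKS cluster's identity pull access to the registry
# (myAKSCluster, myResourceGroup, and myRegistry are placeholders)
az aks update \
  --name myAKSCluster \
  --resource-group myResourceGroup \
  --attach-acr myRegistry
```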
As a PaaS service, AKS provides a load balancer
- As a PaaS service, it also provides an Ingress Controller
- Ingress controller might implement the API gateway pattern
- Pod autoscaling (number of pods allocated to a deployment)
- Cluster autoscaling (number of nodes in a cluster)
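The two autoscaling levels above can be sketched as follows; the deployment name, node counts, and CPU threshold are illustrative:

```shell
# Pod autoscaling: scale a deployment between 1 and 10 replicas,
# targeting 70% CPU utilization
kubectl autoscale deployment my-deployment --min=1 --max=10 --cpu-percent=70

# Cluster autoscaling: let AKS add/remove nodes between 1 and 5
az aks update \
  --name myAKSCluster \
  --resource-group myResourceGroup \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```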
Managed identity
- An application or service has an identity stored in Azure AD and uses this identity to authenticate with an Azure service
- When a managed identity is used, a microservice running in AKS can connect to other Azure resources, e.g. Azure SQL
- You can use managed identities in AKS by assigning identities to individual pods, using the aad-pod-identity project
- aad-pod-identity : https://github.com/Azure/aad-pod-identity
- Azure Key Vault as Pod volume: https://github.com/Azure/secrets-store-csi-driver-provider-azure
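The secrets-store CSI driver linked above mounts Key Vault secrets as a pod volume via a SecretProviderClass resource; a minimal sketch (the vault name, tenant ID, and secret name are placeholders, and exact fields may vary by driver version):

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kv-secrets
spec:
  provider: azure
  parameters:
    keyvaultName: "my-key-vault"   # placeholder vault name
    tenantId: "<tenant-id>"        # Azure AD tenant of the vault
    objects: |
      array:
        - |
          objectName: my-secret    # placeholder secret name
          objectType: secret
```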
Node pools
A node pool is a grouping of nodes in the cluster with the same configuration
- System node pool: primary purpose is hosting critical system pods like CoreDNS and tunnelfront, hence the name system (can also host pods)
- User node pool: designed for hosting application pods
During provisioning in the Azure portal, we can add additional node pools
[Courtesy of following: https://pixelrobots.co.uk/2020/06/azure-kubernetes-service-aks-system-and-user-node-pools/]
When you create a new AKS cluster using the portal or the Azure CLI, the node pool that is automatically created is the system node pool; you don't even have to specify it in the commands, it just happens. When you add additional node pools using the az aks nodepool add command, the newly created node pool will be a user node pool.
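The command mentioned above looks like this; the pool, cluster, and resource group names are placeholders:

```shell
# Adds a user node pool with 3 nodes to an existing cluster
az aks nodepool add \
  --cluster-name myAKSCluster \
  --resource-group myResourceGroup \
  --name userpool1 \
  --node-count 3 \
  --mode User
```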
By default, AKS nodes are Linux VMs
- You can have Windows VMs in the cluster if you want
- To get Windows VMs, you might need to use the Azure CLI, PowerShell, or an ARM template instead of the Azure portal
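Adding a Windows node pool with the Azure CLI can be sketched like this (the names are placeholders, and the cluster must use a network configuration that supports Windows nodes, e.g. Azure CNI):

```shell
# Windows node pool names are limited to 6 characters
az aks nodepool add \
  --cluster-name myAKSCluster \
  --resource-group myResourceGroup \
  --name npwin \
  --os-type Windows \
  --node-count 1
```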
Bridge to Kubernetes
- Bridge to Kubernetes allows you to develop and debug your microservice code directly on your development computer while interacting with the rest of the application running in your AKS cluster
- Azure Dev Spaces will be retired on October 31, 2023, so Bridge to Kubernetes is recommended instead
- Bridge to Kubernetes extends the perimeter of the Kubernetes cluster and allows local processes to inherit configuration from Kubernetes
- You can run and debug code on your development computer while still connected to your Kubernetes cluster with the rest of your application or services
How does Bridge to Kubernetes work?
- When using Bridge to Kubernetes, a network connection between your development computer and your cluster is established
- A proxy is added to your cluster in place of your Kubernetes deployment that redirects requests to the service to your development computer
- When you disconnect, the application deployment will revert to using the original version of the deployment running on the cluster
- Bridge to Kubernetes is intended for use in development and testing scenarios only
- Before using Bridge to Kubernetes, verify that the Kubernetes cluster does not have Azure Dev Spaces enabled (Bridge to Kubernetes can’t be used on clusters with Azure Dev Spaces enabled)
Deployment Center
- Deployment Center simplifies setting up a DevOps pipeline for your application
- You can use this configured DevOps pipeline to set up continuous integration (CI) and continuous delivery (CD) to your AKS cluster
- On the left pane (under the Settings section) there is a “Deployment Center” option. From there you can set up Azure DevOps pipelines (CI/CD) and integrate Azure Container Registry.
HTTP application routing
The HTTP application routing add-on makes it easy to publicly access applications deployed to an AKS cluster
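The add-on can be enabled on an existing cluster with the Azure CLI; a sketch with placeholder names (note that Microsoft recommends this add-on for dev/test scenarios rather than production):

```shell
az aks enable-addons \
  --name myAKSCluster \
  --resource-group myResourceGroup \
  --addons http_application_routing
```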
Node Resource Group
- An AKS deployment has two resource groups:
- The resource group in which the AKS service itself is provisioned
- A second resource group that AKS creates during deployment
- During deployment:
- The AKS resource provider automatically creates the second resource group, called the ‘node resource group’
- The node resource group contains all of the infrastructure resources associated with the cluster (these resources include the Kubernetes node VMs, virtual networking, and storage)
- Your resource group will contain only the Kubernetes service resource
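You can look up the generated node resource group name (by default of the form MC_<resource-group>_<cluster>_<region>) with the Azure CLI; names below are placeholders:

```shell
# Prints something like: MC_myResourceGroup_myAKSCluster_eastus
az aks show \
  --name myAKSCluster \
  --resource-group myResourceGroup \
  --query nodeResourceGroup \
  --output tsv
```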
Helm
- Helm is an application package manager for Kubernetes that you use to standardize and simplify the deployment of cloud-native applications on Kubernetes
- Helm is a package manager for Kubernetes (similar to yum or apt-get on Linux) — a way to bundle Kubernetes objects into a single unit that you can publish, deploy, version, and update
- Helm is a tool used to create Pods from a “Helm chart” and a container image
- Helm is also a template engine: it renders Kubernetes manifests from the templates in a chart
- Helm chart: a bundle of files (e.g. YAML templates) used by Helm to create Kubernetes objects
- “Helm chart” + “container image” -> Helm -> K8s Pods
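A typical Helm workflow for the flow above; the repository, chart, and release names are illustrative:

```shell
# Add a chart repository and refresh the local index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install a chart as a named release; Helm renders the chart templates
# (which reference the container image) into Kubernetes objects such as Pods
helm install my-release bitnami/nginx

# Upgrade the release later, or roll back to a previous revision
helm upgrade my-release bitnami/nginx
helm rollback my-release 1
```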