Google Cloud Platform (GCP) - Kubernetes | Deploy docker container on kubernetes
Alright, so this time around, we will use that same application, and we will deploy this Hello YouTube application on Kubernetes. What this will help us do is understand how we can deploy that same container on a Kubernetes cluster and have it scale automatically. And yes, if you are not well aware of the basics of container and Kubernetes technologies, we have separate videos; the links are in the description. So just watch this video till the end.
So you will definitely get some understanding, and later on you can go and view those videos as well. So let's do a quick recap. Containers, as we already know: a container is a technology through which we can wrap an application and all its dependencies into a single unit, and then ship that container across different platforms.
So this is quite straightforward, so we'll move quickly on to Kubernetes, which is our main topic for today. Kubernetes is also called K8s, an abbreviation where the 8 stands for the eight letters between the K and the s. It is simply a system to run and coordinate containerized applications. So basically, Kubernetes provides you a platform with capabilities to autoscale, automate, and orchestrate your containerized applications.
So what are the basic components of the Kubernetes platform? Containers we already know. But when we deploy a container into Kubernetes, that container goes into the smallest unit of Kubernetes, which is called a pod. A pod can contain one or multiple containers, and it is the smallest unit of the Kubernetes environment.
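Just to make the idea concrete, here is a minimal sketch of a pod manifest for an app like ours. The names and image path below are hypothetical placeholders, not taken from this demo:

```yaml
# Hypothetical minimal pod manifest; names and image path are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: hello-youtube
  labels:
    app: hello-youtube
spec:
  containers:
    - name: hello-youtube
      image: gcr.io/PROJECT_ID/hello-youtube:v1  # placeholder image path
      ports:
        - containerPort: 8080                     # assumed application port
```

In practice you rarely create pods directly like this; you let a deployment manage them, as we will see in a moment.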
Then we will have different pods residing inside a node, and that node will reside within a dedicated Kubernetes cluster. The Kubernetes nodes and the Kubernetes cluster are powered by virtual machines running at the back end, which are autoscaled based on demand. So what is Google Kubernetes Engine? Kubernetes is an open-source technology; Google has made it a managed service inside Google Cloud Platform, and that service is called Google Kubernetes Engine.
So what does GKE, or any other Kubernetes offering, exactly do? First of all, it creates a Kubernetes cluster, which is nothing but a set of machines. Then the containerized application gets deployed on this Kubernetes cluster. After that, Kubernetes exposes the app to the outside world so that users can use it. You can also expose these applications internally.
Then after this step, you can scale your application. Suppose today your application is used by five users, but all of a sudden, or in future, you see a demand of 500,000 users using this application. If you are running a single standalone containerized application, there would be limitations around scaling it up. But in Kubernetes, it can be autoscaled without any manual intervention. Also, if there are rolling updates, frequent updates to your application, they can be done seamlessly if the Kubernetes engine is managing your containers.
So what you see here are the various components which make up a Kubernetes cluster environment. First of all, let's start with the container image. Whenever you want to create any containerized application, you have to have a container image, regardless of whether you are using it within Kubernetes or just as a standalone containerized application. In Kubernetes, you then wrap it inside a pod. As I said, a pod is the smallest unit of a Kubernetes cluster environment, and that pod resides inside a deployment.
What is a deployment? A deployment dictates exactly how many replicas of a pod run to serve this application. So right now you see one pod running one instance of the application, but you can have multiple pods running the same version of the application so that it can autoscale. You also see one component called a service.
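A deployment like the one just described could be sketched in a manifest like this. The name, image path, and replica count are hypothetical placeholders, since the actual file is not shown in this part of the video:

```yaml
# Hypothetical deployment manifest; name, image path, and replica
# count are placeholders, not taken from the demo.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-youtube
spec:
  replicas: 3                     # run three identical pods of the same app version
  selector:
    matchLabels:
      app: hello-youtube          # manage pods carrying this label
  template:
    metadata:
      labels:
        app: hello-youtube
    spec:
      containers:
        - name: hello-youtube
          image: gcr.io/PROJECT_ID/hello-youtube:v1  # placeholder image path
          ports:
            - containerPort: 8080                     # assumed application port
```

Changing `replicas` (or attaching a horizontal pod autoscaler) is what lets Kubernetes scale the pod count up and down.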
So this service component, what it does is expose the application to the outside network. Now, there is in principle an option to directly contact the application pods, but these pods are ephemeral in nature. That means they are created and then deleted at runtime; there can be numerous pods getting created and deleted based on demand. So that's why we open up a service endpoint through which any application user can connect to any of the pods available.
So in our case, we have deployed a web application called Hello YouTube. In this demo, we will expose this Hello YouTube application to the outside network through a load balancer at port 80, while our application itself is running on port 8080. Any request which comes to our application will actually come through this service component, so that no direct connection is made to the pods, because, as I said, pods are ephemeral in nature and can be deleted or created on demand. So in our last lesson, we created a containerized application on Google Cloud.
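A service matching that description could be sketched like this, as a load balancer listening on port 80 and forwarding to the port the app listens on. The service name is a hypothetical placeholder:

```yaml
# Hypothetical service manifest for the setup described above;
# the service name is a placeholder.
apiVersion: v1
kind: Service
metadata:
  name: hello-youtube-service
spec:
  type: LoadBalancer            # ask GKE to provision an external load balancer
  selector:
    app: hello-youtube          # route traffic to pods carrying this label
  ports:
    - port: 80                  # external port on the load balancer
      targetPort: 8080          # port the application listens on inside the pod
```

Because the service selects pods by label rather than by name or IP, it keeps working as pods come and go.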
Following these five simple steps, we first of all coded our application, Hello YouTube, in Python.
Then we created a Dockerfile. Then we used Cloud Build to create the container image. We pushed that container image to Container Registry, and then we deployed it on Google Cloud Run as a single standalone container application. But this time we will not follow step five. Instead of running it on Cloud Run, we will actually deploy this on Kubernetes Engine.
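The build-and-push part of those steps might look like this on the command line. The project ID, image name, and tag are placeholders, and this assumes the Dockerfile sits in the current directory:

```shell
# Hypothetical sketch of steps 2-4; PROJECT_ID and the image name are placeholders.
# Build the image with Cloud Build and push it to Container Registry in one step:
gcloud builds submit --tag gcr.io/PROJECT_ID/hello-youtube:v1 .

# Alternatively, build locally with Docker and push manually:
docker build -t gcr.io/PROJECT_ID/hello-youtube:v1 .
docker push gcr.io/PROJECT_ID/hello-youtube:v1
```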
So steps one to four remain the same. But at step five, we will now introduce Google Kubernetes Engine, so that we can deploy this application on a Kubernetes cluster and scale it as per our needs. These are some components which we will be using in our demonstration today, so let's go through these components one by one. A container image, as you already know, is an image with the executables of the application code.
Container Registry is the place where we push this image on Google Cloud. Kubernetes Engine, as I said, is a fully managed service to orchestrate containerized applications, provided by Google Cloud. Now the new part is the deployment YAML. This is the configuration file to manage resources for your front-end web application. When you deploy your application on Google Cloud, and especially on Kubernetes, it needs to know what exactly it has to run.
So this particular file tells the Kubernetes engine exactly what kind of front-end web application we are deploying in this container; this deployment YAML file has that information. The service YAML, again, as I was explaining before, gives all the configuration for a single point of entry to access these application pods, such as a load balancer. These two files we will use in our demo to dictate these two functionalities. And kubectl: kubectl is like gsutil or gcloud.
All these utilities we have seen in the past; similar to those, kubectl is a command-line utility to manage Kubernetes in the cloud, and it is also a generic utility that works outside Google Cloud as well.
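For reference, these are a few common kubectl commands of the kind we will rely on. The file names follow the two files just described, and this assumes kubectl is already pointed at a running cluster:

```shell
# Common kubectl commands for the workflow described above
# (assumes a cluster exists and kubectl credentials are configured):
kubectl apply -f deployment.yaml   # create or update the deployment
kubectl apply -f service.yaml      # create or update the service
kubectl get pods                   # list the running pods
kubectl get service                # check the external IP of the load balancer
```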
So basically, we will have four simple steps. The first step we will skip, because we have already created the Hello YouTube application in Python; if you want to see that, you can go back and watch that video. In this particular video, we will pick up from step two, where we will package our application into a container image and push it to Container Registry. From there on, we will create the Kubernetes cluster, and then, at the end, deploy the container image to the Kubernetes cluster.
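The cluster-creation step can be sketched as follows. The cluster name, zone, and node count here are placeholder assumptions, not values taken from the video:

```shell
# Hypothetical sketch of creating the GKE cluster (step 3);
# cluster name, zone, and node count are placeholders.
gcloud container clusters create hello-cluster \
    --zone us-central1-a \
    --num-nodes 2

# Fetch credentials so that kubectl talks to this new cluster:
gcloud container clusters get-credentials hello-cluster --zone us-central1-a
```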
Okay, so I'm already logged in to my console. The steps are simple; again, we have already seen them in our previous videos. So let's go directly into the Google Cloud console.
We have also already created the project which we used last time, so we'll use that same Google Cloud project to continue our development and deployment of this application on Kubernetes. Just to give you a brief overview of this application: it is a very simple application called Hello YouTube.
