
Hello Kubernetes

Introduction to Kubernetes

Welcome to the world of Kubernetes. In this blog post, we’ll be using Kind (Kubernetes in Docker) to create a local Kubernetes cluster. Kind is a great tool for getting started with Kubernetes because it’s lightweight, easy to set up, and doesn’t require many resources.

Let’s dive right into it. We will demonstrate how to use Kind to create a sandboxed environment, which is safe for experimentation.

So grab a cup of coffee :coffee:, fire up your favorite terminal :computer:, and let’s get started with our HelloKubernetes blog post using Kind!

If you want to dive right into the code, head over to the HelloKubernetes repository on GitHub!

1. What is Kubernetes?

Kubernetes, also known as K8s, is an open-source container orchestration platform originally developed by Google. It allows you to deploy, scale, and manage containerized applications in a highly efficient and automated way.

Put simply, Kubernetes helps you manage the lifecycle of your applications and their components. This includes everything from deployment and scaling to rolling updates and self-healing.

Kubernetes is becoming increasingly popular among developers and IT teams, especially those working with microservices and container-based architectures. It offers a powerful set of tools for managing complex distributed systems, and its modular architecture makes it highly adaptable to a wide range of use cases.

Setting up a Kubernetes environment can be quite complex and intimidating, especially for beginners. That’s where Kind comes in - it provides a simple and lightweight way to set up a local Kubernetes environment that you can use to experiment and learn. In the next section, we’ll go over how to set up Kind on your local machine.

2. Setting up a Kubernetes Cluster with Kind

Before you can start using Kubernetes with Kind, you’ll need to set it up on your local machine. Don’t worry, it’s not as complicated as it sounds!

  • Before setting up Kind, ensure that you have Docker installed on your local machine. If you’re using Windows or Mac, you can download and install Docker Desktop from the Docker website. If you’re using Linux, you’ll need to install Docker by following the instructions specific to your Linux distribution. For example, on Ubuntu, you can run the following commands in the terminal to install Docker:

    ```bash
    sudo apt-get update
    sudo apt-get install docker.io
    ```

    Alternatively, you can refer to the official Docker documentation for instructions on how to install Docker on Linux: https://docs.docker.com/engine/install/.

  • Next, download the latest release of Kind for your platform from the Kind GitHub repository. You can install it from the command line on Linux, Mac, or Windows. For Linux:

    ```bash
    curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.17.0/kind-linux-amd64
    chmod +x ./kind
    sudo mv ./kind /usr/local/bin/kind
    ```

    For Mac, use the same commands but download the kind-darwin-amd64 (Intel) or kind-darwin-arm64 (Apple Silicon) binary instead.

    For Windows (using PowerShell):

    ```powershell
    curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.17.0/kind-windows-amd64
    ```

    Please refer to the installation guide for the latest version of Kind.

  • Now that you have Kind installed on your local machine, you can use it to create a Kubernetes cluster by running the following command:

    ```bash
    kind create cluster --name <cluster-name>
    ```

    With the `--name` flag, you can give the cluster a specific name. This can be helpful when working with multiple clusters or when you want to give the cluster a meaningful name for identification purposes.

    By default, this creates a new Kubernetes cluster with a **single control-plane node** (worker nodes can be added with a configuration file, as sketched after this list). You can check that the cluster was created successfully by running the following command:

    ```bash
    kubectl cluster-info
    ```

    This should display the endpoint of the Kubernetes API server and the CoreDNS service running in the cluster.
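If you do want a multi-node cluster (for example, one control-plane node plus one worker node), Kind accepts a cluster configuration file. Here is a minimal sketch; the file name kind-config.yaml is just an example:

```yaml
# kind-config.yaml: a minimal multi-node cluster definition for Kind
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
```

Pass the file to Kind when creating the cluster, then list the nodes to confirm both came up:

```bash
kind create cluster --name <cluster-name> --config kind-config.yaml
kubectl get nodes   # should show one control-plane node and one worker node
```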

That’s it! :clap: You now have a local Kubernetes cluster set up using Kind. In the next section, we’ll go over how to deploy your first application to the cluster. :rocket:

3. Deploying Your First Application

Now that you have Kind set up on your local machine, it’s time to deploy your first application on your new Kubernetes cluster. For this example, we’ll use a simple application that consists of a single container running a basic web server.

  • First, create a new file named nginx.yaml on your local machine and add the following code to define a new Kubernetes deployment and service for an Nginx web server:

```yaml
# Define the Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-nginx # Name of the Deployment
spec:
  selector:
    matchLabels:
      app: nginx # Labels to match Pods created by this Deployment
  replicas: 1 # Number of Pods to create
  template:
    metadata:
      labels:
        app: nginx # Labels for the Pod created by this Deployment
    spec:
      containers:
      - name: nginx # Name of the container
        image: nginx:latest # Docker image to use for the container
        ports:
        - containerPort: 80 # Port to expose in the container

# Define the Service
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service # Name of the Service
spec:
  type: NodePort # Expose the Service on a randomly assigned port on each Node
  selector:
    app: nginx # Labels to select the Pods to load balance traffic to
  ports:
  - name: http # Name of the port
    protocol: TCP # Protocol to use for the port
    port: 80 # Port to expose on the Service
    targetPort: 80 # Port that the Service should forward traffic to on the Pods
```

This will create a new deployment named hello-nginx that runs a single replica of the nginx:latest image on port 80, along with a NodePort service named nginx-service that exposes it.

  • Use the following command to create the deployment and the service:

    ```bash
    kubectl create -f nginx.yaml
    ```

    This will create both resources and start the container on your local Kubernetes cluster.

  • Use the following command to check the status of your deployment:

    ```bash
    kubectl get deployments
    ```

    This will show you the current status of your hello-nginx deployment. You should see that there is one replica running.

  • To check the status of the Pods created by your deployment, use the following command:

    ```bash
    kubectl get pods
    ```

    This will show you the current status of your hello-nginx Pods. You should see that there is one Pod running.

  • Use the following command to check the status of your Service:

    ```bash
    kubectl get services
    ```

    This will show you the current status of your nginx-service. You should see a Service named nginx-service exposing port 80 on a node port. To actually reach the web server from your machine, see the port-forward sketch after this list.
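Because the cluster’s nodes are themselves Docker containers, the NodePort may not be directly reachable from your host unless you configured extra port mappings when creating the cluster. A quick way to verify that Nginx is actually serving traffic is kubectl port-forward (the local port 8080 below is just an example):

```bash
# Forward local port 8080 to port 80 of the nginx-service Service
kubectl port-forward service/nginx-service 8080:80
```

Then, in a second terminal, request the page; you should get the default Nginx welcome page back:

```bash
curl http://localhost:8080
```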

Congratulations :tada:, you’ve just deployed your first application on your local Kubernetes cluster! In the next section, we’ll go over how to scale your application up or down to meet changing demand.

4. Scaling Your Application

One of the key benefits of Kubernetes is the ability to easily scale your applications up or down to meet changing demand. Let’s take a look at how to do this with our hello-nginx deployment.

  • First, use the following command to scale your deployment up to three replicas:

    ```bash
    kubectl scale deployment hello-nginx --replicas=3
    ```

    This will create two new replicas of your container, bringing the total number of replicas to three.

  • Use the following command to check the status of your deployment again:

    ```bash
    kubectl get deployments
    ```

    This will show you the current status of your hello-nginx deployment. You should see that there are now three replicas running.

  • Finally, use the following command to scale your deployment back down to a single replica:

    ```bash
    kubectl scale deployment hello-nginx --replicas=1
    ```

    This will delete two of the replicas, leaving you with a single replica running again.
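As an aside, the same scaling can be done declaratively: edit the replicas field in nginx.yaml and re-apply the file. A minimal sketch, assuming the resources were created from nginx.yaml as shown earlier (kubectl may warn that the objects were not originally created with apply, but it will still update them):

```bash
# After changing "replicas: 1" to the desired count in nginx.yaml
kubectl apply -f nginx.yaml
kubectl get deployments   # hello-nginx should report the new replica count
```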

That’s it! With just a few simple commands, you can easily scale your nginx deployment up or down to meet changing demand.

5. Deleting Your Deployment

When you’re finished testing your application on your local Kubernetes cluster, you’ll want to delete your deployment to free up resources. Here’s how to do that:

  • Use the following command to delete your deployment:

    ```bash
    kubectl delete deployment hello-nginx
    ```

    This will delete your hello-nginx deployment and all of its associated resources, including the replica set and pods.

  • Use the following command to confirm that your deployment has been deleted:

    ```bash
    kubectl get deployments
    ```

    This should return an empty list of deployments, indicating that your hello-nginx deployment has been successfully deleted.
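Note that deleting the deployment does not remove the nginx-service Service. To clean up everything created in this post, you can also delete the Service and, once you’re done experimenting, the Kind cluster itself (use whatever name you passed to --name earlier):

```bash
kubectl delete service nginx-service        # remove the Service defined in nginx.yaml
kind delete cluster --name <cluster-name>   # tear down the local Kind cluster
```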

That’s it! :tada: You’ve now learned how to deploy and manage a simple application on a local Kubernetes cluster using Kind. We hope you found this tutorial helpful and we encourage you to continue exploring the many features and capabilities of Kubernetes. :rocket: Happy coding!

This post is licensed under CC BY 4.0 by the author.