Kubernetes on Ubuntu 22.04: A Kubeadm Setup Guide
Hey everyone! Today, we’re diving deep into setting up your very own Kubernetes cluster using kubeadm on Ubuntu 22.04 LTS (Jammy Jellyfish). If you’re looking to get your hands dirty with container orchestration and want a reliable, step-by-step guide, you’ve come to the right place, guys! We’ll be covering everything from prerequisites to getting your first pod running. So grab your favorite beverage, settle in, and let’s get this cluster built!
Table of Contents
- Why Kubernetes and Kubeadm?
- Prerequisites: Getting Your Ducks in a Row
- At Least Two Ubuntu 22.04 Machines
- Unique Hostnames, MAC Addresses, and product_uuids
- Minimum 2GB RAM and 2 CPU Cores per Machine
- Full Network Connectivity Between All Nodes
- Swap Disabled
- Unique Cloud Provider Identifiers (if applicable)
- Step 1: Prepare All Nodes (Control Plane and Workers)
- Update System Packages
- Install Container Runtime (containerd)
- Install Kubernetes Components (kubelet, kubeadm, kubectl)
- Configure kubelet
- Verify Prerequisites (Optional but Recommended)
- Step 2: Initialize the Control Plane Node
- Run kubeadm init
- Configure kubectl for Regular User Access
- Verify Control Plane Status
- Step 3: Install a Pod Network Add-on
- Install Flannel CNI Plugin
- Verify Network Installation
- Step 4: Join Worker Nodes to the Cluster
- Get the kubeadm join Command
- Execute kubeadm join on Worker Nodes
- Verify Worker Node Status
- Step 5: Deploy Your First Application!
- Create a Deployment
- Verify the Deployment and Pods
- Expose the Application with a Service
- Access Your Nginx Application!
- Conclusion: Your Kubernetes Journey Begins!
Why Kubernetes and Kubeadm?
First off, why Kubernetes, you ask? Well, Kubernetes is the undisputed champion of container orchestration. It helps you automate the deployment, scaling, and management of containerized applications. Think of it as the ultimate conductor for your microservices orchestra, ensuring everything runs smoothly, efficiently, and can scale up or down as needed. It’s a game-changer for modern application development and deployment, offering resilience, portability, and incredible flexibility. Whether you’re running a small startup or a massive enterprise, Kubernetes has become an essential tool in the DevOps toolkit. It tackles complex challenges like service discovery, load balancing, storage orchestration, and automated rollouts and rollbacks. In short, it makes managing distributed systems a whole lot less painful and a whole lot more powerful. It’s the de facto standard for managing containers at scale, and understanding it is key to staying relevant in the tech world.
Now, why kubeadm specifically? Kubeadm is a tool that bootstraps a minimal, viable Kubernetes cluster. It’s officially supported by the Kubernetes project and is designed to simplify the process of setting up a cluster. It handles the tricky parts, like generating certificates, configuring control plane components, and joining worker nodes. While there are other ways to set up Kubernetes, kubeadm is often recommended for its simplicity and adherence to best practices. It focuses on making the initialization of a cluster easy, giving you a solid foundation to build upon. It’s not meant to manage the full lifecycle of the cluster after it’s set up; day-to-day operation is handled by the control plane components kubeadm configures for you, such as kube-controller-manager and kube-scheduler, plus whatever tooling you layer on top. Think of kubeadm as your expert guide for the initial construction phase of your Kubernetes house, making sure the foundation and core structure are perfectly in place. This makes it an excellent choice for learning, testing, and even production environments where you prefer a more hands-on approach to cluster management.
Ubuntu 22.04 LTS, on the other hand, is a fantastic choice for running Kubernetes. It’s known for its stability, long-term support, and robust security features. Running Kubernetes on a stable OS like Ubuntu ensures your cluster has a reliable underlying infrastructure. The LTS version means you get security updates and maintenance for five years, providing a stable environment for your critical applications. Ubuntu’s comprehensive package management system (apt) also makes installing and managing the necessary components a breeze. With its modern kernel features and enhancements, Ubuntu 22.04 provides an optimized environment for containerized workloads, ensuring better performance and compatibility with the latest Kubernetes features. It’s a solid, enterprise-grade operating system that perfectly complements the power of Kubernetes.
Prerequisites: Getting Your Ducks in a Row
Before we dive headfirst into setting up our Kubernetes cluster, let’s make sure we have all our ducks in a row. Trust me, having these prerequisites sorted beforehand will save you a ton of headaches later on. We’re going to need a few things to get started, so let’s list them out:
At Least Two Ubuntu 22.04 Machines
First and foremost, you’ll need at least two machines running Ubuntu 22.04 LTS. For a functional cluster, you typically need at least one machine for the control plane (the brain of the operation) and at least one machine for the worker node(s) (where your actual applications will run). For testing and learning purposes, you can even use virtual machines on your local computer with tools like VirtualBox or VMware. If you’re setting up a production-grade cluster, you’ll likely want multiple control plane nodes for high availability and multiple worker nodes for capacity and redundancy. For this guide, we’ll assume you have at least two VMs or physical servers ready to go. Ensure they have internet access, as we’ll need to download packages.
Unique Hostnames, MAC Addresses, and product_uuids
This is super important, guys! Each node in your Kubernetes cluster must have a unique hostname, MAC address, and product_uuid. Why? Kubernetes uses these to identify and differentiate nodes. If they’re not unique, you’ll run into some seriously weird networking and identification issues. You can check these on your Ubuntu machines:
- Hostname: hostnamectl
- MAC address: ip link show eth0 (or your primary network interface)
- product_uuid: sudo cat /sys/class/dmi/id/product_uuid
Make sure these are all distinct for each machine you plan to add to the cluster. If you’re using VMs, this is usually handled automatically, but it’s always good practice to double-check, especially if you’ve cloned VMs.
Minimum 2GB RAM and 2 CPU Cores per Machine
Kubernetes itself and the components it runs require a decent amount of resources. To avoid performance issues and ensure your cluster runs smoothly, each machine (both control plane and worker nodes) should have at least 2GB of RAM and 2 CPU cores. If you’re running significant workloads, you’ll definitely want to beef this up. For a quick test setup, 2GB/2CPU might be just enough, but don’t expect to run a production-ready cluster with these minimal specs. More resources mean happier containers!
Full Network Connectivity Between All Nodes
This is a big one for Kubernetes: all nodes must be able to communicate with each other over the network without any issues. This means no firewalls blocking necessary ports, and proper routing. You should be able to ping each node from every other node. We’ll need to open up specific ports for Kubernetes communication, but for now, just ensure basic network connectivity is established. This includes ensuring that your chosen Container Network Interface (CNI) plugin will also be able to function correctly between nodes.
Swap Disabled
Kubernetes really doesn’t like swap memory. It can cause unpredictable behavior and performance issues because the scheduler assumes memory is available when it actually might be swapped out. So, you absolutely must disable swap on all nodes. You can do this temporarily for the current session with sudo swapoff -a, but to make it permanent, you need to comment out the swap entries in your /etc/fstab file. This is a critical step, so don’t skip it!
To disable swap temporarily:
sudo swapoff -a
To disable swap permanently, edit /etc/fstab:
sudo nano /etc/fstab
Find the line that refers to swap (it might look something like /swap.img ... swap ...) and add a # at the beginning of the line to comment it out. Then save and exit.
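If you’d rather not edit the file by hand, a sed one-liner can comment out the swap entry for you. This is a minimal sketch that assumes your swap line contains a whitespace-delimited "swap" field (the usual layout), so double-check /etc/fstab afterwards:
# Comment out any swap entries in /etc/fstab (keeps a .bak copy; pattern is an assumption)
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
# Confirm nothing swap-related is still active
grep -i swap /etc/fstab
free -h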
Unique Cloud Provider Identifiers (if applicable)
If you’re setting up your cluster on a cloud provider like AWS, GCP, or Azure, make sure each node has unique cloud provider identifiers. This is usually handled automatically by the cloud provider, but it’s worth mentioning just in case. Kubeadm might try to use these for certain integrations.
Once all these prerequisites are met, you’re golden and ready to proceed to the actual setup!
Step 1: Prepare All Nodes (Control Plane and Workers)
Alright guys, it’s time to get our hands dirty and prepare all the machines that will be part of our Kubernetes cluster. This step is crucial because it ensures that every node is configured correctly to run Kubernetes components. We need to perform these actions on both the machine designated as the control plane node and all the machines that will serve as worker nodes. Let’s get this done!
Update System Packages
First things first, let’s make sure our Ubuntu systems are up-to-date. A fresh system is a happy system! This ensures you have the latest security patches and software versions.
sudo apt update && sudo apt upgrade -y
Install Container Runtime (containerd)
Kubernetes needs a container runtime to actually run your containers. The most common and recommended one is containerd. We’ll install it using apt.
sudo apt install -y containerd
After installation, we need to configure containerd to work with Kubernetes. We’ll create a default configuration file and then restart the containerd service.
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
Now, we need to enable the systemd cgroup driver for containerd. This matters because the kubelet and the container runtime must use the same cgroup driver, and kubeadm configures the kubelet to use systemd by default. Open the configuration file we just created:
sudo nano /etc/containerd/config.toml
Inside this file, find the [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] section and make sure the SystemdCgroup option is set to true. It might look like this:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
If it’s false or commented out, change it to true. Save and exit the file (Ctrl+X, Y, Enter).
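Alternatively, you can flip the setting non-interactively. This sed sketch assumes the default config generated above, where SystemdCgroup = false appears exactly once:
# Switch containerd's runc runtime to the systemd cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Confirm the change took effect
grep SystemdCgroup /etc/containerd/config.toml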
Finally, restart the containerd service to apply the changes:
sudo systemctl restart containerd
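While you’re prepping the nodes, it’s also worth making sure the kernel settings that kubeadm’s preflight checks look for are in place. Depending on your base image they may already be set; this sketch loads the overlay and br_netfilter modules and enables IPv4 forwarding plus bridged traffic visibility for iptables:
# Load kernel modules needed by containerd and Kubernetes networking
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Sysctl settings required for pod networking and kube-proxy
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system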
Install Kubernetes Components (kubelet, kubeadm, kubectl)
Now, let’s install the core Kubernetes tools: kubelet, kubeadm, and kubectl. kubelet is the agent that runs on each node and ensures containers are running in a Pod. kubeadm is the tool we’re using to bootstrap the cluster. kubectl is the command-line tool for interacting with the cluster.
We need to add the official Kubernetes package repository. First, install some dependencies that allow apt to use a repository over HTTPS:
sudo apt install -y apt-transport-https ca-certificates curl gpg
Now, download the public signing key for the Kubernetes package repository:
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Add the Kubernetes apt repository to your system’s sources list:
# Using the v1.29 repository for Kubernetes 1.29
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Update your apt package list again to include the new repository:
sudo apt update
Now, install the Kubernetes components. We’ll also hold these packages so they don’t get automatically upgraded to a newer version that might not be compatible with our current kubeadm version. This is good practice for cluster stability.
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
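A quick sanity check that the tools landed and that their versions line up:
kubeadm version
kubelet --version
kubectl version --client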
Configure kubelet
Before we can run kubeadm, we need to make sure kubelet knows how to run. It starts as a systemd service driven by a drop-in file; the kubelet package normally installs one for you, but we’ll create it explicitly so its contents are known-good.
sudo mkdir -p /etc/systemd/system/kubelet.service.d
Create a new configuration file for
kubelet
:
sudo nano /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Add the following content to this file. It mirrors the drop-in that the kubelet package ships, pointing kubelet at the kubeconfig files and the config that kubeadm will generate:
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# kubeadm init/join writes runtime flags (containerd socket, pause image, etc.) into this file
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# Optional user overrides for extra kubelet flags
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
Note: The exact contents can vary slightly between kubelet package versions, and if a 10-kubeadm.conf with similar content already exists on your system, you can leave it as-is. The crucial outcome is that kubelet ends up using the systemd cgroup driver and the containerd socket (unix:///run/containerd/containerd.sock); recent kubeadm releases handle both for you, via /var/lib/kubelet/config.yaml and /var/lib/kubelet/kubeadm-flags.env, when you run init or join.
Reload the systemd daemon to pick up the new configuration:
sudo systemctl daemon-reload
Enable and start the kubelet service. It will fail to start initially because it’s not part of a cluster yet, and that’s perfectly fine. We’ll see an error, but the important thing is that the service is enabled and trying to run.
sudo systemctl enable kubelet
sudo systemctl start kubelet
Check the status. You should see an error or a warning about not being configured. This is expected!
sudo systemctl status kubelet
You’ll likely see kubelet crash-looping with errors about missing configuration, for example that it can’t load /var/lib/kubelet/config.yaml, which kubeadm hasn’t generated yet. This is exactly what we want to see at this stage. It means kubelet is installed and enabled but not yet joined to a cluster.
Verify Prerequisites (Optional but Recommended)
Kubeadm runs a series of preflight checks automatically at the start of kubeadm init and kubeadm join, but you can also run them on their own. On the node that will become the control plane:
sudo kubeadm init phase preflight
This runs through checks such as ensuring swap is off, required ports are free, and the container runtime is reachable. It’s a good idea to review the results before proceeding; worker nodes get the equivalent checks when they run kubeadm join.
With these steps completed, all your nodes are now prepared to become part of the Kubernetes cluster. Awesome job!
Step 2: Initialize the Control Plane Node
Now for the moment of truth, guys! We’re going to initialize the control plane node, which is the brain of our Kubernetes cluster. This is where kubeadm init comes into play. This command will set up all the essential control plane components, like the API server, scheduler, and controller manager.
Run kubeadm init
On the machine you’ve designated as your control plane node, run the following command. We’ll use a flag to customize the setup.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Let’s break down this command:
- sudo kubeadm init: This is the main command to initialize the control plane.
- --pod-network-cidr=10.244.0.0/16: This flag specifies the IP address range for Pods in your cluster. This is crucial for the network plugin we’ll install later. 10.244.0.0/16 is a common choice, especially if you plan to use the Flannel CNI plugin (which we’ll cover soon). Make sure this CIDR doesn’t overlap with your existing network ranges.
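If your control plane machine has more than one network interface, or you want the node to register under a particular name, kubeadm init accepts extra flags for that. A hedged example — the IP and hostname below are placeholders, and both extra flags are optional:
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.1.100 \
  --node-name=k8s-control-plane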
Either way, kubeadm init can take a few minutes to complete. It generates certificates, creates static Pod manifests for the control plane components, and sets up core add-ons like CoreDNS and kube-proxy. Once it finishes successfully, you’ll see output similar to this:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a Pod network to the cluster. Follow the steps in the "Adding a Pod Network" section of the documentation.
Then you can join any remaining nodes by running:
kubeadm join <control-plane-ip>:6443 --token <token>
--discovery-token-ca-cert-hash sha256:<hash>
Pay close attention to the output! It contains vital information:
- Commands to configure kubectl: These tell you how to set up your local environment to interact with the new cluster. We’ll run these right away.
- Instructions for joining worker nodes: These provide the unique kubeadm join command, including a token and a discovery hash, that your worker nodes will use to connect to the control plane.
- Reminder to install a Pod network: Kubernetes networking is complex, and you need a CNI (Container Network Interface) plugin to enable communication between Pods. The cluster won’t be fully functional until this is installed.
Configure kubectl for Regular User Access
To manage your cluster using kubectl as a regular user (not just root), you need to copy the admin configuration file to your home directory. Run these commands on the control plane node:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Verify Control Plane Status
Now, let’s verify that the control plane components are running correctly. You can use kubectl to check the status of the pods in the kube-system namespace:
kubectl get pods -n kube-system
You should see pods like kube-apiserver, etcd, kube-scheduler, and kube-controller-manager in a Running state. If they aren’t, check the logs using kubectl logs <pod-name> -n kube-system.
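For a quick confirmation that the API server is reachable, kubectl cluster-info prints where the control plane and CoreDNS endpoints live:
kubectl cluster-info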
At this point, your control plane is up and running! But your cluster isn’t fully functional yet because Pods can’t communicate with each other. That’s where the Pod Network comes in.
Step 3: Install a Pod Network Add-on
Alright, guys, we’ve got our control plane humming, but our cluster is like a brain without a nervous system right now. Pods can’t talk to each other across different nodes because we haven’t set up a network layer. This is where Pod Network Add-ons come in. These are CNI (Container Network Interface) plugins that provide networking capabilities for your Pods.
There are several options available, such as Calico, Flannel, Weave Net, and Cilium. For simplicity and widespread use, we’ll use Flannel. It’s a popular choice, easy to set up, and works well with kubeadm.
Install Flannel CNI Plugin
To install Flannel, you just need to apply a YAML manifest file. This file contains the necessary Kubernetes resource definitions (a DaemonSet, ConfigMap, RBAC rules, and so on) to get Flannel up and running.
First, ensure you have curl installed (it usually is, but just in case):
sudo apt install -y curl
Now, download and apply the Flannel manifest. Make sure the --pod-network-cidr you used during kubeadm init matches the network Flannel expects (10.244.0.0/16 for the default Flannel config).
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
This command downloads the YAML file from the specified URL and applies it to your cluster. It creates a DaemonSet that runs a Flannel pod on each node, setting up the network overlay.
Verify Network Installation
It might take a minute or two for the Flannel pods to start and for the network to be fully configured. You can check the status of the Flannel pods in the
kube-system
namespace:
kubectl get pods -n kube-system | grep flannel
You should see the kube-flannel-ds-... pods running on each node. Once they are in a Running state, your Pod network should be operational.
Let’s also check the status of the coredns pods in kube-system. coredns is responsible for DNS resolution within the cluster, and it relies on the network being functional.
kubectl get pods -n kube-system | grep coredns
If both Flannel and CoreDNS pods are running, congratulations! Your Pod network is likely set up correctly.
Now, let’s check the status of the nodes. Remember the control plane node? It should now be in a Ready state.
kubectl get nodes
You should see your control plane node listed with a status of Ready. If it’s showing NotReady, it usually indicates a networking issue or that the kubelet isn’t properly communicating. Double-check your network setup and firewall rules, and ensure kubelet is running correctly on the control plane node.
With the Pod network installed, your cluster is now much closer to being fully functional. Your control plane can manage nodes, and Pods can potentially communicate. The next logical step is to add worker nodes.
Step 4: Join Worker Nodes to the Cluster
Alright, you’ve successfully set up your control plane and installed a Pod network. Now it’s time to bring your worker nodes into the fold! Worker nodes are where your actual application containers will run. Kubeadm makes joining nodes incredibly straightforward, thanks to the kubeadm join command that was printed during the control plane initialization.
Get the kubeadm join Command
If you missed it or accidentally closed the terminal on your control plane node, don’t sweat it! You can regenerate the kubeadm join command. On the control plane node, run:
sudo kubeadm token create --print-join-command
This command will output a kubeadm join command that looks something like this:
kubeadm join 192.168.1.100:6443 --token abcdef.1234567890abcdef \
--discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef12345678
- Replace 192.168.1.100:6443 with the actual IP address of your control plane node and the Kubernetes API server port (usually 6443).
- The --token is a temporary authentication token. Tokens expire (after 24 hours by default), so if it’s been a while, you might need to generate a new one using the command above.
- The --discovery-token-ca-cert-hash is used to verify the identity of the control plane.
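If you’re not sure whether an old token is still valid, you can list the tokens the control plane currently knows about before creating a new one:
# On the control plane node: show existing bootstrap tokens and their expiry
sudo kubeadm token list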
Execute kubeadm join on Worker Nodes
Now, SSH into each of your worker nodes. Ensure that each worker node has completed all the prerequisite steps outlined in Step 1 (system updates, containerd, kubelet, kubeadm, kubectl installation, swap disabled, etc.).
Once you’re on a worker node, paste and run the kubeadm join command that you obtained from the control plane node. You’ll need sudo for this command:
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
For example:
sudo kubeadm join 192.168.1.100:6443 --token abcdef.1234567890abcdef \
--discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef12345678
This command tells the worker node to connect to the control plane, authenticate using the token and hash, and configure itself as a Kubernetes node.
Verify Worker Node Status
After running the kubeadm join command on each worker node, switch back to your control plane node. You can check whether the new nodes have joined the cluster using kubectl:
kubectl get nodes
You should now see all your worker nodes listed, along with the control plane node. Initially, their status might be NotReady. This is often because the Pod network hasn’t fully propagated to them yet, or the kubelet on the worker node is still initializing. Give it a few minutes.
If the nodes remain NotReady after a while, troubleshoot potential issues:
- Network Connectivity: Ensure the worker nodes can reach the control plane node on port 6443. Check firewalls.
- Kubelet Service: Verify that the kubelet service is running on the worker nodes (sudo systemctl status kubelet); see the log-tailing sketch below.
- CNI Plugin: Ensure the CNI plugin (Flannel in our case) is running correctly on all nodes. Check the Flannel pods (kubectl get pods -n kube-flannel).
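When a worker stays NotReady, the kubelet journal is usually the fastest way to see why; CNI and certificate problems show up there. A minimal sketch, run on the affected worker node:
# Follow the kubelet logs live
sudo journalctl -u kubelet -f
# Or show just the most recent entries
sudo journalctl -u kubelet --no-pager -n 50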
Once the worker nodes show a Ready status, you’ve successfully expanded your Kubernetes cluster! You now have a multi-node setup ready to host your containerized applications.
Step 5: Deploy Your First Application!
Woohoo! You’ve built a Kubernetes cluster from scratch on Ubuntu 22.04 using kubeadm. High five, guys! Now, let’s put it to the test by deploying a simple application. We’ll deploy a basic Nginx web server.
Create a Deployment
A Deployment is a Kubernetes object that manages a replicated application, typically a stateless one like a web server. It ensures that a specified number of Pod replicas are running at any given time.
Let’s create a file named nginx-deployment.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3 # We want 3 instances of Nginx
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest # Use the latest Nginx image
        ports:
        - containerPort: 80 # The port Nginx listens on inside the container
This YAML defines:
- apiVersion: The Kubernetes API version.
- kind: The type of Kubernetes object (a Deployment).
- metadata: The name of the Deployment.
- spec: The desired state.
  - replicas: How many Pods we want.
  - selector: How the Deployment finds which Pods to manage.
  - template: The blueprint for the Pods to be created.
    - containers: Defines the containers within the Pod, specifying the image (nginx:latest) and the port it exposes (80).
Now, apply this deployment to your cluster using kubectl:
kubectl apply -f nginx-deployment.yaml
Verify the Deployment and Pods
Let’s check if the deployment was successful and if our Nginx pods are running:
Check the deployment status:
kubectl get deployments
You should see nginx-deployment listed with the desired number of replicas (3) available.
Check the Pods created by the deployment:
kubectl get pods -l app=nginx
You should see three Nginx pods running. If any are in a CrashLoopBackOff or Error state, check their logs with kubectl logs <pod-name>.
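Because the Deployment owns the replica count, you can resize it later without touching the YAML. For example (the target of 5 replicas is just an illustration):
# Scale the Deployment up and watch the new Pods appear
kubectl scale deployment nginx-deployment --replicas=5
kubectl get pods -l app=nginx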
Expose the Application with a Service
Right now, our Nginx pods are running, but they’re only accessible within the cluster. To make them accessible from outside, we need a Service. A Service provides a stable IP address and DNS name for a set of Pods, acting as a load balancer.
We’ll create a NodePort service. A NodePort service exposes the Service on each Node’s IP at a static port (the NodePort). This makes the Service accessible from outside the cluster.
Create a file named nginx-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx # Selects Pods with the label 'app: nginx'
  ports:
  - protocol: TCP
    port: 80 # Port the service listens on
    targetPort: 80 # Port the containers listen on
    nodePort: 30080 # The static port exposed on each Node
  type: NodePort # Exposes the service on each Node's IP at the NodePort
Apply this service definition:
kubectl apply -f nginx-service.yaml
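Before heading to a browser, you can confirm the Service exists and that it’s exposing NodePort 30080:
kubectl get service nginx-service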
Access Your Nginx Application!
Now, you can access your Nginx web server by navigating to the IP address of any of your nodes (control plane or worker) followed by the nodePort (30080 in this case).
Open your web browser and go to:
http://<your-node-ip>:30080
Replace <your-node-ip> with the actual IP address of one of your cluster nodes. You should see the default Nginx welcome page!
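If you’re working from a headless server, curl from any machine that can reach the node works just as well (the IP below is a placeholder):
curl http://192.168.1.100:30080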
This confirms that your Kubernetes cluster is working, your application is deployed, and it’s accessible from the outside. You’ve done it!
Conclusion: Your Kubernetes Journey Begins!
And there you have it, folks! You’ve successfully set up a Kubernetes cluster using kubeadm on Ubuntu 22.04 LTS. We’ve covered everything from the essential prerequisites and node preparation to initializing the control plane, installing a Pod network, joining worker nodes, and finally, deploying and accessing your first application. This is a massive accomplishment, and you should feel pretty proud of yourself!
This setup gives you a solid foundation for exploring the vast world of container orchestration. From here, you can dive into more advanced topics like:
- Deploying stateful applications with Persistent Volumes and StatefulSets.
- Implementing advanced networking with Ingress controllers for external access.
- Managing configurations with ConfigMaps and Secrets.
- Setting up monitoring and logging solutions.
- Exploring different CNI plugins like Calico for more advanced network policies.
- Implementing High Availability for your control plane.
Remember, the Kubernetes ecosystem is constantly evolving, so keep learning, keep experimenting, and don’t be afraid to break things (in a safe, test environment, of course!).
Thanks for following along. Happy clustering!