Kubernetes Cluster on Ubuntu 20.04: A Step-by-Step Guide
Hey guys! So, you’re looking to get your hands dirty with Kubernetes on Ubuntu 20.04, huh? Awesome choice! Kubernetes, or K8s as us tech nerds like to call it, is the undisputed champion when it comes to managing containerized applications. It’s like the super-smart conductor for your apps, making sure they run smoothly, scale up when needed, and recover if something goes wrong. Setting up your own K8s cluster might sound a bit intimidating at first, but trust me, with a little guidance, it’s totally doable. We’re going to walk through the entire process, from setting up your Ubuntu servers to having a fully functional Kubernetes cluster ready to deploy your awesome applications. So, grab a coffee, buckle up, and let’s dive into the exciting world of Kubernetes on Ubuntu 20.04!
Pre-Installation Checklist: What You Need Before You Start
Alright, before we jump headfirst into installing Kubernetes, let's make sure we've got all our ducks in a row. This is super important, guys, because skipping these steps can lead to some frustrating troubleshooting down the line. Think of it like preparing your ingredients before you start cooking – you don't want to realize you're missing the flour halfway through baking a cake, right? For our Kubernetes cluster, we'll need a few things. First off, you'll need at least two Ubuntu 20.04 LTS servers. One will act as your master node (the brain of the operation) and the others will be worker nodes (where your actual applications will run). It's totally possible to run a single-node cluster for testing, but for anything serious, you'll want at least one master and one worker. Make sure these servers have static IP addresses. This is crucial because Kubernetes relies on stable network configurations; if your IP addresses keep changing, your cluster will get confused, and nobody wants a confused cluster. You'll also need SSH access to all your nodes with a user that has `sudo` privileges – we'll be running a lot of commands, and we need that `sudo` power. Additionally, ensure that all nodes have internet access so they can download the necessary packages and images. And a big one: disable swap on all nodes. Kubernetes doesn't play well with swap enabled, as it can cause performance issues and instability. You can do this by running `sudo swapoff -a` and then commenting out the swap line in your `/etc/fstab` file so it stays off after a reboot. Lastly, make sure your firewalls are configured correctly. Kubernetes needs specific ports open for communication – above all, the API server on port 6443 must be reachable from every worker node. Getting these prerequisites sorted will save you a ton of headache, so take your time here. It's the foundation upon which our glorious Kubernetes cluster will be built!
Step 1: Preparing Your Ubuntu Nodes
Okay, team, let's get our Ubuntu servers prepped and ready for Kubernetes action. This initial setup is absolutely critical for a smooth installation. We want our nodes to be clean, consistent, and speaking the same language, so to speak. First things first, let's ensure all our systems are up-to-date. Open up your terminal on each node and run `sudo apt update` followed by `sudo apt upgrade -y`. This will fetch the latest package information and install any available updates. It's like giving your servers a fresh coat of paint and making sure all their tools are in top-notch condition. Next, we need to install some essential packages that Kubernetes relies on: `apt-transport-https`, `ca-certificates`, `curl`, `gnupg`, and `lsb-release`. These allow your system to securely communicate with package repositories and handle certificates. You can install them all with a single command: `sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release`. Now, for a bit of K8s magic, we need to install containerd, which is a container runtime. Kubernetes needs something to actually run your containers, and containerd is a popular and efficient choice. To install it, we'll add the Docker GPG key (yes, even though we're not installing Docker itself, containerd's packages are distributed through Docker's repository), set up the repository, and then install containerd. Here's how you do it:

```
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y containerd.io
```

After installing `containerd.io`, we need to configure it. Generate the default configuration file with `sudo containerd config default | sudo tee /etc/containerd/config.toml`. Then we need to make a crucial change: enabling the systemd cgroup driver that Kubernetes expects. Open the configuration file with `sudo nano /etc/containerd/config.toml`, find the line `SystemdCgroup = false`, and change it to `SystemdCgroup = true`. Save and exit, then restart the service with `sudo systemctl restart containerd` so the change takes effect. This step is super important for Kubernetes to properly manage container resources.
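One thing that trips a lot of people up at this point: kubeadm's preflight checks also expect the `overlay` and `br_netfilter` kernel modules to be loaded, bridged traffic to be visible to iptables, and IP forwarding to be on. The sysctl keys below are the ones the official kubeadm setup docs use; a minimal sketch for each node might look like this:

```shell
# Load the kernel modules containerd and kube-proxy rely on, now and at boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Let iptables see bridged traffic and enable IP forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```

Without `net.ipv4.ip_forward = 1` in particular, `kubeadm init` will refuse to run its preflight checks cleanly.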
Finally, let's get Kubernetes itself installed. We'll add the Kubernetes GPG key and repository. Note that the old `apt.kubernetes.io` repository you'll see in older tutorials has been retired; the packages now live at `pkgs.k8s.io`, with one repository per minor release. Run these commands (swap `v1.28` for whichever minor version you want to track):

```
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /usr/share/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
```

Now, we can install the Kubernetes components: `kubeadm`, `kubelet`, and `kubectl`. `kubeadm` is the tool that helps bootstrap the cluster, `kubelet` runs on each node and ensures containers are running, and `kubectl` is the command-line tool to interact with the cluster. Run `sudo apt install -y kubeadm kubelet kubectl`. Don't worry if `kubelet` immediately starts crash-looping in the logs; it has nothing to do until the cluster is initialized, and that's expected. We're almost there with the node preparation, guys! Just a couple more tweaks, and we'll be ready to initialize our cluster.
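One small extra step worth doing right after the install: hold the packages so a routine `apt upgrade` can't bump your Kubernetes version out from under a running cluster. Version changes should go through `kubeadm upgrade` deliberately, not as a side effect of patching the OS.

```shell
# Pin kubeadm, kubelet and kubectl at their current versions
sudo apt-mark hold kubeadm kubelet kubectl

# Later, when you deliberately upgrade, release the hold first:
# sudo apt-mark unhold kubeadm kubelet kubectl
```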
Step 2: Initializing the Control Plane (Master Node)
Alright, this is where the magic really begins! We're going to initialize our control plane, which is essentially setting up the brain of our Kubernetes cluster on our designated master node. This node will manage the cluster state, schedule pods, and handle all the heavy lifting. First, make sure you're on your master node. The base command is `sudo kubeadm init`, but to give the cluster a proper network setup we need one critical flag: `--pod-network-cidr`. This tells Kubernetes what IP address range to use for your pods. A common choice is `10.244.0.0/16` if you plan to use the Flannel CNI (Container Network Interface) plugin, which is what we'll use later. So, the command looks like this:

```
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```

This command will take a few minutes to complete. `kubeadm` will set up all the necessary components on the master node, including etcd (the cluster's key-value store), the Kubernetes API server, the controller manager, and the scheduler. Once it's done, you'll see a bunch of output, and crucially, it will give you instructions on how to configure `kubectl` for your user and how to join your worker nodes to the cluster. Pay close attention to these instructions! They contain the `kubeadm join` command with a token and a discovery hash that are unique to your cluster. Copy these down somewhere safe; you'll need them to add your worker nodes later. After `kubeadm init` finishes, configure `kubectl` so your regular user can interact with the cluster:

```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

This sets up the necessary configuration file in your home directory. Now, test that `kubectl` is working by running `kubectl get nodes`. You should see your master node listed, but it will likely be in a `NotReady` state. That's totally normal at this point because we haven't installed a pod network yet. Also make sure the `kubelet` service is enabled so it comes back after a reboot (kubeadm starts it for you during init):

```
sudo systemctl enable kubelet
```

And there you have it! Your control plane is initialized. It's like the command center is operational, but it still needs communication lines to its other stations. Next up, we'll connect the rest of the team – our worker nodes!
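A quick aside, since we mentioned single-node clusters earlier: by default the control-plane node carries a taint that keeps regular workloads off it. If you're running a one-machine test cluster, you can remove that taint so pods schedule on the master. The taint name below is the one recent kubeadm releases apply; very old releases used `node-role.kubernetes.io/master` instead, so check `kubectl describe node` if the command reports nothing to remove.

```shell
# Allow ordinary pods to schedule on the control-plane node (test clusters only)
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```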
Step 3: Installing a Pod Network Add-on
Guys, your Kubernetes cluster isn’t fully functional without a
pod network
. Think of it like this: your master node can control things, but the worker nodes can’t talk to each other or to the pods running on them effectively without a proper network setup. This is where a CNI (Container Network Interface) plugin comes in. There are several options out there, like Calico, Flannel, Weave Net, etc. For simplicity and wide compatibility, we’re going to use
Flannel
. It’s a popular choice for beginners and works great. Before we install Flannel, let’s make sure our master node is ready. Sometimes, the
kubelet
might not start correctly if it expects CNI configuration. To ensure it starts, you might need to restart it:
sudo systemctl restart kubelet
. Now, let’s apply the Flannel manifest. You can usually find the latest manifest on the Flannel GitHub repository, but here’s a common command to apply it directly:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
This command downloads the Flannel configuration from the internet and applies it to your cluster.
kubectl
will create the necessary pods and configurations for Flannel to manage the network between your nodes and pods. Once you run this command, it might take a minute or two for the Flannel pods to start up. You can check their status with
kubectl get pods -n kube-system
. You should see pods like
kube-flannel-ds-...
running. After Flannel is up and running, your master node should transition from the
NotReady
state to
Ready
. You can verify this by running
kubectl get nodes
again. You should now see your master node listed as
Ready
.
This is a huge milestone!
Your control plane is now not only initialized but also has a working network layer. This means it can start communicating properly. Without this pod network, your worker nodes wouldn’t be able to join or schedule pods effectively. So, installing the CNI is
absolutely essential
for a functional Kubernetes cluster. It bridges the gap, allowing all the components to communicate seamlessly. Pretty cool, right? Now, let’s bring our worker nodes into the fold.
Step 4: Joining Worker Nodes to the Cluster
Alright, guys, it’s time to add our
worker nodes
to the Kubernetes cluster! These are the machines that will actually run your containerized applications. Remember that
kubeadm join
command we saved from when we initialized the master node? This is where it comes into play. You need to run this command on
each
of your worker nodes. It will look something like this:
sudo kubeadm join <master-ip-address>:6443 --token <your-token> \
--discovery-token-ca-cert-hash sha256:<your-hash>
Replace
<master-ip-address>
with the actual IP address of your master node, and
<your-token>
and
<your-hash>
with the token and hash that
kubeadm init
provided earlier. If, for some reason, you lost your token or it expired (tokens are valid for 24 hours by default), you can generate a new one on the master node with this command:
sudo kubeadm token create --print-join-command
This will spit out a new
kubeadm join
command that you can then copy and run on your worker nodes. Once you run the join command on a worker node, it will connect to the master node, get its configuration, and start up its
kubelet
service. The node will then register itself with the cluster. You can verify that the worker node has joined successfully by going back to your master node and running
kubectl get nodes
. You should see your newly added worker node appear in the list, and it should also eventually show a
Ready
status. Repeat this process for all the worker nodes you want to add to your cluster.
It’s vital
that the worker nodes can reach the master node over the network and that the necessary ports are open in your firewalls. If a worker node doesn’t show up or stays in a
NotReady
state, double-check your network connectivity, firewall rules, and ensure the
kubelet
service is running on the worker node (
sudo systemctl status kubelet
). With all your worker nodes joined and showing as
Ready
, congratulations! You now have a
fully operational multi-node Kubernetes cluster
running on Ubuntu 20.04. This is a massive achievement, guys! You’ve successfully set up the infrastructure to deploy and manage your containerized applications at scale.
Conclusion: Your Kubernetes Journey Begins!
And there you have it, folks! You've successfully navigated the process of setting up a Kubernetes cluster on Ubuntu 20.04. From prepping your servers and installing essential components to initializing the control plane, configuring the pod network, and finally joining your worker nodes, you've built a solid foundation for your container orchestration needs. This is just the beginning of your Kubernetes adventure, though! Now that you have a cluster, you can start deploying your applications using `kubectl`. Experiment with creating Deployments and Services, and see how Kubernetes manages your workloads. Remember, Kubernetes is a vast ecosystem, and there's always more to learn. Keep exploring, keep practicing, and don't be afraid to dive deeper into topics like networking, storage, security, and advanced scheduling. Setting up your own cluster like this is an invaluable learning experience that gives you a hands-on understanding of how Kubernetes works under the hood. So, go forth and containerize! Happy deploying, and welcome to the world of Kubernetes!