Deploy a Microservices Application on AWS EC2 using Kubernetes
How to Deploy a Microservices Application on AWS EC2 using Kubernetes:
- Setting up AWS EC2 Instances: We created two t2.medium instances on AWS EC2, one for the master node and one for the worker node. We also configured the security groups and SSH keys for these instances.
- Installing Kubernetes on AWS EC2 Instances: We installed Kubernetes on both instances using kubeadm, kubelet, and kubectl. We also initialized the master node and joined the worker node to the cluster.
- Deploying the Microservices Application on Kubernetes: We deployed a microservices application that consists of a MongoDB database and a taskmaster service. The taskmaster service is a web app that allows us to create and manage tasks; it is written in Node.js and uses MongoDB as its database. We deployed the persistent volume and persistent volume claim, the MongoDB database, the ClusterIP service for MongoDB, the taskmaster service, and the NodePort service for taskmaster.
Creating Two t2.medium Instances
To create two t2.medium instances on AWS EC2, follow these steps:
- Log in to your AWS console and go to the EC2 dashboard.
- Click on the Launch Instance button.
- Choose Ubuntu Server 22.04 LTS (HVM) as the AMI.
- Choose t2.medium as the instance type.
- Click on Next until you reach the Configure Security Group page.
- Create a new security group with the following rules (see the infrastructure-as-code sketch after this list):
  a. Allow SSH from anywhere (port 22)
  b. Allow TCP from anywhere (port 80)
  c. Allow TCP from anywhere (port 5000)
  d. Allow TCP from anywhere (port 6443)
  e. Allow all traffic from within the security group (port range 0–65535)
- Click on the Review and Launch button.
- Create a new key pair or use an existing one and download it.
- Click on the Launch Instances button.
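As an optional aside, the same rules can be captured as infrastructure as code. The following CloudFormation sketch mirrors the list above; the resource names are assumptions, and the tutorial itself uses the console:

AWSTemplateFormatVersion: "2010-09-09"
Description: Illustrative security group for the two-node Kubernetes cluster
Resources:
  K8sSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster security group from the tutorial
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0        # SSH from anywhere
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0        # HTTP
        - IpProtocol: tcp
          FromPort: 5000
          ToPort: 5000
          CidrIp: 0.0.0.0/0        # taskmaster app port
        - IpProtocol: tcp
          FromPort: 6443
          ToPort: 6443
          CidrIp: 0.0.0.0/0        # Kubernetes API server
  K8sIntraGroupIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt K8sSecurityGroup.GroupId
      IpProtocol: "-1"             # all protocols and ports within the group
      SourceSecurityGroupId: !GetAtt K8sSecurityGroup.GroupId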
Configuring Security Groups and SSH Keys
- Go to the Instances page on the EC2 dashboard and select one of your instances.
- Click on Actions > Networking > Change Security Groups.
- Select the security group that you created in the previous step and click on the Assign Security Groups button.
- Repeat the same steps for the other instance.
- Go to your terminal and change the permissions of your key pair file by running:
chmod 400 ~/.ssh/mykey.pem
- Replace ~/.ssh/mykey.pem with the path to your key pair file. This makes sure that only you can read and write your key pair file.
- SSH into one of your instances by running:
ssh -i ~/.ssh/mykey.pem ubuntu@<instance-ip>
- Replace ~/.ssh/mykey.pem with the path to your key pair file and <instance-ip> with the public IPv4 address of your instance. This establishes a secure connection to your instance using your key pair.
- Repeat the same steps for the other instance.
Installing Kubernetes on AWS EC2 Instances
Installing Docker on Both Instances
To install Docker on both instances, follow these steps:
- SSH into one of your instances by running:
ssh -i ~/.ssh/mykey.pem ubuntu@<instance-ip>
- Replace ~/.ssh/mykey.pem with the path to your key pair file and <instance-ip> with the public IPv4 address of your instance.
- Update the package index by running:
sudo apt update
- Install Docker by running:
sudo apt install docker.io -y
- Start and enable the Docker service by running:
sudo systemctl start docker
sudo systemctl enable docker
- Verify that Docker is installed and running by running:
sudo docker version
- Add the current user to the docker group by running (log out and back in for the change to take effect):
sudo usermod -aG docker $USER
- Repeat the same steps for the other instance.
Installing kubeadm, kubelet, and kubectl on Both Instances
To install kubeadm, kubelet, and kubectl on both instances, follow these steps:
- SSH into one of your instances by running:
ssh -i ~/.ssh/mykey.pem ubuntu@<instance-ip>
- Replace ~/.ssh/mykey.pem with the path to your key pair file and <instance-ip> with the public IPv4 address of your instance.
- Add the Kubernetes apt repository by running:
sudo apt update
sudo apt install apt-transport-https ca-certificates curl -y
curl -fsSL "https://packages.cloud.google.com/apt/doc/apt-key.gpg" | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/kubernetes-archive-keyring.gpg
echo 'deb https://packages.cloud.google.com/apt kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list
- Install kubeadm, kubelet, and kubectl by running:
sudo apt update
sudo apt install kubeadm=1.20.0-00 kubectl=1.20.0-00 kubelet=1.20.0-00 -y
- Verify that kubeadm, kubelet, and kubectl are installed by running:
kubeadm version
kubelet --version
kubectl version --client
- Repeat the same steps for the other instance.
Initializing the Master Node
To initialize the master node, follow these steps:
- SSH into the instance that you want to use as the master node by running:
ssh -i ~/.ssh/mykey.pem ubuntu@<master-node-ip>
- Replace ~/.ssh/mykey.pem with the path to your key pair file and <master-node-ip> with the public IPv4 address of your master node instance.
- Initialize the master node by running:
sudo kubeadm init
- Copy the join command that is displayed at the end of the output. You will need this command later to join the worker node to the cluster.
- Configure your user account to use kubectl by running:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Install a pod network add-on (flannel) by running the following. Note that flannel's default manifest expects the cluster to have been initialized with --pod-network-cidr=10.244.0.0/16.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
- Generate a token for worker nodes to join:
sudo kubeadm token create --print-join-command
- Verify that the master node is ready by running:
kubectl get nodes
Joining the Worker Node to the Cluster
To join the worker node to the cluster, follow these steps:
- SSH into the instance that you want to use as the worker node by running:
ssh -i ~/.ssh/mykey.pem ubuntu@<worker-node-ip>
- Replace ~/.ssh/mykey.pem with the path to your key pair file and <worker-node-ip> with the public IPv4 address of your worker node instance.
- Clear any previous kubeadm state, and with it any failing pre-flight checks, by running:
sudo kubeadm reset
- Join the worker node to the cluster by running the join command that you copied from the master node. The command should look something like this:
sudo kubeadm join 172.31.29.92:6443 --token thclqt.0m4dzsh5m6haswf5 --discovery-token-ca-cert-hash sha256:51bc85440423635ec77009514443330cfd2ff6292491dd33833e37406b34a8ba
- Replace <master-node-ip> with the public IPv4 address of your master node instance, <token> with the token that was generated by kubeadm, and <hash> with the hash of the CA certificate.
- Verify that the worker node is ready by running the following on the master node:
kubectl get nodes
Deploying the Microservices Application on Kubernetes
Cloning the GitHub Repository
- SSH into the master node by running:
ssh -i ~/.ssh/mykey.pem ubuntu@<master-node-ip>
- Replace ~/.ssh/mykey.pem with the path to your key pair file and <master-node-ip> with the public IPv4 address of your master node instance.
- Install git by running:
sudo apt install git -y
- Clone the repository that contains the configuration files for the application by running:
git clone https://github.com/574n13y/microservices-k8s.git
Deploying the Persistent Volume and Persistent Volume Claim
- Go to the k8s directory by running:
cd microservices-k8s/flask-api/k8s
- Apply the YAML file that defines the persistent volume (both files are sketched after these steps) by running:
kubectl apply -f mongo-pv.yml
- Apply the YAML file that defines the persistent volume claim by running:
kubectl apply -f mongo-pvc.yml
- Verify that the persistent volume and persistent volume claim are created and bound by running:
kubectl get pv,pvc
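For orientation, here is a minimal sketch of what a hostPath persistent volume and its claim typically look like. The repository's mongo-pv.yml and mongo-pvc.yml are authoritative; the names, size, and path below are illustrative assumptions.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv                # assumed name
spec:
  capacity:
    storage: 256Mi              # assumed size
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/mongo-data       # assumed host directory backing the volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc               # assumed name; referenced by the MongoDB pod
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 256Mi            # must not exceed the PV's capacity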
Deploying the MongoDB Database
- Apply the YAML file that defines the MongoDB deployment (sketched after these steps) by running:
kubectl apply -f mongo.yml
- Verify that the deployment is created and running by running:
kubectl get deployments
- Verify that the pod is running by running:
kubectl get pods
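A sketch of the likely shape of the MongoDB deployment, assuming the manifest follows the usual pattern; the image tag and labels are assumptions, and mongo.yml in the repository is authoritative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo                # must match the pod template labels below
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:4.4           # assumed image tag
          ports:
            - containerPort: 27017   # MongoDB's default port
          volumeMounts:
            - name: mongo-storage
              mountPath: /data/db    # MongoDB's data directory
      volumes:
        - name: mongo-storage
          persistentVolumeClaim:
            claimName: mongo-pvc     # the claim created in the previous step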
Deploying the ClusterIP Service for MongoDB
- Apply the YAML file that defines the service (sketched after these steps) by running:
kubectl apply -f mongo-svc.yml
- Verify that the service is created by running:
kubectl get services
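A ClusterIP service exposes MongoDB inside the cluster only, which is how the taskmaster pods reach the database. A sketch, assuming the service is named mongo and selects the deployment's pods (mongo-svc.yml in the repository is authoritative):

apiVersion: v1
kind: Service
metadata:
  name: mongo                # assumed name; pods could then reach MongoDB at mongo:27017
spec:
  type: ClusterIP            # the default type; reachable only from inside the cluster
  selector:
    app: mongo               # must match the MongoDB pod labels
  ports:
    - port: 27017            # port the service listens on
      targetPort: 27017      # port on the MongoDB container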
Deploying the Taskmaster Service
- Apply the YAML file that defines the taskmaster deployment (sketched after these steps) by running:
kubectl apply -f taskmaster.yml
- Verify that the deployment is created and running by running:
kubectl get deployments
- Verify that the pods are running by running:
kubectl get pods
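A sketch of the taskmaster deployment's likely shape; the image reference is a placeholder (taskmaster.yml in the repository names the real image), and port 5000 is assumed from the security group rule opened earlier:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: taskmaster
spec:
  replicas: 1
  selector:
    matchLabels:
      app: taskmaster
  template:
    metadata:
      labels:
        app: taskmaster
    spec:
      containers:
        - name: taskmaster
          image: <taskmaster-image>   # placeholder; use the image named in the repo's manifest
          ports:
            - containerPort: 5000     # assumed app port, matching the security group rule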
Deploying the NodePort Service for Taskmaster
- Apply the YAML file that defines the service (sketched after these steps) by running:
kubectl apply -f taskmaster-svc.yml
- Verify that the service is created by running:
kubectl get services
- Access the taskmaster web app from your browser by going to http://<node-ip>:30007/. You can use any node's IP address, either the master or the worker.
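A NodePort service opens the same port on every node, which is why either instance's IP works in the browser. A sketch, where nodePort 30007 comes from the URL above and the remaining values are assumptions (taskmaster-svc.yml in the repository is authoritative):

apiVersion: v1
kind: Service
metadata:
  name: taskmaster-svc       # assumed name
spec:
  type: NodePort
  selector:
    app: taskmaster          # must match the taskmaster pod labels
  ports:
    - port: 5000             # service port inside the cluster (assumed)
      targetPort: 5000       # container port (assumed)
      nodePort: 30007        # the port used in the browser URL above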