
Monday, November 7, 2022

Updated MetalLB 0.13.7 Configuration for K8s 1.25

With the new MetalLB, we first need to enable strict ARP mode in kube-proxy, using the following command:

kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system
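
We can verify that the change took effect before moving on (a quick check, not part of the original steps):

kubectl get configmap kube-proxy -n kube-system -o yaml | grep strictARP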

Then apply the new MetalLB manifest:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml

Apply the following YAML to define the IP range MetalLB can use:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.16.2.80-172.16.2.90
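
In MetalLB 0.13+, an IPAddressPool is only announced on the network once an L2Advertisement references it, so apply one as well. A minimal sketch (the resource name first-pool-advert is my own choice):

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool-advert
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool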



From:
https://metallb.universe.tf/installation/

Tuesday, September 13, 2022

Dynamic DNS with No-IP Solution

Choose your operating system and follow the installation instructions below.

  1. Download the DUC and save the file to: /usr/local/src (an example command follows this list)
  2. Open a terminal and execute the following:
    • cd /usr/local/src
    • tar xzf noip-duc-linux.tar.gz
    • cd no-ip-2.1.9
    • make
    • make install
  3. Create the configuration file: /usr/local/bin/noip2 -C
  4. You will be prompted to enter your username and password for No-IP, and for the hostnames you wish to update.
  5. Launch the DUC: /usr/local/bin/noip2
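
For step 1, the tarball has historically been available directly from No-IP at the URL below; treat the exact path as an assumption and verify it on noip.com:

cd /usr/local/src
sudo wget http://www.noip.com/client/linux/noip-duc-linux.tar.gz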

Create the file /etc/systemd/system/noip2.service with the following content (and drop your init.d scripts):

[Unit]
Description=noip2 service

[Service]
Type=forking
ExecStart=/usr/local/bin/noip2
Restart=always

[Install]
WantedBy=default.target

Then issue

sudo systemctl daemon-reload

to make systemd aware of the new unit (systemd caches unit files and this command makes systemd reconsider its cache).

Now you can try to start and stop your unit and see its status:

sudo systemctl status noip2
sudo systemctl start  noip2
sudo systemctl status noip2
sudo systemctl stop   noip2
sudo systemctl status noip2

To have the unit started at boot time you need to enable it:

sudo systemctl enable noip2
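
The DUC can also report on its running instances and the hosts they update, using the client's built-in flag:

sudo /usr/local/bin/noip2 -S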

Monday, August 8, 2022

Podman

Podman is a container engine, similar to Docker, rkt (Rocket), Drawbridge, and LXC.


podman login -u username -p password registry.access.redhat.com
podman pull [OPTIONS] [REGISTRY[:PORT]/]NAME[:TAG]

podman ps -a
podman search <image-name>
podman pull <image-name>
podman images
podman run <image-name> echo 'Hello world!'
podman run -d -p 8080 httpd                # run httpd in the background (-d), publishing container port 8080 on a random host port
podman port -l                             # display the port mappings of the last used container
podman run -it ubi8/ubi:8.3 /bin/bash      # run a container and enter its bash shell with -it
podman run --name mysql-custom \
    -e MYSQL_USER=Ruser -e MYSQL_PASSWORD=PASS \
    -e MYSQL_ROOT_PASSWORD=PASS \
    -d mysql
podman ps --format "{{.ID}} {{.Image}} {{.Names}}"

 

Root and Rootless Containers

sudo podman run --rm --name asroot -ti httpd /bin/bash
podman run --rm --name asuser -ti httpd /bin/bash

podman run --name my-httpd-container httpd          # give the container a custom name with --name
podman exec my-httpd-container cat /etc/hostname    # run a command inside the container with exec
podman stop my-httpd-container
podman kill -s SIGKILL my-httpd-container           # send a custom kill signal with -s
podman restart my-httpd-container
podman rm my-httpd-container


podman rm -a      # remove all containers
podman stop -a    # stop all containers


podman exec mysql /bin/bash -c 'mysql -uuser1 -pmypa55 -e "select * from items.Projects;"'
Sharing a local directory with a container

First, set up the local directory with the proper ownership and SELinux context:
mkdir /home/student/dbfiles
podman unshare chown -R 27:27 /home/student/dbfiles    # map ownership to the container's mysql UID/GID (27) inside the user namespace
sudo semanage fcontext -a -t container_file_t '/home/student/dbfiles(/.*)?'
sudo restorecon -Rv /home/student/dbfiles
ls -ldZ /home/student/dbfiles

Then mount the path with -v local_path:container_path:
podman run -v /home/student/dbfiles:/var/lib/mysql rhmap47/mysql
podman unshare chown 27:27 /home/student/local/mysql    # for rootless containers, map ownership of the host path with podman unshare

Port management


podman run -d --name apache1 -p 8080:8080 httpd              # host port 8080 -> container port 8080
podman run -d --name apache2 -p 127.0.0.1:8081:8080 httpd    # bind to localhost only
podman run -d --name apache3 -p 127.0.0.1::8080 httpd        # random localhost port -> container port 8080
podman port apache3

Podman Image Management
Podman is available on a RHEL host with the following entry in the /etc/containers/registries.conf file:

[registries.search] 
registries = ["registry.redhat.io","quay.io"]
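
On newer RHEL releases, the [registries.search] table is deprecated in favor of the v2 format; the equivalent entry would look like the following (a sketch; see containers-registries.conf(5)):

unqualified-search-registries = ["registry.redhat.io", "quay.io"]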

podman save [-o FILE_NAME] IMAGE_NAME[:TAG]
podman save -o mysql.tar quay.io/mysql:latest

podman load [-i FILE_NAME]
podman load -i mysql.tar

podman rmi [OPTIONS] IMAGE [IMAGE...]
podman rmi -a

podman commit [OPTIONS] CONTAINER [REPOSITORY[:PORT]/]IMAGE_NAME[:TAG]
podman commit mysql-basic mysql-custom
podman commit -a 'Your Name' httpd httpd-new


podman diff container-name    # tags any added file with A, any changed file with C, and any deleted file with D

podman tag [OPTIONS] IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG]
podman tag mysql-custom devops/mysql


podman rmi devops/mysql:snapshot

podman push [OPTIONS] IMAGE [DESTINATION]
podman push quay.io/bitnami/nginx

Friday, August 5, 2022

Kubernetes Components

 

Kubernetes, or K8s, is one of the most commonly used container orchestration tools. The following are the major components of a Kubernetes environment; they are not the only components, but they are the core ones.

Data Plane: the worker nodes, where the pods/containers running the workloads live
Control Plane: the master node, where the K8s control components run

Following are the components of the Control Plane:
  • API Server
    • Acts as the connection between all the components in the Control Plane and the Data Plane
    • Orchestrates all operations in the cluster
    • Exposes the K8s API, which end users use for operations and monitoring
    • Collects data from the kubelets for monitoring
    • Authenticates, validates, and retrieves data
    • Returns data or performs the requested operations on it
    • Passes data to the kubelets to perform operations on the worker nodes

  • etcd
    • The etcd service is mainly used as the store for all cluster details; etcd is basically a key-value data store.
    • Stores data including, but not limited to:
      • Registry
      • Nodes
      • Pods
      • Config
      • Secrets
      • Accounts
      • Roles
      • and other components as well

  • Kube-scheduler
    • Identifies the right worker node on which a container can be deployed and gives that back to the API server; the kubelet then gets the data from the API server and deploys the container.
    • Keeps monitoring the API server for new operations
    • Filters nodes
    • Ranks nodes, based on:
      • Resource requirements and the resources left after container placement
      • Taints and Tolerations
      • Node Selectors/Affinity
      • Labels and Selectors
      • Resource limits
      • Manual scheduling
      • Daemon Sets
      • Multiple schedulers
      • Scheduler events
  • Kube-controller-manager
    • Watches the status of the system
    • Remediates situations
    • Monitors the state of the system and tries to bring it to the desired state

Following are the components of the Data Plane:
  • Kubectl
    • The client used to connect to the API server (strictly a CLI client tool rather than a node component)
  • Kubelet
    • The agent that runs on each worker node
    • Listens to the Kube API server and performs the requested operations
    • Gives data back to the Kube API server for monitoring of the operations
  • Kube-proxy
    • Enables communication between Services on the worker nodes
    • Pod network
      • By default, all pods can reach each other
    • Creates iptables rules to allow communication between pods and Services
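
On a kubeadm-built cluster, most of these control plane components run as pods in the kube-system namespace, so a quick way to see them all (a verification example, not from the original notes):

kubectl get pods -n kube-system -o wide    # apiserver, etcd, scheduler, controller-manager, kube-proxy
kubectl get nodes                          # each node's kubelet reports its status here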





Wednesday, August 3, 2022

Quick OpenVPN Server

This is the easiest way to set up an OpenVPN server; the script is also very helpful for managing the client keys.

First, we need to download the script, make it executable, and run it with bash:


wget https://git.io/vpn -O openvpn-install.sh
chmod +x openvpn-install.sh 
bash openvpn-install.sh


The following is the output, with the options to choose when creating the VPN server and client certificates:

Welcome to this OpenVPN road warrior installer!
Which protocol should OpenVPN use?
   1) UDP (recommended)
   2) TCP
Protocol [1]: 1
What port should OpenVPN listen to?
Port [1194]: 
Select a DNS server for the clients:
   1) Current system resolvers
   2) Google
   3) 1.1.1.1
   4) OpenDNS
   5) Quad9
   6) AdGuard
DNS server [1]: 2
Enter a name for the first client:
Name [client]: rahul
OpenVPN installation is ready to begin.
Press any key to continue...
Finished!
The client configuration is available in: /root/XXXX.ovpn
New clients can be added by running this script again.


Once the client configuration is generated, we can copy it to the device we want to use as the client, configure the OpenVPN client app with the .ovpn profile, and start using the VPN.
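
On a Linux client, the profile can also be tested straight from the command line before setting up a GUI app (assuming the generated file was copied over as rahul.ovpn):

sudo openvpn --config rahul.ovpn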

Friday, January 28, 2022

Kubernetes with Containerd Using Ansible Over Ubuntu Machines

The following are the Ansible playbooks used to deploy a 1-master, 2-worker K8s cluster.


Environment

  • Ubuntu VMs running on VMware
  • K8s with containerd runtime

User Creation

  • Asks for the user name that has to be created
  • Creates the user
  • Adds a dedicated sudoers entry
  • Sets up passwordless sudo for the user
  • Copies the local user's SSH key to the server for passwordless auth
  • Prints the details
  • Updates the system

- hosts: all
  become: yes

  vars_prompt:
    - name: "new_user"
      prompt: "Account to be created on the remote server"
      private: no

  tasks:
    - name: Creating the user {{ new_user }}.
      user:
        name: "{{ new_user }}"
        createhome: yes
        shell: /bin/bash
        append: yes
        state: present

    - name: Create a dedicated sudo entry file for the user.
      file:
        path: "/etc/sudoers.d/{{ new_user }}"
        state: touch
        mode: '0600'

    - name: "Setting up sudo without password for user {{ new_user }}."
      lineinfile:
        dest: "/etc/sudoers.d/{{ new_user }}"
        line: '{{ new_user }} ALL=(ALL) NOPASSWD: ALL'
        validate: 'visudo -cf %s'

    - name: Set authorized key for {{ new_user }}, copying it from the current user.
      authorized_key:
        user: "{{ new_user }}"
        state: present
        key: "{{ lookup('file', lookup('env','HOME') + '/.ssh/id_rsa.pub') }}"

    - name: Print the created user.
      shell: id "{{ new_user }}"
      register: new_user_created

    - debug:
        msg: "{{ new_user_created.stdout_lines[0] }}"

    - name: Update cache & full system update
      apt:
        update_cache: true
        upgrade: dist
        cache_valid_time: 3600
        force_apt_get: true


Package Installation in Master and Worker Nodes

  • Copy the local hosts file to all the servers for name resolution
  • Update the hostnames of the machines based on the names in the hosts file
  • Make the swap inactive temporarily
  • Remove the swap entry from /etc/fstab
  • Create an empty file for the containerd modules
  • Configure the kernel modules for containerd
  • Create an empty file for the Kubernetes sysctl params
  • Configure the sysctl params for Kubernetes
  • Apply the sysctl params without a reboot
  • Install the prerequisites for Kubernetes
  • Add Docker's official GPG key
  • Add the Docker repository
  • Install containerd
  • Create the containerd config directory
  • Generate the default containerd configuration
  • Create the crictl config file
  • Enable and start the containerd service
  • Add Google's official GPG key
  • Add the Kubernetes repository
  • Install the Kubernetes cluster packages
  • Enable the kubelet service persistently
  • Reboot all the Kubernetes nodes

- hosts: "master, workers"
remote_user: ansible
become: yes
become_method: sudo
become_user: root
gather_facts: yes
connection: ssh
tasks:
- name: Copying the host file
copy:
src: /etc/hosts
dest: /etc/hosts
owner: root
group: root

- name: "Updating hostnames"
hostname:
name: "{{ new_hostname }}"

- name: Make the Swap inactive
command: swapoff -a

- name: Remove Swap entry from /etc/fstab.
lineinfile:
dest: /etc/fstab
regexp: swap
state: absent

- name: Create a empty file for containerd module.
copy:
content: ""
dest: /etc/modules-load.d/containerd.conf
force: no

- name: Configure module for containerd.
blockinfile:
path: /etc/modules-load.d/containerd.conf
block: |
overlay
br_netfilter

- name: Create a empty file for kubernetes sysctl params.
copy:
content: ""
dest: /etc/sysctl.d/99-kubernetes-cri.conf
force: no

- name: Configure sysctl params for Kubernetes.
lineinfile:
path: /etc/sysctl.d/99-kubernetes-cri.conf
line: "{{ item }}"
with_items:
- 'net.bridge.bridge-nf-call-iptables = 1'
- 'net.ipv4.ip_forward = 1'
- 'net.bridge.bridge-nf-call-ip6tables = 1'

- name: Apply sysctl params without reboot.
command: sysctl --system

- name: Installing Prerequisites for Kubernetes
apt:
name:
- apt-transport-https
- ca-certificates
- curl
- gnupg-agent
- vim
- software-properties-common
state: present

- name: Add Docker’s official GPG key
apt_key:
url: https://download.docker.com/linux/ubuntu/gpg
state: present
- name: Add Docker Repository
apt_repository:
repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
state: present
filename: docker
update_cache: yes

- name: Install containerd.
apt:
name:
- containerd.io
state: present

- name: Configure containerd.
file:
path: /etc/containerd
state: directory

- name: Configure containerd.
shell: /usr/bin/containerd config default > /etc/containerd/config.toml

- name: Creating containerd Config file
copy:
dest: "/etc/crictl.yaml"
content: |
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 2
debug: false

- name: Enable containerd service, and start it.
systemd:
name: containerd
state: restarted
enabled: yes
daemon-reload: yes

- name: Add Google official GPG key
apt_key:
url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
state: present

- name: Add Kubernetes Repository
apt_repository:
repo: deb http://apt.kubernetes.io/ kubernetes-xenial main
state: present
filename: kubernetes
mode: 0600

- name: Installing Kubernetes Cluster Packages.
apt:
name:
- kubeadm
- kubectl
- kubelet
state: present

- name: Enable service kubelet, and enable persistently
service:
name: kubelet
enabled: yes

- name: Reboot all the kubernetes nodes.
reboot:
msg: "Reboot initiated by Ansible"
connect_timeout: 5
reboot_timeout: 3600
pre_reboot_delay: 0
post_reboot_delay: 30
test_command: whoami


Master Configuration

  • Pulls all the needed images
  • Resets kubeadm if it is already configured
  • Initializes the K8s cluster
  • Creates the directory for the kube config file on the master
  • Creates a local kube config file on the master
  • Copies the kube config file to the Ansible local server
  • Generates the kubeadm join token for the workers and stores it
  • Copies the token to the master's tmp directory
  • Copies the token to the Ansible local tmp directory
  • Initializes the pod network with flannel
  • Copies the output to a file on the master
  • Copies the output to the Ansible local server


- hosts: master
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh

  tasks:
    - name: Pulling the images required for setting up a Kubernetes cluster
      shell: kubeadm config images pull

    - name: Resetting kubeadm
      shell: kubeadm reset -f
      register: output

    - name: Initializing the Kubernetes cluster
      shell: kubeadm init --apiserver-advertise-address=$(ip a | grep ens160 | grep 'inet ' | awk '{print $2}' | cut -f1 -d'/') --pod-network-cidr 10.244.0.0/16 --v=5
      register: myshell_output

    - debug: msg="{{ myshell_output.stdout }}"

    - name: Create .kube in the home directory of the master server
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755

    - name: Copy admin.conf to the user's kube config on the master server
      copy:
        src: /etc/kubernetes/admin.conf
        dest: $HOME/.kube/config
        remote_src: yes

    - name: Copy admin.conf to the user's kube config on the Ansible local server
      become: yes
      become_method: sudo
      become_user: root
      fetch:
        src: /etc/kubernetes/admin.conf
        dest: /Users/rahulraj/.kube/config
        flat: yes

    - name: Get the token for joining the nodes with the Kubernetes master.
      shell: kubeadm token create --print-join-command
      register: kubernetes_join_command

    - debug:
        msg: "{{ kubernetes_join_command.stdout_lines }}"

    - name: Copy the K8s join command to a file on the master
      copy:
        content: "{{ kubernetes_join_command.stdout_lines[0] }}"
        dest: "/tmp/kubernetes_join_command"

    - name: Copy the join command from the master to the local Ansible server
      fetch:
        src: "/tmp/kubernetes_join_command"
        dest: "/tmp/kubernetes_join_command"
        flat: yes

    - name: Install the pod network
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
      register: myshell_output

    - name: Copy the output to a file on the master
      copy:
        content: "{{ myshell_output.stdout }}"
        dest: "/tmp/pod_network_setup.txt"

    - name: Copy the network output from the master to the local Ansible server
      fetch:
        src: "/tmp/pod_network_setup.txt"
        dest: "/tmp/pod_network_setup.txt"
        flat: yes


Worker Configuration

  • Copies the token from the Ansible local file to the worker nodes
  • Resets kubeadm
  • Joins the worker node to the master by running the stored command

- hosts: workers
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh

  tasks:
    - name: Copy the join command to the worker nodes.
      become: yes
      become_method: sudo
      become_user: root
      copy:
        src: /tmp/kubernetes_join_command
        dest: /tmp/kubernetes_join_command
        mode: 0777

    - name: Resetting kubeadm
      shell: kubeadm reset -f
      register: output

    - name: Join the worker nodes with the master.
      become: yes
      become_method: sudo
      become_user: root
      command: sh /tmp/kubernetes_join_command
      register: joined_or_not

    - debug:
        msg: "{{ joined_or_not.stdout }}"


K8s should be up with the worker nodes now. 
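
Since the playbook fetched admin.conf to the local machine's kube config, the cluster can be verified from the Ansible control machine itself:

kubectl get nodes -o wide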


Friday, January 21, 2022

Setting up MetalLB Load Balancer with Kubernetes.

When we deploy Kubernetes in a local development environment and need to publish services through the LoadBalancer service type, MetalLB is one of the easiest solutions. All we need is a range of IPs from our network that MetalLB can use.

The following are the K8s configurations that need to be applied to the cluster.

Below is the ConfigMap that defines the IPs that can be used for the load balancers. (This ConfigMap-based configuration applies to older MetalLB releases such as v0.8; see the November 2022 post above for the 0.13+ configuration.)

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.2.80-172.16.2.90



Below is the ansible-playbook I used to deploy the MetalLB load balancer on the K8s cluster.
  • Initializes MetalLB on the cluster
  • Copies the MetalLB configuration to the master
  • Applies the configuration on the master with kubectl


- hosts: master
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh

  tasks:
    - name: Initializing the MetalLB cluster
      shell: kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml
      register: myshell_output

    - name: Copying the MetalLB config file
      copy:
        src: /Users/rahulraj/workspace/vmware-ansible/k8s/playbook/metallb-congif.yml
        dest: $HOME/metallb-congif.yml

    - name: Configuring the MetalLB cluster
      shell: kubectl apply -f $HOME/metallb-congif.yml
      register: myshell_output



To test it, we can deploy a sample Nginx and expose it through a LoadBalancer-type Service (k is an alias for kubectl):

k create deployment nginx-deployments --image=nginx --replicas=3 --port=80
k expose deployment nginx-deployments --port=80 --target-port=80 --type=LoadBalancer



The output should look like the following:

kubectl get svc
NAME                TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes          ClusterIP      10.96.0.1        <none>        443/TCP        25h
nginx-deployments   LoadBalancer   10.100.137.154   172.16.2.80   80:30973/TCP   13h
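
As a final check, the EXTERNAL-IP assigned by MetalLB should serve the Nginx welcome page from any machine on the same network (IP taken from the output above):

curl http://172.16.2.80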