
Wednesday, August 16, 2023

Resolving File Update Issues in Nextcloud: Correcting Permissions and Indexing

Modern cloud storage solutions like Nextcloud offer seamless file synchronization and sharing capabilities, enhancing collaboration and accessibility. However, sometimes you might encounter issues where manually copied files fail to get updated or indexed. This blog post provides insights into tackling this problem and presents commands to correct file permissions and trigger file indexing in Nextcloud.

Understanding the Issue

When manually copying files into your Nextcloud directory, you might notice that these files don't seem to sync or get indexed properly. This discrepancy can often be attributed to incorrect permissions or a lack of indexing triggers within the Nextcloud environment.


Correcting Permissions

File permissions play a crucial role in ensuring that the Nextcloud server can access, modify, and index files appropriately. Incorrect permissions can lead to issues such as files not being recognized or processed by Nextcloud.

To rectify this, you can adjust the ownership of your Nextcloud directory using the chown command. The following command changes the ownership of the Nextcloud directory to the nginx user and group:
sudo chown -R nginx:nginx /PATH TO THE NEXTCLOUD DIRECTORY/ABC/nextcloud
This ensures that the Nextcloud server has access to your files for indexing and synchronization. If you are running Apache instead of Nginx, substitute the relevant user (typically apache or www-data).


Triggering File Indexing

Nextcloud relies on indexing to keep track of file changes and updates. If manually copied files aren't being indexed automatically, you can initiate the indexing process using the occ command-line tool.
Use the following command to run a full file scan and index all files in your Nextcloud installation:

sudo -u nginx /PATH TO THE NEXTCLOUD DIRECTORY/ABC/nextcloud/occ files:scan --all
This command runs the indexing process under the nginx user, ensuring that the permissions are correctly managed throughout the process.
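If only a single account's files were copied, a full scan may be slower than necessary. As a rough sketch (USERNAME and the folder path below are placeholders), occ can also scan one user or one path at a time:

sudo -u nginx /PATH TO THE NEXTCLOUD DIRECTORY/ABC/nextcloud/occ files:scan USERNAME
sudo -u nginx /PATH TO THE NEXTCLOUD DIRECTORY/ABC/nextcloud/occ files:scan --path="/USERNAME/files/Documents"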

Thursday, August 10, 2023

Building a Secure Nextcloud Deployment with NFS Backend and Nginx on CentOS 9 with SELinux


Introduction

This comprehensive guide walks through the process of setting up a secure Nextcloud installation on a personal CentOS 9 server, using NFS as a robust backend storage solution. We will also harden the server environment by enabling SELinux and configuring Nginx for optimal performance.

Prerequisites

Before embarking on this endeavor, make sure you have the following prerequisites:


  • A server running CentOS 9.
  • Administrative access to the server.
  • Familiarity with Linux command-line operations.
  • A functional NFS server with shared storage.
  • SELinux enabled.


Installing Nginx

Installing Nginx on CentOS 9 is a straightforward process. Before you begin, it is good practice to update your system packages so you are working with the latest versions. Use the DNF package manager to install Nginx. After the installation, start the Nginx service and enable it to start automatically at system boot.
sudo dnf update
sudo dnf install nginx
Start and Enable Nginx:
sudo systemctl start nginx
sudo systemctl enable nginx


Configure the firewall. If firewalld is enabled, you need to allow HTTP and HTTPS traffic through it.
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
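To confirm the rules are active after the reload, you can list the services currently allowed by firewalld:
sudo firewall-cmd --list-services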


Install MariaDB Server


Use the DNF package manager to install MariaDB. After installation, start the MariaDB service and enable it to start automatically at system boot.
sudo dnf install mariadb-server
sudo systemctl start mariadb
sudo systemctl enable mariadb

Secure MariaDB Installation

MariaDB comes with a script to help you secure your installation. It will prompt you to set a root password, remove anonymous users, disallow root login remotely, and more.

sudo mysql_secure_installation
Follow the on-screen prompts to secure your MariaDB installation according to your preferences. Then check the MariaDB status and verify that it is running without any errors.
sudo systemctl status mariadb
Access MariaDB. You can now open the MariaDB command-line interface using the following command; enter the root password you set during the secure installation.
sudo mysql -u root -p
That's it! You have successfully installed and secured MariaDB on your CentOS 9 server. Next, create a dedicated database and user for Nextcloud (replace your_password with a strong password):

CREATE DATABASE nextcloud;
CREATE USER 'nextclouduser'@'localhost' IDENTIFIED BY 'your_password';
GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextclouduser'@'localhost';
FLUSH PRIVILEGES;
EXIT;


Installing and configuring PHP


Install EPEL and Remi Repositories:

You're installing the EPEL and Remi repositories to get access to more recent versions of PHP and its extensions.
sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
sudo dnf install -y https://rpms.remirepo.net/enterprise/remi-release-9.rpm

Reset PHP Module:

You're resetting the PHP module to ensure a clean installation.
dnf module reset php

Install PHP 7.4:

You're installing PHP 7.4 using the Remi repository.
dnf module install php:remi-7.4
dnf update
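Assuming the default profile was installed along with the module stream, you can confirm the selected PHP version before continuing:
php -v
dnf module list php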

Install PHP Extensions:

You're installing various PHP extensions that are commonly used with Nextcloud and other web applications.
dnf install -y php php-gd php-mbstring php-intl php-pecl-apcu php-mysqlnd php-opcache php-json php-zip

Enable PHP-FPM:

You're enabling and starting the PHP-FPM service, which is used to serve PHP files through Nginx.
systemctl enable --now php-fpm

Additional Extensions:

You're installing more PHP extensions that can be useful for various purposes.
dnf install -y php-gd php-json php-curl php-mbstring php-intl php-xml php-zip php-pear php-soap php-bcmath php-gmp php-opcache php-imagick php-pecl-redis php-pecl-apcu

These commands set up PHP and its extensions, making your server ready to support applications like Nextcloud. After completing these steps, you should be closer to having a functional web environment for hosting your applications. Always ensure to follow official documentation and best practices when setting up your server.

Edit PHP-FPM Configuration:

You're editing the www.conf file to set the user and group for PHP-FPM.
vi /etc/php-fpm.d/www.conf

Inside the file, update the user and group settings to use nginx:
user = nginx
group = nginx

Set SELinux Boolean:

You're setting a SELinux boolean to allow PHP to execute memory-mapped shared libraries.
setsebool -P httpd_execmem 1

Enable and Restart Services:

You're enabling and starting the PHP-FPM service and restarting the Nginx service.
systemctl enable --now php-fpm.service
systemctl restart nginx.service

Create PHP Info File:

You're creating a PHP info file to check the PHP configuration.
vi /usr/share/nginx/html/info.php
Add the following content to the file:
<?php phpinfo(); ?>

Check PHP and FPM Status:

You're checking the status of the PHP-FPM service and confirming that it is listening on its socket.
netstat -pl | grep php
systemctl status php-fpm

Update PHP Configuration:

You're editing the PHP configuration file to adjust some settings.
nano /etc/php.ini
Uncomment and/or modify the following lines:
cgi.fix_pathinfo=0
memory_limit=512M

Further Adjustments to PHP-FPM Configuration:

You're modifying the www.conf file for PHP-FPM to fine-tune its settings.
nano /etc/php-fpm.d/www.conf 
user = nginx
group = nginx
Uncomment these lines by removing the ‘;’.
env[HOSTNAME] = $HOSTNAME
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp 
These settings define the user, group, environment variables, and process manager behaviour for PHP-FPM.

Edit OPCache Configuration:

You're editing the OPCache configuration file to optimize PHP performance.
nano /etc/php.d/10-opcache.ini
Uncomment and adjust values for various OPCache settings.
opcache.enable=1
opcache.max_accelerated_files=10000
opcache.interned_strings_buffer=8
opcache.memory_consumption=128
opcache.save_comments=1
opcache.revalidate_freq=1
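As a quick sanity check (not part of the original setup), you can confirm that the OPcache settings were picked up after restarting PHP-FPM by inspecting the PHP configuration output:
php -i | grep opcache.memory_consumption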

Downloading and Configuring NextCloud

Install wget, which you'll use to download the Nextcloud archive, then download and extract Nextcloud into the Nginx web root. The URL below fetches the latest release; adjust it if you need a specific version.

sudo dnf install wget
wget https://download.nextcloud.com/server/releases/nextcloud-latest.zip
sudo dnf install unzip -y
unzip nextcloud-latest.zip -d /usr/share/nginx/

Set Ownership

You're setting ownership of the Nextcloud files to the nginx user. This is needed for Nginx to have the appropriate permissions.

sudo chown -R nginx:nginx /usr/share/nginx/nextcloud

Adjust PHP Permissions

You're adjusting group ownership of the PHP session, opcache, and WSDL cache directories under /var/lib/php so that PHP-FPM, running as the nginx user, can write to them. Ensure these paths match your actual PHP setup.

sudo chgrp -R nginx /var/lib/php/{opcache,session,wsdlcache}

Create Nextcloud Data Directory

You're creating the data directory for Nextcloud. This is where Nextcloud will store user data and files.

sudo mkdir /usr/share/nginx/nextcloud/data

Installing and Mounting NFS


Install NFS Utilities:

You're installing the NFS utility package, which is necessary for working with NFS shares.
sudo dnf install nfs-utils

Show Available NFS Exports:

You're using the showmount command to list the available NFS exports on a remote server with the IP address xxx.xxx.xxx.xxx.
showmount -e "xxx.xxx.xxx.xxx"
This will display a list of directories that are shared through NFS on the specified server.

Mount NFS Share:

You're mounting an NFS share from the remote server with the IP address xxx.xxx.xxx.xxx. The share path is /Volume2/Media, and you're mounting it to the local Nextcloud data directory /usr/share/nginx/nextcloud/data.
sudo mount xxx.xxx.xxx.xxx:/Volume2/Media /usr/share/nginx/nextcloud/data

This command mounts the remote NFS directory onto the local /usr/share/nginx/nextcloud/data directory on your CentOS 9 server. The contents of the remote directory will now be accessible from the local directory.
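Note that a mount made this way will not survive a reboot. A minimal /etc/fstab entry along these lines makes it persistent (the IP and share path mirror the command above; the mount options are just a common choice, adjust to your environment):

xxx.xxx.xxx.xxx:/Volume2/Media  /usr/share/nginx/nextcloud/data  nfs  defaults,_netdev  0  0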


Enabling SELinux

Change Ownership:

You're changing the ownership of the Nextcloud directory to the nginx user and group.
chown -R nginx:nginx /usr/share/nginx/nextcloud/

Configure SELinux Contexts:

You're using the semanage fcontext command to adjust SELinux file contexts for various Nextcloud directories and files. This allows SELinux to work with these files without causing permission issues.
semanage fcontext -a -t httpd_sys_rw_content_t '/usr/share/nginx/nextcloud/data(/.*)?'
semanage fcontext -a -t httpd_sys_rw_content_t '/usr/share/nginx/nextcloud/config(/.*)?'
semanage fcontext -a -t httpd_sys_rw_content_t '/usr/share/nginx/nextcloud/apps(/.*)?'
semanage fcontext -a -t httpd_sys_rw_content_t '/usr/share/nginx/nextcloud/assets(/.*)?'
semanage fcontext -a -t httpd_sys_rw_content_t '/usr/share/nginx/nextcloud/.htaccess'
semanage fcontext -a -t httpd_sys_rw_content_t '/usr/share/nginx/nextcloud/.user.ini'
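If the semanage command is not available on a minimal CentOS 9 install, it is provided by the policycoreutils-python-utils package:
sudo dnf install policycoreutils-python-utils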

Adjust Data Directory Permissions:

You're again changing ownership of the Nextcloud data directory.
chown -R nginx:nginx /usr/share/nginx/nextcloud/data

Restore SELinux Contexts:

You're using the restorecon command to restore SELinux file contexts for the Nextcloud directories and files you've adjusted.
restorecon -Rv '/usr/share/nginx/nextcloud/'

Set SELinux Boolean for NFS:

You're using the setsebool command to enable the httpd_use_nfs boolean. This allows the HTTP server (httpd) to access NFS shares.
setsebool -P httpd_use_nfs=1

Getting an SSL Certificate for the Domain

Obtain SSL/TLS Certificate:

You're using Certbot in manual mode with the DNS challenge. This means Certbot will prompt you to add a specific DNS TXT record to your domain's DNS configuration as a way to verify that you have control over the domain.

sudo dnf install certbot -y 
sudo certbot --manual --preferred-challenges dns certonly -d xyz.adcd.com

In this command, -d xyz.adcd.com specifies the domain for which you want to obtain the certificate.
Following this command, Certbot will provide you with instructions on what DNS TXT record to add, where to add it, and how to proceed. This process might involve temporarily adding the TXT record to your DNS zone and then waiting for DNS propagation before Certbot can validate it.


Update the Nginx Config


cat /etc/nginx/sites-available/nextcloud.conf 
upstream php-handler {
    server unix:/run/php-fpm/www.sock;
}

server {
    listen 80;
    server_name xyz.adcd.com;
    # enforce https
    return 301 https://$server_name:443$request_uri;
}

server {
    listen 8443 ssl http2;
    server_name xyz.adcd.com;

    ssl_certificate /etc/letsencrypt/live/xyz.adcd.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/xyz.adcd.com/privkey.pem;

    add_header Strict-Transport-Security "max-age=15552000" always;
    add_header Referrer-Policy "no-referrer" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Download-Options "noopen" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Permitted-Cross-Domain-Policies "none" always;
    add_header X-Robots-Tag "none" always;
    add_header X-XSS-Protection "1; mode=block" always;
    fastcgi_hide_header X-Powered-By;

    # Path to the root of your installation
    root /usr/share/nginx/nextcloud;

    access_log /var/log/nginx/nc_access_log;
    error_log /var/log/nginx/nc_error_log;

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    rewrite ^/.well-known/webfinger /nextcloud/public.php?service=webfinger last;
    rewrite ^/.well-known/nodeinfo /nextcloud/public.php?service=nodeinfo last;
    location = /.well-known/carddav {
      return 301 $scheme://$host:$server_port/remote.php/dav;
    }
    location = /.well-known/caldav {
      return 301 $scheme://$host:$server_port/remote.php/dav;
    }

    # set max upload size
    client_max_body_size 512M;
    fastcgi_buffers 64 4K;

    # Enable gzip but do not remove ETag headers
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

    location / {
        rewrite ^ /index.php;
    }

    location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ {
        deny all;
    }
    location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) {
        deny all;
    }

    location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+)\.php(?:$|\/) {
        fastcgi_split_path_info ^(.+?\.php)(\/.*|)$;
        set $path_info $fastcgi_path_info;
        try_files $fastcgi_script_name =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_param HTTPS on;
        fastcgi_param modHeadersAvailable true;
        fastcgi_param front_controller_active true;
        fastcgi_pass php-handler;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }

    location ~ ^\/(?:updater|oc[ms]-provider)(?:$|\/) {
        try_files $uri/ =404;
        index index.php;
    }

    location ~ \.(?:css|js|woff2?|svg|gif|map)$ {
        try_files $uri /index.php$request_uri;
        add_header Cache-Control "public, max-age=15778463";
        add_header Referrer-Policy "no-referrer" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-Download-Options "noopen" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Permitted-Cross-Domain-Policies "none" always;
        add_header X-Robots-Tag "none" always;
        add_header X-XSS-Protection "1; mode=block" always;

        access_log off;
    }

    location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap)$ {
        try_files $uri /index.php$request_uri;
        access_log off;
    }
}

Now restart Nginx and start initializing Nextcloud through its web installer.
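A quick sketch of that last step: validate the configuration first, restart the services, and then open https://xyz.adcd.com:8443 in a browser to run the Nextcloud installer with the database credentials created earlier.

sudo nginx -t
sudo systemctl restart nginx php-fpm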


Wednesday, April 12, 2023

Generalizing Ubuntu for VMware

When you clone a virtual machine in VMware, the new machine is an exact copy of the original machine, including the network settings. This means that the new machine will have the same IP address, MAC address, and other network settings as the original machine. This can cause network conflicts and other issues, especially if you are running multiple clones of the same machine on the same network.
    
To avoid this issue, you need to ensure that each clone of the machine has a unique network configuration. One way to do this is to delete the machine-id file, which is a unique identifier for the machine. When the machine boots up, it generates a new machine-id based on its hardware configuration, which will result in a unique network configuration.

The command rm -rf /var/log/* removes all logs from the /var/log directory, which can help to free up disk space and reduce clutter. However, it is important to note that this command will permanently delete all log files, which can make troubleshooting more difficult if there are issues with the system.

To delete the value in the machine-id file, you can use the following command:

echo "" > /etc/machine-id

** Don't rm -rf the machine-id file; if the file is missing entirely, the system might get stuck during startup.

This will clear the value in the file, effectively resetting the machine ID and generating a new ID on boot.

In addition to deleting the machine-id file, you may also want to clear the SSH keys and other sensitive information from the virtual machine. This can help to ensure that each clone of the machine is unique and secure.
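As a minimal sketch (assuming a stock Ubuntu template; the commands are illustrative, adapt them to your image), the cleanup before cloning and the key regeneration on each clone could look like this:

# On the template VM, before shutting it down for cloning
sudo truncate -s 0 /etc/machine-id      # same effect as the echo above, resets the machine ID
sudo rm -f /etc/ssh/ssh_host_*          # drop host keys so clones don't share them
sudo rm -rf /var/log/*                  # optional: clear old logs

# On each clone, after its first boot
sudo dpkg-reconfigure openssh-server    # regenerate fresh SSH host keys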


Friday, January 21, 2022

Setting up MetalLB Load Balancer with Kubernetes k8s.

When we are deploying Kubernetes in a local development environment and need to publish services through a load balancer, MetalLB is one of the easiest solutions we can use. All we need is a range of IP addresses from our network that MetalLB can use.

Following are the k8s configurations that need to be applied on the cluster. 

Below is the ConfigMap that defines the IP range MetalLB can assign to LoadBalancer services.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.2.80-172.16.2.90



Below is the Ansible playbook I used to deploy the MetalLB load balancer on the k8s cluster. It performs the following steps:
  • Initialize the master with the MetalLB manifests
  • Copy the MetalLB configuration to the master
  • kubectl apply the configuration on the master


- hosts: master
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh
  tasks:
    - name: Initializing Metallb cluster
      shell: kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml
      register: myshell_output

    - name: Copying the Metallb config file
      copy:
        src: /Users/rahulraj/workspace/vmware-ansible/k8s/playbook/metallb-congif.yml
        dest: $HOME/metallb-congif.yml

    - name: Configuring Metallb cluster
      shell: kubectl apply -f $HOME/metallb-congif.yml
      register: myshell_output



To test it, we shall deploy a sample Nginx and expose it through a LoadBalancer-type service (k is aliased to kubectl below).

k create deployment nginx-deployments --image=nginx --replicas=3 --port=80
k expose deployment nginx-deployments --port=80 --target-port=80 --type=LoadBalancer



The output should look like the following:

 kubectl get svc
NAME                TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes          ClusterIP      10.96.0.1        <none>        443/TCP        25h
nginx-deployments   LoadBalancer   10.100.137.154   172.16.2.80   80:30973/TCP   13h
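To confirm that traffic actually flows through the MetalLB-assigned address, a quick request to the EXTERNAL-IP from the output above should return the default Nginx page:

curl -I http://172.16.2.80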











Sunday, August 16, 2020

Converting Text Case in Linux: Exploring Powerful Command-Line Tools

In the realm of command-line utilities, Linux offers a plethora of versatile tools that empower users to perform a wide range of tasks efficiently. One such task involves converting the case of text within a file. Whether you're looking to transform text to lowercase or uppercase, Linux provides multiple command-line options to achieve this. In this article, we'll delve into the process of converting text case using four prominent tools: dd, awk, perl, and sed.


Converting Text to Lowercase

Using dd

The dd command, renowned for its data manipulation capabilities, can also be employed to convert text to lowercase.

$ dd if=input.txt of=output.txt conv=lcase

Leveraging awk

awk, a versatile text processing tool, offers a succinct way to convert text to lowercase.

$ awk '{ print tolower($0) }' input.txt > output.txt

The Magic of perl

Perl enthusiasts can harness the power of this scripting language to achieve case conversion.
$ perl -pe '$_= lc($_)' input.txt > output.txt

Transforming with sed

For those who appreciate the elegance of sed, this command can seamlessly convert text to lowercase.

$ sed -e 's/\(.*\)/\L\1/' input.txt > output.txt

Converting Text to Uppercase

dd for Uppercase Conversion

Using dd to convert text to uppercase is equally achievable.

$ dd if=input.txt of=output.txt conv=ucase

awk for Uppercase Transformation

awk enthusiasts can employ its capabilities for converting text to uppercase.

$ awk '{ print toupper($0) }' input.txt > output.txt

Uppercase Conversion with perl

Perl's power shines again in transforming text to uppercase.

$ perl -pe '$_= uc($_)' input.txt > output.txt

sed for Uppercase Conversion

Converting text to uppercase using sed is both efficient and effective.

$ sed -e 's/\(.*\)/\U\1/' input.txt > output.txt


Tuesday, January 15, 2019

Kubernetes Sample Commands

Below is a Kubernetes cheat sheet, which lists various useful commands that can be used with the kubectl command-line interface to manage Kubernetes clusters. These commands cover a range of tasks, such as creating and managing deployments, pods, and services, querying resource usage, deleting resources, and more. Examples include running tests using temporary pods, checking node and pod resource usage, and deleting resources by labels. Additionally, the cheat sheet also provides tips on how to enable shell autocompletion for kubectl, and how to open a bash terminal in a pod.


Run curl test temporarily 
kubectl run --rm mytest --image=yauritux/busybox-curl -it
Run wget test temporarily 
kubectl run --rm mytest --image=busybox -it
Run nginx deployment with 2 replicas 
kubectl run my-nginx --image=nginx --replicas=2 --port=80
List everything 
kubectl get all --all-namespaces
List pods with nodes info 
kubectl get pod -o wide
Show nodes with labels
kubectl get nodes --show-labels
Validate yaml file with dry run
kubectl create --dry-run --validate -f pod-dummy.yaml
Start a temporary pod for testing
kubectl run --rm -i -t --image=alpine test-$RANDOM -- sh
kubectl run shell command
kubectl exec -it mytest -- ls -l /etc/hosts
Get system conf via configmap
kubectl -n kube-system get cm kubeadm-config -o yaml
Explain resource
kubectl explain pods
kubectl explain svc
Get all services
kubectl get service --all-namespaces
Watch pods
kubectl get pods -n wordpress --watch
Query healthcheck endpoint
curl -L http://127.0.0.1:10250/healthz
Open a bash terminal in a pod
kubectl exec -it storage sh
Check pod environment variables
kubectl exec redis-master-ft9ex env
Enable kubectl shell autocompletion
echo "source <(kubectl completion bash)" >>~/.bashrc, and reload
Get services sorted by name
kubectl get services --sort-by=.metadata.name
Get pods sorted by restart count
kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'
Get node resource usage
kubectl top node
Get pod resource usage
kubectl top pod
Get resource usage for a given pod
kubectl top pod <podname> --containers
List resource utilization for all containers
kubectl top pod --all-namespaces --containers=true
Delete pod
kubectl delete pod/<pod-name> -n <my-namespace>
Delete pod by force
kubectl delete pod/<pod-name> --grace-period=0 --force
Delete pods by labels
kubectl delete pod -l env=test
Delete deployments by labels
kubectl delete deployment -l app=wordpress
Delete all resources filtered by labels
kubectl delete pods,services -l name=myLabel
Delete resources under a namespace
kubectl -n my-ns delete po,svc --all
Delete persist volumes by labels
kubectl delete pvc -l app=wordpress
Delete statefulset only (not pods)
kubectl delete sts/<stateful_set_name> --cascade=false
List all pods
kubectl get pods
List pods for all namespace
kubectl get pods --all-namespaces
List all critical pods
kubectl get -n kube-system pods -a
List pods with more info
kubectl get pod -o wide, kubectl get pod/<pod-name> -o yaml
Get pod info
kubectl describe pod/srv-mysql-server
List all pods with labels
kubectl get pods --show-labels
List running pods
kubectl get pods --field-selector=status.phase=Running
Get Pod initContainer status
kubectl get pod --template '{{.status.initContainerStatuses}}' <pod-name>
kubectl run command
kubectl exec -it -n "$ns" "$podname" -- sh -c "echo $msg >> /dev/err.log"

Friday, December 28, 2018

Docker Sample Commands

Below is a cheat sheet for using Docker, a popular containerization platform. It provides a list of commonly used commands to pull images from a registry, retag images, log in to a registry, push images to a registry, list images, delete images, create a Docker container from an image, stop and kill running containers, create overlay networks, list running containers, stop and remove containers, attach to and detach from containers, set containers to read-only, flatten images, check container resource usage, build images from Dockerfiles, use Docker Compose to build, create, and start containers, create and run a container with a mounted volume, copy files to and from containers, and inspect containers. It also includes a Dockerfile sample.

Pull an image from a registry
docker pull alpine:3.4

Retag a local image with a new image name and tag
docker tag alpine:3.4 myrepo/myalpine:3.4

Log in to a registry (the Docker Hub by default)
docker login my.registry.com:8000
Push an image to a registry
docker push myrepo/myalpine:3.4
List all images that are locally stored with the Docker engine
docker images
Delete an image from the local image store
docker rmi alpine:3.4
Create a Docker container from an image
docker run
--rm #remove container automatically after it exits
-it #connect the container to terminal
--name web #name the container
-p 5000:80 #expose port 5000 externally and map to port 80
-v ~/dev:/code #create a host mapped volume inside the container
alpine:3.4 #the image from which the container is instantiated
/bin/sh #the command to run inside the container
Stop a running container through SIGTERM
docker stop web
Stop a running container through SIGKILL
docker kill web
Create an overlay network and specify a subnet
docker network create --subnet 10.1.0.0/24 --gateway 10.1.0.1 -d overlay mynet
List the networks
docker network ls
List the running containers
docker ps
List the all running/stopped containers
docker ps -a

Stop a container
docker stop <container-name>

Stop a container (timeout = 1 second)
docker stop -t 1 <container-name>
Delete all running and stopped containers
docker rm -f $(docker ps -aq)

Remove all stopped containers
docker rm $(docker ps -q -f "status=exited")
Create a new bash process inside the container and connect it to the terminal
docker exec -it web bash
Print the last 100 lines of a container's log
docker logs --tail 100 web

Exporting an image to an external file
docker save -o <filename>.tar [username/]<image-name>[:tag]

Importing an image from an external file
docker load -i <filename>.tar

Inspecting docker image
docker inspect <Container-ID>

Attach to a running container 
docker attach <Container-ID>

Detach from the container without killing it (switch from interactive mode back to daemon mode)
Type Ctrl + p , Ctrl + q

Set the container to be read-only:
docker run --read-only <image>

Flatten an image
ID=$(docker run -d image-name /bin/bash)
docker export $ID | docker import - flat-image-name

To check the CPU, memory, and network I/O usage
docker stats <container>
Build an image from the Dockerfile in the current directory and tag the image
docker build -t myapp:1.0 .

Dockerfile sample

vi Dockerfile
=========
FROM ubuntu
MAINTAINER RR
RUN apt-get update
RUN apt-get install -y nginx
COPY index.html /usr/share/nginx/html/
ENTRYPOINT ["/usr/sbin/nginx","-g","daemon off;"]
EXPOSE 80
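To try the sample above, a typical build-and-run cycle would look something like the following (the image and container names are arbitrary, and an index.html is assumed to exist next to the Dockerfile):

docker build -t my-nginx .
docker run -d --name web1 -p 8080:80 my-nginx
curl -I http://localhost:8080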
Build new images, create all containers, and start all containers (Compose). (This will not rebuild images if a Dockerfile changes.)
docker-compose up

Build, create, and start all in the background (Compose):
docker-compose up -d

Rebuild all images, create all containers, and start all containers (Compose):
docker-compose up --build

Create a new container for my_service in docker-compose.yml and run the echo command instead of the specified command:
docker-compose run my_service echo "hello"

Run a container with a volume named my_volume mounted at /my/path in the Docker container. (The volume will be created if it doesn't already exist.) 
docker run --mount source=my_volume,target=/my/path my-image
docker run -v my_volume:/my/path my-image

Copy my-file.txt from the host current directory to the /tmp directory in my_container:
docker cp ./my-file.txt my_container:/tmp/my-file.txt

Inspect a container
docker inspect python_web | less 

Sunday, July 22, 2018

Deploying Kafka on Ubuntu

Apache Kafka is a distributed message broker designed to handle large volumes of real-time data efficiently. Unlike traditional brokers like ActiveMQ and RabbitMQ, Kafka runs as a cluster of one or more servers, which makes it highly scalable; thanks to this distributed nature it has built-in fault tolerance while delivering higher throughput than its counterparts.

Implementation of Single Node Kafka

Installing Java

sudo apt-get update
sudo apt-get install default-jre

Installing Zookeeper

sudo apt-get install zookeeperd

Create a service User for Kafka

sudo adduser --system --no-create-home --disabled-password --disabled-login kafka

Download Kafka

cd ~
curl http://kafka.apache.org/KEYS | gpg --import
wget https://dist.apache.org/repos/dist/release/kafka/1.0.1/kafka_2.12-1.0.1.tgz
wget https://dist.apache.org/repos/dist/release/kafka/1.0.1/kafka_2.12-1.0.1.tgz.asc
gpg --verify kafka_2.12-1.0.1.tgz.asc kafka_2.12-1.0.1.tgz

Create a directory for extracting Kafka

sudo mkdir /opt/kafka
sudo tar -xvzf kafka_2.12-1.0.1.tgz --directory /opt/kafka --strip-components 1

Delete Kafka tarball and .asc file

rm -rf kafka_2.12-1.0.1.tgz kafka_2.12-1.0.1.tgz.asc

Configuring Kafka Server

Set up Kafka to start automatically on bootup

Copy the following init script to /etc/init.d/kafka:
======***
DAEMON_PATH=/opt/kafka/bin
DAEMON_NAME=kafka
# Check that networking is up.
#[ ${NETWORKING} = "no" ] && exit 0

PATH=$PATH:$DAEMON_PATH

# See how we were called.
case "$1" in
 start)
       # Start daemon.
       echo "Starting $DAEMON_NAME";
       nohup $DAEMON_PATH/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
       ;;
 stop)
       # Stop daemons.
       echo "Shutting down $DAEMON_NAME";
       pid=`ps ax | grep -i 'kafka.Kafka' | grep -v grep | awk '{print $1}'`
       if [ -n "$pid" ]
         then
         kill -9 $pid
       else
         echo "Kafka was not Running"
       fi
       ;;
 restart)
       $0 stop
       sleep 2
       $0 start
       ;;
 status)
       pid=`ps ax | grep -i 'kafka.Kafka' | grep -v grep | awk '{print $1}'`
       if [ -n "$pid" ]
         then
         echo "Kafka is Running as PID: $pid"
       else
         echo "Kafka is not Running"
       fi
       ;;
 *)
       echo "Usage: $0 {start|stop|restart|status}"
       exit 1
esac

exit 0
======***

Make the Kafka service script executable and register it to start at boot

sudo chmod 755 /etc/init.d/kafka
sudo update-rc.d kafka defaults

Start/Stop the Kafka Service

sudo service kafka start
sudo service kafka status
sudo service kafka stop

Testing Kafka topics

sudo service kafka start
sudo service kafka status

Topic creation

/opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

Publish messages to the test topic

/opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

This will prompt for messages; we can enter a test message.

Consume messages from the topic

/opt/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

Making Kafka Scalable

Requirements:
  • Cluster ZooKeeper across all the servers
  • Cluster Kafka across all the servers

Install ZooKeeper on all the servers and edit /etc/zookeeper/conf/zoo.cfg on each node so that it lists all the ZooKeeper servers:

server.0=10.0.0.1:2888:3888
server.1=10.0.0.2:2888:3888
server.2=10.0.0.3:2888:3888
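Each node also needs to know which server.N entry it is. Assuming the default zookeeperd data directory (dataDir=/var/lib/zookeeper in zoo.cfg), the ID is written to a myid file on each host:

echo "0" | sudo tee /var/lib/zookeeper/myid   # on 10.0.0.1
echo "1" | sudo tee /var/lib/zookeeper/myid   # on 10.0.0.2
echo "2" | sudo tee /var/lib/zookeeper/myid   # on 10.0.0.3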

Once Kafka is installed on all the servers, we will change the following settings in /opt/kafka/config/server.properties.

broker.id should be unique for each node in the cluster:

for node-2: broker.id=1
for node-3: broker.id=2

Change the zookeeper.connect value so that it lists all ZooKeeper hosts with their ports:

zookeeper.connect=10.0.0.1:2181,10.0.0.2:2181,10.0.0.3:2181
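Once every broker is up with its own broker.id and the shared zookeeper.connect string, creating a replicated topic is a quick sanity check that the cluster is working (the topic name here is arbitrary):

/opt/kafka/bin/kafka-topics.sh --create --zookeeper 10.0.0.1:2181 --replication-factor 3 --partitions 3 --topic cluster-test
/opt/kafka/bin/kafka-topics.sh --describe --zookeeper 10.0.0.1:2181 --topic cluster-test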