Friday, December 28, 2018

Docker Sample Commands

Below is a Docker cheat sheet: commonly used commands for pulling, retagging, and pushing images, logging in to a registry, listing and deleting images, running containers from images, stopping and removing containers, creating overlay networks, attaching to and detaching from containers, running read-only containers, flattening images, checking container resource usage, building images from Dockerfiles, working with Docker Compose, mounting volumes, copying files to and from containers, and inspecting containers. A sample Dockerfile is also included.

Pull an image from a registry
docker pull alpine:3.4

Retag a local image with a new image name and tag
docker tag alpine:3.4 myrepo/myalpine:3.4

Log in to a registry (the Docker Hub by default)
docker login my.registry.com:8000
Push an image to a registry
docker push myrepo/myalpine:3.4
List all images that are locally stored with the Docker engine
docker images
Delete an image from the local image store
docker rmi alpine:3.4
Create a Docker container from an image
docker run
--rm #remove container automatically after it exits
-it #connect the container to terminal
--name web #name the container
-p 5000:80 #expose port 5000 externally and map to port 80
-v ~/dev:/code #create a host mapped volume inside the container
alpine:3.4 #the image from which the container is instantiated
/bin/sh #the command to run inside the container
Stop a running container through SIGTERM
docker stop web
Stop a running container through SIGKILL
docker kill web
Create an overlay network and specify a subnet
docker network create --subnet 10.1.0.0/24 --gateway 10.1.0.1 -d overlay mynet
List the networks
docker network ls
List the running containers
docker ps
List all running and stopped containers
docker ps -a

Stop a container
docker stop <container-name>

Stop a container (timeout = 1 second)
docker stop -t 1 <container-name>
Delete all running and stopped containers
docker rm -f $(docker ps -aq)

Remove all stopped containers
docker rm $(docker ps -q -f "status=exited")
Create a new bash process inside the container and connect it to the terminal
docker exec -it web bash
Print the last 100 lines of a container's logs
docker logs --tail 100 web

Export an image to an external file
docker save -o <filename>.tar [username/]<image-name>[:tag]

Import an image from an external file
docker load -i <filename>.tar

Inspect a Docker image or container
docker inspect <image-name|container-id>

Attach to a running container 
docker attach <Container-ID>

Detach from the container without killing it ##turn interactive mode into daemon mode
Type Ctrl + p , Ctrl + q

Set the container to be read-only:
docker run --read-only

Flatten an image
ID=$(docker run -d image-name /bin/bash)
docker export $ID | docker import - flat-image-name

To check the CPU, memory, and network I/O usage
docker stats <container>
Build an image from the Dockerfile in the current directory and tag the image
docker build -t myapp:1.0 .

Dockerfile sample

vi Dockerfile
=========
FROM ubuntu
MAINTAINER RR
RUN apt-get update
RUN apt-get install -y nginx
COPY index.html /usr/share/nginx/html/
ENTRYPOINT ["/usr/sbin/nginx","-g","daemon off;"]
EXPOSE 80
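
Assuming the Dockerfile above is saved in the current directory, a minimal build-and-run sketch looks like this (the image name mynginx and host port 8080 are arbitrary choices for illustration):

docker build -t mynginx .
docker run -d --name mynginx -p 8080:80 mynginx
curl http://localhost:8080 #should return the copied index.html
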
Build new images, create all containers, and start all containers (Compose). (This will not rebuild images if a Dockerfile changes.)
docker-compose up

Build, create, and start all in the background (Compose):
docker-compose up -d

Rebuild all images, create all containers, and start all containers (Compose):
docker-compose up --build

Create a new container for my_service in docker-compose.yml and run the echo command instead of the specified command:
docker-compose run my_service echo "hello"
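
For reference, a minimal docker-compose.yml sketch that the commands above could operate on; the service name my_service, the build context, and the port mapping are assumptions for illustration only:
=========
version: "3"
services:
  my_service:
    build: .        # build the image from the Dockerfile in the current directory
    ports:
      - "5000:80"   # map host port 5000 to container port 80
=========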

Run a container with a volume named my_volume mounted at /my/path in the Docker container. (The volume will be created if it doesn't already exist.) 
docker run --mount source=my_volume,target=/my/path my-image
docker run -v my_volume:/my/path my-image

Copy my-file.txt from the host current directory to the /tmp directory in my_container:
docker cp ./my-file.txt my_container:/tmp/my-file.txt
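
Copying in the other direction works the same way; this copies the file from my_container back to the host's current directory:
docker cp my_container:/tmp/my-file.txt .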

Inspect a container
docker inspect python_web | less 

Sunday, July 22, 2018

Deploying Kafka on Ubuntu

Apache Kafka is a distributed message broker designed to handle large volumes of real-time data efficiently. Unlike traditional brokers such as ActiveMQ and RabbitMQ, Kafka runs as a cluster of one or more servers, which makes it highly scalable; this distributed design also gives it built-in fault tolerance while delivering higher throughput than its counterparts.

Implementation of Single Node Kafka

Installing Java

sudo apt-get update
sudo apt-get install default-jre

Installing Zookeeper

sudo apt-get install zookeeperd

Create a service user for Kafka

sudo adduser --system --no-create-home --disabled-password --disabled-login kafka

Download Kafka

cd ~
curl http://kafka.apache.org/KEYS | gpg --import
# download the release tarball (same path as the .asc signature below)
wget https://dist.apache.org/repos/dist/release/kafka/1.0.1/kafka_2.12-1.0.1.tgz
wget https://dist.apache.org/repos/dist/release/kafka/1.0.1/kafka_2.12-1.0.1.tgz.asc
gpg --verify kafka_2.12-1.0.1.tgz.asc kafka_2.12-1.0.1.tgz

Create a directory for extracting Kafka

sudo mkdir /opt/kafka
sudo tar -xvzf kafka_2.12-1.0.1.tgz --directory /opt/kafka --strip-components 1

Delete Kafka tarball and .asc file

rm -rf kafka_2.12-1.0.1.tgz kafka_2.12-1.0.1.tgz.asc

Configuring Kafka Server

Set up Kafka to start automatically on boot

Copy the following init script to /etc/init.d/kafka:
======***
#!/bin/bash
DAEMON_PATH=/opt/kafka/bin
DAEMON_NAME=kafka
# Check that networking is up.
#[ ${NETWORKING} = "no" ] && exit 0

PATH=$PATH:$DAEMON_PATH

# See how we were called.
case "$1" in
 start)
       # Start daemon.
       echo "Starting $DAEMON_NAME";
       nohup $DAEMON_PATH/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
       ;;
 stop)
       # Stop daemons.
       echo "Shutting down $DAEMON_NAME";
       pid=`ps ax | grep -i 'kafka.Kafka' | grep -v grep | awk '{print $1}'`
       if [ -n "$pid" ]
         then
         kill -9 $pid
       else
         echo "Kafka was not Running"
       fi
       ;;
 restart)
       $0 stop
       sleep 2
       $0 start
       ;;
 status)
       pid=`ps ax | grep -i 'kafka.Kafka' | grep -v grep | awk '{print $1}'`
       if [ -n "$pid" ]
         then
         echo "Kafka is Running as PID: $pid"
       else
         echo "Kafka is not Running"
       fi
       ;;
 *)
       echo "Usage: $0 {start|stop|restart|status}"
       exit 1
esac

exit 0
======***

Make the script executable and register the Kafka service

sudo chmod 755 /etc/init.d/kafka
sudo update-rc.d kafka defaults

Start/stop the Kafka service

sudo service kafka start
sudo service kafka status
sudo service kafka stop

Testing Kafka topics

sudo service kafka start
sudo service kafka status

Topic creation

/opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
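
To confirm the topic was created, the same script can list existing topics (assuming ZooKeeper is on its default port):

/opt/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181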

Publish messages to the test topic

/opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

This will open a prompt where we can enter a test message.

Consume messages from the topic

/opt/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

Making Kafka Scalable

Requirements
Cluster ZooKeeper across all the servers
Cluster Kafka across all the servers

Install ZooKeeper on all the servers and edit /etc/zookeeper/conf/zoo.cfg on each of them to list all the ZooKeeper nodes:

server.0=10.0.0.1:2888:3888
server.1=10.0.0.2:2888:3888
server.2=10.0.0.3:2888:3888
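
Each ZooKeeper node must also have its ID written to the myid file in the configured dataDir (typically /var/lib/zookeeper for the zookeeperd package); the ID must match that host's server.N entry in zoo.cfg. For example, on 10.0.0.2 (server.1):

echo "1" | sudo tee /var/lib/zookeeper/myid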

Once Kafka is installed on all the servers, change the following settings in /opt/kafka/config/server.properties on each node.

broker.id should be unique for each node in the cluster:

for node-2: broker.id=1
for node-3: broker.id=2

Change the zookeeper.connect value so that it lists all ZooKeeper hosts with their ports:

zookeeper.connect=10.0.0.1:2181,10.0.0.2:2181,10.0.0.3:2181
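
Putting it together, a sketch of the per-node server.properties changes for node-2; the listeners line is an assumption for illustration, and the IP should be the node's own address:

broker.id=1
listeners=PLAINTEXT://10.0.0.2:9092
zookeeper.connect=10.0.0.1:2181,10.0.0.2:2181,10.0.0.3:2181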

Sunday, April 15, 2018

Enabling Hive Authorization in Qubole

Once Hive authorization is enabled in Qubole, users and permissions need to be managed through Hive itself. The following are some of the commands used for this.

1. Listing the Current Roles

Set role admin;
show roles

2. Create the roles

CREATE ROLE <role_name>;
Creates a new role. Only the admin role has privilege for this.


Eg:
Set role admin;
Create role sysadmin;

3. Grant Role to users


GRANT ROLE <role_name> TO USER <user_name>
 
Eg:
Set role admin;
Grant Role sysadmin to user rahul ;


4. Revoke a role from user

REVOKE ROLE <role_name> FROM USER <user_name>;


Eg:
Set role admin;
REVOKE Role sysadmin from user rahul;


5. List  Roles attached to a user

SHOW ROLE GRANT USER <user_name>;


Eg.
Set role admin;
show role grant user `rahul`;


6. List Users under a role

SHOW PRINCIPALS <Role_name>


Eg
Set role admin;
SHOW PRINCIPALS sysadmin


7. Assign Role access to tables



Sample permissions
SELECT privilege: provides read access to an object (table).
INSERT privilege: provides the ability to add data to an object (table).
UPDATE privilege: provides the ability to run UPDATE queries on an object (table).
DELETE privilege: provides the ability to delete data in an object (table).
ALL privilege: provides all of the above privileges.


GRANT <Permission> ON <table_name> TO ROLE <role_name>;


Eg:
Grant all on default.testtable to role sysadmin


8. View role/user permissions on tables

Check the privileges granted to a specific user or role on a table (or on all tables):


SHOW GRANT USER <user_name> ON <table_name|ALL>;
SHOW GRANT ROLE <role_name> ON <table_name|ALL>;


Eg:
SHOW GRANT user analytics on all

Saturday, March 31, 2018

Parsing a Value from a JSON Field in Qubole

When the data in one of the fields in the Hive environment is in JSON format and we need to extract a value out of it, we can use the get_json_object function:

get_json_object(column_name, '$.keyvalue')

For example, suppose we have a column named jdata containing the following JSON:

{
    "Foo": "ABC",
    "Bar": "20090101100000",
    "Quux": {
        "QuuxId": 1234,
        "QuuxName": "Sam"
    }
}

To extract ABC: get_json_object(jdata, '$.Foo')
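
get_json_object also accepts nested paths, so values inside the Quux object can be pulled out the same way (the table name my_table is assumed for illustration):

SELECT get_json_object(jdata, '$.Quux.QuuxName') AS quux_name FROM my_table; -- returns Sam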

Friday, February 16, 2018

Azure VPN Gateway with Cisco ASA using Routing

The Azure VPN Gateway and a Cisco ASA can run into routing issues when configured together. To resolve this, UsePolicyBasedTrafficSelectors must be enabled on the Azure connection. The PowerShell below retrieves the specified Azure virtual network gateway connection, creates a new IPsec policy with specific parameters, applies that policy to the connection, and enables UsePolicyBasedTrafficSelectors to resolve the routing issue.

$RG1 = "****************"

This line declares a variable $RG1 and sets its value to a string of asterisks. This is likely just a placeholder for the actual resource group name.

$Connection16 = "****************"

Similar to the first line, this line declares a variable $Connection16 and sets its value to a string of asterisks. This is likely just a placeholder for the actual connection name.

$connection6 = Get-AzureRmVirtualNetworkGatewayConnection -Name $Connection16 -ResourceGroupName $RG1

This line retrieves the virtual network gateway connection object for a connection with the specified name ($Connection16) in the specified resource group ($RG1). The connection object is assigned to the variable $connection6.

$newpolicy6 = New-AzureRmIpsecPolicy -IkeEncryption AES256 -IkeIntegrity SHA384 -DhGroup DHGroup24 -IpsecEncryption AES256 -IpsecIntegrity SHA1 -PfsGroup PFS24 -SALifeTimeSeconds 28800 -SADataSizeKilobytes 4608000

This line creates a new IPsec policy object ($newpolicy6) with the specified settings for IKE encryption, integrity, DH group, IPsec encryption, integrity, Perfect Forward Secrecy (PFS) group, Security Association (SA) lifetime, and SA data size.

Set-AzureRmVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connection6 -IpsecPolicies $newpolicy6

This line updates the virtual network gateway connection object ($connection6) with the new IPsec policy ($newpolicy6) created in the previous step.

Set-AzureRmVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connection6 -IpsecPolicies $newpolicy6 -UsePolicyBasedTrafficSelectors $True

This line updates the virtual network gateway connection object ($connection6) again, this time enabling policy-based traffic selectors by setting the -UsePolicyBasedTrafficSelectors parameter to $True. This is necessary to resolve routing issues that can occur when configuring the Azure VPN Gateway with a Cisco ASA.


PS Azure:\> $connection6.UsePolicyBasedTrafficSelectors

True


PS Azure:\> $connection6.IpsecPolicies


Docker Management using Portainer

Portainer is a lightweight management UI that allows easy management of Docker environments, including creating, deploying, and managing containers, services, and stacks. It is particularly useful for those who are new to Docker or those who prefer a visual interface over command-line management.

To install Portainer with a persistent container, you can follow these steps:
  • Pull the Portainer image: docker pull portainer/portainer
  • Create a directory for Portainer data: mkdir -p /mnt/docker/portainer/data
  • Create a Docker service for Portainer with the following command:  
docker service create \
  --name portainer \
  --publish 9090:9000 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/mnt/docker/portainer/data,dst=/data \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer \
  -H unix:///var/run/docker.sock

 

The above command creates a new Docker service named "portainer", published on port 9090, with a bind-mounted volume for persistent data and a constraint restricting it to a node with the "manager" role.

  • Access the Portainer UI by visiting the IP address or hostname of the Docker swarm manager node on port 9090 in a web browser.
  • Create a new user account and start managing your Docker environment using the Portainer UI.
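
On a single host without Swarm, Portainer can instead be started as a plain container; a minimal sketch using a named volume for persistent data:

docker run -d --name portainer -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer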
