
Friday, December 28, 2018

Docker Sample Commands

Below is a cheat sheet for Docker, a popular containerization platform. It covers commonly used commands for images (pull, retag, push, list, delete, build, flatten, save/load), registries (log in), containers (run, stop, kill, remove, exec, attach/detach, logs, stats, copy files, inspect), networks, volumes, and Docker Compose, and it ends with a Dockerfile sample.

Pull an image from a registry
docker pull alpine:3.4

Retag a local image with a new image name and tag
docker tag alpine:3.4 myrepo/myalpine:3.4

Log in to a registry (the Docker Hub by default)
docker login my.registry.com:8000
Push an image to a registry
docker push myrepo/myalpine:3.4
List all images that are locally stored with the Docker engine
docker images
Delete an image from the local image store
docker rmi alpine:3.4
Create a Docker container from an image
docker run
--rm #remove container automatically after it exits
-it #connect the container to terminal
--name web #name the container
-p 5000:80 #expose port 5000 externally and map to port 80
-v ~/dev:/code #create a host mapped volume inside the container
alpine:3.4 #the image from which the container is instantiated
/bin/sh #the command to run inside the container
Stop a running container through SIGTERM
docker stop web
Stop a running container through SIGKILL
docker kill web
Create an overlay network and specify a subnet
docker network create --subnet 10.1.0.0/24 --gateway 10.1.0.1 -d overlay mynet
List the networks
docker network ls
List the running containers
docker ps
List all running and stopped containers
docker ps -a

Stop a container
docker stop <container-name>

Stop a container (timeout = 1 second)
docker stop -t 1 <container-name>
Delete all running and stopped containers
docker rm -f $(docker ps -aq)

Remove all stopped containers
docker rm $(docker ps -q -f "status=exited")
Create a new bash process inside the container and connect it to the terminal
docker exec -it web bash
Print the last 100 lines of a container's log
docker logs --tail 100 web

Export an image to an external file
docker save -o <filename>.tar [username/]<image-name>[:tag]

Import an image from an external file
docker load -i <filename>.tar

Inspect a container
docker inspect <Container-ID>

Attach to a running container 
docker attach <Container-ID>

Detach from the container without killing it ##switch from interactive mode to daemon mode
Type Ctrl + p , Ctrl + q

Set the container to be read-only:
docker run --read-only

Flatten an image
ID=$(docker run -d image-name /bin/bash)
docker export $ID | docker import - flat-image-name

To check the CPU, memory, and network I/O usage
docker stats <container>
Build an image from the Dockerfile in the current directory and tag the image
docker build -t myapp:1.0 .

Dockerfile sample

vi Dockerfile
=========
FROM ubuntu
MAINTAINER RR
RUN apt-get update
RUN apt-get install -y nginx
COPY index.html /usr/share/nginx/html/
ENTRYPOINT ["/usr/sbin/nginx","-g","daemon off;"]
EXPOSE 80
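
To build and run the image above (a minimal sketch; the tag mynginx:1.0 and host port 8080 are placeholders, and an index.html is assumed to be present in the build context):
docker build -t mynginx:1.0 .
docker run -d -p 8080:80 --name mynginx-test mynginx:1.0
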
Build images (if they do not already exist), create all containers, and start them (Compose). (This will not rebuild an image when its Dockerfile changes.)
docker-compose up

Build, create, and start all in the background (Compose):
docker-compose up -d

Rebuild all images, create all containers, and start all containers (Compose):
docker-compose up --build

Create a new container for my_service in docker-compose.yml and run the echo command instead of the specified command:
docker-compose run my_service echo "hello"
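
The Compose commands above assume a docker-compose.yml in the current directory; a minimal sketch (the service names and images are illustrative, not from the original post):
=========
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:80"
  my_service:
    image: alpine:3.4
    command: sleep 3600
=========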

Run a container with a volume named my_volume mounted at /my/path in the Docker container. (The volume will be created if it doesn't already exist.) 
docker run --mount source=my_volume,target=/my/path my-image
docker run -v my_volume:/my/path my-image
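
To confirm that the named volume was created, it can be listed and inspected:
docker volume ls
docker volume inspect my_volume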

Copy my-file.txt from the host current directory to the /tmp directory in my_container:
docker cp ./my-file.txt my_container:/tmp/my-file.txt
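
Copy a file the other way, from the container back to the host's current directory:
docker cp my_container:/tmp/my-file.txt ./my-file.txt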

Inspect a container
docker inspect python_web | less 

Friday, February 16, 2018

Docker Management using Portainer

Portainer is a lightweight management UI that allows easy management of Docker environments, including creating, deploying, and managing containers, services, and stacks. It is particularly useful for those who are new to Docker or those who prefer a visual interface over command-line management.

To install Portainer with a persistent container, you can follow these steps:
  • Pull the Portainer image: docker pull portainer/portainer
  • Create a directory for Portainer data: mkdir -p /mnt/docker/portainer/data
  • Create a Docker service for Portainer with the following command:  
docker service create \
  --name portainer \
  --publish 9090:9000 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/mnt/docker/portainer/data,dst=/data \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer \
  -H unix:///var/run/docker.sock

 

The above command creates a new Docker service named "portainer" with port 9090 published, bind mounts for the persistent data directory and the Docker socket, and a constraint restricting it to nodes with the "manager" role.

  • Access the Portainer UI by visiting the IP address or hostname of the Docker swarm manager node on port 9090 in a web browser.
  • Create a new user account and start managing your Docker environment using the Portainer UI.
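
On a single (non-swarm) Docker host, Portainer can also be run as a plain container instead of a service; a minimal sketch reusing the same data directory:
docker run -d --name portainer -p 9090:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /mnt/docker/portainer/data:/data \
  portainer/portainer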




Wednesday, November 22, 2017

Docker Clustering with Swarm in Centos7

Docker Clustering with Swarm in Centos7 is a process of creating a cluster of Docker hosts using the Docker Swarm feature in the CentOS 7 operating system. The Swarm feature is a native clustering and orchestration tool within Docker that enables users to create and manage a cluster of Docker hosts. This process involves setting up a Docker Swarm manager and one or more Docker Swarm nodes, configuring the network and storage for the cluster, and deploying and scaling Docker services across the cluster. The benefits of clustering Docker hosts with Swarm in CentOS 7 include increased scalability, high availability, and load balancing of Docker services, as well as simplified management and deployment of containerized applications.

Installing Docker

mkdir /install-files ; cd /install-files
wget https://yum.dockerproject.org/repo/main/centos/7/Packages/docker-engine-1.13.1-1.el7.centos.x86_64.rpm
wget https://yum.dockerproject.org/repo/main/centos/7/Packages/docker-engine-selinux-1.13.1-1.el7.centos.noarch.rpm


Package for docker-engine-selinux
yum install -y policycoreutils-python
rpm -i docker-engine-selinux-1.13.1-1.el7.centos.noarch.rpm
Package for docker-engine
yum install -y libtool-ltdl libseccomp
rpm -i docker-engine-1.13.1-1.el7.centos.x86_64.rpm
Remove rpm packages
rm docker-engine-* -f
Enable systemd service
systemctl enable docker
Start docker

systemctl start docker

Enabling Firewall Rules with Firewalld

firewall-cmd --get-active-zones
firewall-cmd --list-all
firewall-cmd --zone=public --add-port=2377/tcp --permanent
firewall-cmd --permanent --add-source=192.168.56.0/24
firewall-cmd --permanent --add-port=2377/tcp
firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload
Enable and Restart systemd service
systemctl enable docker;
systemctl restart docker
Docker Cluster Env

docker swarm init --advertise-addr=192.168.56.105

Swarm initialized: current node (b4b79zi3t1mq1572r0iubxdhc) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-1wcz7xfyvhewvj3dd4wcbhufw4lub3b1vgpuoybh90myzookbf-4ksxoxrilifb2tmvuligp9krs \
    192.168.56.101:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

To join as a Swarm manager

docker swarm join-token manager

  docker swarm join \
    --token SWMTKN-1-10cqx6yryq5kyfe128m2xhyxzplsc90lzksqggmscv1nfipsbb-bfdbvfhuw9sg8mx2i1a4rkvlv \
    192.168.56.101:2377
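
After the workers and managers have joined, the cluster can be verified from a manager node and a test service deployed (the service name and image below are only illustrative):

docker node ls
docker service create --name web-test --replicas 2 --publish 8080:80 nginx
docker service ls
docker service ps web-test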


Tuesday, October 24, 2017

docker: 'stack' is not a docker command.

The error message "docker: 'stack' is not a docker command" indicates that the installed version of Docker does not support the "stack" command. The fix is to upgrade Docker to version 1.13 or higher: download the required RPM packages from the Docker project repository, install them with "rpm -i", and then enable and start the Docker service with systemctl. We hit this error while deploying services with the stack deploy command:

docker stack deploy -c docker-compose.yml appslab
docker: 'stack' is not a docker command.
See 'docker --help'.
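
To confirm which Docker version is currently installed before upgrading:
docker --version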

Resolution
Upgrade docker to 1.13

On CentOS 7 we used the following to get Docker upgraded. (The docker-latest package in CentOS 7 has since been updated to 1.13.)
wget https://yum.dockerproject.org/repo/main/centos/7/Packages/docker-engine-1.13.1-1.el7.centos.x86_64.rpm
wget https://yum.dockerproject.org/repo/main/centos/7/Packages/docker-engine-selinux-1.13.1-1.el7.centos.noarch.rpm

#package for docker-engine-selinux
yum install -y policycoreutils-python
rpm -i docker-engine-selinux-1.13.1-1.el7.centos.noarch.rpm


#package for docker-engine
yum install -y libtool-ltdl libseccomp
rpm -i docker-engine-1.13.1-1.el7.centos.x86_64.rpm


#remove rpm packages
rm docker-engine-* -f

#enable systemd service
systemctl enable docker

#start docker

systemctl start docker

Friday, March 6, 2015

Directory Sharing between Host Machine and Docker

Mount a Host Directory as a Data Volume
To mount a host directory onto the container:

>>$ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py

This will mount the host directory /src/webapp into the container at /opt/webapp.
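
A quick way to verify the mapping (assuming the container is named web as above) is to create a file on the host and list it from inside the container:
touch /src/webapp/testfile
docker exec web ls /opt/webapp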

Tuesday, January 27, 2015

Clustering CoreOS Docker Hosts Using Fleet

Once the CoreOS Docker hosts are clustered, we will be able to manage them from a single server.

Getting the new Discovery URL.

curl -w "\n" https://discovery.etcd.io/new

We will get something like:

https://discovery.etcd.io/16043bf6be5ecf5c42a0bcc0d9237954

Make sure we use the new discovery URL in config-core.yaml.

Configure the new Config File with the URL
>>cat config-core.yaml
===============================================
#cloud-config
coreos:
  etcd:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new
    #discovery: https://discovery.etcd.io/<token>
    discovery: https://discovery.etcd.io/a41bcad1e117d272d47eb938e060e6c8
    # multi-region and multi-cloud deployments need to use $public_ipv4
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
write_files:
  - path: /etc/resolv.conf
    permissions: 0644
    owner: root
    content: |
      nameserver 8.8.8.8
ssh_authorized_keys:
  # include one or more SSH public keys
  - ssh-rsa AA7dasdlakdjlksajdlkjasa654d6s5a4d6sa5d465df46sdg4rdfghfdhdg74wefg4sd32f13f468e Generated-by-Nova
===============================================
If we are using OpenStack, make sure the Neutron metadata service is running and working properly, because the $private_ipv4 attribute works only if the instance gets its metadata correctly.

Use the above config file to start the CoreOS VMs. Once the CoreOS VMs are running, we can use the following commands to check the fleet status.

List the machines in the Fleet
>> fleetctl list-machines
MACHINE         IP              METADATA
0a1cad1d...     192.36.0.65      -
220f3e64...     192.36.0.67      -
31dbd5ca...     192.36.0.66      -

If we need to add a new server to the same fleet, we can get the discovery URL with:

grep DISCOVERY /run/systemd/system/etcd.service.d/20-cloudinit.conf

Using the above command we can get the Discovery URL from a running machine in the Fleet.

Once we get the URL, start the new instance with the same config-core.yaml.
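
Once the cluster is up, fleet can schedule Docker containers across it via unit files. A minimal sketch of such a unit (the unit name hello.service and the busybox image are illustrative, not from the original post):
===============================================
[Unit]
Description=Hello World container
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill hello
ExecStartPre=-/usr/bin/docker rm hello
ExecStartPre=/usr/bin/docker pull busybox
ExecStart=/usr/bin/docker run --name hello busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
ExecStop=/usr/bin/docker stop hello
===============================================
Submit and start it from any machine in the fleet:
>> fleetctl start hello.service
>> fleetctl list-units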

Friday, January 9, 2015

Pushing Images into private Docker-Registry

Pushing to a private Docker registry.

Configure CoreOs to use the Private Docker Registry

To use the private registry on CoreOS, we need to copy the CA certificate from the registry server to the CoreOS Docker server.
Copy the CA certificate to /etc/ssl/certs/docker-registry.pem in PEM format.
Now update the certificate list using the command:
>>sudo update-ca-certificates

Let our private registry be https://docker-registry:8080

On the Docker server.
Listing the images.
core@coreos ~ $ docker images
REPOSITORY                      TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
centos                          6                   510cf09a7986        3 days ago          215.8 MB
centos                          centos6             510cf09a7986        3 days ago          215.8 MB

List the running containers
core@coreos ~ $ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                                        NAMES
4867ea72bd6a        centos:6            "/bin/bash"         41 minutes ago      Up 41 minutes       0.0.0.0:2221->22/tcp, 0.0.0.0:8080->80/tcp   boring_babbage

Commit the container
core@coreos ~ $ docker commit 4867ea72bd6a dockeradmin/centos-wordpress
9d1b81492b51653710745cad6614444d16b78551981ec44a53804b196b683fdb

Check whether the image of the new container is ready
core@coreos ~ $ docker images
REPOSITORY                      TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
dockeradmin/centos-wordpress    latest              9d1b81492b51        4 minutes ago       591.3 MB
centos                          6                   510cf09a7986        3 days ago          215.8 MB
centos                          centos6             510cf09a7986        3 days ago          215.8 MB

Tag the image in the format <private-registry>/<repo-name>
core@coreos ~ $ docker tag dockeradmin/centos-wordpress dockerregistry:8080/wordpress
core@coreos ~ $ docker images
REPOSITORY                      TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
dockeradmin/centos-wordpress    latest              9d1b81492b51        4 minutes ago       591.3 MB
dockerregistry:8080/wordpress   latest              9d1b81492b51        4 minutes ago       591.3 MB
centos                          6                   510cf09a7986        3 days ago          215.8 MB
centos                          centos6             510cf09a7986        3 days ago          215.8 MB

Try logging in to the Docker registry
core@coreos ~ $ docker login https://dockerregistry:8080
Username (dockeradmin):
Login Succeeded

Finally, push to the registry.
core@coreos ~ $ docker push dockerregistry:8080/wordpress
The push refers to a repository [dockerregistry:8080/wordpress] (len: 1)
Sending image list
Pushing repository dockerregistry:8080/wordpress (1 tags)
511136ea3c5a: Image successfully pushed
5b12ef8fd570: Image successfully pushed
510cf09a7986: Image successfully pushed
9d1b81492b51: Image successfully pushed
Pushing tag for rev [9d1b81492b51] on {https://dockerregistry:8080/v1/repositories/wordpress/tags/latest}

Wednesday, December 24, 2014

Private Docker Registry @ Centos7

Here we will create a private Docker registry for internal use, secured with authentication and SSL.

Install the Packages
>> yum update -y ;
>> yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
>> yum install docker-registry

The configuration file will be at /etc/docker-registry.yml

Edit the dev section to configure the storage as needed:

 # This is the default configuration when no flavor is specified
dev:
    storage: local
    storage_path: /home/registry
    loglevel: debug

Create the directory /home/registry, or configure the storage path accordingly.

Start and enable the Docker Registry

>>systemctl start docker-registry
>>systemctl status docker-registry
docker-registry.service - Registry server for Docker
   Loaded: loaded (/usr/lib/systemd/system/docker-registry.service; enabled)
   Active: active (running) since Mon 2014-12-15 21:20:26 UTC; 5s ago
 Main PID: 19468 (gunicorn)
   CGroup: /system.slice/docker-registry.service
           ├─19468 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19473 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19474 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19475 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19482 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19488 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19489 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19494 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           └─19496 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...

Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19468] [INFO] Listening a...68)
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19468] [INFO] Using worke...ent
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19473] [INFO] Booting wor...473
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19474] [INFO] Booting wor...474
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19475] [INFO] Booting wor...475
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19482] [INFO] Booting wor...482
Dec 15 21:20:27 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:27 [19488] [INFO] Booting wor...488
Dec 15 21:20:27 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:27 [19489] [INFO] Booting wor...489
Dec 15 21:20:27 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:27 [19494] [INFO] Booting wor...494
Dec 15 21:20:27 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:27 [19496] [INFO] Booting wor...496
Hint: Some lines were ellipsized, use -l to show in full.


Check whether it's working
>>curl localhost:5000
"docker-registry server (local) (v0.6.8)"
[root@docker-registry ~]#

Now we need to add authentication to the registry and a self-signed SSL certificate.

Creating the Certificates
# Generate private key
>>openssl genrsa -out docker-registry.key 2048

# Generate CSR **** MAKE SURE WE GIVE THE SERVER NAME CORRECTLY***
>>openssl req -new -key docker-registry.key -out docker-registry.csr

# Generate Self Signed Key
>>openssl x509 -req -days 365 -in docker-registry.csr -signkey docker-registry.key -out docker-registry.crt

# Copy the files to the correct locations
>>cp docker-registry.crt /etc/pki/tls/certs
>>cp docker-registry.key /etc/pki/tls/private/docker-registry.key
>>cp docker-registry.csr /etc/pki/tls/private/docker-registry.csr

#Update the CA-Certificate in the Centos7
>>update-ca-trust enable
>>cp docker-registry.crt /etc/pki/ca-trust/source/anchors/
>>update-ca-trust extract


>>yum install nginx httpd-tools

Create a User for authentication
>>sudo htpasswd -c /etc/nginx/docker-registry.htpasswd dockeruser
<Password will be prompted>

Configure nginx Virtual Host (/etc/nginx/conf.d/virtualhost.conf).

============
# For versions of Nginx > 1.3.9 that include chunked transfer encoding support
# Replace with appropriate values where necessary

upstream docker-registry {
 server localhost:5000;
}

server {
 listen 8080;
 server_name docker.registry;

 # ssl on;
 # ssl_certificate /etc/pki/tls/certs/docker-registry.crt;
 # ssl_certificate_key /etc/pki/tls/private/docker-registry.key;

 proxy_set_header Host       $http_host;   # required for Docker client sake
 proxy_set_header X-Real-IP  $remote_addr; # pass on real client IP

 client_max_body_size 0; # disable any limits to avoid HTTP 413 for large image uploads

 # required to avoid HTTP 411: see Issue #1486 (https://github.com/dotcloud/docker/issues/1486)
 chunked_transfer_encoding on;

 location / {
     # let Nginx know about our auth file
     auth_basic              "Restricted";
     auth_basic_user_file    docker-registry.htpasswd;

     proxy_pass http://docker-registry;
 }
 location /_ping {
     auth_basic off;
     proxy_pass http://docker-registry;
 }
 location /v1/_ping {
     auth_basic off;
     proxy_pass http://docker-registry;
 }

}
============
Restart the Service
>>>systemctl restart nginx

Checking the registry: curl https://<username>:<password>@<hostname>:<port>  ****** The hostname should be the one we gave in the certificate ******

>> curl https://dockeruser:docker@docker-registry:8080
"docker-registry server (local) (v0.6.8)"
[root@docker-registry ~]#


Configure CoreOs to use the Private Docker Registry

To use the private registry on CoreOS, we need to copy the CA certificate from the registry server to the CoreOS Docker server.

Copy the CA certificate to /etc/ssl/certs/docker-registry.pem in PEM format.
Now update the certificate list using the command:
>>sudo update-ca-certificates

Also, since our server name is docker-registry:8080, we need to copy the CA certificate to /etc/docker/certs.d/docker-registry\:8080/ca.crt as well.

And restart the docker service

Now try to log in to the registry

core@coreos ~ $ docker login https://docker-registry:8080
Username :
Password :
Email :
Login Succeeded
core@coreos ~ $


Now, to push an image to the private repo, we need to tag it in the format <registry-hostname>:<port>/<repo-name>, for example as shown below.
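
For example, reusing the wordpress image from the earlier post (the names are illustrative):
docker tag dockeradmin/centos-wordpress docker-registry:8080/wordpress
docker push docker-registry:8080/wordpress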


Import/Export Docker Images

To move a Docker image from one server to another, we can use a private registry, or we can save the image as a tar file, copy the tar over to the new server, and import it there from the tar file.

Export a Docker image to a file.

docker save image > image.tar

Import a Docker image

docker load -i (archivefile)
Loads in a Docker image in the following formats: .tar, .tar.gz, .tar.xz. lrz is not supported.
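
For example, to move a local centos:6 image to another server (the hostname and paths are illustrative):
docker save centos:6 > centos6.tar
scp centos6.tar user@newserver:/tmp/
docker load -i /tmp/centos6.tar   #run this on the new server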

Friday, December 12, 2014

Docker Usage Explained

Docker is a platform for developers and sysadmins to develop, ship, and run applications. Docker lets you quickly assemble applications from components and eliminates the friction that can come when shipping code. Docker lets you get your code tested and deployed into production as fast as possible.

Downloading a Docker image
>>docker pull centos
>>docker pull ubuntu

Running a Docker container
The -t and -i flags allocate a pseudo-tty and keep stdin open even if not attached. This will allow you to use the container like a traditional VM as long as the bash prompt is running. Let's launch an Ubuntu container and install Apache inside it using the bash prompt:
>>docker run -t -i ubuntu /bin/bash

To quit without terminating the container
Starting with Docker 0.6.5, you can add -t to the docker run command, which will attach a pseudo-TTY. Then you can type Control-C to detach from the container without terminating it. If you use -t and -i, then Control-C will terminate the container. When using -i with -t, you have to use Control-P Control-Q to detach without terminating:
Control-P Control-Q

List the containers
>>docker ps -a

Enter a running container
>>docker exec -it [container-id] bash
Once inside the container, install the needed items and packages and configure the services as needed. Then quit the container using Control-P Control-Q to keep it running.

To use the public Docker registry, register with an email address and username at https://registry.hub.docker.com/

Committing the changes into a new image that can be used later
>>docker commit [container-id] <registered_username>/<Nameforimage>
eg:
core@coreos ~ $ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
5adf005708db        centos:latest       "/bin/bash"         11 minutes ago      Up 11 minutes                           thirsty_ritchie
core@coreos ~ $ docker commit 5adf005708db rahulrajvn/centos-httpd
b8810f9ca8d52a289c963f57824f575341324c353707a5b1f215840c9ea88ebe
core@coreos ~ $
Now the image named rahulrajvn/centos-httpd is present on the local machine; if we need to create more containers from that image on the same server, we can use it.

Pushing the image to the registered public Docker Hub repo. While pushing we will be asked for username and password.
core@coreos ~ $ docker push rahulrajvn/centos-httpd
The push refers to a repository [rahulrajvn/centos-httpd] (len: 1)
Sending image list
Please login prior to push:
Username: rahulrajvn
Password:********
Email: ******************
Login Succeeded
The push refers to a repository [rahulrajvn/centos-httpd] (len: 1)
Sending image list
Pushing repository rahulrajvn/centos-httpd (1 tags)
511136ea3c5a: Image already pushed, skipping
5b12ef8fd570: Image already pushed, skipping
34943839435d: Image already pushed, skipping
b8810f9ca8d5: Image successfully pushed
Pushing tag for rev [b8810f9ca8d5] on {https://cdn-registry-1.docker.io/v1/repositories/rahulrajvn/centos-httpd/tags/latest}
core@coreos ~ $

Download an image from a public repo. We just need to call it using the account name and image name. In the example below we use the account rahulrajvn and the image centos-httpd.
core@coreos2 ~ $ docker pull rahulrajvn/centos-httpd
Pulling repository rahulrajvn/centos-httpd
b8810f9ca8d5: Download complete
511136ea3c5a: Download complete
5b12ef8fd570: Download complete
34943839435d: Download complete
Status: Downloaded newer image for rahulrajvn/centos-httpd:latest
core@coreos2 ~ $

Network access to port 80
The default Apache install will be running on port 80. To give our container access to traffic over port 80, we use the -p flag and specify the port on the host that maps to the port inside the container. In our case we want 80 for each, so we include -p 80:80 in our command:
docker run -d -p 80:80 -it rahulrajvn/centos6 /bin/bash
If we need to forward more ports we can do it by adding one more -p option:
docker run -d -p 80:80 -p 2222:22 -it rahulrajvn/centos6 /bin/bash

Listing the images
>>docker images

Removing images
>>docker rmi <Image-ID>

Wednesday, November 19, 2014

Docker+Juno Giving MissingSectionHeaderError while creating docker instance

I was able to configure Docker with Juno by following the instructions in http://www.adminz.in/2014/11/integrating-docker-into-juno-nova.html

First I got a timeout error with the Docker service and the nova service was not starting up, so I edited connectionpool.py as described in the following URL: http://www.adminz.in/2014/11/docker-n...

After that the service was running fine, but while launching an instance I got the following error.

2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]   File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/driver.py", line 404, in spawn
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]     self._start_container(container_id, instance, network_info)
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]   File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/driver.py", line 376, in _start_container
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]     instance_id=instance['name'])
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] InstanceDeployFailure: Cannot setup network: Unexpected error while running command.
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ip link add name tapb97f8d6e-a6 type veth peer name nsb97f8d6e-a6
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Exit code: 1
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Stdout: u''
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Stderr: u'Traceback (most recent call last):\n  File "/usr/bin/nova-rootwrap", line 10, in <module>\n    sys.exit(main())\n  File "/usr/lib/python2.7/site-packages/oslo/rootwrap/cmd.py", line 91, in main\n    filters = wrapper.load_filters(config.filters_path)\n  File "/usr/lib/python2.7/site-packages/oslo/rootwrap/wrapper.py", line 120, in load_filters\n    filterconfig.read(os.path.join(filterdir, filterfile))\n  File "/usr/lib64/python2.7/ConfigParser.py", line 305, in read\n    self._read(fp, filename)\n  File "/usr/lib64/python2.7/ConfigParser.py", line 512, in _read\n    raise MissingSectionHeaderError(fpname, lineno, line)\nConfigParser.MissingSectionHeaderError: File contains no section headers.\nfile: /etc/nova/rootwrap.d/docker.filters, line: 1\n\' [Filters]\\n\'\n'
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]

FIX
The issue was caused by a blank space before the [Filters] entry in the docker.filters file in the rootwrap.d directory on the Docker server. Once that was cleared, the Docker instance launched correctly.
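
One way to spot stray whitespace like this is to dump the first line of the filter file character by character (the path is the one from the traceback):
head -1 /etc/nova/rootwrap.d/docker.filters | od -c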

[root@docker ~]# docker ps
CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS              PORTS               NAMES
d37ea1ce08b9        tutum/wordpress:latest   "/run.sh"           16 seconds ago      Up 15 seconds                           nova-73a4f67a-b6d0-4251-a292-d28c5137e6d4
[root@docker ~]#

Tuesday, November 18, 2014

Integrating Docker into Juno Nova Service as a Hypervisor


Installing Python Modules Needed for Docker
===========================================
yum install -y python-six
yum install -y python-pbr
yum install -y python-babel
yum install -y python-openbabel
yum install -y python-oslo-*
yum install -y python-docker-py

Installing Latest Version of Docker
==================================
yum install wget
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-devel-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-pkg-devel-1.2.0-4.el7.centos.x86_64.rpm
yum install docker-*

Starting the Docker Service
===========================
systemctl start docker
systemctl status docker
systemctl enable docker


Installing and configuring Nova-Docker Driver
=============================================
yum install -y python-pip git
pip install -e git+https://github.com/stackforge/nova-docker#egg=novadocker
cd src/novadocker/
python setup.py install


Install and configure Neutron Service on the Docker Server
======================================================
http://www.adminz.in/2014/10/openstack-juno-part-6-neutron.html

Install and configure Nova Service to use Docker
======================================================
Installing Packages
yum install openstack-nova-compute -y ; usermod -G docker nova


openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver novadocker.virt.docker.DockerDriver


openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host controller
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_password guest

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/nova/nova.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password mar4nova

#On Controller1: use the public IP of the controller server (hostnames don't work). Configure the my_ip option to use the management interface IP address of the controller node.
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.15.144
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.1.15.144
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://10.1.15.140:6080/vnc_auto.html

openstack-config --set /etc/nova/nova.conf glance host controller

[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver

systemctl enable openstack-nova-compute.service
systemctl start openstack-nova-compute.service

Configure Glance to Include Docker Images
==========================================
On the controller server (Glance API configuration):
# Supported values for the 'container_format' image attribute
container_formats=ami,ari,aki,bare,ovf,ova,docker

systemctl restart openstack-glance-api

Creating Custom Rootwrap Filters on the Docker Server
=================================
mkdir /etc/nova/rootwrap.d/
cat << EOF >> /etc/nova/rootwrap.d/docker.filters
# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root
EOF

chgrp nova /etc/nova/rootwrap.d -R
chmod 640 /etc/nova/rootwrap.d -R

systemctl restart openstack-nova-compute

If you face a timeout issue with Nova, try the fix at the following URL:

http://www.adminz.in/2014/11/docker-nova-time-out-error.html

On the Docker server, add a Docker image:
docker pull tutum/wordpress
docker save tutum/wordpress | glance image-create --is-public=True --container-format=docker --disk-format=raw --name tutum/wordpress
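
With the image in Glance, a Docker-backed instance can be booted like any other Nova instance; a minimal sketch (the flavor and network ID depend on the environment):
nova boot --image tutum/wordpress --flavor m1.small --nic net-id=<private-net-id> wordpress-docker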

Monday, November 10, 2014

Docker + Nova Time Out Error

http://paste.openstack.org/show/131728/

Sample Error
==========
    out = f(*args, **kwds)
  File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 468, in get
    return self.request('GET', url, **kwargs)
  File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/client.py", line 36, in wrapper
    out = f(*args, **kwds)
  File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 456, in request
    resp = self.send(prep, **send_kwargs)
 File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/client.py", line 36, in wrapper
    out = f(*args, **kwds)
 File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 559, in send
    r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 327, in send
    timeout=timeout
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 516, in urlopen
    body=body, headers=headers)
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 299, in _make_request
    timeout_obj = self._get_timeout(timeout)
 File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 279, in _get_timeout
    return Timeout.from_float(timeout)
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 152, in from_float
    return Timeout(read=timeout, connect=timeout)
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 95, in __init__
    self._connect = self._validate_timeout(connect, 'connect')
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 125, in _validate_timeout
    "int or float." % (name, value))
ValueError: Timeout value connect was Timeout(connect=10, read=10, total=None), but it must be an int or float.



To fix the problem I had to directly modify "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py":

def _get_timeout(self, timeout):
    """ Helper that always returns a :class:`urllib3.util.Timeout` """
    if timeout is _Default:
        return self.timeout.clone()

    if isinstance(timeout, Timeout):  # <== Timeout here is not a urllib3 Timeout
        return timeout.clone()
    else:
        # User passed us an int/float. This is for backwards compatibility,
        # can be removed later
        return Timeout.from_float(timeout._connect)  # <== manually changed to pass _connect

Tuesday, November 4, 2014

Docker with Openstack Giving Error "ova.openstack.common.threadgroup ValueError: Timeout value connect was Timeout"

When I tried to integrate Docker with OpenStack Juno, I was not able to start the nova service on the compute node. I followed https://wiki.openstack.org/wiki/Docker .

When I remove or comment out compute_driver = novadocker.virt.docker.DockerDriver from the nova configuration, the service starts, but the PID gets killed soon after.

I am getting following error while trying to start the nova service.

Complete Error.
http://paste.openstack.org/show/128805/

Sample Error
****2014-11-03 14:14:08.138 5264 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 125, in _validate_timeout
2014-11-03 14:14:08.138 5264 TRACE nova.openstack.common.threadgroup     "int or float." % (name, value))
2014-11-03 14:14:08.138 5264 TRACE nova.openstack.common.threadgroup ValueError: Timeout value connect was Timeout(connect=10, read=10, total=None), but it must be an int or float.
2014-11-03 14:14:08.138 5264 TRACE nova.openstack.common.threadgroup****


The issue has been fixed: I hadn't installed the Docker requirements. Once I installed them and rebooted the server, it is working fine now.

For testing I used wildcards (*) for installation; we just need to install the correct packages. https://github.com/stackforge/nova-do...

yum install *pbr*
yum install *six*
yum install *babel*
yum install *oslo*
yum install docker-py

Monday, October 20, 2014

Openstack Juno + Docker error "Docker daemon is not running or is not reachable"

I was getting the following error while integrating Docker with OpenStack Juno.

"2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     _('Docker daemon is not running or is not reachable'
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup NovaException: Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)"

I tried changing the permissions of docker.sock, but that didn't help. When I upgraded Docker to version 1.2, the issue was fixed. The Docker version that ships with CentOS is a bit old; we can get RPMs of a newer Docker for CentOS 7 from the links below.

Download the following RPMs

wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-devel-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-pkg-devel-1.2.0-4.el7.centos.x86_64.rpm

Install the RPMs (in the same directory)
yum install docker-1.2.0
yum install docker*
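
After the upgrade, verify that the daemon is running and that the socket is reachable before starting the nova service again:
systemctl status docker
docker version
ls -l /var/run/docker.sock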


Error
====
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 125, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     x.wait()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 47, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 293, in switch
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return self.greenlet.switch()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     result = function(*args, **kwargs)
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 492, in run_service
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     service.start()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/service.py", line 164, in start
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     self.manager.init_host()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1125, in init_host
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     self.driver.init_host(host=self.host)
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/etc/nova/src/novadocker/novadocker/virt/docker/driver.py", line 82, in init_host
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     _('Docker daemon is not running or is not reachable'
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup NovaException: Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup
2014-10-20 14:24:22.876 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connecting to AMQP server on controller:5672
2014-10-20 14:24:22.901 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connected to AMQP server on controller:5672
2014-10-20 14:24:22.906 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connecting to AMQP server on controller:5672
2014-10-20 14:24:22.919 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connected to AMQP server on controller:5672
2014-10-20 14:24:22.954 2995 ERROR nova.openstack.common.threadgroup [-] Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 125, in wait
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     x.wait()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 47, in wait
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 293, in switch
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     return self.greenlet.switch()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     result = function(*args, **kwargs)
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 492, in run_service
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     service.start()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/service.py", line 164, in start
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     self.manager.init_host()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1125, in init_host
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     self.driver.init_host(host=self.host)
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/etc/nova/src/novadocker/novadocker/virt/docker/driver.py", line 82, in init_host
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     _('Docker daemon is not running or is not reachable'
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup NovaException: Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup