
Tuesday, April 14, 2020

Configure AWS Login With Azure AD Enterprise App

The idea is to let users sign in to AWS with their Azure AD credentials by configuring single sign-on (SSO) between Azure AD and AWS. The process involves deploying an enterprise application in Azure AD and configuring it as the SAML identity provider for AWS. Once this is set up, users can log in to AWS with their Azure AD credentials and access only the AWS resources they have been authorized to use. Microsoft's tutorial explains the same steps in more detail; a CLI sketch of the AWS-side steps follows the walkthrough below.

  1. Azure >> Enterprise App >> Configure Azure AD SSO
    1. Deploy Amazon Web Services Developer App
    2. Single Sign On >> SAML
      1. Popup to save
        1. Identifier:
        2. Reply URL:
      2. Save
    3. SAML Signing Certificate
      1. Download "Federation Metadata XML"
    4. Add the AD users to the application's Users and Groups
  2. AWS >> IAM >> Identity provider
    1. Create
      1. SAML
      2. AZADAWS
      3. Upload the Metadata XML
    2. Verify Create
  3. AWS >> IAM >> Role << This role will appear in the Azure application
    1. SAML 2.0 Federations
      1. Choose :  Earlier Created Identity provider
      2. Allow programmatic and AWS Management Console access
      3. Choose required permissions
      4. Create the role with Appropriate name
  4. AWS >> IAM >> Policies << This policy allows Azure AD to fetch the roles from the AWS account.
    1. Choose JSON
      1. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "iam:ListRoles" ], "Resource": "*" } ] }
    2. Name : AzureAD_SSOUserRole_Policy.
    3. Create the Policy
  5. AWS >> IAM >> USER
    1. Name : AzureADRoleManager
    2. Choose Programmatic access
    3. Permission : Attach existing policies
      1. Choose : AzureAD_SSOUserRole_Policy
    4. Create User
    5. Copy Access and Secret key
  6. Azure >> Enterprise App >> Choose the Amazon Web Services app that was deployed
    1. Provisioning
      1. Set the provisioning mode to Automatic
      2. Enter the AWS access key and secret key
      3. Test the connection and save
      4. Set the "Provisioning Status" to On
      5. Wait for a sync to complete
      6. Once the sync is complete, go to Users and Groups
        1. Choose the user and click Edit
        2. Choose the AWS role
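The AWS-side pieces (steps 2, 4 and 5) can also be scripted. A minimal sketch using the AWS CLI, assuming the federation metadata downloaded in step 1 is saved as FederationMetadata.xml and <ACCOUNT_ID> is a placeholder for your AWS account ID:

# Create the SAML identity provider from the Azure AD metadata (step 2)
aws iam create-saml-provider --name AZADAWS \
    --saml-metadata-document file://FederationMetadata.xml

# Create the policy that lets Azure AD list the roles (step 4)
aws iam create-policy --policy-name AzureAD_SSOUserRole_Policy \
    --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["iam:ListRoles"],"Resource":"*"}]}'

# Create the provisioning user and its keys (step 5)
aws iam create-user --user-name AzureADRoleManager
aws iam attach-user-policy --user-name AzureADRoleManager \
    --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/AzureAD_SSOUserRole_Policy
aws iam create-access-key --user-name AzureADRoleManager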

Friday, September 8, 2017

Minio Running as a Service

Minio is a distributed object storage server, similar to Amazon S3, that allows you to store and access large amounts of data. Since the service is running on different hosts, it is important to have a shared storage mechanism so that the data is synchronized across all nodes. To achieve this, a bind mount is used to mount a directory on the host machine to the Minio server container, allowing it to read and write data to the directory. Additionally, two Docker secrets are created for access and secret keys to authenticate and authorize access to the Minio server. Finally, the service is created with the docker service create command, specifying the name of the service, the port to publish, the constraint to run the service only on a manager node, the bind mount for data synchronization, and the two Docker secrets for authentication. The minio/minio image is used to run the Minio server, and the /data directory is specified as the location to store data.

echo "AKIAIOSFODNN7EXAMPLE" | docker secret create access_key -
echo "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" | docker secret create secret_key -

docker service create --name="minio-service" --publish 9000:9000   --constraint 'node.role == manager' --mount type=bind,src=/mnt/minio/,dst=/data --secret="access_key" --secret="secret_key" minio/minio server /data
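To verify the service came up, the standard Docker service commands work (docker service logs requires Docker 17.05 or newer):

docker service ls                   # should show 1/1 replicas for minio-service
docker service ps minio-service     # shows which node the task landed on
docker service logs minio-service   # Minio prints its endpoint details at startup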

Wednesday, September 6, 2017

Minio: S3 Compatible Storage in Docker

Minio is a distributed object storage server that is designed to be scalable and highly available. It is built for cloud-native applications and DevOps. Minio provides Amazon S3 compatible API for cloud-native applications to store and retrieve data. It is open-source and can be deployed on-premise, on the cloud or on Kubernetes.

The command docker pull minio/minio pulls the Minio image from Docker Hub. The command docker run -p 9000:9000 minio/minio server /data runs a Minio container with port forwarding from the host to the container for the Minio web interface. The /data parameter specifies the path to the data directory that will be used to store the data on the container's file system.

** We need to have the Docker environment up and running.

docker pull minio/minio
docker run -p 9000:9000 minio/minio server /data

After running this command, you can access the Minio web interface by navigating to http://localhost:9000 in your web browser.
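Because Minio speaks the S3 API, any S3 client can talk to it. A quick sketch with the AWS CLI, where the access and secret keys are the ones Minio prints at startup and mybucket is a hypothetical bucket name:

aws configure set aws_access_key_id <MINIO_ACCESS_KEY>
aws configure set aws_secret_access_key <MINIO_SECRET_KEY>
aws --endpoint-url http://localhost:9000 s3 mb s3://mybucket
aws --endpoint-url http://localhost:9000 s3 cp file.txt s3://mybucket/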

Thursday, August 17, 2017

Increase the Root Disk Size for CentOS in AWS

Issue: Root Partition not scaled after EBS is resized.

Growpart called by cloud-init only works for kernels >3.8. Only newer kernels support changing the partition size of a mounted partition. When using an older kernel the resizing of the root partition happens in the initrd stage before the root partition is mounted and the subsequent cloud-init growpart run is a no-op.

# lsblk
xvda    202:0    0  30G  0 disk
└─xvda1 202:1    0   8G  0 part /

Perform the following commands as root:

# yum install cloud-utils-growpart

# growpart /dev/xvda 1

# reboot
After the reboot:

# lsblk
xvda    202:0    0  30G  0 disk
└─xvda1 202:1    0  30G  0 part /
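growpart only grows the partition; if df still shows the old size after the reboot, grow the filesystem online as well (resize2fs for ext3/ext4, xfs_growfs if the root filesystem is XFS):

# df -h /
# resize2fs /dev/xvda1     # ext3/ext4
# xfs_growfs /             # XFS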

Tuesday, November 10, 2015

Oracle Database backup to AWS S3 : Error occurred when installing OSB (Oracle Secure Backup) on Amazon S3

I tried to set up an RMAN backup using the Amazon cloud module and ran into the following error, even though the Internet connection was definitely working.

#> java -jar osbws_install.jar -AWSID MyAWSID -AWSKey MYAWSKEY -otnUser MYOTNID -otnPass MYOTNPASS -walletDir $ORACLE_HOME/dbs/osbws_wallet -libDir $ORACLE_HOME/lib -debug

Fix: The OSB module works only with Java versions 1.5 and 1.6, while the new machines are running Java 1.7. Try again with Java 1.6.
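If multiple Java versions are installed, you can point the installer at a 1.6 JRE explicitly; a sketch, assuming Java 1.6 is unpacked under /usr/java/jre1.6.0_45 (adjust the path to your install):

#> /usr/java/jre1.6.0_45/bin/java -jar osbws_install.jar -AWSID MyAWSID -AWSKey MYAWSKEY -otnUser MYOTNID -otnPass MYOTNPASS -walletDir $ORACLE_HOME/dbs/osbws_wallet -libDir $ORACLE_HOME/lib -debug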

Friday, June 12, 2015

Getting the Client IP Behind the AWS ELB (HTTP/HTTPS Mode)

We need to add the following LogFormat to get the clients' IPs.

We use the X-Forwarded-For header in the Apache configuration to get it done.

# The following directives define some format nicknames for use with
# a CustomLog directive (see below).
LogFormat "\"%{X-Forwarded-For}i\" %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined_new


<VirtualHost *:80>
    DocumentRoot "/var/www/"
    <Directory "/var/www/">
        Options Includes FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
    CustomLog /var/www/logs/ combined_new
    ErrorLog /var/www/logs/
</VirtualHost>

Thursday, July 10, 2014

Configure Amazon CloudWatch Monitoring Scripts for Linux

Installing dependencies

yum install cpan
yum install perl-Time-HiRes
cpan >> install Switch
yum install zip unzip
yum install wget
yum install perl-Crypt-SSLeay

cd aws-scripts-mon/

vi awscreds.template
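The credentials template is just two key=value lines; placeholder values shown:

AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE
AWSSecretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY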

Test it
./mon-put-instance-data.pl --mem-util --mem-used --mem-avail --disk-path=/ --disk-space-util --disk-space-used --disk-space-avail --swap-used --aws-credential-file=/root/aws-scripts-mon/awscreds.template

Add it to cron
* * * * * /usr/bin/perl /root/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --aws-credential-file=/root/aws-scripts-mon/awscreds.template --from-cron



Wednesday, June 11, 2014

Putty + Remote tunnel + RDP

Installing Putty and Configuring SSH Tunnel and Remote Desktop

On the CLIENT computer we are connecting from, we will need to install Putty and configure it to connect RDP over SSH (i.e., create the tunnel).

1. To install Putty, just extract the zip to your C:\Putty folder. The Putty folder should contain several .exe programs.

2. To run Putty, just run Putty.exe in the C:\Putty folder. To make it easier to launch, you can create a shortcut to Putty.exe and put it on your desktop or in your Start Menu.

3. Under the Session section (on the left pane), type in the hostname or IP address of the PC we are connecting to (in our example, a machine on our local network) and leave the port at 22. You can also enter a name in the Saved Sessions box to save the profile for easy connection (more on this later).

Under Connection > SSH > Tunnels, under Source port, enter a local port to use for our tunnel (I use a very high port in the 40000 range; we'll use 40000). In the Destination box, put the IP address of the remote computer running Copssh/SSH, followed by the RDP port, e.g. <remote-ip>:3389.

Go back to the Sessions section and click the Save button under the Saved Sessions box and then hit the Open button.

4. You should get a prompt to accept a key the first time we connect; click Yes.

5. We should now get a command-window-like interface asking for a user. Enter the remote computer's login username and password. Once you connect, the command window will change to a logged-in session.

Connecting via Remote Desktop over the SSH Tunnel

1. On the laptop/client computer, open Remote Desktop Connection (Start Menu > All Programs > Accessories > Remote Desktop Connection)

2. Enter for the computer to connect to. is the local TCP/IP stack loopback address and 40000 is the port to connect over. This in turn forces our Remote Desktop client to use the SSH tunnel we created on port 40000 to connect to our remote PC over port 22.
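The same tunnel can also be created without the GUI using plink.exe, which ships with Putty. A sketch, where user and remote-host are placeholders:

C:\Putty\plink.exe -ssh -L 40000: user@remote-host

Once the session is up, point Remote Desktop at as above.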

Wednesday, May 21, 2014

Installing Amazon Command Line using PIP

Installing the repo needed for pip

cd /tmp
wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm
yum install python-pip

Installing C-compiler for Pip

yum install gcc

Installing amazon cli

pip install awscli

Configure Amazon Cli

aws configure

You need the AWS access key, secret key, default region, and output format.
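Once configured, any read-only call makes a quick sanity check:

aws ec2 describe-regions --output table
aws s3 ls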

Installing and configuring Amazon EC2 command line

Now download the Amazon API CLI tools using following command and extract them at a proper place. For this example, we are using /opt directory.

# mkdir /opt/ec2
# wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
# unzip ec2-api-tools.zip -d /tmp
# mv /tmp/ec2-api-tools-* /opt/ec2/tools
Step 3- Download Private Key and Certificate Files

Now create and download X.509 certificate (private key file and certificate file) files from your account from Security Credentials page and copy to /opt/ec2/certs/ directory.

# ls -l /opt/ec2/certs/

-rw-r--r--. 1 root root 1281 May 15 12:57 my-ec2-cert.pem
-rw-r--r--. 1 root root 1704 May 15 12:56 my-ec2-pk.pem
Step 4- Configure Environment

Install JAVA
The Amazon EC2 command line tools require Java 1.6 or a later version. Make sure you have a proper Java installed on your system. You can install the JRE or JDK; both are OK to use.
# java -version
java version "1.8.0_05"
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) Client VM (build 25.5-b02, mixed mode)
If you don't have Java installed on your system, use the below links to install Java first:
Installing JAVA/JDK 8 on CentOS, RHEL and Fedora
Installing JAVA/JDK 8 on Ubuntu

Now edit the ~/.bashrc file and add the following values at the end of the file:

export EC2_BASE=/opt/ec2
export EC2_HOME=$EC2_BASE/tools
export EC2_PRIVATE_KEY=$EC2_BASE/certs/my-ec2-pk.pem
export EC2_CERT=$EC2_BASE/certs/my-ec2-cert.pem
export EC2_URL=https://ec2.amazonaws.com
export PATH=$PATH:$EC2_HOME/bin
export JAVA_HOME=/opt/jdk1.8.0_05
Now execute the following command to set environment variables

$ source ~/.bashrc

After completing the configuration, run the following command to quickly verify the setup.

# ec2-describe-regions

REGION eu-west-1
REGION sa-east-1
REGION us-east-1
REGION ap-northeast-1
REGION us-west-2
REGION us-west-1
REGION ap-southeast-1
REGION ap-southeast-2

Thursday, May 15, 2014

Apache load balancing using mod_jk

This assumes you have two Tomcat servers, both configured, with the AJP connector listening on port 8009 on each.

Download the module source from https://tomcat.apache.org/download-connectors.cgi

Sample version:

#tar -xvf tomcat-connectors-1.2.37-src.tar

# cd tomcat-connectors-1.2.37-src/native/

# which apxs

# ./configure --with-apxs=/usr/sbin/apxs --enable-api-compatibility

# make

# make install

After this activity completes, you will get the mod_jk.so file in /usr/lib64/httpd/modules/

or else copy the module to Apache's modules directory.

If mod_jk.so is there, everything went well.

The installation part has been completed; let's start the configuration part.

4. Open the httpd.conf file and add the following at the end of the file.

# vi /etc/httpd/conf/httpd.conf

JkWorkersFile "/etc/httpd/conf/workers.properties"
JkLogFile "/var/log/httpd/mod_jk.log"
JkRequestLogFormat "%w %V %T"
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
JkLogLevel info
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"

Add the below two lines in the VirtualHost:

JkMount / loadbalancer
JkMount /status status

Content of the workers.properties file (see the sketch below):

cat /etc/httpd/conf/workers.properties
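A minimal sketch of this file for the two-node setup above; the worker names match the JkMount lines and the jvmRoute values used below, while the tomcat1/tomcat2 hostnames are assumptions (use your servers' addresses):

# Workers exposed to Apache
worker.list=loadbalancer,status

# First Tomcat node (AJP on 8009)

# Second Tomcat node (AJP on 8009)

# Load balancer worker fronting both nodes

# Status worker (mapped at /status)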
To check the status, browse to the /status URL mapped above, e.g. http://<apache-host>/status.

Tomcat needs to be configured to allow for a cluster of two nodes over unicast. The following is the relevant section of my ${LIFERAY_HOME}/tomcat-6.0.32/conf/server.xml on server1 (in server.xml on server2, replace node1 with node2, swap the locations of the IP_ADDRESSES, and change the uniqueId to anything 16 bytes long other than {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2}). IP_ADDRESSES here refer to the private IP addresses of server1 and server2 respectively.


<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="6" channelStartOptions="3">

  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true" />

  <Channel className="org.apache.catalina.tribes.group.GroupChannel">

    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              autoBind="0" selectorTimeout="5000" maxThreads="6"
              address="IP_ADDRESS_SERVER1" port="4444" />

    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender" />
    </Sender>

    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor" staticOnly="true"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
      <Member className="org.apache.catalina.tribes.membership.StaticMember"
              host="IP_ADDRESS_SERVER2" port="4444"
              uniqueId="{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2}" />
    </Interceptor>
  </Channel>

  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter="" />
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve" />

  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>


Monday, May 12, 2014

Extend a root volume in the Aws

To extend a root volume in AWS, we first need to take a snapshot of the correct volume and then create a new volume of the needed size from the snapshot. Checking the size of the current partition, we can see that the current size is 10 GB. Initial state:

First find the correct instance ID


Check the Volumes section and find out which volumes are attached to the correct instance. Note the volume ID and the device path, as we need to attach the new volume at the same device path.

Select the correct volume and create a snapshot of it: right-click and select the option to create a snapshot.


Enter the snapshot name and description to create a snapshot.


Snapshot Created.


Now select the Snapshots option in the menu, choose the correct snapshot, and right-click to create the new volume from it.


Select the needed size, type of storage, etc.


Once the volume is created, you can see it in the Volumes panel in the available state.


Stop the instance to which the volume is attached.


Select the Volumes panel and detach the initial, smaller volume by right-clicking the old volume.


Once the volume is detached, both volumes will be in the available state. Now right-click the new volume and attach it to the instance.


Enter the instance to which the volume should be attached and the device path which we saved before.

Make sure you enter the device path correctly, exactly as we noted it down before.


Once attached, make sure that the instance ID, volume ID and device path are correct.


Once it's all set, start the instance and check the size. In some cases we need to run resize2fs to extend the filesystem to the full volume size.


In Windows, you can resize it from the Disk Management option.
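The same flow can be driven from the AWS CLI; a sketch with placeholder IDs (the vol-/snap-/i- values and the availability zone are assumptions — substitute your own):

# 1. Snapshot the existing root volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "root volume backup"

# 2. Create a larger volume from the snapshot (same AZ as the instance)
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --size 30 --availability-zone us-east-1a

# 3. Stop the instance, swap the volumes at the same device path, start it again
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/sda1
aws ec2 start-instances --instance-ids i-0123456789abcdef0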

Wednesday, April 30, 2014

S3cmd : Used to copy files to an S3 bucket from a server (AWS)

S3cmd : a command-line tool used to copy/sync content to an AWS S3 bucket.

s3cmd can be installed from the EPEL repo or by manually compiling the source.

When installing from EPEL there can be a dependency issue with Python: the EPEL package needs Python 2.4 on the server, so if you are running another version of Python it is better to go with the manual installation.

## RHEL/CentOS 6 32-Bit ##
# wget http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
# rpm -ivh epel-release-6-8.noarch.rpm

## RHEL/CentOS 6 64-Bit ##
# wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# rpm -ivh epel-release-6-8.noarch.rpm

yum install s3cmd

For manual installation, download the tar file from the s3tools project site (s3tools.org).

Get the tar file of the needed version.
Make sure you have a Python version newer than 2.4 installed on the server.

Untar the file using tar zxvf (or tar jxvf, depending on the compression) and use Python to run the installation script:

python setup.py install


Configuring/Reconfiguring the s3cmd command

s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3
Access Key: xxxxxxxxxxxxxxxxxxxxxx
Secret Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: xxxxxxxxxx
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP and can't be used if you're behind a proxy
Use HTTPS protocol [No]: Yes

New settings:
Access Key: xxxxxxxxxxxxxxxxxxxxxx
Secret Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Encryption password: xxxxxxxxxx
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: True
HTTP Proxy server name:
HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] Y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Success. Encryption and decryption worked fine :-)

Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'


# s3cmd mb s3://test
Bucket 's3://test/' created

# s3cmd ls s3://test/

Upload a file
# s3cmd put file.txt s3://test/

Upload Similar files
# s3cmd put *.txt s3://test/

Uploading complete Directory
# s3cmd put -r upload-dir s3://test/
Upload files in a directory
# s3cmd put -r upload-dir/ s3://test/

Get a file
# s3cmd get s3://test/file.txt

Removing file from s3 bucket
# s3cmd del s3://test/file.txt
File s3://test/file.txt deleted

Removing directory from s3 bucket
# s3cmd del s3://test/backup
File s3://test/backup deleted
Sync a directory:
# s3cmd sync ./back s3://test/back

Options that can be used with sync:
--delete-removed : remove files that have been removed from the local directory.
--skip-existing : don't sync files that have already been synced.

--exclude / --include : standard shell-style wildcards; enclose them in apostrophes to avoid their expansion by the shell. For example --exclude 'x*.jpg' will match x12345.jpg but not abcdef.jpg.
--rexclude / --rinclude : regular-expression versions of the above, a much more powerful way to create match patterns. If you can get by with shell-style wildcards, just use --exclude/--include and don't worry about --rexclude/--rinclude; otherwise, read a tutorial on regexps, such knowledge will come in handy one day, I promise ;-)
--exclude-from / --rexclude-from / --(r)include-from : instead of supplying all the patterns on the command line, write them into a file and pass that file's name as a parameter to one of these options. For instance, --exclude '*.jpg' --exclude '*.gif' is the same as --exclude-from with a file containing those two patterns, one per line (see the sketch below).
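A quick sketch of --exclude-from, where patterns.txt is a hypothetical file name:

# cat patterns.txt
*.jpg
*.gif

# s3cmd sync --exclude-from patterns.txt ./back s3://test/back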