Showing posts with label AWS.

Tuesday, April 14, 2020

Configure AWS Login With Azure AD Enterprise App

The idea is to enable users to sign in to AWS using their Azure AD credentials. This can be achieved by configuring single sign-on (SSO) between Azure AD and AWS. The process involves creating an enterprise application in Azure AD and configuring the AWS application to use Azure AD as the identity provider. Once this is set up, users can log in to AWS using their Azure AD credentials and access the AWS resources that they have been authorized to use. The tutorial provided by Microsoft explains the steps involved in setting up this SSO configuration between Azure AD and AWS.



  1. Azure >> Enterprise App >> Configure Azure AD SSO
    1. Deploy the Amazon Web Services Developer app
    2. Single Sign-On >> SAML
      1. In the popup, set:
        1. Identifier: https://signin.aws.amazon.com/saml
        2. Reply URL: https://signin.aws.amazon.com/saml
      2. Save
    3. SAML Signing Certificate
      1. Download "Federation Metadata XML"
    4. Add the AD users to the application's Users and Groups
  2. AWS >> IAM >> Identity providers
    1. Create
      1. Type: SAML
      2. Name: AZADAWS
      3. Upload the metadata XML
    2. Verify and create
  3. AWS >> IAM >> Roles << this role will appear in the Azure application
    1. SAML 2.0 federation
      1. Choose the identity provider created earlier
      2. Allow programmatic and AWS Management Console access
      3. Choose the required permissions
      4. Create the role with an appropriate name
  4. AWS >> IAM >> Policies << this policy allows Azure AD to fetch the roles from the AWS account (see the CLI sketch after this list)
    1. Choose JSON
      1. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "iam:ListRoles" ], "Resource": "*" } ] }
    2. Name: AzureAD_SSOUserRole_Policy
    3. Create the policy
  5. AWS >> IAM >> USER
    1. Name : AzureADRoleManager
    2. Choose Programmatic access
    3. Permission: Attach existing policies
      1. Choose : AzureAD_SSOUserRole_Policy
    4. Create User
    5. Copy Access and Secret key
  6. Azure Enterprise App >> choose the Amazon Web Services app that was deployed
    1. Provisioning
      1. Set the mode to Automatic
      2. Enter the AWS access key and secret key
      3. Test and save
      4. Set the "Provisioning Status" to On
      5. Wait for a sync to complete
      6. Once the sync is complete, go to Users and Groups
        1. Choose the user and click Edit
        2. Choose the AWS role
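
If you prefer to do steps 4 and 5 from the AWS CLI instead of the console, here is a minimal sketch; the policy and user names simply mirror the ones above, and <account-id> is a placeholder for your AWS account ID.

# Policy that lets Azure AD list the roles in the account
aws iam create-policy --policy-name AzureAD_SSOUserRole_Policy \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["iam:ListRoles"],"Resource":"*"}]}'

# Provisioning user with programmatic access, with the policy attached
aws iam create-user --user-name AzureADRoleManager
aws iam attach-user-policy --user-name AzureADRoleManager \
  --policy-arn arn:aws:iam::<account-id>:policy/AzureAD_SSOUserRole_Policy

# Access key / secret key pair to paste into the Azure provisioning screen
aws iam create-access-key --user-name AzureADRoleManager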






Friday, September 8, 2017

Minio Running as a Service

Minio is a distributed object storage server, similar to Amazon S3, that allows you to store and access large amounts of data. Since the service is running on different hosts, it is important to have a shared storage mechanism so that the data is synchronized across all nodes. To achieve this, a bind mount is used to mount a directory on the host machine to the Minio server container, allowing it to read and write data to the directory. Additionally, two Docker secrets are created for access and secret keys to authenticate and authorize access to the Minio server. Finally, the service is created with the docker service create command, specifying the name of the service, the port to publish, the constraint to run the service only on a manager node, the bind mount for data synchronization, and the two Docker secrets for authentication. The minio/minio image is used to run the Minio server, and the /data directory is specified as the location to store data.


echo "AKIAIOSFODNN7EXAMPLE" | docker secret create access_key -
echo "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" | docker secret create secret_key -

docker service create --name="minio-service" \
  --publish 9000:9000 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/mnt/minio/,dst=/data \
  --secret="access_key" --secret="secret_key" \
  minio/minio server /data
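
A quick way to confirm the service actually came up (assuming the service name used above, and Docker 17.03 or later for the logs command):

docker service ps minio-service       # the task should be Running on a manager node
docker service logs minio-service     # the startup banner shows the endpoint Minio is listening on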

Wednesday, September 6, 2017

Minio: S3-Compatible Storage in Docker

Minio is a distributed object storage server that is designed to be scalable and highly available. It is built for cloud-native applications and DevOps. Minio provides Amazon S3 compatible API for cloud-native applications to store and retrieve data. It is open-source and can be deployed on-premise, on the cloud or on Kubernetes.

The command docker pull minio/minio pulls the Minio image from Docker Hub. The command docker run -p 9000:9000 minio/minio server /data runs a Minio container with port forwarding from the host to the container for the Minio web interface. The /data parameter specifies the path to the data directory that will be used to store the data on the container's file system.

** Note: a working Docker environment is required before running these commands.

docker pull minio/minio
docker run -p 9000:9000 minio/minio server /data


After running this command, you can access the Minio web interface by navigating to http://localhost:9000 in your web browser.
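
Data stored this way lives inside the container's filesystem and is lost with the container. A hedged variant that keeps the data on the host instead (the /mnt/minio-data path is just an example):

docker run -d -p 9000:9000 --name minio -v /mnt/minio-data:/data minio/minio server /data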






Thursday, August 17, 2017

Increase the Root Disk Size for CentOS in AWS

Issue: the root partition is not resized after the EBS volume is enlarged.

Growpart called by cloud-init only works for kernels >3.8. Only newer kernels support changing the partition size of a mounted partition. When using an older kernel the resizing of the root partition happens in the initrd stage before the root partition is mounted and the subsequent cloud-init growpart run is a no-op.


#lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  30G  0 disk
└─xvda1 202:1    0   8G  0 part /
Perform the following commands as root:

# yum install cloud-utils-growpart

# growpart /dev/xvda 1

# reboot
After the reboot:

# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  30G  0 disk
└─xvda1 202:1    0  30G  0 part /
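
The reboot lets the kernel and cloud-init pick up the new partition size. If the filesystem itself still reports the old size afterwards, growing it manually is a reasonable next step; this sketch assumes an ext3/ext4 root filesystem on /dev/xvda1 as in the output above:

# df -h /                 # filesystem may still report the old size
# resize2fs /dev/xvda1    # grow an ext3/ext4 filesystem online to fill the partition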

Tuesday, November 10, 2015

Oracle Database backup to AWS S3: Error occurred when installing OSB (Oracle Secure Backup) on Amazon S3



I tried to set up an RMAN backup using the Amazon cloud module and ran into the following error.
Internet connectivity was definitely working.

#> java -jar osbws_install.jar -AWSID MyAWSID -AWSKey MYAWSKEY -otnUser MYOTNID -otnPass MYOTNPASS -walletDir $ORACLE_HOME/dbs/osbws_wallet -libDir $ORACLE_HOME/lib -debug










Fix: The OSB module works only with Java versions 1.5 and 1.6. The newer machines are running Java 1.7, so retry with Java 1.6.
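
If a 1.6 JRE is installed alongside the default Java, pointing the installer at it directly should be enough. The JRE path below is only an example of where a 1.6 install might live; adjust it to your system:

#> /usr/java/jre1.6.0_45/bin/java -jar osbws_install.jar -AWSID MyAWSID -AWSKey MYAWSKEY -otnUser MYOTNID -otnPass MYOTNPASS -walletDir $ORACLE_HOME/dbs/osbws_wallet -libDir $ORACLE_HOME/lib -debug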


Friday, June 12, 2015

Getting the Client IP Behind an AWS ELB (HTTP/HTTPS Mode)

We need to add the following LogFormat to get the client's IP.

We use the X-Forwarded-For header in the Apache configuration to get it done.

# The following directives define some format nicknames for use with
# a CustomLog directive (see below).
#
LogFormat "\"%{X-Forwarded-For}i\" %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined_new
#....

#...
#
# START_HOST example.com
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot "/var/www/example.com/html"
    <Directory "/var/www/example.com/html">
        Options Includes FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
    CustomLog /var/www/logs/example.com/access_log combined_new
    ErrorLog /var/www/logs/example.com/error_log
</VirtualHost>
# END_HOST example.com

Thursday, July 10, 2014

Configure Amazon CloudWatch Monitoring Scripts for Linux

Installing dependencies

yum install cpan
yum install perl-Time-HiRes
cpan    (at the cpan> prompt, run: install Switch)
yum install zip unzip
yum install wget
yum install perl-Crypt-SSLeay


wget http://ec2-downloads.s3.amazonaws.com/cloudwatch-samples/CloudWatchMonitoringScripts-v1.1.0.zip
unzip CloudWatchMonitoringScripts-v1.1.0.zip
cd aws-scripts-mon/

vi awscreds.template
AWSAccessKeyId=
AWSSecretKey=

Test it
./mon-put-instance-data.pl --mem-util --mem-used --mem-avail --disk-path=/ --disk-space-util --disk-space-used --disk-space-avail --swap-used --aws-credential-file=/root/aws-scripts-mon/awscreds.template

Add it to cron
* * * * * /usr/bin/perl /root/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --aws-credential-file=/root/aws-scripts-mon/awscreds.template --from-cron
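
Before relying on the cron entry, the script can be run in test mode first. If your copy of the scripts supports them (check the bundled README), the --verify and --verbose flags report what would be sent without pushing any metrics:

./mon-put-instance-data.pl --mem-util --mem-used --mem-avail --aws-credential-file=/root/aws-scripts-mon/awscreds.template --verify --verbose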

 

 

Wednesday, June 11, 2014

Putty + Remote tunnel + RDP

Installing Putty and Configuring SSH Tunnel and Remote Desktop

On the CLIENT computer we are connecting from, we will need to install Putty and configure it to connect RDP over SSH (ie create the tunnel).

1. To install PuTTY, just extract the Zip file to your C:\Putty folder.  The Putty folder should contain several .exe programs.

2. To run putty, we will just run the Putty.exe in the C:\Putty folder.  To make it easier to launch, you can create a shortcut to Putty.exe and put it on your desktop or in your Start Menu.

3. Under the Session section (on the left pane), type in the host name or IP of the PC we are connecting to (10.0.1.5 on our local network in this example) and leave the port at 22.  You can also enter a name in the Saved Sessions box to save the profile for easy connection later (more on this below).

Under the Connection > SSH > Tunnels section, under Source port, enter a local port to use for the tunnel (I use a high port in the 40000 range; we'll use 40000). In the Destination box, enter the address of the remote computer running Copssh/SSH plus the RDP port, 10.0.1.5:3389 in my example.




Go back to the Sessions section and click the Save button under the Saved Sessions box and then hit the Open button.

4. You should get a prompt to accept a key the first time we connect, click Yes.

5.  We should now get a command-window-like interface asking for a username.  Enter the remote computer's login username and password.  Once you connect, you are left at a shell prompt on the remote machine.

Connecting via Remote Desktop over the SSH Tunnel

1. On the laptop/client computer, open Remote Desktop Connection (Start Menu > All Programs > Accessories > Remote Desktop Connection)

2. Enter in 127.0.0.1:40000 for the computer to connect to.

127.0.0.1 is the local loopback address and 40000 is the local tunnel port.  This forces our Remote Desktop client through the SSH tunnel we created on port 40000, which carries the traffic over SSH (port 22) to the remote PC's RDP service.
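
As an alternative to the GUI steps, the same tunnel can be opened from the command line with plink.exe, which ships in the same PuTTY folder. This is just a sketch reusing the example addresses above:

plink.exe -N -L 40000:10.0.1.5:3389 user@10.0.1.5

Then point Remote Desktop at 127.0.0.1:40000 exactly as described above.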

Wednesday, May 21, 2014

Installing Amazon Command Line using PIP

Installing the repo needed for pip

cd /tmp
wget http://mirror-fpt-telecom.fpt.net/fedora/epel/6/i386/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm

Installing C-compiler for Pip

yum install gcc

Installing amazon cli

pip install awscli

Configure Amazon Cli

aws configure

You need your AWS access key, secret key, default region and output format.

Installing and configuring Amazon EC2 command line

Now download the Amazon EC2 API tools using the following commands and extract them to a proper place. For this example, we are using the /opt directory.

# mkdir /opt/ec2
# wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
# unzip ec2-api-tools.zip -d /tmp
# mv /tmp/ec2-api-tools-* /opt/ec2/tools
Step 3- Download Private Key and Certificate Files

Now create and download the X.509 certificate files (a private key file and a certificate file) from the Security Credentials page of your account and copy them to the /opt/ec2/certs/ directory.

# ls -l /opt/ec2/certs/

-rw-r--r--. 1 root root 1281 May 15 12:57 my-ec2-cert.pem
-rw-r--r--. 1 root root 1704 May 15 12:56 my-ec2-pk.pem
Step 4- Configure Environment

Install JAVA
The Amazon EC2 command line tools require Java 1.6 or later. Make sure you have a proper Java installed on your system; either the JRE or the JDK is fine.
# java -version
java version "1.8.0_05"
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) Client VM (build 25.5-b02, mixed mode)
If you don't have Java installed on your system, use the links below to install it first:
Installing JAVA/JDK 8 on CentOS, RHEL and Fedora
Installing JAVA/JDK 8 on Ubuntu

Now edit the ~/.bashrc file and add the following values at the end of the file:

export EC2_BASE=/opt/ec2
export EC2_HOME=$EC2_BASE/tools
export EC2_PRIVATE_KEY=$EC2_BASE/certs/my-ec2-pk.pem
export EC2_CERT=$EC2_BASE/certs/my-ec2-cert.pem
export EC2_URL=https://ec2.xxxxxxx.amazonaws.com
export AWS_ACCOUNT_NUMBER=
export PATH=$PATH:$EC2_HOME/bin
export JAVA_HOME=/opt/jdk1.8.0_05
Now execute the following command to set environment variables

$ source ~/.bashrc

After completing the configuration, run the following command to quickly verify the setup.

# ec2-describe-regions

REGION eu-west-1 ec2.eu-west-1.amazonaws.com
REGION sa-east-1 ec2.sa-east-1.amazonaws.com
REGION us-east-1 ec2.us-east-1.amazonaws.com
REGION ap-northeast-1 ec2.ap-northeast-1.amazonaws.com
REGION us-west-2 ec2.us-west-2.amazonaws.com
REGION us-west-1 ec2.us-west-1.amazonaws.com
REGION ap-southeast-1 ec2.ap-southeast-1.amazonaws.com
REGION ap-southeast-2 ec2.ap-southeast-2.amazonaws.com
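
The same check can also be done with the pip-installed awscli from earlier, assuming aws configure has already been run:

aws ec2 describe-regions --output table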

Thursday, May 15, 2014

Apache load balancing using mod_jk

Assuming you have two Tomcat servers, both configured with the AJP connector listening on port 8009.

Download the module from http://tomcat.apache.org/download-connectors.cgi

Sample Version http://apache.mirrors.hoobly.com/tomcat/tomcat-connectors/jk/tomcat-connectors-1.2.40-src.tar.gz

# tar -xzvf tomcat-connectors-1.2.40-src.tar.gz

# cd tomcat-connectors-1.2.40-src/native/

# which apxs

# ./configure --with-apxs=/usr/sbin/apxs --enable-api-compatibility

# make

# make install

After completing this, you will get the mod_jk.so file at /usr/lib64/httpd/modules/mod_jk.so,

or else copy the module to Apache's modules directory manually.

 

If that went well, the installation part is complete; let's start the configuration part.

Open the httpd.conf file and add the following at the end of the file.

# vi /etc/httpd/conf/httpd.conf

LoadModule jk_module modules/mod_jk.so

JkWorkersFile "/etc/httpd/conf/worker.properties"
JkLogFile "/var/log/httpd/mod_jk.log"
JkRequestLogFormat "%w %V %T"
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
JkLogLevel info
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"

Add the two lines below inside the VirtualHost:

JkMount / loadbalancer
JkMount /status status

Content of the worker.properties

cat /etc/httpd/conf/worker.properties
worker.list=loadbalancer,status

worker.template.type=ajp13
worker.template.connection_pool_size=50
worker.template.socket_timeout=1200

worker.node1.reference=worker.template
worker.node1.port=8009
worker.node1.host=54.86.231.61
worker.node1.type=ajp13
worker.node1.jvm_route=node1

worker.node2.reference=worker.template
worker.node2.port=8009
worker.node2.host=54.86.17.252
worker.node2.type=ajp13
worker.node2.jvm_route=node2

worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
#worker.loadbalancer.sticky_session=TRUE

To enable the status page (mapped to /status above):

worker.status.type=status
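
After restarting Apache, a quick sanity check of the status worker (a sketch; adjust the hostname if the JkMount lines live in a name-based VirtualHost):

# service httpd restart
# curl -s http://localhost/status | head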

 

 

 

Tomcat-Static-Unicast-Clustering

Tomcat needs to be configured to allow a cluster of two nodes over unicast. The following is the relevant section of my ${LIFERAY_HOME}/tomcat-6.0.32/conf/server.xml on server1. For server2's server.xml, replace node1 with node2, swap the locations of the IP_ADDRESSES, and change uniqueId to any 16-byte value other than {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2}. IP_ADDRESSES here refer to the private IP addresses of server1 and server2 respectively.

================================

<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="6" channelStartOptions="3">

<Manager className="org.apache.catalina.ha.session.DeltaManager" expireSessionsOnShutdown="false" notifyListenersOnReplication="true" />

<Channel className="org.apache.catalina.tribes.group.GroupChannel">

<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
autoBind="0" selectorTimeout="5000" maxThreads="6"
address="IP_ADDRESS_SERVER1" port="4444" />
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"
timeout="60000"
keepAliveTime="10"
keepAliveCount="0"
/>
</Sender>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor" staticOnly="true"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
<Member className="org.apache.catalina.tribes.membership.StaticMember"
host="IP_ADDRESS_SERVER2"
port="4444"
uniqueId="{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2}"/>
</Interceptor>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter="" />
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve" />
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>

=================================

Monday, May 12, 2014

Extend a root volume in AWS

To extend a root volume in AWS, we first need to take a snapshot of the correct volume and then create a new volume of the needed size from that snapshot. Checking the size of the current partition (initial state), we can see that the current size is 10 GB.

First find the correct instance ID.

Check the Volumes section and find out which volumes are attached to that instance. Note the volume ID and the device path, as we need to attach the new volume to the same device path.

Select the correct volume and create a snapshot of it: right-click, select the option to create a snapshot, then enter the snapshot name and description. The snapshot is created.

Now select the Snapshots option in the menu, choose the snapshot of the volume that needs to be extended, and right-click to create a new volume. Select the needed size, type of storage, etc.

Once the volume is created you can see it in the Volumes panel in the "available" state.

Stop the instance to which the volume is attached.

In the Volumes panel, detach the original, smaller volume by right-clicking it. Once it is detached, both volumes will be in the "available" state. Now right-click the new volume and attach it to the instance: enter the instance to which the volume should be attached and the device path we saved earlier. Make sure you enter the device path exactly as noted down before.

Once attached, check that the instance ID, volume ID and device path are all correct.

Once that is all set, start the instance and check the size. In some cases we need to run resize2fs to extend the filesystem to the size of the volume.

In Windows you can resize it from the Disk Management option.
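
The same console steps map onto the AWS CLI; a sketch with placeholder IDs (the new volume must be created in the same availability zone as the instance, and the device name must match the one noted earlier):

aws ec2 create-snapshot --volume-id vol-xxxxxxxx --description "pre-resize snapshot"
aws ec2 create-volume --snapshot-id snap-xxxxxxxx --size 30 --availability-zone us-east-1a
aws ec2 stop-instances --instance-ids i-xxxxxxxx
aws ec2 detach-volume --volume-id vol-xxxxxxxx
aws ec2 attach-volume --volume-id vol-yyyyyyyy --instance-id i-xxxxxxxx --device /dev/sda1
aws ec2 start-instances --instance-ids i-xxxxxxxx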

Wednesday, April 30, 2014

S3cmd: used to copy files from a server to an S3 bucket (AWS)

S3cmd: a command used to copy/sync content to an S3 bucket.

s3cmd can be installed from epel repo or by manually compiling the code.

While installing from EPEL there can be a dependency issue with Python: the EPEL package expects Python 2.4 on the server. If you are running another version of Python, it is better to go with the manual installation.

## RHEL/CentOS 6 32-Bit ##
# wget http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
# rpm -ivh epel-release-6-8.noarch.rpm

## RHEL/CentOS 6 64-Bit ##
# wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# rpm -ivh epel-release-6-8.noarch.rpm

yum install s3cmd

For manual installation, download the tar file from

http://sourceforge.net/projects/s3tools/files/s3cmd/

Get the tar file of the needed version and make sure you have a Python version greater than 2.4 installed on the server.

Untar the file using tar zxvf (or tar jxvf for a .bz2 archive) as needed, then use Python to run the installation script:

python setup.py install


Configuring/Reconfiguring the s3cmd command

s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3
Access Key: xxxxxxxxxxxxxxxxxxxxxx
Secret Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: xxxxxxxxxx
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP and can't be used if you're behind a proxy
Use HTTPS protocol [No]: Yes

New settings:
Access Key: xxxxxxxxxxxxxxxxxxxxxx
Secret Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Encryption password: xxxxxxxxxx
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: True
HTTP Proxy server name:
HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] Y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Success. Encryption and decryption worked fine :-)

Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'

 

# s3cmd mb s3://test
Bucket 's3://test/' created

# s3cmd ls s3://test/

Upload a file
# s3cmd put file.txt s3://test/

Upload Similar files
# s3cmd put *.txt s3://test/

Upload a complete directory (creates upload-dir inside the bucket)
# s3cmd put -r upload-dir s3://test/

Upload the files inside a directory (note the trailing slash)
# s3cmd put -r upload-dir/ s3://test/

Get a file
# s3cmd get s3://test/file.txt

Removing a file from the s3 bucket
# s3cmd del s3://test/file.txt
File s3://test/file.txt deleted

Removing a directory from the s3 bucket
# s3cmd del -r s3://test/backup
File s3://test/backup deleted

Sync a directory
# s3cmd sync ./back s3://test/back

Options that can be used with sync:
--delete-removed : remove files from the bucket that have been removed from the local directory.
--skip-existing : don't sync files that have already been synced.

--exclude / --include : standard shell-style wildcards; enclose them in apostrophes to avoid their expansion by the shell. For example --exclude 'x*.jpg' will match x12345.jpg but not abcdef.jpg.
--rexclude / --rinclude : regular-expression versions of the above, a much more powerful way to create match patterns. If you have no clue about RegExps and can get by with shell-style wildcards, just use --exclude/--include and don't worry about --rexclude/--rinclude. Or read a tutorial on RegExps; that knowledge will come in handy one day, I promise ;-)
--exclude-from / --rexclude-from / --(r)include-from : instead of supplying all the patterns on the command line, write them into a file and pass that file's name as a parameter to one of these options. For instance --exclude '*.jpg' --exclude '*.gif' is the same as putting those two patterns into a file and passing it with --exclude-from.
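
Putting the sync options together, a small sketch (bucket and paths reuse the examples above):

# s3cmd sync --delete-removed --exclude '*.tmp' ./back s3://test/back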