
Monday, July 25, 2016

Import OVA to Amazon AWS

VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back to your on-premises environment. This offering allows you to leverage your existing investments in the virtual machines that you have built to meet your IT security, configuration management, and compliance requirements by bringing those virtual machines into Amazon EC2 as ready-to-use instances. You can also export imported instances back to your on-premises virtualization infrastructure, allowing you to deploy workloads across your IT infrastructure.


Step 1 : Install the AWS CLI
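
On a Linux machine, one common way at the time is to install it via Python pip (a sketch, assuming Python and pip are present; the bundled installer from AWS works as well):

$ sudo pip install awscli
$ aws --version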

Step 2 : Configure the AWS CLI. We can get the Access Key ID and Secret Access Key from the AWS IAM service under the specific user.
aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: us-west-2
Default output format [None]: (press ENTER)

Step 3 : Create the IAM role and policies
Now create two files, trust-policy.json and role-policy.json; in the second file you’ll need to replace “$bucketname” with your bucket name.

trust-policy.json:
===============
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"",
         "Effect":"Allow",
         "Principal":{
            "Service":"vmie.amazonaws.com"
         },
         "Action":"sts:AssumeRole",
         "Condition":{
            "StringEquals":{
               "sts:ExternalId":"vmimport"
            }
         }
      }
   ]
}
===============

role-policy.json:
=================
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:ListBucket",
            "s3:GetBucketLocation"
         ],
         "Resource":[
            "arn:aws:s3:::$bucketname"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetObject"
         ],
         "Resource":[
            "arn:aws:s3:::$bucketname/*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "ec2:ModifySnapshotAttribute",
            "ec2:CopySnapshot",
            "ec2:RegisterImage",
            "ec2:Describe*"
         ],
         "Resource":"*"
      }
   ]
}
===================

Now, use the AWS CLI to apply the trust and role policies:
$ aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
$ aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json
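
To confirm the role and policy were created correctly, you can read them back with the standard IAM read calls:

$ aws iam get-role --role-name vmimport
$ aws iam get-role-policy --role-name vmimport --policy-name vmimport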

Step 4 : Check VM prerequisites before exporting as OVA
========================
Before exporting the VM from vSphere and importing it into the AWS cloud, please make sure that all prerequisites for import have been fulfilled.
Compare your VM with this checklist:
-all unnecessary services are disabled,
-no unnecessary applications are placed in Windows Startup,
-there are no pending reboots (reboot flag set by Windows Update or by any other software; see the check after this list),
-VM volumes are defragmented and each disk is resized to only what is necessary (bigger disk = longer conversion time),
-you use a single network interface, set up to use DHCP (this should be done prior to import),
-no ISO is attached to the VM,
-Microsoft .NET Framework 3.5 Service Pack 1 or later is installed (required to support EC2Config),
-your VM's root volume uses an MBR partition table,
-your anti-virus and anti-spyware software and firewalls are disabled,
-only one partition is bootable,
-RDP access is enabled,
-the administrator account and all other user accounts use secure passwords; all accounts must have passwords or the import might fail,
-VMware Tools are uninstalled from your VMware VM,
-the language of your OS is EN-US,
-the required hotfixes (according to OS version) and the latest EC2Config are installed:
https://aws.amazon.com/developertools/5562082477397515
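
As one quick check for the pending-reboot item above, you can query the flag that Windows Update sets (a sketch; other software may use different markers):

C:\> reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update\RebootRequired"

If the key does not exist, no Windows Update reboot is pending.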
=================

Step 5 : Uploading the OVA to S3 and Creating the AMI
You can upload your VMs in OVA format to your Amazon S3 bucket using the upload tool of your choice. After you upload your VM to Amazon S3, you can use the AWS CLI to import your OVA image. These tools accept either a URL (public Amazon S3 file, a signed GET URL for private Amazon S3 files) or the Amazon S3 bucket and path to the disk file.
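
For example, the OVA can be uploaded with the AWS CLI itself (using the bucket and file names assumed in containers.json below):

$ aws s3 cp my-windows-2008-vm.ova s3://my-import-bucket/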

Use aws ec2 import-image to create a new import image task.
The syntax of the command is as follows:

$ aws ec2 import-image --description "Windows 2008 OVA" --disk-containers file://containers.json
The file containers.json is a JSON document that contains information about the image. The S3Key is the name of the image file you uploaded to the S3Bucket.

[{
    "Description": "First CLI task",
    "Format": "ova",
    "UserBucket": {
        "S3Bucket": "my-import-bucket",
        "S3Key": "my-windows-2008-vm.ova"
    }
}]

Step 6 : Checking the Status


Use the “aws ec2 describe-import-image-tasks” command to return the status of the task. The syntax of the command is as follows:
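
For example (the import task ID shown is a placeholder; use the ImportTaskId returned by the import-image call):

$ aws ec2 describe-import-image-tasks --import-task-ids import-ami-abcd1234

The task is finished when the Status field reports "completed" and an ImageId is returned.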

Regarding licensing, within the API call "aws ec2 import-image" we can define a "--license-type" value.
Based on this option your VM will either use your own license (BYOL) or will activate itself against AWS KMS. The option should be set to "AWS" or "BYOL".
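
For example, to run the import with BYOL licensing (same description and containers file as in Step 5):

$ aws ec2 import-image --description "Windows 2008 OVA" --license-type BYOL --disk-containers file://containers.json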

Wednesday, February 17, 2016

AWS IAM policy for limiting users' access to a group of instances with a particular Tag Name.

Replace TAGNAME and VALUE in the policy below with your own tag key and value; the region and account ID in the Resource ARN are likewise placeholders to adjust:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:RebootInstances"
            ],
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/TAGNAME": "VALUE"
                }
            },
            "Resource": "arn:aws:ec2:eu-east-0:123654456123:instance/*"
        }
    ]
}
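
As an example of applying it, the policy can be attached inline to an IAM group with the CLI (the group name, policy name, and file name here are hypothetical):

$ aws iam put-group-policy --group-name ops-team --policy-name StartStopRebootByTag --policy-document file://tag-policy.json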

Mount S3 Bucket on CentOS/RHEL and Ubuntu using S3FS

S3FS is a FUSE (Filesystem in Userspace) based solution for mounting Amazon S3 buckets. We can use system commands on the mounted bucket just like on any other hard disk in the system: on s3fs-mounted file systems we can simply run cp, mv, ls and the other basic Unix commands just as on locally attached disks.
This article will help you to install S3FS and FUSE by compiling them from source, and to mount an S3 bucket on your CentOS/RHEL and Ubuntu systems.
Step 1: Remove Existing Packages
First check whether any existing s3fs or fuse packages are installed on your system. If any are installed, remove them to avoid file conflicts.
CentOS/RHEL Users:
# yum remove fuse fuse-s3fs
Ubuntu Users:
$ sudo apt-get remove fuse
Step 2: Install Required Packages
After removing the above packages, install all dependencies for fuse and s3fs. Install the required packages with the following commands.
CentOS/RHEL Users:
# yum install gcc libstdc++-devel gcc-c++ curl-devel libxml2-devel openssl-devel mailcap
Ubuntu Users:
$ sudo apt-get install build-essential libcurl4-openssl-dev libxml2-dev mime-support
Step 3: Download and Compile Latest Fuse
Download and compile the latest version of the fuse source code. For this article we are using fuse version 2.9.3. The following set of commands will compile fuse and load the fuse module into the kernel.
# cd /usr/src/
# wget http://downloads.sourceforge.net/project/fuse/fuse-2.X/2.9.3/fuse-2.9.3.tar.gz
# tar xzf fuse-2.9.3.tar.gz
# cd fuse-2.9.3
# ./configure --prefix=/usr/local
# make && make install
# export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
# ldconfig
# modprobe fuse
Step 4: Download and Compile Latest S3FS
Download and compile the latest version of the s3fs source code. For this article we are using s3fs version 1.74. After downloading, extract the archive and compile the source code on the system.
# cd /usr/src/
# wget https://s3fs.googlecode.com/files/s3fs-1.74.tar.gz
# tar xzf s3fs-1.74.tar.gz
# cd s3fs-1.74
# ./configure --prefix=/usr/local
# make && make install

Step 5: Setup Access Key
In order to configure s3fs we need the Access Key and Secret Access Key of your AWS account. You can get these security keys from the AWS IAM console.
# echo AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY > ~/.passwd-s3fs
# chmod 600 ~/.passwd-s3fs
Note: Replace AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with your actual key values.
Step 6: Mount S3 Bucket
Finally, mount your S3 bucket using the following set of commands. For this example, we are using the S3 bucket name mydbbackup and the mount point /s3mnt.
# mkdir /tmp/cache
# mkdir /s3mnt
# chmod 777 /tmp/cache /s3mnt
# s3fs -o use_cache=/tmp/cache mydbbackup /s3mnt
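
To make the mount permanent across reboots, you can add an entry like the following to /etc/fstab (a sketch using the classic s3fs#bucket syntax this s3fs version supports; bucket and mount point as in the example above):

s3fs#mydbbackup /s3mnt fuse _netdev,use_cache=/tmp/cache,allow_other 0 0

Afterwards, a simple df -h /s3mnt or copying a test file into /s3mnt confirms the bucket is mounted.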