Saturday, May 18, 2024

Resizing EBS Volumes for Your EC2 Instances: A Step-by-Step Guide

Running out of space on your Amazon EC2 instance? Don't worry, you're not alone. Thankfully, with Elastic Block Store (EBS) volumes, expanding your storage capacity is a straightforward process. In this guide, we'll walk you through the steps to seamlessly resize your EBS volumes and ensure your EC2 instance has ample room to grow.

Why Resize EBS Volumes?

EBS volumes provide persistent block storage for your EC2 instances. As your applications and data grow, you might find the initial storage allocation becoming insufficient. Resizing EBS volumes allows you to increase the storage capacity without the need to create a new instance or migrate data, minimizing downtime and disruption.

Steps to Resize Your EBS Volume:

  1. Stop Your Instance: Navigate to the EC2 Instances console within the AWS Management Console and stop the instance whose EBS volume you want to resize. Note the Availability Zone of your instance – this is crucial for later steps. Also, make a note of the device name the volume is attached as (e.g., /dev/sdxx).

  2. Create a Snapshot: Go to the EBS Volumes console and locate the volume attached to your stopped instance. Select the volume and choose the "Take Snapshot" option. This creates a point-in-time backup of your data.

  3. Create a New Volume from the Snapshot: Find the newly created snapshot in the EBS Snapshots console. Select it and click "Create Volume." Specify the desired increased size for the new volume and ensure you select the same availability zone as your EC2 instance.

  4. Detach and Attach Volumes:

    • Head back to the EBS Volumes console.
    • Select the old volume, choose "Actions," and then "Detach Volume."
    • Select the new volume, choose "Actions," and then "Attach Volume."
    • Choose your instance from the list.
    • In the "Device" field, ensure you enter the device name you noted in step 1 (e.g., /dev/sdxx).
  5. Start Your Instance: Restart your EC2 instance from the EC2 Instances console.

  6. Extend the Filesystem:

    • Once the instance is running, SSH into it.
    • Run df -h to list the mounted filesystems. You'll see the new volume, likely shown as /dev/xvda1 (or similar). Note that the displayed size won't reflect the increased capacity yet.
    • Extend the filesystem to utilize the full volume size by running:
      resize2fs /dev/xvda1
      (Replace /dev/xvda1 if your volume uses a different device name.)
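
For reference, the same console workflow can be scripted with the AWS CLI. The sketch below uses placeholder IDs, size, and Availability Zone – substitute your own values. Note also that resize2fs applies to ext2/3/4 filesystems; if the volume carries an XFS filesystem, grow it with xfs_growfs instead.

# Snapshot the existing volume (all IDs below are placeholders)
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-resize backup"

# Create a larger volume from that snapshot in the instance's Availability Zone
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
    --size 100 --availability-zone us-east-1a --volume-type gp3

# Swap the volumes, reusing the device name noted in step 1
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 \
    --instance-id i-0123456789abcdef0 --device /dev/sdxx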

Important Tips:

  • Snapshots Are Your Friends: Always take a snapshot before resizing volumes, ensuring you have a rollback point in case of unexpected issues.
  • Choose the Right Volume Type: If your workload demands high performance, consider using Provisioned IOPS SSD (io1) or General Purpose SSD (gp3) volumes for optimal results.
  • Monitor Storage Usage: Regularly monitor your EBS volume usage to ensure you have enough headroom and plan for future resizing.

By following these steps, you can effortlessly resize your EBS volumes and scale your EC2 instances to meet the demands of your growing applications and workloads.

Tuesday, May 14, 2024

How to Create Cross-Account Alias Records in AWS Route 53 for an ELB

Managing DNS records across multiple AWS accounts can be challenging, especially when dealing with resources like Elastic Load Balancers (ELBs). If you have a domain hosted in one AWS account and an ELB in another, you might wonder how to create an alias record that links the two. Fortunately, AWS Route 53 supports cross-account alias records, making this process straightforward. Here’s how you can set it up.

Scenario

Account A: Contains the Route 53 hosted zone for your domain.
Account B: Contains the ELB.

Step-by-Step Guide

Step 1: Obtain the ELB DNS Name

  1. Log in to AWS Account B.
  2. Navigate to the EC2 Console: Go to the EC2 dashboard.
  3. In the navigation pane, select Load Balancers.
  4. Select your target ELB.
  5. Copy and note down its DNS name (e.g., my-elb-1234567890.us-west-2.elb.amazonaws.com).

Step 2: Create Alias Record in Route 53

  1. Log in to AWS Account A.
  2. Open the Route 53 Console: Go to the Route 53 dashboard.
  3. Navigate to Hosted Zones and select the hosted zone for your domain.
  4. Create a New Record: Click Create Record and choose Simple Routing.
  5. Configure the Alias Record:
    • Record Name: Leave this blank if you are configuring the zone apex (e.g., example.com), or enter the desired subdomain (e.g., www).
    • Record Type: Choose A - IPv4 address.
    • Alias: Select Yes.
    • Alias Target: Paste the ELB DNS name copied from Account B. AWS will automatically resolve the Alias Hosted Zone ID associated with the ELB DNS name.
  6. Save the Record: Click Create records to save your changes.

Step 3: Verify the Configuration

Check the DNS record: use a DNS query tool like dig or nslookup to verify that the domain points to the ELB:

dig example.com

The response should include the ELB DNS name.
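
If you prefer to automate this step, the same alias record can be created with the AWS CLI from Account A. This is a minimal sketch with placeholder values: ZMYZONEID is your hosted zone's ID, and ZELBZONEID must be the region-specific hosted zone ID of the ELB itself, not your own zone's ID.

cat > alias-record.json <<'EOF'
{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "ZELBZONEID",
        "DNSName": "my-elb-1234567890.us-west-2.elb.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF

aws route53 change-resource-record-sets --hosted-zone-id ZMYZONEID \
    --change-batch file://alias-record.json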

Updated AWS Documentation

AWS has updated its documentation to clarify the process of creating cross-account alias records. You can refer to the AWS Route 53 Developer Guide for detailed information.

Conclusion

By following these steps, you can successfully create an alias record in AWS Route 53 that points to an ELB in another AWS account. This method ensures seamless integration of your domain with resources across multiple AWS accounts, enhancing your infrastructure’s flexibility and security.

Sunday, December 24, 2023

Building a Custom NAT Server on AWS: A Step-by-Step Guide

Network Address Translation (NAT) servers are essential components in a cloud infrastructure, allowing instances in a private subnet to connect to the internet or other AWS services while preventing the internet from initiating a connection with those instances. This blog provides a detailed guide on setting up a NAT server from scratch in an AWS cloud environment.

Step 1: Launching an AWS Instance

Start a t1.micro instance:

  • Navigate to the AWS Management Console.
  • Select the EC2 service and choose to launch a t1.micro instance.
  • Pick an Amazon Machine Image (AMI) that suits your needs (commonly Amazon Linux or Ubuntu).
  • Configure instance details ensuring it's in the same VPC as your private subnet but in a public subnet.

Step 2: Configuring the Instance

Disable "Change Source / Dest Check":

  • Right-click on the instance from the EC2 dashboard.
  • Navigate to "Networking" and select "Change Source / Dest Check."
  • Disable this setting to allow the instance to route traffic not specifically destined for itself.
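
The same setting can be changed from the command line; a minimal sketch with a placeholder instance ID:

# Disable source/destination checking so the instance may forward traffic
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check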

Security Group Settings:

  • Ensure the Security Group associated with your NAT instance allows the necessary traffic.
  • Typically, it should allow inbound traffic on ports 80 (HTTP) and 443 (HTTPS) for updates and patches.

Step 3: Configuring the NAT Server

Access your instance via SSH and perform the following configurations:

Enable IP Forwarding:

  1. Edit the /etc/sysctl.conf file to enable IP forwarding. This setting allows the instance to forward traffic from the private subnet to the internet.

    sed -i "s/net.ipv4.ip_forward.*/net.ipv4.ip_forward = 1/g" /etc/sysctl.conf
  2. Activate the change immediately:

    echo 1 > /proc/sys/net/ipv4/ip_forward
  3. Confirm the change:

    grep net.ipv4.ip_forward /etc/sysctl.conf

    Expected output: net.ipv4.ip_forward = 1

Configure iptables:

  1. Set up NAT using iptables to masquerade outbound traffic, making it appear as if it originates from the NAT server:

    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    This command rewrites the source address of traffic leaving eth0 (the primary network interface) so that replies return to the NAT instance and can be forwarded back to the private subnet.

  2. Allow traffic on ports 80 and 443 for updates and external access:

    iptables -A INPUT -m state --state NEW -p tcp --dport 80 -j ACCEPT
    iptables -A INPUT -m state --state NEW -p tcp --dport 443 -j ACCEPT
    iptables -A FORWARD -i eth0 -j ACCEPT
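
Keep in mind that iptables rules added this way are lost on reboot. A minimal sketch of persisting them, assuming a CentOS/Amazon Linux-style system with the iptables-services package (package names and paths differ on other distributions):

yum install -y iptables-services   # provides /etc/sysconfig/iptables
service iptables save              # write the active ruleset to disk
chkconfig iptables on              # restore the saved rules at boot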

Step 4: Routing Configuration

Configure Route Tables:

  • In the AWS Console, go to the VPC Dashboard and select Route Tables.
  • Modify the route table associated with your private subnet:
    • Add a route where the destination is 0.0.0.0/0 (representing all traffic), and the target is the instance ID of your NAT server.
  • Modify the route table associated with your NAT instance:
    • Ensure there's a route where the destination is 0.0.0.0/0, and the target is the internet gateway of your VPC.
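
The same routes can be added with the AWS CLI; a minimal sketch with placeholder route table, instance, and gateway IDs:

# Private subnet: default route through the NAT instance
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 --instance-id i-0123456789abcdef0

# Public subnet: default route through the internet gateway
aws ec2 create-route --route-table-id rtb-0fedcba9876543210 \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0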

Conclusion

With these steps, you've successfully created a NAT server in your AWS environment, allowing instances in a private subnet to securely access the internet for updates and communicate with other AWS services. This setup is crucial for maintaining a secure and efficient cloud infrastructure. Always monitor and maintain your NAT server to ensure it operates smoothly and securely. AWS now also offers a managed NAT Gateway service, which is the better choice for production-grade environments.

Tuesday, April 14, 2020

Configure AWS Login With Azure AD Enterprise App

The idea is to enable users to sign in to AWS using their Azure AD credentials. This can be achieved by configuring single sign-on (SSO) between Azure AD and AWS. The process involves creating an enterprise application in Azure AD and configuring the AWS application to use Azure AD as the identity provider. Once this is set up, users can log in to AWS using their Azure AD credentials and access the AWS resources that they have been authorized to use. The tutorial provided by Microsoft explains the steps involved in setting up this SSO configuration between Azure AD and AWS.

  1. Azure >> Enterprise App >> Configure Azure AD SSO
    1. Deploy the Amazon Web Services Developer app
    2. Single Sign-On >> SAML
      1. In the configuration popup, set:
        1. Identifier: https://signin.aws.amazon.com/saml
        2. Reply URL: https://signin.aws.amazon.com/saml
      2. Save
    3. SAML Signing Certificate
      1. Download the "Federation Metadata XML"
    4. Add the AD users to the application's Users and Groups
  2. AWS >> IAM >> Identity provider
    1. Create
      1. Type: SAML
      2. Name: AZADAWS
      3. Upload the Federation Metadata XML
    2. Verify and create
  3. AWS >> IAM >> Role (this role will appear in the Azure application)
    1. SAML 2.0 Federation
      1. Choose the identity provider created earlier
      2. Allow programmatic and AWS Management Console access
      3. Choose the required permissions
      4. Create the role with an appropriate name
  4. AWS >> IAM >> Policies (this policy allows listing the roles in the AWS account)
    1. Choose JSON:

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": ["iam:ListRoles"],
            "Resource": "*"
          }
        ]
      }

    2. Name: AzureAD_SSOUserRole_Policy
    3. Create the policy
  5. AWS >> IAM >> User
    1. Name: AzureADRoleManager
    2. Choose programmatic access
    3. Permissions: attach existing policies
      1. Choose: AzureAD_SSOUserRole_Policy
    4. Create the user
    5. Copy the access and secret keys
  6. Azure Enterprise App >> choose the Amazon Web Services app deployed earlier
    1. Provisioning
      1. Set the provisioning mode to Automatic
      2. Enter the AWS access and secret keys
      3. Test and save
      4. Set the "Provisioning Status" to On
      5. Wait for a sync to complete
      6. Once the sync is complete, go to Users and Groups
        1. Choose the user and click Edit
        2. Choose the AWS role
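
To check that the provisioning user can enumerate roles (which is what the Azure app does during a sync), you can test its keys with the AWS CLI. A minimal sketch, assuming a hypothetical local profile named azureadrolemanager configured with the keys from step 5:

aws configure --profile azureadrolemanager   # enter the access and secret keys
aws iam list-roles --profile azureadrolemanager --query 'Roles[].RoleName'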

Friday, September 8, 2017

Minio Running as a Service

Minio is a distributed object storage server, similar to Amazon S3, that allows you to store and access large amounts of data. Since the service runs on different hosts, it is important to have a shared storage mechanism so that the data stays synchronized across all nodes. To achieve this, a bind mount is used to mount a directory on the host machine into the Minio server container, allowing it to read and write data in that directory. Two Docker secrets are also created, for the access and secret keys, to authenticate and authorize access to the Minio server.

The service is then created with the docker service create command, specifying the name of the service, the port to publish, the constraint to run the service only on a manager node, the bind mount for data synchronization, and the two Docker secrets for authentication. The minio/minio image is used to run the Minio server, and the /data directory is specified as the location to store data.


echo "AKIAIOSFODNN7EXAMPLE" | docker secret create access_key -
echo "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" | docker secret create secret_key -

docker service create --name="minio-service" --publish 9000:9000 \
    --constraint 'node.role == manager' \
    --mount type=bind,src=/mnt/minio/,dst=/data \
    --secret="access_key" --secret="secret_key" \
    minio/minio server /data
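
Once the service is created, it can be inspected with the usual Swarm commands (docker service logs requires a reasonably recent Docker release):

docker service ps minio-service      # shows which node runs the task
docker service logs minio-service    # shows the Minio startup output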

Wednesday, September 6, 2017

Minio: S3 Compatible Storage in Docker

Minio is a distributed object storage server that is designed to be scalable and highly available. It is built for cloud-native applications and DevOps. Minio provides Amazon S3 compatible API for cloud-native applications to store and retrieve data. It is open-source and can be deployed on-premise, on the cloud or on Kubernetes.

The command docker pull minio/minio pulls the Minio image from Docker Hub. The command docker run -p 9000:9000 minio/minio server /data runs a Minio container with port forwarding from the host to the container for the Minio web interface. The /data parameter specifies the path to the data directory that will be used to store the data on the container's file system.

**A running Docker environment is required before executing these commands.

docker pull minio/minio
docker run -p 9000:9000 minio/minio server /data


After running this command, you can access the Minio web interface by navigating to http://localhost:9000 in your web browser.
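
Because Minio exposes an S3-compatible API, standard S3 tooling works against it as well. For example, with the AWS CLI, using the access and secret keys that the container prints at startup:

aws configure                                     # enter the Minio access and secret keys
aws --endpoint-url http://localhost:9000 s3 mb s3://test-bucket
aws --endpoint-url http://localhost:9000 s3 ls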

Thursday, August 17, 2017

Increase the Root Disk Size for CentOS in AWS

Issue: The root partition is not scaled after the EBS volume is resized.

Growpart, called by cloud-init, only works on kernels newer than 3.8, because only newer kernels support changing the partition size of a mounted partition. On older kernels, the resizing of the root partition happens in the initrd stage, before the root partition is mounted, and the subsequent cloud-init growpart run is a no-op.


# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  30G  0 disk
└─xvda1 202:1    0   8G  0 part /

Perform the following commands as root:

# yum install cloud-utils-growpart

# growpart /dev/xvda 1

# reboot
After the reboot:

# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  30G  0 disk
└─xvda1 202:1    0  30G  0 part /
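
growpart only grows the partition; if the filesystem itself is not extended automatically, grow it to match. A minimal sketch, depending on the root filesystem type:

# ext2/3/4 root filesystem:
resize2fs /dev/xvda1

# XFS root filesystem (the CentOS 7 default):
xfs_growfs /

df -h /   # verify the new size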

Tuesday, November 10, 2015

Oracle Database Backup to AWS S3: Error Occurred When Installing OSB (Oracle Secure Backup) on Amazon S3

I tried to set up an RMAN backup using the Amazon cloud module and ran into the following error. Internet connectivity was confirmed to be working.

#> java -jar osbws_install.jar -AWSID MyAWSID -AWSKey MYAWSKEY -otnUser MYOTNID -otnPass MYOTNPASS -walletDir $ORACLE_HOME/dbs/osbws_wallet -libDir $ORACLE_HOME/lib -debug

Fix: The OSB module works only with Java versions 1.5 and 1.6. Newer machines run Java 1.7; retry the installation with Java 1.6.
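
A minimal sketch of retrying with an older JDK, assuming a hypothetical Java 1.6 installation path (adjust to your environment):

# Check which Java version is currently on the PATH
java -version

# Re-run the installer with a Java 1.6 binary (the path below is a placeholder)
/usr/java/jdk1.6.0/bin/java -jar osbws_install.jar -AWSID MyAWSID -AWSKey MYAWSKEY \
    -otnUser MYOTNID -otnPass MYOTNPASS \
    -walletDir $ORACLE_HOME/dbs/osbws_wallet -libDir $ORACLE_HOME/lib -debug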