
Friday, September 8, 2017

Minio Running as a Service

Minio is a distributed object storage server, similar to Amazon S3, that lets you store and access large amounts of data. Because a Swarm service can be scheduled on different hosts, a shared storage mechanism is needed so that data stays in sync across nodes. To achieve this, a bind mount maps a directory on the host machine into the Minio container, allowing it to read and write data there. Two Docker secrets hold the access and secret keys used to authenticate and authorize access to the Minio server. Finally, docker service create starts the service, specifying the service name, the published port, a constraint that pins it to a manager node, the bind mount for data, and the two secrets. The minio/minio image runs the Minio server, with /data as the storage location.

echo "AKIAIOSFODNN7EXAMPLE" | docker secret create access_key -
echo "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" | docker secret create secret_key -

docker service create --name="minio-service" \
  --publish 9000:9000 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/mnt/minio/,dst=/data \
  --secret="access_key" --secret="secret_key" \
  minio/minio server /data
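The same service can also be deployed declaratively with docker stack deploy. The following is a sketch of an equivalent Compose v3.1+ stack file, assuming the two secrets have already been created with docker secret create as above:

```yaml
version: "3.1"

services:
  minio:
    image: minio/minio
    command: server /data
    ports:
      - "9000:9000"
    volumes:
      # Bind-mount the host directory so data survives container restarts.
      - /mnt/minio:/data
    secrets:
      - access_key
      - secret_key
    deploy:
      placement:
        constraints:
          - node.role == manager

secrets:
  # Refer to the secrets created earlier with `docker secret create`.
  access_key:
    external: true
  secret_key:
    external: true
```

Deploy it with `docker stack deploy -c minio-stack.yml minio`. Inside the container, Docker exposes each secret as a file under /run/secrets/, which the Minio image reads for its credentials.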

Wednesday, September 6, 2017

Minio: S3-Compatible Storage in Docker

Minio is a distributed object storage server that is designed to be scalable and highly available. It is built for cloud-native applications and DevOps. Minio provides Amazon S3 compatible API for cloud-native applications to store and retrieve data. It is open-source and can be deployed on-premise, on the cloud or on Kubernetes.

The command docker pull minio/minio pulls the Minio image from Docker Hub. The command docker run -p 9000:9000 minio/minio server /data runs a Minio container with port forwarding from the host to the container for the Minio web interface. The /data parameter specifies the path to the data directory that will be used to store the data on the container's file system.

Note: Docker must be installed and running before executing these commands.

docker pull minio/minio
docker run -p 9000:9000 minio/minio server /data

After running this command, you can access the Minio web interface by navigating to http://localhost:9000 in your web browser.
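Because Minio speaks the S3 API, ordinary S3 tools can talk to it. As a sketch, an s3cmd configuration pointing at this local instance could look like the following (the keys shown are placeholders; use the access and secret keys the Minio server prints at startup):

```
# ~/.s3cfg -- point s3cmd at the local Minio instance
host_base = localhost:9000
host_bucket = localhost:9000
access_key = AKIAIOSFODNN7EXAMPLE
secret_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
use_https = False
```

With this in place, commands such as `s3cmd mb s3://test` and `s3cmd ls` operate against the local Minio server instead of Amazon S3.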

Wednesday, February 17, 2016

Mount S3 Bucket on CentOS/RHEL and Ubuntu using S3FS

S3FS is a FUSE (File System in User Space) based solution for mounting Amazon S3 buckets. Once mounted, the bucket behaves like any other disk in the system: on s3fs-mounted file systems we can use basic Unix commands such as cp, mv, and ls just as on locally attached disks.
This article will help you to install S3FS and Fuse by compiling from source, and also help you to mount S3 bucket on your CentOS/RHEL and Ubuntu systems.
Step 1: Remove Existing Packages
First, check whether any existing s3fs or fuse packages are installed on your system. If they are, remove them to avoid file conflicts.
CentOS/RHEL Users:
# yum remove fuse fuse-s3fs
Ubuntu Users:
$ sudo apt-get remove fuse
Step 2: Install Required Packages
After removing the above packages, install all the dependencies needed to build fuse and s3fs. Use the following command to install the required packages.
CentOS/RHEL Users:
# yum install gcc libstdc++-devel gcc-c++ curl-devel libxml2-devel openssl-devel mailcap
Ubuntu Users:
$ sudo apt-get install build-essential libcurl4-openssl-dev libxml2-dev mime-support
Step 3: Download and Compile Latest Fuse
Download and compile the latest version of the fuse source code. This article uses fuse version 2.9.3. The following set of commands compiles fuse and loads the fuse module into the kernel.
# cd /usr/src/
# wget
# tar xzf fuse-2.9.3.tar.gz
# cd fuse-2.9.3
# ./configure --prefix=/usr/local
# make && make install
# export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
# ldconfig
# modprobe fuse
Step 4: Download and Compile Latest S3FS
Download and compile the latest version of the s3fs source code. This article uses s3fs version 1.74. After downloading, extract the archive and compile the source code.
# cd /usr/src/
# wget
# tar xzf s3fs-1.74.tar.gz
# cd s3fs-1.74
# ./configure --prefix=/usr/local
# make && make install

Step 5: Setup Access Key
In order to configure s3fs, you need the Access Key and Secret Key of your Amazon S3 account; you can obtain them from the AWS Security Credentials page. Store them in ~/.passwd-s3fs as a single line in the format AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY, then restrict the file's permissions:
# chmod 600 ~/.passwd-s3fs
Note: Replace AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with your actual key values.
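As a concrete sketch of this step, using the AWS documentation example keys as placeholders (not real credentials):

```shell
# Write ACCESS_KEY:SECRET_KEY as a single line to ~/.passwd-s3fs.
# The values below are AWS documentation placeholders, not real credentials.
printf '%s:%s\n' "AKIAIOSFODNN7EXAMPLE" \
    "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" > "$HOME/.passwd-s3fs"

# s3fs refuses credential files that other users can read.
chmod 600 "$HOME/.passwd-s3fs"

# Confirm the permissions are now 600.
stat -c '%a' "$HOME/.passwd-s3fs"
```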
Step 6: Mount S3 Bucket
Finally, mount your S3 bucket using the following set of commands. For this example we use the bucket name mydbbackup and the mount point /s3mnt.
# mkdir /tmp/cache
# mkdir /s3mnt
# chmod 777 /tmp/cache /s3mnt
# s3fs -o use_cache=/tmp/cache mydbbackup /s3mnt
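To make the mount persistent across reboots, a matching /etc/fstab entry could look like the following sketch. The _netdev option delays mounting until the network is up; allow_other (which lets non-root users access the mount) additionally requires user_allow_other to be enabled in /etc/fuse.conf:

```
# /etc/fstab -- mount the mydbbackup bucket at /s3mnt on boot
s3fs#mydbbackup /s3mnt fuse _netdev,use_cache=/tmp/cache,allow_other 0 0
```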

Install s3cmd in Linux to Manage Amazon s3 Bucket

s3cmd is a command-line utility for creating S3 buckets and for uploading, retrieving, and managing data in Amazon S3 storage. This article shows how to install s3cmd on CentOS, RHEL, and Ubuntu systems and how to manage S3 buckets from the command line in easy steps.
How to Install s3cmd Package
s3cmd is available in the default package repositories for CentOS, RHEL, and Ubuntu systems. You can install it by simply executing the following commands on your system.
# yum install s3cmd

If the above command does not work, enable the EPEL repository or create the following repo file (the s3tools.org project repository):
# vim /etc/yum.repos.d/s3tools.repo
[s3tools]
name=Tools for managing Amazon S3 - Simple Storage Service (RHEL_6)
baseurl=http://s3tools.org/repo/RHEL_6/
enabled=1
On Ubuntu/Debian:
$ sudo apt-get install s3cmd
On SUSE Linux Enterprise Server 11:
# zypper addrepo
# zypper install s3cmd
Configure s3cmd Environment
In order to configure s3cmd, you need the Access Key and Secret Key of your Amazon S3 account. Get these security keys from the AWS Security Credentials page; it will prompt you to log in to your Amazon account.
After getting the keys, use the command below to configure s3cmd.
# s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3
Access Key: xxxxxxxxxxxxxxxxxxxxxx
Secret Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: xxxxxxxxxx
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP and can't be used if you're behind a proxy
Use HTTPS protocol [No]: Yes

New settings:
  Access Key: xxxxxxxxxxxxxxxxxxxxxx
  Secret Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  Encryption password: xxxxxxxxxx
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: True
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] Y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Success. Encryption and decryption worked fine :-)

Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
Using the s3cmd Command Line
Once configuration has completed successfully, use the commands below to manage S3 buckets.
1. Creating New Bucket
To create a new bucket in Amazon S3, use the command below. It will create a bucket named apps in your S3 account.
# s3cmd mb s3://apps

Bucket 's3://apps/' created
2. Uploading file in Bucket
The command below uploads the file file.txt to the S3 bucket.
# s3cmd put file.txt s3://apps/

file.txt -> s3://apps/file.txt  [1 of 1]
 190216 of 190216   100% in    0s  1668.35 kB/s  done
3. Uploading Directory in Bucket
To upload an entire directory, use -r to upload it recursively, as below.
# s3cmd put -r backup s3://apps/

backup/file1.txt -> s3://apps/backup/file1.txt  [1 of 2]
 9984 of 9984   100% in    0s    18.78 kB/s  done
backup/file2.txt -> s3://apps/backup/file2.txt  [2 of 2]
 0 of 0     0% in    0s     0.00 B/s  done
Make sure you do not add a trailing slash to the upload directory backup (e.g. backup/); otherwise it will upload only the contents of the backup directory, not the directory itself.
# s3cmd put -r backup/ s3://apps/

backup/file1.txt -> s3://apps/file1.txt  [1 of 2]
 9984 of 9984   100% in    0s    21.78 kB/s  done
backup/file2.txt -> s3://apps/file2.txt  [2 of 2]
 0 of 0     0% in    0s     0.00 B/s  done
4. List Data of S3 Bucket
List the objects in the S3 bucket using the ls switch with s3cmd.
# s3cmd ls s3://apps/

                       DIR   s3://apps/backup/
2013-09-03 10:58    190216   s3://apps/file.txt
5. Download Files from Bucket
If you need to download files from the S3 bucket, use the following command.
# s3cmd get s3://apps/file.txt

s3://apps/file.txt -> ./file.txt  [1 of 1]
 4 of 4   100% in    0s    10.84 B/s  done
6. Remove Data of S3 Bucket
To remove files or folders from the S3 bucket, use the following commands.
Removing a file from the S3 bucket:
# s3cmd del s3://apps/file.txt

File s3://apps/file.txt deleted

Removing a directory from the S3 bucket (directories require the --recursive flag):
# s3cmd del --recursive s3://apps/backup
File s3://apps/backup/file1.txt deleted
File s3://apps/backup/file2.txt deleted
7. Remove S3 Bucket
If you no longer need the S3 bucket, you can simply delete it using the following command. Before removing a bucket, make sure it is empty.
# s3cmd rb s3://apps
ERROR: S3 error: 409 (BucketNotEmpty): The bucket you tried to delete is not empty
The above command failed because the bucket was not empty. To remove the bucket, first remove all objects inside it and then run the command again.
# s3cmd rb s3://apps
Bucket 's3://apps/' removed
8. List All S3 Bucket
Use the following command to list all S3 buckets in your AWS account.
# s3cmd ls

Saturday, November 21, 2015

How To Grant Access To Only One S3 Bucket Using AWS IAM Policy

Click on “My Account/Console” and select “Security Credentials”.

Select “Continue to Security Credentials”.

Select “Policies” on the left menu, then click “Create Policy”.

Select “Create Your Own Policy”.

Fill out the “Policy Name”, “Description” and “Policy Document” fields.
Replace “YOUR-BUCKET” in the example below with your bucket name.
Please note that the policy grants "ListAllMyBuckets" on all buckets owned by you, so that tools that list buckets will work.

NOTE: If you explicitly list out the actions for your bucket, please also include
"s3:GetBucketLocation" so that ObjectiveFS can select the right S3 endpoint to talk with.
Example policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListAllMyBuckets"],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::YOUR-BUCKET",
                "arn:aws:s3:::YOUR-BUCKET/*"
            ]
        }
    ]
}
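The IAM console rejects malformed policy documents, so it can help to sanity-check the JSON locally before pasting it in. A minimal check, assuming python3 is available ("YOUR-BUCKET" is a placeholder for your real bucket name):

```shell
# Write the example policy to policy.json and verify it parses as JSON.
cat > policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListAllMyBuckets"],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::YOUR-BUCKET",
                "arn:aws:s3:::YOUR-BUCKET/*"
            ]
        }
    ]
}
EOF

# json.tool exits non-zero on invalid JSON, so this only prints on success.
python3 -m json.tool policy.json > /dev/null && echo "policy.json is valid JSON"
```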