
Sunday, August 16, 2020

Converting Text Case in Linux: Exploring Powerful Command-Line Tools

In the realm of command-line utilities, Linux offers a plethora of versatile tools that empower users to perform a wide range of tasks efficiently. One such task involves converting the case of text within a file. Whether you're looking to transform text to lowercase or uppercase, Linux provides multiple command-line options to achieve this. In this article, we'll delve into the process of converting text case using four prominent tools: dd, awk, perl, and sed.


Converting Text to Lowercase

Using dd

The dd command, renowned for its data manipulation capabilities, can also be employed to convert text to lowercase.

$ dd if=input.txt of=output.txt conv=lcase

Leveraging awk

awk, a versatile text processing tool, offers a succinct way to convert text to lowercase.

$ awk '{ print tolower($0) }' input.txt > output.txt

The Magic of perl

Perl enthusiasts can harness the power of this scripting language to achieve case conversion.

$ perl -pe '$_ = lc($_)' input.txt > output.txt

Transforming with sed

For those who appreciate the elegance of sed, this command can seamlessly convert text to lowercase.

$ sed -e 's/\(.*\)/\L\1/' input.txt > output.txt

Converting Text to Uppercase

dd for Uppercase Conversion

Using dd to convert text to uppercase is equally achievable.

$ dd if=input.txt of=output.txt conv=ucase

awk for Uppercase Transformation

awk enthusiasts can employ its capabilities for converting text to uppercase.

$ awk '{ print toupper($0) }' input.txt > output.txt

Uppercase Conversion with perl

Perl's power shines again in transforming text to uppercase.

$ perl -pe '$_= uc($_)' input.txt > output.txt

sed for Uppercase Conversion

Converting text to uppercase using sed is both efficient and effective.

$ sed -e 's/\(.*\)/\U\1/' input.txt > output.txt
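
To quickly sanity-check any of these one-liners without creating files, you can pipe a sample string straight through them, for example:

$ printf 'Hello World\n' | awk '{ print toupper($0) }'
HELLO WORLD

$ printf 'Hello World\n' | sed -e 's/\(.*\)/\L\1/'
hello world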


Tuesday, April 14, 2020

Configure AWS Login With Azure AD Enterprise App

The idea is to enable users to sign in to AWS using their Azure AD credentials. This can be achieved by configuring single sign-on (SSO) between Azure AD and AWS: create an enterprise application in Azure AD and configure the AWS application to use Azure AD as the identity provider. Once this is set up, users can log in to AWS with their Azure AD credentials and access the AWS resources they have been authorized to use. The tutorial provided by Microsoft explains the steps involved in setting up this SSO configuration; the outline below summarizes them, and an AWS CLI sketch of the IAM steps follows the list.



  1. Azure >> Enterprise applications (configure Azure AD SSO)
    1. Deploy the Amazon Web Services (Developer) app
    2. Single sign-on >> SAML
      1. In the SAML configuration popup, set:
        1. Identifier: https://signin.aws.amazon.com/saml
        2. Reply URL: https://signin.aws.amazon.com/saml
      2. Save
    3. SAML Signing Certificate
      1. Download "Federation Metadata XML"
    4. Add the AD users to the application's Users and groups
  2. AWS >> IAM >> Identity providers
    1. Create
      1. Provider type: SAML
      2. Provider name: AZADAWS
      3. Upload the Federation Metadata XML downloaded from Azure
    2. Verify and create
  3. AWS >> IAM >> Roles (this role will appear in the Azure application)
    1. SAML 2.0 federation
      1. Choose the identity provider created earlier
      2. Allow programmatic and AWS Management Console access
      3. Choose the required permissions
      4. Create the role with an appropriate name
  4. AWS >> IAM >> Policies (this policy allows Azure AD to fetch the roles from the AWS account)
    1. Choose JSON and paste:
       {
         "Version": "2012-10-17",
         "Statement": [
           {
             "Effect": "Allow",
             "Action": ["iam:ListRoles"],
             "Resource": "*"
           }
         ]
       }
    2. Name: AzureAD_SSOUserRole_Policy
    3. Create the policy
  5. AWS >> IAM >> Users
    1. Name: AzureADRoleManager
    2. Choose Programmatic access
    3. Permissions: Attach existing policies directly
      1. Choose AzureAD_SSOUserRole_Policy
    4. Create the user
    5. Copy the access key and secret key
  6. Azure >> Enterprise applications >> choose the Amazon Web Services app deployed earlier
    1. Provisioning
      1. Set the provisioning mode to Automatic
      2. Enter the AWS access key and secret key
      3. Test the connection and save
      4. Set "Provisioning Status" to On
      5. Wait for a sync to complete
      6. Once the sync has completed, go to Users and groups
        1. Select the user and click Edit
        2. Choose the AWS role
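
For reference, the AWS-side setup in steps 2 and 3 can also be scripted with the AWS CLI. The sketch below is an addition to the original notes; it reuses the names from above (AZADAWS, the downloaded FederationMetadata.xml), while the role name and the <ACCOUNT_ID> placeholder are illustrative only.

# Create the SAML identity provider from the metadata downloaded in Azure
aws iam create-saml-provider \
    --name AZADAWS \
    --saml-metadata-document file://FederationMetadata.xml

# Trust policy for the federated role (replace <ACCOUNT_ID> with the AWS account ID)
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Federated": "arn:aws:iam::<ACCOUNT_ID>:saml-provider/AZADAWS" },
      "Action": "sts:AssumeRoleWithSAML",
      "Condition": { "StringEquals": { "SAML:aud": "https://signin.aws.amazon.com/saml" } }
    }
  ]
}
EOF

# Create the role that will show up in the Azure application
aws iam create-role \
    --role-name AzureAD-SSO-Role \
    --assume-role-policy-document file://trust-policy.json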






Saturday, February 8, 2020

Issue with Mission Control in Mac 10.15.13

If you are experiencing issues with Mission Control on Mac version 10.15.13 after an update, there is a fix that involves using the Terminal app. Simply type or copy the following command: "defaults write com.apple.dock mcx-expose-disabled -bool FALSE", then type "killall Dock" to stop the Dock, which will automatically restart. After this, the Exposé activation should take effect.
Fix
  • Go to the Terminal app
  • Type or copy: defaults write com.apple.dock mcx-expose-disabled -bool FALSE 
  • Then type: killall Dock to stop the Dock; it will restart automatically, and only then will the Exposé activation take effect.
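
As an optional check (an addition, not part of the original fix), you can read the value back before restarting the Dock:
  • defaults read com.apple.dock mcx-expose-disabled should print 0 once the key has been written.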


Wednesday, January 29, 2020

Azure AD integration to GCP Cloud for SSO

The idea is to enable SSO to GCP with an Azure AD configuration.

Make sure Cloud Identity is subscribed in the GCP account and that a super admin user exists in that account.
Also make sure the same domain is verified in both Azure and GCP.
Note: If the same domain is already verified in another G Suite or GCP account, that account should be used.

Base Document Followed



Process.
In GCP: Create one super admin in the Google environment (the super admin is only available in admin.google.com, which is available only if G Suite or Cloud Identity is registered).
In Azure: Create one application for user provisioning.
Make sure the user has been created in the GCP user portal (admin.google.com).

In Azure: Create a second app for SSO.
After configuring as per the GCP document, we will face login errors; these are listed below. To solve them, we need to add the Identifier and Reply URL.




Errors faced

Error1:
AADSTS650056: Misconfigured application. This could be due to one of the following: The client has not listed any permissions for 'AAD Graph' in the requested permissions in the client's application registration. Or, The admin has not consented in the tenant. Or, Check the application identifier in the request to ensure it matches the configured client application identifier. Please contact your admin to fix the configuration or consent on behalf of the tenant. Client app ID: 01303a13-8322-4e06-bee5-80d612907131.
Solution: In the SAML config, add Identifier (Entity ID): google.com/a/<Domain Name>

Error2:
AADSTS900561: The endpoint only accepts POST requests. Received a GET request.
Solution: In the SAML config, add Reply URL: https://google.com/a/*

Friday, December 20, 2019

Exposé/Mission Control Not Working Mac 10.15.2

The issue of Exposé/Mission Control not working in Mac 10.15.2 can be fixed by applying a defaults write command. First, open the Terminal app and type or copy the following command:

defaults write com.apple.dock mcx-expose-disabled -bool FALSE

After running this command, restart the OSX Dock by typing the following command in Terminal:

killall Dock

This will enable the Exposé/Mission Control feature and fix the issue.

Tuesday, January 15, 2019

Kubernetes Sample Commands

Below is a Kubernetes cheat sheet, which lists various useful commands that can be used with the kubectl command-line interface to manage Kubernetes clusters. These commands cover a range of tasks, such as creating and managing deployments, pods, and services, querying resource usage, deleting resources, and more. Examples include running tests using temporary pods, checking node and pod resource usage, and deleting resources by labels. The cheat sheet also provides tips on how to enable shell autocompletion for kubectl and how to open a bash terminal in a pod.


Run curl test temporarily 
kubectl run --rm mytest --image=yauritux/busybox-curl -it
Run wget test temporarily 
kubectl run --rm mytest --image=busybox -it
Run nginx deployment with 2 replicas 
kubectl run my-nginx --image=nginx --replicas=2 --port=80
List everything 
kubectl get all --all-namespaces
List pods with nodes info 
kubectl get pod -o wide
Show nodes with labels
kubectl get nodes --show-labels
Validate a yaml file with a dry run (a sample pod-dummy.yaml appears after this list)
kubectl create --dry-run --validate -f pod-dummy.yaml
Start a temporary pod for testing
kubectl run --rm -i -t --image=alpine test-$RANDOM -- sh
Run a shell command in an existing pod
kubectl exec -it mytest -- ls -l /etc/hosts
Get system conf via configmap
kubectl -n kube-system get cm kubeadm-config -o yaml
Explain a resource
kubectl explain pods, kubectl explain svc
Get all services
kubectl get service --all-namespaces
Watch pods
kubectl get pods -n wordpress --watch
Query healthcheck endpoint
curl -L http://127.0.0.1:10250/healthz
Open a bash terminal in a pod
kubectl exec -it storage sh
Check pod environment variables
kubectl exec redis-master-ft9ex env
Enable kubectl shell autocompletion
echo "source <(kubectl completion bash)" >>~/.bashrc, and reload
Get services sorted by name
kubectl get services --sort-by=.metadata.name
Get pods sorted by restart count
kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'
Get node resource usage
kubectl top node
Get pod resource usage
kubectl top pod
Get resource usage for a given pod
kubectl top pod <podname> --containers
List resource utilization for all containers
kubectl top pod --all-namespaces --containers=true
Delete pod
kubectl delete pod/<pod-name> -n <my-namespace>
Delete pod by force
kubectl delete pod/<pod-name> --grace-period=0 --force
Delete pods by labels
kubectl delete pod -l env=test
Delete deployments by labels
kubectl delete deployment -l app=wordpress
Delete all resources filtered by labels
kubectl delete pods,services -l name=myLabel
Delete resources under a namespace
kubectl -n my-ns delete po,svc --all
Delete persist volumes by labels
kubectl delete pvc -l app=wordpress
Delete statefulset only (not pods)
kubectl delete sts/<stateful_set_name> --cascade=false
List all pods
kubectl get pods
List pods for all namespace
kubectl get pods --all-namespaces
List all critical pods
kubectl get -n kube-system pods -a
List pods with more info
kubectl get pod -o wide, kubectl get pod/<pod-name> -o yaml
Get pod info
kubectl describe pod/srv-mysql-server
List all pods with labels
kubectl get pods --show-labels
List running pods
kubectl get pods --field-selector=status.phase=Running
Get Pod initContainer status
kubectl get pod --template '{{.status.initContainerStatuses}}' <pod-name>
Run a shell command in a pod in a given namespace
kubectl exec -it -n "$ns" "$podname" -- sh -c "echo $msg >> /dev/err.log"
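
The dry-run validation entry above references a pod-dummy.yaml. A minimal manifest that works with that command might look like this (the pod name, label, and image are placeholders, not from the original post):

apiVersion: v1
kind: Pod
metadata:
  name: pod-dummy
  labels:
    env: test
spec:
  containers:
  - name: dummy
    image: busybox
    command: ["sleep", "3600"]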

Friday, December 28, 2018

Docker Sample Commands

Below is a cheat sheet for using Docker, a popular containerization platform. It lists commonly used commands for working with images (pulling from and pushing to a registry, retagging, listing, deleting, flattening, and building from Dockerfiles), registries (logging in), and containers (creating them from images, stopping and killing them, removing them, attaching and detaching, making them read-only, checking their resource usage, copying files to and from them, and inspecting them). It also covers creating overlay networks, using Docker Compose to build, create, and start containers, and running a container with a mounted volume, and it ends with a Dockerfile sample.

Pull an image from a registry
docker pull alpine:3.4

Retag a local image with a new image name and tag
docker tag alpine:3.4 myrepo/myalpine:3.4

Log in to a registry (the Docker Hub by default)
docker login my.registry.com:8000
Push an image to a registry
docker push myrepo/myalpine:3.4
List all images that are locally stored with the Docker engine
docker images
Delete an image from the local image store
docker rmi alpine:3.4
Create a Docker container from an image
docker run
--rm #remove container automatically after it exits
-it #connect the container to terminal
--name web #name the container
-p 5000:80 #expose port 5000 externally and map to port 80
-v ~/dev:/code #create a host mapped volume inside the container
alpine:3.4 #the image from which the container is instantiated
/bin/sh #the command to run inside the container
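Put together on a single line, the same example reads:
docker run --rm -it --name web -p 5000:80 -v ~/dev:/code alpine:3.4 /bin/sh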
Stop a running container through SIGTERM
docker stop web
Stop a running container through SIGKILL
docker kill web
Create an overlay network and specify a subnet
docker network create --subnet 10.1.0.0/24 --gateway 10.1.0.1 -d overlay mynet
List the networks
docker network ls
List the running containers
docker ps
List all running and stopped containers
docker ps -a

Stop a container
docker stop <container-name>

Stop a container (timeout = 1 second)
docker stop -t 1 <container-name>
Delete all running and stopped containers
docker rm -f $(docker ps -aq)

Remove all stopped containers
docker rm $(docker ps -q -f "status=exited")
Create a new bash process inside the container and connect it to the terminal
docker exec -it web bash
Print the last 100 lines of a container's log
docker logs --tail 100 web

Exporting image to an external file
docker save -o <filename>.tar [username/]<image-name>[:tag]

Importing an image from an external file
docker load -i <filename>.tar

Inspecting a docker image or container
docker inspect <image-or-container-id>

Attach to a running container 
docker attach <Container-ID>

Detach from the container without killing it (turns interactive mode into daemon mode)
Type Ctrl + p , Ctrl + q

Set the container to be read-only:
docker run --read-only

Flatten an image
ID=$(docker run -d image-name /bin/bash)
docker export $ID | docker import - flat-image-name

To check the CPU, memory, and network I/O usage
docker stats <container>
Build an image from the Dockerfile in the current directory and tag the image
docker build -t myapp:1.0 .

Dockerfile sample

vi Dockerfile
=========
FROM ubuntu
MAINTAINER RR
RUN apt-get update
RUN apt-get install -y nginx
COPY index.html /usr/share/nginx/html/
ENTRYPOINT ["/usr/sbin/nginx","-g","daemon off;"]
EXPOSE 80
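
To build and run an image from this sample Dockerfile (the tag mynginx:1.0 and host port 8080 are illustrative, not from the original notes):
docker build -t mynginx:1.0 .
docker run -d --name mynginx -p 8080:80 mynginx:1.0
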
Build new images, create all containers, and start all containers (Compose). (This will not rebuild images if a Dockerfile changes.)
docker-compose up

Build, create, and start all in the background (Compose):
docker-compose up -d

Rebuild all images, create all containers, and start all containers (Compose):
docker-compose up --build

Create a new container for my_service in docker-compose.yml and run the echo command instead of the specified command:
docker-compose run my_service echo "hello"
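
For context, a minimal docker-compose.yml defining the my_service used above might look like this (the image and command are placeholders, not from the original notes):
version: "3"
services:
  my_service:
    image: alpine:3.4
    command: sleep 3600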

Run a container with a volume named my_volume mounted at /my/path in the Docker container. (The volume will be created if it doesn't already exist.) 
docker run --mount source=my_volume,target=/my/path my-image
docker run -v my_volume:/my/path my-image

Copy my-file.txt from the host current directory to the /tmp directory in my_container:
docker cp ./my-file.txt my_container:/tmp/my-file.txt

Inspect a container
docker inspect python_web | less