AWS DEC 777 WE Batch
IaaS - Infrastructure as a Service
PaaS - Platform as a Service
SaaS - Software as a Service
Ethans@02
Amazon Prime
Netflix
DS - MF ---
IAAS -- EC2 -- VM
SaaS -- Macie /
1. Global services
2. Regional services
65 Qs
130 min
No RUPAY card.
===============
-----------
Ethans@02
Region- Logical
Mumbai - ap-south-1
Softwares required
-----------
1. AWS CLI === https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html#getting-started-install-instructions
2. Putty ==== https://www.putty.org/
3. PuTTYgen --- now bundled with PuTTY
4. WinSCP
https://winscp.net/eng/download.php
1*31*24=744
2*31*24=1488
** 1 year later --
normal -
{"name":"keyur","city":"surat"}
name,city
keyur,surat
====================
IAM
S3
EC2
EBS
VPC
Monitoring
AWS CW
AWS CT
ELB
ASG
CFT
AWS BS
RDS
Lambda
SQS
SNS
Inspector
Macie
Systems Mgr
Route 53
DynamoDB
EFS
Lightsail
DevOps
Docker
Maven
Athena
EMR
====================
----
======== IAM =============
IAM is free-of-cost.
keyur@331026266777
Tags (optional)
---------
IAM tags are key-value pairs you can add to your user.
Tags can include user information,
such as an email address, or can be descriptive, such as a job title.
You can use the tags to organize, track, or control
access for this user
set of permissions
Types of policy
----------
1. inline policy -- you need to create one policy per user.
not reusable, so generally not the preferred option.
-- if the USER is deleted,
the related policy is also deleted
2. managed policy -- it is reusable.
One policy can be attached to multiple users.
Who will create/maintain policy?
1. customer managed policy --
you will create/maintain policy.
not everyone is comfortable in JSON
2. AWS managed policy --
AWS has created a lot of ready-made policies,
which can be used by USERS.
Key:value
Dictionary
IAMReadOnlyAccess
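A minimal CLI sketch of using an AWS-managed policy such as IAMReadOnlyAccess (the user name "rohan" here is just an example from the demo; the policy ARN is the standard AWS-managed one):
# attach the AWS-managed IAMReadOnlyAccess policy to a user
aws iam attach-user-policy \
    --user-name rohan \
    --policy-arn arn:aws:iam::aws:policy/IAMReadOnlyAccess
# list the managed policies attached to that user
aws iam list-attached-user-policies --user-name rohan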
User Groups
----------------
a collection of users who share the same access level
Roles
----------
-----------
Roles in AWS
a trust relationship between two services that allows one to access the other
chailtali
Access Key ID:
AKIAYFFMHOZTHDNIP55A
Secret Access Key:
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
The Admin group has been created; now add the ROHAN user to it.
aws configure
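Roughly, the prompts look like this (the values shown are placeholders; verify afterwards with STS):
aws configure
AWS Access Key ID [None]: <your access key id>
AWS Secret Access Key [None]: <your secret access key>
Default region name [None]: ap-south-1
Default output format [None]: json
# verify the configured credentials are working
aws sts get-caller-identity
aws iam list-users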
=======================
https://www.tutorialspoint.com/unix/index.htm
----Login to Linux
default login user is ec2-user
sudo su -
-> mkdir
-> cd
-> pwd
-> ls
-> ls -lrt
-> touch - create empty file
-> vi - editor for read / write files
first you need to go into insert mode
press "i"
save the content with the command below:
Esc, then Shift + : then x
or
Esc, then Shift + : then wq
-> redirect operators to create and append to files
>  -- creates a new file if it does not exist; if it exists, the content is replaced
>> -- creates a new file if it does not exist, and appends the content to an existing file
linux permission
ex :
chmod 777 <file/directory name>
-relative path
-absolute path
cut
mv
cp
grep
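A short sketch tying the commands above together (file and directory names are just examples):
mkdir demo && cd demo
pwd
touch f1.txt
echo "first line" > f1.txt      # > creates / overwrites the file
echo "second line" >> f1.txt    # >> appends to the file
ls -lrt
cp f1.txt f2.txt
mv f2.txt /tmp/
grep "second" f1.txt
chmod 777 f1.txt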
=======================================
-----------------------------
2. block store -- all types of files can be stored - txt, binary, audio, video,
image, .exe, backup etc.
These files are not available over the Internet --
just like the DISK in your VM -- EBS (Elastic Block Store) -- the OS runs and expands there,
and applications are installed there
S3 - Simple storage service -- 2006
-----------
S3 is a global service (bucket names are globally unique), though each bucket is created in a specific region.
objects -- they are just files, but in S3 we call them objects
arn:aws:s3:::777-ethans-demo-bkt
arn:aws:s3:::777-ethans-demo-bkt/1.txt
https://777-ethans-demo-bkt.s3.ap-south-1.amazonaws.com/1.txt
Bucket Versioning
-------------
What?
Why?
GIT
Rollback
file1.txt base_version
123
345
file1.txt version_one
666
file1.txt version_two
777
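A minimal CLI sketch of versioning, reusing the demo bucket name from above:
# enable versioning on the bucket
aws s3api put-bucket-versioning \
    --bucket 777-ethans-demo-bkt \
    --versioning-configuration Status=Enabled
aws s3 cp file1.txt s3://777-ethans-demo-bkt/file1.txt   # base version
aws s3 cp file1.txt s3://777-ethans-demo-bkt/file1.txt   # upload again -> new version
# list all stored versions of the object
aws s3api list-object-versions --bucket 777-ethans-demo-bkt --prefix file1.txt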
--------
--- open for extension but closed for modification ---
100mb
101mb
105mb
110mb
===========
416 mb
https://bootstrapmade.com/
--------Storage Classes---------
standard ---------
-data replicated across a minimum of 3 AZs ---- 99.999999999% durability
-millisecond response time for your data
-no limit on request frequency
standard-ia
-data replicated across a minimum of 3 AZs ---- 99.999999999% durability
-millisecond response time for your data
one zone-ia
--99.99% (data kept in a single AZ)
glacier
-
glacier deep archive
-
intelligent-tiering --- standard - 100 - 90
standard
and
standard-ia
Storage Class in S3
--------------------------
1. standard -- at least 3 AZs
6. Intelligent tiering -- it checks the access pattern of your file using AWS's own
AI algorithm.
cost saving compared to the other storage classes.
2. standard Infrequent access --
3. One Zone- IA
4. Glacier
5. Deep Glacier
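A sketch of choosing the storage class at upload time (bucket and file names are illustrative):
aws s3 cp backup.zip   s3://777-ethans-demo-bkt/backup.zip   --storage-class STANDARD_IA
aws s3 cp archive.zip  s3://777-ethans-demo-bkt/archive.zip  --storage-class GLACIER
aws s3 cp old-logs.zip s3://777-ethans-demo-bkt/old-logs.zip --storage-class DEEP_ARCHIVE
aws s3 cp app-data.zip s3://777-ethans-demo-bkt/app-data.zip --storage-class INTELLIGENT_TIERING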
S3 Pricing
-----
1. Region-
2. amount of data
3. storage class
4. access (no of reads/writes/modifications etc.) pattern
5. data transfer charges (very low within the same region,
higher charges cross-region)
6. etc
Read/ write
arn:aws:s3:::ethans-662-demo-bkt
two types of permission strategy
-----------------
1. fine-grained access control -- minute levels of access
can be controlled -
e.g. list the files but not read the contents of the files
2. coarse-grained access control - basic read/write/execute -- ACL
Ans- NO
MULTIPART UPLOAD
----------------
Multipart upload allows you to upload a single object as a set of parts.
Each part is a contiguous portion of the object's data.
You can upload these object parts independently and in any order.
If transmission of any part fails, you can retransmit that part
without affecting other parts.
After all parts of your object are uploaded,
Amazon S3 assembles these parts and creates the object.
In general, when your object size reaches 100 MB,
you should consider using multipart uploads instead
of uploading the object in a single operation.
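With the high-level CLI, aws s3 cp switches to multipart upload automatically above a threshold; the threshold and part size can be tuned (the values below are just examples):
aws configure set default.s3.multipart_threshold 100MB
aws configure set default.s3.multipart_chunksize 16MB
aws s3 cp big-backup.zip s3://777-ethans-demo-bkt/big-backup.zip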
aws s3 help
aws s3 ls
--https://saturncloud.io/blog/how-to-upload-a-file-to-amazon-s3-with-nodejs/
==================
EC2
==================
2 * 24 * 15 days = 720
3 * 3 * 30 days = 270
4 * 24 * 8 days = 768
port
airport
passport
To work with PuTTY, you cannot use the .PEM file directly;
convert it to a .PPK file using PuTTYgen.
:
;
~ -- tilde
& -- ampersand
read = 4
write = 2
execute = 1
- -- it is a file
4+2+1 = 7
-rw-r--r--. 1 root root 0 Apr 2 04:48 f1
    -   file
    rw- owner
    r-- group of owner
    r-- others
drwxr-xr-x. 2 root root 6 Apr 2 04:49 a11
    d   directory
    rwx owner
    r-x group of owner
    r-x others
777
4+2+1 = 7
400
chmod
File
10000 lines
GREP
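A quick sketch of the numeric permissions and grep on a large file (file names are examples; the key file name follows the notes):
ls -l f1                    # -rw-r--r--  ->  644
chmod 400 777-new-key.pem   # owner read-only, as required for SSH private keys
chmod 777 a11               # everyone gets read/write/execute
grep "ERROR" application.log     # print matching lines from a 10000-line file
grep -c "ERROR" application.log  # only count the matches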
https://www.tutorialspoint.com/unix/index.htm
Relative paths --
Absolute path
cd /var/www/html
vi index.html
"write your content and save it"
Esc + Shift + :x
service httpd start
service httpd status
ssh -i 777-new-key.pem ec2-user@<public-ip>
scp -i 777-new-key.pem ./a.txt ec2-user@<public-ip>:/home/ec2-user/test/
sftp -i 777-new-key.pem ec2-user@<public-ip>
========================
yum install java-11 -y
nohup java -jar webdemo.jar &
========================
https://github.com/keyur2714/kiran-spring-boot/tree/main
Public IP address
15.206.84.202
User name
Administrator
Password
mB&yi&xe9YZ.yPWFiZSt.q&3Av58FcbM
elastic ip : 13.200.96.110
now if you stop and start the EC2 instance, the IP will not change because it is an Elastic IP
after stop and start : 13.200.96.110
--13.235.75.95
@@ can I create a Static Public IP -- ELASTIC IP
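A hedged CLI sketch of allocating and attaching an Elastic IP (all ids below are placeholders):
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-038352b32680689b8 --allocation-id eipalloc-0abc1234567890def
# stop/start the instance -- the Elastic IP stays the same
aws ec2 stop-instances  --instance-ids i-038352b32680689b8
aws ec2 start-instances --instance-ids i-038352b32680689b8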
#!/bin/bash
sudo su -
yum update -y
yum install httpd -y
systemctl start httpd.service
systemctl enable httpd.service
hostname >> /var/www/html/index.html
echo "Hello Radhe Krishna..." >> /var/www/html/index.html
#!/bin/bash
sudo su -
yum update -y
yum install httpd -y
systemctl start httpd.service
systemctl enable httpd.service
aws s3 sync s3://777-ethans-webapp-bkt/ /var/www/html/
hostname >> /var/www/html/index.html
#include<math.h>
#! -- shebang
/bin/bash
ksh
csh
bash
shell
kernel
cd /var/www/html
vi index.html
======================
#!/bin/bash
sudo su -
yum update -y
yum install httpd -y
systemctl start httpd.service
systemctl enable httpd.service
aws s3 sync s3://ethans-718-web-app-bkt/ /var/www/html/
======================
Q- I have created a user data script (UDS). It had some issues and now I want to modify it and
then execute it again.
How can I do that?
Ans- A UDS runs only once, at the time of instance provisioning.
------------
================
Ans- yes
13.232.167.94 - original ip
13.200.115.188 -- Elastic IP
now stop ec2 and start again
13.200.115.188
Ans- No. You can't detach the ROOT volume from a RUNNING instance.
Additional volumes can be attached/detached while the instance
is RUNNING.
If the ROOT volume is not attached to an instance,
then it can't be STARTED.
IPv4 is a SCARCE resource. It has huge demand but no matching SUPPLY.
Can we increase the supply of IPv4?
0-255.0-255.0-255.0-255
Total number of available IPv4 addresses = 2^32 == 4.2 billion
XXX.XXX.XXX.XXX
0.0.0.0-255
0.0.1.0-255
---------------
13.234.78.170
Windows instance --
Snapshot
---------
the point-in-time backup of EBS volume
it will be stored in S3, not in your S3 bucket,
it will be in AWS's system S3 bucket
incremental backup
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html
mkdir /mydata
mkfs.xfs /dev/xvdf
xfs_growfs /dev/xvdf
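The mount step itself isn't shown above; a minimal sketch following the same device and directory (the volume id in the snapshot command is a placeholder):
mount /dev/xvdf /mydata
df -h /mydata
echo "/dev/xvdf /mydata xfs defaults,nofail 0 2" >> /etc/fstab   # keep the mount after reboot
# point-in-time snapshot of an EBS volume
aws ec2 create-snapshot --volume-id vol-0abcd1234ef567890 --description "mydata backup"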
----------------------
VPC
========================
private network in AWS
it is free of cost
Private IP -- can only be used within a network -- like an office desk extension number --
can be duplicated, i.e. a private IP can exist in one network,
and the same private IP can exist in another private network -- TCS
11.12.13000.140000 - invalid ip
-------------------
10.0.0.0 to 10.255.255.255 ( any IP starting from 10 is a pvt IP )
172.16.0.0 to 172.31.255.255 ( 172.16-31.0-255.0-255,
any IP starting from 172,
whose 2nd octet is between 16 and 31 are pvt IP)
192.168.0.0 to 192.168.255.255
( any Ip starting from 192, whose 2nd octet is 168,
is a pvt ip)
10.1.100.101
172.16-31.0-255.0-255
192.168.168.1
172.15.1.111
10.1.1.1
172.30.1.1
10.11.12.13 - private
172.156.256.356 -- invalid
172.16.17.18 -- private
192.167.168.169 -- public
192.169.17.180 -- public
10.0.0.255
10.0.1.0
---
10.0.0.0
10.255.255.255
10.0.1.0 == 10.0.1.0-255
zSFcsv3ag!xLNhxcPG&EGO*P*T8tmHx3
10.0.0.0/16
10.0.0.0
10.0.0.1
10.0.0.2
10.0.1.255
10.0.1.0
10.0.0.0/16
10.0.0.1
10.0.0.2
10.0.0.255
10.0.1.0
10.0.1.1
---
10.0.1.255
CIDR block = 10.0.0.0/28. The number of available IPs in this CIDR = 2^(32-prefix) = 2^(32-28) = 2^4 = 16
CIDR block = 172.16.0.0/16. The number of available IPs in this CIDR = 2^(32-prefix) = 2^(32-16) = 2^16 = 65536
CIDR block = 192.168.0.0/16. The number of available IPs in this CIDR = 2^(32-prefix) = 2^(32-16) = 2^16 = 65536
yf@tw)4*KIFbhD99=8*D;hUrV9N;Vqgx
30 instances -- 100
A*C5ru1%WkKlO&LJxr3@B11htxfKgEKv
Reserved Ip
--------------
in any subnet, 5 IPs are reserved by AWS; in a /24 subnet that leaves only 251 usable IPs.
172.16.1.0/24
172.16.1.0
172.16.1.1
172.16.1.2
172.16.1.3
172.16.1.255
10.0.1.0/24
10.0.1.0 -- will be reserved for the network address - sn address
10.0.1.1 -- will be reserved for an inbuilt hidden local router
which will help the local communication in VPC
10.0.1.2 -- will go for DNS/DHCP purpose
10.0.1.3 -- reserved for future use
10.0.1.255 -- for broadcasting purposes,
but broadcasting is not allowed in any cloud environment (AWS/Azure/GCP)
unicast - 1-1
multicast - 1- m - (in some group)
broadcast - 1 - n --
ws
184.72.66.90 - public-IP
10.0.1.15 - private-ip
ds
- public-IP
10.0.2.8 - private-ip
============ws=============
Private IP address
10.0.1.10
User name
Administrator
Password
S3a4FapU&8;fKbS8w$P)pBGl&ILQr-N)
184.72.66.90
============ds=============
Private IP address
10.0.2.111
User name
Administrator
Password
RbhwP@cX=.V;wSstTHQWURNfjwjzl?tW
** How to jump from one Windows EC2 instance (in Public SN) to another Windows EC2 instance (in Private SN)
** How to jump from one Linux EC2 instance (in Public SN) to another Linux EC2 instance (in Private SN)
- file or directory d
r = 4
w = 2
e = 1
rw- owner = 6
r-- group = 4
r-- others =4
0644
0400
-
r--
---
---
Linux WS
54.90.197.60 - public-IP
10.0.1.107 - private-ip
Linux DS
- no public ip
10.0.2.13
- file d - directory
rw- read / write / execute - owner - 4 2 0 = 6
rw- group of owner 4 2 0 = 6
r-- others 4 0 0 = 4
-
rw- owner
r-- owner group
r-- public/all
421
0644
0400
Private IP address
10.0.2.25
User name
Administrator
Password
FqOWPi;Oe8wTbS.D94=eo8tcCH@nJ2F8
Private IP address
10.0.1.192
User name
Administrator
Password
OEq;V!M)xSmaMN83(PEw)8uzw84Fk%nz
www.oracle.com ---
curl ifconfig.me
Solution -- ??
13.235.245.201
CcC1fvkHaHxpv?2w3yRlg*)n;r=tZiPv
10.0.2.76
4A34ntJrd0-zmQ0YS0se;e)U=74O%N1b
Solution-
VPC Peering
--------------
1. Two VPCs which are to be peered must have
non-overlapping CIDR block.
A peering connection cannot be created between
2 VPCs that have overlapping CIDRs.
Please select 2 VPCs which have distinct CIDRs.
4. There can be only one peering connection between any two VPCs
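A hedged CLI sketch of the peering workflow (all ids below are placeholders; the CIDRs mirror the non-overlapping examples further down):
# request and accept a peering connection between two VPCs with non-overlapping CIDRs
aws ec2 create-vpc-peering-connection --vpc-id vpc-11111111 --peer-vpc-id vpc-22222222
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-33333333
# add routes on both sides pointing the other VPC's CIDR at the peering connection
aws ec2 create-route --route-table-id rtb-aaaa1111 \
    --destination-cidr-block 192.168.0.0/16 --vpc-peering-connection-id pcx-33333333
aws ec2 create-route --route-table-id rtb-bbbb2222 \
    --destination-cidr-block 10.0.0.0/16 --vpc-peering-connection-id pcx-33333333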
)%i$nb=MHqzpi?4;L4=.$JH=sl@n9V*x -- 1
axqXl=fU15G6GYVQxL=$W1atLOy=4!2C -- 4
ey7E3rVcjmO)4uy0LD8ECvPBw(CHzF@z -- 2
10.0.2.249
RLjQm;@rTm7w9YRi8MeXNcVwcK&J2LAA
13.232.242.234
qkKLju0VZErP)$2mh2=AWoAnc(bUzxnc
192.168.2.9
jVH!JsSHikfyhpKC.FolrvApsBm0s$T;
Can't be peered
A - 10.0.0.0/16
B - 10.0.0.0/16
Can't be peered
A - 10.0.0.0/16
B - 10.0.1.0/24
non-overlapping
A- 10.0.0.0/16
B- 192.168.0.0/16
non-overlapping
A- 10.0.1.0/24
10.0.1.0
10.0.1.1
10.0.1.2
10.0.1.3
10.0.1.4
10.0.1.5
.
.
10.0.1.255
B- 10.0.2.0/24
10.0.2.0
10.0.2.1
10.0.2.2
10.0.2.3
10.0.2.4
.
.
10.0.2.255
VPN
-----
-----
---location hide----
surat --- vpn --- bangladesh ----
human traffic - drugs
Nord VPN -
--the number of users increased drastically
https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/mutual.html
https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/cvpn-getting-started.html
Monitoring in AWS -
Flow Logs
----------
A flow log enables you to capture information
about the IP traffic
going to and from network interfaces in your VPC.
Flow log data can be published to Amazon CloudWatch Logs
or Amazon S3. After you've created a flow log,
you can retrieve and view its data in the
chosen destination.
text file
Parquet
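A minimal sketch of creating a flow log that publishes to S3 (the VPC id and bucket name are placeholders):
aws ec2 create-flow-logs \
    --resource-type VPC \
    --resource-ids vpc-11111111 \
    --traffic-type ALL \
    --log-destination-type s3 \
    --log-destination arn:aws:s3:::777-ethans-flowlogs-bkt
aws ec2 describe-flow-logs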
=========
-------- Monitoring in AWS -------
metrics
4/5 = 1/20
5 * 20 = 100
2/2 checks passed
------
1. System Status Checks
2. Instance Status Checks
Metrics-
a plot of a parameter over time
1. metrics
2. Alarms -- different states of an alarm (OK, ALARM, INSUFFICIENT_DATA)
3. Events
4. Log groups
5. Log Insights
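A hedged CLI sketch of an alarm on EC2 CPU (the instance id comes from the Lambda demo later in these notes; the SNS topic ARN is a placeholder):
aws cloudwatch put-metric-alarm \
    --alarm-name high-cpu-alarm \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-038352b32680689b8 \
    --statistic Average \
    --period 300 \
    --evaluation-periods 1 \
    --threshold 80 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions arn:aws:sns:ap-south-1:123456789012:my-alerts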
https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs-cwl.html
CLOUDTRAIL
----------------
==========================
==========================
==========================
ELB - Elastic Load balancer
It is a ** managed service **.
No need to worry about its hardware config,
scalability, or availability.
You can't log in to it.
AWS LB types
-------------
1. classic LB
2. Application LB
3. Network LB
4. Gateway LB
ALB
--
Your load balancer routes requests to the targets
in this target group
using the protocol and port that you specify here.
It also performs health checks on the targets using these settings.
The target group you specify in this step will apply to all of
the listeners
configured on this load balancer.
You can edit or add listeners after the load balancer is created.
Sticky session
Cross zone load balancing
you can use the sticky session feature (also known as session affinity),
which enables the load balancer to bind a user's session to a specific instance.
This ensures that all requests from the user during the session are sent to the
same instance.
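A hedged sketch of enabling sticky sessions on an ALB target group (the target group ARN is a placeholder):
aws elbv2 modify-target-group-attributes \
    --target-group-arn arn:aws:elasticloadbalancing:ap-south-1:123456789012:targetgroup/my-tg/1234567890abcdef \
    --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=lb_cookie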
============================
Autoscaling
---------------------
1. horizontal --
the number of resources will increase/decrease
scale-out
scale-in
2. vertical --
the number of resources remains constant; the size of each resource changes
scale-up
scale-down
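A hedged sketch of horizontal scaling with an Auto Scaling group (launch template and subnet ids are placeholders):
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name web-asg \
    --launch-template LaunchTemplateId=lt-0abc1234567890def,Version=1 \
    --min-size 2 --max-size 5 --desired-capacity 2 \
    --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"
# scale out/in manually by changing the desired capacity
aws autoscaling set-desired-capacity --auto-scaling-group-name web-asg --desired-capacity 4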
=====================
https://www.udemy.com/course/aws-certified-solutions-architect-associate-amazon-practice-exams-saa-c03/
https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-associate-saa-c02/view/
=====================
--
-----------------------------
--
-----------------------------
SNS
-----------------------------
Amazon Simple Notification Service (Amazon SNS)
is a managed service that provides message
delivery from publishers to subscribers
(also known as producers and consumers).
Publishers communicate asynchronously with subscribers
by sending messages to a topic,
which is a logical access point and communication channel. Clients can subscribe
to the SNS topic and receive published messages
using a supported endpoint type,
such as Amazon Kinesis Data Firehose, Amazon SQS,
AWS Lambda, HTTP, email,
mobile push notifications, and mobile text messages (SMS).
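A minimal CLI sketch of the topic / subscribe / publish flow (account id, topic name and email are placeholders):
aws sns create-topic --name 777-ethans-demo-topic
aws sns subscribe \
    --topic-arn arn:aws:sns:ap-south-1:123456789012:777-ethans-demo-topic \
    --protocol email \
    --notification-endpoint someone@example.com
aws sns publish \
    --topic-arn arn:aws:sns:ap-south-1:123456789012:777-ethans-demo-topic \
    --message "Hello from SNS"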
----------s3-sns/sqs policy
https://docs.aws.amazon.com/AmazonS3/latest/userguide/ways-to-add-notification-config-to-bucket.html
===================
--
=============================
Create topic
-----------------------------
SQS:
======================
What is the use of SQS in AWS?
Amazon Simple Queue Service (Amazon SQS) offers a secure,
durable, and available hosted queue that lets you integrate
and decouple distributed software systems and components.
Amazon SQS offers common constructs such as dead-letter
queues and cost allocation tags.
send-message.json
{
    "City": {
        "DataType": "String",
        "StringValue": "Any City"
    },
    "Greeting": {
        "DataType": "Binary",
        "BinaryValue": "Hello, World!"
    },
    "Population": {
        "DataType": "Number",
        "StringValue": "1250800"
    }
}
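A minimal sketch of sending and receiving with the attributes file above (the queue URL is a placeholder):
aws sqs send-message \
    --queue-url https://sqs.ap-south-1.amazonaws.com/123456789012/777-demo-queue \
    --message-body "Information about the largest city in Any Region." \
    --message-attributes file://send-message.json
aws sqs receive-message \
    --queue-url https://sqs.ap-south-1.amazonaws.com/123456789012/777-demo-queue \
    --message-attribute-names All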
==============
------------------------------------
mainframe -
on prem ds -
vm - hypervisor - ec2 -
serverless -
Serverless
------------
is a new concept in cloud where you don't need to
create/configure/access/log in to the server.
You pay only for the compute time that you consume
— there is no charge when your code is not running.
With Lambda, you can run code for virtually any type
of application or backend service,
all with zero administration.
# Lambda function to stop the listed EC2 instances
import boto3

region = 'ap-south-1'
instances = ['i-038352b32680689b8']
ec2 = boto3.client('ec2', region_name=region)

def lambda_handler(event, context):
    ec2.stop_instances(InstanceIds=instances)
    print('stopped your instances: ' + str(instances))
# Lambda function to start the listed EC2 instances
import boto3

region = 'ap-south-1'
instances = ['i-038352b32680689b8']
ec2 = boto3.client('ec2', region_name=region)

def lambda_handler(event, context):
    ec2.start_instances(InstanceIds=instances)
    print('started your instances: ' + str(instances))
----------------------------------
# Lambda function that translates the incoming text to English using Amazon Translate
import boto3

translate_client = boto3.client('translate')

def lambda_handler(event, context):
    review_text = event['text']
    translate_response = translate_client.translate_text(
        Text=review_text,
        SourceLanguageCode='auto',
        TargetLanguageCode='en'
    )
    print(translate_response)
    return translate_response['TranslatedText']
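One way to test the function from the CLI (the function name is a placeholder; --cli-binary-format is needed with AWS CLI v2 for a raw JSON payload):
aws lambda invoke \
    --function-name my-translate-fn \
    --cli-binary-format raw-in-base64-out \
    --payload '{"text": "Bonjour tout le monde"}' \
    response.json
cat response.json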
----------------------------------
----------------------------------
https://awstip.com/trigger-a-lambda-function-with-api-gateway-5a19973cb713
https://varunmanik1.medium.com/a-simple-machine-learning-step-by-step-tutorial-with-the-help-of-amazon-translate-lambda-api-1cc200bf2212
=======================
==============
Databases :
=================================
Aurora db
MySQL
Oracle
Postgres
SqlServer-(MSSQL)
MariaDB
Synchronous
asynchronous
sync
-----
LightSail
-----
--------------------CICD---------------------
------------
DynamoDB
===============
department
deptId,deptName
11,IT
12,HR
employee - fk - department table
empId,name,city,mobileNo,email,deptId,hobby,land_line_no
1,keyur,aaa,2222,[email protected],11,null,null
2,keyur,aaa,2222,[email protected],12,null,
3,keyur,aaa,2222,[email protected],11,null,
4,keyur,aaa,2222,[email protected],12,cricket,
4,keyur,aaa,2222,[email protected],12,cricket,1234
1,keyur
2,keyur,thakor
3,keyur,7387029671,[email protected]
employee_salary
emp_salary_id,empId,salary
person
basic_info
address
contact_details
transaction
---
=============
{
"username": "keyur",
"firstName": "Keyur",
"hobbies": [
"cricket",
"music",
"travelling"
],
"lastName": "Thakor"
},
{
"username": "denish",
"city": "surat",
"contact": [
"9879534778",
"9825306383"
],
"designation": "engineer"
}
In an RDBMS we would end up adding this many columns to the user_profile table, and even that
would not be the end of it:
username,firstName,hobbies,lastName,city,designation,...
{
"uid": "1234",
"city": "pune",
"designation": "technical lead",
"name": "keyur",
"skills": [
"java",
"aws"
]
}
user_profile
uid,city,designation,name
1234,pune,technical lead,keyur
skills
uid,skill_id,skill
1234,1,java
1234,2,aws
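The same profile as a single DynamoDB item, as a hedged CLI sketch (the table name is an example and would need "uid" as its partition key):
aws dynamodb put-item \
    --table-name user_profile \
    --item '{"uid": {"S": "1234"}, "city": {"S": "pune"}, "designation": {"S": "technical lead"}, "name": {"S": "keyur"}, "skills": {"SS": ["java", "aws"]}}'
aws dynamodb get-item --table-name user_profile --key '{"uid": {"S": "1234"}}'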
AWS Inspector
---------------
CVE -- benchmarks
CIS
Amazon Macie - S3
-----------------
Masking --
HIPAA
PCI-DSS
PEP
remedy
Route 53
------------------
------------
ns-853.awsdns-42.net
ns-1248.awsdns-28.org
ns-1805.awsdns-33.co.uk
ns-257.awsdns-32.com
keyur.life
ns - name server
shared
A-record --> used to map a domain name to an IPv4 address
AAAA-record ---> maps a domain name to an IPv6 address
https://who.is/
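A hedged sketch of creating an A record for keyur.life (the hosted zone id is a placeholder; the IP reuses the Elastic IP from earlier in the notes):
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0123456789ABCDEFGHIJ \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "keyur.life",
          "Type": "A",
          "TTL": 300,
          "ResourceRecords": [{"Value": "13.200.96.110"}]
        }
      }]
    }'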
-----------
=EFS
=======
Amazon Elastic File System (Amazon EFS) provides a simple, scalable,
elastic file system for general purpose workloads for use with AWS Cloud services
and on-premises resources.
sudo su -
mkdir /myefs
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-00a0e25c4862d3573.efs.ap-south-1.amazonaws.com:/ /myefs
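To confirm the mount (the test file name is just an example; the same file should be visible from every instance that mounts this EFS):
df -h -t nfs4
echo "hello efs" > /myefs/test.txt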
----------------------
CICD---------------------
1. created an Environment on AWS Elastic Beanstalk
cd 777-CICD
-create a new file index.php and check the git status with the commands below
git status
git add .
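To complete the flow (the branch and remote names are assumptions; use whatever the repo is configured with, and the pipeline then picks up the push):
git commit -m "add index.php"
git push origin main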
===============================================
AWS Cloud Formation
===============================================
Cloud Formation
---------------------
Infrastructure-as-code
-----------------
to maintain my infra as code: using a piece of code
I can create my infra.
it will help in migrating infra from one environment
to another, e.g. from the development env to the Production env.
2 components of CFT
---------
1. template -- a piece of code written in JSON/YAML to spin
up my infra. Every stack is based on a template.
A template is a JSON or YAML file that describes
your stack's resources and properties -- the
configuration information about the AWS resources you want to
include in the stack.
2. stack -- the actual infra, or group of infra, which
will be created from the template
Parameters-- are defined in your template and allow you to input custom values
when you create or update a stack.
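A hedged CLI sketch of launching a stack from a local template (the stack name, template file and parameter names are assumptions):
aws cloudformation create-stack \
    --stack-name demo-stack \
    --template-body file://template.yaml \
    --parameters ParameterKey=InstanceType,ParameterValue=t2.micro
aws cloudformation describe-stacks --stack-name demo-stack
aws cloudformation delete-stack --stack-name demo-stack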
interpreter -- to convert source code into byte code --- .py ---> .pyc
compiler
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/sample-templates-services-us-west-2.html
** Change Set
-----------
https://docs.aws.amazon.com/codebuild/latest/userguide/cloudformation-vpc-template.html
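A hedged sketch of previewing an update with a change set before applying it (stack, change set and file names are placeholders):
aws cloudformation create-change-set \
    --stack-name demo-stack \
    --change-set-name demo-change-1 \
    --template-body file://updated-template.yaml
aws cloudformation describe-change-set --stack-name demo-stack --change-set-name demo-change-1
aws cloudformation execute-change-set  --stack-name demo-stack --change-set-name demo-change-1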
-------------------------