Serverless Stack v3.4
What is Serverless?
INTRODUCTION
What is AWS Lambda?
Set up Bootstrap
Handle 404s
Upload a file to S3
Display a note
Render the note form
Delete a note
Redirect on login
Deploy the Frontend
Create an S3 bucket
Deploy to S3
Set up SSL
Deploy updates
Update the app
Deploy again
Part II - Automation
Configure S3 in Serverless
Frontend workflow
Wrapping up
Further reading
Translations
CONCLUSION
Giving back
Changelog
Staying up to date
Extra Credit
Organizing Serverless projects
Backups in DynamoDB
So you might be a backend developer who would like to learn more about the frontend portion of building serverless apps or a frontend developer that would like to learn more about the backend; this guide should have you covered.

We are also catering this solely towards JavaScript developers for now. We might target other languages and environments in the future. But we think this is a good starting point because it can be really beneficial as a full-stack developer to use a single language (JavaScript) and environment (Node.js) to build your entire application.
On a personal note, the serverless approach has been a giant revelation for us and we wanted to create a resource where we could share what we've learned. You can read more about us here (/about.html). And check out a sample of what folks have built with Serverless Stack (/showcase.html).
We'll be using the AWS Platform to build it. We might expand further and cover a few other platforms but we figured the AWS Platform would be a good place to start.
We are going to be using the free tiers for the above services. So you should be able to sign up for them for free. This of course does not apply to purchasing a new domain to host your app. Also for AWS, you are required to put in a credit card while creating an account. So if you happen to be creating resources above and beyond what we cover in this tutorial, you might end up getting charged.

While the list above might look daunting, we are trying to ensure that upon completing the guide you'll be ready to build real-world, secure, and fully-functional web apps. And don't worry, we'll be around to help!
Requirements
You need Node v8.10+ and NPM v5.5+ (https://nodejs.org/en/). You also need to have basic knowledge of how to use the command line.
So we decided to extend the guide and add a second part to it. This is targeting folks that are intending to use this setup for their projects. It automates all the manual steps from part 1 and helps you create a production ready workflow that you can use for all your serverless projects.
Here is what we cover in the two parts.
Part I
Create the notes application and deploy it. We cover all the basics. Each service is created by hand. Here is what is covered in order.
Part II
Aimed at folks who are looking to use the Serverless Stack for their day-to-day projects. We
automate all the steps from the first part. Here is what is covered in order.
We think this will give you a good foundation on building full-stack production ready serverless applications. If there are any other concepts or technologies you'd like us to cover, feel free to let us know on our forums (https://discourse.serverless-stack.com).
1. We are charged for keeping the server up even when we are not serving out any requests.
2. We are responsible for uptime and maintenance of the server and all its resources.
3. We are also responsible for applying the appropriate security updates to the server.
4. As our usage scales we need to manage scaling up our server as well. And as a result manage scaling it down when we don't have as much usage.

For smaller companies and individual developers this can be a lot to handle. This ends up distracting from the more important job that we have: building and maintaining the actual application. At larger organizations this is handled by the infrastructure team and usually it is not the responsibility of the individual developer. However, the processes necessary to support this can end up slowing down development times, since you cannot just go ahead and build your application without working with the infrastructure team to help you get up and running. As developers we've been looking for a solution to these problems and this is where serverless comes in.
Serverless Computing

Serverless computing (or serverless for short) is an execution model where the cloud provider (AWS, Azure, or Google Cloud) is responsible for executing a piece of code by dynamically allocating the resources, and charging only for the amount of resources used to run the code. The code is typically run inside stateless containers that can be triggered by a variety of events including HTTP requests, database events, queuing services, monitoring alerts, file uploads, scheduled events (cron jobs), etc. The code that is sent to the cloud provider for execution is usually in the form of a function. Hence serverless is sometimes referred to as "Functions as a Service" or "FaaS". Following are the FaaS offerings of the major cloud providers:

AWS: AWS Lambda (https://aws.amazon.com/lambda/)
Microsoft Azure: Azure Functions (https://azure.microsoft.com/en-us/services/functions/)
Google Cloud: Cloud Functions (https://cloud.google.com/functions/)
While serverless abstracts the underlying infrastructure away from the developer, servers are still involved in executing our functions.

Since your code is going to be executed as individual functions, there are a couple of things that we need to be aware of.
Microservices

The biggest change that we are faced with while transitioning to a serverless world is that our application needs to be architected in the form of functions. You might be used to deploying your application as a single Rails or Express monolith app. But in the serverless world you are typically required to adopt a more microservice-based architecture. You can get around this by running your entire application inside a single function as a monolith and handling the routing yourself. But this isn't recommended since it is better to reduce the size of your functions. We'll talk about this below.
Stateless Functions

Your functions are typically run inside secure (almost) stateless containers. This means that you won't be able to run code in your application server that executes long after an event has completed or uses a prior execution context to serve a request. You have to effectively assume that your function is invoked in a new container every single time.

There are some subtleties to this and we will discuss them in the What is AWS Lambda (/chapters/what-is-aws-lambda.html) chapter.
Cold Starts

Since your functions are run inside a container that is brought up on demand to respond to an event, there is some latency associated with it. This is referred to as a Cold Start. Your container might be kept around for a little while after your function has completed execution. If another event is triggered during this time it responds far more quickly and this is typically known as a Warm Start.

The duration of cold starts depends on the implementation of the specific cloud provider. On AWS Lambda it can range from anywhere between a few hundred milliseconds to a few seconds. It can depend on the runtime (or language) used, the size of the function (as a package), and of course the cloud provider in question. Cold starts have drastically improved over the years as cloud providers have gotten much better at optimizing for lower latency times.
Aside from optimizing your functions, you can use simple tricks like a separate scheduled function to invoke your function every few minutes to keep it warm. Serverless Framework (https://serverless.com), which we are going to be using in this tutorial, has a few plugins to help keep your functions warm (https://github.com/FidelLimited/serverless-plugin-warmup).

Now that we have a good idea of serverless computing, let's take a deeper look at what a Lambda function is and how your code will be executed.
Lambda Specs

Let's start by quickly looking at the technical specifications of AWS Lambda. Lambda supports a number of runtimes, including the Node.js runtime that we'll be using in this guide.

Each function runs inside a container with a 64-bit Amazon Linux AMI. And the execution environment has:

- A configurable memory allocation (the CPU share scales with it)
- Ephemeral disk space, available as the /tmp directory
- A maximum execution duration of 900 seconds
- Limits on the size of the function package
You might notice that CPU is not mentioned as a part of the container specification. This is because you cannot control the CPU directly. As you increase the memory, the CPU is increased as well.

The ephemeral disk space is available in the form of the /tmp directory. You can only use this space for temporary storage since subsequent invocations will not have access to this. We'll talk a bit more on the stateless nature of the Lambda functions below.

The execution duration means that your Lambda function can run for a maximum of 900 seconds or 15 minutes. This means that Lambda isn't meant for long running processes.

The package size refers to all your code necessary to run your function. This includes any dependencies ( node_modules/ directory in case of Node.js) that your function might import. There is a limit of 250MB on the uncompressed package and a 50MB limit once it has been compressed. We'll take a look at the packaging process below.
Lambda Function

Finally, here is what a Lambda function (a Node.js version) looks like.
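A minimal sketch (the handler body here is illustrative):

exports.myHandler = function(event, context, callback) {
  // Do the work, then return the result (or an error) via the callback
  const result = { message: "Hello, world!" };
  callback(null, result);
};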
Here myHandler is the name of our Lambda function. The event object contains all the information about the event that triggered this Lambda. In the case of an HTTP request it'll be information about the specific HTTP request. The context object contains info about the runtime our Lambda function is executing in. After we do all the work inside our Lambda function, we simply call the callback function with the results (or the error) and AWS will respond to the HTTP request with it.
Packaging Functions

Lambda functions need to be packaged and sent to AWS. This is usually a process of compressing the function and all its dependencies and uploading it to an S3 bucket. And letting AWS know that you want to use this package when a specific event takes place. To help us with this process we use the Serverless Framework (https://serverless.com). We'll go over this in detail later on in this guide.
Execution Model

The container (and the resources used by it) that runs our function is managed completely by AWS. It is brought up when an event takes place and is turned off if it is not being used. If additional requests are made while the original event is being served, a new container is brought up to serve the request. This means that if we are undergoing a usage spike, the cloud provider simply creates multiple instances of the container with our function to serve those requests.

This has some interesting implications. Firstly, our functions are effectively stateless. Secondly, each request (or event) is served by a single instance of a Lambda function. This means that you are not going to be handling concurrent requests in your code. AWS brings up a container whenever there is a new request. It does make some optimizations here. It will hang on to the container for a few minutes (5-15 minutes depending on the load) so it can respond to subsequent requests without a cold start.
Stateless Functions

The above execution model makes Lambda functions effectively stateless. This means that every time your Lambda function is triggered by an event it is invoked in a completely new environment. You don't have access to the execution context of the previous event.

However, due to the optimization noted above, the actual Lambda function is invoked only once per container instantiation. Recall that our functions are run inside containers. So when a function is first invoked, all the code in our handler file gets executed and the handler function gets invoked. If the container is still available for subsequent requests, only your function will get invoked, not the code around it.

For example, the createNewDbConnection method below is called once per container instantiation and not every time the Lambda function is invoked. The myHandler function on the other hand is called on every invocation.
var dbConnection = createNewDbConnection();
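Continuing the sketch, the handler beneath that line would look something like this (the query call is a hypothetical stand-in for your DB logic):

exports.myHandler = function(event, context, callback) {
  // Runs on every invocation; reuses the connection created
  // above when the container is warm
  var result = dbConnection.query(); // hypothetical DB call
  callback(null, result);
};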
This caching effect of containers also applies to the /tmp directory that we talked about above. It is available as long as the container is being cached.

Now you can guess that this isn't a very reliable way to make our Lambda functions stateful. This is because we just don't control the underlying process by which Lambda is invoked or its containers are cached.
Pricing

Finally, Lambda functions are billed only for the time it takes to execute your function. And it is calculated from the time it begins executing till when it returns or terminates. It is rounded up to the nearest 100ms.

Note that while AWS might keep the container with your Lambda function around after it has completed; you are not going to be charged for this.

Lambda comes with a very generous free tier and it is unlikely that you will go over this while working on this guide.

The Lambda free tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month. Past this, it costs $0.20 per 1 million requests and $0.00001667 for every GB-second. The GB-seconds are based on the memory consumption of the Lambda function. For further details check out the Lambda pricing page (https://aws.amazon.com/lambda/pricing/).
In our experience, Lambda is usually the least expensive part of our infrastructure costs.

Next, let's take a deeper look into the advantages of serverless, including the total cost of running our demo app.
1. Low maintenance
2. Low cost
3. Easy to scale

The biggest benefit by far is that you only need to worry about your code and nothing else.

The low maintenance is a result of not having any servers to manage. You don't need to actively ensure that your server is running properly, or that you have the right security updates on it. You deal with your own application code and nothing else.

The main reason it's cheaper to run serverless applications is that you are effectively only paying per request. So when your application is not being used, you are not being charged for it. Let's do a quick breakdown of what it would cost for us to run our note taking application. We'll assume that we have 1000 daily active users making 20 requests per day to our API, and storing around 10MB of files on S3. Here is a very rough calculation of our costs.

Total $6.10
[1] Cognito is free for < 50K MAUs and $0.00550/MAU onwards.
[2] Lambda is free for < 1M requests and 400,000 GB-secs of compute.
[3] DynamoDB gives 25GB of free storage.
[4] S3 gives 1GB of free transfer.
So that comes out to $6.10 per month. Additionally, a .com domain would cost us $12 per year, making that the biggest up front cost for us. But just keep in mind that these are very rough estimates. Real-world usage patterns are going to be very different. However, these rates should give you a sense of how the cost of running a serverless application is calculated.

Finally, the ease of scaling is thanks in part to DynamoDB which gives us near infinite scale and Lambda that simply scales up to meet the demand. And of course our front end is a simple static single page app that is almost guaranteed to always respond instantly thanks to CloudFront.
Great! Now that you are convinced of why you should build serverless apps, let's get started.

Next, let's configure your account so it's ready to be used for the rest of our guide.
In this chapter, we are going to create a new IAM user for a couple of the AWS related tools
we are going to be using later.
Create User
First, log in to your AWS Console (https://console.aws.amazon.com) and select IAM from the list of services.
Select Users.
This account will be used by our AWS CLI (https://aws.amazon.com/cli/) and Serverless Framework (https://serverless.com). They'll be connecting to the AWS API directly and will not be using the Management Console.

Select Attach existing policies directly.

Search for AdministratorAccess and select the policy, then select Next: Review.

We can provide a more fine-grained policy here and we cover this later in the Customize the Serverless IAM Policy (/chapters/customize-the-serverless-iam-policy.html) chapter. But for now, let's continue with this.
The concept of IAM pops up very frequently when working with AWS services. So it is worth taking a better look at what IAM is and how it can help us secure our serverless setup.

AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources for your users. You use IAM to control who can use your AWS resources (authentication) and what resources they can use and in what ways (authorization).

The first thing to notice here is that IAM is a service just like all the other services that AWS has. But in some ways it helps bring them all together in a secure way. IAM is made up of a few different parts, so let's start by looking at the first and most basic one.

An IAM user consists of a name, a password to sign into the AWS Management Console, and up to two access keys that can be used with the API or CLI.

By default, users can't access anything in your account. You grant permissions to a user by creating a policy and attaching the policy to the user. You can grant one or more of these policies to restrict what the user can and cannot access. For example, here is a policy that grants the user full access to S3.
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "*"
  }
}
And here is a policy that grants more granular access, only allowing retrieval of files prefixed by the string Bobs- in the bucket called Hello-bucket.

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": ["s3:GetObject"],
    "Resource": "arn:aws:s3:::Hello-bucket/*",
    "Condition": {"StringEquals": {"s3:prefix": "Bobs-"}}
  }
}
We are using S3 resources in the above examples. But a policy looks similar for any of the AWS services. It just depends on the resource ARN for the Resource property. An ARN is an identifier for a resource in AWS and we'll look at it in more detail in the next chapter. We also add the corresponding service actions and condition context keys in the Action and Condition properties. You can find all the available AWS Service actions and condition context keys for use in IAM Policies here (https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_actionsconditions.html). Aside from attaching a policy to a user, you can attach them to a role or a group.
An IAM role is very similar to a user, in that it is an identity with permission policies that determine what the identity can and cannot do in AWS. However, a role does not have any credentials (password or access keys) associated with it. Instead of being uniquely associated with one person, a role can be taken on by anyone who needs it. In our case, the Lambda function will be assigned a role to temporarily take on the permissions.

Roles can be applied to users as well. In this case, the user is taking on the policy set for the IAM role. This is useful for cases where a user is wearing multiple "hats" in the organization. Roles make this easy since you only need to create these roles once and they can be re-used for anybody else that wants to take it on.

You can also have a role tied to the ARN of a user from a different organization. This allows the external user to assume that role as a part of your organization. This is typically used when you have a third party service that is acting on your AWS Organization. You'll be asked to create a Cross-Account IAM Role and add the external user as a Trust Relationship. The Trust Relationship is telling AWS that the specified external user can assume this role.
What is an IAM Group

An IAM group is simply a collection of IAM users. You can use groups to specify permissions for a collection of users, which can make those permissions easier to manage for those users. For example, you could have a group called Admins and give that group the types of permissions that administrators typically need. Any user in that group automatically has the permissions that are assigned to the group. If a new user joins your organization and should have administrator privileges, you can assign the appropriate permissions by adding the user to that group. Similarly, if a person changes jobs in your organization, instead of editing that user's permissions, you can remove him or her from the old groups and add him or her to the appropriate new groups.

This should give you a quick idea of IAM and some of its concepts. We will be referring to a few of these in the coming chapters. Next let's quickly look at another AWS concept; the ARN.
Amazon Resource Names (ARNs) uniquely identify AWS resources. We require an ARN when you need to specify a resource unambiguously across all of AWS, such as in IAM policies, Amazon Relational Database Service (Amazon RDS) tags, and API calls.

ARN is really just a globally unique identifier for an individual AWS resource. It takes one of the following formats.

arn:partition:service:region:account-id:resource
arn:partition:service:region:account-id:resourcetype/resource
arn:partition:service:region:account-id:resourcetype:resource
Let's look at some examples of ARNs. Note the different formats used.
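For instance (the account id here is made up):

arn:aws:s3:::my_bucket/picture.jpg
arn:aws:dynamodb:us-east-1:123456789012:table/notes
arn:aws:lambda:us-east-1:123456789012:function:notes-app-api-prod-create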
1. API Gateway

ARN is used to reference a specific resource when you orchestrate a system involving multiple AWS resources. For example, you have an API Gateway listening for RESTful APIs and invoking the corresponding Lambda function based on the API path and request method. The routing looks like the following.
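A sketch of that routing (the account id is made up; the function names follow the deploy output shown later in this guide):

GET  /notes      =>  arn:aws:lambda:us-east-1:123456789012:function:notes-app-api-prod-list
POST /notes      =>  arn:aws:lambda:us-east-1:123456789012:function:notes-app-api-prod-create
GET  /notes/{id} =>  arn:aws:lambda:us-east-1:123456789012:function:notes-app-api-prod-get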
2. IAM Policy
We had looked at this in detail in the last chapter but here is an example of a policy definition.
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": ["s3:GetObject"],
    "Resource": "arn:aws:s3:::Hello-bucket/*"
  }
}
ARN is used to define which resource (S3 bucket in this case) the access is granted for. The wildcard * character is used here to match all resources inside the Hello-bucket.

Next let's configure our AWS CLI. We'll be using the info from the IAM user account we created previously.
Now using pip (assuming you have it installed) you can install the AWS CLI (on Linux, macOS, or Unix) by running:
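$ sudo pip install awscli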
If you are having some problems installing the AWS CLI or need Windows install instructions, refer to the complete install instructions (http://docs.aws.amazon.com/cli/latest/userguide/installing.html).
Simply run the following with your Access Key ID and your Secret Access Key.
$ aws configure
You can leave the Default region name and Default output format the way they are.
About DynamoDB

Amazon DynamoDB is a fully managed NoSQL database that provides fast and predictable performance with seamless scalability. Similar to other databases, DynamoDB stores data in tables. Each table contains multiple items, and each item is composed of one or more attributes. We are going to cover some basics in the following chapters. But to get a better feel for it, here is a great guide on DynamoDB (https://www.dynamodbguide.com).
Create Table
First, log in to your AWS Console (https://console.aws.amazon.com) and select DynamoDB from the list of services.
Select Create table.
Enter the Table name and Primary key info as shown below. Just make sure that userId and
noteId are in camel case.
Each DynamoDB table has a primary key, which cannot be changed once set. The primary key uniquely identifies each item in the table, so that no two items can have the same key. DynamoDB supports two different kinds of primary keys:

Partition key
Partition key and sort key (composite)

We are going to use the composite primary key which gives us additional flexibility when querying the data. For example, if you provide only the value for userId, DynamoDB would retrieve all of the notes by that user. Or you could provide a value for userId and a value for noteId, to retrieve a particular note.
To further your understanding of how indexes work in DynamoDB, you can read more here: DynamoDB Core Components (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html)

Next scroll down and deselect Use default settings.
In this chapter, we are going to create an S3 bucket which will be used to store user uploaded
files from our notes app.
Create Bucket
First, log in to your AWS Console (https://console.aws.amazon.com) and select S3 from the list of services.
Select Create bucket.
Pick a name for the bucket and select a region. Then select Create.

Bucket names are globally unique, which means you cannot pick the same name as the one used in this tutorial.
Region is the physical geographical region where the files are stored. We will use US East
(N. Virginia) for this guide.
Make a note of the name and region as we’ll be using it later in the guide.
Step through the next steps and leave the defaults by clicking Next, and then click Create
bucket on the last step.
Enable CORS
In the notes app we'll be building, users will be uploading files to the bucket we just created. And since our app will be served through our custom domain, it'll be communicating across domains while it does the uploads. By default, S3 does not allow its resources to be accessed from a different domain. However, cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. Let's enable CORS for our S3 bucket.
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Note that you can edit this configuration to use your own domain or a list of domains when you use this in production.

Now that our S3 bucket is ready, let's get set up to handle user authentication.
Amazon Cognito User Pool makes it easy for developers to add sign-up and sign-in functionality to web and mobile applications. It serves as your own identity provider to maintain a user directory. It supports user registration and sign-in, as well as provisioning identity tokens for signed-in users.

In this chapter, we are going to create a User Pool for our notes app.
And select Email address or phone numbers and Allow email addresses. This is telling Cognito User Pool that we want our users to be able to sign up and log in with their email as their username.

Scroll down and select Next step.

Hit Review in the side panel and make sure that the Username attributes is set to email.
Generate client secret: user pool apps with a client secret are not supported by the JavaScript SDK. We need to un-select the option.

Enable sign-in API for server-based authentication: required by the AWS CLI when managing pool users via the command line interface. We will be creating a test user through the command line interface in the next chapter.
Your app client has been created. Take note of the App client id which will be required in the later chapters.

Create Domain Name

Finally, select Domain name from the left panel. Enter your unique domain name and select Save changes. In our case we are using notes-app.

Now our Cognito User Pool is ready. It will maintain a user directory for our notes app. It will also be used to authenticate access to our API. Next let's set up a test user within the pool.
Create User

First, we will use the AWS CLI to sign up a user with their email and password.
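A sketch of that command (substitute your own region and app client id; the email here is an assumption matching the test credentials used later in this guide):

$ aws cognito-idp sign-up \
  --region YOUR_COGNITO_REGION \
  --client-id YOUR_COGNITO_APP_CLIENT_ID \
  --username admin@example.com \
  --password Passw0rd!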
Now, the user is created in Cognito User Pool. However, before the user can authenticate with the User Pool, the account needs to be verified. Let's quickly verify the user using an administrator command.
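A sketch of the confirmation command (again substituting your own region and User Pool id):

$ aws cognito-idp admin-confirm-sign-up \
  --region YOUR_COGNITO_REGION \
  --user-pool-id YOUR_COGNITO_USER_POOL_ID \
  --username admin@example.com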
Now our test user is ready. Next, let’s set up the Serverless Framework to create our backend
APIs.
In this chapter, we are going to set up the Serverless Framework on our local development
environment.
Install Serverless
Install Serverless globally.
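$ npm install serverless -g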
The above command needs NPM (https://www.npmjs.com), a package manager for JavaScript. Follow this (https://docs.npmjs.com/getting-started/installing-node) if you need help installing NPM.
In your working directory, create a project using a Node.js starter. We'll go over some of the details of this starter project in the next chapter.
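Based on how the starter is referenced in the next chapter, the command looks like:

$ serverless install --url https://github.com/AnomalyInnovations/serverless-nodejs-starter --name notes-app-api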
$ cd notes-app-api
Now the directory should contain a few files including the handler.js and serverless.yml.

The handler.js file contains the actual code for the services/functions that will be deployed to AWS Lambda.

The serverless.yml file contains the configuration on what AWS services Serverless will provision and how to configure them.

We also have a tests/ directory where we can add our unit tests.
$ npm install
Next, we’ll install a couple of other packages specifically for our backend.
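In this guide those are the aws-sdk package (to talk to DynamoDB from our Lambda functions) and the uuid package (to generate unique note ids); a sketch:

$ npm install aws-sdk --save-dev
$ npm install uuid --save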
The starter project that we are using allows us to use the version of JavaScript that we’ll be
using in our frontend app later. Let’s look at exactly how it does this.
All this has been added in the previous chapter using the serverless-nodejs-starter (/chapters/serverless-nodejs-starter.html). We created this starter for a couple of reasons:

If you recall we installed this starter using the serverless install --url https://github.com/AnomalyInnovations/serverless-nodejs-starter --name my-project command. This is telling Serverless Framework to use the starter (https://github.com/AnomalyInnovations/serverless-nodejs-starter) as a template to create our project.
In this chapter, let’s quickly go over how it’s doing this so you’ll be able to make changes in the
future if you need to.
Serverless Webpack

The transpiling process of converting our ES code to Node v8.10 JavaScript is done by the serverless-bundle plugin. This plugin was added in our serverless.yml.

service: notes-app-api

plugins:
  - serverless-bundle # Package our functions with Webpack
  - serverless-offline

provider:
  name: aws
  runtime: nodejs8.10
  stage: prod
  region: us-east-1
The service option is pretty important. We are calling our service the notes-app-api. Serverless Framework creates your stack on AWS using this as the name. This means that if you change the name and deploy your project, it will create a completely new project.

By default, Serverless Framework creates one large package for all the Lambda functions in your app. Large Lambda function packages can cause longer cold starts. By setting individually: true, we are telling Serverless Framework to create a single package per Lambda function. This in combination with serverless-bundle (and Webpack) will generate optimized packages. Note that this'll slow down our builds but the performance benefit is well worth it. The setting is sketched below.
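A sketch of where that setting lives in serverless.yml (under the standard package block):

package:
  individually: true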
Create a new file called create.js in our project root with the following.
const params = {
  TableName: "notes",
  // 'Item' contains the attributes of the item to be created
  // - 'userId': user identities are federated through the
  //   Cognito Identity Pool, we will use the identity id
  //   as the user id of the authenticated user
  // - 'noteId': a unique uuid
  // - 'content': parsed from request body
  // - 'attachment': parsed from request body
  // - 'createdAt': current Unix timestamp
  Item: {
    userId: event.requestContext.identity.cognitoIdentityId,
    noteId: uuid.v1(),
    content: data.content,
    attachment: data.attachment,
    createdAt: Date.now()
  }
};
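For context, here is a rough sketch of the full create.js around those params, using the callback style that we'll refactor to async/await later in this section (it assumes the aws-sdk and uuid packages installed earlier):

import uuid from "uuid";
import AWS from "aws-sdk";

const dynamoDb = new AWS.DynamoDB.DocumentClient();

export function main(event, context, callback) {
  // Parse the HTTP request parameters from the event body
  const data = JSON.parse(event.body);

  const params = {
    TableName: "notes",
    Item: {
      userId: event.requestContext.identity.cognitoIdentityId,
      noteId: uuid.v1(),
      content: data.content,
      attachment: data.attachment,
      createdAt: Date.now()
    }
  };

  dynamoDb.put(params, (error, result) => {
    // Response headers to enable CORS
    const headers = {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Credentials": true
    };

    if (error) {
      // Return a 500 error if the DynamoDB call fails
      callback(null, {
        statusCode: 500,
        headers: headers,
        body: JSON.stringify({ status: false })
      });
      return;
    }

    // Return the newly created note with a 200 status code
    callback(null, {
      statusCode: 200,
      headers: headers,
      body: JSON.stringify(params.Item)
    });
  });
}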
There are some helpful comments in the code but we are doing a few simple things here.

The AWS JS SDK assumes the region based on the current region of the Lambda function. So if your DynamoDB table is in a different region, make sure to set it by calling AWS.config.update({ region: "my-region" }); before initializing the DynamoDB client.

Parse the input from the event.body. This represents the HTTP request parameters.

The userId is a Federated Identity id that comes in as a part of the request. This is set after our user has been authenticated via the User Pool. We are going to expand more on this in the coming chapters when we set up our Cognito Identity Pool. However, if you want to use the user's User Pool user Id; take a look at the Mapping Cognito Identity Id and User Pool Id (/chapters/mapping-cognito-identity-id-and-user-pool-id.html) chapter.

Make a call to DynamoDB to put a new object with a generated noteId and the current date as the createdAt.

Upon success, return the newly created note object with the HTTP status code 200 and response headers to enable CORS (Cross-Origin Resource Sharing).

And if the DynamoDB call fails then return an error with the HTTP status code 500.
service: notes-app-api

plugins:
  - serverless-bundle # Package our functions with Webpack
  - serverless-offline

provider:
  name: aws
  runtime: nodejs8.10
  stage: prod
  region: us-east-1

  # 'iamRoleStatements' defines the permission policy for the Lambda function.
  # In this case Lambda functions are granted with permissions to access DynamoDB.
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "arn:aws:dynamodb:us-east-1:*:*"

functions:
  # Defines an HTTP API endpoint that calls the main function in create.js
  # - path: url path is /notes
  # - method: POST request
  # - cors: enabled CORS (Cross-Origin Resource Sharing) for browser cross
  #   domain api call
  # - authorizer: authenticate using the AWS IAM role
  create:
    handler: create.main
    events:
      - http:
          path: notes
          method: post
          cors: true
          authorizer: aws_iam
Here we are adding our newly added create function to the configuration. We specify that it handles post requests at the /notes endpoint. This pattern of using a single Lambda function to respond to a single HTTP event is very much like the Microservices architecture (https://en.wikipedia.org/wiki/Microservices). We discuss this and a few other patterns in the chapter on organizing Serverless Framework projects (/chapters/organizing-serverless-projects.html). We set CORS support to true. This is because our frontend is going to be served from a different domain. As the authorizer we are going to restrict access to our API based on the user's IAM credentials. We will touch on this and how our User Pool works with this, in the Cognito Identity Pool chapter.

The iamRoleStatements section is telling AWS which resources our Lambda functions have access to. In this case we are saying that our Lambda functions can carry out the above listed actions on DynamoDB. We specify DynamoDB using arn:aws:dynamodb:us-east-1:*:*. This is roughly pointing to every DynamoDB table in the us-east-1 region. We can be more specific here by specifying the table name but we'll leave this as an exercise for the reader. Just make sure to use the region that the DynamoDB table was created in, as this can be a common source of issues later on. For us the region is us-east-1.
Test

Now we are ready to test our new API. To be able to test it locally we are going to mock the input parameters.

$ mkdir mocks

Create a mocks/create-event.json file and add the following.
{
  "body": "{\"content\":\"hello world\",\"attachment\":\"hello.jpg\"}",
  "requestContext": {
    "identity": {
      "cognitoIdentityId": "USER-SUB-1234"
    }
  }
}
You might have noticed that the body and requestContext fields are the ones we used in our create function. In this case the cognitoIdentityId field is just a string we are going to use as our userId. We can use any string here; just make sure to use the same one when we test our other functions.

And to invoke our function we run the following in the root directory.
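$ serverless invoke local --function create --path mocks/create-event.json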
If you have multiple profiles for your AWS SDK credentials, you will need to explicitly pick one. Use the following command instead:
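A sketch, picking the profile via the AWS_PROFILE environment variable:

$ AWS_PROFILE=myProfile serverless invoke local --function create --path mocks/create-event.json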
Where myProfile is the name of the AWS profile you want to use. If you need more info on how to work with AWS profiles in Serverless, refer to our Configure multiple AWS profiles (/chapters/configure-multiple-aws-profiles.html) chapter.
{
  statusCode: 200,
  headers: {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Credentials': true
  },
  body: '{"userId":"USER-SUB-1234","noteId":"578eb840-f70f-11e6-9d1a-1359b3b22944","content":"hello world","attachment":"hello.jpg","createdAt":1487800950620}'
}
Make a note of the noteId in the response. We are going to use this newly created note in
the next chapter.
$ mkdir libs
$ cd libs

Create a libs/response-lib.js file and add the following. This will manage building the response objects for both success and failure cases with the proper HTTP status code and headers.
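A sketch of that file, consistent with the response shape shown in the test output above:

export function success(body) {
  return buildResponse(200, body);
}

export function failure(body) {
  return buildResponse(500, body);
}

function buildResponse(statusCode, body) {
  return {
    statusCode: statusCode,
    headers: {
      // Enable CORS for requests from our frontend
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Credentials": true
    },
    body: JSON.stringify(body)
  };
}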
Next, create a libs/dynamodb-lib.js file. Here we are using the promise form of the DynamoDB methods. Promises are a method for managing asynchronous code that serves as an alternative to the standard callback function syntax. It will make our code a lot easier to read.
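A sketch of that helper, exposing the call(action, params) function used below:

import AWS from "aws-sdk";

export function call(action, params) {
  const dynamoDb = new AWS.DynamoDB.DocumentClient();

  // Invoke the given DocumentClient method (get, put, query,
  // update, delete) and return its promise form
  return dynamoDb[action](params).promise();
}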
Now, we'll go back to our create.js and use the helper functions we created. Replace our create.js with the following.
  try {
    await dynamoDbLib.call("put", params);
    return success(params.Item);
  } catch (e) {
    return failure({ status: false });
  }
}
We are also using the async/await pattern here to refactor our Lambda function. This allows us to return once we are done processing; instead of using the callback function.
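Putting the pieces together, the refactored create.js would look roughly like this sketch (the import paths assume the libs/ files created above):

import uuid from "uuid";
import * as dynamoDbLib from "./libs/dynamodb-lib";
import { success, failure } from "./libs/response-lib";

export async function main(event, context) {
  // Parse the HTTP request parameters
  const data = JSON.parse(event.body);

  const params = {
    TableName: "notes",
    Item: {
      userId: event.requestContext.identity.cognitoIdentityId,
      noteId: uuid.v1(),
      content: data.content,
      attachment: data.attachment,
      createdAt: Date.now()
    }
  };

  try {
    await dynamoDbLib.call("put", params);
    return success(params.Item);
  } catch (e) {
    return failure({ status: false });
  }
}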
Next, we are going to write the API to get a note given its id.
Common Issues

If you see a statusCode: 500 response when you invoke your function, here is how to debug it. The error is generated by our code in the catch block. Adding a console.log like so, should give you a clue about what the issue is.

catch(e) {
  console.log(e);
  return failure({ status: false });
}
  try {
    const result = await dynamoDbLib.call("get", params);
    if (result.Item) {
      // Return the retrieved item
      return success(result.Item);
    } else {
      return failure({ status: false, error: "Item not found." });
    }
  } catch (e) {
    return failure({ status: false });
  }
}
This follows exactly the same structure as our previous create.js function. The major difference here is that we are doing a dynamoDbLib.call('get', params) to get a note object given the noteId and userId that are passed in through the request.
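The params for this call would look something like the following sketch (mirroring the mock event below, where the note id comes in as a path parameter):

const params = {
  TableName: "notes",
  // 'Key' defines the partition key and sort key of the item
  // to be retrieved:
  // - 'userId': the federated identity id of the caller
  // - 'noteId': the note id from the request path
  Key: {
    userId: event.requestContext.identity.cognitoIdentityId,
    noteId: event.pathParameters.id
  }
};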
  get:
    # Defines an HTTP API endpoint that calls the main function in get.js
    # - path: url path is /notes/{id}
    # - method: GET request
    handler: get.main
    events:
      - http:
          path: notes/{id}
          method: get
          cors: true
          authorizer: aws_iam
Make sure that this block is indented exactly the same way as the preceding create block.

This defines our get note API. It adds a GET request handler with the endpoint /notes/{id}.

Test

To test our get note API we need to mock passing in the noteId parameter. We are going to use the noteId of the note we created in the previous chapter and add in a pathParameters block to our mock. So it should look similar to the one below. Replace the value of id with the id you received when you invoked the previous create.js function.
Create a mocks/get-event.json file and add the following.
{
  "pathParameters": {
    "id": "578eb840-f70f-11e6-9d1a-1359b3b22944"
  },
  "requestContext": {
    "identity": {
      "cognitoIdentityId": "USER-SUB-1234"
    }
  }
}
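Invoke the function from the project root, just like before; the response should look similar to the output below.

$ serverless invoke local --function get --path mocks/get-event.json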
{
  statusCode: 200,
  headers: {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Credentials': true
  },
  body: '{"attachment":"hello.jpg","content":"hello world","createdAt":1487800950620,"noteId":"578eb840-f70f-11e6-9d1a-1359b3b22944","userId":"USER-SUB-1234"}'
}
Next, let’s create an API to list all the notes a user has.
  try {
    const result = await dynamoDbLib.call("query", params);
    // Return the matching list of items in response body
    return success(result.Items);
  } catch (e) {
    return failure({ status: false });
  }
}
This is pretty much the same as our get.js except we only pass in the userId in the DynamoDB query call.
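A sketch of the query params (KeyConditionExpression restricts the query to the partition key):

const params = {
  TableName: "notes",
  // 'KeyConditionExpression' defines the condition for the query:
  // only return items whose 'userId' (partition key) matches the
  // identity of the caller
  KeyConditionExpression: "userId = :userId",
  ExpressionAttributeValues: {
    ":userId": event.requestContext.identity.cognitoIdentityId
  }
};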
  list:
    # Defines an HTTP API endpoint that calls the main function in list.js
    # - path: url path is /notes
    # - method: GET request
    handler: list.main
    events:
      - http:
          path: notes
          method: get
          cors: true
          authorizer: aws_iam
Test
Create a mocks/list-event.json file and add the following.
{
  "requestContext": {
    "identity": {
      "cognitoIdentityId": "USER-SUB-1234"
    }
  }
}
And invoke our function from the root directory of the project.

$ serverless invoke local --function list --path mocks/list-event.json
{
  statusCode: 200,
  headers: {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Credentials': true
  },
  body: '[{"attachment":"hello.jpg","content":"hello world","createdAt":1487800950620,"noteId":"578eb840-f70f-11e6-9d1a-1359b3b22944","userId":"USER-SUB-1234"}]'
}
Note that this API returns an array of note objects as opposed to the get.js function that returns just a single note object.
  try {
    await dynamoDbLib.call("update", params);
    return success({ status: true });
  } catch (e) {
    return failure({ status: false });
  }
}
This should look similar to the create.js function. Here we make an update DynamoDB call with the new content and attachment values in the params, as sketched below.
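A sketch of those params, assuming an UpdateExpression over the two fields:

const data = JSON.parse(event.body);
const params = {
  TableName: "notes",
  // 'Key' defines the partition key and sort key of the item to be updated
  Key: {
    userId: event.requestContext.identity.cognitoIdentityId,
    noteId: event.pathParameters.id
  },
  // 'UpdateExpression' defines the attributes to be updated
  // 'ExpressionAttributeValues' defines the values in the update expression
  UpdateExpression: "SET content = :content, attachment = :attachment",
  ExpressionAttributeValues: {
    ":attachment": data.attachment || null,
    ":content": data.content || null
  }
};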
  update:
    # Defines an HTTP API endpoint that calls the main function in update.js
    # - path: url path is /notes/{id}
    # - method: PUT request
    handler: update.main
    events:
      - http:
          path: notes/{id}
          method: put
          cors: true
          authorizer: aws_iam
Here we are adding a handler for the PUT request to the /notes/{id} endpoint.
Test
Create a mocks/update-event.json file and add the following.
Also, don’t forget to use the noteId of the note we have been using in place of the id in
the pathParameters block.
{
  "body": "{\"content\":\"new world\",\"attachment\":\"new.jpg\"}",
  "pathParameters": {
    "id": "578eb840-f70f-11e6-9d1a-1359b3b22944"
  },
  "requestContext": {
    "identity": {
      "cognitoIdentityId": "USER-SUB-1234"
    }
  }
}
And we invoke our newly created function from the root directory.
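$ serverless invoke local --function update --path mocks/update-event.json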
{
  statusCode: 200,
  headers: {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Credentials': true
  },
  body: '{"status":true}'
}
Next we are going to add an API to delete a note given its id.
  try {
    await dynamoDbLib.call("delete", params);
    return success({ status: true });
  } catch (e) {
    return failure({ status: false });
  }
}
This makes a DynamoDB delete call with the userId and noteId keys to delete the note; a sketch of those params follows.
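const params = {
  TableName: "notes",
  // 'Key' defines the partition key and sort key of the item to be deleted
  Key: {
    userId: event.requestContext.identity.cognitoIdentityId,
    noteId: event.pathParameters.id
  }
};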
Configure the API Endpoint
Open the serverless.yml file and append the following to it.
  delete:
    # Defines an HTTP API endpoint that calls the main function in delete.js
    # - path: url path is /notes/{id}
    # - method: DELETE request
    handler: delete.main
    events:
      - http:
          path: notes/{id}
          method: delete
          cors: true
          authorizer: aws_iam
Test
Create a mocks/delete-event.json file and add the following.
Just like before we’ll use the noteId of our note in place of the id in the
pathParameters block.
{
  "pathParameters": {
    "id": "578eb840-f70f-11e6-9d1a-1359b3b22944"
  },
  "requestContext": {
    "identity": {
      "cognitoIdentityId": "USER-SUB-1234"
    }
  }
}
Invoke our newly created function from the root directory.
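$ serverless invoke local --function delete --path mocks/delete-event.json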
{
  statusCode: 200,
  headers: {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Credentials': true
  },
  body: '{"status":true}'
}
Now that our APIs are complete, we are almost ready to deploy them.
Consequently, debugging such errors can be really hard. Our client won't be able to see the error message and instead will be presented with something like this:

These CORS related errors are one of the most common Serverless API errors. In this chapter, we are going to configure API Gateway to set the CORS headers in case there is an HTTP error. We won't be able to test this right away, but it will really help when we work on our frontend client.
Create a Resource

To configure API Gateway errors we are going to add a few things to our serverless.yml. By default, Serverless Framework (https://serverless.com) supports CloudFormation (https://aws.amazon.com/cloudformation/) to help us configure our API Gateway instance through code.

Let's create a directory to add our resources. We'll be adding to this later in the guide.

$ mkdir resources/
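A sketch of the resource file (a name like resources/api-gateway-errors.yml, referenced from serverless.yml, is an assumption here):

Resources:
  GatewayResponseDefault4XX:
    Type: 'AWS::ApiGateway::GatewayResponse'
    Properties:
      ResponseParameters:
        gatewayresponse.header.Access-Control-Allow-Origin: "'*'"
        gatewayresponse.header.Access-Control-Allow-Headers: "'*'"
      ResponseType: DEFAULT_4XX
      RestApiId:
        Ref: 'ApiGatewayRestApi'
  GatewayResponseDefault5XX:
    Type: 'AWS::ApiGateway::GatewayResponse'
    Properties:
      ResponseParameters:
        gatewayresponse.header.Access-Control-Allow-Origin: "'*'"
        gatewayresponse.header.Access-Control-Allow-Headers: "'*'"
      ResponseType: DEFAULT_5XX
      RestApiId:
        Ref: 'ApiGatewayRestApi'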
The above might look a little intimidating. It's a CloudFormation resource and its syntax tends to be fairly verbose. But the details here aren't too important. We are adding the CORS headers to the ApiGatewayRestApi resource in our app. The GatewayResponseDefault4XX is for 4xx errors, while GatewayResponseDefault5XX is for 5xx errors.
$ serverless deploy
If you have multiple profiles for your AWS SDK credentials, you will need to explicitly pick one. Use the following command instead:
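A sketch, using the AWS_PROFILE environment variable:

$ AWS_PROFILE=myProfile serverless deploy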
Where myProfile is the name of the AWS profile you want to use. If you need more info on how to work with AWS profiles in Serverless, refer to our Configure multiple AWS profiles (/chapters/configure-multiple-aws-profiles.html) chapter.
Near the bottom of the output for this command, you will find the Service Information.
Service Information
service: notes-app-api
stage: prod
region: us-east-1
api keys:
  None
endpoints:
  POST - https://ly55wbovq4.execute-api.us-east-1.amazonaws.com/prod/notes
  GET - https://ly55wbovq4.execute-api.us-east-1.amazonaws.com/prod/notes/{id}
  GET - https://ly55wbovq4.execute-api.us-east-1.amazonaws.com/prod/notes
  PUT - https://ly55wbovq4.execute-api.us-east-1.amazonaws.com/prod/notes/{id}
  DELETE - https://ly55wbovq4.execute-api.us-east-1.amazonaws.com/prod/notes/{id}
functions:
  notes-app-api-prod-create
  notes-app-api-prod-get
  notes-app-api-prod-list
  notes-app-api-prod-update
  notes-app-api-prod-delete
This has a list of the API endpoints that were created. Make a note of these endpoints as we are going to use them later while creating our frontend. Also make a note of the region and the id in these endpoints, we are going to use them in the coming chapters. In our case, us-east-1 is our API Gateway Region and ly55wbovq4 is our API Gateway ID.

If you are running into some issues while deploying your app, we have a compilation of some of the most common Serverless errors (https://seed.run/docs/serverless-errors/) over on Seed (https://seed.run).
For example, to deploy the list function again, we can run the following.
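$ serverless deploy function -f list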
Now before we test our APIs we have one final thing to set up. We need to ensure that our users can securely access the AWS resources we have created so far. Let's look at setting up a Cognito Identity Pool.
Amazon Cognito Federated Identities enables developers to create unique identities for your users and authenticate them with federated identity providers. With a federated identity, you can obtain temporary, limited-privilege AWS credentials to securely access other AWS services such as Amazon DynamoDB, Amazon S3, and Amazon API Gateway.

In this chapter, we are going to create a federated Cognito Identity Pool. We will be using our User Pool as the identity provider. We could also use Facebook, Google, or our own custom identity provider. Once a user is authenticated via our User Pool, the Identity Pool will attach an IAM Role to the user. We will define a policy for this IAM Role to grant access to the S3 bucket and our API. This is the Amazon way of securing your resources.
Create Pool

From your AWS Console (https://console.aws.amazon.com), select Cognito from the list of services.

Select Manage Federated Identities.

Enter an Identity pool name. If you have any existing Identity Pools, you'll need to click the Create new identity pool button.

Select Authentication providers. Under the Cognito tab, enter the User Pool ID and App Client ID of the User Pool created in the Create a Cognito user pool (/chapters/create-a-cognito-user-pool.html) chapter. Select Create Pool.
Now we need to specify what AWS resources are accessible for users with temporary credentials obtained from the Cognito Identity Pool.

Select View Details. Two Role Summary sections are expanded. The top section summarizes the permission policy for authenticated users, and the bottom section summarizes that for unauthenticated users.

Select View Policy Document in the top section. Then select Edit.

It will warn you to read the documentation. Select Ok to edit.
Add the following policy into the editor. Replace YOUR_S3_UPLOADS_BUCKET_NAME with the bucket name from the Create an S3 bucket for file uploads (/chapters/create-an-s3-bucket-for-file-uploads.html) chapter. And replace the YOUR_API_GATEWAY_REGION and YOUR_API_GATEWAY_ID with the ones that you get after you deployed your API in the last chapter.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "mobileanalytics:PutEvents",
        "cognito-sync:*",
        "cognito-identity:*"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_S3_UPLOADS_BUCKET_NAME/private/${cognito-identity.amazonaws.com:sub}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "execute-api:Invoke"
      ],
      "Resource": [
        "arn:aws:execute-api:YOUR_API_GATEWAY_REGION:*:YOUR_API_GATEWAY_ID/*/*/*"
      ]
    }
  ]
}
A quick note on the block that relates to the S3 Bucket. In the above policy we are granting our logged in users access to the path private/${cognito-identity.amazonaws.com:sub}/. Where cognito-identity.amazonaws.com:sub is the authenticated user's federated identity ID (their user id). So a user has access to only their folder within the bucket. This is how we are securing the uploads for each user.

So in summary we are telling AWS that an authenticated user has access to two resources.

1. Files in the S3 bucket that are inside a folder with their federated identity id as the name of the folder.
2. And, the APIs we deployed using API Gateway.

One other thing to note is that the federated identity id is a UUID that is assigned by our Identity Pool. This is the id ( event.requestContext.identity.cognitoIdentityId ) that we were using as our user id back when we were creating our APIs.
Select Allow.

Our Cognito Identity Pool should now be created. Let's find out the Identity Pool ID.

Select Dashboard from the left panel, then select Edit identity pool.

Take a note of the Identity pool ID which will be required in the later chapters.

Now before we test our serverless API let's take a quick look at the Cognito User Pool and Cognito Identity Pool and make sure we've got a good idea of the two concepts and the differences between them.
Amazon Cognito User Pool makes it easy for developers to add sign-up and sign-in functionality to web and mobile applications. It serves as your own identity provider to maintain a user directory. It supports user registration and sign-in, as well as provisioning identity tokens for signed-in users.

Amazon Cognito Federated Identities enables developers to create unique identities for your users and authenticate them with federated identity providers. With a federated identity, you can obtain temporary, limited-privilege AWS credentials to securely access other AWS services such as Amazon DynamoDB, Amazon S3, and Amazon API Gateway.

Unfortunately they are both a bit vague and confusingly similar. Here is a more practical description of what they are.
User Pool

Say you were creating a new web or mobile app and you were thinking about how to handle user registration, authentication, and account recovery. This is where Cognito User Pools would come in. Cognito User Pool handles all of this and as a developer you just need to use the SDK to retrieve user related information.

Identity Pool

Cognito Identity Pool (or Cognito Federated Identities) on the other hand is a way to authorize your users to use the various AWS services. Say you wanted to allow a user to have access to your S3 bucket so that they could upload a file; you could specify that while creating an Identity Pool. And to create these levels of access, the Identity Pool has its own concept of an identity (or user). The source of these identities (or users) could be a Cognito User Pool or even Facebook or Google.

Notice how we could use the User Pool, social networks, or even our own custom authentication system as the identity provider for the Cognito Identity Pool. The Cognito Identity Pool simply takes all your identity providers and puts them together (federates them). And with all of this it can now give your users secure access to your AWS services, regardless of where they come from.

So in summary; the Cognito User Pool stores all your users which then plugs into your Cognito Identity Pool which can give your users access to your AWS services.

Now that we have a good understanding of how our users will be handled, let's finish up our backend by testing our APIs.
To be able to hit our API endpoints securely, we need to follow these steps:

1. Authenticate against our User Pool and acquire a user token.
2. With the user token, obtain temporary IAM credentials from our Identity Pool.
3. Use the IAM credentials to sign our API request with Signature Version 4.
These steps can be a bit tricky to do by hand. So we created a simple tool called AWS API Gateway Test CLI (https://github.com/AnomalyInnovations/aws-api-gateway-cli-test).

$ npx aws-api-gateway-cli-test

The npx command is just a convenient way of running an NPM module without installing it globally.
We need to pass in quite a bit of our info to complete the above steps.

Use the username and password of the user created in the Create a Cognito test user (/chapters/create-a-cognito-test-user.html) chapter.

Replace YOUR_COGNITO_USER_POOL_ID, YOUR_COGNITO_APP_CLIENT_ID, and YOUR_COGNITO_REGION with the values from the Create a Cognito user pool (/chapters/create-a-cognito-user-pool.html) chapter. In our case the region is us-east-1.

Replace YOUR_IDENTITY_POOL_ID with the one from the Create a Cognito identity pool (/chapters/create-a-cognito-identity-pool.html) chapter.

Use the YOUR_API_GATEWAY_URL and YOUR_API_GATEWAY_REGION with the ones from the Deploy the APIs (/chapters/deploy-the-apis.html) chapter. In our case the URL is https://ly55wbovq4.execute-api.us-east-1.amazonaws.com/prod and the region is us-east-1.
$ npx aws-api-gateway-cli-test \
--username='admin@example.com' \
--password='Passw0rd!' \
--user-pool-id='YOUR_COGNITO_USER_POOL_ID' \
--app-client-id='YOUR_COGNITO_APP_CLIENT_ID' \
--cognito-region='YOUR_COGNITO_REGION' \
--identity-pool-id='YOUR_IDENTITY_POOL_ID' \
--invoke-url='YOUR_API_GATEWAY_URL' \
--api-gateway-region='YOUR_API_GATEWAY_REGION' \
--path-template='/notes' \
--method='POST' \
--body='{"content":"hello world","attachment":"hello.jpg"}'
While this might look intimidating, just keep in mind that behind the scenes all we are doing is generating some security headers before making a basic HTTP request. You'll see more of this process when we connect our React.js app to our API backend.

If you are on Windows, use the command below. The space between each option is very important.
And that's it for the backend! Next we are going to move on to creating the frontend of our app.
Common Issues

This is the most common issue we come across and it is a bit cryptic and can be hard to debug. Here are a few things to check before you start debugging:
There are no trailing slashes for YOUR_API_GATEWAY_URL. In our case, the URL is https://ly55wbovq4.execute-api.us-east-1.amazonaws.com/prod. Notice that it does not end with a /.

If you're on Windows and are using Git Bash, try adding a trailing slash to YOUR_API_GATEWAY_URL while removing the leading slash from --path-template. In our case, it would result in --invoke-url https://ly55wbovq4.execute-api.us-east-1.amazonaws.com/prod/ --path-template notes. You can follow the discussion on this here (https://github.com/AnomalyInnovations/serverless-stack-com/issues/112#issuecomment-345996566).
There is a good chance that this error is happening even before our Lambda functions are invoked. So we can start by making sure our IAM Roles are configured properly for our Identity Pool. Follow the steps as detailed in our Debugging Serverless API Issues (/chapters/debugging-serverless-api-issues.html#missing-iam-policy) chapter to ensure that your IAM Roles have the right set of permissions.

Finally, make sure to look at the comment thread below. We've helped quite a few people with similar issues and it's very likely that somebody has run into a similar issue as you.
If instead your command fails with the {status: false} response; we can do a few things to debug this. This response is generated by our Lambda functions when there is an error. Add a console.log like so in your handler function.

catch(e) {
  console.log(e);
  callback(null, failure({ status: false }));
}
And deploy it using serverless deploy function -f create. But we can't see this output when we make an HTTP request to it, since the console logs are not sent in our HTTP responses. We need to check the logs to see this. We have a detailed chapter (/chapters/api-gateway-and-lambda-logs.html#viewing-lambda-cloudwatch-logs) on working with API Gateway and Lambda logs and you can read about how to check your debug messages here (/chapters/api-gateway-and-lambda-logs.html#viewing-lambda-cloudwatch-logs).
Move out of the directory that we were working in for the backend.
$ cd ../
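We use Create React App for this; a sketch of the command (the project name matches the directory we switch into below):

$ npx create-react-app notes-app-client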
This should take a second to run, and it will create your new project and your new working
directory.
Now let’s go into our working directory and run our project.
$ cd notes-app-client
$ npm start
Create React App comes pre-loaded with a pretty convenient yet minimal development environment. It includes live reloading, a testing framework, ES6 support, and much more (https://github.com/facebookincubator/create-react-app#why-use-this).
Next, we are going to create our app icon and update the favicons.
For our example, we are going to start with a simple image and generate the various versions
from it.
To ensure that our icon works for most of our targeted platforms we'll use a service called the Favicon Generator (http://realfavicongenerator.net).
Click Favicon package to download the generated favicons. And copy all the files
over to your public/ directory.
Then replace the contents of public/manifest.json with the following:
{
  "short_name": "Scratch",
  "name": "Scratch Note Taking App",
  "icons": [
    {
      "src": "android-chrome-192x192.png",
      "sizes": "192x192",
      "type": "image/png"
    },
    {
      "src": "android-chrome-256x256.png",
      "sizes": "256x256",
      "type": "image/png"
    }
  ],
  "start_url": ".",
  "display": "standalone",
  "theme_color": "#ffffff",
  "background_color": "#ffffff"
}
To include a file from the public/ directory in your HTML, Create React App needs the
%PUBLIC_URL% prefix.
And remove the following lines that reference the original favicon and theme color.
Finally head over to your browser and try the /favicon-32x32.png path to ensure that the
files were added correctly.
Next we are going to look into setting up custom fonts in our app.
This also gives us a chance to explore the structure of our newly created React.js app.
Let’s first include them in the HTML. Our React.js app is using a single HTML file.
Go ahead and edit public/index.html and add the following line in the
<head> section of the HTML to include the two typefaces.
Here we are referencing all the 5 different weights (300, 400, 600, 700, and 800) of the Open
Sans typeface.
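As a sketch, assuming both typefaces are loaded from Google Fonts (with PT Serif as the serif typeface), the line looks something like this:

<link rel="stylesheet" type="text/css" href="https://fonts.googleapis.com/css?family=PT+Serif|Open+Sans:300,400,600,700,800" />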
Let’s change the current font in src/index.css for the body tag to the
following.
body {
  margin: 0;
  padding: 0;
  font-family: "Open Sans", sans-serif;
  font-size: 16px;
  color: #333;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
}
And let's change the fonts for the header tags to our new serif font by adding this
block to the CSS file.
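Assuming PT Serif is the serif typeface we loaded above, the block looks something like:

h1, h2, h3, h4, h5, h6 {
  font-family: "PT Serif", serif;
}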
Now if you just flip over to your browser with our new app, you should see the new fonts
update automatically, thanks to the live reloading.
We’ll stay on the theme of adding styles and set up our project with Bootstrap to ensure that
we have a consistent UI Kit to work with while building our app.
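Run the following in your working directory (assuming the Bootstrap 3 compatible version of React-Bootstrap that this guide targets):

$ npm install react-bootstrap --save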
This installs the NPM package and adds the dependency to your package.json .
<link
  rel="stylesheet"
  href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css"
/>
We’ll also tweak the styles of the form fields so that the mobile browser does not zoom in on
them on focus. We just need them to have a minimum font size of 16px to prevent the
zoom.
select.form-control,
textarea.form-control,
input.form-control {
  font-size: 16px;
}

input[type=file] {
  width: 100%;
}
We are also setting the width of the input type file to prevent the page on mobile from
overflowing and adding a scrollbar.
Now if you head over to your browser, you might notice that the styles have shifted a bit. This
is because Bootstrap includes Normalize.css (http://necolas.github.io/normalize.css/) to have
more consistent styles across browsers.
Next, we are going to create a few routes for our application and set up the React Router.
Let's start by installing React Router. We are going to be using the React Router v4, the
newest version of React Router. React Router v4 can be used on the web and in native. So
let's install the one for the web.
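That is the react-router-dom package; the install command looks like:

$ npm install react-router-dom --save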
This installs the NPM package and adds the dependency to your package.json .
With this:
ReactDOM.render(
  <Router>
    <App />
  </Router>,
  document.getElementById("root")
);
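Here Router refers to the BrowserRouter; make sure it is imported in the header of src/index.js:

import { BrowserRouter as Router } from "react-router-dom";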
Now if you head over to your browser, your app should load just like before. The only
difference being that we are using React Router to serve out our pages.
Next we are going to look into how to organize the different pages of our app.
Add a Navbar
Let's start by creating the outer chrome of our application by first adding a navigation bar to it.
We are going to use the Navbar (https://react-bootstrap.github.io/components/navbar/)
React-Bootstrap component.
To start, you can go ahead and remove the src/logo.svg that is placed there by Create
React App.
$ rm src/logo.svg
And go ahead and remove the code inside src/App.js and replace it with the
following.
Let's also add a couple of lines of styles to space things out a bit more.
Remove all the code inside src/App.css and replace it with the following:
.App {
  margin-top: 15px;
}

.App .navbar-brand {
  font-weight: bold;
}
$ mkdir src/containers/
We’ll be storing all of our top level components here. These are components that will respond
to our routes and make requests to our API. We will be calling them containers through the
rest of this tutorial.
This simply renders our homepage given that the user is not currently signed in.
.Home .lander {
  padding: 80px 0;
  text-align: center;
}

.Home .lander h1 {
  font-family: "Open Sans", sans-serif;
  font-weight: 600;
}

.Home .lander p {
  color: #999;
}
This component uses the Switch component from React-Router that renders the first
matching route that is defined within it. For now we only have a single route; it looks for /
and renders the Home component when matched. We are also using the exact prop to
ensure that it matches the / route exactly. This is because the path / will also match any
route that starts with a / .
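For reference, a sketch of what src/Routes.js looks like at this point:

import React from "react";
import { Route, Switch } from "react-router-dom";
import Home from "./containers/Home";

export default () =>
  <Switch>
    <Route path="/" exact component={Home} />
  </Switch>;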
<Routes />
So the render method of our src/App.js should now look like this.
render() {
  return (
    <div className="App container">
      <Navbar fluid collapseOnSelect>
        <Navbar.Header>
          <Navbar.Brand>
            <Link to="/">Scratch</Link>
          </Navbar.Brand>
          <Navbar.Toggle />
        </Navbar.Header>
      </Navbar>
      <Routes />
    </div>
  );
}
This ensures that as we navigate to different routes in our app, the portion below the navbar
will change to reflect that.
Finally, head over to your browser and your app should show the brand new homepage of
your app.
Next we are going to add login and signup links to our navbar.
render() {
  return (
    <div className="App container">
      <Navbar fluid collapseOnSelect>
        <Navbar.Header>
          <Navbar.Brand>
            <Link to="/">Scratch</Link>
          </Navbar.Brand>
          <Navbar.Toggle />
        </Navbar.Header>
        <Navbar.Collapse>
          <Nav pullRight>
            <NavItem href="/signup">Signup</NavItem>
            <NavItem href="/login">Login</NavItem>
          </Nav>
        </Navbar.Collapse>
      </Navbar>
      <Routes />
    </div>
  );
}
This adds two links to our navbar using the NavItem Bootstrap component. The
Navbar.Collapse component ensures that on mobile devices the two links will be
collapsed.
Now if you flip over to your browser, you should see the two links in our navbar.
Unfortunately, when you click on them the browser refreshes the page while redirecting to the
link. We need it to navigate to the new link without refreshing the page since we are building a
single page app.
To fix this we need a component that works with React Router and React Bootstrap called
React Router Bootstrap (https://github.com/react-bootstrap/react-router-bootstrap). It can
wrap around your Navbar links and use the React Router to route your app to the required
link without refreshing the browser.
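The install command looks like:

$ npm install react-router-bootstrap --save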
We will now wrap our links with the LinkContainer . Replace the render
method in your src/App.js with this.
render() {
  return (
    <div className="App container">
      <Navbar fluid collapseOnSelect>
        <Navbar.Header>
          <Navbar.Brand>
            <Link to="/">Scratch</Link>
          </Navbar.Brand>
          <Navbar.Toggle />
        </Navbar.Header>
        <Navbar.Collapse>
          <Nav pullRight>
            <LinkContainer to="/signup">
              <NavItem>Signup</NavItem>
            </LinkContainer>
            <LinkContainer to="/login">
              <NavItem>Login</NavItem>
            </LinkContainer>
          </Nav>
        </Navbar.Collapse>
      </Navbar>
      <Routes />
    </div>
  );
}
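Also, make sure to import the LinkContainer in the header of src/App.js:

import { LinkContainer } from "react-router-bootstrap";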
And that's it! Now if you flip over to your browser and click on the login link, you should see
the link highlighted in the navbar. Also, it doesn't refresh the page while redirecting.
You'll notice that we are not rendering anything on the page because we don't have a login
page currently. We should handle the case when a requested page is not found.
Next let's look at how to tackle handling 404s with our router.
Create a Component
Let's start by creating a component that will handle this for us.
All this component does is print out a simple message for us.
.NotFound {
  padding-top: 100px;
  text-align: center;
}
Find the <Switch> block in src/Routes.js and add it as the last line in that
section.
This needs to always be the last line in the <Switch> block. You can think of it as the route
that handles requests in case all the other routes before it have failed.
And include the NotFound component in the header by adding the following:
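import NotFound from "./containers/NotFound";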
And that's it! Now if you were to switch over to your browser and try clicking on the Login or
Signup buttons in the Nav you should see the 404 message that we have.
Next up, we are going to configure our app with the info of our backend resources.
AWS Amplify provides a few simple modules (Auth, API, and Storage) to help us easily connect
to our backend. Let’s get started.
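The install command looks like:

$ npm install aws-amplify --save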
This installs the NPM package and adds the dependency to your package.json .
Create a Config
Let's first create a configuration file for our app that'll reference all the resources we have
created.
export default {
  s3: {
    REGION: "YOUR_S3_UPLOADS_BUCKET_REGION",
    BUCKET: "YOUR_S3_UPLOADS_BUCKET_NAME"
  },
  apiGateway: {
    REGION: "YOUR_API_GATEWAY_REGION",
    URL: "YOUR_API_GATEWAY_URL"
  },
  cognito: {
    REGION: "YOUR_COGNITO_REGION",
    USER_POOL_ID: "YOUR_COGNITO_USER_POOL_ID",
    APP_CLIENT_ID: "YOUR_COGNITO_APP_CLIENT_ID",
    IDENTITY_POOL_ID: "YOUR_IDENTITY_POOL_ID"
  }
};
And to initialize AWS Amplify, add the following above the ReactDOM.render
line in src/index.js .
Amplify.configure({
  Auth: {
    mandatorySignIn: true,
    region: config.cognito.REGION,
    userPoolId: config.cognito.USER_POOL_ID,
    identityPoolId: config.cognito.IDENTITY_POOL_ID,
    userPoolWebClientId: config.cognito.APP_CLIENT_ID
  },
  Storage: {
    region: config.s3.REGION,
    bucket: config.s3.BUCKET,
    identityPoolId: config.cognito.IDENTITY_POOL_ID
  },
  API: {
    endpoints: [
      {
        name: "notes",
        endpoint: config.apiGateway.URL,
        region: config.apiGateway.REGION
      }
    ]
  }
});
The mandatorySignIn flag for Auth is set to true because we want our users to be
signed in before they can interact with our app.
The name: "notes" is basically telling Amplify that we want to name our API. Amplify
allows you to add multiple APIs that your app is going to work with. In our case our entire
backend is just one single API.
The Amplify.configure() call is just setting up the various AWS resources that we want to
interact with. It isn't doing anything else special here besides configuration. So while this
might look intimidating, just remember this is only setting things up.
Next up, we are going to work on creating our login and sign up forms.
So let's start by creating the basic form that'll take the user's email (as their username) and
password.
this.state = {
  email: "",
  password: ""
};
}

validateForm() {
  return this.state.email.length > 0 && this.state.password.length > 0;
}
render() {
  return (
    <div className="Login">
      <form onSubmit={this.handleSubmit}>
        <FormGroup controlId="email" bsSize="large">
          <ControlLabel>Email</ControlLabel>
          <FormControl
            autoFocus
            type="email"
            value={this.state.email}
            onChange={this.handleChange}
          />
        </FormGroup>
        <FormGroup controlId="password" bsSize="large">
          <ControlLabel>Password</ControlLabel>
          <FormControl
            value={this.state.password}
            onChange={this.handleChange}
            type="password"
          />
        </FormGroup>
        <Button
          block
          bsSize="large"
          disabled={!this.validateForm()}
          type="submit"
        >
          Login
        </Button>
      </form>
    </div>
  );
}
}
1. In the constructor of our component we create a state object. This will be where we'll
store what the user enters in the form.
2. We then connect the state to our two fields in the form by setting this.state.email
and this.state.password as the value in our input fields. This means that when
the state changes, React will re-render these components with the updated value.
3. But to update the state when the user types something into these fields, we'll call a
handler function named handleChange . This function grabs the id (set as
controlId for the <FormGroup> ) of the field being changed and updates its state
with the value the user is typing in. Also, to have access to the this keyword inside
handleChange we store the reference to an anonymous function like so:
handleChange = (event) => { } (see the sketch after this list).
4. We are setting the autoFocus flag for our email field, so that when our form loads, it
sets focus to this field.
5. We also link up our submit button with our state by using a validation function called
validateForm . This simply checks if our fields are non-empty, but can easily do
something more complicated.
6. Finally, we trigger our callback handleSubmit when the form is submitted. For now we
are simply suppressing the browser's default behavior on submit but we'll do more here
later.
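For reference, a sketch of the two handlers described above (the handleSubmit body will grow in the next chapter):

handleChange = event => {
  this.setState({
    [event.target.id]: event.target.value
  });
}

handleSubmit = event => {
  event.preventDefault();
}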
Now if we switch to our browser and navigate to the login page we should see our newly
created form.
Next, let's connect our login form to our AWS Cognito setup.
handleSubmit = async event => {
  event.preventDefault();

  try {
    await Auth.signIn(this.state.email, this.state.password);
    alert("Logged in");
  } catch (e) {
    alert(e.message);
  }
}
1. We grab the email and password from this.state and call Amplify's
Auth.signIn() method with them. This method returns a promise since it will be logging
the user in asynchronously.
2. We use the await keyword to invoke the Auth.signIn() method that returns a
promise. And we need to label our handleSubmit method as async .
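Also, make sure the Auth module is included in the header of src/containers/Login.js:

import { Auth } from "aws-amplify";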
Now if you try to login using the test user that we created in the Create a
Cognito Test User (/chapters/create-a-cognito-test-user.html) chapter, you should see the
browser alert that tells you that the login was successful.
Next, we’ll take a look at storing the login state in our app.
Add the following to src/App.js right below the class App extends
Component { line.
constructor(props) {
  super(props);

  this.state = {
    isAuthenticated: false
  };
}
This initializes the isAuthenticated flag in the App's state. And calling
userHasAuthenticated updates it. But for the Login container to call this method we
need to pass a reference of this method to it.
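The method itself is straightforward; it simply updates the flag in the state:

userHasAuthenticated = authenticated => {
  this.setState({ isAuthenticated: authenticated });
}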
const childProps = {
  isAuthenticated: this.state.isAuthenticated,
  userHasAuthenticated: this.userHasAuthenticated
};
And pass them into our Routes component by replacing the following line in the
render method of src/App.js .
<Routes />
With this.
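<Routes childProps={childProps} />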
Currently, our Routes component does not do anything with the passed in childProps .
We need it to apply these props to the child component it is going to render. In this case we
need it to apply them to our Login component.
$ mkdir src/components/
Here we’ll be storing all our React components that are not dealing directly with our API or
responding to routes.
This simple component creates a Route where the child component that it renders contains
the passed in props. Let's take a quick look at how this is being done.
The Route component takes a prop called component that represents the component
that will be rendered when a matching route is found. We want our childProps to be
sent to this component.
The Route component can also take a render method in place of the component .
This allows us to control what is passed in to our component.
Based on this we can create a component that returns a Route and takes a
component and childProps prop. This allows us to pass in the component we want
rendered and the props that we want applied.
Finally, we take component (set as C ) and props (set as cProps ) and render inside
our Route using the inline function; props => <C {...props} {...cProps} /> .
Note, the props variable in this case is what the Route component passes us, whereas
the cProps is the childProps that we want to set.
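Putting the above together, src/components/AppliedRoute.js looks something like this:

import React from "react";
import { Route } from "react-router-dom";

export default ({ component: C, props: cProps, ...rest }) =>
  <Route {...rest} render={props => <C {...props} {...cProps} />} />;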
Now to use this component, we are going to include it in the routes where we need to have
the childProps passed in.
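As a sketch, our src/Routes.js now uses the AppliedRoute for the routes that need the childProps:

import React from "react";
import { Route, Switch } from "react-router-dom";
import AppliedRoute from "./components/AppliedRoute";
import Home from "./containers/Home";
import Login from "./containers/Login";
import NotFound from "./containers/NotFound";

export default ({ childProps }) =>
  <Switch>
    <AppliedRoute path="/" exact component={Home} props={childProps} />
    <AppliedRoute path="/login" exact component={Login} props={childProps} />
    <Route component={NotFound} />
  </Switch>;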
this.props.userHasAuthenticated(true);
<LinkContainer to="/signup">
  <NavItem>Signup</NavItem>
</LinkContainer>
<LinkContainer to="/login">
  <NavItem>Login</NavItem>
</LinkContainer>

{this.state.isAuthenticated
  ? <NavItem onClick={this.handleLogout}>Logout</NavItem>
  : <Fragment>
      <LinkContainer to="/signup">
        <NavItem>Signup</NavItem>
      </LinkContainer>
      <LinkContainer to="/login">
        <NavItem>Login</NavItem>
      </LinkContainer>
    </Fragment>
}
Also, import the Fragment in the header.
Replace the import React line in the header of src/App.js with the
following.
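import React, { Component, Fragment } from "react";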
Now head over to your browser and try logging in with the admin credentials we created in
the Create a Cognito Test User (/chapters/create-a-cognito-test-user.html) chapter. You
should see the Logout button appear right away.
Now if you refresh your page you should be logged out again. This is because we are not
initializing the state from the browser session. Let's look at how to do that next.
Amplify gives us a way to get the current user session using the Auth.currentSession()
method. It returns a promise that resolves to the session object (if there is one).
this.state = {
  isAuthenticated: false,
  isAuthenticating: true
};
Let’s include the Auth module by adding the following to the header of
src/App.js .
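import { Auth } from "aws-amplify";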
Now to load the user session we’ll add the following to our src/App.js below
our constructor method.
async componentDidMount() {
  try {
    await Auth.currentSession();
    this.userHasAuthenticated(true);
  }
  catch(e) {
    if (e !== 'No current user') {
      alert(e);
    }
  }

  this.setState({ isAuthenticating: false });
}
All this does is load the current session. If it loads, then it updates the isAuthenticating
flag once the process is complete. The Auth.currentSession() method throws an error
No current user if nobody is currently logged in. We don’t want to show this error to
users when they load up our app and are not signed in.
render() {
  const childProps = {
    isAuthenticated: this.state.isAuthenticated,
    userHasAuthenticated: this.userHasAuthenticated
  };

  return (
    !this.state.isAuthenticating &&
    <div className="App container">
      <Navbar fluid collapseOnSelect>
        <Navbar.Header>
          <Navbar.Brand>
            <Link to="/">Scratch</Link>
          </Navbar.Brand>
          <Navbar.Toggle />
        </Navbar.Header>
        <Navbar.Collapse>
          <Nav pullRight>
            {this.state.isAuthenticated
              ? <NavItem onClick={this.handleLogout}>Logout</NavItem>
              : <Fragment>
                  <LinkContainer to="/signup">
                    <NavItem>Signup</NavItem>
                  </LinkContainer>
                  <LinkContainer to="/login">
                    <NavItem>Login</NavItem>
                  </LinkContainer>
                </Fragment>
            }
          </Nav>
        </Navbar.Collapse>
      </Navbar>
      <Routes childProps={childProps} />
    </div>
  );
}
Now if you head over to your browser and refresh the page, you should see that a user is
logged in.
Unfortunately, when we hit Logout and refresh the page; we are still logged in. To fix this we
are going to clear the session on logout next.
handleLogout = async event => {
  await Auth.signOut();
  this.userHasAuthenticated(false);
}
Now if you head over to your browser, logout and then refresh the page; you should be logged
out completely.
If you try out the entire login flow from the beginning you'll notice that we continue to stay
on the login page throughout the entire process. Next, we'll look at redirecting the page after
we login and logout to make the flow more intuitive.
We are going to use the history.push method that comes with React Router v4.
this.props.history.push("/");
handleSubmit = async event => {
  event.preventDefault();

  try {
    await Auth.signIn(this.state.email, this.state.password);
    this.props.userHasAuthenticated(true);
    this.props.history.push("/");
  } catch (e) {
    alert(e.message);
  }
}
Now if you head over to your browser and try logging in, you should be redirected to the
homepage after you've been logged in.
Redirect to Login After Logout
Now we'll do something very similar for the logout process. However, the App component
does not have access to the router props directly since it is not rendered inside a Route
component. To be able to use the router props in our App component we will need to use
the withRouter Higher-Order Component (https://facebook.github.io/react/docs/higher-
order-components.html) (or HOC). You can read more about the withRouter HOC here
(https://reacttraining.com/react-router/web/api/withRouter).
To use this HOC, we'll change the way we export our App component at the bottom of
src/App.js. Instead of exporting the App component directly, we wrap it with withRouter:
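export default withRouter(App);

And add withRouter to the react-router-dom import in the header of src/App.js.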
  this.userHasAuthenticated(false);
  this.props.history.push("/login");
}
This redirects us back to the login page once the user logs out.
Now if you switch over to your browser and try logging out, you should be redirected to the
login page.
You might have noticed while testing this flow that since the login call has a bit of a delay, we
might need to give some feedback to the user that the login call is in progress. Let's do that
next.
this.state = {
  isLoading: false,
  email: "",
  password: ""
};
And we’ll update it while we are logging in. So our handleSubmit method now
looks like so:
handleSubmit = async event => {
  event.preventDefault();

  this.setState({ isLoading: true });

  try {
    await Auth.signIn(this.state.email, this.state.password);
    this.props.userHasAuthenticated(true);
    this.props.history.push("/");
  } catch (e) {
    alert(e.message);
    this.setState({ isLoading: false });
  }
}
import React from "react";
import { Button, Glyphicon } from "react-bootstrap";
import "./LoaderButton.css";

export default ({
  isLoading,
  text,
  loadingText,
  className = "",
  disabled = false,
  ...props
}) =>
  <Button
    className={`LoaderButton ${className}`}
    disabled={disabled || isLoading}
    {...props}
  >
    {isLoading && <Glyphicon glyph="refresh" className="spinning" />}
    {!isLoading ? text : loadingText}
  </Button>;
This is a really simple component that takes an isLoading flag and the text that the button
displays in the two states (the default state and the loading state). The disabled prop is a
result of what we have currently in our Login button. And we ensure that the button is
disabled when isLoading is true . This makes it so that the user can't click it while we are
in the process of logging them in.
And let’s add a couple of styles to animate our loading icon.
.LoaderButton .spinning.glyphicon {
  margin-right: 7px;
  top: 2px;
  animation: spin 1s infinite linear;
}

@keyframes spin {
  from { transform: scale(1) rotate(0deg); }
  to { transform: scale(1) rotate(360deg); }
}
This spins the refresh Glyphicon infinitely with each spin taking a second. And by adding these
styles as a part of the LoaderButton we keep them self-contained within the component.
<Button
  block
  bsSize="large"
  disabled={!this.validateForm()}
  type="submit"
>
  Login
</Button>

<LoaderButton
  block
  bsSize="large"
  disabled={!this.validateForm()}
  type="submit"
  isLoading={this.state.isLoading}
  text="Login"
  loadingText="Logging in…"
/>
Also, import the LoaderButton in the header. And remove the reference to the
Button component.
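import LoaderButton from "../components/LoaderButton";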
And now when we switch over to the browser and try logging in, you should see the
intermediate state before the login completes.
If you would like to add Forgot Password functionality for your users, you can refer to our Extra
Credit series of chapters on user management (/chapters/manage-user-accounts-in-aws-
amplify.html).
Next let’s implement the sign up process for our app.
1. The user types in their email, password, and confirms their password.
2. We sign them up with Amazon Cognito using the AWS Amplify library and get a user
object in return.
3. We then render a form to accept the confirmation code that AWS Cognito has emailed to
them.
this.state = {
  isLoading: false,
  email: "",
  password: "",
  confirmPassword: "",
  confirmationCode: "",
  newUser: null
};
}

validateForm() {
  return (
    this.state.email.length > 0 &&
    this.state.password.length > 0 &&
    this.state.password === this.state.confirmPassword
  );
}

validateConfirmationForm() {
  return this.state.confirmationCode.length > 0;
}

renderConfirmationForm() {
  return (
    <form onSubmit={this.handleConfirmationSubmit}>
      <FormGroup controlId="confirmationCode" bsSize="large">
        <ControlLabel>Confirmation Code</ControlLabel>
        <FormControl
          autoFocus
          type="tel"
          value={this.state.confirmationCode}
          onChange={this.handleChange}
        />
        <HelpBlock>Please check your email for the code.</HelpBlock>
      </FormGroup>
      <LoaderButton
        block
        bsSize="large"
        disabled={!this.validateConfirmationForm()}
        type="submit"
        isLoading={this.state.isLoading}
        text="Verify"
        loadingText="Verifying…"
      />
    </form>
  );
}

renderForm() {
  return (
    <form onSubmit={this.handleSubmit}>
      <FormGroup controlId="email" bsSize="large">
        <ControlLabel>Email</ControlLabel>
        <FormControl
          autoFocus
          type="email"
          value={this.state.email}
          onChange={this.handleChange}
        />
      </FormGroup>
      <FormGroup controlId="password" bsSize="large">
        <ControlLabel>Password</ControlLabel>
        <FormControl
          value={this.state.password}
          onChange={this.handleChange}
          type="password"
        />
      </FormGroup>
      <FormGroup controlId="confirmPassword" bsSize="large">
        <ControlLabel>Confirm Password</ControlLabel>
        <FormControl
          value={this.state.confirmPassword}
          onChange={this.handleChange}
          type="password"
        />
      </FormGroup>
      <LoaderButton
        block
        bsSize="large"
        disabled={!this.validateForm()}
        type="submit"
        isLoading={this.state.isLoading}
        text="Signup"
        loadingText="Signing up…"
      />
    </form>
  );
}

render() {
  return (
    <div className="Signup">
      {this.state.newUser === null
        ? this.renderForm()
        : this.renderConfirmationForm()}
    </div>
  );
}
}
Most of the things we are doing here are fairly straightforward but let's go over them quickly.
1. Since we need to show the user a form to enter the confirmation code, we are
conditionally rendering two forms based on if we have a user object or not.
2. We are using the LoaderButton component that we created earlier for our submit
buttons.
3. Since we have two forms we have two validation methods called validateForm and
validateConfirmationForm .
4. We are setting the autoFocus flags on the email and the confirmation code fields.
.Signup form {
  margin: 0 auto;
  max-width: 320px;
}
Now if we switch to our browser and navigate to the signup page we should see our newly
created form. Our form doesn't do anything when we enter in our info but you can still try to
fill in an email address, password, and the confirmation code. It'll give you an idea of how the
form will behave once we connect it to Cognito.
handleSubmit = async event => {
  event.preventDefault();

  this.setState({ isLoading: true });

  try {
    const newUser = await Auth.signUp({
      username: this.state.email,
      password: this.state.password
    });
    this.setState({
      newUser
    });
  } catch (e) {
    alert(e.message);
  }

  this.setState({ isLoading: false });
}
handleConfirmationSubmit = async event => {
  event.preventDefault();

  this.setState({ isLoading: true });

  try {
    await Auth.confirmSignUp(this.state.email, this.state.confirmationCode);
    await Auth.signIn(this.state.email, this.state.password);

    this.props.userHasAuthenticated(true);
    this.props.history.push("/");
  } catch (e) {
    alert(e.message);
    this.setState({ isLoading: false });
  }
}
1. In handleSubmit we make a call to signup a user. This creates a new user object.
2. We then save that newUser object in the state.
3. In handleConfirmationSubmit we use the confirmation code to confirm the user.
4. With the user now confirmed, Cognito now knows that we have a new user that can login
to our app.
5. Use the email and password to authenticate exactly the same way we did in the login
page.
Now if you were to switch over to your browser and try signing up for a new account it should
redirect you to the homepage after sign up successfully completes.
A quick note on the signup flow here. If the user refreshes their page at the confirm step, they
won't be able to get back and confirm that account. It forces them to create a new account
instead. We are keeping things intentionally simple but here are a couple of hints on how to fix
it.
2. Use the Auth.resendSignUp() method to resend the code if the user has not been
previously confirmed. Here is a link to the Amplify API docs (https://aws.github.io/aws-
amplify/api/classes/authclass.html#resendsignup).
Give this a try and post in the comments if you have any questions.
Now while developing you might run into cases where you need to manually confirm an
unauthenticated user. You can do that with the AWS CLI using the following command.
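The command uses the admin-confirm-sign-up operation and looks like this:

$ aws cognito-idp admin-confirm-sign-up \
  --region YOUR_COGNITO_REGION \
  --user-pool-id YOUR_USER_POOL_ID \
  --username YOUR_USER_EMAIL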
Just be sure to use your Cognito User Pool Id and the email you used to create the account.
If you would like to allow your users to change their email or password, you can refer to our
Extra Credit series of chapters on user management (/chapters/manage-user-accounts-in-
aws-amplify.html).
First we are going to create the form for a note. It'll take some content and a file as an
attachment.
this.file = null;

this.state = {
  isLoading: null,
  content: ""
};
}

validateForm() {
  return this.state.content.length > 0;
}

handleChange = event => {
  this.setState({
    [event.target.id]: event.target.value
  });
}

render() {
  return (
    <div className="NewNote">
      <form onSubmit={this.handleSubmit}>
        <FormGroup controlId="content">
          <FormControl
            onChange={this.handleChange}
            value={this.state.content}
            componentClass="textarea"
          />
        </FormGroup>
        <FormGroup controlId="file">
          <ControlLabel>Attachment</ControlLabel>
          <FormControl onChange={this.handleFileChange} type="file" />
        </FormGroup>
        <LoaderButton
          block
          bsStyle="primary"
          bsSize="large"
          disabled={!this.validateForm()}
          type="submit"
          isLoading={this.state.isLoading}
          text="Create"
          loadingText="Creating…"
        />
      </form>
    </div>
  );
}
}
Everything is fairly standard here, except for the file input. Our form elements so far have
been controlled components (https://facebook.github.io/react/docs/forms.html), as in their
value is directly controlled by the state of the component. The file input simply calls a different
onChange handler ( handleFileChange ) that saves the file object as a class property. We
use a class property instead of saving it in the state because the file object we save does not
change or drive the rendering of our component.
Currently, our handleSubmit does not do a whole lot other than limiting the file size of our
attachment. We are going to define this in our config.
MAX_ATTACHMENT_SIZE: 5000000,
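And for reference, a sketch of the two handlers, with the size check in handleSubmit:

handleFileChange = event => {
  this.file = event.target.files[0];
}

handleSubmit = async event => {
  event.preventDefault();

  if (this.file && this.file.size > config.MAX_ATTACHMENT_SIZE) {
    alert(
      `Please pick a file smaller than ${config.MAX_ATTACHMENT_SIZE /
        1000000} MB.`
    );
    return;
  }

  this.setState({ isLoading: true });
}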
.NewNote form {
  padding-bottom: 15px;
}

.NewNote form textarea {
  height: 300px;
  font-size: 24px;
}
We just need to use the API module that AWS Amplify has.
Let’s include the API module by adding the following to the header of
src/containers/NewNote.js .
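import { API } from "aws-amplify";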
try {
  await this.createNote({
    content: this.state.content
  });
  this.props.history.push("/");
} catch (e) {
  alert(e);
  this.setState({ isLoading: false });
}
}

createNote(note) {
  return API.post("notes", "/notes", {
    body: note
  });
}
1. We make our create call in createNote by making a POST request to /notes and
passing in our note object. Notice that the first two arguments to the API.post()
method are notes and /notes . This is because back in the Configure AWS Amplify
(/chapters/configure-aws-amplify.html) chapter we gave this set of APIs the name
notes .
2. For now the note object is simply the content of the note. We are creating these notes
without an attachment for now.
And that's it; if you switch over to your browser and try submitting your form, it should
successfully navigate over to our homepage.
Next let's upload our file to S3 and add an attachment to our note.
We are going to use the Storage module that AWS Amplify has. If you recall, back in the
Create a Cognito identity pool (/chapters/create-a-cognito-identity-pool.html) chapter we
allowed a logged in user access to a folder inside our S3 bucket. AWS Amplify stores directly to
this folder if we want to privately store a file.
Also, just looking ahead a bit; we will be uploading files when a note is created and when a
note is edited. So let's create a simple convenience method to help with that.
Upload to S3
Create a src/libs/ directory for this.
$ mkdir src/libs/
  return stored.key;
}

2. Generates a unique file name using the current timestamp ( Date.now() ). Of course, if
your app is being used heavily this might not be the best way to create a unique filename.
But this should be fine for now.
3. Upload the file to the user's folder in S3 using the Storage.vault.put() method.
Alternatively, if we were uploading publicly you can use the Storage.put() method.
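Putting it together, a sketch of the s3Upload method in src/libs/awsLib.js:

import { Storage } from "aws-amplify";

export async function s3Upload(file) {
  const filename = `${Date.now()}-${file.name}`;

  const stored = await Storage.vault.put(filename, file, {
    contentType: file.type
  });

  return stored.key;
}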
try {
  const attachment = this.file
    ? await s3Upload(this.file)
    : null;

  await this.createNote({
    attachment,
    content: this.state.content
  });
  this.props.history.push("/");
} catch (e) {
  alert(e);
  this.setState({ isLoading: false });
}
}
And make sure to include s3Upload by adding the following to the header of
src/containers/NewNote.js .
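import { s3Upload } from "../libs/awsLib";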
1. We upload the file to S3 (if there is one) using our s3Upload method.
2. Use the returned key and add that to the note object when we create the note.
Now when we switch over to our browser and submit the form with an uploaded file we
should see the note being created successfully, and the app redirecting to the homepage.
Next up we are going to allow users to see a list of the notes they’ve created.
Currently, our Home container is very simple. Let's add the conditional rendering in there.
this.state = {
  isLoading: true,
  notes: []
};
}

renderNotesList(notes) {
  return null;
}

renderLander() {
  return (
    <div className="lander">
      <h1>Scratch</h1>
      <p>A simple note taking app</p>
    </div>
  );
}

renderNotes() {
  return (
    <div className="notes">
      <PageHeader>Your Notes</PageHeader>
      <ListGroup>
        {!this.state.isLoading && this.renderNotesList(this.state.notes)}
      </ListGroup>
    </div>
  );
}

render() {
  return (
    <div className="Home">
      {this.props.isAuthenticated ? this.renderNotes() : this.renderLander()}
    </div>
  );
}
}
1. The isLoading flag defaults to true, since we haven't loaded our list of notes when the
page first loads.
2. Store our notes in the state. Currently, it's empty but we'll be calling our API for it.
3. Once we fetch our list we'll use the renderNotesList method to render the items in
the list.
And that’s our basic setup! Head over to the browser and the homepage of our app should
render out an empty list.
Next we are going to fill it up with our API.
async componentDidMount() {
  if (!this.props.isAuthenticated) {
    return;
  }

  try {
    const notes = await this.notes();
    this.setState({ notes });
  } catch (e) {
    alert(e);
  }

  this.setState({ isLoading: false });
}

notes() {
  return API.get("notes", "/notes");
}
renderNotesList(notes) {
  return [{}].concat(notes).map(
    (note, i) =>
      i !== 0
        ? <LinkContainer
            key={note.noteId}
            to={`/notes/${note.noteId}`}
          >
            <ListGroupItem header={note.content.trim().split("\n")[0]}>
              {"Created: " + new Date(note.createdAt).toLocaleString()}
            </ListGroupItem>
          </LinkContainer>
        : <LinkContainer
            key="new"
            to="/notes/new"
          >
            <ListGroupItem>
              <h4>
                <b>{"\uFF0B"}</b> Create a new note
              </h4>
            </ListGroupItem>
          </LinkContainer>
  );
}
1. It always renders a Create a new note button as the first item in the list (even if the list is
empty). We do this by concatenating an array containing an empty object with our notes
array.
2. We render the first line of each note as the ListGroupItem header by doing
note.content.trim().split('\n')[0] .
3. And the LinkContainer component directs our app to each of the items.
.Home .notes h4 {
  font-family: "Open Sans", sans-serif;
  font-weight: 600;
  overflow: hidden;
  line-height: 1.5;
  white-space: nowrap;
  text-overflow: ellipsis;
}

.Home .notes p {
  color: #666;
}
Now head over to your browser and you should see your list displayed.
And if you click on the links they should take you to their respective pages.
Next up we are going to allow users to view and edit their notes.
The first thing we are going to need to do is load the note when our container loads. Just like
what we did in the Home container. So let’s get started.
This is important because we are going to be pattern matching to extract our note id from the
URL.
By using the route path /notes/:id we are telling the router to send all matching routes to
our component Notes . This will also end up matching the route /notes/new with an id
of new . To ensure that doesn't happen, we put our /notes/new route before the pattern
matching one.
Of course this component doesn’t exist yet and we are going to create it now.
this.file = null;

this.state = {
  note: null,
  content: "",
  attachmentURL: null
};
}

async componentDidMount() {
  try {
    let attachmentURL;
    const note = await this.getNote();
    const { content, attachment } = note;

    if (attachment) {
      attachmentURL = await Storage.vault.get(attachment);
    }

    this.setState({
      note,
      content,
      attachmentURL
    });
  } catch (e) {
    alert(e);
  }
}

getNote() {
  return API.get("notes", `/notes/${this.props.match.params.id}`);
}

render() {
  return <div className="Notes"></div>;
}
}
1. Load the note on componentDidMount and save it to the state. We get the id of our
note from the URL using the props automatically passed to us by React-Router in
this.props.match.params.id . The keyword id is a part of the pattern matching in
our route ( /notes/:id ).
2. If there is an attachment, we use the key to get a secure link to the file we uploaded to S3.
We then store this in the component's state as attachmentURL .
3. The reason why we have the note object in the state along with the content and the
attachmentURL is because we will be using this later when the user edits the note.
Now if you switch over to your browser and navigate to a note that we previously created,
you'll notice that the page renders an empty container.
Next up, we are going to render the note we just loaded.
validateForm() {
  return this.state.content.length > 0;
}

formatFilename(str) {
  return str.replace(/^\w+-/, "");
}

if (!confirmed) {
  return;
}
render() {
  return (
    <div className="Notes">
      {this.state.note &&
        <form onSubmit={this.handleSubmit}>
          <FormGroup controlId="content">
            <FormControl
              onChange={this.handleChange}
              value={this.state.content}
              componentClass="textarea"
            />
          </FormGroup>
          {this.state.note.attachment &&
            <FormGroup>
              <ControlLabel>Attachment</ControlLabel>
              <FormControl.Static>
                <a
                  target="_blank"
                  rel="noopener noreferrer"
                  href={this.state.attachmentURL}
                >
                  {this.formatFilename(this.state.note.attachment)}
                </a>
              </FormControl.Static>
            </FormGroup>}
          <FormGroup controlId="file">
            {!this.state.note.attachment &&
              <ControlLabel>Attachment</ControlLabel>}
            <FormControl onChange={this.handleFileChange} type="file" />
          </FormGroup>
          <LoaderButton
            block
            bsStyle="primary"
            bsSize="large"
            disabled={!this.validateForm()}
            type="submit"
            isLoading={this.state.isLoading}
            text="Save"
            loadingText="Saving…"
          />
          <LoaderButton
            block
            bsStyle="danger"
            bsSize="large"
            isLoading={this.state.isDeleting}
            onClick={this.handleDelete}
            text="Delete"
            loadingText="Deleting…"
          />
        </form>}
    </div>
  );
}
4. We also added a delete button to allow users to delete the note. And just like the submit
button it too needs a flag that signals that the call is in progress. We call it isDeleting .
5. We handle attachments with a file input exactly like we did in the NewNote component.
6. Our delete button also confirms with the user if they want to delete the note using the
browser's confirm dialog.
To complete this code, let’s add isLoading and isDeleting to the state.
this.state = {
  isLoading: null,
  isDeleting: null,
  note: null,
  content: "",
  attachmentURL: null
};
.Notes form {
  padding-bottom: 15px;
}
Also, let's include the React-Bootstrap components that we are using here, along with our
styles, the LoaderButton , and the config , by adding the following to our header.
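Something like this (assuming the file lives at src/containers/Notes.js):

import React, { Component } from "react";
import { API, Storage } from "aws-amplify";
import { FormGroup, FormControl, ControlLabel } from "react-bootstrap";
import LoaderButton from "../components/LoaderButton";
import config from "../config";
import "./Notes.css";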
And that’s it. If you switch over to your browser, you should see the note loaded.
saveNote(note) {
  return API.put("notes", `/notes/${this.props.match.params.id}`, {
    body: note
  });
}
handleSubmit = async event => {
  let attachment;

  event.preventDefault();

  if (this.file && this.file.size > config.MAX_ATTACHMENT_SIZE) {
    alert(
      `Please pick a file smaller than ${config.MAX_ATTACHMENT_SIZE /
        1000000} MB.`
    );
    return;
  }

  this.setState({ isLoading: true });

  try {
    if (this.file) {
      attachment = await s3Upload(this.file);
    }

    await this.saveNote({
      content: this.state.content,
      attachment: attachment || this.state.note.attachment
    });
    this.props.history.push("/");
  } catch (e) {
    alert(e);
    this.setState({ isLoading: false });
  }
}
The code above is doing a couple of things that should be very similar to what we did in the
NewNote container.
1. If there is a file to upload we call s3Upload to upload it and save the key we get from
S3.
2. We save the note by making a PUT request with the note object to /notes/:id where
we get the id from this.props.match.params.id . We use the API.put()
method from AWS Amplify.
Let’s switch over to our browser and give it a try by saving some changes.
You might have noKced that we are not deleKng the old aLachment when we upload a new
one. To keep things simple, we are leaving that bit of detail up to you. It should be preLy
straighMorward. Check the AWS Amplify API Docs (hLps://aws.github.io/aws-
amplify/api/classes/storageclass.html#remove) on how to a delete file from S3.
deleteNote() {
  return API.del("notes", `/notes/${this.props.match.params.id}`);
}

handleDelete = async event => {
  event.preventDefault();

  const confirmed = window.confirm(
    "Are you sure you want to delete this note?"
  );

  if (!confirmed) {
    return;
  }

  this.setState({ isDeleting: true });

  try {
    await this.deleteNote();
    this.props.history.push("/");
  } catch (e) {
    alert(e);
    this.setState({ isDeleting: false });
  }
}
We are simply making a DELETE request to /notes/:id where we get the id from
this.props.match.params.id . We use the API.del method from AWS Amplify to do
so. This calls our delete API and we redirect to the homepage on success.
Now if you switch over to your browser and try deleting a note you should see it confirm your
action and then delete the note.
Again, you might have noticed that we are not deleting the attachment when we are deleting
a note. We are leaving that up to you to keep things simple. Check the AWS Amplify API Docs
(https://aws.github.io/aws-amplify/api/classes/storageclass.html#remove) on how to delete a
file from S3.
Now with our app nearly complete, we'll look at securing some of the pages of our app that
require a login. Currently if you visit a note page while you are logged out, it throws an ugly
error.
Instead, we would like it to redirect us to the login page and then redirect us back after we
login. Let's look at how to do that next.
We also have a couple of pages that need to behave in sort of the same way. We want the
user to be redirected to the homepage if they type in the login ( /login ) or signup
( /signup ) URL. Currently, the login and sign up page end up loading even though the user is
already logged in.
There are many ways to solve the above problems. The simplest would be to just check the
conditions in our containers and redirect. But since we have a few containers that need the
same logic we can create a special route (like the AppliedRoute from the Add the session
to the state (/chapters/add-the-session-to-the-state.html) chapter) for it.
We are going to create two different route components to fix the problem we have.
1. A route called the AuthenticatedRoute, that checks if the user is authenticated before
routing.
2. And a component called the UnauthenticatedRoute, that ensures the user is not
authenticated.
This component is similar to the AppliedRoute component that we created in the Add the
session to the state (/chapters/add-the-session-to-the-state.html) chapter. The main
difference being that we look at the props that are passed in to check if a user is
authenticated. If the user is authenticated, then we simply render the passed in component.
And if the user is not authenticated, then we use the Redirect React Router v4 component
to redirect the user to the login page. We also pass in the current path to the login page
( redirect in the querystring). We will use this later to redirect us back after the user logs in.
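A sketch of src/components/AuthenticatedRoute.js based on the above:

import React from "react";
import { Route, Redirect } from "react-router-dom";

export default ({ component: C, props: cProps, ...rest }) =>
  <Route
    {...rest}
    render={props =>
      cProps.isAuthenticated
        ? <C {...props} {...cProps} />
        : <Redirect
            to={`/login?redirect=${props.location.pathname}${props.location.search}`}
          />}
  />;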
Here we are checking to ensure that the user is not authenticated before we render the
component that is passed in. And in the case where the user is authenticated, we use the
Redirect component to simply send the user to the homepage.
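And similarly, a sketch of src/components/UnauthenticatedRoute.js:

import React from "react";
import { Route, Redirect } from "react-router-dom";

export default ({ component: C, props: cProps, ...rest }) =>
  <Route
    {...rest}
    render={props =>
      !cProps.isAuthenticated
        ? <C {...props} {...cProps} />
        : <Redirect to="/" />}
  />;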
Next, we are going to use the reference to redirect to the note page after we login.
Let’s start by adding a method to read the redirect URL from the querystring.
if (!results) {
  return null;
}
if (!results[2]) {
  return "";
}
This method takes the querystring param we want to read and returns it.
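For reference, a sketch of the complete helper, assuming a standard regex-based querystring parser:

function querystring(name, url = window.location.href) {
  name = name.replace(/[[\]]/g, "\\$&");

  const regex = new RegExp("[?&]" + name + "(=([^&#]*)|&|#|$)", "i");
  const results = regex.exec(url);

  if (!results) {
    return null;
  }
  if (!results[2]) {
    return "";
  }

  return decodeURIComponent(results[2].replace(/\+/g, " "));
}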
Now let’s update our component to use this parameter when it redirects.
this.props.history.push("/");
And that’s it! Our app is ready to go live. Let’s look at how we are going to deploy it using our
serverless setup.
The basic setup we are going to be using will look something like this:
AWS provides quite a few services that can help us do the above. We are going to use S3
(https://aws.amazon.com/s3/) to host our assets, CloudFront
(https://aws.amazon.com/cloudfront/) to serve it, Route 53
(https://aws.amazon.com/route53/) to manage our domain, and Certificate Manager
(https://aws.amazon.com/certificate-manager/) to handle our SSL certificate.
So let's get started by first configuring our S3 bucket to upload the assets of our app.
A bucket can also be configured to host the assets in it as a static website and is automatically
assigned a publicly accessible URL. So let's get started.
Select Create Bucket and pick a name for your application and select the US East (N. Virginia)
Region. Since our application is being served out using a CDN, the region should not
matter to us.
Add Permissions
Buckets by default are not publicly accessible, so we need to change the S3 Bucket
Permission. Select the Bucket Policy from the permissions panel.
Add the following bucket policy into the editor, where notes-app-client is the
name of our S3 bucket. Make sure to use the name of your bucket here.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::notes-app-client/*"]
    }
  ]
}
And hit Save.
This panel also shows us where our app will be accessible. AWS assigns us a URL for our static
website. In this case the URL assigned to me is notes-app-client.s3-website-us-east-
1.amazonaws.com .
Now that our bucket is all set up and ready, let’s go ahead and upload our assets to it.
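Run the standard Create React App build command from our working directory:

$ npm run build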
This packages all of our assets and places them in the build/ directory.
Upload to S3
Now to deploy simply run the following command; where YOUR_S3_DEPLOY_BUCKET_NAME
is the name of the S3 Bucket we created in the Create an S3 bucket (/chapters/create-an-s3-
bucket.html) chapter.
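$ aws s3 sync build/ s3://YOUR_S3_DEPLOY_BUCKET_NAME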
All this command does is sync the build/ directory with our bucket on S3. Just as a
sanity check, go into the S3 section in your AWS Console
(https://console.aws.amazon.com/console/home) and check if your bucket has the files we just
uploaded.
And our app should be live on S3! If you head over to the URL assigned to you (in my case it is
http://notes-app-client.s3-website-us-east-1.amazonaws.com (http://notes-app-client.s3-
website-us-east-1.amazonaws.com)), you should see it live.
Next we’ll configure CloudFront to serve our app out globally.
You can grab the S3 website endpoint from the Static website hosting panel for your S3
bucket. We had configured this in the previous chapter. Copy the URL in the Endpoint field.
And paste that URL in the Origin Domain Name field. In my case it is, http://notes-app-
client.s3-website-us-east-1.amazonaws.com .
And now scroll down the form and switch Compress Objects Automatically to Yes. This will
automatically Gzip compress the files that can be compressed and speed up the delivery of
our app.
Next, scroll down a bit further to set the Default Root Object to index.html .
And finally, hit Create Distribution.
It takes AWS a little while to create a distribution. But once it is complete you can find your
CloudFront Distribution by clicking on your newly created distribution from the list and
looking up its domain name.
And if you navigate over to that in your browser, you should see your app live.
Now before we move on there is one last thing we need to do. Currently, our static website
returns our index.html as the error page. We set this up back in the chapter where we
created our S3 bucket. However, it returns an HTTP status code of 404 when it does so. We
want to return the index.html but since the routing is handled by React Router; it does not
make sense that we return the 404 HTTP status code. One of the issues with this is that
certain corporate firewalls and proxies tend to block 4xx and 5xx responses.
To set up a custom error response, head over to the Error Pages tab in our Distribution.
And type in your new domain name in the Alternate Domain Names (CNAMEs) field.
Scroll down and hit Yes, Edit to save the changes.
Next, let's point our domain to the CloudFront Distribution.
Select your domain from the list and hit Create Record Set in the details screen.
Leave the Name field empty since we are going to point our bare domain (without the www.)
to our CloudFront Distribution.
And select Alias as Yes since we are going to simply point this to our CloudFront domain.
In the Alias Target dropdown, select your CloudFront Distribution.
Create a new Record Set with the exact settings as before, except make sure to pick AAAA -
IPv6 address as the Type.
And hit Create to add your AAAA record set.
It can take around an hour to update the DNS records but once it’s done, you should be able
to access your app through your domain.
Next up, we’ll take a quick look at ensuring that our www. domain also directs to our app.
To create a www version of our domain and have it redirect we are going to create a new S3
Bucket and a new CloudFront Distribution. This new S3 Bucket will simply respond with a
redirect to our main domain using the redirection feature that S3 Buckets have.
But unlike last time we are going to select the Redirect requests option and fill in the domain
we are going to be redirecting towards. This is the domain that we set up in our last chapter.
Also, make sure to copy the Endpoint as we'll be needing this later.
And hit Save to make the changes. Next we'll create a CloudFront Distribution to point to this
S3 redirect Bucket.
This time fill in www as the Name and select Alias as Yes. And pick your new CloudFront
Distribution from the Alias Target dropdown.
Add IPv6 Support
Just as before, we need to add an AAAA record to support IPv6.
Create a new Record Set with the exact same settings as before, except make sure to pick
AAAA - IPv6 address as the Type.
And that's it! Just give it some time for the DNS to propagate and if you visit your www
version of your domain, it should redirect you to your non-www version.
Next, we’ll set up SSL and add HTTPS support for our domains.
Request a Certificate
Select Certificate Manager from the list of services in your AWS Console
(https://console.aws.amazon.com). Ensure that you are in the US East (N. Virginia) region. This
is because a certificate needs to be from this region for it to work with CloudFront
(http://docs.aws.amazon.com/acm/latest/userguide/acm-regions.html).
If this is your first certificate, you'll need to hit Get started. If not then hit Request a certificate
from the top.
And type in the name of our domain. Hit Add another name to this certificate and add our
www version of our domain as well. Hit Review and request once you are done.
Now to confirm that we control the domain, select the DNS validation method and hit
Review.
On the validation screen expand the two domains we are trying to validate.
Since we control the domain through Route 53, we can directly create the DNS record
through here by hitting Create record in Route 53.
And confirm that you want the record to be created by hitting Create.
Also, make sure to do this for the other domain.
The process of creating a DNS record and validating it can take around 30 minutes.
Then switch the Viewer Protocol Policy to Redirect HTTP to HTTPS. And scroll down to the
bottom and hit Yes, Edit.
Now let's do the same for our other CloudFront Distribution.
But leave the Viewer Protocol Policy as HTTP and HTTPS. This is because we want our users
to go straight to the HTTPS version of our non-www domain. As opposed to redirecting to the
HTTPS version of our www domain before redirecting again.
Open up the S3 Redirect Bucket we created in the last chapter. Head over to the Properties
tab and select Static website hosting.
Change the Protocol to https and hit Save.
And that’s it. Our app should be served out on our domain through HTTPS.
Next up, let’s look at the process of deploying updates to our app.
We need to do the last step since CloudFront caches our objects in its edge locations. So to
make sure that our users see the latest version, we need to tell CloudFront to invalidate its
cache in the edge locations.
Let’s start by making a couple of changes to our app and go through the process of deploying
them.
We are going to add a Login and Signup button to our lander to give users a clear call to
action.
renderLander() {
  return (
    <div className="lander">
      <h1>Scratch</h1>
      <p>A simple note taking app</p>
      <div>
        <Link to="/login" className="btn btn-info btn-lg">
          Login
        </Link>
        <Link to="/signup" className="btn btn-success btn-lg">
          Signup
        </Link>
      </div>
    </div>
  );
}
Now that our app is built and ready in the build/ directory, let’s deploy to S3.
Upload to S3
Run the following from our working directory to upload our app to our main S3 Bucket. Make
sure to replace YOUR_S3_DEPLOY_BUCKET_NAME with the S3 Bucket we created in the
Create an S3 bucket (/chapters/create-an-s3-bucket.html) chapter.
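$ aws s3 sync build/ s3://YOUR_S3_DEPLOY_BUCKET_NAME --delete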
Note the --delete flag here; this is telling S3 to delete all the files that are in the bucket
that we aren't uploading this time around. Create React App generates unique bundles when
we build it and without this flag we'll end up retaining all the files from the previous builds.
To do this we'll need the Distribution ID of both of our CloudFront Distributions. You can get
it by clicking on the distribution from the list of CloudFront Distributions.
Now we can use the AWS CLI to invalidate the cache of the two distributions. Make sure to
replace YOUR_CF_DISTRIBUTION_ID and YOUR_WWW_CF_DISTRIBUTION_ID with the
ones from above.
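$ aws cloudfront create-invalidation \
  --distribution-id YOUR_CF_DISTRIBUTION_ID \
  --paths "/*"
$ aws cloudfront create-invalidation \
  --distribution-id YOUR_WWW_CF_DISTRIBUTION_ID \
  --paths "/*"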
This invalidates our distribution for both the www and non-www versions of our domain. If
you click on the Invalidations tab, you should see your invalidation request being processed.
It can take a few minutes to complete. But once it is done, the updated version of our app
should be live.
And that’s it! We now have a set of commands we can run to deploy our updates. Let’s quickly
put them together so we can do it with one command.
Add the following in the scripts block above eject in the package.json .
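As a sketch, using npm's pre/post hooks so a single deploy script runs all three steps (with your own bucket name and distribution IDs filled in):

"predeploy": "npm run build",
"deploy": "aws s3 sync build/ s3://YOUR_S3_DEPLOY_BUCKET_NAME --delete",
"postdeploy": "aws cloudfront create-invalidation --distribution-id YOUR_CF_DISTRIBUTION_ID --paths '/*' && aws cloudfront create-invalidation --distribution-id YOUR_WWW_CF_DISTRIBUTION_ID --paths '/*'",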
Now simply run the following command from your project root when you want to deploy your
updates. It’ll build your app, upload it to S3, and invalidate the CloudFront cache.
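$ npm run deploy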
Our app is now complete. And this is the end of Part I. Next we'll be looking at how to
automate this stack so we can use it for our future projects. You can also take a look at how to
add a Login with Facebook option in the Facebook Login with Cognito using AWS Amplify
(/chapters/facebook-login-with-cognito-using-aws-amplify.html) chapter. It builds on what we
have covered in Part I so far.
Infrastructure as code
Currently, you go through a bunch of manual steps with a lot of clicking around to
configure your backend. This makes it pretty tricky to re-create this stack for a new
project. Or to configure a new environment for the same project. Serverless Framework is
really good for converting this entire stack into code. This means that it can automatically
re-create the entire project from scratch without ever touching the AWS Console.
A lot of our readers are curious about how to use serverless with 3rd party APIs. We will
go over how to connect to the Stripe API and accept credit card payments.
Unit tests
We will also look at how to configure unit tests for our backend using Jest
(https://facebook.github.io/jest/).
Automating deployments
In the current tutorial you need to deploy through your command line using the
serverless deploy command. This can be a bit tricky when you have a team working
on your project. To start with, we'll add our frontend and backend projects to GitHub.
We'll then go over how to automate your deployments using Seed (https://seed.run) (for
the backend) and Netlify (https://netlify.com) (for the frontend).
Configuring environments
Typically while working on projects you end up creating multiple environments. For
example, you'd want to make sure not to make changes directly to your app while it is in
use. Thanks to the Serverless Framework and Seed we'll be able to do this with ease for
the backend. And we'll do something similar for our frontend using React and Netlify.
We'll also configure custom domains for our backend API environments.
We will look at how to work with secret environment variables in our local environment
and in production.
The goal of Part II is to ensure that you have a setup that you can easily replicate and use for
your future projects. This is almost exactly what we and a few of our readers have been using.
This part of the guide is fairly standalone but it does rely on the original setup. If you haven’t
completed Part I; you can quickly browse through some of the chapters but you don’t
necessarily need to redo them all. We’ll start by forking the code from the original setup and
then building on it.
Let's get started by first converting our backend infrastructure into code.
$ rm -rf .git/
$ npm install
https://github.com/jayair/serverless-stack-2-api.git
$ git init
$ git add .
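Then point the repo at your own remote:

$ git remote add origin REPO_URL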
Here REPO_URL is the URL we copied from GitHub in the steps above. You can verify that it
has been set correctly by doing the following.
$ git remote -v
Next, let’s make a couple of quick changes to our project to get organized.
$ rm handler.js
$ rm tests/handler.test.js
service: notes-app-api
service: notes-app-2-api
The reason we are doing this is because Serverless Framework uses the service name to
identify projects. Since we are creating a new project we want to ensure that we use a
different name from the original. Now we could have simply overwritten the existing project
but the resources were previously created by hand and will conflict when we try to create
them through code.
stage: prod
And replace it with:
stage: dev
We are defaulting the stage to dev instead of prod . This will become clear later when we
create multiple environments.
$ git add .
$ git commit -m "Organizing project"
Next let's look into configuring our entire notes app backend via our serverless.yml . This
is commonly known as Infrastructure as code.
However, in Part I we created our DynamoDB table, Cognito User Pool, S3 uploads bucket,
and Cognito Identity Pool through the AWS Console. You might be wondering if this too can
be configured programmatically, instead of doing it manually through the console. It
definitely can!
This general pattern is called Infrastructure as code and it has some massive benefits. Firstly, it allows us to replicate our setup with just a couple of commands. Secondly, it is not as error prone as doing it by hand. We know a few of you have run into configuration related issues by simply following the steps in the tutorial. Additionally, describing our entire infrastructure as code allows us to create multiple environments with ease. For example, you can create a dev environment where you can make and test all your changes as you work on it. And this can be kept separate from the production environment that your users are interacting with.
In the next few chapters we are going to configure our various infrastructure pieces through
our serverless.yml .
Resources:
  NotesTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: ${self:custom.tableName}
      AttributeDefinitions:
        - AttributeName: userId
          AttributeType: S
        - AttributeName: noteId
          AttributeType: S
      KeySchema:
        - AttributeName: userId
          KeyType: HASH
        - AttributeName: noteId
          KeyType: RANGE
      # Use on-demand billing instead of provisioned capacity
      BillingMode: PAY_PER_REQUEST
Finally, note the BillingMode property: instead of provisioning read/write capacity for our table, we are using on-demand billing. We'll go over what this means below.

Replace the resources: block at the bottom of our serverless.yml with the block above.
Add the following custom: block at the top of our serverless.yml above the
provider: block.
custom:
  # Our stage is based on what is passed in when running serverless
  # commands. Or falls back to what we have set in the provider section.
  stage: ${opt:stage, self:provider.stage}
  # Set the table name here so we can use it while testing locally
  tableName: ${self:custom.stage}-notes
We added a couple of things here that are worth spending some time on:
We first create a custom variable called stage. You might be wondering why we need a custom variable for this when we already have stage: dev in the provider: block. This is because we want to set the current stage of our project based on what is passed in through the serverless deploy --stage $STAGE command. And if a stage is not set when we deploy, we want to fall back to the one we have set in the provider block. So ${opt:stage, self:provider.stage} is telling Serverless to first look for opt:stage (the one passed in through the command line), and then fall back to self:provider.stage (the one in the provider: block).
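For example:

# Deploys to the dev stage set in our provider: block
$ serverless deploy

# Overrides it and deploys to the prod stage
$ serverless deploy --stage prod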
Finally, we are using the PAY_PER_REQUEST setting for the BillingMode. This tells DynamoDB that we want to pay per request and use the On-Demand Capacity (https://aws.amazon.com/dynamodb/pricing/on-demand/) option. With DynamoDB in On-Demand mode, our database is now truly serverless. This option can be very cost-effective, especially if you are just starting out and your workloads are not very predictable or stable. On the other hand, if you know exactly how much capacity you need, the Provisioned Capacity (https://aws.amazon.com/dynamodb/pricing/provisioned/) mode would work out to be cheaper.
A lot of the above might sound tricky and overly complicated right now. But we are setting it up so that we can automate and replicate our entire setup with ease. Note that Serverless Framework (and CloudFormation behind the scenes) will be completely managing our resources based on the serverless.yml. This means that if you have a typo in your table name, the old table will be removed and a new one will be created in its place. To prevent accidentally deleting serverless resources (like DynamoDB tables), you need to set the DeletionPolicy: Retain flag. We have a detailed post on this over on the Seed blog (https://seed.run/blog/how-to-prevent-accidentally-deleting-serverless-resources).
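As a quick sketch, the flag sits directly on the resource, alongside Type and Properties:

NotesTable:
  Type: AWS::DynamoDB::Table
  # Keep the table (and its data) around even if this
  # resource is removed from the stack
  DeletionPolicy: Retain
  Properties:
    ...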
We are also going to make a quick tweak to reference the DynamoDB resource that we are creating.

Make sure to copy the indentation properly. These two blocks fall under the provider block and need to be indented as such.
1. The environment: block here is basically telling Serverless Framework to make these variables available as process.env in our Lambda functions. For example, process.env.tableName would be set to the DynamoDB table name for this stage. We will need this later when we are connecting to our database.
2. For the tableName specifically, we are getting it by referencing the custom variable from above.
3. In the case of our iamRoleStatements: we are now specifically stating which table we want to connect to. This block is telling AWS that these are the only resources that our Lambda functions have access to. (A sketch of both blocks follows this list.)
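A sketch of what these two blocks look like under provider: (the exact list of IAM actions is an assumption; the idea is to scope them to our table):

environment:
  tableName: ${self:custom.tableName}

iamRoleStatements:
  - Effect: Allow
    Action:
      - dynamodb:DescribeTable
      - dynamodb:Query
      - dynamodb:Scan
      - dynamodb:GetItem
      - dynamodb:PutItem
      - dynamodb:UpdateItem
      - dynamodb:DeleteItem
    Resource:
      # Restrict our functions to the table defined above
      - Fn::GetAtt: [NotesTable, Arn]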
$ git add .
$ git commit -m "Adding our DynamoDB resource"
Resources:
  AttachmentsBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Set the CORS policy
      CorsConfiguration:
        CorsRules:
          - AllowedOrigins:
              - '*'
            AllowedHeaders:
              - '*'
            AllowedMethods:
              - GET
              - PUT
              - POST
              - DELETE
              - HEAD
            MaxAge: 3000
S3 bucket names (unlike DynamoDB table names) are globally unique. So it is not really possible for us to know what ours is going to be called beforehand. Hence, we let CloudFormation generate the name for us and we just add an Outputs: block to tell it to print it out so we can use it later.
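The Outputs: block itself is short; it just echoes the generated bucket name (a sketch):

Outputs:
  AttachmentsBucketName:
    Value:
      Ref: AttachmentsBucket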
$ git add .
$ git commit -m "Adding our S3 resource"
And that’s it. Next let’s look into configuring our Cognito User Pool.
Resources:
  CognitoUserPool:
    Type: AWS::Cognito::UserPool
    Properties:
      # Generate a name based on the stage
      UserPoolName: ${self:custom.stage}-user-pool
      # Set email as an alias
      UsernameAttributes:
        - email
      AutoVerifiedAttributes:
        - email

  CognitoUserPoolClient:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      # Generate an app client name based on the stage
      ClientName: ${self:custom.stage}-user-pool-client
      UserPoolId:
        Ref: CognitoUserPool
      ExplicitAuthFlows:
        - ADMIN_NO_SRP_AUTH
      GenerateSecret: false

# Print out the Ids of the User Pool and its app client
Outputs:
  UserPoolId:
    Value:
      Ref: CognitoUserPool

  UserPoolClientId:
    Value:
      Ref: CognitoUserPoolClient
We are naming our User Pool (and the User Pool app client) based on the stage by using the custom variable ${self:custom.stage}.

We are setting the UsernameAttributes as email. This is telling the User Pool that we want our users to be able to log in with their email as their username.

Just like our S3 bucket, we want CloudFormation to tell us the User Pool Id and the User Pool Client Id that are generated. We do this in the Outputs: block at the end.
$ git add .
$ git commit -m "Adding our Cognito User Pool resource"
And next let's tie all of this together by configuring our Cognito Identity Pool.
Resources:
  # The federated identity for our user pool to auth with
  CognitoIdentityPool:
    Type: AWS::Cognito::IdentityPool
    Properties:
      # Generate a name based on the stage
      IdentityPoolName: ${self:custom.stage}IdentityPool
      # Don't allow unauthenticated users
      AllowUnauthenticatedIdentities: false
      # Link to our User Pool
      CognitoIdentityProviders:
        - ClientId:
            Ref: CognitoUserPoolClient
          ProviderName:
            Fn::GetAtt: [ "CognitoUserPool", "ProviderName" ]

  # IAM roles
  CognitoIdentityPoolRoles:
    Type: AWS::Cognito::IdentityPoolRoleAttachment
    Properties:
      IdentityPoolId:
        Ref: CognitoIdentityPool
      Roles:
        authenticated:
          Fn::GetAtt: [CognitoAuthRole, Arn]
Now it looks like there is a whole lot going on here. But it is pretty much exactly what we did back in the Create a Cognito identity pool (/chapters/create-a-cognito-identity-pool.html) chapter. It's just that CloudFormation can be a bit verbose and can end up looking a bit intimidating.

Let's quickly go over the various sections of this configuration:
1. First we name our Identity Pool based on the stage name using ${self:custom.stage}.
2. Next we state that we want to use our User Pool as the identity provider. We are doing this specifically using the Ref: CognitoUserPoolClient line. If you refer back to the Configure Cognito User Pool in Serverless (/chapters/configure-cognito-user-pool-in-serverless.html) chapter, you'll notice we have a block under CognitoUserPoolClient that we are referencing here.
3. We then add the various parts to the authenticated IAM role. This is exactly what we used in the Create a Cognito identity pool (/chapters/create-a-cognito-identity-pool.html) chapter. It just needs to be formatted this way to work with CloudFormation. (A trimmed sketch of this role follows the list.)
4. For the S3 bucket the name is generated by AWS. So for this case we use Fn::GetAtt: [AttachmentsBucket, Arn] to get its exact name.
5. Finally, we print out the generated Identity Pool Id in the Outputs: block.
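The CognitoAuthRole that the Roles: block points to isn't shown above. A trimmed sketch of what it roughly looks like (the exact policy is an assumption; the guide also scopes S3 access down to each user's private folder):

CognitoAuthRole:
  Type: AWS::IAM::Role
  Properties:
    Path: /
    # Only identities from our Identity Pool can assume this role
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Federated: cognito-identity.amazonaws.com
          Action: sts:AssumeRoleWithWebIdentity
          Condition:
            StringEquals:
              'cognito-identity.amazonaws.com:aud':
                Ref: CognitoIdentityPool
    Policies:
      - PolicyName: CognitoAuthorizedPolicy
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            # Let logged in users call our API
            - Effect: Allow
              Action: execute-api:Invoke
              Resource: '*'
            # And upload attachments to our bucket
            - Effect: Allow
              Action: s3:*
              Resource:
                Fn::Join:
                  - ''
                  - - Fn::GetAtt: [AttachmentsBucket, Arn]
                    - '/*'

Outputs:
  IdentityPoolId:
    Value:
      Ref: CognitoIdentityPool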
$ git add .
$ git commit -m "Adding our Cognito Identity Pool resource"
Next, let's quickly reference our DynamoDB table in our Lambda functions using environment variables.

This requires us to use environment variables in our Lambda functions to figure out which table we should be talking to. Currently, if you pull up create.js you'll notice the following section.
const params = {
  TableName: "notes",
  Item: {
    userId: event.requestContext.identity.cognitoIdentityId,
    noteId: uuid.v1(),
    content: data.content,
    attachment: data.attachment,
    createdAt: new Date().getTime()
  }
};
We need to change the TableName: "notes" line to use the relevant table name. In the
Configure DynamoDB in Serverless (/chapters/configure-dynamodb-in-serverless.html)
chapter, we also added tableName: to our serverless.yml under the environment:
block.
TableName: "notes",
with this:
TableName: process.env.tableName,
TableName: "notes",
with this:
TableName: process.env.tableName,
TableName: "notes",
with this:
TableName: process.env.tableName,
TableName: "notes",
with this:
TableName: process.env.tableName,
TableName: "notes",
with this:
TableName: process.env.tableName,
$ git add .
$ git commit -m "Use environment variables in our functions"
We should mention though that our current project has all of our resources and the Lambda functions that we had created in the first part of our tutorial. This is a common trend in serverless projects. Your code and infrastructure are not treated differently. Of course, as your projects get larger, you end up splitting them up. So you might have a separate Serverless Framework project that deploys your infrastructure while a different project just deploys your Lambda functions.
$ serverless deploy -v
Stack Outputs
AttachmentsBucketName: notes-app-2-api-dev-attachmentsbucket-oj4rfiumzqf5
UserPoolClientId: ft93dvu3cv8p42bjdiip7sjqr
UserPoolId: us-east-1_yxO5ed0tq
DeleteLambdaFunctionQualifiedArn: arn:aws:lambda:us-east-1:232771856781:function:notes-app-2-api-dev-delete:2
CreateLambdaFunctionQualifiedArn: arn:aws:lambda:us-east-1:232771856781:function:notes-app-2-api-dev-create:2
GetLambdaFunctionQualifiedArn: arn:aws:lambda:us-east-1:232771856781:function:notes-app-2-api-dev-get:2
UpdateLambdaFunctionQualifiedArn: arn:aws:lambda:us-east-1:232771856781:function:notes-app-2-api-dev-update:2
IdentityPoolId: us-east-1:64495ad1-617e-490e-a6cf-fd85e7c8327e
BillingLambdaFunctionQualifiedArn: arn:aws:lambda:us-east-1:232771856781:function:notes-app-2-api-dev-billing:1
ListLambdaFunctionQualifiedArn: arn:aws:lambda:us-east-1:232771856781:function:notes-app-2-api-dev-list:2
ServiceEndpoint: https://mqqmkwnpbc.execute-api.us-east-1.amazonaws.com/dev
ServerlessDeploymentBucketName: notes-app-2-api-dev-serverlessdeploymentbucket-1p2o0dshaz2qc
A couple of things to note here:

We are deploying to a stage called dev. This has been set in our serverless.yml under the provider: block. We can override this by explicitly passing it in with the serverless deploy --stage $STAGE_NAME command instead.

Our deploy command (with the -v option) prints out the output we had requested in our resources. For example, AttachmentsBucketName is the S3 file uploads bucket that was created and the UserPoolId is the Id of our User Pool.

Finally, you can re-run the deploy command and CloudFormation will only update the parts that have changed. So you can confidently run this command without worrying about it re-creating your entire infrastructure from scratch.
And that's it! Our entire infrastructure is completely configured and deployed automatically.

Next, we will add a new API (and Lambda function) to work with 3rd party APIs. In our case we are going to add an API that will use Stripe to bill the users of our notes app!
A common extension of Serverless Stack (that we have noticed) is to add a billing API that works with Stripe. In the case of our notes app we are going to allow our users to pay a fee for storing a certain number of notes. The flow is going to look something like this:

1. The user selects the number of notes they want to store and puts in their credit card information.
2. We generate a one-time token by calling the Stripe SDK on the frontend to verify that the credit card info is valid.
3. We then call an API passing in the number of notes and the generated token.
4. The API takes the number of notes, figures out how much to charge (based on our pricing plan), and calls the Stripe API to charge our user.
We aren’t going to do much else in the way of storing this info in our database. We’ll leave
that as an exercise for the reader.
The second thing to note is that we need to generate the Publishable key and the Secret key.
The Publishable key is what we are going to use in our frontend client with the Stripe SDK.
And the Secret key is what we are going to use in our API when asking Stripe to charge our
user. As denoted, the Publishable key is public while the Secret key needs to stay private.
Make a note of both the Publishable key and the Secret key. We are going to be using these
later.
  try {
    await stripe.charges.create({
      source,
      amount,
      description,
      currency: "usd"
    });
    return success({ status: true });
  } catch (e) {
    return failure({ message: e.message });
  }
}
We get the storage and source from the request body. The storage variable is the number of notes the user would like to store in their account. And source is the Stripe token for the card that we are going to charge.

We create a new Stripe object using our Stripe Secret key. We are going to get this as an environment variable; we do not want to put our secret keys in our code and commit them to Git. That would be a security issue.

Finally, we use the stripe.charges.create method to charge the user and respond to the request if everything went through successfully.
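The top of the billing.js handler isn't shown above. A sketch of what it plausibly looks like (the helper module paths are assumptions based on the starter project layout):

import stripePackage from "stripe";
import { calculateCost } from "./libs/billing-lib";
import { success, failure } from "./libs/response-lib";

export async function main(event, context) {
  // Pull the number of notes and the Stripe token from the body
  const { storage, source } = JSON.parse(event.body);
  const amount = calculateCost(storage);
  const description = "Scratch charge";

  // Load our Stripe secret key from the environment
  const stripe = stripePackage(process.env.stripeSecretKey);

  // ... followed by the try/catch block shown above
}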
billing:
  handler: billing.main
  events:
    - http:
        path: billing
        method: post
        cors: true
        authorizer: aws_iam

Make sure this is indented correctly. This block falls under the functions block.
$ git add .
$ git commit -m "Adding a billing API"
Now before we can test our API we need to load our Stripe secret key in our environment. Start by renaming the env.example file to env.yml and replacing its contents with the following.

prod:
  stripeSecretKey: "STRIPE_PROD_SECRET_KEY"

default:
  stripeSecretKey: "STRIPE_TEST_SECRET_KEY"
The custom: block of our serverless.yml should now look like the following:

custom:
  # Our stage is based on what is passed in when running serverless
  # commands. Or falls back to what we have set in the provider section.
  stage: ${opt:stage, self:provider.stage}
  # Set the table name here so we can use it while testing locally
  tableName: ${self:custom.stage}-notes
  # Load our secret environment variables based on the current stage.
  # Fall back to default if it is not in prod.
  environment: ${file(env.yml):${self:custom.stage}, file(env.yml):default}

And add the following line to the environment: block under provider:, so that our Lambda functions can access the key as process.env.stripeSecretKey:

stripeSecretKey: ${self:custom.environment.stripeSecretKey}
We are loading a custom variable called environment from the env.yml file. This is based on the stage (we are deploying to) using file(env.yml):${self:custom.stage}. But if that stage is not defined in the env.yml, then we fall back to loading everything under the default: block using file(env.yml):default. So Serverless Framework checks if the first is available before falling back to the second.
We don't want to commit our secrets to Git, so make sure env.yml is listed in the .gitignore:

# Env
env.yml
$ git add .
$ git commit -m "Adding stripe environment variable"
Now let's create a mock event to test our billing API with:

{
  "body": "{\"source\":\"tok_visa\",\"storage\":21}",
  "requestContext": {
    "identity": {
      "cognitoIdentityId": "USER-SUB-1234"
    }
  }
}
We are going to be testing with a Stripe test token called tok_visa and with 21 as the number of notes we want to store. You can read more about the Stripe test cards and tokens in the Stripe API Docs here (https://stripe.com/docs/testing#cards).
Let’s now invoke our billing API by running the following in our project root.
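The invoke command itself isn't shown here; with Serverless Framework it would be along these lines (the mock file path is an assumption):

$ serverless invoke local --function billing --path mocks/billing-event.json

If everything goes well, the response should look similar to this.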
{
  "statusCode": 200,
  "headers": {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Credentials": true
  },
  "body": "{\"status\":true}"
}
Commit the Changes
Let’s commit these to Git.
$ git add .
$ git commit -m "Adding a mock event for the billing API"
Now that we have our new billing API ready, let's look at how to set up unit tests to ensure that our business logic is configured correctly.

We are going to use Jest (https://facebook.github.io/jest/) for this and it is already a part of our starter project (https://github.com/AnomalyInnovations/serverless-nodejs-starter). However, if you are starting a new Serverless Framework project, add Jest to your dev dependencies by running the following.
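The install command isn't shown here; with npm it is simply the following.

$ npm install --save-dev jest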
And update the scripts block in your package.json with the following:
"scripts": {
"test": "jest"
},
This will allow you to run your tests using the command npm test. In our case though, since the starter project uses serverless-bundle, the scripts block looks like this instead:
"scripts": {
"test": "serverless-bundle test"
},
Our three tests each exercise a tier of our pricing structure. A sketch of tests/billing.test.js (the expected costs assume the tiered pricing in our billing library, shown after this section):

import { calculateCost } from "../libs/billing-lib";

test("Lowest tier", () => {
  const storage = 10;
  const expectedCost = 4000;
  const cost = calculateCost(storage);
  expect(cost).toEqual(expectedCost);
});

test("Middle tier", () => {
  const storage = 100;
  const expectedCost = 20000;
  const cost = calculateCost(storage);
  expect(cost).toEqual(expectedCost);
});

test("Highest tier", () => {
  const storage = 101;
  const expectedCost = 10100;
  const cost = calculateCost(storage);
  expect(cost).toEqual(expectedCost);
});
This should be straightforward. We are adding 3 tests. They are testing the different tiers of our pricing structure. We test the case where a user is trying to store 10, 100, and 101 notes. And we compare the calculated cost to the one we are expecting. You can read more about using Jest in the Jest docs here (https://facebook.github.io/jest/docs/en/getting-started.html).
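The pricing logic itself lives in a billing library that isn't listed here. A minimal sketch that is consistent with the tiers above (the exact rates are an assumption):

// libs/billing-lib.js
// Assumed rates: $4 per note up to 10 notes, $2 up to 100,
// and $1 beyond that. Returns the amount in cents for Stripe.
export function calculateCost(storage) {
  const rate = storage <= 10
    ? 4
    : storage <= 100
      ? 2
      : 1;
  return rate * storage * 100;
}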
Run tests
And we can run our tests by using the following command in the root of our project.
$ npm test
PASS tests/billing.test.js
✓ Lowest tier (4ms)
✓ Middle tier
✓ Highest tier (1ms)
$ git add .
$ git commit -m "Adding unit tests"
$ git push
Next we'll use our Git repo to automate our deployments. This will ensure that when we push our changes to Git, it will run our tests and deploy them for us automatically. We'll also learn to configure multiple environments.
At this point we have:

A serverless project that has all of its infrastructure completely configured in code
A way to handle secrets locally
And finally, a way to run unit tests to test our business logic
Next we are going to use our Git repo to automate our deployments. This essentially means that we can deploy our entire project by simply pushing our changes to Git. This can be incredibly useful since you won't need to create any special scripts or configurations to deploy your code. You can also have multiple people on your team deploy with ease.

Along with automating deployments, we are also going to look at working with multiple environments. We want to create a clear separation between our production environment and our dev environment. We are going to create a workflow where we continually deploy to our dev (or any non-prod) environment, but use a manual promotion step when we promote to production. We'll also look at configuring custom domains for APIs.
For automating our serverless backend, we are going to be using a service called Seed (https://seed.run). Full disclosure, we also built Seed. You can replace most of this section with a service like Travis CI (https://travis-ci.org) or CircleCI (https://circleci.com). It's a bit more cumbersome and needs some scripting. We have a couple of posts on this over on the Seed (https://seed.run) blog:
Next, Seed will scan your repos for a serverless.yml. Hit Add Service to confirm this. Note that if your serverless.yml is not in your project root, you will need to change the path.
Seed deploys to your AWS account on your behalf. You should create a separate IAM user with exactly the permissions your project needs. You can read more about this here (https://seed.run/docs/customizing-your-iam-policy). But for now we'll simply use the one we've used in this tutorial.
$ cat ~/.aws/credentials
[default]
aws_access_key_id = YOUR_IAM_ACCESS_KEY
aws_secret_access_key = YOUR_IAM_SECRET_KEY
Seed will also create a couple of stages (or environments) for you. By default, it'll create a dev and a prod stage using the same AWS credentials. You can customize these, but for us this is perfect.
Now before we proceed to deploying our app, we need to enable running unit tests as a part
of our build process. You’ll recall that we had added a couple of tests back in the unit tests
(/chapters/unit-tests-in-serverless.html) chapter. And we want to run those before we deploy
our app.
To do this, hit the Settings link and click Enable Unit Tests.
Back in our pipeline, you'll notice that our dev stage is hooked up to master. This means that any commits to master will trigger a build in dev.
Click on dev.
You’ll see that we haven’t deployed to this stage yet.
However, before we do that, we’ll need to add our secret environment variables.
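Once the secrets are set in the Seed console (the Stripe key we noted earlier), we can trigger our first build. The version bump command isn't shown here, but npm's built-in bump is what creates the commit:

$ npm version patch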
This simply updates the NPM version for your project. It is a good way to keep track of the changes you are making to your project. And it also creates a quick Git commit for us.
$ git push
Now if you head into the dev stage in Seed, you should see a build in progress. To see the build logs, you can hit Build v1.

Here you'll see the build taking place live. Click on the service that is being deployed. In this case, we only have one service.

You'll see the build logs for the in-progress build here. Notice that the tests are run as a part of the build.
Something cool to note here is that the build process is split into a few parts. First the code is checked out through Git and the tests are run. But we don't directly deploy. Instead, we create a package for the dev stage and the prod stage. And finally we deploy to dev with that package. The reason this is split up is because we want to avoid the build process while promoting to prod. This ensures that if we have a tested working build, it should just work when we promote to production.

You might also notice a couple of warnings that look like the following.

These are expected since the env.yml is not a part of our Git repo and is not available in the build process. The Stripe key is instead set directly in the Seed console.
Once the build is complete, take a look at the build log and make a note of the following:

Region: region
Cognito User Pool Id: UserPoolId
Cognito App Client Id: UserPoolClientId
Cognito Identity Pool Id: IdentityPoolId
S3 File Uploads Bucket: AttachmentsBucketName
API Gateway URL: ServiceEndpoint
We'll be needing these later in our frontend and when we test our APIs.

Now head over to the app home page. You'll notice that we are ready to promote to production. We have a manual promotion step so that you get a chance to review the changes and ensure that you are ready to push to production.

And if you head over to the prod stage, you should see your prod deployment in action. It should only take a second to deploy to production, since the package was already built during the dev deployment. And just like before, make a note of the following.

Region: region
Cognito User Pool Id: UserPoolId
Cognito App Client Id: UserPoolClientId
Cognito Identity Pool Id: IdentityPoolId
S3 File Uploads Bucket: AttachmentsBucketName
API Gateway URL: ServiceEndpoint
Next let’s configure our serverless API with a custom domain.
This shows you a list of the API endpoints and Lambda functions that are a part of this deployment. Now click on Settings. And hit Update Custom Domain.
In the first part of the tutorial we had added our domain to Route 53. If you haven't done so, you can read more about it here (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/MigratingDNS.html). Hit Select a domain and you should see a list of all your Route 53 domains. Select the one you intend to use. And fill in the sub-domain and base path. For example, you could use api.my-domain.com/prod; where api is the sub-domain and prod is the base path.
Seed will now go through and configure the domain for this API Gateway endpoint, create the SSL certificate, and attach it to the domain. This process can take up to 40 mins.

While we wait, we can do the same for our dev stage. Go into the dev stage > click View Deployment > click Settings > and hit Update Custom Domain. And select the domain, sub-domain, and base path. In our case we'll use something like api.my-domain.com/dev.

Hit Update and wait for the changes to take place.
Once complete, we are ready to test our fully-configured serverless API backend!
Before we do the test let’s create a test user for both the environments. We’ll be following the
exact same steps as the Create a Cognito test user (/chapters/create-a-cognito-test-user.html)
chapter.
Next we’ll confirm the user through the Cognito Admin CLI.
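Following that chapter, the commands would look something like this with the AWS CLI (fill in the values from the dev stage outputs, and repeat with the prod values for the prod user):

$ aws cognito-idp sign-up \
  --region YOUR_DEV_COGNITO_REGION \
  --client-id YOUR_DEV_COGNITO_APP_CLIENT_ID \
  --username [email protected] \
  --password Passw0rd!

$ aws cognito-idp admin-confirm-sign-up \
  --region YOUR_DEV_COGNITO_REGION \
  --user-pool-id YOUR_DEV_COGNITO_USER_POOL_ID \
  --username [email protected]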
$ npx aws-api-gateway-cli-test \
--username='[email protected]' \
--password='Passw0rd!' \
--user-pool-id='YOUR_DEV_COGNITO_USER_POOL_ID' \
--app-client-id='YOUR_DEV_COGNITO_APP_CLIENT_ID' \
--cognito-region='YOUR_DEV_COGNITO_REGION' \
--identity-pool-id='YOUR_DEV_IDENTITY_POOL_ID' \
--invoke-url='YOUR_DEV_API_GATEWAY_URL' \
--api-gateway-region='YOUR_DEV_API_GATEWAY_REGION' \
--path-template='/notes' \
--method='POST' \
--body='{"content":"hello world","attachment":"hello.jpg"}'
Also run the same command for prod. Make sure to use the prod versions.
$ npx aws-api-gateway-cli-test \
--username='[email protected]' \
--password='Passw0rd!' \
--user-pool-id='YOUR_PROD_COGNITO_USER_POOL_ID' \
--app-client-id='YOUR_PROD_COGNITO_APP_CLIENT_ID' \
--cognito-region='YOUR_PROD_COGNITO_REGION' \
--identity-pool-id='YOUR_PROD_IDENTITY_POOL_ID' \
--invoke-url='YOUR_PROD_API_GATEWAY_URL' \
--api-gateway-region='YOUR_PROD_API_GATEWAY_REGION' \
--path-template='/notes' \
--method='POST' \
--body='{"content":"hello world","attachment":"hello.jpg"}'
Now that our APIs are tested, we are ready to plug them into our frontend. But before we do that, let's do a quick test to see what will happen if we make a mistake and push some faulty code to production.
Let's add a faulty line of code to our create Lambda function (create.js):

gibberish.what;
$ git add .
$ git commit -m "Making a mistake"
$ git push
Now you can see a build in progress. Wait for it to complete and hit Promote. Confirm the Change Set by hitting Promote to Production.

Enable Access Logs

Now before we test our faulty code, we'll turn on API Gateway access logs so we can see the error. Click on the prod stage View Resources. Hit Settings. Hit Enable Access Logs.

This will take a couple of minutes, but Seed will automatically configure the IAM roles necessary for this and enable API Gateway access logs for your prod environment.
$ npx aws-api-gateway-cli-test \
--username='[email protected]' \
--password='Passw0rd!' \
--user-pool-id='YOUR_PROD_COGNITO_USER_POOL_ID' \
--app-client-id='YOUR_PROD_COGNITO_APP_CLIENT_ID' \
--cognito-region='YOUR_PROD_COGNITO_REGION' \
--identity-pool-id='YOUR_PROD_IDENTITY_POOL_ID' \
--invoke-url='YOUR_PROD_API_GATEWAY_URL' \
--api-gateway-region='YOUR_PROD_API_GATEWAY_REGION' \
--path-template='/notes' \
--method='POST' \
--body='{"content":"hello world","attachment":"hello.jpg"}'
You'll notice the number of requests that were made, the 4xx errors, 5xx errors, and latency for those requests.

Now if we go back and click on the Logs for the create Lambda function, it should show you clearly that there was an error in our code. Notice that it is complaining that gibberish is not defined.

And just like the API metrics, the Lambda metrics will show you an overview of what is going on at a function level.
Rollback in Production
Now obviously, we have a problem. Usually you might be tempted to fix the code, then push and promote the change. But since our users might be affected by faulty promotions to prod, we want to roll back our changes immediately.

To do this, head back to the prod stage. And hit the Rollback button on the previous build we had in production.

Seed keeps track of your past builds and simply uses the previously built package to deploy it again.
And now if you run your test command from before.
$ npx aws-api-gateway-cli-test \
--username='[email protected]' \
--password='Passw0rd!' \
--user-pool-id='YOUR_PROD_COGNITO_USER_POOL_ID' \
--app-client-id='YOUR_PROD_COGNITO_APP_CLIENT_ID' \
--cognito-region='YOUR_PROD_COGNITO_REGION' \
--identity-pool-id='YOUR_PROD_IDENTITY_POOL_ID' \
--invoke-url='YOUR_PROD_API_GATEWAY_URL' \
--api-gateway-region='YOUR_PROD_API_GATEWAY_REGION' \
--path-template='/notes' \
--method='POST' \
--body='{"content":"hello world","attachment":"hello.jpg"}'
Finally, let's fix our code by removing the faulty line we added to create.js:

gibberish.what;
$ git add .
$ git commit -m "Fixing the mistake"
$ git push
And that’s it! We are now ready to plug this into our frontend.
$ rm -rf .git/
$ npm install
$ git init
$ git add .
$ git remote add origin REPO_URL

Here REPO_URL is the URL we copied from GitHub in the steps above; for example, https://github.com/jayair/serverless-stack-2-client.git. You can verify that it has been set correctly by doing the following.
$ git remote -v
Next let’s look into configuring our frontend client with the environments that we have in our
backend.
Let’s start by looking at how our app is configured currently. Our src/config.js stores the
info to all of our backend resources.
export default {
  MAX_ATTACHMENT_SIZE: 5000000,
  s3: {
    REGION: "us-east-1",
    BUCKET: "notes-app-uploads"
  },
  apiGateway: {
    REGION: "us-east-1",
    URL: "https://5by75p4gn3.execute-api.us-east-1.amazonaws.com/prod"
  },
  cognito: {
    REGION: "us-east-1",
    USER_POOL_ID: "us-east-1_udmFFSb92",
    APP_CLIENT_ID: "4hmari2sqvskrup67crkqa4rmo",
    IDENTITY_POOL_ID: "us-east-1:ceef8ccc-0a19-4616-9067-854dc69c2d82"
  }
};
We need to change this so that when we push our app to dev it connects to the dev environment of our backend, and for prod it connects to the prod environment. Of course you can add many more environments, but let's just stick to these for now.
Environment Variables in Create React App

Our React app is a static single page app. This means that once a build is created for a certain environment it persists for that environment.
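The command itself isn't shown here; in Create React App you set such a variable inline when starting the app:

$ REACT_APP_TEST_VAR=123 npm start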
Here REACT_APP_TEST_VAR is the custom environment variable and we are setting it to the value 123. In our app we can access this variable as process.env.REACT_APP_TEST_VAR. So the following line in our app would print out 123 in the browser console:

console.log(process.env.REACT_APP_TEST_VAR);
Note that these variables are embedded during build time. Also, only the variables that start with REACT_APP_ are embedded in our app; all other environment variables are ignored.

With that in mind, let's replace the contents of our src/config.js with the following:
const dev = {
  s3: {
    REGION: "YOUR_DEV_S3_UPLOADS_BUCKET_REGION",
    BUCKET: "YOUR_DEV_S3_UPLOADS_BUCKET_NAME"
  },
  apiGateway: {
    REGION: "YOUR_DEV_API_GATEWAY_REGION",
    URL: "YOUR_DEV_API_GATEWAY_URL"
  },
  cognito: {
    REGION: "YOUR_DEV_COGNITO_REGION",
    USER_POOL_ID: "YOUR_DEV_COGNITO_USER_POOL_ID",
    APP_CLIENT_ID: "YOUR_DEV_COGNITO_APP_CLIENT_ID",
    IDENTITY_POOL_ID: "YOUR_DEV_IDENTITY_POOL_ID"
  }
};
const prod = {
  s3: {
    REGION: "YOUR_PROD_S3_UPLOADS_BUCKET_REGION",
    BUCKET: "YOUR_PROD_S3_UPLOADS_BUCKET_NAME"
  },
  apiGateway: {
    REGION: "YOUR_PROD_API_GATEWAY_REGION",
    URL: "YOUR_PROD_API_GATEWAY_URL"
  },
  cognito: {
    REGION: "YOUR_PROD_COGNITO_REGION",
    USER_POOL_ID: "YOUR_PROD_COGNITO_USER_POOL_ID",
    APP_CLIENT_ID: "YOUR_PROD_COGNITO_APP_CLIENT_ID",
    IDENTITY_POOL_ID: "YOUR_PROD_IDENTITY_POOL_ID"
  }
};
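// Pick the stage-specific config; default to dev if
// REACT_APP_STAGE is not set (this line sits between the two
// blocks above and the export below; it's implied by the
// default-to-dev note that follows).
const config = process.env.REACT_APP_STAGE === "prod"
  ? prod
  : dev;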
export default {
  // Add common config values here
  MAX_ATTACHMENT_SIZE: 5000000,
  ...config
};
Make sure to replace the different versions of the resources with the ones from the Deploying through Seed (/chapters/deploying-through-seed.html) chapter.
Note that we are defaulting our environment to dev if REACT_APP_STAGE is not set. This means that our current build process (npm start and npm run build) will default to the dev environment. And config values like MAX_ATTACHMENT_SIZE that are common to both environments are moved into the common section.
If we switch over to our app, we should see it in development mode and it'll be connected to the dev version of our backend. We haven't changed the deployment process yet, but in the coming chapters we'll change this when we automate our frontend deployments.

We don't need to worry about the prod version just yet. But as an example, if we wanted to build the prod version of our app, we'd have to set REACT_APP_STAGE to prod when building.
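The exact commands aren't shown here, but they follow the Create React App convention from above:

$ REACT_APP_STAGE=prod npm run build

Or on Windows:

> set "REACT_APP_STAGE=prod" && npm run build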
$ git add .
$ git commit -m "Configuring environments"
Next, let's add a settings page to our app. This is where a user will be able to pay for our service!
1. Users put in their credit card info and the number of notes they want to store.
2. We call Stripe on the frontend to generate a token for the credit card.
3. We then call our billing API with the token and the number of notes.
4. Our billing API calculates the amount and bills the card!
Create our Settings container in src/containers/Settings.js. The imports and class header below are a sketch (the API import assumes AWS Amplify, which our frontend uses):

import React, { Component } from "react";
import { API } from "aws-amplify";

export default class Settings extends Component {
  constructor(props) {
    super(props);

    this.state = {
      isLoading: false
    };
  }

  billUser(details) {
    return API.post("notes", "/billing", {
      body: details
    });
  }

  render() {
    return (
      <div className="Settings">
      </div>
    );
  }
}
Next, add the settings page route to the Switch block in our routes:

<Switch>
  <AppliedRoute path="/" exact component={Home} props={childProps} />
  <UnauthenticatedRoute path="/login" exact component={Login} props={childProps} />
  <UnauthenticatedRoute path="/signup" exact component={Signup} props={childProps} />
  <AuthenticatedRoute path="/settings" exact component={Settings} props={childProps} />
  <AuthenticatedRoute path="/notes/new" exact component={NewNote} props={childProps} />
  <AuthenticatedRoute path="/notes/:id" exact component={Notes} props={childProps} />
  { /* Finally, catch all unmatched routes */ }
  <Route component={NotFound} />
</Switch>
Next add a link to our settings page in the navbar by replacing the render method in src/App.js with this.
render() {
  const childProps = {
    isAuthenticated: this.state.isAuthenticated,
    userHasAuthenticated: this.userHasAuthenticated
  };

  return (
    !this.state.isAuthenticating &&
    <div className="App container">
      <Navbar fluid collapseOnSelect>
        <Navbar.Header>
          <Navbar.Brand>
            <Link to="/">Scratch</Link>
          </Navbar.Brand>
          <Navbar.Toggle />
        </Navbar.Header>
        <Navbar.Collapse>
          <Nav pullRight>
            {this.state.isAuthenticated
              ? <Fragment>
                  <LinkContainer to="/settings">
                    <NavItem>Settings</NavItem>
                  </LinkContainer>
                  <NavItem onClick={this.handleLogout}>Logout</NavItem>
                </Fragment>
              : <Fragment>
                  <LinkContainer to="/signup">
                    <NavItem>Signup</NavItem>
                  </LinkContainer>
                  <LinkContainer to="/login">
                    <NavItem>Login</NavItem>
                  </LinkContainer>
                </Fragment>
            }
          </Nav>
        </Navbar.Collapse>
      </Navbar>
      <Routes childProps={childProps} />
    </div>
  );
}
You'll notice that we added another link in the navbar for the case where a user is logged in.

Now if you head over to your app, you'll see a new Settings link at the top. Of course, the page is pretty empty right now.
$ git add .
$ git commit -m "Adding settings page"
We did not complete our Stripe account setup back then, so we don't have the live version of this key. For now we'll just assume that we have two versions of the same key.

Add the following to the dev block of our src/config.js:

STRIPE_KEY: "YOUR_STRIPE_DEV_PUBLIC_KEY",

And this to the prod block:

STRIPE_KEY: "YOUR_STRIPE_PROD_PUBLIC_KEY",
$ git add .
$ git commit -m "Adding Stripe keys to config"
this.state = {
  name: "",
  storage: "",
  isProcessing: false,
  isCardComplete: false
};
}

validateForm() {
  return (
    this.state.name !== "" &&
    this.state.storage !== "" &&
    this.state.isCardComplete
  );
}

render() {
  const loading = this.state.isProcessing || this.props.loading;

  return (
    <form className="BillingForm" onSubmit={this.handleSubmitClick}>
      <FormGroup bsSize="large" controlId="storage">
        <ControlLabel>Storage</ControlLabel>
        <FormControl
          min="0"
          type="number"
          value={this.state.storage}
          onChange={this.handleFieldChange}
          placeholder="Number of notes to store"
        />
      </FormGroup>
      <hr />
      <FormGroup bsSize="large" controlId="name">
        <ControlLabel>Cardholder's name</ControlLabel>
        <FormControl
          type="text"
          value={this.state.name}
          onChange={this.handleFieldChange}
          placeholder="Name on the card"
        />
      </FormGroup>
      <ControlLabel>Credit Card Info</ControlLabel>
      <CardElement
        className="card-field"
        onChange={this.handleCardFieldChange}
        style={{
          base: { fontSize: "18px", fontFamily: '"Open Sans", sans-serif' }
        }}
      />
      <LoaderButton
        block
        bsSize="large"
        type="submit"
        text="Purchase"
        isLoading={loading}
        loadingText="Purchasing…"
        disabled={!this.validateForm()}
      />
    </form>
  );
}
}
To begin with, we are going to wrap our component with a Stripe module using the injectStripe HOC. This gives our component access to the this.props.stripe.createToken method.

As for the fields in our form, we have an input field of type number that allows a user to enter the number of notes they want to store. We also take the name on the credit card. These are stored in the state through the this.handleFieldChange method.

The credit card number field is provided by the Stripe React SDK through the CardElement component that we import in the header.

The submit button has a loading state that is set to true when we call Stripe to get a token and when we call our billing API. However, since our Settings container is calling the billing API, we use this.props.loading to set the state of the button from the Settings container.

We also validate this form by checking if the name, the number of notes, and the card details are complete. For the card details, we use the CardElement's onChange method.

Finally, once the user completes and submits the form, we make a call to Stripe by passing in the credit card name and the credit card details (this is handled by the Stripe SDK). We call the this.props.stripe.createToken method and in return we get the token or an error back. We simply pass this and the number of notes to be stored to the settings page via the this.props.onSubmit method. We will be setting this up shortly. (A sketch of these handler methods follows.)

You can read more about how to use the React Stripe Elements here (https://github.com/stripe/react-stripe-elements).
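The handleFieldChange, handleCardFieldChange, and handleSubmitClick methods referenced above aren't shown in the listing. A sketch of what they plausibly look like, given the description:

handleFieldChange = event => {
  this.setState({
    [event.target.id]: event.target.value
  });
};

handleCardFieldChange = event => {
  // The CardElement tells us when the card details are complete
  this.setState({ isCardComplete: event.complete });
};

handleSubmitClick = async event => {
  event.preventDefault();

  const { name } = this.state;

  this.setState({ isProcessing: true });

  // Ask Stripe to tokenize the card details
  const { token, error } = await this.props.stripe.createToken({ name });

  this.setState({ isProcessing: false });

  // Hand the result and the number of notes back to the Settings page
  this.props.onSubmit(this.state.storage, { token, error });
};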
Also, let’s add some styles to the card field so it matches the rest of our UI.
Create a file at src/components/BillingForm.css .
.BillingForm .card-field {
  margin-bottom: 15px;
  background-color: white;
  padding: 11px 16px;
  border-radius: 6px;
  border: 1px solid #CCC;
  box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075);
  line-height: 1.3333333;
}

.BillingForm .card-field.StripeElement--focus {
  box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075), 0 0 8px rgba(102, 175, 233, .6);
  border-color: #66AFE9;
}
$ git add .
$ git commit -m "Adding a billing form"
Next, include the Stripe.js script in our public/index.html, so the Stripe SDK is available on the page:

<script src="https://js.stripe.com/v3/"></script>
try {
await this.billUser({
storage,
source: token.id
});
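This snippet is part of the handleFormSubmit method in our Settings container. The full method isn't shown above; a sketch consistent with the description that follows (the alerts and redirect are assumptions):

handleFormSubmit = async (storage, { token, error }) => {
  if (error) {
    alert(error);
    return;
  }

  this.setState({ isLoading: true });

  try {
    await this.billUser({
      storage,
      source: token.id
    });

    // Let the user know and head back to the homepage
    alert("Your card has been charged successfully!");
    this.props.history.push("/");
  } catch (e) {
    alert(e);
    this.setState({ isLoading: false });
  }
};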
We are adding the BillingForm component that we previously created here and passing in the loading and onSubmit props that we referenced in the last chapter. In the handleFormSubmit method, we are checking if the Stripe method from the last chapter returned an error. And if things look okay, then we call our billing API and redirect to the home page after letting the user know.

An important detail here is the StripeProvider and the Elements components that we are using. The StripeProvider component lets the Stripe SDK know that we want to call the Stripe methods using config.STRIPE_KEY. And it needs to wrap around the top level of our billing form. Similarly, the Elements component needs to wrap around any component that is going to be using the CardElement Stripe component.

Finally, let's handle some styles for our settings page as a whole.
@media all and (min-width: 480px) {
  .Settings form {
    margin: 0 auto;
    max-width: 480px;
  }
}
This ensures that our form displays properly for larger screens.
And that’s it. We are ready to test our Stripe form. Head over to your browser and try picking
the number of notes you want to store and use the following for your card details:
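You can use Stripe's standard test values here: card number 4242 4242 4242 4242, any future expiry date, any CVC, and any ZIP code.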
If everything is set correctly, you should see the success message and you’ll be redirected to
the homepage.
$ git add .
$ git commit -m "Connecting the billing form"
Next, we'll set up automatic deployments for our React app using a service called Netlify (https://www.netlify.com). This will be fairly similar to what we did for our serverless backend API.
In the next few chapters we are going to be using a service called Netlify (https://www.netlify.com) to automate our deployments. It's a little like what we did for our serverless API backend. We'll configure it so that it'll deploy our React app when we push our changes to Git. However, there are a couple of subtle differences between the way we configure our backend and frontend deployments.

1. Netlify hosts the React app on their infrastructure. In the case of our serverless API backend, it was hosted in our AWS account.
2. Any changes that are pushed to our master branch will update the production version of our React app. This means that we'll need to use a slightly different workflow than our backend. We'll use a separate branch where we will do most of our development, and only push to master once we are ready to update production.

Just as in the case of our backend, we could use Travis CI (https://travis-ci.org) or Circle CI (https://circleci.com) for this, but it can take a bit more configuration and we'll cover that in a different chapter.
Start by adding a build configuration for Netlify; a netlify.toml in our project root. The default build settings look like this:

[build]
  base    = ""
  publish = "build"
  command = "REACT_APP_STAGE=dev npm run build"
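The context-specific blocks that the next paragraphs describe follow the same pattern; a sketch (only the stage changes per context):

[context.production]
  command = "REACT_APP_STAGE=prod npm run build"

[context.branch-deploy]
  command = "REACT_APP_STAGE=dev npm run build"

[context.deploy-preview]
  command = "REACT_APP_STAGE=dev npm run build"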
The build script is configured based on contexts. There is a default one right up top. There are three parts to this:

1. The base is the directory where Netlify will run our build commands. In our case it is the project root. So this is left empty.
2. The publish option points to where our build is generated. In the case of Create React App it is the build directory in our project root.
3. The command option is the build command that Netlify will use. If you recall the Manage environments in Create React App (/chapters/manage-environments-in-create-react-app.html) chapter, this will seem familiar. In the default context the command is REACT_APP_STAGE=dev npm run build.

The production context, labelled context.production, is the only one where we set the REACT_APP_STAGE variable to prod. This is used when we push to master. The branch-deploy context is what will be used when we push to any other non-production branch. And deploy-preview is for pull requests.
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test --env=jsdom",
"eject": "react-scripts eject"
}
You'll notice we are getting rid of our old build and deploy scripts. We are not going to be deploying to S3.
$ git add .
$ git commit -m "Adding a Netlify build script"
$ git push
Next, create a new site by hitting the New site from Git button. Pick GitHub as your provider. Then pick your project from the list.

It'll default the branch to master. We can now deploy our app! Hit Deploy site.

This should deploy our app. Once it is done, click on the deployment. And you should see your app in action!

Of course, it is hosted on a Netlify URL. We'll change that by configuring custom domains next.
The following section assumes that you completed Part I (/#part-1) independently and that the custom domains below are being set up from scratch. However, if you've just completed Part I, then you have a couple of options:

You might be working through this guide to build an app as opposed to learning how to build one. If that's the case, it doesn't make sense to have two versions of the frontend floating around. You'll need to disconnect the domain from Part I. To do that, remove the Route 53 record sets that we created for the apex domain (/chapters/setup-your-domain-with-cloudfront.html#point-domain-to-cloudfront-distribution) and the www domain (/chapters/setup-www-domain-redirect.html#point-www-domain-to-cloudfront-distribution) in Part I.

If you are not sure about the above options or have any questions, post a comment in the discussion thread that we link to at the bottom of the chapter.
Let’s get started!
This will ask you to verify that you are the owner of this domain and to add it. Click Yes, add domain.

Next hit Check DNS configuration. This will show you the instructions for setting up your domain through Route 53.

Set Name to www, Type to CNAME - Canonical name, and the value to the Netlify site name as we noted above. In our case it is https://serverless-stack-2-client.netlify.com. Hit Create.

And give the DNS around 30 minutes to update.
Configure SSL
Back in Netlify, hit HTTPS in the side panel. And it should say that it is waiting for the DNS to propagate.

Once that is complete, Netlify will automatically provision your SSL certificate using Let's Encrypt. Wait a few seconds for the certificate to be provisioned.

Now if you head over to your browser and go to your custom domain, your notes app should be up and running!
We have our app in production but we haven't had a chance to go through our workflow just yet. Let's take a look at that next.
Let’s make a faulty commit just so we can go over the process of rolling back as well.
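Since we only want this change to trigger a branch deploy (and not update production), make it on a separate branch first. The command isn't shown here, and the branch name is an assumption:

$ git checkout -b new-feature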
renderLander() {
  return (
    <div className="lander">
      <h1>Scratch</h1>
      <p>A very expensive note taking app</p>
      <div>
        <Link to="/login" className="btn btn-info btn-lg">
          Login
        </Link>
        <Link to="/signup" className="btn btn-success btn-lg">
          Signup
        </Link>
      </div>
    </div>
  );
}
$ git add .
$ git commit -m "Committing a typo"
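And push the branch so Netlify picks it up (command assumed):

$ git push -u origin new-feature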
Now if you hop over to your Netlify project page, you'll see a new branch deploy in action. Wait for it to complete and click on it.

Push to Production

Now if we feel happy with the changes, we can push this to production just by merging to master.
And that’s it! Now you have an automated workflow for building and deploying your Create
React App with serverless.
Cleanup
Let’s quickly cleanup our changes.
renderLander() {
  return (
    <div className="lander">
      <h1>Scratch</h1>
      <p>A simple note taking app</p>
      <div>
        <Link to="/login" className="btn btn-info btn-lg">
          Login
        </Link>
        <Link to="/signup" className="btn btn-success btn-lg">
          Signup
        </Link>
      </div>
    </div>
  );
}
$ git add .
$ git commit -m "Fixing a typo"
$ git push
This will create a new deployment to our live site! Let's wrap up the guide next.

We've covered how to build and deploy our backend serverless API and our frontend serverless app. And it works well not just on the desktop, but on mobile as well.
We’d love to hear from you about your experience following this guide. Please send us any
comments or feedback you might have, via email (mailto:[email protected]). We’d love to
feature your comments here. Also, if you’d like us to cover any of the chapters or concepts in a
bit more detail, feel free to let us know (mailto:[email protected]).
If you have found any other guides or tutorials helpful in building your serverless app, feel free
to edit this page and submit a PR. Or you can let us know via the comments.
Below is a list of all the chapters that are available in multiple languages. If you are interested in helping with our translation efforts, leave us a comment here (https://discourse.serverless-stack.com/t/help-us-translate-serverless-stack/596/15).

Deploy to S3 (/chapters/deploy-to-s3.html)
ko: Deploy to S3 (/chapters/ko/deploy-to-s3.html)

Create a CloudFront Distribution (/chapters/create-a-cloudfront-distribution.html)
ko: Create a CloudFront Distribution (/chapters/ko/create-a-cloudfront-distribution.html)

Wrapping Up (/chapters/wrapping-up.html)
ko: 마무리하며 (/chapters/ko/wrapping-up.html)

Translations (/chapters/translations.html)
ko: 번역 (/chapters/ko/translations.html)

Changelog (/chapters/changelog.html)
ko: 변경로그 (/chapters/ko/changelog.html)
A big thanks to our contributors for helping make Serverless Stack more accessible!

The content on this site is kept up to date thanks in large part to our community and our readers. Submit a Pull Request (https://github.com/AnomalyInnovations/serverless-stack-com/compare) to fix any typos or errors you might find.

Serverless Stack is reliant on a large number of services and open source libraries and projects. The screenshots for the services and the dependencies need to be updated every once in a while. Here are a few more details on this (https://github.com/AnomalyInnovations/serverless-stack-com/blob/master/CONTRIBUTING.md#keep-the-core-guide-updated).

Our incredible readers are helping translate Serverless Stack into multiple languages. You can check out our progress here (/chapters/translations.html). If you would like to help with our translation efforts, leave us a comment here (https://discourse.serverless-stack.com/t/help-us-translate-serverless-stack/596/15).
Improve tooling

Currently we do a lot of manual work to publish updates and maintain the tutorial. You can help by contributing to improve the process. Here are some more details on what we need help with (https://github.com/AnomalyInnovations/serverless-stack-com/blob/master/CONTRIBUTING.md#improve-tooling).

We rely on our GitHub repo for everything from hosting this site to code samples and comments. Starring our repo (https://github.com/AnomalyInnovations/serverless-stack-com) helps us get the word out.

Also, if you have any other ideas on how to contribute, feel free to let us know via email (mailto:[email protected]).
Below are the updates we've made to Serverless Stack.
While the hosted version of the tutorial and the code snippets are accurate, the sample project repo that is linked at the bottom of each chapter is unfortunately not. We do however maintain the past versions of the completed sample project repo. So you should be able to use those to figure things out. All this info is also available on the releases page (https://github.com/AnomalyInnovations/serverless-stack-com/releases) of our GitHub repo (https://github.com/AnomalyInnovations/serverless-stack-com).

You can get these updates emailed to you via our newsletter (https://emailoctopus.com/lists/1c11b9a8-1500-11e8-a3c9-06b79b628af2/forms/subscribe).
Changes
v3.4: Updating to serverless-bundle and on-demand DynamoDB (https://branchv34--serverless-stack.netlify.com/) (Current)

Jul 18, 2019: Updating to the serverless-bundle plugin and On-Demand Capacity for DynamoDB.

Jan 27, 2019: Adding CORS headers to API Gateway 4xx and 5xx errors.

Nov 1, 2018: Refactoring async Lambda functions to return instead of using the callback.

Oct 5, 2018: Updated the frontend React app to use Create React App v2.

Oct 5, 2018: Added new chapters on Facebook login with AWS Amplify and mapping Identity Id with User Pool Id. Also, added a new series of chapters on forgot password, change email and password.

Tutorial changes (https://github.com/AnomalyInnovations/serverless-stack-com/compare/v3.2...v3.3)
Facebook Login Client (https://github.com/AnomalyInnovations/serverless-stack-demo-fb-login-client)
User Management Client (https://github.com/AnomalyInnovations/serverless-stack-demo-user-mgmt-client)

Aug 18, 2018: Adding a new section on organizing Serverless applications. Outlining how to use CloudFormation cross-stack references to link multiple Serverless services.

May 24, 2018: CloudFormation now supports UsernameAttributes. This means that we don't need the email-as-alias workaround.

May 10, 2018: Adding a new part to the guide to help create a production ready version of the note taking app. Discussion on the update (https://discourse.serverless-stack.com/t/serverless-stack-update-part-ii/194).

Apr 11, 2018: Updating the backend to use the Node.js starter and Lambda Node v8.10. Discussion on the update (https://github.com/AnomalyInnovations/serverless-stack-com/issues/223).

Mar 21, 2018: Updating the backend to use Webpack 4 and serverless-webpack 5. Updating the frontend to use AWS Amplify. Verifying the SSL certificate now uses DNS validation. Discussion on the update (https://github.com/AnomalyInnovations/serverless-stack-com/issues/123).

Feb 5, 2018: Using a specific Bootstrap CSS version since latest now points to Bootstrap v4, but React-Bootstrap uses v3.

Dec 31, 2017: Updated to React 16 and fixed a sigv4Client.js IE11 issue (https://github.com/AnomalyInnovations/serverless-stack-com/issues/114#issuecomment-349938586).

Dec 30, 2017: Updated the serverless backend to use the babel-preset-env plugin and added a note to the Deploy to S3 chapter on reducing React app bundle size.

Sep 16, 2017: Upgrading the serverless backend to the serverless-webpack plugin v3. The new version of the plugin changes some of the commands used to test the serverless backend. Discussion on the update (https://github.com/AnomalyInnovations/serverless-stack-com/issues/130).

Aug 30, 2017: Fixing some issues with session handling in the React app. A few minor updates bundled together. Discussion on the update (https://github.com/AnomalyInnovations/serverless-stack-com/issues/123).

July 19, 2017: Switching to using IAM as an authorizer instead of authenticating directly with the User Pool. This was a major update to the tutorial. Discussion on the update (https://github.com/AnomalyInnovations/serverless-stack-com/issues/108).

API (https://github.com/AnomalyInnovations/serverless-stack-demo-api/releases/tag/v0.9)
Client (https://github.com/AnomalyInnovations/serverless-stack-demo-client/releases/tag/v0.9)
To help people stay up to date with the changes, we run the Serverless Stack newsletter (https://emailoctopus.com/lists/1c11b9a8-1500-11e8-a3c9-06b79b628af2/forms/subscribe). The newsletter is a short summary of the updates we make to the guide.
First let's start by quickly looking at the common terms used when talking about Serverless Framework projects.

Service

A service is what you might call a Serverless project. It has a single serverless.yml file driving it.

Application

An application or app is a collection of multiple services.

Now let's look at the most common pattern for organizing serverless projects.
Microservices + Mono-Repo

Mono-repo, as the term suggests, is the idea of a single repository. This means that your entire application and all its services are in a single repository.

The microservice pattern on the other hand is the concept of keeping each of your services modular and lightweight. So for example, if your app allows users to create profiles and submit posts, you could have a service that deals with user profiles and one that deals with posts.

The directory structure of your entire application under the microservice + mono-repo pattern would look something like this.
|- services/
|--- posts/
|----- get.js
|----- list.js
|----- create.js
|----- update.js
|----- delete.js
|----- serverless.yml
|--- users/
|----- get.js
|----- list.js
|----- create.js
|----- update.js
|----- delete.js
|----- serverless.yml
|- lib/
|- package.json
1. We are going over a Node.js project here, but this pattern applies to other languages as well.
2. The services/ dir at the root is made up of a collection of services, where a service contains a single serverless.yml file.
3. Each service deals with a relatively small and self-contained function. So for example, the posts service deals with everything from creating to deleting posts. Of course, the degree to which you want to separate your application is entirely up to you.
4. The package.json (and the node_modules/ dir) are at the root of the repo. However, it is fairly common to have a separate package.json inside each service directory.
5. The lib/ dir is just to illustrate that any common code that might be used across all services can be placed in here.
6. To deploy this application you are going to need to run serverless deploy separately in each of the services.
7. Environments (or stages) (/chapters/stages-in-serverless-framework.html) need to be coordinated across all the different services. So if your team is using a dev, staging, and prod environment, then you are going to need to define the specifics of this in each of the services.
Advantages of Mono-Repo

The microservice + mono-repo pattern has grown in popularity for a couple of reasons:

1. Lambda functions are a natural fit for a microservice based architecture. This is due to a few reasons. Firstly, the performance of Lambda functions is related to the size of the function. Secondly, debugging a Lambda function that deals with a specific event is much easier. Finally, it is just easier to conceptually relate a Lambda function with a single event.
2. The easiest way to share code between services is by having them all together in a single repository. Even though your services end up dealing with separate portions of your app, they still might need to share some code between them. Say for example, you have some code that formats your requests and responses in your Lambda functions. This would ideally be used across the board and it would not make sense to replicate this code in all the services.
Disadvantages of Mono-Repo

Before we go through alternative patterns, let's quickly look at the drawbacks of the microservice + mono-repo pattern.

1. Microservices can grow out of control and each added service increases the complexity of your application.
2. This also means that you can end up with hundreds of Lambda functions.
3. Managing deployments for all these services and functions can get complicated.

Most of the issues described above start to appear when your application begins to grow. However, there are services that help you deal with some of these issues. Services like IOpipe (https://www.iopipe.com), Epsagon (https://epsagon.com), and Dashbird (https://dashbird.io) help you with observability of your Lambda functions. And our own Seed (https://seed.run) helps you with managing deployments and environments of mono-repo Serverless Framework applications.
Multi-Repo

The obvious counterpart to the mono-repo pattern is the multi-repo approach. In this pattern each of your repositories has a single Serverless Framework project. A couple of drawbacks:

1. Sharing code between your services is much harder, since they now live in separate repositories.
2. Due to the friction involved in code sharing, we typically see each service (or repo) grow in the number of Lambda functions. This can cause you to hit the CloudFormation resource limit and get a deployment error that looks like:
Error --------------------------------------------------
Even with the disadvantages, the multi-repo pattern does have its place. We have come across cases where some infrastructure related pieces (setting up DynamoDB, Cognito, etc.) are done in a service that is placed in a separate repo. And since this typically doesn't need a lot of code or even share anything with the rest of your application, it can live on its own. So in effect you can run a multi-repo setup where the standalone repos are for your infrastructure and your API endpoints live in a microservice + mono-repo setup.
Monolith
The monolith pattern involves taking advantage of API Gateway's {proxy+} and ANY method to route all the requests to a single Lambda function. In this Lambda function you can potentially run an application server like Express (https://expressjs.com). So as an example, all the API requests below would be handled by the same Lambda function.
GET https://api.example.com/posts
POST https://api.example.com/posts
PUT https://api.example.com/posts
DELETE https://api.example.com/posts
GET https://api.example.com/users
POST https://api.example.com/users
PUT https://api.example.com/users
DELETE https://api.example.com/users
And the specific section in your serverless.yml might look like the following:

handler: app.main
events:
  - http:
      method: any
      path: /{proxy+}

Where the main function in your app.js is responsible for parsing the routes and figuring out the HTTP methods to do the specific action necessary.
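As a rough sketch (the posts module and its handlers here are purely illustrative), that main function might look like:

import * as posts from "./posts";

export async function main(event, context, callback) {
  // API Gateway's {proxy+} path and ANY method pass the request's
  // path and HTTP method through in the event
  const route = `${event.httpMethod} ${event.path}`;

  switch (route) {
    case "GET /posts":
      return posts.list(event, context, callback);
    case "POST /posts":
      return posts.create(event, context, callback);
    default:
      return callback(null, { statusCode: 404, body: "Not Found" });
  }
}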
The biggest drawback here is that the size of your functions keeps growing. And this can affect the performance of your functions. It also makes it harder to debug your Lambda functions.
And that should roughly cover the main ways to organize your Serverless Framework applications. Hopefully, this chapter has given you a good overview of the various approaches involved along with their benefits and drawbacks.
In the next series of chapters we'll be looking at how to work with multiple services in your Serverless Framework application.
You might recall that a Serverless service is where a single serverless.yml is used to define the project. And the serverless.yml file is converted into a CloudFormation template (https://aws.amazon.com/cloudformation/aws-cloudformation-templates/) using Serverless Framework. This means that in the case of multiple services you might need to reference a resource that is available in a different service. For example, you might have your DynamoDB tables created in one service, and your APIs (which are in another service) need to refer to them. Of course, you don't want to hard code this. And so over the next few chapters we will be breaking down the note taking application (https://github.com/AnomalyInnovations/serverless-stack-demo-api) into multiple services to illustrate how to do this.
However, before we do, we need to cover the concept of cross-stack references. A cross-stack reference is a way for one CloudFormation template to refer to a resource in another CloudFormation template. It involves two steps:
1. Use the Export: flag in the Outputs: section of the serverless.yml of the service you would like to reference.
2. Then in the service where you want to use the reference, use the Fn::ImportValue CloudFormation function.
So as a quick example (we will go over this in detail shortly), say you wanted to refer to the DynamoDB table across services.
1. First, export the table name in your DynamoDB service using the Export: flag:

resources:
  Resources:
    NotesTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: notes
        # ...
  Outputs:
    NotesTableName:
      Value:
        Ref: NotesTable
      Export:
        Name: NotesTableName

2. Then, in the service that needs the table, import the exported value:

'Fn::ImportValue': NotesTableName
The Fn::ImportValue function takes the export name and returns the exported value. In this case the imported value is the DynamoDB table name.
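For example (a sketch), the importing service could make the value available to its Lambda functions as an environment variable:

provider:
  environment:
    # Available as process.env.tableName inside the functions
    tableName:
      'Fn::ImportValue': NotesTableName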
Now before we dig into the details of cross-stack references in Serverless, let's quickly go over a few of their characteristics.
Cross-stack references only apply within a single region. Meaning that an exported value can be referenced by any service in that region.
If a service's export is being referenced in another stack, the service cannot be removed. So for the above example, you won't be able to remove the DynamoDB service if it is still being referenced in the API service.
The services need to be deployed in a specific order. The service that is exporting a value needs to be deployed before the one doing the importing. Using the above example again, the DynamoDB service needs to be deployed before the API service.
Advantages of Cross-Stack References
As your application grows, it can become hard to track the dependencies between the services in the application. And cross-stack references can help with that. They create a strong link between the services. As a comparison, if you were to refer to the linked resource by hard coding the value, it'll be difficult to keep track of it as your application grows.
The other advantage is that you can recreate the entire application (say for testing) with ease. This is because none of the services of your application are statically linked to each other.
Example Setup
Cross-stack references can be very useful, but some aspects of them can be a little confusing and the documentation can make them hard to follow. To illustrate the various ways to use cross-stack references in Serverless, we are going to split up our note taking app (https://github.com/AnomalyInnovations/serverless-stack-demo-api) into a mono-repo app with multiple services that are connected through cross-stack references (https://github.com/AnomalyInnovations/serverless-stack-demo-mono-api). Here is how we are going to split up the app:
1. Create a database service with our DynamoDB table in it.
2. Create an uploads service with our S3 bucket in it.
3. Create two separate API services.
4. In the first API service, refer to the DynamoDB service using a cross-stack reference.
5. In the second API service, do the same as the first. And additionally, link to the first API service so that we can use the same API Gateway domain as the first.
6. Secure all our resources with a Cognito User Pool. And with an Identity Pool, create an IAM role that gives authenticated users permissions to the resources we created.
We are splitting up our app this way mainly to illustrate how to use cross-stack references. But you can split it up in a way that makes more sense for you. For example, you might choose to have all your infrastructure resources (DynamoDB and S3) in one service, your APIs in another, and your auth in a separate service.
We've also created a separate GitHub repo with a working example (https://github.com/AnomalyInnovations/serverless-stack-demo-mono-api) of the above setup that you can use for reference. We'll be linking to it at the bottom of each of the following chapters.
service: notes-app-mono-database

custom:
  # Our stage is based on what is passed in when running serverless
  # commands. Or falls back to what we have set in the provider section.
  stage: ${opt:stage, self:provider.stage}
  # Set the table name here so we can use it while testing locally
  tableName: ${self:custom.stage}-mono-notes
  # Set our DynamoDB throughput for prod and all other non-prod stages.
  tableThroughputs:
    prod: 5
    default: 1
  tableThroughput: ${self:custom.tableThroughputs.${self:custom.stage}, self:custom.tableThroughputs.default}

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-1

resources:
  Resources:
    NotesTable:
      Type: AWS::DynamoDB::Table
      Properties:
        # Generate a name based on the stage
        TableName: ${self:custom.tableName}
        AttributeDefinitions:
          - AttributeName: userId
            AttributeType: S
          - AttributeName: noteId
            AttributeType: S
        KeySchema:
          - AttributeName: userId
            KeyType: HASH
          - AttributeName: noteId
            KeyType: RANGE
        # Set the capacity based on the stage
        ProvisionedThroughput:
          ReadCapacityUnits: ${self:custom.tableThroughput}
          WriteCapacityUnits: ${self:custom.tableThroughput}
  Outputs:
    NotesTableArn:
      Value:
        Fn::GetAtt:
          - NotesTable
          - Arn
      Export:
        Name: ${self:custom.stage}-NotesTableArn
1. We are exporting one value here: the NotesTableArn. This is the ARN (/chapters/what-is-an-arn.html) of the DynamoDB table that we are creating. The ARN is necessary for any IAM roles that are going to reference the DynamoDB table.
2. The export name is based on the stage we are using to deploy this service - ${self:custom.stage}. This is important because we want our entire application to be easily replicable across multiple stages. If we don't include the stage name, the exports will thrash when we deploy to multiple stages.
4. We get the table ARN by using the Fn::GetAtt CloudFormation function. This function takes a reference from the current service and the attribute we need. The reference in this case is NotesTable. You'll notice that the table we created in the Resources: section is created using NotesTable as the name.
When we deploy this service we'll notice the exported values in the output, and we can reference them cross-stack in our other services.
service: notes-app-mono-uploads

custom:
  # Our stage is based on what is passed in when running serverless
  # commands. Or falls back to what we have set in the provider section.
  stage: ${opt:stage, self:provider.stage}

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-1

resources:
  Resources:
    S3Bucket:
      Type: AWS::S3::Bucket
      Properties:
        # Set the CORS policy
        CorsConfiguration:
          CorsRules:
            - AllowedOrigins:
                - '*'
              AllowedHeaders:
                - '*'
              AllowedMethods:
                - GET
                - PUT
                - POST
                - DELETE
                - HEAD
              MaxAge: 3000
  Outputs:
    AttachmentsBucketArn:
      Value:
        Fn::GetAtt:
          - S3Bucket
          - Arn
      Export:
        Name: ${self:custom.stage}-AttachmentsBucketArn
    AttachmentsBucketName:
      Value:
        Ref: S3Bucket
      Export:
        Name: ${self:custom.stage}-AttachmentsBucket
Most of the Resources: section should be fairly straightforward and is based on Part II of this guide (/chapters/configure-s3-in-serverless.html). So let's go over the cross-stack exports in the Outputs: section.
3. We can get the ARN by using the Fn::GetAtt function by passing in the ref (S3Bucket) and the attribute we need (Arn).
4. And finally, we can get the bucket name by just using the ref (S3Bucket). Note that unlike the DynamoDB table name, the S3 bucket name is auto-generated. So while we could get away with not exporting the DynamoDB table name, in the case of S3 we need to export it.
Now that we have the main infrastructure pieces created, let's take a look at our APIs next. For illustrative purposes we are going to create two separate API services and look at how to group them under the same API Gateway domain.
In this chapter we will look at how to work with API Gateway across multiple services. A challenge that you run into when splitting your APIs into multiple services is sharing the same domain for them. You might recall that APIs that are created as a part of a Serverless service get their own unique URL that looks something like:

https://z6pv80ao4l.execute-api.us-east-1.amazonaws.com/dev

When you attach a custom domain for your API, it is attached to a specific endpoint like the one above. This means that if you create multiple API services, they will all have unique endpoints.
You can assign different base paths for your custom domains. For example, api.example.com/notes can point to one service while api.example.com/users can point to another. But if you try to split your notes service up, you'll face the challenge of sharing the custom domain across them.
In this chapter we will look at how to share the API Gateway project across multiple services. For this we will create two separate Serverless services for our APIs. The first is the notes service. This is the same one we've used in our note taking app (https://demo2.serverless-stack.com) so far. But for this chapter we will simplify the number of endpoints to focus on the cross-stack aspects of it. For the second service, we'll create a simple users service. This service isn't a part of our note taking app. We just need it to demonstrate the concepts in this chapter.

Multiple API Services

We are going to be creating a notes and a users service using the following setup:
The notes service is going to be our main API service and the users service is going to link to it. This means that the users service will refer to the notes service.
The notes service will be under the /notes dir and the users service will be under the /users dir.
Notes Service

First let's look at the notes service. We need to connect it to the DynamoDB service that we previously created (/chapters/dynamodb-as-a-serverless-service.html). In the example repo (https://github.com/AnomalyInnovations/serverless-stack-demo-mono-api), you'll notice that we have a notes service in the services/ directory with a serverless.yml.
service: notes-app-mono-notes

custom:
  # Our stage is based on what is passed in when running serverless
  # commands. Or falls back to what we have set in the provider section.
  stage: ${opt:stage, self:provider.stage}

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-1

  # These environment variables are made available to our functions
  # under process.env.
  environment:
    tableName: ${file(../database/serverless.yml):custom.tableName}

  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      # Restrict our IAM role permissions to
      # the specific table for the stage
      Resource:
        - 'Fn::ImportValue': ${self:custom.stage}-NotesTableArn

functions:
  get:
    # Defines an HTTP API endpoint that calls the main function in handler.js
    # - path: url path is /notes
    # - method: GET request
    # - cors: enabled CORS (Cross-Origin Resource Sharing) for browser cross
    #   domain api call
    # - authorizer: authenticate using the AWS IAM role
    handler: handler.main
    events:
      - http:
          path: notes
          method: get
          cors: true
          authorizer: aws_iam

resources:
  Outputs:
    ApiGatewayRestApiId:
      Value:
        Ref: ApiGatewayRestApi
      Export:
        Name: ${self:custom.stage}-ApiGatewayRestApiId

    ApiGatewayRestApiRootResourceId:
      Value:
        Fn::GetAtt:
          - ApiGatewayRestApi
          - RootResourceId
      Export:
        Name: ${self:custom.stage}-ApiGatewayRestApiRootResourceId
1. The Lambda functions in our service need to know which DynamoDB table to connect to. To do this we are importing the table name from the serverless.yml of that service. We do this using ${file(../database/serverless.yml):custom.tableName}. This is basically telling Serverless Framework to look for the serverless.yml file in the services/database/ directory, and in that file look for the custom variable called tableName. We set this value as an environment variable so that we can use process.env.tableName in our Lambda functions to find the generated name of our notes table (see the sketch after this list).
2. Next, we need to give our Lambda function permission to talk to this table by adding an IAM policy. The IAM policy needs the ARN (/chapters/what-is-an-arn.html) of the table. This is the first time we are using the import portion of our cross-stack reference. Back in the chapter where we created the DynamoDB service (/chapters/dynamodb-as-a-serverless-service.html), we exported ${self:custom.stage}-NotesTableArn. And we can refer to it with 'Fn::ImportValue': ${self:custom.stage}-NotesTableArn.
3. We are going to export a couple of values in this service to be able to share this API Gateway resource in our users service.
4. The first cross-stack reference that needs to be shared is the API Gateway Id that is created as a part of this service. We are going to export it with the name ${self:custom.stage}-ApiGatewayRestApiId. Again, we want the exports to work across all our environments/stages, and so we include the stage name as a part of it. The value of this export is available as a reference in our current stack called ApiGatewayRestApi.
5. Finally, we also need to export the RootResourceId. This is a reference to the / path of this API Gateway project. To get this Id we use the Fn::GetAtt CloudFormation function, pass in the current ApiGatewayRestApi, and look up the attribute RootResourceId. We export this using the name ${self:custom.stage}-ApiGatewayRestApiRootResourceId.
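Here is a minimal sketch of how a Lambda function in this service might use the imported table name (the query key used here is illustrative):

import AWS from "aws-sdk";

const dynamoDb = new AWS.DynamoDB.DocumentClient();

export async function main(event, context) {
  const params = {
    // The generated table name comes through the environment variable
    TableName: process.env.tableName,
    KeyConditionExpression: "userId = :userId",
    ExpressionAttributeValues: {
      ":userId": event.requestContext.identity.cognitoIdentityId
    }
  };

  const result = await dynamoDb.query(params).promise();

  return {
    statusCode: 200,
    body: JSON.stringify(result.Items)
  };
}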
Users Service

In the example repo (https://github.com/AnomalyInnovations/serverless-stack-demo-mono-api), open the users service in the services/ directory.
service: notes-app-mono-users

custom:
  # Our stage is based on what is passed in when running serverless
  # commands. Or falls back to what we have set in the provider section.
  stage: ${opt:stage, self:provider.stage}

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-1

  apiGateway:
    restApiId:
      'Fn::ImportValue': ${self:custom.stage}-ApiGatewayRestApiId
    restApiRootResourceId:
      'Fn::ImportValue': ${self:custom.stage}-ApiGatewayRestApiRootResourceId

  # These environment variables are made available to our functions
  # under process.env.
  environment:
    tableName: ${file(../database/serverless.yml):custom.tableName}

  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      # Restrict our IAM role permissions to
      # the specific table for the stage
      Resource:
        - 'Fn::ImportValue': ${self:custom.stage}-NotesTableArn

functions:
  get:
    # Defines an HTTP API endpoint that calls the main function in handler.js
    # - path: url path is /users
    # - method: GET request
    # - cors: enabled CORS (Cross-Origin Resource Sharing) for browser cross
    #   domain api call
    # - authorizer: authenticate using the AWS IAM role
    handler: handler.main
    events:
      - http:
          path: users
          method: get
          cors: true
          authorizer: aws_iam
Just as in the notes service, we are referencing our DynamoDB table name using ${file(../database/serverless.yml):custom.tableName} and the table ARN using 'Fn::ImportValue': ${self:custom.stage}-NotesTableArn.
To share the same API Gateway domain as our notes service, we are adding an apiGateway: section to the provider: block.
1. Here we state that we want to use the restApiId of our notes service. We do this by using the cross-stack reference 'Fn::ImportValue': ${self:custom.stage}-ApiGatewayRestApiId that we had exported above.
2. We also state that we want all the APIs in our service to be linked under the root path of our notes service. We do this by setting the restApiRootResourceId to the cross-stack reference 'Fn::ImportValue': ${self:custom.stage}-ApiGatewayRestApiRootResourceId from above.
Finally, we don't need to export anything in this service since we aren't creating any new resources that need to be referenced.
The key thing to note in this setup is that API Gateway needs to know where to attach the routes that are created in this service. We want the /users path to be attached to the root of our API Gateway project. Hence the restApiRootResourceId points to the root resource of our notes service. Of course, we don't have to do it this way. We could organize our services such that the /users path is created in our main API service and we link to it here.
Next, let's tie our entire stack together and secure it using a Cognito User Pool and Identity Pool.
service: notes-app-mono-auth

custom:
  # Our stage is based on what is passed in when running serverless
  # commands. Or falls back to what we have set in the provider section.
  stage: ${opt:stage, self:provider.stage}

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-1

resources:
  Resources:
    CognitoUserPool:
      Type: AWS::Cognito::UserPool
      Properties:
        # Generate a name based on the stage
        UserPoolName: ${self:custom.stage}-mono-user-pool
        # Set email as an alias
        UsernameAttributes:
          - email
        AutoVerifiedAttributes:
          - email

    CognitoUserPoolClient:
      Type: AWS::Cognito::UserPoolClient
      Properties:
        # Generate an app client name based on the stage
        ClientName: ${self:custom.stage}-mono-user-pool-client
        UserPoolId:
          Ref: CognitoUserPool
        ExplicitAuthFlows:
          - ADMIN_NO_SRP_AUTH
        GenerateSecret: false

    # IAM roles
    CognitoIdentityPoolRoles:
      Type: AWS::Cognito::IdentityPoolRoleAttachment
      Properties:
        IdentityPoolId:
          Ref: CognitoIdentityPool
        Roles:
          authenticated:
            Fn::GetAtt: [CognitoAuthRole, Arn]

  # Print out the Id of the User Pool and Identity Pool that are created
  Outputs:
    UserPoolId:
      Value:
        Ref: CognitoUserPool
    UserPoolClientId:
      Value:
        Ref: CognitoUserPoolClient
    IdentityPoolId:
      Value:
        Ref: CognitoIdentityPool
This can seem like a lot, but the CognitoUserPool: and the CognitoUserPoolClient: sections are simply creating our Cognito User Pool. And you'll notice that neither of these sections uses any cross-stack references. They are effectively standalone. If you are looking for more details on this, refer to Part II of this guide (/chapters/configure-cognito-user-pool-in-serverless.html).
The CognitoIdentityPool: section creates the Identity Pool and states that the Cognito User Pool that we created above is going to be our auth provider.
The Identity Pool has an IAM role attached to its authenticated and unauthenticated users. Since we only allow authenticated users in our note taking app, we only have one role. The CognitoIdentityPoolRoles: section states that we have an authenticated user role that we are going to create below, and we are referencing it here by doing Fn::GetAtt: [CognitoAuthRole, Arn].
Finally, the CognitoAuthRole: section creates the IAM role that will allow access to our API and S3 file uploads bucket.
The API Gateway resource in our IAM role looks something like:

arn:aws:execute-api:us-east-1:12345678:qwe123rty456/*

Where us-east-1 is the region, 12345678 is the AWS account Id, and qwe123rty456 is the API Gateway Resource Id. To construct this dynamically we need the cross-stack reference of the API Gateway Resource Id that we exported in the API Gateway chapter (/chapters/api-gateway-domains-across-services.html). And we can import it like so:

'Fn::ImportValue': ${self:custom.stage}-ApiGatewayRestApiId

Again, all of our references are based on the stage we are deploying to.
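A sketch of how this ARN could be assembled in the role's policy statement (the surrounding role definition is omitted here):

Resource:
  Fn::Join:
    - ''
    - - 'arn:aws:execute-api:'
      - Ref: AWS::Region
      - ':'
      - Ref: AWS::AccountId
      - ':'
      - 'Fn::ImportValue': ${self:custom.stage}-ApiGatewayRestApiId
      - '/*'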
The S3 bucket, on the other hand, has a resource that looks something like:

"arn:aws:s3:::my_s3_bucket/private/${cognito-identity.amazonaws.com:sub}/*"

Where my_s3_bucket is the name of the bucket. We are going to use the generated name that we exported back in the S3 bucket chapter (/chapters/s3-as-a-serverless-service.html). And we can import it using:

'Fn::ImportValue': ${self:custom.stage}-AttachmentsBucketArn
And finally, you'll notice that we are outputting a couple of things in this service. We need the Ids of the Cognito resources created in our frontend. But we don't have to export any cross-stack values.
Now that all of our resources are complete, we'll look at how to deploy them. There is a bit of a wrinkle here since we have some dependencies between our services.
All this is available in a sample repo that you can deploy and test (https://github.com/AnomalyInnovations/serverless-stack-demo-mono-api).
Now we can finally look at how to deploy our services. The addition of cross-stack references to our services means that we have some built-in dependencies. This means that we need to deploy some services before we deploy certain others.
Service Dependencies

Following is a list of the services we created:

database
uploads
notes
users
auth

And based on our cross-stack references, the dependencies look roughly like:

database > notes > users

Where a > b symbolizes that service a needs to be deployed before service b. To break it down in detail:

The users API service relies on the notes API service for the API Gateway cross-stack reference.
The users and notes API services rely on the database service for the DynamoDB cross-stack reference.
And the auth service relies on the uploads and notes services for the S3 bucket and API Gateway cross-stack references respectively.

Putting it all together, the deploy order looks like:

1. database
2. uploads
3. notes
4. users
5. auth

Now there are some intricacies here, but that is the general idea.
Multi-Service Deployments

Given the rough dependency graph above, you can script your CI/CD pipeline to ensure that your automatic deployments follow these rules. There are a few ways to simplify this process.
It is very likely that your auth, database, and uploads services don't change very often. You might also need to follow some strict policies across your team to make sure no haphazard changes are made to them. So by separating out these resources into their own services (like we have done in the past few chapters), you can carry out updates to them using a manual approval step as a part of the deployment process. This leaves the API services. These need to be deployed manually once and can later be automated.
Service Dependencies in Seed

Seed (https://seed.run) has a concept of Deploy Phases (https://seed.run/docs/configuring-deploy-phases) to handle service dependencies.
You can configure this by heading to the app settings and hitting Manage Deploy Phases. Here you'll notice that by default all the services are deployed concurrently.
Note that you'll need to add your services first. To do this, head over to the app Settings and hit Add a Service.
We can configure our service dependencies by adding the necessary deploy phases and moving the services around. And when you deploy your app, the deployments are carried out according to the deploy phases specified.
Environments

A quick word on handling environments across these services. The services that we have created can be easily re-created for multiple environments or stages (/chapters/stages-in-serverless-framework.html). A good standard practice is to have a dev, staging, and prod environment. And it makes sense to replicate all your services across these three environments.
However, when you are working on a new feature or you want to give a developer on your team their own environment, it might not make sense to replicate all of your services across them. It is more common to only replicate the API services as you create multiple dev environments.
Mono-Repo vs Multi-Repo

Finally, when considering how to house these services in your repository, it is worth looking at how much code is shared across them. Typically, your infrastructure services (database, uploads, and auth) don't share any code between them. In fact, they probably don't have much code in them to begin with. These services can be put in their own repos. Whereas the API services that might share some code (request and response handling) can be placed in the same repo and follow the mono-repo approach outlined in the Organizing Serverless Projects chapter (/chapters/organizing-serverless-projects.html).
This combined way of using the multi-repo and mono-repo strategy also makes sense when you think about how we deploy them. As we stated above, the infrastructure services are probably going to be deployed manually and with caution. While the API services can be automated (using Seed (https://seed.run) or your own CI) as mono-repo services, with the standalone infrastructure repos handled as a special case.
Conclusion

Hopefully, this series of chapters has given you a sense of how to structure large Serverless applications using CloudFormation cross-stack references. And the example repo (https://github.com/AnomalyInnovations/serverless-stack-demo-mono-api) gives you a clear working demonstration of the concepts we've covered. Give the above setup a try and leave us your feedback in the comments.
Types of Logs

There are two types of logs we usually take for granted in a monolithic environment.

Server logs
Web server logs maintain a history of requests, in the order they took place. Each log entry contains the information about the request, including client IP address, request date/time, request path, HTTP code, bytes served, user agent, etc.

Application logs
Application logs are a file of events that are logged by the web application. They usually contain errors, warnings, and informational events. They could contain everything from unexpected function failures, to key events for understanding how users behave.

In the serverless environment we have less control over the underlying infrastructure, so logging is the only way to acquire knowledge on how the application is performing. Amazon CloudWatch (https://aws.amazon.com/cloudwatch/) is a monitoring service that helps you collect and track metrics for your resources. Using the analogy of server logs and application logs, you can roughly think of the API Gateway logs as your server logs and the Lambda logs as your application logs.
First, log in to your AWS Console (https://console.aws.amazon.com) and select IAM from the list of services.
Go back to your AWS Console (https://console.aws.amazon.com) and select API Gateway from the list of services.
Note that the execution logs can generate a ton of log data and it's not recommended to leave them on. They are much better suited for debugging. API Gateway does have support for access logs, which we recommend leaving on. Here is how to enable access logs for your API Gateway project (https://seed.run/blog/how-to-enable-access-logs-for-api-gateway).
To view API Gateway logs, log in to your AWS Console (https://console.aws.amazon.com) and select CloudWatch from the list of services.
To view Lambda logs, select Logs again from the left panel. Then select the first log group prefixed with /aws/lambda/ followed by the function name.
You can also view these logs from the command line using the Serverless Framework CLI:

$ serverless logs -f <func-name>

Where <func-name> is the name of the Lambda function you are looking for.
Additionally, you can use the --tail flag to stream the logs automatically to your console. This can be very helpful during development when trying to debug your functions using the console.log call.
Hopefully, this has helped you set up CloudWatch logging for your API Gateway and Lambda projects. And given you a quick idea of how to read your serverless logs using the AWS Console.
For help and discussion

We've also compiled a list of some of the most common Serverless errors over on Seed (https://seed.run). Check out Common Serverless Errors (https://seed.run/docs/serverless-errors/) and do a quick search for your error message to see if it has a solution.
When a request is made to your serverless API, it starts by hitting API Gateway and makes its way through to Lambda and invokes your function. It takes quite a few hops along the way and each hop can be a point of failure. And since we don't have great visibility over each of the specific hops, pinpointing the issue can be a bit tricky. We are going to take a look at the following issues:
This chapter assumes you have turned on CloudWatch logging for API Gateway and that you know how to read both the API Gateway and Lambda logs. If you have not done so, start by taking a look at the chapter on API Gateway and Lambda Logs (/chapters/api-gateway-and-lambda-logs.html).

https://API_ID.execute-api.REGION.amazonaws.com/STAGE/PATH

In all of these cases, the error does not get logged to CloudWatch since the request does not hit your API Gateway project.
This is a tricky issue to debug because the request still has not reached API Gateway, and hence the error is not logged in the API Gateway CloudWatch logs. But we can perform a check to ensure that our Cognito Identity Pool users have the required permissions, using the IAM Policy Simulator (https://policysim.aws.amazon.com).
Before we can use the simulator we first need to find out the name of the IAM role that we are using to connect to API Gateway. We had created this role back in the Create a Cognito identity pool (/chapters/create-a-cognito-identity-pool.html) chapter.
Select API Gateway as the service and select the Invoke action.
Expand the service and enter the API Gateway endpoint ARN, then select Run Simulation. The format here is the same one we used back in the Create a Cognito identity pool (/chapters/create-a-cognito-identity-pool.html) chapter: arn:aws:execute-api:YOUR_API_GATEWAY_REGION:*:YOUR_API_GATEWAY_ID/*. In our case this looks like arn:aws:execute-api:us-east-1:*:ly55wbovq4/*.
If your IAM role is configured properly you should see allowed under Permission. But if something is off, you'll see denied.
To fix this and edit the role we need to go back to the AWS Console (https://console.aws.amazon.com) and select IAM from the list of services.
Select Roles in the left menu. And select the IAM role that our Identity Pool is using. In our case it's called Cognito_notesidentitypoolAuth_Role.
...
{
  "Effect": "Allow",
  "Action": [
    "execute-api:Invoke"
  ],
  "Resource": [
    "arn:aws:execute-api:YOUR_API_GATEWAY_REGION:*:YOUR_API_GATEWAY_ID/*"
  ]
}
...
Now if you test your policy, it should show that you are allowed to invoke your API Gateway
endpoint.
Lambda Function Error

Now if you are able to invoke your Lambda function but it fails to execute properly due to uncaught exceptions, it'll error out. These are pretty straightforward to debug. When this happens, AWS Lambda will attempt to convert the error object to a string, and then send it to CloudWatch along with the stacktrace. This can be observed in both the Lambda and API Gateway CloudWatch log groups.
A related issue comes up when your function holds a database connection open: the open connection keeps the event loop from being empty and the function times out. To get around this issue, you can set the callbackWaitsForEmptyEventLoop property to false to request AWS Lambda to freeze the process as soon as the callback is called, even if there are events in the event loop:

export async function main(event, context, callback) {
  context.callbackWaitsForEmptyEventLoop = false;
  ...
};

This effectively allows a Lambda function to return its result to the caller without requiring that the database connection be closed. This allows the Lambda function to reuse the same connection across calls, and it reduces the execution time as well.
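For example (a sketch assuming the mysql package; the connection config is left out), keeping the connection outside the handler lets warm invocations reuse it:

import mysql from "mysql";

// Declared outside the handler so it survives across warm invocations
const connection = mysql.createConnection({ /* connection config */ });

export function main(event, context, callback) {
  // Return as soon as the callback is called, even though the open
  // connection keeps the event loop from being empty
  context.callbackWaitsForEmptyEventLoop = false;

  connection.query("SELECT 1", function(err, results) {
    if (err) {
      callback(err);
      return;
    }
    callback(null, { statusCode: 200, body: JSON.stringify(results) });
  });
}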
These are just a few of the common issues we see folks running into while working with
serverless APIs. Feel free to let us know via the comments if there are any other issues you’d
like us to cover.
For help and discussion
service: service-name

provider:
  name: aws
  stage: dev

functions:
  hello:
    handler: handler.hello
    environment:
      SYSTEM_URL: http://example.com/api/v1

Here SYSTEM_URL is the name of the environment variable we are defining and http://example.com/api/v1 is its value. We can access this in our hello Lambda function using process.env.SYSTEM_URL, like so:
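A minimal sketch of such a handler (the response shape here is illustrative):

export function hello(event, context, callback) {
  // SYSTEM_URL is made available through process.env
  callback(null, {
    statusCode: 200,
    body: process.env.SYSTEM_URL
  });
}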
We can also define environment variables that are common to all the Lambda functions in our service by adding them to the provider section:

service: service-name

provider:
  name: aws
  stage: dev
  environment:
    SYSTEM_ID: jdoe

functions:
  hello:
    handler: handler.hello
    environment:
      SYSTEM_URL: http://example.com/api/v1

Just as before, we can access the environment variable SYSTEM_ID in our hello Lambda function using process.env.SYSTEM_ID. The difference being that it is available to all the Lambda functions defined in our serverless.yml.
In the case where both the provider and functions sections have an environment variable with the same name, the function specific environment variable takes precedence. That is, we can override the environment variables described in the provider section with the ones defined in the functions section.
Let's take a quick look at how these work using an example. Say you had the following serverless.yml:
service: service-name

provider:
  name: aws
  stage: dev

functions:
  helloA:
    handler: handler.helloA
    environment:
      SYSTEM_URL: http://example.com/api/v1/pathA

  helloB:
    handler: handler.helloB
    environment:
      SYSTEM_URL: http://example.com/api/v1/pathB
In the case above we have the environment variable SYSTEM_URL defined in both the helloA and helloB Lambda functions. The only difference between them is that the URL ends with pathA or pathB. We can merge these two using the idea of variables.
A variable allows you to replace values in your serverless.yml dynamically. It uses the ${variableName} syntax, where the value of variableName will be inserted.
Let's see how this works in practice. We can rewrite our example and simplify it by doing the following:
service: service-name

custom:
  systemUrl: http://example.com/api/v1/

provider:
  name: aws
  stage: dev

functions:
  helloA:
    handler: handler.helloA
    environment:
      SYSTEM_URL: ${self:custom.systemUrl}pathA

  helloB:
    handler: handler.helloB
    environment:
      SYSTEM_URL: ${self:custom.systemUrl}pathB
custom:
  systemUrl: http://example.com/api/v1/

This defines a variable called systemUrl under the section custom. We can then reference the variable using the syntax ${self:custom.systemUrl}.
Variables can be referenced from a lot of different sources including CLI options, external YAML files, etc. You can read more about using variables in your serverless.yml here (https://serverless.com/framework/docs/providers/aws/guide/variables/).
In this chapter we will take a look at how to configure stages in Serverless. Let's first start by looking at how stages can be implemented.
You can create multiple stages within a single API Gateway project. Stages within the same project share the same endpoint host, but have a different path. For example, say you have a stage called prod with the endpoint:

https://abc12345.execute-api.us-east-1.amazonaws.com/prod

If you were to add a stage called dev to the same API Gateway API, the new stage will have the endpoint:

https://abc12345.execute-api.us-east-1.amazonaws.com/dev

The downside is that both stages are part of the same project. You don't have the same level of flexibility to fine-tune the IAM policies for stages of the same API, when compared to tuning different APIs. This leads to the next setup, each stage being its own API.
You create an API Gateway project for each stage. Let's take the same example; your prod stage has the endpoint:

https://abc12345.execute-api.us-east-1.amazonaws.com/prod

To create the dev stage, you create a new API Gateway project and add the dev stage to the new project. The new endpoint will look something like:

https://xyz67890.execute-api.us-east-1.amazonaws.com/dev

Note that the dev stage carries a different endpoint host since it belongs to a different project. This is the approach Serverless Framework takes when configuring stages for your Serverless project. We will look at this in detail below.
Just like how having each stage as a separate API gives us more flexibility to fine-tune the IAM policy, we can take it a step further and create the API project in a different AWS account. Most companies don't keep their production infrastructure in the same account as their development infrastructure. This helps reduce any cases where developers accidentally edit/delete production resources. We go into more detail on how to deploy to multiple AWS accounts using different AWS profiles in the Configure Multiple AWS Profiles (/chapters/configure-multiple-aws-profiles.html) chapter.
Deploying to a Stage

Let's look at how the Serverless Framework helps us work with stages. As mentioned above, a new stage is a new API Gateway project. To deploy to a specific stage, you can either specify the stage in the serverless.yml:

service: service-name

provider:
  name: aws
  stage: dev

Or you can specify the stage by passing the --stage option to the serverless deploy command:

$ serverless deploy --stage prod
You can also use the stage to configure stage specific environment variables. For example, take the following serverless.yml:

service: service-name

custom:
  myStage: ${opt:stage, self:provider.stage}
  myEnvironment:
    MESSAGE:
      prod: "This is production environment"
      dev: "This is development environment"

provider:
  name: aws
  stage: dev
  environment:
    MESSAGE: ${self:custom.myEnvironment.MESSAGE.${self:custom.myStage}}
There are a couple of things happening here. We first defined the custom.myStage variable as ${opt:stage, self:provider.stage}. This is telling Serverless Framework to use the --stage CLI option if it exists. And if it does not, then use the default stage specified by provider.stage. We also define the custom.myEnvironment section. This contains the value for MESSAGE defined for each stage. Finally, we set the environment variable MESSAGE as ${self:custom.myEnvironment.MESSAGE.${self:custom.myStage}}. This sets the variable to pick the value of self:custom.myEnvironment depending on the current stage defined in custom.myStage.
You can easily extend this format to create separate sets of environment variables for the stages you are deploying to.
And we can access MESSAGE in our Lambda functions via the process.env object, like so:
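A minimal sketch of such a handler:

export function main(event, context, callback) {
  // MESSAGE resolves to the stage specific value set in serverless.yml
  callback(null, {
    statusCode: 200,
    body: process.env.MESSAGE
  });
}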
Hopefully, this chapter gives you a quick idea on how to set up stages in your Serverless
project.
For our demo notes app (https://demo.serverless-stack.com), we are using a DynamoDB table to store all our users' notes. DynamoDB achieves a high degree of data availability and durability by replicating your data across three different facilities within a given region. However, DynamoDB does not provide an SLA for the data durability. This means that you should back up your database tables.

Backups in DynamoDB

There are two types of backups in DynamoDB:

1. On-demand backups
This creates a full backup of your DynamoDB tables on demand. It's useful for long-term data retention and archival. The backup is retained even if the table is deleted. You can use the backup to restore to a different table name, and this can make it useful for replicating tables.

2. Point-in-time recovery
This type of backup, on the other hand, allows you to perform point-in-time restores. It's really helpful in protecting against accidental writes or delete operations. So for example, if you ran a script to transform the data within a table and it accidentally removed or corrupted your data, you could simply restore your table to any point in the last 35 days. DynamoDB does this by maintaining an incremental backup of your table. It even does this automatically, so you don't have to worry about creating, maintaining, or scheduling on-demand backups.
Let’s look at how to use the two backup types.
On-Demand Backup
Head over to your table and click on the Backups tab.
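If you prefer the command line, you can also take an on-demand backup through the AWS CLI (the table and backup names here are illustrative):

$ aws dynamodb create-backup \
    --table-name notes \
    --backup-name notes-backup-2019-01-01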
Restore Backup
Now to restore your backup, simply select the backup and hit Restore backup. Here you can type in the name of the new table you want to restore to and hit Restore table.
Depending on the size of the table, this might take some time. But you should notice a new table being created from the backup.
DynamoDB makes it easy to create and restore on-demand backups. You can also read more about on-demand backups here (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/BackupRestore.html).
Point-in-Time Recovery

To enable Point-in-time Recovery, once again head over to the Backups tab. And hit Enable in the Point-in-time Recovery section. This will notify you that additional charges will apply for this setting. Click Enable to confirm.

Restore to Point-in-Time

Once enabled, you can click Restore to point-in-time to restore to an older point. Here you can type in the name of the new table to be restored to and select the time you want to recover to. And hit Restore table.
You should see your new table being restored.
You can read more about the details of Point-in-time Recovery here (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery.html).
Conclusion

Given the two types above, a good strategy is to enable Point-in-time Recovery and maintain a schedule of longer term On-demand backups. There are quite a few plugins and scripts that can help you with scheduling On-demand backups; here is one created by one of our readers - https://github.com/UnlyEd/serverless-plugin-dynamodb-backups.
Also worth noting, DynamoDB's backup and restore actions have no impact on table performance or availability. No need to worry about long backup processes that slow down performance for your active users.
So make sure to configure backups for the DynamoDB tables in your applications.
For help and discussion
These credentials are stored in ~/.aws/credentials and are used by the Serverless Framework when we run serverless deploy. Behind the scenes Serverless uses these credentials and the AWS SDK to create the necessary resources on your behalf in the AWS account specified in the credentials.
There are cases where you might have multiple credentials configured in your AWS CLI. This usually happens if you are working on multiple projects or if you want to separate the different stages of the same project.
In this chapter let's take a look at how you can work with multiple AWS credentials.
To create a new profile, run:

$ aws configure --profile newAccount

Where newAccount is the name of the new profile you are creating. You can leave the Default region name and Default output format the way they are.
In this case your Lambda function is run locally and has not been deployed yet. So any calls made in your Lambda function to any other AWS resources on your account will use the default AWS profile that you have. You can check your default AWS profile in ~/.aws/credentials under the [default] tag.
To switch the default AWS profile to a new profile for the serverless invoke local command, you can run the following:

$ AWS_PROFILE=newAccount serverless invoke local --function hello

Here newAccount is the name of the profile you want to switch to and hello is the name of the function that is being invoked locally. By adding AWS_PROFILE=newAccount at the beginning of our serverless invoke local command we are setting the variable that the AWS SDK will use to figure out what your default AWS profile is.
If you want to set this so that you don't have to add it to each of your commands, you can use the following command:

$ export AWS_PROFILE=newAccount

Where newAccount is the profile you want to switch to. Now for the rest of your shell session, newAccount will be your default profile.
You can read more about this in the AWS Docs here (http://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html).
To deploy using a specific profile, pass the --aws-profile option to the serverless deploy command:

$ serverless deploy --aws-profile newAccount

Again, newAccount is the AWS profile Serverless Framework will be using to deploy.
If you don't want to set the profile every time you run serverless deploy, you can add it to your serverless.yml:

service: service-name

provider:
  name: aws
  stage: dev
  profile: newAccount

Note the profile: newAccount line here. This is telling Serverless to use the newAccount profile while running serverless deploy.
Let's look at a quick example of how to work with multiple profiles per stage. So following the examples from before, if you wanted to deploy to your production environment, you would run:

$ serverless deploy --stage prod --aws-profile prodAccount

And to deploy to your staging environment:

$ serverless deploy --stage dev --aws-profile devAccount

Here, prodAccount and devAccount are the AWS profiles for the production and staging environments respectively.
To simplify this process you can add the profiles to your serverless.yml, so you don't have to specify them in your serverless deploy commands.
service: service-name

custom:
  myStage: ${opt:stage, self:provider.stage}
  myProfile:
    prod: prodAccount
    dev: devAccount

provider:
  name: aws
  stage: dev
  profile: ${self:custom.myProfile.${self:custom.myStage}}
We used the concept of variables in Serverless Framework in this example. You can read more about this in the chapter on Serverless Environment Variables (/chapters/serverless-environment-variables.html).
Now, when you deploy to production, Serverless Framework is going to use the prodAccount profile, and the resources will be provisioned inside the prodAccount profile user's AWS account. And when you deploy to staging, the exact same set of AWS resources will be provisioned inside the devAccount profile user's AWS account.
Notice that we did not have to set the --aws-profile option. And that's it; this should give you a good understanding of how to work with multiple AWS profiles and credentials.
In this chapter we will take a look at how to customize the IAM Policy that Serverless Framework is going to use.
Granting the AdministratorAccess policy ensures that your project will always have the necessary permissions. But if you want to create an IAM policy that grants the minimal set of permissions, you need to customize your IAM policy.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:*",
        "s3:*",
        "logs:*",
        "iam:*",
        "apigateway:*",
        "lambda:*",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcs",
        "events:*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
We can attach this policy to the IAM user we are creating by continuing from the Attach existing policies directly step in the Create an IAM User (/chapters/create-an-iam-user.html) chapter. Finally, hit Create Policy. You can now choose this policy while creating your IAM user instead of the AdministratorAccess one that we had used before.
This policy grants your Serverless Framework project access to all the resources listed above. But we can narrow this down further by restricting them to specific Actions for the specific Resources in each AWS service.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:Describe*",
        "cloudformation:List*",
        "cloudformation:Get*",
        "cloudformation:CreateStack",
        "cloudformation:UpdateStack",
        "cloudformation:DeleteStack"
      ],
      "Resource": "arn:aws:cloudformation:<region>:<account_no>:stack/<service_name>*/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:ValidateTemplate"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::*/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups"
      ],
      "Resource": "arn:aws:logs:<region>:<account_no>:log-group::log-stream:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DeleteLogGroup",
        "logs:DeleteLogStream",
        "logs:DescribeLogStreams",
        "logs:FilterLogEvents"
      ],
      "Resource": "arn:aws:logs:<region>:<account_no>:log-group:/aws/lambda/<service_name>*:log-stream:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:GetRole",
        "iam:PassRole",
        "iam:CreateRole",
        "iam:DeleteRole",
        "iam:DetachRolePolicy",
        "iam:PutRolePolicy",
        "iam:AttachRolePolicy",
        "iam:DeleteRolePolicy"
      ],
      "Resource": [
        "arn:aws:iam::<account_no>:role/<service_name>*-lambdaRole"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "apigateway:GET",
        "apigateway:POST",
        "apigateway:PUT",
        "apigateway:DELETE"
      ],
      "Resource": [
        "arn:aws:apigateway:<region>::/restapis"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "apigateway:GET",
        "apigateway:POST",
        "apigateway:PUT",
        "apigateway:DELETE"
      ],
      "Resource": [
        "arn:aws:apigateway:<region>::/restapis/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "lambda:GetFunction",
        "lambda:CreateFunction",
        "lambda:DeleteFunction",
        "lambda:UpdateFunctionConfiguration",
        "lambda:UpdateFunctionCode",
        "lambda:ListVersionsByFunction",
        "lambda:PublishVersion",
        "lambda:CreateAlias",
        "lambda:DeleteAlias",
        "lambda:UpdateAlias",
        "lambda:GetFunctionConfiguration",
        "lambda:AddPermission",
        "lambda:RemovePermission",
        "lambda:InvokeFunction"
      ],
      "Resource": [
        "arn:aws:lambda:*:<account_no>:function:<service_name>*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcs"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "events:Put*",
        "events:Remove*",
        "events:Delete*",
        "events:Describe*"
      ],
      "Resource": "arn:aws:events::<account_no>:rule/<service_name>*"
    }
  ]
}
The <account_no> is your AWS Account ID and you can follow these instructions (http://docs.aws.amazon.com/IAM/latest/UserGuide/console_account-alias.html) to look it up. Also, recall that the <region> and <service_name> are defined in your serverless.yml like so:

service: my-service

provider:
  name: aws
  region: us-east-1

The above IAM policy template restricts access to the AWS services based on the name of your Serverless project and the region it is deployed in.
It provides sufficient permissions for a minimal Serverless project. However, if you provision any additional resources in your serverless.yml, install Serverless plugins, or invoke any AWS APIs in your application code, you would need to update the IAM policy to accommodate those changes. If you are looking for details on where this policy comes from, here is an in-depth discussion on the minimal Serverless IAM Deployment Policy (https://github.com/serverless/serverless/issues/1439) required for a Serverless project.
At this second step, their User Pool information is no longer available to us. To better understand this flow you can take a look at the Cognito User Pool vs Identity Pool (/chapters/cognito-user-pool-vs-identity-pool.html) chapter. But in a nutshell, you can have multiple authentication providers at step 1 and the Identity Pool just ensures that they are all given a global user id that you can use.
Below is a sample Lambda function where we find the user's User Pool user id.
export async function main(event, context, callback) {
  const authProvider =
    event.requestContext.identity.cognitoAuthenticationProvider;
  // Cognito authentication provider looks like:
  // cognito-idp.us-east-1.amazonaws.com/us-east-1_xxxxxxxxx,cognito-idp.us-east-1.amazonaws.com/us-east-1_aaaaaaaaa:CognitoSignIn:qqqqqqqq-1111-2222-3333-rrrrrrrrrrrr
  // Where us-east-1_aaaaaaaaa is the User Pool id
  // And qqqqqqqq-1111-2222-3333-rrrrrrrrrrrr is the User Pool User Id
  const parts = authProvider.split(':');
  const userPoolIdParts = parts[parts.length - 3].split('/');
  ...
}

Here the full cognitoAuthenticationProvider string looks like:

cognito-idp.us-east-1.amazonaws.com/us-east-1_xxxxxxxxx,cognito-idp.us-east-1.amazonaws.com/us-east-1_aaaaaaaaa:CognitoSignIn:qqqqqqqq-1111-2222-3333-rrrrrrrrrrrr
And that's it! You now have access to a user's User Pool user Id even though we are using AWS IAM and Federated Identities to secure our Lambda function.
The generated SDK can be hard to use since you need to re-generate it every time a change is made. And we cover how to configure your app using AWS Amplify in the Configure AWS Amplify (/chapters/configure-aws-amplify.html) chapter.
However, if you are looking to simply connect to API Gateway using the AWS JS SDK, we've created a standalone sigV4Client.js (https://github.com/AnomalyInnovations/sigV4Client) that you can use. It is based on the client that comes pre-packaged with the generated SDK.
In this chapter we'll go over how to use the sigV4Client.js. The basic flow looks like this:
1. Authenticate a user with the Cognito User Pool and acquire a user token.
2. With the user token, get temporary IAM credentials from the Identity Pool.
3. Use the IAM credentials to sign our API request with Signature Version 4 (http://docs.aws.amazon.com/general/latest/gr/signature-version-4.html).
Make sure to use your USER_POOL_ID and APP_CLIENT_ID. And given their Cognito username and password, you can log a user in by calling:
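A sketch of what this login call could look like, assuming the usual amazon-cognito-identity-js imports (CognitoUserPool, CognitoUser, and AuthenticationDetails):

function login(username, password) {
  const userPool = new CognitoUserPool({
    UserPoolId: config.cognito.USER_POOL_ID,
    ClientId: config.cognito.APP_CLIENT_ID
  });
  const user = new CognitoUser({ Username: username, Pool: userPool });
  const authenticationDetails = new AuthenticationDetails({
    Username: username,
    Password: password
  });

  return new Promise((resolve, reject) =>
    user.authenticateUser(authenticationDetails, {
      onSuccess: result => resolve(result),
      onFailure: err => reject(err)
    })
  );
}

Once logged in, you can grab the user's JWT token using the following helpers: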
function getUserToken(currentUser) {
  return new Promise((resolve, reject) => {
    currentUser.getSession(function(err, session) {
      if (err) {
        reject(err);
        return;
      }
      resolve(session.getIdToken().getJwtToken());
    });
  });
}

function getCurrentUser() {
  const userPool = new CognitoUserPool({
    UserPoolId: config.cognito.USER_POOL_ID,
    ClientId: config.cognito.APP_CLIENT_ID
  });
  return userPool.getCurrentUser();
}
And with the JWT token you can generate their temporary IAM credentials using:

function getAwsCredentials(userToken) {
  const authenticator = `cognito-idp.${config.cognito.REGION}.amazonaws.com/${config.cognito.USER_POOL_ID}`;
  // Exchange the user token for temporary IAM credentials
  // (assumes an IDENTITY_POOL_ID entry in the config)
  AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: config.cognito.IDENTITY_POOL_ID,
    Logins: {
      [authenticator]: userToken
    }
  });
  return AWS.config.credentials.getPromise();
}
→ sigV4Client.js (https://raw.githubusercontent.com/AnomalyInnovations/sigV4Client/master/sigV4Client.js)

This file can look a bit intimidating at first, but it is just using the temporary credentials and the request parameters to create the necessary signed headers. To create a new sigV4Client we need to pass in the following:
// Pseudocode
sigV4Client.newClient({
  // Your AWS temporary access key
  accessKey,
  // Your AWS temporary secret key
  secretKey,
  // Your AWS temporary session token
  sessionToken,
  // API Gateway region
  region,
  // API Gateway URL
  endpoint
});
And to sign a request you need to use the signRequest method and pass in the request details:

// Pseudocode
const signedRequest = sigV4Client.signRequest({
  method,
  path,
  headers,
  queryParams,
  body
});

And signedRequest.headers should give you the signed headers that you need to make the request.
Putting it all together, an invokeApig helper could look roughly like this (a sketch; it reuses the helpers and config values from above, and assumes signRequest also returns the signed url):

async function invokeApig({
  path,
  method = "GET",
  headers = {},
  queryParams = {},
  body
}) {
  // Get the user token and exchange it for temporary IAM credentials
  const userToken = await getUserToken(getCurrentUser());
  await getAwsCredentials(userToken);

  // Sign the request with the sigV4Client (as shown above) and
  // make the call with fetch using the signed url and headers
  const signedRequest = sigV4Client
    .newClient({ /* credentials, region, and endpoint from above */ })
    .signRequest({ method, path, headers, queryParams, body });

  const results = await fetch(signedRequest.url, {
    method,
    headers: signedRequest.headers,
    body: body ? JSON.stringify(body) : body
  });

  return results.json();
}
Demo

A demo version of this service is hosted on AWS - https://z6pv80ao4l.execute-api.us-east-1.amazonaws.com/dev/hello (https://z6pv80ao4l.execute-api.us-east-1.amazonaws.com/dev/hello).
And here is the ES7 source behind it.
export const hello = async (event, context, callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: "Your function executed successfully!"
    })
  };

  callback(null, response);
};
Requirements

Configure your AWS CLI (/chapters/configure-the-aws-cli.html)
Install the Serverless Framework: npm install serverless -g

Installation

To create a new Serverless project:

$ serverless install --url https://github.com/AnomalyInnovations/serverless-nodejs-starter --name my-project

Enter the new directory and install the Node.js packages:

$ cd my-project
$ npm install

Usage

To run a function on your local:

$ serverless invoke local --function hello

To run your tests:

$ npm test
We use Jest to run our tests. You can read more about setting up your tests here (https://facebook.github.io/jest/docs/en/getting-started.html#content).
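A minimal sketch of what such a test might look like (the file path and assertions here are illustrative):

import * as handler from "../handler";

test("hello", async () => {
  const event = { body: "" };
  const context = {};

  await handler.hello(event, context, (error, response) => {
    // The handler should respond with a successful status code
    expect(response.statusCode).toEqual(200);
    expect(typeof response.body).toBe("string");
  });
});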
To deploy your project:

$ serverless deploy
So give it a try and send us an email (mailto:[email protected]) if you have any questions, or open a new issue (https://github.com/AnomalyInnovations/serverless-nodejs-starter/issues/new) if you've found a bug.
By adding the above to your serverless.yml , you are telling Serverless Framework to
generate individual packages for each of your Lambda func.ons. Note that, this isn’t the
default behavior because individual packaging takes a lot longer. However, the performance
benefit makes this well worth it.
While individual packaging is a good start, for Node.js apps, Serverless Framework will add
your node_modules/ directory in the package. This can balloon the size of your Lambda
function packages astronomically. To fix this you can optimize your packages further by using
the serverless-webpack (https://github.com/serverless-heaven/serverless-webpack) plugin to
apply Webpack’s tree shaking algorithm (https://webpack.js.org/guides/tree-shaking/) to only
include the relevant bits of code needed for your Lambda function. Also, with the serverless-
webpack plugin, you can use Babel (https://babeljs.io) to transpile your JavaScript functions so
that you can use a more modern syntax including import/export statements.
However, using Webpack and Babel requires you to manage their respective configs, plugins,
and NPM packages in your Serverless app. Additionally, you might want to lint your code
before your functions get packaged. This means that your projects can end up with a long list
of packages and config files before you even write your first line of code! And they all need to
be updated over time, which can be really hard to do across multiple projects.
"eslint"
"webpack"
"@babel/core"
"babel-eslint"
"babel-loader"
"eslint-loader"
"@babel/runtime"
"@babel/preset-env"
"serverless-webpack"
"source-map-support"
"webpack-node-externals"
"eslint-config-strongloop"
"@babel/plugin-transform-runtime"
"babel-plugin-source-map-support"
- "eslint"
- "webpack"
- "@babel/core"
- "babel-eslint"
- "babel-loader"
- "eslint-loader"
- "@babel/runtime"
- "@babel/preset-env"
- "serverless-webpack"
- "source-map-support"
- "webpack-node-externals"
- "eslint-config-strongloop"
- "@babel/plugin-transform-runtime"
- "babel-plugin-source-map-support"
+ "serverless-bundle": "^1.2.2"
And in your serverless.yml , use it in place of the serverless-webpack plugin:

plugins:
- serverless-bundle
And to run your tests using the same Babel config used in the plugin add the following to your
package.json :
"scripts": {
"test": "serverless-bundle test"
}
You can read more on the advanced options over on the GitHub README
(https://github.com/AnomalyInnovations/serverless-bundle/blob/master/README.md).
In the next few chapters we are going to look at how to add the above functionality to our
Serverless notes app (https://demo.serverless-stack.com). For these chapters we are going to
use a forked version of the notes app. You can view the hosted version here (https://demo-
user-mgmt.serverless-stack.com) and the source is available in a repo here
(https://github.com/AnomalyInnovations/serverless-stack-demo-user-mgmt-client).
Let’s get started by allowing users to reset their password in case they have forgotten it.
Let’s look at the main changes we need to make to allow users to reset their password.
this.state = {
code: "",
email: "",
password: "",
codeSent: false,
confirmed: false,
confirmPassword: "",
isConfirming: false,
isSendingCode: false
};
}
validateCodeForm() {
return this.state.email.length > 0;
}
validateResetForm() {
return (
this.state.code.length > 0 &&
this.state.password.length > 0 &&
this.state.password === this.state.confirmPassword
);
}
handleSendCodeClick = async event => {
  event.preventDefault();

  this.setState({ isSendingCode: true });

  try {
await Auth.forgotPassword(this.state.email);
this.setState({ codeSent: true });
} catch (e) {
alert(e.message);
this.setState({ isSendingCode: false });
}
};
handleConfirmClick = async event => {
  event.preventDefault();

  this.setState({ isConfirming: true });

  try {
await Auth.forgotPasswordSubmit(
this.state.email,
this.state.code,
this.state.password
);
this.setState({ confirmed: true });
} catch (e) {
alert(e.message);
this.setState({ isConfirming: false });
}
};
renderRequestCodeForm() {
return (
<form onSubmit={this.handleSendCodeClick}>
<FormGroup bsSize="large" controlId="email">
<ControlLabel>Email</ControlLabel>
<FormControl
autoFocus
type="email"
value={this.state.email}
onChange={this.handleChange}
/>
</FormGroup>
<LoaderButton
block
type="submit"
bsSize="large"
loadingText="Sending…"
text="Send Confirmation"
isLoading={this.state.isSendingCode}
disabled={!this.validateCodeForm()}
/>
</form>
);
}
renderConfirmationForm() {
return (
<form onSubmit={this.handleConfirmClick}>
<FormGroup bsSize="large" controlId="code">
<ControlLabel>Confirmation Code</ControlLabel>
<FormControl
autoFocus
type="tel"
value={this.state.code}
onChange={this.handleChange}
/>
<HelpBlock>
Please check your email ({this.state.email}) for the
confirmation
code.
</HelpBlock>
</FormGroup>
<hr />
<FormGroup bsSize="large" controlId="password">
<ControlLabel>New Password</ControlLabel>
<FormControl
type="password"
value={this.state.password}
onChange={this.handleChange}
/>
</FormGroup>
<FormGroup bsSize="large" controlId="confirmPassword">
<ControlLabel>Confirm Password</ControlLabel>
<FormControl
type="password"
onChange={this.handleChange}
value={this.state.confirmPassword}
/>
</FormGroup>
<LoaderButton
block
type="submit"
bsSize="large"
text="Confirm"
loadingText="Confirm…"
isLoading={this.state.isConfirming}
disabled={!this.validateResetForm()}
/>
</form>
);
}
renderSuccessMessage() {
return (
<div className="success">
<Glyphicon glyph="ok" />
<p>Your password has been reset.</p>
<p>
<Link to="/login">
Click here to login with your new credentials.
</Link>
</p>
</div>
);
}
render() {
return (
<div className="ResetPassword">
{!this.state.codeSent
? this.renderRequestCodeForm()
: !this.state.confirmed
? this.renderConfirmationForm()
: this.renderSuccessMessage()}
</div>
);
}
}
We ask the user to enter the email address for their account in
this.renderRequestCodeForm() .
Once the user submits this form, we start the process by calling
Auth.forgotPassword(this.state.email) , where Auth is a part of the AWS
Amplify library.
This triggers Cognito to send a verification code to the specified email address.
Then we present a form where the user can input the code that Cognito sends them. This
form is rendered in this.renderConfirmationForm() . And it also allows the user to
put in their new password.
Once they submit this form with the code and their new password, we call
Auth.forgotPasswordSubmit(this.state.email, this.state.code,
this.state.password) . This resets the password for the account.
Finally, we show the user a message telling them that their password has been successfully
reset. We also link them to the login page where they can log in using their new details.
@media all and (min-width: 480px) {
  .ResetPassword form {
    margin: 0 auto;
    max-width: 320px;
  }

  .ResetPassword .success {
    max-width: 400px;
  }
}

.ResetPassword .success {
  margin: 0 auto;
  text-align: center;
}
.ResetPassword .success .glyphicon {
color: grey;
font-size: 30px;
margin-bottom: 30px;
}
<UnauthenticatedRoute
path="/login/reset"
exact
component={ResetPassword}
props={childProps}
/>
That’s it! We should now be able to navigate to /login/reset or get to it from the login
page in case we need to reset our password. From there, users can enter their email to reset
their password.
Next, let’s look at how our logged in users can change their password.
For reference, we are using the forked version of the notes app from the earlier chapters.
Let’s start by creating a settings page that our users can use to change their password.
this.state = {};
}
render() {
return (
<div className="Settings">
<LinkContainer to="/settings/email">
<LoaderButton
block
bsSize="large"
text="Change Email"
/>
</LinkContainer>
<LinkContainer to="/settings/password">
<LoaderButton
block
bsSize="large"
text="Change Password"
/>
</LinkContainer>
</div>
);
}
}
All this does is add two links to a page that allows our users to change their password and
email.
Add a link to this settings page to the navbar of our app by changing src/App.js .
<Navbar fluid collapseOnSelect>
<Navbar.Header>
<Navbar.Brand>
<Link to="/">Scratch</Link>
</Navbar.Brand>
<Navbar.Toggle />
</Navbar.Header>
<Navbar.Collapse>
<Nav pullRight>
{this.state.isAuthenticated
? <Fragment>
<LinkContainer to="/settings">
<NavItem>Settings</NavItem>
</LinkContainer>
<NavItem onClick={this.handleLogout}>Logout</NavItem>
</Fragment>
: <Fragment>
<LinkContainer to="/signup">
<NavItem>Signup</NavItem>
</LinkContainer>
<LinkContainer to="/login">
<NavItem>Login</NavItem>
</LinkContainer>
</Fragment>
}
</Nav>
</Navbar.Collapse>
</Navbar>
<AuthenticatedRoute
path="/settings"
exact
component={Settings}
props={childProps}
/>
And don’t forget to import it.
This should give us a settings page that our users can get to from the app navbar.
this.state = {
password: "",
oldPassword: "",
isChanging: false,
confirmPassword: ""
};
}
validateForm() {
return (
this.state.oldPassword.length > 0 &&
this.state.password.length > 0 &&
this.state.password === this.state.confirmPassword
);
}
handleChangeClick = async event => {
  event.preventDefault();

  this.setState({ isChanging: true });

  try {
const currentUser = await Auth.currentAuthenticatedUser();
await Auth.changePassword(
currentUser,
this.state.oldPassword,
this.state.password
);
this.props.history.push("/settings");
} catch (e) {
alert(e.message);
this.setState({ isChanging: false });
}
};
render() {
return (
<div className="ChangePassword">
<form onSubmit={this.handleChangeClick}>
<FormGroup bsSize="large" controlId="oldPassword">
<ControlLabel>Old Password</ControlLabel>
<FormControl
type="password"
onChange={this.handleChange}
value={this.state.oldPassword}
/>
</FormGroup>
<hr />
<FormGroup bsSize="large" controlId="password">
<ControlLabel>New Password</ControlLabel>
<FormControl
type="password"
value={this.state.password}
onChange={this.handleChange}
/>
</FormGroup>
<FormGroup bsSize="large" controlId="confirmPassword">
<ControlLabel>Confirm Password</ControlLabel>
<FormControl
type="password"
onChange={this.handleChange}
value={this.state.confirmPassword}
/>
</FormGroup>
<LoaderButton
block
type="submit"
bsSize="large"
text="Change Password"
loadingText="Changing…"
disabled={!this.validateForm()}
isLoading={this.state.isChanging}
/>
</form>
</div>
);
}
}
Most of this should be very straightforward. The key part of the flow here is that we ask the
user for their current password along with their new password. Once they enter these, we can
call the following:
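const currentUser = await Auth.currentAuthenticatedUser();

await Auth.changePassword(
  currentUser,
  this.state.oldPassword,
  this.state.password
);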
The above snippet uses the Auth module from Amplify to get the current user. It then
uses that to change their password by passing in the old and new password. Once the
Auth.changePassword method completes, we redirect the user to the settings page.
@media all and (min-width: 480px) {
  .ChangePassword form {
    margin: 0 auto;
    max-width: 320px;
  }
}
<AuthenticatedRoute
path="/settings/password"
exact
component={ChangePassword}
props={childProps}
/>
That should do it. The /settings/password page should allow us to change our password.
Next, let’s look at how to implement a change email form for our users.
For reference, we are using the forked version of the notes app from the earlier chapters.
In the previous chapter we created a settings page that links to /settings/email . Let’s
implement that.
this.state = {
code: "",
email: "",
codeSent: false,
isConfirming: false,
isSendingCode: false
};
}
validateEmailForm() {
return this.state.email.length > 0;
}
validateConfirmForm() {
return this.state.code.length > 0;
}
handleUpdateClick = async event => {
  event.preventDefault();

  this.setState({ isSendingCode: true });

  try {
    const user = await Auth.currentAuthenticatedUser();
    await Auth.updateUserAttributes(user, { email: this.state.email });

    this.setState({ codeSent: true });
  } catch (e) {
    alert(e.message);
    this.setState({ isSendingCode: false });
  }
};

handleConfirmClick = async event => {
  event.preventDefault();

  this.setState({ isConfirming: true });

  try {
    await Auth.verifyCurrentUserAttributeSubmit("email", this.state.code);

    this.props.history.push("/settings");
  } catch (e) {
    alert(e.message);
    this.setState({ isConfirming: false });
  }
};
renderUpdateForm() {
return (
<form onSubmit={this.handleUpdateClick}>
<FormGroup bsSize="large" controlId="email">
<ControlLabel>Email</ControlLabel>
<FormControl
autoFocus
type="email"
value={this.state.email}
onChange={this.handleChange}
/>
</FormGroup>
<LoaderButton
block
type="submit"
bsSize="large"
text="Update Email"
loadingText="Updating…"
disabled={!this.validateEmailForm()}
isLoading={this.state.isSendingCode}
/>
</form>
);
}
renderConfirmationForm() {
return (
<form onSubmit={this.handleConfirmClick}>
<FormGroup bsSize="large" controlId="code">
<ControlLabel>Confirmation Code</ControlLabel>
<FormControl
autoFocus
type="tel"
value={this.state.code}
onChange={this.handleChange}
/>
<HelpBlock>
Please check your email ({this.state.email}) for the
confirmation
code.
</HelpBlock>
</FormGroup>
<LoaderButton
block
type="submit"
bsSize="large"
text="Confirm"
loadingText="Confirm…"
isLoading={this.state.isConfirming}
disabled={!this.validateConfirmForm()}
/>
</form>
);
}
render() {
return (
<div className="ChangeEmail">
{!this.state.codeSent
? this.renderUpdateForm()
: this.renderConfirmationForm()}
</div>
);
}
}
The flow for changing a user’s email is pretty similar to how we sign a user up.
We start by rendering a form that asks our user to enter their new email in
this.renderUpdateForm() . Once the user submits this form, we call:
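const user = await Auth.currentAuthenticatedUser();
await Auth.updateUserAttributes(user, { email: this.state.email });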
This gets the current user and updates their email using the Auth module from Amplify.
Next we render the form where they can enter the code in
this.renderConfirmationForm() . Upon submitting this form we call:
Auth.verifyCurrentUserAttributeSubmit("email", this.state.code);
This confirms the change on Cognito’s side. Finally, we redirect the user to the settings page.
@media all and (min-width: 480px) {
  .ChangeEmail form {
    margin: 0 auto;
    max-width: 320px;
  }
}
<AuthenticatedRoute
path="/settings/email"
exact
component={ChangeEmail}
props={childProps}
/>
That should do it. Our users should now be able to change their email.
Finer Details
You might notice that the change email flow is interrupted if the user does not confirm the
new email. In this case, the email appears to have been changed but Cognito marks it as not
being verified. We will let you handle this case on your own but here are a couple of hints on
how to do so.
1. Show a simple prompt that allows users to resend the verification code. You can do this
by calling Auth.verifyCurrentUserAttribute("email") .
2. Then display the confirm code form from above and follow the same flow by calling
Auth.verifyCurrentUserAttributeSubmit("email", this.state.code) , as
sketched below.
This can make your change email flow more robust and handle the case where a user forgets
to verify their new email.
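As a rough sketch, with hypothetical helper names:

// Hypothetical helpers using the same Amplify Auth calls
async function resendEmailCode() {
  // Triggers Cognito to email a fresh verification code
  await Auth.verifyCurrentUserAttribute("email");
}

async function confirmNewEmail(code) {
  // Marks the new email as verified on Cognito's side
  await Auth.verifyCurrentUserAttributeSubmit("email", code);
}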
Code Splitting
While working on React.js single page apps, there is a tendency for apps to grow quite large. A
section of the app (or route) might import a large number of components that are not
necessary when it first loads. This hurts the initial load time of our app.
You might have noticed that Create React App generates one large .js file when we build
our app. This contains all the JavaScript our app needs. But if a user is simply loading
the login page to sign in, it doesn’t make sense to load the rest of the app with it. This
isn’t a concern early on when our app is quite small but it becomes an issue down the road. To
address this, Create React App has a very simple built-in way to split up our code. This feature,
unsurprisingly, is called Code Splitting.
Create React App (from 1.0 onwards) allows us to dynamically import parts of our app using
the import() proposal. You can read more about it here
(https://facebook.github.io/react/blog/2017/05/18/whats-new-in-create-react-
app.html#code-splitting-with-dynamic-import).
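As a quick illustration, import() takes the module path and returns a promise that resolves
to the module:

// Load the Home container on demand
import("./containers/Home").then(module => {
  const Home = module.default;
  // Home can now be rendered like any other component
});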
While the dynamic import() can be used for any component in our React app, it works
really well with React Router. Since React Router is figuring out which component to load
based on the path, it makes sense to dynamically import those components only when we
navigate to them.
We start by importing the components that will respond to our routes, and then use them to
define our routes. The Switch component renders the route that matches the path.
However, we import all of the components in the route statically at the top. This means that
all these components are loaded regardless of which route is matched. To implement Code
Splitting here we are going to want to only load the component that responds to the matched
route.
To do this, we create an asyncComponent helper that takes a function doing the dynamic import:

export default function asyncComponent(importComponent) {
  class AsyncComponent extends Component {
    constructor(props) {
      super(props);

      this.state = {
        component: null
      };
    }

    async componentDidMount() {
      // Run the dynamic import and store the loaded component
      const { default: component } = await importComponent();

      this.setState({
        component: component
      });
    }

    render() {
      const C = this.state.component;

      // Render the component once it has loaded
      return C ? <C {...this.props} /> : null;
    }
  }

  return AsyncComponent;
}
We are going to use the asyncComponent to dynamically import the component we want.
const AsyncHome = asyncComponent(() => import("./containers/Home"));
It’s important to note that we are not doing an import here. We are only passing in a function
to asyncComponent that will dynamically import() when the AsyncHome component is
created.
Also, it might seem weird that we are passing a function here. Why not just pass in a string
(say ./containers/Home ) and then do the dynamic import() inside the
AsyncComponent ? This is because we want to explicitly state the component we are
dynamically importing. Webpack splits our app based on this. It looks at these imports and
generates the required parts (or chunks). This was pointed out by @wSokra
(https://twitter.com/wSokra/status/866703557323632640) and @dan_abramov
(https://twitter.com/dan_abramov/status/866646657437491201).
We are then going to use the AsyncHome component in our routes. React Router will create
the AsyncHome component when the route is matched and that will in turn dynamically
import the Home component and continue just like before.
Now let’s go back to our Notes project and apply these changes.
It is pretty cool that with just a couple of changes, our app is all set up for code splitting. And
without adding a whole lot more complexity either! Here is what our src/Routes.js
looked like before.
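A condensed sketch of the change, showing just a couple of the containers:

// Before: static imports at the top of src/Routes.js
import Home from "./containers/Home";
import Login from "./containers/Login";

// After: each container is dynamically imported on demand
const AsyncHome = asyncComponent(() => import("./containers/Home"));
const AsyncLogin = asyncComponent(() => import("./containers/Login"));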
Notice that instead of doing the static imports for all the containers at the top, we are creating
these functions that are going to do the dynamic imports for us when necessary.
Now if you build your app using npm run build , you’ll see the code splitting in action: the
build output contains a number of .chunk.js files, one for each of the dynamic
import() calls that we have. Of course, our app is quite small and the various parts that are
split up are not significant at all. However, if the page that we use to edit our note included a
rich text editor, you can imagine how that would grow in size. And it would unfortunately
affect the initial load time of our app.
Now if we deploy our app using npm run deploy , you can see the browser load the
different chunks on-demand as we browse around in the demo (https://demo.serverless-
stack.com).
That’s it! With just a few simple changes our app is completely set up to use the code splitting
feature that Create React App has.
Next Steps
Now this seems really easy to implement but you might be wondering what happens if the
request to import the new component takes too long, or fails. Or maybe you want to preload
certain components. For example, a user is on your login page about to login and you want to
preload the homepage.
It was mentioned above that you can add a loading spinner while the import is in progress. But
we can take it a step further and address some of these edge cases. There is an excellent
higher order component that does a lot of this well; it’s called react-loadable
(https://github.com/thejameskyle/react-loadable).
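Replacing our asyncComponent with react-loadable looks something like this (a sketch
based on its README):

import Loadable from "react-loadable";

const AsyncHome = Loadable({
  // The same dynamic import as before
  loader: () => import("./containers/Home"),
  // Rendered while loading or if the import fails
  loading: MyLoadingComponent
});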
And AsyncHome is used exactly as before. Here the MyLoadingComponent would look
something like this.
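A minimal sketch of it; react-loadable passes isLoading and error props to the loading
component:

const MyLoadingComponent = ({ isLoading, error }) => {
  // Handle the loading state
  if (isLoading) {
    return <div>Loading…</div>;
  }
  // Handle the error state
  else if (error) {
    return <div>Sorry, there was a problem loading the page.</div>;
  }
  return null;
};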
It’s a simple component that handles all the different edge cases gracefully.
To add preloading and to further customize this, make sure to check out the other options and
features that react-loadable (https://github.com/thejameskyle/react-loadable) has. And have
fun code splitting!
Aside from isolating the resources used, having a separate environment that mimics your
production version can really help with testing your changes before they go live. You can take
this idea of environments further by having a staging environment that can even have
snapshots of the live database to give you as close to a production setup as possible. This type
of setup can sometimes help track down bugs and issues that you might run into only in your
live environment and not locally.
In this chapter we will look at some simple ways to configure multiple environments in our
React app. There are many different ways to do this but here is a simple one based on what
we have built in Part 1 of this guide (/#part-1).
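Create React App lets us set custom environment variables when running our app. For
example:

$ REACT_APP_TEST_VAR=123 npm start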
Here REACT_APP_TEST_VAR is the custom environment variable and we are setting it to the
value 123 . In our app we can access this variable as process.env.REACT_APP_TEST_VAR .
So the following line in our app:
console.log(process.env.REACT_APP_TEST_VAR);
will print out 123 in our console.
Note that these variables are embedded at build time. Also, only the variables that start
with REACT_APP_ are embedded in our app. All other environment variables are ignored.
Configuring Environments
We can use this idea of custom environment variables to configure our React app for specific
environments. Say we used a custom environment variable called REACT_APP_STAGE to
denote the environment our app is in. And we wanted to configure two environments for our
app:
One that we will use for our local development and also to test before pushing it to live.
Let’s call this one dev .
And our live environment that we will only push to, once we are comfortable with our
changes. Let’s call it production .
The first thing we can do is to configure our build system with the REACT_APP_STAGE
environment variable. Currently the scripts portion of our package.json looks
something like this:
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test --env=jsdom",
"predeploy": "npm run build",
"deploy": "aws s3 sync build/ s3://YOUR_S3_DEPLOY_BUCKET_NAME",
"postdeploy": "aws cloudfront create-invalidation --distribution-id
YOUR_CF_DISTRIBUTION_ID --paths '/*' && aws cloudfront create-
invalidation --distribution-id YOUR_WWW_CF_DISTRIBUTION_ID --paths
'/*'",
"eject": "react-scripts eject"
}
Here we only have one environment and we use it for our local development and on live. The
npm start command runs our local server and npm run deploy command deploys our
app to live.
"scripts": {
"start": "REACT_APP_STAGE=dev react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test --env=jsdom",
Note that you don’t have to replicate the S3 and CloudFront Distributions for the dev version.
But it does help if you want to mimic the live version as much as possible.
export default {
MAX_ATTACHMENT_SIZE: 5000000,
s3: {
BUCKET: "YOUR_S3_UPLOADS_BUCKET_NAME"
},
apiGateway: {
REGION: "YOUR_API_GATEWAY_REGION",
URL: "YOUR_API_GATEWAY_URL"
},
cognito: {
REGION: "YOUR_COGNITO_REGION",
USER_POOL_ID: "YOUR_COGNITO_USER_POOL_ID",
APP_CLIENT_ID: "YOUR_COGNITO_APP_CLIENT_ID",
IDENTITY_POOL_ID: "YOUR_IDENTITY_POOL_ID"
}
};
To use the REACT_APP_STAGE variable, we are just going to set the config conditionally.
const dev = {
s3: {
BUCKET: "YOUR_DEV_S3_UPLOADS_BUCKET_NAME"
},
apiGateway: {
REGION: "YOUR_DEV_API_GATEWAY_REGION",
URL: "YOUR_DEV_API_GATEWAY_URL"
},
cognito: {
REGION: "YOUR_DEV_COGNITO_REGION",
USER_POOL_ID: "YOUR_DEV_COGNITO_USER_POOL_ID",
APP_CLIENT_ID: "YOUR_DEV_COGNITO_APP_CLIENT_ID",
IDENTITY_POOL_ID: "YOUR_DEV_IDENTITY_POOL_ID"
}
};
const prod = {
s3: {
BUCKET: "YOUR_PROD_S3_UPLOADS_BUCKET_NAME"
},
apiGateway: {
REGION: "YOUR_PROD_API_GATEWAY_REGION",
URL: "YOUR_PROD_API_GATEWAY_URL"
},
cognito: {
REGION: "YOUR_PROD_COGNITO_REGION",
USER_POOL_ID: "YOUR_PROD_COGNITO_USER_POOL_ID",
APP_CLIENT_ID: "YOUR_PROD_COGNITO_APP_CLIENT_ID",
IDENTITY_POOL_ID: "YOUR_PROD_IDENTITY_POOL_ID"
}
};
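Finally, at the bottom of src/config.js we can pick the right set based on the stage and
export it along with any common values; something like:

// Default to the dev config if REACT_APP_STAGE is not set
const config = process.env.REACT_APP_STAGE === "production"
  ? prod
  : dev;

export default {
  // Config values common to both environments
  MAX_ATTACHMENT_SIZE: 5000000,
  ...config
};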
This is pretty straightforward. We simply have a set of configs for dev and for production. The
configs point to a separate set of resources for our dev and production environments. And
using process.env.REACT_APP_STAGE we decide which one to use.
Again, it might not be necessary to replicate the resources for each of the environments. But it
is pretty important to separate your live resources from your dev ones. You do not want to be
testing your changes directly on your live database.
So to recap: our app runs in the dev environment by default, the production environment is
only used when we build and deploy, and our config picks the right set of resources based on
the REACT_APP_STAGE variable.
This entire setup is fairly straightforward and can be extended to multiple environments. You
can read more on custom environment variables in Create React App here
(https://github.com/facebookincubator/create-react-app/blob/master/packages/react-
scripts/template/README.md#adding-custom-environment-variables).
The main ideas and code for this chapter have been contributed by our long time reader and
contributor Peter Eman Paver Abastillas (https://github.com/jatazoulja).
To get started let’s create a Facebook app that our users will use to login.
Let’s take a quick look at the key changes that were made.
export default {
s3: {
REGION: "YOUR_S3_UPLOADS_BUCKET_REGION",
BUCKET: "YOUR_S3_UPLOADS_BUCKET_NAME"
},
apiGateway: {
REGION: "YOUR_API_GATEWAY_REGION",
URL: "YOUR_API_GATEWAY_URL"
},
cognito: {
REGION: "YOUR_COGNITO_REGION",
USER_POOL_ID: "YOUR_COGNITO_USER_POOL_ID",
APP_CLIENT_ID: "YOUR_COGNITO_APP_CLIENT_ID",
IDENTITY_POOL_ID: "YOUR_IDENTITY_POOL_ID"
},
social: {
FB: "YOUR_FACEBOOK_APP_ID"
}
};
async componentDidMount() {
this.loadFacebookSDK();
try {
await Auth.currentAuthenticatedUser();
this.userHasAuthenticated(true);
} catch (e) {
if (e !== "not authenticated") {
alert(e);
}
  }

  this.setState({ isAuthenticating: false });
}
loadFacebookSDK() {
window.fbAsyncInit = function() {
window.FB.init({
appId : config.social.FB,
autoLogAppEvents : true,
xfbml : true,
version : 'v3.1'
});
};
(function(d, s, id){
var js, fjs = d.getElementsByTagName(s)[0];
if (d.getElementById(id)) {return;}
js = d.createElement(s); js.id = id;
js.src = "https://connect.facebook.net/en_US/sdk.js";
fjs.parentNode.insertBefore(js, fjs);
}(document, 'script', 'facebook-jssdk'));
}
function waitForInit() {
return new Promise((res, rej) => {
const hasFbLoaded = () => {
if (window.FB) {
res();
} else {
setTimeout(hasFbLoaded, 300);
}
};
hasFbLoaded();
});
}
this.state = {
isLoading: true
};
}
async componentDidMount() {
await waitForInit();
this.setState({ isLoading: false });
}
checkLoginState = () => {
window.FB.getLoginStatus(this.statusChangeCallback);
};
handleClick = () => {
window.FB.login(this.checkLoginState, {scope:
"public_profile,email"});
};
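For reference, the statusChangeCallback checks Facebook's response and hands the auth
response to our handler; it looks something like this:

statusChangeCallback = response => {
  if (response.status === "connected") {
    // The user is logged in to Facebook and has authorized our app
    this.handleResponse(response.authResponse);
  } else {
    this.handleError(response);
  }
};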
handleError(error) {
alert(error);
}
async handleResponse(data) {
const { email, accessToken: token, expiresIn } = data;
const expires_at = expiresIn * 1000 + new Date().getTime();
const user = { email };
try {
const response = await Auth.federatedSignIn(
"facebook",
{ token, expires_at },
user
);
this.setState({ isLoading: false });
this.props.onLogin(response);
} catch (e) {
this.setState({ isLoading: false });
this.handleError(e);
}
}
render() {
return (
<LoaderButton
block
bsSize="large"
bsStyle="primary"
className="FacebookButton"
text="Login with Facebook"
onClick={this.handleClick}
disabled={this.state.isLoading}
/>
);
}
}
Let’s quickly go over what we are doing here:
1. We first wait for the Facebook JS SDK to load in the waitForInit method. Once it has
loaded, we enable the Login with Facebook button.
2. Once our user clicks the button, we kick off the login process using FB.login and listen
for the login status to change in the statusChangeCallback . While calling this
method, we specify that we want the user’s public profile and email address by
setting {scope: "public_profile,email"} .
3. If the user has given our app the permissions, then we use the information we receive
from Facebook (the user’s email) and call the Auth.federatedSignIn AWS Amplify
method. This effectively logs the user in.
<FacebookButton
onLogin={this.handleFbLogin}
/>
<hr />
Add the button above our login and signup form. And don’t forget to import it using import
FacebookButton from "../components/FacebookButton"; .
handleFbLogin = () => {
this.props.userHasAuthenticated(true);
};
The above logs the user in to our React app, once the Facebook sign up process is complete.
Make sure to add these to src/containers/Signup.js as well.
And that’s it, if you head over to your app you should see the Login with Facebook option.
Clicking on it should bring up the Facebook dialog asking you to log in with your app.
Once you are logged in, you should be able to interact with the app just as before.
A final note on deploying your app. You might recall from above that we are telling Facebook
to use the https://localhost:3000 URL. This needs to be changed to the live URL once
you deploy your React app. A good practice here is to create two Facebook apps, one for your
live users and one for your local testing. That way you won’t need to change the URL and you
will have an environment where you can test your changes.