System Design - The Complete Course
Karan's Blog
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 1/251
4/1/24, 10:01 AM System Design: The complete course
NoSQL databases
SQL vs NoSQL databases
Database Replication
Indexes
Normalization and Denormalization
ACID and BASE consistency models
CAP theorem
PACELC Theorem
Transactions
Distributed Transactions
Sharding
Consistent Hashing
Database Federation
Chapter III
N-tier architecture
Message Brokers
Message Queues
Publish-Subscribe
Enterprise Service Bus (ESB)
Monoliths and Microservices
Event-Driven Architecture (EDA)
Event Sourcing
Command and Query Responsibility Segregation (CQRS)
API Gateway
Versions
Now, let's learn about the different versions of IP addresses:
IPv4
The original Internet Protocol is IPv4, which uses 32-bit addresses written in dot-decimal notation and allows for only around 4 billion unique IP addresses. Initially, this was more than enough, but as internet adoption grew we needed something better.
Example: 102.22.192.181
IPv6
IPv6 is a newer protocol that was introduced in 1998. Deployment commenced in the mid-2000s and, because internet usage has grown exponentially, it is still ongoing.
This new protocol uses 128-bit addresses written in hexadecimal notation. This means that IPv6 can provide about 3.4 × 10^38 IP addresses.
That's more than enough to meet the growing demand for years to
come.
Example: 2001:0db8:85a3:0000:0000:8a2e:0370:7334
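The difference in address size is easy to verify with Python's standard `ipaddress` module, using the two example addresses above:

```python
import ipaddress

v4 = ipaddress.ip_address("102.22.192.181")
v6 = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")

print(v4.version, v4.max_prefixlen)  # 4 32  -> a 32-bit address
print(v6.version, v6.max_prefixlen)  # 6 128 -> a 128-bit address
print(v6.compressed)                 # 2001:db8:85a3::8a2e:370:7334
```

Note that IPv6 addresses also have a canonical compressed form, where runs of zero groups collapse to `::`.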
Types
Let's discuss types of IP addresses:
Public
A public IP address is a single primary address associated with your whole network. To the outside internet, all of the connected devices appear to share this same IP address.
Example: IP address provided to your router by the ISP.
Private
A private IP address is a unique IP number assigned to every device that connects to your local network, which includes devices like computers, tablets, and smartphones used in your household.
Example: IP addresses generated by your home router for your devices.
Static
A static IP address does not change; it is manually configured rather than automatically assigned. These addresses are usually more expensive but are more reliable.
Example: They are usually used for important things like reliable geo-location services, remote access, server hosting, etc.
Dynamic
A dynamic IP address changes from time to time and is not always the
same. It has been assigned by a Dynamic Host Configuration Protocol
(DHCP) server. Dynamic IP addresses are the most common type of
internet protocol addresses. They are cheaper to deploy and allow us
to reuse IP addresses within a network as needed.
Example: They are more commonly used for consumer equipment and
personal use.
OSI Model
The OSI Model is a logical and conceptual model that defines network
communication used by systems open to interconnection and
communication with other systems. The Open System Interconnection
(OSI Model) also defines a logical network and effectively describes
computer packet transfer by using various layers of protocols.
The OSI Model can be seen as a universal language for computer
networking. It's based on the concept of splitting up a communication
system into seven abstract layers, each one stacked upon the last.
Layers
The seven abstraction layers of the OSI model can be defined as
follows, from top to bottom:
Application
This is the only layer that directly interacts with data from the user.
Software applications like web browsers and email clients rely on the
application layer to initiate communication. It should be made clear that client software applications are not part of the application layer; rather, the application layer is responsible for the protocols and data
manipulation that the software relies on to present meaningful data to
the user. Application layer protocols include HTTP as well as SMTP.
Presentation
The presentation layer is also called the Translation layer. The data
from the application layer is extracted here and manipulated as per the
required format to transmit over the network. The functions of the
presentation layer are translation, encryption/decryption, and
compression.
Session
This is the layer responsible for opening and closing communication
between the two devices. The time between when the communication
is opened and closed is known as the session. The session layer
ensures that the session stays open long enough to transfer all the
data being exchanged, and then promptly closes the session in order to avoid wasting resources.
Transport
The transport layer (also known as layer 4) is responsible for end-to-
end communication between the two devices. This includes taking
data from the session layer and breaking it up into chunks called
segments before sending it to the Network layer (layer 3). It is also
responsible for reassembling the segments on the receiving device into
data the session layer can consume.
Network
The network layer is responsible for facilitating data transfer between
two different networks. The network layer breaks up segments from
the transport layer into smaller units, called packets, on the sender's
device, and reassembles these packets on the receiving device. The
network layer also finds the best physical path for the data to reach its
destination; this is known as routing. If the two devices communicating
are on the same network, then the network layer is unnecessary.
Data Link
The data link layer is very similar to the network layer, except the data
link layer facilitates data transfer between two devices on the same
network. The data link layer takes packets from the network layer and
breaks them into smaller pieces called frames.
Physical
This layer includes the physical equipment involved in the data transfer,
such as the cables and switches. This is also the layer where the data
gets converted into a bit stream, which is a string of 1s and 0s.
TCP
Transmission Control Protocol (TCP) is connection-oriented, meaning
once a connection has been established, data can be transmitted in
both directions. TCP has built-in systems to check for errors and to
guarantee data will be delivered in the order it was sent, making it the
perfect protocol for transferring information like still images, data files,
and web pages.
UDP
User Datagram Protocol (UDP) is a simpler, connectionless internet
protocol in which error-checking and recovery services are not
required. With UDP, there is no overhead for opening a connection,
maintaining a connection, or terminating a connection. Data is
continuously sent to the recipient, whether or not they receive it.
TCP vs UDP
TCP is a connection-oriented protocol, whereas UDP is a
connectionless protocol. A key difference between TCP and UDP is
speed, as TCP is comparatively slower than UDP. Overall, UDP is a
much faster, simpler, and more efficient protocol, however,
retransmission of lost data packets is only possible with TCP.
TCP provides ordered delivery of data from user to server (and vice
versa), whereas UDP is not dedicated to end-to-end communications,
nor does it check the readiness of the receiver.
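UDP's connectionless nature is visible directly in the socket API: a datagram is sent with no handshake and no delivery guarantee. A minimal sketch on the local loopback (a TCP equivalent would need `connect()`/`accept()` before any data flows):

```python
import socket

# UDP is connectionless: the receiver just binds, the sender just sends.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
receiver.settimeout(5)            # avoid blocking forever if delivery fails
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)     # no connect() or handshake needed

data, _ = receiver.recvfrom(1024)
print(data)                       # b'hello'

sender.close()
receiver.close()
```

On the loopback interface the datagram arrives reliably, but over a real network UDP offers no such guarantee, which is exactly the trade-off described above.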
Server types
Now, let's look at the four key groups of servers that make up the DNS
infrastructure.
DNS Resolver
A DNS resolver (also known as a DNS recursive resolver) is the first
stop in a DNS query. The recursive resolver acts as a middleman
between a client and a DNS nameserver. After receiving a DNS query
from a web client, a recursive resolver will either respond with cached
data, or send a request to a root nameserver, followed by another
request to a TLD nameserver, and then one last request to an
authoritative nameserver. After receiving a response from the
authoritative nameserver containing the requested IP address, the
recursive resolver then sends a response to the client.
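The root → TLD → authoritative chain the resolver walks can be sketched with toy, in-memory nameserver tables (the server names and records below are entirely hypothetical):

```python
# Toy nameserver data, one dict per level of the DNS hierarchy.
ROOT = {"com": "tld-com"}                        # root knows the TLD servers
TLD = {"tld-com": {"example.com": "auth-1"}}     # TLD knows authoritative servers
AUTH = {"auth-1": {"example.com": "93.184.216.34"}}

cache = {}

def resolve(domain):
    """Recursive resolver: answer from cache, else walk root -> TLD -> authoritative."""
    if domain in cache:
        return cache[domain]                     # cached data, no queries needed
    tld_server = ROOT[domain.rsplit(".", 1)[-1]] # 1. ask a root nameserver
    auth_server = TLD[tld_server][domain]        # 2. ask the TLD nameserver
    ip = AUTH[auth_server][domain]               # 3. ask the authoritative nameserver
    cache[domain] = ip                           # remember for future queries
    return ip

print(resolve("example.com"))  # 93.184.216.34 (from the toy records)
```

A second call for the same domain is answered straight from the cache, which is why resolvers are such an effective first stop.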
TLD nameserver
A TLD nameserver maintains information for all the domain names that
share a common domain extension, such as .com , .net , or whatever
comes after the last dot in a URL.
Management of TLD nameservers is handled by the Internet Assigned
Numbers Authority (IANA), which is a branch of ICANN. The IANA
breaks up the TLD servers into two main groups:
Generic top-level domains: These are domains like .com, .org, .net, .edu, and .gov.
Country code top-level domains: These include domains specific to a country or state, such as .jp.
Query Types
There are three types of queries in a DNS system:
Recursive
In a recursive query, a DNS client requires that the DNS server (typically a DNS recursive resolver) responds with either the requested resource record or an error message if the record can't be found.
Iterative
In an iterative query, a DNS client provides a hostname, and the DNS
Resolver returns the best answer it can. If the DNS resolver has the
relevant DNS records in its cache, it returns them. If not, it refers the
DNS client to the Root Server or another Authoritative Name Server
that is nearest to the required DNS zone. The DNS client must then
repeat the query directly against the DNS server it was referred to.
Non-recursive
A non-recursive query is a query in which the DNS Resolver already
knows the answer. It either immediately returns a DNS record because
it already stores it in a local cache, or queries a DNS Name Server
which is authoritative for the record, meaning it definitely holds the
correct IP for that hostname. In both cases, there is no need for
additional rounds of queries (like in recursive or iterative queries).
Rather, a response is immediately returned to the client.
Record Types
DNS records (aka zone files) are instructions that live in authoritative
DNS servers and provide information about a domain including what IP
address is associated with that domain and how to handle requests for
that domain.
These records consist of a series of text files written in what is known
as DNS syntax. DNS syntax is just a string of characters used as
commands that tell the DNS server what to do. All DNS records also
have a "TTL", which stands for time-to-live, and indicates how often a
DNS server will refresh that record.
There are more record types but for now, let's look at some of the most
commonly used ones:
A (Address record): This is the record that holds the IP address of
a domain.
AAAA (IP Version 6 Address record): The record that contains the
IPv6 address for a domain (as opposed to A records, which store
the IPv4 address).
CNAME (Canonical Name record): Forwards one domain or
subdomain to another domain, does NOT provide an IP address.
MX (Mail exchanger record): Directs mail to an email server.
TXT (Text Record): This record lets an admin store text notes in
the record. These records are often used for email security.
NS (Name Server records): Stores the name server for a DNS
entry.
SOA (Start of Authority): Stores admin information about a
domain.
SRV (Service Location record): Specifies a port for specific
services.
PTR (Reverse-lookup Pointer records): Provides a domain name in
reverse lookups.
CERT (Certificate record): Stores public key certificates.
Subdomains
A subdomain is an additional part of our main domain name, commonly used to logically separate a section of a website. For example, blog.example.com is a subdomain of example.com.
DNS Zones
A DNS zone is a distinct part of the domain namespace which is
delegated to a legal entity like a person, organization, or company, who
is responsible for maintaining the DNS zone. A DNS zone is also an
administrative function, allowing for granular control of DNS
components, such as authoritative name servers.
DNS Caching
A DNS cache (sometimes called a DNS resolver cache) is a temporary
database, maintained by a computer's operating system, that contains
records of all the recent visits and attempted visits to websites and
other internet domains. In other words, a DNS cache is just a memory
of recent DNS lookups that our computer can quickly refer to when it's
trying to figure out how to load a website.
The Domain Name System implements a time-to-live (TTL) on every
DNS record. TTL specifies the number of seconds the record can be
cached by a DNS client or server. When the record is stored in a cache,
whatever TTL value came with it gets stored as well. The server
continues to update the TTL of the record stored in the cache,
counting down every second. When it hits zero, the record is deleted or
purged from the cache. At that point, if a query for that record is
received, the DNS server has to start the resolution process.
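The TTL countdown-and-purge behavior described above amounts to a cache where each entry carries its own expiry time. A minimal sketch (class name and records are illustrative, not from any DNS library):

```python
import time

class TTLCache:
    """DNS-style cache: each record is stored with a TTL and expires after it."""
    def __init__(self):
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None                      # cache miss: resolution must start over
        value, expiry = entry
        if time.monotonic() >= expiry:       # TTL hit zero: purge the record
            del self._store[key]
            return None
        return value

cache = TTLCache()
cache.set("example.com", "93.184.216.34", ttl_seconds=300)
print(cache.get("example.com"))  # 93.184.216.34
```

Real resolvers store the TTL that arrived with the record, so a record cached with a 300-second TTL is served for at most five minutes before a fresh lookup is forced.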
Reverse DNS
A reverse DNS lookup is a DNS query for the domain name associated
with a given IP address. This accomplishes the opposite of the more
commonly used forward DNS lookup, in which the DNS system is
queried to return an IP address. The process of reverse resolving an IP
address uses PTR records. If the server does not have a PTR record, it
cannot resolve a reverse lookup.
Reverse lookups are commonly used by email servers. Email servers
check and see if an email message came from a valid server before
bringing it onto their network. Many email servers will reject messages
from any server that does not support reverse lookups or from a server
that is highly unlikely to be legitimate.
Note: Reverse DNS lookups are not universally adopted as they are not
critical to the normal function of the internet.
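A reverse lookup is a one-liner with Python's standard `socket` module. Using the loopback address keeps this runnable offline (a remote address would trigger a real PTR query):

```python
import socket

# Reverse lookup: map an IP address back to a hostname. For remote addresses
# this consults PTR records; 127.0.0.1 typically resolves locally.
hostname, aliases, addresses = socket.gethostbyaddr("127.0.0.1")
print(hostname)  # typically "localhost"
```

If no PTR record exists for an address, `gethostbyaddr` raises an error instead, which mirrors the "cannot resolve a reverse lookup" case above.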
Examples
These are some widely used managed DNS solutions:
Route53
Cloudflare DNS
Google Cloud DNS
Azure DNS
NS1
Load Balancing
Load balancing lets us distribute incoming network traffic across
multiple resources, ensuring high availability and reliability by sending
requests only to resources that are online. This provides the flexibility
to add or subtract resources as demand dictates.
But why?
Modern high-traffic websites must serve hundreds of thousands, if not
millions, of concurrent requests from users or clients. To cost-
effectively scale to meet these high volumes, modern computing best
practice generally requires adding more servers.
A load balancer can sit in front of the servers and route client requests
across all servers capable of fulfilling those requests in a manner that
maximizes speed and capacity utilization. This ensures that no single
server is overworked, which could degrade performance. If a single
server goes down, the load balancer redirects traffic to the remaining
online servers. When a new server is added to the server group, the
load balancer automatically starts sending requests to it.
Workload distribution
This is the core functionality provided by a load balancer and has
several common variations:
Host-based: Distributes requests based on the requested hostname.
Path-based: Distributes requests based on the URL path, in addition to the hostname.
Content-based: Inspects the message content of a request, such as the value of a parameter, to make the routing decision.
Layers
Generally speaking, load balancers operate at one of two levels:
Network layer
This is the load balancer that works at the network's transport layer,
also known as layer 4. This performs routing based on networking
information such as IP addresses and is not able to perform content-
based routing. These are often dedicated hardware devices that can
operate at high speed.
Application layer
This is the load balancer that operates at the application layer, also
known as layer 7. These load balancers can read requests in their entirety
and perform content-based routing. This allows the management of
load based on a full understanding of traffic.
Types
Let's look at different types of load balancers:
Software
Software load balancers usually are easier to deploy than hardware
versions. They also tend to be more cost-effective and flexible, and
they are used in conjunction with software development environments.
The software approach gives us the flexibility of configuring the load
balancer to our environment's specific needs. The boost in flexibility
may come at the cost of having to do more work to set up the load
balancer. Compared to hardware versions, which offer more of a
closed-box approach, software balancers give us more freedom to
make changes and upgrades.
Software load balancers are widely used and are available either as
installable solutions that require configuration and management or as a
managed cloud service.
Hardware
As the name implies, a hardware load balancer relies on physical, on-
premises hardware to distribute application and network traffic. These
devices can handle a large volume of traffic but often carry a hefty
price tag and are fairly limited in terms of flexibility.
Hardware load balancers include proprietary firmware that requires
maintenance and updates as new versions and security patches are
released.
DNS
DNS load balancing is the practice of configuring a domain in the
Domain Name System (DNS) such that client requests to the domain
are distributed across a group of server machines.
Routing Algorithms
Now, let's discuss commonly used routing algorithms:
Round-robin: Requests are distributed to application servers in
rotation.
Weighted Round-robin: Builds on the simple Round-robin
technique to account for differing server characteristics such as
compute and traffic handling capacity using weights that can be
assigned via DNS records by the administrator.
Least Connections: A new request is sent to the server with the
fewest current connections to clients. The relative computing
capacity of each server is factored into determining which one has
the least connections.
Least Response Time: Sends requests to the server selected by a
formula that combines the fastest response time and fewest active
connections.
Least Bandwidth: This method measures traffic in megabits per
second (Mbps), sending client requests to the server with the least
Mbps of traffic.
Hashing: Distributes requests based on a key we define, such as
the client IP address or the request URL.
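Three of these algorithms fit in a few lines each. The server names and connection counts below are hypothetical stand-ins for a real backend pool:

```python
import hashlib
import itertools

servers = ["server-a", "server-b", "server-c"]   # hypothetical backend pool

# Round-robin: hand out servers in strict rotation.
rotation = itertools.cycle(servers)
picks = [next(rotation) for _ in range(4)]
print(picks)  # ['server-a', 'server-b', 'server-c', 'server-a']

# Least connections: choose the server with the fewest active connections.
active = {"server-a": 12, "server-b": 3, "server-c": 7}
print(min(active, key=active.get))  # server-b

# Hashing: the same client IP always lands on the same server.
def pick_by_hash(client_ip):
    digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

print(pick_by_hash("203.0.113.7"))
```

The hashing variant is what makes "sticky" routing possible: as long as the pool doesn't change, a given client IP deterministically maps to one server. (Handling pool changes gracefully is the job of consistent hashing, covered later in the course.)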
Advantages
Load balancing also brings advantages such as scalability, redundancy, flexibility, and efficiency.
Features
Here are some commonly desired features of load balancers:
Autoscaling: Starting up and shutting down resources in response
to demand conditions.
Sticky sessions: The ability to assign the same user or device to
the same resource in order to maintain the session state on the
resource.
Healthchecks: The ability to determine if a resource is down or
performing poorly in order to remove the resource from the load
balancing pool.
Persistent connections: Allowing a server to open a persistent
connection with a client such as a WebSocket.
Encryption: Handling encrypted connections such as TLS and SSL.
Certificates: Presenting certificates to a client and authentication
of client certificates.
Compression: Compression of responses.
Caching: An application-layer load balancer may offer the ability to
cache responses.
Logging: Logging of request and response metadata can serve as
an important audit trail or source for analytics data.
Request tracing: Assigning each request a unique id for the
purposes of logging, monitoring, and troubleshooting.
Redirects: The ability to redirect an incoming request based on
factors such as the requested path.
Fixed response: Returning a static response for a request such as
an error message.
Examples
Following are some of the load balancing solutions commonly used in
the industry:
Amazon Elastic Load Balancing
Azure Load Balancing
GCP Load Balancing
DigitalOcean Load Balancer
Nginx
HAProxy
Clustering
At a high level, a computer cluster is a group of two or more computers,
or nodes, that run in parallel to achieve a common goal. This allows
workloads consisting of a high number of individual, parallelizable tasks
to be distributed among the nodes in the cluster. As a result, these
tasks can leverage the combined memory and processing power of
each computer to increase overall performance.
To build a computer cluster, the individual nodes should be connected
to a network to enable internode communication. The software can
then be used to join the nodes together and form a cluster. It may have
a shared storage device and/or local storage on each node.
Typically, at least one node is designated as the leader node and acts
as the entry point to the cluster. The leader node may be responsible
for delegating incoming work to the other nodes and, if necessary,
aggregating the results and returning a response to the user.
Ideally, a cluster functions as if it were a single system. A user
accessing the cluster should not need to know whether the system is a
cluster or an individual machine. Furthermore, a cluster should be
designed to minimize latency and prevent bottlenecks in node-to-node
communication.
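The leader's delegate-then-aggregate pattern can be sketched with Python's `concurrent.futures`, where worker threads stand in for cluster nodes (a real cluster would distribute over the network, not within one process):

```python
from concurrent.futures import ThreadPoolExecutor

def work(task):
    """A stand-in for one parallelizable unit of work."""
    return task * task

tasks = range(8)

# The "leader" delegates tasks to worker "nodes" and aggregates the results,
# so the caller sees a single system rather than individual workers.
with ThreadPoolExecutor(max_workers=4) as leader:
    results = list(leader.map(work, tasks))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Note that `map` preserves task order when aggregating, which is one of the conveniences a leader node typically provides to the user.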
Types
Computer clusters can generally be categorized into three types:
Highly available (HA) or fail-over
Load balancing
High-performance computing (HPC)
Configurations
The two most commonly used high availability (HA) clustering
configurations are active-active and active-passive.
Active-Active
In an active-active configuration, all nodes are active at the same time and share the workload between them.
Active-Passive
In an active-passive configuration, not all nodes are active; a passive node serves as a standby that takes over only when an active node fails.
Advantages
Clustering provides advantages such as high availability, scalability, and improved performance, often at a lower cost than a single large machine.
Challenges
The most obvious challenge clustering presents is the increased
complexity of installation and maintenance. An operating system, the
application, and its dependencies must each be installed and updated
on every node.
This becomes even more complicated if the nodes in the cluster are
not homogeneous. Resource utilization for each node must also be monitored.
Examples
Clustering is commonly used in the industry, and often many
technologies offer some sort of clustering mode. For example:
Containers (eg. Kubernetes, Amazon ECS)
Databases (eg. Cassandra, MongoDB)
Cache (eg. Redis)
Caching
"There are only two hard things in Computer Science: cache
invalidation and naming things." - Phil Karlton
A hot cache is an instance where data was read from the memory at
the fastest possible rate. This happens when the data is retrieved from
L1.
A cold cache is the slowest rate at which data can be read; the read is still successful, so it is still considered a cache hit. The data is just found lower in the memory hierarchy, such as in L3 or below.
A warm cache is used to describe data that's found in L2 or L3. It's not
as fast as a hot cache, but it's still faster than a cold cache. Generally,
calling a cache warm is used to express that it's slower and closer to a
cold cache than a hot one.
Cache miss
A cache miss refers to the instance when the memory is searched and
the data isn't found. When this happens, the content is transferred and
written into the cache.
Cache Invalidation
Cache invalidation is a process where the computer system declares the cache entries as invalid and removes or replaces them. When data is modified, it should be invalidated in the cache; otherwise, this can cause inconsistent application behavior. There are three kinds of caching
systems:
Write-through cache
Write-around cache
Write-back cache
In a write-back cache, the write is done only to the caching layer, and the write is confirmed as soon as the write to the cache completes. The cache then asynchronously syncs this write to the database.
Pro: This would lead to reduced latency and high throughput for write-
intensive applications.
Con: There is a risk of data loss in case the caching layer crashes. We
can improve this by having more than one replica acknowledging the
write in the cache.
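A minimal write-back sketch: writes are acknowledged the moment they hit the cache, and the database sync happens later. The class and `flush()` method are illustrative (a real implementation would sync on a background task or timer, not an explicit call):

```python
class WriteBackCache:
    """Sketch of write-back caching: ack on cache write, sync to the DB later."""
    def __init__(self, database):
        self.cache = {}
        self.dirty = set()        # keys written to cache but not yet to the DB
        self.database = database  # stands in for the underlying data store

    def write(self, key, value):
        self.cache[key] = value   # acknowledged immediately: low write latency
        self.dirty.add(key)

    def flush(self):
        # The "asynchronous sync", performed synchronously here for clarity.
        for key in self.dirty:
            self.database[key] = self.cache[key]
        self.dirty.clear()

db = {}
wb = WriteBackCache(db)
wb.write("user:1", "Alice")
print(db)    # {} -- the database hasn't been written yet
wb.flush()
print(db)    # {'user:1': 'Alice'}
```

The gap between `write` and `flush` is exactly the data-loss window mentioned above: if the cache crashes before flushing, the dirty entries are gone.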
Eviction policies
Following are some of the most common cache eviction policies:
First In First Out (FIFO): The cache evicts the first block accessed
first without any regard to how often or how many times it was
accessed before.
Last In First Out (LIFO): The cache evicts the block accessed most
recently first without any regard to how often or how many times it
was accessed before.
Least Recently Used (LRU): Discards the least recently used items
first.
Most Recently Used (MRU): Discards, in contrast to LRU, the most
recently used items first.
Least Frequently Used (LFU): Counts how often an item is needed.
Those that are used least often are discarded first.
Random Replacement (RR): Randomly selects a candidate item
and discards it to make space when necessary.
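LRU, the most widely used of these policies, is a natural fit for Python's `collections.OrderedDict`, which tracks insertion order and can move entries to the end in O(1):

```python
from collections import OrderedDict

class LRUCache:
    """Least Recently Used eviction: on overflow, discard the entry
    that was accessed longest ago."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)         # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" is now the most recently used
cache.put("c", 3)      # capacity exceeded: evicts "b", not "a"
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

Swapping the eviction line for a frequency counter would give LFU; popping from the other end would give MRU.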
Distributed Cache
A distributed cache pools together the RAM of multiple networked machines into a single in-memory data store that the application nodes share, providing fast access to data at a scale beyond any single machine.
Global Cache
As the name suggests, we will have a single shared cache that all the
application nodes will use. When the requested data is not found in the
global cache, it's the responsibility of the cache to find out the missing
piece of data from the underlying data store.
Use cases
Caching can have many real-world use cases such as:
Database Caching
Content Delivery Network (CDN)
Domain Name System (DNS) Caching
API Caching
When not to use caching?
Let's also look at some scenarios where we should not use cache:
Caching isn't helpful when it takes just as long to access the cache
as it does to access the primary data store.
Caching doesn't work as well when requests have low repetition
(higher randomness), because caching performance comes from
repeated memory access patterns.
Caching isn't helpful when the data changes frequently, as the
cached version gets out of sync, and the primary data store must
be accessed every time.
Advantages
Below are some advantages of caching:
Improves performance
Reduces latency
Reduces load on the database
Reduces network cost
Increases read throughput
Examples
Here are some commonly used technologies for caching:
Redis
Memcached
Amazon ElastiCache
Aerospike
Content Delivery Network (CDN)
A content delivery network (CDN) is a geographically distributed group
of servers that work together to provide fast delivery of internet
content. Generally, static files such as HTML/CSS/JS, photos, and
videos are served from CDN.
In a CDN, the origin server contains the original versions of the content
while the edge servers are numerous and distributed across various
locations around the world.
To minimize the distance between the visitors and the website's server,
a CDN stores a cached version of its content in multiple geographical
locations known as edge locations. Each edge location contains a
number of caching servers responsible for content delivery to visitors
within its proximity.
Once the static assets are cached on all the CDN servers for a
particular location, all subsequent website visitor requests for static
assets will be delivered from these edge servers instead of the origin,
thus reducing origin load and improving scalability.
For example, when someone in the UK requests our website which
might be hosted in the USA, they will be served from the closest edge
location such as the London edge location. This is much quicker than
having the visitor make a complete request to the origin server, which would increase the latency.
Types
CDNs are generally divided into two types:
Push CDNs
Push CDNs receive new content whenever changes occur on the
server. We take full responsibility for providing content, uploading
directly to the CDN, and rewriting URLs to point to the CDN. We can
configure when content expires and when it is updated. Content is
uploaded only when it is new or changed, minimizing traffic, but
maximizing storage.
Sites with a small amount of traffic or sites with content that isn't often
updated work well with push CDNs. Content is placed on the CDNs
once, instead of being re-pulled at regular intervals.
Pull CDNs
In a Pull CDN situation, the cache is updated based on request. When the client sends a request for static assets and the CDN doesn't have them, it fetches the newly updated assets from the origin server, populates its cache with the new asset, and then sends this cached asset to the user.
Contrary to the Push CDN, this requires less maintenance because
cache updates on CDN nodes are performed based on requests from
the client to the origin server. Sites with heavy traffic work well with
pull CDNs, as traffic is spread out more evenly with only recently-
requested content remaining on the CDN.
Disadvantages
As we all know, good things come with extra costs, so let's discuss
some disadvantages of CDNs:
Extra charges: It can be expensive to use a CDN, especially for
high-traffic services.
Restrictions: Some organizations and countries have blocked the
domains or IP addresses of popular CDNs.
Location: If most of our audience is located in a country where the
CDN has no servers, the data on our website may have to travel
further than without using any CDN.
Examples
Here are some widely used CDNs:
Amazon CloudFront
Google Cloud CDN
Cloudflare CDN
Fastly
Proxy
A proxy server is an intermediary piece of hardware/software sitting
between the client and the backend server. It receives requests from
clients and relays them to the origin servers. Typically, proxies are used
to filter requests, log requests, or sometimes transform requests (by
adding/removing headers, encrypting/decrypting, or compression).
Types
Forward Proxy
A forward proxy, often called a proxy, proxy server, or web proxy, is a
server that sits in front of a group of client machines. When those
computers make requests to sites and services on the internet, the
proxy server intercepts those requests and then communicates with
web servers on behalf of those clients, like a middleman.
Advantages
Here are some advantages of a forward proxy:
Blocks access to certain content
Allows access to geo-restricted content
Provides anonymity
Avoids other browsing restrictions
Although proxies provide the benefits of anonymity, they can still track
our personal information. Setup and maintenance of a proxy server can
be costly and requires configurations.
Reverse Proxy
A reverse proxy is a server that sits in front of one or more web
servers, intercepting requests from clients. When clients send requests
to the origin server of a website, those requests are intercepted by the
reverse proxy server.
The difference between a forward and reverse proxy is subtle but
important. A simplified way to sum it up would be to say that a forward
proxy sits in front of a client and ensures that no origin server ever
communicates directly with that specific client. On the other hand, a
reverse proxy sits in front of an origin server and ensures that no client
ever communicates directly with that origin server.
Examples
Sequence
Overall availability decreases when two components are in sequence.
Availability (Total) = Availability (Foo) ∗ Availability (Bar)
For example, if both Foo and Bar each had 99.9% availability, their total
availability in sequence would be 99.8%.
Parallel
Overall availability increases when two components are in parallel.
Availability (Total) = 1 − (1 − Availability (Foo)) ∗ (1 − Availability (Bar))
For example, if both Foo and Bar each had 99.9% availability, their total
availability in parallel would be 99.9999%.
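The two formulas above are easy to check numerically. A small sketch (function names are mine, not from the text):

```python
def sequential_availability(*parts):
    """Overall availability of components in sequence: the product."""
    result = 1.0
    for a in parts:
        result *= a
    return result

def parallel_availability(*parts):
    """Overall availability of redundant components in parallel:
    one minus the probability that every component is down at once."""
    unavailability = 1.0
    for a in parts:
        unavailability *= (1 - a)
    return 1 - unavailability

# Two components at 99.9% each:
print(round(sequential_availability(0.999, 0.999) * 100, 4))  # 99.8001
print(round(parallel_availability(0.999, 0.999) * 100, 4))    # 99.9999
```

This makes the trade-off concrete: chaining components always lowers availability, while adding redundant replicas raises it.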
Availability vs Reliability
If a system is reliable, it is available. However, if it is available, it is not
necessarily reliable. In other words, high reliability contributes to high
availability, but it is possible to achieve high availability even with an
unreliable system.
Vertical scaling
Vertical scaling (also known as scaling up) expands a system's scale by
adding more power (CPU, RAM, storage) to an existing machine.
Advantages
Simple to implement
Easier to manage
Data consistency
Disadvantages
Risk of high downtime
Harder to upgrade
Can be a single point of failure
Horizontal scaling
Horizontal scaling (also known as scaling out) expands a system's
scale by adding more machines. It improves the performance of the
server by adding more instances to the existing pool of servers,
allowing the load to be distributed more evenly.
Advantages
Increased redundancy
Better fault tolerance
Flexible and efficient
Easier to upgrade
Disadvantages
Increases complexity
Data inconsistency
Increased load on downstream services
Storage
Storage is a mechanism that enables a system to retain data, either
temporarily or permanently. This topic is mostly skipped over in the
context of system design, however, it is important to have a basic
understanding of some common types of storage techniques that can
help us fine-tune our storage components. Let's discuss some
important storage concepts:
RAID
RAID (Redundant Array of Independent Disks) is a way of storing the
same data on multiple hard disks or solid-state drives (SSDs) to protect
data in the case of a drive failure.
There are different RAID levels, however, and not all have the goal of
providing redundancy. Let's discuss some commonly used RAID levels:
RAID 0: Also known as striping, data is split evenly across all the
drives in the array.
RAID 1: Also known as mirroring, at least two drives contain an
exact copy of a set of data. If a drive fails, the others will still work.
RAID 5: Striping with parity. Requires the use of at least 3 drives,
striping the data across multiple drives like RAID 0, but also has a
parity distributed across the drives.
RAID 6: Striping with double parity. RAID 6 is like RAID 5, but the
parity data is written to two drives.
RAID 10: Combines striping plus mirroring from RAID 0 and RAID 1.
It provides security by mirroring all data on secondary drives while
using striping across each set of drives to speed up data transfers.
Comparison
Let's compare the features of different RAID levels:

| Feature | RAID 0 | RAID 1 |
| --- | --- | --- |
| Minimum disks | 2 | 2 |
| Read performance | High | High |
| Write performance | High | Medium |
| Cost | Low | High |
| Fault tolerance | None | Single-drive failure |
Volumes
Volume is a fixed amount of storage on a disk or tape. The term volume
is often used as a synonym for the storage itself, but it is possible for a
single disk to contain more than one volume or a volume to span more
than one disk.
File storage
File storage stores data as files and presents it to users as a
hierarchical directory structure. Its main advantage is providing a
user-friendly way to store and retrieve files. To locate a file in file
storage, the complete path of the file is required. It is economical and
easily structured, and is usually found on hard drives, which means that
files appear exactly the same to the user as they do on the drive.
Example: Amazon EFS, Azure files, Google Cloud Filestore, etc.
Block storage
Block storage divides data into blocks (chunks) and stores them as
separate pieces. Each block of data is given a unique identifier, which
allows a storage system to place the smaller pieces of data wherever it
is most convenient.
Block storage also decouples data from user environments, allowing
that data to be spread across multiple environments. This creates
multiple paths to the data and allows the user to retrieve it quickly.
When a user or application requests data from a block storage system,
the underlying storage system reassembles the data blocks and
presents the data to the user or application.
Example: Amazon EBS.
Object Storage
Object storage, which is also known as object-based storage, breaks
data files up into pieces called objects. It then stores those objects in a
single, flat repository, keeping each object's data together with its
metadata and a unique identifier.
NAS
A NAS (Network Attached Storage) is a storage device connected to a
network that allows storage and retrieval of data from a central
location for authorized network users. NAS devices are flexible,
meaning that as we need additional storage, we can add to what we
have. It's faster, less expensive, and provides all the benefits of a public
cloud on-site, giving us complete control.
HDFS
The Hadoop Distributed File System (HDFS) is a distributed file system
designed to run on commodity hardware. HDFS is highly fault-tolerant
and is designed to be deployed on low-cost hardware. HDFS provides
high throughput access to application data and is suitable for
applications that have large data sets. It has many similarities with
existing distributed file systems.
HDFS is designed to reliably store very large files across machines in a
large cluster. It stores each file as a sequence of blocks; all blocks in a
file except the last are the same size. The blocks of a file are
replicated for fault tolerance.
Databases and DBMS
What is a Database?
A database is an organized collection of structured information, or data,
typically stored electronically in a computer system.
What is DBMS?
A database typically requires a comprehensive database software
program known as a Database Management System (DBMS). A DBMS
serves as an interface between the database and its end-users or
programs, allowing users to retrieve, update, and manage how the
information is organized and optimized. A DBMS also facilitates
oversight and control of databases, enabling a variety of administrative
operations such as performance monitoring, tuning, and backup and
recovery.
Components
Here are some common components found across different databases:
Schema
The role of a schema is to define the shape of a data structure, and
specify what kinds of data can go where. Schemas can be strictly
enforced across the entire database, loosely enforced on part of the
database, or they might not exist at all.
Table
A table is an organized arrangement of data in rows and columns, and is
where the actual data of a database lives.
Column
A column contains a set of data values of a particular type, one value
for each row of the database. A column may contain text values,
numbers, enums, timestamps, etc.
Row
Data in a table is recorded in rows. A table can contain thousands or
even millions of rows.
Types
SQL and NoSQL databases are broad topics and will be discussed
separately in SQL databases and NoSQL databases. Learn how they
compare to each other in SQL vs NoSQL databases.
Challenges
Some common challenges faced while running databases at scale:
Absorbing significant increases in data volume: The explosion of
data coming in from sensors, connected machines, and dozens of
other sources.
Ensuring data security: Data breaches are happening everywhere
these days, it's more important than ever to ensure that data is
secure but also easily accessible to users.
Keeping up with demand: Companies need real-time access to
their data to support timely decision-making and to take
advantage of new opportunities.
Managing and maintaining the database and infrastructure: As
databases become more complex and data volumes grow,
companies are faced with the expense of hiring additional talent to
manage their databases.
Removing limits on scalability: A business needs to grow if it's
going to survive, and its data management must grow along with it.
But it's very difficult to predict how much capacity the company
will need, particularly with on-premises databases.
Ensuring data residency, data sovereignty, or latency
requirements: Some organizations have use cases that are better
suited to run on-premises. In those cases, engineered systems
that are pre-configured and pre-optimized for running the
database are ideal.
SQL databases
A SQL (or relational) database is a collection of data items with pre-
defined relationships between them. These items are organized as a
set of tables with columns and rows. Tables are used to hold
information about the objects to be represented in the database. Each
column in a table holds a certain kind of data and a field stores the
actual value of an attribute. The rows in the table represent a collection
of related values of one object or entity.
Each row in a table could be marked with a unique identifier called a
primary key, and rows among multiple tables can be made related
using foreign keys. This data can be accessed in many different ways
without re-organizing the database tables themselves. SQL databases
usually follow the ACID consistency model.
Materialized views
A materialized view is a pre-computed data set derived from a query
specification and stored for later use. Because the data is pre-
computed, querying a materialized view is faster than executing a
query against the base table of the view. This performance difference
can be significant when a query is run frequently or is sufficiently
complex.
It also enables data subsetting and improves the performance of
complex queries that run on large data sets which reduces network
loads. There are other uses of materialized views, but they are mostly
used for performance and replication.
N+1 query problem
The N+1 query problem happens when the data access layer executes
N additional SQL statements to fetch the same data that could have
been retrieved when executing the primary SQL query. The larger the
value of N, the more queries are executed and the larger the
performance impact.
This is commonly seen in GraphQL and ORM (Object-Relational
Mapping) tools and can be addressed by optimizing the SQL query or
using a dataloader that batches consecutive requests and makes a
single data request under the hood.
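A tiny demonstration of the N+1 pattern, using an in-memory SQLite database. The tables, names, and data are made up for illustration; the point is the query count, not the schema.

```python
import sqlite3

# Schema and data invented for illustration: authors and their posts.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE post (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'Ann'), (2, 'Ben'), (3, 'Cam');
    INSERT INTO post VALUES (1, 1, 'a'), (2, 2, 'b'), (3, 3, 'c');
""")

def fetch_n_plus_one():
    queries = 0
    authors = conn.execute("SELECT id, name FROM author").fetchall()
    queries += 1                      # the primary query
    result = []
    for author_id, name in authors:   # one extra query per author: 1 + N
        titles = conn.execute(
            "SELECT title FROM post WHERE author_id = ?", (author_id,)
        ).fetchall()
        queries += 1
        result.append((name, [t for (t,) in titles]))
    return result, queries

def fetch_single_join():
    rows = conn.execute("""
        SELECT author.name, post.title FROM author
        JOIN post ON post.author_id = author.id
    """).fetchall()
    return rows, 1                    # same data, a single round-trip

_, n_queries = fetch_n_plus_one()
print(n_queries)   # 4 queries for 3 authors (1 + N)
```

The join (or a dataloader-style batch) returns the same data in one round-trip, which is the standard fix.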
Advantages
Let's look at some advantages of using relational databases:
Simple and accurate
Accessibility
Data consistency
Flexibility
Disadvantages
Below are the disadvantages of relational databases:
Expensive to maintain
Difficult schema evolution
Performance hits (join, denormalization, etc.)
Difficult to scale due to poor horizontal scalability
Examples
Here are some commonly used relational databases:
PostgreSQL
MySQL
MariaDB
Amazon Aurora
NoSQL databases
NoSQL is a broad category that includes any database that doesn't use
SQL as its primary data access language. These types of databases
are also sometimes referred to as non-relational databases. Unlike in
relational databases, data in a NoSQL database doesn't have to
conform to a pre-defined schema. NoSQL databases follow the BASE
consistency model.
Below are different types of NoSQL databases:
Document
A document database (also known as a document-oriented database
or a document store) is a database that stores information in
documents. They are general-purpose databases that serve a variety
of use cases for both transactional and analytical applications.
Advantages
Intuitive and flexible
Easy horizontal scaling
Schemaless
Disadvantages
Schemaless
Non-relational
Examples
MongoDB
Amazon DocumentDB
CouchDB
Key-value
One of the simplest types of NoSQL databases, key-value databases
save data as a group of key-value pairs made up of two data items
each. They're also sometimes referred to as a key-value store.
Advantages
Simple and performant
Highly scalable for high volumes of traffic
Session management
Optimized lookups
Disadvantages
Basic CRUD
Values can't be filtered
Lacks indexing and scanning capabilities
Not optimized for complex queries
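A key-value store's core is just a map from keys to values, often with per-key expiry for use cases like session management. Below is a toy in-memory sketch in the spirit of a Redis-style SET/GET with TTL; it is illustrative only and not the API of any real product.

```python
import time

class KVStore:
    """Toy in-memory key-value store with optional per-key expiry."""

    def __init__(self):
        self._data = {}               # key -> (value, expires_at or None)

    def set(self, key, value, ttl=None):
        expires_at = time.monotonic() + ttl if ttl is not None else None
        self._data[key] = (value, expires_at)

    def get(self, key, default=None):
        entry = self._data.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if expires_at is not None and time.monotonic() >= expires_at:
            del self._data[key]       # lazily evict expired keys on read
            return default
        return value

store = KVStore()
store.set("session:42", {"user": "ann"}, ttl=30)
print(store.get("session:42"))   # {'user': 'ann'}
```

Note how every operation is a single hash lookup: this is why key-value stores are so fast, and also why they lack filtering and complex-query capabilities.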
Examples
Redis
Memcached
Amazon DynamoDB
Aerospike
Graph
A graph database is a NoSQL database that uses graph structures for
semantic queries with nodes, edges, and properties to represent and
store data instead of tables or documents.
The graph relates the data items in the store to a collection of nodes
and edges, the edges representing the relationships between the
nodes. The relationships allow data in the store to be linked together
directly and, in many cases, retrieved with one operation.
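Because relationships are stored as edges, fetching a node's connections is a direct lookup rather than a join. A minimal adjacency-list sketch (node names and the "friends" relation are invented for illustration):

```python
from collections import defaultdict

class Graph:
    """Minimal undirected graph stored as adjacency sets."""

    def __init__(self):
        self.edges = defaultdict(set)

    def add_edge(self, a, b):          # e.g. a FRIENDS_WITH relationship
        self.edges[a].add(b)
        self.edges[b].add(a)

    def neighbors(self, node):
        return self.edges[node]        # one operation, no join

    def friends_of_friends(self, node):
        out = set()
        for friend in self.edges[node]:
            out |= self.edges[friend]
        return out - self.edges[node] - {node}

g = Graph()
g.add_edge("ann", "ben")
g.add_edge("ben", "cam")
print(g.friends_of_friends("ann"))   # {'cam'}
```

Traversals like `friends_of_friends` are exactly the queries (recommendations, fraud rings, social graphs) where graph databases shine.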
Advantages
Query speed
Agile and flexible
Explicit data representation
Disadvantages
Complex
No standardized query language
Use cases
Fraud detection
Recommendation engines
Social networks
Network mapping
Examples
Neo4j
ArangoDB
Amazon Neptune
JanusGraph
Time series
A time-series database is a database optimized for time-stamped, or
time series, data.
Advantages
Fast insertion and retrieval
Efficient data storage
Use cases
IoT data
Metrics analysis
Application monitoring
Understand financial trends
Examples
InfluxDB
Apache Druid
Wide column
Wide-column databases store data in tables, rows, and dynamic columns.
Column names and formats can vary from row to row, which makes them
well suited for very large datasets.
Multi-model
Multi-model databases combine different database models (i.e.
relational, graph, key-value, document, etc.) into a single, integrated
backend. This means they can accommodate various data types,
indexes, queries, and store data in more than one model.
Advantages
Flexibility
Suitable for complex projects
Data consistency
Disadvantages
Complex
Less mature
Examples
ArangoDB
Azure Cosmos DB
Couchbase
SQL vs NoSQL databases
In the world of databases, there are two main types of solutions, SQL
(relational) and NoSQL (non-relational) databases. Both of them differ
in the way they were built, the kind of information they store, and how
they store it. Relational databases are structured and have predefined
schemas while non-relational databases are unstructured, distributed,
and have a dynamic schema.
High-level differences
Here are some high-level differences between SQL and NoSQL:
Storage
SQL stores data in tables, where each row represents an entity and
each column represents a data point about that entity.
NoSQL databases store data in a variety of models, such as key-value
pairs, documents, graphs, or wide columns.
Schema
In SQL, each record conforms to a fixed schema, meaning the columns
must be decided and chosen before data entry and each row must
have data for each column. The schema can be altered later, but it
involves modifying the database using migrations.
Whereas in NoSQL, schemas are dynamic. Columns can be added on
the fly, and each row (or equivalent) doesn't have to contain data for
each column.
Querying
SQL databases use SQL (structured query language) for defining and
manipulating the data, which is very powerful.
In a NoSQL database, queries are focused on a collection of
documents. Different databases have different syntax for querying.
Scalability
In most common situations, SQL databases are vertically scalable,
which can get very expensive. It is possible to scale a relational
database across multiple servers, but this is a challenging and time-
consuming process.
On the other hand, NoSQL databases are horizontally scalable,
meaning we can add more servers easily to our NoSQL database
infrastructure to handle large traffic. Any cheap commodity hardware
or cloud instances can host NoSQL databases, thus making them a lot
more cost-effective.
Reliability
The vast majority of relational databases are ACID compliant. So, when
it comes to data reliability and a safe guarantee of performing
transactions, SQL databases are still the better bet.
Most of the NoSQL solutions sacrifice ACID compliance for
performance and scalability.
Reasons
As always, we should pick the technology that best fits the
requirements. So, let's look at some reasons for picking a SQL or
NoSQL based database:
For SQL
Structured data with strict schema
Relational data
Need for complex joins
Transactions
Lookups by index are very fast
For NoSQL
Dynamic or flexible schema
Non-relational data
No need for complex joins
Database Replication
Replication involves copying and maintaining data across multiple
database nodes to improve availability and read throughput. Let's discuss
two common topologies:
Master-Slave Replication
The master serves reads and writes, replicating writes to one or more
slaves, which serve only reads. Slaves can also replicate additional
slaves in a tree-like fashion. If the master goes offline, the system can
continue to operate in read-only mode until a slave is promoted to a
master or a new master is provisioned.
Advantages
Backups of the entire database have relatively little impact on the
master.
Applications can read from the slave(s) without impacting the
master.
Slaves can be taken offline and synced back to the master without
any downtime.
Disadvantages
Replication adds more hardware and additional complexity.
Downtime and possibly loss of data when a master fails.
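The routing rule at the heart of master-slave replication — writes go to the master and are copied to replicas, reads are spread across the replicas — can be sketched as follows. The class, the dict-based "storage", and the synchronous-replication simplification are all illustrative assumptions.

```python
import itertools

class ReplicatedStore:
    """Toy master-slave setup: writes hit the master and are replicated;
    reads are served round-robin from the replicas."""

    def __init__(self, n_replicas=2):
        self.master = {}
        self.replicas = [{} for _ in range(n_replicas)]
        self._read_cycle = itertools.cycle(range(n_replicas))

    def write(self, key, value):
        self.master[key] = value
        for replica in self.replicas:   # synchronous replication sketch;
            replica[key] = value        # real systems often replicate async

    def read(self, key):
        replica = self.replicas[next(self._read_cycle)]
        return replica.get(key)         # master is never hit for reads

store = ReplicatedStore()
store.write("k", "v")
print(store.read("k"))   # 'v', served from a replica
```

A real deployment replicates asynchronously, which is where the replication-lag and failover concerns listed above come from.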
Master-Master Replication
Both masters serve reads/writes and coordinate with each other. If
either master goes down, the system can continue to operate with
both reads and writes.
Advantages
Applications can read from both masters.
Distributes write load across both master nodes.
Simple, automatic, and quick failover.
Disadvantages
Not as simple as master-slave to configure and deploy.
Either loosely consistent or have increased write latency due to
synchronization.
Conflict resolution comes into play as more write nodes are added
and as latency increases.
Indexes
Indexes are data structures that speed up data retrieval at the cost of
extra storage and slower writes. Let's look at two common kinds of index:
Dense Index
In a dense index, an index record is created for every row of the table.
Records can be located directly as each record of the index holds the
search key value and the pointer to the actual record.
Sparse Index
In a sparse index, index records are created only for some of the rows.
To locate a record, we find the nearest preceding index entry and then
scan forward from there.
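A sparse index can be sketched with a sorted array and binary search. The block size and the key layout below are invented for illustration: the index keeps one entry per block, so a lookup binary-searches the index and then scans a short range of rows.

```python
from bisect import bisect_right

# Rows sorted by key (even keys 0..98), standing in for a sorted table.
rows = [(k, f"record-{k}") for k in range(0, 100, 2)]
BLOCK = 10  # one sparse index entry per 10 rows (assumed block size)
sparse_index = [(rows[i][0], i) for i in range(0, len(rows), BLOCK)]

def lookup(key):
    keys = [k for k, _ in sparse_index]
    pos = bisect_right(keys, key) - 1          # nearest index entry <= key
    if pos < 0:
        return None
    start = sparse_index[pos][1]
    for k, record in rows[start:start + BLOCK]:  # short scan in the block
        if k == key:
            return record
    return None

print(lookup(42))   # 'record-42'
```

A dense index would instead hold one entry per row, trading extra space for a direct pointer to every record.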
Normalization and Denormalization
Terms
Before we go any further, let's look at some commonly used terms in
normalization and denormalization.
Keys
Primary key: Column or group of columns that can be used to uniquely
identify every row of the table.
Composite key: A primary key made up of multiple columns.
Super key: A set of attributes that can uniquely identify all the rows in a
table.
Candidate key: A minimal super key, i.e. a super key with no redundant
attributes.
Foreign key: It is a reference to a primary key of another table.
Alternate key: Keys that are not primary keys are known as alternate
keys.
Surrogate key: A system-generated value that uniquely identifies each
entry in a table when no other column was able to hold properties of a
primary key.
Dependencies
Partial dependency: Occurs when a non-key attribute depends on only part
of a composite primary key.
Functional dependency: It is a relationship that exists between two
attributes, typically between the primary key and non-key attribute
within a table.
Anomalies
A database anomaly is a flaw that occurs due to incorrect planning or
storing everything in a flat database. It is generally addressed by the
process of normalization.
There are three types of database anomalies:
Insertion anomaly: Occurs when we are not able to insert certain
attributes in the database without the presence of other attributes.
Update anomaly: Occurs in case of data redundancy and partial
update. In other words, a correct update of the database needs other
actions such as addition, deletion, or both.
Deletion anomaly: Occurs where deletion of some data requires
deletion of other data.
Example
Let's consider the following table which is not normalized:
| ID | Name | Role | Team |
| --- | --- | --- | --- |
| 1 | Peter | Software Engineer | A |
| 2 | Brian | DevOps Engineer | B |
| 3 | Hailey | Product Manager | C |
| 4 | Hailey | Product Manager | C |
| 5 | Steve | Frontend Engineer | D |
Let's imagine, we hired a new person "John" but they might not be
assigned a team immediately. This will cause an insertion anomaly as
the team attribute is not yet present.
Next, let's say Hailey from Team C got promoted. To reflect that change
in the database, we need to update 2 rows to maintain consistency,
which can cause an update anomaly.
Finally, we would like to remove Team B, but to do that we also need
to remove additional information such as name and role; this is an
example of a deletion anomaly.
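These anomalies disappear once the flat table is split. A sketch with an in-memory SQLite database, assuming a team column as the prose describes (John's role and the specific team names are illustrative): employees and teams get their own tables joined by a foreign key, so a team-level change touches exactly one row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE team (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employee (
        id INTEGER PRIMARY KEY,
        name TEXT,
        role TEXT,
        team_id INTEGER REFERENCES team(id)  -- NULL until assigned:
    );                                       -- no insertion anomaly
    INSERT INTO team VALUES (1, 'Team B'), (2, 'Team C');
    INSERT INTO employee VALUES
        (3, 'Hailey', 'Product Manager', 2),
        (4, 'Hailey', 'Product Manager', 2),
        (6, 'John', 'Software Engineer', NULL);  -- hired, no team yet
""")

# Renaming Team C touches one row, so there is no update anomaly:
conn.execute("UPDATE team SET name = 'Team C+' WHERE id = 2")
rows = conn.execute("""
    SELECT employee.name, team.name FROM employee
    JOIN team ON team.id = employee.team_id
""").fetchall()
print(rows)   # [('Hailey', 'Team C+'), ('Hailey', 'Team C+')]
```

Deleting Team B now only removes the team row; Brian's name and role would live on in the employee table, avoiding the deletion anomaly.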
Normalization
Normalization is the process of organizing data in a database. This
includes creating tables and establishing relationships between those
tables according to rules designed both to protect the data and to
make the database more flexible by eliminating redundancy and
inconsistent dependency.
Normal forms
Normal forms are a series of guidelines to ensure that the database is
normalized. Let's discuss some essential normal forms:
1NF
For a table to be in the first normal form (1NF), it must satisfy the
following rules:
Repeating groups are not permitted.
Identify each set of related data with a primary key.
Set of related data should have a separate table.
Mixing data types in the same column is not permitted.
2NF
For a table to be in the second normal form (2NF), it must satisfy the
following rules:
Satisfies the first normal form (1NF).
Should not have any partial dependency.
3NF
For a table to be in the third normal form (3NF), it must satisfy the
following rules:
Satisfies the second normal form (2NF).
Transitive functional dependencies are not permitted.
BCNF
Boyce-Codd normal form (or BCNF) is a slightly stronger version of the
third normal form (3NF) used to address certain types of anomalies not
dealt with by 3NF as originally defined. Sometimes it is also known as
the 3.5 normal form (3.5NF).
Advantages
Here are some advantages of normalization:
Reduces data redundancy.
Better data design.
Increases data consistency.
Enforces referential integrity.
Disadvantages
Let's look at some disadvantages of normalization:
Data design is complex.
Slower performance.
Maintenance overhead.
Require more joins.
Denormalization
Denormalization is a database optimization technique in which we add
redundant data to one or more tables. This can help us avoid costly
joins in a relational database. It attempts to improve read performance
at the expense of some write performance. Redundant copies of the
data are written in multiple tables to avoid expensive joins.
Once data becomes distributed with techniques such as federation and
sharding, managing joins across the network further increases
complexity. Denormalization might circumvent the need for such
complex joins.
Note: Denormalization does not mean reversing normalization.
Advantages
Let's look at some advantages of denormalization:
Retrieving data is faster.
Writing queries is easier.
Reduction in number of tables.
Convenient to manage.
Disadvantages
Below are some disadvantages of denormalization:
Expensive inserts and updates.
Increases data redundancy.
Possible data inconsistency.
More storage required.
ACID
The term ACID stands for Atomicity, Consistency, Isolation, and
Durability. ACID properties are used for maintaining data integrity
during transaction processing.
In order to maintain consistency before and after a transaction,
relational databases follow the ACID properties. Let us understand these
terms:
Atomic
All operations in a transaction succeed or every operation is rolled
back.
Consistent
On the completion of a transaction, the database is structurally sound.
Isolated
Transactions do not contend with one another. Contentious access to
data is moderated by the database so that transactions appear to run
sequentially.
Durable
Once the transaction has been completed and the writes and updates
have been written to the disk, it will remain in the system even if a
system failure occurs.
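Atomicity in particular is easy to demonstrate with SQLite. In this sketch (table, accounts, and amounts are invented), both legs of a transfer run in one transaction; if the second leg would violate a constraint, the first is rolled back too, leaving balances untouched.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE account (id TEXT PRIMARY KEY,
                          balance INTEGER CHECK (balance >= 0));
    INSERT INTO account VALUES ('alice', 100), ('bob', 50);
""")
conn.commit()

def transfer(src, dst, amount):
    try:
        with conn:   # the 'with' block is committed or rolled back atomically
            conn.execute("UPDATE account SET balance = balance - ? "
                         "WHERE id = ?", (amount, src))
            conn.execute("UPDATE account SET balance = balance + ? "
                         "WHERE id = ?", (amount, dst))
    except sqlite3.IntegrityError:
        pass         # CHECK violated: the whole transaction was rolled back

transfer("alice", "bob", 500)   # would overdraw alice: rejected atomically
balances = dict(conn.execute("SELECT id, balance FROM account"))
print(balances)   # {'alice': 100, 'bob': 50}
```

A valid transfer commits both updates together, which is exactly the "all or nothing" guarantee described above.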
BASE
With the increasing amount of data and high availability requirements,
the approach to database design has also changed dramatically. To
increase the ability to scale and at the same time be highly available,
we move the logic from the database to separate servers. In this way,
the database becomes more independent and focused on the actual
process of storing data.
In the NoSQL database world, ACID transactions are less common as
some databases have loosened the requirements for immediate
consistency, data freshness, and accuracy in order to gain other
benefits, like scale and resilience.
BASE properties are much looser than ACID guarantees, but there isn't
a direct one-for-one mapping between the two consistency models.
Let us understand these terms:
Basic Availability
The database appears to work most of the time.
Soft-state
The state of the system could change over time, even without further
input, as it converges toward consistency.
Eventual consistency
The data might not be consistent immediately but eventually, it
becomes consistent. Reads in the system are still possible even though
they may not give the correct response due to inconsistency.
Consistency
Consistency means that all clients see the same data at the same time,
no matter which node they connect to. For this to happen, whenever
data is written to one node, it must be instantly forwarded or replicated
across all the nodes in the system before the write is deemed
"successful".
Availability
Availability means that any client making a request for data gets a
response, even if one or more nodes are down.
Partition tolerance
Partition tolerance means the system continues to work despite
message loss or partial failure. A system that is partition-tolerant can
sustain any amount of network failure that doesn't result in a failure of
the entire network. Data is sufficiently replicated across combinations
of nodes and networks to keep the system up through intermittent
outages.
Consistency-Availability Tradeoff
We live in a physical world and can't guarantee the stability of a
network, so distributed databases must choose Partition Tolerance (P).
This implies a tradeoff between Consistency (C) and Availability (A).
CA database
A CA database delivers consistency and availability across all nodes. It
can't do this if there is a partition between any two nodes in the
system, and therefore can't deliver fault tolerance.
Example: PostgreSQL, MariaDB.
CP database
A CP database delivers consistency and partition tolerance at the
expense of availability. When a partition occurs between any two
nodes, the system has to shut down the non-consistent node until the
partition is resolved.
Example: MongoDB, Apache HBase.
AP database
An AP database delivers availability and partition tolerance at the
expense of consistency. When a partition occurs, all nodes remain
available but those at the wrong end of a partition might return an
older version of data than others. When the partition is resolved, AP
databases typically re-sync the nodes to repair all inconsistencies
in the system.
Example: Apache Cassandra, CouchDB.
PACELC Theorem
The PACELC theorem is an extension of the CAP theorem. It states that in the case of network partitioning (P) in a distributed system, one has to choose between Availability (A) and Consistency (C), but else (E), even when the system is running normally in the absence of partitions, one has to choose between Latency (L) and Consistency (C).
Transactions
A transaction is a series of database operations that are performed as a single logical unit of work.
States
A transaction in a database can be in one of the following states:
Active
In this state, the transaction is being executed. This is the initial state
of every transaction.
Partially Committed
When a transaction executes its final operation, it is said to be in a
partially committed state.
Committed
If a transaction executes all its operations successfully, it is said to be
committed. All its effects are now permanently established on the
database system.
Failed
The transaction is said to be in a failed state if any of the checks made
by the database recovery system fails. A failed transaction can no
longer proceed further.
Aborted
If any of the checks fail and the transaction has reached a failed state,
then the recovery manager rolls back all its write operations on the
database to bring the database back to its original state where it was
prior to the execution of the transaction. Transactions in this state are
aborted.
The database recovery module can select one of the two operations after a transaction aborts:
Restart the transaction
Kill the transaction
Terminated
If there isn't any roll-back or the transaction comes from the committed
state, then the system is consistent and ready for a new transaction
and the old transaction is terminated.
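The transitions above can be sketched as a tiny state machine (an illustration only; real database engines track far more detail):

```python
from enum import Enum, auto

class TxState(Enum):
    ACTIVE = auto()
    PARTIALLY_COMMITTED = auto()
    COMMITTED = auto()
    FAILED = auto()
    ABORTED = auto()
    TERMINATED = auto()

# Allowed transitions, mirroring the states described above.
TRANSITIONS = {
    TxState.ACTIVE: {TxState.PARTIALLY_COMMITTED, TxState.FAILED},
    TxState.PARTIALLY_COMMITTED: {TxState.COMMITTED, TxState.FAILED},
    TxState.COMMITTED: {TxState.TERMINATED},
    TxState.FAILED: {TxState.ABORTED},
    TxState.ABORTED: {TxState.TERMINATED, TxState.ACTIVE},  # terminate or restart
    TxState.TERMINATED: set(),
}

class Transaction:
    def __init__(self):
        self.state = TxState.ACTIVE  # initial state of every transaction

    def transition(self, new_state: TxState) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state

tx = Transaction()
tx.transition(TxState.PARTIALLY_COMMITTED)  # final operation executed
tx.transition(TxState.COMMITTED)            # all effects established
tx.transition(TxState.TERMINATED)           # ready for a new transaction
```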
Distributed Transactions
A distributed transaction is a set of operations on data that is
performed across two or more databases. It is typically coordinated
across separate nodes connected by a network, but may also span
multiple databases on a single server.
Two-Phase commit
Phases
Two-phase commit consists of the following phases:
Prepare phase
The prepare phase involves the coordinator node collecting consensus
from each of the participant nodes. The transaction will be aborted
unless each of the nodes responds that they're prepared.
Commit phase
If all participants respond to the coordinator that they are prepared,
then the coordinator asks all the nodes to commit the transaction. If a
failure occurs, the transaction will be rolled back.
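Here's a minimal single-process sketch of the two phases in Python. The Participant class and its prepare/commit/rollback methods are made up for illustration, not a real database API:

```python
class Participant:
    def __init__(self, name: str, can_commit: bool = True):
        self.name = name
        self.can_commit = can_commit
        self.state = "idle"

    def prepare(self) -> bool:
        # Phase 1: vote on whether this node can commit.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self) -> None:
        self.state = "committed"

    def rollback(self) -> None:
        self.state = "rolled_back"

def two_phase_commit(participants) -> bool:
    # Prepare phase: the coordinator collects consensus; abort unless
    # every participant responds that it is prepared.
    if all(p.prepare() for p in participants):
        # Commit phase: everyone voted yes, ask all nodes to commit.
        for p in participants:
            p.commit()
        return True
    # A failure occurred: roll the transaction back everywhere.
    for p in participants:
        p.rollback()
    return False
```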
Problems
The following problems may arise in the two-phase commit protocol:
What if one of the nodes crashes?
Three-phase commit
Phases
Three-phase commit consists of the following phases:
Prepare phase
This phase is the same as the two-phase commit.
Pre-commit phase
The coordinator issues the pre-commit message, and all the participating
nodes must acknowledge it. If a participant fails to receive this
message in time, then the transaction is aborted.
Commit phase
This step is also similar to the two-phase commit protocol.
Sagas
A saga is a sequence of local transactions where each local transaction updates the database and triggers the next one; if a step fails, compensating transactions undo the changes made by the preceding steps.
Coordination
There are two common implementation approaches:
Choreography: Each local transaction publishes domain events
that trigger local transactions in other services.
Orchestration: An orchestrator tells the participants what local
transactions to execute.
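The orchestration approach can be sketched as follows; the booking steps and their compensations are purely hypothetical:

```python
def run_saga(steps):
    """steps: list of (action, compensation) pairs of callables."""
    completed = []
    for action, compensation in steps:
        try:
            action()
        except Exception:
            # A step failed: run compensating transactions in reverse order.
            for undo in reversed(completed):
                undo()
            return False
        completed.append(compensation)
    return True

log = []

def fail():
    raise RuntimeError("payment service down")  # simulated failure

ok = run_saga([
    (lambda: log.append("reserve seat"), lambda: log.append("release seat")),
    (lambda: log.append("charge card"), lambda: log.append("refund card")),
    (fail, lambda: None),
])
# ok is False; log == ["reserve seat", "charge card", "refund card", "release seat"]
```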
Problems
The Saga pattern is particularly hard to debug.
There's a risk of cyclic dependency between saga participants.
Lack of participant data isolation imposes durability challenges.
Testing is difficult because all services must be running to simulate
a transaction.
Sharding
Data Partitioning
Data partitioning is a technique to break up a database into many
smaller parts. It is the process of splitting up a database or a table
across multiple machines to improve the manageability, performance,
and availability of a database.
Methods
There are many different ways to break up an application database into multiple smaller DBs. Below are three of the most popular methods used by various large-scale applications:
Horizontal Partitioning (or Sharding)
In this strategy, we split the table data horizontally based on the range
of values defined by the partition key. It is also referred to as database
sharding.
Vertical Partitioning
In vertical partitioning, we partition the data vertically based on
columns. We divide tables into relatively smaller tables with few
elements, and each part is present in a separate partition.
In this tutorial, we will specifically focus on sharding.
What is sharding?
Sharding is a horizontal partitioning strategy in which the rows of a table are split across multiple database instances, called shards, each holding a subset of the data.
Partitioning criteria
There are a large number of criteria available for data partitioning. Some of the most commonly used criteria are:
Hash-Based
This strategy divides the rows into different partitions based on a
hashing algorithm rather than grouping database rows based on
continuous indexes.
The disadvantage of this method is that dynamically adding/removing
database servers becomes expensive.
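A quick sketch of why resizing is expensive under this method, using Python's built-in hash purely for illustration:

```python
def partition(key: str, num_nodes: int) -> int:
    # Hash-based placement: hash the key, then take it modulo the node count.
    # Python's built-in hash is salted per process but stable within one run,
    # which is all this illustration needs.
    return hash(key) % num_nodes

keys = [f"user:{i}" for i in range(10_000)]
before = {k: partition(k, 4) for k in keys}
after = {k: partition(k, 5) for k in keys}   # one node added

moved = sum(1 for k in keys if before[k] != after[k])
# Typically around 80% of the keys land on a different node, which is why
# dynamically adding or removing servers is expensive with this method.
```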
List-Based
In list-based partitioning, each partition is defined and selected based
on the list of values on a column rather than a set of contiguous ranges
of values.
Range Based
Range partitioning maps data to various partitions based on ranges of
values of the partitioning key. In other words, we partition the table in
such a way that each partition contains rows within a given range
defined by the partition key.
Ranges should be contiguous but not overlapping, where each range
specifies a non-inclusive lower and upper bound for a partition. Any
partitioning key values equal to or higher than the upper bound of the
range are added to the next partition.
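A small sketch of range-based routing; the partition boundaries are made up:

```python
import bisect

# Each partition holds keys below its upper bound; bounds are contiguous
# and non-overlapping, as described above.
upper_bounds = [1000, 2000, 3000]       # p0: [0,1000), p1: [1000,2000), p2: [2000,3000)
partitions = ["p0", "p1", "p2", "p3"]   # p3 takes everything >= 3000

def route(key: int) -> str:
    # bisect_right sends a key equal to an upper bound to the next partition.
    return partitions[bisect.bisect_right(upper_bounds, key)]

route(150)    # "p0"
route(1000)   # "p1"  (equal to an upper bound -> next partition)
route(99999)  # "p3"
```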
Composite
As the name suggests, composite partitioning partitions the data
based on two or more partitioning techniques. Here we first partition
the data using one technique, and then each partition is further
subdivided into sub-partitions using the same or some other method.
Advantages
But why do we need sharding? Here are some advantages:
Availability: Provides logical independence to the partitioned
database, ensuring the high availability of our application. Here
individual partitions can be managed independently.
Scalability: Improves scalability by distributing the data across multiple partitions.
Security: Helps improve the system's security by storing sensitive
and non-sensitive data in different partitions. This could provide
better manageability and security to sensitive data.
Disadvantages
Complexity: Sharding increases the complexity of the system in
general.
Joins across shards: Once a database is partitioned and spread
across multiple machines it is often not feasible to perform joins
that span multiple database shards. Such joins will not be
performance efficient since data has to be retrieved from multiple
servers.
Rebalancing: If the data distribution is not uniform or there is a lot
of load on a single shard, in such cases we have to rebalance our
shards so that the requests are as equally distributed among the
shards as possible.
Consistent Hashing
Let's first understand the problem we're trying to solve.
In a traditional hashing-based distribution, a key is assigned to a node with something like hash(key) mod N. The problem is that whenever a node is added or removed, N changes and almost every key maps to a different node, forcing a massive re-distribution of data. Consistent hashing avoids this: when the number of nodes changes, on average only

$$ R = \frac{K}{N} $$

keys need to be moved.
Where,
R : Data that would require re-distribution.
K : Number of partition keys.
N : Number of nodes.
The output of the hash function is a range, let's say 0...m-1, which we can represent on our hash ring. We hash the requests and distribute
them on the ring depending on what the output was. Similarly, we also
hash the node and distribute them on the same ring as well.
$$
\begin{align}
& Hash(key_1) = P_1 \\
& Hash(key_2) = P_2 \\
& Hash(key_3) = P_3 \\
& \ldots \\
& Hash(key_n) = P_{m-1}
\end{align}
$$
Where,
key : Request/Node ID or IP.
P : Position on the hash ring.
m : Total range of the hash ring.
Now, when the request comes in we can simply route it to the closest
node in a clockwise (can be counterclockwise as well) manner. This
means that if a new node is added or removed, we can use the nearest
node and only a fraction of the requests need to be re-routed.
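Here's a minimal sketch of such a hash ring in Python, including the virtual nodes discussed below; the node names and the choice of MD5 are illustrative:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """A minimal consistent hash ring. Each physical node is placed on the
    ring at several positions ('virtual nodes') to smooth the distribution."""

    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes
        self._ring = []          # sorted list of (position, node)
        for node in nodes:
            self.add(node)

    @staticmethod
    def _pos(key: str) -> int:
        # Position on the ring: MD5 interpreted as an integer.
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node: str) -> None:
        for i in range(self.vnodes):
            bisect.insort(self._ring, (self._pos(f"{node}#{i}"), node))

    def remove(self, node: str) -> None:
        self._ring = [(p, n) for p, n in self._ring if n != node]

    def get(self, key: str) -> str:
        # Walk clockwise to the first node at or after the key's position,
        # wrapping around the ring if necessary.
        idx = bisect.bisect(self._ring, (self._pos(key), ""))
        return self._ring[idx % len(self._ring)][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
owner = ring.get("user:42")   # adding/removing a node only moves nearby keys
```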
In theory, consistent hashing should distribute the load evenly; however, in practice it often doesn't. The load distribution is usually uneven, and one server may end up handling the majority of the requests, becoming a hotspot and, essentially, a bottleneck for the system. We can fix this by adding extra nodes, but that can be expensive.
Let's see how we can address these issues.
Virtual Nodes
In order to ensure a more evenly distributed load, we can introduce the idea of a virtual node, sometimes also referred to as a VNode.
Instead of assigning a single position to a node, the hash range is divided into multiple smaller ranges, and each physical node is assigned several of these smaller ranges. Each sub-range acts as a virtual node on the ring, so a single physical node appears at multiple positions, which spreads its load more evenly.
Data replication
To ensure high availability and durability, consistent hashing replicates
each data item on N nodes in the system, where the value N is
equivalent to the replication factor.
The replication factor is the number of nodes that will receive the copy
of the same data. In eventually consistent systems, this is done
asynchronously.
Advantages
Let's look at some advantages of consistent hashing:
Makes rapid scaling up and down more predictable.
Facilitates partitioning and replication across nodes.
Enables scalability and availability.
Reduces hotspots.
Disadvantages
Below are some disadvantages of consistent hashing:
Increases complexity.
Cascading failures.
Load distribution can still be uneven.
Key management can be expensive when nodes transiently fail.
Examples
Let's look at some examples where consistent hashing is used:
Data partitioning in Apache Cassandra.
Load distribution across multiple storage hosts in Amazon
DynamoDB.
Database Federation
Federation (or functional partitioning) splits up databases by function.
The federation architecture makes several distinct physical databases
appear as one logical database to end-users.
All of the components in a federation are tied together by one or more
federal schemas that express the commonality of data throughout the
federation. These federated schemas are used to specify the
information that can be shared by the federation components and to
provide a common basis for communication among them.
Characteristics
Let's look at some key characteristics of a federated database:
Transparency: A federated database masks the differences and implementations of the underlying data sources from users. Therefore, the users do not need to be aware of where the data is stored.
Heterogeneity: Data sources can differ in many ways. A federated
database system can handle different hardware, network
protocols, data models, etc.
Extensibility: New sources may be needed to meet the changing
needs of the business. A good federated database system needs
to make it easy to add new sources.
Advantages
Here are some advantages of federated databases:
Flexible data sharing.
Autonomy among the database components.
Access heterogeneous data in a unified way.
No tight coupling of applications with legacy databases.
Disadvantages
Below are some disadvantages of federated databases:
Adds more hardware and additional complexity.
Joining data from two databases is complex.
Dependence on autonomous data sources.
Query performance and scalability.
N-tier architecture
N-tier architecture divides an application into logical layers and
physical tiers. Layers are a way to separate responsibilities and
manage dependencies. Each layer has a specific responsibility. A
higher layer can use services in a lower layer, but not the other way
around.
3-Tier architecture
3-Tier is widely used and consists of the following different layers:
Presentation layer: Handles user interactions with the application.
Business Logic layer: Accepts the data from the presentation layer, validates it as per business logic, and passes it to the data layer.
Data Access layer: Receives the data from the business layer and
performs the necessary operation on the database.
2-Tier architecture
In this architecture, the presentation layer runs on the client and communicates directly with a data store. There is no business logic layer or intermediate layer between client and server.
Advantages
Here are some advantages of using N-tier architecture:
Can improve availability.
Better security as layers can behave like a firewall.
Separate tiers allow us to scale them as needed.
Improve maintenance as different people can manage different
tiers.
Disadvantages
Below are some disadvantages of N-tier architecture:
Models
Message brokers offer two basic message distribution patterns or
messaging styles:
Point-to-Point messaging: This is the distribution pattern utilized in message queues, with a one-to-one relationship between the message's sender and receiver.
Publish-Subscribe messaging: In this pattern, each message published to a topic is delivered to all of the topic's subscribers.
Examples
Here are some commonly used message brokers:
NATS
Apache Kafka
RabbitMQ
ActiveMQ
Message Queues
A message queue is a form of service-to-service communication that
facilitates asynchronous communication. It asynchronously receives
messages from producers and sends them to consumers.
Queues are used to effectively manage requests in large-scale
distributed systems. In small systems with minimal processing loads
and small databases, writes can be predictably fast. However, in more
complex and large systems writes can take an almost non-
deterministic amount of time.
Working
Messages are stored in the queue until they are processed and
deleted. Each message is processed only once by a single consumer.
Here's how it works:
A producer publishes a job to the queue, then notifies the user of
the job status.
A consumer picks up the job from the queue, processes it, then
signals that the job is complete.
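The flow above can be sketched in-process, with Python's thread-safe queue standing in for a real broker:

```python
import queue
import threading

jobs = queue.Queue()
results = []

def consumer():
    while True:
        job = jobs.get()
        if job is None:              # sentinel: no more work
            break
        results.append(f"processed {job}")
        jobs.task_done()             # signal that the job is complete

worker = threading.Thread(target=consumer)
worker.start()

for i in range(3):
    jobs.put(f"job-{i}")             # producer publishes jobs to the queue

jobs.put(None)
worker.join()
# results == ["processed job-0", "processed job-1", "processed job-2"]
```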
Advantages
Let's discuss some advantages of using a message queue:
Scalability: Message queues make it possible to scale precisely where we need to. When workloads peak, multiple instances of our application can all add requests to the queue without the risk of collision.
Decoupling: Message queues remove dependencies between
components and significantly simplify the implementation of
decoupled applications.
Performance: Message queues enable asynchronous
communication, which means that the endpoints that are
producing and consuming messages interact with the queue, not
each other. Producers can add requests to the queue without
waiting for them to be processed.
Reliability: Queues make our data persistent, and reduce the errors
that happen when different parts of our system go offline.
Features
At-Least-Once Delivery
Message queues may store multiple copies of messages for
redundancy and high availability, and resend messages in the event of
communication failures or errors to ensure they are delivered at least
once.
Exactly-Once Delivery
When duplicates can't be tolerated, FIFO (first-in-first-out) message
queues will make sure that each message is delivered exactly once
(and only once) by filtering out duplicates automatically.
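One common way to approximate exactly-once processing on the consumer side is to de-duplicate by message ID. A sketch (the message format is made up):

```python
processed_ids = set()
output = []

def handle(message: dict) -> None:
    # An at-least-once queue may redeliver; drop anything we've seen before.
    if message["id"] in processed_ids:
        return
    processed_ids.add(message["id"])
    output.append(message["body"])

for msg in [{"id": 1, "body": "a"},
            {"id": 2, "body": "b"},
            {"id": 1, "body": "a"}]:   # redelivered duplicate
    handle(msg)

# output == ["a", "b"]  -- the duplicate was filtered out
```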
Dead-letter Queues
A dead-letter queue is a queue to which other queues can send
messages that can't be processed successfully. This makes it easy to
set them aside for further inspection without blocking the queue
processing or spending CPU cycles on a message that might never be
consumed successfully.
Ordering
Most message queues provide best-effort ordering which ensures that
messages are generally delivered in the same order as they're sent and
that a message is delivered at least once.
Poison-pill Messages
Poison pills are special messages that can be received, but not
processed. They are a mechanism used in order to signal a consumer
to end its work so it is no longer waiting for new inputs, and are similar
to closing a socket in a client/server model.
Security
Message queues authenticate applications that try to access the queue, which allows us to encrypt messages over the network as well as in the queue itself.
Task Queues
Task queues receive tasks and their related data, run them, then
deliver their results. They can support scheduling and can be used to
run computationally-intensive jobs in the background.
Backpressure
If queues start to grow significantly, the queue size can become larger
than memory, resulting in cache misses, disk reads, and even slower
performance. Backpressure can help by limiting the queue size,
thereby maintaining a high throughput rate and good response times
for jobs already in the queue. Once the queue fills up, clients get a
server busy or HTTP 503 status code to try again later. Clients can
retry the request at a later time, perhaps with an exponential backoff strategy.
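A sketch of backpressure with a bounded queue; the status codes mirror the "server busy" behavior described above:

```python
import queue

# When the bounded queue is full, the producer gets an immediate error
# (analogous to HTTP 503) instead of letting the queue grow without limit.
work = queue.Queue(maxsize=2)

def submit(job) -> int:
    try:
        work.put_nowait(job)
        return 202                  # accepted
    except queue.Full:
        return 503                  # server busy: client should retry later

codes = [submit(f"job-{i}") for i in range(4)]
# codes == [202, 202, 503, 503]
```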
Examples
Following are some widely used message queues:
Amazon SQS
RabbitMQ
ActiveMQ
ZeroMQ
Publish-Subscribe
Similar to a message queue, publish-subscribe is also a form of
service-to-service communication that facilitates asynchronous
communication. In a pub/sub model, any message published to a topic
is pushed immediately to all the subscribers of the topic.
Working
Unlike message queues, which batch messages until they are retrieved,
message topics transfer messages with little or no queuing and push
them out immediately to all subscribers. Here's how it works:
A message topic provides a lightweight mechanism to broadcast
asynchronous event notifications and endpoints that allow
software components to connect to the topic in order to send and
receive those messages.
To broadcast a message, a component called a publisher simply
pushes a message to the topic.
All components that subscribe to the topic (known as subscribers)
will receive every message that was broadcasted.
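A minimal in-process sketch of a topic that pushes each published message to every subscriber immediately:

```python
class Topic:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback) -> None:
        # Register a callback to be invoked for every future message.
        self._subscribers.append(callback)

    def publish(self, message) -> None:
        # Push the message to all subscribers right away (no queuing).
        for callback in self._subscribers:
            callback(message)

topic = Topic()
received_a, received_b = [], []
topic.subscribe(received_a.append)
topic.subscribe(received_b.append)
topic.publish("order-created")
# both received_a and received_b now contain "order-created"
```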
Advantages
Let's discuss some advantages of using publish-subscribe:
Eliminate Polling: Message topics allow instantaneous, push-
based delivery, eliminating the need for message consumers to
periodically check or "poll" for new information and updates. This
promotes faster response time and reduces the delivery latency
which can be particularly problematic in systems where delays
cannot be tolerated.
Features
Now, let's discuss some desired features of publish-subscribe:
Push Delivery
Pub/Sub messaging instantly pushes asynchronous event notifications
when messages are published to the message topic. Subscribers are
notified when a message is available.
Fanout
This scenario happens when a message is sent to a topic and then
replicated and pushed to multiple endpoints. Fanout provides
asynchronous event notifications which in turn allows for parallel
processing.
Filtering
This feature empowers the subscriber to create a message filtering
policy so that it will only get the notifications it is interested in, as
opposed to receiving every single message posted to the topic.
Durability
Pub/Sub messaging services often provide very high durability, and at
least once delivery, by storing copies of the same message on multiple
servers.
Security
Message topics authenticate applications that try to publish content, which allows us to use encrypted endpoints and encrypt messages in transit over the network.
Examples
Here are some technologies commonly used for publish-subscribe:
Amazon SNS
Google Pub/Sub
Enterprise Service Bus (ESB)
Advantages
In theory, a centralized ESB offers the potential to standardize and
dramatically simplify communication, messaging, and integration
between services across the enterprise. Here are some advantages of
using an ESB:
Improved developer productivity: Enables developers to
incorporate new technologies into one part of an application
without touching the rest of the application.
Simpler, more cost-effective scalability: Components can be
scaled independently of others.
Greater resilience: Failure of one component does not impact the
others, and each microservice can adhere to its own availability
requirements without risking the availability of other components in
the system.
Disadvantages
While ESBs were deployed successfully in many organizations, in many
other organizations the ESB came to be seen as a bottleneck. Here are
some disadvantages of using an ESB:
Examples
Below are some widely used Enterprise Service Bus (ESB)
technologies:
Azure Service Bus
IBM App Connect
Apache Camel
Fuse ESB
Monoliths and Microservices
Monoliths
A monolith is a self-contained and independent application. It is built as
a single unit and is responsible for not just a particular task, but can
perform every step needed to satisfy a business need.
Advantages
Following are some advantages of monoliths:
Simple to develop or debug.
Fast and reliable communication.
Easy monitoring and testing.
Supports ACID transactions.
Disadvantages
Some common disadvantages of monoliths are:
Maintenance becomes hard as the codebase grows.
Tightly coupled application, hard to extend.
Requires commitment to a particular technology stack.
On each update, the entire application is redeployed.
Reduced reliability as a single bug can bring down the entire
system.
Difficult to scale or adopt new technologies.
Modular monoliths
A Modular Monolith is an approach where we build and deploy a single
application (that's the Monolith part), but we build it in a way that
breaks up the code into independent modules for each of the features
needed in our application.
This approach reduces the dependencies of a module in such a way that we can enhance or change a module without affecting other modules. When done right, this can be really beneficial in the long term as it reduces the complexity that comes with maintaining a monolith as the system grows.
Microservices
A microservices architecture consists of a collection of small,
autonomous services where each service is self-contained and should
implement a single business capability within a bounded context. A
bounded context is a natural division of business logic that provides an
explicit boundary within which a domain model exists.
Characteristics
The microservices architecture style has the following characteristics:
Loosely coupled: Services should be loosely coupled so that they
can be independently deployed and scaled. This will lead to the
decentralization of development teams and thus, enabling them to
develop and deploy faster with minimal constraints and operational
dependencies.
Small but focused: It's about scope and responsibilities, not size; a service should be focused on a specific problem. Basically, "it does one thing and does it well". Ideally, they can be independent of the underlying architecture.
Built for businesses: The microservices architecture is usually
organized around business capabilities and priorities.
Resilience & Fault tolerance: Services should be designed in such
a way that they still function in case of failure or errors. In
environments with independently deployable services, failure
tolerance is of the highest importance.
Highly maintainable: Services should be easy to maintain and test because services that cannot be maintained will be re-written.
Advantages
Here are some advantages of microservices architecture:
Loosely coupled services.
Services can be deployed independently.
Highly agile for multiple development teams.
Improves fault tolerance and data isolation.
Better scalability as each service can be scaled independently.
Eliminates any long-term commitment to a particular technology
stack.
Disadvantages
Microservices architecture brings its own set of challenges:
Complexity of a distributed system.
Best practices
Let's discuss some microservices best practices:
Model services around the business domain.
Services should have loose coupling and high functional cohesion.
Isolate failures and use resiliency strategies to prevent failures
within a service from cascading.
Services should only communicate through well-designed APIs.
Avoid leaking implementation details.
Data storage should be private to the service that owns the data.
Avoid coupling between services. Causes of coupling include
shared database schemas and rigid communication protocols.
Decentralize everything. Individual teams are responsible for
designing and building services. Avoid sharing code or data
schemas.
Fail fast by using a circuit breaker to achieve fault tolerance.
Ensure that the API changes are backward compatible.
Pitfalls
Below are some common pitfalls of microservices architecture:
So, you might be wondering, monoliths seem like a bad idea to begin
with, why would anyone use that?
Well, it depends. While each approach has its own advantages and disadvantages, it is advised to start with a monolith when building a new system. It is important to understand that microservices are not a silver bullet; instead, they solve an organizational problem.
Microservices architecture is about your organizational priorities and
team as much as it's about technology.
Event-Driven Architecture (EDA)
What is an event?
An event is a data point that represents state changes in a system. It
doesn't specify what should happen and how the change should
modify the system, it only notifies the system of a particular state
change. When a user makes an action, they trigger an event.
Components
Event-driven architectures have three key components:
Event producers: Publishes an event to the router.
Event routers: Filters and pushes the events to consumers.
Event consumers: Uses events to reflect changes in the system.
Patterns
There are several ways to implement the event-driven architecture,
and which method we use depends on the use case but here are some
common examples:
Sagas
Publish-Subscribe
Event Sourcing
Command and Query Responsibility Segregation (CQRS)
Note: Each of these methods is discussed separately.
Advantages
Let's discuss some advantages:
Decoupled producers and consumers.
Highly scalable and distributed.
Easy to add new consumers.
Improves agility.
Challenges
Here are some challenges of event-driven architecture:
Guaranteed delivery.
Error handling is difficult.
Event-driven systems are complex in general.
Exactly once, in-order processing of events.
Use cases
Below are some common use cases where event-driven architectures
are beneficial:
Examples
Here are some widely used technologies for implementing event-driven
architectures:
NATS
Apache Kafka
Amazon EventBridge
Amazon SNS
Google PubSub
Event Sourcing
Instead of storing just the current state of the data in a domain, use an
append-only store to record the full series of actions taken on that
data. The store acts as the system of record and can be used to
materialize the domain objects.
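A sketch of the idea using a bank account as a hypothetical domain: events are the system of record, and the balance is materialized by replaying them:

```python
events = []                              # append-only event store

def append(event_type: str, amount: int) -> None:
    # Record what happened; never mutate or delete past events.
    events.append({"type": event_type, "amount": amount})

def materialize_balance() -> int:
    # Rebuild the current state by replaying the full series of events.
    balance = 0
    for e in events:
        if e["type"] == "Deposited":
            balance += e["amount"]
        elif e["type"] == "Withdrawn":
            balance -= e["amount"]
    return balance

append("Deposited", 100)
append("Withdrawn", 30)
append("Deposited", 5)
materialize_balance()    # 75, derived entirely from the event log
```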
Advantages
Let's discuss some advantages of using event sourcing:
Excellent for real-time data reporting.
Great for fail-safety, data can be reconstituted from the event
store.
Extremely flexible, any type of message can be stored.
Preferred way of achieving audit logs functionality for high
compliance systems.
Disadvantages
Following are the disadvantages of event sourcing:
Requires an extremely efficient network infrastructure.
Command and Query Responsibility Segregation (CQRS)
Advantages
Let's discuss some advantages of CQRS:
Allows independent scaling of read and write workloads.
Easier scaling, optimizations, and architectural changes.
Closer to business logic with loose coupling.
The application can avoid complex joins when querying.
Clear boundaries between the system behavior.
Disadvantages
Below are some disadvantages of CQRS:
More complex application design.
Message failures or duplicate messages can occur.
Dealing with eventual consistency is a challenge.
Increased system maintenance efforts.
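A minimal sketch of the separation: commands go through a write model, while queries read from a separately maintained read model (the order data is made up, and the two models are kept in sync synchronously here for simplicity):

```python
orders = {}                     # write model, updated by commands
orders_by_status = {}           # read model: a denormalized view for queries

def handle_create_order(order_id: str, status: str) -> None:
    # Command path: mutate the write model...
    orders[order_id] = {"status": status}
    # ...then project the change into the read model.
    orders_by_status.setdefault(status, []).append(order_id)

def query_orders(status: str):
    # Query path: read the pre-built view; no joins needed.
    return orders_by_status.get(status, [])

handle_create_order("o1", "pending")
handle_create_order("o2", "shipped")
query_orders("pending")   # ["o1"]
```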
Use cases
Here are some scenarios where CQRS will be helpful:
The performance of data reads must be fine-tuned separately from
the performance of data writes.
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 120/251
4/1/24, 10:01 AM System Design: The complete course
Features
Below are some desired features of an API Gateway:
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 121/251
4/1/24, 10:01 AM System Design: The complete course
Advantages
Let's look at some advantages of using an API Gateway:
Encapsulates the internal structure of an API.
Provides a centralized view of the API.
Simplifies the client code.
Monitoring, analytics, tracing, and other such features.
Disadvantages
Here are some possible disadvantages of an API Gateway:
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 122/251
4/1/24, 10:01 AM System Design: The complete course
The primary function of the backend for the frontend pattern is to get
the required data from the appropriate service, format the data, and
sent it to the frontend.
GraphQL performs really well as a backend for frontend (BFF).
Examples
Following are some widely used gateways technologies:
Amazon API Gateway
Apigee API Gateway
Azure API Gateway
Kong API Gateway
REST, GraphQL, gRPC
A good API design is always a crucial part of any system. But it is also
important to pick the right API technology. So, in this tutorial, we will
briefly discuss different API technologies such as REST, GraphQL, and
gRPC.
What's an API?
Before we even get into API technologies, let's first understand what is
an API.
An API is a set of definitions and protocols for building and integrating
application software. It's sometimes referred to as a contract between
an information provider and an information user establishing the
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 124/251
4/1/24, 10:01 AM System Design: The complete course
content required from the producer and the content required by the
consumer.
In other words, if you want to interact with a computer or system to
retrieve information or perform a function, an API helps you
communicate what you want to that system so it can understand and
complete the request.
REST
A REST API (also known as RESTful API) is an application programming
interface that conforms to the constraints of REST architectural style
and allows for interaction with RESTful web services. REST stands for
Representational State Transfer and it was first introduced by Roy
Fielding in the year 2000.
In REST API, the fundamental unit is a resource.
Concepts
Let's discuss some concepts of a RESTful API.
Constraints
In order for an API to be considered RESTful, it has to conform to these
architectural constraints:
Uniform Interface: There should be a uniform way of interacting
with a given server.
Client-Server: A client-server architecture managed through
HTTP.
Stateless: No client context shall be stored on the server between
requests.
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 125/251
4/1/24, 10:01 AM System Design: The complete course
response body.
POST: Submits an entity to the specified resource, often causing a
change in state or side effects on the server.
PUT: Replaces all current representations of the target resource
with the request payload.
DELETE: Deletes the specified resource.
PATCH: Applies partial modifications to a resource.
HTTP response codes
HTTP response status codes indicate whether a specific HTTP request
has been successfully completed.
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 126/251
4/1/24, 10:01 AM System Design: The complete course
Advantages
Let's discuss some advantages of REST API:
Simple and easy to understand.
Flexible and portable.
Good caching support.
Client and server are decoupled.
Disadvantages
Let's discuss some disadvantages of REST API:
Over-fetching of data.
Sometimes multiple round trips to the server are required.
Use cases
REST APIs are pretty much used universally and are the default
standard for designing APIs. Overall REST APIs are quite flexible and
can fit almost all scenarios.
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 127/251
4/1/24, 10:01 AM System Design: The complete course
Example
Here's an example usage of a REST API that operates on a users
resource.
GraphQL
GraphQL is a query language and server-side runtime for APIs that
prioritizes giving clients exactly the data they request and no more. It
was developed by Facebook and later open-sourced in 2015.
GraphQL is designed to make APIs fast, flexible, and developer-
friendly. Additionally, GraphQL gives API maintainers the flexibility to
add or deprecate fields without impacting existing queries. Developers
can build APIs with whatever methods they prefer, and the GraphQL
specification will ensure they function in predictable ways to clients.
In GraphQL, the fundamental unit is a query.
Concepts
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 128/251
4/1/24, 10:01 AM System Design: The complete course
Advantages
Let's discuss some advantages of GraphQL:
Eliminates over-fetching of data.
Strongly defined schema.
Code generation support.
Payload optimization.
Disadvantages
Let's discuss some disadvantages of GraphQL:
Shifts complexity to server-side.
Caching becomes hard.
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 129/251
4/1/24, 10:01 AM System Design: The complete course
Versioning is ambiguous.
N+1 problem.
Use cases
GraphQL proves to be essential in the following scenarios:
Reducing app bandwidth usage as we can query multiple
resources in a single query.
Rapid prototyping for complex systems.
When we are working with a graph-like data model.
Example
Here's a GraphQL schema that defines a User type and a Query type.
COPY
type Query {
getUser: User
}
type User {
id: ID
name: String
city: String
state: String
}
Using the above schema, the client can request the required fields
easily without having to fetch the entire resource or guess what the API
might return.
COPY
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 130/251
4/1/24, 10:01 AM System Design: The complete course
{
getUser {
id
name
city
}
}
{
"getUser": {
"id": 123,
"name": "Karan",
"city": "San Francisco"
}
}
gRPC
gRPC is a modern open-source high-performance Remote Procedure
Call (RPC) framework that can run in any environment. It can efficiently
connect services in and across data centers with pluggable support for
load balancing, tracing, health checking, authentication and much
more.
Concepts
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 131/251
4/1/24, 10:01 AM System Design: The complete course
Advantages
Let's discuss some disadvantages of gRPC:
Lightweight and efficient.
High performance.
Built-in code generation support.
Bi-directional streaming.
Disadvantages
Let's discuss some disadvantages of gRPC:
Relatively new compared to REST and GraphQL.
Limited browser support.
Steeper learning curve.
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 132/251
4/1/24, 10:01 AM System Design: The complete course
Use cases
Below are some good use cases for gRPC:
Real-time communication via bi-directional streaming.
Efficient inter-service communication in microservices.
Low latency and high throughput communication.
Polyglot environments.
Example
Here's a basic example of a gRPC service defined in a *.proto file.
Using this definition, we can easily code generate the HelloService
service in the programming language of our choice.
COPY
service HelloService {
rpc SayHello (HelloRequest) returns (HelloResponse);
}
message HelloRequest {
string greeting = 1;
}
message HelloResponse {
string reply = 1;
}
Now that we know how these API designing techniques work, let's
compare them based on the following parameters:
Will it cause tight coupling?
How chatty (distinct API calls to get needed information) are the
APIs?
What's the performance like?
How complex is it to integrate?
How well does the caching work?
Built-in tooling and code generation?
What's API discoverability like?
How easy is it to version APIs?
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 134/251
4/1/24, 10:01 AM System Design: The complete course
Long polling
HTTP Long polling is a technique used to push information to a client
as soon as possible from the server. As a result, the server does not
have to wait for the client to send a request.
In Long polling, the server does not close the connection once it
receives a request from the client. Instead, the server responds only if
any new message is available or a timeout threshold is reached.
Working
Let's understand how long polling works:
1. The client makes an initial request and waits for a response.
2. The server receives the request and delays sending anything until
an update is available.
3. Once an update is available, the response is sent to the client.
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 135/251
4/1/24, 10:01 AM System Design: The complete course
Advantages
Here are some advantages of long polling:
Easy to implement, good for small-scale projects.
Nearly universally supported.
Disadvantages
A major downside of long polling is that it is usually not scalable. Below
are some of the other reasons:
Creates a new connection each time, which can be intensive on
the server.
Reliable message ordering can be an issue for multiple requests.
Increased latency as the server needs to wait for a new request.
WebSockets
WebSocket provides full-duplex communication channels over a single
TCP connection. It is a persistent connection between a client and a
server that both parties can use to start sending data at any time.
The client establishes a WebSocket connection through a process
known as the WebSocket handshake. If the process succeeds, then the
server and client can exchange data in both directions at any time. The
WebSocket protocol enables the communication between a client and
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 136/251
4/1/24, 10:01 AM System Design: The complete course
Working
Let's understand how WebSockets work:
1. The client initiates a WebSocket handshake process by sending a
request.
2. The request also contains an HTTP Upgrade header that allows
the request to switch to the WebSocket protocol ( ws:// ).
3. The server sends a response to the client, acknowledging the
WebSocket handshake request.
4. A WebSocket connection will be opened once the client receives a
successful handshake response.
5. Now the client and server can start sending data in both directions
allowing real-time communication.
6. The connection is closed once the server or the client decides to
close the connection.
Advantages
Below are some advantages of WebSockets:
Full-duplex asynchronous messaging.
Better origin-based security model.
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 137/251
4/1/24, 10:01 AM System Design: The complete course
Disadvantages
Let's discuss some disadvantages of WebSockets:
Terminated connections aren't automatically recovered.
Older browsers don't support WebSockets (becoming less
relevant).
Working
Let's understand how server-sent events work:
1. The client makes a request to the server.
2. The connection between client and server is established and it
remains open.
3. The server sends responses or events to the client when new data
is available.
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 138/251
4/1/24, 10:01 AM System Design: The complete course
Advantages
Simple to implement and use for both client and server.
Supported by most browsers.
No trouble with firewalls.
Disadvantages
Unidirectional nature can be limiting.
Limitation for the maximum number of open connections.
Does not support binary data.
Geohashing and Quadtrees
Geohashing
Geohashing is a geocoding method used to encode geographic
coordinates such as latitude and longitude into short alphanumeric
strings. It was created by Gustavo Niemeyer in 2008.
For example, San Francisco with coordinates 37.7564, -122.4016 can be
represented in geohash as 9q8yy9mf .
9q8yy9 .
Use cases
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 140/251
4/1/24, 10:01 AM System Design: The complete course
Examples
Geohashing is widely used and it is supported by popular databases.
MySQL
Redis
Amazon DynamoDB
Google Cloud Firestore
Quadtrees
A quadtree is a tree data structure in which each internal node has
exactly four children. They are often used to partition a two-
dimensional space by recursively subdividing it into four quadrants or
regions. Each child or leaf node stores spatial information. Quadtrees
are the two-dimensional analog of Octrees which are used to partition
three-dimensional space.
Types of Quadtrees
Quadtrees may be classified according to the type of data they
represent, including areas, points, lines, and curves. The following are
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 141/251
4/1/24, 10:01 AM System Design: The complete course
Use cases
Below are some common uses of quadtrees:
Image representation, processing, and compression.
Spacial indexing and range queries.
Location-based services like Google Maps, Uber, etc.
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 142/251
4/1/24, 10:01 AM System Design: The complete course
The basic idea behind the circuit breaker is very simple. We wrap a
protected function call in a circuit breaker object, which monitors for
failures. Once the failures reach a certain threshold, the circuit breaker
trips, and all further calls to the circuit breaker return with an error,
without the protected call being made at all. Usually, we'll also want
some kind of monitor alert if the circuit breaker trips.
States
Let's discuss circuit breaker states:
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 143/251
4/1/24, 10:01 AM System Design: The complete course
Closed
When everything is normal, the circuit breakers remain closed, and all
the request passes through to the services as normal. If the number of
failures increases beyond the threshold, the circuit breaker trips and
goes into an open state.
Open
In this state circuit breaker returns an error immediately without even
invoking the services. The Circuit breakers move into the half-open
state after a certain timeout period elapses. Usually, it will have a
monitoring system where the timeout will be specified.
Half-open
In this state, the circuit breaker allows a limited number of requests
from the service to pass through and invoke the operation. If the
requests are successful, then the circuit breaker will go to the closed
state. However, if the requests continue to fail, then it goes back to the
open state.
Rate Limiting
Rate limiting refers to preventing the frequency of an operation from
exceeding a defined limit. In large-scale systems, rate limiting is
commonly used to protect underlying services and resources. Rate
limiting is generally used as a defensive mechanism in distributed
systems, so that shared resources can maintain availability. It also
protects our APIs from unintended or malicious overuse by limiting the
number of requests that can reach our API in a given period of time.
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 144/251
4/1/24, 10:01 AM System Design: The complete course
Algorithms
There are various algorithms for API rate limiting, each with its
advantages and disadvantages. Let's briefly discuss some of these
algorithms:
Leaky Bucket
Leaky Bucket is an algorithm that provides a simple, intuitive approach
to rate limiting via a queue. When registering a request, the system
appends it to the end of the queue. Processing for the first item on the
queue occurs at a regular interval or first-in, first-out (FIFO). If the
queue is full, then additional requests are discarded (or leaked).
Token Bucket
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 145/251
4/1/24, 10:01 AM System Design: The complete course
Fixed Window
The system uses a window size of n seconds to track the fixed
window algorithm rate. Each incoming request increments the counter
for the window. It discards the request if the counter exceeds a
threshold.
Sliding Log
Sliding Log rate-limiting involves tracking a time-stamped log for each
request. The system stores these logs in a time-sorted hash set or
table. It also discards logs with timestamps beyond a threshold. When
a new request comes in, we calculate the sum of logs to determine the
request rate. If the request would exceed the threshold rate, then it is
held.
Sliding Window
Sliding Window is a hybrid approach that combines the fixed window
algorithm's low processing cost and the sliding log's improved
boundary conditions. Like the fixed window algorithm, we track a
counter for each fixed window. Next, we account for a weighted value
of the previous window's request rate based on the current timestamp
to smooth out bursts of traffic.
Inconsistencies
When using a cluster of multiple nodes, we might need to enforce a
global rate limit policy. Because if each node were to track its rate limit,
a consumer could exceed a global rate limit when sending requests to
different nodes. The greater the number of nodes, the more likely the
user will exceed the global limit.
The simplest way to solve this problem is to use sticky sessions in our
load balancers so that each consumer gets sent to exactly one node
but this causes a lack of fault tolerance and scaling problems. Another
approach might be to use a centralized data store like Redis but this
will increase latency and cause race conditions.
Race Conditions
This issue happens when we use a naive "get-then-set" approach, in
which we retrieve the current rate limit counter, increment it, and then
push it back to the datastore. This model's problem is that additional
requests can come through in the time it takes to perform a full cycle of
read-increment-store, each attempting to store the increment counter
with an invalid (lower) counter value. This allows a consumer to send a
very large number of requests to bypass the rate limiting controls.
One way to avoid this problem is to use some sort of distributed
locking mechanism around the key, preventing any other processes
from accessing or writing to the counter. Though the lock will become
a significant bottleneck and will not scale well. A better approach might
be to use a "set-then-get" approach, allowing us to quickly increment
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 147/251
4/1/24, 10:01 AM System Design: The complete course
and check counter values without letting the atomic operations get in
the way.
Service Discovery
Service discovery is the detection of services within a computer
network. Service Discovery Protocol (SDP) is a networking standard
that accomplishes the detection of networks by identifying resources.
Implementations
There are two main service discovery patterns:
Client-side discovery
Server-side discovery
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 148/251
4/1/24, 10:01 AM System Design: The complete course
Service Registry
A service registry is basically a database containing the network
locations of service instances to which the clients can reach out. A
Service Registry must be highly available and up-to-date.
Service Registration
We also need a way to obtain service information, often known as
service registration. Let's look at two possible service registration
approaches:
Self-Registration
When using the self-registration model, a service instance is
responsible for registering and de-registering itself in the Service
Registry. In addition, if necessary, a service instance sends heartbeat
requests to keep its registration alive.
Third-party Registration
The registry keeps track of changes to running instances by polling the
deployment environment or subscribing to events. When it detects a
newly available service instance, it records it in its database. The
Service Registry also de-registers terminated service instances.
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 149/251
4/1/24, 10:01 AM System Design: The complete course
Service mesh
Service-to-service communication is essential in a distributed
application but routing this communication, both within and across
application clusters, becomes increasingly complex as the number of
services grows. Service mesh enables managed, observable, and
secure communication between individual services. It works with a
service discovery protocol to detect services. Istio and envoy are some
of the most commonly used service mesh technologies.
Examples
Here are some commonly used service discovery infrastructure tools:
etcd
Consul
Apache Thrift
Apache Zookeeper
SLA, SLO, SLI
Let's briefly discuss SLA, SLO, and SLI. These are mostly related to the
business and site reliability side of things but good to know
nonetheless.
SLA
An SLA, or Service Level Agreement, is an agreement made between a
company and its users of a given service. The SLA defines the different
promises that the company makes to users regarding specific metrics,
such as service availability.
SLAs are often written by a company's business or legal team.
SLO
An SLO, or Service Level Objective, is the promise that a company
makes to users regarding a specific metric such as incident response
or uptime. SLOs exist within an SLA as individual promises contained
within the full user agreement. The SLO is the specific goal that the
service must meet in order to comply with the SLA. SLOs should
always be simple, clearly defined, and easily measured to determine
whether or not the objective is being fulfilled.
SLI
An SLI, or Service Level Indicator, is a key metric used to determine
whether or not the SLO is being met. It is the measured value of the
metric described within the SLO. In order to remain in compliance with
the SLA, the SLI's value must always meet or exceed the value
determined by the SLO.
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 151/251
4/1/24, 10:01 AM System Design: The complete course
Disaster recovery
Disaster recovery (DR) is a process of regaining access and
functionality of the infrastructure after events like a natural disaster,
cyber attack, or even business disruptions.
Disaster recovery relies upon the replication of data and computer
processing in an off-premises location not affected by the disaster.
When servers go down because of a disaster, a business needs to
recover lost data from a second location where the data is backed up.
Ideally, an organization can transfer its computer processing to that
remote location as well in order to continue operations.
Disaster Recovery is often not actively discussed during system design
interviews but it's important to have some basic understanding of this
topic. You can learn more about disaster recovery from AWS Well-
Architected Framework.
Terms
Let's discuss some important terms relevantly for disaster recovery:
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 152/251
4/1/24, 10:01 AM System Design: The complete course
RTO
Recovery Time Objective (RTO) is the maximum acceptable delay
between the interruption of service and restoration of service. This
determines what is considered an acceptable time window when
service is unavailable.
RPO
Recovery Point Objective (RPO) is the maximum acceptable amount of
time since the last data recovery point. This determines what is
considered an acceptable loss of data between the last recovery point
and the interruption of service.
Strategies
A variety of disaster recovery (DR) strategies can be part of a disaster
recovery plan.
Back-up
This is the simplest type of disaster recovery and involves storing data
off-site or on a removable drive.
Cold Site
In this type of disaster recovery, an organization sets up basic
infrastructure in a second site.
Hot site
A hot site maintains up-to-date copies of data at all times. Hot sites are
time-consuming to set up and more expensive than cold sites, but they
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 153/251
4/1/24, 10:01 AM System Design: The complete course
What is a Hypervisor?
A Hypervisor sometimes called a Virtual Machine Monitor (VMM),
isolates the operating system and resources from the virtual machines
and enables the creation and management of those VMs. The
hypervisor treats resources like CPU, memory, and storage as a pool of
resources that can be easily reallocated between existing guests or
new virtual machines.
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 154/251
4/1/24, 10:01 AM System Design: The complete course
Containers
A container is a standard unit of software that packages up code and
all its dependencies such as specific versions of runtimes and libraries
so that the application runs quickly and reliably from one computing
environment to another. Containers offer a logical packaging
mechanism in which applications can be abstracted from the
environment in which they actually run. This decoupling allows
container-based applications to be deployed easily and consistently,
regardless of the target environment.
Virtualization vs Containerization
Containers also share the OS kernel and use a fraction of the memory
VMs require.
OAuth 2.0 and OpenID Connect
(OIDC)
OAuth 2.0
OAuth 2.0, which stands for Open Authorization, is a standard
designed to provide consented access to resources on behalf of the
user, without ever sharing the user's credentials. OAuth 2.0 is an
authorization protocol and not an authentication protocol, it is
designed primarily as a means of granting access to a set of resources,
for example, remote APIs or user's data.
Concepts
The OAuth 2.0 protocol defines the following entities:
Resource Owner: The user or system that owns the protected
resources and can grant access to them.
Client: The client is the system that requires access to the
protected resources.
Authorization Server: This server receives requests from the
Client for Access Tokens and issues them upon successful
authentication and consent by the Resource Owner.
Resource Server: A server that protects the user's resources and
receives access requests from the Client. It accepts and validates
an Access Token from the Client and returns the appropriate
resources.
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 157/251
4/1/24, 10:01 AM System Design: The complete course
Scopes: They are used to specify exactly the reason for which
access to resources may be granted. Acceptable scope values,
and which resources they relate to, are dependent on the
Resource Server.
Access Token: A piece of data that represents the authorization to
access resources on behalf of the end-user.
Disadvantages
Here are the most common disadvantages of OAuth 2.0:
Lacks built-in security features.
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 158/251
4/1/24, 10:01 AM System Design: The complete course
No standard implementation.
No common set of scopes.
OpenID Connect
OAuth 2.0 is designed only for authorization, for granting access to
data and features from one application to another. OpenID Connect
(OIDC) is a thin layer that sits on top of OAuth 2.0 that adds login and
profile information about the person who is logged in.
When an Authorization Server supports OIDC, it is sometimes called an
identity provider (IdP), since it provides information about the Resource
Owner back to the Client. OpenID Connect is relatively new, resulting in
lower adoption and industry implementation of best practices
compared to OAuth.
Concepts
The OpenID Connect (OIDC) protocol defines the following entities:
Relying Party: The current application.
OpenID Provider: This is essentially an intermediate service that
provides a one-time code to the Relying Party.
Token Endpoint: A web server that accepts the One-Time Code
(OTC) and provides an access code that's valid for an hour. The
main difference between OIDC and OAuth 2.0 is that the token is
provided using JSON Web Token (JWT).
UserInfo Endpoint: The Relying Party communicates with this
endpoint, providing a secure token and receiving information about
the end-user
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 159/251
4/1/24, 10:01 AM System Design: The complete course
Both OAuth 2.0 and OIDC are easy to implement and are JSON based,
which is supported by most web and mobile applications. However, the
OpenID Connect (OIDC) specification is more strict than that of basic
OAuth.
Single Sign-On (SSO)
Single Sign-On (SSO) is an authentication process in which a user is
provided access to multiple applications or websites by using only a
single set of login credentials. This prevents the need for the user to
log separately into the different applications.
The user credentials and other identifying information are stored and
managed by a centralized system called Identity Provider (IdP). The
Identity Provider is a trusted system that provides access to other
websites and applications.
Single Sign-On (SSO) based authentication systems are commonly
used in enterprise environments where employees require access to
multiple applications of their organizations.
Components
Let's discuss some key components of Single Sign-On (SSO).
Service Provider
A service provider provides services to the end-user. They rely on
identity providers to assert the identity of a user, and typically certain
attributes about the user are managed by the identity provider. Service
providers may also maintain a local account for the user along with
attributes that are unique to their service.
Identity Broker
An identity broker acts as an intermediary that connects multiple
service providers with various different identity providers. Using
Identity Broker, we can perform single sign-on over any application
without the hassle of the protocol it follows.
SAML
Security Assertion Markup Language is an open standard that allows
clients to share security information about identity, authentication, and
permission across different systems. SAML is implemented with the
Extensible Markup Language (XML) standard for sharing data.
SAML specifically enables identity federation, making it possible for
identity providers (IdPs) to seamlessly and securely pass authenticated
identities and their attributes to service providers.
Advantages
Following are the benefits of using Single Sign-On:
Ease of use as users only need to remember one set of credentials.
Ease of access without having to go through a lengthy
authorization process.
Enforced security and compliance to protect sensitive data.
Simplifying the management with reduced IT support cost and
admin time.
Disadvantages
Here are some disadvantages of Single Sign-On:
Single Password Vulnerability, if the main SSO password gets
compromised, all the supported applications get compromised.
The authentication process using Single Sign-On is slower than
traditional authentication as every application has to request the
SSO provider for verification.
https://kps.hashnode.dev/system-design-the-complete-course#heading-url-shortener 163/251
4/1/24, 10:01 AM System Design: The complete course
Examples
These are some commonly used Identity Providers (IdP):
Okta
Google
Auth0
OneLogin
SSL, TLS, mTLS
Let's briefly discuss some important communication security protocols
such as SSL, TLS, and mTLS. I would say that from a "big picture"
system design perspective, this topic is not very important but still
good to know about.
SSL
SSL stands for Secure Sockets Layer, and it refers to a protocol for
encrypting and securing communications that take place on the
internet. It was first developed in 1995 but has since been deprecated in favor of TLS (Transport Layer Security).
TLS
Transport Layer Security, or TLS, is a widely adopted security protocol
designed to facilitate privacy and data security for communications
over the internet. TLS evolved from a previous encryption protocol
called Secure Sockets Layer (SSL). A primary use case of TLS is
encrypting the communication between web applications and servers.
There are three main components to what the TLS protocol
accomplishes:
Encryption: hides the data being transferred from third parties.
Authentication: ensures that the parties exchanging information
are who they claim to be.
Integrity: verifies that the data has not been forged or tampered
with.
mTLS
Mutual TLS, or mTLS, is a method for mutual authentication. mTLS
ensures that the parties at each end of a network connection are who
they claim to be by verifying that they both have the correct private
key. The information within their respective TLS certificates provides
additional verification.
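As a small illustration of these guarantees, here is how a client-side context can be set up with Python's `ssl` module; the mTLS lines and certificate paths are hypothetical placeholders, not something the course prescribes:

```python
import ssl

# One-way TLS: the client verifies the server's certificate chain and
# hostname; traffic is encrypted and integrity-protected.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)

# The secure defaults cover the guarantees listed above:
# server authentication is required and hostnames are checked.
server_auth_on = ctx.verify_mode == ssl.CERT_REQUIRED
hostname_check_on = ctx.check_hostname

# Mutual TLS (mTLS): the client also presents its own certificate so the
# server can verify it. The file paths below are hypothetical.
# ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")
```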
Requirements clarifications
System design interview questions, by nature, are vague or abstract.
Asking questions about the exact scope of the problem, and clarifying
Functional requirements
These are the requirements that the end user specifically demands as
basic functionalities that the system should offer. All these
functionalities need to be necessarily incorporated into the system as
part of the contract.
For example:
"What are the features that we need to design for this system?"
"What are the edge cases we need to consider, if any, in our
design?"
Non-functional requirements
These are the quality constraints that the system must satisfy
according to the project contract. The priority or extent to which these
factors are implemented varies from one project to another. They are
also called non-behavioral requirements. For example, portability,
maintainability, reliability, scalability, security, etc.
For example:
"Each request should be processed with the minimum latency"
"System should be highly available"
Extended requirements
These are basically "nice to have" requirements that might be out of
the scope of the system.
For example:
"Our system should record metrics and analytics"
"Service health and performance monitoring?"
API design
Next, we can start designing APIs for the system. These APIs will help
us define the expectations from the system explicitly. We don't have to
write any code, just a simple interface defining the API requirements
such as parameters, functions, classes, types, entities, etc.
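For example, the interface outline could be as simple as the following sketch; every name and parameter here is hypothetical, the point is only to make the system's expectations explicit without writing real code:

```python
from typing import Optional

# A hedged sketch of an API outline at this stage: plain signatures that
# define parameters and return types. All names here are hypothetical.
def create_user(name: str, email: str) -> dict:
    """Register a new user and return their record."""
    return {"name": name, "email": email}

def get_user(user_id: str) -> Optional[dict]:
    """Fetch a user by ID, or None if the user does not exist."""
    return None
```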
Detailed design
Now it's time to go into detail about the major components of the
system we designed. As always, discuss with the interviewer which components may need further improvement.
Here is a good opportunity to demonstrate your experience in the
areas of your expertise. Present different approaches, advantages, and
disadvantages. Explain your design decisions, and back them up with
examples. This is also a good time to discuss any additional features
the system might be able to support, though this is optional.
"How should we partition our data?"
"What about load distribution?"
"Should we use cache?"
"How will we handle a sudden spike in traffic?"
Also, try not to be too opinionated about certain technologies,
statements like "I believe that NoSQL databases are just better, SQL
databases are not scalable" reflect poorly. As someone who has
interviewed a lot of people over the years, my two cents here would be
to be humble about what you know and what you do not. Use your
existing knowledge with examples to navigate this part of the
interview.
Requirements
Our URL shortening system should meet the following requirements:
Functional requirements
Given a URL, our service should generate a shorter and unique
alias for it.
Users should be redirected to the original URL when they visit the
short link.
Links should expire after a default timespan.
Non-functional requirements
High availability with minimal latency.
The system should be scalable and efficient.
Extended requirements
Prevent abuse of services.
Record analytics and metrics for redirections.
Traffic
And with a 100:1 read/write ratio, the number of redirections will be:
100 × 40 URLs/second = 4000 requests/second
Bandwidth
Since we expect about 40 URLs every second, and if we assume each request is of size 500 bytes, then the total incoming data for the write requests would be:
40 × 500 bytes = 20 KB/second
Storage
Cache
For caching, we will follow the classic Pareto principle also known as
the 80/20 rule. This means that 80% of the requests are for 20% of the
data, so we can cache around 20% of our requests.
Since we get around 4K read or redirection requests each second, this translates into roughly 350 million requests per day:
4000 requests/second × (24 hours × 3600 seconds) = ~350 million requests/day
High-level estimate
Here is our high-level estimate:
| Type | Estimate |
| --- | --- |
| Writes (New URLs) | 40/s |
| Reads (Redirection) | 4K/s |
| Bandwidth (Incoming) | 20 KB/s |
| Bandwidth (Outgoing) | 2 MB/s |
| Storage (10 years) | 6 TB |
| Memory (Caching) | ~35 GB/day |
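These figures can be re-derived with quick back-of-the-envelope arithmetic (all inputs come from the assumptions above):

```python
writes_per_sec = 40                          # new URLs per second
reads_per_sec = writes_per_sec * 100         # 100:1 read/write ratio
request_size = 500                           # assumed bytes per request

incoming_bandwidth = writes_per_sec * request_size      # 20 KB/s
reads_per_day = reads_per_sec * 24 * 3600               # ~345.6 million
cache_per_day = 0.2 * reads_per_day * request_size      # ~35 GB (80/20 rule)
storage_10_years = writes_per_sec * request_size * 3600 * 24 * 365 * 10  # ~6 TB
```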
API design
Let us do a basic API design for our services:
Create URL
This API should create a new short URL in our system given an original
URL.
Parameters
API Key ( string ): API key provided by the user.
Original Url ( string ): Original URL to be shortened.
Expiration ( Date ): Expiration date of the new URL (optional).
Returns
Short URL ( string ): New shortened URL.
Get URL
This API should retrieve the original URL from a given short URL.
Parameters
Delete URL
This API should delete a given shortURL from our system.
Parameters
API Key ( string ): API key provided by the user.
Short Url ( string ): Short URL to be deleted.
Returns
Result ( boolean ): Represents whether the operation was successful or
not.
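Taken together, the three endpoints above can be sketched as a toy in-memory implementation; the key derivation and names are illustrative, not the course's actual code:

```python
from datetime import date
from typing import Optional

_store: dict[str, str] = {}  # short URL -> original URL (toy in-memory store)

def create_url(api_key: str, original_url: str,
               expiration: Optional[date] = None) -> str:
    """Create a short URL for the original URL and return it."""
    short = f"https://short.ly/{abs(hash(original_url)) % 62**7}"
    _store[short] = original_url
    return short

def get_url(api_key: str, short_url: str) -> str:
    """Return the original (long) URL for a given short URL."""
    return _store[short_url]

def delete_url(api_key: str, short_url: str) -> bool:
    """Delete a short URL; return whether the operation succeeded."""
    return _store.pop(short_url, None) is not None
```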
High-level design
Now let us do a high-level design of our system.
URL Encoding
Our system's primary goal is to shorten a given URL; let's look at different approaches:
Base62 Approach
In this approach, we can encode the original URL using Base62 which
consists of the capital letters A-Z, the lower case letters a-z, and the
numbers 0-9.
$$ Number \space of \space URLs = 62^N $$
Where,
N: Number of characters in the generated URL.
So, if we want to generate a URL that is 7 characters long, we will generate ~3.5 trillion different URLs.
$$ \begin{gather*} 62^5 = \sim 916 \space million \space URLs \\ 62^6 = \sim 56.8 \space billion \space URLs \\ 62^7 = \sim 3.5 \space trillion \space URLs \end{gather*} $$
This is the simplest solution here, but it does not guarantee non-
duplicate or collision-resistant keys.
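A minimal Base62 encoder, following the alphabet order given above (capital letters, then lower case, then digits), might look like this:

```python
import string

# Alphabet per the text: A-Z, then a-z, then 0-9.
ALPHABET = string.ascii_uppercase + string.ascii_lowercase + string.digits
BASE = len(ALPHABET)  # 62

def base62_encode(num: int) -> str:
    """Encode a non-negative integer into a Base62 string."""
    if num == 0:
        return ALPHABET[0]
    digits = []
    while num > 0:
        num, rem = divmod(num, BASE)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))
```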
MD5 Approach
However, this creates a new issue for us, which is duplication and
collision. We can try to re-compute the hash until we find a unique one
but that will increase the overhead of our systems. It's better to look
for more scalable approaches.
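For reference, the MD5-based derivation discussed above can be sketched as follows (taking the first 7 hex characters of the digest; illustrative only, and it shows why duplicates arise):

```python
import hashlib

def md5_key(url: str, length: int = 7) -> str:
    """Derive a short key from the MD5 digest of the URL."""
    return hashlib.md5(url.encode()).hexdigest()[:length]

# The same URL always maps to the same key, and truncating the 128-bit
# digest means different URLs can also collide on the same key.
```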
Counter Approach
In this approach, we will start with a single server which will maintain
the count of the keys generated. Once our service receives a request,
it can reach out to the counter which returns a unique number and
increments the counter. When the next request comes the counter
again returns the unique number and this goes on.
Counter(0 − 3.5 trillion) → base62encode → hash
The problem with this approach is that it can quickly become a single point of failure. And if we run multiple instances of the counter, we can have collisions since it's essentially a distributed system.
To solve this issue we can use a distributed system manager such as
Zookeeper which can provide distributed synchronization. Zookeeper
can maintain multiple ranges for our servers.
$$ \begin{align*} & Range \space 1: \space 1 \rightarrow 1,000,000 \\ & Range \space 2: \space 1,000,001 \rightarrow 2,000,000 \\ & Range \space 3: \space 2,000,001 \rightarrow 3,000,000 \\ & ... \end{align*} $$
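The range-allocation idea can be sketched like this, with an in-process counter standing in for ZooKeeper; each server leases a disjoint range, so IDs never collide:

```python
import itertools
import threading

RANGE_SIZE = 1_000_000
_next_range = itertools.count()   # stand-in for the coordinator (e.g. ZooKeeper)
_lock = threading.Lock()

class KeyServer:
    """Each server leases a disjoint range of counter values to hand out."""

    def __init__(self):
        self.current = 0
        self.end = 0

    def next_id(self) -> int:
        with _lock:
            if self.current >= self.end:
                # Lease the next unused range from the coordinator.
                start = next(_next_range) * RANGE_SIZE + 1
                self.current, self.end = start, start + RANGE_SIZE
            value = self.current
            self.current += 1
            return value
```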
While 390 GB seems like a lot for this simple use case, it is important to
remember this is for the entirety of our service lifetime and the size of
the keys database would not increase like our main database.
Caching
Now, let's talk about caching. As per our estimations, we will require
around ~35 GB of memory per day to cache 20% of the incoming
requests to our services. For this use case, we can use Redis or
Memcached servers alongside our API server.
For more details, refer to caching.
Design
Now that we have identified some core components, let's do the first
draft of our system design.
3. If the key is still not found in the database, an HTTP 404 (Not
found) error is sent to the user.
Detailed design
It's time to discuss the finer details of our design.
Data Partitioning
To scale out our databases we will need to partition our data.
Horizontal partitioning (aka Sharding) can be a good first step. We can use partitioning schemes such as:
Hash-Based Partitioning
List-Based Partitioning
Range-Based Partitioning
Composite Partitioning
The above approaches can still cause uneven data and load distribution; we can solve this using Consistent hashing.
For more details, refer to Sharding and Consistent Hashing.
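As a sketch, hash-based partitioning can be as simple as mapping each key's hash onto a shard (illustrative only):

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Hash-based partitioning: deterministically map a key to a shard."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# Note: with plain modulo hashing, changing num_shards remaps most keys,
# which is exactly the problem consistent hashing addresses.
```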
Database cleanup
This is more of a maintenance step for our services and depends on
whether we keep the expired entries or remove them. If we do decide
to remove expired entries, we can approach this in two different ways:
Active cleanup
Cache
Now let us talk about caching.
Which cache eviction policy to use?
As we discussed before, we can use solutions like Redis or
Memcached and cache 20% of the daily traffic but what kind of cache
eviction policy would best fit our needs?
Least Recently Used (LRU) can be a good policy for our system. In this
policy, we discard the least recently used key first.
How to handle cache miss?
Whenever there is a cache miss, our servers can hit the database
directly and update the cache with the new entries.
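A sketch of both answers together: an LRU cache plus a cache-aside read path. The `db` parameter is a stand-in for our database, and all names are illustrative:

```python
from collections import OrderedDict

class LRUCache:
    """Discards the least recently used key once capacity is exceeded."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the LRU entry

def resolve(short_url, cache, db):
    """Cache-aside read: on a miss, hit the database and fill the cache."""
    url = cache.get(short_url)
    if url is None:
        url = db.get(short_url)  # stand-in for a database lookup
        if url is not None:
            cache.put(short_url, url)
    return url
```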
Security
Whatsapp
Let's design a Whatsapp-like instant messaging service, similar to services like Whatsapp, Facebook Messenger, and WeChat.
What is Whatsapp?
Whatsapp is a chat application that provides instant messaging
services to its users. It is one of the most used mobile applications on
the planet connecting over 2 billion users in 180+ countries. Whatsapp
is also available on the web.
Requirements
Our system should meet the following requirements:
Functional requirements
Should support one-on-one chat.
Group chats (max 100 people).
Should support file sharing (image, video, etc.).
Non-functional requirements
High availability with minimal latency.
The system should be scalable and efficient.
Extended requirements
Sent, Delivered, and Read receipts of the messages.
Traffic
Let us assume we have 50 million daily active users (DAU) and on
average each user sends at least 10 messages to 4 different people
every day. This gives us 2 billion messages per day.
50 million × 40 messages = 2 billion messages/day
Storage
Bandwidth
As our system is handling 10.2 TB of ingress every day, we will require a minimum bandwidth of around 120 MB per second.
10.2 TB / (24 hrs × 3600 seconds) = ~120 MB/second
High-level estimate
Here is our high-level estimate:
| Type | Estimate |
| --- | --- |
| Daily active users (DAU) | 50 million |
| Requests per second (RPS) | 24K/s |
| Storage (per day) | ~10.2 TB |
| Storage (10 years) | ~38 PB |
| Bandwidth | ~120 MB/s |
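A quick sanity check of these numbers with the assumptions above:

```python
dau = 50_000_000
messages_per_user = 10 * 4     # 10 messages to 4 different people each day

messages_per_day = dau * messages_per_user      # 2 billion
rps = messages_per_day / (24 * 3600)            # roughly 24K requests/second
ingress_bandwidth = 10.2e12 / (24 * 3600)       # ~120 MB per second
```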
This table maps users and groups as multiple users can be a part of
multiple groups (N:M relationship) and vice versa.
API design
Let us do a basic API design for our services:
Parameters
User ID ( UUID ): ID of the current user.
Returns
Result ( Chat[] | Group[] ): All the chats and groups the user is a part of.
Get messages
Get all messages for a user given the channelID (chat or group id).
Parameters
User ID ( UUID ): ID of the current user.
Channel ID ( UUID ): ID of the channel (chat or group) from which
messages need to be retrieved.
Returns
Messages ( Message[] ): All the messages in a given chat or group.
Send message
Send a message from a user to a channel (chat or group).
Parameters
User ID ( UUID ): ID of the current user.
Channel ID ( UUID ): ID of the channel (chat or group) user wants to send
a message to.
Message ( Message ): The message (text, image, video, etc.) that the
user wants to send.
Returns
Result ( boolean ): Represents whether the operation was successful or
not.
Parameters
User ID ( UUID ): ID of the current user.
Channel ID ( UUID ): ID of the channel (chat or group) the user wants to
join or leave.
Returns
Result ( boolean ): Represents whether the operation was successful or
not.
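The endpoints above can be sketched as a toy in-memory implementation; the channel store and field names are hypothetical and mirror the parameter lists:

```python
# Toy in-memory channel store: channel ID -> members and messages.
_channels: dict[str, dict] = {}

def get_messages(user_id: str, channel_id: str) -> list:
    """All the messages in a given chat or group."""
    return _channels.get(channel_id, {}).get("messages", [])

def send_message(user_id: str, channel_id: str, message: dict) -> bool:
    """Append a message to a channel; return whether it succeeded."""
    channel = _channels.setdefault(channel_id, {"members": set(), "messages": []})
    channel["messages"].append({"from": user_id, **message})
    return True

def join_group(user_id: str, channel_id: str) -> bool:
    """Add the user to a channel's member set."""
    channel = _channels.setdefault(channel_id, {"members": set(), "messages": []})
    channel["members"].add(user_id)
    return True
```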
High-level design
Now let us do a high-level design of our system.
Architecture
performs well but we can further improve the performance using gRPC
which is more lightweight and efficient.
Service discovery is another thing we will have to take into account.
We can also use a service mesh that enables managed, observable,
and secure communication between individual services.
Note: Learn more about REST, GraphQL, gRPC and how they compare
with each other.
Real-time messaging
How do we efficiently send and receive messages? We have two
different options:
Pull model
The client can periodically send an HTTP request to servers to check if
there are any new messages. This can be achieved via something like
Long polling.
Push model
The client opens a long-lived connection with the server and once new
data is available it will be pushed to the client. We can use WebSockets
or Server-Sent Events (SSE) for this.
The pull model approach is not scalable as it will create unnecessary
request overhead on our servers and most of the time the response will
be empty, thus wasting our resources. To minimize latency, using the
push model with WebSockets is a better choice because then we can
push data to the client once it's available without any delay given the
connection is open with the client. Also, WebSockets provide full-
duplex communication, unlike Server-Sent Events (SSE) which are only
unidirectional.
Last seen
To implement the last seen functionality, we can use a heartbeat
mechanism, where the client can periodically ping the servers
indicating its liveness. Since this needs to be as low overhead as
possible, we can store the last active timestamp in the cache as
follows:
| Key | Value |
| --- | --- |
| User A | 2022-07-01T14:32:50 |
| User B | 2022-07-05T05:10:35 |
| User C | 2022-07-10T04:33:25 |
This will give us the last time the user was active. This functionality will
be handled by the presence service combined with Redis or
Memcached as our cache.
Another way to implement this is to track the latest action of the user,
once the last activity crosses a certain threshold, such as "user hasn't
performed any action in the last 30 seconds", we can show the user as
offline and last seen with the last recorded timestamp. This will be
more of a lazy update approach and might benefit us over heartbeat in
certain cases.
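Both variants boil down to comparing a stored timestamp against a threshold; a sketch (the threshold and storage layout are illustrative, with a plain dict standing in for the cache):

```python
ONLINE_THRESHOLD = 30  # seconds of inactivity before showing "last seen"
last_active: dict[str, float] = {}  # user ID -> last heartbeat timestamp

def heartbeat(user_id: str, now: float) -> None:
    """Record a liveness ping from the client."""
    last_active[user_id] = now

def presence(user_id: str, now: float) -> str:
    """Show 'online' within the threshold, otherwise the last seen time."""
    ts = last_active.get(user_id)
    if ts is None:
        return "offline"
    if now - ts <= ONLINE_THRESHOLD:
        return "online"
    return f"last seen at {ts:.0f}"
```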
Notifications
Once a message is sent in a chat or a group, we will first check if the recipient is active or not. We can get this information by taking the user's active connection and last seen into consideration.
If the recipient is not active, the chat service will add an event to a
message queue with additional metadata such as the client's device
platform which will be used to route the notification to the correct
platform later on.
The notification service will then consume the event from the message
queue and forward the request to Firebase Cloud Messaging (FCM) or
Apple Push Notification Service (APNS) based on the client's device
platform (Android, iOS, web, etc). We can also add support for email
and SMS.
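This flow can be sketched with an in-process queue standing in for the real message queue, and strings standing in for the FCM/APNS calls (all names hypothetical):

```python
from queue import Queue

notification_queue: Queue = Queue()

def enqueue_notification(recipient_id: str, message: str, platform: str) -> None:
    """Chat service: queue an event with routing metadata for offline users."""
    notification_queue.put({"to": recipient_id, "message": message,
                            "platform": platform})

def consume_one() -> str:
    """Notification service: route the event by device platform."""
    event = notification_queue.get()
    if event["platform"] == "ios":
        return f"APNS -> {event['to']}"   # stub for Apple Push Notification Service
    if event["platform"] == "android":
        return f"FCM -> {event['to']}"    # stub for Firebase Cloud Messaging
    return f"WebPush -> {event['to']}"
```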
Why are we using a message queue?
Most message queues provide best-effort ordering, which ensures that messages are generally delivered in the same order as they're sent, and guarantee that a message is delivered at least once, which is an important part of our service functionality.
While this seems like a classic publish-subscribe use case, it is actually
not as mobile devices and browsers each have their own way of
handling push notifications. Usually, notifications are handled
externally via Firebase Cloud Messaging (FCM) or Apple Push
Notification Service (APNS) unlike message fan-out which we
commonly see in backend services. We can use something like
Amazon SQS or RabbitMQ to support this functionality.
Read receipts
Handling read receipts can be tricky, for this use case we can wait for
some sort of Acknowledgment (ACK) from the client to determine if
the message was delivered and update the corresponding deliveredAt
field. Similarly, we will mark the message as seen once the user opens the chat and update the corresponding seenAt timestamp field.
Design
Now that we have identified some core components, let's do the first
draft of our system design.
Detailed design
It's time to discuss our design decisions in detail.
Data Partitioning
To scale out our databases we will need to partition our data.
Horizontal partitioning (aka Sharding) can be a good first step. We can use partitioning schemes such as:
Hash-Based Partitioning
List-Based Partitioning
Range-Based Partitioning
Composite Partitioning
The above approaches can still cause uneven data and load distribution; we can solve this using Consistent hashing.
For more details, refer to Sharding and Consistent Hashing.
Caching
In a messaging application, we have to be careful about using cache as
our users expect the latest data, but many users will be requesting the
same messages, especially in a group chat. So, to prevent usage spikes on our resources, we can cache older messages.
Some group chats can have thousands of messages, and sending all of them over the network would be really inefficient. To improve efficiency, we can add pagination to our system APIs. This decision will be helpful for users with limited network bandwidth as they won't have to retrieve old messages unless requested.
Which cache eviction policy to use?
We can use solutions like Redis or Memcached and cache 20% of the
daily traffic but what kind of cache eviction policy would best fit our
needs?
Least Recently Used (LRU) can be a good policy for our system. In this
policy, we discard the least recently used key first.
How to handle cache miss?
Whenever there is a cache miss, our servers can hit the database
directly and update the cache with the new entries.
For more details, refer to Caching.
Fun fact: Whatsapp deletes media on its servers once it has been
downloaded by the user.
We can use object stores like Amazon S3, Azure Blob Storage, or
Google Cloud Storage for this use case.
API gateway
Since we will be using multiple protocols like HTTP, WebSocket,
TCP/IP, deploying multiple L4 (transport layer) or L7 (application layer)
type load balancers separately for each protocol will be expensive.
Instead, we can use an API Gateway that supports multiple protocols
without any issues.
API Gateway can also offer other features such as authentication,
authorization, rate limiting, throttling, and API versioning which will
improve the quality of our services.
We can use services like Amazon API Gateway or Azure API Management for this use case.
Twitter
Let's design a Twitter-like social media service, similar to services like Facebook, Instagram, etc.
What is Twitter?
Twitter is a social media service where users can read or post short
messages (up to 280 characters) called tweets. It is available on the
web and mobile platforms such as Android and iOS.
Requirements
Our system should meet the following requirements:
Functional requirements
Should be able to post new tweets (can be text, image, video,
etc.).
Should be able to follow other users.
Should have a newsfeed feature consisting of tweets from the
people the user is following.
Should be able to search tweets.
Non-Functional requirements
High availability with minimal latency.
The system should be scalable and efficient.
Extended requirements
Metrics and analytics.
Retweet functionality.
Favorite tweets.
Traffic
This will be a read-heavy system, let us assume we have 1 billion total
users with 200 million daily active users (DAU), and on average each
user tweets 5 times a day. This gives us 1 billion tweets per day.
200 million × 5 tweets = 1 billion tweets/day
Storage
If we assume each message on average is 100 bytes, we will require
about 100 GB of database storage every day.
Bandwidth
As our system is handling 5.1 TB of ingress every day, we will require a minimum bandwidth of around 60 MB per second.
5.1 TB / (24 hrs × 3600 seconds) = ~60 MB/second
High-level estimate
Here is our high-level estimate:
| Type | Estimate |
| --- | --- |
| Daily active users (DAU) | 200 million |
| Requests per second (RPS) | 12K/s |
| Storage (per day) | ~5.1 TB |
| Storage (10 years) | ~19 PB |
| Bandwidth | ~60 MB/s |
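A quick sanity check of these numbers with the assumptions above:

```python
dau = 200_000_000
tweets_per_user = 5

tweets_per_day = dau * tweets_per_user          # 1 billion
rps = tweets_per_day / (24 * 3600)              # roughly 12K requests/second
text_storage_per_day = tweets_per_day * 100     # 100 bytes per tweet -> 100 GB
ingress_bandwidth = 5.1e12 / (24 * 3600)        # ~60 MB per second
```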
While our data model seems quite relational, we don't necessarily need
to store everything in a single database, as this can limit our scalability
and quickly become a bottleneck.
We will split the data between different services each having
ownership over a particular table. Then we can use a relational
database such as PostgreSQL or a distributed NoSQL database such
as Apache Cassandra for our use case.
API design
Let us do a basic API design for our services:
Post a tweet
This API will allow the user to post a tweet on the platform.
Parameters
User ID ( UUID ): ID of the user.
Content ( string ): Contents of the tweet.
Media URL ( string ): URL of the attached media (optional).
Returns
Result ( boolean ): Represents whether the operation was successful or
not.
Parameters
Follower ID ( UUID ): ID of the current user.
Followee ID ( UUID ): ID of the user we want to follow or unfollow.
Returns
Result ( boolean ): Represents whether the operation was successful or
not.
Get newsfeed
This API will return all the tweets to be shown within a given newsfeed.
Parameters
User ID ( UUID ): ID of the user.
Returns
High-level design
Now let us do a high-level design of our system.
Architecture
We will be using microservices architecture since it will make it easier
to horizontally scale and decouple our services. Each service will have
ownership of its own data model. Let's try to divide our system into
some core services.
User Service
This service handles user-related concerns such as authentication and
user information.
Newsfeed Service
This service will handle the generation and publishing of user
newsfeeds. It will be discussed in detail separately.
Tweet Service
The tweet service will handle tweet-related use cases such as posting
a tweet, favorites, etc.
Search Service
The service is responsible for handling search-related functionality. It
will be discussed in detail separately.
Media service
This service will handle the media (images, videos, files, etc.) uploads.
It will be discussed in detail separately.
Notification Service
This service will simply send push notifications to the users.
Analytics Service
This service will be used for metrics and analytics use cases.
What about inter-service communication and service discovery?
Since our architecture is microservices-based, services will be
communicating with each other as well. Generally, REST or HTTP
performs well but we can further improve the performance using gRPC
which is more lightweight and efficient.
Service discovery is another thing we will have to take into account.
We can also use a service mesh that enables managed, observable,
and secure communication between individual services.
Note: Learn more about REST, GraphQL, gRPC and how they compare
with each other.
Newsfeed
When it comes to the newsfeed, it seems easy enough to implement,
but there are a lot of things that can make or break this feature. So,
let's divide our problem into two parts:
Generation
Let's assume we want to generate the feed for user A, we will perform
the following steps:
1. Retrieve the IDs of all the users and entities (hashtags, topics, etc.)
user A follows.
2. Fetch the relevant tweets for each of the retrieved IDs.
3. Use a ranking algorithm to rank the tweets based on parameters
such as relevance, time, engagement, etc.
4. Return the ranked tweets data to the client in a paginated manner.
Feed generation is an intensive process and can take quite a lot of
time, especially for users following a lot of people. To improve the
performance, the feed can be pre-generated and stored in the cache,
then we can have a mechanism to periodically update the feed and
apply our ranking algorithm to the new tweets.
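The four steps above can be sketched with in-memory stand-ins for the follow graph, the tweet store, and the ranking algorithm (here just a sort by a precomputed score; all data is illustrative):

```python
# Toy stand-ins for the user-follow graph and the tweet store.
follows = {"A": ["B", "C"]}
tweets = {
    "B": [{"id": 1, "score": 0.9}, {"id": 2, "score": 0.4}],
    "C": [{"id": 3, "score": 0.7}],
}

def generate_feed(user_id, page_size=2, page=0):
    candidates = []
    for followee in follows.get(user_id, []):        # step 1: who do we follow?
        candidates.extend(tweets.get(followee, []))  # step 2: fetch their tweets
    ranked = sorted(candidates, key=lambda t: t["score"], reverse=True)  # step 3
    start = page * page_size
    return ranked[start:start + page_size]           # step 4: paginate
```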
Publishing
Publishing is the step where the feed data is pushed according to each
specific user. This can be a quite heavy operation, as a user may have
millions of friends or followers. To deal with this, we have three
different approaches:
Pull Model (or Fan-out on load)
Ranking Algorithm
As we discussed, we will need a ranking algorithm to rank each tweet
according to its relevance to each specific user.
For example, Facebook used to utilize an EdgeRank algorithm, here,
the rank of each feed item is described by:
Rank = Affinity × Weight × Decay
Where,
Affinity : is the "closeness" of the user to the creator of the edge. If a
user frequently likes, comments, or messages the edge creator, then
the value of affinity will be higher, resulting in a higher rank for the
post.
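As a sketch, the formula translates directly into code (the input values are hypothetical):

```python
def edge_rank(affinity: float, weight: float, decay: float) -> float:
    """Rank = Affinity x Weight x Decay, per the formula above."""
    return affinity * weight * decay

# A post from a close friend (high affinity) that is recent (decay near 1)
# outranks an older post from a distant acquaintance.
```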
Retweets
Retweets are one of our extended requirements. To implement this
feature we can simply create a new tweet with the user id of the user
retweeting the original tweet and then modify the type enum and
content property of the new tweet to link it with the original tweet.
For example, the type enum property can be of type tweet, similar to
text, video, etc and content can be the id of the original tweet. Here the
first row indicates the original tweet while the second row is how we
can represent a retweet.
| id | userID | type |
| --- | --- | --- |
| ad34-291a-45f6-b36c | 7a2c-62c4-4dc8-b1bb | text |
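In code form, the idea looks like this (field names are hypothetical, mirroring the table columns):

```python
import uuid

# A retweet is stored as a brand-new tweet whose type is "tweet" and whose
# content points at the original tweet's ID.
original = {"id": str(uuid.uuid4()), "userID": "user-1",
            "type": "text", "content": "Hello, world!"}
retweet = {"id": str(uuid.uuid4()), "userID": "user-2",
           "type": "tweet", "content": original["id"]}
```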
Search
Notifications
Push notifications are an integral part of any social media platform. We
can use a message queue or a message broker such as Apache Kafka
with the notification service to dispatch requests to Firebase Cloud
Messaging (FCM) or Apple Push Notification Service (APNS) which will
handle the delivery of the push notifications to user devices.
For more details, refer to the Whatsapp system design where we
discuss push notifications.
Detailed design
It's time to discuss our design decisions in detail.
Data Partitioning
To scale out our databases we will need to partition our data.
Horizontal partitioning (aka Sharding) can be a good first step. We can
use partitioning schemes such as:
Hash-Based Partitioning
List-Based Partitioning
Range Based Partitioning
Composite Partitioning
The above approaches can still cause uneven data and load
distribution; we can solve this using Consistent hashing.
For more details, refer to Sharding and Consistent Hashing.
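To make the idea concrete, here is a minimal consistent hash ring with virtual nodes (a sketch, not production code; the node names and vnode count are arbitrary):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Keys map to the first node clockwise on the ring; virtual nodes
    smooth out the distribution across physical nodes."""

    def __init__(self, nodes, vnodes: int = 100):
        self._ring = []  # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):
                h = self._hash(f"{node}#{i}")
                bisect.insort(self._ring, (h, node))

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def get_node(self, key: str) -> str:
        h = self._hash(key)
        idx = bisect.bisect(self._ring, (h, "")) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["db-0", "db-1", "db-2"])
shard = ring.get_node("user:42")  # deterministic shard for this key
```

Because the mapping depends only on the hash ring, adding or removing a node remaps only the keys adjacent to it, rather than reshuffling everything as naive `hash(key) % n` would.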
Mutual friends
For mutual friends, we can build a social graph for every user. Each
node in the graph will represent a user, and a directed edge will
represent followers and followees. After that, we can traverse the
followers of a user to find and suggest a mutual friend. This would
require a graph database such as Neo4j or ArangoDB.
This is a pretty simple algorithm; to improve our suggestion accuracy,
we would need to incorporate a machine learning-based
recommendation model as part of our algorithm.
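The simple traversal can be sketched with a plain adjacency list (the user names are illustrative; a real system would run this query inside the graph database):

```python
from collections import defaultdict

# Adjacency list: user -> set of users they follow (directed edges).
follows = defaultdict(set)
for a, b in [("alice", "bob"), ("alice", "carol"),
             ("bob", "dave"), ("carol", "dave"), ("carol", "erin")]:
    follows[a].add(b)

def suggest_friends(user: str) -> set:
    """Suggest users followed by people the user follows
    (friends-of-friends), excluding existing follows and the user."""
    suggestions = set()
    for friend in follows[user]:
        suggestions |= follows[friend]
    return suggestions - follows[user] - {user}

suggest_friends("alice")  # dave and erin, via bob and carol
```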
Caching
In a social media application, we have to be careful about using caches,
as our users expect the latest data. So, to prevent usage spikes on our
resources, we can cache the top 20% of the tweets.
To further improve efficiency we can add pagination to our system
APIs. This decision will be helpful for users with limited network
bandwidth as they won't have to retrieve old messages unless
requested.
Which cache eviction policy to use?
We can use solutions like Redis or Memcached and cache 20% of the
daily traffic, but what kind of cache eviction policy would best fit our
needs?
Least Recently Used (LRU) can be a good policy for our system. In this
policy, we discard the least recently used key first.
How to handle cache miss?
Whenever there is a cache miss, our servers can hit the database
directly and update the cache with the new entries.
For more details, refer to Caching.
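The two answers above, LRU eviction plus read-through on a miss, fit together in a few lines (a minimal sketch; in practice Redis or Memcached handle eviction for us, and `db` below stands in for the real database):

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU cache: on overflow, evict the least recently used key."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict LRU entry

def fetch_tweet(cache: LRUCache, db: dict, tweet_id: str):
    """Cache-aside read: on a miss, hit the database and populate the cache."""
    tweet = cache.get(tweet_id)
    if tweet is None:
        tweet = db[tweet_id]      # cache miss: read from the database
        cache.put(tweet_id, tweet)
    return tweet

cache = LRUCache(capacity=2)
db = {"t1": "tweet one", "t2": "tweet two", "t3": "tweet three"}
```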
But where can we store files at scale? Well, object storage is what
we're looking for. Object stores break data files up into pieces called
objects. It then stores those objects in a single repository, which can
be spread out across multiple networked systems. We can also use
distributed file storage such as HDFS or GlusterFS.
What is Netflix?
Netflix is a subscription-based streaming service that allows its
members to watch TV shows and movies on an internet-connected
device. It is available on platforms such as the Web, iOS, Android, TV,
etc.
Requirements
Our system should meet the following requirements:
Functional requirements
Users should be able to stream and share videos.
Non-Functional requirements
High availability with minimal latency.
High reliability, no uploads should be lost.
The system should be scalable and efficient.
Extended requirements
Certain content should be geo-blocked.
Resume video playback from the point user left off.
Record metrics and analytics of videos.
Traffic
This will be a read-heavy system, let us assume we have 1 billion total
users with 200 million daily active users (DAU), and on average each
user watches 5 videos a day. This gives us 1 billion videos watched per
day.
200 million × 5 videos = 1 billion/day
1 billion requests per day translates into about 12K requests per
second.
Storage
If we assume 50 million videos are uploaded daily with an average size
of 100 MB, we will require about 5 PB of storage every day.
50 million × 100 MB = 5 PB/day
Bandwidth
As our system is handling 5 PB of ingress every day, we will require a
minimum bandwidth of around 58 GB per second.
5 PB / (24 hrs × 3600 seconds) =~ 58 GB/second
High-level estimate
| Type | Estimate |
| --- | --- |
| Daily active users (DAU) | 200 million |
| Requests per second (RPS) | 12K/s |
| Storage (per day) | ~5 PB |
| Storage (10 years) | ~18,250 PB |
| Bandwidth | ~58 GB/s |
API design
Let us do a basic API design for our services:
Upload a video
Given a byte stream, this API enables video to be uploaded to our
service.
Parameters
Title ( string ): Title of the new video.
Streaming a video
This API allows our users to stream a video with the preferred codec
and resolution.
Parameters
Video ID ( UUID ): ID of the video that needs to be streamed.
Codec ( Enum<string> ): Required codec of the requested video, such as
h.265 , h.264 , VP9 , etc.
Search for a video
This API will enable our users to search for a video.
Parameters
Query ( string ): Search query from the user.
Next Page ( string ): Token for the next page, this can be used for
pagination (optional).
Returns
Videos ( Video[] ): All the videos available for a particular search query.
Add a comment
This API will allow our users to post a comment on a video (like
YouTube).
Parameters
VideoID ( UUID ): ID of the video user wants to comment on.
Comment ( string ): The text content of the comment.
Returns
High-level design
Now let us do a high-level design of our system.
Architecture
We will be using microservices architecture since it will make it easier
to horizontally scale and decouple our services. Each service will have
ownership of its own data model. Let's try to divide our system into
some core services.
User Service
This service handles user-related concerns such as authentication and
user information.
Stream Service
The stream service will handle video streaming-related functionality.
Search Service
This service is responsible for handling search-related functionality. It
will be discussed in detail separately.
Media service
This service will handle the video uploads and processing. It will be
discussed in detail separately.
Analytics Service
This service will be used for metrics and analytics use cases.
What about inter-service communication and service discovery?
Since our architecture is microservices-based, services will be
communicating with each other as well. Generally, REST over HTTP
performs well, but we can further improve performance using gRPC,
which is more lightweight and efficient.
Service discovery is another thing we will have to take into account.
We can also use a service mesh that enables managed, observable,
and secure communication between individual services.
Note: Learn more about REST, GraphQL, gRPC and how they compare
with each other.
Video processing
There are so many variables in play when it comes to processing a
video. For example, an average data size of two-hour raw 8K footage
from a high-end camera can easily be up to 4 TB, thus we need to have
some kind of processing to reduce both storage and delivery costs.
Here's how we can process videos once they're uploaded by the
content team (or users in YouTube's case) and are queued for
processing in our message queue.
File Chunker
The first step is to split the incoming video file into smaller chunks.
Chunking helps eliminate duplicate copies of repeating data and
reduces the amount of data sent over the network by only selecting
changed chunks.
Usually, a video file can be split into equal-size chunks based on
timestamps, but Netflix instead splits chunks based on scenes. This
slight variation becomes a huge factor for a better user experience:
whenever the client requests a chunk from the server, there is a lower
chance of interruption since a complete scene will be retrieved.
Content Filter
This step checks whether the video adheres to the content policy of
the platform. This can be pre-approved in the case of Netflix, as per the
content rating of the media, or strictly enforced, as on YouTube.
This entire step is done by a machine learning model which performs
copyright, piracy, and NSFW checks. If issues are found, we can push
the task to a dead-letter queue (DLQ) and someone from the
moderation team can do further inspection.
Transcoder
Transcoding is a process in which the original data is decoded to an
intermediate uncompressed format, which is then encoded into the
target format. This process uses different codecs to perform bitrate
adjustment, image downsampling, or re-encoding the media.
This results in a smaller size file and a much more optimized format for
the target devices. Standalone solutions such as FFmpeg or cloud-
based solutions like AWS Elemental MediaConvert can be used to
implement this step of the pipeline.
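As a sketch of how the FFmpeg route might look, we can assemble the transcode command as an argument list (the codec, bitrate, and file paths are illustrative; `-i`, `-c:v`, and `-b:v` are standard FFmpeg options for input, video codec, and video bitrate):

```python
def transcode_command(src: str, dst: str, codec: str = "libx264",
                      bitrate: str = "4M") -> list:
    """Build an FFmpeg argument list for a bitrate-adjusted transcode.

    In a real pipeline this list would be passed to subprocess.run();
    here we only construct it.
    """
    return ["ffmpeg", "-i", src, "-c:v", codec, "-b:v", bitrate, dst]

cmd = transcode_command("raw/movie.mov", "out/movie.mp4")
```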
Quality Conversion
This is the last step of the processing pipeline and as the name
suggests, this step handles the conversion of the transcoded media
from the previous step into different resolutions such as 4K, 1440p,
1080p, 720p, etc.
This allows us to fetch the desired quality of the video as per the user's
request, and once the media file finishes processing, it will be uploaded
to a distributed file storage such as HDFS, GlusterFS, or an object
storage such as Amazon S3 for later retrieval during streaming.
Note: We can add additional steps such as subtitles and thumbnails
generation as part of our pipeline.
Why are we using a message queue?
Processing videos as a long-running task makes much more sense, and
a message queue also decouples our video processing pipeline from
the upload functionality. We can use something like Amazon SQS or
RabbitMQ to support this.
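The decoupling can be sketched with an in-process queue (a stand-in for SQS or RabbitMQ; the video ids and the `:transcoded` marker are illustrative): uploads enqueue and return immediately, while a long-running worker drains the queue.

```python
import queue
import threading

video_queue = queue.Queue()
processed = []

def worker():
    """Long-running consumer: pulls uploaded video ids off the queue
    and runs the processing pipeline, decoupled from the upload path."""
    while True:
        video_id = video_queue.get()
        if video_id is None:      # sentinel: shut down
            break
        processed.append(f"{video_id}:transcoded")
        video_queue.task_done()

t = threading.Thread(target=worker)
t.start()
for vid in ["v1", "v2", "v3"]:    # uploads enqueue and return immediately
    video_queue.put(vid)
video_queue.put(None)
t.join()
```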
Video streaming
Video streaming is a challenging task from both the client and server
perspectives. Moreover, internet connection speeds vary quite a lot
between different users. To make sure users don't re-fetch the same
content, we can use a Content Delivery Network (CDN).
Netflix takes this a step further with its Open Connect program. In this
approach, they partner with thousands of Internet Service Providers
(ISPs) to localize their traffic and deliver their content more efficiently.
What is the difference between Netflix's Open Connect and a
traditional Content Delivery Network (CDN)?
Unlike a traditional CDN, which serves content from its own edge
locations, Open Connect Appliances are deployed directly inside the
networks of partner ISPs, so popular titles are served from within the
ISP itself, reducing transit costs and improving latency.
Searching
Sometimes traditional DBMSs are not performant enough; we need
something that allows us to store, search, and analyze huge volumes
of data quickly and in near real-time, returning results within
milliseconds. Elasticsearch can help us with this use case.
Elasticsearch is a distributed, free and open search and analytics
engine for all types of data, including textual, numerical, geospatial,
structured, and unstructured. It is built on top of Apache Lucene.
How do we identify trending content?
Trending functionality will be built on top of the search functionality.
We can cache the most frequently searched queries in the last N
seconds and update them every M seconds using some sort of batch
job mechanism.
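The N-second window and M-second refresh can be sketched like this (a minimal in-memory version; the query strings and timestamps are illustrative, and a real system would do this in a batch job over the search logs):

```python
import time
from collections import Counter, deque

class TrendingQueries:
    """Count search queries over a sliding window of N seconds; a batch
    job can call top() every M seconds to refresh the trending cache."""

    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self._events = deque()  # (timestamp, query)

    def record(self, query: str, now: float = None):
        self._events.append((now if now is not None else time.time(), query))

    def top(self, k: int, now: float = None):
        now = now if now is not None else time.time()
        while self._events and self._events[0][0] < now - self.window:
            self._events.popleft()  # drop events outside the window
        counts = Counter(q for _, q in self._events)
        return [q for q, _ in counts.most_common(k)]

trending = TrendingQueries(window_seconds=60)
for q in ["stranger things", "dark", "stranger things"]:
    trending.record(q, now=100.0)
top_now = trending.top(1, now=120.0)  # most searched within the window
```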
Sharing
Sharing content is an important part of any platform, for this, we can
have some sort of URL shortener service in place that can generate
short URLs for the users to share.
For more details, refer to the URL Shortener system design.
Detailed design
It's time to discuss our design decisions in detail.
Data Partitioning
To scale out our databases we will need to partition our data.
Horizontal partitioning (aka Sharding) can be a good first step. We can
use partitioning schemes such as:
Hash-Based Partitioning
List-Based Partitioning
Range Based Partitioning
Composite Partitioning
The above approaches can still cause uneven data and load
distribution; we can solve this using Consistent hashing.
For more details, refer to Sharding and Consistent Hashing.
Geo-blocking
Recommendations
Netflix uses a machine learning model which uses the user's viewing
history to predict what the user might like to watch next; an algorithm
like Collaborative Filtering can be used.
However, Netflix (like YouTube) uses its own algorithm, called the
Netflix Recommendation Engine, which tracks several data points
such as:
User profile information like age, gender, and location.
Browsing and scrolling behavior of the user.
Time and date a user watched a title.
The device which was used to stream the content.
The number of searches and what terms were searched.
For more detail, refer to Netflix recommendation research.
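The core idea of collaborative filtering can be shown with user-user similarity over watch history (a toy sketch; the users, titles, and ratings below are made up, and production systems use far richer models):

```python
from math import sqrt

# Watch-history matrix: user -> {title: rating}, illustrative data.
ratings = {
    "alice": {"Dark": 5, "Ozark": 4, "Narcos": 1},
    "bob":   {"Dark": 4, "Ozark": 5, "Narcos": 2},
    "carol": {"Dark": 1, "Ozark": 1, "Narcos": 5},
}

def cosine_similarity(u: dict, v: dict) -> float:
    """Cosine similarity over the titles both users rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[t] * v[t] for t in common)
    norm_u = sqrt(sum(u[t] ** 2 for t in common))
    norm_v = sqrt(sum(v[t] ** 2 for t in common))
    return dot / (norm_u * norm_v)

# Alice's tastes are closer to Bob's than to Carol's, so titles Bob
# watched (and Alice hasn't) would be recommended to Alice first.
sim_bob = cosine_similarity(ratings["alice"], ratings["bob"])
sim_carol = cosine_similarity(ratings["alice"], ratings["carol"])
```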
Caching
In a streaming platform, caching is important. We have to be able to
cache as much static media content as possible to improve user
experience. We can use solutions like Redis or Memcached but what
kind of cache eviction policy would best fit our needs?
Which cache eviction policy to use?
Least Recently Used (LRU) can be a good policy for our system. In this
policy, we discard the least recently used key first.
How to handle cache miss?
Whenever there is a cache miss, our servers can hit the database
directly and update the cache with the new entries.
For more details, refer to Caching.
What is Uber?
Uber is a mobility service provider, allowing users to book rides and a
driver to transport them in a way similar to a taxi. It is available on the
web and mobile platforms such as Android and iOS.
Requirements
Our system should meet the following requirements:
Functional requirements
We will design our system for two types of users: Customers and
Drivers.
Customers
Customers should be able to see all the cabs in the vicinity with an
ETA and pricing information.
Customers should be able to book a cab to a destination.
Customers should be able to see the location of the driver.
Drivers
Drivers should be able to accept or deny the customer's requested
ride.
Once a driver accepts the ride, they should see the pickup location
of the customer.
Drivers should be able to mark the trip as complete on reaching
the destination.
Non-Functional requirements
High reliability.
High availability with minimal latency.
The system should be scalable and efficient.
Extended requirements
Customers can rate the trip after it's completed.
Payment processing.
Metrics and analytics.
Traffic
Let us assume we have 100 million daily active users (DAU) with 1
million drivers and on average our platform enables 10 million rides
daily.
If on average each user performs 10 actions (such as checking
available rides and fares, or booking rides) we will have to handle 1
billion requests daily.
100 million × 10 actions = 1 billion/day
1 billion requests per day translate into 12K requests per second.
1 billion / (24 hrs × 3600 seconds) =~ 12K requests/second
Storage
If we assume each request on average is 400 bytes, we will require
about 400 GB of database storage every day.
1 billion × 400 bytes =~ 400 GB/day
Bandwidth
As our system is handling 400 GB of ingress every day, we will require
a minimum bandwidth of around 5 MB per second.
400 GB / (24 hrs × 3600 seconds) =~ 5 MB/second
High-level estimate
Here is our high-level estimate:
| Type | Estimate |
| --- | --- |
| Daily active users (DAU) | 100 million |
| Requests per second (RPS) | 12K/s |
| Storage (per day) | ~400 GB |
| Storage (10 years) | ~1.4 PB |
| Bandwidth | ~5 MB/s |
API design
Let us do a basic API design for our services:
Request a Ride
Through this API, customers will be able to request a ride.
Parameters
Customer ID ( UUID ): ID of the customer.
Source ( Tuple<float> ): Tuple containing the latitude and longitude of
the trip's starting location.
Destination ( Tuple<float> ): Tuple containing the latitude and longitude
of the trip's destination.
Returns
Result ( boolean ): Represents whether the operation was successful or
not.
Cancel the Ride
This API will allow customers to cancel an ongoing ride.
Parameters
Customer ID ( UUID ): ID of the customer.
Reason ( UUID ): Reason for canceling the ride (optional).
Returns
Result ( boolean ): Represents whether the operation was successful or
not.
Accept or Deny the Ride
This API will allow drivers to accept or deny a requested trip.
Parameters
Driver ID ( UUID ): ID of the driver.
Trip ID ( UUID ): ID of the requested trip.
Returns
Result ( boolean ): Represents whether the operation was successful or
not.
Rate the Trip
This API will enable customers to rate a completed trip.
Parameters
Customer ID ( UUID ): ID of the customer.
Trip ID ( UUID ): ID of the completed trip.
Rating ( int ): Rating of the trip.
Feedback ( string ): Feedback about the trip by the customer (optional).
Returns
Result ( boolean ): Represents whether the operation was successful or
not.
High-level design
Now let us do a high-level design of our system.
Architecture
We will be using microservices architecture since it will make it easier
to horizontally scale and decouple our services. Each service will have
ownership of its own data model. Let's try to divide our system into
some core services.
Customer Service
This service handles customer-related concerns such as authentication
and customer information.
Driver Service
This service handles driver-related concerns such as authentication
and driver information.
Ride Service
This service will be responsible for ride matching and quadtree
aggregation. It will be discussed in detail separately.
Trip Service
This service handles trip-related functionality in our system.
Payment Service
This service will be responsible for handling payments in our system.
Notification Service
This service will simply send push notifications to the users. It will be
discussed in detail separately.
Analytics Service
This service will be used for metrics and analytics use cases.
What about inter-service communication and service discovery?
Since our architecture is microservices-based, services will be
communicating with each other as well. Generally, REST over HTTP
performs well, but we can further improve performance using gRPC,
which is more lightweight and efficient.
Service discovery is another thing we will have to take into account.
We can also use a service mesh that enables managed, observable,
and secure communication between individual services.
Note: Learn more about REST, GraphQL, gRPC and how they compare
with each other.
Location Tracking
How do we efficiently send and receive live location data from the
client (customers and drivers) to our backend? We have two different
options:
Pull model
Ride Matching
We need a way to efficiently store and query nearby drivers. Let's
explore different solutions we can incorporate into our design.
SQL
We already have access to the latitude and longitude of our customers,
and with databases like PostgreSQL and MySQL we can perform a
query to find nearby driver locations given a latitude and longitude (X,
Y) within a radius (R).
```sql
SELECT * FROM locations
WHERE lat BETWEEN X-R AND X+R
  AND long BETWEEN Y-R AND Y+R
```
Quadtree seems perfect for our use case, we can update the Quadtree
every time we receive a new location update from the driver. To reduce
the load on the quadtree servers we can use an in-memory datastore
such as Redis to cache the latest updates. And with the application of
mapping algorithms such as the Hilbert curve, we can perform efficient
range queries to find nearby drivers for the customer.
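A minimal point quadtree makes the range-query idea concrete (a sketch, not production code; coordinates and driver ids below are illustrative, and the "radius" is a square bounding box rather than a true geodesic distance):

```python
class QuadTree:
    """Each node covers a bounding box and splits into four children
    once it holds more than `capacity` points."""

    def __init__(self, x0, y0, x1, y1, capacity=4):
        self.box = (x0, y0, x1, y1)
        self.capacity = capacity
        self.points = []      # (x, y, driver_id)
        self.children = None

    def insert(self, x, y, driver_id):
        x0, y0, x1, y1 = self.box
        if not (x0 <= x <= x1 and y0 <= y <= y1):
            return False      # point is outside this node's box
        if self.children is None:
            self.points.append((x, y, driver_id))
            if len(self.points) > self.capacity:
                self._split()
            return True
        return any(c.insert(x, y, driver_id) for c in self.children)

    def _split(self):
        x0, y0, x1, y1 = self.box
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [QuadTree(x0, y0, mx, my, self.capacity),
                         QuadTree(mx, y0, x1, my, self.capacity),
                         QuadTree(x0, my, mx, y1, self.capacity),
                         QuadTree(mx, my, x1, y1, self.capacity)]
        for p in self.points:            # push points down to children
            any(c.insert(*p) for c in self.children)
        self.points = []

    def query(self, x, y, r):
        """Return driver ids inside the square of half-width r around (x, y)."""
        x0, y0, x1, y1 = self.box
        if x1 < x - r or x0 > x + r or y1 < y - r or y0 > y + r:
            return []                     # box doesn't intersect the query
        found = [d for (px, py, d) in self.points
                 if abs(px - x) <= r and abs(py - y) <= r]
        if self.children is not None:
            for c in self.children:
                found += c.query(x, y, r)
        return found

tree = QuadTree(0, 0, 100, 100)
tree.insert(10, 10, "d1")
tree.insert(11, 11, "d2")
tree.insert(90, 90, "d3")
nearby = tree.query(10, 10, 5)  # drivers near the customer
```

The pruning in `query` is what makes this efficient: entire subtrees whose bounding boxes don't intersect the search area are skipped.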
What about race conditions?
Race conditions can easily occur when a large number of customers
request rides simultaneously. To avoid this, we can wrap our ride-
matching logic in a Mutex. Furthermore, every action should be
transactional in nature.
For more details, refer to Transactions and Distributed Transactions.
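A mutex around the matching logic can be sketched like this (the driver/customer ids are illustrative, and the in-memory dict stands in for real driver state):

```python
import threading

drivers = {"d1": "available", "d2": "available"}
match_lock = threading.Lock()
assignments = []

def match_ride(customer_id: str):
    """Assign the first available driver; the lock ensures two
    simultaneous requests can never claim the same driver."""
    with match_lock:
        for driver_id, status in drivers.items():
            if status == "available":
                drivers[driver_id] = customer_id  # claim the driver
                return driver_id
        return None

# Three concurrent requests compete for two drivers.
threads = [threading.Thread(target=lambda c=c: assignments.append(match_ride(c)))
           for c in ["c1", "c2", "c3"]]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Two customers get distinct drivers; the third gets None.
```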
How to find the best drivers nearby?
Payments
Handling payments at scale is challenging, to simplify our system we
can use a third-party payment processor like Stripe or PayPal. Once
the payment is complete, the payment processor will redirect the user
back to our application and we can set up a webhook to capture all the
payment-related data.
Notifications
Push notifications will be an integral part of our platform. We can use a
message queue or a message broker such as Apache Kafka with the
notification service to dispatch requests to Firebase Cloud Messaging
(FCM) or Apple Push Notification Service (APNS) which will handle the
delivery of the push notifications to user devices.
For more details, refer to the Whatsapp system design where we
discuss push notifications.
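The routing step inside the notification service can be sketched as follows (an assumption-laden stand-in: real integrations would call the Firebase and Apple SDKs, and the device dict shape is illustrative):

```python
def dispatch_notification(user_device: dict, message: str) -> str:
    """Route a push notification to the right downstream service based
    on device platform: FCM for Android, APNS for iOS."""
    platform = user_device["platform"]
    if platform == "android":
        return f"FCM:{user_device['token']}:{message}"
    if platform == "ios":
        return f"APNS:{user_device['token']}:{message}"
    raise ValueError(f"unsupported platform: {platform}")

dispatch_notification({"platform": "ios", "token": "abc123"},
                      "Your driver is arriving")
```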
Detailed design
It's time to discuss our design decisions in detail.
Data Partitioning
To scale out our databases we will need to partition our data.
Horizontal partitioning (aka Sharding) can be a good first step. We can
shard our database either based on existing partitioning schemes or
regions. If we divide the locations into regions using, let's say, zip
codes, we can effectively store all the data for a given region on a
fixed node. But this can still cause uneven data and load distribution;
we can solve this using Consistent hashing.
For more details, refer to Sharding and Consistent Hashing.
Caching
In a location services-based platform, caching is important. We have to
be able to cache the recent locations of the customers and drivers for
fast retrieval. We can use solutions like Redis or Memcached but what
kind of cache eviction policy would best fit our needs?
Which cache eviction policy to use?
Least Recently Used (LRU) can be a good policy for our system. In this
policy, we discard the least recently used key first.
How to handle cache miss?
Whenever there is a cache miss, our servers can hit the database
directly and update the cache with the new entries.
For more details, refer to Caching.
Written by
Karan Pratap Singh
A software engineer who aims to evolve, innovate, and inspire through
technology.