Unit 3 Chapter 1 Notes


Unit-3 (Chapter 1) Theory Notes

Instructor name: Dr. Vijay Mohan Shrimal

Emp ID: E17122

Subject Name: Cloud Design Pattern

Subject code: 21CSH-445

--------------------------------------------------------------------------------------------

DB Replication Pattern (Replicating Online Databases)

The first pattern we will discuss covers fault tolerance. To maintain zero downtime for the end user, we will use a master-slave configuration with full replication of data. It is one thing to have a backup of the actual data, but it is just as important to be able to continue serving requests even if the data center hosting the instance experiences issues or upgrades that cause a termination. Consider another scenario in which the database requires system-level updates that would make it unavailable.
This can be alleviated by having a separate, up-to-date instance in a different availability zone. We will first create our master and prepare it to be a MySQL master instance. Launch an EC2 instance from the AWS Linux AMI and SSH into it when it is ready:

$ sudo yum install -y mysql mysql-server >/dev/null 2>&1
$ sudo sed -i.bak 's/\[mysqld\]/[mysqld]\nlog-bin=mysql-bin\nserver-id=1/g' /etc/my.cnf
$ sudo service mysqld start >/dev/null 2>&1
$ mysqladmin -u root password 'abc1234!'
$ cat <<EOF | mysql -uroot -pabc1234\!
GRANT SELECT, PROCESS, REPLICATION CLIENT, REPLICATION SLAVE, RELOAD ON *.* TO 'repl' IDENTIFIED BY 'slavepass';
GRANT ALL ON *.* TO 'root'@'%' IDENTIFIED BY 'abc1234!';
FLUSH PRIVILEGES;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
UNLOCK TABLES;
EOF
File              Position  Binlog_Do_DB  Binlog_Ignore_DB
mysql-bin.000003  637

This instance is now configured as a master and will allow a slave instance to replicate from it using the repl user. We will now configure the slave. It is important to make note of the File and Position values from the output of the SHOW MASTER STATUS; command, as we will need them in the slave configuration. Launch another EC2 instance from the AWS Linux AMI into a different availability zone and SSH into it when it is ready (please note that some items, such as the master's IP address and the log file name and position, may need to change based on your configuration).
$ sudo yum install -y mysql mysql-server >/dev/null 2>&1
$ sudo sed -i.bak 's/\[mysqld\]/[mysqld]\nlog-bin=mysql-bin\nserver-id=2/g' /etc/my.cnf
$ sudo service mysqld start >/dev/null 2>&1
$ mysqladmin -u root password 'abc1234!'
$ cat <<EOF | mysql -uroot -pabc1234\!
CHANGE MASTER TO MASTER_HOST='10.203.30.61', MASTER_USER='repl', MASTER_PASSWORD='slavepass', MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=637;
START SLAVE;
EOF

$ mysql -uroot -pabc1234\! -e 'show slave status\G' | grep Slave_IO_State
Slave_IO_State: Waiting for master to send event

The final command shows the state Waiting for master to send event, which means it is
properly configured and ready to replicate. We will now create a database, table, and some
data, and verify that it replicates to the slave. SSH back into the master instance:

$ cat <<EOF | mysql -uroot -pabc1234\!
create database mydb;
use mydb;
create table people (
id INT AUTO_INCREMENT PRIMARY KEY,
firstname varchar(50),
lastname varchar(50)
);
EOF

$ for i in {1..10}; do
random_text=$(tr -dc a-z0-9 </dev/urandom | tr 0-9 ' \n' | sed 's/^ *//' | fmt | grep -E '[a-z]+\s[a-z]+' | head -n 1)
first_name=$(echo $random_text | awk '{print $1}')
last_name=$(echo $random_text | awk '{print $2}')
cat <<EOF | mysql -uroot -pabc1234\! mydb
insert into people (firstname, lastname) values ('$first_name', '$last_name');
EOF
done

$ mysql -uroot -pabc1234\! mydb -e 'select count(*) from people;'
+----------+
| count(*) |
+----------+
|       10 |
+----------+

The previous snippet creates a database mydb and a table people. It then uses some clever bash to generate random gibberish names to insert into the table. You can see from the final line that there are 10 rows in the table. Let's now verify that the data has replicated to the slave. SSH into the slave instance:

$ mysql -uroot -pabc1234\! mydb -e 'select count(*) from people;'
+----------+
| count(*) |
+----------+
|       10 |
+----------+

This pattern is now complete, as we have two database instances with identical data.
From an operations perspective, if the master database experienced a failure or needed to be upgraded, all that would need to happen is to point the application servers to the slave instance. It should be noted that, in this pattern, it is not appropriate to put these instances behind a load balancer of any kind, as we have only solved the replication and availability issue. If the instances need to handle additional load, a different pattern would need to be applied. Also, if data were written to the master and immediately retrieved, it might not yet be available on the slave instance. A final point to note is that this is a one-way synchronization from the master to the slave; data written to the slave will not propagate to the master.
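
If that failover ever has to be done by hand, a minimal sketch run on the slave might look like the following (RESET SLAVE ALL assumes MySQL 5.5.16 or later; on older servers use plain RESET SLAVE), after which the application servers would simply be repointed at the slave's address:

$ cat <<EOF | mysql -uroot -pabc1234\!
STOP SLAVE;
RESET SLAVE ALL;
EOF
# then update the application configuration to use the slave's IP in place of the master's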
Read Replica Pattern (Load Distribution through Read Replicas)

In the previous database replication pattern, we did a full 1:1 replication of data from the
master to a slave, more or less as a backup or failover policy. It might be applicable, however,
to use the slave as a read-only instance and use the master as a write or update instance. This
would allow us to easily configure multiple replications of the data.

The trick here is that this setup is not suited to traditional load-balancing algorithms. If we configure the slaves as read-only instances, then we cannot allow applications to attempt any write executions against them. To do this, we will use a piece of software on the master called MySQL Proxy with a custom proxy script. This script will inspect each statement and determine which instance to issue the command to: the master will receive write commands and the slaves will receive read commands. There are other ways to handle this, and it could be done with other proxy software such as HAProxy, or even cluster management tools such as MySQL Fabric or Galera.

First, we will configure MySQL Proxy on the master instance created in the previous pattern. In the previous pattern, however, we did not configure the slave to allow connections from the master. Since MySQL Proxy runs on the master, we need to allow the master instance to issue read-only (SELECT) statements against the slave. SSH into the slave instance:

$ mysql -uroot -pabc1234\! -e "GRANT SELECT ON *.* TO 'root'@'10.203.30.61' IDENTIFIED BY 'abc1234\!';"

Now the master can connect to the slave, so SSH into the master instance and install MySQL
proxy (provided by EPEL) and download a copy of the rw-splitting.lua script into /usr/local/bin:

$ sudo yum install -y http://dl.fedoraproject.org/pub/epel/6/x86_64/mysql-proxy-0.8.5-1.el6.x86_64.rpm >/dev/null 2>&1

$ curl -s https://raw.githubusercontent.com/drmingdrmer/mysql-proxy-xp/master/lib/rw-splitting.lua | sudo tee -a /usr/local/bin/rw-splitting.lua >/dev/null 2>&1

$ sudo sed -i.bak 's/#proxy-lua-script =.*/proxy-lua-script = \/usr\/local\/bin\/rw-splitting.lua/g' /etc/mysql-proxy.cnf

Next, we will configure MySQL proxy to know where the read-write instance lives,
or in this case 10.203.30.61:

$ sudo sed -i.bak 's/proxy-backend-addresses =.*/proxy-backend-addresses = 10.203.30.61:3306/g' /etc/mysql-proxy.cnf

Now we must configure the proxy to know where the read-only instance is, or in this
case 10.203.30.60:

$ sudo sed -i.bak 's/#proxy-read-only-backend-addresses =.*/proxy-read-only-backend-addresses = 10.203.30.60:3306/g' /etc/mysql-proxy.cnf

$ sudo service mysql-proxy start
Starting mysql-proxy: [ OK ]

Finally, we must prove that our splitting actually works. Note, however, that it might take a
few moments or even a few minutes for MySQL proxy to pick up the read-only instance and
allow it to be used as such:

$ mysql -uroot -pabc1234\! -h127.0.0.1 -P3307 mydb -e 'select @@hostname;'
+------------------+
| @@hostname       |
+------------------+
| ip-10-203-30-60  |
+------------------+

$ mysql -uroot -pabc1234\! -h127.0.0.1 -P3307 mydb -e 'insert into people (firstname, lastname) values ("asdf", "foo"); select @@hostname;'
+------------------+
| @@hostname       |
+------------------+
| ip-10-203-30-61  |
+------------------+

What we have done is optimize the replication process so that it is useful to the application layer that sits on top of the database. The only difference for the application user is that we are using port 3307 for MySQL Proxy. If we wanted this to be truly invisible to other services and systems, we would place the proxy on its own instance on port 3306 and give that information to the application layer. However, this does not resolve the issue of synchronization lag. If the application wrote data and immediately retrieved it, there is a chance that the data would not yet be available on the slave.
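
As a rough way to see that lag for yourself, issue a write followed immediately by a read through the proxy. The INSERT is routed to the master, while the SELECT that follows may be routed to the read-only slave and, under replication lag, can return a count that does not yet include the new row (table and column names follow the earlier examples):

$ mysql -uroot -pabc1234\! -h127.0.0.1 -P3307 mydb -e 'insert into people (firstname, lastname) values ("lag", "test"); select count(*) from people;'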

Software such as the previously mentioned Galera claims to reduce this issue. However,
there are other ways to configure the database to scale outwards without synchronization
lag, which will be discussed in the final sharding write pattern. This pattern is now
complete.
In-Memory DB Cache Pattern (Caching High-Frequency Data)

While it is important to create a database in which the data is highly available, there are other
optimizations possible, depending on the structure. If an application is very read-heavy but
does not write very often, it might make sense to use a read-only database with splitting, as
designed in the previous pattern. It might also serve the user or consuming system to cache
the data so that it does not have to be retrieved on every request.

Consider an application that uses pagination to display information to a user. A typical query might look like SELECT * FROM products WHERE category=23 LIMIT 50 OFFSET 1000;. At first glance, this might be acceptable, but for this particular query the database will scan and discard the first 1,000 rows before returning the next 50. Reading those 1,000 rows is a waste of time and resources, particularly on data that changes at a very quick rate, and this query will degrade over time as the table continues to grow.
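
If you want to measure that wasted work yourself, MySQL's session handler counters show how many rows the server actually touched for a query. The sketch below assumes a products table like the one in the example query exists; the exact counters that matter will depend on your schema and indexes:

$ cat <<EOF | mysql -uroot -pabc1234\! mydb
FLUSH STATUS;
SELECT * FROM products WHERE category=23 LIMIT 50 OFFSET 1000;
SHOW SESSION STATUS LIKE 'Handler_read%';
EOF
# the Handler_read counters will reflect far more rows read than the 50 returned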

A simple way to offset this cost would be to cache the results wherever possible. There is a
downstream effect, however, in that the application must handle this logic; so it is not a
transparent optimization to the dependent systems. A typical way to handle this would be to
use a fast key-value store such as Redis, for results that have already been computed. From
the application perspective, the logic would be as simple as checking the cache for the data,
and either using it or doing a database lookup if nothing was found.
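
The same check-the-cache-first flow that the PHP pages below implement can be sketched at the shell, which may make the logic easier to see at a glance. The key Young is a placeholder lastname, and the files table is the one created later in this pattern:

$ key="Young"
$ about=$(redis-cli get "$key")
$ if [ -z "$about" ]; then
# cache miss: fall back to MySQL, then populate the cache
about=$(mysql -uroot -pabc1234\! mydb -N -e "select about from files where lastname='$key';")
redis-cli set "$key" "$about" >/dev/null
fi
$ echo "$about"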

We will show a simple example for this using the same pattern as before. The first thing we
will do is create a Redis store. You may use the steps from the previous state sharing pattern
to do this.
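
If you do not have that pattern handy, a minimal local install on the same instance is enough for this example (this assumes the EPEL repository provides a redis package for the AWS Linux AMI; adjust if you run Redis elsewhere):

$ sudo yum install -y redis >/dev/null 2>&1
$ sudo service redis start
$ redis-cli ping
PONG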

Once the Redis store is configured, we will configure the MySQL database. If you are using
one of the previous patterns to create a database, you may skip this section. However, note
that there is an addition of a new column in the files table named about that will be required to
follow along. Otherwise, launch an EC2 instance from the AWS Linux AMI and SSH into it
when it is ready:

$ sudo yum install -y mysql mysql-server >/dev/null 2>&1
$ sudo service mysqld start >/dev/null 2>&1
$ mysqladmin -u root password 'abc1234!'
$ cat <<EOF | mysql -uroot -pabc1234\!
create database mydb;
use mydb;
create table files (
id INT AUTO_INCREMENT PRIMARY KEY,
firstname varchar(50),
lastname varchar(50),
about varchar(1024)
);
EOF

Now that the database is configured, we will use the handy bash code from previous snippets
to generate and insert random data into it:

$ for i in {1..10}; do
# generate random strings space separated
random_text=$(tr -dc a-z0-9 </dev/urandom | tr 0-9 ' \n' | sed 's/^ *//' | fmt | grep -E '[a-z]+\s[a-z]+' | head -n 1)
first_name=$(echo $random_text | awk '{print $1}')
last_name=$(echo $random_text | awk '{print $2}')
# generate a random 1024 character string to fill the 'about' col
about=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 1024 | head -n 1)
cat <<EOF | mysql -uroot -pabc1234\! mydb
insert into files (firstname, lastname, about) values ('$first_name', '$last_name', '$about');
EOF
done

$ mysql -uroot -pabc1234\! mydb -e 'select count(*) from files;' # verify
+----------+
| count(*) |
+----------+
|       10 |
+----------+

Now that we have some random data in the database, we will write a PHP
application to utilize our cache. The index page will query the files table for names,
and create links to a page that displays the information from the about column for
that person. We will store this about information into Redis, with the key being
their lastname. As a hash, it might resemble:

"Young" => "The Author",


"Smith" => "Some guy"

If Redis has not cached the about information, we will query the database and then store
it into Redis. First, let's set up php 5.5 on our machine:

$ sudo yum install -y php55 php55-pecl-redis php55-mysqlnd >/dev/null 2>&1


$ sudo service httpd start

[ OK ]

Now that PHP is set up, we will create our index page that queries MySQL for the list of names. Save this into the file /var/www/html/index.php:

<?php
$redis = new Redis();
$redis_connected = $redis->connect('127.0.0.1', 6379);
if (!$redis_connected) {
    die("Cannot connect to redis server.\n");
}
$mysql_conn = mysql_connect('localhost', 'root', 'abc1234!') or die('Could not connect: ' . mysql_error());
mysql_select_db('mydb') or die('Could not select database');

$query = "SELECT firstname, lastname FROM files";
$result = mysql_query($query) or die('Query failed: ' . mysql_error());
while ($line = mysql_fetch_array($result, MYSQL_ASSOC)) {
    $lastname = $line['lastname'];
    $firstname = $line['firstname'];
    echo "<a href=\"about.php?lname=$lastname\">$firstname $lastname</a><br/>";
}
?>
If you were to browse to your EC2 instance via a web browser, you would be greeted with a list of gibberish names, one link per person.

The links will not work until we create the about page. Create a new file at /var/www/html/about.php that contains:

<?php
$redis = new Redis();
$redis_connected = $redis->connect('127.0.0.1', 6379);
if (!$redis_connected) {
    // some other code to handle connection problem
    die("Cannot connect to redis server.\n");
}
$mysql_conn = mysql_connect('localhost', 'root', 'abc1234!') or die('Could not connect: ' . mysql_error());
mysql_select_db('mydb') or die('Could not select database');

$lname = htmlspecialchars($_GET["lname"]);
$about = '';
$source = '';
if ($redis->exists($lname)) {
    // cache hit: serve the about text straight from Redis
    $about = $redis->get($lname);
    $source = "Redis";
} else {
    // cache miss: read from MySQL and populate the cache for next time
    $query = "SELECT * FROM files WHERE lastname='$lname'";
    $result = mysql_query($query) or die('Query failed: ' . mysql_error());
    $about = mysql_fetch_array($result, MYSQL_ASSOC)['about'];
    $redis->set($lname, $about);
    $source = 'MySQL';
}
echo "[$source] $about";
?>

Now if you click any of the links from your EC2 instance, you will be greeted with the information from the corresponding about column as well as the source, either MySQL or Redis. The first time you click one of the names from the index page, the source will show MySQL.
However, if you refresh the about page or click one of the links multiple times, you will see the source change to Redis.
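
You can also confirm from the shell that the about text really was stored in the cache; the lastname Young below is a placeholder for one of the generated names in your table:

$ redis-cli keys '*'
$ redis-cli get Young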

This pattern can be further modified to optimize many portions of a web application in which data caching is applicable. Information that changes often or needs to be at its absolute latest state should either not be cached, or the caching logic should be very carefully architected so as not to propagate inconsistent or stale information to the consuming systems or users.
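
One simple way to architect that caching logic more carefully is to give each cached entry a time to live, so stale entries age out on their own rather than being served indefinitely. A quick sketch with the command-line client (the key and value are placeholders; the phpredis client used above exposes an equivalent setex call):

$ redis-cli setex Young 300 "cached about text"   # cache for five minutes
$ redis-cli ttl Young                             # confirm the expiry countdown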
Sharding Write Pattern (Improving Efficiency in Writing)

The read replica pattern focused on optimization by splitting queries across cloned instances, which addresses only part of a true database scalability concern. The database would still have performance concerns, as there is replication lag coupled with a fragile splitting mechanism through the proxy. The best route to reduce all of these concerns is to architect a distributed database from the very beginning.

In the sharding write pattern, we take the previous concepts a step further: instead of analyzing each query to determine which instance to execute it against, we use a cluster management tool called MySQL Fabric, which was announced by Oracle in early 2014. Fabric provides a single API to create and manage farms (data centers) of databases, or even farms of farms of databases. Using this method, we are able to create multiple database instances and, from a Fabric node, group them into many different layouts to achieve high availability.

Unlike other patterns, I will not cover the installation steps for configuring a Fabric server in
great detail, but will explain the end-goal as though we have an operational cluster. The
reason for this is that there is great overhead involved with Fabric that is not relevant to the
end-goal.

Sharding as a concept refers to horizontal partitioning of data within a database. For instance, if we had a table users that contained hundreds of thousands of rows, running searches would take a very long time to complete if all of the data were kept in a single instance. If this database were replicated to another server, we could expect the replication on the slave to never catch up, as the amount of data would likely be changing at a blazingly fast pace.

The first thing a database administrator might suggest would be to partition the database into shards, which is a very large architectural undertaking. Fabric exists to make this management a much smaller concern, as it provides a single API to create and manage these setups.
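
To give a flavour of what that single API looks like, below is a rough sketch of the mysqlfabric command-line flow for building a highly available group and range-sharding a table. All of the names (my_global, shard1, shard2, mydb.users) are placeholders, the shard mapping ID (1) is whatever create_definition returns, and exact command names and argument order vary between Fabric releases, so treat this as an outline rather than a recipe:

$ mysqlfabric group create my_global
$ mysqlfabric group add my_global 10.203.30.61:3306
$ mysqlfabric group add my_global 10.203.30.60:3306
$ mysqlfabric group promote my_global
$ mysqlfabric sharding create_definition RANGE my_global
$ mysqlfabric sharding add_table 1 mydb.users id
$ mysqlfabric sharding add_shard 1 "shard1/1, shard2/100000" --state=ENABLED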

Consider the original database example from the replication pattern: conceptually, a single master replicating to a single slave.

The first improvement we made with the read replica pattern was to increase throughput to the system as a whole by using the slave instance as a read-only instance. With Fabric, we open up a whole range of further improvements right away:

> Create the Fabric node, database, and two shards in a single availability zone

> Create a similar Fabric node, database, and two shards in different availability zones

> Create two Fabric nodes in different availability zones behind a single load balancer
A lot of issues are mitigated if the database is designed this way. From the outside of the database, we have added throughput by:

> Having horizontal nodes behind the load balancer to determine where to execute

> Replicating asynchronously to a read-only database for query optimization

> Partitioning the data into shards for distributed writing

We have also increased our high availability in several ways:

> The entry points to the database are load balanced in separate availability zones

> The data from the primary database is replicated into a cluster in a different availability zone

From a database perspective, Fabric takes care of all of the failover for the database. If the primary database experiences any unrecoverable issues, that failure will be picked up by the Fabric management instance above it. For example, if Availability Zone 1 experienced a malfunction, the Fabric node in Availability Zone 2 could be promoted and changed from read-only to read-write, and vice versa. If either of the availability zones attached to the load balancer experiences issues, those issues can be detected and handled accordingly by a health check made directly against the instance via the Fabric API.

One important thing to note if using high availability database software such as Fabric is that
the application must be aware of the infrastructure at some level. While the application should
not understand the underlying architecture, it cannot interact with Fabric in a transparent
manner. For example, MySQL has provided a connector for Fabric but it is not a drop-in
replacement for the standard MySQL connector.
The methods and calls are slightly different, and there are new actions available to a consuming system. The application gains a lot of power through the API, since it can ask the Fabric node for database instances and layout information, so it should be constructed carefully to take advantage of this.
