Redis vs. NCache
Please note that this comparison is not against the general Open Source Redis v4.0.1 (download that comparison
separately) but against Redis v3.2.7, which is used by Microsoft Azure Redis Cache.
Disclaimer
1 Executive Summary
2 Qualitative Differences Explained
2.1 .NET Platform Support
2.2 Performance and Scalability
2.3 Cache Elasticity (High Availability)
2.4 Cache Topologies
2.5 WAN Replication
2.6 ASP.NET & ASP.NET Core Support
2.7 Object Caching Features
2.8 Managing Data Relationships in Cache
2.9 Cache Synchronization with Database
2.10 Event Driven Data Sharing
2.11 SQL-Like Cache Search
2.12 Data Grouping
2.13 Read-through, Write-through & Cache Loader
2.14 Big Data Processing
2.15 Third Party Integrations
2.16 Security & Encryption
2.17 Cache Size Management (Eviction Policies)
2.18 Cache Administration
3 Conclusion
We did not conduct any scientific benchmarks for the performance and scalability of Azure Redis Cache, so our
assessment of it may differ from yours. NCache benchmarks are already published on our website
(http://www.alachisoft.com) for you to see.
Additionally, we have made a conscious effort to be objective, honest, and accurate in our assessments in this
document. But, any information about Azure Redis Cache could be unintentionally incorrect or missing, and we do
not take any responsibility for it.
Instead, we strongly recommend that you do your own comparison of NCache with Azure Redis Cache and arrive
at your own conclusions. We also encourage you to run performance benchmarks of both Azure Redis Cache and
NCache in your environment for the same purpose.
Feature                                     Azure Redis Cache     NCache

Cache Topologies
- Local Cache                               Partial support       Supported
- Client Cache (Near Cache)                 Not Supported         Supported

WAN Replication
- Active – Passive                          Not Supported         Supported
- Active – Active                           Not Supported         Supported
- Conflict Resolution                       Not Supported         Supported
- De-duplication                            Not Supported         Supported
- Data Security                             Not Supported         Supported

Data Grouping
- Groups/Subgroups                          Not Supported         Supported
- Tags                                      Not Supported         Supported
- Named Tags                                Not Supported         Supported

Cache Administration
- Admin Tool (GUI)                          Not Supported         Supported
- Monitoring Tool (GUI)                     Not Supported         Supported
- PerfMon Counters                          Not Supported         Supported
- Admin Tools (PowerShell)                  Not Supported         Supported
- Admin Tools (Command Line)                Supported             Supported
- Administration and Monitoring (API)       Supported             Supported
For .NET applications, it is important that your distributed cache is also native to .NET so your entire application
stack is .NET. Otherwise, it unnecessarily complicates things for your development, testing, and deployment.
This section describes how Redis and NCache support the .NET platform.
Redis: .NET client is not officially supported. A third-party .NET client exists.
NCache: .NET client is officially supported.

Redis: .NET Core client is not officially supported. A third-party .NET Core client exists.
NCache: .NET Core client is officially supported.

Redis: NuGet packages are not officially supported. Third-party NuGet packages exist.
NCache: Full set of NuGet packages provided.

Redis: Redis actually doesn't even support server-side code, let alone support it in .NET.
NCache: Develop all server-side code like Read-through, Write-through, Write-behind, Cache Loader, Custom
Dependency, and more in .NET.
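For illustration, here is a minimal sketch of what connecting from a .NET application looks like on each side. The
Redis half uses the widely used third-party StackExchange.Redis client. The NCache half assumes the
Alachisoft.NCache.Client package, its CacheManager/ICache API shape, and a sample cache named "demoCache",
so treat those specifics as assumptions rather than exact guidance.

    using System;
    using StackExchange.Redis;            // third-party .NET client for Redis
    using Alachisoft.NCache.Client;       // NCache .NET client (package/namespace assumed)

    class ConnectDemo
    {
        static void Main()
        {
            // Redis: connect through the community-maintained StackExchange.Redis client.
            // For Azure Redis Cache you would pass the cache endpoint and access key here.
            ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("localhost:6379");
            IDatabase db = redis.GetDatabase();
            db.StringSet("greeting", "hello");
            Console.WriteLine(db.StringGet("greeting"));

            // NCache: connect through the vendor-supplied .NET client.
            // "demoCache" is a sample cache name, not something from this document.
            ICache cache = CacheManager.GetCache("demoCache");
            cache.Insert("greeting", "hello");
            Console.WriteLine(cache.Get<string>("greeting"));
        }
    }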
Performance is defined as how fast cache operations are performed at a normal transaction load. Scalability is
defined as how fast the same cache operations are performed under higher and higher transaction loads. NCache
is extremely fast and scalable.
Redis: But, please note that Redis converts everything into a string for storage. And, this can be very costly for
images, binary data, or large objects.
NCache: NCache is extremely fast. Please see its performance benchmarks.

Redis: Bulk operations are not distributed to all the nodes. Instead, they are all sent to one shard, and all the keys
must be present on that node for the operation to succeed (see the code sketch below).
NCache: Bulk Get, Add, Insert, and Remove are provided. This covers most of the major cache operations and
provides a great performance boost.

Redis: You can bind multiple IPs but cannot define which one to use for cluster replication. Multiple bindings are
only for accessing Redis through different networks.
NCache: You can assign two NICs to a cache server. One can be used for clients to talk to the cache server, and
the second for the cache servers in the cluster to talk to each other.
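As referenced above, the sketch below fetches several keys in one call from each store. The Redis call uses
StackExchange.Redis; on a clustered cache it only succeeds if every key hashes to the same shard (for example by
sharing a hash tag). The NCache GetBulk signature and the "demoCache" name are assumptions based on its bulk
API.

    using System;
    using System.Collections.Generic;
    using StackExchange.Redis;
    using Alachisoft.NCache.Client;   // NCache client namespace assumed

    class BulkDemo
    {
        static void Main()
        {
            string[] keys = { "product:1", "product:2", "product:3" };

            // Redis: a multi-key GET (MGET). On Redis Cluster this only works if every key
            // maps to the same shard, e.g. by sharing a {hash-tag} in the key name.
            IDatabase db = ConnectionMultiplexer.Connect("localhost:6379").GetDatabase();
            RedisValue[] redisValues = db.StringGet(Array.ConvertAll(keys, k => (RedisKey)k));

            // NCache: one bulk call; the client distributes the keys to their partitions
            // internally (GetBulk signature assumed from NCache's bulk API).
            ICache cache = CacheManager.GetCache("demoCache");
            IDictionary<string, string> values = cache.GetBulk<string>(keys);

            foreach (KeyValuePair<string, string> kv in values)
                Console.WriteLine($"{kv.Key} = {kv.Value}");
        }
    }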
Cache elasticity means how flexible the cache is at runtime. Are you able to perform the following operations at
runtime without stopping the cache or your application?
1. Add or remove any cache servers at runtime without stopping the cache.
2. Make cache configuration changes without stopping the cache.
3. Add or remove web/application servers without stopping the cache.
4. Have failover support in case any server goes down (meaning cache clients are able to continue working
seamlessly).
This is an area where Redis is relatively weak. In fact, it doesn’t provide support for some of these things. But,
NCache is known for its strength in this area.
NCache provides a self-healing dynamic cache clustering that makes NCache highly elastic. Read more about it at
Self-Healing Dynamic Clustering.
Redis: Redis clusters with shards are rigid. If a Master without a Slave fails, the cluster becomes unusable because
the slots from this shard are not picked up by other Masters. If a Master that has a Slave goes down, the Slave
does take over, but the cluster doesn't automatically rebalance (re-shard) itself. And if this Slave also goes down
before rebalancing (re-sharding), you not only lose data but the cluster again enters an unusable state because
other Masters don't pick up the slots from this shard. All rebalancing must be done manually and also requires
you to specify which “slots” or buckets to rebalance. This also increases the complexity of managing Redis.

NCache: NCache is highly dynamic and simple to manage. A shard in NCache is a partition, and a partition can
also have a replica, always on a separate server. Similar to Redis, if a cache server goes down, its replica
automatically takes over. But unlike Redis, the NCache replica automatically rebalances (re-shards) and merges
itself into the other partitions, and all the partitions ensure they have corresponding replicas. This way, NCache is
not vulnerable to data loss except during the rebalancing (state transfer), which is quite fast.
Redis: In Redis, if a shard with no replicas goes down, the entire cluster is halted and blocks any client requests. If
a Redis shard node with a replica goes down, the replica automatically takes over. But if this replica later also goes
down (before any manual rebalancing), client requests are again blocked.

NCache: NCache provides full connection failover support between cache clients and servers and also within the
cache cluster. In case of a cache server failure, NCache clients continue working with the other servers in the
cluster without any interruption. The cluster auto-manages itself by rebalancing its data and recreating replicas
where needed.
Redis: Although Redis provides the CONFIG SET command, you have to run it separately against every server,
whereas in NCache you can apply the change to the entire cluster at once (see the sketch below).

NCache: NCache cluster configuration is not hard coded; when you add or drop servers at runtime, all other
servers in the cluster are made aware of it. NCache clients also learn about all the servers and a variety of other
configuration at runtime from the cache cluster.
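As referenced above, applying a runtime setting across a Redis cluster means repeating CONFIG SET against each
node. The sketch below does that with StackExchange.Redis (note the allowAdmin=true option required for admin
commands); the endpoints shown are placeholders.

    using System;
    using System.Net;
    using StackExchange.Redis;

    class ConfigDemo
    {
        static void Main()
        {
            // allowAdmin=true is required for server/admin commands such as CONFIG SET.
            ConnectionMultiplexer redis = ConnectionMultiplexer.Connect(
                "10.0.0.1:6379,10.0.0.2:6379,10.0.0.3:6379,allowAdmin=true");

            // The same change has to be issued once per server in the cluster.
            foreach (EndPoint endpoint in redis.GetEndPoints())
            {
                IServer server = redis.GetServer(endpoint);
                server.ConfigSet("maxmemory-policy", "allkeys-lru");
                Console.WriteLine($"Applied on {endpoint}");
            }
        }
    }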
Redis: Cluster events are not supported. Redis only allows you to manually query the cluster health.

NCache: NCache provides events about changes in the cluster like MemberJoined, MemberLeft, CacheStopped, etc.
Cache Topologies determine data storage, data replication, and client connection strategy. There are different
topologies for different types of use cases. So, it is best to have a cache that offers a rich variety of cache
topologies.
Read more details on NCache caching topologies at in-memory distributed cache Topologies.
Redis: No failover support in case a Master goes down. The whole cluster becomes unusable.
NCache: Full failover support if any server goes down (although there is data loss).

Redis: Partitioned Cache means data partitioning without replication. It's good when you don't want the memory
and transaction cost of replication, because you can always load data from the database if some of it is lost due
to a cache server going down. Its equivalent in Redis is a cluster of Masters without any Slaves. But, if a Redis
Master without a Slave goes down, the entire cluster becomes unusable. This is because slots from this shard are
not picked up by other Masters.

NCache: Partitioned Cache is a very powerful topology. You can partition without replication to speed up the
cache and also use less memory, because you can always reload some data if it is lost from the cache. In
Partitioned Cache, the entire cache is partitioned and each cache server gets one partition. All partitions are
created or deleted and their buckets reassigned automatically at runtime when you add/remove nodes. Data
rebalancing is also provided even if no partition is added or removed, but when any partition gets overwhelmed
with too much data.

Redis: Redis allows more than one Replica for each Master, whereas NCache only supports one Replica for each
Partition. Redis supports async and sync replication like NCache.

NCache: A Replica is kept on another cache server. This provides reliability against data loss if a node goes down.
Both async and sync replication are supported. Just like partitions, replicas are also created dynamically.

Redis: Redis provides RDB and AOF persistence. RDB is a snapshot of the cache at a point in time. And, AOF logs
every transaction.

NCache: NCache has the equivalent of RDB persistence through Dump/Reload tools that take a snapshot of the
cache and persist it to disk or reload the cache from a previous dump.
WAN replication is an important feature for many customers whose applications are deployed in multiple data
centers, either for disaster recovery purposes or for load balancing of regional traffic.
The idea behind WAN replication is that it must not slow down the cache in each geographical location due to the
high WAN latency involved in propagating the data replication. NCache provides a Bridge Topology to handle all
of this.
Similarly, ASP.NET applications need three things from a good in-memory distributed cache: ASP.NET Session
State storage, ASP.NET View State caching, and ASP.NET Output Cache.
ASP.NET Session State store must allow session replication in order to ensure that no session is lost even if a cache
server goes down. And, it must be fast and scalable so it is a better option than InProc, StateServer, and SqlServer
options that Microsoft provides out of the box. NCache has implemented a powerful ASP.NET Session State
provider. Read more about it at NCache Product Features.
ASP.NET View State caching allows you to cache heavy View State on the web server so it is not sent as “hidden
field” to the user browser for a round-trip. Instead, only a “key” is sent. This makes the payload much lighter,
speeds up ASP.NET response time, and also reduces bandwidth pressure and cost for you. NCache provides a
feature-rich View State cache. Read more about it at NCache Product Features.
In addition, NCache's View State caching provides:
- Group-level policy
- Associate pages to groups
- Link View State to sessions
- Max View State count per user
- More

Third is ASP.NET Output Cache. Since .NET 4.0, Microsoft has changed the ASP.NET Output Cache architecture and
now allows a third-party in-memory distributed cache to be plugged in. ASP.NET Output Cache saves the rendered
output of an ASP.NET page so the page does not have to be re-executed for subsequent requests.
These are the most basic operations without which an in-memory distributed cache becomes almost unusable.
These by no means cover all the operations a good Cache should have.
Redis: Redis supports the EXPIRE command through which you can do Absolute Expiration. But, to implement
Sliding Expiration, you must call EXPIRE every time you access the cache item. This is more work on your part and
is also more costly, as an extra cache call is being made.

NCache: Absolute and Sliding expirations are provided. Absolute expiration is good for data that is coming from
the database and must be expired after a known time because it might become stale. Sliding expiration means
expiring after a period of inactivity and is good for session and other temporary data that must be removed once
it is no longer needed.
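As a quick illustration, the sketch below gives an item a 20-minute lifetime both ways. On the Redis side you reset
the TTL yourself on every read (KeyExpire) to approximate sliding expiration; on the NCache side you declare a
sliding expiration once on the item. The NCache CacheItem/Expiration shapes, namespaces, and the "demoCache"
name are assumptions.

    using System;
    using StackExchange.Redis;
    using Alachisoft.NCache.Client;            // NCache client namespace assumed
    using Alachisoft.NCache.Runtime.Caching;   // Expiration/ExpirationType (namespace assumed)

    class ExpirationDemo
    {
        static void Main()
        {
            // Redis: set a TTL at write time (absolute-style expiration)...
            IDatabase db = ConnectionMultiplexer.Connect("localhost:6379").GetDatabase();
            db.StringSet("session:42", "data", expiry: TimeSpan.FromMinutes(20));

            // ...and to mimic sliding expiration, reset the TTL on every access yourself.
            var value = db.StringGet("session:42");
            db.KeyExpire("session:42", TimeSpan.FromMinutes(20));   // extra round trip per read

            // NCache: declare sliding expiration once on the item (API shape assumed).
            ICache cache = CacheManager.GetCache("demoCache");
            var item = new CacheItem("data")
            {
                Expiration = new Expiration(ExpirationType.Sliding, TimeSpan.FromMinutes(20))
            };
            cache.Insert("session:42", item);
        }
    }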
Redis: Redis does not provide any support for transforming C# objects into Java or vice versa. You have to do this
yourself.
NCache: .NET to Java and Java to .NET object conversion is supported without going through a JSON/XML
transformation.
Redis: You have to manage item versioning yourself.
NCache: Configurable using a user-friendly GUI. This ensures that only one client can update an item, and all
future updates will fail unless cache clients first fetch the latest version and then update it.
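Since Redis leaves optimistic concurrency to the application, here is one way you might hand-roll it with
StackExchange.Redis: keep a version counter next to the value and make the write conditional on it. This is an
illustrative pattern only, not an official recipe, and it assumes both keys already exist.

    using System;
    using StackExchange.Redis;

    class OptimisticUpdateDemo
    {
        static void Main()
        {
            IDatabase db = ConnectionMultiplexer.Connect("localhost:6379").GetDatabase();
            string key = "customer:1";

            // Read the current value and its version counter.
            RedisValue current = db.StringGet(key);
            RedisValue version = db.StringGet(key + ":ver");

            // Attempt an update only if nobody has bumped the version in the meantime.
            ITransaction tran = db.CreateTransaction();
            tran.AddCondition(Condition.StringEqual(key + ":ver", version));
            _ = tran.StringSetAsync(key, "updated value");
            _ = tran.StringIncrementAsync(key + ":ver");
            bool committed = tran.Execute();   // false means another client won the race

            Console.WriteLine(committed ? "Update applied" : "Stale version, re-read and retry");
        }
    }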
Since most data being cached comes from relational databases, it has relationships among various data items. So,
a good cache should allow you to specify these relationships in the cache and then maintain data integrity. It
should allow you to handle one-to-one, one-to-many, and many-to-many data relationships in the cache
automatically without burdening your application with this task.
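For example, a cached order can be made dependent on its cached customer so that removing or updating the
customer also removes the order. The sketch below assumes NCache's key-dependency feature; the
KeyDependency type, the namespaces, and the "demoCache" name should be read as assumptions.

    using Alachisoft.NCache.Client;                 // NCache client namespace assumed
    using Alachisoft.NCache.Runtime.Dependencies;   // KeyDependency namespace assumed

    class RelationshipDemo
    {
        static void Main()
        {
            ICache cache = CacheManager.GetCache("demoCache");

            // Cache the parent item first.
            cache.Insert("Customer:ALFKI", "Alfreds Futterkiste");

            // Cache the child with a dependency on the parent key. One-to-many works the
            // same way: each order depends on its customer. If Customer:ALFKI is removed
            // or updated, the cache removes Order:1001 automatically.
            var orderItem = new CacheItem("Order total: 120.50")
            {
                Dependency = new KeyDependency("Customer:ALFKI")
            };
            cache.Insert("Order:1001", orderItem);
        }
    }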
Database synchronization is a very important feature for any good In-Memory distributed cache. Since most data
being cached is coming from a relational database, there are always situations where other applications or users
might change the data and cause the cached data to become stale.
To handle these situations, a good In-Memory distributed cache should allow you to specify dependencies
between cached items and data in the database. Then, whenever that data in the database changes, the cache
becomes aware of it and either invalidates its data or reloads a new copy.
Additionally, a good distributed cache should allow you to synchronize the cache with non-relational data sources
since real life is full of those situations as well.
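As one concrete example, NCache's SQL dependency ties a cached item to a SQL Server query notification so the
item is invalidated when the underlying row changes. The class name, constructor, connection string, and cache
name below are assumptions sketching that feature.

    using Alachisoft.NCache.Client;                 // NCache client namespace assumed
    using Alachisoft.NCache.Runtime.Dependencies;   // SqlCacheDependency namespace assumed

    class DbSyncDemo
    {
        static void Main()
        {
            ICache cache = CacheManager.GetCache("demoCache");
            string connectionString =
                "Data Source=.;Initial Catalog=Northwind;Integrated Security=true";

            // When the matching row changes in the database, the cache invalidates
            // (or, if configured, reloads) this item.
            var item = new CacheItem("Chai, 18.00")
            {
                Dependency = new SqlCacheDependency(
                    connectionString,
                    "SELECT ProductID, ProductName, UnitPrice FROM dbo.Products WHERE ProductID = 1")
            };
            cache.Insert("Product:1", item);
        }
    }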
Event Driven Data Sharing has become an important use for in-memory distributed caches. More and more
applications today need to share data with other applications at runtime in an asynchronous fashion.
Previously, relational databases were used to share data among multiple applications but that requires constant
polling by the applications wanting to consume data. Then, message queues became popular because of their
asynchronous features and their persistence of events. And although message queues are great, they lack the
performance and scalability that today's applications require.
NCache provides very powerful features to facilitate Event Driven Data Sharing. They are discussed below and
compared with Redis.
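To make the pattern concrete, here is a minimal publish/subscribe sketch using StackExchange.Redis channels; the
NCache features discussed in this section serve the same purpose on the NCache side. The channel name and
message are just examples.

    using System;
    using StackExchange.Redis;

    class PubSubDemo
    {
        static void Main()
        {
            ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("localhost:6379");
            ISubscriber sub = redis.GetSubscriber();

            // Consumer: react to order events as they arrive instead of polling a database.
            sub.Subscribe("orders", (channel, message) =>
                Console.WriteLine($"Order event received: {message}"));

            // Producer: another application publishes the event asynchronously.
            sub.Publish("orders", "order:1001:created");

            Console.ReadLine();   // keep the process alive long enough to receive the message
        }
    }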
An In-Memory distributed cache is frequently used to cache objects that contain data coming from a relational
database. This data may be individual objects or collections that are the result of some database query.
Either way, applications often want to fetch a subset of this data and if they have the ability to search the
distributed cache with a SQL-like query language and specify object attributes as part of the criteria, it makes the
In-Memory distributed cache much more useful for them.
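For illustration only, the sketch below shows the kind of object query this enables. The QueryCommand and
SearchService API shapes, the fully qualified type name in the query, and the "demoCache" name are all
assumptions; the point is simply that you search by object attributes rather than by key.

    using System;
    using Alachisoft.NCache.Client;            // NCache client namespace assumed
    using Alachisoft.NCache.Runtime.Caching;   // QueryCommand/ICacheReader (namespaces assumed)

    class SearchDemo
    {
        static void Main()
        {
            ICache cache = CacheManager.GetCache("demoCache");

            // Search cached objects by an indexed attribute instead of by key.
            var query = new QueryCommand(
                "SELECT * FROM MyApp.Data.Product WHERE UnitPrice > ?");
            query.Parameters.Add("UnitPrice", 100);

            // Reader-style access to the matching items (reader access pattern assumed).
            ICacheReader reader = cache.SearchService.ExecuteReader(query);
            while (reader.Read())
            {
                Console.WriteLine(reader.GetValue<object>(1));
            }
        }
    }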
We’ve already explained how to search an in-memory distributed cache through SQL and LINQ. Now let’s discuss
Groups, Tags, and Named Tags. These features allow you to keep track of collections of data easily and even
modify them.
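As a sketch of how tags are used, the snippet below tags items at insert time and later retrieves or removes them
as a group. The Tag type and the GetByTag/RemoveByTag calls follow NCache's tagging feature but should be
treated as assumed API shapes, as should the namespaces and cache name.

    using System;
    using System.Collections.Generic;
    using Alachisoft.NCache.Client;            // NCache client namespace assumed
    using Alachisoft.NCache.Runtime.Caching;   // Tag type (namespace assumed)

    class TagDemo
    {
        static void Main()
        {
            ICache cache = CacheManager.GetCache("demoCache");

            // Tag the item as it is cached so it can be tracked as part of a collection.
            var item = new CacheItem("40-inch LED TV")
            {
                Tags = new[] { new Tag("Electronics"), new Tag("OnSale") }
            };
            cache.Insert("Product:2001", item);

            // Later, work with the whole collection by tag instead of tracking keys yourself.
            IDictionary<string, object> electronics =
                cache.SearchService.GetByTag<object>(new Tag("Electronics"));
            cache.SearchService.RemoveByTag(new Tag("OnSale"));

            Console.WriteLine($"Electronics items in cache: {electronics.Count}");
        }
    }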
Many people use an in-memory distributed cache as a “cache on the side” where they fetch data directly from the
database and put it in the cache. Another approach is “cache through” where your application just asks the cache
for the data. And, if the data isn’t there, the in-memory distributed cache gets it from your data source.
The same thing goes for write-through. Write-behind is nothing more than a write-through where the cache is
updated immediately and the control returned to the client application. And, then the database or data source is
updated asynchronously so the application doesn’t have to wait for it.
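The difference is easiest to see side by side. In the sketch below, the cache-aside method owns the miss-handling
logic, while the read-through method simply asks the cache and relies on a provider you have registered on the
cache servers (that server-side registration is not shown). The ReadThruOptions overload, the Product type, and
LoadProductFromDatabase are assumptions used only for illustration.

    using Alachisoft.NCache.Client;            // NCache client namespace assumed
    using Alachisoft.NCache.Runtime.Caching;   // ReadThruOptions/ReadMode (namespace assumed)

    public class Product { public string Name { get; set; } }

    public static class ReadPatternDemo
    {
        // Cache-aside ("cache on the side"): the application handles the miss itself.
        public static Product GetProductCacheAside(ICache cache, string key)
        {
            Product product = cache.Get<Product>(key);
            if (product == null)
            {
                product = LoadProductFromDatabase(key);   // hypothetical data-access helper
                cache.Insert(key, product);
            }
            return product;
        }

        // Read-through ("cache through"): the application just asks the cache; on a miss
        // the cache cluster calls the registered server-side provider to load the item.
        public static Product GetProductReadThrough(ICache cache, string key)
        {
            return cache.Get<Product>(key, new ReadThruOptions(ReadMode.ReadThru));
        }

        private static Product LoadProductFromDatabase(string key)
        {
            return new Product { Name = "Loaded from database" };   // placeholder
        }
    }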
Redis: Redis persists everything in a file. Upon restart, Redis reloads the state of the cache. Some data loss is to be
expected since the replication is asynchronous. But, you cannot write a custom cache loader.

NCache: NCache lets you implement a Cache Loader and register it with the cache cluster. NCache then calls it to
prepopulate the cache upon startup. The Cache Loader is your code that reads data from your data
source/database.
Analysis and processing of large amounts of data becomes faster if done in-memory. A distributed cache is a
scalable in-memory data store. And, if it can support the popular Map/Reduce style of processing, then you're
able to speed up your work greatly.
Redis: No Entry Processor-like capability is provided, even though a very basic Lua scripting engine allows scripts
to be executed on the servers.
NCache: NCache fully supports Entry Processor execution on cache nodes in parallel.
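For reference, the kind of server-side logic Redis does allow looks like the Lua snippet below (run here through
StackExchange.Redis). It executes on the single node that owns the key, which is quite different from running
compiled .NET code in parallel across the cluster; the key and script are just examples.

    using System;
    using StackExchange.Redis;

    class LuaDemo
    {
        static void Main()
        {
            IDatabase db = ConnectionMultiplexer.Connect("localhost:6379").GetDatabase();
            db.StringSet("counter:1", 5);

            // A basic Lua script: read the key on the server and multiply it by an argument.
            RedisResult result = db.ScriptEvaluate(
                "local v = redis.call('GET', KEYS[1]) return tonumber(v) * tonumber(ARGV[1])",
                new RedisKey[] { "counter:1" },
                new RedisValue[] { 10 });

            Console.WriteLine((int)result);   // prints 50
        }
    }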
Memcached is an open-source in-memory distributed caching solution which helps speed up web applications by
taking pressure off the database. Memcached is used by many of the internet’s biggest websites and has been
merged with other technologies. NCache implements the Memcached protocol to enable users with existing
Memcached implementations to easily migrate to NCache. No code change is required for this.
NHibernate is a very powerful and popular object-relational mapping engine. And, fortunately, it also has a
Second Level Cache provider architecture that allows you to plug-in a third-party cache without making any code
changes to the NHibernate application. NCache has implemented this NHibernate Second Level Cache provider.
See NHibernate Second Level Cache for details.
Entity Framework from Microsoft is also a very popular object-relational mapping engine. And, although Entity
Framework doesn't have a nice Second Level Cache provider architecture like NHibernate, NCache has nonetheless
implemented a Second Level Cache for Entity Framework.
Many applications deal with sensitive data or are mission critical and cannot allow the cache to be open to
everybody. Therefore, a good In-Memory distributed cache provides restricted access based on authentication
and authorization to classify people in different groups of users. And, it should also allow data to be encrypted
inside the client application process before it travels to the distributed cache.
This means that an in-memory distributed cache should allow you to specify how much memory it should
consume and once it reaches that size, the cache should evict some of the cached items. However, please keep in
mind that if you’re caching something that does not exist in the database (e.g. ASP.NET Sessions) then you need
to do proper capacity planning to ensure that these cached items (sessions in this case) are never evicted from the
cache. Instead, they should be “expired” at appropriate time based on their usage.
Cache administration is a very important aspect of any distributed cache. A good cache should provide the
following:
1. GUI based and command line tools for cache administration including cache creation and
editing/updates.
2. GUI based tools to monitor cache activities at runtime.
3. Cache statistics based on PerfMon (since PerfMon is the standard on Windows)
3 Conclusion
As you can see, we have outlined in detail all of NCache's features and all the corresponding Redis features, or
lack thereof. We hope this document helps you get a better understanding of Redis versus NCache.
In summary, Redis is a popular free cache mainly on Unix/Linux platform with clients running on either Unix or
Windows. And, the Windows port of the Redis server done by the Microsoft OpenTech group is not very stable and
therefore not as reliable. In fact, Microsoft itself is using the Linux version of Redis in Azure. So, if you want to use
Redis, you’ll probably want to run it on Unix and then access it from your Windows app servers.
But, the true cost of ownership for an in-memory distributed cache is not just the price of it. It is the cost to your
business. The most important thing for many customers is that they cannot afford unscheduled downtime
(especially during peak hours). And, this is where an elastic cache like NCache truly shines.
Additionally, all those caching features that NCache provides are intended to give you total control over the cache
and allow you to cache all types of data and not just simple data. This is something Redis cannot do.
Please read more about NCache and also feel free to download a fully working 60-day trial of NCache from:
- NCache details.
- Download NCache.