Memcached vs Redis

· 636 words · 3 minute read

I’m a free-market advocate and as such believe optionality is great. The fact that consumers can pick between multiple competing products helps foster innovation and weed out what becomes obsolete.

I hate comparing software, but lately I’ve been seeing a lot of folks on the internet claiming that Memcached is a better choice than Redis for caching. I honestly can’t say it isn’t. Memcached is a fantastic and incredibly reliable piece of software that is widely used by some of the largest companies in the world (such as Facebook, Twitter, Wikipedia, YouTube, etc.). However, I disagree with the general thesis that gets thrown around to claim that Memcached is the better caching solution. That thesis can be summed up as follows:

  1. Memcached is purely designed for caching and caching alone
  2. Memcached does no disk I/O whatsoever
  3. Memcached scales well (can handle 100K requests per second without any issues)

In this post I will attempt to explain why these claims, while true, do not show the full picture. I’ll try to show why Redis is also a great caching solution without taking anything away from Memcached.

Memcached Is Purely Designed For Caching 🔗

Memcached is in fact solely designed to be a general-purpose distributed memory-caching system. However, I can also state that Redis is a general-purpose distributed memory-caching system and no one would be able to refute such a claim. Since both systems can serve exactly this purpose, this statement cannot really be used as an argument for Memcached being the superior caching solution, so let’s simply move on.

Memcached Does No Disk I/O Whatsoever 🔗

While Redis allows for persisting data on disk, this is not a Redis deficiency; it is added value. Redis can be configured to be a purely in-memory database (which is usually how it is used in practice). Disk I/O can be disabled entirely, while still leaving you the option to persist the dataset only on shutdown, for example with the SHUTDOWN SAVE command. The bottom line is that when it comes to real-life production systems, optionality is always good. Redis gives you optionality: the right, but not the obligation, to use persistence. Yet again, this statement cannot be used to recommend Memcached over Redis.
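To make this concrete, here is a minimal sketch of what a cache-only redis.conf might look like, assuming you want no disk I/O at all and Memcached-style eviction; the memory limit is just a placeholder:

```
# Disable RDB snapshots entirely (no background saves to disk).
save ""

# Disable the append-only file as well, so Redis never touches disk.
appendonly no

# Cap memory and evict like a cache would; 2gb is only an example value.
maxmemory 2gb
maxmemory-policy allkeys-lru
```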

Memcached Scales Well 🔗

This one is interesting. When this claim is made, it usually alludes to the fact that Memcached is multi-threaded while Redis is, for the most part, a single-threaded server (at least at the time of this writing). However, Redis can handle an incredible amount of requests. It’s not unfathomable to see Redis handle nearly half a million operations per second per thread when using pipelining (sketched below), or about 100K operations per second per thread without it. Nonetheless, Memcached’s multi-threading is still an advantage, since it makes things simpler to use and manage. This is not to say that the same could not be achieved with Redis: we would have to run multiple Redis processes as masters, disable disk I/O, and leave sharding to the clients. It’s harder to administer, but in the end we would get the same results.
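As a rough illustration of why pipelining matters, here is a small Python sketch using the redis-py client; the host, port, and key names are placeholders, and the exact throughput you get will depend on your hardware and network:

```python
import redis

# Assumes a local Redis instance; host and port are placeholders.
r = redis.Redis(host="localhost", port=6379)

# Without pipelining: one network round trip per command.
for i in range(10_000):
    r.set(f"key:{i}", i)

# With pipelining: commands are buffered client-side and sent in one batch,
# cutting round trips and typically multiplying throughput.
pipe = r.pipeline(transaction=False)
for i in range(10_000):
    pipe.set(f"key:{i}", i)
pipe.execute()
```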

Final Thoughts 🔗

The bottom line is that both solutions are great. Memcached is a proven, performant, simple, easy-to-manage, and highly scalable solution. Redis provides more optionality and can execute a broader set of operations that enable more interesting use cases, without being too hard to manage either. Just look at what your application really needs and choose whichever suits those needs in the simplest fashion. Ideally you would have an abstraction layer that allows you to swap either caching system without having to make any code changes to your application, other than the programmatic configuration of the solution you would like to use.
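One way such an abstraction layer could look is sketched below in Python, using the redis-py and pymemcache clients. The Cache interface, class names, hosts, and ports are all hypothetical; the point is only that the application codes against the interface and the backend becomes a configuration choice:

```python
from typing import Optional, Protocol

import redis
from pymemcache.client.base import Client as MemcacheClient


class Cache(Protocol):
    """Hypothetical minimal cache interface the application codes against."""

    def get(self, key: str) -> Optional[bytes]: ...
    def set(self, key: str, value: bytes, ttl: int) -> None: ...


class RedisCache:
    def __init__(self, host: str = "localhost", port: int = 6379) -> None:
        self._client = redis.Redis(host=host, port=port)

    def get(self, key: str) -> Optional[bytes]:
        return self._client.get(key)

    def set(self, key: str, value: bytes, ttl: int) -> None:
        # "ex" sets the expiration in seconds.
        self._client.set(key, value, ex=ttl)


class MemcachedCache:
    def __init__(self, host: str = "localhost", port: int = 11211) -> None:
        self._client = MemcacheClient((host, port))

    def get(self, key: str) -> Optional[bytes]:
        return self._client.get(key)

    def set(self, key: str, value: bytes, ttl: int) -> None:
        # "expire" sets the expiration in seconds.
        self._client.set(key, value, expire=ttl)


# Swapping backends is a configuration decision, not a code change.
cache: Cache = RedisCache()  # or MemcachedCache()
cache.set("greeting", b"hello", ttl=60)
print(cache.get("greeting"))
```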