Fast Golang API Performance with an In-Memory Key-Value Cache

Aditya Rama
4 min read · Jan 9, 2021
Image credits to Guillaume Jaillet (https://unsplash.com/photos/Nl-GCtizDHg)

Some of you might have already heard about the many caching options out there; Redis and Memcached are probably the most common names when we’re talking about caching. However, in this write-up I would like to show you a comparison of a simple API’s performance when its data is served from a database (Postgres), Redis, and go-cache.

Background Please…

Redis, as stated in its official documentation (https://redis.io/):

Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. Redis provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes, and streams. Redis has built-in replication, Lua scripting, LRU eviction, transactions, and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.

Meanwhile, here is how go-cache (https://github.com/patrickmn/go-cache), which we’re going to use in this experiment, describes itself:

go-cache is an in-memory key:value store/cache similar to memcached that is suitable for applications running on a single machine. Its major advantage is that, being essentially a thread-safe map[string]interface{} with expiration times, it doesn't need to serialize or transmit its contents over the network.

Any object can be stored, for a given duration or forever, and the cache can be safely used by multiple goroutines.

The differences between the two lie in several things, some of which are:

  • Redis runs separately from your application (not necessarily on a different instance/pod, but in a different process or even service), while go-cache is “basically a map of string to interface” within your app, so it runs inside your Golang application
  • Multiple Redis instances can form a cluster, while go-cache lives inside your app and is designed to run on a single machine (it cannot be clustered).

There are many more things that distinguish them from each other, but in this write-up, since we’re just going to cache one simple object for our API test, we’ll treat both of them equally as our caching engine.

Scenario

We will create a Golang program that serves an API returning a “post”; the data source for this app is based on this JSON Placeholder API:

https://jsonplaceholder.typicode.com/posts/1

The Golang code flow will be like this:

  1. Init Database
  2. Init Redis Client / Go-Cache
  3. Serve API with method “GET” for path “/post”
  4. The API will try to get the data from the cache first (either Redis or go-cache). If it exists in the cache, return it as the response; if it doesn’t, query the DB, save the result to the cache, then write it into the response.

Technical Details

If you’re only curious about the result, you can skip this part, but if you want some details of the flow, it’s better to read this first.

The caching object within the Go Program is written as an interface that has two methods:

  • Set(key string, data interface{}, expiration time.Duration) error
  • Get(key string) ([]byte, error)

This unifies both caching engines so we can switch between the two via the interface. Inside the Set method, for both caching engines, we encode the data with JSON marshaling and save the resulting []byte into the caching engine under the given key. The Get method for each engine is pretty straightforward: it just fetches the data for the given key.

On the database side, I created a table with columns (id, user_id, title, body) containing only one row (the post mentioned at the beginning).

Some code is written bluntly for simplicity:

  1. Any error just triggers a log.Fatal
  2. The query to the DB isn’t wrapped in its own function
  3. The DB and Redis in this experiment run in local Docker, using each image’s latest tag at the time of writing.

Details of the code can be seen here

Result

Let’s see the result in the table below. The numbers are the elapsed times of the API for the three storages (DB, Redis, and App Cache, which is the Go-Cache); lower is better, and values are in ms (milliseconds).

[Table image not reproduced here.]

Summary

If you want blazingly fast, simple key-value caching for a Golang application, this caching library might be for you: in this simple experiment (results may vary with different use cases and architectures/designs), it performed about 200 times faster than the DB and 100 times faster than Redis for single-key retrieval on average. I recommend this library if you want to cache data that is read very frequently in your application, to reduce DB operations or even to be faster than fetching from Redis. However, this app cache runs within your application, so if, say, you have 4 instances/pods running your application, each of them will have its own go-cache. It is challenging to update them all at the same time (if you want or need to update some key’s value), and the library is recommended for a single machine by default, unless you design a separate caching engine on top of it yourself.
