
Build an Elixir Redis API that's 100x faster than HTTP

Kyle Hanson · 7 min read

Need a fast server and client? HTTP too slow? Try the Redis protocol for lightning-fast, low-overhead API calls. It's easy to implement, and nearly every language has mature Redis clients that can connect.

This project was inspired by Tino, the Redis/MsgPack framework for Python.

Building a server based on the Redis protocol from scratch can sound intimidating, but if you know what you are doing it is relatively straightforward. A huge shortcut is using Redix to parse the binary stream quickly and efficiently.

In this article we will implement a Redis echo server and explain how to extend it to handle your own custom commands. In the end, using the Redis protocol instead of HTTP results in a performance boost of over 100x.

This blog post assumes you are familiar with Elixir and its Application structure. If you want to learn more about TCP connections and supervision, read the official Elixir article.

Advantages

Before we get started, let's discuss some of the advantages. Building a server on RESP (the Redis serialization protocol) cuts out a lot of the overhead associated with HTTP. In addition to a leaner protocol, nearly every language has a high-performance Redis client that supports pipelining, which batches commands as you send them for even greater efficiency. Most Redis clients also support connection pooling for working at high concurrency.

With these features built in, you don't have to do much work to talk to your server in an extremely high-performance fashion.
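For example, Redix (the Elixir client we will use later) exposes pipelining directly. The sketch below assumes a stock Redis instance running on the default port 6379:

{:ok, conn} = Redix.start_link("redis://localhost:6379")

# Both commands travel in a single round trip; replies come back in order.
{:ok, ["PONG", "PONG"]} = Redix.pipeline(conn, [["PING"], ["PING"]])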

Reading the TCP connection

To get started building our server, we will need to accept a TCP connection. We do this by looping over :gen_tcp.accept and spawning a task.

defmodule MyRedisServer.Redis do
  require Logger

  def accept(port) do
    {:ok, socket} = :gen_tcp.listen(port, [:binary, active: false, reuseaddr: true])
    Logger.info("Accepting connections on port #{port}")
    loop_acceptor(socket)
  end

  defp loop_acceptor(socket) do
    {:ok, client} = :gen_tcp.accept(socket)

    {:ok, pid} =
      Task.start(fn ->
        serve(client, %{continuation: nil})
      end)

    :ok = :gen_tcp.controlling_process(client, pid)

    loop_acceptor(socket)
  end
end

Now we are ready to read packets from the connection. Elixir's Redis client, Redix, includes a parser we can use.

defmodule MyRedisServer.Redis do
  ...

  defp serve(socket, %{continuation: nil}) do
    case :gen_tcp.recv(socket, 0) do
      {:ok, data} -> handle_parse(socket, Redix.Protocol.parse(data))
      {:error, :closed} -> :ok
    end
  end

  defp serve(socket, %{continuation: fun}) do
    case :gen_tcp.recv(socket, 0) do
      {:ok, data} -> handle_parse(socket, fun.(data))
      {:error, :closed} -> :ok
    end
  end
end

Handling the parse result is straightforward: either an entire message was parsed, in which case we handle it and respond, or a partial message was received and we need to wait for more data.

defmodule MyRedisServer.Redis do
  ...

  defp handle_parse(socket, {:continuation, fun}) do
    serve(socket, %{continuation: fun})
  end

  defp handle_parse(socket, {:ok, req, left_over}) do
    resp = handle(req)

    :gen_tcp.send(socket, Redix.Protocol.pack(resp))

    case left_over do
      "" -> serve(socket, %{continuation: nil})
      _ -> handle_parse(socket, Redix.Protocol.parse(left_over))
    end
  end

  def handle(data) do
    data
  end
end

Complete example

Finally, we are ready to put all the pieces together into a nice little echo server.

defmodule MyRedisServer.Redis do
  require Logger

  def accept(port) do
    {:ok, socket} = :gen_tcp.listen(port, [:binary, active: false, reuseaddr: true])
    Logger.info("Accepting connections on port #{port}")
    loop_acceptor(socket)
  end

  defp loop_acceptor(socket) do
    {:ok, client} = :gen_tcp.accept(socket)

    {:ok, pid} =
      Task.start(fn ->
        serve(client, %{continuation: nil})
      end)

    :ok = :gen_tcp.controlling_process(client, pid)

    loop_acceptor(socket)
  end

  defp serve(socket, %{continuation: nil}) do
    case :gen_tcp.recv(socket, 0) do
      {:ok, data} -> handle_parse(socket, Redix.Protocol.parse(data))
      {:error, :closed} -> :ok
    end
  end

  defp serve(socket, %{continuation: fun}) do
    case :gen_tcp.recv(socket, 0) do
      {:ok, data} -> handle_parse(socket, fun.(data))
      {:error, :closed} -> :ok
    end
  end

  defp handle_parse(socket, {:continuation, fun}) do
    serve(socket, %{continuation: fun})
  end

  defp handle_parse(socket, {:ok, req, left_over}) do
    resp = handle(req)

    :gen_tcp.send(socket, Redix.Protocol.pack(resp))

    case left_over do
      "" -> serve(socket, %{continuation: nil})
      _ -> handle_parse(socket, Redix.Protocol.parse(left_over))
    end
  end

  def handle(data) do
    data
  end
end

Run this server in your Application's supervision tree:

defmodule MyRedisServer.Application do
  use Application

  ...

  def start(_type, _args) do
    children = [
      ...,
      Supervisor.child_spec({Task, fn -> MyRedisServer.Redis.accept(3211) end}, restart: :permanent)
    ]

    ...

    Supervisor.start_link(children, opts)
  end
end

Connecting from a client

Start your mix project and you should be able to connect with a Redis client on port 3211; the server should echo back whatever command you send it.

> {:ok, conn} = Redix.start_link("redis://localhost:3211")
> Redix.command(conn, ["COOL_COMMAND", "123"])
{:ok, ["COOL_COMMAND", "123"]}

Adding commands to your new Redis server is easy with pattern matching:

defmodule MyRedisServer.Redis do
  ...

  def handle(["PUT", key, val]) do
    Cachex.put(:my_cachex, key, val)
    ["OK"]
  end

  def handle(["GET", key]) do
    # Cachex.get/2 returns {:ok, value}, so unwrap it before packing
    {:ok, val} = Cachex.get(:my_cachex, key)
    [val]
  end

  def handle(["ECHO", msg]) do
    # wrap in a list so it packs as a RESP array like the other replies
    [msg]
  end

  def handle(_data) do
    %Redix.Error{message: "UNKNOWN_COMMAND"}
  end
end
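From the client's point of view these look like any other Redis commands. A quick session against the server above (assuming a Cachex cache named :my_cachex is running in your supervision tree) might look like this:

> {:ok, conn} = Redix.start_link("redis://localhost:3211")
> Redix.command(conn, ["PUT", "greeting", "hello"])
{:ok, ["OK"]}
> Redix.command(conn, ["GET", "greeting"])
{:ok, ["hello"]}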

MsgPack

MsgPack is essentially a faster, more compact version of JSON. Use it to serialize complex structures into binary data to pass back and forth over your API. In Elixir, the Msgpax library handles the packing and unpacking.

defmodule MyRedisServer.Redis do
  ...

  def handle([command, payload]) do
    case handle_command(command, Msgpax.unpack!(payload)) do
      {:error, e} -> %Redix.Error{message: "ERROR #{e}"}
      # pack to a plain binary (iodata: false) so it drops straight into the RESP reply
      value -> [Msgpax.pack!(value, iodata: false)]
    end
  end

  def handle(_) do
    %Redix.Error{message: "INVALID_FORMAT"}
  end

  defp handle_command("PUT", [key, val]) do
    Cachex.put(:my_cachex, key, val)
    ["OK"]
  end

  defp handle_command("GET", key) do
    # Cachex.get/2 returns {:ok, value}, so unwrap it before packing
    {:ok, val} = Cachex.get(:my_cachex, key)
    val
  end

  defp handle_command("ECHO", msg) do
    msg
  end

  defp handle_command(_command, _data) do
    {:error, "INVALID_COMMAND"}
  end
end
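On the client side, pack the payload before sending and unpack the reply. A rough sketch against the server above (again assuming :my_cachex is running; Msgpax's iodata: false option returns a plain binary):

{:ok, conn} = Redix.start_link("redis://localhost:3211")

# PUT: the arguments travel as a single MsgPack-encoded binary
{:ok, [_ok]} = Redix.command(conn, ["PUT", Msgpax.pack!(["greeting", "hello"], iodata: false)])

# GET: the reply is a MsgPack-encoded binary that we unpack locally
{:ok, [packed]} = Redix.command(conn, ["GET", Msgpax.pack!("greeting", iodata: false)])
Msgpax.unpack!(packed)
#=> "hello"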

Benchmark

For this benchmark we will compare a Phoenix HTTP endpoint to our Redis server.

Our HTTP Phoenix Controllers:

# GET -> Text
def bench(conn, %{"payload" => payload, "times" => times}) when is_binary(times) do
  text(conn, String.duplicate(payload, String.to_integer(times)))
end

# POST -> JSON
def bench(conn, %{"payload" => payload, "times" => times}) do
  json(conn, %{"data" => String.duplicate(payload, times)})
end

and our Redis server:

  def handle(["BENCH", payload, number]) do
[String.duplicate(payload, String.to_integer(number))]
end

We will use Finch, which describes itself as a "performance focused" HTTP client, for the HTTP side.

For the full benchmark see the source.

We will remotely call our functions using the Finch HTTP pool, a single Redix connection, or a pool of Redix connections. We will also test pipelining vs calling each command individually for Redix. We will call our remote function 1000 times concurrently and ask it to duplicate the string "12345&?\"678,\n90" 100 times and respond.
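The shape of the benchmark, sketched with Benchee below, is simplified and uses hypothetical names (a MyFinch pool, a Phoenix app on port 4000); the linked source is the authoritative version:

payload = "12345&?\"678,\n90"

# Fire 1_000 concurrent calls of the given function
concurrently = fn fun ->
  1..1_000
  |> Task.async_stream(fn _ -> fun.() end, max_concurrency: 100)
  |> Stream.run()
end

{:ok, conn} = Redix.start_link("redis://localhost:3211")

Benchee.run(%{
  "redix" => fn ->
    concurrently.(fn -> Redix.command(conn, ["BENCH", payload, "100"]) end)
  end,
  "finch_get" => fn ->
    concurrently.(fn ->
      url = "http://localhost:4000/bench?payload=#{URI.encode_www_form(payload)}&times=100"
      Finch.build(:get, url) |> Finch.request(MyFinch)
    end)
  end
})

The results: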

Name                        ips        average   deviation         median         99th %
redix_pool                70.44       14.20 ms     ±36.07%       13.30 ms       50.60 ms
run_redix_pipeline        30.56       32.73 ms     ±65.74%       47.26 ms       91.99 ms
redix_pool_pipelined      21.55       46.40 ms      ±3.87%       47.59 ms       48.12 ms
redix                     13.84       72.28 ms      ±9.91%       72.09 ms       80.31 ms
finch_get                  0.55     1814.88 ms      ±2.44%     1814.88 ms     1846.24 ms
finch_post                 0.54     1859.71 ms      ±0.70%     1859.71 ms     1868.97 ms

The results show that serving the Redis protocol is well over 100x faster than relying on HTTP. By default Phoenix sends extra headers for the content type and other information, and there is additional overhead from URL encoding and JSON encoding and decoding.

Overall, using the Redis protocol instead of HTTP results in orders of magnitude higher throughput.

Conclusion

We wrote a high-performance server based on the Redis protocol in around 10 minutes. This server can easily handle thousands of connections and has minimal overhead. One downside is that load balancing multi-node deployments becomes more of a challenge when using a protocol other than HTTP.

If you have one client or thousands of clients that need to communicate with a server as fast as possible, consider using the Redis protocol instead of HTTP.