Benchmark Proto3 File
Client Throughput
For our first set of benchmarks we'll have a number of concurrently connected clients, and each client will make RPCs to the fRPC or gRPC server using a randomly generated, fixed-size message, then wait for a response before repeating. In each of our benchmark runs we're increasing the number of concurrently connected clients and measuring the average throughput of each client to see how well fRPC and gRPC scale. We're also running a number of separate benchmarks, each with an increasing message size.

32-Byte Messages

512-Byte Messages

128-KB Messages

1-MB Messages

Server Throughput
Now let's look at how fRPC servers scale as we increase the number of connected clients. For this benchmark, each client keeps 10 concurrent RPCs in flight at all times in order to saturate the underlying TCP connections and the accompanying RPC server.
Multi-Threaded Performance
By default, fRPC creates a new goroutine for each incoming RPC. This is very similar to the approach used by gRPC, and it's a good choice for high-throughput applications where handling an RPC can be a blocking operation (like querying a remote database). However, fRPC can also be configured to create a single goroutine that handles all the RPCs from each incoming connection. This is a good choice for applications that require very low latency and whose handlers are not blocking operations (such as metrics streaming). The benchmarks above were all run with the single-goroutine option enabled, because our BenchmarkService implementation is a simple Echo service that does little to no processing and does not block.
It's also important, however, to benchmark an application where the RPCs are blocking operations - and for those we'll go back to fRPC's default behavior of creating a new goroutine to handle each incoming RPC. Our blocking operation for the following benchmark is a simple time.Sleep call that sleeps for exactly 50 microseconds.
