Protocol buffers and gRPC
Prerequisites:
- You should be familiar with Rust
- async
Communication between different parts of an ecosystem is a key feature of large, non-monolithic or even distributed projects. Historically, REST over HTTP or SOAP were among the most common solutions. Nowadays, another option is GraphQL or, in the case of embedded devices, MQTT.
At Braiins, we opted for gRPC, because it has several benefits:
- great tooling and language support (even though the Rust implementation is considered "community" and not official)
- it uses Protocol buffers to specify message types and is a binary protocol
- no need to worry about the underlying wire protocol
- simple to use, with good performance and low latency
- native support for bi-directional streaming and encryption
- focus on forward and backward compatibility
- layered design
Protocol buffers
Protocol buffers are the de/serialization format used by gRPC.
There are two major versions used out in the wild, version 2 and version 3.
The latter has a simplified syntax, adds useful new features and supports
more languages, so it is recommended that you use it.
In fact, gRPC service APIs themselves are specified in Protobufs' .proto files,
and the corresponding code is generated by a plugin to the protoc compiler.
Field numbers are important in protocol buffers and are explicitly specified for each field:
syntax = "proto3";

message SearchRequest {
  string query = 1;
  int32 page_number = 2;
  int32 result_per_page = 3;
}
The first line is required to enable version 3 syntax; otherwise the older version is still the default.
A few rules apply to field numbers:
- each field number can only be specified once within a message
- field numbers between 1 and 15 take only one byte to encode, so they should be used up first
- field numbers between 19000 and 19999 are reserved by the implementation, so you cannot use them
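Because field numbers identify fields on the wire, the number of a removed field should never be reused with a different meaning. Proto3 provides a reserved keyword for this; here is a minimal sketch (the message and field names are made up for illustration, this keyword is not covered by the example above):

syntax = "proto3";

message UserProfile {
  // frequently set fields get the cheap one-byte numbers 1-15
  string name = 1;
  string email = 2;
  // a rarely set field can live above 15
  string backup_contact = 16;
  // numbers (and names) of removed fields are reserved so they
  // are never reused with a different meaning
  reserved 3, 4;
  reserved "nickname";
}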
List-like behavior is achieved by using repeated fields:
syntax = "proto3";

message TheBoys {
  repeated string names = 1;
}
For other types and the rest of the syntax, you can check out this cheat-sheet: https://gist.github.com/shankarshastri/c1b4d920188da78a0dbc9fc707e82996
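To give a taste of what the cheat-sheet covers, here is a minimal sketch combining a few of the other commonly used constructs (enums, nested messages and maps); the names are made up for illustration:

syntax = "proto3";

enum Priority {
  // proto3 enums must have a zero value, which is also the default
  PRIORITY_UNSPECIFIED = 0;
  PRIORITY_LOW = 1;
  PRIORITY_HIGH = 2;
}

message Task {
  string title = 1;
  Priority priority = 2;

  // nested message type
  message Assignee {
    string name = 1;
    string email = 2;
  }
  Assignee assignee = 3;

  // map from label name to label value
  map<string, string> labels = 4;
}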
gRPC service specifications
You might have noticed the following snippet at the end of the protobuf cheat-sheet gist linked above:
service SearchService {
rpc Search (SearchRequest) returns (SearchResponse);
}
This is the service definition used by gRPC. It defines a single API call
called Search, which takes a SearchRequest parameter and returns a SearchResponse.
Further information can be found here: https://www.grpc.io/docs/what-is-grpc/core-concepts/
Streams
Streams are an important concept in gRPC. Since bi-directional streams are supported, you can effortlessly do the following:
- have an RPC call take a stream of parameters, producing a unary response
- have an RPC call take a unary parameter, producing a stream response
- take a stream, return a stream
Streams are declared by using the stream keyword:
service SearchService {
  rpc Search (stream SearchRequest) returns (stream SearchResponse);
}
This declaration takes a stream and returns a stream.
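For completeness, here is a minimal sketch showing all four combinations from the list above in one service (the RPC names other than Search are made up for illustration):

service SearchService {
  // unary: one request, one response
  rpc Search (SearchRequest) returns (SearchResponse);
  // server streaming: one request, a stream of responses
  rpc Subscribe (SearchRequest) returns (stream SearchResponse);
  // client streaming: a stream of requests, one response
  rpc BulkSearch (stream SearchRequest) returns (SearchResponse);
  // bi-directional streaming
  rpc LiveSearch (stream SearchRequest) returns (stream SearchResponse);
}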
gRPC in Rust
In Rust, there are two main options for gRPC:
- tonic, a pure-Rust implementation built around hyper and tokio
- grpc-rs (grpcio), bindings over the gRPC C core library
At Braiins, we prefer tonic, as it is a part of the hyper ecosystem
and integrates well with it and tokio, and because linking to OpenSSL is a nightmare
of incompatibilities further down the line.
Building protobuf files
The main library used in the Rust ecosystem for Protocol buffers is prost. Tonic depends on it and uses it.
Protobuf files need to be built ahead of time, so you can include the generated Rust files and can actually implement the API.
For this, you will need to use a Cargo build script. These are small Rust programs
contained in the build.rs file in the crate root. Their dependencies are specified
in the [build-dependencies] section. A build script runs at most once per build,
and you can instruct Cargo to re-run it only when specific files change:
// Example custom build script.
fn main() {
    // Tell Cargo that if the given file changes, to rerun this build script.
    println!("cargo:rerun-if-changed=src/hello.c");
    // Use the `cc` crate to build a C file and statically link it.
    cc::Build::new()
        .file("src/hello.c")
        .compile("hello");
}
See further information here: https://doc.rust-lang.org/cargo/reference/build-scripts.html
To build the gRPC files, we need the tonic-build crate.
Specify your Cargo.toml as follows:
[dependencies]
tonic = "<tonic-version>"
prost = "<prost-version>"
[build-dependencies]
tonic-build = "<tonic-version>"
Then you can use build.rs to build your protobuf files:
fn main() -> Result<(), Box<dyn std::error::Error>> {
    tonic_build::compile_protos("proto/service.proto")?;
    Ok(())
}
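If you need more control over code generation, tonic-build also exposes a builder-style API. A minimal sketch follows; the exact set of methods depends on the tonic-build version you pin, so treat this as an illustration rather than a fixed recipe:

fn main() -> Result<(), Box<dyn std::error::Error>> {
    tonic_build::configure()
        // generate only the parts you actually need
        .build_server(true)
        .build_client(false)
        // first argument: proto files, second: include paths
        .compile(&["proto/service.proto"], &["proto"])?;
    Ok(())
}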
You can then include it in your Rust crate:
pub mod service {
    // name of the grpc package
    tonic::include_proto!("service");
}
Implementing the server-side
In most languages, the gRPC framework generates something like a
stub method for each API call. In Rust, it generates an async_trait
for each service, which you need to implement.
Whether the struct you implement it on contains inner state or is a unit struct is your prerogative:
use tonic::{transport::Server, Request, Response, Status};

pub mod service {
    // name of the grpc package
    tonic::include_proto!("service");
}

use service::search_service_server::{SearchService, SearchServiceServer};
use service::{SearchRequest, SearchResponse};

#[derive(Default)]
pub struct MySearch;

#[tonic::async_trait]
impl SearchService for MySearch {
    async fn search(
        &self,
        request: Request<SearchRequest>,
    ) -> Result<Response<SearchResponse>, Status> {
        unimplemented!("this is where I would put my search implementation, if I had one!!!")
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let addr = "[::1]:50051".parse().unwrap();
    let search = MySearch::default();

    Server::builder()
        .add_service(SearchServiceServer::new(search))
        .serve(addr)
        .await?;

    Ok(())
}
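To see what an actual implementation might look like, here is a minimal sketch replacing the unimplemented! body above. It assumes SearchResponse has a repeated string results field; that field is an assumption of this sketch, not something defined earlier:

#[tonic::async_trait]
impl SearchService for MySearch {
    async fn search(
        &self,
        request: Request<SearchRequest>,
    ) -> Result<Response<SearchResponse>, Status> {
        // Consume the request wrapper to get at the actual message.
        let req = request.into_inner();
        if req.query.is_empty() {
            // Reject invalid input with a gRPC status instead of panicking.
            return Err(Status::invalid_argument("query must not be empty"));
        }
        // `results` as a `repeated string` field is an assumption of this sketch.
        let reply = SearchResponse {
            results: vec![format!("dummy result for '{}'", req.query)],
        };
        Ok(Response::new(reply))
    }
}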
Interceptors
To facilitate features such as authentication, gRPC supports a concept called
interceptors: an interceptor can check, modify or add request metadata, and it
can cancel a request with a status.
As such, interceptors are similar to middleware, but they are much less flexible.
Interceptors are added by using the with_interceptor() method
on your generated server type:
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let addr = "[::1]:50051".parse().unwrap();
    let greeter = MyGreeter::default();

    // See examples/src/interceptor/client.rs for an example of how to create a
    // named interceptor that can be returned from functions or stored in
    // structs.
    let svc = GreeterServer::with_interceptor(greeter, intercept);

    println!("GreeterServer listening on {}", addr);

    Server::builder().add_service(svc).serve(addr).await?;

    Ok(())
}

/// This function will get called on each inbound request, if a `Status`
/// is returned, it will cancel the request and return that status to the
/// client.
fn intercept(mut req: Request<()>) -> Result<Request<()>, Status> {
    println!("Intercepting request: {:?}", req);

    // Set an extension that can be retrieved by `say_hello`
    req.extensions_mut().insert(MyExtension {
        some_piece_of_data: "foo".to_string(),
    });

    Ok(req)
}

// The extension type inserted above; in the full tonic interceptor example
// it lives alongside the service implementation.
#[derive(Clone)]
struct MyExtension {
    some_piece_of_data: String,
}
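On the other end, the handler can read the extension back out of the request. A minimal sketch follows, assuming the MyGreeter/HelloRequest/HelloReply types from tonic's hello world example that the snippet above also relies on:

#[tonic::async_trait]
impl Greeter for MyGreeter {
    async fn say_hello(
        &self,
        request: Request<HelloRequest>,
    ) -> Result<Response<HelloReply>, Status> {
        // Retrieve the extension inserted by the interceptor, if present.
        if let Some(ext) = request.extensions().get::<MyExtension>() {
            println!("extension data = {}", ext.some_piece_of_data);
        }
        let reply = HelloReply {
            message: format!("Hello {}!", request.into_inner().name),
        };
        Ok(Response::new(reply))
    }
}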
For real middleware, look into the tower
crate and its concept of layers. Layers allow you to modify and further process the
data passing through, and you can also use them to collect metrics.
Here is an example of a logger layer:
use std::fmt;
use std::task::{Context, Poll};

use tower::{Layer, Service};

pub struct LogLayer {
    target: &'static str,
}

impl<S> Layer<S> for LogLayer {
    type Service = LogService<S>;

    fn layer(&self, service: S) -> Self::Service {
        LogService {
            target: self.target,
            service,
        }
    }
}

// This service implements the Log behavior
pub struct LogService<S> {
    target: &'static str,
    service: S,
}

impl<S, Request> Service<Request> for LogService<S>
where
    S: Service<Request>,
    Request: fmt::Debug,
{
    type Response = S::Response;
    type Error = S::Error;
    type Future = S::Future;

    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        self.service.poll_ready(cx)
    }

    fn call(&mut self, request: Request) -> Self::Future {
        // Insert log statement here or other functionality
        println!("request = {:?}, target = {:?}", request, self.target);
        self.service.call(request)
    }
}
In production, consider using something like tracing
instead of println! for logging.
Layers are also a very handy way to record metrics, for example with Prometheus.
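Wiring a layer into a tonic server is done through the server builder. Here is a minimal sketch; it assumes the LogLayer above is adapted to tonic's extra requirements (for instance, deriving Clone on LogLayer and LogService, since the routed service must be cloneable), so treat it as an outline rather than a drop-in snippet:

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let addr = "[::1]:50051".parse()?;
    let search = MySearch::default();

    Server::builder()
        // Every inbound request now passes through LogService
        // before it reaches the gRPC service itself.
        .layer(LogLayer { target: "search" })
        .add_service(SearchServiceServer::new(search))
        .serve(addr)
        .await?;

    Ok(())
}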
The project: Server calculator
For this project, we will be developing a very simple server-client calculator over gRPC.
1. Start by creating the protobuf file
Define two types:
- CalcInput, which contains integers a and b
- CalcOutput, which contains a single number result and a bool error
Define a service Calculator with four calls, all of which will take CalcInput and produce CalcOutput:
- Add for addition
- Sub for subtraction
- Div for division
- Mul for multiplication
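A minimal sketch of what such a .proto file might look like (the package name and the exact numeric types are this sketch's assumptions, not requirements):

syntax = "proto3";

package calculator;

message CalcInput {
  int64 a = 1;
  int64 b = 2;
}

message CalcOutput {
  int64 result = 1;
  bool error = 2;
}

service Calculator {
  rpc Add (CalcInput) returns (CalcOutput);
  rpc Sub (CalcInput) returns (CalcOutput);
  rpc Div (CalcInput) returns (CalcOutput);
  rpc Mul (CalcInput) returns (CalcOutput);
}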
2. Server implementation
Implement the service above accordingly; the error param of the output should be true if the operation is invalid (i.e. undefined). A sketch of this appears after the checklist below.
Per the examples above, you will need to do the following:
- Create a build.rs script
- Declare the module in your Rust code (probably main.rs or something like server.rs depending on how you structure your project)
- Import server and message types from the module
- Import server tools from tonic
- Implement the calculator service trait
- Run the server in main()
That's as much as you need for the server.
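As an illustration of the error flag (and of why the server must not crash), here is a minimal sketch of how the Div handler might look, assuming the .proto sketch from step 1 and tonic's generated names:

// Inside `impl Calculator for MyCalculator { ... }`; the remaining
// handlers (add, sub, mul) are analogous and omitted here.
async fn div(
    &self,
    request: Request<CalcInput>,
) -> Result<Response<CalcOutput>, Status> {
    let input = request.into_inner();
    // checked_div returns None for division by zero (and for the
    // i64::MIN / -1 overflow), so the handler never panics.
    let reply = match input.a.checked_div(input.b) {
        Some(result) => CalcOutput { result, error: false },
        None => CalcOutput { result: 0, error: true },
    };
    Ok(Response::new(reply))
}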
3. Client implementation
For the client, you just need to use the API. Here is a short example of how using a simple hello world client might look:
use hello_world::greeter_client::GreeterClient;
use hello_world::HelloRequest;

pub mod hello_world {
    tonic::include_proto!("helloworld");
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut client = GreeterClient::connect("http://[::1]:50051").await?;

    let request = tonic::Request::new(HelloRequest {
        name: "Tonic".into(),
    });

    let response = client.say_hello(request).await?;

    println!("RESPONSE={:?}", response);

    Ok(())
}
4. Putting it all together
You will need to implement a reasonable API to allow using the client as a CLI tool. Whether you opt for parsing numbers from stdin and making your project work like a REPL, or for a one-shot tool that takes the numbers and the operation as command-line arguments, is up to you; a small sketch of the one-shot variant follows below.
Remember that your server should not crash.
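If you go for the one-shot variant, argument handling can stay very small. A minimal sketch using only the standard library follows; the client type and method names assume the .proto sketch from step 1:

use std::env;

use calculator::calculator_client::CalculatorClient;
use calculator::CalcInput;

pub mod calculator {
    tonic::include_proto!("calculator");
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Expected usage: client <a> <add|sub|div|mul> <b>, e.g. `client 6 add 7`
    let args: Vec<String> = env::args().skip(1).collect();
    if args.len() != 3 {
        eprintln!("usage: client <a> <add|sub|div|mul> <b>");
        std::process::exit(1);
    }
    let a: i64 = args[0].parse()?;
    let b: i64 = args[2].parse()?;

    let mut client = CalculatorClient::connect("http://[::1]:50051").await?;
    let request = tonic::Request::new(CalcInput { a, b });

    let response = match args[1].as_str() {
        "add" => client.add(request).await?,
        "sub" => client.sub(request).await?,
        "div" => client.div(request).await?,
        "mul" => client.mul(request).await?,
        other => {
            eprintln!("unknown operation: {}", other);
            std::process::exit(1);
        }
    };
    println!("RESPONSE={:?}", response.into_inner());

    Ok(())
}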
5. Final product
In the end, you should be left with a well-prepared project that has the following:
- documented code explaining your reasoning where it isn't self-evident
- optionally tests
- and an example or two where applicable
- clean git history that does not contain fix-ups, merge commits or malformed/misformatted commits
Your Rust code should be formatted by rustfmt / cargo fmt and should produce no
warnings when built. It should also work on stable Rust and follow the Braiins Standard.