How to test a non-REST backend. Part three, gRPC
So, we have reached the third, “hard” part of the series. Today we will talk about gRPC.
Contents
What is gRPC?
RPC stands for remote procedure call – a class of technologies that allows programs to call functions or procedures of other programs as if they were in the same address space. The g in the name refers to Google’s implementation of these technologies.
Let’s analyze it with an example.
Suppose you are a programmer working on a monolithic repo. You have one program. The project is open in your IDE, and you work in it. The repo contains a certain class (say, in Kotlin) with a method that returns data about a user.
fun getUserInfo(id: String): UserInfo {
    return /* some data about the user */
}
Now, somewhere else in the project, in another class, you want to call this method to work with the user data. What do you do then? Import the class and simply make a normal method call.
val userData = getUserInfo("userID")
// continue working with the data
It will be executed, you will receive all the necessary data. Everything is great and everything works.
Next comes a new challenge – microservice architecture, which proposes to separate everything and move everything we do with users into a separate service.
Now our conventional services are separated by network interaction, as shown in the picture.
There is a service A that wants data about a user. There is a service B with a public API through which this data can be retrieved.
What should service A do?
It needs to make a GET request to the endpoint provided by service B, get back the HTTP response (with JSON inside), extract the data, and work with it.
What does gRPC offer?
It offers this: let service B create a protofile with the .proto extension, in which it describes the methods and functions that everyone else can call.
The result is an analogue of an API: the service confirms that it can handle calls to these functions with certain parameters and return certain data. At the same time, it actually implements all of these functions itself.
And it spins up an RPC service on its side.
This service handles all incoming requests.
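To make this concrete, here is a hypothetical .proto sketch for such a service B. The service, rpc, and message names below are invented for illustration – they are not taken from the demo project, whose real contract lives in GrpcExampleService.proto.

```proto
syntax = "proto3";

// Hypothetical contract for "service B" from the example above.
service UserService {
  // The remote procedure that service A will be able to call.
  rpc GetUserInfo (UserRequest) returns (UserInfo);
}

message UserRequest {
  string id = 1;
}

message UserInfo {
  string login = 1;
  string email = 2;
  string city = 3;
}
```

The numbers after each field are field numbers – they, not the names, identify fields on the wire.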
Okay, service B has taken care of its side. Now what does service A need to do to get user data?
We take the models described in the .proto file of service B. For example, we add the package containing all the described models as a Gradle dependency.
Then we implement a wrapper over them with our own logic that calls the methods of service B.
Now we write code relying on calls to our wrapper, which lives right here in the project – that is already more convenient. When we call our methods, they will in turn call the methods of service B: they go to its RPC service and say they want to work with it, and service B responds that it is ready to help.
It is worth noting that gRPC uses Protocol Buffers for all requests – a special binary data serialization protocol developed by Google. That is, service A sends its request to the service in binary form. The service figures out which method is being called, executes it, and returns the result. The response also passes through the RPC service and is also binary. Service A receives this answer and continues working as if nothing had happened.
If you dig a little deeper under the hood, you get the following scheme.

- On the client machine, we call the method – say, we need information about a user – as if we were actually inside service B.

- In doing so, we are really calling the previously implemented wrapper.

- Then special libraries kick in, which translate our call into a binary request. Serialization happens, the request is handed to the RPC service, and everything travels to server B via Protocol Buffers.

- The server, in turn, deserializes this binary sequence into a procedure name, parameters, and values. It runs the procedure, gets the data, and then performs the whole transformation in reverse.

- It packs the data into a binary sequence and sends it back over the network.

- On the client side, we receive it, unpack the data, return it to our program, and carry on working.
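The steps above can be sketched as a toy round trip. This is only a model: real gRPC serializes with Protocol Buffers over HTTP/2, while here json stands in for the binary protocol just to keep the sketch short, and the procedure name and fields are invented for illustration.

```python
import json

# "Service B": the procedures it exposes, keyed by name.
PROCEDURES = {
    "GetClientByLogin": lambda login: {"login": login, "city": "City", "email": "check"},
}

def server_handle(raw: bytes) -> bytes:
    """Deserialize the request, dispatch to the named procedure, serialize the result."""
    request = json.loads(raw)
    result = PROCEDURES[request["method"]](*request["args"])
    return json.dumps(result).encode()

def client_call(method: str, *args):
    """The wrapper on service A's side: serialize, 'send', deserialize the reply."""
    raw_request = json.dumps({"method": method, "args": args}).encode()
    raw_response = server_handle(raw_request)  # in reality this crosses the network
    return json.loads(raw_response)

print(client_call("GetClientByLogin", "Habra"))
```

The calling code never sees the bytes in between – exactly the "as if it were local" illusion RPC is built for.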
That’s the algorithm, in a nutshell.
I’ll be honest: when I first studied this technology, my first thought was – what the hell is this? Why make everything so complicated when it could be simpler? There is REST over HTTP: it is clear, there are endpoints, normal requests and responses that you can see and read, you can look at the data you send and receive and understand it. Why all this binary stuff?
There are actually three very important pieces here.
First – and this is essentially the main killer feature – speed. Google itself claimed a 3–10x speedup, and the enthusiasts who benchmarked it all put the figure at 7x: data serialization with this binary format is 7 times faster than with JSON.
Therefore, if your services are, as they say, gRPC-oriented – built for very fast interactions with each other, with response times as short as possible and very small, short messages – you get a huge gain in throughput and resource efficiency. This is a very important case.
Second. Do you remember the situation with REST over HTTP when the guys made an endpoint and forgot the documentation? And you still have to test it somehow, so the documentation ends up being the developer himself.
In gRPC this cannot happen – here you first write the contract in the .proto file. That is, you write the documentation first and the logic second. In my opinion, this is also a very important feature.
Third. There is very strong backward compatibility thanks to the loose binding to field names. The binary protocol doesn’t know what strings are; it knows what numbers are. The way data is recorded and organized in the protofile eliminates the situation where a developer renames a variable and the client misses the change.
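This point is easy to see on the wire. Below is a minimal sketch of how Protocol Buffers encodes a length-delimited (string) field: a tag byte built from the field number, then the length, then the UTF-8 bytes. The field name never appears in the output, which is why renaming a field in the .proto does not break old clients; the helper function here is my own, not part of any protobuf library.

```python
def encode_string_field(field_number: int, value: str) -> bytes:
    """Encode a string field the way protobuf's wire format does.

    Assumes field_number < 16 and value shorter than 128 bytes,
    so both the tag and the length fit in a single byte each.
    """
    tag = (field_number << 3) | 2  # wire type 2 = length-delimited
    payload = value.encode("utf-8")
    return bytes([tag, len(payload)]) + payload

# Whether the .proto calls field 1 'login' or renames it to 'user_login',
# the bytes are identical -- only the field number (1) goes on the wire:
assert encode_string_field(1, "Habra") == b"\x0a\x05Habra"
```

Readers match fields by number, so a renamed field deserializes exactly as before.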
In practice
Let’s get down to practice. To begin with, I download the project from here, then build and run it according to the documentation.
Next, we go to Postman -> NEW and select gRPC.
Before we start working, we need to explain to Postman which service we will be dealing with. How is that done? Right – we need to feed it the service’s .proto file.
Go to the Service definition tab.
The “Import .proto file” button is begging to be pressed.
Click it and select the file.
The protofile we need is in the folder with the downloaded project, in the proto subfolder. The name is GrpcExampleService.proto.
We select it and press “Next”.
Postman has seen our file and understood that we will be working with some service, so it offers to import it. I don’t mind, so we press “Import as API”.
After a successful import, we will see that our empty gRPC Request is using our new API.
What changed after that?
You can click on the method selector.
Oho! All the methods the service can handle are already waiting for us here.
It’s cool, but can we check that all the methods here are real?
We can: go to the APIs section in Postman’s left-hand menu, select our New API – Definition, and open the .proto file.
The curious can study the whole thing; I will show only the first 10 lines, which tell us which methods are open to the outside. Look at the service block.
Indeed, three methods – Postman did not deceive us, but I couldn’t resist checking 🙂.
OK, now back to our query.
Let’s run the first request to add a new customer.
To do this, first enter “localhost:50051” in the URL field, since the service is up on this port.
And in the method selector, select “AddClient” and go to the tab “Message“.
We enter the Message as a JSON object: add curly brackets and type the first character “c” inside, and Postman immediately offers to complete clientsinfo – there are hints here too, thank you.
We can look in the protofile to see what needs to be passed inside clientsinfo – login, email, city.
Okay, let’s create such an object.
{
"clientsinfo": [
{
"city": "City",
"email": "check",
"login": "Habra"
}
]
}
We will execute the request and see Response: OK.
Let’s try to get information from this login. To do this, change the method called in the selector to GetClientByLogin:
And in the body of the message we indicate:
{
"login": "Habra"
}
We execute the request and get all the information we entered earlier.
That’s all the magic, easy, right?
Response codes
It is worth noting that gRPC has its own response codes. Google is good with documentation, so all the information about response codes and their meanings is right in the gRPC repository.
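For a quick reference, here are a few of the standard gRPC status codes as a small lookup table; the numeric values are defined by the protocol itself (the full list lives in the gRPC repository), but the dict below is just my own convenience sketch, not a library API.

```python
# A subset of the standard gRPC status codes (number -> name).
GRPC_STATUS = {
    0: "OK",                 # the call succeeded
    3: "INVALID_ARGUMENT",   # the client sent a bad request
    5: "NOT_FOUND",          # the requested entity does not exist
    7: "PERMISSION_DENIED",  # authenticated, but not allowed
    12: "UNIMPLEMENTED",     # the method is not implemented on the server
    14: "UNAVAILABLE",       # the service is currently unreachable
    16: "UNAUTHENTICATED",   # no valid credentials were supplied
}

print(GRPC_STATUS[5])  # → NOT_FOUND
```

Note that these are not HTTP status codes – a gRPC NOT_FOUND (5) travels in a trailer of an HTTP/2 response, not as a 404.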
What should be tested in gRPC?
Right away, I want to make a small digression. Since this is a remote procedure call, the logic of the procedures themselves is well covered by unit tests, and there is no need to test it through external calls. Unfortunately, projects vary in practice, and there may be some where unit tests are replaced by external procedure calls that check the logic. In that case, you will have to add checks of the called procedures’ logic to the list below.
Okay, if the logic is covered by unit tests, what else should I pay attention to as a tester?
- First of all, let’s not forget that gRPC is a remote procedure call. So we have to check that our procedure, with its particular parameters, can actually be called remotely – something that works locally may break once the call goes over the network.

- Next, check whether the procedure is accessible from outside. Authorization restrictions haven’t gone anywhere.

- It is also good to test the format of the returned data. The examples above returned JSON, which can, for example, be validated against a schema.

- And, of course, the response codes – check them as your business needs dictate.
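As a tiny illustration of the data-format check, here is a hand-rolled sketch that validates the shape of a GetClientByLogin response. The field names come from the example above; a real project would more likely use a schema library such as jsonschema, and this helper is my own invention.

```python
# Expected fields and their types for the client info object.
EXPECTED_FIELDS = {"login": str, "email": str, "city": str}

def validate_client_info(data: dict) -> list:
    """Return a list of problems; an empty list means the response matches the scheme."""
    problems = []
    for field, expected_type in EXPECTED_FIELDS.items():
        if field not in data:
            problems.append(f"missing field: {field}")
        elif not isinstance(data[field], expected_type):
            problems.append(f"wrong type for {field}: {type(data[field]).__name__}")
    return problems

# A complete response passes; an incomplete one reports what is missing.
assert validate_client_info({"login": "Habra", "email": "check", "city": "City"}) == []
assert validate_client_info({"login": "Habra"}) == ["missing field: email", "missing field: city"]
```

The same idea scales up: keep the expected shape next to the test, and any contract drift shows up as a named problem rather than a mysterious failure.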
Let’s summarize what is needed for testing a gRPC service:

- Import the .proto file

- Check the availability of remote procedure calls

- Check the availability of methods closed behind authorization

- Check the returned data and errors (if necessary, since this may be covered by unit tests)

- Check e2e scenarios

- Check the response codes
With this, I conclude my series of three articles on three completely different non-REST backends.
Although the series ran to three posts, we have only skimmed the tips of these three icebergs. Each technology hides many more features. But I hope my posts will be useful and will help you approach these things more easily – to test something, try it, get a feel for it, and simply get acquainted. Only later, having gained experience, can you start raising the quality of your testing.
Good luck to everyone!