Evolved from a traditional monolithic approach, ViriCiti’s system consists of multiple services that run in containerized Docker environments, with Kubernetes as the container orchestrator. This architectural style, called microservices, has been actively debated in the past few years and revolves around decomposing a system into a collection of small, independent services. The agility that comes with this architecture proved invaluable for accommodating the growth of our system, letting us scale up conveniently and integrate new features easily.
Microservice components only become a valuable Service Oriented Architecture (SOA) when they are able to communicate with the other components in the system. Together they form the overarching functionality of our system as a whole. Each service has its own API, and just like the modules in a codebase, it is important that these APIs are loosely coupled. But as our codebase expanded, we increasingly found that microservices are no silver bullet: they introduce new issues that must be addressed.
- Services do not just communicate with each other, but also expose data to the outside world. Exposed endpoints need to be secured and authorized, as the outside world can be a dangerous place. Additionally, web-friendly standards have to be met regarding CORS, rate limiting, caching and metrics, to name a few. This also means that each service has to provide an HTTP REST API, while the microservice might benefit more from a different communication protocol.
- The granularity of the APIs provided by the microservices often differs from what a client needs. Clients are often forced to make multiple calls to different services and aggregate the data themselves.
- Besides our web applications, we expose our data to mobile and third-party clients. These clients can request data that comes from different services, and even identical requests may need the data in different formats or dimensions.
While most of these challenges could be solved on a per-service basis, this often means duplicating code across services, which quickly becomes cumbersome to manage. Basically, we ended up with two sides of the same coin: on the one hand we split up our system into small pieces to reap the benefits of microservices; on the other hand we want to expose our services through a uniform API.
Researching several solutions to tackle these challenges brought us to the API Gateway pattern. The idea of this pattern is to create a single API endpoint that serves as a reverse proxy for all the underlying services. All clients communicate with the API gateway, and only with the API gateway, which proxies valid incoming requests to the corresponding services. Besides functioning as a reverse proxy, the gateway’s position within the system makes it a perfect candidate for additional responsibilities such as security and other web standards, quite literally acting as the gateway to our backend services.
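At its core, the gateway’s reverse-proxy job is a routing decision: match the public path against a route table and rewrite the request to an internal service. A minimal sketch of that idea (the service names and internal hosts below are hypothetical, not our actual topology):

```javascript
// Route table: path prefixes on the public API mapped to internal services.
// The names and hosts here are purely illustrative.
const routes = [
  { prefix: '/api/vehicles', target: 'http://vehicle-service:3000' },
  { prefix: '/api/chargers', target: 'http://charger-service:3000' },
];

// Resolve an incoming public path to an internal URL, or null when no
// service matches (the gateway would then answer 404 itself).
function resolve(path) {
  const route = routes.find((r) => path.startsWith(r.prefix));
  if (!route) return null;
  return route.target + path.slice(route.prefix.length);
}

console.log(resolve('/api/vehicles/42/soc'));
// → http://vehicle-service:3000/42/soc
```

A real gateway does far more than this lookup (streaming the request body, rewriting headers, retries), but every additional responsibility hangs off this single choke point.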
The API gateway has the following benefits:
- It provides unambiguous access to our platform through a single API endpoint. Clients no longer have to call different services, resulting in a reduction of overhead and complexity.
- It can handle all responsibilities regarding security, rate-limiting, CORS, etc., within this API layer so that it does not have to be handled by the microservices themselves.
- It creates a safe ‘internal’ environment for our services. Services that are behind the gateway only accept connections from other internal services or the gateway itself.
- It becomes even easier to update and deploy new services.
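The safe internal environment from the third point can be enforced at the orchestration level. Since our services already run on Kubernetes, a sketch of such a setup could look as follows; the names and ports are hypothetical, and only the gateway gets a public entry point:

```yaml
# Backend service: ClusterIP, reachable only from inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: charger-service
spec:
  type: ClusterIP
  selector:
    app: charger-service
  ports:
    - port: 3000
---
# The gateway: the single publicly exposed entry point
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
spec:
  type: LoadBalancer
  selector:
    app: api-gateway
  ports:
    - port: 80
      targetPort: 8080
```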
OK, that sounds pretty neat, but how do we actually implement this API gateway thing? You could write your own from scratch, but why reinvent the wheel? The API gateway market is thriving; big software companies offer complete SaaS solutions while others provide open source products. In our search, our interest was piqued by the relatively young Express-Gateway. As the name indicates, this solution is built on top of the popular Node.js framework ExpressJS. This is extremely convenient, since both Node.js and ExpressJS are core technologies used within ViriCiti.
Gateway entities such as pipelines and policies, which are used to configure the internal behaviour of the gateway, wrap around ExpressJS middleware. Each endpoint exposed on the API gateway has a corresponding pipeline that uses configurable policies as its building blocks. In addition to the policies that ship with Express-Gateway, custom policies can be implemented. This way we can fully customize our API gateway using our in-house expertise. Winning!
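To give an idea of how these entities fit together, here is a sketch of what a `gateway.config.yml` along these lines could look like. The endpoint and service names are hypothetical, and the exact schema should be checked against the Express-Gateway documentation:

```yaml
http:
  port: 8080
apiEndpoints:
  chargers:
    path: '/chargers*'
serviceEndpoints:
  chargersService:
    url: 'http://chargers-backend:3000'
policies:
  - cors
  - rate-limit
  - proxy
pipelines:
  chargers:
    apiEndpoints:
      - chargers
    policies:
      - cors:
      - rate-limit:
          - action:
              max: 100
              windowMs: 60000
      - proxy:
          - action:
              serviceEndpoint: chargersService
              changeOrigin: true
```

Each pipeline is essentially an ordered stack of middleware: a request hitting `/chargers*` passes through CORS handling and rate limiting before being proxied to the backend.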
The final question was where to actually start building. As mentioned earlier in this article, most of our services currently handle their own logic. To get the most out of the API gateway pattern, existing services need to be further decoupled and frontends need to be separated from their backend counterparts, so we can route their communication through the gateway. This means the migration requires a redesign of the current architecture, preferably done in steps to minimize the risks along the way.
Project: Charge Stations
At ViriCiti we recently launched a new project, called Charge Stations. In addition to vehicle monitoring, we now provide real-time monitoring solutions for charge stations. By keeping track of their status, charging rate and technical errors, our customers can make sure their vehicles are always charged on time. Since this was a greenfield project that had to be built from the ground up, it gave us the opportunity to completely shape its design.
Our team started off by building the frontend and backend projects simultaneously, each in its own codebase. For the frontend we decided to go with Vue.js; the backends were, as always, built using Node.js.
During development we quickly experienced the first benefits of using an API gateway. Not only did it allow us to develop much faster, by focusing solely on the business features of our application, but we could also easily switch between local and staging environments: by simply changing the gateway configuration, we could feed the frontend data without having to spin up all processes locally.
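Assuming support for environment variables in the gateway configuration, such an environment switch boils down to repointing a service endpoint; the variable and service names below are hypothetical:

```yaml
serviceEndpoints:
  chargersService:
    # e.g. CHARGERS_SERVICE_URL=http://localhost:3000 when developing
    # against a local backend, or the staging host otherwise
    url: '${CHARGERS_SERVICE_URL}'
```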
Not tied directly to the API gateway pattern, but a very important aspect nonetheless, is how to handle authentication. As mentioned, incoming requests are authenticated by the gateway, and subsequent plugins often depend on knowing which user made the request. Born from the need for a standards-based, interoperable approach to application security, OAuth2 has emerged as the de facto industry standard. Therefore we opted for OAuth2 as our security strategy; besides securing our API, it makes it easier for other companies to integrate our backend APIs using their own user management systems.
Some gateway solutions offer the option for the gateway to take on the responsibilities of an OAuth2 provider, storing user credentials and issuing access tokens. We chose to exclude this from the gateway and built a separate OAuth2 provider instead. While this initially means more work, it makes integrating other OAuth2 providers with our gateway much easier. Furthermore, we can use our OAuth2 provider with other systems without having to stress the API gateway. To support this, we wrote our own authentication plugin for the gateway, which focuses exclusively on validating incoming API calls.
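The kind of validation such a plugin performs can be sketched as ordinary Express-style middleware. This is an illustration, not our actual plugin: the real policy hooks into Express-Gateway and checks tokens against the OAuth2 provider, whereas here the token validator is injected as a plain function to keep the sketch self-contained:

```javascript
// Middleware factory: validateToken is a function that turns a bearer
// token into a user object, or null when the token is invalid. In a
// real setup this would call the OAuth2 provider (e.g. introspection).
function bearerAuth(validateToken) {
  return function (req, res, next) {
    const header = req.headers['authorization'] || '';
    const [scheme, token] = header.split(' ');
    if (scheme !== 'Bearer' || !token) {
      return res.status(401).json({ error: 'missing bearer token' });
    }
    const user = validateToken(token);
    if (!user) {
      return res.status(401).json({ error: 'invalid token' });
    }
    req.user = user; // downstream policies can rely on req.user
    next();
  };
}
```

Keeping the validator injectable is what lets the gateway stay agnostic about *which* OAuth2 provider issued the token, which is exactly the integration flexibility described above.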
Putting it live
Using an API gateway, we were able to redesign how our frontends communicate with our backends. The initial investment was quite big: besides the API gateway itself, a security provider and gateway plugins had to be built. For us, Express-Gateway proved to be a solid solution, as it provides the perfect tools to set this up using Node.js.
During development we could focus on business features instead of having to worry about security and other web-development caveats. Frontend applications only have to talk to one API, and by changing the configuration they can easily switch the environment they pull their data from. On top of this, the gateway provides a safe environment for our internal services.
Ultimately we’re very content with this architectural pattern for building our Charge Stations domain, and we are currently in the process of updating existing services to comply with this architecture. We hope that sharing our implementation experience, including the choices we made regarding infrastructure and security, helps you make your own decisions.