Mark Taylor

From microservice to mini monolith and back again

Here at Not On The High Street we have made a commitment to microservices. Our developers have agreed to the principle that “we don’t extend the monolith”, so all new features will be developed using a microservice architecture. There are no rules on the languages used for these microservices, only that they have to be built and maintained by the current teams.

Payments is one area that has been broken out into a microservice, primarily to facilitate a move to different payment providers. The initial design was sensible: a Java service responsible for payments. Creating an API for the first payment provider was a smooth process, and a good RESTful interface was implemented on top of the service. When adding the second provider we tried to be good object-oriented programmers and extract an interface for paying that wasn’t aware of providers or other implementation details. This wasn’t possible due to the differences between the providers, particularly the flows each provider expected us to follow. So we implemented a different flow for each provider, with some shared concepts such as payment identifiers, and different REST endpoints. This all remained in the payment service, as it is all payment related.
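For illustration, the kind of provider-agnostic interface we were hoping to extract might have looked something like the sketch below (the names are hypothetical, not our actual code). In practice each provider’s flow needed different inputs and intermediate steps, so no single set of method signatures fitted both.

```java
// A hypothetical provider-agnostic payment interface, roughly what we hoped to extract.
// The differing provider flows meant no single set of signatures like this fitted both.
public interface PaymentGateway {

    /** Start a payment and return our internal payment identifier. */
    String createPayment(long amountInPence, String currency);

    /** Complete a previously created payment. */
    void capturePayment(String paymentId);
}
```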

Once work on the first payment provider was finished it was released into production. This first implementation is considered a success and is used by other services for taking payments. During the work on the second provider no changes were made to the implementation of the first, other than some minor bug fixes. At the time we considered this a good thing: we could implement a new payment provider without impacting the others.

As development on the second provider neared its end we started to consider the release process and what level of exploratory testing we would need to perform. We quickly realised that we needed to test both providers to ensure there were no unforeseen interactions between the two. During this time we also considered our load testing approach and noticed that, in a pre-production environment, calls to the second provider’s sandbox were taking a considerable amount of time. Under extreme load we managed to trigger a thread pool timeout, meaning the client had run out of threads to connect to the sandbox because they were all waiting on responses. This meant that if one provider went down during heavy traffic we could lose every mechanism of taking payment over the Internet, which is not an acceptable state for an e-commerce site.
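As a rough sketch of that failure mode (the pool size and class names are made up for illustration, this is not the real service code): when every outbound provider call shares one client thread pool, a slow provider can tie up all the threads, so a call to the healthy provider cannot be serviced in time.

```java
import java.util.concurrent.*;

// Minimal sketch of the failure mode: both providers share one outbound client
// thread pool, so a slow provider starves the healthy one. Illustrative only.
public class SharedClientPoolDemo {

    // One pool shared by every outbound provider call.
    private static final ExecutorService CLIENT_POOL = Executors.newFixedThreadPool(10);

    static Future<String> callProvider(String provider, long latencyMillis) {
        return CLIENT_POOL.submit(() -> {
            Thread.sleep(latencyMillis); // stands in for the HTTP round trip
            return provider + " ok";
        });
    }

    public static void main(String[] args) throws Exception {
        try {
            // Provider B's sandbox is responding slowly; its calls tie up every thread.
            for (int i = 0; i < 10; i++) {
                callProvider("providerB", 60_000);
            }
            // A call to the healthy provider A now can't be serviced in time.
            callProvider("providerA", 100).get(2, TimeUnit.SECONDS); // TimeoutException
        } finally {
            CLIENT_POOL.shutdownNow();
        }
    }
}
```

Running the sketch, the call to providerA times out even though that provider is perfectly healthy, which is exactly the coupling the microservice split was supposed to avoid.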

The payment service isn’t big in terms of lines of code: Sonar puts it at just over 10,000 lines, and it has good test coverage, in particular branch coverage. However, it appears that in trying to move from a monolithic application to microservices we have, in the case of payments, instead created a mini monolith.

There were several ‘smells’ identified during development that we will pay closer attention to in future:

  • Unable to create a single interface to the service.
  • There are multiple high level paths through the service based on a user’s choice, in this case which provider to use.
  • Being able to add new features over a number of sprints without changing large areas of the existing code.
  • Need to retest whole user journeys even if that area of the application hasn’t been changed.
  • Unable to scale new features as needed without also scaling existing features.
  • A series of slow or bad responses from one part of the service impacts the performance of an unrelated part. (Inverse of above)

So how do we get back again? We have already acknowledged that we have introduced technical debt into the payments service and have made the product owner aware that this will need to be addressed soon. The scheduling of the work is not the issue; the issue is that we have a released API in production that is working and supporting two payment processes. That means we need to provide a non-breaking, backwards-compatible API from two services. We also need to identify the shared resources and work out whether they are truly shared, or whether we were forcing reuse for the sake of reuse and over-adherence to DRY principles. Once we have worked out the sharing and routing policies, it should just be a case of splitting the project into multiple Maven modules. This will allow us to check that the new services are properly isolated before moving each service module into its own GitHub repository with its own delivery pipeline.
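As a rough illustration of that split (the module names are hypothetical, not our actual project layout), the parent POM might declare one module per provider plus a small module for the genuinely shared concepts, each of which can later be promoted to its own repository and pipeline:

```xml
<!-- Parent pom.xml: a sketch of one possible module split, names are illustrative -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.notonthehighstreet.payments</groupId>
  <artifactId>payments-parent</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <packaging>pom</packaging>

  <modules>
    <!-- Concepts that are genuinely shared, e.g. payment identifiers -->
    <module>payments-common</module>
    <!-- One module per provider, each with its own flow and REST endpoints -->
    <module>payments-provider-a</module>
    <module>payments-provider-b</module>
  </modules>
</project>
```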

That is the plan. However, as is well documented, “no plan survives contact with the enemy”, so I will follow up on this post once we have managed to split out these services.