Fast vs Complex

Tecknuovo
May 1, 2022

Recently I've been reflecting on some of the services I've built and run at the enterprise level. In particular, I've been thinking about how performance for the end user hasn't really changed in the last 20 years (yes, I'm old).

This all stems from a conversation I had with a colleague about 15 years ago, after we had delivered a new underwriting platform for the London Market. The system had been delayed, partly because of performance issues that emerged late in testing once third-party integrations and complex searches on new fields were added as part of new functionality. So I was curious to hear how performance had improved over time, in the context of his experience delivering technology in the Lloyd's market.

For context, my colleague was at the end of his career and often talked about the days of mainframes and manually inputting all data. So I was genuinely surprised when he said that running queries and looking up clients was about as fast on the new system we had just delivered as it had been when he started. The difference, he said, was in the complexity. Speed had stayed the same, but the complexity behind the searches, queries, and general functionality had increased exponentially.

That has stayed with me ever since… and is never more relevant than now.

I’ve helped lots of clients build and run all sorts of different architectures and services. While microservices are extremely flexible and rapid to deploy, there is a hidden cost to them, especially as they scale. As services mature, we frequently see more integrations (and more data) emerge, which drives complexity behind the scenes.

Complexity can come in many forms. Here are a few that come to mind in the context of this post:

Emergent design

Opening up back doors or changing designs to fix BAU incidents can fundamentally alter a design or compromise a security boundary without anyone realising it – or its immediate consequences – at the time.

Operational/security overhead

Managing the ever-changing services and attack surfaces of a fast-moving platform requires a strong technical understanding of many technologies and clear visibility of change. This needs to be carefully considered and properly tooled!

Ease of change

As above, defining boundaries and having role-based access control in place is essential – not just in regulated environments – to ensure only the right people can make potentially damaging changes.
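
To make that a little more concrete, here is a minimal sketch of the kind of guardrail I mean, using boto3 against AWS IAM. The policy name, group name, and the list of denied actions are illustrative assumptions, not a recommendation for any particular platform.

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical guardrail: explicitly deny a handful of potentially damaging
# actions for a group of everyday engineering users. The action list and
# names below are illustrative only.
deny_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyDestructiveChanges",
            "Effect": "Deny",
            "Action": [
                "iam:*",                            # no identity or permission changes
                "ec2:DeleteSecurityGroup",          # no removing network boundaries
                "ec2:AuthorizeSecurityGroupIngress",
                "rds:DeleteDBInstance",
            ],
            "Resource": "*",
        }
    ],
}

# Create the deny policy and attach it to an existing group of engineers
# (group name assumed for the example).
policy = iam.create_policy(
    PolicyName="engineer-guardrails",
    PolicyDocument=json.dumps(deny_document),
)
iam.attach_group_policy(
    GroupName="platform-engineers",
    PolicyArn=policy["Policy"]["Arn"],
)
```

The point isn't the specific actions; it's that the boundary is explicit, versioned, and reviewable rather than living in someone's head.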

In the old days of data centres and client-server architecture (gather round, children 😉), spend and application performance were largely decoupled. Spend (hardware) was more or less fixed** and – generally speaking – performance was tuned at the app level. That meant lots of performance test cycles in which you’d also try to tune the environment. All of this took a long time and careful coordination, involving people with all sorts of different skill sets, while the business tapped its foot impatiently and finance watched the project’s budget increase hour by hour.

Now, with Cloud, it’s completely flipped. One person can build and deploy an app (or a series of microservices) and tune the environment to make the service run quickly, and quite cost-effectively. This is a stunning turnabout and the reason so many tech companies are successful (thank you, Captain Obvious), but it can also be a catastrophe for an enterprise organisation. One example is the story of someone experimenting on a bank’s Cloud account: they built and deployed an app on AWS but didn’t really understand the operational side, and the firm received a $1m invoice from AWS within a month for services that weren’t designed properly and kept autoscaling without a cap.
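
Incidents like that are usually preventable with a hard ceiling on how far a service is allowed to scale. As a rough sketch (the group name and sizes are assumptions, not details from the story above), capping an AWS Auto Scaling group with boto3 looks something like this:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical example: put an explicit upper bound on an existing
# Auto Scaling group so a misbehaving service cannot scale (and bill)
# without limit. "web-asg" and the sizes are illustrative.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    MinSize=2,
    MaxSize=10,        # hard ceiling on instances, and therefore on spend
    DesiredCapacity=2,
)
```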

Carefully designing platforms and integration services is now something I spend a lot of my time talking to clients about. It’s refreshing that people want to hear about our experiences running platforms so they can learn how to keep operational costs under control from the start. In fact, nearly all the conversations now start from that angle, which is great to hear!

If you’re running enterprise-scale services, what you will likely see these days is that while the speed of your services stays roughly the same, the cost of delivering them behind the scenes climbs fairly steeply as complexity drives more processing, more storage, more networking... the list goes on. You need to design for that and make sure any changes are visible and understood.
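
One inexpensive way to build in that visibility is a budget alert that fires long before the invoice arrives. Here is a minimal sketch using the AWS Budgets API; the account ID, budget amount, and email address are placeholders.

```python
import boto3

budgets = boto3.client("budgets")

# Hypothetical monthly cost budget with an email alert at 80% of the limit.
# Account ID, budget name, amount, and address are placeholders.
budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "platform-monthly-spend",
        "BudgetLimit": {"Amount": "5000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "platform-team@example.com"}
            ],
        }
    ],
)
```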

** Yes, you could play around with VMs or add more memory, but that involved lots of people, was generally very visible and time-consuming, and someone usually got in trouble – messy for all involved!
