This is a continuation of last week’s post on What You Missed at RESTfest 2018. Today I’ll be covering my presentation on Balancing Architecture at Scale.
API Production & Consumption
Your work begins with API creation to meet an immediate need or new concept. As the provider of that API, your focus is on creating a working product that meets that specific use case.
Next, consumers will begin using your API. They may or may not be who you expected, which is fine… or they may not be using it according to your original use case. And that is also fine! However, their code now relies on yours. As their requirements evolve, they’ll have additional asks of your API, which may not align with the original use cases.
In addition, new consumers may come along with asks of their own, producing the same pressure on your API.
A Problem and a Solution…
Accepting these additional asks can affect your API over time. You may bend the original use cases to suit a consumer’s perspective (particularly a very vocal one). Eventually, this can turn long-lived APIs into effectively hard-coupled solutions (back to the world of SOAP): you can’t change them, and you can’t retire them.
This problem can be resolved by declaring, as the provider, that your API will change only through its data representation. This way, you can avoid any “corruption” of the underlying contract over time. Sometimes this does work, but consumers will look for easier options where possible, so you may be turning away potential consumers (and potential revenue).
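One common way to realize “changing only through the data representation” is versioned media types chosen by content negotiation. A minimal sketch in Python; the media types and field names here are illustrative, not from the original post:

```python
# Sketch: the resource and its URL never change; consumers opt in to a
# new shape by requesting a versioned media type. All names are
# hypothetical examples.

def select_representation(accept_header: str, order: dict) -> tuple:
    """Return (media_type, body) for the newest representation the
    consumer accepts; fall back to v1 otherwise."""
    representations = {
        # v2 renames fields; the underlying resource is untouched.
        "application/vnd.example.order.v2+json": {
            "orderId": order["id"], "totalAmount": order["total"],
        },
        "application/vnd.example.order.v1+json": {
            "id": order["id"], "total": order["total"],
        },
    }
    for media_type, body in representations.items():
        if media_type in accept_header:
            return media_type, body
    # Unversioned requests keep getting the original (v1) shape, so
    # existing consumers keep working while new ones opt in to v2.
    v1 = "application/vnd.example.order.v1+json"
    return v1, representations[v1]

order = {"id": 42, "total": 9.99}
media, body = select_representation(
    "application/vnd.example.order.v2+json", order)
```

Old consumers that never send the versioned `Accept` header continue to receive the v1 shape, which is what lets the provider evolve the representation without breaking anyone.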
The Enterprise Lens
Now consider this within a large enterprise, where you don’t always have control over what requirements you implement, and you also rely on a variety of other technologies which change independently. In this scenario, conflicts are unavoidable.
Due to this kind of environment, individual team leaders each determine and implement their own patterns and best practices, for a variety of reasons:
- Comfortable technologies, within their skill sets
- New technologies that are exciting and promise faster development, easier maintenance, or better performance
- Pursuit of architectural purity
- Other personal preferences
At a large scale, the result is many different implementations that have trouble interfacing with each other, multiple implementations of the same function, and possibly a jumbled appearance to the outside world. These varied responses and approaches to APIs result in something I like to call “the hydra”.
Solutions to “The Hydra”
Different solutions have appeared over the last twenty years to solve this problem:
- Canonical data and service models
- Enterprise architecture
- Attempts at standardization of toolkits, libraries, patterns, etc.
These can be implemented either at the enterprise level, or at a lower level such as the individual team. Some approaches are better than others:

Team-level approaches:

- PRO: High-speed delivery
- PRO: Flexibility of implementations
- CON: Difficult to maintain
- CON: Difficult to integrate
- CON: Gaps and overlaps

Enterprise-level approaches:

- PRO: Few gaps and overlaps
- PRO: Easier to integrate
- CON: Vastly slower delivery
- CON: Vulnerable to security flaws
- CON: Still difficult to maintain
Taking a “middle path” approach here can provide the best results. The following common design tools can help with the big problems:
The API Catalog

A catalog provides the discoverability that helps reduce gaps and duplication of existing services, as well as visualize any exposure risk by showing which services are exposed externally.
It also gives you visibility into design-time data, which can be compared against runtime data to verify that what you have cataloged is accurate. In addition, it can provide visibility into your project pipelines (which services are in progress, upcoming, or sunsetting), which cuts prioritization time and helps with project management.
For more on the API Catalog, visit What’s in an API Catalog?
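To make the catalog idea concrete, here is a small sketch of the two queries described above, duplicate capabilities (overlaps) and external exposure. The entry fields are assumptions for illustration, not any particular vendor’s schema:

```python
# Hypothetical catalog entries; "capability" and "exposure" are
# illustrative fields, not a real product's schema.
catalog = [
    {"name": "customer-api",    "capability": "customer-profile",
     "exposure": "external", "lifecycle": "live"},
    {"name": "crm-profile-svc", "capability": "customer-profile",
     "exposure": "internal", "lifecycle": "live"},
    {"name": "billing-api",     "capability": "invoicing",
     "exposure": "external", "lifecycle": "sunsetting"},
]

def duplicates(entries):
    """Capabilities implemented by more than one service (overlaps)."""
    by_capability = {}
    for e in entries:
        by_capability.setdefault(e["capability"], []).append(e["name"])
    return {c: names for c, names in by_capability.items()
            if len(names) > 1}

def external_surface(entries):
    """Services visible outside the enterprise (exposure risk)."""
    return [e["name"] for e in entries if e["exposure"] == "external"]
```

Even this toy query surfaces the hydra symptoms the post describes: `duplicates(catalog)` reveals two services implementing the same customer-profile capability, and `external_surface(catalog)` lists what the outside world can see.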
Canonical Models

Canonical models yield easier integrations by providing a single language of transformation, and a disciplined approach keeps them from dragging down delivery times. They have their own pitfalls: you need that discipline in both designing and maintaining the canonical models. (To see how we apply domain-driven design to the canonical model, visit domain-based information models.)
In addition, having and maintaining a set of canonical models can drive a common business language, making it easier for tech and business to communicate.
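A minimal sketch of the “single language of transformation” idea: two teams expose the same customer in different shapes, and each maps into one canonical model instead of maintaining point-to-point mappings. All field names here are hypothetical:

```python
# Illustrative canonical customer model; every producing team maps
# its own shape into these shared fields.
CANONICAL_FIELDS = ("customer_id", "full_name", "email")

def from_crm(record: dict) -> dict:
    """CRM team's shape -> canonical customer."""
    return {"customer_id": record["CustomerID"],
            "full_name": f'{record["First"]} {record["Last"]}',
            "email": record["EmailAddr"]}

def from_billing(record: dict) -> dict:
    """Billing team's shape -> canonical customer."""
    return {"customer_id": record["acct_no"],
            "full_name": record["name"],
            "email": record["contact_email"]}

crm = {"CustomerID": "C-1", "First": "Ada", "Last": "Lovelace",
       "EmailAddr": "ada@example.com"}
billing = {"acct_no": "C-1", "name": "Ada Lovelace",
           "contact_email": "ada@example.com"}
```

With N producers and M consumers, each side writes one mapping to the canonical shape rather than N×M pairwise translations, which is where the integration savings come from.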
Format Transformation

This is the newest of the set, something offered by few API Management vendors; ignite is the only platform I know of. You could always try to build it in-house, but the purpose is to reduce your exposure to technological churn as technology changes. There will of course be upfront costs as you figure out how to adjust your existing models, but once you pay that upfront cost you’re able to preserve any template patterns needed. The obvious use case here is SOAP-to-REST, or REST-to-SOAP, transformations, which you can export as a Swagger definition. This feature would also prove useful for those undertaking a move to GraphQL: you can make changes to your existing REST APIs and export them as GraphQL, meaning your cost to rebuild in GraphQL is decreased by up to 90%.
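As a rough illustration of the REST-to-GraphQL direction, here is a sketch that renders an OpenAPI/Swagger-style property map as a GraphQL object type. The scalar mapping table and all names are assumptions, far simpler than what a real platform would do:

```python
# Hypothetical OpenAPI-scalar -> GraphQL-scalar mapping.
SCALARS = {"string": "String", "integer": "Int",
           "number": "Float", "boolean": "Boolean"}

def to_graphql_type(name: str, properties: dict, required=()) -> str:
    """Render an OpenAPI-style property map as a GraphQL object type."""
    lines = [f"type {name} {{"]
    for field, spec in properties.items():
        gql = SCALARS.get(spec["type"], "String")
        if field in required:
            gql += "!"  # required fields become non-nullable
        lines.append(f"  {field}: {gql}")
    lines.append("}")
    return "\n".join(lines)

schema = to_graphql_type(
    "Order",
    {"id": {"type": "string"}, "total": {"type": "number"}},
    required=("id",),
)
```

The point is not this toy generator itself, but that the model is captured once and exported to whichever format (SOAP, REST, GraphQL) the moment demands.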
The End Result
By combining these three tools and using a disciplined approach with an eye toward agility, you have a streamlined method that reduces risk in your long-term strategy. While this methodology carries an increased onboarding cost, it dramatically decreases maintenance cost. Ultimately, you have to decide what’s best for your organization’s specific needs when applying scale and balance to your architecture.
To learn more, book a chat with one of our consultants today.