
API Management Tools: A World of Nails and Screws — API Implementation Part 2

Today we hear from our Chief Architect Andy Medlicott, who compares API Management tools (Apigee, MuleSoft, etc.) and our own API Product Management platform to nails and screws – both are quite useful, but it all depends on the task at hand.

Before reading, we encourage you to read Part 1 of his series, “The Hidden Complexity of an API: API Implementation Part 1”.

Andy Medlicott – Chief Architect – digitalML

Previously I have written about the power and risks of dynamically “hacking” or “tweaking” an API management tool configuration, and pondered the advantage of having the configuration flow from the design.

It doesn’t stop there, though – API management tools add some really interesting implementation options to help simplify writing RESTful APIs. But like all options, you still have to be wise in how you use them.

I know a few things about DIY. I know that a screw is best used with a screwdriver, a nail with a hammer. I know both are fun. To someone who’s never seen a screwdriver, a screw is just a funny looking nail. Try removing a screw with a claw hammer – then an electric screwdriver. No contest.


The world has screws as well as nails.

API Management tools provide some great functionality which it almost always makes sense to use – such as authentication, traffic management, monitoring and load-balancing. But they also often provide other functionality which benefits from some critical thinking before use – such as caching, request and response transformation, and authorisation.

It depends on the API management tool: Apigee, IBM’s API Connect, WSO2, and Amazon API Gateway all offer different features beyond being a proxy. For example, API Connect has DataPower integration, whereas AWS offers Lambda functions.

For example, where you have an API which follows this pattern:

  1. Receive a request
  2. Call downstream API x
  3. Call downstream API y
  4. Combine the results into a common format
  5. Return the result

You could use a couple of target proxy policies and a couple of response transformation policies and deploy them in your API management tool of choice. But there are a couple of things to realise:

  • For calling the downstream APIs, you may have the option of a circuit breaker policy – or you may not.
  • The transformation to a common format may be simple, or it may be complex – it may perform poorly and impact other APIs being handled by the API management tool.
  • The “language” to implement the transformations or orchestration logic may be custom to your API management tool and therefore fewer developers can write it competently.
  • The logic to handle error conditions may need to be expressed as additional policies – these will require tests and probably mock APIs.
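To make the trade-off concrete, here is a minimal sketch of the five-step pattern written as plain execution code instead of gateway policies. The downstream calls are injected as functions so the combining and error-handling logic can be tested on its own, the `CircuitBreaker` is a deliberately naive illustration of the policy your tool may or may not provide, and all field names are invented for the example.

```python
class CircuitBreaker:
    """Naive circuit breaker: opens after `max_failures` consecutive
    failures, after which calls short-circuit without hitting downstream."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, fn, *args):
        if self.failures >= self.max_failures:
            raise RuntimeError("circuit open: downstream presumed unhealthy")
        try:
            result = fn(*args)
            self.failures = 0  # a success resets the failure count
            return result
        except Exception:
            self.failures += 1
            raise


def handle_request(request, call_x, call_y):
    """Receive a request, call downstream APIs x and y, combine the
    results into a common format, and return it (steps 1-5)."""
    breaker_x, breaker_y = CircuitBreaker(), CircuitBreaker()
    x = breaker_x.call(call_x, request["id"])
    y = breaker_y.call(call_y, request["id"])
    # Step 4: merge both downstream payloads into one response shape.
    return {
        "id": request["id"],
        "name": x.get("name"),
        "balance": y.get("balance"),
    }
```

Written this way, the error handling and the combining logic live in ordinary code with ordinary tests – the same behaviour the bullet points above describe expressing as additional policies.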

Then you also have some other things to think about:

  • Should I add policies to log the parts of the API flow or should I simply rely on the logging which would be naturally generated from calling the downstream APIs?
  • Is there some information I need to mask or redact because of insufficient authorisation? How easy is it to express that in policies? How do I report on it for auditing?
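Masking in execution code can be as simple as the sketch below – a generic redaction function that masks fields the caller isn’t authorised to see and records what it removed for auditing. The field names and the audit list are illustrative assumptions, not a prescribed approach; the point is that this logic is easy to express and test in code, whichever side of the gateway you put it on.

```python
MASKED = "***"


def redact(payload, allowed_fields, audit):
    """Return a copy of `payload` with disallowed fields masked.
    Every masked field name is appended to `audit` for reporting."""
    result = {}
    for key, value in payload.items():
        if key in allowed_fields:
            result[key] = value
        else:
            result[key] = MASKED
            audit.append(key)  # auditable record of each redaction
    return result
```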

The point is simply this – you need to think about these choices rather than assume that just because something CAN be done in an API management system it SHOULD be. I would come to different conclusions working with Apigee than with API Connect.


It also depends on how complex these things are – a simple transformation from an integer to a string is trivial and probably simple enough to put into an API management tool… though as per my previous blog, it’s important to document it to avoid misleading people!

On the other hand, having all transformations (including the complex ones) in an API management system can be great for consistency and visibility. There’s rarely a universally clear answer.

Remembering that an API consists of management policies and execution code helps make wise implementation decisions. Having a clear design is also crucial and helps this decision-making. Expressing data mappings in a design, no matter how simple, means that the information is available.

When translating a design into implementation code (and my preference is by as much code generation as practical) the possibility exists that this can be automatically converted into a response transformation policy – or equally into XSLT mappings, Apache Dozer configuration files, or MOXy annotations. If I manually build these mappings I’m tempted to go for what’s simplest, rather than what’s right…
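As a sketch of what such generation works from, the mapping below lives as data – the way it would in a design document – and a single generic function applies it. The same specification could equally be rendered as XSLT, Dozer configuration, or a gateway transformation policy; the field names here are hypothetical.

```python
# Design-time mapping specification: target field -> source field.
MAPPING = {
    "customerName": "cust_nm",
    "accountNumber": "acct_no",
}


def apply_mapping(source, mapping=MAPPING):
    """Build the target payload from the source using the mapping spec,
    keeping the transformation logic generic and the mapping declarative."""
    return {target: source[src] for target, src in mapping.items()}
```

Because the mapping is data rather than hand-written code, a generator can emit it for whichever runtime the implementation decision lands on – which is exactly what removes the temptation to do what’s simplest rather than what’s right.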

Do I log using the API management tool? Of course. Do I ALSO add logging to my API execution code to aid in diagnosing problems if I need to – why not? What’s the harm? Time to write? Time to execute? They’re poor excuses these days.
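For illustration, logging in the execution code alongside the gateway’s logs costs only a few lines. The handler name and fields below are invented; it’s a sketch of the habit, not a logging standard.

```python
import logging

logger = logging.getLogger("orders-api")


def get_order(order_id, lookup):
    """Fetch an order, logging entry, success, and failure so problems
    can be diagnosed without relying solely on the gateway's logs."""
    logger.info("get_order start id=%s", order_id)
    try:
        order = lookup(order_id)
        logger.info("get_order ok id=%s", order_id)
        return order
    except Exception:
        logger.exception("get_order failed id=%s", order_id)
        raise
```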

Do I orchestrate in the API management tool? Sure, you can. What if I prefer to use a messaging system such as Apache Kafka or MQ to handle orchestration with reliable messaging? If my design contains the essentials of the orchestration, then why shouldn’t I seek to have the message payloads, definitions, and call-and-wait logic automatically generated? I can then choose to implement some of it in the API management tool and some in Kafka or MQ. I can switch from one to the other; the design stays the same.
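To show how the orchestration definition can stay constant while the transport changes, here is a self-contained sketch: the call-and-wait sequence is expressed as data, and a tiny in-memory bus stands in for Kafka or MQ. The topic names, step handlers, and the bus itself are illustrative assumptions – swapping the bus for a real producer/consumer would not change the orchestration definition.

```python
import queue


class InMemoryBus:
    """Stand-in for a message system such as Kafka or MQ:
    one queue per topic, publish and consume by topic name."""

    def __init__(self):
        self.topics = {}

    def publish(self, topic, message):
        self.topics.setdefault(topic, queue.Queue()).put(message)

    def consume(self, topic):
        return self.topics[topic].get_nowait()


def run_orchestration(bus, steps, payload):
    """Run a call-and-wait sequence defined as (topic, handler) pairs –
    the shape of sequence a design could describe and a generator emit."""
    for topic, handler in steps:
        bus.publish(topic, payload)            # send the request message
        payload = handler(bus.consume(topic))  # wait for and process it
    return payload
```

For example, a two-step enrich-then-price flow is just a list of steps; moving it onto Kafka would mean replacing `InMemoryBus`, not rewriting the steps.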

Of course, all this relies on good and complete designs.

It all depends on good design.

So many times I hear people say they have a design, and they pass me a Swagger file. It’s a start, but then I ask for mappings and orchestrations, and I get a spreadsheet. I ask for NFRs; if I don’t get a puzzled look, I often get a Word document or an email saying “up 100% of the time with a 10ms response with a single server” or similar unrealistic boilerplate answers.

I often do code generation to get the stubs, but then I quickly make changes directly in the code, because it’s too much hassle to regenerate and patch everything back up against the spreadsheets.

Having a place where design is complete, where code generation delivers stubs, mocks, transformations, policies, basic implementations with my favourite design patterns and abstract methods, where I can develop extension classes, test code, etc., makes my job enjoyable.

I spend my time truly designing and concentrating on solving the real problems which my boss has – not sleuthing around trying to work out why a design hasn’t been implemented properly.

The world has screws as well as nails. Best to pick the right tool for the problem at hand!

 

If you’d like to learn more about our ignite platform, please visit our release page or book a call and we’ll have a chat.
