Seven Peaks Insights

API Use Cases in Banking and Cloud Development – All You Can Learn

On December 8th, 2021, Seven Peaks Software held an online and onsite meetup about cloud development and API use cases in banking. The topic we chose was API development, with three cloud experts sharing their knowledge and expertise to guide you toward success with cloud development.

 

Cloud Meetup Summary – Azure API Management Overview and API Use Cases in the Banking Industry

The meetup was opened by Satish Kamalchand Dadha, who presented an overview of Azure API Management and API use cases in the banking industry. He is a Senior Specialist focusing on digital and application innovation at Microsoft.

His session focused on API use cases in the banking and financial services industry, the key capabilities to look for in an API management solution, and an overview of Azure API Management.

In his talk, Satish delved into why organizations should consider investing in an API management solution.

He continued with how to evaluate API management: once a case has been made for adopting an API management tool set, which capabilities should one look for? He then moved on to the capabilities of Azure's API Management offering.

Lastly, Satish closed his presentation with some API use cases from the banking and financial services industry.

So, why should organizations consider investing in API management?

When organizations invest in an API management solution, they invest in offering their business capabilities as APIs. This provides a means to extend the reach of their offerings.

For example, if your business is a bank, a retailer, a hotel, or a travel and transport company, offering its core capabilities as APIs opens up the possibility of creating partnerships. That is important in today's day and age, where everything is becoming app based.

When a business exposes its core capabilities as APIs, it maximizes the possibility of forging partnerships, whether with a startup or a mobile app, thereby extending the reach of its offerings, which is extremely critical.

Moving on, Satish explained which capabilities one needs to consider. Any API management initiative can be considered successful if its APIs are consumed by many clients or partners.

For an API management initiative to be successful, one therefore needs to be aware of two sets of expectations: the expectations of API consumers and the expectations from API providers.

Expectations of API Consumers

  • How does a developer find an API?
  • How does a developer “sign-up” to use the API?
  • Can the API be tried before all the effort to consume it?
  • Is there documentation?
  • Does the API provide what the consumer is looking for or what the provider can do?

Expectations from API Providers

  • Automated, visual and coding options for creating APIs
  • Lifecycle and governance for APIs, Products
  • Access control over APIs, API Products
  • Customizable, self-service Developer Portal for publishing and consuming APIs
  • Policy enforcement, security and control
  • Advanced API usage analytics
  • Hybrid-cloud, Multi-cloud, and on-premise support
  • Scalability

Azure API Management comprises three components:

  • API gateway – Data plane
  • Management plane
  • Developer portal – User plane

Azure’s API management

Moving on to Azure API Management itself: it has been around for about seven years and is used by more than 18,000 organizations. These figures are as of last year (2020).

More than 4 trillion API calls have been made on Azure's API management platform, more than 800,000 APIs are live and in use, and deployments span the whole world.

In terms of API usage, Azure API Management is a full-lifecycle API management solution. What exactly does full lifecycle mean? It is a continuous loop that starts with design.

With Azure API Management, you can design your API in a top-down or bottom-up fashion.

Top-down, you build the API design first and the back-end implementation later; bottom-up, if the back-end implementation or core service is already ready, you can simply import it and build your façade on top.

Design can be done in a visual, graphical manner. You then develop the API: add policies, transformations, validation, caching, and so on. Tools like Visual Studio Code, or the development tools in the graphical API Management portal, help with this part.

You would then apply policies around security. For example, you may want to apply IP filtering policies, put quotas on API usage, or keep your APIs off the public internet by enabling private networking.
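
To make the quota idea concrete, here is a minimal sketch of the behaviour such a policy enforces: a fixed-window limit on calls per client. In Azure API Management this would normally be configured as a gateway policy rather than written in application code; the TypeScript below is only an illustration, and the names (QuotaPolicy, callsPerWindow) are invented for the example.

```typescript
// Illustrative only: a fixed-window quota, similar in spirit to a gateway
// rate-limit policy. Names and numbers are hypothetical.
class QuotaPolicy {
  private counters = new Map<string, { windowStart: number; calls: number }>();

  constructor(
    private callsPerWindow = 100, // allowed calls per window
    private windowMs = 60_000     // window length: one minute
  ) {}

  /** Returns true if the caller identified by `key` may proceed. */
  allow(key: string): boolean {
    const now = Date.now();
    const entry = this.counters.get(key);

    // Start a new window if none exists or the previous one has expired.
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counters.set(key, { windowStart: now, calls: 1 });
      return true;
    }

    if (entry.calls >= this.callsPerWindow) {
      return false; // over quota: the gateway would answer 429 Too Many Requests
    }
    entry.calls += 1;
    return true;
  }
}

// Usage: key requests by subscription key or client IP before forwarding them.
const quota = new QuotaPolicy(100, 60_000);
console.log(quota.allow("subscription-key-123")); // true until the quota is hit
```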

All of these can be configured using Azure API Management. Once you have built and secured your API, you would then want to publish it for consumption by potential API consumers.

To publish your API, Azure API Management provides the developer portal. The developer portal can be customized to match your organization's look, feel, and theme. Once API usage grows, there is the possibility to scale.

Azure has data centers around the world, and the API Management service is available across 54 regions, so your business can scale up in different parts of the world based on where the target audience for your API is.

You can go for a multi-region deployment if needed. As usage grows, you will definitely want to monitor and analyze API usage, generate dashboards, and adjust your API offering based on that usage to make it more attractive, so that consumption keeps increasing and ultimately grows the business for the organization.


API Driven Design

The next speaker, Bjorn Harvold, co-founder of iko.travel and TripPay, an inventory distribution and payment platform for the travel industry, presented on API-driven design. His session focused on how to let APIs “do the talking” to help drive business and partnerships forward.

Bjorn opened his presentation by explaining what API-driven design is, continued with a demo, and lastly talked about the API lifecycle. Starting with HATEOAS, or Hypermedia as the Engine of Application State: it gives the client the ability to understand and navigate the API without needing separate documentation, because each response describes the actions that can be taken next.
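
As an illustration of HATEOAS, here is a hedged sketch of what a hypermedia-style response might look like: the client learns what it can do next from the links embedded in the payload rather than from documentation. The resource shape and link names below are hypothetical and not taken from Bjorn's demo.

```typescript
// A hypothetical account resource whose responses carry hypermedia links.
interface Link {
  href: string;
  method: "GET" | "POST" | "PUT" | "PATCH" | "DELETE";
}

interface AccountResource {
  id: string;
  balance: number;
  currency: string;
  _links: Record<string, Link>; // the "engine of application state"
}

const response: AccountResource = {
  id: "acc-42",
  balance: 1250.75,
  currency: "THB",
  _links: {
    self:     { href: "/accounts/acc-42",             method: "GET" },
    deposit:  { href: "/accounts/acc-42/deposits",    method: "POST" },
    withdraw: { href: "/accounts/acc-42/withdrawals", method: "POST" },
  },
};

// The client follows links by name instead of hard-coding URLs.
console.log(`To deposit, call ${response._links.deposit.method} ${response._links.deposit.href}`);
```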

He then continued with HTTP verbs, the standard request methods such as GET, POST, PUT, PATCH, and DELETE.

 

Next, he explained HTTP status code classes.

1xx is an informational code; 2xx is a success code, which means everything is working fine; 3xx is a redirect code, which means something has changed and the client needs to update the URL it is calling; 4xx means something is wrong with the request from the client; and lastly, 5xx means something is wrong with the server.
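
As a small, hypothetical illustration of these classes, a client might branch on the first digit of the status code rather than on every individual code; the helper below is invented for the example.

```typescript
// Hypothetical helper: classify an HTTP response by its status-code class.
type StatusClass = "informational" | "success" | "redirect" | "client-error" | "server-error";

function classify(status: number): StatusClass {
  if (status >= 100 && status < 200) return "informational"; // 1xx
  if (status >= 200 && status < 300) return "success";       // 2xx: everything is fine
  if (status >= 300 && status < 400) return "redirect";      // 3xx: follow the new URL
  if (status >= 400 && status < 500) return "client-error";  // 4xx: fix the request
  if (status >= 500 && status < 600) return "server-error";  // 5xx: retry or report
  throw new Error(`Unexpected status code: ${status}`);
}

console.log(classify(201)); // "success"
console.log(classify(429)); // "client-error" (too many requests)
```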

GraphQL or a REST API: both remain good options. Every technology has its use cases, and GraphQL was obviously very good for Facebook, which is why they created it: they needed a flexible way to query their data. For some applications and use cases it can be really useful, and although it involves a bit of learning for some developers, it can be applied fairly easily.

Moving on to the API lifecycle, there are a few points that you need to watch:

  1. Changes / deprecations of features, entities, and formats
  2. Versioning – consider backwards compatibility, and decouple the storage model from the conceptual model
  3. URL-based versioning, e.g. /v1/ => /v2/
  4. Header-based versioning, e.g. Accept / Content-Type: application/vnd.my-product-v1+json, or a custom header such as “my-product-version”: 1 (see the sketch below)
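
Here is a minimal sketch of the two versioning styles above, URL-based and header-based. The route and media-type names are hypothetical; the point is only that the version can live either in the path or in a header.

```typescript
// Hypothetical request shape: just enough to show where the version lives.
interface IncomingRequest {
  path: string;                    // e.g. "/v2/orders/17"
  headers: Record<string, string>; // e.g. { accept: "application/vnd.my-product-v1+json" }
}

// URL-based: the version is the first path segment (/v1/..., /v2/...).
function versionFromUrl(req: IncomingRequest): string | undefined {
  const match = req.path.match(/^\/(v\d+)\//);
  return match?.[1];
}

// Header-based: the version is embedded in a vendor media type.
function versionFromHeader(req: IncomingRequest): string | undefined {
  const accept = req.headers["accept"] ?? "";
  const match = accept.match(/vnd\.my-product-(v\d+)\+json/);
  return match?.[1];
}

const req: IncomingRequest = {
  path: "/v2/orders/17",
  headers: { accept: "application/vnd.my-product-v1+json" },
};
console.log(versionFromUrl(req));    // "v2"
console.log(versionFromHeader(req)); // "v1"
```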

You can read more about Bjorn's talk at the link below:
https://github.com/bjornharvold/pizza-delivery-network

API Monitoring and Logging

The last speakers of this meetup were the cloud experts at Seven Peaks Software: Giorgio Desideri and Nicolas Piersen, who presented on API monitoring and logging.

Their session focused on logging, tracing, and monitoring as the most important aspects of API development. Especially in a world driven by microservices, understanding message flows, transactions, and computation progress is essential for controlling and monitoring an application.

Reaching this goal with Azure Application Insights is pretty easy, and the cloud service immediately gives you a lot of features and benefits that save you time and effort.

So, how do we monitor everything? How do we ensure that users of the API still get the information they need when they need it, that the API is live, and that users can access their documents and accounts? Before that, you need to understand the contracts and agreements involved. They are divided into three: SLA, SLO, and SLI.

SLA = Service Level Agreements are publicly stated or implied contracts with users.

  • Users can be external customers or internal customers.
  • The agreement may include economic repercussions

SLO = Service Level Objectives are objectives that define targeted levels of service.

  • The SLO may be measured by one or more Service Level Indicators

SLI = Service Level Indicators are metrics that indicate how well a service is performing (a short worked example follows this list). Typical indicators include:

  • Latency
  • Disk usage
  • CPU
  • etc
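
To make the relationship concrete, here is a hedged sketch of how an SLI (here, request availability) might be computed from raw measurements and checked against an SLO target. The 99.5% target and the sample data are invented for illustration.

```typescript
// Hypothetical measurements: one entry per API request.
interface RequestSample {
  latencyMs: number;
  status: number;
}

const samples: RequestSample[] = [
  { latencyMs: 120, status: 200 },
  { latencyMs: 95,  status: 200 },
  { latencyMs: 480, status: 500 },
  { latencyMs: 130, status: 200 },
];

// SLI: fraction of requests that succeeded (non-5xx responses).
const availability =
  samples.filter((s) => s.status < 500).length / samples.length;

// SLO: the targeted level for that indicator (value is illustrative).
const availabilitySlo = 0.995;

console.log(`Availability SLI: ${(availability * 100).toFixed(2)}%`);
console.log(availability >= availabilitySlo ? "SLO met" : "SLO violated");
```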
 

Moving on to the monitoring platform, Giorgio and Nicolas presented a pyramid to better explain how to understand a platform: monitoring tells you whether a system is working, while observability lets you ask why it isn't working.

The pyramid is divided into three parts: analyzing, monitoring, and observability. Analyzing uses both monitoring and observability to understand the status of the platform.

Monitoring is about identifying the data that will help to prevent issues. It is achieved through an aggregate view of the information collected: a 360° view.

Observability is about generating and collecting enough information to know the state of the system. It is decomposed into three pillars: logging, metrics, and tracing.

So what are the goals of monitoring? Based on the metrics and logs collected, the goal is to identify the important information and group it around use cases:

  • A view for cost management
  • A view for the administration in production (capacity, bottlenecks…)
  • A view for application monitoring (queue capacity, number of retries)
  • Accesses must be secured based on Roles and Permissions

The best practices for monitoring are:

  • Build dashboards specific to the platform; reusing dashboards built for other metrics will misguide the user of the dashboard
  • Adapt and create multiple dashboards if needed
  • The dashboards should be easily accessible

Moving forward to logging: so what is a log? It can be defined by the following points:

  • To generate information during a process; this information should be enough to understand what is going on (specific to the use case: development vs. production)
  • To be easily readable
  • To be easily filterable (by correlation id, levels, services…)
  • To be purgeable on-demand and archivable
  • To be secured

Logging is the information generated by the software to describe the process being performed.

So how do you log effectively? There are various ways to achieve that.

The following best practices should be applied:

  • Don’t log personal information (ex. first name, last name, card id, social security number…)
  • Log everything that is coming into or going out of the platform (external services, incoming requests)
  • Use the decorator pattern to log incoming and outgoing requests (see the sketch after this list)
  • When it comes to debugging in development, try to debug without looking at the code.
  • The levels are trace, debug, info, warning, error
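
As mentioned in the list above, the decorator idea can be sketched roughly as follows: an outbound call is wrapped so that every attempt is logged with a level and a correlation id, and no personal data is written out. The log shape and function names are hypothetical, not the speakers' implementation.

```typescript
// Hypothetical structured logger: level, message, and a correlation id
// so that log lines from one request can be filtered together.
type Level = "trace" | "debug" | "info" | "warning" | "error";

function log(level: Level, correlationId: string, message: string): void {
  console.log(JSON.stringify({ ts: new Date().toISOString(), level, correlationId, message }));
}

// Decorator: wrap any outbound call so entry, exit, and failures are logged
// without touching the business logic itself.
function withRequestLogging<A extends unknown[], R>(
  name: string,
  correlationId: string,
  call: (...args: A) => Promise<R>
): (...args: A) => Promise<R> {
  return async (...args: A) => {
    log("info", correlationId, `outgoing call ${name} started`);
    try {
      const result = await call(...args);
      log("info", correlationId, `outgoing call ${name} succeeded`);
      return result;
    } catch (err) {
      log("error", correlationId, `outgoing call ${name} failed: ${(err as Error).message}`);
      throw err;
    }
  };
}

// Usage: note that only the account id is logged, never personal details.
const fetchAccount = withRequestLogging("fetchAccount", "corr-7f3a", async (id: string) => ({ id }));
fetchAccount("acc-42");
```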

Moving on to metrics, they can be defined by the following points:

  • To put numbers on the processes implemented; those metrics are generated at different levels, from the infrastructure to the application
  • To detect under-used capacity and bottlenecks
  • To act on tangible information when it comes to improving performance
  • To improve the cost of the platform

The following best practices should be applied:

  • Generate as many metrics as possible; it will help you understand the platform
  • Instrumentation is often underestimated, yet it brings a lot of value for understanding the processes in the application (see the sketch below)
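
Here is a hedged sketch of the kind of instrumentation meant here: a counter and a simple timer emitted around a unit of work. The metric names are invented, and in practice the values would be shipped to the platform's metrics backend rather than printed.

```typescript
// Hypothetical in-process metrics: counters and timings, printed instead of
// being sent to a real metrics backend.
const counters = new Map<string, number>();

function increment(metric: string, by = 1): void {
  counters.set(metric, (counters.get(metric) ?? 0) + by);
}

async function timed<T>(metric: string, work: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await work();
  } finally {
    console.log(`${metric}.duration_ms = ${Date.now() - start}`);
  }
}

// Usage: instrument a (simulated) unit of work.
timed("payments.process", async () => {
  increment("payments.requests");
}).then(() => console.log(counters)); // Map { "payments.requests" => 1 }
```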

Lastly, tracing: it is defined as being able to link the metrics and logs together in a context (e.g. a request).

Tracing represents a series of causally related distributed events that encode the end-to-end request flow through a distributed system.

When implementing tracing, one needs to follow best practices such as implementing tracing at the beginning of the project, since it impacts the interface contracts.
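
As an illustration, here is a minimal sketch of propagating a trace context (a trace id plus a parent span id) across service boundaries so that logs and metrics from one request can be linked together. The header names below are hypothetical and not taken from the talk.

```typescript
import { randomUUID } from "crypto";

// Hypothetical trace context carried with every request.
interface TraceContext {
  traceId: string; // identifies the whole end-to-end request
  spanId: string;  // identifies this hop in the request flow
}

// Start a new trace at the edge of the system.
function startTrace(): TraceContext {
  return { traceId: randomUUID(), spanId: randomUUID() };
}

// Propagate the context to a downstream call by copying it into headers,
// so the next service can attach its logs and metrics to the same trace.
function outgoingHeaders(ctx: TraceContext): Record<string, string> {
  return {
    "x-trace-id": ctx.traceId,
    "x-parent-span-id": ctx.spanId,
  };
}

const ctx = startTrace();
console.log(`trace=${ctx.traceId}`, outgoingHeaders(ctx));
```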

Are you a talented Backend Developer?
See all our Senior, Mid-level, and Junior Backend developer jobs below!
See Backend Developer Jobs!