Ease Gateway Documentation

1. Product Principle

Why reinvent the wheel? This is a fair question to ask before developing yet another gateway, and there are several reasons.

  • Gateway as a Service. The gateway needs to provide service-level features: clustering, multi-tenancy, auto-scaling, user-defined logic, and a number of administrator APIs…

  • Business Affinity. Strong business affinity can amplify business capabilities, but no single gateway can cover every business domain. We therefore focus on businesses that involve financial transactions, such as e-Commerce, financial services, and distributed services (Uber-like platforms).

Ease Gateway is designed around one core principle: improve availability, stability, and performance without changing a line of application code.

The principles below describe this in more detail.

  • Business Support. The gateway can play an important role in high-throughput business scenarios such as flash sales and canary traffic routing.

  • Availability under High Concurrency. The first goal is to protect backend systems from crashing under a huge number of requests.

  • Traffic Scheduling. Traffic scheduling and API workflow/orchestration are important traffic-operation capabilities.

  • Performance Improvement. Applying appropriate strategies at the gateway filters out many needless requests before they reach the backend.

  • Being a Service. Customers can define and inject small pieces of business logic into the gateway.

From the user's perspective, Ease Gateway is a highly customizable, large-scale gateway that efficiently offloads heavy work from backend systems. The kinds of work it can perform are listed in Features.

2. Design Principle

We expect most users of Ease Gateway to have a relevant technical background, so the concepts exposed by Ease Gateway are designed according to the following principles.

  • Provide a compact, consistent abstraction that hides trivial details from users.
  • Allow diverse atomic operations to be assembled arbitrarily to satisfy different tasks.

Ease Gateway itself, running in front of users' services, must also satisfy several important principles to guarantee those benefits:

  • Distributed. The gateway needs to be deployable as a distributed system across the WAN.

  • High Availability. A robust distributed system tolerates single points of failure.

  • High Performance. Keeping features lightweight but essential speeds up a single node, and regional deployment speeds things up globally.

  • High Extensibility. A loosely coupled, modular architecture allows new requirements to be satisfied quickly.

  • Hot Update. Users can update configuration and have it applied to the corresponding environment quickly.

  • Open API. The exported RESTful API satisfies advanced requirements; for example, users can integrate Ease Gateway operations into their own code or scripts (see the sketch after this list).
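
To illustrate how the Open API and Hot Update principles might be used together, here is a minimal sketch that pushes an updated pipeline configuration to a running gateway. The admin endpoint path, port, and payload schema are assumptions made for this example; the actual Administration API may differ.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical pipeline configuration; the field names are illustrative,
	// not Ease Gateway's actual schema.
	config := []byte(`{
		"name": "http-throttling-example",
		"plugins": ["http-input", "throughput-rate-limiter", "http-output"]
	}`)

	// Hypothetical admin endpoint; the real path and port may differ.
	url := "http://localhost:9090/admin/v1/pipelines/http-throttling-example"

	req, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(config))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// A 2xx status would indicate the configuration was accepted and
	// hot-applied to the running gateway without a restart.
	fmt.Println("status:", resp.Status)
}
```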

3. Architecture

The main components of the architecture are Pipeline and Plugin:

  • Plugin is the essential element managed and scheduled by a pipeline to handle its part of a task. It provides a consistent abstraction for the pipeline in Ease Gateway.
  • Pipeline is a plugin scheduler that assembles diverse plugins to complete a whole task. It provides a higher-level abstraction that lets administrators implement complete business flows easily.

[Figure: Ease Gateway Architecture]

In the diagram:

  • User Request represents requests from users in diverse protocols, most commonly HTTP[S] requests.
  • Administrator Request represents requests from administrators, which cover three kinds of API: Administration API, Statistics API, and Health API.
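
To make the Plugin and Pipeline abstractions more concrete, here is a minimal Go sketch of how a pipeline could schedule plugins over a task. The interface and type names are illustrative assumptions and do not reflect Ease Gateway's actual source code.

```go
package main

import "fmt"

// Task carries the state of one request as it moves through a pipeline.
// This is an illustrative type, not Ease Gateway's actual task model.
type Task map[string]interface{}

// Plugin is the atomic unit of work managed and scheduled by a pipeline.
type Plugin interface {
	Name() string
	Run(t Task) error
}

// Pipeline assembles plugins and runs them in order to complete a whole task.
type Pipeline struct {
	Name    string
	Plugins []Plugin
}

// Run passes the task through each plugin; the first failure aborts the task.
func (p *Pipeline) Run(t Task) error {
	for _, plugin := range p.Plugins {
		if err := plugin.Run(t); err != nil {
			return fmt.Errorf("pipeline %s: plugin %s failed: %w", p.Name, plugin.Name(), err)
		}
	}
	return nil
}

// logPlugin is a trivial example plugin that just records that it ran.
type logPlugin struct{ name string }

func (p logPlugin) Name() string     { return p.name }
func (p logPlugin) Run(t Task) error { t[p.name] = "done"; return nil }

func main() {
	pipeline := &Pipeline{
		Name:    "example",
		Plugins: []Plugin{logPlugin{"http-input"}, logPlugin{"kafka-output"}},
	}
	task := Task{}
	if err := pipeline.Run(task); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("task completed:", task)
}
```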

4. Features

We have already designed best practices for the corresponding business scenarios; you can use them directly or customize them yourself, keeping both simplicity and flexibility in your hands. In this document we introduce the following pipelines, drawn from real practice, to cover some worthwhile use cases as examples.

Name | Description | Complexity level
Ease Monitor edge service | Runs an example HTTPS endpoint to receive Ease Monitor data, processes it in the pipeline, and finally sends the prepared data to Kafka. This is a basic example that some of the following ones build on. | Beginner
HTTP traffic throttling | Performs traffic control based on latency and throughput rate. | Beginner
Service circuit breaking | As a protection function, once failures of a service reach a certain threshold, all further calls to the service are returned with an error directly; when the service recovers, the breaking function is disabled automatically. A simplified sketch of this behavior follows the table. | Beginner
HTTP streamy proxy | Works as a streaming HTTP/HTTPS proxy between client and upstream. | Beginner
HTTP proxy with load routing | Works as a streaming HTTP/HTTPS proxy between client and upstream with a route selection policy. Blue/green deployment and A/B testing are example use cases. | Intermediate
HTTP proxy with caching | Caches HTTP/HTTPS responses for duplicated requests. | Intermediate
Service downgrading to protect critical service | Under unexpected traffic higher than planned, sacrifices unimportant services so that critical requests are still handled. | Intermediate
Flash sale event support | A pair of pipelines to support a flash sale event. For e-Commerce this means very low-priced items with limited stock and a huge number of people competing for them online. | Advanced
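
As a simplified illustration of the service circuit breaking behavior described in the table, the sketch below opens the circuit after a number of consecutive failures and lets calls through again after a recovery timeout. The threshold and timeout values are assumptions for illustration; this is not Ease Gateway's implementation.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// breaker is a simplified circuit breaker: after failureThreshold consecutive
// failures it rejects all calls, and after recoveryTimeout it lets calls
// through again. The parameters are illustrative assumptions.
type breaker struct {
	failureThreshold int
	recoveryTimeout  time.Duration

	consecutiveFailures int
	openedAt            time.Time
	open                bool
}

var errCircuitOpen = errors.New("circuit open: service call rejected")

// Call invokes fn through the breaker.
func (b *breaker) Call(fn func() error) error {
	if b.open {
		if time.Since(b.openedAt) < b.recoveryTimeout {
			// Fail fast while the breaker is open.
			return errCircuitOpen
		}
		// Recovery period elapsed: close the breaker and try again.
		b.open = false
		b.consecutiveFailures = 0
	}

	if err := fn(); err != nil {
		b.consecutiveFailures++
		if b.consecutiveFailures >= b.failureThreshold {
			b.open = true
			b.openedAt = time.Now()
		}
		return err
	}

	b.consecutiveFailures = 0
	return nil
}

func main() {
	b := &breaker{failureThreshold: 3, recoveryTimeout: 2 * time.Second}
	failing := func() error { return errors.New("backend unavailable") }

	// After three consecutive failures the breaker opens and later calls
	// are rejected immediately instead of hitting the backend.
	for i := 0; i < 5; i++ {
		fmt.Printf("call %d: %v\n", i+1, b.Call(failing))
	}
}
```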