Ease Gateway Documentation

1. Product Principle

Why reinvent the wheel? This is a fair question to ask when starting to develop a new gateway. There are several reasons.

  • Gateway as a Service. This means the gateway needs service-grade features: clusters, multi-tenancy, auto-scaling, user-defined logic, and a number of administrator APIs.

  • Business Affinity. Good business affinity empowers business capabilities, but business scenarios are far too diverse to cover them all. So we focus on businesses with a financial-transaction feature, such as e-commerce, financial services, and distributed services (Uber-like platforms).

Ease Gateway is designed around one core principle: improve availability, stability, and performance without changing a line of code.

The principles below describe this in more detail.

  • Business Support. The gateway can play an important role in business scenarios with heavy throughput, such as flash sales.

  • Availability. Its first goal is to protect the backend system from crashing under a huge number of requests.

  • Performance Improvement. Appropriate strategies can filter out many needless requests.

  • Being a Service. Customers can define and inject small pieces of business logic into the gateway.

From the user's perspective, Ease Gateway is a highly customizable, large-scale gateway that does heavy work efficiently for the backend system. The kinds of work it can do are listed in Features.

2. Design Principle

We expect most users of Ease Gateway to have a relevant technical background, so we designed its exported concepts by following the principles below.

  • Provide compact, consistent abstractions and hide trivial details from users.
  • Allow diverse atomic operations to be assembled freely to satisfy different tasks.

Ease Gateway itself must also satisfy some important principles while running in front of users' services, to guarantee users' benefit:

  • Distributed. The gateway must be deployable in a distributed fashion across the WAN.

  • High Availability. A robust distributed system tolerates single points of failure.

  • High Performance. Lightweight but essential features speed up a single node, and regional deployment speeds things up globally.

  • High Extensibility. A loosely coupled, modular architecture makes it quick to satisfy new requirements.

  • Hot Update. Users can update the configuration and apply it to the corresponding environment quickly, without restarting.

  • Open API. The exported RESTful API satisfies advanced requirements; for example, users can integrate Ease Gateway operations into their own code or scripts.

3. Architecture

The main components of architecture are Pipeline and Plugin:

  • Plugin is the essential element, managed and scheduled by a pipeline to handle a corresponding part of a task. It provides a consistent abstraction within Ease Gateway.
  • Pipeline is a plugin scheduler that assembles diverse plugins to complete a whole task. It provides a higher-level abstraction, so administrators can implement complete business logic easily.

Ease Gateway Architecture

In the diagram:

  • User Request represents requests from users in diverse protocols, most commonly HTTP[S].
  • Administrator Request represents requests from administrators, covering three kinds of API: the Administration API, the Statistics API, and the Health API.

4. Quick Start

As you can see in the architecture, the main components of Ease Gateway are plugins and pipelines. A plugin is created to complete a specific piece of work, and a pipeline is created to assemble different plugins into a bigger piece of work, which is usually lightweight but effective at the gateway layer.

Ease Gateway runs on different nodes around the world, but you can easily reach it via the domain gateway.megaease.com. We provide multiple interfaces to manipulate Ease Gateway; here we show the most common one, the RESTful API, which is also an easy way for beginners to understand Ease Gateway.

  1. JSON is a common message format in many fields, so checking the validity of JSON data in the gateway is very useful: it filters out many invalid requests before they reach the backend system. It's easy to create a plugin for this job:

    curl https://gateway.megaease.com:9090/admin/v1/plugins \
    -X POST \
    -i \
    -H "Content-Type:application/json" -H "Accept:application/json" \
    -d '
    {
        "type": "JSONValidator",
        "config": {
            "plugin_name": "test-jsonvalidator",
            "schema": "{\"type\": \"object\", \"properties\": {\"name\": {\"type\": \"string\"}}, \"required\": [\"name\"]}",
            "data_key": "DATA"
        }
    }'

    The request creates a plugin of type JSONValidator named test-jsonvalidator. The schema specifies that valid data must be an object with a field name whose value is a string, and the data source (data_key) is DATA. The action of test-jsonvalidator is simple: it uses the schema to validate the data it reads from DATA. Config fields in the xxx_key style usually refer to input/output data that can be read and written among plugins; you will get a better feel for this shortly.
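
To make the validation step concrete, here is a minimal Python sketch of what a JSONValidator-style check does with this schema. It is a hand-rolled stand-in for illustration only, not the gateway's implementation (which accepts full JSON Schema):

```python
import json

def validate(raw):
    """Check `raw` against the example schema: an object with a
    required string field "name"."""
    try:
        data = json.loads(raw)
    except ValueError:
        return False
    if not isinstance(data, dict):            # "type": "object"
        return False
    return isinstance(data.get("name"), str)  # "required": ["name"], string

print(validate('{"name": "test-workload"}'))  # True: valid request
print(validate('{"foo": 1}'))                 # False: missing "name"
```

A request failing this check would never reach the backend system, which is exactly the filtering effect described above.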

  2. JSON is just a message interchange format; we need an application-layer protocol to carry it. Ease Gateway can handle various protocols, but HTTP is the most common one, so let's create a plugin that receives data from users over HTTP:

    curl https://gateway.megaease.com:9090/admin/v1/plugins \
    -X POST \
    -i \
    -H "Content-Type:application/json" \
    -H "Accept:application/json" \
    -d '
    {
        "type": "HTTPInput",
        "config": {
            "plugin_name": "test-httpinput",
            "url": "/test",
            "methods": ["POST"],
            "request_body_io_key": "HTTP_REQUEST_BODY_IO"
        }
    }'

    The request creates a plugin of type HTTPInput named test-httpinput, which accepts the URL /test with the method POST and puts the HTTP request body into HTTP_REQUEST_BODY_IO. Note that request_body_io_key represents an I/O interface, not the actual data, because the HTTP request body could be an unbounded stream. Config fields in the xxx_io_key style usually refer to input/output interfaces that can be read and written among plugins. So the first plugin, test-jsonvalidator, cannot read data from HTTP_REQUEST_BODY_IO directly, but we can use another plugin, IOReader, as a bridge between them.

  3. Create an I/O bridge plugin:

    curl https://gateway.megaease.com:9090/admin/v1/plugins \
    -X POST \
    -i \
    -H "Content-Type:application/json" \
    -H "Accept:application/json" \
    -d '
    {
        "type": "IOReader",
        "config": {
            "plugin_name": "test-ioreader",
            "input_key": "HTTP_REQUEST_BODY_IO",
            "output_key": "DATA"
        }
    }'

    You may have noticed the pattern for creating a plugin by now. This request creates a plugin of type IOReader named test-ioreader, which reads data from HTTP_REQUEST_BODY_IO and puts it into DATA. As you can see, test-ioreader builds a bridge between test-httpinput and test-jsonvalidator. By the way, the concrete types at the meeting point of two I/O interfaces must match, but that detail doesn't matter here; the reference specifies them in detail.
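
The difference between an xxx_io_key and an xxx_key can be sketched with a shared context in Python; the dictionary and names here are illustrative only, not gateway APIs:

```python
import io

task = {}

# test-httpinput stores an I/O interface, not the data itself,
# because the request body could be an unbounded stream.
task["HTTP_REQUEST_BODY_IO"] = io.BytesIO(b'{"name": "test-workload"}')

# test-ioreader drains the stream and stores plain bytes under DATA.
task["DATA"] = task["HTTP_REQUEST_BODY_IO"].read()

# test-jsonvalidator can now read concrete data directly.
print(task["DATA"])  # b'{"name": "test-workload"}'
```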

  4. Plugins themselves are just a bunch of "Lego blocks"; a pipeline is what assembles them into a complete piece of work:

    curl https://gateway.megaease.com:9090/admin/v1/pipelines \
    -X POST \
    -i \
    -H "Content-Type:application/json" \
    -H "Accept:application/json" \
    -d '
    {
        "type": "LinearPipeline",
        "config": {
            "pipeline_name": "test-pipeline",
            "plugin_names": ["test-httpinput", "test-ioreader", "test-jsonvalidator"]
        }
    }'

    This request creates a pipeline of type LinearPipeline named test-pipeline, with the sequential plugins test-httpinput, test-ioreader, and test-jsonvalidator. The pipeline runs its plugins in order so each completes its own part of the work. A successful run looks like this: test-httpinput receives an HTTP request, checks all the metadata, and puts the I/O interface of the HTTP request body into HTTP_REQUEST_BODY_IO; then test-ioreader reads the data from HTTP_REQUEST_BODY_IO and puts it into DATA; finally test-jsonvalidator reads the data from DATA and checks whether it satisfies the predefined JSON schema.

We call the context that carries all the data through a whole process a Task. Any plugin can fail the current task if necessary; for example, test-httpinput fails a task when it receives an HTTP request with method GET, and test-jsonvalidator fails a task when the data lacks the field name. If a task fails midway, the pipeline test-pipeline skips the remaining steps. (Strictly speaking, some plugins have the power to recover from certain failures, but that is a bit too complex to introduce here.)
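
The whole flow, including failure short-circuiting, can be sketched as a tiny linear pipeline in Python. The function names mirror the example plugins, but this is an illustration of the model, not gateway code:

```python
import io
import json

class TaskFailed(Exception):
    """Raised by a plugin to fail the current task."""

def http_input(task):
    if task["method"] != "POST":
        raise TaskFailed("method not allowed")
    task["HTTP_REQUEST_BODY_IO"] = io.BytesIO(task["raw_body"])

def io_reader(task):
    task["DATA"] = task["HTTP_REQUEST_BODY_IO"].read()

def json_validator(task):
    data = json.loads(task["DATA"])
    if not isinstance(data.get("name"), str):
        raise TaskFailed("schema violation: field 'name' required")

def run_pipeline(task, plugins):
    """LinearPipeline model: run plugins in order; on failure,
    record the error and skip the remaining steps."""
    for plugin in plugins:
        try:
            plugin(task)
        except TaskFailed as err:
            task["error"] = str(err)
            return False
    return True

plugins = [http_input, io_reader, json_validator]
print(run_pipeline({"method": "POST", "raw_body": b'{"name": "x"}'}, plugins))  # True
print(run_pipeline({"method": "GET", "raw_body": b'{}'}, plugins))             # False
```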

The main goal of this section is to give you a quick start with Ease Gateway, so we omitted some other interesting and useful configuration options in order not to distract you. The others are easy to understand from the detailed reference, and you can find more complex and useful examples in Features.

5. Features

We have already designed best practices for the corresponding businesses; you can use them directly or customize them yourself, with both simplicity and flexibility in your hands. In this document, we introduce the following pipelines from real practice, covering some worthwhile use cases as examples.

  • Ease Monitor edge service (Beginner): Runs an example HTTPS endpoint to receive Ease Monitor data, processes it in the pipeline, and finally sends the prepared data to Kafka. This is a basic example for some of the following ones.
  • HTTP traffic throttling (Beginner): Performs latency- and throughput-rate-based traffic control.
  • Service circuit breaking (Beginner): As a protective function, once service failures reach a certain threshold all further calls to the service are returned with an error directly; when the service recovers, the breaking function is disabled automatically.
  • HTTP streaming proxy (Beginner): Works as a streaming HTTP/HTTPS proxy between client and upstream.
  • HTTP proxy with load routing (Intermediate): Works as a streaming HTTP/HTTPS proxy between client and upstream with a route selection policy. Blue/green deployment and A/B testing are example use cases.
  • HTTP proxy with caching (Intermediate): Caches HTTP/HTTPS responses for duplicated requests.
  • Service downgrading to protect critical services (Intermediate): Under unexpectedly high traffic, sacrifices unimportant services to keep critical requests handled.
  • Flash sale event support (Advanced): A pair of pipelines to support a flash sale event. For e-commerce, this means very low-price items with limited stock and a huge number of people online competing for them.

5.1 Ease Monitor edge service

In this case, we will prepare a pipeline that runs the necessary processes of an Ease Monitor edge service endpoint. This is a basic case for some of the following ones, so it's highly recommended that you work through it.

Plugin

  1. HTTP input: enables an HTTPS endpoint to receive the Ease Monitor data sent from the client.
  2. IO reader: reads the Ease Monitor data from the client, via the HTTPS transport layer, into local memory for handling in the next steps.
  3. JSON validator: validates that the Ease Monitor data sent from the client conforms to a certain schema. You can use the Ease Monitor graphite validator instead if you would like the pipeline to handle Ease Monitor data in the graphite plaintext protocol.
  4. Ease Monitor JSON GID extractor: extracts the Ease Monitor global ID from the Ease Monitor data. You can use the Ease Monitor graphite GID extractor instead if you would like the pipeline to handle Ease Monitor data in the graphite plaintext protocol.
  5. Kafka output: sends the data to the configured Kafka topic; the Ease Monitor pipeline will fetch it for the rest of the processing.

Use the following Administration API calls to set up the above plugins:

$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "HTTPInput", "config": {"plugin_name": "test-httpinput", "url": "/test", "methods": ["POST"], "headers_enum": {"name": ["bar", "bar1"]}, "request_body_io_key": "HTTP_REQUEST_BODY_IO"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "IOReader", "config": {"plugin_name": "test-ioreader", "input_key":"HTTP_REQUEST_BODY_IO", "output_key": "DATA"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "JSONValidator", "config": {"plugin_name": "test-jsonvalidator", "schema": "{\"title\": \"Record\",\"type\": \"object\",\"properties\": {\"name\": {\"type\": \"string\"}}, \"required\": [\"name\"]}", "data_key": "DATA"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "EaseMonitorJSONGidExtractor", "config": {"plugin_name": "test-jsongidextractor", "gid_key": "GID", "data_key": "DATA"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "KafkaOutput", "config": {"plugin_name": "test-kafkaoutput", "topic": "test", "brokers": ["127.0.0.1:9092"], "message_key_key": "GID", "data_key": "DATA"}}'

Pipeline

You can use the following Administration API call to set up the pipeline:

$ curl https://gateway.megaease.com:9090/admin/v1/pipelines -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "LinearPipeline", "config": {"pipeline_name": "test-jsonpipeline", "plugin_names": ["test-httpinput", "test-ioreader", "test-jsonvalidator", "test-jsongidextractor", "test-kafkaoutput"], "parallelism": 10}}'

Test

Once the pipeline is created (the Administration API above returns HTTP 200), any incoming Ease Monitor data sent to the URL with the right method and headers will be handled in the pipeline and sent to the Kafka topic. You can send test requests with the commands below. The data in the ~/load file is just an example; refer to the Ease Monitor documentation for the complete data format.

$ cat ~/load
{
        "name": "test-workload",
        "system": "ExampleSystem",
        "application": "ExampleApplication",
        "instance": "ExampleInstance",
        "hostname": "ExampleHost",
        "hostipv4": "127.0.0.1"
}
$ LOAD=`cat ~/load`
$ curl -i -k https://gateway.megaease.com:10443/test -X POST -w "\n" -H "name:bar" -d "$LOAD"

5.2 HTTP Traffic Throttling

Currently Ease Gateway supports two kinds of traffic throttling:

  • Throughput rate based throttling. This kind of traffic throttling provides a clear and predictable workload limit on the upstream; any exceeded requests are rejected with a ResultFlowControl internal error. The error is finally translated into a special response to the client; for example, the HTTP input plugin translates it into HTTP status code 429 (StatusTooManyRequests, RFC 6585.4). There are two potential limitations to this throttling method:
    • When the upstream is deployed in a distributed environment, it is hard to set up an accurate throughput rate limit for a single service instance, because requests to the upstream may be dispatched to different instances.
    • When Ease Gateway is deployed in a distributed environment and more than one gateway instance points to the same upstream service, requests may be handled by different gateway instances, so a throughput rate limit in a single gateway instance does not really bound the load on the upstream.
  • Upstream latency based throttling. This kind of traffic throttling provides a sliding window based on upstream handling latency. Much like the TCP flow-control implementation, the size of the sliding window is adjusted according to the upstream's responses: lower latency generally means the upstream is performing better, so more requests can be sent to it quickly. This throttling method solves both potential limitations of the throughput-rate-based method above.
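
The window adjustment can be sketched roughly as follows. Like TCP flow control, it grows the window while latency stays under a threshold and shrinks it when the threshold is exceeded; this is a simplified model with illustrative parameters, not the plugin's actual algorithm:

```python
def adjust_window(window, latency_msec,
                  threshold_msec=150, window_max=10, window_min=1):
    """Additive increase on fast responses, multiplicative decrease on slow ones."""
    if latency_msec <= threshold_msec:
        return min(window + 1, window_max)
    return max(window // 2, window_min)

window = 5  # initial window size
for latency in (80, 90, 100, 400, 70):  # one sample per upstream response
    window = adjust_window(window, latency)
    print(latency, "->", window)        # grows to 8, halves to 4 on 400ms
```

A slow upstream therefore quickly sees less traffic, while a healthy one is probed back up to the maximum window.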

Throughput rate based throttling

In this case, we will prepare a pipeline that shows you how to set up a throughput rate limit for the Ease Monitor edge service. As you can see, we only need to create one extra plugin and add it at the right position in the pipeline.

Plugin

  1. HTTP input: enables an HTTPS endpoint to receive the Ease Monitor data sent from the client.
  2. Throughput rate limiter: adds a limit on the request rate. The limit is enforced regardless of how many concurrent clients send requests. In this case, we require that no more than 11 requests per second are sent to the Ease Monitor pipeline.
  3. IO reader: reads the Ease Monitor data from the client, via the HTTPS transport layer, into local memory for handling in the next steps.
  4. JSON validator: validates that the Ease Monitor data sent from the client conforms to a certain schema.
  5. Ease Monitor JSON GID extractor: extracts the Ease Monitor global ID from the Ease Monitor data. You can use the graphite GID extractor instead if you would like the pipeline to handle Ease Monitor data in the graphite plaintext protocol.
  6. Kafka output: sends the data to the configured Kafka topic; the Ease Monitor pipeline will fetch it for the rest of the processing.
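
The limit in step 2 can be sketched as a token bucket that refills at the configured TPS; this is a simplified model for illustration, not the ThroughputRateLimiter implementation (in the gateway, rejections surface to the client as HTTP 429 via the HTTP input plugin):

```python
class TokenBucket:
    """Token bucket refilled at `tps` tokens per second (simplified model)."""
    def __init__(self, tps, now=0.0):
        self.tps = tps
        self.tokens = float(tps)  # allow up to one second's burst initially
        self.last = now

    def allow(self, now):
        # Refill for the elapsed time, capped at one second's worth of tokens.
        self.tokens = min(self.tokens + (now - self.last) * self.tps, float(self.tps))
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # the gateway would translate this into HTTP 429

limiter = TokenBucket(tps=11)
# 20 requests arriving at the same instant: only 11 pass this second.
results = [limiter.allow(now=1.0) for _ in range(20)]
print(sum(results))  # 11
```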

Use the following Administration API calls to set up the above plugins:

$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "HTTPInput", "config": {"plugin_name": "test-httpinput", "url": "/test", "methods": ["POST"], "headers_enum": {"name": ["bar", "bar1"]}, "request_body_io_key": "HTTP_REQUEST_BODY_IO"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "ThroughputRateLimiter", "config": {"plugin_name": "test-throughputratelimiter", "tps": "11"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "IOReader", "config": {"plugin_name": "test-ioreader", "input_key":"HTTP_REQUEST_BODY_IO", "output_key": "DATA"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "JSONValidator", "config": {"plugin_name": "test-jsonvalidator", "schema": "{\"title\": \"Record\",\"type\": \"object\",\"properties\": {\"name\": {\"type\": \"string\"}}, \"required\": [\"name\"]}", "data_key": "DATA"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "EaseMonitorJSONGidExtractor", "config": {"plugin_name": "test-jsongidextractor", "gid_key": "GID", "data_key": "DATA"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "KafkaOutput", "config": {"plugin_name": "test-kafkaoutput", "topic": "test", "brokers": ["127.0.0.1:9092"], "message_key_key": "GID", "data_key": "DATA"}}'

Pipeline

We need to place the throughput rate limiter plugin at a certain position in the pipeline. In general, we should terminate a request as early as possible, since once the limit is reached there is no reason to run the remaining steps; in this case we add the limiter right after the HTTP input plugin.

You can use the following Administration API call to set up the pipeline:

$ curl https://gateway.megaease.com:9090/admin/v1/pipelines -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "LinearPipeline", "config": {"pipeline_name": "test-jsonpipeline", "plugin_names": ["test-httpinput", "test-throughputratelimiter", "test-ioreader", "test-jsonvalidator", "test-jsongidextractor", "test-kafkaoutput"], "parallelism": 10}}'

Test

$ cat ~/load
{
        "name": "test-workload",
        "system": "ExampleSystem",
        "application": "ExampleApplication",
        "instance": "ExampleInstance",
        "hostname": "ExampleHost",
        "hostipv4": "127.0.0.1"
}
$ LOAD=`cat ~/load`
$ ab -n 100 -c 20 -H "name:bar" -T "application/json" -p ~/load -f TLS1.2 https://gateway.megaease.com:10443/test
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking gateway.megaease.com (be patient).....done

Server Software:
Server Hostname:        gateway.megaease.com
Server Port:            10443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES128-GCM-SHA256,2048,128

Document Path:          /test
Document Length:        0 bytes

Concurrency Level:      20
Time taken for tests:   8.926 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      9700 bytes
Total body sent:        33700
HTML transferred:       0 bytes
Requests per second:    11.20 [#/sec] (mean)
Time per request:       1785.239 [ms] (mean)
Time per request:       89.262 [ms] (mean, across all concurrent requests)
Transfer rate:          1.06 [Kbytes/sec] received
                        3.69 kb/s sent
                        4.75 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        4   49  76.3      5     265
Processing:     9 1546 1232.0   1096    5594
Waiting:        1 1544 1233.5   1095    5593
Total:         34 1595 1228.1   1101    5599

Percentage of the requests served within a certain time (ms)
  50%   1101
  66%   1998
  75%   2000
  80%   2530
  90%   3328
  95%   3800
  98%   5596
  99%   5599
 100%   5599 (longest request)

Upstream latency based throttling

In this case, we will prepare a pipeline that shows you how to set up a latency-based limit for the Ease Monitor edge service. As you can see, we only need to create one extra plugin and add it at the right position in the pipeline, just as we did for throughput rate based throttling above.

Plugin

  1. HTTP input: enables an HTTPS endpoint to receive the Ease Monitor data sent from the client.
  2. Latency based sliding window limiter: adds a sliding window that controls the request rate according to upstream latency. In this case, the window is adjusted based on the latency of the plugins it is configured to watch (here, the Kafka output plugin).
  3. IO reader: reads the Ease Monitor data from the client, via the HTTPS transport layer, into local memory for handling in the next steps.
  4. JSON validator: validates that the Ease Monitor data sent from the client conforms to a certain schema.
  5. Ease Monitor JSON GID extractor: extracts the Ease Monitor global ID from the Ease Monitor data. You can use the graphite GID extractor instead if you would like the pipeline to handle Ease Monitor data in the graphite plaintext protocol.
  6. Kafka output: sends the data to the configured Kafka topic; the Ease Monitor pipeline will fetch it for the rest of the processing.

Use the following Administration API calls to set up the above plugins:

$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "HTTPInput", "config": {"plugin_name": "test-httpinput", "url": "/test", "methods": ["POST"], "headers_enum": {"name": ["bar", "bar1"]}, "request_body_io_key": "HTTP_REQUEST_BODY_IO"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "LatencyWindowLimiter", "config": {"plugin_name": "test-latencywindowlimiter", "latency_threshold_msec": 150, "plugins_concerned": ["test-kafkaoutput"], "window_size_max": 10, "windows_size_init": 5}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "IOReader", "config": {"plugin_name": "test-ioreader", "input_key":"HTTP_REQUEST_BODY_IO", "output_key": "DATA"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "JSONValidator", "config": {"plugin_name": "test-jsonvalidator", "schema": "{\"title\": \"Record\",\"type\": \"object\",\"properties\": {\"name\": {\"type\": \"string\"}}, \"required\": [\"name\"]}", "data_key": "DATA"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "EaseMonitorJSONGidExtractor", "config": {"plugin_name": "test-jsongidextractor", "gid_key": "GID", "data_key": "DATA"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "KafkaOutput", "config": {"plugin_name": "test-kafkaoutput", "topic": "test", "brokers": ["127.0.0.1:9092"], "message_key_key": "GID", "data_key": "DATA"}}'

Pipeline

We need to place the limiter plugin at a certain position in the pipeline. In general, we should terminate a request as early as possible, since once the limit is reached there is no reason to run the remaining steps; in this case we add the limiter right after the HTTP input plugin.

You can use the following Administration API call to set up the pipeline:

$ curl https://gateway.megaease.com:9090/admin/v1/pipelines -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "LinearPipeline", "config": {"pipeline_name": "test-jsonpipeline", "plugin_names": ["test-httpinput", "test-latencywindowlimiter", "test-ioreader", "test-jsonvalidator", "test-jsongidextractor", "test-kafkaoutput"], "parallelism": 10}}'

Test

$ ab -n 100 -c 20 -H "name:bar" -T "application/json" -p ~/load -f TLS1.2 https://gateway.megaease.com:10443/test
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking gateway.megaease.com (be patient)...apr_pollset_poll: The timeout specified has expired (70007)
$ ab -n 100 -c 20 -H "name:bar" -T "application/json" -p ~/load -f TLS1.2 https://gateway.megaease.com:10443/test
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking gateway.megaease.com (be patient).....done

Server Software:
Server Hostname:        gateway.megaease.com
Server Port:            10443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES128-GCM-SHA256,2048,128

Document Path:          /test
Document Length:        0 bytes

Concurrency Level:      20
Time taken for tests:   0.754 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      9700 bytes
Total body sent:        33700
HTML transferred:       0 bytes
Requests per second:    132.65 [#/sec] (mean)
Time per request:       150.772 [ms] (mean)
Time per request:       7.539 [ms] (mean, across all concurrent requests)
Transfer rate:          12.57 [Kbytes/sec] received
                        43.66 kb/s sent
                        56.22 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        4   53  40.6     40     169
Processing:    25   88  28.6     89     157
Waiting:        2   75  37.7     80     151
Total:         55  141  54.0    135     322

Percentage of the requests served within a certain time (ms)
  50%    135
  66%    141
  75%    152
  80%    169
  90%    213
  95%    272
  98%    322
  99%    322
 100%    322 (longest request)

5.3 Service Circuit Breaking

In this case, we will prepare a pipeline that shows you how to add such a service circuit breaking mechanism to the Ease Monitor edge service. As you can see, we only need to create one extra plugin and add it at the right position in the pipeline.

Note: to simulate upstream failure in this example, an assistant plugin is added to the pipeline; you do not need it in a real case.

Plugin

  1. HTTP input: enables an HTTPS endpoint to receive the Ease Monitor data sent from the client.
  2. Service circuit breaker: limits the request rate based on the failure rate that the pass-probability plugin simulates.
  3. IO reader: reads the Ease Monitor data from the client, via the HTTPS transport layer, into local memory for handling in the next steps.
  4. JSON validator: validates that the Ease Monitor data sent from the client conforms to a certain schema. You can use the Ease Monitor graphite validator instead if you would like the pipeline to handle Ease Monitor data in the graphite plaintext protocol.
  5. Ease Monitor JSON GID extractor: extracts the Ease Monitor global ID from the Ease Monitor data. You can use the Ease Monitor graphite GID extractor instead if you would like the pipeline to handle Ease Monitor data in the graphite plaintext protocol.
  6. Static pass probability limiter: passes each request with a fixed probability; in this case it is used to simulate upstream service failure with 50% probability.
  7. Kafka output: sends the data to the configured Kafka topic; the Ease Monitor pipeline will fetch it for the rest of the processing.
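
The breaker in step 2 can be sketched as a small state machine: closed (pass), open (reject), and half-open (probe). The thresholds loosely mirror the failure_tps_threshold_to_break and recovery_time_msec fields used below, but this is a simplified model for illustration, not the plugin's implementation:

```python
class CircuitBreaker:
    """closed -> open after enough failures; open -> half-open once
    recovery_time has passed; half-open -> closed on a success,
    back to open on a failure (simplified model)."""
    def __init__(self, failure_threshold=1, recovery_time=0.5):
        self.failure_threshold = failure_threshold
        self.recovery_time = recovery_time
        self.state = "closed"
        self.failures = 0
        self.opened_at = 0.0

    def allow(self, now):
        if self.state == "open":
            if now - self.opened_at >= self.recovery_time:
                self.state = "half-open"  # let one probe request through
                return True
            return False  # reject directly, protecting the upstream
        return True

    def record(self, success, now):
        if success:
            self.failures = 0
            self.state = "closed"
        else:
            self.failures += 1
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"
                self.opened_at = now

breaker = CircuitBreaker()
breaker.record(success=False, now=0.0)  # upstream call failed -> breaker opens
print(breaker.allow(now=0.1))           # False: rejected while open
print(breaker.allow(now=0.7))           # True: half-open probe after 0.5s
```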

Use the following Administration API calls to set up the above plugins:

$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "HTTPInput", "config": {"plugin_name": "test-httpinput", "url": "/test", "methods": ["POST"], "headers_enum": {"name": ["bar", "bar1"]}, "request_body_io_key": "HTTP_REQUEST_BODY_IO"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "ServiceCircuitBreaker", "config": {"plugin_name": "test-servicecircuitbreaker", "plugins_concerned": ["test-staticprobabilitylimiter"], "all_tps_threshold_to_enable": 1, "failure_tps_threshold_to_break": 1, "failure_tps_percent_threshold_to_break": -1, "recovery_time_msec": 500, "success_tps_threshold_to_open": 1}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "IOReader", "config": {"plugin_name": "test-ioreader", "input_key":"HTTP_REQUEST_BODY_IO", "output_key": "DATA"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "JSONValidator", "config": {"plugin_name": "test-jsonvalidator", "schema": "{\"title\": \"Record\",\"type\": \"object\",\"properties\": {\"name\": {\"type\": \"string\"}}, \"required\": [\"name\"]}", "data_key": "DATA"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "EaseMonitorJSONGidExtractor", "config": {"plugin_name": "test-jsongidextractor", "gid_key": "GID", "data_key": "DATA"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "StaticProbabilityLimiter", "config": {"plugin_name": "test-staticprobabilitylimiter", "pass_pr": 0.5}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "KafkaOutput", "config": {"plugin_name": "test-kafkaoutput", "topic": "test", "brokers": ["127.0.0.1:9092"], "message_key_key": "GID", "data_key": "DATA"}}'
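
The JSON validator plugin above rejects any request body that does not satisfy the configured schema before later steps run. As a minimal sketch of the check that schema expresses (hand-rolled with the standard library rather than a real JSON-Schema engine, so the function name is purely illustrative):

```python
import json

def validate_record(raw):
    """Return True if the body satisfies the schema used above:
    a JSON object whose required "name" property is a string."""
    try:
        data = json.loads(raw)
    except ValueError:
        return False  # not valid JSON at all
    return isinstance(data, dict) and isinstance(data.get("name"), str)
```

A conforming body such as the Ease Monitor workload record passes, while a body missing "name" (or a non-JSON payload) is rejected.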

Pipeline

We need to place the limiter plugin at a suitable position in the pipeline. In general, a request should be terminated as early as possible, because once the breaker trips there is no reason to run the remaining steps; in this case we place the limiter close to the HTTP input plugin.

You can use the following Administration API calls to set up the pipeline:

$ curl https://gateway.megaease.com:9090/admin/v1/pipelines -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "LinearPipeline", "config": {"pipeline_name": "test-jsonpipeline", "plugin_names": ["test-httpinput", "test-servicecircuitbreaker", "test-ioreader", "test-jsonvalidator", "test-jsongidextractor", "test-staticprobabilitylimiter", "test-kafkaoutput"], "parallelism": 10}}'
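
A LinearPipeline runs its plugins strictly in the configured order, and a task stops early as soon as one step fails, which is why plugin order matters. A rough sketch of that control flow (the plugin callables and result names are illustrative stand-ins, not the gateway's actual types):

```python
def run_linear_pipeline(plugins, task):
    """Run plugins in order; stop at the first failure (illustrative sketch)."""
    for plugin in plugins:
        result = plugin(task)          # each plugin reads/writes task data
        if result != "ResultOK":       # e.g. limiter or validator rejected
            return result              # remaining plugins are skipped
    return "ResultOK"

# Trace which steps actually execute when the second plugin rejects the task.
executed = []
def ok(task): executed.append("ok"); return "ResultOK"
def reject(task): executed.append("reject"); return "ResultFlowControl"
def never(task): executed.append("never"); return "ResultOK"
```

Running `run_linear_pipeline([ok, reject, never], {})` returns the failure code, and the third plugin never executes, mirroring why the limiter sits near the front of the pipeline above.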

Test

$ ab -n 1000 -c 20 -H "name:bar" -T "application/json" -p ~/load -f TLS1.2 https://gateway.megaease.com:10443/test
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking gateway.megaease.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests

Server Software:
Server Hostname:        gateway.megaease.com
Server Port:            10443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES128-GCM-SHA256,2048,128

Document Path:          /test
Document Length:        0 bytes

Concurrency Level:      20
Time taken for tests:   8.208 seconds
Complete requests:      1000
Failed requests:        0
Non-2xx responses:      560
Total transferred:      105400 bytes
Total body sent:        337000
HTML transferred:       0 bytes
Requests per second:    121.84 [#/sec] (mean)
Time per request:       164.151 [ms] (mean)
Time per request:       8.208 [ms] (mean, across all concurrent requests)
Transfer rate:          12.54 [Kbytes/sec] received
                        40.10 kb/s sent
                        52.64 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        4   91  46.4     94     238
Processing:     1   71  51.0     62     350
Waiting:        1   22  48.1      2     339
Total:          6  162  54.3    160     383

Percentage of the requests served within a certain time (ms)
  50%    160
  66%    173
  75%    188
  80%    196
  90%    228
  95%    262
  98%    289
  99%    323
 100%    383 (longest request)

After updating the recovery_time_msec option (which generally corresponds to the MTTR) from 500 milliseconds to 5 seconds, we can see that the service circuit breaker blocks requests to the failed upstream service efficiently.
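
The options above map onto a classic three-state breaker: it trips open once failures cross failure_tps_threshold_to_break, rejects traffic for recovery_time_msec, then lets probe requests through (half-open) until success_tps_threshold_to_open closes it again. A simplified sketch of that state machine, assuming raw counts for brevity (the real plugin measures TPS over a window, so the counting here is only illustrative):

```python
import time

class CircuitBreaker:
    """Illustrative closed/open/half-open breaker keyed on raw counts."""
    def __init__(self, failure_threshold, success_threshold, recovery_time_sec):
        self.failure_threshold = failure_threshold
        self.success_threshold = success_threshold
        self.recovery_time_sec = recovery_time_sec
        self.state = "closed"
        self.failures = 0
        self.successes = 0
        self.opened_at = 0.0

    def allow(self, now=None):
        now = time.time() if now is None else now
        if self.state == "open":
            if now - self.opened_at >= self.recovery_time_sec:
                self.state = "half-open"   # recovery time passed: probe upstream
                self.successes = 0
            else:
                return False               # still breaking: reject fast
        return True

    def record(self, success, now=None):
        now = time.time() if now is None else now
        if success:
            if self.state == "half-open":
                self.successes += 1
                if self.successes >= self.success_threshold:
                    self.state = "closed"  # upstream recovered
                    self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.state = "open"        # trip and remember when
                self.opened_at = now
```

With a 5-second recovery time, requests arriving within 5 seconds of the trip are rejected immediately, which is exactly why the second ab run above shows far more non-2xx responses and much lower processing times.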

$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X PUT -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "ServiceCircuitBreaker", "config": {"plugin_name": "test-servicecircuitbreaker", "plugins_concerned": ["test-staticprobabilitylimiter"], "all_tps_threshold_to_enable": 1, "failure_tps_threshold_to_break": 1, "failure_tps_percent_threshold_to_break": -1, "recovery_time_msec": 5000, "success_tps_threshold_to_open": 1}}'
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
X-Powered-By: go-json-rest
Date: Fri, 31 Mar 2017 10:13:36 GMT
Content-Length: 0

$ ab -n 1000 -c 20 -H "name:bar" -T "application/json" -p ~/load -f TLS1.2 https://gateway.megaease.com:10443/test
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking gateway.megaease.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests

Server Software:
Server Hostname:        gateway.megaease.com
Server Port:            10443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES128-GCM-SHA256,2048,128

Document Path:          /test
Document Length:        0 bytes

Concurrency Level:      20
Time taken for tests:   5.988 seconds
Complete requests:      1000
Failed requests:        0
Non-2xx responses:      923
Total transferred:      110845 bytes
Total body sent:        337000
HTML transferred:       0 bytes
Requests per second:    167.00 [#/sec] (mean)
Time per request:       119.764 [ms] (mean)
Time per request:       5.988 [ms] (mean, across all concurrent requests)
Transfer rate:          18.08 [Kbytes/sec] received
                        54.96 kb/s sent
                        73.04 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        4  108  31.0    118     144
Processing:     1   10  21.5      2     147
Waiting:        1    3  12.9      1     146
Total:          5  118  31.5    121     274

Percentage of the requests served within a certain time (ms)
  50%    121
  66%    124
  75%    128
  80%    132
  90%    138
  95%    154
  98%    187
  99%    200
 100%    274 (longest request)

5.4 HTTP Streamy Proxy

In this case, you can see how a pipeline acts as an HTTP/HTTPS proxy between the input and the upstream. There is no buffering or unnecessary operation in the middle.
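
"Streamy" here means the request and response bodies are handed between client and upstream as IO objects and piped through in chunks, never collected into a whole-payload buffer, so memory use stays flat regardless of body size. A sketch of that chunked piping with the standard library (the chunk size is an arbitrary choice, and the BytesIO objects merely stand in for the HTTP transport):

```python
import io
import shutil

def pipe_body(src, dst, chunk_size=64 * 1024):
    """Copy src to dst in fixed-size chunks instead of reading it whole."""
    shutil.copyfileobj(src, dst, length=chunk_size)

request_body = io.BytesIO(b"x" * 200000)   # stands in for the client body IO
upstream = io.BytesIO()                    # stands in for the upstream socket
pipe_body(request_body, upstream)
```

At no point does the proxy hold more than one chunk of the 200 KB body in memory, which is the property the HTTP input/output plugin pair preserves by exchanging body IO keys rather than buffers.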

Plugin

  1. HTTP input: Enables an HTTP endpoint to receive RESTful requests for the upstream service.
  2. HTTP output: Sends the body and headers to a certain endpoint of the upstream RESTful service.

Use the following Administration API calls to set up the plugins above:

$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "HTTPInput", "config": {"plugin_name": "test-httpinput", "url": "/test", "methods": ["GET"], "headers_enum": {"name": ["bar", "bar1"]}, "request_body_io_key": "HTTP_REQUEST_BODY_IO", "response_code_key": "response_code", "response_body_io_key": "HTTP_RESP_BODY_IO"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "HTTPOutput", "config": {"plugin_name": "test-httpoutput", "url_pattern": "https://gateway.megaease.com:1122/abc", "header_patterns": {}, "method": "POST", "response_code_key": "response_code", "response_body_io_key": "HTTP_RESP_BODY_IO", "request_body_io_key": "HTTP_REQUEST_BODY_IO" }}'

Pipeline

You can use the following Administration API calls to set up the pipeline:

$ curl https://gateway.megaease.com:9090/admin/v1/pipelines -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "LinearPipeline", "config": {"pipeline_name": "test-jsonpipeline", "plugin_names": ["test-httpinput", "test-httpoutput"], "parallelism": 10}}'

Test

A fake HTTP server that logs requests, used for the demo.

$ cat ~/server2.py
from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer

class WebServerHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path.endswith("/abc"):
            content_len = int(self.headers.get('Content-Length', 0))
            post_body = self.rfile.read(content_len)
            print content_len, post_body
            self.send_response(200)
            self.send_header('Content-type', 'text/html')
            self.end_headers()
            message = "<html><body>OK</body></html>"
            self.wfile.write(message)
            return

def main():
    try:
        port = 1122
        server = HTTPServer(('gateway.megaease.com', port), WebServerHandler)
        print "Web Server running on port %s" % port
        server.serve_forever()
    except KeyboardInterrupt:
        print " ^C entered, stopping web server...."
        server.socket.close()

main()

$ python ~/server2.py
Web Server running on port 1122
185 {
    "name": "test-workload",
    "system": "ExampleSystem",
    "application": "ExampleApplication",
    "instance": "ExampleInstance",
    "hostname": "ExampleHost",
    "hostipv4": "127.0.0.1"
}
gateway.megaease.com - - [04/Apr/2017 15:22:14] "POST /abc HTTP/1.1" 200 -

Send client requests to the proxy endpoint created by the commands above.

$ curl -i -k https://gateway.megaease.com:10443/test -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 200 OK
Date: Tue, 04 Apr 2017 07:22:14 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>OK</body></html>

5.5 HTTP Proxy with Load Routing

In this case, you can see how three pipelines work together: the downstream pipeline receives HTTP/HTTPS requests and forwards each of them to one of two upstream pipelines, selected by the round_robin or weighted_round_robin policy. An upstream timeout case is covered as well.

Plugin

  1. HTTP input: Enables an HTTP endpoint to receive RESTful requests for the upstream service.
  2. HTTP output: Sends the body and headers to a certain endpoint of the upstream RESTful service.
  3. Upstream output: Outputs the request to an upstream pipeline and waits for the response.
  4. Downstream input: Handles a downstream request to the running pipeline as input and sends the response back.

Use the following Administration API calls to set up the plugins above:

# For upstream #1
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "DownstreamInput", "config": {"plugin_name": "test-downstreamintpu1", "response_data_keys": ["response_code", "HTTP_RESP_BODY_IO"] }}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "HTTPOutput", "config": {"plugin_name": "test-httpoutput1", "url_pattern": "https://gateway.megaease.com:1122/abc", "header_patterns": {}, "method": "POST", "response_code_key": "response_code", "response_body_io_key": "HTTP_RESP_BODY_IO", "request_body_io_key": "HTTP_REQUEST_BODY_IO", "close_body_after_pipeline": false}}'

# For upstream #2
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "DownstreamInput", "config": {"plugin_name": "test-downstreamintpu2", "response_data_keys": ["response_code", "HTTP_RESP_BODY_IO"] }}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "HTTPOutput", "config": {"plugin_name": "test-httpoutput2", "url_pattern": "https://gateway.megaease.com:3344/abc", "header_patterns": {}, "method": "POST", "response_code_key": "response_code", "response_body_io_key": "HTTP_RESP_BODY_IO", "request_body_io_key": "HTTP_REQUEST_BODY_IO", "close_body_after_pipeline": false}}'

# For downstream, round_robin policy is used at this time
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "HTTPInput", "config": {"plugin_name": "test-httpinput", "url": "/test", "methods": ["GET"], "headers_enum": {"name": ["bar", "bar1"]}, "request_body_io_key": "HTTP_REQUEST_BODY_IO", "response_code_key": "response_code", "response_body_io_key": "HTTP_RESP_BODY_IO"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "UpstreamOutput", "config": {"plugin_name": "test-upstreamoutput1", "target_pipelines": ["test-upstream1", "test-upstream2"], "request_data_keys": ["HTTP_REQUEST_BODY_IO"], "route_policy": "round_robin"}}'

Pipeline

You can use the following Administration API calls to set up the pipelines:

# For upstream #1
$ curl https://gateway.megaease.com:9090/admin/v1/pipelines -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "LinearPipeline", "config": {"pipeline_name": "test-upstream1", "plugin_names": ["test-downstreamintpu1", "test-httpoutput1"], "parallelism": 10}}'

# For upstream #2
$ curl https://gateway.megaease.com:9090/admin/v1/pipelines -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "LinearPipeline", "config": {"pipeline_name": "test-upstream2", "plugin_names": ["test-downstreamintpu2", "test-httpoutput2"], "parallelism": 10}}'

# For downstream
$ curl https://gateway.megaease.com:9090/admin/v1/pipelines -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "LinearPipeline", "config": {"pipeline_name": "intput", "plugin_names": ["test-httpinput", "test-upstreamoutput1"], "parallelism": 10}}'

Test

A fake HTTP server that serves the requests, used for the demo.

$ cat ~/server3.py
from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer
import sys

class WebServerHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path.endswith("/abc"):
            content_len = int(self.headers.get('Content-Length', 0))
            post_body = self.rfile.read(content_len)
            print content_len, post_body
            self.send_response(200)
            self.send_header('Content-type', 'text/html')
            self.end_headers()
            message = "<html><body>OK - %s</body></html>" % sys.argv[1]
            self.wfile.write(message)
            return

def main():
    try:
        port = int(sys.argv[1])
        server = HTTPServer(('gateway.megaease.com', port), WebServerHandler)
        print "Web Server running on port %s" % port
        server.serve_forever()
    except KeyboardInterrupt:
        print " ^C entered, stopping web server...."
        server.socket.close()

main()

$ python ~/server3.py 1122
$ python ~/server3.py 3344 # in a different terminal

Send client requests to the proxy endpoint created by the commands above.

$ curl -i -k https://gateway.megaease.com:10080/test -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 12 Jun 2017 06:24:59 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>OK - 1122</body></html>
$ curl -i -k https://gateway.megaease.com:10080/test -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 12 Jun 2017 06:25:00 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>OK - 3344</body></html>
$ curl -i -k https://gateway.megaease.com:10080/test -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 12 Jun 2017 06:25:00 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>OK - 1122</body></html>
$ curl -i -k https://gateway.megaease.com:10080/test -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 12 Jun 2017 06:25:01 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>OK - 3344</body></html>
$ curl -i -k https://gateway.megaease.com:10080/test -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 12 Jun 2017 06:25:01 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>OK - 1122</body></html>
$ curl -i -k https://gateway.megaease.com:10080/test -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 12 Jun 2017 06:25:02 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>OK - 3344</body></html>

Let's take a look at the result with the weighted_round_robin policy.

$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X PUT -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "UpstreamOutput", "config": {"plugin_name": "test-upstreamoutput1", "target_pipelines": ["test-upstream1", "test-upstream2"], "request_data_keys": ["HTTP_REQUEST_BODY_IO"], "route_policy": "weighted_round_robin", "target_weights": [2, 1]}}'

$ curl -i -k https://gateway.megaease.com:10080/test -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 12 Jun 2017 06:26:22 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>OK - 1122</body></html>
$ curl -i -k https://gateway.megaease.com:10080/test -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 12 Jun 2017 06:26:23 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>OK - 1122</body></html>
$ curl -i -k https://gateway.megaease.com:10080/test -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 12 Jun 2017 06:26:24 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>OK - 3344</body></html>
$ curl -i -k https://gateway.megaease.com:10080/test -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 12 Jun 2017 06:26:25 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>OK - 1122</body></html>
$ curl -i -k https://gateway.megaease.com:10080/test -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 12 Jun 2017 06:26:25 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>OK - 1122</body></html>
$ curl -i -k https://gateway.megaease.com:10080/test -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 12 Jun 2017 06:26:26 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>OK - 3344</body></html>

Note:

  • Under the weighted_round_robin policy, an upstream with a zero weight gets no chance to handle requests from the downstream. Therefore, if you give a weight list containing only zero values, the gateway rejects the plugin creation or update request with a proper error as a fast failure. For example:
    
    $ curl https://gateway.megaease.com:9090/admin/v1/plugins -X PUT -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "UpstreamOutput", "config": {"plugin_name": "test-upstreamoutput1", "target_pipelines": ["test-upstream1", "test-upstream2"], "request_data_keys": ["HTTP_REQUEST_BODY_IO"], "route_policy": "weighted_round_robin", "target_weights": [0, 0]}}'
    HTTP/1.1 400 Bad Request
    Content-Type: application/json; charset=utf-8
    X-Powered-By: go-json-rest
    Date: Mon, 12 Jun 2017 06:25:29 GMT
    Content-Length: 91

{"Error":"invalid target pipeline weights, one of them should be greater or equal to zero"}


  • The default weight of each upstream under the weighted_round_robin policy is 1, which makes the behavior equal to the round_robin policy.
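
Under the hood, weighted_round_robin with target_weights [2, 1] hands two of every three requests to the first upstream. A minimal sketch of such a selector, including the all-zero-weight fast failure described above (expanding the weight list as done here is the simplest scheme; real implementations often use smooth weighted round robin instead):

```python
import itertools

def weighted_round_robin(targets, weights):
    """Yield targets in proportion to their weights (illustrative sketch)."""
    if all(w == 0 for w in weights):
        raise ValueError("invalid target pipeline weights")  # fast failure
    expanded = [t for t, w in zip(targets, weights) for _ in range(w)]
    return itertools.cycle(expanded)

picker = weighted_round_robin(["test-upstream1", "test-upstream2"], [2, 1])
first_six = [next(picker) for _ in range(6)]
```

Over six requests this yields test-upstream1 four times and test-upstream2 twice, matching the 1122/1122/3344 pattern in the curl transcript above.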

Next, let's see a Blue/Green deployment case. This time we need to use the filter policy and provide a proper condition option for it. The data under the key QUERY_STRING is supplied by the HTTP input plugin; you can find the details in the plugin reference document.

$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X PUT -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "UpstreamOutput", "config": {"plugin_name": "test-upstreamoutput1", "target_pipelines": ["test-upstream1", "test-upstream2"], "request_data_keys": ["HTTP_REQUEST_BODY_IO"], "route_policy": "filter", "filter_conditions": [{"QUERY_STRING":"release=green"}, {"QUERY_STRING":"release=blue"}]}}'

$ curl -i -k https://gateway.megaease.com:10080/test?release=green -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 200 OK
Date: Tue, 13 Jun 2017 08:02:54 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>OK - 1122</body></html>
$ curl -i -k https://gateway.megaease.com:10080/test?release=blue -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 200 OK
Date: Tue, 13 Jun 2017 08:03:02 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>OK - 3344</body></html>
$ curl -i -k https://gateway.megaease.com:10080/test?release=green -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 200 OK
Date: Tue, 13 Jun 2017 08:02:55 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>OK - 1122</body></html>
$ curl -i -k https://gateway.megaease.com:10080/test?release=blue -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 200 OK
Date: Tue, 13 Jun 2017 08:03:04 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>OK - 3344</body></html>
$ curl -i -k https://gateway.megaease.com:10080/test?release=shit -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 503 Service Unavailable
Date: Tue, 13 Jun 2017 08:03:24 GMT
Content-Type: text/plain; charset=utf-8
Transfer-Encoding: chunked
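
The filter policy matches the request's QUERY_STRING value against the condition list in order and routes to the pipeline whose conditions match; when nothing matches, as in the last request above, there is no target and the gateway answers 503. A rough sketch of that selection (the condition grammar is simplified here to exact equality, and returning None for the 503 case is only a stand-in for the gateway's behavior):

```python
def route_by_filter(task_data, target_pipelines, filter_conditions):
    """Pick the pipeline whose conditions all match the task data;
    return None when no condition set matches (illustrative sketch)."""
    for pipeline, conditions in zip(target_pipelines, filter_conditions):
        if all(task_data.get(k) == v for k, v in conditions.items()):
            return pipeline
    return None  # caller answers 503 Service Unavailable

targets = ["test-upstream1", "test-upstream2"]
conditions = [{"QUERY_STRING": "release=green"},
              {"QUERY_STRING": "release=blue"}]
```

With the config above, release=green always lands on the first upstream (port 1122) and release=blue on the second (port 3344), which makes switching traffic between the two releases a single plugin update.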

Finally, let's check an upstream timeout case.

A fake HTTP server that serves the requests, used for the demo; this time we add a 5-second sleep to simulate an upstream response timeout.

$ cat ~/server3.py

from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer
import sys
import time

class WebServerHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        time.sleep(5)
        if self.path.endswith("/abc"):
            content_len = int(self.headers.get('Content-Length', 0))
            post_body = self.rfile.read(content_len)
            print content_len, post_body
            self.send_response(200)
            self.send_header('Content-type', 'text/html')
            self.end_headers()
            message = "<html><body>OK - %s</body></html>" % sys.argv[1]
            self.wfile.write(message)
            return

def main():
    try:
        port = int(sys.argv[1])
        server = HTTPServer(('gateway.megaease.com', port), WebServerHandler)
        print "Web Server running on port %s" % port
        server.serve_forever()
    except KeyboardInterrupt:
        print " ^C entered, stopping web server...."
        server.socket.close()

main()

$ python ~/server3.py 1122
$ python ~/server3.py 3344 # in a different terminal

Send client requests to the proxy endpoint created by the commands above.

$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X PUT -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "UpstreamOutput", "config": {"plugin_name": "test-upstreamoutput1", "target_pipelines": ["test-upstream1", "test-upstream2"], "request_data_keys": ["HTTP_REQUEST_BODY_IO"], "route_policy": "round_robin", "timeout_sec": 2}}'

$ curl -i -k https://gateway.megaease.com:10080/test -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 503 Service Unavailable
Date: Mon, 12 Jun 2017 06:40:41 GMT
Content-Type: text/plain; charset=utf-8
Transfer-Encoding: chunked

At that moment we can see the gateway server output logs like the following.

WARN[2017-06-12T14:42:02+08:00] [plugin test-upstreamoutput1 in pipeline intput execution failure, resultcode=503, error="upstream is timeout after 2 second(s)"]  source="linear.go#186-model.(*linearPipeline).Run"
WARN[2017-06-12T14:42:02+08:00] [http request processed unsuccesfully, result code: 503, error: upstream is timeout after 2 second(s)]  source="http_input.go#382-plugins.(*httpInput).receive.func3"
ERRO[2017-06-12T14:42:05+08:00] [respond downstream pipeline test-upstreamoutput1 failed: request from pipeline test-upstreamoutput1 was closed]  source="downstream_input.go#92-plugins.(*downstreamInput).Run.func1"
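
The timeout_sec option bounds how long the downstream pipeline waits for the upstream pipeline's response before answering 503; the third log line shows the upstream finally finishing after the waiter has already gone away. A sketch of that bounded wait using a queue (the real plugins exchange data through pipeline machinery, not a bare queue, so this merely illustrates the timing):

```python
import queue
import threading
import time

def call_upstream(work, timeout_sec):
    """Run work in another thread; give up after timeout_sec (sketch)."""
    replies = queue.Queue()
    threading.Thread(target=lambda: replies.put(work()), daemon=True).start()
    try:
        return 200, replies.get(timeout=timeout_sec)
    except queue.Empty:
        return 503, "upstream is timeout after %s second(s)" % timeout_sec

def slow_upstream():
    time.sleep(0.2)           # stands in for the 5-second sleep above
    return "<html><body>OK</body></html>"
```

When the deadline is shorter than the upstream's sleep, the caller gets a 503 immediately at the deadline, while the upstream thread still completes later, exactly as the ERRO log records.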

5.6 HTTP Proxy with Caching

In this case, you can see how a pipeline acts as an HTTP/HTTPS proxy and how to add a cache layer between the input and the upstream. The cache is used to improve the performance of the RESTful service automatically.

Plugin

  1. HTTP input: Enables an HTTP endpoint to receive RESTful requests for the upstream service.
  2. Simple common cache: Caches the HTTP body the upstream service responded with. In this case the body buffer is cached for 10 seconds. During the TTL (Time-To-Live), any request that hits the cache renews the expiration time of the body buffer automatically.
  3. Simple common cache: Caches the HTTP status code the upstream service responded with. The status code is also cached for 10 seconds in this case. As with the body buffer, any request that hits the cache during the TTL renews the expiration time of the status code as well.
  4. IO reader: Reads data received from the client via the HTTP transport layer into local memory, acting as a proxy.
  5. HTTP output: Sends the body and headers to a certain endpoint of the upstream RESTful service.
  6. IO reader: Reads response data from the upstream service via the HTTP transport layer into local memory; the body buffer can be cached and is responded to the client in any case.
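
The renew-on-hit TTL behavior described for the simple common cache plugins can be sketched as a small map whose entries get a fresh expiration every time they are read (the class and key names here are illustrative; only the ttl_sec semantics follow the config below):

```python
import time

class TTLCache:
    """Cache whose entries expire ttl_sec after their last hit (sketch)."""
    def __init__(self, ttl_sec):
        self.ttl_sec = ttl_sec
        self.entries = {}   # key -> (value, expire_at)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        hit = self.entries.get(key)
        if hit is None or hit[1] <= now:
            self.entries.pop(key, None)   # expired or absent: miss
            return None
        self.entries[key] = (hit[0], now + self.ttl_sec)  # renew on hit
        return hit[0]

    def put(self, key, value, now=None):
        now = time.time() if now is None else now
        self.entries[key] = (value, now + self.ttl_sec)
```

So a response cached at t=0 with a 10-second TTL and hit at t=9 stays alive until t=19; an entry that is never hit again simply expires.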

Use the following Administration API calls to set up the plugins above:

$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "HTTPInput", "config": {"plugin_name": "test-httpinput", "url": "/test", "methods": ["GET"], "headers_enum": {"name": ["bar", "bar1"]}, "request_body_io_key": "HTTP_REQUEST_BODY_IO", "response_code_key": "response_code", "response_body_buffer_key": "DATA"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "SimpleCommonCache", "config": {"plugin_name": "test-simplecommoncache-body", "hit_keys": ["HTTP_NAME"], "cache_key": "DATA", "ttl_sec": 10, "finish_if_hit": false}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "SimpleCommonCache", "config": {"plugin_name": "test-simplecommoncache-code", "hit_keys": ["HTTP_NAME"], "cache_key": "response_code", "ttl_sec": 10, "finish_if_hit": true}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "IOReader", "config": {"plugin_name": "test-ioreader1", "input_key":"HTTP_REQUEST_BODY_IO", "output_key": "REQ_DATA"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "HTTPOutput", "config": {"plugin_name": "test-httpoutput", "url_pattern": "https://gateway.megaease.com:1122/abc{fake}", "header_patterns": {}, "method": "GET", "response_code_key": "response_code", "response_body_io_key": "HTTP_RESP_BODY_IO", "request_body_buffer_pattern": "{REQ_DATA}"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "IOReader", "config": {"plugin_name": "test-ioreader2", "input_key":"HTTP_RESP_BODY_IO", "output_key": "DATA"}}'

Pipeline

We need to place both cache plugins at a suitable position in the pipeline. In this case we would like to respond to the client from the cache as soon as possible, since once the cache is hit there is no reason to run the remaining steps, so we place the cache plugins close to the HTTP input plugin.

You can use the following Administration API calls to set up the pipeline:

$ curl https://gateway.megaease.com:9090/admin/v1/pipelines -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "LinearPipeline", "config": {"pipeline_name": "test-jsonpipeline", "plugin_names": ["test-httpinput", "test-simplecommoncache-body", "test-simplecommoncache-code", "test-ioreader1", "test-httpoutput", "test-ioreader2"], "parallelism": 10}}'

Test

A fake HTTP server that logs requests, used for the demo.

$ cat ~/server.py
from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer

def main():
    try:
        port = 1122
        server = HTTPServer(('gateway.megaease.com', port), BaseHTTPRequestHandler)
        print "Web Server running on port %s" % port
        server.serve_forever()
    except KeyboardInterrupt:
        print " ^C entered, stopping web server...."
        server.socket.close()

main()

$ python ~/server.py
Web Server running on port 1122
gateway.megaease.com - - [31/Mar/2017 22:56:37] code 501, message Unsupported method ('GET')
gateway.megaease.com - - [31/Mar/2017 22:56:37] "GET /abc/ HTTP/1.1" 501 -
gateway.megaease.com - - [31/Mar/2017 22:56:56] code 501, message Unsupported method ('GET')
gateway.megaease.com - - [31/Mar/2017 22:56:56] "GET /abc/ HTTP/1.1" 501 -

Send client requests to the proxy endpoint created by the commands above. You can compare the timestamps in the fake server output with those in the curl outputs.

$ curl -i -k https://gateway.megaease.com:10443/test -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 501 Not Implemented
Date: Fri, 31 Mar 2017 14:56:37 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<head>
<title>Error response</title>
</head>
<body>
<h1>Error response</h1>
<p>Error code 501.
<p>Message: Unsupported method ('GET').
<p>Error code explanation: 501 = Server does not support this operation.
</body>

$ curl -i -k https://gateway.megaease.com:10443/test -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 501 Not Implemented
Date: Fri, 31 Mar 2017 14:56:39 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<head>
<title>Error response</title>
</head>
<body>
<h1>Error response</h1>
<p>Error code 501.
<p>Message: Unsupported method ('GET').
<p>Error code explanation: 501 = Server does not support this operation.
</body>

$ curl -i -k https://gateway.megaease.com:10443/test -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 501 Not Implemented
Date: Fri, 31 Mar 2017 14:56:56 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<head>
<title>Error response</title>
</head>
<body>
<h1>Error response</h1>
<p>Error code 501.
<p>Message: Unsupported method ('GET').
<p>Error code explanation: 501 = Server does not support this operation.
</body>

$ curl -i -k https://gateway.megaease.com:10443/test -X GET -i -w "\n" -H "name:bar" -d "$LOAD"
HTTP/1.1 501 Not Implemented
Date: Fri, 31 Mar 2017 14:57:02 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<head>
<title>Error response</title>
</head>
<body>
<h1>Error response</h1>
<p>Error code 501.
<p>Message: Unsupported method ('GET').
<p>Error code explanation: 501 = Server does not support this operation.

5.7 Service Downgrading to Protect Critical Service

In this case, we would like to show a way to make a failed upstream service return mock data instead of exposing the failure to the client directly. Typically the downstream on the client side is a critical service: even though it depends on the upstream, the business the upstream provides is not essential and can be skipped when issues happen there, such as loading customer comments for a commodity display page.

Note: To simulate upstream failure in the example, an assistant plugin is added to the pipeline; you do not need it in a real case.

Plugins

  1. HTTP input: To enable an HTTPS endpoint to receive Ease Monitor data sent from the client.
  2. Simple common mock: To return mock data for the failed Ease Monitor service at upstream.
  3. Static pass probability limiter: The plugin passes the request with a fixed probability; in this case it is used to simulate upstream service failure with 50% probability.
  4. IO reader: To read Ease Monitor data from the client via the HTTPS transport layer into local memory for handling in the next steps.
  5. JSON validator: To validate that the Ease Monitor data sent from the client conforms to a certain schema. You can use the Ease Monitor graphite validator if you would like the pipeline to handle Ease Monitor data with the graphite plaintext protocol.
  6. Ease Monitor JSON GID extractor: To extract the Ease Monitor global ID from the Ease Monitor data. You can use the Ease Monitor graphite GID extractor if you would like the pipeline to handle Ease Monitor data with the graphite plaintext protocol.
  7. Kafka output: To send the data to the configured Kafka topic; the Ease Monitor pipeline will fetch it for the rest of the processing.
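The downgrading behavior of this pipeline can be sketched in a few lines of Python. This is a simplified model of the net effect, not Ease Gateway code: a probabilistic limiter step fails some fraction of tasks with a ResultFlowControl error, and a mock step catches exactly that failure and substitutes fake data instead of propagating the error to the client.

```python
import random

def static_probability_limiter(task, pass_pr, rng=random.random):
    # Pass the task with a fixed probability; otherwise flag a
    # flow-control failure (simulating the failing upstream).
    if rng() < pass_pr:
        return task
    task["error"] = "ResultFlowControl"
    return task

def simple_common_mock(task, data_key, mock_value):
    # If the concerned step failed with ResultFlowControl, substitute
    # mock data instead of exposing the failure to the client.
    if task.get("error") == "ResultFlowControl":
        task.pop("error")
        task[data_key] = mock_value  # e.g. {"example": "fake"}
    return task

def handle(task, pass_pr=0.5):
    task = static_probability_limiter(task, pass_pr)
    return simple_common_mock(task, "example", "fake")

# Every request succeeds from the client's point of view: either the
# real pipeline ran, or mock data was substituted.
results = [handle({}) for _ in range(100)]
assert all("error" not in t for t in results)
```

In the real pipeline the mock plugin is configured with `plugin_concerned` and `task_error_code_concerned` to decide which failures it covers; the string keys and values above mirror the example configuration only.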

Use the following Administration API calls to set up the above plugins:

$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "HTTPInput", "config": {"plugin_name": "test-httpinput", "url": "/test", "methods": ["POST"], "headers_enum": {"name": ["bar", "bar1"]}, "request_body_io_key": "HTTP_REQUEST_BODY_IO"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "SimpleCommonMock", "config": {"plugin_name": "test-simplecommonmock", "plugin_concerned": "test-staticprobabilitylimiter", "task_error_code_concerned": "ResultFlowControl", "mock_task_data_key": "example", "mock_task_data_value": "fake"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "StaticProbabilityLimiter", "config": {"plugin_name": "test-staticprobabilitylimiter", "pass_pr": 0.5}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "IOReader", "config": {"plugin_name": "test-ioreader", "input_key":"HTTP_REQUEST_BODY_IO", "output_key": "DATA"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "JSONValidator", "config": {"plugin_name": "test-jsonvalidator", "schema": "{\"title\": \"Record\",\"type\": \"object\",\"properties\": {\"name\": {\"type\": \"string\"}}, \"required\": [\"name\"]}", "data_key": "DATA"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "EaseMonitorJSONGidExtractor", "config": {"plugin_name": "test-jsongidextractor", "gid_key": "GID", "data_key": "DATA"}}'
$ curl https://gateway.megaease.com:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "KafkaOutput", "config": {"plugin_name": "test-kafkaoutput", "topic": "test", "brokers": ["127.0.0.1:9092"], "message_key_key": "GID", "data_key": "DATA"}}'

Pipeline

You can use the following Administration API calls to set up the pipeline:

$ curl https://gateway.megaease.com:9090/admin/v1/pipelines -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "LinearPipeline", "config": {"pipeline_name": "test-jsonpipeline", "plugin_names": ["test-httpinput", "test-simplecommonmock", "test-staticprobabilitylimiter", "test-ioreader", "test-jsonvalidator", "test-jsongidextractor", "test-kafkaoutput"], "parallelism": 10}}'

Test

In the ApacheBench output you can see there are no non-2xx responses.

$ ab -n 100 -c 20 -H "name:bar" -T "application/json" -p ~/load -f TLS1.2 https://gateway.megaease.com:10443/test
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking gateway.megaease.com (be patient).....done

Server Software:
Server Hostname:        gateway.megaease.com
Server Port:            10443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES128-GCM-SHA256,2048,128

Document Path:          /test
Document Length:        0 bytes

Concurrency Level:      20
Time taken for tests:   0.561 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      9700 bytes
Total body sent:        33700
HTML transferred:       0 bytes
Requests per second:    178.36 [#/sec] (mean)
Time per request:       112.136 [ms] (mean)
Time per request:       5.607 [ms] (mean, across all concurrent requests)
Transfer rate:          16.90 [Kbytes/sec] received
                        58.70 kb/s sent
                        75.59 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        5   44  35.1     36     156
Processing:    11   67  21.7     71     115
Waiting:       11   65  21.6     68     115
Total:         52  111  42.6    114     216

Percentage of the requests served within a certain time (ms)
  50%    114
  66%    116
  75%    120
  80%    137
  90%    201
  95%    209
  98%    214
  99%    216
 100%    216 (longest request)

5.8 Flash Sale Event Support

In a flash sale event in an e-Commerce case, the interactions between users and the website are as follows:

  1. Before the flash sale event starts, many users open the event page of the website and prepare to click the order link. At this moment the order link is disabled since the event has not started.
  2. After the flash sale event starts, users click the order link and a large number of order requests are sent to the website, and the website handles as many requests as possible on the basis of service availability.
  3. Once the commodity is sold out, any incoming order request is rejected.

So the biggest challenge in supporting this kind of event for a website backend is how to support massive order requests in a very short period while keeping the best availability of all related services. To handle the challenge, Ease Gateway provides 4 steps to cover the above three cases separately and technologically:

  1. Ease Gateway can be distributed to multiple instances to serve users coming from different regions or logical groups. In this way, we can increase the capacity of the access layer by scaling out gateway instances easily; we can even pre-configure the capacity before the event.
  2. Before the flash sale event starts, based on user behavior the gateway can provide statistics data indicating how many users are prepared and ready to order. Based on this statistics indicator, the website can calculate and pre-configure the probability of a successful buy for different access endpoints on each gateway instance.
  3. When the flash sale event starts, the gateway instance reduces the massive order requests according to the pre-configured probability, so that finally a reasonable volume of order requests actually hits the website backend service.
  4. Once the website backend service responds to the gateway that there is no more commodity in stock, the gateway returns the user a standard error and rejects all incoming order requests directly, so no further requests hit the website.

To achieve the above solution we need to prepare two dedicated pipelines for each gateway instance; the first one is used to cover point #2 above, and the other one is used to cover points #3 and #4:

  1. User session counting.
  2. Rejecting requests based on the probability, and rejecting completely once the upstream returns a special failure.
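The gating logic of the second pipeline can be sketched as a toy model in Python (illustrative only, not Ease Gateway code): a static pass-probability limiter sheds load, and a no-more-failure limiter shuts the gate completely once the upstream has reported "sold out" (HTTP 400) enough times.

```python
import random

class FlashSaleGate:
    """Toy model of the order pipeline's two limiters."""

    def __init__(self, pass_pr, failure_threshold=1):
        self.pass_pr = pass_pr
        self.failure_threshold = failure_threshold
        self.failures_seen = 0

    def admit(self, rng=random.random):
        if self.failures_seen >= self.failure_threshold:
            return 429          # reject directly; never hit the upstream again
        if rng() >= self.pass_pr:
            return 429          # shed by the probability limiter
        return None             # admitted; forward to the upstream

    def record_upstream(self, status_code):
        if status_code == 400:  # upstream says: sold out
            self.failures_seen += 1

def place_order(gate, upstream):
    rejected = gate.admit()
    if rejected is not None:
        return rejected
    code = upstream()           # call the (hypothetical) order service
    gate.record_upstream(code)
    return code
```

Once a single 400 comes back from the upstream callable, every later order is answered at the gateway without touching the website, matching step #4 above.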

User session counting

Plugin
  1. HTTP input: To enable an HTTP endpoint to receive user requests for accessing the flash sale event page.
  2. IO reader: To read received data from the client via the HTTP transport layer into local memory as a proxy.
  3. HTTP output: To send the body and headers to the flash sale event page of the website.
  4. HTTP header counter: To count the distinct values of a certain HTTP header name; the count is deducted automatically once a user has had no activity with the website for more than 1 minute. The result is exposed by the statistics indicator RECENT_HEADER_COUNT.
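The counting behavior above can be sketched as a small expiring counter in Python (a simplified model of the plugin, with illustrative names):

```python
import time

class RecentHeaderCounter:
    """Toy model of the HTTP header counter: counts distinct values of
    one header seen within the last `expiration_sec` seconds."""

    def __init__(self, header_name, expiration_sec=60):
        self.header_name = header_name
        self.expiration_sec = expiration_sec
        self.last_seen = {}  # header value -> time of last activity

    def observe(self, headers, now=None):
        value = headers.get(self.header_name)
        if value is not None:
            self.last_seen[value] = now if now is not None else time.time()

    def recent_header_count(self, now=None):
        now = now if now is not None else time.time()
        cutoff = now - self.expiration_sec
        # Drop values whose owner has been inactive longer than the window.
        self.last_seen = {v: t for v, t in self.last_seen.items() if t >= cutoff}
        return len(self.last_seen)
```

Each distinct `name` header value behaves like one session, which is why the test below, sending `name:bar1` through `name:bar4`, drives RECENT_HEADER_COUNT to 4.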

Use the following Administration API calls to set up the above plugins:

$ curl http://127.0.0.1:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "HTTPInput", "config": {"plugin_name": "test-httpinput1", "url": "/test/book", "methods": ["GET"], "headers_enum": {}, "request_body_io_key": "HTTP_REQUEST_BODY_IO", "response_code_key": "response_code", "response_body_io_key": "HTTP_RESP_BODY_IO"}}'
$ curl http://127.0.0.1:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "IOReader", "config": {"plugin_name": "test-ioreader1", "input_key":"HTTP_REQUEST_BODY_IO", "output_key": "REQ_DATA"}}'
$ curl http://127.0.0.1:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "HTTPOutput", "config": {"plugin_name": "test-httpoutput1", "url_pattern": "http://127.0.0.1:1122/book/abc", "header_patterns": {}, "method": "GET", "response_code_key": "response_code", "response_body_io_key": "HTTP_RESP_BODY_IO", "request_body_buffer_pattern": "{REQ_DATA}"}}'
$ curl http://127.0.0.1:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "HTTPHeaderCounter", "config": {"plugin_name": "test-httpheadercounter1", "header_concerned": "name", "expiration_min": 1}}'
Pipeline

Technically there is no requirement on the position of the HTTP header counter plugin.

You can use the following Administration API calls to set up the pipeline:

$ curl http://127.0.0.1:9090/admin/v1/pipelines -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "LinearPipeline", "config": {"pipeline_name": "test-jsonpipeline1", "plugin_names": ["test-httpinput1", "test-ioreader1", "test-httpoutput1", "test-httpheadercounter1"], "parallelism": 10}}'
Test
$ cat ~/server.py
from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer

global stock
stock = 6

class WebServerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.endswith("/book/abc"):
            self.send_response(200)
            self.send_header('Content-type', 'text/html')
            self.end_headers()
            message = "<html><body>Book successfully!</body></html>"
            self.wfile.write(message)
        elif self.path.endswith("/abc"):
            global stock
            if stock > 0:
                self.send_response(200)
                self.send_header('Content-type', 'text/html')
                self.end_headers()
                message = "<html><body>Order successfully!</body></html>"
                self.wfile.write(message)
            else:
                self.send_response(400)
                self.send_header('Content-type', 'text/html')
                self.end_headers()
                message = "<html><body>Order failed!</body></html>"
                self.wfile.write(message)
            stock -= 1

def main():
    try:
        port = 1122
        server = HTTPServer(('127.0.0.1', port), WebServerHandler)
        print "Web Server running on port %s" % port
        server.serve_forever()
    except KeyboardInterrupt:
        print " ^C entered, stopping web server...."
        server.socket.close()

main()
$ python ~/server.py
Web Server running on port 1122
127.0.0.1 - - [24/Jul/2017 15:48:19] "GET /book/abc HTTP/1.1" 200 -
127.0.0.1 - - [24/Jul/2017 15:48:24] "GET /book/abc HTTP/1.1" 200 -
127.0.0.1 - - [24/Jul/2017 15:48:43] "GET /book/abc HTTP/1.1" 200 -
127.0.0.1 - - [24/Jul/2017 15:51:36] "GET /book/abc HTTP/1.1" 200 -
$ curl http://127.0.0.1:9090/statistics/v1/pipelines/test-jsonpipeline1/plugins/test-httpheadercounter1/indicators/RECENT_HEADER_COUNT/desc  -X GET -w "\n"
{"desc":"The count of http requests that the header of each one contains a key 'name' in last 60 second(s)."}

$ curl http://127.0.0.1:9090/statistics/v1/pipelines/test-jsonpipeline1/plugins/test-httpheadercounter1/indicators/RECENT_HEADER_COUNT/value  -X GET -w "\n"
{"value":0}

$ curl -i -k https://127.0.0.1:10443/test/book -X GET -i -w "\n" -H "name:bar1" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 24 Jul 2017 08:15:03 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

$ curl -i -k https://127.0.0.1:10443/test/book -X GET -i -w "\n" -H "name:bar2" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 24 Jul 2017 08:15:06 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

$ curl -i -k https://127.0.0.1:10443/test/book -X GET -i -w "\n" -H "name:bar3" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 24 Jul 2017 08:15:09 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

$ curl -i -k https://127.0.0.1:10443/test/book -X GET -i -w "\n" -H "name:bar4" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 24 Jul 2017 08:15:13 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

$ curl http://127.0.0.1:9090/statistics/v1/pipelines/test-jsonpipeline1/plugins/test-httpheadercounter1/indicators/RECENT_HEADER_COUNT/value  -X GET -w "\n"
{"value":4}

Rejecting requests

Plugin
  1. HTTP input: To enable an HTTP endpoint to receive user requests for ordering the commodity.
  2. No more failure limiter: To return a standard error and reject all incoming requests directly once the failure threshold is reached, so no further requests hit the upstream.
  3. Static pass probability limiter: The plugin passes the request with a fixed probability. In this case it is used to reduce the massive order requests so that finally a reasonable volume of order requests actually hits the website backend service.
  4. IO reader: To read received data from the client via the HTTP transport layer into local memory as a proxy.
  5. HTTP output: To send the body and headers to the order service of the website.

Use the following Administration API calls to set up the above plugins:
$ curl http://127.0.0.1:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "HTTPInput", "config": {"plugin_name": "test-httpinput2", "url": "/test", "methods": ["GET"], "headers_enum": {}, "request_body_io_key": "HTTP_REQUEST_BODY_IO", "response_code_key": "response_code", "response_body_io_key": "HTTP_RESP_BODY_IO"}}'
$ curl http://127.0.0.1:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "NoMoreFailureLimiter", "config": {"plugin_name": "test-nomorefailurelimiter2", "failure_count_threshold": 1, "failure_task_data_key": "response_code", "failure_task_data_value": "400"}}'
$ curl http://127.0.0.1:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "StaticProbabilityLimiter", "config": {"plugin_name": "test-staticprobabilitylimiter2", "pass_pr": 0.75}}'
$ curl http://127.0.0.1:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "IOReader", "config": {"plugin_name": "test-ioreader2", "input_key":"HTTP_REQUEST_BODY_IO", "output_key": "REQ_DATA"}}'
$ curl http://127.0.0.1:9090/admin/v1/plugins -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "HTTPOutput", "config": {"plugin_name": "test-httpoutput2", "url_pattern": "http://127.0.0.1:1122/abc", "header_patterns": {}, "method": "GET", "response_code_key": "response_code", "response_body_io_key": "HTTP_RESP_BODY_IO", "request_body_buffer_pattern": "{REQ_DATA}"}}'
Pipeline

You can use the following Administration API calls to set up the pipeline:

$ curl http://127.0.0.1:9090/admin/v1/pipelines -X POST -i -H "Content-Type:application/json" -H "Accept:application/json" -w "\n" -d '{"type": "LinearPipeline", "config": {"pipeline_name": "test-jsonpipeline2", "plugin_names": ["test-httpinput2", "test-nomorefailurelimiter2", "test-staticprobabilitylimiter2", "test-ioreader2", "test-httpoutput2"], "parallelism": 10}}'
Test
$ python ~/server.py
Web Server running on port 1122
127.0.0.1 - - [24/Jul/2017 16:20:13] "GET /abc HTTP/1.1" 200 -
127.0.0.1 - - [24/Jul/2017 16:20:16] "GET /abc HTTP/1.1" 200 -
127.0.0.1 - - [24/Jul/2017 16:20:19] "GET /abc HTTP/1.1" 200 -
127.0.0.1 - - [24/Jul/2017 16:20:22] "GET /abc HTTP/1.1" 200 -
127.0.0.1 - - [24/Jul/2017 16:20:25] "GET /abc HTTP/1.1" 200 -
127.0.0.1 - - [24/Jul/2017 16:20:27] "GET /abc HTTP/1.1" 200 -
127.0.0.1 - - [24/Jul/2017 16:20:30] "GET /abc HTTP/1.1" 400 -
$ curl -i -k https://127.0.0.1:10443/test -X GET -i -w "\n" -H "name:bar1" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 24 Jul 2017 08:20:13 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>Order successfully!</body></html>

$ curl -i -k https://127.0.0.1:10443/test -X GET -i -w "\n" -H "name:bar2" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 24 Jul 2017 08:20:16 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>Order successfully!</body></html>

$ curl -i -k https://127.0.0.1:10443/test -X GET -i -w "\n" -H "name:bar3" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 24 Jul 2017 08:20:19 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>Order successfully!</body></html>

$ curl -i -k https://127.0.0.1:10443/test -X GET -i -w "\n" -H "name:bar4" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 24 Jul 2017 08:20:22 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>Order successfully!</body></html>

$ curl -i -k https://127.0.0.1:10443/test -X GET -i -w "\n" -H "name:bar5" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 24 Jul 2017 08:20:25 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>Order successfully!</body></html>

$ curl -i -k https://127.0.0.1:10443/test -X GET -i -w "\n" -H "name:bar6" -d "$LOAD"
HTTP/1.1 200 OK
Date: Mon, 24 Jul 2017 08:20:27 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>Order successfully!</body></html>

$ curl -i -k https://127.0.0.1:10443/test -X GET -i -w "\n" -H "name:bar7" -d "$LOAD"
HTTP/1.1 400 Bad Request
Date: Mon, 24 Jul 2017 08:20:30 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked

<html><body>Order failed!</body></html>

$ curl -i -k https://127.0.0.1:10443/test -X GET -i -w "\n" -H "name:bar8" -d "$LOAD"
HTTP/1.1 429 Too Many Requests
Date: Mon, 24 Jul 2017 08:20:33 GMT
Content-Type: text/plain; charset=utf-8
Transfer-Encoding: chunked

6. Reference

6.1 Linear Pipeline

Linear Pipeline is a model that defines a unidirectional path of plugins handling tasks in parallel (not concurrently).
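The model can be sketched as follows (a toy illustration, not Ease Gateway code): each task flows through the plugin list in a fixed order, a failing plugin short-circuits its task, and up to `parallelism` tasks are processed at the same time.

```python
from concurrent.futures import ThreadPoolExecutor

def run_linear_pipeline(plugins, tasks, parallelism=1):
    """Toy model of a linear pipeline: unidirectional plugin order,
    `parallelism` tasks in flight at once."""
    def run_one(task):
        for plugin in plugins:
            task = plugin(task)
            if task.get("error"):  # a failed plugin short-circuits the task
                break
        return task
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        return list(pool.map(run_one, tasks))

# Illustrative plugins: each returns an updated copy of the task.
double = lambda t: dict(t, value=t["value"] * 2)
increment = lambda t: dict(t, value=t["value"] + 1)
results = run_linear_pipeline([double, increment],
                              [{"value": 1}, {"value": 5}], parallelism=2)
assert [t["value"] for t in results] == [3, 11]
```

Note the distinction the description draws: parallelism applies across tasks, while each individual task still traverses the plugins strictly in sequence.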

6.1.1 Configuration

Parameter name Data type (golang) Description Type Optional Default value (golang)
pipeline_name string The pipeline instance name. Functionality No N/A
plugin_names []string The sequential list of plugins handling tasks. Functionality No N/A
parallelism uint16 The parallel number of linear pipeline. Functionality Yes 1
wait_plugin_close bool The flag represents whether the pipeline suspends until the outdated plugin instances are closed. Functionality Yes true

Notice: The reason wait_plugin_close exists is to prevent a new instance from failing to complete construction because an old instance (of the same plugin) is still holding some critical and unique resources. E.g. if an instance of http_input.metrics holding the url /v1/metrics is suddenly updated (with the same url /v1/metrics), the pipeline can not construct the new instance before the old instance closes, because the unique resource url /v1/metrics can not be occupied by multiple instances simultaneously.
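The resource conflict described above can be illustrated with a minimal sketch (illustrative names, not Ease Gateway code): a unique resource such as a URL can be held by only one instance, so the new instance must wait for the old one's release.

```python
class UniqueResource:
    """Toy model of a unique resource (e.g. the url /v1/metrics) that
    only one plugin instance may hold at a time."""

    def __init__(self):
        self.holder = None

    def acquire(self, instance):
        if self.holder is not None:
            raise RuntimeError("resource still held by: %s" % self.holder)
        self.holder = instance

    def release(self, instance):
        if self.holder == instance:
            self.holder = None

url = UniqueResource()
url.acquire("http_input.metrics (old)")
# With wait_plugin_close=true, the pipeline closes the old instance first:
url.release("http_input.metrics (old)")
url.acquire("http_input.metrics (new)")  # now succeeds
```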

6.1.2 Dedicated statistics indicator

Indicator name Level Data type (golang) Description
THROUGHPUT_RATE_LAST_1MIN_ALL Pipeline float64 Throughput rate of the pipeline in last 1 minute.
THROUGHPUT_RATE_LAST_5MIN_ALL Pipeline float64 Throughput rate of the pipeline in last 5 minutes.
THROUGHPUT_RATE_LAST_15MIN_ALL Pipeline float64 Throughput rate of the pipeline in last 15 minutes.
EXECUTION_COUNT_ALL Pipeline int64 Total execution count of the pipeline.
EXECUTION_TIME_MAX_ALL Pipeline int64 Maximal execution time of the pipeline in nanosecond.
EXECUTION_TIME_MIN_ALL Pipeline int64 Minimal execution time of the pipeline in nanosecond.
EXECUTION_TIME_50_PERCENT_ALL Pipeline float64 50% execution time of the pipeline in nanosecond.
EXECUTION_TIME_90_PERCENT_ALL Pipeline float64 90% execution time of the pipeline in nanosecond.
EXECUTION_TIME_99_PERCENT_ALL Pipeline float64 99% execution time of the pipeline in nanosecond.
EXECUTION_TIME_STD_DEV_ALL Pipeline float64 Standard deviation of execution time of the pipeline in nanosecond.
EXECUTION_TIME_VARIANCE_ALL Pipeline float64 Variance of execution time of the pipeline.
EXECUTION_TIME_SUM_ALL Pipeline int64 Sum of execution time of the pipeline in nanosecond.
THROUGHPUT_RATE_LAST_1MIN_ALL Plugin float64 Throughput rate of the plugin in last 1 minute.
THROUGHPUT_RATE_LAST_5MIN_ALL Plugin float64 Throughput rate of the plugin in last 5 minutes.
THROUGHPUT_RATE_LAST_15MIN_ALL Plugin float64 Throughput rate of the plugin in last 15 minutes.
THROUGHPUT_RATE_LAST_1MIN_SUCCESS Plugin float64 Successful throughput rate of the plugin in last 1 minute.
THROUGHPUT_RATE_LAST_5MIN_SUCCESS Plugin float64 Successful throughput rate of the plugin in last 5 minutes.
THROUGHPUT_RATE_LAST_15MIN_SUCCESS Plugin float64 Successful throughput rate of the plugin in last 15 minutes.
THROUGHPUT_RATE_LAST_1MIN_FAILURE Plugin float64 Failed throughput rate of the plugin in last 1 minute.
THROUGHPUT_RATE_LAST_5MIN_FAILURE Plugin float64 Failed throughput rate of the plugin in last 5 minutes.
THROUGHPUT_RATE_LAST_15MIN_FAILURE Plugin float64 Failed throughput rate of the plugin in last 15 minutes.
EXECUTION_COUNT_ALL Plugin int64 Total execution count of the plugin.
EXECUTION_COUNT_SUCCESS Plugin int64 Successful execution count of the plugin.
EXECUTION_COUNT_FAILURE Plugin int64 Failed execution count of the plugin.
EXECUTION_TIME_MAX_ALL Plugin int64 Maximal execution time of the plugin in nanosecond.
EXECUTION_TIME_MAX_SUCCESS Plugin int64 Maximal time of successful execution of the plugin in nanosecond.
EXECUTION_TIME_MAX_FAILURE Plugin int64 Maximal time of failure execution of the plugin in nanosecond.
EXECUTION_TIME_MIN_ALL Plugin int64 Minimal execution time of the plugin in nanosecond.
EXECUTION_TIME_MIN_SUCCESS Plugin int64 Minimal time of successful execution of the plugin in nanosecond.
EXECUTION_TIME_MIN_FAILURE Plugin int64 Minimal time of failure execution of the plugin in nanosecond.
EXECUTION_TIME_50_PERCENT_SUCCESS Plugin float64 50% successful execution time of the plugin in nanosecond.
EXECUTION_TIME_50_PERCENT_FAILURE Plugin float64 50% failure execution time of the plugin in nanosecond.
EXECUTION_TIME_90_PERCENT_SUCCESS Plugin float64 90% successful execution time of the plugin in nanosecond.
EXECUTION_TIME_90_PERCENT_FAILURE Plugin float64 90% failure execution time of the plugin in nanosecond.
EXECUTION_TIME_99_PERCENT_SUCCESS Plugin float64 99% successful execution time of the plugin in nanosecond.
EXECUTION_TIME_99_PERCENT_FAILURE Plugin float64 99% failure execution time of the plugin in nanosecond.
EXECUTION_TIME_STD_DEV_SUCCESS Plugin float64 Standard deviation of successful execution time of the plugin in nanosecond.
EXECUTION_TIME_STD_DEV_FAILURE Plugin float64 Standard deviation of failure execution time of the plugin in nanosecond.
EXECUTION_TIME_VARIANCE_SUCCESS Plugin float64 Variance of successful execution time of the plugin.
EXECUTION_TIME_VARIANCE_FAILURE Plugin float64 Variance of failure execution time of the plugin.
EXECUTION_TIME_SUM_ALL Plugin int64 Sum of execution time of the plugin in nanosecond.
EXECUTION_TIME_SUM_SUCCESS Plugin int64 Sum of successful execution time of the plugin in nanosecond.
EXECUTION_TIME_SUM_FAILURE Plugin int64 Sum of failure execution time of the plugin in nanosecond.
EXECUTION_COUNT_ALL Task uint64 Total task execution count.
EXECUTION_COUNT_SUCCESS Task uint64 Successful task execution count.
EXECUTION_COUNT_FAILURE Task uint64 Failed task execution count.

6.2 Plugin

There are 19 available plugins in total in the current Ease Gateway release.

Plugin name Type name Block-able Functional
HTTP input HTTPInput Yes Yes
JSON validator JSONValidator No Yes
Kafka output KafkaOutput No Yes
HTTP output HTTPOutput No Yes
IO reader IOReader Yes Yes
HTTP header counter HTTPHeaderCounter No No
Throughput rate limiter ThroughputRateLimiter Yes No
Latency based sliding window limiter LatencyWindowLimiter Yes No
Service circuit breaker ServiceCircuitBreaker No No
Static pass probability limiter StaticProbabilityLimiter No No
No more failure limiter NoMoreFailureLimiter No No
Simple common cache SimpleCommonCache No No
Simple common mock SimpleCommonMock No No
Python Python Yes Yes
Upstream output UpstreamOutput Yes No
Downstream input DownstreamInput Yes No
Ease Monitor graphite validator EaseMonitorGraphiteValidator No Yes
Ease Monitor graphite GID extractor EaseMonitorGraphiteGidExtractor No Yes
Ease Monitor JSON GID extractor EaseMonitorJSONGidExtractor No Yes

HTTP Input plugin

Plugin handles HTTP requests and returns the pipeline-processed response to the client. Currently an HTTPS server runs on the fixed port 10443 with a certificate and key file pair.

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
url string The request HTTP url plugin will proceed. Functionality No N/A
methods []string The request HTTP methods plugin will proceed. Functionality Yes {"GET"}
headers_enum map[string][]string The request HTTP headers plugin will proceed. Functionality Yes nil
unzip bool The flag represents if the plugin decompresses the request body when request content is encoded in GZIP. Functionality Yes true
respond_error bool The flag represents if the plugin responds error information to the client if the pipeline handles the request unsuccessfully. The option will be used only when the response_body_io_key and response_body_buffer_key options are empty. Functionality Yes false
fast_close bool The flag represents if the plugin does not wait any response which is processing before close, e.g. ignore data transmission on a slow connection. Functionality Yes false
request_header_names_key string The name of HTTP request header name list stored in internal storage as the plugin output. I/O Yes ""
request_body_io_key string The key name of HTTP request body io object stored in internal storage as the plugin output. I/O Yes ""
response_code_key string The key name of HTTP response status code value stored in internal storage as the plugin input. An empty value of the option means returning pipeline handling result code to client. I/O Yes ""
response_body_io_key string The key name of HTTP response body io object stored in internal storage as the plugin input. I/O Yes ""
response_body_buffer_key string The key name of HTTP response body buffer stored in internal storage as the plugin input. The option will be leveraged only when response_body_io_key option is empty. I/O Yes ""
I/O
Data name Configuration option name Type Data Type Optional
Request header name list request_header_names_key Output []string Yes
Request body IO object request_body_io_key Output io.Reader Yes
Response HTTP status code response_code_key Input int Yes
Response body IO object response_body_io_key Input io.Reader Yes
Response body buffer response_body_buffer_key Input []byte Yes
Error
Result code Error reason
ResultRequesterGone client closed
ResultTaskCancelled task is cancelled
Dedicated statistics indicator
Indicator name Data type (golang) Description
WAIT_QUEUE_LENGTH uint64 The length of wait queue which contains requests wait to be handled by a pipeline.
WIP_REQUEST_COUNT uint64 The count of requests which are in progress in the pipeline.

JSON Validator plugin

Plugin validates input data, checking whether it is valid JSON data conforming to a specified schema.
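A minimal stand-in for this check, using only the standard library (the real plugin validates against a full JSON schema; here we only mirror the earlier example schema, which requires a `name` field):

```python
import json

def validate(data, required_fields):
    """Return None if `data` is a JSON object containing all required
    fields, or the result code the failure would map to."""
    try:
        doc = json.loads(data)
    except ValueError:
        return "ResultBadInput"       # failed to validate: not JSON at all
    if not isinstance(doc, dict):
        return "ResultBadInput"       # schema expects an object
    for field in required_fields:
        if field not in doc:
            return "ResultBadInput"   # required field missing
    return None                       # valid

assert validate('{"name": "bar"}', ["name"]) is None
```

The `required_fields` helper is an illustrative simplification of the plugin's `schema` option.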

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
schema string The schema json data needs to accord. Functionality No N/A
data_key string The key name of data needs to check as the plugin input. I/O No N/A
I/O
Data name Configuration option name Type Data Type Optional
Data data_key Input string No
Error
Result code Error reason
ResultInternalServerError schema not found
ResultMissingInput input got wrong value
ResultBadInput failed to validate
Dedicated statistics indicator

No indicators exposed.

Kafka Output plugin

Plugin outputs request data to a Kafka service.

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
brokers []string The kafka broker list. Functionality No N/A
client_id string The client id the plugin uses when connecting to Kafka. Functionality Yes "easegateway"
topic string The topic data outputs to. Functionality No N/A
message_key_key string The key name of message key value stored in internal storage as the plugin input. I/O Yes ""
data_key string The key name of message data stored in internal storage as the plugin input. I/O No N/A
I/O
Data name Configuration option name Type Data Type Optional
Message key message_key_key Input string Yes
Message data data_key Input []byte No
Error
Result code Error reason
ResultServiceUnavailable kafka producer not ready
ResultMissingInput input got wrong value
ResultBadInput input got empty string
ResultServiceUnavailable failed to send message to the kafka topic
Dedicated statistics indicator

No indicators exposed.

HTTP Output plugin

Plugin outputs request data to an HTTP endpoint.

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
url_pattern string The pattern of the complete HTTP output endpoint. E.g. https://1.2.3.4/abc?def={INPUT_DATA} Functionality No N/A
header_patterns map[string]string The list of HTTP output header name pattern and value pattern pair. Functionality Yes nil
method string The method HTTP output used. Functionality No N/A
timeout_sec uint16 The request timeout HTTP output limited in second. Functionality Yes 120 (2 minutes)
cert_file string The certificate file HTTPS output used. Functionality Yes ""
key_file string The key file HTTPS output used. Functionality Yes ""
ca_file string The root certificate HTTPS output used. Functionality Yes ""
insecure_tls bool The flag represents if the plugin does not check server certificate. Functionality Yes false
request_body_buffer_pattern string The HTTP output body buffer pattern. The option will be leveraged only when request_body_io_key option is empty. Functionality Yes ""
request_body_io_key string The HTTP output body io object. I/O Yes ""
response_code_key string The key name of HTTP response status code value stored in internal storage as the plugin output. An empty value of the option means the plugin does not output HTTP response status code. I/O Yes ""
response_body_io_key string The key name of HTTP response body io object stored in internal storage as the plugin output. An empty value of the option means the plugin does not output HTTP response body io object. I/O Yes ""
I/O
Data name Configuration option name Type Data Type Optional
Request HTTP body request_body_io_key Input io.Reader Yes
Response HTTP status code response_code_key Output int Yes
Response body IO object response_body_io_key Output io.Reader Yes
Error
Result code Error reason
ResultServiceUnavailable failed to send HTTP request
ResultInternalServerError failed to create HTTP request
ResultInternalServerError failed to output response HTTP status code
ResultInternalServerError failed to output response body IO object
Dedicated statistics indicator

No indicators exposed.
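The `{KEY}` placeholders used by `url_pattern`, `header_patterns`, and `request_body_buffer_pattern` (e.g. `"{REQ_DATA}"` in the examples above) can be modeled as a simple substitution over the pipeline's internal storage. A sketch, with illustrative names:

```python
import re

def render_pattern(pattern, storage):
    """Replace each {KEY} in the pattern with the corresponding value
    from the pipeline's internal storage."""
    def substitute(match):
        key = match.group(1)
        if key not in storage:
            raise KeyError("no value for pattern key %r" % key)
        return str(storage[key])
    return re.sub(r"\{([A-Za-z0-9_]+)\}", substitute, pattern)

url = render_pattern("http://127.0.0.1:1122/abc?def={REQ_DATA}",
                     {"REQ_DATA": "hello"})
```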

IO Reader plugin

Plugin reads a given I/O object and outputs the data.

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
read_length_max int64 Maximal bytes to read. Functionality Yes 1048576 (1 MiB)
close_after_read bool The flag represents if to close IO object after reading. Functionality Yes true
data_key string The key name of read out data as the plugin output. I/O No N/A
input_key string The key name of IO object stored in internal storage as the plugin input. I/O No N/A
I/O
Data name Configuration option name Type Data Type Optional
Data buffer read out data_key Output []byte No
IO object to read input_key Input io.Reader, io.ReadCloser No
Error
Result code Error reason
ResultMissingInput input got wrong value
ResultInternalServerError failed to read data
ResultInternalServerError failed to output data buffer
Dedicated statistics indicator

No indicators are exposed.
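
The read/close behavior above can be sketched in a few lines (an illustrative Python sketch, not the gateway's actual Go implementation; the function name is invented):

```python
import io

# Minimal sketch of the plugin's behavior: read at most read_length_max
# bytes from the input IO object, optionally closing it afterwards.
def read_io(reader, read_length_max=1048576, close_after_read=True):
    data = reader.read(read_length_max)   # maximal bytes to read
    if close_after_read:
        reader.close()                    # close IO object after reading
    return data

data = read_io(io.BytesIO(b"x" * 2048), read_length_max=1024)
# data holds at most 1024 bytes
```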

HTTP Header Counter plugin

Plugin counts requests that carry a specific header within a recent period. This behaves like counting the number of sessions.

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
header_concerned string The header name the plugin counts. Functionality No N/A
expiration_sec uint32 The recent period in seconds. Functionality No N/A
I/O

No inputs or outputs.

Error
Result code Error reason
ResultMissingInput input got wrong value
Dedicated statistics indicator
Indicator name Data type (golang) Description
RECENT_HEADER_COUNT uint64 The count of HTTP requests whose headers contain the concerned key within the recent period.
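
The counting behavior can be illustrated with a small sketch (assumed semantics: each distinct value of the concerned header seen within the last expiration_sec seconds is counted once, like counting sessions; the class and method names are invented):

```python
import time

# Hypothetical sketch of the header-counting behavior: each request
# carrying the concerned header refreshes its entry; entries older than
# expiration_sec are dropped before the count is reported.
class HeaderCounter:
    def __init__(self, header_concerned, expiration_sec):
        self.header = header_concerned
        self.expiration = expiration_sec
        self.seen = {}  # header value -> last-seen timestamp

    def on_request(self, headers, now=None):
        now = time.time() if now is None else now
        value = headers.get(self.header)
        if value is not None:
            self.seen[value] = now

    def recent_header_count(self, now=None):
        now = time.time() if now is None else now
        self.seen = {v: t for v, t in self.seen.items()
                     if now - t <= self.expiration}
        return len(self.seen)
```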

Throughput Rate Limiter plugin

Plugin limits request rate based on current throughput.

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
tps The maximal requests per second. Value -1 means no limitation. Value zero means no request can be processed. Functionality No N/A
I/O

No inputs or outputs.

Error
Result code Error reason
ResultFlowControl service is unavailable caused by throughput rate limit
ResultTaskCancelled task is cancelled
ResultInternalServerError unexpected error on internal delay timer
Dedicated statistics indicator

No indicators are exposed.
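
The tps semantics above can be sketched as a minimal pacing limiter (an illustration of the option's documented values, not the actual implementation; rejecting here stands in for the delay/ResultFlowControl behavior):

```python
# Conceptual sketch of the tps option: -1 disables the limit, 0 rejects
# every request, and a positive value enforces a minimal spacing of
# 1/tps seconds between accepted requests.
class ThroughputRateLimiter:
    def __init__(self, tps):
        self.tps = tps
        self.last = None

    def allow(self, now):
        if self.tps == -1:        # no limitation
            return True
        if self.tps == 0:         # no request can be processed
            return False
        interval = 1.0 / self.tps
        if self.last is None or now - self.last >= interval:
            self.last = now
            return True
        return False              # would map to ResultFlowControl
```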

Latency Based Sliding Window Limiter plugin

Plugin limits request rate using a sliding window whose size is adjusted based on observed latency.

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
plugins_concerned []string The plugins whose processing latency is considered when calculating the sliding window size. Functionality No N/A
latency_threshold_msec uint32 The latency threshold in milliseconds; when the latency exceeds it, the sliding window is shrunk. Functionality Yes 800
backoff_msec uint16 How many milliseconds a request is delayed when the current sliding window is fully closed. Functionality Yes 100
window_size_max uint64 Maximal sliding window size. Functionality Yes 65535
windows_size_init uint64 Initial sliding window size. Functionality Yes 512
I/O

No inputs or outputs.

Error
Result code Error reason
ResultFlowControl service is unavailable caused by sliding window limit
ResultTaskCancelled task is cancelled
Dedicated statistics indicator

No indicators are exposed.
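
The latency-driven window adjustment can be sketched as follows (a hypothetical simplification: the window caps in-flight requests, grows on fast responses and halves when observed latency exceeds the threshold; parameter names mirror the configuration options above):

```python
# Hypothetical sketch of the latency-based sliding window.
class SlidingWindowLimiter:
    def __init__(self, latency_threshold_msec=800,
                 window_size_max=65535, window_size_init=512):
        self.threshold = latency_threshold_msec
        self.max = window_size_max
        self.window = window_size_init
        self.in_flight = 0

    def allow(self):
        if self.in_flight >= self.window:
            return False           # caller would back off backoff_msec
        self.in_flight += 1
        return True

    def on_response(self, latency_msec):
        self.in_flight -= 1
        if latency_msec > self.threshold:
            self.window = max(1, self.window // 2)   # shrink
        elif self.window < self.max:
            self.window += 1                         # slowly reopen
```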

Service Circuit Breaker plugin

Plugin limits request rate based on the failure rate of one or more plugins.

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
plugins_concerned []string The plugins whose processing failures are considered when controlling the circuit breaker status. Functionality No N/A
all_tps_threshold_to_enable float64 As a condition, it indicates how many requests per second cause the circuit breaker to be enabled. Value zero means the circuit breaker is enabled immediately when a request arrives. Functionality Yes 1
failure_tps_threshold_to_break float64 As a condition, it indicates how many failure requests per second cause the circuit breaker to be turned on, which fully closes the request flow. Value zero means the breaker keeps open or half-open status. Functionality Yes 1
failure_tps_percent_threshold_to_break float32 As a condition, it indicates what percentage of failure requests per second causes the circuit breaker to be turned on, which fully closes the request flow. Value zero means the breaker keeps open or half-open status. The option is leveraged only when the failure_tps_threshold_to_break condition is not satisfied. Functionality No N/A
recovery_time_msec uint32 As a condition, it indicates how long, in milliseconds, before the circuit breaker is turned to half-open status, which is used to probe service availability. In general, it equals the MTTR. Functionality Yes 1000
success_tps_threshold_to_open float64 As a condition, it indicates how many success requests per second cause the circuit breaker to be turned off, which fully opens the request flow. Value zero means the request flow is fully opened immediately after the recovery time elapses. Functionality Yes 1
I/O

No inputs or outputs.

Error
Result code Error reason
ResultFlowControl service is unavailable caused by service fusing
Dedicated statistics indicator

No indicators are exposed.
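
The breaker's state transitions can be sketched as a minimal state machine (names mirror the configuration options above; the thresholds are simplified to per-interval counts rather than true per-second rates, so this is an illustration, not the real algorithm):

```python
# Minimal state-machine sketch: "open" means the request flow is fully
# open, "broken" means fully closed, "half-open" probes availability
# after the recovery time.
class CircuitBreaker:
    OPEN, HALF_OPEN, BROKEN = "open", "half-open", "broken"

    def __init__(self, failure_threshold=1, success_threshold=1,
                 recovery_time_msec=1000):
        self.state = self.OPEN
        self.failure_threshold = failure_threshold
        self.success_threshold = success_threshold
        self.recovery = recovery_time_msec / 1000.0
        self.failures = 0
        self.successes = 0
        self.broken_at = None

    def allow(self, now):
        if self.state == self.BROKEN and now - self.broken_at >= self.recovery:
            self.state = self.HALF_OPEN   # try service availability
            self.successes = 0
        return self.state != self.BROKEN  # broken -> ResultFlowControl

    def on_result(self, ok, now):
        if ok:
            self.successes += 1
            if (self.state == self.HALF_OPEN
                    and self.successes >= self.success_threshold):
                self.state = self.OPEN    # fully open request flow
                self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.state = self.BROKEN  # fully close request flow
                self.broken_at = now
```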

Static Pass Probability Limiter plugin

Plugin limits request rate based on a static passing probability.

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
pass_pr float32 The passing probability. Value zero means no request can be processed; value 1.0 means no request is limited. Functionality No N/A
I/O

No inputs or outputs.

Error
Result code Error reason
ResultFlowControl service is unavailable caused by probability limit
Dedicated statistics indicator

No indicators are exposed.
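
The pass_pr semantics reduce to a single independent coin flip per request, as in this sketch (the function name is invented for illustration):

```python
import random

# Sketch of the pass_pr semantics: each request passes independently
# with probability pass_pr (0.0 blocks every request, 1.0 limits none).
def allow(pass_pr, rng=random.random):
    return rng() < pass_pr

random.seed(7)
passed = sum(allow(0.3) for _ in range(10000))  # roughly 3000 of 10000 pass
```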

No More Failure Limiter plugin

Plugin limits how many failed requests can be returned before all subsequent requests are blocked.

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
failure_task_data_key string The key name of the data checked to determine whether a task is a concerned failure task. Functionality No N/A
failure_task_data_value string The value of the data checked to determine whether a task is a concerned failure task. Functionality No N/A
I/O

No inputs or outputs.

Error
Result code Error reason
ResultFlowControl service is unavailable caused by failure limitation
Dedicated statistics indicator

No indicators are exposed.
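
The blocking behavior can be sketched as follows (a hypothetical simplification: the max_failures count is an assumption for illustration, since the configuration above only names the key/value that identifies a concerned failure task):

```python
# Sketch: once enough tasks carrying failure_task_data_key ==
# failure_task_data_value have been seen, every subsequent request is
# rejected (which would map to ResultFlowControl).
class NoMoreFailureLimiter:
    def __init__(self, failure_task_data_key, failure_task_data_value,
                 max_failures=1):   # max_failures is an assumed knob
        self.key = failure_task_data_key
        self.value = failure_task_data_value
        self.remaining = max_failures
        self.blocked = False

    def allow(self):
        return not self.blocked

    def on_task_finished(self, task_data):
        if task_data.get(self.key) == self.value:
            self.remaining -= 1
            if self.remaining <= 0:
                self.blocked = True
```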

Simple Common Cache plugin

Plugin caches data and uses it to serve subsequent requests directly.

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
hit_keys []string The data under each of these keys is used to check whether a request hits or misses the cache. Functionality No N/A
ttl_sec uint32 Time to live of cached data in seconds. Functionality Yes 600 (10 mins)
cache_key string The data with the key will be cached in internal storage as the plugin input (caching) and output (reusing). I/O No N/A
finish_if_hit bool The flag that indicates whether the pipeline finishes after hitting cached data. Functionality Yes true
I/O
Data name Configuration option name Type Data Type Optional
Reusing data cache_key Output interface{} No
Error
Result code Error reason
ResultInternalServerError failed to read or write cache data
Dedicated statistics indicator

No indicators are exposed.
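
The hit_keys / ttl_sec behavior can be sketched as a keyed cache (a hypothetical sketch; the class and method names are invented):

```python
import time

# Sketch of the cache behavior: the tuple of values under hit_keys
# identifies a request; on a hit, the value stored under cache_key is
# reused until ttl_sec elapses.
class SimpleCommonCache:
    def __init__(self, hit_keys, ttl_sec=600):
        self.hit_keys = hit_keys
        self.ttl = ttl_sec
        self.store = {}   # hit-key tuple -> (cached value, stored-at)

    def _id(self, task_data):
        return tuple(task_data.get(k) for k in self.hit_keys)

    def lookup(self, task_data, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(self._id(task_data))
        if entry and now - entry[1] <= self.ttl:
            return entry[0]        # hit: reuse cached data
        return None                # miss

    def save(self, task_data, value, now=None):
        now = time.time() if now is None else now
        self.store[self._id(task_data)] = (value, now)
```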

Simple Common Mock plugin

Plugin mocks data for a failed request.

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
plugin_concerned string The plugin whose processing failure is considered when applying the mock. Functionality No N/A
task_error_code_concerned string The result code that is considered when applying the mock. Functionality No N/A
mock_task_data_key string The key name of mock data to store as the plugin output. I/O No N/A
mock_task_data_value string The mock data to store as the plugin output. I/O Yes ""

Available task error result codes:

  • ResultUnknownError
  • ResultServiceUnavailable
  • ResultInternalServerError
  • ResultTaskCancelled
  • ResultMissingInput
  • ResultBadInput
  • ResultRequesterGone
  • ResultFlowControl
I/O
Data name Configuration option name Type Data Type Optional
The key to store mock data mock_task_data_key Output string No
Mock data mock_task_data_value Output string Yes
Error

No errors are returned.

Dedicated statistics indicator

No indicators are exposed.
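
A possible configuration for this plugin could look like the following (field names come from the table above; the instance name, concerned plugin name, and data values are invented for illustration):

```python
import json

# Hypothetical configuration: if the "http-output" plugin fails with
# ResultServiceUnavailable, store a fallback body under "response_body".
config = {
    "plugin_name": "mock-fallback",
    "plugin_concerned": "http-output",
    "task_error_code_concerned": "ResultServiceUnavailable",
    "mock_task_data_key": "response_body",
    "mock_task_data_value": "{\"status\": \"degraded\"}",
}
payload = json.dumps(config)
```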

Python plugin

Plugin executes Python code.

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
code string The python code to be executed. Functionality No N/A
version string The version of python interpreter. Functionality Yes 2
timeout_sec uint16 The timeout in seconds that limits Python code execution. Functionality Yes 10
input_key string The key name of standard input data for python code stored in internal storage as the plugin input. I/O Yes ""
output_key string The key name of standard output data for python code stored in internal storage as the plugin output. I/O Yes ""
I/O
Data name Configuration option name Type Data Type Optional
Standard input data input_key Input []byte Yes
Standard output data output_key Output []byte Yes
Error
Result code Error reason
ResultServiceUnavailable failed to get standard input
ResultServiceUnavailable failed to launch python interpreter
ResultServiceUnavailable failed to execute python code
ResultInternalServerError failed to load/read data
ResultTaskCancelled task is cancelled
Dedicated statistics indicator

No indicators are exposed.
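
A possible configuration for this plugin could look like the following (option names come from the table above; the instance name, code snippet, and key names are invented for illustration):

```python
import json

# Hypothetical configuration: pipe the request body through a short
# Python snippet that upper-cases it, reading stdin and writing stdout.
config = {
    "plugin_name": "uppercase-body",
    "code": "import sys; sys.stdout.write(sys.stdin.read().upper())",
    "version": "2",
    "timeout_sec": 10,
    "input_key": "request_body",
    "output_key": "transformed_body",
}
payload = json.dumps(config)
```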

Upstream Output plugin

Plugin outputs the request to an upstream pipeline and waits for the response.

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
target_pipelines []string The list of upstream pipeline names. Functionality No N/A
route_policy string The name of the route policy used to select an upstream pipeline from the target_pipelines option for a task. Available policies are round_robin, weighted_round_robin, random, weighted_random, least_wip_requests, hash and filter. Functionality Yes "round_robin"
timeout_sec uint16 The timeout in seconds to wait for upstream processing. Functionality Yes 120 (2 minutes)
request_data_keys []string The key names of the data in the current pipeline, each of which is passed to the target pipeline as the input part of the cross-pipeline request. The downstream_input plugin handles the data as the input. I/O No []
target_weights []uint16 The weight of each upstream pipeline, only for weighted_round_robin and weighted_random policies. Functionality Yes [1...]
value_hashed_keys string The key names of the values in the current pipeline used to calculate the hash value for the hash policy of upstream pipeline selection. Functionality No N/A
filter_conditions []map[string]string Each map in the list is the condition set for the target pipeline at the same index. A map key is the key of a value in the task; the map value is the match condition, which supports regular expressions. Functionality No N/A
I/O
Data name Configuration option name Type Data Type Optional
Data of cross-pipeline request request_data_keys Output (intended to be sent to the upstream pipeline) map[interface{}]interface{} Yes
Error
Result code Error reason
ResultServiceUnavailable upstream pipeline selector returns empty pipeline name
ResultServiceUnavailable upstream timed out
ResultServiceUnavailable failed to commit cross-pipeline request to upstream
ResultInternalServerError downstream received nil upstream response
ResultInternalServerError downstream received wrong upstream response
ResultInternalServerError failed to output data
ResultTaskCancelled task is cancelled
Dedicated statistics indicator

No indicators are exposed.
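
Two of the routing policies named above can be sketched as follows (simplified illustrations; the gateway's actual selection logic, including its hash function, may differ):

```python
import hashlib
import itertools

# round_robin: cycle through the target pipelines in order.
def round_robin(target_pipelines):
    return itertools.cycle(target_pipelines)

# hash: pick a pipeline deterministically from the hashed values, so the
# same values always route to the same target.
def hash_select(target_pipelines, hashed_values):
    digest = hashlib.md5("".join(hashed_values).encode()).digest()
    return target_pipelines[digest[0] % len(target_pipelines)]
```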

Downstream Input plugin

Plugin handles a downstream request to the running pipeline as input and sends the response back.

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
response_data_keys []string The key names of the data in the current pipeline, each of which is sent back to the downstream pipeline as the output part of the cross-pipeline response. The upstream_output plugin handles the data as the output. I/O No []
I/O
Data name Configuration option name Type Data Type Optional
Data of cross-pipeline response response_data_keys Input (intended to be sent to the downstream pipeline) map[interface{}]interface{} Yes
Error
Result code Error reason
ResultInternalServerError upstream received wrong downstream request
ResultInternalServerError failed to output data
Dedicated statistics indicator

No indicators are exposed.

Ease Monitor Graphite Validator plugin

Plugin validates input data to check whether it is valid Ease Monitor graphite data in the plaintext protocol.

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
data_key string The key name of the data to check as the plugin input. I/O No N/A
I/O
Data name Configuration option name Type Data Type Optional
Data data_key Input string No
Error
Result code Error reason
ResultBadInput graphite data got EOF
ResultBadInput graphite data want 4 fields('#'-splitted)
Dedicated statistics indicator

No indicators are exposed.
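
The validation rule implied by the error table reduces to a shape check, as in this sketch (the function name is invented; the meanings of the four fields are not specified here):

```python
# Sketch of the validation implied by the error table above: plaintext
# Ease Monitor graphite data is expected to consist of 4 '#'-separated
# fields, and empty input is rejected as EOF.
def validate_graphite(data):
    if not data:
        return "ResultBadInput"    # graphite data got EOF
    if len(data.split("#")) != 4:
        return "ResultBadInput"    # want 4 fields ('#'-splitted)
    return None                    # data looks valid
```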

Ease Monitor Graphite GID Extractor plugin

Plugin extracts Ease Monitor global ID from Ease Monitor graphite data.

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
gid_key string The key name of global ID stored in internal storage as the plugin output. I/O No N/A
data_key string The key name of the data to extract the global ID from, as the plugin input. I/O No N/A
I/O
Data name Configuration option name Type Data Type Optional
Global ID gid_key Output string No
Data data_key Input string No
Error
Result code Error reason
ResultMissingInput input got wrong value
ResultBadInput unexpected EOF
ResultBadInput graphite data want 4 fields('#'-splitted)
ResultInternalServerError failed to output global ID
Dedicated statistics indicator

No indicators are exposed.

Ease Monitor JSON GID extractor plugin

Plugin extracts Ease Monitor global ID from Ease Monitor JSON data.

Configuration
Parameter name Data type (golang) Description Type Optional Default value (golang)
plugin_name string The plugin instance name. Functionality No N/A
gid_key string The key name of global ID stored in internal storage as the plugin output. I/O No N/A
data_key string The key name of the data to extract the global ID from, as the plugin input. I/O No N/A
I/O
Data name Configuration option name Type Data Type Optional
Global ID gid_key Output string No
Data data_key Input string No
Error
Result code Error reason
ResultMissingInput input got wrong value
ResultBadInput invalid json
ResultInternalServerError failed to output global ID
Dedicated statistics indicator

No indicators are exposed.

6.3 Open API

The API is described strictly following the Swagger specification; you can copy it into the Swagger Editor to get a prettier layout.
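
As a quick orientation before the full specification, a request body for the plugin-creation endpoint can be built like this ("http_input" and the config fields are illustrative assumptions, not confirmed plugin types):

```python
import json

# Builds a body matching the PluginCreationRequest definition below:
# both "type" and "config" are required.
def plugin_creation_request(plugin_type, config):
    return json.dumps({"type": plugin_type, "config": config})

body = plugin_creation_request("http_input", {"plugin_name": "my-input"})
# POST the body to /admin/v1/plugins with Content-Type: application/json;
# expect 200 on success, 400 for an invalid type or configuration, and
# 409 if a plugin instance with the given name already exists.
```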

6.3.1 Administration API

swagger: '2.0'
info:
  title: Ease Gateway Administration API
  description: Ease Gateway supports APIs on administration panel.
  version: "1.0"
schemes:
  - http
  - https
basePath: /admin/v1
produces:
  - application/json
paths:
  /plugin-types:
    get:
      summary: Retrieves Plugin Type
      description: |
        The Plugin Type retrieve endpoint returns all available plugin types the Gateway currently
        supports, which can be used to create a plugin instance with a specific configuration.
      responses:
        200:
          description: Plugin types response.
          schema:
            $ref: '#/definitions/PluginTypesRetrieveResponse'
        default:
          description: Unexpected error.
          schema:
            $ref: '#/definitions/Error'
  /pipeline-types:
    get:
      summary: Retrieves Pipeline Type
      description: |
        The Pipeline Type retrieve endpoint returns all available pipeline types the Gateway
        currently supports, which can be used to create a pipeline instance with a specific configuration.
      responses:
        200:
          description: Pipeline types response.
          schema:
            $ref: '#/definitions/PipelineTypesRetrieveResponse'
        default:
          description: Unexpected error.
          schema:
            $ref: '#/definitions/Error'
  /plugins:
    post:
      summary: Creates Plugin Instance
      description: |
        The Plugin Instance creation endpoint creates a plugin instance according to the given
        plugin type and configuration.
      parameters:
        - name: pluginCreationRequest
          in: body
          schema:
            $ref: '#/definitions/PluginCreationRequest'
          required: true
          description: Plugin type and configuration.
      responses:
        400:
          description: |
            Invalid plugin instance creation request parameter, including an invalid plugin type
            or an invalid configuration for the plugin instance being created.
          schema:
            $ref: '#/definitions/Error'
        409:
          description: |
            A plugin instance with the given name already exists.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: Plugin instance is created successfully.
    get:
      summary: Retrieves Plugin Instances
      description: |
        The Plugin Instances retrieve endpoint returns all existing plugin instances
        the Gateway currently has.
      parameters:
        - name: pluginsRetrieveRequest
          in: body
          schema:
            $ref: '#/definitions/PluginsRetrieveRequest'
          required: true
          description: Plugin instance retrieve conditions.
      responses:
        400:
          description: |
            Invalid plugin instance retrieve request parameter, including an invalid plugin type.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: Plugin instance response.
          schema:
            $ref: '#/definitions/PluginsRetrieveResponse'
    put:
      summary: Updates Plugin Instance
      description: |
        The Plugin Instance update endpoint updates a plugin instance according to the given
        plugin type and configuration.
      parameters:
        - name: pluginUpdateRequest
          in: body
          schema:
            $ref: '#/definitions/PluginUpdateRequest'
      responses:
        400:
          description: |
            Invalid plugin instance update request parameter, including an invalid plugin type
            or an invalid configuration for the plugin instance being updated.
          schema:
            $ref: '#/definitions/Error'
        404:
          description: Plugin instance not found by given name.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: Plugin instance is updated successfully.
  /plugins/{pluginName}:
    get:
      summary: Retrieves Plugin Instance
      description: |
        The Plugin Instance retrieve endpoint returns an existing plugin instance,
        looked up by plugin name.
      parameters:
        - name: pluginName
          in: path
          description: Plugin instance name to retrieve.
          required: true
          type: string
      responses:
        400:
          description: Invalid plugin instance name to retrieve.
          schema:
            $ref: '#/definitions/Error'
        404:
          description: Plugin instance not found by given name.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: Plugin instance response.
          schema:
            $ref: '#/definitions/PluginSpec'
    delete:
      summary: Deletes Plugin Instance
      description: |
        The Plugin Instance deletion endpoint deletes an existing plugin instance,
        looked up by plugin name.
      parameters:
        - name: pluginName
          in: path
          description: Plugin instance name to delete.
          required: true
          type: string
      responses:
        400:
          description: Invalid plugin instance name to delete.
          schema:
            $ref: '#/definitions/Error'
        404:
          description: Plugin instance not found by given name.
          schema:
            $ref: '#/definitions/Error'
        406:
          description: |
            Plugin instance could not be deleted by given name since it is used by
            one or more pipelines.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: Plugin instance is deleted successfully.
  /pipelines:
    post:
      summary: Creates Pipeline Instance
      description: |
        The Pipeline Instance creation endpoint creates a pipeline instance according
        to the given pipeline type and configuration.
      parameters:
        - name: pipelineCreationRequest
          in: body
          schema:
            $ref: '#/definitions/PipelineCreationRequest'
          required: true
          description: Pipeline type and configuration.
      responses:
        400:
          description: |
            Invalid pipeline instance creation request parameter, including an invalid pipeline
            type, an invalid configuration for the pipeline instance being created,
            or one or more plugin instances in the pipeline not being found.
          schema:
            $ref: '#/definitions/Error'
        409:
          description: |
            A pipeline instance with the given name already exists.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: Pipeline instance is created successfully.
    get:
      summary: Retrieves Pipeline Instances
      description: |
        The Pipeline Instances retrieve endpoint returns all existing pipeline instances
        the Gateway currently has.
      parameters:
        - name: pipelinesRetrieveRequest
          in: body
          schema:
            $ref: '#/definitions/PipelinesRetrieveRequest'
          required: true
          description: Pipeline instance retrieve conditions.
      responses:
        400:
          description: |
            Invalid pipeline instance retrieve request parameter, including an invalid pipeline type.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: Pipeline instances response.
          schema:
            $ref: '#/definitions/PipelinesRetrieveResponse'
    put:
      summary: Updates Pipeline Instance
      description: |
        The Pipeline Instance update endpoint updates a pipeline instance according to the given
        pipeline type and configuration.
      parameters:
        - name: pipelineUpdateRequest
          in: body
          schema:
            $ref: '#/definitions/PipelineUpdateRequest'
      responses:
        400:
          description: |
            Invalid pipeline instance update request parameter, including an invalid pipeline type
            or an invalid configuration for the pipeline instance being updated.
          schema:
            $ref: '#/definitions/Error'
        404:
          description: |
            Pipeline instance not found by given name, or one or more plugin instances in the
            pipeline are not found.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: Pipeline instance is updated successfully.
  /pipelines/{pipelineName}:
    get:
      summary: Retrieves Pipeline Instance
      description: |
        The Pipeline Instance retrieve endpoint returns an existing pipeline instance,
        looked up by pipeline name.
      parameters:
        - name: pipelineName
          in: path
          description: Pipeline instance name to retrieve.
          required: true
          type: string
      responses:
        400:
          description: Invalid pipeline instance name to retrieve.
          schema:
            $ref: '#/definitions/Error'
        404:
          description: Pipeline instance not found by given name.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: Pipeline instance response.
          schema:
            $ref: '#/definitions/PipelineSpec'
    delete:
      summary: Deletes Pipeline Instance
      description: |
        The Pipeline Instance deletion endpoint deletes an existing pipeline instance,
        looked up by pipeline name.
      parameters:
        - name: pipelineName
          in: path
          description: Pipeline instance name to delete.
          required: true
          type: string
      responses:
        400:
          description: Invalid pipeline instance name to delete.
          schema:
            $ref: '#/definitions/Error'
        404:
          description: Pipeline instance not found by given name.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: Pipeline instance is deleted successfully.
definitions:
  PluginTypesRetrieveResponse:
    type: object
    required:
      - plugin_types
    properties:
      plugin_types:
        type: array
        items:
          type: string
  PipelineTypesRetrieveResponse:
    type: object
    required:
      - pipeline_types
    properties:
      pipeline_types:
        type: array
        items:
          type: string
  PluginCreationRequest:
    type: object
    required:
      - type
      - config
    properties:
      type:
        type: string
        description: One valid type of available plugin types listed in PluginTypesRetrieveResponse.
      config:
        type: object
        description: |
          A specific configuration object for the plugin type. Check the Plugin Reference document
          for more information.
  PluginsRetrieveRequest:
    type: object
    properties:
      name_pattern:
        type: string
        description: Plugin name filter condition, supports regular expressions.
      types:
        type: array
        items:
          type: string
        description: Plugin types filter condition, supports regular expressions.
  PluginsRetrieveResponse:
    type: object
    required:
      - plugins
    properties:
      plugins:
        type: array
        items:
          $ref: '#/definitions/PluginSpec'
  PluginUpdateRequest:
    type: object
    required:
      - type
      - config
    properties:
      type:
        type: string
        description: One valid type of available plugin types listed in PluginTypesRetrieveResponse.
      config:
        type: object
        description: |
          A specific configuration object for the plugin type. Check the Plugin Reference document
          for more information.
  PluginSpec:
    type: object
    required:
      - type
      - config
    properties:
      type:
        type: string
        description: One valid type of available plugin types listed in PluginTypesRetrieveResponse.
      config:
        type: object
        description: |
          A specific configuration object for the plugin type. Check the Plugin Reference document
          for more information.
  PipelineCreationRequest:
    type: object
    required:
      - type
      - config
    properties:
      type:
        type: string
        description: One valid type of available pipeline types listed in PipelineTypesRetrieveResponse.
      config:
        type: object
        description: |
          A specific configuration object for the pipeline type. Check the Pipeline Reference document
          for more information.
  PipelinesRetrieveRequest:
    type: object
    properties:
      name_pattern:
        type: string
        description: Pipeline name filter condition, supports regular expressions.
      types:
        type: array
        items:
          type: string
        description: Pipeline types filter condition, supports regular expressions.
  PipelinesRetrieveResponse:
    type: object
    required:
      - pipelines
    properties:
      pipelines:
        type: array
        items:
          $ref: '#/definitions/PipelineSpec'
  PipelineUpdateRequest:
    type: object
    required:
      - type
      - config
    properties:
      type:
        type: string
        description: One valid type of available pipeline types listed in PipelineTypesRetrieveResponse.
      config:
        type: object
        description: |
          A specific configuration object for the pipeline type. Check the Pipeline Reference document
          for more information.
  PipelineSpec:
    type: object
    required:
      - type
      - config
    properties:
      type:
        type: string
        description: One valid type of available pipeline types listed in PipelineTypesRetrieveResponse.
      config:
        type: object
        description: |
          A specific configuration object for the pipeline type. Check the Pipeline Reference document
          for more information.
  Error:
    type: object
    required:
      - Error
    properties:
      Error:
        type: string

6.3.2 Statistics API

swagger: '2.0'
info:
  title: Ease Gateway Statistics API
  description: Ease Gateway supports APIs for statistics.
  version: "1.0"
schemes:
  - http
  - https
basePath: /statistics/v1
produces:
  - application/json
paths:
  /pipelines/{pipelineName}/plugins/{pluginName}/indicators:
    get:
      summary: Retrieves Plugin Statistics Indicator Names
      description: |
        The Plugin Statistics Indicators retrieve endpoint returns the name list of all available
        statistics indicators the plugin instance exposes against the pipeline instance.
      parameters:
        - name: pipelineName
          in: path
          description: Pipeline instance name to query.
          required: true
          type: string
        - name: pluginName
          in: path
          description: Plugin instance name to query.
          required: true
          type: string
      responses:
        400:
          description: Invalid pipeline or plugin instance name to query.
          schema:
            $ref: '#/definitions/Error'
        404:
          description: The statistics of pipeline instance not found by given name.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: The names of statistics indicator.
          schema:
            $ref: '#/definitions/PluginIndicatorNames'
  /pipelines/{pipelineName}/plugins/{pluginName}/indicators/{indicatorName}/value:
    get:
      summary: Retrieves Plugin Statistics Indicator Value
      description: |
        The Plugin Statistics Indicator Value retrieve endpoint returns the value of the given
        statistics indicator the plugin instance exposes against the pipeline instance.
      parameters:
        - name: pipelineName
          in: path
          description: Pipeline instance name to query.
          required: true
          type: string
        - name: pluginName
          in: path
          description: Plugin instance name to query.
          required: true
          type: string
        - name: indicatorName
          in: path
          description: Statistics indicator name to query.
          required: true
          type: string
      responses:
        400:
          description: Invalid pipeline, plugin instance name or indicator name to query.
          schema:
            $ref: '#/definitions/Error'
        403:
          description: Failed to query value from plugin indicator.
          schema:
            $ref: '#/definitions/Error'
        404:
          description: The statistics of pipeline instance or indicator not found by given name.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: The value of the statistics indicator.
          schema:
            $ref: '#/definitions/PluginIndicatorValue'
  /pipelines/{pipelineName}/plugins/{pluginName}/indicators/{indicatorName}/desc:
    get:
      summary: Retrieves Plugin Statistics Indicator Description
      description: |
        The Plugin Statistics Indicator Description retrieve endpoint returns the description
        of the given statistics indicator that the plugin instance exposes within the pipeline instance.
      parameters:
        - name: pipelineName
          in: path
          description: Pipeline instance name to query.
          required: true
          type: string
        - name: pluginName
          in: path
          description: Plugin instance name to query.
          required: true
          type: string
        - name: indicatorName
          in: path
          description: Statistics indicator name to query.
          required: true
          type: string
      responses:
        400:
          description: Invalid pipeline, plugin instance name or indicator name to query.
          schema:
            $ref: '#/definitions/Error'
        403:
          description: Failed to describe plugin indicator.
          schema:
            $ref: '#/definitions/Error'
        404:
          description: The statistics of pipeline instance or indicator not found by given name.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: The description of the statistics indicator.
          schema:
            $ref: '#/definitions/PluginIndicatorDescription'
  /pipelines/{pipelineName}/indicators:
    get:
      summary: Retrieves Pipeline Statistics Indicator Names
      description: |
        The Pipeline Statistics Indicators retrieve endpoint returns the names of all
        statistics indicators that the pipeline instance exposes.
      parameters:
        - name: pipelineName
          in: path
          description: Pipeline instance name to query.
          required: true
          type: string
      responses:
        400:
          description: Invalid pipeline instance name to query.
          schema:
            $ref: '#/definitions/Error'
        404:
          description: The statistics of pipeline instance not found by given name.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: The names of the statistics indicators.
          schema:
            $ref: '#/definitions/PipelineIndicatorNames'
  /pipelines/{pipelineName}/indicators/{indicatorName}/value:
    get:
      summary: Retrieves Pipeline Statistics Indicator Value
      description: |
        The Pipeline Statistics Indicator Value retrieve endpoint returns the value of the given
        statistics indicator that the pipeline instance exposes.
      parameters:
        - name: pipelineName
          in: path
          description: Pipeline instance name to query.
          required: true
          type: string
        - name: indicatorName
          in: path
          description: Statistics indicator name to query.
          required: true
          type: string
      responses:
        400:
          description: Invalid pipeline instance name or indicator name to query.
          schema:
            $ref: '#/definitions/Error'
        403:
          description: Failed to query value from pipeline indicator.
          schema:
            $ref: '#/definitions/Error'
        404:
          description: The statistics of pipeline instance or indicator not found by given name.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: The value of the statistics indicator.
          schema:
            $ref: '#/definitions/PipelineIndicatorValue'
  /pipelines/{pipelineName}/indicators/{indicatorName}/desc:
    get:
      summary: Retrieves Pipeline Statistics Indicator Description
      description: |
        The Pipeline Statistics Indicator Description retrieve endpoint returns the description
        of the given statistics indicator that the pipeline instance exposes.
      parameters:
        - name: pipelineName
          in: path
          description: Pipeline instance name to query.
          required: true
          type: string
        - name: indicatorName
          in: path
          description: Statistics indicator name to query.
          required: true
          type: string
      responses:
        400:
          description: Invalid pipeline or indicator name to query.
          schema:
            $ref: '#/definitions/Error'
        403:
          description: Failed to describe pipeline indicator.
          schema:
            $ref: '#/definitions/Error'
        404:
          description: The statistics of pipeline instance or indicator not found by given name.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: The description of the statistics indicator.
          schema:
            $ref: '#/definitions/PipelineIndicatorDescription'
  /pipelines/{pipelineName}/task/indicators:
    get:
      summary: Retrieves Task Statistics Indicator Names
      description: |
        The Task Statistics Indicators retrieve endpoint returns the names of all
        statistics indicators that the task exposes within the pipeline instance.
      parameters:
        - name: pipelineName
          in: path
          description: Pipeline instance name to query.
          required: true
          type: string
      responses:
        400:
          description: Invalid pipeline instance name to query.
          schema:
            $ref: '#/definitions/Error'
        404:
          description: The statistics of pipeline instance not found by given name.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: The names of the statistics indicators.
          schema:
            $ref: '#/definitions/TaskIndicatorNames'
  /pipelines/{pipelineName}/task/indicators/{indicatorName}/value:
    get:
      summary: Retrieves Task Statistics Indicator Value
      description: |
        The Task Statistics Indicator Value retrieve endpoint returns the value of the given
        statistics indicator that the task exposes within the pipeline instance.
      parameters:
        - name: pipelineName
          in: path
          description: Pipeline instance name to query.
          required: true
          type: string
        - name: indicatorName
          in: path
          description: Statistics indicator name to query.
          required: true
          type: string
      responses:
        400:
          description: Invalid pipeline instance name or indicator name to query.
          schema:
            $ref: '#/definitions/Error'
        403:
          description: Failed to query value from task indicator.
          schema:
            $ref: '#/definitions/Error'
        404:
          description: The statistics of pipeline instance or indicator not found by given name.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: The value of the statistics indicator.
          schema:
            $ref: '#/definitions/TaskIndicatorValue'
  /pipelines/{pipelineName}/task/indicators/{indicatorName}/desc:
    get:
      summary: Retrieves Task Statistics Indicator Description
      description: |
        The Task Statistics Indicator Description retrieve endpoint returns the description
        of the given statistics indicator that the task exposes within the pipeline instance.
      parameters:
        - name: pipelineName
          in: path
          description: Pipeline instance name to query.
          required: true
          type: string
        - name: indicatorName
          in: path
          description: Statistics indicator name to query.
          required: true
          type: string
      responses:
        400:
          description: Invalid pipeline instance name or indicator name to query.
          schema:
            $ref: '#/definitions/Error'
        403:
          description: Failed to describe task indicator.
          schema:
            $ref: '#/definitions/Error'
        404:
          description: The statistics of pipeline instance or indicator not found by given name.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: The statistics indicator description.
          schema:
            $ref: '#/definitions/TaskIndicatorDescription'
  /gateway/uptime:
    get:
      summary: Retrieves Gateway Uptime
      description: |
        The Gateway Uptime retrieve endpoint returns the uptime, in nanoseconds, of the
        gateway service instance currently serving the API request.
      responses:
        200:
          description: The uptime of the gateway service instance.
          schema:
            $ref: '#/definitions/UptimeDuration'
  /gateway/rusage:
    get:
      summary: Retrieves Gateway Resource Usage
      description: |
        The Gateway Resource Usage retrieve endpoint returns the OS resource usage consumed by
        the gateway service instance currently serving the API request.
      responses:
        500:
          description: Failed to execute getrusage() syscall.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: The resource usage of the gateway service instance.
          schema:
            $ref: '#/definitions/ResourceUsage'
  /gateway/loadavg:
    get:
      summary: Retrieves Gateway Host Load
      description: |
        The Gateway Host Load retrieve endpoint returns the OS load averages of the host running
        the gateway service instance currently serving the API request.
      responses:
        403:
          description: Failed to access /proc/loadavg file.
          schema:
            $ref: '#/definitions/Error'
        200:
          description: The host average load on OS level.
          schema:
            $ref: '#/definitions/AvgLoad'
definitions:
  PluginIndicatorNames:
    type: object
    required:
      - names
    properties:
      names:
        type: array
        items:
          type: string
# Could be one of the following (not a complete list; a plugin may expose one or more indicators of its own):
# - EXECUTION_COUNT_ALL
# - EXECUTION_COUNT_FAILURE
# - EXECUTION_COUNT_SUCCESS
# - EXECUTION_TIME_50_PERCENT_FAILURE
# - EXECUTION_TIME_50_PERCENT_SUCCESS
# - EXECUTION_TIME_90_PERCENT_FAILURE
# - EXECUTION_TIME_90_PERCENT_SUCCESS
# - EXECUTION_TIME_99_PERCENT_FAILURE
# - EXECUTION_TIME_99_PERCENT_SUCCESS
# - EXECUTION_TIME_MAX_ALL
# - EXECUTION_TIME_MAX_FAILURE
# - EXECUTION_TIME_MAX_SUCCESS
# - EXECUTION_TIME_MIN_ALL
# - EXECUTION_TIME_MIN_FAILURE
# - EXECUTION_TIME_MIN_SUCCESS
# - EXECUTION_TIME_STD_DEV_FAILURE
# - EXECUTION_TIME_STD_DEV_SUCCESS
# - EXECUTION_TIME_SUM_ALL
# - EXECUTION_TIME_SUM_FAILURE
# - EXECUTION_TIME_SUM_SUCCESS
# - EXECUTION_TIME_VARIANCE_FAILURE
# - EXECUTION_TIME_VARIANCE_SUCCESS
# - THROUGHPUT_RATE_LAST_15MIN_ALL
# - THROUGHPUT_RATE_LAST_15MIN_FAILURE
# - THROUGHPUT_RATE_LAST_15MIN_SUCCESS
# - THROUGHPUT_RATE_LAST_1MIN_ALL
# - THROUGHPUT_RATE_LAST_1MIN_FAILURE
# - THROUGHPUT_RATE_LAST_1MIN_SUCCESS
# - THROUGHPUT_RATE_LAST_5MIN_ALL
# - THROUGHPUT_RATE_LAST_5MIN_FAILURE
# - THROUGHPUT_RATE_LAST_5MIN_SUCCESS
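#
# An illustrative response body for the plugin indicator names endpoint
# (the exact names depend on the plugin type, per the list above):
#
# { "names": ["EXECUTION_COUNT_ALL", "EXECUTION_COUNT_SUCCESS", "EXECUTION_COUNT_FAILURE"] }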
  PluginIndicatorValue:
    type: object
    required:
      - value
    properties:
      value:
        type: object
        description: |
          A value object specific to the plugin indicator type.
          See the Plugin Reference document for more information.
  PluginIndicatorDescription:
    type: object
    required:
      - desc
    properties:
      desc:
        type: string
        description: |
          A human-readable description of the specific plugin indicator type.
          See the Plugin Reference document for more information.
  PipelineIndicatorNames:
    type: object
    required:
      - names
    properties:
      names:
        type: array
        items:
          type: string
          enum:
            - EXECUTION_COUNT_ALL
            - EXECUTION_TIME_50_PERCENT_ALL
            - EXECUTION_TIME_90_PERCENT_ALL
            - EXECUTION_TIME_99_PERCENT_ALL
            - EXECUTION_TIME_MAX_ALL
            - EXECUTION_TIME_MIN_ALL
            - EXECUTION_TIME_STD_DEV_ALL
            - EXECUTION_TIME_SUM_ALL
            - EXECUTION_TIME_VARIANCE_ALL
            - THROUGHPUT_RATE_LAST_15MIN_ALL
            - THROUGHPUT_RATE_LAST_1MIN_ALL
            - THROUGHPUT_RATE_LAST_5MIN_ALL
  PipelineIndicatorValue:
    type: object
    required:
      - value
    properties:
      value:
        type: object
        description: |
          A value object specific to the pipeline indicator type.
          See the Pipeline Reference document for more information.
  PipelineIndicatorDescription:
    type: object
    required:
      - desc
    properties:
      desc:
        type: string
        description: |
          A human-readable description of the specific pipeline indicator type.
          See the Pipeline Reference document for more information.
  TaskIndicatorNames:
    type: object
    required:
      - names
    properties:
      names:
        type: array
        items:
          type: string
          enum:
            - EXECUTION_COUNT_ALL
            - EXECUTION_COUNT_FAILURE
            - EXECUTION_COUNT_SUCCESS
  TaskIndicatorValue:
    type: object
    required:
      - value
    properties:
      value:
        type: object
        description: |
          A value object specific to the task indicator type.
          See the Pipeline Reference document for more information.
  TaskIndicatorDescription:
    type: object
    required:
      - desc
    properties:
      desc:
        type: string
        description: |
          A human-readable description of the specific task indicator type.
          See the Pipeline Reference document for more information.
  UptimeDuration:
    type: object
    required:
      - desc
    properties:
      desc:
        type: integer
        description: The uptime of the gateway service instance, in nanoseconds.
  ResourceUsage:
    type: object
    required:
      - Utime
      - Stime
      - Maxrss
      - Ixrss
      - Idrss
      - Isrss
      - Minflt
      - Majflt
      - Nswap
      - Inblock
      - Oublock
      - Msgsnd
      - Msgrcv
      - Nsignals
      - Nvcsw
      - Nivcsw
    properties:
      Utime:
        $ref: '#/definitions/Utime'
      Stime:
        $ref: '#/definitions/Stime'
      Maxrss:
        type: integer
        description: Maximum resident set size.
      Ixrss:
        type: integer
        description: Integral shared memory size.
      Idrss:
        type: integer
        description: Integral unshared data size.
      Isrss:
        type: integer
        description: Integral unshared stack size.
      Minflt:
        type: integer
        description: Page reclaims (soft page faults).
      Majflt:
        type: integer
        description: Page faults (hard page faults).
      Nswap:
        type: integer
        description: Swaps.
      Inblock:
        type: integer
        description: Block input operations.
      Oublock:
        type: integer
        description: Block output operations.
      Msgsnd:
        type: integer
        description: IPC messages sent.
      Msgrcv:
        type: integer
        description: IPC messages received.
      Nsignals:
        type: integer
        description: Signals received.
      Nvcsw:
        type: integer
        description: Voluntary context switches.
      Nivcsw:
        type: integer
        description: Involuntary context switches.
  Utime:
    type: object
    description: User CPU time used.
    required:
      - Sec
      - Usec
    properties:
      Sec:
        type: integer
        description: Duration in second unit.
      Usec:
        type: integer
        description: Duration in microseconds unit.
  Stime:
    type: object
    description: System CPU time used.
    required:
      - Sec
      - Usec
    properties:
      Sec:
        type: integer
        description: Duration in second unit.
      Usec:
        type: integer
        description: Duration in microseconds unit.
  AvgLoad:
    type: object
    required:
      - load1
      - load5
      - load15
    properties:
      load1:
        type: number
        description: CPU and IO utilization over the last one-minute period.
      load5:
        type: number
        description: CPU and IO utilization over the last five-minute period.
      load15:
        type: number
        description: CPU and IO utilization over the last fifteen-minute period.
  Error:
    type: object
    required:
      - Error
    properties:
      Error:
        type: string
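The statistics endpoints above are plain HTTP GETs that return small JSON bodies. The sketch below shows one way to query a plugin indicator value from Python; the gateway address, the `/statistics/v1` base path, and the pipeline, plugin, and indicator names passed in the usage section are all hypothetical placeholders, not values fixed by this spec:

```python
import json
import urllib.request
from urllib.parse import quote


def indicator_value_url(base, pipeline, plugin, indicator):
    """Build the plugin-indicator value URL per the spec's path template."""
    return "%s/pipelines/%s/plugins/%s/indicators/%s/value" % (
        base, quote(pipeline, safe=""), quote(plugin, safe=""),
        quote(indicator, safe=""))


def parse_indicator_value(body):
    """Extract the required 'value' field from a PluginIndicatorValue body."""
    return json.loads(body)["value"]


if __name__ == "__main__":
    # Hypothetical gateway address and base path; adjust to your deployment.
    url = indicator_value_url("http://localhost:9090/statistics/v1",
                              "demo-pipeline", "http-input",
                              "EXECUTION_COUNT_ALL")
    # 400/403/404 responses surface as urllib.error.HTTPError.
    with urllib.request.urlopen(url) as resp:
        print(parse_indicator_value(resp.read()))
```

The same URL-building pattern applies to the `/indicators`, `/value`, and `/desc` variants of the pipeline and task endpoints; only the path segments differ.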

6.3.3 Health API

swagger: '2.0'
info:
  title: Ease Gateway Health API
  description: Ease Gateway supports APIs for health check.
  version: "1.0"
schemes:
  - http
  - https
basePath: /health/v1
produces:
  - application/json
paths:
  /check:
    get:
      summary: Checks Gateway Service Instance Existence
      description: |
        The Gateway Service Instance Health check endpoint returns an HTTP status code
        indicating whether the gateway service instance exists and is running normally.
      responses:
        200:
          description: |
            The gateway service instance which serves API request currently
            is existing and runs normally.
        default:
          description: Unexpected error.
          schema:
            $ref: '#/definitions/Error'
  /info:
    get:
      summary: Retrieves Gateway Service Instance Information
      description: |
        The Gateway Service Instance Health information endpoint returns the build
        information of the gateway service instance.
      responses:
        200:
          description: |
            The information of the gateway service instance
            which serves API request currently.
          schema:
            $ref: '#/definitions/HealthInfoResponse'
        default:
          description: Unexpected error.
          schema:
            $ref: '#/definitions/Error'
definitions:
  HealthInfoResponse:
    type: object
    required:
      - build
    properties:
      build:
        type: object
        required:
          - name
          - release
          - build
          - repository
        properties:
          name:
            type: string
          release:
            type: string
          build:
            type: string
          repository:
            type: string
  Error:
    type: object
    required:
      - Error
    properties:
      Error:
        type: string
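The health endpoints likewise return small JSON documents. A minimal Python sketch for fetching and unpacking the `/health/v1/info` response follows; the `localhost:9090` gateway address is a hypothetical placeholder, while the `/health/v1` base path and the `build` field names come from the spec above:

```python
import json


def parse_health_info(body):
    """Pull the required build metadata out of a HealthInfoResponse body."""
    build = json.loads(body)["build"]
    return {k: build[k] for k in ("name", "release", "build", "repository")}


if __name__ == "__main__":
    import urllib.request
    # Hypothetical gateway address; /health/v1 is the spec's basePath.
    with urllib.request.urlopen("http://localhost:9090/health/v1/info") as resp:
        print(parse_health_info(resp.read()))
```

For the `/check` endpoint no body parsing is needed: a 200 status alone signals a healthy instance, which makes it suitable as a load-balancer or orchestrator probe.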