Microservice architecture consulting and training

This site is written and maintained by Chris Richardson, author of the classic book POJOs in Action and the original founder of cloudfoundry.com. His areas of interest include Spring, Scala, microservice architecture, domain-driven design, NoSQL databases, distributed data management, and event-driven application programming. Chris is a serial entrepreneur; his latest venture, eventuate.io, is a platform for microservice applications and data services.

Chris regularly provides microservice design training and hands-on architecture consulting for enterprises. In recent years he has visited China several times to consult on microservice architecture for large companies including Huawei, SAP, HP, and Dongfeng Motor. If you would like to talk with Chris or explore a collaboration, use the button below to get in touch.

Book a training session

Example code for microservice applications

To go beyond theory, Chris provides a set of example applications that illustrate these patterns. The code uses the Eventuate framework to implement distributed data access in a microservice architecture. Click the button below to view it.

View the code




Pattern: Serverless deployment

Context

You have applied the Microservice architecture pattern and architected your system as a set of services. Each service is deployed as a set of service instances for throughput and availability.

Problem

How are services packaged and deployed?

Forces

  • Services are written using a variety of languages, frameworks, and framework versions
  • Each service consists of multiple service instances for throughput and availability
  • Services must be independently deployable and scalable
  • Service instances need to be isolated from one another
  • You need to be able to quickly build and deploy a service
  • You need to be able to constrain the resources (CPU and memory) consumed by a service
  • You need to monitor the behavior of each service instance
  • You want deployment to be reliable
  • You must deploy the application as cost-effectively as possible

Solution

Use a deployment infrastructure that hides any concept of servers (i.e. reserved or pre-allocated resources), whether physical hosts, virtual hosts, or containers. The infrastructure takes your service’s code and runs it. You are charged for each request based on the resources consumed.

To deploy your service using this approach, you package the code (e.g. as a ZIP file), upload it to the deployment infrastructure and describe the desired performance characteristics.
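As a concrete illustration of this workflow, the sketch below uses the AWS SDK for Python (boto3) to upload a packaged service as a ZIP file and declare its resource limits, with AWS Lambda as one example of such an infrastructure. The function name, IAM role ARN, runtime, and limits are hypothetical placeholders, not values taken from this article.

    import boto3

    lambda_client = boto3.client("lambda")

    # Read the packaged service code (a ZIP file containing the handler module).
    with open("service.zip", "rb") as f:
        zip_bytes = f.read()

    # Upload the package and describe the desired performance characteristics.
    lambda_client.create_function(
        FunctionName="order-service",      # hypothetical service name
        Runtime="python3.9",               # illustrative runtime choice
        Role="arn:aws:iam::123456789012:role/lambda-execution-role",  # placeholder ARN
        Handler="handler.handle_request",  # module.function that handles events
        Code={"ZipFile": zip_bytes},
        MemorySize=256,                    # resource limit in MB
        Timeout=30,                        # maximum duration in seconds
    )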

The deployment infrastructure is a utility operated by a public cloud provider. It typically uses either containers or virtual machines to isolate the services. However, these details are hidden from you. Neither you nor anyone else in your organization is responsible for managing any low-level infrastructure such as operating systems, virtual machines, etc.

Examples

There are a few different serverless deployment environments:

  • AWS Lambda
  • Google Cloud Functions
  • Azure Functions

They offer similar functionality, but AWS Lambda has the richest feature set. An AWS Lambda function is a stateless component that is invoked to handle events. To create an AWS Lambda function, you package the NodeJS, Java, or Python code for your service in a ZIP file and upload it to AWS Lambda. You also specify the name of the function that handles events, as well as resource limits.
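For illustration, a minimal handler of the kind described above might look as follows in Python; the module and function names are hypothetical and correspond to the handler setting supplied at upload time.

    # handler.py - a minimal, hypothetical AWS Lambda handler.
    # AWS Lambda invokes the function named in the handler setting
    # ("handler.handle_request") with the triggering event and a context object.
    def handle_request(event, context):
        # 'event' carries the input; its shape depends on the event source.
        # The return value is passed back to whatever invoked the function.
        return {"message": "Hello from Lambda", "input": event}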

When an event occurs, AWS Lambda finds an idle instance of your function, launching one if none is available, and invokes the handler function. AWS Lambda runs enough instances of your function to handle the load. Under the covers, it uses containers to isolate each instance of a lambda function and, as you might expect, runs those containers on EC2 instances.

There are four ways to invoke a lambda function. One option is to configure your lambda function to be invoked in response to an event generated by an AWS service such as S3, DynamoDB, or Kinesis. Examples of events include the following (a sketch of an S3-triggered handler appears after the list):

  • an object being created in an S3 bucket
  • an item being created, updated, or deleted in a DynamoDB table
  • a message becoming available to read from a Kinesis stream
  • an email being received via the Simple Email Service
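For the first case in the list, S3 delivers an event whose Records entries identify the bucket and object that changed. The sketch below is illustrative; the handler name is hypothetical.

    # Hypothetical handler for the "object created in an S3 bucket" event.
    # S3 delivers an event containing a 'Records' list; each record names the
    # bucket and the key of the object that triggered the invocation.
    def handle_s3_event(event, context):
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            print(f"New object s3://{bucket}/{key}")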

Another way to invoke a lambda function is to configure AWS API Gateway to route HTTP requests to your lambda function. API Gateway transforms the HTTP request into an event object, invokes the lambda function, and generates an HTTP response from the lambda function’s result.
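A handler sitting behind API Gateway (using the Lambda proxy integration) returns a status code, headers, and body, which API Gateway turns into the HTTP response. The sketch below is a minimal illustration under that assumption; the handler name is hypothetical.

    import json

    # Hypothetical handler invoked via API Gateway. The incoming HTTP request is
    # delivered as the 'event'; the returned statusCode/headers/body become the
    # HTTP response.
    def handle_http_request(event, context):
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"Hello, {name}"}),
        }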

You can also explicitly invoke your lambda function using the AWS Lambda Web Service API. The application that invokes the lambda function supplies a JSON object, which is passed to the lambda function, and the web service call returns the value returned by the lambda.
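For example, a caller using the AWS SDK for Python (boto3) can invoke the function directly and read back its result; the function name and payload below are hypothetical.

    import json
    import boto3

    client = boto3.client("lambda")

    # Invoke the function synchronously; the JSON payload becomes the handler's
    # 'event', and the response payload carries the handler's return value.
    response = client.invoke(
        FunctionName="order-service",           # hypothetical function name
        Payload=json.dumps({"orderId": "42"}),  # hypothetical input
    )
    result = json.loads(response["Payload"].read())
    print(result)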

The fourth way to invoke a lambda function is periodically, using a cron-like mechanism. You can, for example, tell AWS to invoke your lambda function every five minutes.
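One way to wire this up is a CloudWatch Events rule with a schedule expression that targets the function, as in the sketch below. The rule name and ARN are placeholders, and the permission that allows CloudWatch Events to invoke the function is omitted.

    import boto3

    events = boto3.client("events")

    # A rule that fires every five minutes, targeting the lambda function.
    events.put_rule(
        Name="every-five-minutes",             # hypothetical rule name
        ScheduleExpression="rate(5 minutes)",
    )
    events.put_targets(
        Rule="every-five-minutes",
        Targets=[{
            "Id": "order-service-target",
            # Placeholder ARN of the lambda function to invoke on schedule.
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:order-service",
        }],
    )
    # Note: the function must also grant invoke permission to CloudWatch Events
    # (lambda add-permission), which is omitted from this sketch.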

The cost of each invocation is a function of the duration of the invocation, which is measured in 100 millisecond increments, and the memory consumed.
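To make the billing rule concrete, here is a back-of-the-envelope sketch: duration is rounded up to the next 100 millisecond increment and multiplied by the function's memory size. The per-GB-second and per-request rates are illustrative assumptions, not quoted prices.

    import math

    # Illustrative rates; treat these as assumptions, not authoritative pricing.
    PRICE_PER_GB_SECOND = 0.00001667
    PRICE_PER_REQUEST = 0.20 / 1_000_000

    def invocation_cost(duration_ms: float, memory_mb: int) -> float:
        billed_ms = math.ceil(duration_ms / 100) * 100       # 100 ms increments
        gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
        return gb_seconds * PRICE_PER_GB_SECOND + PRICE_PER_REQUEST

    # A 230 ms invocation of a 512 MB function is billed as 300 ms:
    print(invocation_cost(230, 512))  # roughly 0.0000027 USD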

Resulting context

The benefits of using serverless deployment include:

  • It eliminates the need to spend time on the undifferentiated heavy lifting of managing low-level infrastructure. Instead, you can focus on your code.

  • The serverless deployment infrastructure is extremely elastic. It automatically scales your services to handle the load.

  • You pay for each request rather than provisioning virtual machines or containers that might be underutilized.

The drawbacks of serverless deployment include:

  • Significant limitations and constraints - A serverless deployment environment typically has far more constraints than a VM-based or container-based infrastructure. For example, AWS Lambda only supports a few languages. It is only suitable for deploying stateless applications that run in response to a request. You cannot deploy a long-running stateful application such as a database or message broker.

  • Limited “input sources” - lambdas can only respond to requests from a limited set of input sources. AWS Lambda is not intended to run services that, for example, subscribe to a message broker such as RabbitMQ.

  • Applications must start up quickly - serverless deployment is not a good fit if your service takes a long time to start

  • Risk of high latency - the time it takes for the infrastructure to provision an instance of your function and for the function to initialize might result in significant latency. Moreover, a serverless deployment infrastructure can only react to increases in load. You cannot proactively pre-provision capacity. As a result, your application might initially exhibit high latency when there are sudden, massive spikes in load.

The deployment infrastructure will internally deploy your application using one of the other patterns. Since it runs each function instance in its own container, it will most likely use the Service Instance per Container pattern.

