A Load Balancer per Microservice

There should be one load balancer per microservice, which distributes the load among the instances of that microservice. This enables each microservice to distribute load independently, and different load balancer configurations per microservice become possible. Likewise, it is simple to reconfigure the load balancer appropriately when a new version of the microservice is deployed. However, if the load balancer fails, the microservice is no longer available.
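As an illustration, one load balancer per microservice could be expressed in an nginx configuration roughly like the following sketch. The upstream name, hostnames, and ports are hypothetical and would be replaced by the actual instances of the microservice.

```nginx
# Hypothetical example: a dedicated load balancer for one microservice.
# The upstream lists the instances of the "order" microservice; adding or
# removing instances (e.g., when a new version is deployed) only requires
# changing this one block.
upstream order-service {
    server order-1.internal:8080;
    server order-2.internal:8080;
}

server {
    listen 80;

    location / {
        # Distribute requests across the instances listed above
        # (round robin by default).
        proxy_pass http://order-service;
    }
}
```

Because every microservice gets its own such configuration, one service can, for example, use a different balancing strategy or health-check setup than another without any coordination.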

There are different approaches to Load Balancing:

  • The Apache httpd web server supports Load Balancing via the extension mod_proxy_balancer.[1]
  • The web server nginx[2] can likewise be configured to support Load Balancing. Using a web server as load balancer has the advantage that it can also deliver static websites, CSS, and images. Besides, the number of technologies in the system is reduced.
  • HAProxy[3] is a solution for Load Balancing and high availability. It supports not only HTTP, but all TCP-based protocols.
  • Cloud providers frequently offer Load Balancing as well. Amazon, for instance, offers Elastic Load Balancing.[4] It can be combined with auto scaling so that higher loads automatically trigger the start of new instances, and the application thereby scales with the load.
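To make the HAProxy option more concrete, the following is a minimal sketch of an HAProxy configuration balancing a microservice in TCP mode. All names, addresses, and ports are hypothetical.

```haproxy
# Hypothetical HAProxy sketch: balance one microservice's instances.
frontend customer_fe
    bind *:9000
    mode tcp
    default_backend customer_be

backend customer_be
    mode tcp
    balance roundrobin
    # "check" enables periodic health checks, supporting high availability:
    # failed instances are taken out of rotation automatically.
    server customer1 10.0.0.11:9000 check
    server customer2 10.0.0.12:9000 check
```

Since the balancing here happens at the TCP level, the same setup works for non-HTTP protocols as well; HAProxy additionally offers an HTTP mode for HTTP-aware features.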

  • [1] http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html
  • [2] http://nginx.org/en/docs/http/load_balancing.html
  • [3] http://www.haproxy.org/
  • [4] http://aws.amazon.com/de/elasticloadbalancing/