
nginx - Stateless Loadbalancer

Balance your load on nginx without proxying the request

Today I want to show how you can use nginx to build your own stateless loadbalancer that simply redirects requests to random servers. It supports neither sticky sessions nor proxying: instead of forwarding your request, it answers with an HTTP 302 redirect pointing to a randomly chosen location.

This is practical for a lot of different use cases: either you just want to distribute static content, as you would otherwise do with a Content Delivery Network (CDN), or you have a cluster of servers that is capable of clustering your HTTP sessions (for example using Hazelcast). It also works amazingly well for PHP (or node.js) based mini games that always read their data from the database.

nginx is a small non-blocking, event-driven webserver that handles 10k+ connections with no problem. Its built-in loadbalancing option proxies the request through nginx to the backend endpoint. If you have dynamic servers or you need sticky sessions, that might be what you want. But if you only serve static content, or your backend servers can handle stateless sessions, you might not want to proxy the requests, since every connection would have to be held open until the backend request is processed.
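A minimal sketch of this idea could use nginx's `split_clients` module keyed on `$request_id` to pick a backend pseudo-randomly and then issue the 302 redirect. Note this is my illustration, not the original config: the hostnames are placeholders, and `$request_id` requires a newer nginx (older versions could key on `${remote_addr}${msec}` instead).

```nginx
http {
    # $request_id is unique per request, so hashing it with
    # split_clients approximates a random choice among backends.
    # The hostnames below are placeholders.
    split_clients "${request_id}" $backend {
        33.3%   backend1.example.com;
        33.3%   backend2.example.com;
        *       backend3.example.com;
    }

    server {
        listen 80;

        location / {
            # Redirect instead of proxying: the client fetches the
            # content directly from the backend, so nginx holds no
            # connection open while the backend does its work.
            return 302 $scheme://$backend$request_uri;
        }
    }
}
```

Because the redirect target includes `$request_uri`, the client retries the exact same path and query string against the chosen backend.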


Categories : Other, hazelcast

Hazelcast MapReduce on GPU

April Fools': Make use of massively scalable GPU power

Sorry, I have to admit it was just an April Fools' joke. The interesting fact, by the way, is that when I first came up with the idea it sounded totally implausible, but while writing I realized "hey, that should actually be possible". Maybe not yet for a map-reduce framework, but definitely to back a distributed fractal calculation, which is what I will look into next. If somebody wants to team up on this, I'm fully open for requests. Just don't hesitate to contact me.
While this was a prank, I'm really looking forward to eventually bringing distributed environments like Hazelcast to the GPU - at least when Java 9 features OpenJDK Project Sumatra. A big thanks to the guys from AMD and Rootbeer who started this whole movement!

Over the last decade RAM and CPUs have kept getting faster, but some calculations can still be done faster on GPUs due to the data-parallel nature of the workload. At Hazelcast we try to make distributed calculations easy and fast for everybody.

Having some spare time, I came up with the idea of moving data calculations to the GPU to massively scale them out, and since I created the map-reduce framework in the new Hazelcast 3.2 release, it was just a matter of time to make it work with a GPU.

Disclaimer: Before you read on, I want to make sure you understand that this is neither an official Hazelcast project nor is it likely to become part of the core in the near future - but as always, you may expect the unexpected!


Categories : Java, hazelcast