GSLB on the NetScaler

There are lots (and lots) of articles about GSLB, but none of them really worked for my brain. I recently had to implement GSLB to handle persistence of a Java app between datacenters.

The scenario: we have an internet-facing Java app in a datacenter, and it uses JSESSIONID cookies to track sessions. We wanted to go active-active between our two datacenters, which have a big fat pipe between them.

The solution:

  1. LB rules for JSESSIONID persistence, as described here: http://blogs.citrix.com/2010/05/06/complete-jsessionid-persistence-with-appexpert/
  2. GSLB for site persistence, using connection proxy so that cross-site traffic travels our pipe between the datacenters.
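For reference, the GSLB side of this looks roughly like the following NetScaler CLI (site names, IPs and the domain are placeholders, not our real values; check the options against your firmware version):

```
# one GSLB site per datacenter (placeholder site IPs)
add gslb site dc1 10.0.1.10
add gslb site dc2 10.0.2.10

# one GSLB service per datacenter vip, with connection-proxy site persistence
add gslb service app-gslb-svc-dc1 10.0.1.100 HTTP 80 -siteName dc1 -sitePersistence ConnectionProxy
add gslb service app-gslb-svc-dc2 10.0.2.100 HTTP 80 -siteName dc2 -sitePersistence ConnectionProxy

# active-active GSLB vserver answering for the app's domain
add gslb vserver app-gslb HTTP
bind gslb vserver app-gslb -serviceName app-gslb-svc-dc1
bind gslb vserver app-gslb -serviceName app-gslb-svc-dc2
bind gslb vserver app-gslb -domainName app.example.com
```

With connection proxy, a client whose session lives in the other site gets proxied across the inter-DC pipe instead of being redirected by DNS.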

The problems encountered:
  1. We use UltraDNS to manage DNS load balancing and failover between our datacenters. This duplicates what GSLB does, but we didn't want to open UDP 53 to our datacenters. It is also slightly slower than native GSLB, since it has to wait for health checks and polling.
  2. GSLB is all about DNS for failover; connection proxy is only for site persistence. We had assumed it would be smarter than that.
  3. Our testing showed that if you hit DC2 and its vip went down, GSLB kept sending you to the downed datacenter, because that's all it does: it withdraws DNS for the failed site and makes you fail over that way. But since modern OSes and web browsers cache DNS, that takes minutes.
The solutions to these problems:
  1. We use backup vips in each DC, pointing at the opposite DC. It's not a tidy config, but it works well.
  2. In DC1, for example, we have a main CS vserver as the landing point for the app, then a second CS vserver which is the landing point for DC2's backup LB config.
  3. To keep the two in sync, we moved our policies to policy labels and use the same label on each CS vserver.
  4. In DC2 we created an LB vserver with a single destination: the CS vserver in DC1.
  5. We had to be very careful here: JSESSIONIDs don't play nicely this way, and you run a big risk of exposing session data. NetScalers try to be smart and re-use back-end connections; Java isn't aware of this, and once the TCP stream is open it assumes all requests on it belong to the same session. So when customer 2 comes along and hits the backup vip, their requests go down the same TCP connection as customer 1's, and into the same Java session. The fix: max requests = 1 and connection keep-alive off.
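The cross-DC backup wiring above can be sketched like this (again, all names and IPs are placeholders; the important parts are -backupVServer, -maxReq 1 and -CKA NO):

```
# DC1: second CS vserver that acts as the landing point for DC2's backup path
add cs vserver dc1-backup-cs HTTP 10.0.1.200 80

# DC2: LB vserver with a single destination -- the backup CS vserver in DC1.
# maxReq=1 plus client keep-alive off forces one request per TCP connection,
# so the NetScaler cannot multiplex two customers' requests (and their
# JSESSIONIDs) onto one reused stream to the Java tier.
add service dc1-backup-svc 10.0.1.200 HTTP 80 -maxReq 1 -CKA NO
add lb vserver dc2-to-dc1-lb HTTP 10.0.2.150 80
bind lb vserver dc2-to-dc1-lb dc1-backup-svc

# DC2: the main LB vserver fails over to the cross-DC path when its vip is down
set lb vserver dc2-main-lb -backupVServer dc2-to-dc1-lb
```

The -maxReq 1 / -CKA NO settings cost some performance on the backup path, but that beats leaking one customer's session to another.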

