Posts

Showing posts from 2013

CentOS in a proxy environment

I use CentOS at work, behind a proxy that uses NTLM auth. Some tips: install CNTLM, which acts as a local proxy in front of NTLM proxies :-) Configure it so it listens for localhost connections, in /etc/cntlm.conf:

Username        username
Domain          domain
Password        password
Proxy           upstream.proxy.addr:8080
NoProxy         localhost, 172.18.32.*, 127.0.0.*, 10.*, 192.168.*
Listen          3128
Gateway         yes
Allow           127.0.0.1
Deny            0/0

Edit yum.conf:

echo "proxy=http://127.0.0.1:3128" >> /etc/yum.conf

Edit Maven's settings.xml (/usr/local/apache-maven-3.0.5/conf/settings.xml):

  <proxies>
    <proxy>
      <id>optional</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>127.0.0.1</host>
      <port>3128</port>
      <nonProxyHosts>local.net|some.host.com</nonProxyHosts>
    </proxy>
  </proxies>
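Once CNTLM is listening, most other command-line tools can be pointed at it via environment variables; a minimal sketch (the port matches the Listen 3128 setting above):

```shell
# route CLI tools through the local CNTLM listener on port 3128
export http_proxy=http://127.0.0.1:3128
export https_proxy=http://127.0.0.1:3128
export no_proxy=localhost,127.0.0.1
```

Tools like curl and wget honour these variables, so this covers most things beyond yum and Maven.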

CloudStack 4.0.2 with vSphere and NetScaler integration

Based on a clean install of CentOS-6.4-x86_64-minimal.iso, this step builds the RPMs and shares them via Apache as a repo to install from.

/etc/init.d/iptables stop
yum groupinstall "Development Tools"
yum install unzip createrepo ws-commons-util wget java-1.6.0-openjdk-devel.x86_64 ant ant-jdepend genisoimage mysql mysql-server ws-common-utils MySQL-python tomcat6 httpd.x86_64
wget http://ftp.heanet.ie/mirrors/www.apache.org/dist/maven/maven-3/3.0.5/binaries/apache-maven-3.0.5-bin.tar.gz
tar -zxvf apache-maven-3.0.5-bin.tar.gz
mv apache-maven-3.0.5 /usr/local/
export PATH=/usr/local/apache-maven-3.0.5/bin:$PATH
wget http://www.us.apache.org/dist/cloudstack/4.0.2/apache-cloudstack-4.0.2-src.tar.bz2
bunzip2 apache-cloudstack-4.0.2-src.tar.bz2
tar -xvf apache-cloudstack-4.0.2-src.tar
cd apache-cloudstack-4.0.2-src/deps
wget http://zooi.widodh.nl/cloudstack/build-dep/cloud-iControl.jar
wget http://zooi.widodh.nl/cloudstack/build-dep/cloud-manageontap.jar w
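The "share them via Apache as a repo" part can be sketched like this once the RPMs are built; a hedged outline, where the RPM output path and the repo hostname are assumptions rather than taken from the post:

```shell
# on the build host: publish the RPMs and generate repo metadata
mkdir -p /var/www/html/cloudstack
cp path/to/built/RPMS/x86_64/*.rpm /var/www/html/cloudstack/
createrepo /var/www/html/cloudstack
service httpd start

# on each install target: point yum at the new repo
cat > /etc/yum.repos.d/cloudstack.repo <<'EOF'
[cloudstack]
name=Apache CloudStack 4.0.2 (local build)
baseurl=http://repo.host.addr/cloudstack/
enabled=1
gpgcheck=0
EOF
```

After that, `yum install` on the targets can pull the locally built packages.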

Installing CloudStack on Ubuntu Server

After a fresh install of Ubuntu 12.04, following http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.0-incubating/html-single/Installation_Guide/#management-server-installation-overview works out like this:

echo "deb http://cloudstack.apt-get.eu/ubuntu precise 4.0" > /etc/apt/sources.list.d/cloudstack.list
wget -O - http://cloudstack.apt-get.eu/release.asc | apt-key add -
apt-get update
apt-get install cloud-client-ui
apt-get install mysql-server nfs-kernel-server
cloud-setup-databases cloud:secret --deploy-as=root:password
cloud-setup-management

Now go to http://server:8080/client/ and log in as admin/password. You should read the manual and set up NFS etc. like this:

mkdir -p /export/primary
mkdir -p /export/secondary
echo "/export *(rw,async,no_root_squash)" >> /etc/exports
exportfs -a

Put this into /etc/default/nfs-kernel-server:

LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
RQUOTAD_PORT=875
STATD_
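Pinning the NFS daemons to fixed ports like this is usually done so a firewall can be opened for them; a hedged sketch of matching iptables rules, where the source subnet is an assumption and the list mirrors the pinned ports above plus the standard portmapper/nfsd ports:

```shell
iptables -A INPUT -s 192.168.0.0/24 -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -s 192.168.0.0/24 -p udp --dport 111 -j ACCEPT
iptables -A INPUT -s 192.168.0.0/24 -p tcp --dport 2049 -j ACCEPT
iptables -A INPUT -s 192.168.0.0/24 -p tcp --dport 32803 -j ACCEPT
iptables -A INPUT -s 192.168.0.0/24 -p udp --dport 32769 -j ACCEPT
iptables -A INPUT -s 192.168.0.0/24 -p tcp --dport 892 -j ACCEPT
iptables -A INPUT -s 192.168.0.0/24 -p tcp --dport 875 -j ACCEPT
service nfs-kernel-server restart
```

Restarting nfs-kernel-server is needed for the port pinning to take effect.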
Distributing files across a web farm or cluster

We have hundreds of servers in several locations, and as part of our web content management we need to push content out frequently, sometimes several times an hour or more. To date, we have used a mixture of HTTP downloads and rsync scripts to accomplish this. Now we are testing a new mixture that we hope will scale out.

In our central location we have a large archive with all the files we need to distribute. Our remote datacenters each have a single node to help with distribution. We take the archive, let's pretend it's a FreeBSD ISO file, and we make it available via HTTPS, so we can download it over the internet between our datacenters, not via our MPLS or other expensive transits. Using metalink files, you can also specify the internal source as a lower preference. Then within the datacenter we share the file via torrent, with the single node mentioned above being the seed for the datacenter, and also th
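A metalink file expressing that "prefer the internet HTTPS source, keep the internal source at a lower preference" policy might look roughly like this (Metalink 3.0 syntax; all URLs and file names here are invented for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<metalink version="3.0" xmlns="http://www.metalinker.org/">
  <files>
    <file name="freebsd.iso">
      <resources>
        <!-- preferred: central archive over the internet -->
        <url type="https" preference="100">https://archive.example.com/freebsd.iso</url>
        <!-- fallback: internal source over the expensive transit, lower preference -->
        <url type="https" preference="10">https://archive.internal/freebsd.iso</url>
      </resources>
    </file>
  </files>
</metalink>
```

Metalink-aware downloaders pick the higher-preference source first and fall back automatically.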

NetScaler Nitro API, surge queues and servicegroup members

We just upgraded to 9.3 61.5, and Nitro changed. Citrix call it 'tidying up', but all I can say is that not making your API backwards compatible in a minor release is bad bad bad. So, getting the service group members, their surge queue and other stats is now a multi-step process: poll the config to get the LB names and the service groups bound.

call /nitro/v1/config/lbvserver to get a list of vserver names
call /nitro/v1/config/lbvserver_servicegroupmember_binding/{lbservername} to get the list of members
call /nitro/v1/stat/servicegroupmember?args=servicegroupname:{servicegroupname},serverName:{ip},port:{port} to get the stats

Suddenly a simple call is N times bigger and more complex :-(
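The three-step sequence above, sketched as curl calls; the appliance address and credentials are placeholders, and the vserver/servicegroup names in steps 2 and 3 are invented:

```shell
NS=http://netscaler.example.com
U='X-NITRO-USER: nsroot'
P='X-NITRO-PASS: password'

# 1. list the lb vservers
curl -s -H "$U" -H "$P" "$NS/nitro/v1/config/lbvserver"

# 2. list the servicegroup members bound to one vserver
curl -s -H "$U" -H "$P" "$NS/nitro/v1/config/lbvserver_servicegroupmember_binding/my-lb-vserver"

# 3. pull stats (surge queue etc.) for one member
curl -s -H "$U" -H "$P" \
  "$NS/nitro/v1/stat/servicegroupmember?args=servicegroupname:my-sg,serverName:10.0.0.1,port:80"
```

In practice you loop step 2 over every vserver from step 1, and step 3 over every member from step 2, which is where the N-times blow-up comes from.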

Mobile redirects and user-agent device detection on a NetScaler

This is essential for integrated caching if you do redirects on Apache/nginx based on mobile device detection etc. First create the pattern sets, then bind the patterns; the index ranges are what the expressions below key off:

add policy patset User_Agent_Mobile
add policy patset User_Agent_Desktop

bind policy patset User_Agent_Mobile Blackberry -index 260 -charset ASCII
bind policy patset User_Agent_Mobile iPod -index 200 -charset ASCII
bind policy patset User_Agent_Mobile iPhone -index 220 -charset ASCII
bind policy patset User_Agent_Mobile iPad -index 210 -charset ASCII
bind policy patset User_Agent_Mobile Android -index 250 -charset ASCII

bind policy patset User_Agent_Desktop Linux -index 100 -charset ASCII
bind policy patset User_Agent_Desktop Macintosh -index 120 -charset ASCII
bind policy patset User_Agent_Desktop Windows -index 110 -charset ASCII

add policy expression is_mobile_ua "HTTP.REQ.HEADER(\"User-Agent\").CONTAINS_INDEX(\"User_Agent_Mobile\").BETWEEN(200,299)"
add policy expression is_desktop_ua "HTTP.REQ.HEADER(\"User-Agent\").CONTAINS_INDEX(\"User_Agent_Desktop\").BETWEEN(100,199)"
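With the named expressions in place, a mobile redirect could be wired up with a responder policy along these lines; the target hostname, policy/action names and vserver name are assumptions for illustration:

```
add responder action act_redirect_mobile redirect "\"http://m.example.com\" + HTTP.REQ.URL.PATH_AND_QUERY"
add responder policy pol_redirect_mobile is_mobile_ua act_redirect_mobile
bind lb vserver www_vip -policyName pol_redirect_mobile -priority 100 -gotoPriorityExpression END -type REQUEST
```

Doing the redirect on the NetScaler means the integrated cache never has to store per-device variants served from the origin.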

GSLB on the NetScaler

There are lots (and lots) of articles around GSLB, but none of them really worked for my brain. I recently had to implement GSLB to handle persistence of a Java app between datacenters.

The scenario: we have an internet-facing Java app in a datacenter; it uses JSESSIONID to track sessions. We wanted to run active-active between our two datacenters, and we have a big fat pipe between them.

The solution: LB rules for JSESSIONID persistence, like this: http://blogs.citrix.com/2010/05/06/complete-jsessionid-persistence-with-appexpert/ plus GSLB for site persistence, using connection proxy to make the traffic travel our pipes between DCs.

The problems encountered: we use UltraDNS for managing DNS load balancing and failover between our datacenters. This duplicates what GSLB does, but we didn't want to open UDP 53 to our DC. This is slightly slower than using GSLB, as it has to wait for health checks and polling. GSLB is all about DNS for failover; connection proxy is just for site pers
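A bare-bones shape of GSLB with connection-proxy site persistence, as described above; every site name, IP and domain here is invented for illustration:

```
add gslb site dc1 10.1.1.1
add gslb site dc2 10.2.1.1
add gslb vserver gslb_app HTTP
add gslb service svc_dc1 10.1.2.1 HTTP 80 -sitename dc1 -sitePersistence ConnectionProxy
add gslb service svc_dc2 10.2.2.1 HTTP 80 -sitename dc2 -sitePersistence ConnectionProxy
bind gslb vserver gslb_app -servicename svc_dc1
bind gslb vserver gslb_app -servicename svc_dc2
bind gslb vserver gslb_app -domainName app.example.com -TTL 5
```

With -sitePersistence ConnectionProxy, a request landing on the "wrong" site is proxied across the inter-DC pipe to the site holding the session instead of being redirected.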

How to sync your SSH keys to your server farm / clusters

You need sshpass and ssh-copy-id, then do:

echo "StrictHostKeyChecking no" >> .ssh/config
echo "UserKnownHostsFile /dev/null" >> .ssh/config
cat mylistofservers.txt | xargs -P10 -I {} sshpass -p 'mypassword' ssh-copy-id {}
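Before pointing the fan-out at real hosts, the xargs pattern can be smoke-tested with a harmless echo standing in for sshpass/ssh-copy-id (the host names are made up):

```shell
# dry run: echo stands in for sshpass -p ... ssh-copy-id
printf 'host1\nhost2\nhost3\n' > /tmp/mylistofservers.txt
cat /tmp/mylistofservers.txt | xargs -P10 -I {} echo "would copy key to {}" | sort
```

Because -P10 runs up to ten jobs in parallel, the trailing sort keeps the output order deterministic.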
NetScaler and SSL offloading

As many people are aware, you can offload SSL on a NetScaler. This usually causes some app-level problems, as your app could have logic to check that access was via https, and redirect if not. Or your app may have logic to insert the protocol into links, and as access to your app (from its point of view) is not https, the links may now have the incorrect protocol type.

You can work around this using a technique I call ssl-intercept, where SSL offloading is performed on the NetScaler, say on a content switch, pointing to an HTTP load balancer which has SSL services bound. What this means is that the client's SSL session terminates on the NetScaler and a new SSL session is made to the backend server, leaving the stream within the NetScaler as HTTP, allowing you to insert headers or make other decisions based on the HTTP content within the SSL session. Your app sees the traffic as SSL, so the problems above are negated. This is particu
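A minimal sketch of that ssl-intercept wiring; every name, IP and certkey here is invented for illustration:

```
add service svc_app_ssl 10.0.0.10 SSL 443
add lb vserver lb_app_http HTTP 0.0.0.0 0
bind lb vserver lb_app_http svc_app_ssl
add cs vserver cs_app SSL 203.0.113.10 443
bind ssl vserver cs_app -certkeyName wildcard_cert
bind cs vserver cs_app -lbvserver lb_app_http
```

Client SSL terminates on cs_app, the decision layer (lb_app_http) sees plain HTTP for header insertion and policy decisions, and svc_app_ssl re-encrypts to the backend.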