Wednesday, May 29, 2013

CentOS in a proxy environment

I use CentOS at work, behind a proxy that uses NTLM auth.

Some tips:


  1. Install CNTLM, which acts as a local proxy that authenticates against upstream NTLM proxies :-) Configure it so it listens for localhost connections:
    • /etc/cntlm.conf
      • Username        username
      • Domain          domain
      • Password        password
      • Proxy           upstream.proxy.addr:8080
      • NoProxy         localhost, 172.18.32.*, 127.0.0.*, 10.*, 192.168.*
      • Listen          3128
      • Gateway yes
      • Allow           127.0.0.1
      • Deny            0/0
  2. Edit yum.conf
    • echo "proxy=http://127.0.0.1:3128" >> /etc/yum.conf
  3. Edit Maven's settings.xml
    • /usr/local/apache-maven-3.0.5/conf/settings.xml
  <proxies>
    <proxy>
      <id>optional</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>127.0.0.1</host>
      <port>3128</port>
      <nonProxyHosts>local.net|some.host.com</nonProxyHosts>
    </proxy>
  </proxies>
  4. Set up git
    • git config --global http.proxy http://127.0.0.1:3128
  5. Set up your shell (wget etc.)
    • http_proxy="http://127.0.0.1:3128"
    • export http_proxy
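
A quick way to check CNTLM is doing its job before pointing yum or Maven at it; this assumes the config above (listener on 127.0.0.1:3128):

    # fetch headers through the local CNTLM listener; a 200/30x response means
    # the upstream NTLM auth in /etc/cntlm.conf is working
    curl -sI -x http://127.0.0.1:3128 http://mirror.centos.org/ | head -n 1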


Tuesday, May 28, 2013

CloudStack 4.0.2 with vSphere and Netscaler integration



Based on a clean install of CentOS-6.4-x86_64-minimal.iso, these steps build the RPMs and share them via Apache as a repo to install from.
---
/etc/init.d/iptables stop
yum groupinstall "Development Tools"
yum install unzip createrepo ws-commons-util wget java-1.6.0-openjdk-devel.x86_64 ant ant-jdepend genisoimage mysql mysql-server ws-common-utils MySQL-python tomcat6 httpd.x86_64
wget http://ftp.heanet.ie/mirrors/www.apache.org/dist/maven/maven-3/3.0.5/binaries/apache-maven-3.0.5-bin.tar.gz
tar -zxvf apache-maven-3.0.5-bin.tar.gz
mv apache-maven-3.0.5 /usr/local/
export PATH=/usr/local/apache-maven-3.0.5/bin:$PATH
wget http://www.us.apache.org/dist/cloudstack/4.0.2/apache-cloudstack-4.0.2-src.tar.bz2
bunzip2 apache-cloudstack-4.0.2-src.tar.bz2
tar -xvf apache-cloudstack-4.0.2-src.tar
cd apache-cloudstack-4.0.2-src/deps
wget http://zooi.widodh.nl/cloudstack/build-dep/cloud-iControl.jar
wget http://zooi.widodh.nl/cloudstack/build-dep/cloud-manageontap.jar
wget http://zooi.widodh.nl/cloudstack/build-dep/vmware-vim.jar
wget http://zooi.widodh.nl/cloudstack/build-dep/vmware-vim25.jar
wget http://zooi.widodh.nl/cloudstack/build-dep/vmware-apputils.jar
wget http://community.citrix.com/download/attachments/37847122/cloud-netscaler-jars.zip
unzip cloud-netscaler-jars.zip
# now we are ready, let's build
./install-non-oss.sh
cd ../vmware-base/
mvn install
cd ..
mvn -D nonoss -P deps
wget http://people.apache.org/~jzb/cloudstack/dist/releases/4.0.2/nonoss.cloud.spec
cp nonoss.cloud.spec cloud.spec
# build RPMs and share them as a repo
./waf rpm
cd artifacts/rpmbuild/RPMS/x86_64
createrepo ./
mkdir /var/www/html/cloudstack
cp -R * /var/www/html/cloudstack/
apachectl start
---

Now the RPMs are built and shared, it's time to set up the actual server.

    vi /etc/yum.repos.d/cloudstack.repo
    # put in the repo info as per http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.2/html/Installation_Guide/configure-package-repository.html#configure-package-repository-rpm (a sample is sketched below)
    yum update
    yum install cloud-server cloud-client mysql-server
    # edit /etc/my.cnf as per the docs
    mysql_secure_installation
    setenforce permissive
    cloud-setup-databases cloud:secretpassword@localhost --deploy-as=root:password
    cloud-setup-management
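
For reference, a repo file pointing at the Apache share built above might look like this; the baseurl host is an assumption, substitute your build box's address:

    [cloudstack]
    name=cloudstack
    # hypothetical build-host address; use the box that ran "apachectl start" above
    baseurl=http://build-host.example.com/cloudstack/
    enabled=1
    gpgcheck=0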


Then open http://ip:8080/client in your browser.

Installing CloudStack on Ubuntu Server

After a fresh install of Ubuntu 12.04,

using http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.0-incubating/html-single/Installation_Guide/#management-server-installation-overview

it works out like this:

                    1. echo "deb http://cloudstack.apt-get.eu/ubuntu precise 4.0" > /etc/apt/sources.list.d/cloudstack.list
                    2. wget -O - http://cloudstack.apt-get.eu/release.asc|apt-key add -
                    3. apt-get update
                    4. apt-get install cloud-client-ui
                    5. apt-get install mysql-server nfs-kernel-server
                    6. cloud-setup-databases cloud:secret --deploy-as=root:password
                    7. cloud-setup-management



Now go to http://server:8080/client/ and log in as admin/password.


You should read the manual, and set up NFS etc. like this:
1. mkdir -p /export/primary
2. mkdir -p /export/secondary
3. echo "/export *(rw,async,no_root_squash)" >> /etc/exports
4. exportfs -a
5. Put this into /etc/default/nfs-kernel-server:
   LOCKD_TCPPORT=32803
   LOCKD_UDPPORT=32769
   MOUNTD_PORT=892
   RQUOTAD_PORT=875
   STATD_PORT=662
   STATD_OUTGOING_PORT=2020
6. /etc/init.d/nfs-kernel-server restart
7. mkdir -p /mnt/primary /mnt/secondary
8. mount -v -t nfs ubuntu:/export/secondary /mnt/secondary
9. mount -v -t nfs ubuntu:/export/primary /mnt/primary
10. /usr/lib/cloud/common/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u http://download.cloud.com/templates/acton/acton-systemvm-02062012.qcow2.bz2 -h kvm -F
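
If the template install fails, check the exports are actually visible and mounted first; a quick sanity check, assuming the hostname ubuntu as above:

    showmount -e ubuntu    # should list /export
    mount | grep /mnt      # /mnt/primary and /mnt/secondary should both appear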
You can run KVM on the box too if you want: https://help.ubuntu.com/community/KVM/Installation

sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils
optionally: sudo modprobe kvm
optionally: sudo adduser `id -un` libvirtd
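
To confirm libvirt is working afterwards, the usual check is:

    virsh -c qemu:///system list --all    # an empty table (no error) means KVM/libvirt are OK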
                        
                        
                        
                        

Tuesday, April 9, 2013


Distributing files across a web farm or cluster

We have hundreds of servers in several locations. As part of our web content management we need to push content out frequently, sometimes several times an hour or more.

To date we have used a mixture of HTTP downloads and rsync scripts to accomplish this. Now we are testing a new mixture that we hope will scale out.

In our central location we have a large archive with all the files we need to distribute. Each remote datacenter has a single node to help with distribution. We take the archive (let's pretend it's a FreeBSD ISO file) and make it available via HTTPS, so we can download it over the internet between our datacenters rather than via our MPLS or other expensive transits. Using metalink files, you can also specify the internal source at a lower preference.

Then, within the datacenter, we share the file via torrent, with the single node mentioned above acting as both the seed and the tracker for that datacenter. Encryption is optional.

This works well for single files, like large tar files; we still need to experiment with many individual small files.

1. Start a tracker in each datacenter (assume bt.local.dc)
  • bttrack --dfile /tmp/dfile --port 85 --reannounce_interval 5
2. Create a torrent file of the archive in the central distribution point (assume bt.master)
  • btmakemetafile http://bt.local.dc:85/announce FreeBSD-8.4-BETA1-i386-dvd1.iso
3. Make a metalink file describing the archive and the torrent too
  • echo -e "external 100 https % https://bt.master.internet.ip \n internal 100 bittorrent % http://bt.master \n internal 10 http http://bt.master" | metalink -d md5 FreeBSD-8.4-BETA1-i386-dvd1.iso | sed 's/<url preference="100" location="internal" type="bittorrent">\(.*\)<\/url>/<url preference="100" location="internal" type="bittorrent">\1.torrent<\/url>/g' > f.metalink
4. Make the archive available over HTTPS, plus the torrent file and the metalink too
  • cp FreeBSD-8.4-BETA1-i386-dvd1.iso /var/www
  • cp f.metalink /var/www
  • cp FreeBSD-8.4-BETA1-i386-dvd1.iso.torrent /var/www
5. On the node that is your tracker in each datacenter, start aria2c to download the metalink; it will then download the torrent and start to seed as it downloads the archive with multi-part HTTPS download
  • aria2c --seed-ratio=0.0 --disable-ipv6 -V -d /var/www http://bt.master/f.metalink
6. On your endpoints start aria2c to download the torrent; they will then automatically download the file in the torrent from the swarm. Set a post-download hook to finish the job (a sketch of nextsteps.sh follows below).
  • aria2c --seed-ratio=0.0 --disable-ipv6 -V -d /var/www --on-bt-download-complete=nextsteps.sh http://bt.local.dc/FreeBSD-8.4-BETA1-i386-dvd1.iso.torrent
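
nextsteps.sh is whatever post-processing your content needs, so the one below is purely hypothetical; aria2c invokes these hooks with three arguments (GID, number of files, path of the first file):

    #!/bin/sh
    # nextsteps.sh -- hypothetical post-download hook for aria2c
    # $1 = GID, $2 = number of files, $3 = path of the first file
    GID="$1"; NUMFILES="$2"; FILE="$3"
    logger "aria2 finished $GID ($NUMFILES files): $FILE"
    # e.g. unpack into the web root -- adjust to your content layout
    # tar -xf "$FILE" -C /var/www/content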



Using magnet links and DHT, this process can probably be simplified, removing the need for a tracker; if I figure it out, I'll post it.

Also, initiating this with lsyncd looks like a good thing to do too.

Monday, March 11, 2013

Netscaler Nitro API, surge queues and servicegroup members

We just upgraded to 9.3 61.5 and Nitro changed. Citrix calls it 'tidying up', but all I can say is that not making your API backwards compatible in a minor release is bad bad bad.

So getting the service group members, their surge queues and other stats is now a multi-step process.

Poll the config to get the LB names and the service groups bound:

1. Call /nitro/v1/config/lbvserver to get a list of vserver names
2. Call /nitro/v1/config/lbvserver_servicegroupmember_binding/{lbvservername} to get a list of members
3. Call /nitro/v1/stat/servicegroupmember?args=servicegroupname:{servicegroupname},serverName:{ip},port:{port}
Suddenly a simple call is N times bigger and more complex :-(
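
Stitched together with curl it looks roughly like this; the host, credentials and object names are assumptions, and you should check the auth headers against your NS version:

    NS=http://netscaler.example.com        # hypothetical NSIP
    AUTH='-H X-NITRO-USER:nsroot -H X-NITRO-PASS:nsroot'
    # 1. list the lb vservers
    curl -s $AUTH "$NS/nitro/v1/config/lbvserver"
    # 2. members bound to one vserver (name taken from step 1)
    curl -s $AUTH "$NS/nitro/v1/config/lbvserver_servicegroupmember_binding/my_lbvserver"
    # 3. per-member stats, including the surge queue (values from step 2)
    curl -s $AUTH "$NS/nitro/v1/stat/servicegroupmember?args=servicegroupname:my_sg,serverName:10.0.0.10,port:8080"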

Mobile redirects and user agent device detection on a Netscaler

This is essential for integrated caching if you do redirects on Apache/nginx based on mobile device etc.




    add policy patset User_Agent_Mobile
    bind policy patset User_Agent_Mobile Blackberry -index 260 -charset ASCII
    bind policy patset User_Agent_Mobile iPod -index 200 -charset ASCII
    bind policy patset User_Agent_Mobile iPhone -index 220 -charset ASCII
    bind policy patset User_Agent_Mobile iPad -index 210 -charset ASCII
    bind policy patset User_Agent_Mobile Android -index 250 -charset ASCII

    add policy patset User_Agent_Desktop
    bind policy patset User_Agent_Desktop Linux -index 100 -charset ASCII
    bind policy patset User_Agent_Desktop Macintosh -index 120 -charset ASCII
    bind policy patset User_Agent_Desktop Windows -index 110 -charset ASCII

    add policy expression is_mobile_ua "HTTP.REQ.HEADER(\"User-Agent\").CONTAINS_INDEX(\"User_Agent_Mobile\").BETWEEN(200,299)"
    add policy expression is_desktop_ua "HTTP.REQ.HEADER(\"User-Agent\").CONTAINS_INDEX(\"User_Agent_Desktop\").BETWEEN(100,199)"



Then you can do policies like


    add cache selector mytest HTTP.REQ.URL is_mobile_ua is_desktop_ua

which will store different versions of the content depending on whether the client is a mobile or a desktop device.
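
If you want the Netscaler itself to issue the mobile redirect rather than Apache/nginx, here is a sketch using the expression above; the m.example.com target and vserver name are hypothetical:

    add responder action redir_mobile redirect "\"http://m.example.com\" + HTTP.REQ.URL.PATH_AND_QUERY"
    add responder policy pol_redir_mobile is_mobile_ua redir_mobile
    bind lb vserver my_http_vip -policyName pol_redir_mobile -priority 100 -type REQUEST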




GSLB on the Netscaler

There are lots (and lots) of articles around GSLB, but none of them really worked for my brain. I recently had to implement GSLB to handle persistence of a Java app between datacenters.

The scenario: we have an internet-facing Java app in a datacenter that uses JSESSIONID cookies to track sessions. We wanted to do active-active between our two datacenters, and we have a big fat pipe between them.

The solution:

1. LB rules for JSESSIONID persistence, like this: http://blogs.citrix.com/2010/05/06/complete-jsessionid-persistence-with-appexpert/
2. GSLB for site persistence, using connection proxy to make the traffic travel our pipes between DCs

The problems encountered:
1. We use UltraDNS for managing DNS load balancing and failover between our datacenters. This duplicates what GSLB does, but we didn't want to open UDP 53 to our DC. It is slightly slower than GSLB, as it has to wait for health checks and polling.
2. GSLB is all about DNS for failover; connection proxy is just for site persistence. We thought it would be smarter.
3. Our testing showed that if you hit DC2 and the VIP went down, GSLB kept sending you to the downed datacenter, as that's all it does. It would withdraw DNS for that site and make you fail over that way, but as modern OSes and web browsers cache DNS, this takes minutes.

The solution to these problems:
1. We use backup VIPs in each DC, pointing to the opposite DC. This is not a tidy config, but it works well.
2. In DC1, for example, we have a main CS as a landing point for the app, then a second CS which is the landing point for DC2's backup LB config.
3. To keep them in sync, we moved our policies to policy labels and used the same label on each CS.
4. In DC2 we created an LB with a single destination: the CS in DC1.
5. We had to be very careful here: JSESSIONs don't play nice this way and you run a big risk of exposing session data. Netscalers try to be smart and re-use connections. Java isn't aware of this, and once the TCP stream is open it assumes everything on it belongs to the same session, so when customer 2 comes along and hits the backup VIP, their traffic goes into the same TCP connection as customer 1's, and the same Java session. So you set max requests to 1 and turn connection keep-alive off (see the sketch below).
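
On the service fronting the cross-DC hop, that last point translates to something like this; the service name is hypothetical and the parameters are from memory, so verify them on your version:

    # one request per backend connection, client keep-alive off
    set service svc_dc1_cs -maxReq 1 -CKA NO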


How to sync your keys to your server farm / clusters


You need sshpass and ssh-copy-id, then do:

echo "StrictHostKeyChecking no" >> .ssh/config

echo "UserKnownHostsFile=/dev/null" >> .ssh/config

cat mylistofservers.txt | xargs -P10 -I {} sshpass -p 'mypassword' ssh-copy-id {}


Saturday, January 5, 2013

Netscaler and SSL offloading

As many people are aware, you can offload SSL on a Netscaler. This usually causes some app-level problems, as your app could have logic to check that access was via HTTPS and redirect if not. Or your app may have logic to insert the protocol into links, and as access to your app (from its point of view) is not HTTPS, the links may now have the wrong protocol.

You can work around this using a technique I call ssl-intercept, where SSL offloading is performed on the Netscaler, say on a content switch, pointing to an HTTP load balancer which has SSL services bound. What this means is that the client's SSL terminates on the Netscaler and a new SSL session is made to the backend server, leaving the stream within the Netscaler as HTTP, allowing you to insert headers or make other decisions based on the HTTP content within the SSL session. Your app sees the traffic as SSL, so the problems above are negated.

This is particularly useful if you are not using source IP and are instead inserting the client IP as a header to your Apache or Tomcat server. Otherwise you would have to do ssl-bridge and use source IP, which is sub-optimal.
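
A rough CLI sketch of the ssl-intercept chain described above, with hypothetical names and addresses:

    # client SSL terminates on the content switch
    add cs vserver cs_ssl SSL 203.0.113.10 443
    # switched to a plain HTTP (non-addressable) lb vserver...
    add lb vserver lb_app HTTP 0.0.0.0 0
    # ...whose bound services are SSL, so traffic is re-encrypted to the backend
    add service svc_app1 10.10.10.11 SSL 443
    bind lb vserver lb_app svc_app1
    bind cs vserver cs_ssl -lbvserver lb_app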

If, like me, you want to achieve SSL offload, not intercept, then there is a trick which can help.

1) Add a header to indicate the Netscaler has done SSL offload.
    The easiest way to do this is to use the Microsoft header used for Outlook Web Access, which inserts the header 'Front-End-Https: On'.

   You simply create an SSL action with OWA support enabled, create a policy with that action, and bind the policy to the CS or LB under its SSL policy tab: http://support.citrix.com/proddocs/topic/netscaler-ssl-93/ns-ssl-config-owa-support-tsk.html
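
   On the CLI that is roughly the following (names hypothetical; ns_true simply means 'always'):

    add ssl action act_owa -OWASupport ENABLED
    add ssl policy pol_owa -rule ns_true -action act_owa
    bind ssl vserver cs_ssl -policyName pol_owa -priority 10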

2) On Tomcat or Apache, use this header to make the server think access is via SSL. My samples below also make Tomcat or Apache accept the client IP header as though it's the real IP.

    In Apache, add this to httpd.conf:


    # treat the request as HTTPS when the Netscaler signals SSL offload
    SetEnvIf Front-End-Https "^On$" HTTPS=on

    # mod_extract_forwarded rewrites the connection IP from a trusted header
    LoadModule extract_forwarded_module modules/mod_custom_header.so

    MEForder refuse,accept
    MEFrefuse all
    MEFaccept 10.10.10.1          # only accept the header from this IP (the Netscaler)
    MEFCustomHeader NS-Client-IP  # the header carrying the real client IP


NS-Client-IP is what we use to send the client IP from the Netscaler as a header; see http://support.citrix.com/article/CTX109555

    In Tomcat, add this to server.xml:


   <Valve
      className="org.apache.catalina.valves.RemoteIpValve"
      remoteIpHeader="X-Forwarded-For"
      protocolHeader="Front-End-Https"
      protocolHeaderHttpsValue="On"
   />

X-Forwarded-For is what we use to send the client IP from the Netscaler as a header for Tomcat.