Posts

Showing posts from 2021

decoding base64 signed urls in varnish

Fronting imgproxy with Varnish, to honor old base64 signed URLs:

vcl 4.0;

import blob;
import digest;

# Default backend definition. Set this to point to your content server.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_init {
}

sub vcl_recv {
    set req.http.base64part = regsub(req.url, "^/testpath/(.*)\.(.*)$", "\1");
    set req.http.base64ashex = blob.transcode(encoding=HEX, decoding=BASE64URL, encoded=req.http.base64part);
    set req.http.imghash-hex = regsub(req.http.base64ashex, "^(.{0,64})(.*)$", "\1");
    set req.http.imgauth-hex = regsub(req.http.base64ashex, "^(.{0,64})(.{0,32})(.*)$", "\2");
    set req.http.imgparms-hex = regsub(req.http.base64ashex, "^(.{0,96})(.*)$", "\2");
    set req.http.imgparms = blob.transcode(encoding=IDENTITY, decoding=HEX, encoded=req.http.imgparms-hex);
    set req.http.genimgauth = dige

nginx decode base64 url for use with imgproxy

I've been testing imgproxy to handle our image serving needs, and it looks good. Our existing servers are PHP based, and we sign and encode our image URLs. To test imgproxy, I wanted to simply drop it in as a replacement for our servers by sending a % of traffic. There are many ways to do this; Varnish was one, with custom code, but nginx is our go-to web server, so I had to find a way to have nginx sit in front of imgproxy and rewrite the decoded URL. I settled on njs, the cut-down version of JavaScript that plugs into nginx as a loadable module, then used proxy_pass to hand the URI to JavaScript that returns the imgproxy-compatible URL and proxies to it.

A sample URL would be:

http://foo.bar/images/c2lnbmF0dXJlZm9vaHR0cDovL3MzLWV1LXdlc3QtMS5hbWF6b25hd3MuY29tL215YnVja2V0b2ZwaG90b3MvcGhvdG9fb2ZfYV9jYXQ1fHx8MTIwMHgxMjAwfHx8fHx8fHw==.jpeg

It has a sig, a bucket URL, and parameters like image size.

Getting nginx set up, in nginx.conf:

load_module modules/ngx_http_js

running a victoriametrics cluster

I recently had a need to work with metrics and, looking at the landscape of modern tools, went with VictoriaMetrics. After initially using the single-binary version, I went on to set up the cluster version, using two nodes for everything (free nodes in Oracle's free tier!).

Run these binaries on each node (where 10.0.2.41 and 10.0.2.40 are the addresses of the nodes):

./vmstorage-prod -retentionPeriod 5y -storageDataPath /var/lib/victoriametrics
./vminsert-prod -storageNode=10.0.2.41:8400 -storageNode=10.0.2.40:8400 -replicationFactor=2
./vmselect-prod -storageNode=10.0.2.41:8401 -storageNode=10.0.2.40:8401 -replicationFactor=2 -dedup.minScrapeInterval=1ms

Front vmselect and vminsert with nginx:

server {
    listen 443 ssl;
    server_name metrics.foo.bar;
    location /insert/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:8480;
    }
    location /select/ {
        proxy_set_header Host $host;

using vmctl to copy data

I had a need to copy data from a standalone VictoriaMetrics system to a cluster setup. For some reason the command line wasn't intuitive, so I'm making a note here:

./vmctl vm-native --vm-native-src-addr=https://source.to.copy.from:443 --vm-native-src-user user1 --vm-native-src-password password1 --vm-native-dst-addr=https://destination.to.copy.to:443/insert/0/prometheus/api/v1/write --vm-native-filter-match='{db="db1"}'

using vmagent to collect victoriametrics stats

vmagent is part of VictoriaMetrics and is a lightweight Prometheus scraper. Create a basic prometheus.yml file that defines a host to scrape; in this example it's using SSL and basic auth:

global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: vmetrics
    static_configs:
      - targets:
          - host.to.scrape:443
    scheme: https
    basic_auth:
      username: user1
      password: password1

Then run vmagent to use this and send to a VictoriaMetrics node to store the metrics it scrapes:

./vmagent-prod -httpListenAddr 127.0.0.1:8240 -remoteWrite.basicAuth.username user2 -remoteWrite.basicAuth.password 'password2' -remoteWrite.url=https://host.to.send.to:443/api/v1/write -promscrape.config=prometheus.dsch.yml

If using the cluster version, the URL in the command line might be -remoteWrite.url=https://host.to.send.to:443/insert/0/prometheus/api/v1/write