SysAdmin

ELK on AWS ElasticSearch + ElasticBeanstalk + Laravel

NewRelic is a fantastic tool for getting great insights into your application and the services surrounding it. It collects a massive amount of data and makes it easily accessible. Almost every metric and dashboard they offer is crucial to any DevOps or Cloud Engineer.

Now that Elastic has acquired Packetbeat, which offers functionality essentially similar to NewRelic’s agent (you can now collect not only data from log files, but also system metrics and external-service data via network sniffing), can the ELK stack, as an open source alternative, replace NewRelic?

tl;dr: almost 🙂

I already did a post back in 2015 when I first got in touch with the ELK stack; this time however I will go into a little more detail and offer a full installation guide bringing together the following components:

  • ELK (ElasticSearch, Logstash & Kibana)
  • AWS ElasticSearch Service
  • ElasticBeanstalk (via ebextension)
  • Laravel (exception logs)

Conveniently, Amazon Web Services now offers ElasticSearch as a service, so it is no longer necessary to maintain a self-hosted version on EC2.

1) Create ElasticSearch Domain

The setup is pretty boring, but you will want to do roughly the following (screenshots omitted):

  • Set the name of the ElasticSearch domain.
  • Set the ElasticSearch cluster dimension/size.
  • Set the ElasticSearch storage.

In our setup we will not communicate directly with ElasticSearch; instead, instances will ship data via filebeat (formerly known as logstash-forwarder) to a Logstash instance. Hence we only need to whitelist the public and internal IP of the Logstash instance (see step 2).

We end up receiving our ElasticSearch endpoint. Remember: AWS ships with Kibana pre-installed – for your convenience.
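
If you prefer the command line over clicking through the console, the same domain can be created via the AWS CLI. A rough sketch – domain name, instance sizing and the access policy file are assumptions you will want to adjust:

$ aws es create-elasticsearch-domain \
    --domain-name webapplogs \
    --elasticsearch-cluster-config InstanceType=t2.medium.elasticsearch,InstanceCount=2 \
    --ebs-options EBSEnabled=true,VolumeType=gp2,VolumeSize=35 \
    --access-policies file://es-access-policy.json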

2) Create SSL certificate

We will need an SSL certificate to establish a secure and authenticated connection between the agent/instance and Logstash. This might not be needed if you are running everything within the same VPC, though.
The next few steps may look a bit surreal… but trust me, it works. Make sure to set the correct IP of your Logstash instance:

(openssl.cnf)

[ req ]
#default_bits  = 2048
#default_md  = sha256
#default_keyfile  = privkey.pem
distinguished_name = req_distinguished_name
attributes  = req_attributes
req_extensions = v3_req

[ req_distinguished_name ]
countryName   = Country Name (2 letter code)
countryName_min   = 2
countryName_max   = 2
stateOrProvinceName  = State or Province Name (full name)
localityName   = Locality Name (eg, city)
0.organizationName  = Organization Name (eg, company)
organizationalUnitName  = Organizational Unit Name (eg, section)
commonName   = Common Name (eg, fully qualified host name)
commonName_max   = 64
emailAddress   = Email Address
emailAddress_max  = 64

[ req_attributes ]
challengePassword  = A challenge password
challengePassword_min  = 4
challengePassword_max  = 20

[ v3_req ]
subjectAltName=@alt_names
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = CA:true

[alt_names]
IP.1 = XXX.XXX.XXX.XXX

And then do the following steps:

$ sudo mkdir -p /etc/pki/tls/certs
$ sudo mkdir /etc/pki/tls/private
$ sudo openssl req -x509 -nodes -days 3650 -newkey rsa:4096 \
    -keyout /etc/pki/tls/private/logstash.key \
    -out /etc/pki/tls/certs/logstash.crt \
    -config /etc/ssl/openssl.cnf \
    -extensions v3_req

$ sudo chown logstash: /etc/pki/tls/private/logstash.key /etc/pki/tls/certs/logstash.crt
$ sudo chmod 600 /etc/pki/tls/private/logstash.key /etc/pki/tls/certs/logstash.crt

The whole custom configuration is necessary so the certificate can be correctly verified by both Logstash and the beats agents. Basically we are creating a self-signed certificate with the IP of Logstash as SAN (Subject Alternative Name – IP).
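
To double-check that the SAN actually made it into the certificate, you can inspect it before rolling it out:

$ openssl x509 -in /etc/pki/tls/certs/logstash.crt -noout -text | grep -A1 'Subject Alternative Name'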

3) Logstash

Next we will need an EC2 instance running Logstash, which will be responsible for receiving logs & metrics from our application servers and passing them on to our ElasticSearch endpoint.
It won’t need a lot of resources, so you can start with a t2.medium and work your way up if needed. Additionally we are going to host an nginx reverse proxy for the Kibana endpoint. This allows us to “bridge” the auth system of AWS and replace it with our own simple HTTP auth.
Logstash is a Java application, so you will have to install Java first – if you are on Ubuntu or Debian you can use my java ansible role to do so 🙂
Use something similar to the following as your nginx vhost config:
(nginx-vhost.conf)

 

server {
  listen 80;
  server_name kibana.acme.com;

  proxy_set_header Host $host;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Proto $scheme;

  auth_basic "/dev/null";
  auth_basic_user_file /etc/nginx/htpasswd.conf;
  proxy_set_header Authorization "";

  location /.kibana-4 {
    proxy_pass https://search-webapplogs-xxx.eu-west-1.es.amazonaws.com;
  }

  location ~* ^/(filebeat|topbeat|packetbeat)- {
    proxy_pass https://search-webapplogs-xxx.eu-west-1.es.amazonaws.com;
  }

  location ~ ^/_(aliases|nodes)$ {
    proxy_pass https://search-webapplogs-xxx.eu-west-1.es.amazonaws.com;
  }

  location ~ ^/.*/_search$ {
    proxy_pass https://search-webapplogs-xxx.eu-west-1.es.amazonaws.com;
  }

  location ~ ^/.*/_mapping$ {
    proxy_pass https://search-webapplogs-xxx.eu-west-1.es.amazonaws.com;
  }

  location / {
    proxy_pass https://search-webapplogs-xxx.eu-west-1.es.amazonaws.com/_plugin/kibana/;
  }
}
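
The auth_basic_user_file referenced above still has to be created – htpasswd (from the apache2-utils package on Debian/Ubuntu) does the job; the username is of course up to you:

$ sudo apt-get install apache2-utils
$ sudo htpasswd -c /etc/nginx/htpasswd.conf kibana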

 

Now download and install Logstash:

$ wget https://download.elastic.co/logstash/logstash/packages/debian/logstash_2.1.1-1_all.deb
$ sudo dpkg -i logstash_2.1.1-1_all.deb

The following Logstash config files have to be put under /etc/logstash/conf.d/

$ wget \
    https://raw.githubusercontent.com/elastic/beats/master/topbeat/etc/topbeat.template.json \
    https://raw.githubusercontent.com/elastic/beats/master/packetbeat/etc/packetbeat.template.json \
    https://raw.githubusercontent.com/logstash-plugins/logstash-output-elasticsearch/master/lib/logstash/outputs/elasticsearch/elasticsearch-template.json
$ sudo mv elasticsearch-template.json /etc/logstash/filebeat-template.json
$ sudo sed -i 's/logstash/filebeat/' /etc/logstash/filebeat-template.json

(01-beats-input.conf)

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash.crt"
    ssl_key => "/etc/pki/tls/private/logstash.key"
  }
}

This will accept connections from beats on port 5044 if the SSL certificate matches.
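
Once Logstash is up (see further below), you can verify from any application server that the certificate checks out against port 5044 – replace IP.OF.LOGSTASH accordingly; the output should end with “Verify return code: 0 (ok)”:

$ openssl s_client -connect IP.OF.LOGSTASH:5044 -CAfile /etc/pki/tls/certs/logstash.crt < /dev/null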

(10-syslog.conf)

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:[%{POSINT:syslog_pid}])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    
    syslog_pri { }
    
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

Simple syslog configuration/grok.

(11-apache.conf)

filter {
  if [type] == "apache" {
    grok {
      match => { "message" => "%{IP:clientip} - - [%{HTTPDATE:timestamp}] %{HOSTNAME:domain} "%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response:int} %{NUMBER:bytes:int} "(?:%{URI:referrer}|-)" %{QS:agent}" }
    }

    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    }

    if [clientip] {
      geoip {
        source => "clientip"
        target => "geoip"
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
      }
      
      mutate {
        convert => [ "[geoip][coordinates]", "float" ]
      }
    }
  }
}

Apache access-log configuration. It will also try to resolve the client IP to a geolocation.

(12-laravel.conf)

filter {
  if [type] == "laravel" {
    multiline {
      pattern => "^\["
      what => "previous"
      negate => true
    }

    grok {
      match => { "message" => "(?m)\[%{TIMESTAMP_ISO8601:timestamp}\] %{WORD:env}\.%{LOGLEVEL:severity}: %{GREEDYDATA:content}" }
    }

    mutate {
      replace => [ "message", "%{content}" ]
      remove_field => [ "content" ]
    }
  }
}

A multiline parser for Laravel exception logs.
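
For reference, this is roughly what Laravel writes to storage/logs (paths and the exception are made up) – the first line matches the grok pattern, and the stack-trace lines that follow are what the multiline filter glues onto the previous event:

[2016-01-18 10:12:33] production.ERROR: exception 'ErrorException' with message 'Undefined variable: foo' in /var/app/current/app/Http/Controllers/HomeController.php:42
Stack trace:
#0 /var/app/current/vendor/laravel/framework/src/Illuminate/Routing/Controller.php(256): App\Http\Controllers\HomeController->index()
#1 {main}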

(30-es-output.conf)

output {
  elasticsearch {
    hosts => ["search-webapplogs-xxx.eu-west-1.es.amazonaws.com:80"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    template_overwrite => true
    template => "/etc/logstash/filebeat-template.json"
    template_name => "filebeat"
  }
}

Finally push it to our ElasticSearch endpoint.

Let’s give it a try:

$ sudo /etc/init.d/logstash restart
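
If Logstash refuses to start, a config test usually points at the culprit (the path assumes the 2.x Debian package, which installs to /opt/logstash):

$ sudo /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/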

Manually set index templates for topbeat and packetbeat:

$ curl -XPUT 'http://search-webapplogs-xxx.eu-west-1.es.amazonaws.com/_template/topbeat' -d@topbeat.template.json
$ curl -XPUT 'http://search-webapplogs-xxx.eu-west-1.es.amazonaws.com/_template/packetbeat' -d@packetbeat.template.json

4) ElasticBeanstalk ebextension

As with my other ebextensions, I like writing the heavy lifting in pure bash. This also allows me to enable certain ebextensions on a project-by-project basis by setting activator params/envvars.
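
Flipping the activator is then just a matter of setting an environment variable on the environment, e.g. via the EB CLI from within your project directory:

$ eb setenv BEATS=enable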

(12-beats.config)

# beats
#
# Author: Gunter Grodotzki 
# Version: 2016-01-18
#
# install and configure beats
# BEATS: enable
container_commands:
  01-beats:
    command: ".ebextensions/beats.sh"

(beats.sh)

#!/bin/bash
#
# Author: Gunter Grodotzki (gunter@grodotzki.co.za)
# Version: 2016-01-18
#
# install and configure beats

set -e

if [[ "${BEATS}" == "enable" ]]; then

  export HOME="/root"
  export PATH="/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin"

  # lets do everything inside .ebextensions so it will clean itself
  cd .ebextensions

  # set optimized LogFormat
  sed -i '/^\s*LogFormat/d' /etc/httpd/conf/httpd.conf
  sed -i '/^\s*CustomLog/d' /etc/httpd/conf/httpd.conf

  cat <<'EOB' > /etc/httpd/conf.d/10-logstash.conf
SetEnvIf Remote_Addr "::1" dummy
SetEnvIf Remote_Addr "127.0.0.1" dummy
LogFormat "%a - - %t %{Host}i "%r" %>s %B "%{Referer}i" "%{User-Agent}i"" combined
CustomLog "logs/access_log" combined env=!dummy
EOB

  # add bash_history logging
  echo 'PROMPT_COMMAND='"'"'history -a >(tee -a ~/.bash_history | logger -t "$USER[$$]")'"'"'' > /etc/profile.d/logstash.sh

  # add key
  mkdir -p /etc/pki/tls/certs

  cat <<'EOB' > /etc/pki/tls/certs/logstash.crt
ENTER HERE THE CONTENT OF THE SSL CERTIFICATE WE CREATED
EOB

  # install beats
  packages=( filebeat-1.0.1 topbeat-1.0.1 packetbeat-1.0.1 )
  for package in "${packages[@]}"; do
    if ! rpm -qa | grep -qw ${package}; then
      rpm -i ${package}-x86_64.rpm
    fi
  done

  # configure filebeat
  cat <<'EOB' > /etc/filebeat/filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        - "/var/log/secure"
        - "/var/log/messages"
      document_type: syslog
    -
      paths:
        - "/var/log/httpd/access_log"
      document_type: apache
    -
      paths:
        - "/var/app/current/storage/logs/laravel*"
      document_type: laravel
output:
  logstash:
    hosts: ["IP.OF.LOGSTASH:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash.crt"]
EOB

  # configure topbeat
  cat <<'EOB' > /etc/topbeat/topbeat.yml
input:
  period: 10
  procs: [".*"]
  stats:
    system: true
    proc: true
    filesystem: true
output:
  logstash:
    hosts: ["IP.OF.LOGSTASH:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash.crt"]
EOB

  # configure packetbeat
  cat <<'EOB' > /etc/packetbeat/packetbeat.yml
interfaces:
  device: eth0
  type: af_packet
protocols:
  memcache:
    ports: [11211]
  mysql:
    ports: [3306]
  redis:
    ports: [6379]
output:
  logstash:
    hosts: ["IP.OF.LOGSTASH:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash.crt"]
EOB

  # start + enable beats
  /etc/init.d/filebeat restart > /dev/null 2>&1
  /etc/init.d/topbeat restart > /dev/null 2>&1
  /etc/init.d/packetbeat restart > /dev/null 2>&1
  chkconfig filebeat on
  chkconfig topbeat on
  chkconfig packetbeat on

fi

5) Kibana

The first time you open Kibana in your browser you will have to add the beats index patterns (filebeat-*, topbeat-* and packetbeat-*).

6) Curation

Due to the way ELK works, data will keep growing. Mainly for cost reasons you might want to throw away older logs.

You can easily do this with curator and a cronjob:

$ sudo apt install python-pip python-dev
$ sudo pip install pyasn1
$ sudo pip install --upgrade ndg-httpsclient
$ sudo pip install elasticsearch-curator

Run at midnight:

$ curator --port 80 --host search-webapplogs-xxx.eu-west-1.es.amazonaws.com delete indices --older-than 35 --time-unit days --timestring '%Y.%m.%d'
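
To actually have this run at midnight, a crontab entry along these lines will do – the path to curator is an assumption, check it with `which curator`:

0 0 * * * /usr/local/bin/curator --port 80 --host search-webapplogs-xxx.eu-west-1.es.amazonaws.com delete indices --older-than 35 --time-unit days --timestring '%Y.%m.%d'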

DONE! Phew… wowses… Creating all those fancy dashboards is out of scope here, though. You can try to bootstrap your Kibana with ready-made configurations: elastic/beats-dashboards.

As of now I wasn’t able to get packetbeat working with RDS. And there are still some features missing to fully replace NewRelic (though other features – like actually being able to search through your logs – are much better), but I am very keen to see what this year might still bring.

Update (2016-02-03):

I had actually forgotten a few steps, which meant geo_point and some parts of topbeat/packetbeat were not working 😉

Laravel Queues with Supervisor on ElasticBeanstalk

Job and/or message queues are an important component of a modern web application. Simple tasks like sending verification emails should always be pushed to a queue instead of being executed directly, as these calls are expensive and will make the user wait for the website to finish loading.

In this blog post I will show how to keep a stable queue worker running on an ElasticBeanstalk environment with the help of a watchdog: Supervisor.

First check out queues.io for a list of queue daemons and of course Laravel 5’s own documentation page about queues, so you know what’s coming up.

You will then most probably come to the conclusion that you need to run the following command for your queue to be actually processed:

$ php artisan queue:listen

Now, I have already seen the weirdest setups, but the most prominent is probably something like this:

$ nohup php artisan queue:listen &

The ampersand at the end will cause the call to go into the background, and the preceding nohup will make sure that it will keep running even if you exit your shell.
Personally I would always do something like this in a screen for various reasons – especially for convenience.

Anyway, on your server you will want this to run stably, for as long as possible, and restart automatically on crashes or server reboots.
This is especially true on ElasticBeanstalk, Amazon’s poorly implemented but unfortunately popular take on a “Platform as a Service”:

  • Nothing really has a state – instances can go down and up independently of the application
    • This is especially true when AutoScaling is configured
  • Deploying can crash the queue-listener
  • The server could reboot for various reasons
  • Your queue-listener could crash for various reasons (this happens the most)
    • Application error (PHP exception, for example while working off a malformed payload)
    • SQS is down (yup, it happens!)

To get a grip on this you definitely need some kind of watchdog. You can either go with monit or use Supervisor, which I found easier to configure.

Use the following .ebextension to achieve these steps (abstract, but check out the source 😉 – a rough sketch follows the list below):

  1. Install Supervisor
  2. Make sure it runs after a reboot
  3. Stop the queue worker shortly before a new application version goes live
  4. Start the queue worker shortly after a new application version went live
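
The linked source is the authoritative version; as a rough sketch, the Supervisor program definition it drops on the instance looks something like this (paths, user and options are assumptions for a standard Laravel app on the PHP platform):

$ cat <<'EOF' | sudo tee /etc/supervisor/conf.d/laravel_queue.conf
[program:laravel_queue]
command=php artisan queue:listen --tries=3
directory=/var/app/current
user=webapp
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/laravel_queue.log
stderr_logfile=/var/log/supervisor/laravel_queue_err.log
EOF
$ supervisorctl reread && supervisorctl update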

You will notice that you have to set a new param SUPERVISE to “enable” for the script to run. This allows me to switch it on or off depending on the environment, for example if a script is causing problems.
Also be aware, this will only work with newer ElasticBeanstalk versions (1.3+).

I almost forgot to mention the following commands (do not run them as root!) that will help you along the way.

Display last Worker Output

$ supervisorctl tail -1000 laravel_queue

Display last Worker Errors

$ supervisorctl tail -1000 laravel_queue stderr

Display Worker Status

$ supervisorctl status

Start Worker

$ supervisorctl start laravel_queue

Stop Worker

$ supervisorctl stop laravel_queue

Logstash recipe: Akamai on ELK

One of the perks of working for the new company is the use of cool tools and providers. One of the new providers is Akamai – a state-of-the-art CDN/edge-cache provider – and also the first one to exist.
The cool new tool: Kibana4!

Just a quick introduction: Kibana belongs to the ELK stack (Elasticsearch, Logstash and Kibana) and – as you spotted correctly – comes in last, as it forms the web/user interface on top of the underlying Elasticsearch database. Logstash sits somewhere in between and is a powerful tool for parsing many log formats and injecting them into Elasticsearch. Elasticsearch itself holds the data and offers a search engine/index.

Why do you need ELK? In a multi-server environment you will want your logs centralized somewhere, so that you do not need to log into each box. You might also want some kind of web interface so you can do simple tasks – like filtering all failed cronjobs – without any command-line-fu.
There are other great tools that can achieve this as well, like syslog-ng or Graylog.

Wanna see what you are going to get? Here you go:

BTW, yes, this is only a demo dashboard, meaning a lot of it is probably redundant with Google Analytics – nevertheless it is more exact, as it also captures bots and file requests where no JavaScript is loaded. The possibilities are of course far more extensive.

This recipe will take care of three major points:

  • Actual grok filter to match the logs
  • Fix @timestamp to be parsed directly from log-line (as the logs come in batches and often also not in chronological order)
  • Apply GeoIP (via maxmind) filter to ClientIP so we can create cool looking maps on our dashboard

1) First things first

Currently there are two options for receiving logs from Akamai: via FTP or via email. You will want to receive them via FTP, so I would suggest setting up an FTP server on your ELK host.
Akamai will deliver the log files either gzipped or GPG encrypted – both formats Logstash cannot read natively, so you will need a small script to turn them into plain text.
Be smarter than me and choose an FTP daemon that supports upload scripts, like pure-ftpd or proftpd. It is much easier than using vsftpd + some funky logfile-analyzer-upload-script.
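
With pure-ftpd this can be as simple as pointing pure-uploadscript at a small shell script that unpacks every finished upload – a sketch, assuming gzipped (not GPG-encrypted) delivery into the incoming path used below:

#!/bin/bash
# /usr/local/bin/akamai-unpack.sh – pure-uploadscript passes the uploaded file as $1
set -e

file="$1"

case "${file}" in
  /home/logs/incoming/akamai/*.gz)
    # leave the plain-text log next to it so Logstash can pick it up
    gunzip -f "${file}"
    ;;
esac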

2) Setup Akamai Log Delivery Service (LDS)

  • Log into your Luna Control Center
  • Configure > Log Delivery
  • Select your Object > “Begin Delivery”
  • Make sure you choose “Combined + Cookie + Host Header” as the log format – this will give us the possibility to distinguish between different projects later on in Kibana

My settings look approx. like this:

3) Use the following Logstash configuration

$ sudo nano /etc/logstash/conf.d/11-akamai-access.conf

input {
  file {
    path => "/home/logs/incoming/akamai/access_*"
    exclude => "*.gz"
    type => "akamai-access"
  }
}

filter {
  if [type] == "akamai-access" {
    grok {
      match => { "message" => "%{IP:clientip} - - \[%{HTTPDATE:timestamp}\] %{HOSTNAME:hostname} \"%{WORD:verb} /%{HOSTNAME:origin}%{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:response:int} %{NUMBER:bytes:int} \"(?:%{URI:referrer}|-)\" %{QS:agent} %{QS:cookie}" }
    }

    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }

  if [clientip] {
    geoip {
      source => "clientip"
      target => "geoip"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }

    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}

4) That’s it!

  • Restart/reload logstash
  • Wait a while for the log files to come in (might take some hours)
  • Explore the data and create some nice visuals!

Some final notes: there are some major advantages (well and also disadvantages) when analyzing logs directly from the CDN/EdgeCache:

  1. You will get the actual client IP (you might be able to pass it through your ELB all the way down to your EC2 instances – but that can be a hell of a job)
  2. You will get more accurate data, as in the best scenario your actual webserver will only get hit once a day 😉

One of the disadvantages: you do not get the data in real time (though there might be products for that).

Compile Lighttpd with LibreSSL

As LibreSSL is gaining popularity, you might want to replace your compiled Lighttpd build with one that uses LibreSSL for HTTPS.

Tested on Debian Squeeze, but should work on Wheezy/Ubuntu in a similar way.

Prerequisites

$ sudo apt-get install make gcc libev-dev libpcre3-dev zlib1g-dev libbz2-dev gamin libgamin-dev liblua5.1-0-dev
$ wget \
    http://download.lighttpd.net/lighttpd/releases-1.4.x/lighttpd-1.4.35.tar.gz \
    http://ftp.openbsd.org/pub/OpenBSD/LibreSSL/libressl-2.0.5.tar.gz
$ tar xvfz libressl-2.0.5.tar.gz && tar xvfz lighttpd-1.4.35.tar.gz

Compile & Install LibreSSL

We are installing it into a non-standard path so it won’t interfere with your existing openssl/libssl(-dev).

$ cd libressl-2.0.5
$ ./configure --prefix=/opt/libressl
$ make
$ sudo make install

Verify the LibreSSL Installation

$ /opt/libressl/bin/openssl version
LibreSSL 2.0

Compile Lighttpd with LibreSSL

$ cd ../lighttpd-1.4.35
$ wget https://gist.github.com/lifeofguenter/7ef3fe9e089fcb24baed/raw/316108a350f69d622c17d0801cc429388cf36cef/lighttpd-libressl.patch
$ patch -p1 < lighttpd-libressl.patch
$ ./configure \
    --prefix=/usr \
    --sysconfdir=/etc \
    --localstatedir=/var \
    --with-libev \
    --with-pcre \
    --with-zlib \
    --with-bzip2 \
    --with-fam \
    --with-lua \
    --with-openssl=/opt/libressl
$ make
$ sudo make install

Verify the Lighttpd Installation

$ lighttpd -v
lighttpd/1.4.35 (ssl) - a light and fast webserver
Build-Date: Aug 11 2014 12:54:04

Please have a look at the following URLs for further documentation on configuring Lighttpd + SSL:
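
As a quick starting point, enabling HTTPS in Lighttpd then boils down to something like this – the combined key+cert PEM path is an assumption:

$ cat <<'EOF' | sudo tee -a /etc/lighttpd/lighttpd.conf
$SERVER["socket"] == ":443" {
  ssl.engine  = "enable"
  ssl.pemfile = "/etc/lighttpd/ssl/example.com.pem"
}
EOF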

Killing MySQL Slow Queries with Xcache

For some months now I have been managing a high-traffic image hoster with 10 million page impressions per day, causing high load on the web frontend server and the DB backend server. My budget did not allow me to scale horizontally, so I had to optimize the web application by killing a slow MySQL query with the help of Xcache. Due to the website’s structure I was not able to use Smarty’s caching function, as this would easily generate 2 million files and cause high disk I/O.

Pictures say more than words… so have a look at the screenshot of the MySQL server’s load before and after the optimizations (which went live on 6th October) – it’s like day and night 😉

Our web server uses Lighttpd 1.5 / SVN + PHP-FPM 5.3.3 (guys… spawn-fcgi is deprecated 😉 ) + Xcache (PHP accelerator and varcache) to deliver static files and dynamic pages, which connect to a separate MySQL 5.0 server.

Unfortunately with a load of 5-10 (8 CPU cores) and 60-100% CPU usage on each core (!!) our MySQL Server was pretty much overloaded 😉 .

A big downside of having bottlenecks in your PHP script – usually caused by relying on external resources (like file_get_contents, cURL, massive non-asynchronous DNS lookups, MySQL queries, etc.) – is obviously the much higher execution time. This results in a lot of PHP (or even worse, Apache) processes being spawned or in use. You will easily end up with an overfilled backlog, or in the worst case your web server will start swapping – either way your website will slow down dramatically and you will lose a lot of visitors.

At first I checked the php-fpm.log.slow for scripts with too long execution times, just to make sure that this was not a PHP problem. There were a lot of scripts hanging during mysql_query() – so it was pretty clear where to look next.

Next I took a look at the MySQL slow query log and summarized the queries that appeared most often. I was able to filter out the following query (simplified):

SELECT DISTINCT col1, col2 FROM table WHERE col3 = col4 AND id IN (SELECT id FROM table2 WHERE x = $variable) AND (SELECT id FROM table3 WHERE a = $variable) OR col5 = 1

A query with two sub-SELECTs and a DISTINCT did not sound fast to me – especially not at the frequency it was requested, which was the key factor, as running it on an idle MySQL server did not cause any problems.

Before putting the query into Xcache I checked all conditions and figured out that “… OR col5 = 1” was never true, as no data currently had that value. I decided that if a feature/function based on that condition had not been used in years, it would not be needed in the future anyway, so I removed it.

Now I was finally ready for Xcache. This is only a very simple example of how to cache individual SQL queries like

SELECT * FROM table WHERE name = '$variable'

in your PHP-Script:

if (xcache_isset("prefix_" . md5($variable))) {
    $result = xcache_get("prefix_" . md5($variable));
} else {
    // fetch the rows into an array first – a MySQL result resource
    // cannot be stored in Xcache directly
    $result = array();
    $query  = mysql_query("SELECT * FROM table WHERE name = '$variable'");
    while ($row = mysql_fetch_assoc($query)) {
        $result[] = $row;
    }
    xcache_set("prefix_" . md5($variable), $result, (60 * 60 * 6));
}

So from now on, each SQL query will only be executed once every 6 hours. Remember: we don’t want to fill up our Xcache for no reason, so try to SELECT only the columns we really need. The biggest advantage of Xcache compared to other caching systems (e.g. Smarty cache) is that it has a garbage collector, so you don’t need to worry about zombie cache entries. Just try not to run out of memory, i.e. assign enough memory for your needs in the php.ini under the xcache section.
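
For reference, the relevant bit in the Xcache section of your php.ini (or a conf.d snippet – the path is an assumption) looks something like this; size the var cache to your needs:

$ cat <<'EOF' | sudo tee -a /etc/php5/conf.d/xcache.ini
; variable data cache (what xcache_set/xcache_get use)
xcache.var_size  = 64M
xcache.var_count = 1
EOF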

And set a reasonable time-to-live: not too short, so enough data gets cached and the load goes down, but not too long either, which could cause too high memory usage.

That’s all 🙂

I was able to lower the load and CPU usage of our MySQL server by approx. 850%! How about you, were you able to optimize your website? Show off your awesomeness! I did, by committing to SVN with the following comment 🙂