
Deploying DB migrations with confidence

What role does your database play in your CI/CD?
How long does it take for your devs to get a running database?
How long does it take to recover a dev database in case of accidental destruction?
How current are the database snapshots your devs use?
How confident are you with schema updates going to production?

These days DevOps conferences and talks are filled with containerisation, Docker, k8s, auto-scaling, auto-healing, CI/CD, agile, etc. Disappointingly, however, most of them only touch stateless environments, and far too seldom do engineers share their knowledge on running a database in a CI/CD environment & workflow.

In this blog article I will give some info on how we solved it at my current place – which mainly consists of running a Laravel-based web application on a traditional “LAMP” stack.

I’ll be honest, we had a rough start. We used to have a shared AWS RDS for our QA & Staging environments, which was then also used by developers to connect their local workspace to that remote MySQL instance and view the webapp locally with proper, non-seeded data – which sometimes is simply essential to debug and fix certain types of reported bugs.

So, our setup kinda worked, but was super unreliable. It was enough for a dev to accidentally drop the database, or for a botched staging deploy, to suddenly kill the workflow of the whole team. Restoring took over an hour (a mix of a larger-than-your-usual-WordPress database and budget restrictions on dev instances).

Obviously this was super annoying and had to change. So I went over to my friends at the ZATech Slack channel, but I quickly hit a wall – on the contrary, it seems I stepped on some people’s toes. I learned my lesson: never mention “on-premise” near a DevOps engineer (causes hefty allergic reactions)…

Basically the following two statements were made:

  • everything is in the cloud, no on-premise or no local databases
  • DB should be part of the CI/CD

It was difficult for me to agree with the first point: being based in South Africa, there are simply no proper cloud providers nearby – the next hop is AWS London. And anyone who has ever connected their local webapp to a remote MySQL knows how quickly higher latency (>10ms) can make working locally a pita.

While I do agree that the DB should be part of the CI/CD, there is still a huge benefit (especially in efficiency and speed) when developing locally – and also not having to rely on seeded data.

Disappointed by the lack of solutions, I decided to go my own way against all odds, with the support of our CTOs plus a wonderful person in finance who allocated some budget for on-premise hardware (specs for the geeks like me: i7-6700 / 64GB RAM / 4x 256GB SSD @ RAID10 / UPS).

Step 1: create a database service

We will use the database service to actually host the databases. I use Jenkins to run a simple nightly downstream job that mysqldumps the production database (ignoring some larger tables that are not needed), anonymises the data (emails + mobile numbers) and pushes the dump to a predictable location (accessible internally by devs).
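To give an idea of what such a job does, here is a rough sketch – the table names, hosts and anonymisation queries are made up for illustration, the real job obviously uses our own schema:

(nightly-dump.sh)

#!/bin/bash
# rough sketch of the nightly dump + anonymise job (illustrative names/paths only)
set -e

# dump production, skipping the larger tables we do not need
mysqldump --single-transaction --quick \
  --ignore-table=app.audit_logs \
  --ignore-table=app.sessions \
  -h prod-db.internal app | gzip > /tmp/prod.sql.gz

# load it into a scratch schema on dbs and anonymise emails + mobile numbers
zcat /tmp/prod.sql.gz | mysql -h dbs.internal app_anon
mysql -h dbs.internal app_anon <<'SQL'
UPDATE users SET email  = CONCAT('user', id, '@example.com'),
                 mobile = CONCAT('+27000000', LPAD(id, 3, '0'));
SQL

# re-export to the predictable, internally accessible location
mysqldump --single-transaction --quick -h dbs.internal app_anon | gzip > /srv/dumps/latest.sql.gz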

From there, the database service launches three VMs with MySQL running on them (one shared amongst devs, one for experimental test cases / usage, one for our automated builds – see step 2), imports the above dump, and then creates a snapshot of the storage drive. I use VirtualBox as I had extensive experience using it programmatically, but if I were to redo the architecture I would most probably go with libvirt/qemu.
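The snapshots are what make the fast restores possible. With VirtualBox the whole magic boils down to a handful of VBoxManage calls (VM and snapshot names are placeholders). Taken right after the nightly import:

$ VBoxManage controlvm dbs-shared poweroff
$ VBoxManage snapshot dbs-shared take nightly-clean
$ VBoxManage startvm dbs-shared --type headless

Restoring to last night’s state later on:

$ VBoxManage controlvm dbs-shared poweroff
$ VBoxManage snapshot dbs-shared restore nightly-clean
$ VBoxManage startvm dbs-shared --type headless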

I created a small web interface as well:

database services (dbs)

With database services (dbs) the following goals have been achieved:

  • a developer has access to an anonymised production database that is never older than 24hrs
  • the dev can either download the dump and run it on their local machine, or connect directly to “dbs” (database services) – which is especially fast from within the office
  • thanks to the snapshots, should anything happen to the database it is possible to restore last night’s state in less than a minute (!!) – which is much faster than any AWS RDS snapshot restore and does not involve any config changes (i.e. an in-place restore)
  • Staging & QA still use a shared DB in the cloud; however, due to the separation, issues on either side no longer interfere with the whole team

Dbs has been running for quite a while and it solved a good amount of issues. However, we were still getting the occasional botched staging deploy or failed master build, because we only ran a very optimistic/superficial check on database migrations.

This is because we only run artisan migrate (laravel.com/docs/5.4/migrations) against an empty database in our CI builds (for predictability reasons). Meaning, builds would only fail if there was a PHP or SQL syntax error, not if the migration itself was faulty on production data. The easiest way to demonstrate such a failure is adding a unique index on a column – perfectly fine on an empty database, not so much on production where duplicate values may already exist.
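A minimal illustration, assuming a hypothetical users.email column that is clean in an empty schema but contains duplicates in production:

$ mysql -h dbs.internal app -e "SELECT email, COUNT(*) AS c FROM users GROUP BY email HAVING c > 1 LIMIT 5;"

The migration’s equivalent SQL then succeeds on the empty CI database, but blows up against the production snapshot:

$ mysql -h dbs.internal app -e "ALTER TABLE users ADD UNIQUE KEY users_email_unique (email);"
ERROR 1062 (23000): Duplicate entry 'duplicate@example.com' for key 'users_email_unique'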

Step 2: run builds against prod data

The safest way to make sure that your database migrations are sound is to actually run them against production data, as that is what will ultimately happen on a production deploy anyway.

Fortunately we do not need to run every build against a prod snapshot, as we are only interested in changes within the /database/migrations/ folder.

I created an additional Jenkins job that runs on every PR, and with the help of a little bash + the GitHub API I can check whether a migration was actually part of the code changes – only then will the build proceed further.
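The check itself is nothing fancy – something along these lines (repo, token and PR number come from the Jenkins job environment; the variable names are placeholders):

(check-migrations.sh)

#!/bin/bash
# proceed with the db build only if the PR touches database/migrations/
set -e

changed_files=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
  "https://api.github.com/repos/${GITHUB_REPO}/pulls/${PR_NUMBER}/files?per_page=100" \
  | grep '"filename"' | cut -d'"' -f4)

if ! echo "${changed_files}" | grep -q '^database/migrations/'; then
  echo "no migrations in this PR -- skipping db build"
  exit 0
fi

echo "migrations changed -- running db build against the prod snapshot"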

I am taking advantage of dbs from step 1: thanks to the fast restore capability I can run artisan migrate nearly every minute without the DB losing its original state – which is important for repeatable builds, of course.
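Boiled down, each db build does roughly the following (this is not the actual dbunit.sh, just the idea; VM, snapshot and host names are placeholders):

# roll the build database back to last night's clean snapshot
VBoxManage controlvm dbs-ci poweroff
VBoxManage snapshot dbs-ci restore nightly-clean
VBoxManage startvm dbs-ci --type headless

# wait until MySQL answers again
until mysqladmin -h dbs-ci.internal ping --silent; do sleep 2; done

# run the migrations against the production snapshot and time them
start=$(date +%s)
php artisan migrate --force
echo "migrations took $(( $(date +%s) - start ))s"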

Once done, it reports back the time it took, which is a nifty indicator of whether the db migration is something heavy where an elevated error rate might be expected on the production deploy:

github build statuses

The console output of the job gives a little more indication of what is happening and why the build got triggered:

dbunit.sh

 

Setting up a proper db build pipeline and fully integrating it into our CI achieved the following goals:

  • full confidence in any database migration being introduced
  • full visibility on the duration of database migrations, as a “pre-warning” of potential problems later during the production deploy
  • thanks to “dbs” (i.e. real restorable snapshots) this can be done cheaply and fast (3-minute builds) even for larger databases (>10GB)

 

So I am curious: what problems did you have to solve for your database workflow / environment, and what solutions did you come up with? 🙂

ELK on AWS ElasticSearch + ElasticBeanstalk + Laravel

NewRelic is a fantastic tool to get great insights into your application and the services surrounding it. It collects a massive amount of data and makes it easily accessible. Almost every metric and dashboard they offer is crucial to any DevOps or Cloud Engineer.

Now that Elastic has acquired Packetbeat – which makes its agent essentially similar in functionality to NewRelic’s (i.e. you can now collect not only data from log files, but also system metrics and external-service data via network sniffing) – can the ELK stack, as an open-source alternative, replace NewRelic?

tl;dr: almost 🙂

I already wrote a post back in 2015 when I first got in touch with the ELK stack; this time, however, I will go into a little more detail and offer a full installation guide bringing together the following components:

  • ELK (ElasticSearch, Logstash & Kibana)
  • AWS ElasticSearch Service
  • ElasticBeanstalk (via ebextension)
  • Laravel (exception logs)

Conveniently, Amazon Web Services now offers ElasticSearch as a service, so it is no longer necessary to maintain a self-hosted version on EC2.

1) Create ElasticSearch Domain

The setup is pretty boring, but you might want to do something along the lines of the following screenshots:

  • Set the name of the ElasticSearch instance.
  • Set the ElasticSearch cluster dimensions/size.
  • Set the ElasticSearch storage.

In our setup we will not communicate with ElasticSearch directly; instead, instances will ship their data via filebeat (formerly known as logstash-forwarder) to a Logstash instance. Hence we only need to whitelist the public and internal IP of the Logstash instance (see step 3).

We end up receiving our ElasticSearch endpoint. Remember: AWS ships with Kibana pre-installed – for your convenience.
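If you prefer the CLI over console-clicking, the same can roughly be scripted like this – domain name, sizes, account id and the whitelisted IP are placeholders, and you should double-check the parameters against the current aws-cli docs:

$ aws es create-elasticsearch-domain \
    --domain-name webapplogs \
    --elasticsearch-cluster-config InstanceType=t2.medium.elasticsearch,InstanceCount=2 \
    --ebs-options EBSEnabled=true,VolumeType=gp2,VolumeSize=35 \
    --access-policies '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "AWS": "*" },
        "Action": "es:*",
        "Resource": "arn:aws:es:eu-west-1:123456789012:domain/webapplogs/*",
        "Condition": { "IpAddress": { "aws:SourceIp": ["XXX.XXX.XXX.XXX"] } }
      }]
    }'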

2) Create SSL certificate

We will need an SSL certificate to establish a secure and authenticated connection between the agent/instance and Logstash. This might not be needed if you are running everything within the same VPC, though.
The next few steps get very surreal… but trust me, it works. Please set the correct IP of your Logstash instance:

(openssl.cnf)

[ req ]
#default_bits  = 2048
#default_md  = sha256
#default_keyfile  = privkey.pem
distinguished_name = req_distinguished_name
attributes  = req_attributes
req_extensions = v3_req

[ req_distinguished_name ]
countryName   = Country Name (2 letter code)
countryName_min   = 2
countryName_max   = 2
stateOrProvinceName  = State or Province Name (full name)
localityName   = Locality Name (eg, city)
0.organizationName  = Organization Name (eg, company)
organizationalUnitName  = Organizational Unit Name (eg, section)
commonName   = Common Name (eg, fully qualified host name)
commonName_max   = 64
emailAddress   = Email Address
emailAddress_max  = 64

[ req_attributes ]
challengePassword  = A challenge password
challengePassword_min  = 4
challengePassword_max  = 20

[ v3_req ]
subjectAltName=@alt_names
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = CA:true

[alt_names]
IP.1 = XXX.XXX.XXX.XXX

And then do the following steps:

 

$ sudo mkdir -p /etc/pki/tls/certs
$ sudo mkdir /etc/pki/tls/private
$ sudo openssl req -x509 -nodes -days 3650 -newkey rsa:4096 \
    -keyout /etc/pki/tls/private/logstash.key \
    -out /etc/pki/tls/certs/logstash.crt \
    -config /etc/ssl/openssl.cnf \
    -extensions v3_req

$ sudo chown logstash: /etc/pki/tls/private/logstash.key /etc/pki/tls/certs/logstash.crt
$ sudo chmod 600 /etc/pki/tls/private/logstash.key /etc/pki/tls/certs/logstash.crt

 

This whole custom configuration is necessary so the certificate can be correctly verified by both Logstash and the beats agents. Basically we are creating a self-signed certificate with the IP of Logstash as SAN (Subject Alternative Name – IP).
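To double-check that the IP actually made it into the certificate as SAN:

$ openssl x509 -in /etc/pki/tls/certs/logstash.crt -noout -text | grep -A1 'Subject Alternative Name'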

3) Logstash

Next we will need an EC2 instance that will run Logstash and thus be responsible for receiving logs & metrics from our application servers and passing them on to our ElasticSearch endpoint.
It won’t need a lot of resources, so you can start with a t2.medium and work your way up if needed. Additionally we are going to host an nginx reverse proxy for the Kibana endpoint. This will allow us to “bridge” the auth system of AWS and replace it with our own simple http-auth.
Logstash is a Java application, so you will have to install Java first – if you are on Ubuntu or Debian you can use my java ansible role to do so 🙂
Use something similar to the following as your nginx vhost config:
(nginx-vhost.conf)

 

server {
  listen 80;
  server_name kibana.acme.com;

  proxy_set_header Host $host;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Proto $scheme;

  auth_basic "/dev/null";
  auth_basic_user_file /etc/nginx/htpasswd.conf;
  proxy_set_header Authorization "";

  location /.kibana-4 {
    proxy_pass https://search-webapplogs-xxx.eu-west-1.es.amazonaws.com;
  }

  location ~* ^/(filebeat|topbeat|packetbeat)- {
    proxy_pass https://search-webapplogs-xxx.eu-west-1.es.amazonaws.com;
  }

  location ~ ^/_(aliases|nodes)$ {
    proxy_pass https://search-webapplogs-xxx.eu-west-1.es.amazonaws.com;
  }

  location ~ ^/.*/_search$ {
    proxy_pass https://search-webapplogs-xxx.eu-west-1.es.amazonaws.com;
  }

  location ~ ^/.*/_mapping$ {
    proxy_pass https://search-webapplogs-xxx.eu-west-1.es.amazonaws.com;
  }

  location / {
    proxy_pass https://search-webapplogs-xxx.eu-west-1.es.amazonaws.com/_plugin/kibana/;
  }
}
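
The referenced htpasswd file does not exist yet – create it with apache2-utils (pick your own username):

$ sudo apt-get install apache2-utils
$ sudo htpasswd -c /etc/nginx/htpasswd.conf kibana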

 

Now download and install Logstash:

$ wget https://download.elastic.co/logstash/logstash/packages/debian/logstash_2.1.1-1_all.deb
$ sudo dpkg -i logstash_2.1.1-1_all.deb

The following Logstash config files have to be put under /etc/logstash/conf.d/

$ wget \
    https://raw.githubusercontent.com/elastic/beats/master/topbeat/etc/topbeat.template.json \
    https://raw.githubusercontent.com/elastic/beats/master/packetbeat/etc/packetbeat.template.json \
    https://raw.githubusercontent.com/logstash-plugins/logstash-output-elasticsearch/master/lib/logstash/outputs/elasticsearch/elasticsearch-template.json
$ mv elasticsearch-template.json /etc/logstash/filebeat-template.json
$ sed -i 's/logstash/filebeat/' /etc/logstash/filebeat-template.json

(01-beats-input.conf)

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash.crt"
    ssl_key => "/etc/pki/tls/private/logstash.key"
  }
}

This will accept connections from beats on port 5044 if the SSL certificate matches.

(10-syslog.conf)

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    
    syslog_pri { }
    
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

Simple syslog configuration/grok.

(11-apache.conf)

filter {
  if [type] == "apache" {
    grok {
      match => { "message" => "%{IP:clientip} - - \[%{HTTPDATE:timestamp}\] %{HOSTNAME:domain} \"%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:response:int} %{NUMBER:bytes:int} \"(?:%{URI:referrer}|-)\" %{QS:agent}" }
    }

    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    }

    if [clientip] {
      geoip {
        source => "clientip"
        target => "geoip"
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
      }
      
      mutate {
        convert => [ "[geoip][coordinates]", "float" ]
      }
    }
  }
}

Apache access-log configuration. Will also try to resolve the clientip to a geolocation.

(12-laravel.conf)

filter {
  if [type] == "laravel" {
    multiline {
      pattern => "^\["
      what => "previous"
      negate=> true
    }

    grok {
      match => { "message" => "(?m)\[%{TIMESTAMP_ISO8601:timestamp}\] %{WORD:env}\.%{LOGLEVEL:severity}: %{GREEDYDATA:content}" }
    }

    mutate {
      replace => [ "message", "%{content}" ]
      remove_field => [ "content" ]
    }
  }
}

Multi-line Laravel exception logs parser.

(30-es-output.conf)

output {
  elasticsearch {
    hosts => ["search-webapplogs-xxx.eu-west-1.es.amazonaws.com:80"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    template_overwrite => true
    template => "/etc/logstash/filebeat-template.json"
    template_name => "filebeat"
  }
}

Finally push it to our ElasticSearch endpoint.

Let’s give it a try:

$ sudo /etc/init.d/logstash restart

Manually set index templates for topbeat and packetbeat:

$ curl -XPUT 'http://search-webapplogs-xxx.eu-west-1.es.amazonaws.com/_template/topbeat' -d@topbeat.template.json
$ curl -XPUT 'http://search-webapplogs-xxx.eu-west-1.es.amazonaws.com/_template/packetbeat' -d@packetbeat.template.json
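
To verify that the templates were accepted:

$ curl -s 'http://search-webapplogs-xxx.eu-west-1.es.amazonaws.com/_template/topbeat?pretty'
$ curl -s 'http://search-webapplogs-xxx.eu-west-1.es.amazonaws.com/_template/packetbeat?pretty'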

4) ElasticBeanstalk ebextension

As with my other ebextensions, I like writing the heavy part in pure bash; this also allows me to enable certain ebextensions on a project-by-project basis by setting activator params/envvars.

(12-beats.config)

# beats
#
# Author: Gunter Grodotzki 
# Version: 2016-01-18
#
# install and configure beats
# BEATS: enable
container_commands:
  01-beats:
    command: ".ebextensions/beats.sh"

(beats.sh)

#!/bin/bash
#
# Author: Gunter Grodotzki (gunter@grodotzki.co.za)
# Version: 2016-01-18
#
# install and configure beats

set -e

if [[ "${BEATS}" == "enable" ]]; then

  export HOME="/root"
  export PATH="/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin"

  # lets do everything inside .ebextensions so it will clean itself
  cd .ebextensions

  # set optimized LogFormat
  sed -i '/^\s*LogFormat/d' /etc/httpd/conf/httpd.conf
  sed -i '/^\s*CustomLog/d' /etc/httpd/conf/httpd.conf

  cat <<'EOB' > /etc/httpd/conf.d/10-logstash.conf
SetEnvIf Remote_Addr "::1" dummy
SetEnvIf Remote_Addr "127.0.0.1" dummy
LogFormat "%a - - %t %{Host}i \"%r\" %>s %B \"%{Referer}i\" \"%{User-Agent}i\"" combined
CustomLog "logs/access_log" combined env=!dummy
EOB

  # add bash_history logging
  echo 'PROMPT_COMMAND='"'"'history -a >(tee -a ~/.bash_history | logger -t "$USER[$$]")'"'"'' > /etc/profile.d/logstash.sh

  # add key
  mkdir -p /etc/pki/tls/certs

  cat <<'EOB' > /etc/pki/tls/certs/logstash.crt
ENTER HERE THE CONTENT OF THE SSL CERTIFICATE WE CREATED
EOB

  # install beats
  packages=( filebeat-1.0.1 topbeat-1.0.1 packetbeat-1.0.1 )
  for package in "${packages[@]}"; do
    if ! rpm -qa | grep -qw ${package}; then
      rpm -i ${package}-x86_64.rpm
    fi
  done

  # configure filebeat
  cat <<'EOB' > /etc/filebeat/filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        - "/var/log/secure"
        - "/var/log/messages"
      document_type: syslog
    -
      paths:
        - "/var/log/httpd/access_log"
      document_type: apache
    -
      paths:
        - "/var/app/current/storage/logs/laravel*"
      document_type: laravel
output:
  logstash:
    hosts: ["IP.OF.LOGSTASH:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash.crt"]
EOB

  # configure topbeat
  cat <<'EOB' > /etc/topbeat/topbeat.yml
input:
  period: 10
  procs: [".*"]
  stats:
    system: true
    proc: true
    filesystem: true
output:
  logstash:
    hosts: ["IP.OF.LOGSTASH:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash.crt"]
EOB

  # configure packetbeat
  cat <<'EOB' > /etc/packetbeat/packetbeat.yml
interfaces:
  device: eth0
  type: af_packet
protocols:
  memcache:
    ports: [11211]
  mysql:
    ports: [3306]
  redis:
    ports: [6379]
output:
  logstash:
    hosts: ["IP.OF.LOGSTASH:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash.crt"]
EOB

  # start + enable beats
  /etc/init.d/filebeat restart > /dev/null 2>&1
  /etc/init.d/topbeat restart > /dev/null 2>&1
  /etc/init.d/packetbeat restart > /dev/null 2>&1
  chkconfig filebeat on
  chkconfig topbeat on
  chkconfig packetbeat on

fi

5) Kibana

The first time you visit your Kibana installation in your browser, you will have to add the beats index patterns (filebeat-*, topbeat-* and packetbeat-*).
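If Kibana does not offer those patterns, check whether data is actually arriving before blaming Kibana:

$ curl -s 'http://search-webapplogs-xxx.eu-west-1.es.amazonaws.com/_cat/indices?v' | grep -E 'filebeat|topbeat|packetbeat'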

6) Curation

The way ELK works, data will keep on growing. Mainly because of costs, you might want to throw away older logs.

You can easily do this with curator and a cronjob:

$ sudo apt install python-pip python-dev
$ sudo pip install pyasn1
$ sudo pip install --upgrade ndg-httpsclient
$ sudo pip install elasticsearch-curator

Run at midnight:

$ curator --port 80 --host search-webapplogs-xxx.eu-west-1.es.amazonaws.com delete indices --older-than 35 --time-unit days --timestring '%Y.%m.%d'
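
As a crontab entry that would look roughly like this (adjust the path if curator ended up somewhere other than /usr/local/bin):

0 0 * * * /usr/local/bin/curator --port 80 --host search-webapplogs-xxx.eu-west-1.es.amazonaws.com delete indices --older-than 35 --time-unit days --timestring '%Y.%m.%d' >> /var/log/curator.log 2>&1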

DONE! Phew… wowsers. Creating all those fancy dashboards is out of scope for this post, though. You can try to bootstrap your Kibana with ready-made configurations: elastic/beats-dashboards.

As of now I wasn’t able to get packetbeat working with RDS. And there are still some features missing to fully replace NewRelic (though other features are much better – like actually being able to search your logs) – but I am very keen to see what else might come this year.

Update (2016-02-03):

I actually forgot a few things, which meant geo_point and some parts of topbeat/packetbeat were not working 😉

Laravel Queues with Supervisor on ElasticBeanstalk

Job and/or message queues are an important component of a modern web application. Simple tasks like sending verification emails should always be pushed to a queue instead of being done directly, as these calls are expensive and will make the user wait for the website to finish loading.

In this blog post I will show how to keep a stable queue worker running on an ElasticBeanstalk environment with the help of a watchdog: Supervisor.

First, check out queues.io for a list of queue daemons and of course Laravel 5’s own documentation page about queues, so you know what’s coming up.

You will then most probably come to the conclusion that you need to run the following command for your queue to be actually processed:

$ php artisan queue:listen

Now, I have already seen the weirdest setups, but the most prominent is probably something like this:

$ nohup php artisan queue:listen &

The ampersand at the end causes the call to go into the background, and the preceding nohup makes sure it keeps running even after you exit your shell.
Personally I would always do something like this in a screen, for various reasons – especially convenience.

Anyway, on your server you will want this to run stably, for as long as possible, and restart automatically on crashes or server reboots.
This is especially true on ElasticBeanstalk, Amazon’s poor but unfortunately popular implementation of a “Platform as a Service”:

  • Nothing really has a state – instances can go down and up independently of the application
    • This is especially true when AutoScaling is configured
  • Deploying can crash the queue-listener
  • The server could reboot for various reasons
  • Your queue-listener could crash for various reasons (this happens the most)
    • Application error (PHP exception, for example while working off a malformed payload)
    • SQS is down (yup, it happens!)

To get a grip on this you definitely need some kind of watchdog. You can either go with monit or use Supervisor, which I found easier to configure.

Use the following .ebextension to achieve these goals (abstract only – check out the source 😉 ):

  1. Install Supervisor
  2. Make sure it runs after a reboot
  3. Stop the queue-worker shortly before a new application version goes live
  4. Start the queue-worker shortly after the new application version went live

You will notice that you have to set a new param, SUPERVISE, to “enable” for the script to run. This allows me to switch it on or off per environment, e.g. if a script is causing problems.
Also be aware that this will only work with newer ElasticBeanstalk versions (1.3+).
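
The linked source has all the details (including the deploy hooks that stop/start the worker around a release); purely to illustrate the idea, a heavily stripped-down version of the bash part could look like this – file locations, the worker user and the artisan flags are assumptions, adjust to your setup:

(supervise.sh, simplified sketch)

#!/bin/bash
# install Supervisor and keep one Laravel queue worker alive
# heavily simplified sketch -- see the linked source for the real thing

set -e

if [[ "${SUPERVISE}" == "enable" ]]; then

  # install supervisor if it is not there yet
  command -v supervisord > /dev/null || easy_install supervisor

  # bootstrap a main config once
  if [[ ! -f /etc/supervisord.conf ]]; then
    echo_supervisord_conf > /etc/supervisord.conf
  fi

  # add the program definition for the queue worker (only once)
  if ! grep -q 'program:laravel_queue' /etc/supervisord.conf; then
    cat <<'EOB' >> /etc/supervisord.conf

[program:laravel_queue]
command=php /var/app/current/artisan queue:listen --tries=3
directory=/var/app/current
user=webapp
autostart=true
autorestart=true
EOB
  fi

  # make sure supervisord is running, then reload the config and restart the worker
  supervisorctl status > /dev/null 2>&1 || supervisord -c /etc/supervisord.conf
  supervisorctl reread
  supervisorctl update
  supervisorctl restart laravel_queue

fi

Hook it into the deployment via container_commands, the same way as the beats ebextension shown earlier.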

I almost forgot to mention the following commands (do not run them as root!) that will come in handy.

Display last Worker Output

$ supervisorctl tail -1000 laravel_queue

Display last Worker Errors

$ supervisorctl tail -1000 laravel_queue stderr

Display Worker Status

$ supervisorctl status

Start Worker

$ supervisorctl start laravel_queue

Stop Worker

$ supervisorctl stop laravel_queue

Hunspell spell checking under PHP with enchant

The spell checking that works perfectly on Google Chrome, OpenOffice and Mozilla Firefox is available to you and PHP as well – all thanks to open source software!

The above-mentioned apps use the “Hunspell” library, which can be used directly from PHP without ugly (and insecure) exec/system calls.

I did the following steps on my OS X MBP (10.10 / Yosemite), but they will be very similar on any Linux/Unix system (it might even be easier on Ubuntu or Debian via their package system).
Just make sure you use at least libenchant 1.5.

Compile and Install hunspell 

$ wget http://downloads.sourceforge.net/hunspell/hunspell-1.3.3.tar.gz 
$ tar xvfz hunspell-1.3.3.tar.gz
$ cd hunspell-1.3.3
$ ./configure
$ make
$ sudo make install

Compile and Install libenchant 

$ wget http://www.abisource.com/downloads/enchant/1.6.0/enchant-1.6.0.tar.gz 
$ tar xvfz enchant-1.6.0.tar.gz
$ cd enchant-1.6.0
$ ./autogen.sh
$ ./configure
$ make
$ sudo make install

Compile and Install php-enchant (in this case as shared lib)

(there is currently a bug in the configure script that will not recognize your libenchant version and thus withholds some of the newer features; the patch is here)

$ cd php-5.5.14/ext/enchant/
$ phpize
$ ./configure
$ make
$ sudo make install

Then add extension=enchant.so to your php.ini…

Dictionaries

$ cd Dicts
$ sudo wget https://chromium.googlesource.com/chromium/deps/hunspell_dictionaries/+archive/master.tar.gz
$ sudo tar xvfz master.tar.gz

Sample usage 

Parallel/Asynchronous DNS resolving in PHP

In PHP, one key to a scalable and performant web application is parallelism – whenever and wherever possible, even if you use queues. The most popular use of parallelism in PHP is probably curl_multi_*.
In this post I will show you how to do multiple DNS requests lightning fast with two different approaches / PHP extensions.

In both cases the DNS requests are done asynchronously, meaning even with multiple requests the whole process will only take as long as the longest request takes (in theory).

PHP’s internal DNS functions rely on resolv.conf, and in most cases it is not heavily optimized, defaulting to a rather long timeout of 5 seconds.
So even if you only need single DNS lookups, both extensions might still be interesting, as you can change that behaviour dynamically – which is what PHP is all about, right?

pecl-ares

pecl-ares offers PHP bindings for the c-ares library (affiliated with cURL).
I was happy that Michael Wallner (you might know him from pecl-http) offered to help revive the code, as it has not had a release in four years. So to get it running with the current c-ares version on a modern system, you should have a look at its git.
pecl-ares also allows the usage of callbacks which might be useful for certain scenarios.

Installation (assuming php-fpm)

$ sudo apt-get install libc-ares-dev php5-dev
$ git clone https://git.php.net/repository/pecl/networking/ares.git php-ares
$ cd php-ares
$ phpize
$ ./configure
$ make
$ sudo make install
$ sudo echo "extension=ares.so" > /etc/php5/mods-available/ares.ini
$ sudo php5enmod ares

Usage

It does not yet offer any documentation, but the source code is easy to understand; anyway, here is an example:

<?php

// configure the resolver: 2s timeout, single try, use Google DNS
$ares = ares_init([
    'timeoutms' => 2000,
    'tries'     => 1,
    //'udp_port' => 53,
    //'tcp_port' => 53,
    'servers'   => ['8.8.8.8'],
    'flags'     => ARES_FLAG_NOALIASES | ARES_FLAG_NOSEARCH,
]);

// queue two A-record lookups
$q = [];
$q[] = ares_query($ares, null, 'www.lifeofguenter.de', ARES_T_A);
$q[] = ares_query($ares, null, 'lifeofguenter.de', ARES_T_A);

// event loop: wait on the sockets and let c-ares process the answers
do {
    $n = ares_fds($ares, $r, $w);
    ares_select($r, $w, 100);
    ares_process($ares, $r, $w);
} while ($n);

// collect the results
foreach ($q as $query) {
    var_dump(ares_result($query, $errno, $errstr));
}

ares_destroy($ares);
unset($ares);

php-rdns

php-rdns offers OOP PHP bindings for librdns (by the same guy behind rspamd) and uses libev for event looping. “We” recently developed it for a client of ours and released it as open source. It is highly simplified and some things might not yet be implemented or work correctly, but if you are interested we are always happy to see a pull request. Initial development was done by Alexander Solovets, with later bug-fixing by Eduardo Silva (lead dev/founder of the monkey webserver).

Installation (assuming php-fpm)

$ sudo apt-get install libev-dev php5-dev
$ wget https://github.com/weheartwebsites/php-rdns/releases/download/v0.1.1/rdns-0.1.1.tgz
$ tar xvfz rdns-0.1.1.tgz
$ cd rdns-0.1.1/
$ phpize
$ ./configure
$ make
$ sudo make install
$ sudo echo "extension=rdns.so" >> /etc/php5/mods-available/rdns.ini
$ sudo php5enmod rdns
$ /etc/init.d/php-fpm restart

Usage

(full documentation on GitHub)

<?php

$rdns = new RDNS;
$rdns->addServer('8.8.8.8');

$rdns->addRequest('www.lifeofguenter.de', RDNS_A, 2);
$rdns->addRequest('lifeofguenter.de', RDNS_A, 2);
$replies = $rdns->getReplies();
ksort($replies);

var_dump($replies);
unset($rdns);

You might also be interested in ReactPHP or swoole, which are event-driven solutions to this problem.