AWStats on Debian with NGINX

AWStats is a free, powerful and featureful tool that generates advanced web, streaming, FTP or mail server statistics, graphically. This log analyzer works as a CGI or from the command line and shows you all the information your log contains, in a few graphical web pages.

Install

apt-get install awstats

On Debian, awstats installs its files in three places:

    • /etc/awstats : contains config files
    • /usr/share/awstats : contains tools and libraries used by awstats
    • /usr/share/doc/awstats : contains docs, tools for building the static html pages, icons and other static files used by html

To get a list of files and locations on your system you can always run:

dpkg -L awstats

Overview

I have some custom needs: there are three domains on this server:

  • domain1.com
  • domain2.com
  • main-domain.com

And I want to have stats for the first two domains. main-domain.com is used as the master domain of the server, with AWStats available at awstats.main-domain.com, instead of having domain1.com/awstats and domain2.com/awstats.

We also want to password-protect the stats, but with different credentials for each vhost.

These steps have been tested on Debian wheezy and jessie.

Formatting Nginx log

Nginx by default writes access logs in the combined format, which AWStats can read:

access_log /path/to/access.log;
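For reference, nginx's predefined combined format corresponds to this log_format definition (it is built into nginx, so you do not need to add it yourself; shown here only so you can see what AWStats will be parsing):

```nginx
log_format combined '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';
```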

Config

The /etc/awstats folder should have:

  • awstats.conf.dist
  • awstats.conf.local

awstats.conf.dist is the main config file, the origin of all the other config files. AWStats also falls back to this file if no other config file exists.

awstats.conf.local is an empty file. It is included by all the other config files, so this is where you put shared rules.

Next I copy the content of awstats.conf.dist into awstats.conf.local, and put only the important rules inside each vhost config, so they are shorter and easier to read. We need to create two additional files:

touch awstats.domain1.com.conf

touch awstats.domain2.com.conf

In each empty config file we copy and edit this:

# Path to your nginx vhost log file
LogFile="/var/log/nginx/access.log"

# Exclude local subnets and some public IPs that we do not want to include in the stats
SkipHosts="REGEX[^192\.168\.1\.] REGEX[^10\.10\.10\.] 11.11.11.11 22.22.22.33"

# Domain of your vhost
SiteDomain="domain1.com"

# Directory where to store the awstats data
DirData="/var/lib/awstats/"

# Other aliases: any other domains/subdomains that should be counted as this site
HostAliases="www.domain1.com"
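The per-vhost files can also be generated with a small shell sketch. The domain and log path below are examples, and in a real setup the output would go to /etc/awstats/awstats.$domain.conf instead of a temp file:

```shell
#!/bin/sh
# Sketch: write a minimal per-vhost awstats config to a scratch file.
domain="domain1.com"
conf="$(mktemp)"
cat > "$conf" <<EOF
LogFile="/var/log/nginx/${domain}.access.log"
SiteDomain="${domain}"
DirData="/var/lib/awstats/"
HostAliases="www.${domain}"
EOF
cat "$conf"
```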

Fine tune the global settings

We need to edit the awstats.conf.local file, and:
1. Disable DNS lookups: DNSLookup=0
2. Remove the LogFile, SiteDomain, DirData and HostAliases directives; they belong in the per-vhost configs.
3. Set the log format to combined: LogFormat=1
4. Enable GeoIP (requires additional steps, see below).
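The tweaks above can be scripted with sed. This sketch applies them to a throwaway copy so the expressions can be checked safely; against a real system the target would be /etc/awstats/awstats.conf.local:

```shell
#!/bin/sh
# Sketch: apply the global tweaks with sed, demonstrated on a scratch file.
conf="$(mktemp)"
printf 'DNSLookup=2\nLogFormat=4\nLogFile="/var/log/x"\nSiteDomain="x"\n' > "$conf"
sed -i \
    -e 's/^DNSLookup=.*/DNSLookup=0/' \
    -e 's/^LogFormat=.*/LogFormat=1/' \
    -e '/^LogFile=/d' \
    -e '/^SiteDomain=/d' "$conf"
cat "$conf"
```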

Configure GeoIP plugin

Install GeoIP to speed up the hostname lookups. This will significantly improve performance, since DNS lookups generally take a long time. The steps below install the GeoIP C library and its Perl bindings.

Next, download the latest GeoIP Perl module and the latest GeoLite country database from http://www.maxmind.com/download/geoip/api/perl/. I usually fetch the file with wget:

wget http://geolite.maxmind.com/download/geoip/api/perl/Geo-IP-1.36.tar.gz

or download it from https://github.com/maxmind/geoip-api-perl

Extract the tarball and, inside the Geo-IP folder, run:

perl Makefile.PL
make
make test
make install

 

Then download the GeoIP C library:

wget http://geolite.maxmind.com/download/geoip/api/c/GeoIP-1.4.5.tar.gz

Extract the file and, inside the GeoIP-1.4.5 folder, run:

./configure
make
make check
make install

Now add this to the awstats config file:

LoadPlugin="geoip GEOIP_STANDARD /usr/share/GeoIP/GeoIP.dat"

Finalization

To calculate the stats, a Perl script is available in /usr/share/doc/awstats/examples. awstats_updateall.pl computes the stats for every available config. Just run:

/usr/share/doc/awstats/examples/awstats_updateall.pl now -awstatsprog=/usr/lib/cgi-bin/awstats.pl

Use a cron job to run the script as often as necessary. If you use logrotate, make sure the script runs just before the logs are rotated.
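For example, a crontab entry (the every-6-hours schedule here is just an illustration; adjust it to your log rotation) could look like:

```
# recompute awstats data every 6 hours
0 */6 * * * /usr/share/doc/awstats/examples/awstats_updateall.pl now -awstatsprog=/usr/lib/cgi-bin/awstats.pl >/dev/null
```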

Setting up nginx report website

This assumes that NGINX is already installed, configured and running. We just need to create a new vhost:

server {
    listen 80;
    server_name awstats.main-domain.com;
    root    /var/www/awstats;

    error_log /var/log/nginx/awstats.main-domain.com.error.log;
    access_log off;
    log_not_found off;

    location ^~ /icon {
        alias /usr/share/awstats/icon/;
    }

    location ~ ^/([a-z0-9-_\.]+)$ {
        return 301 $scheme://awstats.main-domain.com/cgi-bin/awstats.pl?config=$1;
    }

    location ~ ^/cgi-bin/.*\.(cgi|pl|py|rb)$ {
        gzip off;
        include         fastcgi_params;
        fastcgi_pass    127.0.0.1:9000;
        fastcgi_index   cgi-bin.php;
        fastcgi_param   SCRIPT_FILENAME    /etc/nginx/cgi-bin.php;
        fastcgi_param   SCRIPT_NAME        /cgi-bin/cgi-bin.php;
        fastcgi_param   X_SCRIPT_FILENAME  /usr/lib$fastcgi_script_name;
        fastcgi_param   X_SCRIPT_NAME      $fastcgi_script_name;
        fastcgi_param   REMOTE_USER        $remote_user;
    }
}

Create the /etc/nginx/cgi-bin.php file:

<?php
$descriptorspec = array(
    0 => array("pipe", "r"),  // stdin is a pipe that the child will read from
    1 => array("pipe", "w"),  // stdout is a pipe that the child will write to
    2 => array("pipe", "w")   // stderr is a pipe that the child will write to
);

$newenv = $_SERVER;
$newenv["SCRIPT_FILENAME"] = $_SERVER["X_SCRIPT_FILENAME"];
$newenv["SCRIPT_NAME"] = $_SERVER["X_SCRIPT_NAME"];

if (is_executable($_SERVER["X_SCRIPT_FILENAME"])) {
    $process = proc_open($_SERVER["X_SCRIPT_FILENAME"], $descriptorspec, $pipes, NULL, $newenv);
    if (is_resource($process)) {
        fclose($pipes[0]);
        $head = fgets($pipes[1]);
        while (strcmp($head, "\n")) {
            header($head);
            $head = fgets($pipes[1]);
        }
        fpassthru($pipes[1]);
        fclose($pipes[1]);
        fclose($pipes[2]);
        $return_value = proc_close($process);
    } else {
        header("Status: 500 Internal Server Error");
        echo("Internal Server Error");
    }
} else {
    header("Status: 404 Page Not Found");
    echo("Page Not Found");
}
?>

Now I copy the cgi-bin folder from the awstats install location to my awstats vhost folder, /var/www/awstats, and apply the correct permissions.
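The copy-and-permissions step looks roughly like this. The sketch rehearses it on throwaway temp directories so it can be run anywhere; in the real setup the source is Debian's /usr/lib/cgi-bin (where awstats.pl lives) and the target is /var/www/awstats:

```shell
#!/bin/sh
# Rehearse the copy + permissions step on temporary directories.
src="$(mktemp -d)"   # stands in for /usr/lib (parent of cgi-bin)
dst="$(mktemp -d)"   # stands in for /var/www/awstats
mkdir "$src/cgi-bin"
touch "$src/cgi-bin/awstats.pl"
cp -r "$src/cgi-bin" "$dst/"
chmod -R 755 "$dst"
ls "$dst/cgi-bin"
```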

Now if everything is setup correctly you should be able to access your stats at:
http://awstats.main-domain.com/domain1.com
http://awstats.main-domain.com/domain2.com

If for some reason the above links do not work, try:
http://awstats.main-domain.com/cgi-bin/awstats.pl?config=domain1.com

 

Pi vs UDOO MySQL performance test

Create test DB:

mysql -u root -p
mysql> create database test;
exit;

Prepare the database:

sysbench --test=oltp --oltp-table-size=100000 --mysql-db=test --mysql-user=root --mysql-password=mysqlpassword prepare

Run the test:

sysbench --test=oltp --oltp-table-size=100000 --mysql-db=test --mysql-user=root --mysql-password=mysqlpassword --max-time=60 --oltp-read-only=on --max-requests=0 --num-threads=8 run

Pi Output:

OLTP test statistics:
    queries performed:
        read:                            28364
        write:                           0
        other:                           4052
        total:                           32416
    transactions:                        2026   (33.76 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 28364  (472.67 per sec.)
    other operations:                    4052   (67.52 per sec.)

Test execution summary:
    total time:                          60.0080s
    total number of events:              2026
    total time taken by event execution: 59.8767
    per-request statistics:
         min:                                 12.90ms
         avg:                                 29.55ms
         max:                                 76.67ms
         approx.  95 percentile:              50.34ms

Threads fairness:
    events (avg/stddev):           2026.0000/0.00
    execution time (avg/stddev):   59.8767/0.00

UDOO output:

OLTP test statistics:
    queries performed:
        read:                            287532
        write:                           0
        other:                           41076
        total:                           328608
    transactions:                        20538  (342.19 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 287532 (4790.71 per sec.)
    other operations:                    41076  (684.39 per sec.)

Test execution summary:
    total time:                          60.0186s
    total number of events:              20538
    total time taken by event execution: 479.3667
    per-request statistics:
         min:                                  7.26ms
         avg:                                 23.34ms
         max:                                196.47ms
         approx.  95 percentile:              32.59ms

Threads fairness:
    events (avg/stddev):           2567.2500/50.73
    execution time (avg/stddev):   59.9208/0.01

The important bit is the transactions line:

UDOO: transactions: 20538 (342.19 per sec.)
Pi:   transactions:  2026  (33.76 per sec.)
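A quick sanity check of the speedup ratio, using the throughput figures from the two runs above:

```shell
# 342.19 tps (UDOO) vs 33.76 tps (Pi)
awk 'BEGIN { printf "%.1f\n", 342.19 / 33.76 }'
# prints 10.1
```

So the UDOO handles roughly ten times the read-only OLTP throughput of the Pi.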

Last step – drop the test database:

mysql -u root -p
mysql> drop database test;
exit;

 

Pi vs UDOO quick CPU performance test

First install the software

apt-get install sysbench

Then run on the Pi:

sysbench --test=cpu --cpu-max-prime=2000 run

and on the UDOO:

sysbench --test=cpu --num-threads=4 --cpu-max-prime=2000 run

UDOO output:

Maximum prime number checked in CPU test: 2000

Test execution summary:
    total time:                          20.8079s
    total number of events:              10000
    total time taken by event execution: 166.2889
    per-request statistics:
         min:                                  8.18ms
         avg:                                 16.63ms
         max:                                 90.20ms
         approx.  95 percentile:              28.23ms

Threads fairness:
    events (avg/stddev):           1250.0000/6.71
    execution time (avg/stddev):   20.7861/0.03

PI output:

Maximum prime number checked in CPU test: 2000

Test execution summary:
    total time:                          48.0722s
    total number of events:              10000
    total time taken by event execution: 192.0741
    per-request statistics:
         min:                                  4.68ms
         avg:                                 19.21ms
         max:                                 71.20ms
         approx.  95 percentile:              44.05ms

Threads fairness:
    events (avg/stddev):           2500.0000/0.71
    execution time (avg/stddev):   48.0185/0.02
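The same check for the CPU test, comparing total wall-clock time for the 10000 events (totals taken from the two summaries above):

```shell
# 48.0722 s (Pi) vs 20.8079 s (UDOO)
awk 'BEGIN { printf "%.1f\n", 48.0722 / 20.8079 }'
# prints 2.3
```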

SPDY with NGINX on Debian

As per mighty Wikipedia:

SPDY (pronounced speedy) is an open networking protocol developed primarily at Google for transporting web content. SPDY manipulates HTTP traffic, with particular goals of reducing web page load latency and improving web security. SPDY achieves reduced latency through compression, multiplexing, and prioritization although this depends on a combination of network and website deployment conditions.

Prerequisites:

1. NGINX compiled with SPDY support

2. NGINX set up with HTTPS

3. libpcre3 and libpcre3-dev

To install the libraries:

apt-get install libssl-dev libpcre3 libpcre3-dev

 

Then open your nginx site config and find the line:

listen          IP ADDRESS:443 ssl;

and replace it with

listen          IP ADDRESS:443 ssl spdy;

Then, in the location / { } section, add this bit:

		## Let the browser know about spdy ##
		add_header        Alternate-Protocol  443:npn-spdy/2;

and restart your nginx service.

To test it, open Firefox and install the SPDY indicator plugin.

Job Done

Debian: Volume was not properly unmounted. Some data may be corrupt. Please run fsck.

So I’m getting this corruption error message on my Raspberry Pi:

[   48.261326] FAT-fs (mmcblk0p1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.

To fix it, unmount the partition and run a current fsck.fat built from the dosfstools sources:

umount /boot
git clone http://daniel-baumann.ch/git/software/dosfstools.git
cd dosfstools
make
./fsck.fat -a /dev/mmcblk0p1

Get this output:

fsck.fat 3.0.26 (2014-03-07)
0x25: Dirty bit is set. Fs was not properly unmounted and some data may be corrupt.
Automatically removing dirty bit.
Performing changes.
/dev/mmcblk0p1: 14 files, 1232/7161 clusters

And now:

mount /boot