
Friday, May 28, 2010

Accessing ZFS Snapshots From Windows With "Previous Versions"

ZFS Snapshots

Solaris' ZFS snapshots are a great tool that lets us instantly create a block-level snapshot of a ZFS file system. ZFS uses copy-on-write semantics: newly written data is stored on new blocks, while blocks containing the older data are retained as long as they are referenced (by a snapshot, for example). Since the data has already been allocated by the time a snapshot is taken, both the creation time of a snapshot and the storage it requires (beyond the referenced blocks) are almost negligible. Snapshots can be sent (to a file, over the network, etc.) and received on a destination host (as a file or as a ZFS file system.)
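For example, here's a minimal sketch (pool and snapshot names are hypothetical) of taking a snapshot and streaming it to a file:

# zfs snapshot tank/data@snap1
# zfs send tank/data@snap1 > /backup/tank-data-snap1.zfs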

Using ZFS snapshots on Solaris and OpenSolaris is dead easy and incredibly flexible, and you aren't limited to snapshotting ZFS file systems for Solaris' own use. Since ZFS file systems are easily shared with many protocols such as NFS, CIFS or iSCSI, you can, for example, take a snapshot of a ZFS volume used as a Mac OS X Time Machine disk (what I call a two-dimensional time machine), or of a file system shared with Windows clients via CIFS.

Where Are The Snapshots?

Solaris users probably know that snapshots can be accessed through the special .zfs directory of a ZFS file system. But what if you're accessing the file system remotely, for example via CIFS? A first approach might be making the .zfs directory visible by setting the snapdir property accordingly:

# zfs set snapdir=visible your/zfs/file/system
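Once snapdir is visible, the snapshots appear as plain directories under .zfs/snapshot (the snapshot names below are just examples):

$ ls /your/zfs/file/system/.zfs/snapshot
monday  tuesday  wednesday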

Although this method works perfectly and may even seem natural to Solaris users, it lacks the kind of "user friendliness" the typical Windows user expects. Fortunately, thanks to CIFS, there's another solution which is just as easy and integrates perfectly with the Windows user experience.

Shadow Copy and the Previous Versions Shell Extension

Microsoft introduced a technology called Shadow Copy (a.k.a. Volume Snapshot Service) back in the Windows XP and Windows Server 2003 days. Shadow Copy is similar to ZFS snapshots in that it takes block-level snapshots of a running file system (although it is much more limited.) Microsoft also introduced a shell extension, called Previous Versions, that lets the user browse through the previous versions of a file that's been shadow copied. With this extension, a new tab in Windows Explorer's File Properties window, you can browse and restore a previous version of a modified or deleted file. It seems natural, then, to use this extension to browse through ZFS snapshots too.

CIFS

That's exactly what the CIFS guys thought while developing this wonderful Solaris service. CIFS is the natural choice for sharing a ZFS file system with Windows clients: it implements the SMB protocol and, moreover, it's a wonderfully easy service to configure and maintain. Since CIFS exists, I'm not longing for Samba any more. If you mount a CIFS share on a Previous Versions-enabled copy of Windows, you'll automatically get access to ZFS snapshots without the burden of manually accessing .zfs directories.
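Assuming the CIFS service is installed, a minimal sketch of sharing a ZFS file system over SMB boils down to:

# svcadm enable -r smb/server
# zfs set sharesmb=on your/zfs/file/system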


As you can see in the following screenshot, Solaris ZFS snapshots are visible in the Previous Versions tab:


Conclusions

ZFS, CIFS and Windows Previous Versions are a great team when you share ZFS file systems with your Windows clients. Windows has the most usable interface for accessing your ZFS snapshots: for once, Windows is superior to Mac OS X, whose interface is fancy and usable but pretty basic.

Monday, May 24, 2010

Upgrading OpenSolaris to the Latest Build from the dev Repository

At home I'm still running Solaris Express Community Edition. I was waiting for OpenSolaris 2010.03 to be released before performing a major upgrade of my workstation: months have passed and we're still waiting for it. Since I'm going to replace my SATA drive with a new SAS one, I could even try and go with OpenSolaris, but I would have to upgrade it from the /dev repository since some of my ZFS pools are running versions unsupported by the 2009.06 release.

My earliest OpenSolaris test drives were pretty satisfactory as far as the OS "feeling" is concerned. I really liked 2008.11 and, although it took some time to get accustomed to the IPS repository (mainly a psychological issue), I like the direction it took. Unfortunately SXCE was far more solid than OpenSolaris and, moreover, I was having trouble with some Sun products (such as the Java Enterprise System) which I needed to work.

Since then, and since SXCE's discontinuation, I've been waiting for the next stable release before upgrading my system. This weekend I had some spare time and decided to give the latest OpenSolaris build a try. I downloaded VirtualBox for Mac, installed it and ran the OpenSolaris 2009.06 installation. Once it finished, the first thing I did was disable the splash screen.

Disabling the Splash Screen

To disable the OpenSolaris splash screen during boot, edit the /rpool/boot/grub/menu.lst file and remove the following fragments:

[...snip...]
... ,console=graphics
splashimage = ...
foreground = ...
background = ...

Please pay attention to remove just the ,console=graphics fragment and not the entire kernel line: failing to do so will result in an unbootable system.
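For reference, a typical OpenSolaris menu.lst entry looks more or less like the following (titles and paths will differ on your system); only the trailing ,console=graphics fragment of the kernel$ line has to go:

title OpenSolaris 2009.06
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/opensolaris
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=graphics
module$ /platform/i86pc/$ISADIR/boot_archive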

Upgrading to /dev

Once I modified the menu.lst file I changed the package repository to point to http://pkg.opensolaris.org/dev/:

# pkg set-authority -O http://pkg.opensolaris.org/dev/ opensolaris.org

and run an image update:

# pkg image-update -v

The new packaging system is working far better than I remembered. Unfortunately, it still seems pretty slow, especially when compared to similar packaging systems such as Debian's. After a couple of hours build 134 (snv_134) was installed and I rebooted into the new boot environment.

There's no need to examine change logs to notice that, almost one year after OpenSolaris 2009.06 was released, many things have changed. Although I already considered Nimbus the most beautiful GNOME theme out there, there was room for improvement and the OpenSolaris guys have done a great job.

Minor Problems

Missing xfs Service

During the first boot I noticed an error from the Service Management Facility about a missing service, xfs. This is just a manifestation of bug 11602 and it only affected the first boot after the upgrade.

Xorg Fails to Start

A more serious problem was Xorg failing to start. After the reboot into the new boot environment the system was unable to start the graphical login session and kept dropping down to the console login. Long story short, the /etc/X11/xorg.conf file that was present on the system had some invalid paths in it which were preventing Xorg from starting correctly. Since Xorg usually detects the computer configuration correctly anyway, I just deleted the file and Xorg came up happily.

.ICEAuthority Could Not Be Found (A.K.A.: gdm User Has Changed its Home)

As soon as Xorg started, a popup appeared complaining about a missing .ICEAuthority file. That's another misconfiguration to correct, but a harder one to find: you're running into the following bug:

13534 "Could not update ICEauthority file /.ICEauthority" on bootup of build 130
http://defect.opensolaris.org/bz/show_bug.cgi?id=13534

The gdm user's home directory was reported as / by /etc/passwd. I just changed it to where it belongs and all problems were solved:

# usermod -d /var/lib/gdm gdm

Malfunctioning Terminals

Another problem you might find is the following:

12380 image-update loses /dev/ptmx from /etc/minor_perm
http://defect.opensolaris.org/bz/show_bug.cgi?id=12380

The workaround is the following:
  • Reboot into the working boot environment.
  • Execute the following commands:

$ pfexec beadm mount your-BE /mnt
$ pfexec sh -c "grep ^clone: /etc/minor_perm >> /mnt/etc/minor_perm"
$ pfexec touch /mnt/reconfigure
$ pfexec bootadm update-archive -R /mnt
$ pfexec beadm unmount your-BE

Waiting for the Next Release

So far, OpenSolaris snv_134 is as great a Solaris as ever. I wouldn't mind running it on my workstation right now, though I'll patiently wait a bit longer just in case: I surely prefer running stable versions on some machines. However, OpenSolaris now seems as stable as SXCE was, and I think it's an operating system that deserves the attention of any user who is running other *NIX flavors on their home workstations.

Sunday, May 23, 2010

Inter Wins its Third Champions League


45 years after Helenio Herrera's Unbeatable Team won Inter's second Champions League, yesterday Inter succeeded in bringing Europe's most important trophy home for the third time. Congratulations! Inter fans had waited so long that I sincerely did not think such a moment would come "so soon".

I'm not a soccer fan whatsoever but I do have many good friends who are, and some of them are Inter fans. Yesterday, just as the match ended, I picked up the phone and called a friend of mine who lives in Milan. He was thrilled; and I was, too. Maybe it's just that I miss Italy so much, but one thing I know for sure: yesterday there was one more Inter fan down there, in Madrid.

Wednesday, May 19, 2010

Setting up PostgreSQL on Solaris

PostgreSQL is bundled with Solaris 10 and is available from the primary OpenSolaris IPS repository.

To check if PostgreSQL is installed in your Solaris instance you can use the following command:

$ svcs "*postgres*"
STATE          STIME    FMRI
disabled       Feb_16   svc:/application/database/postgresql:version_81
disabled       16:11:25 svc:/application/database/postgresql:version_82

Install Required Packages

If you don't see any PostgreSQL instance in your Solaris box, then proceed and install the following packages (the list may actually change over time; a sample installation command follows the list):
  • SUNWpostgr
  • SUNWpostgr-contrib
  • SUNWpostgr-devel
  • SUNWpostgr-docs
  • SUNWpostgr-jdbc
  • SUNWpostgr-libs
  • SUNWpostgr-pl
  • SUNWpostgr-server
  • SUNWpostgr-server-data
  • SUNWpostgr-tcl
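On Solaris 10 the installation is a plain pkgadd invocation; this is just a sketch and the media path is hypothetical:

# pkgadd -d /path/to/solaris/media/Product SUNWpostgr SUNWpostgr-libs SUNWpostgr-server SUNWpostgr-server-data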

Check if PostgreSQL SMF Services are Configured

After installation, SMF services should be listed by (the output may depend on the actual PostgreSQL version you installed):

$ svcs "*postgres*"
STATE          STIME    FMRI
disabled       Feb_16   svc:/application/database/postgresql:version_81
disabled       16:11:25 svc:/application/database/postgresql:version_82

On Solaris, PostgreSQL is managed by the SMF framework. If you're curious, you can check the service manifest at /var/svc/manifest/application/database/postgresql.xml and the service methods at /lib/svc/method/postgresql. Many important parameters are stored in the service configuration: if you want to change some of them (such as the PostgreSQL data directory) you must use svccfg to edit the service configuration.
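For example, here's a sketch of relocating the data directory with svccfg; the postgresql/data property name is an assumption, so check the manifest of your PostgreSQL version for the actual property:

# svccfg -s postgresql:version_82 setprop postgresql/data=/your/new/data/directory
# svcadm refresh postgresql:version_82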

PostgreSQL and RBAC

PostgreSQL on Solaris uses RBAC to give users permissions over the database instance. When you install Solaris' PostgreSQL packages, an RBAC role is set up for you:

[/etc/passwd]
postgres:x:90:90:PostgreSQL Reserved UID:/:/usr/bin/pfksh


This user is set up as an RBAC role in the /etc/user_attr file:

[/etc/user_attr]
postgres::::type=role;profiles=Postgres Administration,All

Permissions for the Postgres Administration profile are set up in the /etc/security/exec_attr file:

[/etc/security/exec_attr]
Postgres Administration:solaris:cmd:::/usr/postgres/8.2/bin/initdb:uid=postgres
Postgres Administration:solaris:cmd:::/usr/postgres/8.2/bin/ipcclean:uid=postgres
Postgres Administration:solaris:cmd:::/usr/postgres/8.2/bin/pg_controldata:uid=postgres
Postgres Administration:solaris:cmd:::/usr/postgres/8.2/bin/pg_ctl:uid=postgres
Postgres Administration:solaris:cmd:::/usr/postgres/8.2/bin/pg_resetxlog:uid=postgres
Postgres Administration:solaris:cmd:::/usr/postgres/8.2/bin/postgres:uid=postgres
Postgres Administration:solaris:cmd:::/usr/postgres/8.2/bin/postmaster:uid=postgres

Starting PostgreSQL

You can start PostgreSQL using the following SMF command from an account with the appropriate privileges:

$ su - postgres
$ svcadm enable svc:/application/database/postgresql:version_82

Initial Configuration

By default, PostgreSQL is configured to trust all of the local users. That's not a good practice because any local user may connect to PostgreSQL as a superuser. The first thing to do is to set a password for the postgres user:

$ psql -U postgres
postgres=# alter user postgres with password 'your-password';

Exit psql with the \q command, then edit the /var/postgres/8.2/data/pg_hba.conf file to set an appropriate authentication method, replacing the following line:

[/var/postgres/8.2/data/pg_hba.conf]
local all all trust

with, for example:

[/var/postgres/8.2/data/pg_hba.conf]
local all all md5

Next time you connect, PostgreSQL will ask you for the user's password. Now, let's refresh the PostgreSQL service so that PostgreSQL receives a SIGHUP signal and re-reads the pg_hba.conf file:

$ svcadm refresh svc:/application/database/postgresql:version_82

Done!

You're now running a PostgreSQL instance on your Solaris box, ready to be handed over to your database administrator and put to production use.


Adding Google Analytics Tracking Code to JIRA

Some posts ago I described how you can easily add the Google Analytics Tracking Code to your Confluence instance.

In the case of JIRA it's just as easy, although it might not be intuitive: the quickest place where you can put the Google Analytics code is the "Announcement Banner." As of JIRA 4.1, pasting your Analytics code there won't have any side effect on the way the JIRA user interface appears in your browser. And yes, you will still be able to add an announcement banner text.

Tuesday, May 18, 2010

VirtualBox v. 3.2.0 Has Been Released Adding Support For Mac OS X


Today, Oracle Corporation has released VirtualBox v. 3.2.0 and renamed it Oracle VM VirtualBox.

This is a major version which includes many new technologies such as:
  • In-hypervisor networking.
  • Remote Video Acceleration.
  • Page Fusion.
  • Memory Ballooning.
  • Virtual SAS Controller.
  • Mac OS X guest support (on Apple hardware only.)

And much more. If you want to read the official announcement please follow this link. If you want to read the change log please follow this link.

Installing JIRA on Solaris

Installing Atlassian JIRA on Solaris is pretty easy. To run JIRA in a production environment you'll need:
  • Java SE (JRE or JDK).
  • A supported database.
  • Optionally, an application server.

Solaris 10 is bundled with everything you need while, on OpenSolaris, you'll rely on the packaging system to install the bits you're missing.

Installing Java SE

Solaris 10 is bundled with Java SE 5.0 at /usr/jdk, but you might switch to 6.0 as well. If you want to install a private Java SE 6.0 instance on your Solaris 10 system, just download the shell executable versions from the Sun website and install them:

$ cd /java/installation/dir
$ chmod +x jdk-6u20-solaris-i586.sh
$ ./jdk-6u20-solaris-i586.sh

If you're running an AMD64 system you should also install the x64 bits:

$ cd /java/installation/dir
$ chmod +x jdk-6u20-solaris-x64.sh
$ ./jdk-6u20-solaris-x64.sh

I usually install private Java SE instances in /opt/jdk, replicating the structure of /usr/jdk, which is very helpful, for example, when decoupling shell scripts from specific Java SE instances:

# cd /opt
# mkdir -p jdk/instances
[...install here...]
# cd /opt/jdk
# ln -s instances/jdk1.6.0_20 jdk1.6.0_20
# ln -s jdk1.6.0_20 latest

Setting Up JAVA_HOME

When using JIRA scripts your JAVA_HOME environment variable should be set accordingly. I usually write a small script to prepare the environment for me:

[set-jira-env]
export JAVA_HOME=/opt/jdk/latest
export PATH=$JAVA_HOME/bin:$PATH

and then just source it into my current shell:

$ . ~/bin/set-jira-env

Setting Up a User

This is a point to take seriously into account when running your JIRA instances. Since I usually build a Solaris Zone to run JIRA in, I sometimes run JIRA as the root user. Anyway, if you need to create a user, just run:

# useradd -d /export/home/jira -g staff -m -k /etc/skel -s /bin/bash jira

Please note that Solaris 10 uses the /export/home directory as the root of local user home directories. You can also use Solaris' automounter to map user homes in /export/home onto /home. Ensure that the /etc/auto_master file contains the following line:

/home  auto_home  -nobrowse

Then edit the /etc/auto_home file as in the following example:

*  -fstype=lofs  :/export/home/&

Ensure that the autofs service is running:

$ svcs \*autofs\*
STATE          STIME    FMRI
online         Feb_16   svc:/system/filesystem/autofs:default

If it's not, enable it:

# svcadm enable svc:/system/filesystem/autofs:default

After creating a user, you can just change its home directory and the automounter will mount its home into /home:

# usermod -d /home/jira jira

Setting Up a Project

Solaris has excellent resource management facilities such as Solaris projects. If you want to finely tune the resources you're assigning to your JIRA instance, or to the Solaris Zone where your instance will run, you can read this blog post.

Setting Up PostgreSQL

Solaris 10 comes with a supported instance of the PostgreSQL database which is, moreover, one of Atlassian's favorite databases. Solaris, then, provides out of the box all of the pieces you need to run your JIRA instances.

To check if it's enabled just run:

# svcs "*postgresql*"
STATE          STIME    FMRI
disabled       abr_23   svc:/application/database/postgresql_83:default_32bit
disabled       abr_23   svc:/application/database/postgresql:version_82
disabled       abr_23   svc:/application/database/postgresql:version_82_64bit
disabled       abr_23   svc:/application/database/postgresql:version_81
online         abr_29   svc:/application/database/postgresql_83:default_64bit

In this case, the PostgreSQL 8.3 64-bit instance is active. If it were not, you could just enable it using the following command:

# svcadm enable svc:/application/database/postgresql_83:default_64bit

This is just the beginning, though. To make the initial configuration for your PostgreSQL instance on Solaris, please read this other post.

Installing JIRA

Please take into account that you'll need GNU tar to unpack the standalone JIRA distribution. GNU tar isn't always bundled with a Solaris 10 instance, while it is in OpenSolaris/Nevada. If it is, it should be installed as /usr/sfw/bin/gtar. If your Solaris 10 instance has no GNU tar and you would like to install it, you can grab it, for example, from the Solaris Companion CD.

Since I don't like having to rely on GNU tar, I usually decompress the GNU tar file once and regenerate a pax archive to store for later use. To create a pax file including the contents of the current directory you can run the following command:

$ pax -w -f your-pax-file.pax .

To read and extract the content of a pax file you can run:

$ pax -r -f your-pax-file
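Putting it all together, the round trip looks like this (the distribution file name is just an example):

$ /usr/sfw/bin/gtar xzf atlassian-jira-enterprise-4.1-standalone.tar.gz
$ cd atlassian-jira-enterprise-4.1-standalone
$ pax -w -f ../atlassian-jira-enterprise-4.1-standalone.pax .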

You can install JIRA in a directory of your choice. I usually install it under /opt/atlassian.

Create a JIRA Home Directory

JIRA will store its files in a directory you should provide. Let's say you'll prepare the /var/atlassian/jira directory as the home directory for JIRA:

# mkdir -p /var/atlassian/jira

If you can, consider creating a ZFS file system instead of a plain old directory: ZFS provides you with powerful tools in case you want, for example, to compress your file system at runtime, to take a snapshot of it, or to back it up and restore it:

# zfs create your-pool/jira-home
# zfs set mountpoint=[mount-point] your-pool/jira-home
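For instance, enabling on-the-fly compression and taking a pre-upgrade snapshot are one-liners (names are hypothetical):

# zfs set compression=on your-pool/jira-home
# zfs snapshot your-pool/jira-home@pre-upgrade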

Setting Your JIRA Home Directory

The jira-application.properties file is JIRA's main configuration file. There you'll find the jira.home property, which must point to the JIRA home directory you just prepared:

[jira-application.properties]
[...snip...]
jira.home = /var/atlassian/jira
[...snip...]

Creating a Database Schema and a User for JIRA

The last thing you've got to do is create a database user and a schema for JIRA to store its data in. On Solaris, you can just use psql. The default postgres user comes with no password on a vanilla Solaris 10 installation: please consider changing it as soon as you start using your PostgreSQL database.

# psql -U postgres
postgres=# create user jirauser password 'jira-password';
postgres=# create database jiradb ENCODING 'UTF8' OWNER jirauser;
postgres=# grant all on database jiradb to jirauser;

If you don't remember, you can exit psql with the \q command. ;)

Configuring Your Database in JIRA

To tell JIRA that it must use your newly created PostgreSQL database, you have to open the conf/server.xml file and change the following parameters:

[server.xml]
<Context path="" docBase="${catalina.home}/atlassian-jira" reloadable="false">
<Resource name="jdbc/JiraDS" auth="Container" type="javax.sql.DataSource"
username="[enter db username]"
password="[enter db password]"
driverClassName="org.postgresql.Driver"
url="jdbc:postgresql://host:port/database"
[ delete the minEvictableIdleTimeMillis and timeBetweenEvictionRunsMillis params here ]
/>

The last thing to do is configuring the entity engine by modifying the atlassian-jira/WEB-INF/classes/entityengine.xml file:

[entityengine.xml]
<datasource name="defaultDS" field-type-name="postgres72"
schema-name="public"
helper-class="org.ofbiz.core.entity.GenericHelperDAO"
check-on-start="true"
use-foreign-keys="false"
use-foreign-key-indices="false"
check-fks-on-start="false"
check-fk-indices-on-start="false"
add-missing-on-start="true"
check-indices-on-start="true">

Start JIRA

You can now happily start JIRA by issuing:

# ./bin/startup.sh

from the JIRA installation directory.

Next Steps

The next step will typically be configuring JIRA as a Solaris SMF service.

Enjoy JIRA!



Thursday, May 13, 2010

Filtering Subversion Commits Using a Post Commit Hook

At the end of a successful commit, Subversion invokes a post-commit hook, if it exists. The post-commit hook is an executable file that must be named $SVNREPO/hooks/post-commit. The hook is passed two parameters:

  1. The repository affected by the commit operation.
  2. The committed revision number.

With these parameters you can find out the changes that affected the repository using the svnlook command:

$ svnlook changed -r [revision] [repository]
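The output is a list of changed paths, each prefixed by a status code (the paths below are only examples):

U   trunk/src/main/webapp/WEB-INF/web.xml
A   trunk/docs/installation.txt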

If your post-commit hook is a shell script, you can just use:

[...snip...]
svnlook changed -r "$2" "$1"
[...snip...]

Unless you can control what the $PATH environment variable will be at the time of the hook's execution, be sure to use full command paths in your scripts to avoid path-related errors.

Example

If you want to filter a file name, for example web.xml, out of svnlook's output, you can use the following syntax (please note that it uses Solaris-specific commands):

REPOS="$1"
REV="$2"

MODIFICATIONS=$(/opt/csw/bin/svnlook changed -r "$REV" "$REPOS")

# Iterate over the changed paths, one per line.
echo "$MODIFICATIONS" | while read i ; do
  echo "$i" | /usr/xpg4/bin/grep -q "web.xml$"
  if [ $? -eq 0 ] ; then
    # The $CHANGES variable of this example will contain
    # the list of the To: addresses for the current email.
    echo "$i" | mailx -s "Web Config files have been modified" $CHANGES
  fi
done

Solaris Specific Syntax

The -q option of grep is supported by the XPG4 version of the command, which is bundled with Solaris and installed by default in the /usr/xpg4/bin directory.

Note about sending an email on Solaris

To send an email on a UNIX system without worrying about the specific infrastructure configuration, you should use a command that relies on the local SMTP server instance, if available. Such a program is mailx. Solaris is bundled with a Sendmail instance and with the mailx program. When you send an email with mailx, it internally invokes the local sendmail, which must be properly configured in order to relay the message to its destination.

Wednesday, May 12, 2010

HTTP Compression: With Safari, Compress Just Text

As I've outlined in another post, I've been configuring a couple of internal Apache HTTP Servers to use HTTP/1.1 response compression. Since these web servers act as front-end proxies to a good number of web applications deployed on distinct application servers, they were the right place to centralize such a configuration without having to modify every server one by one.

To my surprise, after applying the new configuration, I discovered that it wasn't working correctly with Safari (4.0.5). Every browser I could test (Firefox, Internet Explorer, Google Chrome, Opera) on a bunch of different operating systems (Solaris, GNU/Linux, Windows) worked correctly. Safari did not, and the problem manifested itself as random blank pages, missing images, incorrect CSSs and so on.

I fiddled for a while with the Apache configuration and, in the end, the only working solution was adding the following sad line to httpd.conf:

BrowserMatch Safari gzip-only-text/html

So sad.


Monday, May 10, 2010

Speeding Up Web Access and Reducing Traffic With Apache

One of the parameters affecting our web sites' response times that we, developers or system administrators, do have under our control is the size of the HTTP response. This size may be reduced after careful analysis and engineering so that responses are non-redundant and efficient. Nevertheless, developers often forget that, just before our web server returns the HTTP response to our clients, there's one last thing that can be done, provided you're using at least HTTP/1.1 (which will almost invariably be the case): applying a compression algorithm.

Compression algorithms are everywhere and the HTTP protocol is no exception. Although you should carefully analyze your application's resource consumption to discover potential bottlenecks, users typically spend much of their time waiting for pages to load. Images, scripts, embedded objects, the page markup: all of them contribute to a bandwidth usage that affects your web application's response time. The same way you spare hard disk storage when you compress your images or your music with an appropriate compression algorithm, you'll spare bandwidth (and hence time) if you compress your responses.

A Short Introduction

Let's make a short introduction before going on. For compressed output to be understood by user agents, coordination between the server and the browser must take place: that's why HTTP/1.1 formalized and standardized how and when compression can be used. Basically, servers and clients exchange information to determine whether compressed requests and responses can be used and, if both support a common algorithm, they use it. Most of the time this information exchange is made with the Accept-Encoding and Content-Encoding HTTP headers.
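Trimmed down to the relevant headers, a typical exchange looks like this:

GET / HTTP/1.1
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Content-Encoding: gzip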

HTTP/1.1 specifies three compression methods that can be used: gzip, deflate and compress. Many clients and servers support gzip: notably, the Apache HTTP Server does. Others support deflate, although its usage by browsers is quirkier than gzip's.

gzip, that is surely known to UNIX users, will produce good compression rates for text: it's not uncommon to achieve compression rates of 70% and above when compressing text files, typical HTML markup or JavaScript code.

Configuring Your Apache Web Server

Configuring the Apache HTTP Server to compress its output is pretty easy. One of the things you should take into account is almost obvious: not every content type compresses well, and compression has a cost. So, depending on the content served by your application, consider configuring your web server accordingly so that precious CPU cycles aren't wasted compressing something that shouldn't be. Text, hence HTML markup, JavaScript, CSSs and so on, will quite surely compress well. Already-compressed formats such as JPEG images, PDFs, and multimedia files such as mp3, ogg or flac will not.

Enabling mod_deflate

mod_deflate is a module, bundled with standard Apache 2 distributions, that provides the filter you need to compress your traffic. To enable mod_deflate you must modify your Apache configuration file accordingly. Open httpd.conf and verify that mod_deflate is enabled:

[...snip...]
LoadModule deflate_module libexec/mod_deflate.so
[...snip...]

Deciding When and What To Compress

The next choice you have to make is when and what to compress. Apache is pretty flexible and you can apply compression at distinct levels of your configuration such as, for example:
  • Apply it to everything.
  • Apply it at individual <Location/> levels.
  • Apply it at <VirtualHost/> level.

The "best" configuration will depending on how you're using your Apache HTTP server. If you're using your Apache HTTP Server as a proxy and to manage different virtual hosts, you might be interested on reducing configuration complexity:
  • Disable compression on every web server proxied by your front-end Apache server.
  • Configure compression on Apache by using appropriate <Location/> sections or at a virtual host level.

The last web server I configured for a client of mine acted as a proxy for a great number of virtual hosts. Since every virtual host was serving compressible content, we applied just one configuration at the / location:

<Location />
[...snip...]
# mod_deflate configuration here
</Location>

Take time to analyze the characteristics of your traffic before blindly turning on compression: you may save CPU cycles. Do remember to disable compression behind your Apache server: there's probably no point in compressing twice or more, and you should do it just before the response is sent to your clients.

An Example Configuration

A typical configuration will take into account:
  • Browsers' non-compliant behaviors.
  • Content types that should not be compressed.

The configuration we usually run is more or less the same configuration exemplified in mod_deflate's official documentation:

[...snip...]
# Insert filter
SetOutputFilter DEFLATE

# Netscape 4.x has some problems...
BrowserMatch ^Mozilla/4 gzip-only-text/html

# Netscape 4.06-4.08 have some more problems
BrowserMatch ^Mozilla/4\.0[678] no-gzip

# MSIE masquerades as Netscape, but it is fine
# BrowserMatch \bMSIE !no-gzip !gzip-only-text/html

# NOTE: Due to a bug in mod_setenvif up to Apache 2.0.48
# the above regex won't work. You can use the following
# workaround to get the desired effect:
BrowserMatch \bMSI[E] !no-gzip !gzip-only-text/html

BrowserMatch Safari gzip-only-text/html

# Don't compress images
SetEnvIfNoCase Request_URI \
\.(?:gif|jpe?g|png)$ no-gzip dont-vary

# Make sure proxies don't deliver the wrong content
Header append Vary User-Agent env=!dont-vary
[...snip...]

A brief explanation of the configuration example follows:
  • The first line sets the DEFLATE output filter.
  • The BrowserMatch directives tell Apache to check its clients' browser versions to work around some well-known quirks.
  • The SetEnvIfNoCase directive matches the request URI against a regular expression: if it matches, compression is not applied. In this case, as you can see, common poorly-compressible image formats are matched.
  • The last line tells Apache to append an additional Vary header so that proxies will not deliver cached (compressed) responses to clients that cannot accept them.
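To check that compression is actually being applied, you can, for example, dump the response headers with curl (the host name is hypothetical) and look for Content-Encoding: gzip:

$ curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://your-server/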

Next Steps

Needless to say, that's just a basic configuration and much finer tuning can be done. One of the first things you might want to tweak is the way you control which files will be compressed. Instead of using the SetEnvIfNoCase directive as shown in the example above, you could use AddOutputFilterByType to register the DEFLATE filter and associate it with the MIME types of the files you want to compress. To do that, remove the SetOutputFilter directive from the example above and use the following instead:

[...snip...]
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE application/rss+xml
[...snip...]

and so on.

If you're managing many applications with a variety of web and application servers on your boxes, consider using a front-end Apache to centralize such a configuration. Instead of configuring each of your servers, you'll reduce your infrastructure's complexity and improve its maintainability. If you want to know how to configure Apache virtual hosts, a previous blog post is a good starting point.

Command Line Clients to Manage Atlassian Software

Atlassian software such as JIRA and Confluence not only lets users interact with it through a web interface: both products expose an API which can be invoked remotely via JAX-RPC or SOAP protocols. Such an API is ideal if you need to batch execute some work on an instance or if you want to build a client of your own around it. Nowadays, with the help of modern IDEs and frameworks, it's pretty easy to build a JAX-RPC or SOAP client. The excellent NetBeans, for example, will build a web service client for you in just a couple of clicks: more than once I wrapped such a client inside some shell scripts just for their ease of use and for automation's sake.


Nevertheless, if what you need is just a wrapper around these remote APIs, you may consider using the Atlassian Command Line Interfaces instead of building your own client. The Atlassian Command Line Interfaces are shell script wrappers around a Java client, and the only requirement to run them is having Java in your $PATH, which you probably already have. Their syntax respects the typical UNIX shell script conventions and you'll really feel at home with them.


There is an Atlassian Command Line Interface for almost every Atlassian product on the market. If you're using more than one product, consider downloading the Atlassian Command Line Interface bundle instead.


Happy scripting and enjoy a better experience with your Atlassian products.


Changing Confluence attachment storage

By default, Confluence stores attachments in the attachments subdirectory of its home directory, but local file systems aren't the only supported attachment storage: Confluence also supports database storage. The only drawback you should be aware of is obvious: the required database capacity is going to increase, though not by more than you would be using in the local file system anyway.

The advantages of using database storage are ease of administration when backing up and restoring a Confluence instance. Using your database to store files will also shield you from problems that might arise from file names containing characters that are invalid for the file system hosting the Confluence home directory.

To change Confluence attachment storage, just go to the Administration Console, open the Attachment Storage panel and select the storage type you want:


Take into account that you can switch to database attachment storage any time you need to: even if your Confluence instance is already storing attachments in the local file system, Confluence will migrate the attachments to the new storage and will soon be back online.

Backing up JIRA and Confluence taking advantage of ZFS snapshots

If you're running an instance of JIRA or Confluence (or many other software packages as well), you probably want to make sure that your data is properly and regularly backed up. If you've got some experience with JIRA or Confluence, you surely have noticed the bundled XML backup facility: a scheduled backup service which takes advantage of it is even running by default in your instances.

The effectiveness of such a backup facility depends on the size of your installation, but the rule of thumb is that it's a mechanism that does not scale well as the amount of data stored in your instances grows. In fact, the XML backup was designed for small-scale installations and is not a recommended backup strategy for larger-scale deployments.

In the case of JIRA I continue to run automated XML backups, since they do not store attachments; as far as Confluence is concerned, I always disable the automated XML backup and rely on native database backup plus a backup of the attachment storage. The database backup must be performed with the native database tools, such as pg_dump for PostgreSQL. The backup of your instance's attachments will depend on the type of storage in use. If you're storing your attachments in the database, they will be backed up automatically during your database backup. If you store your attachments in a file system, as is the case for both JIRA and Confluence default installations, there's plenty of tools out there to get the job done, such as tar, pax, cpio and rsync (to name just a few). Each one of these has advantages and drawbacks, and I won't enter into a detailed discussion: suffice it to say that none can beat a Solaris ZFS-based JIRA or Confluence installation.

Since ZFS's inception I've been taking advantage of its characteristics more and more often, and snapshots are a ZFS killer feature that will considerably ease your administration duties. Whenever I install a new instance in a Solaris Zone, I set up ZFS file systems for hosting both the database files and the JIRA or Confluence home directories:

# zfs create my-pool/my/db/files
# zfs create my-pool/jira/or/confluence/home

Taking a snapshot of a ZFS file system is a one-liner:

# zfs snapshot file-system-name@snapshot-name

In an instant your snapshot will be done and you will be able to send it to another device for permanent storage. ZFS snapshots, combined (or not) with another tool such as rsync, will incredibly simplify backing up your files and also let you maintain a cheap history (in terms of storage overhead) of changes, in case you need to roll back your file systems (and hence the data stored in your application).

Take into account that, to recover a single file from a snapshot in case your original pool crashes, you will need to ZFS-receive the snapshot into another pool for the files to be accessible. That's why I still rely on a scheduled rsync backup together with ZFS snapshots, just in case, although with a much lower frequency than in the pre-ZFS epoch.
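A sketch of sending a snapshot to another host for safekeeping (host and pool names are hypothetical):

# zfs send my-pool/jira/or/confluence/home@snapshot-name | ssh backup-host zfs receive backup-pool/home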





Sunday, May 2, 2010

JIRA Security Advisory 2010-04-16

Atlassian published a Security Advisory for JIRA on April 16th, 2010. The Security Advisory warns about privilege escalation and XSS vulnerabilities. Patches for these vulnerabilities are distributed with JIRA 4.1.1.


JIRA: Creating issues from TLS-encrypted mail

As you know, I'm extensively using Atlassian JIRA, and one of the features my current client uses most is automatically creating issues from received email. The ability to automatically parse an email and create an issue is a nice built-in JIRA feature which can sometimes spare you a lot of work.

Configuring this service is straightforward:
  • Configure a mail server.
  • Configure the mail service.

Configuring a Mail Server

The mail server configuration screen, which you can access from your JIRA Administration section, is a simple screen where you can configure the basic properties of your mail server:
  • Name.
  • Default From: address.
  • Email Subject: prefix.
  • SMTP configuration:
    • Host.
    • Port.
    • (Optional) User credentials.
  • JNDI location of a JavaMail Session, in case you're running JIRA on a Java EE Application Server.

Once you've set up a mail server, you can proceed and configure the service that will read your mailbox and create issues for you.

Configuring a "Create Issues From Mail" Service

The Services configuration tab lets you define JIRA services, which are the JIRA equivalent of a UNIX cron job. JIRA ships with some predefined services, two of which are:
  • Create Issues from POP.
  • Create Issues from IMAP.

Depending on the protocol you're accessing your mail server with, you'll choose the appropriate service. In my case, I always choose IMAP if available. The following screenshot is the configuration screen of the "Create Issues from POP/IMAP" service:


There are different handlers you can choose from: you can find detailed information in the JIRA documentation. The "Create issue or comment" handler is probably what you're looking for. The handler parameters let you fine-tune your handler with settings such as:
  • project: the project new issues will be created for.
  • issuetype: the type of issues that will be created.
  • createusers: a boolean flag that sets whether JIRA will create new users when a mail is received from an unknown address. Generally, you want this to be false.
  • reporterusername: the name of the issue reporter when the address of the email doesn't match the address of any of the configured JIRA users.

Usually I set this parameter to something like: project=myProjId,issuetype=1,createusers=false,bulk=forward,reporterusername=myuser

The Uses SSL combo box lets you choose whether your mailbox will be accessed using an encrypted connection. If you're planning to use SSL to access your mailbox, you will probably need to import your mail server's certificate into your certificate file, as explained later.

The Forward Email parameter lets you specify the address where errors or emails that could not be processed will be forwarded.

The Server and Port parameters let you choose the mail server this service will connect to. The Delay parameter lets you specify the interval between service executions.

Connecting to an SSL Service

If you're going to access your mail server using SSL, you will probably need to import the mail server's public key into your certificate file, otherwise you'll receive javax.net.ssl.SSLHandshakeExceptions. In a previous post I explained how you can retrieve a server's public key using OpenSSL. Once you have got the public key, you can add it to your key store using the keytool program. The location of your key store may depend on your environment or application server configuration. The default location of the system-wide key store is $JAVA_HOME/jre/lib/security/cacerts. To add a key to your key store you can run the following command:

# keytool -import -alias your.certificate.alias -keystore path/to/keystore -file key-file
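If you don't have the server's certificate at hand, this sketch retrieves it with OpenSSL (the host name is hypothetical; 993 is the usual IMAPS port):

$ openssl s_client -connect mail.example.com:993 < /dev/null | sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' > key-file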

Additional Considerations for Solaris Sparse Zones

I often use Solaris 10 sparse zones to quickly deploy instances of software such as JIRA. In this case, please note that the system-wide Java key store won't be writable from within a zone. Instead of polluting the global zone's key store, I ended up installing Java SE in every zone I deploy, to avoid applications trusting certain certificates just because other applications do.