MySQL + NPTL = Database Lock Ups

There is a bug in the interaction between the Native POSIX Thread Library (NPTL) and MySQL that causes database lock-ups on Fedora, RedHat 9, and Linux kernels 2.5 and 2.6.

When using MySQL with NPTL, sometimes a process will hang, and it is impossible to kill it, even with a kill -9. You cannot shut the rest of the server down gracefully either; you can only kill those processes with kill -9, and the hung one still will not die. This suggests that the problem is probably in the NPTL code in the kernel, since the process is stuck in kernel space.

This NPTL-hangs-MySQL bug has been reported to MySQL, but they seem unwilling to look into it. All reported occurrences happened on SMP systems, although I’m not sure that’s related.

Without fail, when we killed the process in order to restart the server, we ended up with database corruption. This is not a good thing… One interesting side note: if you strace the hung process, MySQL will recover. Even then, however, we have still seen database corruption.

The solution mentioned in the bug report, which has worked for others, is to force the use of the older LinuxThreads library by starting MySQL like so:

# export LD_ASSUME_KERNEL=2.2.5; mysqld_safe &
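If you start MySQL from an init script rather than by hand, a small wrapper can make the workaround stick across restarts. Here is a sketch; the assumption that mysqld_safe is on your PATH (and where your startup script lives) is mine, not from the bug report:

```shell
#!/bin/sh
# Sketch of a wrapper script: force LinuxThreads before launching mysqld_safe.
# (Assumes mysqld_safe is on PATH; adjust for your install.)
LD_ASSUME_KERNEL=2.2.5
export LD_ASSUME_KERNEL
echo "LD_ASSUME_KERNEL=$LD_ASSUME_KERNEL"   # sanity check before launching
# mysqld_safe &
```

The echo is just a sanity check; uncomment the last line once you have verified the variable is set.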

A Little Diddy About the Thunderbird Email Client

Don’t want to load Mozilla to check your mail? Sick of all the mediocre email clients out there? Give Thunderbird a try.

Ever since the folks over in Mozilla land finished up with their everything-but-the-kitchen-sink Mozilla browser, they’ve been realizing that maybe people don’t want to load a web browser just to check email and newsgroups. Maybe they could have actually delivered Mozilla when it still mattered if they’d realized it sooner.

Regardless, the new trend is to split out the browser and email portions of the Internet Suite so that we can actually load the part we want when we want it.

Enter Thunderbird 0.6, released just recently. Thunderbird is nearly ready for prime time, and it is a very straightforward email client. A real lean, mean, email machine. I’ve been using it at work for a few weeks and have been quite impressed.

I have stumbled across one bug on my Fedora install though. If I load Thunderbird, then I can’t use Mozilla to browse afterwards. Several of my coworkers noticed the same problem. The solution, oddly enough, was to start both Thunderbird and Mozilla with the “-P default” option, which tells them to use the default user profile. (Why they wouldn’t anyway is anybody’s guess.)


thunderbird -P default
mozilla -P default

So if you want a nice little email client that starts up quickly and doesn’t feel quite as awkward as Mozilla Mail, give Thunderbird a try.

Using “xargs” and “convert” to change image file formats

Suppose you have a directory full of .gif images that you want to convert to .jpg files. Here’s how to combine a handful of command line tools to convert them all. No clicking, dragging, or context menus required. 🙂

Let’s build this from the ground up. We’ll need four components, which I’ll detail below.

  1. First, we need a list of files ending with .gif. That should be easy enough.
    ls *.gif
  2. Second, we need to remove the .gif extension from the filename so that we can later append .jpg to create the new filename. To do this we use sed.
    sed -e "s/\.gif$//"

    This tells sed to read from stdin and, on any line that ends with .gif, remove the .gif. (The backslash matters: an unescaped . would match any character, not just a literal dot.)

  3. Third we need a way to run a program once for each file that we are modifying. That is where xargs comes in. It reads from stdin and runs whatever program you tell it to. It even builds the rest of the command line for you. A slightly useless example would be
    xargs -n1 echo

    which would run the “echo” command once for each line read from stdin. Echo then turns around and outputs the string again.

  4. The final piece of the puzzle is a program for converting between image formats. We’ll use the aptly named “convert”. All we do is pass in an input filename and an output filename; convert takes care of the rest.
    convert myimage.gif myimage.jpg

Now we combine all these pieces through the magic of pipes, and we can convert all the .gif files in a directory to .jpg files.

Here is the command:

ls *.gif | sed -e "s/\.gif$//" | xargs -n1 --replace convert {}.gif {}.jpg

Just a couple more points of detail

  • ls *.gif will automatically output one filename per line when we use it in a “pipe” situation
  • The -n1 argument says that we want xargs to read one and only one line of input each time it runs convert. Otherwise it would try to read all of its stdin at once, which would confuse convert, which expects one input file per command line.
  • The --replace argument to xargs says that we want to replace {} with the value read from stdin. If we didn’t do this, xargs would tack the line on as the final parameter when it ran convert. (Like the previous xargs example.)
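For comparison, here is a hypothetical alternative that skips the pipeline entirely: a plain shell loop using parameter expansion, which also survives filenames containing spaces. The echo is there so you can preview the commands; drop it to run convert for real.

```shell
# Preview the convert command for every .gif in the current directory.
# ${f%.gif} strips the .gif suffix, just like the sed step in the pipeline.
for f in *.gif; do
  echo convert "$f" "${f%.gif}.jpg"
done
```

Either approach works; the loop is just one less program to remember.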

Setting your $PATH variable

Ever want to run a program and your shell doesn’t know where to find it? How does your shell know how to find some directories and not others? The short answer is that there is a variable named $PATH that contains a list of directories to look in. This article will focus on setting the path variable in the bash shell.

The first thing you need to know about setting the path in Linux is that the technique for setting it is shell specific. We’ll be concentrating on the Bourne-Again-Shell (bash).

Every bash user can have two files in their home directory that bash looks for at certain times. The first is .bash_profile, which runs once per login. The other is the .bashrc file, which runs once per interactive shell (roughly, a shell you type commands into, as opposed to one that is just running a script).

Your .bash_profile file is the appropriate place to set your path, but there is a little more to understand. So let’s take a look at a simple .bash_profile file:

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

As you can see, we set PATH to $PATH:$HOME/bin. In other words: take everything that is already in the path and tack on the “bin” subdirectory of your home directory ($HOME).

Why would we append to the path rather than just setting it? The reason is that other files that were included before this one have already set the path variable. You’ll also note the strange “if [ -f ~/.bashrc ]” statement. That line (and the ones after it) means: if ~/.bashrc exists, run it.

So what is in my ~/.bashrc? Let’s take a look:

# User specific aliases and functions

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

This file is where we would put any aliases we want to set up. Then we turn around and use the same trick as before to run /etc/bashrc.

This whole files-including-files cycle goes on for a level or two more after this, even. When it’s all said and done, any script along the way could have added directories to the path. That is why we append to the path rather than setting it directly.

So let’s show a real world example of setting the path variable. Sometimes /usr/local/bin isn’t part of your path. To add it, simply edit ~/.bash_profile. Somewhere in that file add the following:

PATH=$PATH:/usr/local/bin
export PATH

After that you can either type “. ~/.bash_profile” or just log out and log back in. Either way your path will be set from that point on.
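To double-check that the change took, you can ask the shell whether the directory really made it into $PATH. A quick sketch, reusing the $HOME/bin example from earlier:

```shell
# Append $HOME/bin, then verify it is actually in the path.
PATH=$PATH:$HOME/bin
export PATH
case ":$PATH:" in
  *":$HOME/bin:"*) echo "in path" ;;
  *)               echo "not in path" ;;
esac
```

The colons wrapped around $PATH make the match exact, so /usr/local/binfoo won’t be mistaken for /usr/local/bin.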

The beauty that is “xmllint”

Up until just recently I thought that there were no XML validators available under GPL terms. Turns out the XMLSoft people have built a program named “xmllint” that will validate your XML against a DTD you reference.

So I started looking into XML validation. Up until now it has always seemed like it would be more work than it was worth. Little did I know I would scarcely have to do a thing.

All you need to do to validate your XML is pass it to xmllint with the --valid flag. I believe xmllint is part of the libxml2 suite; it is by the same people. My Gentoo machine already had it installed, as did a RedHat machine that I use frequently.

Below is a sample XML document and the command line I used to validate it.

<!DOCTYPE article SYSTEM "/articles.dtd">

<article>
  <p>This is a single paragraph article.</p>
</article>

Command line

xmllint --valid test.xml

Notice the “<!DOCTYPE” line? The second parameter is the name of the outermost tag for your document; in my case this was “article”. The “SYSTEM” keyword means that we are validating against our own DTD rather than a well-known one. The final parameter is the path to your DTD. That’s it.
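For reference, I never show articles.dtd itself. A minimal DTD consistent with the sample document above might look like this (an illustration, not necessarily your actual file):

```xml
<!ELEMENT article (p+)>
<!ELEMENT p (#PCDATA)>
```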

xmllint will return an exit code that tells you how it went. Zero means it worked; nonzero means there were errors. It will also output any errors to stderr. For my purposes I wanted to capture the errors and present them to a web client. Here is the PHP I used to make that happen.
$cmd="xmllint --valid --noout ".escapeshellarg($filename)." 2>&1";
exec($cmd, $output, $return_code);

There are a couple of items in the above example that I should probably explain now.

  • The --noout option tells xmllint not to output the contents of the file it validates.
  • The escapeshellarg() function is a PHP function that does its best to make your filenames safe for the command line. You should use EXTREME caution whenever dealing with anything you are going to run through exec().
  • The 2>&1 tells the shell to merge stdout and stderr into one stream. In this case we used it to capture stderr into our $output variable.
  • The $output variable is a little quirky. It is returned as an array of lines.

Now that you have seen how easy it is to validate your XML documents, I hope you’ll take the time to validate your XML where appropriate. I know I will.

Mixing static and dynamic linking

Most of us do nothing but dynamic linking in our small C or C++ programs, but what do you do if you need to use both? I recently found myself in just this situation. The answer seemed to be so obvious to people that nobody had bothered to document it. Here is what I found:

Static linking is actually really easy to combine with dynamic linking. All you need to do is list the full name of the static library you want to link instead of using the -l option to find it for you. Here is a real world example that I used to link libsqlplus.a (static) with libmysqlclient (dynamic).

INC =   -I/usr/include/mysql/ -I/usr/include/sqlplus
WARNINGS = -Wno-deprecated

# Note, libsqlplus is picky about where it builds,
# so I've linked it statically from a known good build.

    g++ $(INC) $(WARNINGS) \
       -L/usr/lib/mysql/ /usr/local/lib/libsqlplus.a -lmysqlclient -lz -o test

Note that libsqlplus.a is explicitly listed, while libmysqlclient and libz are just linked in using -l and -L. Not so bad, eh?

Building Mysql C++ Connector

MySQL AB distributes a super cool C++ database wrapper for MySQL that you can use under the terms of the LGPL to develop your apps, but the problem is that they don’t document very well how to build it from source. I tried to download the patches and simply pipe them into patch, with little success. Turns out that you have to apply the patches using some special options.

First you’ll need to track down the patches and source. I used the SRPM for RedHat 9. They also have 3 of the 5 patches available for direct download. Once you have the patches, run this sequence of commands.

patch -p1 -d mysql++-1.7.9 < mysql++-gcc-3.0.patch
patch -p1 -d mysql++-1.7.9 < mysql++-gcc-3.2.patch
patch -p1 -d mysql++-1.7.9 < mysql++-gcc-3.2.2.patch
patch -p1 -d mysql++-1.7.9 < mysql++-prefix.patch
patch -p1 -d mysql++-1.7.9 < mysql++-versionfix.patch
cd mysql++-1.7.9
rm aclocal.m4 config.guess config.h config.status \
   config.sub configure install-sh libtool ltconfig missing \
   mkinstalldirs stamp* examples/ sqlplusint/
automake --foreign --add-missing

802.11b networking on the cheap

Sometimes you forget that there is such a thing as a good deal out there. Last week I was reminded by a $29 wireless cable router.

Granted, it is 802.11b instead of 802.11g, but from what I’ve heard that isn’t all that big of a difference anyway, particularly if you are in range of another hotspot and everything downgrades.

I recently switched to high speed internet via cable modem. For the first month or so I just left my Linux firewall running 24/7. The problem is that the firewall has some ancient SCSI hard drives that I bought on ebay a few years ago. They don’t like to be run that much so I decided it was time to invest in a cable router.

Turns out it was one of the coolest gadgets I ever picked up. I’m talking about the D-Link DI-514. A solid value if ever there was one. It has a built-in 4-port switch. The switch is auto-sensing, so you can use crossover cables or straight-through cables; it will sort it all out.

It has a sweet integrated firewall that you can access from any of the four internal network ports. (The four internal ports are physically separate from the one external port.) You plug your cable connection into the external port, one of your boxen into an internal port, surf to the router’s IP in a web browser, and bam! Instant internet connection. The router takes care of DHCP, but also plays nice with any static IPs you want to assign.

You can even tell this baby to report a different external MAC address than the default. This is real handy if you don’t want to give up the IP address you leased over DHCP before installing the cable router. I just opted to lease a new IP address; faking the MAC address sounded like a bad idea to me, but it is cool that you could in a jam.

I picked this super cool device up at CompUSA for, I believe, $59. That might not be exactly right, but it sounds close. I remember for sure that after rebate it was $29. Even at $59 it was competitive with the other wireless cable routers. Life is good.

Playing/converting .wma files

Want to play .wma files in Linux? XMMS doesn’t know how to read them, but MPlayer does! You can use MPlayer to convert them to .wav files, which XMMS will be able to read.

Here is how you convert them. Note the “-aofile somefile.wav”; that is how we specify the output filename.

mplayer somefile.wma -ao pcm -aofile somefile.wav

And here is how you play them without converting them (same thing but without an output filename):

mplayer somefile.wma -ao pcm
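If you have a whole directory of .wma files, a loop like this applies the same conversion to each one (hypothetical; the echo previews the commands, drop it to actually run mplayer):

```shell
# Preview an mplayer conversion command for every .wma file here.
# ${f%.wma} strips the .wma suffix so we can tack on .wav instead.
for f in *.wma; do
  echo mplayer "$f" -ao pcm -aofile "${f%.wma}.wav"
done
```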

MPlayer is available prepackaged on RedHat and Gentoo. And through the miracle of the internet, you can also get it straight from the source in Budapest, Hungary, at the MPlayer project home page.

Free DNS for the masses

Need DNS hosting? Don’t want to pay? The good folks at EveryDNS will help you out. They have multiple points of presence, so you can use them as both your primary and secondary DNS. The service is free, but they would like a donation. We’ve been using the service for this little web site for two months now, and I think they probably do deserve a little something for their easy-to-use site. Check them out at