Category Archives: DB2

Never Buy Version 4.0 of Anything

As I’ve mentioned before, TOAD is one of my regular tools for database work.  At least it used to be.

I’ve had to keep upgrading TOAD for DB2 to stay current with DB2 releases, but the 4.0 release is so buggy that it’s unusable.  The 4.7 Commercial Version has been out for a while now, but there’s no sign of those fixes and enhancements coming to the freeware version anytime soon.  Folks have been asking in forums, but there’s no meaningful response.

It seems that since TOAD first donned that Quest T-shirt, this formerly independent tool has become increasingly constrained.  Ever more menu options are grayed out with the “(Commercial Feature)” taunt.  Simple things that worked in prior releases are broken in the 4.0 freeware versions.  Perhaps they work just fine in the paid commercial version.

I’ve purchased Quest tools in the past, so cost isn’t the issue: availability is.  TOAD is one of those tools that I’ve frequently recommended largely because it was just one free download away.  If freedom and ubiquity weren’t so very important, we might all be coding in CFScript.

Quest is a great company, but TOAD has not done well lately under its ownership (don’t get me started on the lack of a *ix version).  I only wish there were a TOra for DB2.

This happens with some companies that take over a free, independent, or open source project: they take it under their wing, lock it down, and look for improper ways to capitalize on it.  It can make money in the short run, but let’s be up front about it: state your intentions and either spin off a community version or kill off the freeware version altogether.  This “slippery slope” behavior is right down there with snake oil salesmen and Oracle.


I usually don’t have (nor want) direct, real-time access to production databases, yet I’m sometimes asked to diagnose and correct performance issues with them.  So my typical MO is to collect periodic snapshot files (driven by cron or schtasks) and run them “offline” through a set of tools I’ve developed for postmortem analysis.  These tools of mine measure and report trends, calculate deltas, do performance calculations, and flag areas of concern.
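The heart of those postmortem tools is simple counter arithmetic: DB2 snapshot counters mostly increase monotonically, so the interesting numbers are the deltas between successive snapshot files.  As a hypothetical sketch (the counter names and structure here are illustrative, not my actual tools):

```python
def snapshot_deltas(prev, curr):
    """Difference each monotonically increasing counter between two
    periodic snapshots; counters new in curr are treated as starting at zero."""
    return {name: curr[name] - prev.get(name, 0) for name in curr}

# Two successive snapshots (illustrative counter names and values)
prev = {"rows_read": 120000, "total_sorts": 450, "sort_overflows": 12}
curr = {"rows_read": 185000, "total_sorts": 610, "sort_overflows": 21}

deltas = snapshot_deltas(prev, curr)
# deltas: {"rows_read": 65000, "total_sorts": 160, "sort_overflows": 9}
```

Run over a whole directory of cron-collected snapshot files, the same calculation yields the trends and per-interval rates that flag areas of concern.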

But occasionally, I do need to monitor databases in real time.  For this, I’ve relied on third-party tools, such as Quest Spotlight, the free and simple db2mon, db2pd and get snapshot commands, and even Windows perfmon against the DB2 counters.  All the while, I wondered why IBM didn’t provide similar real-time monitoring tools.

Top for DB2

But IBM recently jumped into the game with the db2top utility for Linux and AIX.  As the name implies, it’s like top for DB2.  It uses a text user interface (it’s curses-based) which, despite the retro-80’s feel, is a benefit.  That is, it’s super-fast (jumping between views is quick and easy), and, of course, it runs from a command shell, so no X required.

Db2top shows just about everything available in DB2’s snapshots, and then some.  For example, it automatically calculates things like bufferpool hit ratios, sort overflow percentages, average I/O times, async read percentages, average execution times, and lock chains.  In most views, you can use the arrow keys to scroll right and left to see additional details; I found it useful to increase the screen width to see more without scrolling.  But, of course, the real benefit is watching instantaneous updates and deltas.  This avoids having to “do the math” yourself by processing a series of snapshot files.
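To give one example of the math db2top saves you: the bufferpool hit ratio is derived from the logical and physical read counters in the snapshot.  A sketch of the standard calculation:

```python
def bufferpool_hit_ratio(logical_reads, physical_reads):
    """Percentage of page requests satisfied from the bufferpool rather
    than from disk, per the standard formula over snapshot counters."""
    if logical_reads == 0:
        return 100.0  # no reads yet; nothing missed
    return 100.0 * (logical_reads - physical_reads) / logical_reads

ratio = bufferpool_hit_ratio(logical_reads=250000, physical_reads=7500)
# ratio: 97.0
```

Db2top computes this (and its data/index variants) continuously from the deltas, which is far handier than pulling counters out of snapshot output by hand.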

With Any More Features, We’d Run Out of Alphabet

There are some nice tie-ins, like the ability to grab an instantaneous snapshot (S) for the corresponding screen, and run explains (L) directly from the Dynamic SQL screen.  Both toss the results directly into vi for viewing.  You can even force applications directly from the sessions (applications) view (bwaaaahahaha!).  And navigation aids like column sorting (Z, z) and screen “freezing” (f) come in very handy when drilling in for closer analysis, particularly when there’s a lot going on.  Quick reference help (h) is available in case you forget these keystrokes.

The bottlenecks (B) screen is useful for finding the “problem children” – those agent(s) consuming the most of each type of DB2 resource: CPU, memory, I/O, locks, sorts, etc.  This makes for a good starting point when jumping into a crisis.

The tool also supports capturing snapshots over time, running analysis reports, exporting to Excel, and replaying results.  I haven’t done much with these features yet, but they may very well take the place of my home-grown snapshot analysis tools.

Just What You Need

Before this, if you told me you had “Spotlight for a command shell,” I’d think you were crazy.  But db2top comes remarkably close.  Overall, it has a lot going for it: ubiquity (it’s there with every DB2 9.x *ix install), speed, and just the monitoring tools I need (no more, no less).  There’s not much lacking, but the following additions would be nice:

  • Extensibility (or source code).  For example, I could then add my own performance calculations – things like log, victim, and threshold trigger ratios for page cleaners.
  • Tie-ins to event monitors, at least db2detaildeadlock.
  • A Windows version, even if it required Cygwin.

Yet I’d rather it lack a few of these flourishes than be laden with extra features that only add bulk or cost, or that just get in the way.  Too many other tools fall into that trap.

The Squeeze

My friendly AIX admin recently asked me to move some of my test databases from a P570 to a new Power 7 (P750) box.  I used db2look and db2move to replicate these on the new system, but I wanted to take it a bit further.  Since I’ve been called a “DASD pig,” I saw this as a good opportunity to enable DB2 row compression on some of the larger tables and see how much space I could save.

Look Before You Squeeze

Before moving, I took some compression estimates on the source system. Since it was DB2 9.1, I had to use inspect:

db2 inspect rowcompestimate table name TABLENAME schema SCHEMANAME results keep TABLENAME.insp
db2inspf sqllib/db2dump/TABLENAME.insp TABLENAME.out


This gave “percentage saved” estimates in the 61 to 81 percent range for the tables I selected: very promising, considering the table sizes.

The target system was DB2 9.7, so I could use admin_get_tab_compress_info to get estimates there:

db2 "select * from table(sysproc.admin_get_tab_compress_info('SCHEMA', 'TABLE', 'estimate')) as t"

For one 8-million-row table, the estimated savings were 61%: the same as the inspect estimate for this table on the DB2 9.1 box.


I was curious how accurate these estimates were, so I measured it.  Since I had some similar databases, I created the first database without row compression and enabled compression after loading the data.

I compared sizes before and after the “alter table … compress yes” and reorg. The tablespace snapshots revealed a 51% reduction in 4K pages: from 909,985 to 442,817.  And the (SMS) disk space savings were in that same range: from 2.48 GB to 1.19 GB for that table’s data.  The index space savings weren’t significant (from 0.98 GB to 0.92 GB), but I didn’t expect much there.
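The arithmetic behind those percentages, using the numbers above:

```python
def pct_reduction(before, after):
    """Percentage saved going from the uncompressed to the compressed size."""
    return 100.0 * (before - after) / before

pages_saved = pct_reduction(909_985, 442_817)   # 4K pages: about 51%
disk_saved = pct_reduction(2.48, 1.19)          # SMS data, GB: about 52%
```

So the measured savings (51%) came in somewhat below the 61% estimate for that table, but still well worth it.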

I ran some quick tests using the system against the new (compressed) tables and found no significant performance impacts.  The initial reorgs and imports to enable compression and load data were painfully slow, but those are “off hours” activities anyway.

I’m Convinced

This was enough to convince me to enable compression from the very beginning for some of the larger tables in the remaining databases.  That is, I edited the db2look-generated DDL to add “compress yes” to the create table statements before running them.  I considered switching some of the SMS tablespaces to automatic storage, but decided to stick with only one change at a time for this round.

So far, I’m pleased with the results.  I’ll know more the next time we do a performance test, and I’ll be less of a “DASD pig” in the meantime.


Code page issues have become unexpectedly common now that recent versions of DB2 LUW default to UTF-8 / 1208 for XML data type support.  In recent days, two separate projects hit errors like the following:

SQL0302N The value of a host variable in the EXECUTE or OPEN statement is out of range for its corresponding use.

The root cause was that, with code page 1208, certain extended ASCII values were each converted from one byte to two: “two bytes for the price of one.”  This stretched the data, overflowing columns sized to expect one byte per character.

So if you get SQL0302 or similar errors, you can easily check the code page with: get db cfg for DBNAME | grep -i code (or get db cfg for DBNAME | find /i "code").  The quick fix is to specify a code page like 1252 during database creation: create database DBNAME using codeset ibm-1252 territory us.  I do not recommend changing the db2codepage registry variable for this problem.

However, code page 1252 prevents you from using XML data types.  So if this is an issue, there are at least two other options:

  • If the data you’re storing is really binary data, define the column with for bit data.  No codepage conversion will occur, and the data will typically come back to your application as raw bytes, not encoded strings.
  • Expand the size of the column to accommodate some “two for one” conversions.  Only a few extended ASCII characters get this conversion, so unless you go at least twice as large, you’re managing a risk: estimating how many of these characters you’ll actually encounter.
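A quick Python illustration of the “two for one” conversion: an extended ASCII character occupies one byte in code page 1252 but two bytes in UTF-8 (1208), while plain ASCII stays at one byte either way.

```python
# Compare encoded lengths under code page 1252 vs. UTF-8 (code page 1208)
for ch in ("A", "é", "ü"):
    cp1252_bytes = len(ch.encode("cp1252"))
    utf8_bytes = len(ch.encode("utf-8"))
    print(ch, cp1252_bytes, utf8_bytes)
# "A" is 1 byte in both; "é" and "ü" are 1 byte in cp1252 but 2 in UTF-8
```

This is why a varchar(10) column that was comfortably sized for 1252 data can overflow after the same characters pass through a 1208 conversion.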

Painted into a Corner

I’ve had such good success with DB2 9.7 LUW under 64-bit Linux that I didn’t hesitate to install it on my new development laptop (Windows XP 32-bit, the corp-rat standard).  I immediately upgraded to Fix Pack 2, of course: never install fix level zero of anything.  I created and loaded a few databases and was on my way.

But it didn’t take long to notice some very bad behaviors.  I saw frequent CPU, memory, and I/O spikes, mostly in db2syscs and db2dasstm. On two occasions, all database operations froze without recovering, and db2syscs suddenly became a diehard: I couldn’t kill it from the Service Control Panel, Task Manager, or even pskill.  Windows shutdown then froze, requiring a hard power off.

This sent me on a fact-finding mission.  First stops: the Windows Event Viewer and db2diag.log.  Since I had not changed diagpath, I had to remind myself that instance information files are no longer under SQLLIB (with Windows, they’re now under Documents and Settings\All Users\Application Data\IBM\DB2). I spotted a huge and growing db2diag.log, at diaglevel 3.  It was flooded with messages like:


2010-07-23- I82554H352         LEVEL: Error
PID     : 260                  TID  : 536         PROC : db2dasstm.exe
INSTANCE: DB2                  NODE : 000
EDUID   : 536
FUNCTION: DB2 Tools, DB2 administration server, SchedulerThread_run, probe:5
DATA #1 : String, 50 bytes
Cannot complete scheduler thread’s initialization!


The db2diag flooding would certainly account for the CPU, I/O, and memory spikes, but I’m not sure about the periodic freezes.  Still, it’s one thing at a time, and I couldn’t stop until I had a clean db2diag.log.

Fortunately, I found a fellow victim who reported the same issue just yesterday.  The root cause was that the DB2 Admin Server (db2dasstm) did not have required authorities on toolsdb.  This was surprising, since I let db2setup create and configure it.  But I’ve been accustomed to chanting the grant dbadm mantra since the introduction of SECADM, so I typed it out.

But in this case, it wouldn’t work. I couldn’t get into toolsdb with an authorization ID that had SECADM in order to do the grant. I tried stepping into it first via SYSADM/SYSADM_GROUP, but no dice. And toolsdb was unique in that SECADM was only granted to some bogus “SYSTEM” ID.  Thank you, db2setup, for painting me into a corner!

To fix it, I had to drop and re-create the toolsdb, following the proper steps to keep DAS happy.  A couple of db2admin stop/starts later and I had a small and steady diag log.

Time will tell if additional problems remain that contributed to the freeze problem (so far, so good), but I learned an important lesson: never let the DB2 installation program create the toolsdb.

I Ignore Your Puny 32 Bits

Customers these days: they think they’re entitled to 64 bits.  Heck, I remember when we had to do everything with just 32 bits, and only half of those worked.

Since 64-bit Windows and Linux are the new normal, the DB2 LUW folks (particularly with 9.7) have been encouraging migration away from 32 bit apps and libraries.  Sometimes this is more than a gentle nudging, as things break and require intervention.

Colleague Allen Cartright shares a tip for managing 32-bit ODBC data sources when 64-bit Windows just wants to ignore them.  So, if you find yourself running 32-bit DB2 code on a 64-bit OS written by a 2-bit company that can’t stand 1-bit of competition, heed his advice:

The ODBC connection manager in Windows Server 2008 (Administrative Tools -> Data Sources) is 64 bit only.  To get to the 32 bit ODBC connection manager in Windows Server 2008 Rx x64 you must run the executable directly as there aren’t any icons for it.  The executable is “C:\Windows\SysWOW64\odbcad32.exe”.  This is the same executable name as the 64 bit version but is in the legacy directory.  Once you have the 32 bit version of the ODBC Connection Manager open you may add the ODBC connections as normal.

The Microsoft Knowledge Base post on this has additional information and recommends a useful naming convention:

To work around this problem, use the appropriate version of the ODBC Administrator tool. If you build and then run an application as a 32-bit application on a 64-bit operating system, you must create the ODBC data source by using the ODBC Administrator tool in %windir%\SysWOW64\odbcad32.exe. To indicate the type of DSN, you can add “_32” to the 32-bit user DSNs and “_64” to the 64-bit user DSNs.


There are many great tools for running DB2 commands; I often use TOAD, Control Center (db2ce), and a CLI-based tool I wrote.  And with DB2 9.7, I’ve enjoyed experimenting with the new CLPPlus, IBM’s answer to SQL*Plus for Oracle refugees.  But for those quick commands, I usually just pop open a DB2 Command Window (Windows) or bash shell (Linux) and type “db2” followed by the command.  It works great nearly all the time.  Emphasis on nearly.

Today, Wayne reported how this command taken verbatim from the DB2 documentation (the “File type modifiers for export” section) choked:

db2 export to delfile2 of del modified by timestampformat=" hh:mm tt" select * from schedule

It failed with the error: SQL3192N  In the filetmod a user specified format "TIMESTAMPFORMAT" beginning with the string "yyyy" is not valid.  And no value for timestampformat worked.

This is a case where bugs in CLP’s command-line parsing (particularly with double quotes) get in the way.  The command works from inside CLP, Command Editor, and other tools, so you can just type db2 or db2ce and hit enter and then run the command (without “db2” in front) from there.  Using admin_cmd from a CLI/ODBC tool also works, like so:

call admin_cmd('export to delfile2 of del modified by timestampformat=" hh:mm tt" select * from schedule')

Bugs like this have survived in the db2 command line for a while (this fails even in 9.7).  I’ll report it, but since CLPPlus is the new kid in town, the old db2 command line probably won’t get as much attention.

DML Insurance

In a chat today, we discussed how to protect data when formulating DML, particularly those SQL update and delete statements that can destroy perfectly good data if done incorrectly.  You may find yourself having to run such SQLs against large data sets (and, yes, even production data), so no amount of care is too much.

One option is to hold a transaction open, run the DML, select the (uncommitted) results, and then commit if good or rollback if not.  That’s clever, but it can hold a lot of locks for a long time, is error-prone, and doesn’t work for all cases.  Another is to keep full backups handy and restore the table(s) if something goes wrong.  This can be time consuming and can harm availability during restore if the table(s) are large.
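The hold-a-transaction approach looks like this in sketch form (using Python’s sqlite3 as a stand-in for any connection with autocommit off; the table and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table employees (name text, salary int)")
conn.execute("insert into employees values ('Derek', 50000)")
conn.commit()

# Run the DML inside an open (implicit) transaction...
conn.execute("update employees set salary = 100000000 where name = 'Derek'")
# ...inspect the uncommitted results on the same connection...
rows = conn.execute("select salary from employees").fetchall()
# ...and roll back if they look wrong.
conn.rollback()
after = conn.execute("select salary from employees").fetchall()
# after: [(50000,)] -- the update was undone
```

It works, but note the drawbacks above: the open transaction holds locks the whole time you’re inspecting, and one stray commit (or an autocommitting tool) defeats it entirely.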

A better approach is to create exports of exactly the data you will be changing or deleting.  That is, run an export command whose where clause is exactly the same as that in your update or delete statement.  That way, you can restore (via import) just the data you changed.  For DB2, I use IXF exports since they’re the richest.  So, for example, before you:

update company.employees set salary=100000000 where name='Derek'

You should:

export to empderek.ixf of ixf select * from company.employees where name='Derek'

If you realized maybe you shouldn’t have done that, you can put it back with:

import from empderek.ixf of ixf insert_update into company.employees

This can be used to recover from deletes as well (hence insert_update).  For example, before you:

delete from wikipedia.articles where title='Malamanteau'

You should:

export to articles.ixf of ixf select * from wikipedia.articles where title='Malamanteau'

MySQL has fewer export/import options, but mysqldump can definitely help.  For example, export with:

mysqldump wikipedia articles --compact --where="title='Malamanteau'" > articles.sql

And, after the above delete, you can put it back with:

mysql wikipedia < articles.sql

For ease of reference, I often give these “backup” IXF and SQL files unique, descriptive names. It’s helpful to include the date and some tag that identifies what the data is.  In the above example, I could have used articles-Malamanteau-12-May-2010.ixf, although Malamanteau is hard to spell and probably isn’t even a real word.
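A trivial helper for generating those dated, descriptive names (hypothetical; it just encodes my naming habit):

```python
from datetime import date

def backup_name(table, tag, when=None):
    """Build a dated, descriptive name for an export file,
    e.g. articles-Malamanteau-12-May-2010.ixf"""
    when = when or date.today()
    return f"{table}-{tag}-{when:%d-%b-%Y}.ixf"

name = backup_name("articles", "Malamanteau", date(2010, 5, 12))
# name: "articles-Malamanteau-12-May-2010.ixf"
```

Anything that makes the right file obvious six months later, when you’ve long forgotten why you exported it, earns its keep.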

Perhaps this simple technique of “export first, DML second” may rescue you from the occasional “oops moment.”

Sorting it Out

In prior posts, I described some of the benefits of DB2’s Self-Tuning Memory Manager (STMM), along with a few caveats.  One caution was around automated sort heaps, since sort requirements can often be unpredictable and confusing.

High-volume OLTP systems can, at times, experience heavy loads of small sorts or sort-like activity (hash joins, grpby’s, etc.).  It’s not uncommon for many of these to overflow (spill), or for a sub-optimal access plan to arise, even when more than enough memory is available for sorts.  Like the proverbial bad check writer, you’re left thinking, “I can’t be overflowing, I still have memory left!”

Sort overflow symptoms include high I/O and CPU utilization, poor response times, high sort overflow snapshot counts, long sort times, and large numbers of rows written on read-only SQLs like SELECTs.  Sort heap tuning is often one of the “last frontiers” in performance work (after fixing low-hanging fruit like SQLs, indexes, and bufferpools), and can be a stubborn one.

From my experience, most sort overflow problems in DB2 9.x fall into these categories:

  1. SQL.  In high-volume OLTP systems, it’s important to avoid sorting as much as possible.  Developers usually recognize that order by clauses require sorts, but can miss more subtle uses of the sort heap.  Count, distinct, and group by clauses also often require sort heap space, as do hash joins.  Statement snapshots and explains will reveal these.  The first step should be to try to rewrite problem SQLs to eliminate the sorts.  But wherever that isn’t possible, try step 2:
  2. DDL.  Adding or changing indexes and clustering may eliminate sorts.  If an existing index is used but sorting still occurs, check to see if adding index or include columns will help, and verify that allow reverse scans is specified when needed.  Again, the only way to know for sure is to measure by snapshots and explains.  In some cases, MDCs can help, but check it first by “doing the math” or running db2advis.  Sometimes the problem is as simple as outdated statistics (among other things, outdated statistics can cause DB2 to request too small a sort heap), so make sure your runstats is occurring as frequently as needed.
  3. Configuration.  And here we come to the STMM benefits and caveats.  You should “go automatic” with sort heap sizes, but be prepared to monitor and adjust as needed.

STMM only tunes shared sort memory, so to enable self-tuning sort memory, you must:

  • Set the instance (DBM)-level sheapthres to zero (0).  You can verify this with: db2 get dbm cfg | grep -i sheap
  • Set the database-level sheapthres_shr (total amount of sort memory) and sortheap (memory available for each sort) to automatic.  You can verify this with: db2 get db cfg for <database> | grep -i sort

In some cases (such as when you’re experimenting and don’t want to wait on STMM’s tuning cycles), you may want to set an initial value for sortheap.  To do this, specify it alongside the automatic parameter, like so: db2 update db cfg using sortheap <size> automatic immediate.  But the initial value matters little after STMM “warms up” and then saves its settings.

When monitoring a database with sort overflow problems, keep an eye on the percentage of overflows (under 3% is a good target) and on the current sortheap and sheapthres_shr sizes.  You can view these with the snapshot monitors (db2 get snapshot for all on <database> | grep -i sort) and database configuration (db2 get db cfg for <database> show detail | grep -i sort).  But you may find, as I often have, that frequent, small overflows are occurring even when the sort memory areas have stretched to more than enough space.
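That overflow percentage is just the ratio of two snapshot counters; in sketch form:

```python
def sort_overflow_pct(total_sorts, sort_overflows):
    """Percentage of sorts that spilled to tempspace;
    under 3% is a good target for OLTP systems."""
    if total_sorts == 0:
        return 0.0
    return 100.0 * sort_overflows / total_sorts

pct = sort_overflow_pct(total_sorts=12500, sort_overflows=300)
# pct: 2.4 -- within the 3% target
```

As with hit ratios, compute it over deltas between snapshots rather than lifetime totals, or a long-running instance will mask recent problems.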

Why this happens is a mystery to me.  A parallel symptom I often see is that the sort heap size reported in explain (db2exfmt) outputs is often far smaller than the current sortheap size.  At this point, the only choices are to leave sort heaps at automatic and accept some overflows, or abandon automatic and set a fixed size.  When setting large fixed sizes, remember that an overflowed sort writes the entire sort heap to tempspace.  So, a very large sort heap may coax the optimizer into relying more on sorts, only to have some of these very large sorts spill everything.

IBM is considering allowing ceiling and/or floor values for certain automatic parameters.  I do hope this gets added in a future DB2, and that sort heaps are among the first that can be configured this way.  Until then, DBAs will be left to sort this out on their own.

Yes You May

I’ve whined a bit lately about having to jump through syntactic hoops to get past new security restrictions.  Yet one of today’s DB2 barriers was a functional change, and not security-related.

I have this long-standing habit of using IXF export and import to quickly copy tables and shuffle data around.  So much so that commands like the following just flow from the subconscious:

db2 import from myfile.ixf of ixf create into mytable

Trouble is, create (and its variants) have been deprecated since DB2 9.5, so this command fails with an SQL3311 error in my DB2 9.7 environments.  The help text for that error (db2 ? sql3311) provides the work-around: add the forcecreate modifier.  That is:

db2 import from myfile.ixf of ixf modified by forcecreate create into mytable

I understand the reasoning for the change: new features like XML, MDC, and table partitioning have outgrown IXF.  But breaking compatibility of frequently-used commands is just plain cruel punishment toward old guys like me.  Yet since “modified by forcecreate” is the new “please”, I’m sure I’ll eventually learn to say it the first time.

The Friday Fragment

It’s Friday, and time again for a new Fragment: my weekly programming-related puzzle.

This Week’s Fragment

This week’s fragment is programming-related (by a stretch), and is borrowed from a recent Car Talk episode.  A listener there wrote how he didn’t get a programming job partly because he failed to solve the following interview puzzle within the allotted five minutes:

You have a four ounce glass, a nine ounce glass and an unlimited supply of water, which can be poured and dumped as much as needed.  Can you measure exactly six ounces of water in as few steps as possible?

If you want to “play along”, post the solution as a comment or send it via email.   To avoid “spoilers”, simply don’t expand comments for this post.  It’s your chance to demonstrate your qualifications for a job as a programmer.  Or at least a soda jerk.

Last Week’s Fragment – Solution

Last week was another SQL challenge, where IRS programmer Owen Moore had trouble joining his Employee table to the EmployeeAddress table.  When he ran his SQL, he found that it dropped some of the employees: those who did not have addresses on file.  He doesn’t mind having multiple rows for one employee (whenever there are multiple addresses), but the IRS never wants to overlook a taxpayer.  His SQL was:

select companyid, e.employeeid, hiredate, address
from employee e, employeeaddress ea
where e.employeeid = ea.employeeid
order by companyid, hiredate

Owen’s bane was that, by default, SQL joins are inner joins, meaning that results are included only if there are matching rows in both tables.  Owen needs an outer join, so that it includes all the rows from the Employee table, even if there aren’t matching rows in the EmployeeAddress table (the EmployeeAddress column values will be null where the rows are missing).  Outer joins can be left joins or right joins, depending on the order you list the tables.  Owen reads and thinks left-to-right, so he’ll list the Employee table first and use a left join, like so:

select companyid, e.employeeid, hiredate, address
from employee e
left join employeeaddress ea
on e.employeeid = ea.employeeid

Congratulations to Spencer for quickly spotting the problem and proposing the solution.  For that, he gets an extra hour to file his quarterly estimated tax payments.

If you’ve been following along, you know this was part of a larger SQL for Owen’s “pink slip pick list” report which shows IDs and addresses for all but the 49 most senior employees in each company.  The full SQL with the left join is now:

select companyid, e.employeeid, hiredate, address
from employee e
left join employeeaddress ea
on e.employeeid = ea.employeeid
where not exists
 (select *
 from employee ei
 where ei.companyid = e.companyid
 and ei.employeeid = e.employeeid
 and 49 >
    (select count(*) from employee eii
    where eii.companyid = ei.companyid and
    eii.hiredate < ei.hiredate))

Simon Says

There was a bit more dialog today about impersonating the DB2 instance owner.  It’s a quick way to get around controls that newer versions of DB2 and tighter Windows and network security have brought us.  The extra step is annoying, but trying to convince the system you don’t need it is often worse.

Impersonation and elevation have become the “new normal” these days.  I’ve grown so accustomed to opening “run as administrator” shells in UAC Windows (7/Vista/2008), typing runas commands in XP, and using sudo in Ubuntu that these have become second nature.  And that level of user acceptance usually translates into approval to expand the practice, rather than a mandate to remove the inconvenience.  Enhancing security usually includes putting up new barriers.

A former co-worker has often said that what we really need is software that determines whether a user’s intentions are honorable.  Perhaps then security would become seamless.  But it’s more likely that its implementation would also test our manners and fading patience.

The Friday Fragment

It’s Friday, and time again for a new Fragment: my weekly programming-related puzzle.

This Week’s Fragment

Owen Moore at the IRS needs our help again.  He wants to add mailing addresses to his “pink slip pick list” report.  Easy enough: he just added in the EmployeeAddress table to get these fields.  But when he ran it, he found that it dropped some of the employees.  He doesn’t mind having multiple rows for one employee (whenever there are multiple addresses), but the IRS never wants to overlook a taxpayer.

Upon investigation, he discovered that the dropped ones didn’t have addresses on file. Here is his SQL:

select companyid, e.employeeid, hiredate, address
from employee e, employeeaddress ea
where e.employeeid = ea.employeeid
and ...
order by companyid, hiredate

Can you help Owen correct his mistake?  The “…” is the rest of his where clause (see solution below).  Since it’s not important to the solution, it’s omitted here.

If you want to “play along”, post the solution as a comment or send it via email.  You can use the attached file for sample data.  To avoid “spoilers”, simply don’t expand comments for this post.  Owen promises to add you to the “do not audit” list if you can help.

Last Week’s Fragment – Solution

Last week’s fragment was a missing piece of a SQL statement.  IRS programmer Owen Moore needed a report of all but the 49 most senior employees of each company.  That is, fill in the dot, dot, dots (ellipsis) here:

select  companyid, employeeid, hiredate
from employee e
where ...
order by companyid, hiredate

Fortunately, Owen had a couple strokes of luck.  First, during his lunch break, the fast food drive-through attendant asked, “would you like a correlated subquery with that?”  Not knowing what such a thing was, he said “yes”, and it turned out to be just the thing he needed.  Second, upon arriving back at work, he was greeted with a couple of emails (including one from Spencer) suggesting a SQL like the following:

select companyid, employeeid, hiredate
from employee e
where 49 >
  (select count(*) from employee ei
   where ei.companyid = e.companyid and
   ei.hiredate > e.hiredate)
order by companyid, hiredate

That got him in the ballpark, but, alas, there was a gap: it revealed the 49 newest employees (a good thing), but what he needed was everyone except the 49 most senior.  Well, Owen pulled up Google Translate and found that “all but” in English loosely translates to “not exists” in SQL.  So he wrapped an outer select around it, flipped the greater-than comparison (to count older employees instead of newer ones), and came up with the following:

select companyid, employeeid, hiredate
from employee e
where not exists
  (select *
   from employee ei
   where ei.companyid = e.companyid
   and ei.employeeid = e.employeeid
   and 49 >
     (select count(*) from employee eii
      where eii.companyid = ei.companyid and
      eii.hiredate < ei.hiredate))
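For anyone who finds nested correlated subqueries hard to read, here’s a hypothetical Python equivalent of the same logic: keep an employee only if at least 49 colleagues in the same company were hired earlier.

```python
def pink_slip_list(employees, keep=49):
    """employees: list of (companyid, employeeid, hiredate) tuples.
    Returns all but the `keep` most senior employees in each company."""
    result = []
    for company, emp_id, hired in employees:
        # Correlated subquery equivalent: count same-company earlier hires
        hired_earlier = sum(
            1 for c, _, h in employees if c == company and h < hired
        )
        # NOT among the `keep` most senior -> on the pink slip list
        if hired_earlier >= keep:
            result.append((company, emp_id, hired))
    return result

# One company with 60 employees hired on successive days:
staff = [(1, i, i) for i in range(1, 61)]
victims = pink_slip_list(staff)
# victims: employees 50 through 60 (the 11 least senior)
```

Like Owen’s SQL, this is O(n²) and ignores hiredate ties; it’s for understanding the logic, not for running against the IRS’s tables.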

By the way, the “i” suffixes on table aliases mean “inner” (so “ei” is “employee inner” and “eii” is “employee inner-inner”), just a convention.

Owen has a “make it right, then make it fast” mentality, so he’ll consider tuning/rewriting later if performance is bad.  But if you’re interested in tuning it, he attached a script to create the table, load some sample data, and run the SQL.  This script also has SQLs and data to work on this week’s fragment.

Guilt By Association

Anyone who has done a little data mining knows that simple association rules (a.k.a., market basket analysis) and decision trees can reveal some of the most strange and wondrous things.  Often the results are intuitive, which builds confidence in the techniques.  But then let it run loose and you’ll usually find some (strongly correlated) wild surprises.

Folks who fold their underwear tend to make their bed daily.  I’ll buy that.  But people who like The Count on Sesame Street tend to support legalizing marijuana – are you kidding?

Those are some of the conclusions reached at one recommendation site.  It will happily make recommendations for you on all your life decisions, big or small.  There’s no real wisdom here – it just collects data and mines it to build decision trees.  So, as with most data mining, the results are based on pragmatics and association, and they never answer the question, “why?”  Yet “just because” is usually good enough for things like marketing, politics, and all your important life decisions.

In school they made me work through many of these data mining algorithms by hand: classifiers, associations, clusters, and nets using Apriori, OneR, Bayes, PRISM, k-means, and the like.  When it got too rote, we could use tools like Weka and DMX SQL extensions.  It was, of course, somewhat time-consuming and pedantic, but it made me realize that most of these “complex data mining techniques” that seem to mystify folks are actually quite simple.  The real value is in the data itself, and having it stored in such a way that it can be easily sliced and diced into countless permutations.  (NoSQL fans: that typically means a relational database.  Oh the horror.)

Yet simple associations can be valuable and entertaining.  I’ve run enough DMX and SQLs against large database tables (housing contact management, payment, and contribution data) to find some surprising ways to “predict” things like risk and likely contributors.  But since “past performance is no guarantee of future results”, these outputs must be used carefully.  It’s one thing to use them to lay out products in a store, quite another to deny credit or insurance coverage.
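Pairwise association counts, for instance, fall straight out of a self-join.  Here’s a sketch against a hypothetical basket(basketid, item) table (the names and thresholds are illustrative, not from any real system):

```sql
-- "People who bought A also tended to buy B": co-occurrence counts per item pair
select a.item as antecedent, b.item as consequent,
       count(*) as support_count
from basket a
join basket b
  on a.basketid = b.basketid
 and a.item < b.item           -- count each unordered pair once
group by a.item, b.item
having count(*) >= 10          -- minimum support threshold
order by support_count desc
```

Dividing support_count by each antecedent item’s own basket count gives a crude confidence figure; the real mining tools mostly just automate that arithmetic at scale.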

American Express, Visa, and others have caught some flack lately for their overuse of these results.  “OK, so I bought something from and you’ve found that other cardholders who shop there have trouble paying their bills.  But that doesn’t mean I won’t pay my bill!  Don’t associate me with those guys!”  Well, associating is what data mining does best.  And, like actuarial science, it’s surprisingly accurate: numbers don’t lie.  But companies must acknowledge and accommodate exceptions to the rules.

Meanwhile, data mining will continue to turn the wheels of business, so get used to it.  Just don’t let anyone know that you like The Count.

Isolation Level Levels

I got a question today about whether CS (cursor stability) really was the default isolation level for dynamic SQLs from a DB2 CLI application.  The short answer is, “yes”, but that can be overridden at many different levels.  So many that I thought of Shrek: onions have layers, ogres have layers, DB2 has layers.  Here are just a few:

  • In db2cli.ini, using the TXNISOLATION keyword.  It does require work to get it to apply to your connections.
  • At the connection level, by sending “set current isolation”, or setting the SQL_TXN_ISOLATION connection attribute (SetConnectAttr API)
  • For static SQLs, in the access plan (the bind).  For the DB2 CLI packages, the package naming convention includes the expected isolation level, and you can verify with: select pkgname, isolation from syscat.packages where pkgname like 'SYS%'
  • At the statement level, using the with keyword.
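For instance, the connection- and statement-level overrides look like this (a quick sketch, reusing the syscat.packages query from above as the guinea pig):

```sql
-- Connection level: subsequent statements default to read stability
set current isolation = RS;

-- Statement level: this one query reads uncommitted data
select pkgname, isolation from syscat.packages with UR;
```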

There are further settings to tweak the semantics of some isolation levels.  For example, DB2 9.7 offers things like the cur_commit db config parameter and db2_evaluncommitted registry variable to change CS behaviors.

So with all these knobs and overrides, it’d be nice if an application could query its actual effective isolation level.  Running “values(current isolation)” only returns a value if “set current isolation” is used.  Also, it would be nice if dynamic SQL snapshots included the isolation level, or if db2exfmt consistently showed the correct level.  Lacking this, you have to combine these outputs with db2pd results.  That’s a lot of layers to peel back, which could make even an ogre cry.

That No HLL Thing

Probably the worst thing about high level languages (HLL) is that they are so good in what they are doing. Good enough to conquer the entire market on programming and hold it for decades.

Wait! That is a bad thing? How?

It is a bad thing because high level languages are appropriate for a wide range of tasks, but not for every task.  Yet that is exactly what has caused them to be used in contexts where they are not appropriate.  In the last month alone, my strong recommendation for two different clients was that they switch to assembly language, because it would greatly simplify the work that they need to do.

That met with some (justified) resistance, predictably. People think that high level languages are the way to write software. I decided to write a series of blog posts about the topic, trying to explain why you might want to work with a low level language.

High level languages have the following properties:

* Standardization
* Simplification
* Optimization
* Ease of trainification

Just about any of the No HLL approaches gives up on some of those properties; usually, it gives up on all of them.  But think about how useful a HLL is, how flexible it can be.  Why give it up?

Indeed, the most common reason to want to move from a high level language is running into the HLL limitations. In short, high level languages don’t scale. Actually, let me phrase that a little more strongly, high level languages cannot be made to scale.

The problem is inherent in the basic requirements of a high level language: it must be compiled, to handle things like converting text to machine language, etc.  The trouble comes when you try to scale a high level language system across a set of coding styles.  At that point, you run head on into the Abstraction Penalty, which states that if performance is your absolute requirement, you need to give up on either optimization or ease of coding.

In most high scaling environments, it is not feasible to give up on either option, so high level languages are out.  That leaves you with the No HLL options.  I am going to dedicate a post to each of the following; let me know if I missed something:

* Gas
* Typing machine code into a hex editor
* Typing machine code into notepad or vi

Other languages, namely PL/S and inline assembly, exist.  PL/S suffers from the same problem regarding the Abstraction Penalty as high level languages, and inline assembly is basically a special case of No HLL.

But seriously now… this post is, of course, an April Fools Day joke: a spoof on (and transliteration of) That No SQL Thing at  (You can follow the link to the post, or just Topeka it).  Oren and other NoSQL advocates do make some good points and offer very good advice.  But be careful out there, especially when it comes to such architecture decisions, so that you pick the best tool for the job: SQL or NoSQL.  And don’t get fooled again.

Don’t Get Fooled Again

Surprisingly, I caught a little flack for this comment I made several weeks ago:

“If all you need are LOBs, you probably don’t need a relational database.  And if you need a relational database, you probably shouldn’t use LOBs.”

The surprising part was the question raised: “why would you ever use a relational database these days?”  The argument rested on some familiar and simple high-level reasons (trends, having to write SQL, taking the car apart to put it in the garage, etc.), and came from a NoSQL fan.

This objection seemed anachronistic to me, since we just finished migrating one of our large products from a non-relational model to using DB2 for all data storage.  Why would we do such a thing?  Lots of reasons, but it only took one: paying customers wanted it.  Further, data mining has become so popular and successful lately (on the wings of relational databases) that it’s hard to imagine tossing aside that shiny new benefit.

The NoSQL revolution has taken on a reactionary bent (just consider the name), which is odd for a new movement.  Chesterton reminds us that “the business of progressives is to go on making mistakes, and the business of the conservatives is to prevent the mistakes from being corrected.”  NoSQL advertises itself like a progressive movement, but actually falls to the right of conservative.  BigTable, Cassandra, and HBase are no newer concepts than the ancient things grandpa tells war stories about: flat files, hash files, partition stores, CAM, BDAM, ISAM, etc.  So blindly applying NoSQL is making the new mistake of rejecting well-established corrections to old mistakes.

When making architectural decisions like whether to use a relational database or not, I often start with a blank 8 1/2 x 11 sheet of paper, turned landscape.  At the top I write the relevant business requirements (real, not imagined, ones).  I then do a simple left-and-right table of pros and cons.  At the bottom I write areas where architectural prototypes are needed, if any.  This, of course, helps me weigh the options and make a decision, but it also means I “own” the cons.  If I decide to use an RDBMS, I accept the costs.  If I decide not to use an RDBMS, I willingly reject the benefits they offer with eyes wide open.

Yet the war of words for and against NoSQL rages on, often without fully or correctly acknowledging the cons, nor the simple fact that you don’t do both at the same time.  Many problems are best solved with a relational database and many are best solved without one.

In the early 90s, I would joke that there existed a parallel universe where client/server had fallen out of fashion and the new, cool buzz was mainframes and 3270 terminals.  Funny that we soon joined that parallel universe when web apps and cloud computing became our trend.  Andrew Tanenbaum notes that, while recapitulation doesn’t really happen in biology, computer history does rightfully repeat itself.  Old solutions should be applied to new technologies; at the very least it keeps consultants gainfully employed.  Let the pendulum swing, but truly grok all the pros and cons as history repeats itself.

Unlike Ted Dziuba, I don’t want NoSQL to die.  By definition, it can’t, and that’s arguing a strawman anyway.  I just want it to be used where it best fits.  And the same goes for relational databases.  Repeat after me: “no golden hammers; pick the best tool for the job.”  Just don’t be a lemming.  And don’t get fooled again.

Impersonating Better Security

I got a question today from a co-worker who was painted into a corner trying to access a database he had restored on his Windows development machine.  He stumbled over DB2 9.7’s new security twists, such as not having dbadm authority by default.  I rattled off my familiar quick fix:

db2 connect to <dbname>
db2 grant dbadm on database to <userid>

However, his default Windows user ID didn’t have secadm or sysadm authority, so that failed with an error.   So, I had him impersonate the one that did:

runas /user:<adminuser> db2cmd

Repeating the grant command from this new command shell did the trick.  It could have also been done with:

db2 connect to <dbname> user <adminuser> using <adminpassword>

And so it goes.  No matter how refined security policies become, they can usually be circumvented with a little impersonation.  For example, think of how many times we quickly and mindlessly sudo under Ubuntu.  In this case, impersonation was a fast route to giving a developer the access he should have had by default anyway.  Today’s technology cannot solve the impersonation problem, but sometimes we consider that more a feature than a bug.

Can’t Get There From Here

I got a question today from a friend who was trying “explain stmtcache all” on DB2 LUW and wondered why it was unrecognized.  He had stumbled across this command while looking up the explain syntax in the DB2 online documentation and was lured in by its promises.  What he didn’t notice was that this was in the DB2 for z/OS documentation, and that this command isn’t available in LUW.

Don’t get me started.

In the olden days, DB2 on the server and DB2 on the mainframe were two very different products, and no one expected much compatibility.  It took years after the OS/2 EE Database Manager rewrite for DB2 LUW to catch up to DB2 for z/OS’ level of sophistication.  But DB2 LUW has come a very long way very quickly in recent years, and has adopted most of the important features previously available only on the mainframe.  I’ve gotten pretty used to this.

But I’ll often hear of all this cool stuff in new versions of DB2 for z/OS, like in the version 10 beta that rolls out tomorrow.  I expect new features to eventually appear in a subsequent LUW version, but sometimes they never do.  Explain stmtcache all is one such example.  In this case, architectural differences may mean that it will never come down from on high, but it would be nice to have in LUW.  Yes, I can loop through snap_stmts or top_dynamic_sql and run explain all for each of the stmt_texts (I have scripts that do this), but this is slow.  And changing the current explain mode is another option, but it’s rarely available and rarely efficient.  It’s a dynamic SQL world now, and the kinds of trends you can spot by mining the snapshot tables (and presumably the dsn_statement_cache_table) are just too useful.  So give us the best tools for it, even if we have to borrow them from the big iron.
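For the curious, the core of one such loop is little more than string assembly in SQL.  A hedged sketch against the sysibmadm administrative view (statement texts containing quotes or parameter markers need extra care before they’ll explain cleanly):

```sql
-- Generate an explain statement for each of the hottest dynamic SQLs
select 'explain all for ' || stmt_text
from sysibmadm.top_dynamic_sql
order by num_executions desc
fetch first 20 rows only
```

Feed the resulting statements back through the CLP (with explain tables already created) and you have a poor man’s explain stmtcache all.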

Another related example is the stated plan to add access plan stability features for dynamic SQL (I don’t need it for static SQL, so I can skip the current PLANMGMT capabilities).  But, alas, this is currently only being planned for… where?  DB2 for z/OS, of course.

admin_cmd can

The question came up again today: “how do I run a DB2 export through my ODBC connection?”  Before recent versions of DB2, the answer was, “you can’t.”  If you tried just running the command, DB2 would give you the classic SQL0104 “duh” error message: “…an unexpected token was found…”

That’s because administrative commands and utilities require special handling.  And before DB2 8.2, they could only be run through a command line processor session.  Programs could not use their normal database connections for things such as export, runstats, reorg, or “update db cfg.”  The alternatives were often inelegant, such as calling cumbersome utility functions like db2Export/sqluexpr or shelling out to a script.

Fortunately, the new admin_cmd stored procedure lets you run several of these commands through a normal CLI or ODBC connection, much like any SQL statement.  You just pass the command as a parameter; for example:

call admin_cmd('export to sales.ixf of ixf select * from sales')

Even if you’re not writing code, admin_cmd is useful for doing maintenance and data movement directly from your favorite tools.  Since so many programs and tools use ODBC connections, it’s a convenient and portable way of handling your DB2 administrivia.
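Beyond export, admin_cmd covers a good chunk of the everyday maintenance vocabulary.  A couple more examples (the table and parameter names here are illustrative, and the supported command list varies by DB2 version, so check the admin_cmd documentation for your release):

```sql
-- Refresh statistics over a normal connection
call admin_cmd('runstats on table myschema.sales with distribution and detailed indexes all');

-- Change a database configuration parameter
call admin_cmd('update db cfg using logfilsiz 8192');
```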