Tag Archives: DB2

You Again

Remember back in the JDK 1.5 days when some genius changed BigDecimal.toString and broke half the universe? I’ve felt the impacts in DB2, Oracle, VoiceObjects, and other places. I’m over that now (and all the cool kids have updated or switched to toPlainString), but this thing just keeps coming back to haunt me.
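For those who missed the fun: starting with JDK 1.5, BigDecimal.toString switched to scientific notation for small numbers, while the new toPlainString kept the old behavior:

```java
import java.math.BigDecimal;

public class BigDecimalDemo {

    // Since JDK 1.5, toString uses scientific notation when the
    // adjusted exponent is less than -6
    static String sci(String s)   { return new BigDecimal(s).toString(); }

    // toPlainString (new in 1.5) always renders the full decimal form
    static String plain(String s) { return new BigDecimal(s).toPlainString(); }

    public static void main(String[] args) {
        System.out.println(sci("0.0000000001"));   // 1E-10
        System.out.println(plain("0.0000000001")); // 0.0000000001
    }
}
```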

Like this week, when old DB2 JDBC (JCC) drivers kept showing up on some platforms, prompting these intermittent errors:

[ibm][db2][jcc][converters] Overflow occurred during numeric data type conversion of …

This error can sneak up on ya because it often takes just the right numbers to kick in. Like the lottery, and sometimes with the same odds.

Fortunately, the fix is easy: just update JDBC drivers to something from the modern era.

Move it, Move it

Today I had to move a few dozen DB2 LUW databases from one Windows system to another. I’ve been down this road before, and I always script the process, but this time I initially left out tablespaces. Bummer, the downside of writing db2look commands from memory.

So that I don’t get fooled again, I’m posting my simple scripts which were inspired by this venerable article.

Export from source machine – Use list db directory to generate a master script that calls this expdb.cmd for each database:

mkdir %1
cd %1
rem Export all table data for database %1 (also writes db2move.lst)
db2move %1 export > export-%1-db2move-output.txt 2>&1
rem Extract DDL: -e objects, -a all users, -l tablespaces/bufferpools
db2look -d %1 -e -a -l -o db2look-%1.sql > export-%1-db2look-output.txt 2>&1
cd ..

Import into target machine – Use an equivalent master script to call this impdb.cmd:

cd %1
rem Create the database, replay the DDL, then load the data
db2 create db %1 using codeset ibm-1252 territory us > import-%1-db2-create-output.txt 2>&1
db2 -tvf db2look-%1.sql > import-%1-db2-output.txt 2>&1
db2move %1 load > import-%1-db2move-output.txt 2>&1
cd ..

Not So Fast

With the advent of DB2 10, IBM dropped its much-maligned Control Center tools in favor of a free version of IBM Data Studio.  I started using Data Studio back in April and wrote about some positive first impressions here.  But I shouldn’t have blogged so soon.

For occasional use with certain tasks on fast hardware, Data Studio is fine.  But if you want to get a lot done quickly, it can really get in the way.  It’s just too bloated, too slow, too cumbersome, too slow, too awkward, and too slow.  Did I mention that it’s slow?

So I find myself doing even more now with the command line.  When I need a GUI tool, I’ve gone back to using a tool I wrote myself, TOAD, and free tools like DbVisualizer and Eclipse DTP.

Data Studio has been around for awhile, and IBM is clearly committed to it.  But there remains a lot of room for improvement.  Now that Data Studio is the built-in default, I hope IBM gets on that.  And quickly.

Broken JARs

Since I work on a few apps that use the DB2 JDBC Type 4 drivers, work-arounds for common issues have become familiar.  But when the corrupted DB2 JAR issue resurfaced today, I realized that I had not posted about it.  Now is as good a time as any to make up for that neglect…

The problem is that for DB2 LUW versions between 9 and 9.5, IBM published db2jcc.jar files with corrupt classes.  In some Java EE environments, the container doesn’t fully scan JARs, so it doesn’t really matter if an unused class is wonky.  But many containers do scan, causing exceptions like the following:

SEVERE: Unable to process Jar entry [COM/ibm/db2os390/sqlj/custom/DB2SQLJCustomizer.class] from Jar [jar:file:…lib/db2jcc.jar!/] for annotations
org.apache.tomcat.util.bcel.classfile.ClassFormatException: null is not a Java .class file
at org.apache.tomcat.util.bcel.classfile.ClassParser.readID(ClassParser.java:238)
at org.apache.tomcat.util.bcel.classfile.ClassParser.parse(ClassParser.java:114)

That particular exception is from Tomcat 7.

IBM acknowledges the problem (see http://www-01.ibm.com/support/docview.wss?uid=swg1LI72814) and offers a work-around: edit the JAR file to remove the offending classes.

Edit the JAR?  Really?!  A quicker, surer solution is to just grab a good JAR from http://www-01.ibm.com/support/docview.wss?uid=swg21363866.
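If swapping the JAR isn’t convenient either, another possible workaround is telling Tomcat 7 not to scan the driver JAR for annotations at all, via conf/catalina.properties (append to the existing jarsToSkip list; exact JAR names depend on your lib directory):

```properties
# Skip annotation scanning for the DB2 driver JARs (Tomcat 7 jar scanner)
tomcat.util.scan.DefaultJarScanner.jarsToSkip=db2jcc.jar,db2jcc4.jar,db2jcc_license_cu.jar
```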

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

I got a friendly reminder this morning that I’ve neglected my Friday Fixes postings of late.  I didn’t say they’d be every Friday, did I?  At any rate, here are some things that came up this week.

Tabbed Freedom v. Logoff Security

Tabbed browsing is a wonderful thing, but its features can become security concerns to corp-rat folks who mainly use their browsers for mission critical apps.  For example, with most browsers, closing a tab (but not the browser itself) does not clean up session cookies. Yet those security first guys would like to have a way to trigger a log off (kill the session) on tab close.

This is a common request, but there’s no straightforward solution. As much as I’d like browsers to have a “tab closed” event, there isn’t one.  The best we can do is hook the unload event which is fired, yes, when the tab is closed, but also anytime you leave the page: whether it’s navigating a link, submitting a form, or simply refreshing.  So the trick is to detect and allow acceptable unloads.  Following is some JavaScript I pieced together (into a common JSPF) based loosely on various recommendations on the web.

  var isOkToUnload = false;
  var logoffOnClose = '<c:out value="${settingsBean.logoffOnClose}" />';

  function allowUnload() {
     isOkToUnload = true;
  }

  function monitorClose() {
     window.onbeforeunload = function() {
        if (!isOkToUnload)
           return "If you leave this page, you will be logged off.";
     };
     window.onunload = function() {
        if (!isOkToUnload) {
           // Synchronous call so the logoff completes before the page goes away
           $.ajax({ async: false, type: "POST",
                    url: '<c:url value="/basiclogout"/>' });
        }
     };
  }

  $(document).ready(function() {
     if (logoffOnClose === 'Y')
        monitorClose();
     // Allow normal navigation and form submits without forcing logoff
     $("a").click(allowUnload);
     $("form").submit(allowUnload);
     // Add other events here as needed
  });

This triggers on refresh, but that’s often a good thing since the user could lose work; Gmail and Google Docs do the same thing when you’re editing a draft.  It’s a good idea to make this behavior configurable, since many folks prefer the freedom of tabbed browsing over the security of forcing logoff.

DBCP Has Timed Out

Right after mastering the linked list, it seems every programmer wants to build a database connection pool.  I’ve built a couple myself, but this proliferation gets in the way of having a single golden solution that folks could rally around and expect to be supported forever.

Such was the story behind Apache DBCP: it was created to unify JDBC connection pools.  Although it’s broadly used, it’s over-engineered, messy, and limited. So it, too, fell by the wayside of open source neglect.  And since nearly all servlet containers now provide built-in connection pools, there’s really no use for DBCP anymore.

Yet I found myself having to fix DisconnectException timeouts with an existing DBCP implementation, typically stemming from errors like:  A communication error has been detected… Location where the error was detected: Reply.fill().

After trying several recovery options, I found that DBCP’s validationQuery prevented these, at the cost of a little extra overhead.  Although validationQuery can be configured, I didn’t want additional setup steps that varied by server.  So I just added it to the code:

  import org.apache.commons.dbcp.BasicDataSource;

  BasicDataSource ds = new BasicDataSource();
  // ... other pool settings (driver, URL, credentials) ...
  // Validate connections on borrow so stale ones get replaced
  ds.setValidationQuery("select 1 from sysibm.sysdummy1");

In the next pass, I’ll yank out DBCP altogether and configure pools in WebSphere, WebLogic, and Tomcat 7.  But this gave me a quick fix to keep going on the same platform.
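For reference, a container-managed pool in Tomcat 7 is just a Resource entry in context.xml; a sketch with hypothetical names, host, and credentials:

```xml
<!-- Hypothetical names/host; the container pool replaces the in-app DBCP setup -->
<Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
          driverClassName="com.ibm.db2.jcc.DB2Driver"
          url="jdbc:db2://dbhost:50000/MYDB"
          username="dbuser" password="secret"
          maxActive="20" maxIdle="5"
          validationQuery="select 1 from sysibm.sysdummy1" />
```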

Aggregation Dictation

Weird: I got three questions about aggregating in SQL on the same day.  Two of them involved OLAP syntax that’s somewhat obscure, but quite useful.  So if you find yourself with complications from aggregation aggravation, try one of these:

  • Doing a group by and need rollup totals across multiple groups?  Try grouping sets, rollup, and cube.  I’ve written about these before; for example, see this post.
  • Need to limit the size of a result set and assign rankings to the results?  Fetch first X rows only works fine for the former, but not the latter.  So try the ranking and windowing functions, such as row_number, rank, dense_rank, and partition by.  For example, to find the three most senior employees in each department (allowing for ties), do this:

      SELECT * FROM
        (SELECT rank() OVER(PARTITION BY workdept ORDER BY hiredate ASC)
         AS rank, firstnme, lastname, workdept, hiredate
         FROM emp) emps
      WHERE rank <= 3
      ORDER BY workdept

    Co-worker Wayne did a clever row_number/partition by implementation and added a nice view to clarify its meaning.



Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

You know the old saying, “build a man a fire and he’s warm for a day; set a man on fire, and he’s warm for the rest of his life.”  Or something like that.  I’ve been asked about tool preferences and development approaches lately, so this week’s post focuses on tools and strategies.


JRebel

If you’re sick of JVM hot-swap error messages and having to redeploy for nearly every change (who isn’t?), run, do not walk, to ZeroTurnaround‘s site and get JRebel.  I gave up on an early trial last year, but picked it up again with the latest version a few weeks ago.  This thing is so essential, it should be part of the Eclipse base.

And while you’re in there, check out their Java EE Productivity Report.  Interesting.

Data Studio

My DB2 tool of choice depends on what I’m doing: designing, programming, tuning, administering, or monitoring.  There is no “one tool that rules them all,” but my favorites have included TOAD, Eclipse DTP, MyEclipse Database Tools, Spotlight, db2top, db2mon, some custom tools I wrote, and the plain old command line.

I never liked IBM’s standard GUI tools like Control Center and Command Editor; they’re just too slow and awkward.  With the advent of DB2 10, IBM is finally discontinuing Control Center, replacing it with Data Studio 3.1, the grown-up version of the Optim tools and old Eclipse plugins.

I recently switched from a combination of tools to primarily using Data Studio.  Having yet another Eclipse workspace open does tax memory a bit, but it’s worth it to get Data Studio’s feature richness.  Not only do I get the basics of navigation, SQL editors, table browsing and editing, I can do explains, tuning, and administration tasks quickly from the same tool.  Capability-wise, it’s like “TOAD meets DTP,” and it’s the closest thing yet to that “one DB2 tool.”

Standardized Configuration

For team development, I’m a fan of preloaded images and workspaces.  That is, create a standard workspace that other developers can just pick up, update from the VCS, and start developing.  It spares everyone from having to repeat setup steps, or debug configuration issues due to a missed setting somewhere.  Alongside this, everybody uses the same directory structures and naming conventions.  Yes, “convention over configuration.”

But with the flexibility of today’s IDEs, this has become a lost art in many shops.  Developers give in to the lure of customization and go their own ways.  But is that worth the resulting lost time and fat manual “setup documents?”

Cloud-based IDEs promise quick start-up and common workspaces, but you don’t have to move development environments to the cloud to get that.  Simply follow a common directory structure and build a ready-to-use Eclipse workspace for all team members to grab and go.

Programmer Lifestyle

I’ve been following Josh Primero’s blog as he challenges the typical programmer lifestyle.

Josh is taking it to extremes, but he does have a point: developers’ lives are often too hectic and too distracted.  This “do more with less” economy means multiple projects and responsibilities and the unending tyranny of the urgent.  Yet we need blocks of focused time to be productive, separated by meaningful breaks for recovery, reflection, and “strategerizing.”  It’s like fartlek training: those speed sprints are counterproductive without recovery paces in between.  Prior generations of programmers had “smoke breaks;” we need equivalent times away from the desk to walk away and reflect, and then come back with new ideas and approaches.

I’ll be following to see if these experiments yield working solutions, and if Josh can stay employed.  You may want to follow him as well.

Be > XSS

As far as I know, there’s no-one whose middle name is <script>transferFunds()</script>.  But does your web site know that?

It’s surprising how prevalent cross-site scripting (XSS) attacks are, even after a long history and established preventions.  Even large sites like Facebook and Twitter have been victimized, embarrassing them and their users.  The general solution approach is simple: validate your inputs and escape your outputs.  And open source libraries like ESAPI, StringEscapeUtils, and AntiSamy provide ready assistance.

But misses often aren’t due to systematic neglect, rather they’re caused by small defects and oversights.  All it takes is one missed input validation or one missed output-encode to create a hole.  99% secure isn’t good enough.

With that in mind, I coded a servlet filter to reject post parameters with certain “blacklist” characters like < and >.  “White list” input validation is better than a blacklist, but a filter is a last line of defense against places where server-side input validation may have been missed.  It’s a quick and simple solution if your site doesn’t have to accept these symbols.
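The heart of that filter is just a character check.  Here’s a minimal sketch of the check itself (class and method names are mine, not from the actual filter; the real thing wraps this in a javax.servlet.Filter that walks each POST parameter):

```java
public class BlacklistCheck {

    // Reject values containing markup characters.  A whitelist is stronger,
    // but this is a serviceable last line of defense when < and > are
    // never legitimate input.
    static boolean containsBlacklisted(String value) {
        return value != null
            && (value.indexOf('<') >= 0 || value.indexOf('>') >= 0);
    }

    public static void main(String[] args) {
        System.out.println(containsBlacklisted("<script>transferFunds()</script>")); // true
        System.out.println(containsBlacklisted("O'Brien"));                          // false
    }
}
```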

I’m hopeful that one day we’ll have a comprehensive open source framework that we can simply drop in to protect against most web site vulnerabilities without all the custom coding and configuration that existing frameworks require.  In the mean time, just say no to special characters you don’t really need.

Comments Off

On that note, I’ve turned off comments for this blog.  Nearly all real feedback comes via emails anyway, and I’m tired of the flood of spam comments that come during “comments open” intervals.  Most spam comments are just cross-links to boost page rank, but I also get some desperate hack attempts.  Either way, it’s time-consuming to reject them all, so I’m turning comments off completely.  To send feedback, please email me.

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

Hashes Make for Poor Traces

SQL and parameter tracing  is covered well by most ORMs like Hibernate, but you’re on your own when using JDBC directly.  It’s helpful to funnel JDBC access through one place with a common wrapper or utility class, but if all you get is the PreparedStatement, there are no public methods to get the SQL text and parameter array from it for tracing.  Unbewiebable.  PreparedStatement.toString sometimes includes useful information, but that depends entirely on the JDBC driver.  In my case, all I got was the class name and hash.  So I just went upstream a bit with my calls and tracing hooks so I could pass in the original SQL text and parameter array.
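Since the driver’s PreparedStatement won’t give the SQL back, the idea is to capture the SQL and parameters in a tiny holder at the point of call.  A sketch (names are hypothetical, not from my actual utility class):

```java
import java.util.Arrays;

// Carry the SQL text and parameter array alongside the statement
// so trace output can show what actually ran.
public class TracedSql {
    private final String sql;
    private final Object[] params;

    public TracedSql(String sql, Object... params) {
        this.sql = sql;
        this.params = params;
    }

    @Override
    public String toString() {
        // Unlike PreparedStatement.toString, this is driver-independent
        return sql + " " + Arrays.toString(params);
    }
}
```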

Samy Isn’t My Hero

You’d think URL rewriting went out with the Backstreet Boys, but unfortunately, JSESSIONID is alive and well, and frequenting many address bars.  This week, I’ve been focused on hack-proofing (counting down the OWASP Top Ten) so I should be a bit more sensitive to these things.  Yet I inadvertently gave away my session when emailing a link within the team management system I use (does it count as a stolen session if you give it away?)  Not only does this system include the direct sessionguid in the URL, it also doesn’t validate session references (like by IP address) and it uses long expirations.

At least this system invalidates the session on logout, so I’ve resumed my habit of manually logging out when done.  That, and attention to URL parameters is a burden we must all carry in this insecure web world.  Site owners, update your sites.  Until then, let’s be careful out there.

Virtual Time Zone

While directing offshore developers last year, thinking 9 1/2 or 10 1/2 hours ahead finally became natural.  Having an Additional Clock set in Windows 7 provided a quick backup.  Amazingly, some software still isn’t timezone-aware, causing problems such as missed updates.  I won’t name names or share details, but I opted for a simple solution: keep a VM set in IST and use it for this purpose.  Virtual servers are cheap these days, and it’s easy enough to keep one in the cloud, hovering over the timezone of choice.

SQL1092 Redux

Yes, bugs know no geographical boundaries.  A blog reader from Zoetermeer in the Netherlands dropped me another note this week, this time about some SQL1092 issues he encountered.  Full details are here; the quick fix was the DB2_GRP_LOOKUP change.

This DB2 issue comes up frequently enough that DB2 should either change the default setting or also adopt WebSphere MQ’s strategy of using a separate limited domain ID just for group enumeration.  Those IBM’ers should talk.

Those Mainframe Hippies

I traditionally think of mainframe DB2’ers as belt-and-suspenders codgers who check your grammar and grade your penmanship, while viewing DB2 LUW users as the freewheeling kids who sometimes don’t know better.  That changes some with each new version, as the two platforms become more cross-compatible.

So I was surprised this week to find that DB2 for z/OS was more flexible than DB2 LUW on timestamp formats.  Mainframe DB2 has long accepted spaces and colons as separators, but unlike DB2 LUW, you can mix and match ’em.  For example, DB2 z/OS 9 and higher will take any of the following:

'2012-03-16 19:20:00'



'2012-03-16 19.20.00'

DB2 LUW (9.7) will reject the last two with SQL0180 errors.

Knowing the limits is important when writing code and scripts that work across multiple DB2 platforms and versions.  The problem could get worse as DB2 LUW adds more Oracle compatibility, but as long as the DB2 kids and codgers stay in sync, we can enjoy some format flexibility there.

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

CSS – Space Matters

Guess you could say I was “lost in [a] space” wondering why my browser didn’t pick up my div’s CSS class.  Turns out, I wrote the selector as #myid .myclass (descendant selector) but really meant #myid.myclass (combined selector). Thank you, 2.5x reading glasses and Chris Coyier, for the quick save.

Point of No Return

JavaScript and jQuery iterator functions are awfully convenient, but it’s easy to overlook scope.  While refactoring some code today, I wrapped a function around some iterator code for reuse and broke it. Here’s a simplified example to demonstrate the problem:

     function myFunction() {
        $.each(['a', 'b', 'c'], function(idx, ea) {
           if (ea === 'b') {
              return ea;   // only returns from the inner function
           }
        });
     }
At a glance, you’d think myFunction returns b, but since the inner function’s result is never passed out, it returns undefined.  And jQuery’s each method just returns the collection, so adding a return in front doesn’t help.  jQuery does have some alternatives (like filter) that can help, but those aren’t as flexible as each.

The simple solution is to just carry the value to the outer scope, like so:

     function myFunction() {
        var result;
        $.each(['a', 'b', 'c'], function(idx, ea) {
           if (ea === 'b') {
              result = ea;
              return false;
           }
        });
        return result;
     }

Adding the “return false” stops the iteration at the first match.

Is this Thing Used?

My current project has this DB2 LUW fan coding against a DB2 for z/OS database.  While I give up some things, I also gain a lot.  For example, while working on a DAO, I had questions about a particular table’s indexes.  Fortunately, DB2 for z/OS has SYSINDEXSPACESTATS which, among other things, keeps track of the last time an index was used (even for dynamic SQL) to help determine its merits.  So I first checked to see if this was at least version 9:

select getvariable('SYSIBM.VERSION') from sysibm.sysdummy1;

… and then answered my questions with a simple query like the following:

select i.name, i.tbname, i.creator, i.uniquerule, i.colcount, i.clustering, s.lastused
from sysibm.sysindexes i, sysibm.sysindexspacestats s
where i.name = s.name
and i.creator = s.creator
and i.dbname = 'MYDB'
and i.tbname like 'MYPREF%'
order by s.lastused

Can’t wait for this to make its way to DB2 LUW.


VSS Check-Outs

Yes, I presently have to use Microsoft Visual SourceSafe (VSS).  I’m over it now.  When using this “lock first” VCS against a deep project hierarchy, I often need to see a quick list of my check-outs.  I can use View->Search->Status Search (with subprojects selected) from the VSS GUI, but that’s often too slow over a remote VPN connection.  So I cooked up a quick script to run over an ssh command line from a server that is “local” to the repository:

cd /d "C:\Program Files\Microsoft Visual SourceSafe"
set SSDIR=\\myserver\VSS\myvssshare
ss STATUS $/MyProject -R -Umyusername

Flexigrid Echo

Imitation isn’t always flattering; it’s often annoying.  Having two Flexigrids on one page, both with pagers enabled (usepager: true) yielded some weird mocking behavior.  For example, paging forward on one grid caused both grids to show the same “Page X of Y” message.  After debugging, I found that similar bugs had been reported and fixed in the latest release.  Upgrading Flexigrid resolved that one.

Upgrade and/or Die

JDBC is one of those core, essential components where you want to see continued incremental improvement with no interface impacts: just keep dropping in the latest versions and expect it to work.  Yay, back compatibility!

JDBC 4.0 is relatively new to DB2 LUW (first available in version 9.5), but it and prerequisite Java 6 have certainly been around long enough for folks to start switching over to it.  Just don’t assume that everything in a deep open source stack will work with it.

In yesterday’s action, perfectly-good BIRT 2.5.1 reports failed (“data set column … does not exist” exception) in an environment with DB2’s JDBC 4.x drivers.  At first, I suspected the particular db2jcc4.jar level, since it was taken from DB2 9.7 Fixpack 3: that now-deprecated fixpack that does everything evil except steal your kidneys.  But since new versions of BIRT (like the 3.7.1 version I use) worked fine with that version of Ernie (ahem, JDBC 4.0) and old versions of BIRT worked fine with JDBC 3.0 (db2jcc.jar), that flagged the culprit.  Turns out, the report SQLs use column aliases (who doesn’t?) and apparently JDBC 4.0’s getColumnName “fixes” broke these.  Upgrading BIRT brought harmony back to the universe.

With “broken compatibility” issues like this, it’s tempting to procrastinate on open source upgrades.  That’s why Maven POMs quickly develop inertia: once you get it all working together, who has time to test new dependency version permutations?  Fortunately, the Eclipse project and others do a good job of bundling compatible toolsets for download.  That way, if something “doesn’t work on my machine”, it’s quick work to see how it plays with the latest and greatest stack.

Just Concentrate

In most cases (at least for OLTP systems), using SQL parameter markers instead of literal values improves performance, adds consistency, reduces CPU overhead, and makes the database easier to monitor.  SQL parameters let DB2 cache the access plan so that compilation only happens once and execution counts are combined for the same statement.

A transaction system that carelessly uses literals can often be improved by converting literals to markers wherever possible.  But since this recoding takes time, it would be nice to predict the performance impacts (hopefully benefits) of such a change without having to just code it all and then measure.

Fortunately, DB2 9.7 offers the new Statement Concentrator to do just that.  Enabling it at the database level is easy:

db2 update db cfg for DATABASE using stmt_conc literals

And you can see if it’s enabled with:

db2 get db cfg for DATABASE | find /i "stmt_conc"

I suggested this new feature to a colleague today, and did some experiments myself.  It was nice to see the statements combined in the Dynamic SQL Snapshots, and also see the (zero-execution) literal versions separated out.  This means I’ll have to update my snapshot analysis tools which combine similar SQLs.

But the nagging feeling remains that this might be too broad a change.  For some SQLs, those unique (literal-driven) access plans are necessary.  But it may be possible to reach a good balance by enabling the statement concentrator at just the right level and by using optimization profiles for those exceptions to the rule.  That’s something worth concentrating on.

DB2 in Cygwin

I often use Cygwin with ssh and putty to remote into various Windows servers. It’s a lot quicker than Windows Remote Desktop and, well, I’m a command line junkie.  But since db2cmd defaults to launching a new window/session, it doesn’t play nicely with ssh.  Sure, I could remotely attach or catalog the node and database, but that defeats the purpose of being quick and simple.

To work around this, I add the following alias to my .bashrc:

alias db2='db2cmd -i -w -c db2'

It’s imperfect (it works only for isolated commands or scripts), but it’s great for quick commands.

Not Listening

DB2 9.7 seems to have brought back an old problem: DB2 servers that are deaf to TCP/IP connections.  In today’s action, my JDBC connections on a new Windows DB2 9.7 configuration were hanging for several minutes and then eventually failing with a “connection refused” error.  Since such problems are usually due to configuring the wrong port, I verified it:

db2 get dbm cfg | find "SVC"

type %WINDIR%\system32\drivers\etc\services | find /i "db2"

I had specified the right port: the standard 50000 default for the first instance.  And all the standard DB2 tools connected to the database just fine, including CLP and CLI connections.  After checking all the wrong places, I finally looked to see if DB2 was actually listening on this port:

netstat -an | find "500"

telnet localhost 50000

It wasn’t: I got no relevant matches on the netstat and telnet gave the “connect failed” error.  I checked DB2COMM and found it didn’t include TCP/IP.  Weird.  So I quickly remedied that with:

db2set DB2COMM=tcpip & db2stop & db2start

And verified with:

db2set -all db2comm

I’m not sure why some recent new DB2 9.7 installs didn’t have TCP/IP connections enabled by default.  Usually, it’s the setup/db2setup default.  But it’s something I’ll watch for, with a quick solution next time.

More DB2 Snippets

To paraphrase David Byrne, you may find yourself behind the wheel of a large database in another part of the world asking, “how do I work this?”  Such was the case today with a customer database suffering a performance hiccup.  It had the symptoms of improper tablescans, but there was no time to enable snapshots, run explains, or do detailed analysis.  It needed a quick sanity check of statistics, indexes, and organization.  A few basic commands flushed out the problem: a required index was missing.

I’ve recorded various common DB2 Snippets here before, but had not yet included today’s common mantras.  So, I’ll extend my DB2 pastebin with these useful snippets…

Check for missing indexes

This will give you a quick index inventory to compare with your expectations.  Substitute for MYSCHEMA and MYNAMES.

select tabname, indname, colnames from syscat.indexes where tabschema = 'MYSCHEMA' and tabname like 'MYNAMES%' order by tabname

Check for recent runstats

See when statistics were updated for your tables, if at all.  Substitute for MYSCHEMA.

select '   ' || substr(rtrim(tabschema) || '.' || tabname, 1, 40), stats_time from syscat.tables where type = 'T' and tabschema = 'MYSCHEMA' order by stats_time

If statistics are out of date or missing, then runstats is the obvious remedy.  If you only have CLI/ODBC access (no CLP), you can call runstats via the admin_cmd proc; substitute for MY.TABLE and tweak the statistics options if needed:

call admin_cmd('runstats on table MY.TABLE with distribution and indexes all')

Generate a runstats script

Don’t let outdated statistics fool you more than once – automate and change-proof the process.  One option is to enable DB2’s auto runstats, but that has its limits.  Another common solution is a cron job or scheduled task kicking off a fixed set of runstats commands.  Good idea, but you must update the script each time you add a table.  You can go one step further with a script to generate the runstats commands for all tables matching a certain criteria, and then call the generated script, all from a cron job.  Something like this:

db2 -x "select 'runstats on table ' || rtrim(tabschema) || '.' || tabname || ' with distribution and indexes all allow write access;' from syscat.tables where type = 'T' and tabschema in ('MYSCHEMA') order by tabname with ur" >> %GeneratedScriptName%

db2 -tvf %GeneratedScriptName% >> %GeneratedScriptName%-out.txt

You may want to reduce runstats frequency for the more static tables.  If so, add name, time, and other criteria like the following:

db2 -x "select 'runstats on table ' || rtrim(tabschema) || '.' || tabname || ' with distribution and indexes all allow write access;' from syscat.tables where type = 'T' and tabschema in ('MYSCHEMA') and tabname like 'MYNAME%' and ((stats_time is null) or (stats_time < current timestamp - %SinceMinutes% minutes)) order by tabname with ur" >> %GeneratedScriptName%

Perhaps you’ll find these incantations (or variations of them) useful as the days go by.

DB2 Second Steps

I’ve written about DB2 9.7 install issues, such as here and here, but neglected to record one that bugged a co-worker today.  You log into Windows after installing and are greeted by a popup message:

SQL5005C System Error. A system error, probably an I/O error, was encountered while accessing a configuration file.

It typically comes from DB2 First Steps (db2fs.exe), but that may not be obvious.

The likely cause is that you enabled DB2’s extended Windows system security (the new default), but did not add your user ID to the DB2ADMNS or DB2USERS group.  The DB2 installation program tells us to do that, but if you didn’t do the install, you may have never seen that advice.

Just add your ID to the group and problem solved.  And maybe turn off the db2fs auto-start so it doesn’t annoy you each time.

Default This

It’s a common problem with new Windows DB2 installs on corp-rat machines: you try to create a new database or do some other such sysadm or secadm operation and all you get in return is:

SQL1092N <User> does not have the authority to perform the requested command or operation.

It stems from using a domain ID but not having access to the corp-rat directory (Active Directory, LDAP, whatever), either because you’re disconnected or because group enumeration is disabled.  I’ve written about it here (in the comments), and so have many others, including the IBM DB2 Program Director’s thorough treatment, How to Develop DB2 Applications on an Airplane.  The fix is quick and simple:


Followed by a DB2 restart (db2stop & db2start) for good measure.  It can be verified with:

db2set -all | find /i "grp"

It’s familiar (classic, even) by now, but all too easy to forget: a co-worker who knew it well forgot about it during today’s work.  So I’m thinking this should either be a DB2 default, or added as an install-time option/reminder.  This is right up there with db2empfa, db2_skipdeleted, db2_awe, and other such knobs where the behavior was so commonly preferred it became the new default.

Big and Slow

All too often, folks will take a broad-brush “bigger is better” approach when sizing, configuring, and tuning databases.  After all, if 5 is good, 10 must be better, right?  Problems from this brute-force approach can show up quickly with “not enough memory” error messages, excessive paging, and slow start-up. But sometimes bad results can be delayed or indirect.  DBA “coolwinds” points out how high CPU utilization can result from an oversized tempspace bufferpool:

If a temporary tablespace has a very large bufferpool, and temporary tables are often dropped, CPU usage might be high. (Even if your application does not explicitly drop temporary tables, DB2 might still create and drop temporary tables internally in normal query processing.)

He explains why (dirty list scanning) and how to correct it (create a smaller bufferpool dedicated to the temporary tablespaces).
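The correction is a small, dedicated bufferpool for the temporary tablespace.  A sketch (names and sizes here are hypothetical; tune to your system):

```sql
-- Create a modest bufferpool just for temporary work
CREATE BUFFERPOOL bp_temp_4k SIZE 2000 PAGESIZE 4K;

-- Point the temporary tablespace at it
ALTER TABLESPACE tempspace1 BUFFERPOOL bp_temp_4k;
```

A db2stop/db2start may be needed for the bufferpool change to fully take effect on older releases.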

Tools make it easy to view bufferpool sizes, but you can always query the system catalog to check:

select tbspace, datatype, t.pagesize, bpname, npages
from syscat.tablespaces t, syscat.bufferpools b
where t.bufferpoolid = b.bufferpoolid
order by datatype, tbspace

Right-sizing can be a challenge. Fortunately, for DB2 9.1 and higher, you can use automatic sizing for many things and let STMM do the work.  When overriding, it helps to ask autoconfigure for its opinion.
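To get autoconfigure’s recommendations without actually applying them, something like this works (the workload and memory parameters are illustrative):

```shell
db2 connect to MYDB
db2 "autoconfigure using mem_percent 60 workload_type mixed apply none"
```

With apply none, it just prints the current and recommended values so you can cherry-pick.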

DB2 group_concat and other figments

For all that DB2 has to offer, there remain some features found in other relational databases that DB2 still lacks.  Multi-version concurrency control (MVCC) and Oracle-style optimizer hints (between the /*+’s) often top the list, but there are others.  I’m reminded of these when I find myself recommending something out of habit before I remember DB2 doesn’t have it (rhyme intended).

This happened today with a request for DB2 SQL to build an export command with an explicit column list (no “select *”).  The simple answer was quick:

select 'export ' || group_concat(colname) || ' from ' || tabname from syscat.columns where … group by tabname

Unfortunately, DB2 doesn’t have group_concat, so I went googling for an alternative.  Some folks offered group_concat, concat_all, and rowtocol implementations as UDFs and stored procedures, while others suggested really ugly recursive SQLs or UNION/CASE alternatives.  None were very general.  One IBMer figured out a way to use DB2’s xmlagg function to hack a solution, which I applied as follows:

select 'export ' || substr(replace(replace(xmlserialize(content xmlagg(xmlelement(name "COLNAME", colname)) as varchar(1000)), '<COLNAME>', ', '), '</COLNAME>', ''),3) || ' from ' || tabname from syscat.columns where … group by tabname

It works, but it’s uuuugly!
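For what it’s worth, later DB2 levels (9.7 Fix Pack 4 onward, if memory serves) added the standard LISTAGG function, which finally makes the one-liner possible.  A sketch, with a hypothetical schema filter standing in for the elided where clause:

```sql
select 'export ' || listagg(colname, ', ')
         within group (order by colno)
       || ' from ' || tabname
from syscat.columns
where tabschema = 'MYSCHEMA'   -- hypothetical filter
group by tabname
```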

I can understand some hesitation to add group_concat to DB2; for example, it’s very easy for an aggregated result to exceed column length limits.  And architectural differences mean that DB2 doesn’t always need to “clone” everything that other DBs have (there are other ways to do the same thing).  But it’s often helpful to keep up with the database Joneses in one area while you’re passing them in others.

Recent DB2 enhancements (currently committed semantics, literal replacement, optimization profiles, etc.) have shrunk many of these gaps, but others have yet to be closed.  With the continued rapid addition of new DB2 features, perhaps we’ll see some of these soon.

Never Don’t Buy Version 4.0 of Anything

As I’ve mentioned before, TOAD is one of my regular tools for database work.  At least it used to be.

I’ve had to keep upgrading TOAD for DB2 to stay current with DB2 releases, but the 4.0 release is so buggy that it’s unusable.  The 4.7 Commercial Version has been out for a while now, but there’s no sign of those fixes and enhancements coming to the freeware version anytime soon.  Folks have been asking in forums, but there’s no meaningful response.

It seems that since TOAD first donned that Quest T-shirt, this formerly independent tool has become increasingly constrained.  Ever more menu options are grayed out with the “(Commercial Feature)” taunt.  Simple things that worked in prior releases are broken in the 4.0 freeware version.  Perhaps they work just fine in the paid commercial version.

I’ve purchased Quest tools in the past, so cost isn’t the issue: availability is.  TOAD is one of those tools that I’ve frequently recommended largely because it was just one free download away.  If freedom and ubiquity weren’t so very important, we might all be coding in CFScript.

Quest is a great company, but TOAD has not done well lately under its ownership (don’t get me started on the lack of a *ix version).  I only wish there was a TOra for DB2.

This happens with some companies that take over a free, independent, or open source project: they take it in under their wing, lock it down, and look for improper ways to capitalize on it.  It can make money in the short run, but let’s be up front about it: state your intentions and either spin off a community version or kill off the freeware version altogether.  This “slippery slope” behavior is right down there with snake oil salesmen and Oracle.


I usually don’t have (nor want) direct, real-time access to production databases, yet I’m sometimes asked to diagnose and correct performance issues with them.  So my typical MO is to collect periodic snapshot files (driven by cron or schtasks) and run them “offline” through a set of tools I’ve developed for postmortem analysis.  These tools of mine measure and report trends, calculate deltas, do performance calculations, and flag areas of concern.

But occasionally, I do need to monitor databases in real time.  For this, I’ve relied on third-party tools, such as Quest Spotlight, the free and simple db2mon, db2pd and get snapshot commands, and even Windows perfmon against the DB2 counters.  All the while, I wondered why IBM didn’t provide similar real-time monitoring tools.

Top for DB2

But IBM recently jumped into the game with the db2top utility for Linux and AIX.  As the name implies, it’s like top for DB2.  It uses a text user interface (it’s curses-based) which, despite the retro-80’s feel, is a benefit.  That is, it’s super-fast (jumping between views is quick and easy), and, of course, it runs from a command shell, so no X required.

Db2top shows just about everything available in DB2’s snapshots, and then some.  For example, it automatically calculates things like bufferpool hit ratios, sort overflow percentages, average I/O times, async read percentages, average execution times, and lock chains.  In most views, you can use the arrow keys to scroll right and left to see additional details; I found it useful to increase the screen width to see more without scrolling.  But, of course, the real benefit is watching instantaneous updates and deltas.  This avoids having to “do the math” yourself by processing a series of snapshot files.

With Any More Features, We’d Run Out of Alphabet

There are some nice tie-ins, like the ability to grab an instantaneous snapshot (S) for the corresponding screen, and run explains (L) directly from the Dynamic SQL screen.  Both toss the results directly into vi for viewing.  You can even force applications directly from the sessions (applications) view (bwaaaahahaha!)  And navigation aids like column sorting (Z, z) and screen “freezing” (f) come in very handy when drilling in for closer analysis, particularly when there’s a lot going on.  Quick reference help (h) is available in case you forget these keystrokes.

The bottlenecks (B) screen is useful for finding the “problem children” – those agent(s) consuming the most of each type of DB2 resource: CPU, memory, I/O, locks, sorts, etc.  This makes for a good starting point when jumping into a crisis.

The tool also supports capturing snapshots over time, running analysis reports, exporting to Excel, and replaying results.  I haven’t done much with these features yet, but they may very well take the place of my home-grown snapshot analysis tools.
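The capture/replay cycle looks roughly like this (from memory, so check db2top -h for the exact flags on your version):

```shell
# Collector mode: gather snapshot data in the background for 30 minutes
db2top -d mydb -C -m 30

# Replay a previously collected file interactively
db2top -d mydb -f db2top.out
```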

Just What You Need

Before this, if you told me you had “Spotlight for a command shell,” I’d think you were crazy.  But db2top comes remarkably close.  Overall, it has a lot going for it: ubiquity (it’s there with every DB2 9.x *ix install), speed, and just the monitoring tools I need (no more, no less).  There’s not much lacking, but the following additions would be nice:

  • Extensibility (or source code).  For example, I could then add my own performance calculations – things like log, victim, and threshold trigger ratios for page cleaners.
  • Tie-ins to event monitors, at least db2detaildeadlock.
  • A Windows version, even if it required Cygwin.

Yet I’d prefer it miss a few of these flourishes than be laden with extra features that only add bulk or cost, or that just get in the way.  Too many other tools fall into that trap.

The Squeeze

My friendly AIX admin recently asked me to move some of my test databases from a P570 to a new Power 7 (P750) box.  I used db2look and db2move to replicate these on the new system, but I wanted to take it a bit further.  Since I’ve been called a “DASD pig,” I saw this as a good opportunity to enable DB2 row compression on some of the larger tables and see how much space I could save.

Look Before You Squeeze

Before moving, I took some compression estimates on the source system. Since it was DB2 9.1, I had to use inspect:

db2 inspect rowcompestimate table name TABLENAME schema SCHEMANAME results keep TABLENAME.insp
db2inspf sqllib/db2dump/TABLENAME.insp TABLENAME.out


This gave “percentage saved” estimates in the 61 to 81 percent range for the tables I selected: very promising, considering the table sizes.

The target system was DB2 9.7, so I could use admin_get_tab_compress_info to get estimates there:

db2 "select * from table(sysproc.admin_get_tab_compress_info('SCHEMA', 'TABLE', 'estimate')) as t"

For one 8-million-row table, the estimated savings were 61%: the same as the inspect estimate for this table on the DB2 9.1 box.


I was curious how accurate these estimates were, so I measured it.  Since I had some similar databases, I created the first database without row compression and enabled compression after loading data.

I compared sizes before and after the “alter table … compress yes” and reorg. The tablespace snapshots revealed a 51% reduction in 4K pages: from 909,985 to 442,817.  And the (SMS) disk space savings were in that same range: from 2.48 GB to 1.19 GB for that table’s data.  The index space savings weren’t significant (from 0.98 GB to 0.92 GB), but I didn’t expect much there.
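The enable-and-compress sequence itself is short (table name hypothetical; resetdictionary forces a fresh compression dictionary):

```shell
db2 "alter table myschema.bigtable compress yes"
db2 "reorg table myschema.bigtable resetdictionary"
db2 "runstats on table myschema.bigtable"
```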

I ran some quick tests using the system against the new (compressed) tables and found no significant performance impacts.  The initial reorgs and imports to enable compression and load data were painfully slow, but those are “off hours” activities anyway.

I’m Convinced

This was enough to convince me to enable compression from the very beginning for some of the larger tables in the remaining databases.  That is, I edited the db2look-generated DDL to add “compress yes” to the create table statements before running them.  I considered switching some of the SMS tablespaces to automatic storage, but decided to stick with only one change at a time for this round.

So far, I’m pleased with the results.  I’ll know more the next time we do a performance test, and I’ll be less of a “DASD pig” in the meantime.