Category Archives: DB2

You Again

Remember back in the JDK 1.5 days when some genius changed BigDecimal.toString and broke half the universe? I’ve felt the impacts in DB2, Oracle, VoiceObjects, and other places. I’m over that now (and all the cool kids have updated or switched to toPlainString), but this thing just keeps coming back to haunt me.
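For anyone who never hit it, the change is a one-liner to reproduce: the same value renders differently depending on which method you call.

```java
import java.math.BigDecimal;

public class ToStringDemo {
    public static void main(String[] args) {
        BigDecimal small = new BigDecimal("0.00000001");
        // Since JDK 1.5, toString uses scientific notation for small scales
        System.out.println(small.toString());      // 1E-8
        // toPlainString gives the old-style, notation-free rendering
        System.out.println(small.toPlainString()); // 0.00000001
    }
}
```

Anything downstream that parses the toString form (like an old JCC driver converting numerics) is in for a surprise.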

Like this week, when old DB2 JDBC (JCC) drivers kept showing up on some platforms, prompting these intermittent errors:

[ibm][db2][jcc][converters] Overflow occurred during numeric data type conversion of …

This error can sneak up on ya because it often takes just the right numbers to kick in. Like the lottery, and sometimes with the same odds.

Fortunately, the fix is easy: just update JDBC drivers to something from the modern era.

Move it, Move it

Today I had to move a few dozen DB2 LUW databases from one Windows system to another. I’ve been down this road before, and I always script the process, but this time I initially left out tablespaces. Bummer, the downside of writing db2look commands from memory.

So that I don’t get fooled again, I’m posting my simple scripts which were inspired by this venerable article.

Export from source machine – Use list db directory to generate a master script that calls this expdb.cmd for each database:

mkdir %1
cd %1
db2move %1 export > export-%1-db2move-output.txt 2>&1
db2look -d %1 -e -a -l -o db2look-%1.sql > export-%1-db2look-output.txt 2>&1
cd ..

Import into target machine – Use an equivalent master script to call this impdb.cmd:

cd %1
db2 create db %1 using codeset ibm-1252 territory us > import-%1-db2-create-output.txt 2>&1
db2 -tvf db2look-%1.sql > import-%1-db2-output.txt 2>&1
db2move %1 load > import-%1-db2move-output.txt 2>&1
cd ..

Not So Fast

With the advent of DB2 10, IBM dropped its much-maligned Control Center tools in favor of a free version of IBM Data Studio.  I started using Data Studio back in April and wrote about some positive first impressions here.  But I shouldn’t have blogged so soon.

For occasional use with certain tasks on fast hardware, Data Studio is fine.  But if you want to get a lot done quickly, it can really get in the way.  It’s just too bloated, too slow, too cumbersome, too slow, too awkward, and too slow.  Did I mention that it’s slow?

So I find myself doing even more now with the command line.  When I need a GUI tool, I’ve gone back to using a tool I wrote myself, TOAD, and free tools like DbVisualizer and Eclipse DTP.

Data Studio has been around for awhile, and IBM is clearly committed to it.  But there remains a lot of room for improvement.  Now that Data Studio is the built-in default, I hope IBM gets on that.  And quickly.

Broken JARs

Since I work on a few apps that use the DB2 JDBC Type 4 drivers, work-arounds for common issues have become familiar.  But when the corrupted DB2 JAR issue resurfaced today, I realized that I had not posted about it.  Now is as good a time as any to make up for that neglect…

The problem is that for DB2 LUW versions between 9 and 9.5, IBM published db2jcc.jar files with corrupt classes.  In some Java EE environments, the container doesn’t fully scan JARs, so it doesn’t really matter if an unused class is wonky.  But many containers do scan, causing exceptions like the following:

SEVERE: Unable to process Jar entry [COM/ibm/db2os390/sqlj/custom/DB2SQLJCustomizer.class] from Jar [jar:file:…lib/db2jcc.jar!/] for annotations
org.apache.tomcat.util.bcel.classfile.ClassFormatException: null is not a Java .class file
at org.apache.tomcat.util.bcel.classfile.ClassParser.readID(
at org.apache.tomcat.util.bcel.classfile.ClassParser.parse(

That particular exception is from Tomcat 7.

IBM acknowledges the problem and offers a work-around: edit the JAR file to remove the offending classes.

Edit the JAR?  Really?!  A quicker, surer solution is to just grab a good JAR from a newer driver release.

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

I got a friendly reminder this morning that I’ve neglected my Friday Fixes postings of late.  I didn’t say they’d be every Friday, did I?  At any rate, here are some things that came up this week.

Tabbed Freedom v. Logoff Security

Tabbed browsing is a wonderful thing, but its features can become security concerns to corp-rat folks who mainly use their browsers for mission-critical apps.  For example, with most browsers, closing a tab (but not the browser itself) does not clean up session cookies. Yet those security-first guys would like to have a way to trigger a log off (kill the session) on tab close.

This is a common request, but there’s no straightforward solution. As much as I’d like browsers to have a “tab closed” event, there isn’t one.  The best we can do is hook the unload event which is fired, yes, when the tab is closed, but also anytime you leave the page: whether it’s navigating a link, submitting a form, or simply refreshing.  So the trick is to detect and allow acceptable unloads.  Following is some JavaScript I pieced together (into a common JSPF) based loosely on various recommendations on the web.

  var isOkToUnload = false;
  var logoffOnClose = '<c:out value="${settingsBean.logoffOnClose}" />';
  function allowUnload() {
     isOkToUnload = true;
  }
  function monitorClose() {
     window.onbeforeunload = function() {
        if (!isOkToUnload)
           return "If you leave this page, you will be logged off.";
     };
     window.onunload = function() {
        if (!isOkToUnload) {
           // Synchronous POST so the logoff completes before the page goes away
           $.ajax({ async: false, type: "POST",
                    url: '<c:url value="/basiclogout"/>' });
        }
     };
     // Add other events here as needed
  }
  $(document).ready(function() {
     if (logoffOnClose === 'Y')
        monitorClose();
  });

This triggers on refresh, but that’s often a good thing since the user could lose work; Gmail and Google Docs do the same thing when you’re editing a draft.  It’s a good idea to make this behavior configurable, since many folks prefer the freedom of tabbed browsing over the security of forcing logoff.

DBCP Has Timed Out

Right after mastering the linked list, it seems every programmer wants to build a database connection pool.  I’ve built a couple myself, but this proliferation gets in the way of having a single golden solution that folks could rally around and expect to be supported forever.

Such was the story behind Apache DBCP: it was created to unify JDBC connection pools.  Although it’s broadly used, it’s over-engineered, messy, and limited. So it, too, fell by the wayside of open source neglect.  And since nearly all servlet containers now provide built-in connection pools, there’s really no use for DBCP anymore.

Yet I found myself having to fix DisconnectException timeouts with an existing DBCP implementation, typically stemming from errors like:  A communication error has been detected… Location where the error was detected: Reply.fill().

After trying several recovery options, I found that DBCP’s validationQuery prevented these, at the cost of a little extra overhead.  Although validationQuery can be configured, I didn’t want additional setup steps that varied by server.  So I just added it to the code:

  BasicDataSource ds = new BasicDataSource();
  // ...
  ds.setValidationQuery("select 1 from sysibm.sysdummy1");

In the next pass, I’ll yank out DBCP altogether and configure pools in WebSphere, WebLogic, and Tomcat 7.  But this gave me a quick fix to keep going on the same platform.

Aggregation Dictation

Weird: I got three questions about aggregating in SQL on the same day.  Two of them involved OLAP syntax that’s somewhat obscure, but quite useful.  So if you find yourself with complications from aggregation aggravation, try one of these:

  • Doing a group by and need rollup totals across multiple groups?  Try grouping sets, rollup, and cube.  I’ve written about these before; for example, see this post.
  • Need to limit the size of a result set and assign rankings to the results?  Fetch first X rows only works fine for the former, but not the latter.  So try the ranking and windowing functions, such as row_number, rank, dense_rank, and partition by.  For example, to find the three most senior employees in each department (allowing for ties), do this:

      SELECT * FROM
       (SELECT rank() OVER(PARTITION BY workdept ORDER BY hiredate ASC)
        AS rank, firstnme, lastname, workdept, hiredate
        FROM emp) emps
      WHERE rank <= 3
      ORDER BY workdept

    Co-worker Wayne did a clever row_number/partition by implementation and added a nice view to clarify its meaning.



Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

You know the old saying, “build a man a fire and he’s warm for a day; set a man on fire, and he’s warm for the rest of his life.”  Or something like that.  I’ve been asked about tool preferences and development approaches lately, so this week’s post focuses on tools and strategies.


JRebel

If you're sick of JVM hot-swap error messages and having to redeploy for nearly every change (who isn't?), run, do not walk, to ZeroTurnaround's site and get JRebel.  I gave up on an early trial last year, but picked it up again with the latest version a few weeks ago.  This thing is so essential, it should be part of the Eclipse base.

And while you’re in there, check out their Java EE Productivity Report.  Interesting.

Data Studio

My DB2 tool of choice depends on what I’m doing: designing, programming, tuning, administering, or monitoring.  There is no “one tool that rules them all,” but my favorites have included TOAD, Eclipse DTP, MyEclipse Database Tools, Spotlight, db2top, db2mon, some custom tools I wrote, and the plain old command line.

I never liked IBM’s standard GUI tools like Control Center and Command Editor; they’re just too slow and awkward.  With the advent of DB2 10, IBM is finally discontinuing Control Center, replacing it with Data Studio 3.1, the grown-up version of the Optim tools and old Eclipse plugins.

I recently switched from a combination of tools to primarily using Data Studio.  Having yet another Eclipse workspace open does tax memory a bit, but it’s worth it to get Data Studio’s feature richness.  Not only do I get the basics of navigation, SQL editors, table browsing and editing, I can do explains, tuning, and administration tasks quickly from the same tool.  Capability wise, it’s like “TOAD meets DTP,”  and it’s the closest thing yet to that “one DB2 tool.”

Standardized Configuration

For team development, I’m a fan of preloaded images and workspaces.  That is, create a standard workspace that other developers can just pick up, update from the VCS, and start developing.  It spares everyone from having to repeat setup steps, or debug configuration issues due to a missed setting somewhere.  Alongside this, everybody uses the same directory structures and naming conventions.  Yes, “convention over configuration.”

But with the flexibility of today’s IDEs, this has become a lost art in many shops.  Developers give in to the lure of customization and go their own ways.  But is that worth the resulting lost time and fat manual “setup documents?”

Cloud-based IDEs promise quick start-up and common workspaces, but you don’t have to move development environments to the cloud to get that.  Simply follow a common directory structure and build a ready-to-use Eclipse workspace for all team members to grab and go.

Programmer Lifestyle

I’ve been following Josh Primero’s blog as he challenges the typical programmer lifestyle.

Josh is taking it to extremes, but he does have a point: developers’ lives are often too hectic and too distracted.  This “do more with less” economy means multiple projects and responsibilities and the unending tyranny of the urgent.  Yet we need blocks of focused time to be productive, separated by meaningful breaks for recovery, reflection, and “strategerizing.”  It’s like fartlek training: those speed sprints are counterproductive without recovery paces in between.  Prior generations of programmers had “smoke breaks;” we need equivalent times away from the desk to walk away and reflect, and then come back with new ideas and approaches.

I’ll be following to see if these experiments yield working solutions, and if Josh can stay employed.  You may want to follow him as well.

Be > XSS

As far as I know, there’s no one whose middle name is <script>transferFunds()</script>.  But does your web site know that?

It’s surprising how prevalent cross-site scripting (XSS) attacks are, even after a long history and established preventions.  Even large sites like Facebook and Twitter have been victimized, embarrassing them and their users.  The general solution approach is simple: validate your inputs and escape your outputs.  And open source libraries like ESAPI, StringEscapeUtils, and AntiSamy provide ready assistance.

But misses often aren’t due to systematic neglect, rather they’re caused by small defects and oversights.  All it takes is one missed input validation or one missed output-encode to create a hole.  99% secure isn’t good enough.

With that in mind, I coded a servlet filter to reject post parameters with certain “blacklist” characters like < and >.  “White list” input validation is better than a blacklist, but a filter is a last line of defense against places where server-side input validation may have been missed.  It’s a quick and simple solution if your site doesn’t have to accept these symbols.
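The heart of such a filter is a trivial check applied to every posted parameter value; here is a minimal sketch (the class and method names are illustrative, not the filter I actually deployed):

```java
// Sketch of the blacklist test a servlet filter might run against each
// posted parameter value; '<' and '>' are the rejected characters here.
public class BlacklistCheck {
    private static final char[] BLACKLIST = {'<', '>'};

    public static boolean isAllowed(String value) {
        if (value == null) return true;
        for (char c : BLACKLIST) {
            if (value.indexOf(c) >= 0) return false; // reject the request
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("Jane O'Brien"));                     // true
        System.out.println(isAllowed("<script>transferFunds()</script>")); // false
    }
}
```

In the real filter, a false result would translate to rejecting the request before it ever reaches the application.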

I’m hopeful that one day we’ll have a comprehensive open source framework that we can simply drop in to protect against most web site vulnerabilities without all the custom coding and configuration that existing frameworks require.  In the mean time, just say no to special characters you don’t really need.

Comments Off

On that note, I’ve turned off comments for this blog.  Nearly all real feedback comes via emails anyway, and I’m tired of the flood of spam comments that come during “comments open” intervals.  Most spam comments are just cross-links to boost page rank, but I also get some desperate hack attempts.  Either way, it’s time-consuming to reject them all, so I’m turning comments off completely.  To send feedback, please email me.

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

Hashes Make for Poor Traces

SQL and parameter tracing  is covered well by most ORMs like Hibernate, but you’re on your own when using JDBC directly.  It’s helpful to funnel JDBC access through one place with a common wrapper or utility class, but if all you get is the PreparedStatement, there are no public methods to get the SQL text and parameter array from it for tracing.  Unbewiebable.  PreparedStatement.toString sometimes includes useful information, but that depends entirely on the JDBC driver.  In my case, all I got was the class name and hash.  So I just went upstream a bit with my calls and tracing hooks so I could pass in the original SQL text and parameter array.
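Passing the originals upstream reduces the tracing hook to simple string assembly; a minimal sketch (the helper name is illustrative, not the project's actual utility class):

```java
import java.util.Arrays;

// Illustrative trace helper: since PreparedStatement offers no public way
// to recover its SQL text or parameters, the caller passes both in.
public class SqlTrace {
    public static String format(String sql, Object... params) {
        return sql + " -- params: " + Arrays.toString(params);
    }

    public static void main(String[] args) {
        System.out.println(format("select * from emp where workdept = ?", "D11"));
        // select * from emp where workdept = ? -- params: [D11]
    }
}
```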

Samy Isn’t My Hero

You’d think URL rewriting went out with the Backstreet Boys, but unfortunately, JSESSIONID is alive and well, and frequenting many address bars.  This week, I’ve been focused on hack-proofing (counting down the OWASP Top Ten), so I should be a bit more sensitive to these things.  Yet I inadvertently gave away my session when emailing a link within the team management system I use (does it count as a stolen session if you give it away?).  Not only does this system include the direct sessionguid in the URL, it also doesn’t validate session references (like by IP address) and it uses long expirations.

At least this system invalidates the session on logout, so I’ve resumed my habit of manually logging out when done.  That, and attention to URL parameters is a burden we must all carry in this insecure web world.  Site owners, update your sites.  Until then, let’s be careful out there.

Virtual Time Zone

While directing offshore developers last year, thinking 9 1/2 or 10 1/2 hours ahead finally became natural.  Having an Additional Clock set in Windows 7 provided a quick backup.  Amazingly, some software still isn’t timezone-aware, causing problems such as missed updates.  I won’t name names or share details, but I opted for a simple solution: keep a VM set in IST and use it for this purpose.  Virtual servers are cheap these days, and it’s easy enough to keep one in the cloud, hovering over the timezone of choice.
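The half-hour offset is what trips up naive timezone arithmetic; the JDK at least gets it right:

```java
import java.util.TimeZone;

public class IstOffset {
    public static void main(String[] args) {
        // India Standard Time is UTC+05:30 year-round (no DST)
        TimeZone ist = TimeZone.getTimeZone("Asia/Kolkata");
        System.out.println(ist.getRawOffset() / 3600000.0); // 5.5 hours
    }
}
```

Software that stores offsets as whole hours (and some still does) is exactly the kind that needs the IST-resident VM work-around.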

SQL1092 Redux

Yes, bugs know no geographical boundaries.  A blog reader from Zoetermeer in the Netherlands dropped me another note this week, this time about some SQL1092 issues he encountered.  Full details are here; the quick fix was the DB2_GRP_LOOKUP change.

This DB2 issue comes up frequently enough that DB2 should either change the default setting or also adopt WebSphere MQ’s strategy of using a separate limited domain ID just for group enumeration.  Those IBM’ers should talk.

Those Mainframe Hippies

I traditionally think of mainframe DB2’ers as belt-and-suspenders codgers who check your grammar and grade your penmanship, while viewing DB2 LUW users as the freewheeling kids who sometimes don’t know better.  That changes some with each new version, as the two platforms become more cross-compatible.

So I was surprised this week to find that DB2 for z/OS was more flexible than DB2 LUW on timestamp formats.  Mainframe DB2 has long accepted spaces and colons as separators, but unlike DB2 LUW, you can mix and match ’em.  For example, DB2 z/OS 9 and higher will take any of the following:

'2012-03-16 19:20:00'

'2012-03-16 19:20.00'

'2012-03-16 19.20.00'

DB2 LUW (9.7) will reject the last two with SQL0180 errors.

Knowing the limits is important when writing code and scripts that work across multiple DB2 platforms and versions.  The problem could get worse as DB2 LUW adds more Oracle compatibility, but as long as the DB2 kids and codgers stay in sync, we can enjoy some format flexibility there.

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

It’s a Big World After All

I got the complaint again this week: “DB2 is so stupid; why does it sometimes take two bytes for a character?”  I’ve answered that before, and the best solution is: if you’re storing truly binary data, use a binary type (like for bit data).  But if it really is text, I usually just send a link to Joel Spolsky’s classic article on character encodings.  And here’s a quiz – let me know what the following says:

אם אתה יכול לקרוא את זה, אתה תגרוק קידודים

The counterpoint to Joel’s article, or at least to blindly applying Postel’s Law to web apps, is that it can open the door to all kinds of security vulnerabilities: SQL injection, cross-site scripting, CSRF, etc.  So it’s common to have a blacklist of disallowed characters or a white list specifying the allowed character set.  The point is to be deliberate and think about what should be allowed: both the extent and limits.  Don’t treat binary data as text and don’t pretend there’s only one character encoding in the world.
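That "two bytes for a character" behavior is just UTF-8 at work, and it's easy to see from plain Java (no DB2 required):

```java
import java.nio.charset.StandardCharsets;

public class ByteCount {
    public static void main(String[] args) {
        // ASCII characters encode to one byte each in UTF-8
        System.out.println("cat".getBytes(StandardCharsets.UTF_8).length);  // 3
        // é (U+00E9) takes two bytes, so four characters become five bytes
        System.out.println("café".getBytes(StandardCharsets.UTF_8).length); // 5
        // Hebrew letters like aleph also take two bytes each
        System.out.println("א".getBytes(StandardCharsets.UTF_8).length);    // 2
    }
}
```

Size your VARCHAR columns by bytes, not characters, and the "stupid" behavior makes perfect sense.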

Different, Not Better

Having endured VMWare’s product gyrations through the years, I’ve learned that upgrading it isn’t always best: sometimes I just get different, not better.  I feel the same way about Windows 8.

In this latest case, I found my beloved but now-obsolete VMWare Server can’t host the new Windows 8 CP edition, so I considered switching to VMWare Workstation 8.  But the latest VirtualBox (4.1.8) works, so I downloaded that, installed Windows 8 from ISO, and quickly had it running in a VM.  Glad I didn’t have to play a pair of 8s.

I suppose that’s a side benefit when leading players like VMWare and Microsoft make revolutionary changes to their incumbent products.  Major (often costly) upgrades create good opportunities to try out competing alternatives.

That Other Browser

I’ll always need the latest Windows and Internet Explorer (IE) versions running in some mode (VM or otherwise) just to test and fix IE browser compatibility issues.  This week, I ran into an “Access denied” JavaScript error in IE 8 – Intranet zone, Protected Mode.  As is often the case, my code worked fine in Firefox, Chrome, and other browsers.

I wanted to pin down exactly which IE setting was causing the problem, so I tweaked some likely suspects in custom Security Settings but could never nail it down to just one.  There was nothing in the Event Log to really help, just red herrings.  And apparently neither Protected Mode nor IE ESC was the root cause.  Switching zones (such as adding to trusted sites) didn’t work either, and extreme measures only made things worse.  Ultimately, doing a Reset all zones to the default level resolved the issue.

IE has far too many obscure and non-standard security settings that don’t work as expected.  And there’s no direct way to get a consolidated dump of all current settings.  Microsoft continues to work on fixing this, so I’m hopeful things will improve; they really can’t get worse.  Until then, the solution may continue to be “switch browsers” or “take one reset and call me in the morning.”

Jasper Variability

Jasper Reports (iReport) variables and groups are useful features.  But they can be frustrating at times because of documentation limits (yes, even with The Ultimate Guide) and differences between versions and languages (Groovy v. Java).  Sometimes the answer must be found by stepping through the code to determine what it wants.  That was necessary for me to get my group totals working.

For example, imagine a sales report grouped by region where you want group footers totaling the number of Sprockets sold for each region.  Standard variable calculations won’t work, and the Variable Expression must handle anything that comes its way, including null values from multi-line entries.  So, rather than using a built-in Counter variable with an Increment Type, I had to use a counter-intuitive variable, like so:

Property                  Value
Variable Class            java.lang.Integer
Calculation               Nothing
Reset type                Group
Reset group               regiongroup
Increment type            None
Variable Expression       ($F{TYPE} != null && $F{TYPE}.equals("Sprocket")) ? new java.lang.Integer($V{sprtotal}+1) : $V{sprtotal}
Initial Value Expression  new java.lang.Integer(0)

In JRXML, that comes to:

	<variable name="sprtotal" class="java.lang.Integer" resetType="Group" resetGroup="regiongroup">
		<variableExpression><![CDATA[($F{TYPE} != null && $F{TYPE}.equals("Sprocket")) ? new java.lang.Integer($V{sprtotal}+1) : $V{sprtotal}]]></variableExpression>
		<initialValueExpression><![CDATA[new java.lang.Integer(0)]]></initialValueExpression>
	</variable>


Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

CSS – Space Matters

Guess you could say I was “lost in [a] space” wondering why my browser didn’t pick up my div’s CSS class.  Turns out, I wrote the selector as #myid .myclass (descendant selector) but really meant #myid.myclass (combined selector). Thank you, 2.5x reading glasses and Chris Coyier, for the quick save.

Point of No Return

JavaScript and jQuery iterator functions are awfully convenient, but it’s easy to overlook scope.  While refactoring some code today, I wrapped a function around some iterator code for reuse and broke it. Here’s a simplified example to demonstrate the problem:

     function myFunction() {
        $.each(['a', 'b', 'c'], function(idx, ea) {
           if (ea === 'b') {
              return ea;
           }
        });
     }
At a glance, you’d think myFunction returns b, but since the inner function’s result is never passed out, it returns undefined.  And jQuery’s each method just returns the collection, so adding a return in front doesn’t help.  jQuery does have some alternatives (like filter) that can help, but those aren’t as flexible as each.

The simple solution is to just carry the value to the outer scope, like so:

     function myFunction() {
        var result;
        $.each(['a', 'b', 'c'], function(idx, ea) {
           if (ea === 'b') {
              result = ea;
              return false;
           }
        });
        return result;
     }

Adding the “return false” stops the iteration at the first match.

Is this Thing Used?

My current project has this DB2 LUW fan coding against a DB2 for z/OS database.  While I give up some things, I also gain a lot.  For example, while working on a DAO, I had questions about a particular table’s indexes.  Fortunately, DB2 for z/OS has SYSINDEXSPACESTATS which, among other things, keeps track of the last time an index was used (even for dynamic SQL) to help determine its merits.  So I first checked to see if this was at least version 9:

select getvariable('SYSIBM.VERSION') from sysibm.sysdummy1;

… and then answered my questions with a simple query like the following:

select i.name, i.tbname, i.creator, i.uniquerule, i.colcount, i.clustering, s.lastused
from sysibm.sysindexes i, sysibm.sysindexspacestats s
where i.name = s.name
and i.creator = s.creator
and i.dbname = 'MYDB'
and i.tbname like 'MYPREF%'
order by s.lastused

Can’t wait for this to make its way to DB2 LUW.


VSS Check-Outs

Yes, I presently have to use Microsoft Visual SourceSafe (VSS).  I’m over it now.  When using this “lock first” VCS against a deep project hierarchy, I often need to see a quick list of my check-outs.  I can use View->Search->Status Search (with subprojects selected) from the VSS GUI, but that’s often too slow over a remote VPN connection.  So I cooked up a quick script to run over an ssh command line from a server that is “local” to the repository:

cd /d "C:\Program Files\Microsoft Visual SourceSafe"
set SSDIR=\\myserver\VSS\myvssshare
ss STATUS $/MyProject -R -Umyusername

Flexigrid Echo

Imitation isn’t always flattering; it’s often annoying.  Having two Flexigrids on one page, both with pagers enabled (usepager: true) yielded some weird mocking behavior.  For example, paging forward on one grid caused both grids to show the same “Page X of Y” message.  After debugging, I found that similar bugs had been reported and fixed in the latest release.  Upgrading Flexigrid resolved that one.

Friday Fixes

Friday Fragments served a useful purpose, one of which was regularity: funny how Fridays kept coming around.  While I haven’t been in pedagogical fragment mode for awhile, I encounter real life puzzles every day.  Sharing these often helps others who encounter the same problems, or even myself when I need a quick reference down the road.  I just need a reminder and motivator to stop and record these.  I suppose Friday is as good as any.

So here goes the first installment of Friday Fixes: selected problems I encountered during the week and their solutions, along with other sundries.

Spring Loading

Like most form tags, binding a multi-select list in Spring MVC is easy: deceptively so.  All you need is this, right?

<form:select path="myFavorites" id="myFavorites" size="3" multiple="true" class="myList">
	<form:options items="${allChoices}" />
</form:select>

And, of course, the model attribute and/or bean methods to return the allChoices collection and get/set myFavorites.  Well, not so fast.  Turns out, multi-select lists in Spring MVC have always been a bit of a pain, particularly when it comes to making the initial selections (convincing Spring to add the selected attributes) on initial page load.  One pre-selection is fine, but with multiples, the comma-separated list pushed back into the model’s setter is a one-way trip.

Solving this in prior versions of Spring required using an InitBinder whether you otherwise needed one or not.  But for Spring MVC 3, the fix is to just map to collection getter/setters, even if your model wants to use the comma-separated list.  For example, use the following getter and change the form:select path to use it: path="myFavoritesList".

	public List getMyFavoritesList() {
		List list = new ArrayList();
		for (String fav : getMyFavorites().split(",")) {
			list.add(fav);
		}
		return list;
	}

Time(outs) May Change Me…

Between the deadlock event monitor, db2pd, and snapshots, DB2 has long provided good tools for tracking down deadlock culprits.  But for lock timeouts, not so much.  The DB2 folks have tried to improve things lately, but they’ve changed their minds a lot, often adding new tools and then quickly taking them away.

Now that lock timeout event monitors are finally here, many of the other new approaches like db2_capture_locktimeout and db2pdcfg -catch (with db2cos call-out scripts) have been deprecated.  A coworker was concerned about the passing of db2_capture_locktimeout, but it appears it’ll be around a little longer.  For example, the following still works in even the latest 9.7 fixpacks.

db2set db2_capture_locktimeout=on
db2stop & db2start
db2 get db cfg for sample show detail | find /i "timeout"
db2 update db cfg for sample using locktimeout 30
db2 connect to sample
db2 -c- update org set deptname=deptname

Repeat the last two commands in another DB2 Window and then wait for the timeout.  Look for the report under your DIAGPATH, SQLLIB, or Application Data folder; for example: dir db2locktimeout*.* /s.  Even with the latest 9.7 fixpacks, the timeout report can occasionally have some holes in it (like missing SQLs), but it’s still quite useful.

Flexigrid Incantation

Got a Flexigrid with a radio button?  Want to fetch the value of a column on the radio-selected row?  Well, when Flexigrid generates the tds and divs from your colModel, it provides abbr attributes, not ids.  So the usual jQuery shorthands to find by ID don’t apply, and you’re off chasing the more obscure abbr.  For example:

$('.mytable').find('tr').each(function (index, tr) {
	if ($(tr).find('input#sequence').attr('checked')) {
		myvalue = $(tr).find('div#myname').val();          // Nope, doesn't work
		myvalue = $('td[abbr="myname"] >div', tr).html();  // Gotta use this
	}
});


Upgrade and/or Die

JDBC is one of those core, essential components where you want to see continued incremental improvement with no interface impacts: just keep dropping in the latest versions and expect it to work.  Yay, back compatibility!

JDBC 4.0 is relatively new to DB2 LUW (first available in version 9.5), but it and prerequisite Java 6 have certainly been around long enough for folks to start switching over to it.  Just don’t assume that everything in a deep open source stack will work with it.

In yesterday’s action, perfectly-good BIRT 2.5.1 reports failed (“data set column … does not exist” exception) in an environment with DB2’s JDBC 4.x drivers.  At first, I suspected the particular db2jcc4.jar level, since it was taken from DB2 9.7 Fixpack 3: that now-deprecated fixpack that does everything evil except steal your kidneys.  But since new versions of BIRT (like the 3.7.1 version I use) worked fine with that version of Ernie (ahem, JDBC 4.0) and old versions of BIRT worked fine with JDBC 3.0 (db2jcc.jar), that flagged the culprit.  Turns out, the report SQLs use column aliases (who doesn’t?) and JDBC 4.0’s getColumnName “fixes” broke these: getColumnName now returns the underlying column name, leaving aliases to getColumnLabel.  Upgrading BIRT brought harmony back to the universe.

With “broken compatibility” issues like this, it’s tempting to procrastinate on open source upgrades.  That’s why Maven POMs quickly develop inertia: once you get it all working together, who has time to test new dependency version permutations?  Fortunately, the Eclipse project and others do a good job of bundling compatible toolsets for download.  That way, if something “doesn’t work on my machine”, it’s quick work to see how it plays with the latest and greatest stack.

Just Concentrate

In most cases (at least for OLTP systems), using SQL parameter markers instead of literal values improves performance, adds consistency, reduces CPU overhead, and makes the database easier to monitor.  SQL parameters let DB2 cache the access plan so that compilation only happens once and execution counts are combined for the same statement.

A transaction system that carelessly uses literals can often be improved by converting literals to markers wherever possible.  But since this recoding takes time, it would be nice to predict the performance impacts (hopefully benefits) of such a change without having to just code it all and then measure.

Fortunately, DB2 9.7 offers the new Statement Concentrator to do just that.  Enabling it at the database level is easy:

db2 update db cfg for DATABASE using stmt_conc literals

And you can see if it’s enabled with:

db2 get db cfg for DATABASE | find /i "stmt_conc"

I suggested this new feature to a colleague today, and did some experiments myself.  It was nice to see the statements combined in the Dynamic SQL Snapshots, and also see the (zero-execution) literal versions separated out.  This means I’ll have to update my snapshot analysis tools which combine similar SQLs.
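Literal replacement boils down to normalizing statement text.  Here's a rough regex-based sketch of the kind of normalization those analysis tools would need — a toy, not a real SQL parser (it ignores escaped quotes and comments):

```python
# Rough sketch of what the statement concentrator does to statement
# text: replace literals with markers so similar SQLs collapse into one
# entry. Regex-based and simplistic -- assumes no escaped quotes or
# comments -- just enough for combining snapshot entries.
import re

def normalize_sql(sql):
    sql = re.sub(r"'[^']*'", "?", sql)          # string literals first
    sql = re.sub(r"\b\d+(\.\d+)?\b", "?", sql)  # then numeric literals
    return sql

print(normalize_sql("select * from t where id = 42 and name = 'Bob'"))
# -> select * from t where id = ? and name = ?
```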

But the nagging feeling remains that this might be too broad a change.  For some SQLs, those unique (literal-driven) access plans are necessary.  But it may be possible to reach a good balance by enabling the statement concentrator at just the right level and by using optimization profiles for those exceptions to the rule.  That’s something worth concentrating on.

DB2 in Cygwin

I often use Cygwin with ssh and putty to remote into various Windows servers. It’s a lot quicker than Windows Remote Desktop and, well, I’m a command line junkie.  But since db2cmd defaults to launching a new window/session, it doesn’t play nicely with ssh.  Sure, I could remotely attach or catalog the node and database, but that defeats the purpose of being quick and simple.

To work around this, I add the following alias to my .bashrc:

alias db2='db2cmd -i -w -c db2'

It’s imperfect (it works only for isolated commands or scripts), but it’s great for quick commands.

Not Listening

DB2 9.7 seems to have brought back an old problem: DB2 servers that are deaf to TCP/IP connections.  In today’s action, my JDBC connections on a new Windows DB2 9.7 configuration were hanging for several minutes and then eventually failing with a “connection refused” error.  Since such problems are usually due to configuring the wrong port, I verified it:

db2 get dbm cfg | find "SVC"

type %WINDIR%\system32\drivers\etc\services | find /i "db2"

I had specified the right port: the standard 50000 default for the first instance.  And all the standard DB2 tools connected to the database just fine, including CLP and CLI connections.  After checking all the wrong places, I finally looked to see if DB2 was actually listening on this port:

netstat -an | find "500"

telnet localhost 50000

It wasn’t: I got no relevant matches on the netstat and telnet gave the “connect failed” error.  I checked DB2COMM and found it didn’t include TCP/IP.  Weird.  So I quickly remedied that with:

db2set DB2COMM=tcpip & db2stop & db2start

And verified with:

db2set -all db2comm

I’m not sure why some recent new DB2 9.7 installs didn’t have TCP/IP connections enabled by default.  Usually, it’s the setup/db2setup default.  But it’s something I’ll watch for, with a quick solution next time.

Implicit Casting

Co-worker Bob and I were surprised to find today that DB2 9.7 accepts SQLs like the following:

select count(*) from syscat.tables where tbspaceid = '0'

In all prior DB2 versions, comparing an integer column with a string constant (or anything similar) would yield the error:

SQL0401N  The data types of the operands for the operation <operator> are not compatible or comparable.

The source of this new flexibility is DB2 9.7’s implicit casting, yet another feature added in the quest for greater Oracle compatibility.  The DB2 documentation clumsily introduces it:

Version 9.7 introduces support for implicit casting. Implicit casting is the automatic conversion of data of one data type to data of another data type based on an implied set of conversion rules.

Automatic type conversions (yea, duck typing even) are a good thing, but require care when arriving late to an installed base.  We developers are now on DB2 9.7, but have to support DB2 versions as far back as 9.1.  It’s now too easy to write SQL that will “work on my machine” but not on the older versions of DB2 that our customers run.
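SQLite applies similar implicit conversions through its type affinity rules, so stdlib sqlite3 can illustrate the behavior (an analogy only — this is not DB2) that now succeeds on 9.7 but fails on 9.1:

```python
# DB2 9.7's implicit casting resembles SQLite's type-affinity
# conversions: comparing an integer column to a string literal works,
# because the string is implicitly converted to a number.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (tbspaceid integer)")
conn.executemany("insert into t values (?)", [(0,), (0,), (3,)])

count = conn.execute(
    "select count(*) from t where tbspaceid = '0'").fetchone()[0]
print(count)  # 2 -- the string '0' matched the integer rows
```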

Once everyone is at DB2 9.7 and higher, this will be a non-issue.  IBM’s usual policy with breaking DB2 compatibility is to provide registry settings or other means of disabling the change, often leaving it disabled by default for some time.  But that doesn’t seem to be the case here.  I couldn’t find a mechanism to turn off implicit casting for now; if you know of one, please pass it on.

More DB2 Snippets

To paraphrase David Byrne, you may find yourself behind the wheel of a large database in another part of the world asking, “how do I work this?”  Such was the case today with a customer database suffering a performance hiccup.  It had the symptoms of improper tablescans, but there was no time to enable snapshots, run explains, or do detailed analysis.  It needed a quick sanity check of statistics, indexes, and organization.  A few basic commands flushed out the problem: a required index was missing.

I’ve recorded various common DB2 Snippets here before, but had not yet included today’s common mantras.  So, I’ll extend my DB2 pastebin with these useful snippets…

Check for missing indexes

This will give you a quick index inventory to compare with your expectations.  Substitute for MYSCHEMA and MYNAMES.

select tabname, indname, colnames from syscat.indexes where tabschema = 'MYSCHEMA' and tabname like 'MYNAMES%' order by tabname

Check for recent runstats

See when statistics were updated for your tables, if at all.  Substitute for MYSCHEMA.

select '   ' || substr(rtrim(tabschema) || '.' || tabname, 1, 40), stats_time from syscat.tables where type = 'T' and tabschema = 'MYSCHEMA' order by stats_time

If statistics are out of date or missing, then runstats is the obvious remedy.  If you only have CLI/ODBC access (no CLP), you can call runstats via the admin_cmd proc; substitute for MY.TABLE and tweak the statistics options if needed:

call admin_cmd('runstats on table MY.TABLE with distribution and indexes all')

Generate a runstats script

Don’t let outdated statistics fool you more than once – automate and change-proof the process.  One option is to enable DB2’s auto runstats, but that has its limits.  Another common solution is a cron job or scheduled task kicking off a fixed set of runstats commands.  Good idea, but you must update the script each time you add a table.  You can go one step further with a script to generate the runstats commands for all tables matching a certain criteria, and then call the generated script, all from a cron job.  Something like this:

db2 -x "select 'runstats on table ' || rtrim(tabschema) || '.' || tabname || ' with distribution and indexes all allow write access;' from syscat.tables where type = 'T' and tabschema in ('MYSCHEMA') order by tabname with ur" >> %GeneratedScriptName%

db2 -tvf %GeneratedScriptName% >> %GeneratedScriptName%-out.txt

You may want to reduce runstats frequency for the more static tables.  If so, add name, time, and other criteria like the following:

db2 -x "select 'runstats on table ' || rtrim(tabschema) || '.' || tabname || ' with distribution and indexes all allow write access;' from syscat.tables where type = 'T' and tabschema in ('MYSCHEMA') and tabname like 'MYNAME%' and ((stats_time is null) or (stats_time < current timestamp - %SinceMinutes% minutes)) order by tabname with ur" >> %GeneratedScriptName%
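The generate-then-run pattern itself is portable.  A small sketch with stdlib sqlite3, using ANALYZE as a stand-in for runstats and sqlite_master for syscat.tables (table names are made up):

```python
# Generate-then-run: query the catalog for matching tables, emit one
# statistics command per table (ANALYZE standing in for runstats),
# then execute the generated batch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table orders (id integer)")
conn.execute("create table order_items (id integer)")
conn.execute("create table audit_log (id integer)")

# Generate: one ANALYZE per table whose name matches the criteria.
rows = conn.execute(
    "select 'ANALYZE ' || name || ';' from sqlite_master "
    "where type = 'table' and name like 'order%' order by name").fetchall()
script = "\n".join(r[0] for r in rows)
print(script)
# ANALYZE order_items;
# ANALYZE orders;

# Run the generated script -- add audit_log later and it's picked up
# automatically, no script edits required.
conn.executescript(script)
```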

Perhaps you’ll find these incantations (or variations of them) useful as the days go by.

DB2 Second Steps

I’ve written about DB2 9.7 install issues, such as here and here, but neglected to record one that bugged a co-worker today.  You log into Windows after installing and are greeted by a popup message:

SQL5005C System Error. A system error, probably an I/O error, was encountered while accessing a configuration file.

It typically comes from DB2 First Steps (db2fs.exe), but that may not be obvious.

The likely cause is that you enabled DB2’s extended Windows system security (the new default), but did not add your user ID to the DB2ADMNS or DB2USERS group.  The DB2 installation program tells us to do that, but if you didn’t do the install, you may have never seen that advice.

Just add your ID to the group and problem solved.  And maybe turn off the db2fs auto-start so it doesn’t annoy you each time.

Default This

It’s a common problem with new Windows DB2 installs on corp-rat machines: you try to create a new database or do some other such sysadm or secadm operation and all you get in return is:

SQL1092N <User> does not have the authority to perform the requested command or operation.

It stems from using a domain ID but not having access to the corp-rat directory (Active Directory, LDAP, whatever), either because you’re disconnected or because group enumeration is disabled.  I’ve written about it here (in the comments), and so have many others, including the IBM DB2 Program Director’s thorough treatment, How to Develop DB2 Applications on an Airplane.  The fix is quick and simple:

db2set DB2_GRP_LOOKUP=local,TOKENLOCAL
Followed by a DB2 restart (db2stop & db2start) for good measure.  It can be verified with:

db2set -all | find /i "grp"

It’s familiar (classic, even) by now, but all too easy to forget: a co-worker who knew it well forgot about it during today’s work.  So I’m thinking this should either be a DB2 default, or added as an install-time option/reminder.  This is right up there with db2empfa, db2_skipdeleted, db2_awe, and other such knobs where the behavior was so commonly preferred it became the new default.

Big and Slow

All too often, folks will take a broad-brush “bigger is better” approach when sizing, configuring, and tuning databases.  After all, if 5 is good, 10 must be better, right?  Problems from this brute-force approach can show up quickly with “not enough memory” error messages, excessive paging, and slow start-up. But sometimes bad results can be delayed or indirect.  DBA “coolwinds” points out how high CPU utilization can result from an oversized tempspace bufferpool:

If a temporary tablespace has a very large bufferpool, and temporary tables are often dropped, CPU usage might be high. (Even if your application does not explicitly drop temporary tables, DB2 might still create and drop temporary tables internally in normal query processing.)

He explains why (dirty list scanning) and how to correct it (create a smaller bufferpool dedicated to the temporary tablespaces).

Tools make it easy to view bufferpool sizes, but you can always query the system catalog to check:

select tbspace, datatype, t.pagesize, bpname, npages
from syscat.tablespaces t, syscat.bufferpools b
where t.bufferpoolid = b.bufferpoolid
order by datatype, tbspace

Right-sizing can be a challenge. Fortunately, for DB2 9.1 and higher, you can use automatic sizing for many things and let STMM do the work.  When overriding, it helps to ask autoconfigure for its opinion.

DB2 group_concat and other figments

For all that DB2 has to offer, there remain some features found in other relational databases that DB2 still lacks.  Multi-version concurrency control (MVCC) and Oracle-style optimizer hints (the /*+ … */ comments) often top the list, but there are others.  I’m reminded of these when I find myself recommending something out of habit before I remember DB2 doesn’t have it (rhyme intended).

This happened today with a request for DB2 SQL to build an export command with an explicit column list (no “select *”).  The simple answer was quick:

select 'export ' || group_concat(colname) || ' from ' || tabname from syscat.columns where … group by tabname

Unfortunately, DB2 doesn’t have group_concat, so I went googling for an alternative.  Some folks offered group_concat, concat_all, and rowtocol implementations as UDFs and stored procedures, while others suggested really ugly recursive SQLs or UNION/CASE alternatives.  None were very general.  One IBMer figured out a way to use DB2’s xmlagg function to hack a solution, which I applied as follows:

select 'export ' || substr(replace(replace(xmlserialize(content xmlagg(xmlelement(name "COLNAME", colname)) as varchar(1000)), '<COLNAME>', ', '), '</COLNAME>', ''), 3) || ' from ' || tabname from syscat.columns where … group by tabname

It works, but it’s uuuugly!
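For contrast, on engines that do have group_concat the one-liner really is that simple.  A sketch with stdlib sqlite3, using its pragma_table_info in place of syscat.columns (table name hypothetical):

```python
# What the group_concat version looks like on a database that has it:
# sqlite3's pragma_table_info plays the role of syscat.columns here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table customer (id integer, name text, email text)")

sql = ("select 'export ' || group_concat(name, ', ') || ' from customer' "
       "from pragma_table_info('customer')")
print(conn.execute(sql).fetchone()[0])
```

One aggregate call instead of the xmlagg/replace/substr gymnastics above.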

I can understand some hesitation to add group_concat to DB2; for example, it’s very easy to exceed column limits.  And architectural differences mean that DB2 doesn’t always need to “clone” everything that other DBs have (there are other ways to do the same thing).  But it’s often helpful to keep up with the database Joneses in one area while you’re passing them in others.

Recent DB2 enhancements (currently committed semantics, literal replacement, optimization profiles, etc.) have narrowed many of these gaps, but some remain to be closed entirely.  With the continued rapid addition of new DB2 features, perhaps we’ll see that soon.