Monthly Archives: March 2010

Don’t Get Fooled Again

Surprisingly, I caught a little flak for this comment I made several weeks ago:

“If all you need are LOBs, you probably don’t need a relational database.  And if you need a relational database, you probably shouldn’t use LOBs.”

The surprising part was the question raised: “why would you ever use a relational database these days?”  The argument rested on some familiar and simple high-level reasons (trends, having to write SQL, taking the car apart to put it in the garage, etc.), and came from a NoSQL fan.

This objection seemed anachronistic to me, since we just finished migrating one of our large products from a non-relational model to using DB2 for all data storage.  Why would we do such a thing?  Lots of reasons, but it only took one: paying customers wanted it.  Further, data mining has become so popular and successful lately (on the wings of relational databases) that it’s hard to imagine tossing aside that shiny new benefit.

The NoSQL revolution has taken on a reactionary bent (just consider the name), which is odd for a new movement.  Chesterton reminds us that “the business of progressives is to go on making mistakes, and the business of the conservatives is to prevent the mistakes from being corrected.”  NoSQL advertises itself like a progressive movement, but actually falls to the right of conservative.  BigTable, Cassandra, and HBase are no newer concepts than the ancient things grandpa tells war stories about: flat files, hash files, partition stores, CAM, BDAM, ISAM, etc.  So blindly applying NoSQL is making the new mistake of rejecting well-established corrections to old mistakes.

When making architectural decisions like whether to use a relational database or not, I often start with a blank 8 1/2 x 11 sheet of paper, turned landscape.  At the top I write the relevant business requirements (real ones, not imagined ones).  I then do a simple left-and-right table of pros and cons.  At the bottom I write areas where architectural prototypes are needed, if any.  This, of course, helps me weigh the options and make a decision, but it also means I “own” the cons.  If I decide to use an RDBMS, I accept the costs.  If I decide not to use an RDBMS, I willingly reject the benefits it offers with eyes wide open.

Yet the war of words for and against NoSQL rages on, often without fully or correctly acknowledging the cons, or the simple fact that you don’t do both at the same time.  Many problems are best solved with a relational database, and many are best solved without one.

In the early 90s, I would joke that there existed a parallel universe where client/server had fallen out of fashion and the new, cool buzz was mainframes and 3270 terminals.  Funny that we soon joined that parallel universe when web apps and cloud computing became our trend.  Andrew Tanenbaum notes that, while recapitulation doesn’t really happen in biology, computer history does rightfully repeat itself.  Old solutions should be applied to new technologies; at the very least it keeps consultants gainfully employed.  Let the pendulum swing, but truly grok all the pros and cons as history repeats itself.

Unlike Ted Dziuba, I don’t want NoSQL to die.  By definition, it can’t, and that’s arguing a strawman anyway.  I just want it to be used where it best fits.  And the same goes for relational databases.  Repeat after me: “no golden hammers; pick the best tool for the job.”  Just don’t be a lemming.  And don’t get fooled again.

The Friday Fragment

It’s Friday, and time again for a new Fragment: my weekly programming-related puzzle.  For some background on Friday Fragments, see the earlier post.

This Week’s Fragment

I “borrowed” this week’s fragment from a recent Car Talk puzzler.  I prefer original ones, but since this one follows last week’s well, I thought I’d re-use it.

You have 13 bottles of wine, and have been tipped off that one of them is tainted with a deadly poison: so deadly that one drop can kill a person or large animal within 24 hours.  With the help of four lab rats, how can you determine which one contains the fatal potion within one day?

If you want to “play along”, post a solution as a comment or send it via email.  To avoid “spoilers”, simply don’t expand comments for this post.

Last Week’s Fragment – Solution

“If God had intended man to have computers, he would have given him 16 fingers.”

Last week’s puzzle reminds us that there are three kinds of people in the world: those who understand binary, and those who don’t.  Your job was to write code to dictate which pile each of 52 playing cards should land in when sorting in multiple passes with only four piles.  It was a spin on an old fine-sort algorithm, with base conversion and compression, used for sorting checks and other things.

I received some excellent responses from base conversion pros like Joe Richardson and Stephen Ake.  I’ll publish Stephen’s code, since I like his better than mine:

pileNumber := ((card / (pileCount raisedTo: (eaPass - 1))) ceiling) \\ pileCount.
(pileNumber = 0) ifTrue: [pileNumber := pileCount].

If you’re in that second (third?) group, who don’t keep their checkbooks in hex and are happy with the 10 fingers God gave us, think of it the following way…

If I let you use 13 piles on the first pass, then you could easily do it in two passes.  In the first pass, sort by rank (13 piles), and then stack those up and sort by suit (4 piles). You could treat the suit and rank of each card as a two-digit number, with the rank as the low-order “digit” and the suit as the high-order “digit.”
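That encoding can be sketched in a couple of lines.  Here’s a hypothetical Ruby helper (card_value is my own name for it; suit and rank are 1-based):

```ruby
# Treat the suit as the high-order "digit" and the rank as the
# low-order one, giving each card a single number from 1 to 52.
def card_value(suit, rank)
  (suit - 1) * 13 + rank
end
```

So the ace of the first suit is 1, and the king of the fourth suit is 52.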

But Johnny didn’t have room on his Toy Story 2 lunchbox for 13 piles on the second pass, only 4.  So he had to convert the suit and rank of each card to a base 4 number.  That’s the base conversion part, which is important when the number of available piles (or pockets on a sorter) doesn’t match the original base of the number.  That’s because you want to make the best use of all piles/pockets on all passes.

When sorting by things like account numbers on checks, there can also be gaps in the numbers, which wastes pockets and passes.  To avoid this problem, you assign “aliases” to each number.  For example, if the first two account numbers are 13500001 and 13511115, renumber the first account as “1”, and the second as “2”, and sort on the aliases.  That’s the compression part.
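A quick sketch of that aliasing step in Ruby (the third account number here is made up for illustration):

```ruby
# Assign dense aliases (1, 2, 3, ...) to sparse account numbers,
# so no piles/pockets are wasted on the gaps between them.
accounts = [13511115, 13500001, 13594420]
aliases = {}
accounts.sort.each_with_index { |acct, i| aliases[acct] = i + 1 }
# Sort on aliases[acct] instead of the raw account number.
```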

This is easier for ten-fingered humans to follow if we use 10 piles to sort cards numbered 000 through 999.  The pile number for the first pass would be the digit in the ones position, then the tens position, and then the hundreds position.  That’s the 10^0, then 10^1, then 10^2 position, or the pocket count raised to one less than the pass number.  In code, a modulus (remainder) division “strips off” this digit for use.
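Putting it together, the pile computation might look like this in Ruby (pile_number is my own hypothetical name; it mirrors Stephen’s Smalltalk one-liner, including the adjustment for 1-based pile numbers):

```ruby
# Pile number (1..pile_count) for a 1-based card value on a given pass.
# Each pass extracts one "digit" of the card value in base pile_count;
# a zero remainder maps to the last pile.
def pile_number(card, pass, pile_count)
  pile = (card.to_f / pile_count**(pass - 1)).ceil % pile_count
  pile.zero? ? pile_count : pile
end
```

Dealing the cards into piles by pile_number and restacking, once per pass, leaves the whole deck in order after three passes.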

Kudos to Joe and Stephen for also noticing that it can be done in only three passes, not four.  That’s because 52 in base 4 is a three-digit number.  I wasn’t trying to be tricky; I just forgot to edit after deciding to give Johnny 4 piles instead of 3.  And kudos to Stephen for solving it in Smalltalk (plugging into the scaffolding code I provided), since Smalltalk throws in some extra twists: 1-based (not 0-based) arrays to adjust for, and a less-common exponentiation operator (raisedTo:, not ^).  You can find Stephen’s post in last week’s comments; Joe sent me a funny response via email back on March 22; here’s an excerpt:

Little Johnny is always one step ahead. He can sort the deck of cards using only 3 passes. Since Johnny is a smart kid, he assigns the numbers 1 through 13 to the cards of the first suit, 14-26 to the second suit, 27-39 to the third, and 40-52 to the cards of the fourth suit. He does this in his head with no need of a sort pass. He then converts this decimal number to base 4 (since 4 piles are used). Knowing that 52 base 10 = 310 base 4, a three-digit number, Johnny knows that it will take at most 3 passes to sort the cards. He might get lucky and sort the cards in only one or two passes. The first sort pass uses the rightmost digit of this assigned base 4 number. The second pass uses the middle digit and the third pass uses the leftmost digit.

Little Johnny has time left over to once again ask “Are we there yet?”

Finally, I neglected to mention that Luke (my middle son) solved last week’s cryptogram.  I often have to decode his (teen-speak) sentences, too.

Impersonating Better Security

I got a question today from a co-worker who was painted into a corner trying to access a database he had restored on his Windows development machine.  He stumbled over DB2 9.7’s new security twists, such as not having dbadm authority by default.  I rattled off my familiar quick fix:

db2 connect to <dbname>
db2 grant dbadm on database to <userid>

However, his default Windows user ID didn’t have secadm or sysadm authority, so that failed with an error.  So, I had him impersonate the one that did:

runas /user:<adminuser> db2cmd

Repeating the grant command from this new command shell did the trick.  It could have also been done with:

db2 connect to <dbname> user <adminuser> using <adminpassword>

And so it goes.  No matter how refined security policies become, they can usually be circumvented with a little impersonation.  For example, think of how many times we quickly and mindlessly sudo under Ubuntu.  In this case, impersonation was a fast route to giving a developer the access he should have had by default anyway.  Today’s technology cannot solve the impersonation problem, but sometimes we consider that more a feature than a bug.

Checkin’ Out By Generation

Unlike some others, I don’t get excited about source code management (SCM) systems, a.k.a. version control systems (VCS).  Unless it’s your job to build such things, they’re a means to an end, not an end in themselves.  For me, they’re much like circular saws: I’d better have one when I need it, and it had better work and work well.

I was reminded of that during some merging madness today.  Nearly all of our team’s work is done in source code managed by a very good VCS, but a few files are outside that.  We clashed on those files, so merging and managing them caused far more trouble than it should have.

So it brought source control out of my subconscious, and I thought about how different SCM approaches seem to have tracked right along with society’s philosophical trends.  Consider:

– 70s – The Me Decade.  The move from punch cards and shared readers to private online storage with checkout systems allowed each to “do his own thing” and “discover himself.”  Folks started digging groovy things like SCCS and IDSS.

– 80s – The Mine Decade.  Materialism rules, and he who dies owning the most files wins.  The “personal computer decade” had folks keeping their files to themselves and brought us things like PVCS (“lock early, lock often, unlock never”), and ENVY (“that’s my class, you can’t release it!”).  Gag me with a spoon!

– 90s – The Jam Decade.  By the late 80s (even before Pearl Jam hit it big), this pessimistic locking thing started getting really old, and the collaborative “web decade” saw SCCS revived as RCS and then CVS, with optimistic checkout as the default and sharing and merging expected.  Continuous integration became cool, as developers could know wassup with their homeys.  Word, dat’s phat!

– 00s – The Whatever Decade.  Early in the decade, Subversion began to loosen things further.  And soon, in postmodern fashion, everyone wanted his own version of the truth.  New distributed version control systems (DVCS) like Git, Mercurial, and Monticello followed, elevating the old idea of a change set to a legitimate stream of development.  This “truth depends on who you ask” approach was fast and flexible.  That is, until integration/build time, when only one right way must prevail, and everyone must merge onto the narrow path.  Often it would come down to trust: use the code from your sweet peeps, and ignore the n00bs.

So what will the 10s be known for in source control?  I suspect DVCSes will continue to grow, led by ever more open source projects using them.  Since this is becoming the “borrow and bailout decade”, perhaps we’ll see seamless integration with shared repositories of common code fragments.  Check back in 2020 and I may write about it.  That is, if I’m not so busy sawing away that I don’t really notice.

The Friday Fragment

It’s Friday, and time for a new Fragment: my weekly programming-related puzzle.  For some background on Friday Fragments, see the earlier post.

This Week’s Fragment

While the code required to solve this week’s puzzle is small, it requires a little setup.  I’ll do it in story form.

Little Johnny is riding along in the toddler seat at the back of Mom’s minivan, secretly playing with Dad’s new deck of cards which he is not allowed to touch.  Mom gives him the long-awaited news, “we’re almost there”, and he panics: he knows he must put the cards back in the box just the way he found them.  That means just like new, sorted in suit and rank order.  Little Johnny’s stubby fingers aren’t very dexterous, but he knows he can sort them by laying them out in piles and then re-stacking the piles until they’re ordered.  So he grabs his Toy Story 2 lunch box, which gives him room for 4 piles.  He has to hurry, but fortunately he knows a way to do it in only four passes of laying out cards into piles and stacking the piles.  Johnny’s a smart little toddler, because he knows which pile to put each card in for each pass to make this all work.

Are you smarter than a toddler?  Can you write the code to determine the pile number for each card on each pass so that they’re all sorted in the end?

To help out, I’ll post a comment with some scaffolding code.  If you want to use it, just fill in the missing line.

This is really a technique that banks have used for years to sort large numbers of checks into various orders (account number, destination, amount, etc.) on machines that have anywhere from 5 to 35 available pockets.  It’s called a fine sort with compression and base conversion, which sounds really fancy but actually boils down to one or two lines of code.

If you want to “play along”, post a solution as a comment or send it via email.  To avoid “spoilers”, simply don’t expand comments for this post.

Last Week’s Fragment – Solution

Last week’s puzzle was to solve the following cryptogram:

Si spy net work, big fedjaw iog link kyxogy

This cryptogram is the dedication in the excellent book, Network Security, by Kaufman, Perlman, and Speciner.  Nothing fancy here: it’s just a simple substitution cipher.  So you can solve it by just sitting down with a pencil and paper and plugging away at it, building up the substitution table as you go.  Classic cryptanalysis starts with trying frequent letters (like the “nostril” combination, N-O-S-T-R-I-L, familiar to Wheel of Fortune viewers), common patterns (ad, in, ing, ou, ur, etc.), and common words (to, the, and for are a great start for this puzzle).  If you like such puzzles, try the cryptograms web site.
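Pencil and paper really is all you need, but the frequency-counting first step is trivial to sketch in code as well (a hypothetical Ruby snippet, not part of anyone’s submitted solution):

```ruby
# Count letter frequencies in the ciphertext, most common first:
# the classic starting point for attacking a substitution cipher.
ciphertext = "Si spy net work, big fedjaw iog link kyxogy"
freq = ciphertext.downcase.scan(/[a-z]/).tally.sort_by { |_, n| -n }
# The most frequent ciphertext letters are the first candidates for
# common plaintext letters like e, t, a, and o.
```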

The solution is:

To the bad guys, for making our jobs secure

That’s a clever dedication for a book on computer security, huh?

Starters and Finishers

Mariano Rivera is arguably the best closer of all time, with credentials that include a 0.77 ERA across 76 post-season games.  Yet with a 5.94 ERA as a starter during his ML rookie season, he was only allowed to open ten games.  Mo can’t start, but he can most definitely finish.  Even this Braves fan must acknowledge that.

Starters and finishers also clearly exist in the world of software development.  I was reminded of that this week when a developer proposed changing a few accelerator keys (shortcuts) on menu picks, and a detailed dialog and email chain followed.  Since I originally designed and built this system, I often either specify or review proposed changes.  But in this case, I honestly didn’t care, and refrained from commenting.  Yet, I’m eternally grateful for those who do care about such details.

I’ve had the pleasure of working with many developers who excel as finishers.  They’re not plagued by the “not invented here” attitude that often tugs at starters like me.  Rather, they’ll take something over, make it their own, and tirelessly extend and support it.

Understanding “starter” and “finisher” personalities can help when staffing a well-balanced team.  Martin Fowler describes starters as having the enabling attitude, while finishers possess the directing attitude.  Starters make good architects, business analysts, designers, and trailblazers.  Finishers make good project managers, maintainers, and support engineers.  Nearly all software organizations need a good mix of both.

But we should avoid caricatures.  This certainly doesn’t mean that starters never finish anything and finishers never start anything.  These labels shouldn’t excuse procrastination (by finishers) or incomplete work (by starters).  Starters should be required to stay with a system until it reaches maturity, and finishers should be brought on early enough to understand its design and rationale.  I’ve started about a dozen new large systems (and a few dozen smaller systems and feature packages), and have enjoyed staying with the products through GA of versions 1.x, 2.x, and 3.x.  But my fondest memories are usually of early milestones (internal and alpha versions like 0.1 through 0.8), although these required a lot of round-the-clock hard work.

These personalities should also not be confused with quality measures.  It is simply wrong to excuse poor quality from a starter because “a finisher will clean it up.”  Saves are exciting in baseball, but they make for poor process in software.  Yet many finishers do have the extra perseverance to close out the final details.  I’ve known finishers who code more bugs than the starters, but even if it takes ten tries to get it right, they’ll doggedly clean them up.

The symbiotic benefits between starters and finishers are a good thing in software development, and should be encouraged, along with a healthy respect for each other.  Just ask Mo.

The Friday Fragment

It’s Friday, and time for a new Fragment: my weekly programming-related puzzle.  For some background on Friday Fragments, see the earlier post.

This Week’s Fragment

Solve the following cryptogram:

Si spy net work, big fedjaw iog link kyxogy

This cryptogram is the dedication in the excellent book, Network Security, by Kaufman, Perlman, and Speciner.  Nothing fancy here; it’s just a simple substitution cipher.  So don’t bother writing or downloading a program to crack it, just sit down with a pencil and paper and plug at it, building up the substitution table as you go.

If you want to “play along”, post a solution as a comment or send it via email.  To avoid “spoilers”, simply don’t expand comments for this post.

Last Week’s Fragment – Solutions

Last week’s puzzle was this:

Write code to create a string of “fill data” of a given length, containing the digits 1-9, then 0, repeating.  For example, if given a length of 25, return “1234567890123456789012345”.  For a length of 1, return “1”; for a length less than 1, return an empty string.

My son did it in Python, and a co-worker pointed me to a simple way he uses atAllPut: for fill data.  Here are some solutions, in three languages I often use:

In Smalltalk:

(1 to: length) inject: '' into: [ :str :ea | str, (ea \\ 10) printString ]

If a few extra temporary objects are a concern, do this:

ws := String new writeStream.
1 to: length do: [ :ea | ws print: (ea \\ 10) ].
ws contents

Here it is in Ruby:

(1..length).to_a.inject('') { | str, ea | str << (ea % 10).to_s }

Although perhaps there’s a clever way to use join.

And, finally, in Java:

String s = "";
for (int i = 1; i <= length; i++)
    s += i % 10;

Easy enough, huh?  All it takes is a fragment.

Can’t Get There From Here

I got a question today from a friend who was trying “explain stmtcache all” on DB2 LUW and wondered why it was unrecognized.  He had stumbled across this command while looking up the explain syntax in the DB2 online documentation and was lured in by its promises.  What he didn’t notice was that this was in the DB2 for z/OS documentation, and that this command isn’t available in LUW.

Don’t get me started.

In the olden days, DB2 on the server and DB2 on the mainframe were two very different products, and no one expected much compatibility.  It took years after the OS/2 EE Database Manager rewrite for DB2 LUW to catch up to DB2 for z/OS’ level of sophistication.  But DB2 LUW has come a very long way very quickly in recent years, and has adopted most of the important features previously only available on the mainframe.  I’ve gotten pretty used to this.

But I’ll often hear of all this cool stuff in new versions of DB2 for z/OS, like in the version 10 beta that rolls out tomorrow.  I expect new features to eventually appear in a subsequent LUW version, but sometimes they never do.  Explain stmtcache all is one such example.  In this case, architectural differences may mean that it will never come down from on high, but it would be nice to have in LUW.  Yes, I can loop through snap_stmts or top_dynamic_sql and run explain all for each of the stmt_texts (I have scripts that do this), but this is slow.  And changing the current explain mode is another option, but it’s rarely available and rarely efficient.  It’s a dynamic SQL world now, and the kinds of trends you can spot by mining the snapshot tables (and presumably the dsn_statement_cache_table) are just too useful.  So give us the best tools for it, even if we have to borrow them from the big iron.

Another related example is the stated plan to add access plan stability features for dynamic SQL (I don’t need it for static SQL, so I can skip the current PLANMGMT capabilities).  But, alas, this is currently only being planned for… where?  DB2 for z/OS, of course.

Johnny Cash-ing In

There’s been a lot of buzz lately about the recent iTunes 10 billionth song winner: a Johnny Cash tune purchased by nearby Woodstock, GA resident Louie Sulcer.  One of our local papers, the Cherokee Ledger-News, covered the story like none other, with some great quotes.  Like this: “Sulcer said he picked up the phone and a man said ‘Congratulations, Lou, this is Steve Jobs.’ Sulcer sarcastically said, ‘Sure it is.'”  And the photo caption: “Now, he has to figure how he’s going to spend a $10,000 iTunes gift card he won in an Apple contest he knew nothing about.”

Just love small town candor.  You couldn’t make this stuff up.

A Berry of a Race and More

This morning’s Berry College 10K race (also with 5K, 1 mile, and 1/2 Marathon) could not have been better.  The venue was both ideal and idyllic, and as the world’s largest college campus there was plenty of room to lay out some scenic routes.  The tree-lined roads and trails carried us by pastures with deer, beautiful stone college buildings, and even a bit of snow still around in places as icing on the cake.  It was very well organized and staffed, making the handling of about 2,000 runners flow like clockwork.  By running the 10K, I dropped into my comfortable pace (a 54:14 finish), rather than too-fast paces I often try for 5Ks.  Overall, it was a top-notch event, the way all races should be.

Afterward, we enjoyed warming temps and exciting soccer. Lydia had a great game in our season opener with three goals, some excellent crosses and setups, and a “textbook” corner kick: she lifted it and it dropped right in front of the goal for her friend to finish.  Luke’s first regular season NASA game was a competitive one, ending in a tie.  Overall, a great Saturday!

The Friday Fragment

I’ve always been a sucker for a good (yet simple) puzzle: crosswords, Sudoku, the Car Talk Puzzler, PC-lint’s Bug of the Month, you name it.  And the same goes for programming and logic puzzles.  I’m not talking about k-coloring, circuit satisfiability, or RSA factoring: I mean simple problems with simple solutions.  In fact, the simpler the solution, the better.  After all, I’m a sucker for elegance, too.

And elegance is sorely needed.  I’ve stumbled across more than my share of clumsy, confusing code: functions and methods that go on for dozens of lines yet are easily rewritten as just a few straightforward statements.  You’d think the programmer was paid by the line of code.  So, to do my small part to remedy that, I’ve often given quick assignments to my eldest son and other suitable victims.  Nothing fancy, just simple problems that can be summed up in a couple of sentences and that exercise standard techniques: loops, arrays, iterators, collections, streams, strings, recursion, induction, closures, etc.  I did that just recently, and it gave me the idea to try it here, if for no other purpose than to provide fodder for other beginning programmers.

So, I’m introducing my Friday Fragment: a weekly programming, logic, or related problem.  The problem won’t be hard; rather, it’ll usually be something quite common, and often a chance to try a “kernel” of a problem in multiple languages or with multiple techniques.  And it’ll truly be a fragment: the problem and solution will be at most just a few sentences (or lines of code).  Each weekly post will include solution(s) for the previous week’s fragment and a new problem.  This’ll go on until I find out that it’s harder than it seems.

If you want to “play along”, post a solution as a comment or send it via email.  To avoid “spoilers”, simply don’t expand comments for the post. For programming problems, pick the language of your choice.  That could add interest, since language wars can be great sport.

This Week’s Fragment

Write code to create a string of “fill data” of a given length, containing the digits 1-9, then 0, repeating.  For example, if given a length of 25, return “1234567890123456789012345”. For a length of 1, return “1”; for a length less than 1, return an empty string.

The idea for this came from some recent code I wrote, as part of a “test mode” to generate dummy data, up to a maximum field length.  This was so that XML output files could be fully validated against a rich schema (XSD), even if many of the source data fields were missing.  I used all digits for numeric fields, and the modulus positions were helpful for demonstrating field lengths.

“See” you in a week, when we’ll frag this one and add another.


The decision came down yesterday: our latest Agile/Scrum project will have to produce some rigid waterfall SDLC artifacts.  We’ve enjoyed a “bureaucracy break” for awhile, choosing things like simple wiki pages and working code to manage and communicate designs, test plans, sprint backlogs, and the like.  But now we’ll have to use some (verbose and redundant) standardized Word document templates for requirements/SRS, designs, code reviews, test plans, test cases, etc.

It isn’t the case that waterfall is “all bad” and agile is “all good”; rather, it’s easy to forget the motivation for each.  First, keep the paying customer happy, then place the emphasis on elegance and “the simplest thing that could possibly work”, and then let form follow function.  Whether the deliverable is a wiki page, Word document, Visio graph, UML diagram, EA model, xUnit, script, or code, pick the best tool for the job.  But don’t, for example, use outdated templates simply for standardization’s sake.  The mindset that emphasizes style over substance is the same one that measures lines of code rather than capability: increasing bulk without adding value.

I sometimes have to combine waterfall and Agile techniques to keep multiple stakeholders and competing interests satisfied.  And it doesn’t have to degrade to “agilefall” or “wagile”; there are ways to successfully blend the two.  For example, go ahead and develop a high level multi-sprint plan and stuff it in Microsoft Project if forced to.  But only define sprint goals and high level content in advance.  Don’t pretend you can predict detailed tasks six months in advance; rather, plan each sprint as you get to it and fluidly adjust the backlog as needed.   Go ahead and write that big spec and design document (to satisfy a contract or pre-paying customer) before starting your first sprint; just use a lightweight change control process to quickly include new discoveries.

I suppose “agiley waterfalling” is like running a chute: you plot out your course and pretend you know exactly how you’ll do it.  But the details of the run will be a wild blend of surprises and adjustments, with some boofs along the way.


Typical Wednesday.  One of my meetings for today was a conference call where I expected to remain mainly in listening mode.  So I attempted some of my own work during the call, making good use of the speaker and mute button.  I am male, but can usually handle two things at once.

But during the course of that one hour session, I got two cell phone calls, five urgent emails, IMs from four co-workers, and many more questions than expected from the call.  It came to a few dozen separate topics, far exceeding my “two plus or minus seven” capacity. Postponing non-urgent interruptions helps, but increases my backlog queue length.  Yet responding “on demand” often leads to dropped interrupts and increased interrupt latency.  It can be tough to maintain the balance.

Careful thoughtwork requires focus and concentration, and interruptions can quickly wreck that.  I was reminded of Larry Constantine’s classic essay, Irksome Interruptions.  In it, he wittily suggests that programmers adopt the nomenclature of CPU interrupt handling to deal with this problem.  It’s a geeky way to go about things, but handled this way, an “interrupt request” becomes short enough not to derail a thought process.  Want to chat with someone who might be busy at a task?  Just ask “IRQ?” (pronounced “irk?”).  If it’s a good time, they’ll “ACK” you, which buys a moment to “save state” before “servicing your interrupt.”  If it’s a bad time, they can just answer “NAK” (negative acknowledge), and you know to try later, with no harm done (no thought process wrecked).  And someone who has developed a habit of frequent interruptions might be labeled IRQsome.

Constantine wrote his essay before our work environments had so many more interrupt request lines to service.  Multiple IMs and chat rooms, multiple phones, and emails arriving at high rates add to classic face-to-face interruptions.  And the economy was better then (and engineer-to-workload ratio higher), so folks weren’t pulled in so many different directions.  Yet his suggestions are compelling, and I do exchange “irq”s, “ack”s, and “nak”s (over IM) with one co-worker who has also read the old essay.  The classic instant messaging “yt?” also works, provided one is willing to answer “n” or “no” when not prepared to be mentally there.