
Big and Slow

All too often, folks will take a broad-brush “bigger is better” approach when sizing, configuring, and tuning databases.  After all, if 5 is good, 10 must be better, right?  Problems from this brute-force approach can show up quickly with “not enough memory” error messages, excessive paging, and slow start-up. But sometimes bad results can be delayed or indirect.  DBA “coolwinds” points out how high CPU utilization can result from an oversized tempspace bufferpool:

If a temporary tablespace has a very large bufferpool, and temporary tables are often dropped, CPU usage might be high. (Even if your application does not explicitly drop temporary tables, DB2 might still create and drop temporary tables internally in normal query processing.)

He explains why (dirty list scanning) and how to correct it (create a smaller bufferpool dedicated to the temporary tablespaces).
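In command form, that fix might look something like this (a sketch; the bufferpool name, size, and tablespace name are illustrative, and the pagesizes must match):

create bufferpool temp_bp size 2000 pagesize 4k
alter tablespace tempspace1 bufferpool temp_bp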

Tools make it easy to view bufferpool sizes, but you can always query the system catalog to check:

select tbspace, datatype, t.pagesize, bpname, npages
from syscat.tablespaces t, syscat.bufferpools b
where t.bufferpoolid = b.bufferpoolid
order by datatype, tbspace

Right-sizing can be a challenge. Fortunately, for DB2 9.1 and higher, you can use automatic sizing for many things and let STMM do the work.  When overriding, it helps to ask autoconfigure for its opinion.
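For example, the following shows autoconfigure’s recommendations without applying any of them (the mem_percent and workload_type inputs are just illustrative):

db2 autoconfigure using mem_percent 60 workload_type simple apply none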

db2top

I usually don’t have (nor want) direct, real-time access to production databases, yet I’m sometimes asked to diagnose and correct performance issues with them.  So my typical MO is to collect periodic snapshot files (driven by cron or schtasks) and run them “offline” through a set of tools I’ve developed for postmortem analysis.  These tools of mine measure and report trends, calculate deltas, do performance calculations, and flag areas of concern.
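The collection side can be as simple as a cron entry like this one (database name, path, and interval are illustrative; note that crontab requires the % signs to be escaped):

# snapshot mydb every 10 minutes for later offline analysis
*/10 * * * * . $HOME/sqllib/db2profile && db2 get snapshot for all on mydb > /var/tmp/snap.$(date +\%Y\%m\%d\%H\%M).txt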

But occasionally, I do need to monitor databases in real time.  For this, I’ve relied on third-party tools, such as Quest Spotlight, the free and simple db2mon, db2pd and get snapshot commands, and even Windows perfmon against the DB2 counters.  All the while, I wondered why IBM didn’t provide similar real-time monitoring tools.

Top for DB2

But IBM recently jumped into the game with the db2top utility for Linux and AIX.  As the name implies, it’s like top for DB2.  It uses a text user interface (it’s curses-based) which, despite the retro-’80s feel, is a benefit.  That is, it’s super-fast (jumping between views is quick and easy), and, of course, it runs from a command shell, so no X required.
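Getting started is as simple as pointing it at a database:

db2top -d <database>

From there, single keystrokes flip between views, and quick help (h) lists them all.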

Db2top shows just about everything available in DB2’s snapshots, and then some.  For example, it automatically calculates things like bufferpool hit ratios, sort overflow percentages, average I/O times, async read percentages, average execution times, and lock chains.  In most views, you can use the arrow keys to scroll right and left to see additional details; I found it useful to increase the screen width to see more without scrolling.  But, of course, the real benefit is watching instantaneous updates and deltas.  This avoids having to “do the math” yourself by processing a series of snapshot files.

With Any More Features, We’d Run Out of Alphabet

There are some nice tie-ins, like the ability to grab an instantaneous snapshot (S) for the corresponding screen, and run explains (L) directly from the Dynamic SQL screen.  Both toss the results directly into vi for viewing.  You can even force applications directly from the sessions (applications) view (bwaaaahahaha!).  And navigation aids like column sorting (Z, z) and screen “freezing” (f) come in very handy when drilling in for closer analysis, particularly when there’s a lot going on.  Quick reference help (h) is available in case you forget these keystrokes.

The bottlenecks (B) screen is useful for finding the “problem children” – those agent(s) consuming the most of each type of DB2 resource: CPU, memory, I/O, locks, sorts, etc.  This makes for a good starting point when jumping into a crisis.

The tool also supports capturing snapshots over time, running analysis reports, exporting to Excel, and replaying results.  I haven’t done much with these features yet, but they may very well take the place of my home-grown snapshot analysis tools.
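If you want to experiment with them, the capture and replay modes are driven by command-line switches (a sketch; the database name is illustrative, and the name of the collected .bin file varies by platform):

db2top -d mydb -C -m 30
db2top -d mydb -f <collected snapshot .bin file>

The first collects 30 minutes of snapshot data into a .bin file; the second replays a collected file interactively.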

Just What You Need

Before this, if you told me you had “Spotlight for a command shell,” I’d think you were crazy.  But db2top comes remarkably close.  Overall, it has a lot going for it: ubiquity (it’s there with every DB2 9.x *ix install), speed, and just the monitoring tools I need (no more, no less).  There’s not much lacking, but the following additions would be nice:

  • Extensibility (or source code).  For example, I could then add my own performance calculations – things like log, victim, and threshold trigger ratios for page cleaners.
  • Tie-ins to event monitors, at least db2detaildeadlock.
  • A Windows version, even if it required Cygwin.

Yet I’d rather it lack a few of these flourishes than be laden with extra features that only add bulk or cost, or that just get in the way.  Too many other tools fall into that trap.

Sorting it Out

In prior posts, I described some of the benefits of DB2’s Self-Tuning Memory Manager (STMM), along with a few caveats.  One caution was around automated sort heaps, since sort requirements can often be unpredictable and confusing.

High-volume OLTP systems can, at times, experience heavy loads of small sorts or sort-like activity (hash joins, grpby’s, etc.).  It’s not uncommon for many of these to overflow (spill), or for a sub-optimal access plan to arise, even when more than enough memory is available for sorts.  Like the proverbial bad check writer, you’re left thinking, “I can’t be overflowing, I still have memory left!”

Sort overflow symptoms include high I/O and CPU utilization, poor response times, high sort overflow snapshot counts, long sort times, and large numbers of rows written on read-only SQLs like SELECTs.  Sort heap tuning is often one of the “last frontiers” in performance work (after fixing low-hanging fruit like SQLs, indexes, and bufferpools), and can be a stubborn one.

From my experience, most sort overflow problems in DB2 9.x fall into these categories:

  1. SQL.  In high-volume OLTP systems, it’s important to avoid sorting as much as possible.  Developers usually recognize that order by clauses require sorts, but can miss more subtle uses of the sort heap.  Count, distinct, and group by clauses also often require sort heap space, as do hash joins.  Statement snapshots and explains will reveal these (see the db2expln sketch after this list).  The first step should be to try to rewrite problem SQLs to eliminate the sorts.  But wherever that isn’t possible, try step 2:
  2. DDL.  Adding or changing indexes and clustering may eliminate sorts.  If an existing index is used but sorting still occurs, check to see if adding index or include columns will help, and verify that allow reverse scans is specified when needed.  Again, the only way to know for sure is to measure by snapshots and explains.  In some cases, MDCs can help, but check it first by “doing the math” or running db2advis.  Sometimes the problem is as simple as outdated statistics (among other things, outdated statistics can cause DB2 to request too small a sort heap), so make sure your runstats is occurring as frequently as needed.
  3. Configuration.  And here we come to the STMM benefits and caveats.  You should “go automatic” with sort heap sizes, but be prepared to monitor and adjust as needed.
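When in doubt about a particular statement, a quick explain will show whether it sorts.  Here’s a sketch (the database, table, and SQL are illustrative); look for SORT and GRPBY operators in the resulting plan graph:

db2expln -database mydb -statement "select custno, sum(amt) from orders group by custno" -terminal -graph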

STMM only tunes shared sort memory, so to enable self-tuning sort memory, you must:

  • Set the instance (DBM)-level sheapthres to zero (0).  You can verify this with: db2 get dbm cfg  | grep -i sheap
  • Set the database-level sheapthres_shr (total amount of sort memory) and sortheap (memory available for each sort) to automatic.  You can verify this with: db2 get db cfg for <database> | grep -i sort
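In command form (substitute your database name):

db2 update dbm cfg using sheapthres 0
db2 update db cfg for <database> using sheapthres_shr automatic
db2 update db cfg for <database> using sortheap automatic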

In some cases (such as when you’re experimenting and don’t want to wait on STMM’s tuning cycles), you may want to set an initial value for sortheap.  To do this, specify it alongside the automatic parameter, like so: db2 update db cfg using sortheap <size> automatic immediate.   But the initial value matters little after STMM “warms up” and then saves its settings.

When monitoring a database with sort overflow problems, keep an eye on the percentage of overflows (under 3% is a good target) and on the current sortheap and sheapthres_shr sizes.  You can view these with the snapshot monitors (db2 get snapshot for all on <database> | grep -i sort) and database configuration (db2 get db cfg for <database> show detail | grep -i sort).   But you may find, as I often have, that frequent, small overflows are occurring even when the sort memory areas have stretched to more than enough space.
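If you prefer SQL, the administrative snapshot views can do the overflow math for you (a sketch, assuming the DB2 9.x sysibmadm views are available):

-- overflow percentage at the database level since activation
select total_sorts, sort_overflows,
       case when total_sorts > 0
            then decimal((sort_overflows * 100.0) / total_sorts, 5, 2)
            else 0
       end as overflow_pct
from sysibmadm.snapdb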

Why these spills happen with memory to spare is a mystery to me.  A parallel symptom I often see is that the sort heap size reported in explain (db2exfmt) outputs is often far smaller than the current sortheap size.  At this point, the only choices are to leave sort heaps at automatic and accept some overflows, or abandon automatic and set a fixed size.  When setting large fixed sizes, remember that an overflowed sort writes the entire sort heap to tempspace.  So, a very large sort heap may coax the optimizer into relying more on sorts, only to have some of these very large sorts spill everything.

IBM is considering allowing ceiling and/or floor values for certain automatic parameters.  I do hope this gets added in a future DB2, and that sort heaps are among the first that can be configured this way.  Until then, DBAs will be left to sort this out on their own.

Can’t Get There From Here

I got a question today from a friend who was trying “explain stmtcache all” on DB2 LUW and wondered why it was unrecognized.  He had stumbled across this command while looking up the explain syntax in the DB2 online documentation and was lured in by its promises.  What he didn’t notice was that this was in the DB2 for z/OS documentation, and that this command isn’t available in LUW.

Don’t get me started.

In the olden days, DB2 on the server and DB2 on the mainframe were two very different products, and no-one expected much compatibility.   It took years after the OS/2 EE Database Manager rewrite for DB2 LUW to catch up to DB2 for z/OS’ level of sophistication.  But DB2 LUW has come a very long way very quickly in recent years, and has adopted most of the important features previously only available from the mainframe.  I’ve gotten pretty used to this.

But I’ll often hear of all this cool stuff in new versions of DB2 for z/OS, like in the version 10 beta that rolls out tomorrow.  I expect new features to eventually appear in a subsequent LUW version, but sometimes they never do.  Explain stmtcache all is one such example.  In this case, architectural differences may mean that it will never come down from on high, but it would be nice to have in LUW.  Yes, I can loop through snap_stmts or top_dynamic_sql and run explain all for against each of the stmt_texts (I have scripts that do this), but this is slow.  And changing the current explain mode is another option, but it’s rarely available and rarely efficient.  It’s a dynamic SQL world now, and the kinds of trends you can spot by mining the snapshot tables (and presumably the dsn_statement_cache_table) are just too useful.  So give us the best tools for it, even if we have to borrow them from the big iron.
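The loop itself is nothing fancy; here’s a rough sketch of the approach (the database name is illustrative, the explain tables must already exist, and real statement text needs more careful handling than a line-at-a-time read):

# sketch: explain the ten busiest dynamic SQL statements from the snapshot views
db2 connect to mydb
db2 -x "select stmt_text from sysibmadm.top_dynamic_sql order by num_executions desc fetch first 10 rows only" |
while read -r stmt; do
    db2 "explain all for $stmt"
done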

Another related example is the stated plan to add access plan stability features for dynamic SQL (I don’t need it for static SQL, so I can skip the current PLANMGMT capabilities).  But, alas, this is currently only being planned for… where?  DB2 for z/OS, of course.

Don’t Get CLOBbered

The subject of direct I/Os came up in an unexpected phone call today.  This was from a friend who had long ago recovered from CLOB abuse and was concerned that he had somehow fallen off the wagon again.

Often new OO developers are tempted to treat the database as little more than a file system.  With wonderful things like CLOBs, BLOBs, and long varchars, who would actually parse their objects into columns?  Why bother with ORM frameworks?  Why take the car apart to put it in the garage?

The answer, of course, lies in the system requirements.  If all you need are LOBs, you probably don’t need a relational database.  And if you need a relational database, you probably shouldn’t use LOBs.

It’s not only an important design issue, but a big performance issue as well.  In DB2, LOBs and other large objects do not go in the bufferpools.  This means every read and write is physical – DB2 must go directly to disk.  Enabling file system caching on large object tablespaces helps some.  But, even with that, heavy use of LOBs is one of the quickest ways to kill OLTP performance.
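Enabling that caching is a one-liner (the tablespace name is illustrative):

alter tablespace myobj_ts file system caching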

There is a place in this world for LOBs and other large objects, used carefully and sparingly.  But far too often they’re used for object storage shortcuts.  I’ve had to tune a few systems where CLOBs factored heavily into the design and were among the most frequently-accessed data in the system.  LOB abuse has become so common that the snapshot counters for direct reads and direct writes are among the first I check when measuring a new system.  Sometimes the fix is simple: convert the CLOB types to varchars (with for bit data, if needed).  Sometimes, though, the data is so large or the process so entrenched that deeper surgery is required.

This post-op friend of mine had long ago eradicated direct I/Os in the billions, but was now seeing counts in the hundreds of thousands after several hours of heavy system activity.  A quick check of the tablespace snapshot for syscatspace found the expected smoking gun: nearly all of these were in the system catalog.

Several of the system catalog tables (such as syscat.tables, columns, and indexes) do use LOBs.  They are strategically accessed, but can (especially after changes) cause some direct I/O spikes.  There are some tricks you can play to minimize this, but these direct I/Os are typically unimportant in the grand scheme of things.  Just subtract the syscatspace numbers from the DB-level totals and see if what’s left demands a closer look.  This will help you distinguish true LOB abuse symptoms from flashbacks.
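A quick way to eyeball the per-tablespace breakdown for that subtraction (the database name is illustrative):

db2 get snapshot for tablespaces on mydb | grep -i -E "tablespace name|direct reads|direct writes"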

The Right Retool for the Job

Facebook’s publication of the HipHop transformer and runtime raised the question of whether PHP is really the right language for a web site that has scaled to massive volumes.   The same was asked of Twitter when parts of its Ruby code were rewritten in Scala.

Of course it is.

Proper language choice is based on “the best tool for the job.”  In the case of Facebook, Twitter, and many others like it, “the job” was to get a web site up fast and very quickly evolve it.  Without PHP, Facebook probably wouldn’t have been created.  Productivity, web-orientation, innovation, and even fun trumped raw performance, so PHP and Ruby won the day for those two sites.  No-one would code a web site in C++, just as no-one would code a real-time system in PHP.

If productivity and agility are your primary concerns, pick a language and platform that makes it fast and fun.  I recommend a dynamic, object-oriented language.  If it’s a web app, consider Seaside.  If productivity isn’t important but commodity skill sets and ubiquity are your primary concerns, you might choose Java: the COBOL of the 21st century.  Just don’t be a lemming, letting TPCI ranking become a deciding factor.

And don’t make the decision based on hearsay and perceptions of performance.  Today’s performance tuning and scaling tools (profilers, monitors, load balancers, and even static analyzers) make it easy to identify the true bottlenecks and fix those, leaving the non-time-critical majority of the system as is.  Speculative performance tuning can be counter-productive, so if it ain’t broke, don’t fix it.

HipHop is a welcome addition to the PHP universe: the ease and productivity of PHP with the speed of C++.  Hopefully it will mean that PHP developers no longer have the specter of scalability issues hanging over them.  Now, if we could just get some overdue cleanup and deprecation (eliminating redundancy in favor of cleaner object-oriented design), life would be grand.  But that’s another story.

I’d like to try out HipHop, but frankly, I don’t need to.  With the possible exception of a CiviCRM site that I pushed to the limit, my PHP sites just don’t need additional horsepower, and I certainly don’t have to worry about reducing CPU utilization on a few thousand servers.  Obscurity isn’t always a bad thing.

STeMMing The Tide

The sheer number of DB2’s memory-related parameters can hit like a tidal wave.  Bufferpools, sort heaps, lock lists, package/statement caches, and various other heaps can all harm performance if undersized, and can waste or overcommit memory if oversized.  Further, many of the configuration parameters are interrelated; for example, increase logbufsz and you may also have to increase dbheap to accommodate it.  DBAs and performance tuners can feel overwhelmed by the number of knobs one must turn in each database.  And developers can feel overburdened when providing scripts and configuration tools to right-size their databases for each customer’s unique volume requirements.

Fortunately the latest (9.x) versions of DB2 LUW provide the Self-Tuning Memory Manager (STMM).  This feature significantly simplifies memory management in DB2, letting the database automatically size memory areas based on activity and needs.  For most parameters, you can also specify an optional initial size to avoid the overhead of progressively stretching memory after startup.

To turn on STMM for a database, specify:

update db cfg using self_tuning_mem on

You’ll likely also want database_memory set to automatic:

update db cfg using database_memory automatic

You can give database_memory a fixed or initial size, or use the computed option for backward compatibility.  The publib DB2 documentation has more information.

At this point, you can auto-size many of the memory areas.  For example, for dynamic lock list sizing, specify:

update db cfg using maxlocks automatic
update db cfg using locklist automatic

Depending on DB2 version, you can configure any of the following this way: dbheap, pckcachesz, sortheap, stmtheap, applheapsz, appl_memory, stat_heap_sz, dft_prefetch_sz, num_iocleaners, num_ioservers, maxappls, and avg_appls.   Anyone who has tried to package or build scripts that set these beyond the old (painfully too low) defaults can appreciate the benefits of “going automatic.”

Perhaps the greatest benefit comes when auto-sizing bufferpools.  For that, specify:

alter bufferpool <poolname> size automatic

To specify an initial (startup) size (in pages), place it before the automatic keyword.
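For example (the pool name and page count are illustrative; newer 9.x releases accept both together):

alter bufferpool ibmdefaultbp size 50000 automatic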

In my experience so far, “going automatic” has worked swimmingly, with one exception: sort heaps.  OLTP systems can, at times, experience heavy loads of small sorts or sort-like activity (hash joins, grpby’s, etc.).  It’s not uncommon for too many of these to overflow (spill), or for a sub-optimal access plan to arise, even when more than enough memory is available for sorts.  This is partly because auto-sizing the sort heap requires some extra work and partly because the optimizer can’t predict future sizes.  More on this in a future post.