Tag Archives: Programming

Just Say It

I recently built a voice response system with Voxeo's software: their VoIP IVR (Prophecy) and VoiceXML server (VoiceObjects).  This stack is available in both on-premise (local install) and cloud versions (Evolution/VoiceObjects on Demand), and includes an Eclipse-based development environment, VoiceObjects Desktop for Eclipse (DfE).  VoiceObjects (VO) takes a little getting used to, but is a nice platform for developing call flows.  It sure beats hand-coding VoiceXML, and it integrated well with my Java service back-end.

VoiceObjects’ documentation is excellent, but I learned quite a lot by trial and error.  So I thought I’d share a few tips and lessons learned.

Use scripts and expressions

VO’s library of objects is quite rich, but you will inevitably run into some requirement that needs more than just objects with properties.  For example, I used nested concatenate expressions to build complex types for web service calls; split, index, findcell, strequal, and matchesregexp expressions to process web service results; and JavaScript to handle custom DTMF mappings. The prepackaged expressions are also helpful, such as the ones that grab ANI and DNIS from session state.

Set event handlers

It'll take about five seconds to grow weary of the thickly-accented "service you are trying to reach is not available…" default message.  Get a friendlier message with more root-cause details by setting the event handlers in your top-level module (they'll be inherited by child objects).  Just open the Outline view, expand the Event Handling section, and add away.  You can add whatever outputs you'd like, and use different handlers for repeat occurrences.

Set default routing

Be sure you have a default routing (*) entry in the Prophecy Commander applications list, since surely someone will use the wrong SIP address or configure the wrong URI.  With a default routing entry, callers at least land in a recognizable place.

Get Blink

VO’s built-in SIP Phone is very basic, so you’ll find yourself looking for alternatives.  There are a lot of free SIP phones out there, but most require signing up to a service or can otherwise violate corp-rat security concerns.  I settled on Blink.  It’s a little buggy, but does the job.

Get an ATA

SIP phones are handy, but you'll eventually want to pick up a POTS phone or cell and call into your IVR.  While you can assign PSTN phone numbers in Evolution/VoD, that doesn't help with your local Prophecy install.  That's where Analog Telephone Adapter (ATA) gateways come in.  I used the AudioCodes MP-114; the model with two FXO and two FXS ports provides both analog-in and analog-out options.

Get the latest version

Since Voxeo's hosted environment uses VO 10, I started out with that version.  But I quickly ran into some problems that required recent fixes.  Voxeo was kind enough to build a custom VO 11 environment in an AWS instance for my cloud testing, and their GA hosted platform will be upgraded soon.  In the meantime, there's really no reason not to use one of the recent versions: 11 or 12.

Add debug outputs

VO provides a Log object for tracing purposes, but I often found it easier to use additional Output objects to hear trace messages on the call.  By convention, I labelled these with a “Debug – ” prefix, and disabled them (but left them in place) when not used.

Use the logs

Even minor typos can cause VO to throw internal exceptions that leave you scratching your head.  Sometimes you can diagnose these with Debug and Trace, but in most cases the VoiceObjects logs (viewable from DfE) provide the best information.   If the problem is on the IVR side, use Prophecy’s built-in Log Viewer to hunt down the root cause.

Use the support forums

For some vendors, support fora are where problems go to die from neglect.  But that’s far from the case with Voxeo.  They have an excellent support team that quickly responds to issues.  You can browse existing posts for common problems and easily add new tickets.


This “very Monday” brought some unexpected troubles during a new deployment of my app.

My web services code had worked fine in a variety of environments (WebLogic, Tomcat, etc.), but failed in WebSphere 7 fixpack 23 with exceptions like the following:

org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.xml.xsd.commons.CommonsXsdSchemaCollection#0': Invocation of init method failed; nested exception is java.lang.NoSuchMethodError: org/apache/ws/commons/schema/XmlSchemaCollection.read(Lorg/xml/sax/InputSource;)

Since this had the symptoms of a typical classpath problem, I started there.  I had to remember that WebSphere's equivalent of the -verbose JVM argument is Application server > Java and Process Management > Process definition > Java Virtual Machine > Verbose class loading.  I also did a quick check to see what I was using in other environments:
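
The snippet for that check didn't survive the page migration, but the idea is easy to reproduce: ask the JVM where it actually loaded a class from.  A quick sketch (the class name and WhichJar are mine; substitute whatever class you're chasing):

```java
import java.security.CodeSource;

// Print the JAR (or directory) a class was actually loaded from.
public class WhichJar {

    public static String locate(Class<?> clazz) {
        CodeSource cs = clazz.getProtectionDomain().getCodeSource();
        return (cs == null || cs.getLocation() == null)
                ? "(bootstrap class path)" : cs.getLocation().toString();
    }

    public static void main(String[] args) throws Exception {
        // Substitute the suspect class, e.g.:
        // Class.forName("org.apache.ws.commons.schema.XmlSchemaCollection")
        System.out.println(locate(WhichJar.class));
    }
}
```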


This revealed that the obsolete XmlSchemaCollection class came from WebSphere's old version of org.apache.axis2.jar in …\IBM\WebSphere\AppServer\plugins.  In spite of the "plugins" name, WebSphere is hopelessly intertangled with Axis 2, so removing it was not an option.  Instead, I preempted WebSphere by providing my own xmlschema-core-2.0.2.jar, and made it part of my distribution by adding it to the pom.xml:
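
The dependency snippet itself was lost along the way; the coordinates below are the usual Maven Central ones for that JAR:

```xml
<dependency>
    <groupId>org.apache.ws.xmlschema</groupId>
    <artifactId>xmlschema-core</artifactId>
    <version>2.0.2</version>
</dependency>
```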


Finally, I set "Classes loaded with local class loader first (parent last)" in WebSphere's class loader settings for my app.  In some cases, I also had to place xmlschema-core-2.0.2.jar in a shared library folder.

These extra steps are a bit annoying, but I suppose that’s the price paid for using a Java EE container that lags a bit behind on open source frameworks.

Broken JARs

Since I work on a few apps that use the DB2 JDBC Type 4 drivers, work-arounds for common issues have become familiar.  But when the corrupted DB2 JAR issue resurfaced today, I realized that I had not posted about it.  Now is as good a time as any to make up for that neglect…

The problem is that for DB2 LUW versions between 9 and 9.5, IBM published db2jcc.jar files with corrupt classes.  In some Java EE environments, the container doesn’t fully scan JARs, so it doesn’t really matter if an unused class is wonky.  But many containers do scan, causing exceptions like the following:

SEVERE: Unable to process Jar entry [COM/ibm/db2os390/sqlj/custom/DB2SQLJCustomizer.class] from Jar [jar:file:…lib/db2jcc.jar!/] for annotations
org.apache.tomcat.util.bcel.classfile.ClassFormatException: null is not a Java .class file
at org.apache.tomcat.util.bcel.classfile.ClassParser.readID(ClassParser.java:238)
at org.apache.tomcat.util.bcel.classfile.ClassParser.parse(ClassParser.java:114)

That particular exception is from Tomcat 7.

IBM acknowledges the problem (see http://www-01.ibm.com/support/docview.wss?uid=swg1LI72814) and offers a work-around: edit the JAR file to remove the offending classes.

Edit the JAR?  Really?!  A quicker, surer solution is to just grab a good JAR from http://www-01.ibm.com/support/docview.wss?uid=swg21363866.
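
If you're not sure whether the db2jcc.jar you already have is one of the bad ones, a quick scan for class entries lacking the standard 0xCAFEBABE class-file magic number will tell you.  A sketch (JarClassCheck is my name, not IBM's):

```java
import java.io.DataInputStream;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

// Count .class entries that don't begin with the standard 0xCAFEBABE
// class-file magic number, the symptom of the corrupted db2jcc.jar builds.
public class JarClassCheck {

    public static int countBadClasses(String jarPath) throws Exception {
        int bad = 0;
        try (JarFile jar = new JarFile(jarPath)) {
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                if (!entry.getName().endsWith(".class"))
                    continue;
                try (InputStream in = jar.getInputStream(entry)) {
                    if (new DataInputStream(in).readInt() != 0xCAFEBABE)
                        bad++;
                } catch (Exception e) {
                    bad++;  // truncated or unreadable entry
                }
            }
        }
        return bad;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(countBadClasses(args[0]) + " bad class entries");
    }
}
```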

Quoth the Maven

I've enjoyed Maven's built-in maven-eclipse-plugin as a handy little tool for creating Eclipse projects from Maven POMs.  But I was recently lured into trying m2eclipse (a.k.a. m2e) because of its feature set.  After all, common Maven tasks (running builds, editing POMs, adding dependencies, updating repos, etc.) can certainly benefit from some tooling and automation.

So I installed the plug-in into Eclipse and then ran the obligatory Configure -> Convert to Maven Project. Imagine my surprise when I was immediately rewarded with the exception:

“Updating Maven Project”. Unsupported IClasspathEntry kind=4

Turns out, this is a known bug due to the fact that m2e and maven-eclipse-plugin use two different approaches for classpathentry values in .classpath files.  If you try to migrate a project created with eclipse:eclipse (like mine and countless others were), you get this error.  M2e went so far as to add this comment to my .project file:

NO_M2ECLIPSE_SUPPORT: Project files created with the maven-eclipse-plugin are not supported in M2Eclipse.
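
To illustrate the mismatch (a sketch; the junit path is just an example): eclipse:eclipse writes one M2_REPO variable entry per dependency (IClasspathEntry kind 4, the "var" kind in the error above), while m2e expects a single classpath container entry it resolves itself:

```xml
<!-- maven-eclipse-plugin (eclipse:eclipse) style: one variable entry per dependency -->
<classpathentry kind="var" path="M2_REPO/junit/junit/4.8.2/junit-4.8.2.jar"/>

<!-- m2e style: a single container entry, resolved by the plug-in -->
<classpathentry kind="con" path="org.eclipse.m2e.MAVEN2_CLASSPATH_CONTAINER"/>
```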

Can’t we all just get along?  Can’t maven plugin developers agree on a standard for this fairly common and straightforward requirement?

Since interoperability with multiple environments and the command-line are important to me, I decided to just ditch m2e.  I’m simply not willing to lock my project into this plug-in, closing the other (more standard) options.  If m2e adds compatibility in a future release, I’ll try it again.  And if not?  Nevermore.

Secret Handshake

Since security is king in my corp-rat world, standards dictate that my public web services be accessed via mutual authentication SSL.  The extra steps this handshake requires can be tedious: exchanging certs, building keystores, configuring connections, updating encryption JARs, etc.  So when helping developers of a third party app call in, it’s useful to provide a standard tool as a non-proprietary point of reference.

This week I decided to use soapUI to demonstrate calls into my web services over two-way SSL.  The last time I did something like this, I used keytool and openssl to build keystores and convert key formats.  But this go ’round I stumbled across this most excellent post which recommends the user-friendly Portecle tool, and steps through the soapUI setup.

Just a few tips to add:

  • SoapUI’s GUI-accessible logs (soapUI log, http log, SSL Info, etc.) are helpful for diagnosing common problems, but sometimes you have to view content in bin\soapui-errors.log and error.log.   Take a peek there if other diags aren’t helpful.
  • SoapUI doesn't show full details of the server/client key exchange.  You can get more detailed traces with a simple curl -v or curl --trace; for example:

curl -v -E mykey.pem https://myhost.com/myservice
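
For the curious, what soapUI and curl are doing here is standard JSSE plumbing.  A rough Java sketch of building a two-way SSL context (MutualTls and the parameter names are placeholders, not any tool's API; the keystores are whatever Portecle produced):

```java
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

// Build an SSLContext that presents a client certificate (two-way SSL).
public class MutualTls {

    public static SSLContext build(KeyStore clientKeys, char[] keyPassword,
                                   KeyStore trustedCerts) throws Exception {
        // Our certificate and private key, presented to the server:
        KeyManagerFactory kmf =
            KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(clientKeys, keyPassword);

        // The server certificates we trust:
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustedCerts);

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return ctx;
    }
}
```

Hand the result to HttpsURLConnection.setSSLSocketFactory(ctx.getSocketFactory()) (or your HTTP client's equivalent) and the client cert gets presented during the handshake.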

Happy handshaking!

Mocking J

Although I’m a perennial test-driven development (TDD) wonk, I’ve been surprised by the recent interest in my xUnits, many of which are so pedestrian I’ve completely forgotten about them.  After all, once the code is written and shipped, you can often ignore unit tests as long as they pass on builds and you aren’t updating the code under test (refactoring, extending, whatever).  Along with that interest has come discussions of mock frameworks.

Careful… heavy reliance on mocks can encourage bad practice.  Classes under test should be so cohesive and decoupled they can be tested independently with little scaffolding.  And heavy use of JUnit for integration tests is a misuse of the framework.

But we all do it.  You’re working on those top service-layer classes and you want the benefits of TDD there, too.  They use tons of external resources (databases, web services, files, etc.) that just aren’t there in the test runner’s isolated environment.  So you mock it up, and you want the mocks to be good enough to be meaningful.  Mocks can be fragile over time, so you should also provide a way out if the mocks fail but the class under test is fine.  You don’t want future maintainers wasting time propping up old mocks.

So how to balance all that? Here’s a quick example to illustrate a few techniques.

public class MyServiceTest {
	private static Log logger = LogFactory.getLog(MyServiceTest.class);
	private MyService myService = new MyService();	                // #1
	private static boolean isDatabaseAvailable = false;

	@BeforeClass
	public static void oneTimeSetUp() throws NamingException {
		// Set up the mock JNDI context with a data source.
		DataSource ds = getDataSource();
		if (ds != null) {                                       // #2
			isDatabaseAvailable = true;
			SimpleNamingContextBuilder builder = new SimpleNamingContextBuilder();
			builder.bind("jdbc/MY_DATA_SOURCE", ds);
			builder.activate();
		}
	}

	@Before
	public void setUp() {
		// Ignore tests here if no database connection
		Assume.assumeTrue(isDatabaseAvailable);                 // #3
	}

	@Test
	public void testMyServiceMethod() throws Exception {
		String result = myService.myServiceMethod("whatever");
		logger.trace("myServiceMethod result: " + result);      // #4
		assertNotNull("myServiceMethod failed, result is null", result);
		// Other asserts here...
	}
}

Let’s take it from top to bottom (item numbers correspond to // #x comments in the code):

  1. Don't drag in more than you need.  If you're using Spring, you may be tempted to inject (@Autowire) the service, but since you're testing your implementation of the service, why would you?  Just instantiate the thing.  There are times when you'll want a Spring application context and, for those, tools like @RunWith(SpringJUnit4ClassRunner.class) come in handy.  But those are rare, and it's best to keep it simple.

  2. Container?  Forget it!  Since you're running out of container, you'll need to mock anything that relies on things like JNDI lookups.  Spring Mock's SimpleNamingContextBuilder does the job nicely.

  3. Provide a way out.  Often you can construct or mock the database content entirely within the JUnit using in-memory databases like HSQLDB.  But integration test cases sometimes need an established database environment to connect to.  Those cases won't apply if the environment isn't there, so use JUnit Assume to skip them.

  4. Include traces.  JUnits on business methods rarely need logging, but traces can be valuable for integration tests.  I recommend keeping the level low (like debug or trace) to make them easy to throttle in build logs.

Frameworks like JMockit make it easy to completely stub out dependent classes.  But with these, avoid using so much scaffolding that your tests are meaningless or your classes are too tightly coupled.

Just a few suggestions to make integration tests in JUnits a bit more helpful.

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

Today’s post will be the last of the Friday Fixes series.  I’ve received some great feedback on Friday Fixes content, but they’re a bit of a mixed bag and therefore often too much at once. So we’ll return to our irregularly unscheduled posts, with problems and solutions by topic, as they arrive.  Or, as I get time to blog about them.  Whichever comes last.

More Servlet Filtering

On prior Fridays, I described the hows and whys of tucking away security protections into a servlet filter. By protecting against XSS, CSRF, and similar threats at this lowest level, you ensure nothing gets through while shielding your core application from this burden.  Based on feedback, I thought I’d share a couple more servlet filter security tricks I’ve coded.  If either detects trouble, you can redirect to an error page with an appropriate message, and even kill the session if you want more punishment.

Validate IP Address (Stolen session protection)

At login, grab the remote IP address: session.setAttribute("REMOTE_IP_ADDR", request.getRemoteAddr()).  Then, in the servlet filter, check against it for each request, like so:

private boolean isStolenSession(HttpServletRequest request) {
    String uri = request.getRequestURI();
    if (isProtectedURI(uri)) {
        HttpSession session = request.getSession(false);
        String requestIpAddress = request.getRemoteAddr();
        if (session != null && requestIpAddress != null) {
            String sessionIpAddress = (String) session.getAttribute("REMOTE_IP_ADDR");
            if (sessionIpAddress != null)
                return !requestIpAddress.equals(sessionIpAddress);
        }
    }
    return false;
}

Reject Unsupported Browsers

There may be certain browsers you want to ban entirely, for security or other reasons.  IE 6 (“MSIE 6”) immediately comes to mind.  Here’s a quick test you can use to stop unsupported browsers cold.

private boolean isUnsupportedBrowser(HttpServletRequest request) {
    String uri = request.getRequestURI();
    if (isProtectedURI(uri)) {
        String userAgent = request.getHeader("User-Agent");
        if (userAgent != null) {
            for (String id : this.unsupportedBrowserIds) {
                if (userAgent.contains(id)) {
                    return true;
                }
            }
        }
    }
    return false;
}

Community Pools

In my last Friday Fixes post, I described how to use Apache DBCP to tackle DisconnectException timeouts.  But as I mentioned then, if your servlet container is a recent version, it will likely provide its own database connection pools, and you can do without DBCP.

When switching from DBCP (with validationQuery on) to a built-in pool, you’ll want to enable connection validation.  It’s turned off by default and the configuration settings are tucked away, so here’s a quick guide:

Tomcat 7 (jdbc-pool) – Add the following to Tomcat's conf/server.xml, in the Resource section containing your jdbc JNDI entry:

  • factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
  • testOnBorrow="true"
  • validationQuery="select 1 from sysibm.sysdummy1"

WebSphere – Set the following in the Admin Console, under Resources > JDBC > Data sources > (select data source) > WebSphere Application Server data source properties:

  • Check "Validate new connections", set "Number of retries" to 5, and set "Retry interval" to 3.
  • Check "Validate existing pooled connections", and set "Retry interval" to 3.
  • Set "Validation options – Query" to "select 1 from sysibm.sysdummy1".

WebLogic – Set the following in the Administration Console, under JDBC > Data Sources > (select data source) > Connection Pool > Advanced:

  • Check "Test Connections on Reserve".
  • Set "Test Table Name" to "SQL SELECT 1 FROM SYSIBM.SYSDUMMY1".
  • Set "Seconds to Trust an Idle Pool Connection" to 300.

Adjust intervals as needed for your environment.  Also, these validation SQLs are for DB2; use equivalent SQLs for other databases.

Auto Logoff

JavaScript-based auto-logoff timers are a good complement to server-side session limits.  They're especially nice when coupled with periodic keep-alive messages while the user is still working.  The basic techniques for doing this are now classic, but new styles show up almost daily.  I happen to like Mint's approach of adding a red banner and seconds countdown one minute before logging off.

Fortunately, Eric Hynds created a nice jQuery-based Mint-like timer that we non-Intuit folks can use.  It uses Paul Irish’s idleTimer underneath.

Dropping it in is easy; just include the two JS files and add the idletimeout div into your common layout.  I tucked the boilerplate JavaScript into a simple common JSPF, like so:

<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c"%>
<script src="<c:url value="/script/jquery.idletimer.js"/>"></script>
<script src="<c:url value="/script/jquery.idletimeout.js"/>"></script>
<script>
var idleTimeout = <%= SettingsBean.getIdleTimeout() %>;
if (idleTimeout > 0) {
  $.idleTimeout('#idletimeout', '#idletimeout a', {
	idleAfter: idleTimeout,
	pollingInterval: 30,
	keepAliveURL: '<c:url value="/keepalive" />',
	serverResponseEquals: 'OK',
	onTimeout: function(){
		window.location = '<c:url value="/logout" />';
	},
	onIdle: function(){
		$(this).slideDown(); // show the warning bar
	},
	onCountdown: function( counter ){
		$(this).find("span").html( counter ); // update the counter
	},
	onResume: function(){
		$(this).slideUp(); // hide the warning bar
	}
  });
}
</script>

Inanimate IE

Among Internet Explorer’s quirks is that certain page actions (clicking a button, submitting, etc.) will freeze animated GIFs.  Fortunately, gratuitous live GIFs went out with grunge bands, but they are handy for the occasional “loading” image.

I found myself having to work around this IE death today to keep my spinner moving.  The fix is simple: just set the HTML to the image again, like so:

// IE stops GIF animation after submit, so set the image again:
$("#loading").html('<img src="<c:url value="/images/loading.gif"/>" border=0 alt="Loading..." />');

("#loading" here stands in for whatever element holds your spinner image.)

IE will then wake up and remember it has a GIF to attend to.


The trouble with keeping your data in the cloud is that clouds can dissipate.  Such was the case this week with the Nike+ API.

Nike finally unveiled the long-awaited replacement for their fragile Flash-based Nike+ site, but at the same time broke the public API that many of us depend on.  As a result, I had to turn off my Nike+ Stats code and sidebar widget from this blog.  Sites that depend on Nike+ data (dailymile, EagerFeet, Nike+PHP, etc.) are also left in a holding pattern.  At this point, it's not even clear if Nike will let us access our own data; hopefully they won't attempt a Runner+-like shakedown.

This type of thing is all too common lately, and the broader lesson here is that this cloud world we’re rushing into can cause some big data access and ownership problems.  If and when Nike lets me access my own data, I’ll reinstate my Nike+ stats (either directly, or through a plugin like dailymile’s).  Until then, I’ll be watching for a break in the clouds.

Broken Tiles

I encountered intermittent problems with Apache Tiles 2.2.2, where concurrency issues cause it to throw a NoSuchDefinitionException and render a blank page.  There have been various JIRAs with fixes, but these are in the not-yet-released version 2.2.3.  To get these fixes, update your Maven pom.xml, specify the 2.2.3-SNAPSHOT version for all Tiles components, and add the Apache snapshot repository:

    <repository>
        <id>apache.snapshots</id>
        <name>Apache Maven Snapshot Repository</name>
        <url>https://repository.apache.org/content/repositories/snapshots/</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>

Hopefully 2.2.3 will be released soon.

Toolbox Linkapalooza

A few favorite tools I used this week:

  • WebScarab – Nice Ethical Hacking tool.  Among many features, it exposes hidden form fields for editing.
  • Burp Suite – Similar to WebScarab; its proxy and intruder features make packet capture, modification, and replay a snap.
  • Runner’s World Shoe Advisor.

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

I got a friendly reminder this morning that I’ve neglected my Friday Fixes postings of late.  I didn’t say they’d be every Friday, did I?  At any rate, here are some things that came up this week.

Tabbed Freedom v. Logoff Security

Tabbed browsing is a wonderful thing, but its features can become security concerns to corp-rat folks who mainly use their browsers for mission-critical apps.  For example, with most browsers, closing a tab (but not the browser itself) does not clean up session cookies.  Yet those security-first guys would like a way to trigger a logoff (kill the session) on tab close.

This is a common request, but there's no straightforward solution.  As much as I'd like browsers to have a "tab closed" event, there isn't one.  The best we can do is hook the unload event, which is fired, yes, when the tab is closed, but also anytime you leave the page: whether it's navigating a link, submitting a form, or simply refreshing.  So the trick is to detect and allow acceptable unloads.  Following is some JavaScript I pieced together (into a common JSPF), based loosely on various recommendations on the web.

  var isOkToUnload = false;
  var logoffOnClose = '<c:out value="${settingsBean.logoffOnClose}" />';

  function allowUnload() {
     isOkToUnload = true;
  }

  function monitorClose() {
     window.onbeforeunload = function() {
        if (!isOkToUnload)
           return "If you leave this page, you will be logged off.";
     };
     window.onunload = function() {
        if (!isOkToUnload) {
           $.ajax({ async: false, type: "POST",
                    url: '<c:url value="/basiclogout"/>' });
        }
     };
     // Treat link clicks and form submits as acceptable unloads:
     $("a").click(allowUnload);
     $("form").submit(allowUnload);
     // Add other events here as needed
  }

  $(document).ready(function() {
     if (logoffOnClose === 'Y')
        monitorClose();
  });

This triggers on refresh, but that’s often a good thing since the user could lose work; Gmail and Google Docs do the same thing when you’re editing a draft.  It’s a good idea to make this behavior configurable, since many folks prefer the freedom of tabbed browsing over the security of forcing logoff.

DBCP Has Timed Out

Right after mastering the linked list, it seems every programmer wants to build a database connection pool.  I’ve built a couple myself, but this proliferation gets in the way of having a single golden solution that folks could rally around and expect to be supported forever.

Such was the story behind Apache DBCP: it was created to unify JDBC connection pools.  Although it’s broadly used, it’s over-engineered, messy, and limited. So it, too, fell by the wayside of open source neglect.  And since nearly all servlet containers now provide built-in connection pools, there’s really no use for DBCP anymore.

Yet I found myself having to fix DisconnectException timeouts with an existing DBCP implementation, typically stemming from errors like:  A communication error has been detected… Location where the error was detected: Reply.fill().

After trying several recovery options, I found that DBCP’s validationQuery prevented these, at the cost of a little extra overhead.  Although validationQuery can be configured, I didn’t want additional setup steps that varied by server.  So I just added it to the code:

  BasicDataSource ds = new BasicDataSource();
  // ...
  ds.setValidationQuery("select 1 from sysibm.sysdummy1");

In the next pass, I’ll yank out DBCP altogether and configure pools in WebSphere, WebLogic, and Tomcat 7.  But this gave me a quick fix to keep going on the same platform.

Aggregation Dictation

Weird: I got three questions about aggregating in SQL on the same day.  Two of them involved OLAP syntax that’s somewhat obscure, but quite useful.  So if you find yourself with complications from aggregation aggravation, try one of these:

  • Doing a group by and need rollup totals across multiple groups?  Try grouping sets, rollup, and cube.  I’ve written about these before; for example, see this post.
  • Need to limit the size of a result set and assign rankings to the results?  Fetch first X rows only works fine for the former, but not the latter.  So try the ranking and windowing functions, such as row_number, rank, dense_rank, and partition by.  For example, to find the three most senior employees in each department (allowing for ties), do this:

      SELECT * FROM
       (SELECT rank() OVER(PARTITION BY workdept ORDER BY hiredate ASC)
        AS rank, firstnme, lastname, workdept, hiredate
        FROM emp) emps
      WHERE rank <= 3
      ORDER BY workdept

    Co-worker Wayne did a clever row_number/partition by implementation and added a nice view to clarify its meaning.


Some interesting links that surfaced (or resurfaced) this week:

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

This week's challenges ran the gamut, but there's probably not much broad interest in a consolidated posting for store-level no advice chargebacks, image format and compression conversion, SQLs with decode(), 798 NACHA addenda, or many of the other crazy things that came up.  So I'll stick to the web security vein with a CSRF detector I built.

Sea Surf

If other protections (like XSS) are in place, meaningful Cross-Site Request Forgery (CSRF) attacks are hard to pull off.  But that usually doesn’t stop the black hats from trying, or the white hats from insisting you specifically address it.

The basic approach to preventing CSRF (“sea surf”) is to insert a synchronizer token on generated pages and compare it to a session-stored value on subsequent incoming requests.  There are some pre-packaged CSRF protectors available, but many are incomplete while others are bloated or fragile.  I wanted CSRF detection that was:

  • Straightforward – Please, no convoluted frameworks nor tons of abstracted code
  • Complete – Must cover AJAX calls as well as form submits, and must vary the token by rendered page
  • Flexible – Must not assume a sequence of posts or limit the number of AJAX calls from one page
  • Unobtrusive – Must “drop in” easily without requiring @includes inside forms.

I also wanted to include double-submit protection, without having to add another filter (and certainly no PRG filters – POSTs must be POSTs).  Below is the gist of it.

First, we need to insert a token.  I could leverage the fact that nearly all of our JSPs already include a common JSPF file, so I just added to that.  The @include wasn't always inside a form, so I added the hidden input field via JavaScript (setToken).  I used a bean to keep the JSPF as slim as possible.

<jsp:useBean id="tokenUtil" class="com.writestreams.TokenUtil">
	<% tokenUtil.initialize(request); %>
</jsp:useBean>
<script>
	var tokenName = '<c:out value="${tokenUtil.tokenName}" />';
	var tokenValue = '<c:out value="${tokenUtil.tokenValue}" />';

	function setToken(form) {
		$('<input>', {
			type: 'hidden',
			id: tokenName,
			name: tokenName,
			value: tokenValue
		}).appendTo(form);
	}

	$("body").bind("ajaxSend", function(elm, xhr, s){
		if (s.type == "POST") {
			xhr.setRequestHeader(tokenName, tokenValue);
		}
	});
</script>

I didn't want to modify all those $.ajax calls to pass the token, so the ajaxSend handler does that.  The token arrives from AJAX calls in the request header, and from form submits as a request value (from the hidden input field); that gives the benefit of being able to distinguish them.  You could use a separate token for each if you'd like.

The TokenUtil bean is simple, just providing the link to the CSRFDetector.

public class TokenUtil {
	private String tokenValue = null;

	public void initialize(HttpServletRequest request) {
		this.tokenValue = CSRFDetector.createNewToken(request);
	}

	public String getTokenValue() {
		return tokenValue;
	}

	public String getTokenName() {
		return CSRFDetector.getTokenAttribName();
	}
}

CSRFDetector.createNewToken generates a new random token for each page render and adds it to a list stored in the session.  It does JavaScript-encoding in case there are special characters.

public static String createNewToken(HttpServletRequest request) {
	HttpSession session = request.getSession(false);
	String token = UUID.randomUUID().toString();
	addToken(session, token);
	return StringEscapeUtils.escapeJavaScript(token);
}

A servlet filter (doFilter) calls CSRFDetector to validate incoming requests and return a simple error string if invalid.  You can limit this to only validating POSTs with parameters, or extend it to other requests as needed.  The validation goes like this:

public String validateRequestToken(HttpServletRequest request) {
	HttpSession session = request.getSession(false);
	if (session == null)
		return null;	// No session established, token not required
	String ajaxToken = decodeToken(request.getHeader(getTokenAttribName()));
	String formToken = decodeToken(request.getParameter(getTokenAttribName()));
	if (ajaxToken == null && formToken == null)
		return "Missing token";
	ArrayList<String> activeTokens = getActiveTokens(session);
	if (ajaxToken != null) {
		// AJAX call - require the token, but don't remove it from the list
		// (allow multiple AJAX calls from the same page with the same token).
		if (!activeTokens.contains(ajaxToken))
			return "Invalid AJAX token";
	} else {
		// Submitted form - require the token and remove it from the list
		// to prevent any double submits (refresh browser and re-post).
		if (!activeTokens.contains(formToken))
			return "Invalid form token";
		activeTokens.remove(formToken);
		setActiveTokens(session, activeTokens);
	}
	return null;
}

There you have it.  There are several good tools available for testing; I recommend OWASP’s CSRFTester.


Some useful / interesting links that came up this week:

  • Grep Console – Eclipse console text coloring (but no filtering)
  • Springpad – Think Evernote++
  • Pocket – New name for ReadItLater
  • RFC 2616 – The HTTP 1.1 spec at, well, HTTP

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

You know the old saying, “build a man a fire and he’s warm for a day; set a man on fire, and he’s warm for the rest of his life.”  Or something like that.  I’ve been asked about tool preferences and development approaches lately, so this week’s post focuses on tools and strategies.


JRebel
If you’re sick of JVM hot-swap error messages and having to redeploy for nearly every change (who isn’t?), run, do not walk, to ZeroTurnaround‘s site and get JRebel.  I gave up on an early trial last year, but picked it up again with the latest version a few weeks ago.  This thing is so essential, it should be part of the Eclipse base.

And while you’re in there, check out their Java EE Productivity Report.  Interesting.

Data Studio

My DB2 tool of choice depends on what I’m doing: designing, programming, tuning, administering, or monitoring.  There is no “one tool that rules them all,” but my favorites have included TOAD, Eclipse DTP, MyEclipse Database Tools, Spotlight, db2top, db2mon, some custom tools I wrote, and the plain old command line.

I never liked IBM’s standard GUI tools like Control Center and Command Editor; they’re just too slow and awkward.  With the advent of DB2 10, IBM is finally discontinuing Control Center, replacing it with Data Studio 3.1, the grown-up version of the Optim tools and old Eclipse plugins.

I recently switched from a combination of tools to primarily using Data Studio.  Having yet another Eclipse workspace open does tax memory a bit, but it’s worth it to get Data Studio’s feature richness.  Not only do I get the basics of navigation, SQL editors, and table browsing and editing, but I can also do explains, tuning, and administration tasks quickly from the same tool.  Capability-wise, it’s like “TOAD meets DTP,” and it’s the closest thing yet to that “one DB2 tool.”

Standardized Configuration

For team development, I’m a fan of preloaded images and workspaces.  That is, create a standard workspace that other developers can just pick up, update from the VCS, and start developing.  It spares everyone from having to repeat setup steps, or debug configuration issues due to a missed setting somewhere.  Alongside this, everybody uses the same directory structures and naming conventions.  Yes, “convention over configuration.”

But with the flexibility of today’s IDEs, this has become a lost art in many shops.  Developers give in to the lure of customization and go their own ways.  But is that worth the resulting lost time and fat manual “setup documents?”

Cloud-based IDEs promise quick start-up and common workspaces, but you don’t have to move development environments to the cloud to get that.  Simply follow a common directory structure and build a ready-to-use Eclipse workspace for all team members to grab and go.

Programmer Lifestyle

I’ve been following Josh Primero’s blog as he challenges the typical programmer lifestyle.

Josh is taking it to extremes, but he does have a point: developers’ lives are often too hectic and too distracted.  This “do more with less” economy means multiple projects and responsibilities and the unending tyranny of the urgent.  Yet we need blocks of focused time to be productive, separated by meaningful breaks for recovery, reflection, and “strategerizing.”  It’s like fartlek training: those speed sprints are counterproductive without recovery paces in between.  Prior generations of programmers had “smoke breaks;” we need equivalent times away from the desk to walk away and reflect, and then come back with new ideas and approaches.

I’ll be following to see if these experiments yield working solutions, and if Josh can stay employed.  You may want to follow him as well.

Be > XSS

As far as I know, there’s no-one whose middle name is <script>transferFunds()</script>.  But does your web site know that?

It’s surprising how prevalent cross-site scripting (XSS) attacks are, even after a long history and established preventions.  Even large sites like Facebook and Twitter have been victimized, embarrassing them and their users.  The general solution approach is simple: validate your inputs and escape your outputs.  And open source libraries like ESAPI, StringEscapeUtils, and AntiSamy provide ready assistance.

But misses often aren’t due to systematic neglect, rather they’re caused by small defects and oversights.  All it takes is one missed input validation or one missed output-encode to create a hole.  99% secure isn’t good enough.

With that in mind, I coded a servlet filter to reject post parameters with certain “blacklist” characters like < and >.  “White list” input validation is better than a blacklist, but a filter is a last line of defense against places where server-side input validation may have been missed.  It’s a quick and simple solution if your site doesn’t have to accept these symbols.
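The character check at the heart of such a filter is tiny.  A sketch of the core test (the class name and blacklist contents here are mine; tune the list to what your site can afford to refuse):

```java
// Hypothetical core of a blacklist filter: scan a POST parameter value
// and report the first disallowed character, if any.
public class ParamBlacklist {
	private static final String BLACKLIST = "<>";

	// Returns the offending character, or null if the value is clean.
	public static String findViolation(String value) {
		if (value == null)
			return null;
		for (int i = 0; i < value.length(); i++) {
			if (BLACKLIST.indexOf(value.charAt(i)) >= 0)
				return String.valueOf(value.charAt(i));
		}
		return null;
	}
}
```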

I’m hopeful that one day we’ll have a comprehensive open source framework that we can simply drop in to protect against most web site vulnerabilities without all the custom coding and configuration that existing frameworks require.  In the mean time, just say no to special characters you don’t really need.

Comments Off

On that note, I’ve turned off comments for this blog.  Nearly all real feedback comes via emails anyway, and I’m tired of the flood of spam comments that come during “comments open” intervals.  Most spam comments are just cross-links to boost page rank, but I also get some desperate hack attempts.  Either way, it’s time-consuming to reject them all, so I’m turning comments off completely.  To send feedback, please email me.

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

This week’s challenges were all over the place, but I’ll focus in on just a couple of fixes in one popular category: Spring Security.

Filters : MVC :: Oil : Water

The web app I’m currently working on has some unique functions that occur at login.  Sure, it all starts with a login form, but it goes far beyond the simplified functions of standard Java EE form authentication.  Good news is, Spring MVC makes implementing these login functions nice, while Spring Security provides many of the protections we need. Bad news is, Spring Security and Spring MVC don’t play together at all.

A key reason is that Spring Security’s access control and authentication mechanisms are called upstream in the filter chain (before the controllers) and can’t access the controller’s state, session, and request.  With forms authentication (LoginUrlAuthenticationEntryPoint, UsernamePasswordAuthenticationFilter, and the like), you get one monitored post URL for authentication (by default,  j_spring_security_check) with no means for the controller to do sophisticated validation or page flow.

If I could lock myself into the strict confines of form authentication, then Spring Security’s implementation would be helpful: outside a bit of XML namespace configuration, all I’d need is a simple UserDetailsService DAO.  But I couldn’t, and trying to force it just convinced me that I really didn’t want Spring Security’s forms authentication after all.  What I really wanted was a Spring MVC login process which passed authentication details to Spring Security when done.

That part was very easy.  I just wrote a small utility class which, after it authenticated the user and retrieved his authorities, simply set them in Spring Security’s context.  Like so:

/**
 * Notify Spring Security that we've logged in and initialize
 * authorities for downstream filtering.
 */
public void login(MyUser myUser, HttpSession session) {
	List<GrantedAuthority> authorities = new ArrayList<GrantedAuthority>();
	for (String str : myUser.getAuthorities()) {
		authorities.add(new SimpleGrantedAuthority(str));
	}
	UsernamePasswordAuthenticationToken auth = new UsernamePasswordAuthenticationToken(
			myUser.getUserId(), myUser.getPassword(), authorities);
	SecurityContext securityContext = SecurityContextHolder.getContext();
	securityContext.setAuthentication(auth);
	session.setAttribute("SPRING_SECURITY_CONTEXT", securityContext);
}

To update Spring Security at logoff, I just did this:


This made Spring Security and me both happy, with no need to mix the oil of Spring MVC controller flow with the water of Spring Security authentication filters.

Casting Custom SPeLs

Among its many uses, the Spring Expression Language (SpEL) allows for flexible URL filter (intercept-url) expressions.  And custom expressions can be used to support unique access control requirements through simple configuration.

The T(my.package.Class).myMethod(whatever) syntax was one option, but that’s ugly, limited, and prone to error.  Further, by stepping through the code, I learned that the expression handlers called in Spring Security have access to all sorts of useful information to make authorization decisions.  So building my own custom expression evaluator was just the ticket.

But plugging in a custom expression handler took some maneuvering.  Turns out, you can’t just add a new SecurityExpressionRoot directly in the security configuration; you have to swap in a custom SecurityExpressionHandler and have it instantiate the evaluator, like so:

public class MySecurityExpressionHandler extends DefaultWebSecurityExpressionHandler {
	@Override
	protected SecurityExpressionRoot createSecurityExpressionRoot(
			Authentication authentication, FilterInvocation fi) {
		WebSecurityExpressionRoot root =
				new MySecurityExpressionRoot(authentication, fi);
		return root;
	}
}

To round out the example, here’s a template for the custom evaluator and security context configuration.

public class MySecurityExpressionRoot extends WebSecurityExpressionRoot {
	public MySecurityExpressionRoot(Authentication a, FilterInvocation fi) {
		super(a, fi);
	}

	public boolean canIDoThis() {
		// Your clever authentication here.
		// 'this' has access to the authentication token, request, and other goodies.
		// If conditions are satisfied, return true; otherwise...
		return false;
	}
}

<http use-expressions="true" ... >
	<intercept-url pattern="/public/stuff/**" access="permitAll"/>
	<intercept-url pattern="/private/stuff/**" access="denyAll"/>
	<intercept-url pattern="/members/only/stuff/**" access="isAuthenticated()"/>
	<intercept-url pattern="/conditional/stuff/**" access="canIDoThis()"/>
	<expression-handler ref="mySecurityExpressionHandler" />
	... other configuration ...
</http>

<beans:bean id="mySecurityExpressionHandler"
	class="my.package.MySecurityExpressionHandler" />

Note that using expression-handler under http requires Spring Security 3.1 or higher, or at least an XSD patch.



Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

Hashes Make for Poor Traces

SQL and parameter tracing  is covered well by most ORMs like Hibernate, but you’re on your own when using JDBC directly.  It’s helpful to funnel JDBC access through one place with a common wrapper or utility class, but if all you get is the PreparedStatement, there are no public methods to get the SQL text and parameter array from it for tracing.  Unbewiebable.  PreparedStatement.toString sometimes includes useful information, but that depends entirely on the JDBC driver.  In my case, all I got was the class name and hash.  So I just went upstream a bit with my calls and tracing hooks so I could pass in the original SQL text and parameter array.
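To illustrate the upstream approach, a stripped-down version of the idea: capture the SQL and parameters at the point where you still have them, rather than trying to pry them back out of the PreparedStatement.  All names here are mine, not from my actual wrapper.

```java
import java.util.Arrays;

// Hypothetical upstream trace holder: keep the SQL text and parameter
// array alongside the statement so tracing has something to print.
public class TracedSql {
	private final String sql;
	private final Object[] params;

	public TracedSql(String sql, Object... params) {
		this.sql = sql;
		this.params = params;
	}

	public String traceString() {
		return sql + " -- params: " + Arrays.toString(params);
	}
}
```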

Samy Isn’t My Hero

You’d think URL rewriting went out with the Backstreet Boys, but unfortunately, JSESSIONID is alive and well, and frequenting many address bars.  This week, I’ve been focused on hack-proofing (counting down the OWASP Top Ten) so I should be a bit more sensitive to these things.  Yet I inadvertently gave away my session when emailing a link within the team management system I use (does it count as a stolen session if you give it away?)  Not only does this system include the direct sessionguid in the URL, it also doesn’t validate session references (like by IP address) and it uses long expirations.

At least this system invalidates the session on logout, so I’ve resumed my habit of manually logging out when done.  That, and attention to URL parameters is a burden we must all carry in this insecure web world.  Site owners, update your sites.  Until then, let’s be careful out there.
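On the “site owners, update your sites” front: if your container supports Servlet 3.0 or later, you can shut off URL rewriting entirely so JSESSIONID never lands in an address bar.  A minimal web.xml fragment:

```xml
<session-config>
    <!-- Track sessions by cookie only; never rewrite session IDs into URLs. -->
    <tracking-mode>COOKIE</tracking-mode>
</session-config>
```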

Virtual Time Zone

While directing offshore developers last year, thinking 9 1/2 or 10 1/2 hours ahead finally became natural.  Having an Additional Clock set in Windows 7 provided a quick backup.  Amazingly, some software still isn’t timezone-aware, causing problems such as missed updates.  I won’t name names or share details, but I opted for a simple solution: keep a VM set in IST and use it for this purpose.  Virtual servers are cheap these days, and it’s easy enough to keep one in the cloud, hovering over the timezone of choice.

SQL1092 Redux

Yes, bugs know no geographical boundaries.  A blog reader from Zoetermeer in the Netherlands dropped me another note this week, this time about some SQL1092 issues he encountered.  Full details are here; the quick fix was the DB2_GRP_LOOKUP change.

This DB2 issue comes up frequently enough that DB2 should either change the default setting or also adopt WebSphere MQ’s strategy of using a separate limited domain ID just for group enumeration.  Those IBM’ers should talk.

Those Mainframe Hippies

I traditionally think of mainframe DB2’ers as belt-and-suspenders codgers who check your grammar and grade your penmanship, while viewing DB2 LUW users as the freewheeling kids who sometimes don’t know better.  That changes some with each new version, as the two platforms become more cross-compatible.

So I was surprised this week to find that DB2 for z/OS was more flexible than DB2 LUW on timestamp formats.  Mainframe DB2 has long accepted spaces and colons as separators, but unlike DB2 LUW, you can mix and match ’em.  For example, DB2 z/OS 9 and higher will take any of the following:

'2012-03-16 19:20:00'



'2012-03-16 19.20.00'

DB2 LUW (9.7) will reject the last two with SQL0180 errors.

Knowing the limits is important when writing code and scripts that work across multiple DB2 platforms and versions.  The problem could get worse as DB2 LUW adds more Oracle compatibility, but as long as the DB2 kids and codgers stay in sync, we can enjoy some format flexibility there.

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

It’s a Big World After All

I got the complaint again this week: “DB2 is so stupid; why does it sometimes take two bytes for a character?”  I’ve answered that before, and the best solution is: if you’re storing truly binary data, use a binary type (like FOR BIT DATA).  But if it really is text, I usually just send a link to Joel Spolsky’s classic article on character encodings.  And here’s a quiz – let me know what the following says:

אם אתה יכול לקרוא את זה, אתה תגרוק קידודים
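Whatever it says, the two-bytes complaint itself is easy to settle empirically (the class and method names here are mine, for illustration):

```java
import java.io.UnsupportedEncodingException;

// Quick demonstration that byte length and character length diverge
// once you leave plain ASCII.
public class EncodingDemo {
	public static int utf8Bytes(String s) throws UnsupportedEncodingException {
		return s.getBytes("UTF-8").length;
	}
}
```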

The counterpoint to Joel’s article, or at least to blindly applying Postel’s Law to web apps, is that it can open the door to all kinds of security vulnerabilities: SQL injection, cross-site scripting, CSRF, etc.  So it’s common to have a blacklist of disallowed characters or a white list specifying the allowed character set.  The point is to be deliberate and think about what should be allowed: both the extent and limits.  Don’t treat binary data as text and don’t pretend there’s only one character encoding in the world.
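For example, a deliberate white list for a name field might look like this (the allowed set is my choice for illustration; yours should match your actual requirements):

```java
import java.util.regex.Pattern;

// Hypothetical white-list validation: accept only the characters you
// have deliberately decided a name may contain.
public class NameValidator {
	// Letters (any language), spaces, hyphens, and apostrophes.
	private static final Pattern ALLOWED = Pattern.compile("[\\p{L}][\\p{L} '-]*");

	public static boolean isValidName(String s) {
		return s != null && ALLOWED.matcher(s).matches();
	}
}
```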

Different, Not Better

Having endured VMWare’s product gyrations through the years, I’ve learned that upgrading it isn’t always best: sometimes I just get different, not better.  I feel the same way about Windows 8.

In this latest case, I found my beloved but now-obsolete VMWare Server can’t host the new Windows 8 CP edition, so I considered switching to VMWare Workstation 8.  But the latest VirtualBox (4.1.8) works, so I downloaded that, installed Windows 8 from ISO, and quickly had it running in a VM.  Glad I didn’t have to play a pair of 8s.

I suppose that’s a side benefit when leading players like VMWare and Microsoft make revolutionary changes to their incumbent products.  Major (often costly) upgrades create good opportunities to try out competing alternatives.

That Other Browser

I’ll always need the latest Windows and Internet Explorer (IE) versions running in some mode (VM or otherwise) just to test and fix IE browser compatibility issues.  This week, I ran into an “Access denied” JavaScript error in IE 8 – Intranet zone, Protected Mode.  As is often the case, my code worked fine in Firefox, Chrome, and other browsers.

I wanted to pin down exactly which IE setting was causing the problem, so I tweaked some likely suspects in custom Security Settings but could never nail it down to just one.  There was nothing in the Event Log to really help, just red herrings.  And apparently neither Protected Mode nor IE ESC was the root cause.  Switching zones (such as adding to trusted sites) didn’t work either, and extreme measures only made things worse.  Ultimately, doing a Reset all zones to the default level resolved the issue.

IE has far too many obscure and non-standard security settings that don’t work as expected.  And there’s no direct way to get a consolidated dump of all current settings.  Microsoft continues to work on fixing this, so I’m hopeful things will improve; they really can’t get worse.  Until then, the solution may continue to be “switch browsers” or “take one reset and call me in the morning.”

Jasper Variability

Jasper Reports (iReport) variables and groups are useful features.  But they can be frustrating at times because of documentation limits (yes, even with The Ultimate Guide) and differences between versions and languages (Groovy v. Java).  Sometimes the answer must be found by stepping through the code to determine what it wants.  That was necessary for me to get my group totals working.

For example, imagine a sales report grouped by region where you want group footers totaling the number of Sprockets sold for each region.  Standard variable calculations won’t work, and the Variable Expression must handle anything that comes its way, including null values from multi-line entries.  So, rather than using a built-in Counter variable with an Increment Type, I had to use a counter-intuitive variable, like so:

Property                   Value
Variable Class             java.lang.Integer
Calculation                Nothing
Reset type                 Group
Reset group                regiongroup
Increment type             None
Variable Expression        ($F{TYPE} != null && $F{TYPE}.equals("Sprocket")) ? new java.lang.Integer($V{sprtotal}+1) : $V{sprtotal}
Initial Value Expression   new java.lang.Integer(0)

In JRXML, that comes to:

	<variable name="sprtotal" class="java.lang.Integer" resetType="Group" resetGroup="regiongroup">
		<variableExpression><![CDATA[($F{TYPE} != null && $F{TYPE}.equals("Sprocket")) ? new java.lang.Integer($V{sprtotal}+1) : $V{sprtotal}]]></variableExpression>
		<initialValueExpression><![CDATA[new java.lang.Integer(0)]]></initialValueExpression>
	</variable>



Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

CSS – Space Matters

Guess you could say I was “lost in [a] space” wondering why my browser didn’t pick up my div’s CSS class.  Turns out, I wrote the selector as #myid .myclass (descendant selector) but really meant #myid.myclass (combined selector).  Thank you, 2.5x reading glasses and Chris Coyier, for the quick save.

Point of No Return

JavaScript and jQuery iterator functions are awfully convenient, but it’s easy to overlook scope.  While refactoring some code today, I wrapped a function around some iterator code for reuse and broke it. Here’s a simplified example to demonstrate the problem:

     function myFunction() {
        $.each(['a', 'b', 'c'], function(idx, ea) {
           if (ea === 'b') {
              return ea;
           }
        });
     }
At a glance, you’d think myFunction returns b, but since the inner function’s result is never passed out, it returns undefined.  And jQuery’s each method just returns the collection, so adding a return in front doesn’t help.  jQuery does have some alternatives (like filter) that can help, but those aren’t as flexible as each.

The simple solution is to just carry the value to the outer scope, like so:

     function myFunction() {
        var result;
        $.each(['a', 'b', 'c'], function(idx, ea) {
           if (ea === 'b') {
              result = ea;
              return false;
           }
        });
        return result;
     }

Adding the “return false” stops the iteration at the first match.

Is this Thing Used?

My current project has this DB2 LUW fan coding against a DB2 for z/OS database.  While I give up some things, I also gain a lot.  For example, while working on a DAO, I had questions about a particular table’s indexes.  Fortunately, DB2 for z/OS has SYSINDEXSPACESTATS which, among other things, keeps track of the last time an index was used (even for dynamic SQL) to help determine its merits.  So I first checked to see if this was at least version 9:

select getvariable('SYSIBM.VERSION') from sysibm.sysdummy1;

… and then answered my questions with a simple query like the following:

select i.name, i.tbname, i.creator, i.uniquerule, i.colcount, i.clustering, s.lastused
from sysibm.sysindexes i, sysibm.sysindexspacestats s
where i.name = s.name
and i.creator = s.creator
and i.dbname = 'MYDB'
and i.tbname like 'MYPREF%'
order by s.lastused

Can’t wait for this to make its way to DB2 LUW.


Yes, I presently have to use Microsoft Visual SourceSafe (VSS).  I’m over it now.  When using this “lock first” VCS against a deep project hierarchy, I often need to see a quick list of my check-outs.  I can use View->Search->Status Search (with subprojects selected) from the VSS GUI, but that’s often too slow over a remote VPN connection.  So I cooked up a quick script to run over an ssh command line from a server that is “local” to the repository:

cd /d "C:\Program Files\Microsoft Visual SourceSafe"
set SSDIR=\\myserver\VSS\myvssshare
ss STATUS $/MyProject -R -Umyusername

Flexigrid Echo

Imitation isn’t always flattering; it’s often annoying.  Having two Flexigrids on one page, both with pagers enabled (usepager: true) yielded some weird mocking behavior.  For example, paging forward on one grid caused both grids to show the same “Page X of Y” message.  After debugging, I found that similar bugs had been reported and fixed in the latest release.  Upgrading Flexigrid resolved that one.

Keeping Baby and Bathwater

After untangling some weirdness this afternoon, I feel like joining the masses storming the JavaScript Bastille.  But it’s too late for that with this “can’t live with it, can’t live without it” language.

A Big Baby

JavaScript is unmatched for its immaturity and ubiquity.  Both, of course, are intertwined.  With an installed base of a few billion JavaScript-enabled browsers out there, it’s virtually impossible to break compatibility and significantly fix this underdeveloped language.  The slow progress of the ECMAScript committee doesn’t help matters.  Hence so many “compile to JavaScript” tools and languages: GWT, CoffeeScript, Dart, etc.  But if going non-native were truly a good idea, we’d all be debugging bytecode.

As the size of our JavaScript code bases continues to explode (propelled by richer clients and SSJS), maybe we should finally accept that the raw, unevolved language is here to stay and, since we can’t fix it, do a better job of propping it up.  Since many (perhaps most) JavaScript developers don’t naturally write clean code nor stick to The Good Parts on their own, we need code checkers.  Yes, I mean lint, but one that folks will actually use.

JSLint – A Good Start

If a proliferation of cross-compilers condemns a language, the proliferation of lint tools condemns its compilers, editors, and IDEs.  Lint tools typically fill the gap where a language lacks or the tools haven’t caught up or caught on.  Few people still use a separate C lint because most compilers and IDEs have integrated their most important checks.  And PMD became less important to me as I started tweaking Eclipse’s Java compiler warnings (although it still needs better file and package filtering).

There are a few JavaScript lint options; I prefer Douglas Crockford’s own JSLint and its recent JSHint fork.  It can be a bit pedantic, overemphasizing style.  Some of the trivial style checks can be disabled (for example, white: true and vars: true), while others can’t (“Mixed spaces and tabs”, really??).  But it’s helpful enough at catching things to be worth using in spite of its false alarms and awkwardness.

But what JSLint lacks most is integration: both with embedded source and standard tools.  That is, JavaScript source is often in server-side pages like JSPs, intermixed with form tags, embedded foreign language, and other noise that JSLint can’t handle.  The work-around is to let the server render it and then paste it into JSLint, but couldn’t someone integrate JSP parsing smarts with JavaScript validation?  And could we focus energies on making it an extension of Eclipse’s JavaScript validation: one that’s actually useful?

Perhaps smarter code analysis in this loose language is a hard nut to crack.

The Next Level

And that certainly makes sense: static analysis of JavaScript is hard because it isn’t just a dynamic language, it’s the wild west.  Is x a variable, function, property, or value?   Is it local, global, or forgotten?  Without true scoping, access control, and other aids, context is very hard to discern.  JSLint helps by not letting you get away with ambiguity, but arbitrary belts and suspenders aren’t the best solution.  Since we humans often have to let the JavaScript page render and step through it in Firebug to see what’s really going on, can we really expect a computer to figure it out from flat source code?

I think so.  With enough (warranted) attention, JavaScript validation will improve, and we’ll even see better code assist and refactoring tools.  It may take some new ways of looking at the problem, but fortunately, there’s some promising new research happening now.

Looks like we’re stuck with this JavaScript baby and all its dirty bathwater.  But here’s hoping future tooling makes cleaning it up a lot easier.

Measuring Wins

As a follow-up to Saturday’s post and last night’s BCS title game, I was asked about RPI rankings for this year.  Strength of schedule and RPI are particularly important for NCAA football where, in this era of “buy games” and weak conferences, win-loss records must be taken with a grain of salt.  It’s not just about how you play, but also who you play.
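The conventional RPI formula weights a team’s own winning percentage at 25%, its opponents’ at 50%, and its opponents’ opponents’ at 25%.  In sketch form (assuming that standard weighting; ranking bodies and sports vary it):

```java
// Conventional RPI: 25% own winning percentage, 50% opponents',
// 25% opponents' opponents'.
public class Rpi {
	public static double rpi(double wp, double owp, double oowp) {
		return 0.25 * wp + 0.50 * owp + 0.25 * oowp;
	}
}
```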

So I gathered 2011 NCAA data and ran it through the RPI programs we coded as Friday Fragments this time last year; see: http://derekwilliams.us/fragments/rpi.html.  The top 10 teams by RPI are:

Rank Name Record Win % RPI
1 LSU 13-1-0 0.929 0.701
2 Alabama 12-1-0 0.923 0.676
3 Oklahoma St. 12-1-0 0.923 0.670
4 South Carolina 11-2-0 0.846 0.643
5 Arkansas 11-2-0 0.846 0.638
6 Kansas St. 10-3-0 0.769 0.634
7 Baylor 10-3-0 0.769 0.633
8 Oklahoma 10-3-0 0.769 0.631
9 Michigan 11-2-0 0.846 0.630
10 Oregon 12-2-0 0.857 0.626

Teams like Stanford, Boise State, and Wisconsin aren’t in this top ten.  That’s because, with their low strength of schedule ratings, the computer just doesn’t like them.

Playoff Prognostication

“Prediction is very difficult, especially about the future.”

Niels Bohr

It’s NFL playoff time again, and seemingly everyone’s prognosticating.  Last year’s late rise of the Packers (6th seed who won it all) proved again that “past performance does not guarantee future results.”  But since a calculated guess is better than a wild one, I thought I’d run my own numbers using the power ranking tools we developed as Friday Fragments this time last year.

I gathered fresh data (2011 NFL regular season results) and ran it through http://derekwilliams.us/fragments/tourney.html.  Of the resulting top 12 teams (by power ranking), 11 are in the playoffs:

Rank Team Power
1 Packers 90
2 49ers 86
3 Saints 83
4 Steelers 83
5 Ravens 70
6 Bengals 68
7 Texans 67
8 Patriots 67
9 Giants 64
10 Lions 63
11 Falcons 61
12 Raiders 56

The Raiders (#12) didn’t make the playoffs; instead we have the Broncos who won the AFC West, but rank #22 (power 41).

This has the Packers winning it all again.  But with my Falcons at #11, I’m hoping for a few surprises, starting with tomorrow’s game against the Giants.

Wait for it

“If there are enterprise beans in the application, the EJB deployment process can take several minutes. Please do not abandon all hope while the process completes.”

WebSphere deployment message (paraphrased)

Of all the many ills of EJBs, slow deployments are among the most chronic.  New products like JRebel attempt to reduce this time impact, but with inconsistent results.  And hot-swapping code while debugging often fails for all but the most trivial changes.  So if each significant code change requires a long wait for redeployment, it at least better work.

That’s why today’s deployment failures were so annoying.  I could see from console logs that RMIC was running with no errors, but the EJB stub classes were missing from the resulting JARs.  This was a new MyEclipse workspace (Blue edition, version 9.1), and I had never hit this problem before.  I re-checked my configuration and verified that I had enabled EJBDeploy for my WebSphere 6.1 target server (Properties – MyEclipse – EJB Deploy).  While poking around, I stumbled on the problem: kicking off a manual run by clicking “Apply and run EJB Deploy” finally produced an error message – it could not create the files because the .ejbDeploy sub-folder was missing under my workspace path.  I manually created that folder and it worked.

I don’t know why MyEclipse or ejbdeploy couldn’t just create the folder, nor why it wouldn’t report the error to the console during normal deployments.  But since this manual work-around resolved the problem, I’ll know what to do next time it happens.

Problem solved, and back to waiting on deployments again.  Thanks… I think.

Please Design Responsibly

Handled correctly, loose typing can be a powerful thing, but with great power comes great responsibility.  The license to enjoy the benefits of dynamic typing comes with the responsibility to actually design with real objects.  If one just strings together collections of simple types, weaker typing can lead to some very confusing code.

I was reminded of this today in an improbable way.  I uplifted some old, unfamiliar code to a modern Java version, which included adding in generics: taking untyped collections toward strongly-typed ones.  This meant grokking and stepping through the code to infer data types where nested collections were used, and it smoked out what the design really looked like.  One commonly-used structure came to this, and there were several others like it:

 HashMap<String, HashMap<String, HashMap<String,
                           HashMap<String, HashMap<String, Vector<String[]>>>>>>

Wow.  I wish I was joking.

This just screams, “make objects!”  It reminded me of perhaps why the “everything in triplicate” crowd fears dynamic typing.  If you aren’t designing with objects, then strong typing partly saves you from yourself.  In this case, the addition of crazy nested generics actually made the subsequent code easier to understand and less error-prone.

But a much better approach, of course, is to create new classes to handle what these nested collections do, with the added benefit of factored behavior.  In this case, the real objects exist very plainly in the problem domain; just follow the nouns.  A truly object-oriented design usually doesn’t need strong typing to clarify the design.
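To make that concrete with an invented domain (the post’s real one isn’t shown): instead of something like Map<String, Vector<String[]>>, name the nouns and give them a home.

```java
import java.util.ArrayList;
import java.util.List;

// Invented domain objects for illustration: one level of the nested-map
// pile becomes a class with a name and behavior of its own.
public class Customer {
	private final String name;
	private final List<Order> orders = new ArrayList<Order>();

	public Customer(String name) { this.name = name; }
	public String getName() { return name; }
	public void addOrder(Order order) { orders.add(order); }
	public List<Order> getOrders() { return orders; }
}

class Order {
	private final String[] lineItems;
	public Order(String... lineItems) { this.lineItems = lineItems; }
	public String[] getLineItems() { return lineItems; }
}
```

Call sites then read as prose – customer.getOrders().get(0).getLineItems() – rather than a parade of gets against anonymous maps.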

This does require diligence and I’ve often been guilty of missing objects, too.  Yet perhaps this example reveals a new code quality metric: if you need more than two adjacent angle brackets to make the “raw type” compiler warnings go away, it’s time to step back and look for objects.


The standard syntax highlighting in gvim is fine for short stints, but it can grow tough on the eyes after awhile.  At just the right moment last night (after a long day that brought eyestrain to a peak), Douglas Putnam’s most excellent blog pointed me to Solarized, Ethan Schoonover’s well-designed color scheme alternative.  I browsed the screen shots and then wasted no time adding it to gvim, like so:

git clone https://github.com/altercation/vim-colors-solarized.git

Copy colors/solarized.vim to my Vim colors folder

Edit _vimrc to add the following:

syntax enable
set background=light
colorscheme solarized

The Linux setup is similar; just copy to .vim and edit .vimrc.

I use the menu to switch between light and dark backgrounds whenever I need an eye-relaxing change of pace, and for some languages.  I like it so much that I’ll probably add it to my terminals soon.

If life in gvim Technicolor is wearing you out, I suggest giving the Solarized color scheme a try.