Git Merge and Diff

When it comes to comparing and merging changes with git repositories, I typically use EGit’s tools or just plain vim with “git diff” and “git merge” from a command line.  But for some projects, I work outside Eclipse yet still want a graphical view.  Such was the case today when working with a large set of changed files under Windows.

To set up WinMerge as a custom difftool and mergetool, I pieced together several recommendations and created the following.

.gitconfig

        ...
        [diff]
                tool = winmerge
        [difftool "winmerge"]
                cmd = c:/usr/bin/git-difftool.bat \"$LOCAL\" \"$REMOTE\"
        [difftool]
                prompt = false
        [merge]
                tool = winmerge
        [mergetool "winmerge"]
                cmd = c:/usr/bin/git-difftool.bat \"$LOCAL\" \"$REMOTE\"

git-difftool.bat

	@echo off
	:: Convert the Cygwin-style paths git passes in to Windows paths for WinMerge.
	for /f "delims=" %%a in ('cygpath -w %1') do @set file1=%%a
	for /f "delims=" %%a in ('cygpath -w %2') do @set file2=%%a
	"WinMergeU.exe" -e -ub "%FILE1%" "%FILE2%"

Works like a charm.  I can use just “git diff” and “git merge” (with vim) for quick work, and use “git difftool” or “git mergetool” when I want a GUI for more involved comparing/merging.

Colophon

Like O’Reilly cover art, today’s xkcd comic at top right is only loosely related to this content.  But it’s awesome, so follow the link.

Just the Facts

Local news sites are notorious for low signal-to-noise ratios.  The news content is good, but that’s often crowded out by excessive ads, Flash videos, runaway JavaScript, and animated GIFs. These things make 90s websites look clean and elegant. I get seasick visiting them.

My typical antidote has been to just stick to RSS, and let blockers like Chrome’s Click to Play squelch things when I have to visit the site.  But the Atlanta Journal-Constitution (AJC) recently broke their RSS feeds while at the same time expanding their JavaScript and floating-div monstrosities.  What’s a guy to do when he just wants to read the latest Falcons and Georgia Tech news?

Well, I took the nuclear solution and went with lynx.  Yes, lynx: the old text mode browser. Whenever I want to read content from the AJC or similar news sites, I just fire it off from the command line and browse away. It works well, and I can do a quick news check in no time. Hopefully, the AJC won’t start disallowing or punishing lynx use.

EMV Day Now

When it comes to consumer technologies, we in the US often let the rest of the developed world “leap frog” us, frequently with our own innovations.  The main culprits are typically our size and social adoption curves.  When you have an installed base of familiar and comfortable (but old) technologies numbering in the hundreds of millions, transition takes a while.  So we’re stuck with broad use of anachronistic things like CDMA cell phone networks, Windows XP, checks, and skimmable mag stripe credit cards.  In payments, where adoption is key, it often takes significant financial and regulatory incentives to bring in the new.

As card fraud escalates, US payment networks are stepping up incentives to migrate to chip-embedded credit and debit cards using the Europay-Mastercard-Visa (EMV) standard.  For example, Visa’s new October 2015 fraud liability shift (from issuer to merchant) for non-EMV transactions provides the looming punitive “stick,” while their recently announced common debit solution and Technology Innovation Program (TIP) provide some “carrots.”  But that’s all “network push” with little “consumer pull.”  Hopefully, as more EMV cards roll out in the US, consumers will value the extra security, and competitive pressure will motivate issuers to send out those new cards quickly.  EMV doesn’t solve all card fraud problems, but it’s a step worth taking.  The costs of fraud affect us all, and it’s time we caught up with the rest of the world.

Tight Spring

I still recall the wise advice of a friend while I was designing and building a set of application frameworks over two decades ago: “create as many hooks as possible and be quick to add new ones.”  So for each service flow I noted every interesting step and created extension points.  I didn’t think most of these would be used but, over time, nearly all were.  And as users requested new hooks, I quickly added them.

The Spring frameworks, with their IoC underpinnings, are built for extensibility.  The application context is a software breadboard allowing all sorts of custom wirings and component swapping.  And namespace configuration elements make for much clearer and cleaner XML.  When done right, the namespace configuration captures most core extension points, documents them well, and makes them easy to use.  But shortcuts, inflexible designs, and atrophy can cause parts of Spring to be too tight.  Here are just a couple of examples I encountered this week.

Spring Security – Session Management Filter

A customer’s single sign-on (SSO) flow required tweaks to SessionManagementFilter, so I built my own: a subclass with just a simple override.  But swapping it in turned out to be quite clumsy, and basic things like setting the invalid-session-url in the namespace no longer work as documented.  Since I now have to specify the complete bean construction in the security context anyway, I decided to just replace the SimpleRedirectInvalidSessionStrategy.  Here again, I just needed one method override but, alas, it’s a final class.  So, with some copy-and-paste reuse and ugly XML, I finally got what I needed; here’s the gist:

 <http use-expressions="true" ...>
     ...
     <!-- Disable the default session mgt filter: -->
     <session-management session-fixation-protection="none" />
     <!-- ... and use the one configured below: -->
     <custom-filter ref="sessionManagementFilter" position="SESSION_MANAGEMENT_FILTER" />
 </http>

 <!-- Configure the session management filter with the custom invalid session re-direct. -->
 <beans:bean id="sessionManagementFilter"
     class="org.springframework.security.web.session.SessionManagementFilter">
     <beans:constructor-arg name="securityContextRepository"
         ref="httpSessionSecurityContextRepository" />
     <beans:property name="invalidSessionStrategy"
         ref="myInvalidSessionStrategy" />
 </beans:bean>
 <beans:bean id="httpSessionSecurityContextRepository"
     class="org.springframework.security.web.context.HttpSessionSecurityContextRepository" />
 <beans:bean id="myInvalidSessionStrategy"
     class="com.writestreams.my.common.service.security.MyInvalidSessionStrategy">
     <beans:constructor-arg name="invalidSessionUrl" value="/basic/authentication" />
     <beans:property name="myProperty" value="myvalue" />
 </beans:bean>
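
For reference, here’s a minimal sketch of what that replacement strategy might look like.  The class name, constructor argument, and myProperty are just the hypothetical ones wired up in the XML above; the redirect mimics what the final SimpleRedirectInvalidSessionStrategy does before the custom SSO logic takes over.

import java.io.IOException;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.security.web.session.InvalidSessionStrategy;

public class MyInvalidSessionStrategy implements InvalidSessionStrategy {
	private final String invalidSessionUrl;
	private String myProperty;  // hypothetical custom setting from the XML above

	public MyInvalidSessionStrategy(String invalidSessionUrl) {
		this.invalidSessionUrl = invalidSessionUrl;
	}

	public void setMyProperty(String myProperty) {
		this.myProperty = myProperty;
	}

	public void onInvalidSessionDetected(HttpServletRequest request,
			HttpServletResponse response) throws IOException {
		// Start a fresh session as the default strategy does, then apply
		// whatever custom redirect logic the SSO flow needs.
		request.getSession();
		response.sendRedirect(request.getContextPath() + invalidSessionUrl);
	}
}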

I wish JIRAs like SEC-1920 weren’t ignored.

Spring Web Services – SOAP Headers

I built web service calls (using Spring-WS) to a site that requires authentication fields in the SOAP header.  It’s pretty pedestrian stuff, except that there are no direct Spring public methods to set header fields or namespaces.  Instead, I had to use the recommended WebServiceMessageCallback, like so:

webServiceTemplate.sendSourceAndReceiveToResult(getUri(), source,
        getWebServiceMessageCallback(), result);

private WebServiceMessageCallback getWebServiceMessageCallback() {
    return new WebServiceMessageCallback() {
        public void doWithMessage(WebServiceMessage message) {
            try {
                SoapMessage soapMessage = (SoapMessage) message;
                SoapHeader header = soapMessage.getSoapHeader();
                StringSource headerSource =
                        new StringSource(getAuthenticationHeaderXml());
                Transformer transformer =
                        TransformerFactory.newInstance().newTransformer();
                // Copy the authentication XML into the outgoing SOAP header.
                transformer.transform(headerSource, header.getResult());
            } catch (Exception e) {
                // handle exception
            }
        }
    };
}

That’s too verbose for what should have been one line of code.  I agree with JIRAs like SWS-479 that this should be simplified and extended.  Is an addHeader convenience method too much to ask?

BTW, when developing web service calls, I usually start with XMLSpy and soapUI to get the XML down first.  Once I switch over to coding, I typically set JVM system properties (proxySet, proxyHost, and proxyPort) to point to Fiddler2 so I can examine request and response packets.  It’s a nice arrangement, but I’m always looking for new ideas.  If you prefer a different approach, write me.
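
For the curious, those proxy settings amount to just a few lines; a minimal sketch, assuming Fiddler2 is listening on its default port (8888) on the local machine:

// Route outbound HTTP through a local Fiddler2 proxy to inspect packets.
// Fiddler listens on port 8888 by default; adjust for your setup.
System.setProperty("proxySet", "true");
System.setProperty("proxyHost", "127.0.0.1");
System.setProperty("proxyPort", "8888");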

Just Ask the Mayans

“When in doubt, predict that the present trend will continue.”

Murphy’s Laws

It’s now an annual tradition here to calculate year-end NFL power rankings using tools we developed as Friday Fragments.  So I grabbed 2012 NFL regular season records and ran them through our tournament matrix calculator.  Of the resulting top 12 teams (by power ranking), 10 are in the playoffs:

 Rank  Team       Power
    1  Falcons       91
    2  Texans        79
    3  Broncos       77
    4  Colts         76
    5  Ravens        76
    6  49ers         76
    7  Packers       72
    8  Patriots      67
    9  Bears         65
   10  Seahawks      59
   11  Bengals       59
   12  Rams          55

The Vikings and Redskins replace the Bears and Rams in real life.

These rankings look at the season as a whole, which is why my Falcons come out strongly on top.  But it’s a valid argument to weigh recent games more heavily; for example, if you remove the first 5 weeks, you wipe out the Broncos’ 3 losses, and they come in first.  Feel free to visit the page and run your own partial season calculations.

Of course, “past performance does not guarantee future results,” so that’s why they play ’em.  But one thing seems sure: it’s been an exciting regular season, and the postseason promises to be just as good.

jQuery Mobile

This week, I needed a quick mobile (smart phone) proof-of-concept for one of our web app products.  Since speed of development and broad platform support were more important than native device features, I took the mobile web route.  And as a jQuery fan, I chose jQuery Mobile (JQM).

JQM emphasizes doing as much as possible with simple HTML5 and CSS.  It uses data-* custom attributes to know how to add extra behaviors, rendered and tailored as needed for the target device.  Indeed, I found I could do most things with just markup and server-side code, and very little additional JavaScript.

JQM makes AJAX calls by default, but normally expects the results to be new page divs to add to the navigation stack.  This is different from my standard web app, where $.ajax calls return only data (XML and JSON), with bits of JavaScript to update DOM components from the results.  I decided to play along and add div wrappers around results, letting JQM navigate to them as simple linked pages (href=, action=) and dialogs (data-rel=”dialog”).  The end result was cleaner code (and smoother flow than standard postbacks would be), although I did give up a bit in look and feel.

Native app development requires real devices or emulators (such as the Android Virtual Device, AVD), but with a pure mobile web app, I was able to just use Chrome with the nice Ripple extension.  This removed all delays from the code-run cycle: yet another benefit of the mobile web / HTML5 world.

I went with a standard theme and did not (yet) need the extra step of packaging for offline use or adding a native wrapper (like PhoneGap/Cordova).  But those processes appear straightforward when the time comes.  There were a few little glitches along the way, but nothing I couldn’t work around with a little JavaScript.

jQuery Mobile is perhaps the go-to framework for HTML5 mobile web apps, certainly for those of us who are sold on the benefits of jQuery.

New Tricks

I recently resumed working a bit on a VA Smalltalk-based system I first created over a dozen years ago.  VA Smalltalk is a fantastic programming environment, but I quickly realized how much I missed automated build packaging in that world.  Each build takes only a few minutes, but it’s manual, requiring clicks through a GUI and waiting for results.  I wanted it automated now, like the rest of the continuous integration free world.

Fortunately, some folks have recently built tools to help.

I started with Ernest Micklei’s Melissa tool, a handy EpPackager front-end for scripting builds.  Using it to automate builds for our headed (GUI) images was straightforward: I just created a workspace and CMD file for my particular requirements.

But scripting headless (cross-development, XD) packaging was not so simple, due to the nature of passive images and the fact that the controls are hopelessly embedded in the XD development UIs.  Eventually, I found that Thomas Koschate cracked that nut with his nice HqaAutomatedBuildSupport tools (available at VASTGoodies.com).  To use it, I mainly just created my own AbtBuildSpecification to specify the maps, subsystems, and features to match what I used in my XD Image Properties.

For consistency’s sake, I wrote some code to invoke AbtBuildSpecification build from MelissaBuilder. When done, I set up the CMD files as Windows Scheduled Tasks (schtasks) to run overnight.

It’s a handy process, but clearly something that could benefit from standardization by the VA Smalltalk vendor.  For example, there are different ways to script startup loads: abt.cnf, Melissa, AbtImageStartup, or some combination of these.  Perhaps our friends at Instantiations will soon include this in the base product.  Our venerable community could certainly benefit from a common way of doing these new tricks.

The Pareto Lamp

I was asked over the Thanksgiving break to create an online store for soccer fundraising.  This planned Christmas Catalog was behind schedule and needed something fast.  With turkey on deck, there was no time to code anything from scratch, so I jumped into Softaculous and installed the highly-rated OpenCart.  Configuration was quick, and I soon had the 25-product store online and ready for business.

It can be just that easy in the LAMP Stack world.

That’s largely because most of what we do on the web has already been done countless times before. There’s rarely good reason to code a custom solution when open source options are readily available. That’s a very good thing, because an occupational hazard for code monkeys like me is that we often get asked to help with “computer things” like this.  We like to be able to knock out these common problems quickly and reserve most of our time for the uncharted territories that require more invention and programming horsepower.

I did have to browse the OpenCart source, but that was only to answer my questions about how to configure some things (like sales tax) that didn’t work quite right at first.  Since it follows a familiar MVC structure, navigating the PHP was straightforward.  In the end, I didn’t have to write a single line of code.

Nearly 80% of the server-side web is written in PHP.  And certainly less than 20% of overall programming work goes toward maintaining it.  I’m thankful for the LAMP stack and the Pareto principle that powers it, and will keep OpenCart and other ready Softaculous solutions handy.

Just Say It

I recently built a voice response system with Voxeo‘s software: their VoIP IVR (Prophecy) and VoiceXML server (VoiceObjects).  This stack is available in both on-premise (local install) and cloud versions (Evolution/VoiceObjects on Demand), and includes an Eclipse-based development environment, VoiceObjects Desktop for Eclipse (DfE).  VoiceObjects (VO) takes a little getting used to, but is a nice platform for developing call flows.  It sure beats hand-coding VoiceXML, and it integrated well with my Java service back-end.

VoiceObjects’ documentation is excellent, but I learned quite a lot by trial and error.  So I thought I’d share a few tips and lessons learned.

Use scripts and expressions

VO’s library of objects is quite rich, but you will inevitably run into some requirement that needs more than just objects with properties.  For example, I used nested concatenate expressions to build complex types for web service calls; split, index, findcell, strequal, and matchesregexp expressions to process web service results; and JavaScript to handle custom DTMF mappings. The prepackaged expressions are also helpful, such as the ones that grab ANI and DNIS from session state.

Set event handlers

It’ll take about 5 seconds to grow weary of the thickly-accented “service you are trying to reach is not available…” default message.  Get a friendlier message with more root cause details by setting the event handlers in your top-level module (they’ll be inherited by child objects).  Just open the Outline view, expand the Event Handling section, and add away.  You can add whatever outputs you’d like, and use different handlers for repeat occurrences.

Set default routing

Be sure you have a default routing (*) entry in the Prophecy Commander applications list, since surely someone will use the wrong SIP address or configure the wrong URI.  With a default routing entry, callers at least land in a recognizable place.

Get Blink

VO’s built-in SIP Phone is very basic, so you’ll find yourself looking for alternatives.  There are a lot of free SIP phones out there, but most require signing up to a service or can otherwise violate corp-rat security concerns.  I settled on Blink.  It’s a little buggy, but does the job.

Get an ATA

SIP phones are handy, but you’ll eventually want to pick up a POTS phone or cell and call into your IVR.  While you can assign PSTN phone numbers in Evolution/VoD, that doesn’t help with your local Prophecy install.  That’s where Analog Telephone Adapter (ATA) gateways come in.  I used the AudioCodes MP-114; the model with two FXO and two FXS ports provides both analog in and analog out options.

Get the latest version

Since Voxeo’s hosted environment uses VO 10, I started out with that version.  But I quickly ran into some problems that required recent fixes.  Voxeo was kind enough to build a custom VO 11 environment in an AWS instance for my cloud testing, but their GA hosted platform will be upgraded soon.  In the meantime, there’s really no reason not to use one of the recent versions: 11 or 12.

Add debug outputs

VO provides a Log object for tracing purposes, but I often found it easier to use additional Output objects to hear trace messages on the call.  By convention, I labelled these with a “Debug – ” prefix, and disabled them (but left them in place) when not used.

Use the logs

Even minor typos can cause VO to throw internal exceptions that leave you scratching your head.  Sometimes you can diagnose these with Debug and Trace, but in most cases the VoiceObjects logs (viewable from DfE) provide the best information.   If the problem is on the IVR side, use Prophecy’s built-in Log Viewer to hunt down the root cause.

Use the support forums

For some vendors, support fora are where problems go to die from neglect.  But that’s far from the case with Voxeo.  They have an excellent support team that quickly responds to issues.  You can browse existing posts for common problems and easily add new tickets.

Not So Fast

With the advent of DB2 10, IBM dropped its much-maligned Control Center tools in favor of a free version of IBM Data Studio.  I started using Data Studio back in April and wrote about some positive first impressions here.  But I shouldn’t have blogged so soon.

For occasional use with certain tasks on fast hardware, Data Studio is fine.  But if you want to get a lot done quickly, it can really get in the way.  It’s just too bloated, too slow, too cumbersome, too slow, too awkward, and too slow.  Did I mention that it’s slow?

So I find myself doing even more now with the command line.  When I need a GUI tool, I’ve gone back to using a tool I wrote myself, TOAD, and free tools like DbVisualizer and Eclipse DTP.

Data Studio has been around for a while, and IBM is clearly committed to it.  But there remains a lot of room for improvement.  Now that Data Studio is the built-in default, I hope IBM gets on that.  And quickly.

WebSpeared

This “very Monday” brought some unexpected troubles during a new deployment of my app.

My web services code had worked fine in a variety of environments (WebLogic, Tomcat, etc.), but failed in WebSphere 7 fixpack 23 with exceptions like the following:

org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.xml.xsd.commons.CommonsXsdSchemaCollection#0': Invocation of init method failed; nested exception is java.lang.NoSuchMethodError: org/apache/ws/commons/schema/XmlSchemaCollection.read(Lorg/xml/sax/InputSource;)Lorg/apache/ws/commons/schema/XmlSchema;

Since this had the symptoms of a typical classpath problem, I started there.  I had to remember that WebSphere’s version of -verbose as a JVM argument is Application server > Java and Process Management > Process definition > Java Virtual Machine – Verbose class loading.  I also did a quick check to see what I was using in other environments:

   Class.forName(name).getProtectionDomain().getCodeSource().getLocation().toURI();

This revealed that the obsolete XmlSchemaCollection class came from WebSphere’s old version of org.apache.axis2.jar in …\IBM\WebSphere\AppServer\plugins.  In spite of the “plugin” name, WebSphere is hopelessly entangled with Axis 2, so removing it was not an option.  So I preempted WebSphere by providing my own xmlschema-core-2.0.2.jar, and made it part of my distribution by adding it to the pom.xml:

        <dependency>
            <groupId>org.apache.ws.xmlschema</groupId>
            <artifactId>xmlschema-core</artifactId>
            <version>2.0.2</version>
        </dependency>

Finally, I set “Classes loaded with local class loader first (parent last)” in WebSphere’s class loader settings for my app.  In some cases, I had to also place xmlschema-core-2.0.2.jar in a shared library folder.

These extra steps are a bit annoying, but I suppose that’s the price paid for using a Java EE container that lags a bit behind on open source frameworks.

Broken JARs

Since I work on a few apps that use the DB2 JDBC Type 4 drivers, work-arounds for common issues have become familiar.  But when the corrupted DB2 JAR issue resurfaced today, I realized that I had not posted about it.  Now is as good a time as any to make up for that neglect…

The problem is that for DB2 LUW versions between 9 and 9.5, IBM published db2jcc.jar files with corrupt classes.  In some Java EE environments, the container doesn’t fully scan JARs, so it doesn’t really matter if an unused class is wonky.  But many containers do scan, causing exceptions like the following:

SEVERE: Unable to process Jar entry [COM/ibm/db2os390/sqlj/custom/DB2SQLJCustomizer.class] from Jar [jar:file:…lib/db2jcc.jar!/] for annotations
org.apache.tomcat.util.bcel.classfile.ClassFormatException: null is not a Java .class file
at org.apache.tomcat.util.bcel.classfile.ClassParser.readID(ClassParser.java:238)
at org.apache.tomcat.util.bcel.classfile.ClassParser.parse(ClassParser.java:114)

That particular exception is from Tomcat 7.

IBM acknowledges the problem (see http://www-01.ibm.com/support/docview.wss?uid=swg1LI72814) and offers a work-around: edit the JAR file to remove the offending classes.

Edit the JAR?  Really?!  A quicker, surer solution is to just grab a good JAR from http://www-01.ibm.com/support/docview.wss?uid=swg21363866.

Quoth the Maven

I’ve enjoyed Maven’s built-in maven-eclipse-plugin as a handy little tool for creating Eclipse projects from Maven POMs.  But I was recently lured into trying m2eclipse (a.k.a. m2e) because of its feature set.  After all, common Maven tasks (running builds, editing POMs, adding dependencies, updating repos, etc.) can certainly benefit from some tooling and automation.

So I installed the plug-in into Eclipse and then ran the obligatory Configure -> Convert to Maven Project. Imagine my surprise when I was immediately rewarded with the exception:

“Updating Maven Project”. Unsupported IClasspathEntry kind=4

Turns out, this is a known bug due to the fact that m2e and maven-eclipse-plugin use two different approaches for classpathentry values in .classpath files.  If you try to migrate a project created with eclipse:eclipse (like mine and countless others were), you get this error.  M2e went so far as to add this comment to my .project file:

NO_M2ECLIPSE_SUPPORT: Project files created with the maven-eclipse-plugin are not supported in M2Eclipse.

Can’t we all just get along?  Can’t Maven plugin developers agree on a standard for this fairly common and straightforward requirement?

Since interoperability with multiple environments and the command-line are important to me, I decided to just ditch m2e.  I’m simply not willing to lock my project into this plug-in, closing the other (more standard) options.  If m2e adds compatibility in a future release, I’ll try it again.  And if not?  Nevermore.

Secret Handshake

Since security is king in my corp-rat world, standards dictate that my public web services be accessed via mutual authentication SSL.  The extra steps this handshake requires can be tedious: exchanging certs, building keystores, configuring connections, updating encryption JARs, etc.  So when helping developers of a third party app call in, it’s useful to provide a standard tool as a non-proprietary point of reference.

This week I decided to use soapUI to demonstrate calls into my web services over two-way SSL.  The last time I did something like this, I used keytool and openssl to build keystores and convert key formats.  But this go ’round I stumbled across this most excellent post which recommends the user-friendly Portecle tool, and steps through the soapUI setup.

Just a few tips to add:

  • SoapUI’s GUI-accessible logs (soapUI log, http log, SSL Info, etc.) are helpful for diagnosing common problems, but sometimes you have to view content in bin\soapui-errors.log and error.log.   Take a peek there if other diags aren’t helpful.
  • SoapUI doesn’t show full details of the server/client key exchange.  You can get more detailed traces with a simple curl -v or curl --trace; for example:

curl -v -E mykey.pem https://myhost.com/myservice
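
And if the third-party developers are testing from Java rather than soapUI or curl, the standard JSSE system properties are the quickest way to wire up the same two-way SSL; a minimal sketch, with placeholder store names and passwords:

// Client identity for two-way SSL, plus the trusted server certs.
// Store paths and passwords here are placeholders for your own.
System.setProperty("javax.net.ssl.keyStore", "client-keystore.jks");
System.setProperty("javax.net.ssl.keyStorePassword", "changeit");
System.setProperty("javax.net.ssl.trustStore", "client-truststore.jks");
System.setProperty("javax.net.ssl.trustStorePassword", "changeit");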

Happy handshaking!

Mocking J

Although I’m a perennial test-driven development (TDD) wonk, I’ve been surprised by the recent interest in my xUnits, many of which are so pedestrian I’ve completely forgotten about them.  After all, once the code is written and shipped, you can often ignore unit tests as long as they pass on builds and you aren’t updating the code under test (refactoring, extending, whatever).  Along with that interest has come discussions of mock frameworks.

Careful… heavy reliance on mocks can encourage bad practice.  Classes under test should be so cohesive and decoupled they can be tested independently with little scaffolding.  And heavy use of JUnit for integration tests is a misuse of the framework.

But we all do it.  You’re working on those top service-layer classes and you want the benefits of TDD there, too.  They use tons of external resources (databases, web services, files, etc.) that just aren’t there in the test runner’s isolated environment.  So you mock it up, and you want the mocks to be good enough to be meaningful.  Mocks can be fragile over time, so you should also provide a way out if the mocks fail but the class under test is fine.  You don’t want future maintainers wasting time propping up old mocks.

So how to balance all that? Here’s a quick example to illustrate a few techniques.

public class MyServiceTest {
	private static Log logger = LogFactory.getLog(MyServiceTest.class);
	private MyService myService = new MyService();	                 // #1
	private static boolean isDatabaseAvailable = false;
 
	@BeforeClass
	public static void oneTimeSetUp() throws NamingException   {
		// Set up the mock JNDI context with a data source.	
		DataSource ds = getDataSource(); 
		if (ds != null) {                                        // #2
			isDatabaseAvailable = true;
			SimpleNamingContextBuilder builder = new SimpleNamingContextBuilder();
			builder.bind("jdbc/MY_DATA_SOURCE", ds);
			builder.activate();
		}
	}	
 
	@Before
	public void setUp() {
		// Ignore tests here if no database connection
		Assume.assumeTrue(isDatabaseAvailable);                 // #3
	}
 
	@Test
	public void testMyServiceMethod() throws Exception {
		String result = myService.myServiceMethod("whatever");
		logger.trace("myServiceMethod result: " + result);      // #4
		assertNotNull("myServiceMethod failed, result is null", result);
		// Other asserts here...
	}
}

Let’s take it from top to bottom (item numbers correspond to // #x comments in the code):

  1. Don’t drag in more than you need.  If you’re using Spring, you may be tempted to inject (@Autowire) the service, but since you’re testing your implementation of the service, why would you?  Just instantiate the thing.  There are times when you’ll want a Spring application context and, for those, tools like @RunWith(SpringJUnit4ClassRunner.class) come in handy.  But those are rare, and it’s best to keep it simple.

  2. Container?  Forget it!  Since you’re running out of container, you will need to mock anything that relies on things like JNDI lookups.  Spring Mock’s SimpleNamingContextBuilder does the job nicely.

  3. Provide a way out.  Often you can construct or mock the database content entirely within the JUnit using in-memory databases like HSQLDB.  But integration test cases sometimes need an established database environment to connect to.  Those cases won’t apply if the environment isn’t there, so use JUnit Assume to skip them (see the getDataSource() sketch after this list).

  4. Include traces.  JUnits on business methods rarely need logging, but traces can be valuable for integration tests.  I recommend keeping the level low (like debug or trace) to make them easy to throttle in build logs.
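
For completeness, here’s a minimal sketch of the getDataSource() helper the test above assumes, using Spring’s SimpleDriverDataSource and an in-memory HSQLDB URL as placeholders.  Returning null is what lets the Assume logic skip cleanly:

private static DataSource getDataSource() {
	try {
		// Placeholder connection details; an in-memory HSQLDB URL
		// keeps the test self-contained, or point at a real test database.
		SimpleDriverDataSource ds = new SimpleDriverDataSource(
				new org.hsqldb.jdbcDriver(), "jdbc:hsqldb:mem:testdb", "sa", "");
		ds.getConnection().close();  // verify we can actually connect
		return ds;
	} catch (Exception e) {
		logger.info("Database unavailable; integration tests will be skipped");
		return null;  // leaves isDatabaseAvailable false, so setUp() skips
	}
}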

Frameworks like JMockit make it easy to completely stub out dependent classes.  But with these, avoid using so much scaffolding that your tests are meaningless or your classes are too tightly coupled.
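
For example, a JMockit stub can be as small as this; RateService and its method are hypothetical stand-ins for whatever dependency you need out of the way:

// Redefine RateService.currentRate() for every instance the test touches.
new MockUp<RateService>() {
	@Mock
	double currentRate(String currency) {
		return 1.25;  // canned value keeps the test deterministic
	}
};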

Just a few suggestions to make integration tests in JUnits a bit more helpful.

Use the Source

Spring’s online documentation is often quite helpful when getting started with most of their frameworks. That is, you walk through the examples, quickly hit something that doesn’t work, and then grab the source code and step through it, using the docs as a navigational aid.

Such was the case today when working through the Spring Web Services tutorial.  After fixing a few configuration issues, I got stuck on an exception thrown in MessageDispatcher.getEndpointAdapter:

java.lang.IllegalStateException: No adapter for endpoint […]: Is your endpoint annotated with @Endpoint, or does it implement a supported interface like MessageHandler or PayloadEndpoint?

After attaching source and debugging, I found that the JDomPayloadMethodProcessor was not in the list of the DefaultMethodEndpointAdapter‘s methodArgumentResolvers.  It seems it should have been since initMethodArgumentResolvers adds it whenever org.jdom.Element is present (found by the classloader), and it was present.

Now I had never used JDOM before, but I thought I’d play along since the example used it.  Besides, he who dies with the most XML frameworks wins, right?  Since the example had org.jdom, I had Maven fetch it.

Upon further debugging, I found that initMethodArgumentResolvers wasn’t initializing because the list had already been set by AnnotationDrivenBeanDefinitionParser.registerEndpointAdapters.  And that class was looking for org.jdom2.Element.

Doh!  Those Spring developers should talk.  Meanwhile, I just grabbed jdom2 and converted to it.

This debugging stint was a small price to pay for a web services framework that definitely beats Axis, Axis2, and others I’ve used.  But I look forward to the day when I can use a Spring framework as a black box.  Until then, I’ll keep going to the Spring Source code.  And, of course, wish founder Rod all the best in his new endeavors.

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

Today’s post will be the last of the Friday Fixes series.  I’ve received some great feedback on Friday Fixes content, but they’re a bit of a mixed bag and therefore often too much at once. So we’ll return to our irregularly unscheduled posts, with problems and solutions by topic, as they arrive.  Or, as I get time to blog about them.  Whichever comes last.

More Servlet Filtering

On prior Fridays, I described the hows and whys of tucking away security protections into a servlet filter. By protecting against XSS, CSRF, and similar threats at this lowest level, you ensure nothing gets through while shielding your core application from this burden.  Based on feedback, I thought I’d share a couple more servlet filter security tricks I’ve coded.  If either detects trouble, you can redirect to an error page with an appropriate message, and even kill the session if you want more punishment.

Validate IP Address (Stolen session protection)

At login, grab the remote IP address: session.setAttribute("REMOTE_IP_ADDR", request.getRemoteAddr()).  Then, in the servlet filter, check against it for each request, like so:

private boolean isStolenSession(HttpServletRequest request) {    
    String uri = request.getRequestURI();
    if (isProtectedURI(uri)) {
        HttpSession session = request.getSession(false);
        String requestIpAddress = request.getRemoteAddr();            
        if (session != null && requestIpAddress != null) {
            String sessionIpAddress = (String) session.getAttribute("REMOTE_IP_ADDR");
            if (sessionIpAddress != null)
                return !requestIpAddress.equals(sessionIpAddress);
        }
    }
    return false;
}

Reject Unsupported Browsers

There may be certain browsers you want to ban entirely, for security or other reasons.  IE 6 (“MSIE 6”) immediately comes to mind.  Here’s a quick test you can use to stop unsupported browsers cold.

private boolean isUnsupportedBrowser(HttpServletRequest request) {
    String uri = request.getRequestURI();
    if (isProtectedURI(uri)) {
        String userAgent = request.getHeader("User-Agent");
        for (String id : this.unsupportedBrowserIds) {
            if (userAgent.contains(id)) {
                return true;
            }
        }
    }
    return false;
}
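
To tie the pieces together, the filter’s doFilter method can be a minimal gatekeeper like the sketch below; the error page URL and the session kill are illustrative choices, not requirements:

public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
        throws IOException, ServletException {
    HttpServletRequest request = (HttpServletRequest) req;
    HttpServletResponse response = (HttpServletResponse) res;

    if (isStolenSession(request) || isUnsupportedBrowser(request)) {
        HttpSession session = request.getSession(false);
        if (session != null) {
            session.invalidate();  // optional extra punishment
        }
        response.sendRedirect(request.getContextPath() + "/error");
        return;
    }
    chain.doFilter(req, res);
}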

Community Pools

In my last Friday Fixes post, I described how to use Apache DBCP to tackle DisconnectException timeouts.  But as I mentioned then, if your servlet container is a recent version, it will likely provide its own database connection pools, and you can do without DBCP.

When switching from DBCP (with validationQuery on) to a built-in pool, you’ll want to enable connection validation.  It’s turned off by default and the configuration settings are tucked away, so here’s a quick guide:

Tomcat 7 (jdbc-pool): Add the following to Tomcat’s conf/server.xml, in the Resource section containing your jdbc JNDI entry:

  • factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
  • testOnBorrow="true"
  • validationQuery="select 1 from sysibm.sysdummy1"

WebSphere: In the Admin Console, under Resources > JDBC > Data sources > (select data source) > WebSphere Application Server data source properties:

  • Check "Validate new connections", set "Number of retries" to 5, and set "Retry interval" to 3.
  • Check "Validate existing pooled connections", and set "Retry interval" to 3.
  • Set "Validation options – Query" to "select 1 from sysibm.sysdummy1".

WebLogic: In the Administration Console, under JDBC > Data Sources > (select data source) > Connection Pool > Advanced:

  • Check "Test Connections on Reserve"
  • Set "Test Table Name" to "SQL SELECT 1 FROM SYSIBM.SYSDUMMY1"
  • Set "Seconds to Trust an Idle Pool Connection" to 300.

Adjust intervals as needed for your environment.  Also, these validation SQLs are for DB2; use equivalent SQLs for other databases.

Auto Logoff

JavaScript-based auto-logoff timers are a good complement to server-side session limits.  They’re especially nice when coupled with periodic keep-alive messages while the user is still working.  The basic techniques for doing this are now classic, but new styles show up almost daily.  I happen to like Mint‘s approach of adding a red banner and seconds countdown one minute before logging off.

Fortunately, Eric Hynds created a nice jQuery-based Mint-like timer that we non-Intuit folks can use.  It uses Paul Irish’s idleTimer underneath.

Dropping it in is easy; just include the two JS files and add the idletimeout div into your common layout.  I tucked the boilerplate JavaScript into a simple common JSPF, like so:

<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c"%>
<script src="<c:url value="/script/jquery.idletimer.js"/>"></script>
<script src="<c:url value="/script/jquery.idletimeout.js"/>"></script>
<script>
var idleTimeout = <%= SettingsBean.getIdleTimeout() %>;
if (idleTimeout > 0) {
  $.idleTimeout('#idletimeout', '#idletimeout a', {
	idleAfter: idleTimeout,
	pollingInterval: 30,
	keepAliveURL: '<c:url value="/keepalive" />',
	serverResponseEquals: 'OK',
	onTimeout: function(){
		$(this).slideUp();
		allowUnload();
		window.location = '<c:url value="/logout" />';
	},
	onIdle: function(){
		$(this).slideDown(); // show the warning bar
	},
	onCountdown: function( counter ){
		$(this).find("span").html( counter ); // update the counter
	},
	onResume: function(){
		$(this).slideUp(); // hide the warning bar
	}
  });
}
</script>

Inanimate IE

Among Internet Explorer’s quirks is that certain page actions (clicking a button, submitting, etc.) will freeze animated GIFs.  Fortunately, gratuitous live GIFs went out with grunge bands, but they are handy for the occasional “loading” image.

I found myself having to work around this IE death today to keep my spinner moving.  The fix is simple: just set the HTML to the image again, like so:

document.form.submit();
// IE stops GIF animation after submit, so set the image again:
$("#loading").html(
 '<img src="<c:url value="/images/loading.gif"/>" border=0 alt="Loading..." />');

IE will then wake up and remember it has a GIF to attend to.

Dissipation

The trouble with keeping your data in the cloud is that clouds can dissipate.  Such was the case this week with the Nike+ API.

Nike finally unveiled the long-awaited replacement for their fragile Flash-based Nike+ site, but at the same time broke the public API that many of us depend on.  As a result, I had to turn off my Nike+ Stats code and sidebar widget from this blog.  Sites that depend on Nike+ data (dailymile, EagerFeet, Nike+PHP, etc.) are also left in a holding pattern.  At this point, it’s not even clear if Nike will let us access our own data; hopefully they won’t attempt a Runner+-like shakedown.

This type of thing is all too common lately, and the broader lesson here is that this cloud world we’re rushing into can cause some big data access and ownership problems.  If and when Nike lets me access my own data, I’ll reinstate my Nike+ stats (either directly, or through a plugin like dailymile’s).  Until then, I’ll be watching for a break in the clouds.

Broken Tiles

I encountered intermittent problems with Apache Tiles 2.2.2, where concurrency issues cause it to throw a NoSuchDefinitionException and render a blank page.  There have been various JIRAs with fixes, but these are in the not-yet-released version 2.2.3.  To get these fixes, update your Maven pom.xml: specify the 2.2.3-SNAPSHOT version for all Tiles components, and add the Apache snapshot repository:

<repository>
    <id>apache.snapshots</id>
    <name>Apache Maven Snapshot Repository</name>
    <url>http://repository.apache.org/snapshots</url>
</repository>

Hopefully 2.2.3 will be released soon.

Toolbox Linkapalooza

A few favorite tools I used this week:

  • WebScarab – Nice Ethical Hacking tool.  Among many features, it exposes hidden form fields for editing.
  • Burp Suite – Similar to WebScarab; its proxy and intruder features make packet capture, modification, and replay a snap.
  • Runner’s World Shoe Advisor.

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

I got a friendly reminder this morning that I’ve neglected my Friday Fixes postings of late.  I didn’t say they’d be every Friday, did I?  At any rate, here are some things that came up this week.

Tabbed Freedom v. Logoff Security

Tabbed browsing is a wonderful thing, but its features can become security concerns to corp-rat folks who mainly use their browsers for mission-critical apps.  For example, with most browsers, closing a tab (but not the browser itself) does not clean up session cookies.  Yet those security-first guys would like a way to trigger a log off (kill the session) on tab close.

This is a common request, but there’s no straightforward solution.  As much as I’d like browsers to have a “tab closed” event, there isn’t one.  The best we can do is hook the unload event, which is fired, yes, when the tab is closed, but also anytime you leave the page: whether it’s navigating a link, submitting a form, or simply refreshing.  So the trick is to detect and allow acceptable unloads.  Following is some JavaScript I pieced together (into a common JSPF), based loosely on various recommendations from around the web.

  var isOkToUnload = false;
  var logoffOnClose = '<c:out value="${settingsBean.logoffOnClose}" />';
 
  function allowUnload() {
     isOkToUnload = true;
  }
  function monitorClose() {
     window.onbeforeunload = function() {
        if (!isOkToUnload)
           return "If you leave this page, you will be logged off.";
     }
     window.onunload = function() {
        if (!isOkToUnload) {
           $.ajax({
              async: false, type: "POST",
                   url: '<c:url value="/basiclogout"/>' });
        }
     }   
     // Add other events here as needed
     $("a").click(allowUnload);      
     $("input[type=button]").click(allowUnload);      
     $("form").submit(allowUnload);                     
  }
 
  $(document).ready(function() {
     if (logoffOnClose === 'Y')
        monitorClose();
  });
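
On the server side, the /basiclogout target can be trivial; here’s a minimal sketch of a hypothetical servlet that just kills the session (a framework logout handler works equally well):

public class BasicLogoutServlet extends HttpServlet {
	protected void doPost(HttpServletRequest request, HttpServletResponse response) {
		HttpSession session = request.getSession(false);
		if (session != null) {
			session.invalidate();  // kill the server-side session
		}
		response.setStatus(HttpServletResponse.SC_OK);
	}
}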

This triggers on refresh, but that’s often a good thing since the user could lose work; Gmail and Google Docs do the same thing when you’re editing a draft.  It’s a good idea to make this behavior configurable, since many folks prefer the freedom of tabbed browsing over the security of forcing logoff.

DBCP Has Timed Out

Right after mastering the linked list, it seems every programmer wants to build a database connection pool.  I’ve built a couple myself, but this proliferation gets in the way of having a single golden solution that folks could rally around and expect to be supported forever.

Such was the story behind Apache DBCP: it was created to unify JDBC connection pools.  Although it’s broadly used, it’s over-engineered, messy, and limited. So it, too, fell by the wayside of open source neglect.  And since nearly all servlet containers now provide built-in connection pools, there’s really no use for DBCP anymore.

Yet I found myself having to fix DisconnectException timeouts with an existing DBCP implementation, typically stemming from errors like:  A communication error has been detected… Location where the error was detected: Reply.fill().

After trying several recovery options, I found that DBCP’s validationQuery prevented these, at the cost of a little extra overhead.  Although validationQuery can be configured, I didn’t want additional setup steps that varied by server.  So I just added it to the code:

  BasicDataSource ds = new BasicDataSource();
  // ...
  ds.setValidationQuery("select 1 from sysibm.sysdummy1");
  ds.setTestOnBorrow(true);

In the next pass, I’ll yank out DBCP altogether and configure pools in WebSphere, WebLogic, and Tomcat 7.  But this gave me a quick fix to keep going on the same platform.

Aggregation Dictation

Weird: I got three questions about aggregating in SQL on the same day.  Two of them involved OLAP syntax that’s somewhat obscure, but quite useful.  So if you find yourself with complications from aggregation aggravation, try one of these:

  • Doing a group by and need rollup totals across multiple groups?  Try grouping sets, rollup, and cube.  I’ve written about these before; for example, see this post.
  • Need to limit the size of a result set and assign rankings to the results?  Fetch first X rows only works fine for the former, but not the latter.  So try the ranking and windowing functions, such as row_number, rank, dense_rank, and partition by.  For example, to find the three most senior employees in each department (allowing for ties), do this:

      SELECT * FROM
        (SELECT RANK() OVER (PARTITION BY workdept ORDER BY hiredate ASC)
           AS rank, firstnme, lastname, workdept, hiredate
         FROM emp) emps
      WHERE rank <= 3
      ORDER BY workdept

    Co-worker Wayne did a clever row_number/partition by implementation and added a nice view to clarify its meaning.

Linkapalooza

Some interesting links that surfaced (or resurfaced) this week:

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

This week’s challenges ran the gamut, but there’s probably not much broad interest in consolidated posting for store-level no advice chargebacks, image format and compression conversion, SQLs with decode(), 798 NACHA addenda, or many of the other crazy things that came up.  So I’ll stick to the web security vein with a CSRF detector I built.

Sea Surf

If other protections (like XSS) are in place, meaningful Cross-Site Request Forgery (CSRF) attacks are hard to pull off.  But that usually doesn’t stop the black hats from trying, or the white hats from insisting you specifically address it.

The basic approach to preventing CSRF (“sea surf”) is to insert a synchronizer token on generated pages and compare it to a session-stored value on subsequent incoming requests.  There are some pre-packaged CSRF protectors available, but many are incomplete while others are bloated or fragile.  I wanted CSRF detection that was:

  • Straightforward – Please, no convoluted frameworks nor tons of abstracted code
  • Complete – Must cover AJAX calls as well as form submits, and must vary the token by rendered page
  • Flexible – Must not assume a sequence of posts or limit the number of AJAX calls from one page
  • Unobtrusive – Must “drop in” easily without requiring @includes inside forms.

I also wanted to include double submit protection, without having to add another filter (certainly no PRG filters – POSTs must be POSTs).  Here’s the gist of it.

First, we need to insert a token.  I could leverage the fact that nearly all of our JSPs already included a common JSPF file, so I just added to that.  The @include wasn’t always inside a form so I added the hidden input field via JavaScript (setToken).  I used a bean to keep the JSPF as slim as possible.

<jsp:useBean id="tokenUtil" class="com.writestreams.TokenUtil">
	<% tokenUtil.initialize(request); %>
</jsp:useBean>

<script>
	var tokenName = '<c:out value="${tokenUtil.tokenName}" />';
	var tokenValue = '<c:out value="${tokenUtil.tokenValue}" />';

	function setToken(form) {
		$('<input>').attr({
			type: 'hidden',
			id: tokenName,
			name: tokenName,
			value: tokenValue
		}).appendTo(form);
	}

	$("body").bind("ajaxSend", function(elm, xhr, s) {
		if (s.type == "POST") {
			xhr.setRequestHeader(tokenName, tokenValue);
		}
	});
</script>

I didn’t want to modify all those $.ajax calls to pass the token, so the ajaxSend handler does that.  The token arrives from AJAX calls in the request header, and from form submits as a request value (from the hidden input field); that gives the benefit of being able to distinguish them.  You could use a separate token for each if you’d like.

The TokenUtil bean is simple, just providing the link to the CSRFDetector.

public class TokenUtil {
	private String tokenValue = null;
	public void initialize(HttpServletRequest request) {
		this.tokenValue = CSRFDetector.createNewToken(request);
	}	
	public String getTokenValue() {
		return tokenValue;
	}
	public String getTokenName() {
		return CSRFDetector.getTokenAttribName();
	}	
}

CSRFDetector.createNewToken generates a new random token for each page render and adds it to a list stored in the session.  It does JavaScript-encoding in case there are special characters.

public static String createNewToken(HttpServletRequest request) {
	HttpSession session = request.getSession(false);
	String token = UUID.randomUUID().toString();
	addToken(session, token);
	return StringEscapeUtils.escapeJavaScript(token);
}

A servlet filter (doFilter) calls CSRFDetector to validate incoming requests and return a simple error string if invalid.  You can limit this to only validating POSTs with parameters, or extend it to other requests as needed.  The validation goes like this:

public String validateRequestToken(HttpServletRequest request) {			
	HttpSession session = request.getSession(false);
	if (session == null) 
		return null;	// No session established, token not required
 
	String ajaxToken = decodeToken(request.getHeader(getTokenAttribName()));
	String formToken = decodeToken(request.getParameter(getTokenAttribName()));
	if (ajaxToken == null && formToken == null) 		
		return "Missing token";
 
	ArrayList<String> activeTokens = getActiveTokens(session);
	if (ajaxToken != null) {
		// AJAX call - require the token, but don't remove it from the list
		// (allow multiple AJAX calls from the same page with the same token).
		if (!activeTokens.contains(ajaxToken))		
			return "Invalid AJAX token";
	} else {
		// Submitted form - require the token and remove it from the list
		// to prevent any double submits (refresh browser and re-post).
		if (!activeTokens.contains(formToken)) 			
			return "Invalid form token";				
		activeTokens.remove(formToken);
		setActiveTokens(session, activeTokens);
	}
	return null;		
}
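
The token-list helpers referenced above are simple session-attribute plumbing; here’s a minimal sketch (the attribute name is arbitrary, and you may want to cap the list size for long-lived sessions):

private static final String TOKEN_ATTRIB_NAME = "CSRF_TOKENS";

public static String getTokenAttribName() {
	return TOKEN_ATTRIB_NAME;
}

@SuppressWarnings("unchecked")
private static ArrayList<String> getActiveTokens(HttpSession session) {
	ArrayList<String> tokens = (ArrayList<String>) session.getAttribute(TOKEN_ATTRIB_NAME);
	return (tokens != null) ? tokens : new ArrayList<String>();
}

private static void setActiveTokens(HttpSession session, ArrayList<String> tokens) {
	session.setAttribute(TOKEN_ATTRIB_NAME, tokens);
}

private static void addToken(HttpSession session, String token) {
	ArrayList<String> tokens = getActiveTokens(session);
	tokens.add(token);
	setActiveTokens(session, tokens);
}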

There you have it.  There are several good tools available for testing; I recommend OWASP’s CSRFTester.

Linkapalooza

Some useful / interesting links that came up this week:

  • Grep Console – Eclipse console text coloring (but no filtering)
  • Springpad – Think Evernote++
  • Pocket – New name for ReadItLater
  • RFC 2616 – The HTTP 1.1 spec at, well, HTTP

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

You know the old saying, “build a man a fire and he’s warm for a day; set a man on fire, and he’s warm for the rest of his life.”  Or something like that.  I’ve been asked about tool preferences and development approaches lately, so this week’s post focuses on tools and strategies.

JRebel

If you’re sick of JVM hot-swap error messages and having to redeploy for nearly every change (who isn’t?), run, do not walk, to ZeroTurnaround‘s site and get JRebel.  I gave up on an early trial last year, but picked it up again with the latest version a few weeks ago.  This thing is so essential, it should be part of the Eclipse base.

And while you’re in there, check out their Java EE Productivity Report.  Interesting.

Data Studio

My DB2 tool of choice depends on what I’m doing: designing, programming, tuning, administering, or monitoring.  There is no “one tool that rules them all,” but my favorites have included TOAD, Eclipse DTP, MyEclipse Database Tools, Spotlight, db2top, db2mon, some custom tools I wrote, and the plain old command line.

I never liked IBM’s standard GUI tools like Control Center and Command Editor; they’re just too slow and awkward.  With the advent of DB2 10, IBM is finally discontinuing Control Center, replacing it with Data Studio 3.1, the grown-up version of the Optim tools and old Eclipse plugins.

I recently switched from a combination of tools to primarily using Data Studio.  Having yet another Eclipse workspace open does tax memory a bit, but it’s worth it to get Data Studio’s feature richness.  Not only do I get the basics of navigation, SQL editors, table browsing and editing, I can also do explains, tuning, and administration tasks quickly from the same tool.  Capability-wise, it’s like “TOAD meets DTP,” and it’s the closest thing yet to that “one DB2 tool.”

Standardized Configuration

For team development, I’m a fan of preloaded images and workspaces.  That is, create a standard workspace that other developers can just pick up, update from the VCS, and start developing.  It spares everyone from having to repeat setup steps, or debug configuration issues due to a missed setting somewhere.  Alongside this, everybody uses the same directory structures and naming conventions.  Yes, “convention over configuration.”

But with the flexibility of today’s IDEs, this has become a lost art in many shops.  Developers give in to the lure of customization and go their own ways.  But is that worth the resulting lost time and fat manual “setup documents”?

Cloud-based IDEs promise quick start-up and common workspaces, but you don’t have to move development environments to the cloud to get that.  Simply follow a common directory structure and build a ready-to-use Eclipse workspace for all team members to grab and go.

Programmer Lifestyle

I’ve been following Josh Primero’s blog as he challenges the typical programmer lifestyle.

Josh is taking it to extremes, but he does have a point: developers’ lives are often too hectic and too distracted.  This “do more with less” economy means multiple projects and responsibilities and the unending tyranny of the urgent.  Yet we need blocks of focused time to be productive, separated by meaningful breaks for recovery, reflection, and “strategerizing.”  It’s like fartlek training: those speed sprints are counterproductive without recovery paces in between.  Prior generations of programmers had “smoke breaks;” we need equivalent times away from the desk to walk away and reflect, and then come back with new ideas and approaches.

I’ll be following to see if these experiments yield working solutions, and if Josh can stay employed.  You may want to follow him as well.

Be > XSS

As far as I know, there’s no-one whose middle name is <script>transferFunds()</script>.  But does your web site know that?

It’s surprising how prevalent cross-site scripting (XSS) attacks are, even after a long history and established preventions.  Even large sites like Facebook and Twitter have been victimized, embarrassing them and their users.  The general solution approach is simple: validate your inputs and escape your outputs.  And open source libraries like ESAPI, StringEscapeUtils, and AntiSamy provide ready assistance.

But misses often aren’t due to systematic neglect; rather, they’re caused by small defects and oversights.  All it takes is one missed input validation or one missed output-encode to create a hole.  99% secure isn’t good enough.

With that in mind, I coded a servlet filter to reject post parameters with certain “blacklist” characters like < and >.  “White list” input validation is better than a blacklist, but a filter is a last line of defense against places where server-side input validation may have been missed.  It’s a quick and simple solution if your site doesn’t have to accept these symbols.
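
A minimal sketch of that last line of defense; the blacklist and the caller’s rejection behavior are whatever fits your site:

private static final char[] BLACKLIST = { '<', '>' };  // extend as needed

private boolean hasBlacklistedChars(HttpServletRequest request) {
	// Scan every submitted parameter value for characters the app never needs.
	for (Object paramValues : request.getParameterMap().values()) {
		for (String value : (String[]) paramValues) {
			for (char c : BLACKLIST) {
				if (value.indexOf(c) >= 0) {
					return true;  // the filter can then redirect to an error page
				}
			}
		}
	}
	return false;
}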

I’m hopeful that one day we’ll have a comprehensive open source framework that we can simply drop in to protect against most web site vulnerabilities without all the custom coding and configuration that existing frameworks require.  In the meantime, just say no to special characters you don’t really need.

Comments Off

On that note, I’ve turned off comments for this blog.  Nearly all real feedback comes via emails anyway, and I’m tired of the flood of spam comments that come during “comments open” intervals.  Most spam comments are just cross-links to boost page rank, but I also get some desperate hack attempts.  Either way, it’s time-consuming to reject them all, so I’m turning comments off completely.  To send feedback, please email me.
