Tag Archives: Java


Taking in a good tech conference like JavaOne, Google I/O, or WWDC usually means hopping a plane to the Valley for a week. So it’s nice to know there are a couple of good local (Atlanta) and shorter conferences like DevNexus and GASS / NoFluffJustStuff.

A colleague talked me into DevNexus this year, and I’m glad he did. This AJUG-sponsored conference has grown to the caliber and content of much larger conferences, includes tracks much broader than Java, and is well worth the low cost. So if, like me, you can’t usually break away for a week of JavaOne, take in DevNexus or NoFluff instead.


My colleagues and I have plenty of Java profilers at our disposal, from the free ones included with every JDK to commercial suites. But while the best tool is the one you’ll use, heavyweight profiling tools often get shoved to the late stages of a project when tuning time is short.

XRebel is a lightweight profiler from our JRebel friends for ongoing use during code development. It doesn’t try to be an end-all monitor; rather, it targets a few of the most common trouble spots in Java web apps: SQL performance, session memory, and exceptions.  Installation is trivial: just add -javaagent to your container’s VM arguments. It injects an iframe to each of your served pages with a dashboard that flags any problems. From there, you can drill into stack traces and SQLs, and view execution counts and times.
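For reference, enabling it really is that small; the whole install is one VM argument (the path below is a placeholder for wherever you unpacked XRebel):

```
-javaagent:/path/to/xrebel.jar
```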

I’ve used XRebel with several web apps, and have demo’ed it to others who have become fans. The new 2.0 Beta adds application code profiling, but I’ve encountered some UI hiccups with the initial beta version.  I’m sure those will be fixed fast, and I look forward to the upgrade.

If you haven’t yet tried XRebel, give it a run.  It’s so easy to download and enable, there’s no excuse not to try it.

Mule Ride

My prior experiences with heavyweight Enterprise Service Buses (ESBs) had me running away screaming, and ready to join the nascent NoESB movement. But I recently started a new project with broad integration requirements, and point-to-point solutions wouldn’t cut it.

This time, I went after a lightweight, embeddable solution. After ruling out lesser alternatives, I settled on Mule ESB (aka, Anypoint Platform). I downloaded Anypoint Studio and runtimes, worked through Mule in Action, and started building my adapter platform. I found it easy to integrate Mule with my other Java EE stuff and extend it as needed. Mule flows and Mule Expression Language (MEL) took a little getting used to, but I found them rich enough for much of what I needed, with supporting services and Java components filling the gaps. The GUI flow editors are nice, but direct XML editing was faster.

I ran into some problems with new message processors introduced with 3.5, such as the Web Service Consumer and generic Database connector, but their predecessors (HTTPS and JDBC connectors) worked fine. The new DataMapper is nice, but I avoided components that require the enterprise (vs community) edition. Through various flows, I was able to quickly integrate services (SOAP and REST), databases (mostly DB2), queues (WebSphere MQ and ActiveMQ), files, and various other endpoints on servers and mainframes.
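For flavor, a minimal Mule 3 flow of the kind I built looks something like this (a sketch only; the endpoint host, port, path, and component class are made-up placeholders):

```xml
<flow name="orderStatusFlow">
    <!-- Receive a request over HTTP (the older transport, which worked fine for me) -->
    <http:inbound-endpoint host="localhost" port="8081" path="orderStatus"
                           exchange-pattern="request-response" />
    <!-- MEL expression logging the inbound payload -->
    <logger message="Received: #[message.payload]" level="INFO" />
    <!-- Hand off to a plain Java component for the real work -->
    <component class="com.example.OrderStatusService" />
</flow>
```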

If you need to quickly implement many decoupled enterprise integration patterns with SEDA benefits, I recommend MuleESB.

The Other One

As JAX-RS implementations go, I suppose Apache CXF is as good as any. But among all the conveniences, it certainly has its share of surprises. For example, although I follow a typical REST resource URI scheme, I had to write a custom ResourceComparator to get certain lookups to work across multiple service beans. I’m over that now, but today’s bug was really weird.

In this case, I’m handling basic authentication in an upstream inInterceptor, pulling credentials from the message. This has several benefits, including simpler service beans. If credentials are missing or invalid, I do the usual 401 / 403 thing, but if authenticated, I inject a SecurityContext containing my Principal, like so:

        message.put(SecurityContext.class, new BasicSecurityContext(userProfile));

.. with the hopes of my service bean having access via:

        @Context SecurityContext securityContext

Trouble is, that came out null. Turns out, this is a known issue with CXF, and the work-around (barring unwanted extra plumbing) is to use javax.ws.rs.core.SecurityContext, not org.apache.cxf.security.SecurityContext. So much for polymorphism.
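If it helps, here’s roughly what my BasicSecurityContext looks like once keyed by the javax.ws.rs.core interface (a sketch; BasicSecurityContext is my own class, and the isSecure/role choices are app-specific assumptions):

```java
import java.security.Principal;
import javax.ws.rs.core.SecurityContext;

// Implements the JAX-RS interface so @Context injection can find it
public class BasicSecurityContext implements SecurityContext {
    private final Principal principal;

    public BasicSecurityContext(Principal principal) {
        this.principal = principal;
    }

    public Principal getUserPrincipal() { return principal; }
    public boolean isUserInRole(String role) { return false; }  // roles unused here
    public boolean isSecure() { return true; }                  // assume HTTPS
    public String getAuthenticationScheme() { return SecurityContext.BASIC_AUTH; }
}
```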


In today’s action, I wrote a new Google Cloud Messaging (GCM) server interface to the GCM Cloud Connection Server (CCS) via XMPP. GCM CCS offers several benefits over GCM HTTP including one of my key requirements, bidirectional messaging.

My app server didn’t yet have an XMPP interface, so I followed Google’s sample and plugged in Smack. Then the fun began. I immediately started getting exceptions (see Smackdowns below). These varied based on state and configuration, but basically the result was the same: the GCM CCS server was abruptly closing the connection during authentication. How rude; Smack was getting smacked down.

I dug in but ran into several red herrings. For example, Smack’s stream headers didn’t match Google’s doc (no tag close); Google’s doc is wrong. I debugged Smack, tried different Smack versions, tried other XMPP clients, tweaked configurations, studied XMPP protocols and SASL authentication, and updated authentication parameters and allowed IPs. Ultimately, I had a minor error in one of the authentication parms. I found it by Base64-decoding the auth value in Google’s example and comparing it to my own traces.

This would have been a quick fix if CCS had just given an authentication error message rather than just slamming the door. It must be a security thing. Oh well, on to the next adventure.
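The debugging trick generalizes: Base64-decode your SASL PLAIN auth blob and eyeball the fields against a known-good trace. A quick sketch (the sender ID and API key below are fake placeholders):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AuthDecode {
    public static void main(String[] args) {
        // SASL PLAIN is base64("\0username\0password"); for GCM CCS the username is
        // <senderId>@gcm.googleapis.com and the password is the API key
        String authValue = Base64.getEncoder().encodeToString(
                "\u000012345@gcm.googleapis.com\u0000fakeApiKey"
                        .getBytes(StandardCharsets.UTF_8));

        // Decode and make the NUL separators visible for side-by-side comparison
        String decoded = new String(Base64.getDecoder().decode(authValue),
                StandardCharsets.UTF_8).replace('\u0000', '|');
        System.out.println(decoded);  // prints "|12345@gcm.googleapis.com|fakeApiKey"
    }
}
```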


Smackdowns

No response from the server.:
	at org.jivesoftware.smack.NonSASLAuthentication.authenticate(NonSASLAuthentication.java:73)
	at org.jivesoftware.smack.SASLAuthentication.authenticate(SASLAuthentication.java:357)
	at org.jivesoftware.smack.XMPPConnection.login(XMPPConnection.java:221)
java.net.SocketException: Connection closed by remote host
	at com.sun.net.ssl.internal.ssl.SSLSocketImpl.checkWrite(Unknown Source)
	at com.sun.net.ssl.internal.ssl.AppOutputStream.write(Unknown Source)
	at sun.nio.cs.StreamEncoder.writeBytes(Unknown Source)
	at sun.nio.cs.StreamEncoder.implFlushBuffer(Unknown Source)
	at sun.nio.cs.StreamEncoder.implFlush(Unknown Source)
	at sun.nio.cs.StreamEncoder.flush(Unknown Source)
	at java.io.OutputStreamWriter.flush(Unknown Source)
	at java.io.BufferedWriter.flush(Unknown Source)
	at org.jivesoftware.smack.util.ObservableWriter.flush(ObservableWriter.java:48)
	at org.jivesoftware.smack.PacketWriter.writePackets(PacketWriter.java:168)
java.io.EOFException: no more data available - expected end tag  to close start tag  from 
                                      line 1, parser stopped on END_TAG seen ...... @1:344
	at org.xmlpull.mxp1.MXParser.fillBuf(MXParser.java:3035)
	at org.xmlpull.mxp1.MXParser.more(MXParser.java:3046)
	at org.xmlpull.mxp1.MXParser.nextImpl(MXParser.java:1144)
	at org.xmlpull.mxp1.MXParser.next(MXParser.java:1093)
	at org.jivesoftware.smack.PacketReader.parsePackets(PacketReader.java:279)

Android Studio

I’ve been waiting months for a “ready for prime time” version of Android Studio. Its move from “Preview” to “Beta” (version 0.8) on June 26 became my signal to dive in deep and use it for real work. After all, “beta” in Google terms usually means a level of quality some products don’t achieve with years of “GA” releases.

Android Studio inherits many of its benefits from IntelliJ. I like the speed and stability, the quality of the development and rendering tools, the productivity, the rich static code analysis, and all the other benefits that come from its IntelliJ underpinnings.  I’ve bounced back and forth between Eclipse keymaps and IntelliJ defaults, but settled on IntelliJ’s, with the cheat sheet on hand. “When in Rome,” don’tchaknow, although I haven’t yet joined the Darcula side. Even after just a few days’ use, I’m sold on IntelliJ and certainly don’t miss Eclipse’s crashes and stalls.

I’m also pleased with the Gradle-based build system and AAR support. After fighting with Maven plugin bugs, apklib hacks, and other Eclipse-Maven clashes, it’s refreshing to have a build system that is elegant, is backed by Google, and just works. The factoring of content across build.gradle files, manifests, etc., is clean and DRY, and the Android-specific checks guide you toward best practices.

The main downside is that there is no automatic or incremental build as with Eclipse, so builds are slower and many errors aren’t discovered during editing. Build speed will likely improve as it exits beta (perhaps with the help of parallel builds), but the rest is just the IntelliJ way.

Still, I’m happy with both IntelliJ and the Android Studio so far. Now if I could only switch from MyEclipse to IntelliJ for server-side code…

Time’s Up

I recently needed to add an inactivity timer to a pair of new apps. If the user was logged in but inactive past a configurable period, the app should display an error when resumed and return to the login screen. It must be non-intrusive and not interrupt any foreground apps. It could not require a background service.

For Android, this fit well into the standard activity architecture. In particular, well-behaved activities should normally remain quiet when paused and should finish themselves rather than be killed by some other activity.  So I created the following simple utility class to enable this.  The primary hooks are:

  • initialize – Called when the first (main) activity starts.
  • startMonitoring – Called when the user logs in.
  • monitor – Called from each activity’s onResume.  I added it to my common fragment superclass.

It also has an optional requestExit to request an immediate exit based on some other criteria.

The deus ex machina is the simple activity.finish().  This ripples through all active activities so that they politely exit whenever the sShouldExitNow flag is set.

public class UserActivityMonitor {
    private static long sUserTimeoutMillisecs = 5 * 60 * 1000;	// Default to 5 minutes
    private static boolean sIsMonitoring = false;
    private static boolean sShouldExitNow = false;
    private static long sLastActionMillisecs = 0;

    /**
     * Initialize monitoring. Called at app startup.
     */
    public static void initialize(long timeoutMillisecs) {
        sUserTimeoutMillisecs = timeoutMillisecs;
        sIsMonitoring = false;
        sShouldExitNow = false;
        sLastActionMillisecs = 0;
    }

    /**
     * Start monitoring user activity for idle timeout.
     */
    public static void startMonitoring() {
        sIsMonitoring = true;
        sLastActionMillisecs = System.currentTimeMillis();
    }

    /**
     * The given activity is at the foreground.
     * Record the user's activity and, if necessary, handle an exit or timeout request.
     * If timed out, start the postTimeoutActivity (if provided) on a new task.
     */
    public static void monitor(Activity currentActivity, Intent postTimeoutActivity) {
        if (sShouldExitNow) {
            // Calls here will ripple through and finish all activities
            currentActivity.finish();
        } else {
            if (sIsMonitoring) {
                checkForIdleTimeout(currentActivity, postTimeoutActivity);
            }
        }
        sLastActionMillisecs = System.currentTimeMillis();
    }

    /**
     * Request to exit the app by triggering all activities to finish.
     */
    public static void requestExit() {
        sShouldExitNow = true;
    }

    /**
     * Check for an inactivity timeout.
     * If we have timed out, display a message, finish the activity,
     * and set the "should exit now" flag so that other activities quit.
     * If timed out, start the postTimeoutIntent (if provided) on a new task.
     */
    private static void checkForIdleTimeout(final Activity activity,
                                            final Intent postTimeoutIntent) {
        if (User.getCurrentUser().isLoggedIn() && sLastActionMillisecs > 0) {
            long idleMillisecs = System.currentTimeMillis() - sLastActionMillisecs;
            if (idleMillisecs > sUserTimeoutMillisecs) {
                // The first time we detect a timeout, display a popup message.
                // (The popup helper was elided in the original; ErrorDialog.display is a stand-in.)
                ErrorDialog.display(
                        activity.getString(R.string.msg_timeout_title), activity,
                        new IErrorDisplayListener() {
                            public void onResponse(int buttonClicked) {
                                sShouldExitNow = true;
                                if (postTimeoutIntent != null) {
                                    postTimeoutIntent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK |
                                            Intent.FLAG_ACTIVITY_CLEAR_TOP);
                                    activity.startActivity(postTimeoutIntent);
                                }
                                activity.finish();
                            }
                        });
            }
        }
    }
}
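Wiring it up is then mostly a matter of calling monitor from a common onResume; something like this (BaseActivity and LoginActivity are stand-ins for your own common superclass and entry screen):

```java
public class BaseActivity extends Activity {
    @Override
    protected void onResume() {
        super.onResume();
        // After a timeout, route the user back to the login screen on a fresh task
        Intent loginIntent = new Intent(this, LoginActivity.class);
        UserActivityMonitor.monitor(this, loginIntent);
    }
}
```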
Such Little Things

After many Android app versions with perfectly-good icons, it came time to replace them: new launcher icons, new store listing graphic, new action bar icons, and in some cases, removing the icon from the action bar altogether.

For the latter, I preferred an easily-customized manifest entry:

	<application ... android:icon="@color/transparent">

This worked nicely for local and ad-hoc installs. So I was surprised when publishing to the Google Play store failed with the obscure yet common error:

Your APK cannot be analyzed using ‘aapt dump badging’.  Error output:

Failed to run aapt dump badging: Error getting ‘android:icon’ attribute: attribute is not a string value

You wouldn’t know it from the message nor from any online suggestions, but that @color/transparent trick was the root cause. So instead I created a 1×1 transparent PNG (ic_empty.png) and used it instead:

	<application ... android:icon="@drawable/ic_empty">
BTW, when it’s “icon update” time, I’ve found the quickest way to get scaled icons is to create the 512×512 web version, and point the New Android Application wizard at it.  It’ll create the mdpi, hdpi, xhdpi, and xxhdpi sizes for you.

22.6.2, Maybe?

The recent 22.6 Android Developer Tools (ADT) update broke quite a few things: AVD creation and editing, ProGuard, performance, etc.  Rather than revert to 22.3 like many folks, I decided to muddle through with some work-arounds and hold out for the fixes in 22.6.1.

Well, 22.6.1 has arrived and at least fixed the AVD issues, but ProGuard still fails with:

Error: Unable to access jarfile ..\lib\proguard.jar

Fortunately, working around that bug is easy: simply edit proguard.bat and provide an absolute path to the JAR.

Back Up

Enabling Up (ancestral) navigation in Android apps is straightforward for the typical case: just specify parent activities in the manifest and send setDisplayHomeAsUpEnabled(true). But non-typical cases (like dynamic parents) require extra steps.

Today I implemented some non-typical cases and experimented with overriding action bar callbacks, tweaking the back stack, etc. But in most cases I simply wanted up navigation to behave like back navigation whenever there was no fixed parent defined. For that, I developed a simple recipe:

@Override
public boolean onOptionsItemSelected(MenuItem item) {
	switch (item.getItemId()) {
		case android.R.id.home:
			if (NavUtils.getParentActivityName(getActivity()) != null) {
				NavUtils.navigateUpFromSameTask(getActivity());  // fixed parent: real Up
			} else {
				getActivity().onBackPressed();  // no fixed parent: act like Back
			}
			return true;
		default:
			return super.onOptionsItemSelected(item);
	}
}

Groovy Fast

I sometimes find myself trying out small, standalone Java snippets, much as I do in other languages. But unlike other languages, Java doesn’t provide a built-in REPL for this. Sure, I can wrap the snippets in JUnits or small main programs, or use Eclipse scrapbook pages, but that’s the slow way to experiment. I want something lightweight and fast for this decidedly heavyweight and slow language.

In the past, I’ve used different Java shells and sites to fill this gap, but since I’m working in Groovy more lately, my new favorite is the Groovy Console. And when I need snippets integrated with my code and environment, the Groovy-Eclipse plugin provides that.

You Again

Remember back in the JDK 1.5 days when some genius changed BigDecimal.toString and broke half the universe? I’ve felt the impacts in DB2, Oracle, VoiceObjects, and other places. I’m over that now (and all the cool kids have updated or switched to toPlainString), but this thing just keeps coming back to haunt me.
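For anyone who missed that era, the change in a nutshell: same value, two renderings.

```java
import java.math.BigDecimal;

public class ToStringDemo {
    public static void main(String[] args) {
        BigDecimal tiny = new BigDecimal("0.0000001");
        System.out.println(tiny.toString());       // prints "1E-7" on JDK 1.5+
        System.out.println(tiny.toPlainString());  // prints "0.0000001"
    }
}
```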

Like this week, when old DB2 JDBC (JCC) drivers kept showing up on some platforms, prompting these intermittent errors:

[ibm][db2][jcc][converters] Overflow occurred during numeric data type conversion of …

This error can sneak up on ya’ because it often takes just the right numbers to kick in. Like the lottery, and sometimes with the same odds.

Fortunately, the fix is easy: just update JDBC drivers to something from the modern era.

Open for Repairs

Although Android Libraries are the recommended way to package and use larger components, there’s no clean way to handle them in a Maven-based Eclipse or Jenkins environment. Sure, JARs are easy enough, but not the co-requisite resource files. Checking these into source control is just wrong, and full Maven support for these libs is incomplete, even with m2e, m2e-android, and android-maven-plugin in place.

Apklibs provide a decent stopgap, so I settled on that approach: loading the zips (ahem, apklibs) to Nexus and creating scripts to link things up during build. It requires one manual step in Eclipse, but no biggie. Yet there are greener pastures ahead…

The upcoming new Android Build System addresses this problem with AARs. Of course, it goes much further, ditching Maven for Gradle, ADT for Android Studio, Eclipse for IntelliJ, and XML for Groovy. I like new things, but while we’re waiting on the new house, I wish Google would finish fixing the plumbing in the current one.


Code analyzers like FindBugs, PMD, and Sonar provide a nice safety net for catching potential quality issues early, during code and build.

By default, PMD over-emphasizes style, leading to a lower signal-to-noise ratio, but FindBugs is quite good at catching real issues that slip past standard compiler, lint, and IDE checks. Surprisingly good at times, given that there’s only so much you can learn from static analysis.  So I was skeptical that any commercial static analyzer could “take it to the next level” and provide qualitative improvements over open source tools.

That is, until I tried Coverity.

I recently got the chance to try Coverity’s Advisors hands-on, and met with two of their sales engineers to learn more. Coverity is not cheap, but its cost can often easily be justified for commercial code based on time savings and quality improvements.

I ran Coverity against three projects: two server apps with UIs and web services, and an Android app. In all three cases, I was impressed with the findings, particularly the High and Medium Impact ones like resource leaks and the various null dereference checks. These are the value add checks that you don’t get from its embedded FindBugs or from other open source tools. Of course, the UI presentations like code paths (in both Eclipse and the web interface) were also very helpful.

There were quite a few false positives, but not enough to be a problem. Many of the issues were deliberate fail-fasts and mocks in JUnit test classes, and could be marked as ignores. Some of the more interesting finds were null dereferences of results from calls to third party libraries – where I expected a method to always throw an exception on error, but in some situations it could return null instead. This demonstrates how the tool “looks” into underlying JARs to find undocumented behaviors.

Coverity also provides a broad range of security advisor and webapp-security checks, looking for vulnerabilities to SQL injection, XSS, and other exploits in the OWASP and CWE lists. Coverity can receive feeds from other complementary analyzers such as Android Lint, clang, PMD, etc., and include them on its dashboards. This is a nice feature to manage all issues from one common database.

Full runs with most checks enabled took time, often over 30 minutes, so these wouldn’t be done on every continuous integration build. If Coverity ever adds incremental analysis, that would help there.

It’s impossible for developers to catch all potential weaknesses through manual code inspections alone, so when it comes to static quality analyzers, good tools are important. Coverity is top-notch, and this evaluation demonstrated to me the value of a purchased tool.

Droid Units

JUnit testing of model and service classes under Android/Dalvik is straightforward, but when it comes to testing UIs (activities and fragments), the basic support is just too low-level. Fortunately, Robotium (think Selenium for droids) provides a most-excellent adjunct for this.

Yet Robotium should be used with care to create maintainable tests that aren’t brittle. To that end, I’ve developed some UI automation practices:

  • Write true unit test classes that cover only a single activity or fragment. Of course, many UI actions will take you to the next activity (and you should assert that), but leave multi-activity testing for separate integration tests.
  • Stub back-end behaviors using standard mock and dependency injection techniques. For my current app, I kept it simple/lightweight and wrote my own code, but I’ve also used Mockito (with dexmaker) and Dagger (with javax.inject); these are nice Android-compatible frameworks.
  • Rather than repeating the same raw Robotium calls directly from your tests, wrap them in descriptive helper methods like enterUserID(String id), enterPassword(String password), clickLoginButton(), etc. This DRY approach makes for more readable tests and simplifies updates when your UI changes.
  • Since you probably use common superclasses for your activities and fragments, also create parent test case classes to factor common testing behaviors. See below for snippets from one of mine.
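As an example of the wrapper style from the third point, here’s the shape of a few helpers (the view IDs and string resources are hypothetical; Solo is Robotium’s driver class):

```java
// Thin, descriptive wrappers over raw Robotium calls; R.id values are app-specific
protected void enterUserID(String id) {
    mSolo.clearEditText((EditText) mSolo.getView(R.id.user_id));
    mSolo.enterText((EditText) mSolo.getView(R.id.user_id), id);
}

protected void enterPassword(String password) {
    mSolo.enterText((EditText) mSolo.getView(R.id.password), password);
}

protected void clickLoginButton() {
    mSolo.clickOnButton(mSolo.getString(R.string.login));
}
```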

I haven’t found a good tool for measuring code coverage for apps (Emma under Android is flaky), so I’d love to hear your recommendations.

public abstract class BaseActivityTest<T extends Activity> 
                             extends ActivityInstrumentationTestCase2<T> {
	protected Solo mSolo;

	public BaseActivityTest(Class<T> activityClass) {
		super(activityClass);
	}

	protected void setUp() throws Exception {
		super.setUp();
		// Reset your DI container and common mocks here...		
		mSolo = new Solo(getInstrumentation(), getActivity());
	}

	protected void tearDown() throws Exception {
		mSolo.finishOpenedActivities();
		super.tearDown();
	}

	public void testLayoutPortrait() { /* common portrait assertions... */ }
	public void testLayoutLandscape() { /* common landscape assertions... */ }
	// ...
}

Worth the Weight

With many Android apps, there often seems to be little correlation between the size of the package and the value it provides. Multi-megabyte apps that do almost nothing leave me wondering, “what’s in there?”

Unpacking with dex2jar and JD-GUI often provides answers, and frequently it just means the developer forgot to enable Proguard when building. An overabundance of ill-compressed drawables is another common source. But beyond these, the habits of server-side re-use (freely expanding POMs and dropping in FOSS JARs) are a key source of bloat.

I try to be stingy when it comes to Android app libs, often taking a tougher route to avoid bringing in large JARs that might otherwise be useful. Such was the case recently when I needed to do a multi-part post to a REST web service that consumed a mix of binary image data and JSON. Multipart HTTP is conceptually simple, but the markup is obscure enough to make generating it directly from a business app just wrong.

Fortunately, though, Apache HttpMime is just 26K, and makes the process simple. For example:

MultipartEntity entity = new MultipartEntity();
entity.addPart("request", new StringBody(request));
entity.addPart("image", new ByteArrayBody(image, "image/jpeg", filename));
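Sending it is equally compact; with the 4.x HttpClient APIs of that vintage, roughly (the URL is a placeholder, and client setup/teardown is elided for brevity):

```java
// Post the multipart entity built above
HttpPost post = new HttpPost("https://example.com/api/upload");
post.setEntity(entity);
HttpResponse response = new DefaultHttpClient().execute(post);
System.out.println(response.getStatusLine().getStatusCode());
```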

To avoid duplicate class errors at Proguard time, exclude the dependent httpcore in your pom.xml, like so:

<dependency>
	<groupId>org.apache.httpcomponents</groupId>
	<artifactId>httpmime</artifactId>
	...
	<exclusions>
		<exclusion>
			<groupId>org.apache.httpcomponents</groupId>
			<artifactId>httpcore</artifactId>
		</exclusion>
	</exclusions>
</dependency>

I don’t always add JARs to Android apps, but when I do, I prefer light ones.

Out of the Box

Business travels and extra duties lately have brought some unexpected surprises. I find such “out of the box” adventures refreshing: pulling the rug out from under typical routines forces me to challenge assumptions and improvise. Here are some examples from the past two days.

The Right Sequence

During group architecture sessions, I needed to quickly create, modify and display sequence diagrams to demonstrate interactions among several new web services. Lacking my usual tools (and convinced there had to be a better mousetrap), I searched around and found websequencediagrams.com. This STTCPW site was exactly what I needed. Using it was faster than tools such as Visio Pro, and even faster than drawing and editing on whiteboards.

Weekend Jobs

Today I got word that some Quartz jobs weren’t running as expected during pre-production testing, where typical weekday processes were run as a logical business day on Saturday. The cron-expressions were correct, so I needed to see more. A look at the full jobs.xml revealed the culprit: this environment had been configured to use WeeklyCalendar rather than the usual AnnualCalendar. A quick temporary switch had Quartz working weekends with the rest of us.

Fog of WAR

Despite version labels and other indicators, a web app was behaving as if it was down-level in a key area. Since the WAR file hadn’t been obfuscated, I grabbed a copy of the JD-GUI decompiler for a closer look. Sure enough, the class in question was at an older level. When sources aren’t available or are in question, the JD Project tools are indispensable for moving past assumptions and getting to facts.

Decimal, with an E

New calls to my web services from VoiceObjects were sending certain decimal data in E notation.  The underlying call flow objects were TTA – Literals/Digits with a digits?minlength=1;maxlength=14 grammar, so it should have been decimal from end to end.  But there turned out to be an unexpected feature of the platform to convert to float whenever any arithmetic is done, with no access to DecimalFormat or other mechanisms to convert back. Since there was fortunately no loss of precision and since a problem correctly stated often solves itself, I knew exactly what to do: modify the web service to accept the format. Voxeo provides a great VoiceXML platform, but it has plenty of surprising nuances like this.
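Accepting the format turned out to be trivial on the service side, since BigDecimal parses E notation directly; a quick sanity check:

```java
import java.math.BigDecimal;

public class ENotationDemo {
    public static void main(String[] args) {
        // A value arriving as "1.23E2" still parses losslessly
        BigDecimal fromVoice = new BigDecimal("1.23E2");
        System.out.println(fromVoice.toPlainString()); // prints "123"
    }
}
```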


We corp-rat developers typically do most real work in some big honkin’ desktop IDE, yet dream of popping into a browser or lightweight shell to get things done. For example, with much of my PHP and Ruby code, I can ssh and vim from anywhere, but my Java work is hopelessly tied to a single laptop. If only we could combine the rich features of an IDE with the shareability and convenience of a cloud environment.

That’s what projects like OrionHub and Codenvy seek to accomplish. Orion seems to have stalled a bit, but since Codenvy is quickly moving forward (and ambitiously taking on Java EE development), I gave it a try.

Codenvy supports a variety of languages, frameworks and cloud platforms, but I only needed Java: for a standalone JAR, and some web code with Spring and Tomcat.

Codenvy works hard to re-create an IDE-style environment in the browser, and does a good job of it. It has an Eclipse-like layout and features, and even many of the same keyboard shortcuts. There were plenty of gaps and differences to remind me that I wasn’t in Kansas anymore, yet I could adjust to those. Even required periodic browser refreshes were OK; they always took me back to some reasonable point.

But it was just too sluggish and buggy for anything but small projects. Granted, Eclipse desktop itself is infamous for stalls, crashes, and sudden disappearances, but this is another level. I suspect some of that can be addressed with faster servers and networks. Indeed, to use this for real work, I’d need the on-premise version where I could control that along with back-end interfaces.

I’ll keep an eye on Codenvy and use it where it fits. It’s great for sharing and collaborating on small bits of code, like a jsFiddle for Java. And maybe soon it’ll be the envy of every developer tied to a desktop IDE.

Tight Spring

I still recall the wise advice of a friend while I was designing and building a set of application frameworks over two decades ago: “create as many hooks as possible and be quick to add new ones.”  So for each service flow I noted every interesting step and created extension points for each.  I didn’t think most of these would be used but, over time, nearly all were.  And as users requested new hooks, I quickly added them.

The Spring frameworks, with their IoC underpinnings, are built for extensibility.  The application context is a software breadboard allowing all sorts of custom wirings and component swapping.  And namespace configuration elements make for much clearer and cleaner XML.  Where done right, the namespace configuration captures most core extension points, documents them well, and makes them easy to use. But shortcuts, inflexible designs, and atrophy can cause parts of Spring to be too tight. Here are just a couple of examples I encountered this week.

Spring Security – Session Management Filter

A customer’s single sign-on (SSO) flow required tweaks to SessionManagementFilter, so I built my own: a subclass with just a simple override.  But swapping it in turned out to be quite clumsy, and basic things like setting the invalid-session-url in the namespace no longer work as documented.  Since I now have to specify the complete bean construction in the security context anyway, I decided to just replace the SimpleRedirectInvalidSessionStrategy.  Here again, I just needed one method override, but, alas, it’s a final class.  So, with some copy and paste re-use and ugly XML I finally got what I needed; here’s the gist:

 <http use-expressions="true" ...>
	<!-- Disable the default session mgt filter: -->
	<session-management session-fixation-protection="none" />
	<!-- ... and use the one configured below: -->
	<custom-filter ref="sessionManagementFilter" position="SESSION_MANAGEMENT_FILTER" />
 </http>

 <!-- Configure the session management filter with the custom invalid session re-direct. -->
 <beans:bean id="sessionManagementFilter"
	class="org.springframework.security.web.session.SessionManagementFilter">
	<beans:constructor-arg name="securityContextRepository"
		ref="httpSessionSecurityContextRepository" />
	<beans:property name="invalidSessionStrategy"
		ref="myInvalidSessionStrategy" />
 </beans:bean>

 <beans:bean id="httpSessionSecurityContextRepository"
	class="org.springframework.security.web.context.HttpSessionSecurityContextRepository" />

 <!-- The custom strategy (class name is yours) -->
 <beans:bean id="myInvalidSessionStrategy"
	class="com.example.MyInvalidSessionStrategy">
	<beans:constructor-arg name="invalidSessionUrl" value="/basic/authentication" />
	<beans:property name="myProperty" value="myvalue" />
 </beans:bean>
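And the strategy class itself is a trivial variation on Spring’s final SimpleRedirectInvalidSessionStrategy; a sketch (MyInvalidSessionStrategy and myProperty are my names, and the single override you need goes in the detected-session handler):

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.security.web.session.InvalidSessionStrategy;

public class MyInvalidSessionStrategy implements InvalidSessionStrategy {
    private final String invalidSessionUrl;
    private String myProperty;

    public MyInvalidSessionStrategy(String invalidSessionUrl) {
        this.invalidSessionUrl = invalidSessionUrl;
    }

    public void setMyProperty(String myProperty) {
        this.myProperty = myProperty;
    }

    public void onInvalidSessionDetected(HttpServletRequest request,
                                         HttpServletResponse response)
            throws IOException, ServletException {
        // Start a fresh session, then redirect (custom SSO tweaks would go here)
        request.getSession();
        response.sendRedirect(request.getContextPath() + invalidSessionUrl);
    }
}
```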

I wish JIRAs like SEC-1920 weren’t ignored.

Spring Web Services – HTTP Headers

I built web service calls (using Spring-WS) to a site that requires authentication fields in the SOAP header. It’s pretty pedestrian stuff except there are no direct Spring public methods to set header fields or namespaces. Instead I had to use the recommended WebServiceMessageCallback like so:

webServiceTemplate.sendSourceAndReceiveToResult(getUri(), source, 
        					getWebServiceMessageCallback(), result);

private WebServiceMessageCallback getWebServiceMessageCallback() {
	return new WebServiceMessageCallback() {
		public void doWithMessage(WebServiceMessage message) {
			try {
				SoapMessage soapMessage = (SoapMessage) message;
				SoapHeader header = soapMessage.getSoapHeader();
				StringSource headerSource =
						new StringSource(getAuthenticationHeaderXml());
				Transformer transformer =
						TransformerFactory.newInstance().newTransformer();
				transformer.transform(headerSource, header.getResult());
			} catch (Exception e) {
				// handle exception
			}
		}
	};
}

That’s too verbose for what should have been one line of code.  I agree with JIRAs like SWS-479 that this should be simplified and extended.  Is an addHeader convenience method too much to ask?

BTW, when developing web service calls, I usually start with XMLSpy and soapUI to get the XML down first.  Once I switch over to coding, I typically set JVM system properties (proxySet, proxyHost, and proxyPort) to point to Fiddler2 so I can examine request and response packets.  It’s a nice arrangement, but I’m always looking for new ideas.  If you prefer a different approach, write me.
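For the record, the Fiddler wiring is just three system properties (Fiddler2’s default port of 8888 is an assumption; adjust if you’ve changed it):

```java
public class FiddlerProxy {
    public static void main(String[] args) {
        // Route the JVM's HTTP traffic through a local Fiddler2 instance
        System.setProperty("proxySet", "true");
        System.setProperty("proxyHost", "127.0.0.1");
        System.setProperty("proxyPort", "8888");
        System.out.println("Proxying via " + System.getProperty("proxyHost")
                + ":" + System.getProperty("proxyPort"));
    }
}
```

The same values can be passed as -D arguments on the command line instead.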

Just Say It

I recently built a voice response system with Voxeo’s software: their VoIP IVR (Prophecy) and VoiceXML server (VoiceObjects).  This stack is available in both on-premise (local install) and cloud versions (Evolution/VoiceObjects on Demand), and includes an Eclipse-based development environment, VoiceObjects Desktop for Eclipse (DfE).  VoiceObjects (VO) takes a little getting used to, but is a nice platform for developing call flows.  It sure beats hand-coding VoiceXML, and it integrated well with my Java service back-end.

VoiceObjects’ documentation is excellent, but I learned quite a lot by trial and error.  So I thought I’d share a few tips and lessons learned.

Use scripts and expressions

VO’s library of objects is quite rich, but you will inevitably run into some requirement that needs more than just objects with properties.  For example, I used nested concatenate expressions to build complex types for web service calls; split, index, findcell, strequal, and matchesregexp expressions to process web service results; and JavaScript to handle custom DTMF mappings. The prepackaged expressions are also helpful, such as the ones that grab ANI and DNIS from session state.

Set event handlers

It’ll take about 5 seconds to grow weary of the thickly-accented “service you are trying to reach is not available…” default message.  Get a friendlier message with more root cause details by setting the event handlers in your top-level module (they’ll be inherited by child objects).  Just open the Outline view, expand the Event Handling section, and add away.  You can add whatever outputs you’d like, and use different handlers for repeat occurrences.

Set default routing

Be sure you have a default routing (*) entry in the Prophecy Commander applications list, since surely someone will use the wrong SIP address or configure the wrong URI.  With a default routing entry, callers at least land in a recognizable place.

Get Blink

VO’s built-in SIP Phone is very basic, so you’ll find yourself looking for alternatives.  There are a lot of free SIP phones out there, but most require signing up to a service or can otherwise violate corp-rat security concerns.  I settled on Blink.  It’s a little buggy, but does the job.

Get an ATA

SIP phones are handy, but you’ll eventually want to pick up a POTS phone or cell and call into your IVR.  While you can assign PSTN phone numbers in Evolution/VoD, that doesn’t help with your local Prophecy install.  That’s where Analog Telephone Adapter (ATA) gateways come in.  I used the AudioCodes MP-114; the model with two FXO and two FXS ports provides both analog in and analog out options.

Get the latest version

Since Voxeo’s hosted environment uses VO 10, I started out with that version.  But I quickly ran into some problems that required recent fixes.  Voxeo was kind enough to build a custom VO 11 environment in an AWS instance for my cloud testing, but their GA hosted platform will be upgraded soon.  In the meantime, there’s really no reason not to use one of the recent versions: 11 or 12.

Add debug outputs

VO provides a Log object for tracing purposes, but I often found it easier to use additional Output objects to hear trace messages on the call.  By convention, I labelled these with a “Debug – ” prefix, and disabled them (but left them in place) when not used.

Use the logs

Even minor typos can cause VO to throw internal exceptions that leave you scratching your head.  Sometimes you can diagnose these with Debug and Trace, but in most cases the VoiceObjects logs (viewable from DfE) provide the best information.   If the problem is on the IVR side, use Prophecy’s built-in Log Viewer to hunt down the root cause.

Use the support forums

For some vendors, support fora are where problems go to die from neglect.  But that’s far from the case with Voxeo.  They have an excellent support team that quickly responds to issues.  You can browse existing posts for common problems and easily add new tickets.