Tag Archives: Infosec

In a Heartbeat

Although the Heartbleed data leak vulnerability is as old as OpenSSL 1.0.1‘s heartbeats (over two years), it has only now risen to instant infamy. First, it took us all a while to upgrade to the OpenSSL 1.0.1 branch and, second, the bug wasn’t publicized until this week. So now that we have a perfect storm of ubiquity and fame, the internet will be flooded with hackers scanning sites and running off with all the data they can grab.

So Filippo Valsorda‘s Heartbleed tester has arrived just in time. I found the online version too slow for my needs, so I grabbed the Go source and ran it locally. It contains the simple magic:

     // Claim a payload length 100 bytes larger than the actual payload
     err = binary.Write(&buf, binary.BigEndian, uint16(len(payload)+100))

… which can get you back more than you gave and everything you asked for:

00000000 02 00 79 68 65 61 72 74 62 6c 65 65 64 2e 66 69 |..yheartbleed.fi|
00000010 6c 69 70 70 6f 2e 69 6f 59 45 4c 4c 4f 57 20 53 |lippo.ioYELLOW S|
00000020 55 42 4d 41 52 49 4e 45 3f e5 bc eb c8 a6 ba e0 |UBMARINE?.......|
00000030 c3 a2 3f f4 27 a1 66 00 4d 6b ca 79 ed 24 8b 2c |..?.'.f.Mk.y.$.,|
00000040 ab ff 3a 31 25 8d c5 c2 6c ea 04 bb 2c e3 53 41 |..:1%...l...,.SA|
00000050 4f 2e 56 09 de e5 99 98 dc ef f8 42 67 41 9f 21 |O.V........BgA.!|
00000060 6c 73 e7 6f f5 4a 54 90 a5 fc bb 5b c1 2c aa 78 |ls.o.JT....[.,.x|
00000070 d8 1c c4 ea 5f 99 f5 09 69 bb b7 46 76 0d 8a 2b |...._...i..Fv..+|
00000080 1c 48 f3 c5 1c 9f d8 47 e7 b1 b6 15             |.H.....G....|

That’s right: OpenSSL versions 1.0.1 through 1.0.1f trust your claimed payload length when sizing the response, handing back up to nearly 64K of goodies per heartbeat.
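To see how little it takes, here’s a rough Java sketch of building such a heartbeat record (a hypothetical Heartbeat class for illustration; the real tester is in Go): the declared length field simply overstates the actual payload.

```java
import java.nio.ByteBuffer;

public class Heartbeat {
    // Build a TLS heartbeat request whose declared payload length
    // overstates the actual payload; the core of the Heartbleed probe.
    static byte[] build(byte[] payload, int claimedLen) {
        ByteBuffer buf = ByteBuffer.allocate(3 + payload.length);
        buf.put((byte) 0x01);              // type: heartbeat_request
        buf.putShort((short) claimedLen);  // declared length (big-endian uint16)
        buf.put(payload);                  // actual payload (shorter than claimed)
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] payload = "heartbleed.filippo.io".getBytes(); // 21 bytes
        byte[] msg = build(payload, payload.length + 100);
        // Declared length is 121 even though only 21 payload bytes follow.
        System.out.println(((msg[1] & 0xff) << 8) | (msg[2] & 0xff)); // 121
    }
}
```

A vulnerable server sizes its echo from that declared length, so the extra 100 bytes come straight off its heap.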

Kudos to Filippo for putting together such a handy tester so quickly.

Goto Fail

Now that the media frenzy over Apple’s SSL (goto fail) MitM vulnerability has calmed and our iPhones and Macs have been updated, folks are starting to reflect. That’s a good thing. Among other things, this event has pushed issues of code quality, review, testing, patching/updates, version blocking and user notification up the management chain to, hopefully, get more focus and funding.

This had something for everyone and the net is now flooded with commentary on it. I’m certainly not going to repeat that, but consider:

  • Some static analyzers (like Coverity and clang with -Wunreachable-code) would have caught this.
  • Peer code review likely would have caught this.
  • The bug was in the open source community for anyone to find.
  • Vendors everywhere were asking how they could help notify their users and ensure they upgrade.

This one-liner joins the ranks of the most apt and pithy software bugs. It’s right up there with the misplaced break that broke the AT&T network on my 25th birthday. BTW, if your birthday call to me didn’t get through that day, there’s still time; call or message me. But please upgrade your iPhone first.

Entropy Atrophy

derek williams (gatech) has the 2nd best hash; see the full standings at http://almamater.xkcd.com/best.csv

Kudos to the Carnegie Mellon alum for the closest-fit solution to xkcd’s externally-controlled April Fools comic and Skein hash collision contest (always read the alt text).  I was far too hardware-deprived to be nerd-sniped by this one, but there were plenty of others who jumped right on it, all motivated by challenge rather than “money.”

That’s encouraging because it seems information theory isn’t taught much anymore (my own alma mater has long since dropped “ICS” for “CS”). Although we now need it most (in our big data and security-starved era), our collective entropy-sophy has atrophied.

In areas like security policy and algorithm design, the cold reality of the pigeonhole principle is too often forgotten.  We often regard hashes as magic, forgetting they’re just bit-twiddling mapping functions, and that whenever the domain is bigger than the range, there must be collisions.  Simple truths like this have been ignored in spots ranging from the Xbox to ReiserFS.
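To make the pigeonhole principle concrete, here’s a tiny illustrative sketch (the 16-bit “hash” is just Java’s String hashCode masked down; any real hash behaves the same once the domain outgrows the range):

```java
import java.util.HashSet;
import java.util.Set;

public class Pigeonhole {
    // Squeeze n distinct inputs into a 16-bit range (65,536 buckets)
    // and count how many land in an already-occupied bucket.
    static int countCollisions(int n) {
        Set<Integer> seen = new HashSet<>();
        int collisions = 0;
        for (int i = 0; i < n; i++) {
            int bucket = Integer.toString(i).hashCode() & 0xFFFF; // 16-bit "hash"
            if (!seen.add(bucket)) collisions++;
        }
        return collisions;
    }

    public static void main(String[] args) {
        // With 70,000 pigeons and only 65,536 holes, collisions are
        // guaranteed; the birthday bound makes them likely far sooner.
        System.out.println(countCollisions(70_000) > 0); // true
    }
}
```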

The winner of xkcd’s contest was still 384 bits shy of a total collision, but every imperfect hash has plenty of clashes.  So be careful out there to win that battle between bit length and processing power, at least while GPUs and quantum computers develop.

BTW, it seems Wikipedia got enough donations from the effort to be good sports about the xkcd-hacking.

Secret Handshake

Since security is king in my corp-rat world, standards dictate that my public web services be accessed via mutual authentication SSL.  The extra steps this handshake requires can be tedious: exchanging certs, building keystores, configuring connections, updating encryption JARs, etc.  So when helping developers of a third party app call in, it’s useful to provide a standard tool as a non-proprietary point of reference.

This week I decided to use soapUI to demonstrate calls into my web services over two-way SSL.  The last time I did something like this, I used keytool and openssl to build keystores and convert key formats.  But this go ’round I stumbled across this most excellent post which recommends the user-friendly Portecle tool, and steps through the soapUI setup.

Just a few tips to add:

  • SoapUI’s GUI-accessible logs (soapUI log, http log, SSL Info, etc.) are helpful for diagnosing common problems, but sometimes you have to view the content of bin\soapui-errors.log and error.log.  Take a peek there if other diags aren’t helpful.
  • SoapUI doesn’t show full details of the server/client key exchange.  You can get more detailed traces with a simple curl -v or curl --trace; for example:

curl -v -E mykey.pem https://myhost.com/myservice

Happy handshaking!

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

This week’s challenges ran the gamut, but there’s probably not much broad interest in consolidated posting for store-level no advice chargebacks, image format and compression conversion, SQLs with decode(), 798 NACHA addenda, or many of the other crazy things that came up.  So I’ll stick to the web security vein with a CSRF detector I built.

Sea Surf

If other protections (like XSS) are in place, meaningful Cross-Site Request Forgery (CSRF) attacks are hard to pull off.  But that usually doesn’t stop the black hats from trying, or the white hats from insisting you specifically address it.

The basic approach to preventing CSRF (“sea surf”) is to insert a synchronizer token on generated pages and compare it to a session-stored value on subsequent incoming requests.  There are some pre-packaged CSRF protectors available, but many are incomplete while others are bloated or fragile.  I wanted CSRF detection that was:

  • Straightforward – Please, no convoluted frameworks nor tons of abstracted code
  • Complete – Must cover AJAX calls as well as form submits, and must vary the token by rendered page
  • Flexible – Must not assume a sequence of posts or limit the number of AJAX calls from one page
  • Unobtrusive – Must “drop in” easily without requiring @includes inside forms.

I also wanted to include double submit protection without having to add another filter (and certainly no PRG filters; POSTs must be POSTs).  Here’s the gist of it.

First, we need to insert a token.  I could leverage the fact that nearly all of our JSPs already included a common JSPF file, so I just added to that.  The @include wasn’t always inside a form so I added the hidden input field via JavaScript (setToken).  I used a bean to keep the JSPF as slim as possible.

<jsp:useBean id="tokenUtil" class="com.writestreams.TokenUtil">
	<% tokenUtil.initialize(request); %>
</jsp:useBean>
 
<script>
	var tokenName = '<c:out value="${tokenUtil.tokenName}" />';
	var tokenValue = '<c:out value="${tokenUtil.tokenValue}" />';	
 
	function setToken(form) {
		$('<input>').attr({
			type: 'hidden',
			id: tokenName,
			name: tokenName,
			value: tokenValue
		}).appendTo(form);
	}
 
	$("body").bind("ajaxSend", function(elm, xhr, s){
		if (s.type == "POST") {
			xhr.setRequestHeader(tokenName, tokenValue);
   		}
	});	
</script>

I didn’t want to modify all those $.ajax calls to pass the token, so the ajaxSend handler does it.  The token arrives from AJAX calls in the request header, and from form submits as a request parameter (the hidden input field); that gives the benefit of being able to distinguish them.  You could use a separate token for each if you’d like.

The TokenUtil bean is simple, just providing the link to the CSRFDetector.

public class TokenUtil {
	private String tokenValue = null;
	public void initialize(HttpServletRequest request) {
		this.tokenValue = CSRFDetector.createNewToken(request);
	}	
	public String getTokenValue() {
		return tokenValue;
	}
	public String getTokenName() {
		return CSRFDetector.getTokenAttribName();
	}	
}

CSRFDetector.createNewToken generates a new random token for each page render and adds it to a list stored in the session.  It does JavaScript-encoding in case there are special characters.

public static String createNewToken(HttpServletRequest request) {
	HttpSession session = request.getSession(false);
	String token = UUID.randomUUID().toString();
	addToken(session, token);
	return StringEscapeUtils.escapeJavaScript(token);
}

A servlet filter (doFilter) calls CSRFDetector to validate incoming requests and return a simple error string if invalid.  You can limit this to only validating POSTs with parameters, or extend it to other requests as needed.  The validation goes like this:

public String validateRequestToken(HttpServletRequest request) {			
	HttpSession session = request.getSession(false);
	if (session == null) 
		return null;	// No session established, token not required
 
	String ajaxToken = decodeToken(request.getHeader(getTokenAttribName()));
	String formToken = decodeToken(request.getParameter(getTokenAttribName()));
	if (ajaxToken == null && formToken == null) 		
		return "Missing token";
 
	ArrayList<String> activeTokens = getActiveTokens(session);
	if (ajaxToken != null) {
		// AJAX call - require the token, but don't remove it from the list
		// (allow multiple AJAX calls from the same page with the same token).
		if (!activeTokens.contains(ajaxToken))		
			return "Invalid AJAX token";
	} else {
		// Submitted form - require the token and remove it from the list
		// to prevent any double submits (refresh browser and re-post).
		if (!activeTokens.contains(formToken)) 			
			return "Invalid form token";				
		activeTokens.remove(formToken);
		setActiveTokens(session, activeTokens);
	}
	return null;		
}
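The addToken and getActiveTokens helpers called above aren’t shown; one hypothetical shape for them (sketched here with a plain Map standing in for HttpSession so it runs stand-alone) is:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;

public class TokenStore {
    static final String ATTRIB = "csrf.activeTokens";

    // Fetch the session's token list, or an empty one if none yet.
    @SuppressWarnings("unchecked")
    static ArrayList<String> getActiveTokens(Map<String, Object> session) {
        ArrayList<String> tokens = (ArrayList<String>) session.get(ATTRIB);
        return tokens != null ? tokens : new ArrayList<>();
    }

    // Append a freshly generated token and store the list back.
    static void addToken(Map<String, Object> session, String token) {
        ArrayList<String> tokens = getActiveTokens(session);
        tokens.add(token);
        session.put(ATTRIB, tokens);
    }

    public static void main(String[] args) {
        Map<String, Object> session = new HashMap<>();
        addToken(session, "page-token-1");
        System.out.println(getActiveTokens(session).contains("page-token-1")); // true
    }
}
```

In the real filter these would call session.getAttribute/setAttribute; you might also cap the list size so long sessions don’t accumulate tokens without bound.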

There you have it.  There are several good tools available for testing; I recommend OWASP’s CSRFTester.

Linkapalooza

Some useful / interesting links that came up this week:

  • Grep Console – Eclipse console text coloring (but no filtering)
  • Springpad – Think Evernote++
  • Pocket – New name for ReadItLater
  • RFC 2616 – The HTTP 1.1 spec at, well, HTTP

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

You know the old saying, “build a man a fire and he’s warm for a day; set a man on fire, and he’s warm for the rest of his life.”  Or something like that.  I’ve been asked about tool preferences and development approaches lately, so this week’s post focuses on tools and strategies.

JRebel

If you’re sick of JVM hot-swap error messages and having to redeploy for nearly every change (who isn’t?), run, do not walk, to ZeroTurnaround‘s site and get JRebel.  I gave up on an early trial last year, but picked it up again with the latest version a few weeks ago.  This thing is so essential, it should be part of the Eclipse base.

And while you’re in there, check out their Java EE Productivity Report.  Interesting.

Data Studio

My DB2 tool of choice depends on what I’m doing: designing, programming, tuning, administering, or monitoring.  There is no “one tool that rules them all,” but my favorites have included TOAD, Eclipse DTP, MyEclipse Database Tools, Spotlight, db2top, db2mon, some custom tools I wrote, and the plain old command line.

I never liked IBM’s standard GUI tools like Control Center and Command Editor; they’re just too slow and awkward.  With the advent of DB2 10, IBM is finally discontinuing Control Center, replacing it with Data Studio 3.1, the grown-up version of the Optim tools and old Eclipse plugins.

I recently switched from a combination of tools to primarily using Data Studio.  Having yet another Eclipse workspace open does tax memory a bit, but it’s worth it for Data Studio’s feature richness.  Not only do I get the basics of navigation, SQL editors, table browsing and editing; I can also do explains, tuning, and administration tasks quickly from the same tool.  Capability-wise, it’s like “TOAD meets DTP,” and it’s the closest thing yet to that “one DB2 tool.”

Standardized Configuration

For team development, I’m a fan of preloaded images and workspaces.  That is, create a standard workspace that other developers can just pick up, update from the VCS, and start developing.  It spares everyone from having to repeat setup steps, or debug configuration issues due to a missed setting somewhere.  Alongside this, everybody uses the same directory structures and naming conventions.  Yes, “convention over configuration.”

But with the flexibility of today’s IDEs, this has become a lost art in many shops.  Developers give in to the lure of customization and go their own ways.  But is that worth the resulting lost time and fat manual “setup documents?”

Cloud-based IDEs promise quick start-up and common workspaces, but you don’t have to move development environments to the cloud to get that.  Simply follow a common directory structure and build a ready-to-use Eclipse workspace for all team members to grab and go.

Programmer Lifestyle

I’ve been following Josh Primero’s blog as he challenges the typical programmer lifestyle.

Josh is taking it to extremes, but he does have a point: developers’ lives are often too hectic and too distracted.  This “do more with less” economy means multiple projects and responsibilities and the unending tyranny of the urgent.  Yet we need blocks of focused time to be productive, separated by meaningful breaks for recovery, reflection, and “strategerizing.”  It’s like fartlek training: those speed sprints are counterproductive without recovery paces in between.  Prior generations of programmers had “smoke breaks;” we need equivalent times away from the desk to walk away and reflect, and then come back with new ideas and approaches.

I’ll be following to see if these experiments yield working solutions, and if Josh can stay employed.  You may want to follow him as well.

Be > XSS

As far as I know, there’s no-one whose middle name is <script>transferFunds()</script>.  But does your web site know that?

It’s surprising how prevalent cross-site scripting (XSS) attacks are, even after a long history and established preventions.  Even large sites like Facebook and Twitter have been victimized, embarrassing them and their users.  The general solution approach is simple: validate your inputs and escape your outputs.  And open source libraries like ESAPI, StringEscapeUtils, and AntiSamy provide ready assistance.

But misses often aren’t due to systematic neglect, rather they’re caused by small defects and oversights.  All it takes is one missed input validation or one missed output-encode to create a hole.  99% secure isn’t good enough.

With that in mind, I coded a servlet filter to reject post parameters with certain “blacklist” characters like < and >.  “White list” input validation is better than a blacklist, but a filter is a last line of defense against places where server-side input validation may have been missed.  It’s a quick and simple solution if your site doesn’t have to accept these symbols.
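The core check of such a filter is trivial; a minimal sketch (hypothetical names, and a real version would loop over the request’s parameter map inside doFilter) might look like:

```java
public class BlacklistCheck {
    // Reject any parameter value containing characters that have no
    // business in this site's inputs; the last line of defense when an
    // upstream input validation was missed.
    static final String BLACKLIST = "<>";

    static boolean isClean(String value) {
        for (char c : BLACKLIST.toCharArray()) {
            if (value.indexOf(c) >= 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isClean("John Smith"));                       // true
        System.out.println(isClean("<script>transferFunds()</script>")); // false
    }
}
```

A filter like this only works if the site genuinely never needs those symbols; otherwise you’re back to context-sensitive white-list validation and output encoding.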

I’m hopeful that one day we’ll have a comprehensive open source framework that we can simply drop in to protect against most web site vulnerabilities without all the custom coding and configuration that existing frameworks require.  In the mean time, just say no to special characters you don’t really need.

Comments Off

On that note, I’ve turned off comments for this blog.  Nearly all real feedback comes via emails anyway, and I’m tired of the flood of spam comments that come during “comments open” intervals.  Most spam comments are just cross-links to boost page rank, but I also get some desperate hack attempts.  Either way, it’s time-consuming to reject them all, so I’m turning comments off completely.  To send feedback, please email me.

Out of Whack

Another programmer gone bad story:

Man Arrested for ‘Whac-A-Mole’ Sabotage

Mallet-wielding children in arcades and amusement parks could be holding their weapons over mole holes in vain, waiting for a weasel that just won’t pop. If you run across a Whac-A-Mole arcade cabinet that becomes a dud out of nowhere, it might not be the arcade’s fault. A game programmer in Florida has been running a scheme since 2008, infecting Whac-A-Moles with a computer virus.

A Florida local news video on the story is available here.

This sort of thing can happen just about anywhere.  How to avoid it?  More on that later.

Go Phish

I’ve seen an uptick lately in phishing emails that do a much better job of replicating legitimate ones. For example, I’ve received several that look like Amazon orders or LinkedIn reminders. In all cases, the email content is a dead ringer for kosher ones except that the content (book titles, names, etc.) is unfamiliar, and if I mouse over the embedded links, the target URL is fishy indeed. And therein lies the purpose: to get unsuspecting recipients to click one of those links, visit its site, and receive malware.

Identifying and stopping these emails was easy.  Since they arrived at my Gmail account, I created some quick filters to corral them.  On closer inspection, I found that they were sent to a couple of my forwarded email addresses, so I simply turned off those forwards at my domain host.  And that provided some insight into the source: one was an old address I had given out only to InformationWeek.  Have they been selling or otherwise disclosing my email address?

Google’s anti-phishing initiatives have had mixed success.  Their phishing filter has been criticized for too many false positives, their DKIM initiatives have had too little uptake, and their “authentication icon for verified senders” is much too passive-aggressive.  But this has perhaps a simple solution: flag any email with embedded links where the target URL’s domain differs from the sender’s domain.  Perhaps this could be done with some creative filters or a Gmail gadget.  If these things come back, I’ll give it a shot.

Checkmate

Hats off to local SecureWorks for detecting and thwarting the massive BigBoss Russian check counterfeiting ring.  Their Counter Threat Unit (yes, 24 fans, there really is a CTU) uncovered an operation used to create over $9 million in counterfeit checks over the past year.

It was a sophisticated attack utilizing ZeuS trojans, SQL injection, a couple thousand infected computers, and a VPN to transmit stolen data.  The perps stole over 200,000 check images from archive services and used these to create counterfeit checks.  They then overnighted these checks to U.S. recipients (drawn from a stolen database of job seekers) who were to deposit the checks and wire some of the funds back to them.  These unwitting money mules (who thought they were job candidates) did become suspicious, so the plan was apparently not very successful.

Compared with credit card fraud, widespread check fraud is less common and is typically easier to resolve.  However, check authorization systems are incomplete, so prevention is more difficult.  But solutions are well within reach, such as a secure national shared database of positive pay, authorization, negative list, and “stop” information that could be accessible to everyone, not just large commercial customers.  This could plug one of the last big security holes in our bank accounts.

Third Time’s a Charm

Reality is firmly rooted: we don’t quite yet have quantum computers, nor have we really proven that P != NP.  Yet while cracking most modern encryption and hash algorithms falls into the “not impossible, just highly improbable” category, academic weaknesses do get attention.  So much so that the old SHA-1 and MD5 hashing mainstays are no longer considered acceptable.  Soon enough, SHA-2 will also be as uncool as a rickroll.

Just in the nick of time, the NIST is narrowing the list of candidates for the new SHA-3 algorithm.  The second round just finished, and it’s down to 14 candidates, with the winner to be chosen before the Aztec calendar ends in 2012.  It should be a good contest, as long as FIFA referees aren’t involved.

This is exciting stuff, and I’m sure you’ll want to play along.  Just use your jailbroken, Krakenproofed cell phone to text your favorite to 2600.  I’m pulling for Skein, mainly because of the cool name and celebrity status.

Defaulting Less Security

Today, a friend reported that one of the apps I provide as a community service was down.  Its WebCalendar component complained that magic_quotes_gpc was no longer enabled, which I quickly confirmed by dropping in a phpinfo() call.  The remedy was also quick and easy: add a local php.ini with:

magic_quotes_gpc = On

This automatically adds slashes to escape quotes and other characters in GET, POST, and Cookie strings (hence “gpc”).  The PHP code then removes these via stripslashes and similar techniques.

Magic_quotes_gpc is no longer considered a good way to guard against SQL injection attacks, so the PHP Security Consortium and others now recommend against it.  I suspect this is why my hosting service changed their global setting to off, but security-wise, that was a step in the wrong direction.

Many PHP apps that support magic quotes are coded to work even if it’s turned off, and the get_magic_quotes_gpc() ? stripslashes(...) template for doing so is seemingly everywhere.  Fortunately, WebCalendar checks for this, but many apps don’t.  Disabling magic quotes was probably meant to force apps to change, but it’s more likely apps will continue to work while developers and admins don’t realize they’re suddenly far more susceptible to injection attacks.  A better approach would have been to just let it die with the 5.3 upgrade.
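For illustration, here’s roughly what magic quotes does to each incoming GET/POST/Cookie value, sketched in Java (an approximation of PHP’s addslashes behavior, not its actual implementation):

```java
public class MagicQuotes {
    // Backslash-escape single quotes, double quotes, backslashes, and
    // NUL bytes, as PHP's addslashes does for every gpc value when
    // magic_quotes_gpc is on.
    static String addSlashes(String s) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (c == '\0') { out.append("\\0"); continue; }
            if (c == '\'' || c == '"' || c == '\\') out.append('\\');
            out.append(c);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(addSlashes("O'Brien")); // O\'Brien
    }
}
```

The trouble is that this blanket escaping is neither necessary nor sufficient for every context (SQL, HTML, shell), which is exactly why parameterized queries and context-specific escaping won out.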

Academic Pursuits

Like any obedient grad student, I wrote a lot of papers while recently working on my Masters degree.  While most were admittedly specialized and pedantic (and probably read like they were written by SCIgen), some may accidentally have some real world relevance.  Just last week, I handed out my XTEA paper to a co-worker who was foolish enough to ask.

At the risk that others might be interested, I posted a couple of the less obscure ones where I was the sole author; they are:

  • The Tiny Encryption Algorithm (TEA) – The Tiny Encryption Algorithm (TEA) was designed by Wheeler and Needham to be indeed “tiny” (small and simple), yet fast and cryptographically strong. In my research and experiments, I sought to gain firsthand experience to assess its relative simplicity, performance, and effectiveness. This paper reports my findings, affirms the inventors’ claims, identifies problems with incorrect implementations and cryptanalysis, and recommends some solutions.
  • Practical Wireless Network Security – Security measures are available to protect data communication over wireless networks in general, and IEEE 802.11 (Wi-Fi) in particular. Unfortunately, these measures are not widely used, and many of them are easily circumvented. While Wi-Fi security risks are often reported in the technical media, they are largely ignored in practice. This report explores reasons why.

Click on a title to access a PDF.

The Hacker Crackdown

It seems like only yesterday, but it’s been 20 years now since a simple bug in a C program brought down much of AT&T’s long distance network and brought on a national phreak hunt.  It was January 15, 1990: a day I’ll never forget because it was my 25th birthday, and the outage made for a rough work day.  But, in retrospect, it offers a great story, full of important lessons.

The first lesson was realized quickly and is perhaps summed up by Occam’s razor: the simplest explanation is often the most likely one.  This outage wasn’t the result of a mass conspiracy by phone phreaks, but of recent code changes: Signaling System 7 (SS7) upgrades to the 4ESS switches.

There are obvious lessons to be learned about testing.  Automated unit test was largely unknown back then, and it could be argued that this wouldn’t have happened had today’s unit test best practices been in place.

This taught us a lot about code complexity and factoring, since a break statement could be so easily misaligned in such a long, cumbersome function.  The subsequent 1991 outage caused by a misplaced curly brace in the same system provided yet another reminder.

Finally, this massive chain reaction of failures reminded us of the systemic risk from so many interconnected systems.  That situation hasn’t improved; rather, it’s much worse today.  We don’t call it the internet and the web for nothing.

I was reminded of 1990 when Google recently deployed Buzz: yet another player in our tangled web of feeds and aggregators.  These things are convenient; for example, I rarely login to Plaxo, but it looks as though I’m active there because I have it update automatically from this blog and other feeds.  It makes me wonder if someone could set off a feedback loop with one of these chains.  Aggregators can easily prevent this (some simple pattern matching will do), but there may be someone out there trying to make it happen.  After all, there’s probably a misplaced curly brace in all that Java code.

Bruce Sterling’s book, The Hacker Crackdown provides an interesting read of the 1990 failure, and he has put it in the public domain on the MIT web site.  If you want a quick partial read, I recommend Part I.