
EMV Day Now

When it comes to consumer technologies, we in the US often let the rest of the developed world “leapfrog” us, frequently with our own innovations.  The main culprits are typically our size and social adoption curves.  When you have an installed base of familiar and comfortable (but old) technologies numbering in the hundreds of millions, transition takes a while.  So we’re stuck with broad use of anachronistic things like CDMA cell phone networks, Windows XP, checks, and skimmable mag stripe credit cards.  In payments, where adoption is key, it often takes significant financial and regulatory incentives to bring in the new.

As card fraud escalates, US payment networks are stepping up incentives to migrate to chip-embedded credit and debit cards using the Europay-Mastercard-Visa (EMV) standard.  For example, Visa’s October 2015 fraud liability shift (from issuer to merchant) for non-EMV transactions provides the looming punitive “stick,” while its recently announced common debit solution and Technology Innovation Program (TIP) provide some “carrots.”  But that’s all “network push” with little “consumer pull.”  Hopefully, as more EMV cards roll out in the US, consumers will value the extra security, and competitive pressure will motivate issuers to send out those new cards quickly.  EMV doesn’t solve all card fraud problems, but it’s a step worth taking.  The costs of fraud affect us all, and it’s time we caught up with the rest of the world.

Friday Fixes

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

You know the old saying, “build a man a fire and he’s warm for a day; set a man on fire, and he’s warm for the rest of his life.”  Or something like that.  I’ve been asked about tool preferences and development approaches lately, so this week’s post focuses on tools and strategies.

JRebel

If you’re sick of JVM hot-swap error messages and having to redeploy for nearly every change (who isn’t?), run, do not walk, to ZeroTurnaround’s site and get JRebel.  I gave up on an early trial last year, but picked it up again with the latest version a few weeks ago.  This thing is so essential, it should be part of the Eclipse base.

And while you’re in there, check out their Java EE Productivity Report.  Interesting.

Data Studio

My DB2 tool of choice depends on what I’m doing: designing, programming, tuning, administering, or monitoring.  There is no “one tool that rules them all,” but my favorites have included TOAD, Eclipse DTP, MyEclipse Database Tools, Spotlight, db2top, db2mon, some custom tools I wrote, and the plain old command line.

I never liked IBM’s standard GUI tools like Control Center and Command Editor; they’re just too slow and awkward.  With the advent of DB2 10, IBM is finally discontinuing Control Center, replacing it with Data Studio 3.1, the grown-up version of the Optim tools and old Eclipse plugins.

I recently switched from a combination of tools to primarily using Data Studio.  Having yet another Eclipse workspace open does tax memory a bit, but it’s worth it to get Data Studio’s feature richness.  Not only do I get the basics of navigation, SQL editors, and table browsing and editing, but I can also run explains, tuning, and administration tasks quickly from the same tool.  Capability-wise, it’s like “TOAD meets DTP,” and it’s the closest thing yet to that “one DB2 tool.”

Standardized Configuration

For team development, I’m a fan of preloaded images and workspaces.  That is, create a standard workspace that other developers can just pick up, update from the VCS, and start developing.  It spares everyone from having to repeat setup steps, or debug configuration issues due to a missed setting somewhere.  Alongside this, everybody uses the same directory structures and naming conventions.  Yes, “convention over configuration.”

But with the flexibility of today’s IDEs, this has become a lost art in many shops.  Developers give in to the lure of customization and go their own ways.  But is that worth the resulting lost time and fat manual “setup documents”?

Cloud-based IDEs promise quick start-up and common workspaces, but you don’t have to move development environments to the cloud to get that.  Simply follow a common directory structure and build a ready-to-use Eclipse workspace for all team members to grab and go.

Programmer Lifestyle

I’ve been following Josh Primero’s blog as he challenges the typical programmer lifestyle.

Josh is taking it to extremes, but he does have a point: developers’ lives are often too hectic and too distracted.  This “do more with less” economy means multiple projects and responsibilities and the unending tyranny of the urgent.  Yet we need blocks of focused time to be productive, separated by meaningful breaks for recovery, reflection, and “strategerizing.”  It’s like fartlek training: those speed sprints are counterproductive without recovery paces in between.  Prior generations of programmers had “smoke breaks”; we need equivalent times away from the desk to walk away and reflect, and then come back with new ideas and approaches.

I’ll be following to see if these experiments yield working solutions, and if Josh can stay employed.  You may want to follow him as well.

Be > XSS

As far as I know, there’s no one whose middle name is <script>transferFunds()</script>.  But does your web site know that?

It’s surprising how prevalent cross-site scripting (XSS) attacks are, even after a long history and well-established defenses.  Even large sites like Facebook and Twitter have been victimized, embarrassing them and their users.  The general solution approach is simple: validate your inputs and escape your outputs.  And open source libraries like ESAPI, StringEscapeUtils, and AntiSamy provide ready assistance.
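For example, output escaping is nearly a one-liner with Commons Lang’s StringEscapeUtils (mentioned above); a minimal sketch, assuming commons-lang3 is on the classpath:

import org.apache.commons.lang3.StringEscapeUtils;

public class EscapeDemo {
    public static void main(String[] args) {
        // A hostile "middle name" pulled straight from request input
        String middleName = "<script>transferFunds()</script>";
        // Escaping renders the markup inert when echoed into HTML:
        // &lt;script&gt;transferFunds()&lt;/script&gt;
        System.out.println(StringEscapeUtils.escapeHtml4(middleName));
    }
}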

But misses often aren’t due to systematic neglect; rather, they’re caused by small defects and oversights.  All it takes is one missed input validation or one missed output-encode to create a hole.  99% secure isn’t good enough.

With that in mind, I coded a servlet filter to reject post parameters with certain “blacklist” characters like < and >.  “Whitelist” input validation is better than a blacklist, but a filter is a last line of defense against places where server-side input validation may have been missed.  It’s a quick and simple solution if your site doesn’t have to accept these symbols.
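Here’s a condensed sketch of that kind of filter, assuming the servlet 3.x API; the class name, blacklist, and error message are mine, and a real deployment would also need a web.xml or @WebFilter mapping:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class BlacklistFilter implements Filter {
    // Characters this site never accepts in any request parameter
    private static final String BLACKLIST = "<>";

    public void init(FilterConfig config) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        for (String[] values : req.getParameterMap().values()) {
            for (String value : values) {
                for (int i = 0; i < BLACKLIST.length(); i++) {
                    if (value.indexOf(BLACKLIST.charAt(i)) >= 0) {
                        // One bad character is enough to reject the whole request
                        ((HttpServletResponse) res).sendError(
                            HttpServletResponse.SC_BAD_REQUEST, "Invalid characters in input");
                        return;
                    }
                }
            }
        }
        chain.doFilter(req, res);
    }
}

Mapped in front of everything, it cheaply backstops any handler that missed its own validation.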

I’m hopeful that one day we’ll have a comprehensive open source framework that we can simply drop in to protect against most web site vulnerabilities without all the custom coding and configuration that existing frameworks require.  In the meantime, just say no to special characters you don’t really need.

Comments Off

On that note, I’ve turned off comments for this blog.  Nearly all real feedback comes via emails anyway, and I’m tired of the flood of spam comments that come during “comments open” intervals.  Most spam comments are just cross-links to boost page rank, but I also get some desperate hack attempts.  Either way, it’s time-consuming to reject them all, so I’m turning comments off completely.  To send feedback, please email me.

Simon Says

There was a bit more dialog today about impersonating the DB2 instance owner.  It’s a quick way to get around controls that newer versions of DB2 and tighter Windows and network security have brought us.  The extra step is annoying, but trying to convince the system you don’t need it is often worse.

Impersonation and elevation have become the “new normal” these days.  I’ve grown so accustomed to opening “run as administrator” shells in UAC Windows (7/Vista/2008), typing runas commands in XP, and using sudo in Ubuntu that these have become second nature.  And that level of user acceptance usually translates into approval to expand the practice, rather than a mandate to remove the inconvenience.  Enhancing security usually includes putting up new barriers.

A former co-worker has often said that what we really need is software that determines whether a user’s intentions are honorable.  Perhaps then security would become seamless.  But it’s more likely that its implementation would also test our manners and fading patience.

Impersonating Better Security

I got a question today from a co-worker who was painted into a corner trying to access a database he had restored on his Windows development machine.  He stumbled over DB2 9.7’s new security twists, such as not having dbadm authority by default.  I rattled off my familiar quick fix:

db2 connect to <dbname>
db2 grant dbadm on database to <userid>

However, his default Windows user ID didn’t have secadm or sysadm authority, so that failed with an error.  I had him impersonate the one that did:

runas /user:<adminuser> db2cmd

Repeating the grant command from this new command shell did the trick.  It could have also been done with:

db2 connect to <dbname> user <adminuser> using <adminpassword>

And so it goes.  No matter how refined security policies become, they can usually be circumvented with a little impersonation.  For example, think of how many times we quickly and mindlessly sudo under Ubuntu.  In this case, impersonation was a fast route to giving a developer the access he should have had by default anyway.  Today’s technology cannot solve the impersonation problem, but sometimes we consider that more a feature than a bug.

Academic Pursuits

Like any obedient grad student, I wrote a lot of papers while recently working on my Master’s degree.  While most were admittedly specialized and pedantic (and probably read like they were written by SCIgen), some may accidentally have some real-world relevance.  Just last week, I handed out my XTEA paper to a co-worker who was foolish enough to ask.

At the risk that others might be interested, I posted a couple of the less obscure ones where I was the sole author; they are:

The Tiny Encryption Algorithm (TEA)
The Tiny Encryption Algorithm (TEA) was designed by Wheeler and Needham to be indeed “tiny” (small and simple), yet fast and cryptographically strong. In my research and experiments, I sought to gain firsthand experience to assess its relative simplicity, performance, and effectiveness. This paper reports my findings, affirms the inventors’ claims, identifies problems with incorrect implementations and cryptanalysis, and recommends some solutions. (A sketch of the cipher appears below.)

Practical Wireless Network Security
Security measures are available to protect data communication over wireless networks in general, and IEEE 802.11 (Wi-Fi) in particular. Unfortunately, these measures are not widely used, and many of them are easily circumvented. While Wi-Fi security risks are often reported in the technical media, these are largely ignored in practice. This report explores reasons why.

Click on a title to access a PDF.
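To show just how “tiny” TEA is, here is the complete encrypt routine from the Wheeler/Needham design, transcribed from their C reference code to Java (Java’s int wraps modulo 2^32 like uint32_t, and >>> supplies the unsigned right shift; the test values in main are arbitrary):

public class Tea {
    private static final int DELTA = 0x9E3779B9; // key schedule constant, derived from the golden ratio

    // v holds the 64-bit block as two ints; k holds the 128-bit key as four ints.
    // Encrypts the block in place over 32 cycles (64 Feistel rounds).
    static void encrypt(int[] v, int[] k) {
        int v0 = v[0], v1 = v[1], sum = 0;
        for (int cycle = 0; cycle < 32; cycle++) {
            sum += DELTA;
            v0 += ((v1 << 4) + k[0]) ^ (v1 + sum) ^ ((v1 >>> 5) + k[1]);
            v1 += ((v0 << 4) + k[2]) ^ (v0 + sum) ^ ((v0 >>> 5) + k[3]);
        }
        v[0] = v0; v[1] = v1;
    }

    public static void main(String[] args) {
        int[] block = {0x01234567, 0x89ABCDEF};
        int[] key = {0xDEADBEEF, 0x00C0FFEE, 0x0BADF00D, 0x12345678};
        encrypt(block, key);
        System.out.printf("%08X %08X%n", block[0], block[1]);
    }
}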

The Hacker Crackdown

It seems like only yesterday, but it’s been 20 years now since a simple bug in a C program brought down much of AT&T’s long-distance network and brought on a national phreak hunt.  It was January 15, 1990: a day I’ll never forget because it was my 25th birthday, and the outage made for a rough work day.  But, in retrospect, it offers a great story, full of important lessons.

The first lesson was realized quickly and is perhaps summed up by Occam’s razor: the simplest explanation is often the most likely one.  This outage wasn’t the result of a mass conspiracy by phone phreaks, but of recent code changes: Signaling System 7 (SS7) upgrades to 4ESS switches.

There are obvious lessons to be learned about testing.  Automated unit testing was largely unknown back then, and it could be argued that this outage wouldn’t have happened had today’s unit testing best practices been in place.

This taught us a lot about code complexity and factoring, since a break statement could be so easily misaligned in such a long, cumbersome function.  The subsequent 1991 outage caused by a misplaced curly brace in the same system provided yet another reminder.
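The trap looks roughly like this; a hypothetical Java sketch (the actual code was C inside the 4ESS software, and all names here are invented), where break behaves exactly as it did in the original C:

public class MisplacedBreak {
    static final int RING = 1;

    static void handle(int msgType, boolean busy, boolean hasVoicemail) {
        switch (msgType) {
            case RING:
                if (busy) {
                    if (hasVoicemail) {
                        System.out.println("route to voicemail");
                        break; // reads as if it exits the if, but it actually exits the switch
                    }
                    System.out.println("queue caller");
                }
                System.out.println("update line status"); // silently skipped whenever the break fires
                break;
            default:
                System.out.println("log and ignore");
        }
    }

    public static void main(String[] args) {
        handle(RING, true, true); // prints only "route to voicemail"; the status update never runs
    }
}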

Finally, this massive chain reaction of failures reminded us of the systemic risk from so many interconnected systems.  That situation hasn’t improved; rather, it’s much worse today.  We don’t call it the internet and the web for nothing.

I was reminded of 1990 when Google recently deployed Buzz: yet another player in our tangled web of feeds and aggregators.  These things are convenient; for example, I rarely log in to Plaxo, but it looks as though I’m active there because I have it update automatically from this blog and other feeds.  It makes me wonder if someone could set off a feedback loop with one of these chains.  Aggregators can easily prevent this (some simple pattern matching will do), but there may be someone out there trying to make it happen.  After all, there’s probably a misplaced curly brace in all that Java code.

Bruce Sterling’s book, The Hacker Crackdown, provides an interesting account of the 1990 failure, and he has put it in the public domain on the MIT web site.  If you want a quick partial read, I recommend Part I.

Minum Data Redaction

WriteStreams.com is pleased to announce its new Minum Data Redaction (MDR) product.  MDR provides physical data security for sensitive bank and credit card information, complementing the electronic data security covered by IBM’s just-announced Optim Data Redaction product.  Used together, these products can help you achieve PCI DSS compliance with little or no coding.

IBM’s new Optim Data Redaction automatically removes account data from documents and forms.  You can get that wondrous XXXXXXXXXXX1234 credit card number formatting with little or no effort on your part (apart from buying software, of course).
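If you’d rather not buy software, the masking itself is the easy part; a minimal sketch with an invented helper (certainly not Optim’s API):

public class PanMasker {
    // Masks all but the last four digits; assumes a full-length card number.
    static String maskPan(String pan) {
        String digits = pan.replaceAll("\\D", ""); // strip spaces and dashes
        StringBuilder masked = new StringBuilder();
        for (int i = 0; i < digits.length() - 4; i++) {
            masked.append('X');
        }
        return masked.append(digits.substring(digits.length() - 4)).toString();
    }

    public static void main(String[] args) {
        System.out.println(maskPan("4111 1111 1111 1234")); // prints XXXXXXXXXXXX1234
    }
}

The hard part, of course, is finding every report, receipt, and screen that still prints the whole thing.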

Our new Minum Data Redaction product extends account number protection to the physical world, protecting the bank cards you carry.  Its super-strong rear adhesive and front opaque covering ensures that your sensitive credit card information stays protected.  It comes in a variety of colors (including duct silver and black), and our Premium version provides extra thickness to cover embossing.

But seriously now, we go to great lengths to protect electronic card information by encrypting it in stored files (56-bit DES isn’t good enough); redacting it on printed receipts, reports, and statements; and setting disclosure requirements that publicly embarrass companies who slip up.  Yet our simple payment process requires that we hand all this information over to any clerk or waiter, who usually goes off with it for a while: certainly long enough to copy it all down.  PCI DSS offers the classic false sense of security.

I was recently a victim of fraud against my Visa card.  A series of small (mostly $5) fraudulent charges hit my account over several days until I closed the account.  From what I learned, the charges were only authorized by account number and expiration date; there was no zip code verification.  I don’t know how the perps got my credit card number, but I doubt they grabbed data from a financial institution in the dark of night, nor devoted the $250,000 and 56 hours required to run an EFF DES crack against it.  It probably came from a clerk or waiter who handled my card.  My cards, like everyone else’s, have account number, expiration date, and CVV printed right on them.  Zip code isn’t there, but anyone who wants it can just ask to see my driver’s license for verification.  It’s a gaping hole.

Until credit cards gain better physical security, there is no “silver bullet.”  But banks and card companies could enlist help from their own customers.  For example, let me specify which types of charges I would allow/authorize.  It would spare consumers the hassles of disputing charges, and would save issuers the dispute processing fees and write-offs.