InfoCard – how is this different (in function, not in form) from MS Wallet, Passport and/or any of the dozens of "form filler" applications?

Microsoft to flash Windows ID cards

I’m trying to understand what the heck “new” is being offered here…

  1. InfoCard will install a secure “store” (which can contain identity and payment info, among other things) on your PC.
  2. I presume that the whole InfoCard “store” will be password-protected (as is de rigueur for consumers, even in these heady days of security), though perhaps they’ll offer the option of integrating things like the MS Fingerprint Reader, a smart card or some form of USB smart device [for those few consumers who actually care enough about security to bother with all the hassle this’d bring].
  3. Presumably you’ll be asked to input each of your credentials and your credit card information into the software.
  4. When a user then visits an online service that asks for InfoCard-compatible versions of either (a) one of the authentication credentials the user’s stored in their InfoCard “store” and/or (b) payment information, the user will be prompted for their InfoCard “master password” (hopefully every time their InfoCard store is about to give out this information). The InfoCard store will then use whatever Web Services communications protocols apply to securely communicate the consumer’s information to the site (see the sketch after this list).
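
If I had to guess at the shape of that exchange, it would look something like the sketch below (Python, for brevity). Every name in it is hypothetical – this is not the actual InfoCard API, just my reading of the four steps above:

```python
# Hypothetical sketch of the InfoCard release flow described above.
# None of these names are the real InfoCard API; this is just the
# shape of the exchange as I understand it.
import getpass

STORE = {
    "credentials": {"example.com": {"username": "mike", "password": "s3cret"}},
    "payment": {"card_number": "4111-1111-1111-1111", "expiry": "01/07"},
}

def unlock_store(master_password: str) -> dict:
    """Stand-in for decrypting the protected local store."""
    if master_password != "correct-horse":  # placeholder check only
        raise PermissionError("bad master password")
    return STORE

def release_to_site(site: str, requested: list) -> dict:
    # Prompt for the master password on *every* release -- the
    # behaviour I'm hoping InfoCard enforces.
    store = unlock_store(getpass.getpass(f"Master password for {site}: "))
    released = {k: store[k] for k in requested if k in store}
    # ...a real client would now wrap `released` in whatever WS-*
    # message format the site negotiated, over a secure channel.
    return released

print(release_to_site("example.com", ["payment"]))
```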

What is so revolutionary about this, you might ask? Compare this to the typical “form filler application”:

  1. The form filler app installs a secure “store” (to contain identity and payment info) on your PC.
  2. The form filler app asks you to establish a “master password” to protect anything it adds to its “store”.
  3. You’re asked by the form filling application whether you wish to add any creds it’s just observed you type (into a web form) into its secure “store”. You can also pre-fill identity information (one or more sets) into the form filler app, to be auto-filled later into web forms.
  4. When a user then visits an online service that asks for either (a) one of the authentication credentials the user added to the app’s “store” and/or (b) payment information, the user is prompted for their “master password” [which can then be cached for a specified period of time]. The browser then submits this information to the web site using whatever communication protocols were established (usually, SSL/TLS). [Again, sketched below.]
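
The same four steps, reduced to a comparable sketch. The key-derivation step (PBKDF2 stretching the master password into a symmetric key) is roughly how these apps really protect their stores; the store layout and the rest are invented, with the third-party `cryptography` package doing the heavy lifting:

```python
# Sketch of the generic form-filler flow. The key derivation is
# roughly real; the store layout is invented.
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

SALT = os.urandom(16)  # stored alongside the encrypted store

def store_key(master_password: bytes) -> Fernet:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=SALT, iterations=480_000)
    return Fernet(base64.urlsafe_b64encode(kdf.derive(master_password)))

# Steps 2 & 3: protect an observed credential with the master password.
f = store_key(b"correct-horse")
token = f.encrypt(b"example.com|mike|s3cret")

# Step 4: unlock on demand (a real app would cache the derived key for
# a configurable period rather than re-prompting every time)...
site, user, pw = f.decrypt(token).decode().split("|")
# ...and the browser submits user/pw to the site over SSL/TLS.
```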

So as near as I can tell, the only fundamental difference in the InfoCard user experience for web surfing boils down to the Web Services “format” used to communicate this sensitive information from consumer to web site.

Colour me skeptical. Sure, in a few years most web sites will have one or more Web Services communications & security engines built into them, and by that time the pressure on “current” approaches to communicating this information consistently & securely will be heightened. However, I guess I’m not clear on what InfoCard buys the consumer (who presumably will have to adopt this technology in droves) *today* over and above the current “good enough” approaches for storing & communicating this information – and until there’s a clear consumer benefit, watch people not bother in droves. [I’m sure *I’ll* download it as soon as it hits the web and play around with it, but I’m still a bit “sick in the head” with my love for trying the new stuff out…]

As well, I’m not sure what will compel the web services application vendors, and their customers (the web application programmers and architects) to adopt the Microsoft “InfoCard” approach over the growing number of identity-integration technologies that are becoming available. [Or maybe under the hood, “InfoCard” is just a friendly name for an implementation of a WS-*-compliant client, and any server that speaks WS-* will be immediately capable of interpreting the output from the InfoCard client…]
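
If that guess is right, the client’s output would presumably be something like a WS-Trust RequestSecurityToken exchange. Purely as a schematic (the namespace and URIs below are from the current WS-Trust draft; I have no idea whether the actual InfoCard wire format matches):

```python
# Schematic WS-Trust RequestSecurityToken body -- the sort of message
# a WS-*-compliant client might emit. Not the confirmed InfoCard format.
RST = """<wst:RequestSecurityToken
    xmlns:wst="http://schemas.xmlsoap.org/ws/2005/02/trust">
  <wst:RequestType>
    http://schemas.xmlsoap.org/ws/2005/02/trust/Issue
  </wst:RequestType>
  <wst:TokenType>urn:oasis:names:tc:SAML:1.0:assertion</wst:TokenType>
</wst:RequestSecurityToken>"""
print(RST)
```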

You know what would be a brilliant integration? If this were wired in tightly with the MSN Toolbar, so that its form-filling feature could adapt over time to add these capabilities without requiring the MSN consumers to adopt yet *another* piece of technology, at ever-decreasing return-on-effort.

I’m also intrigued by the promised integration of X.509 creds into the InfoCard client – will this be natively interoperable (integrated) with the current CAPI store of X.509 creds already available in Windows? Will this make X.509 more accessible to consumers, without watering down the benefits of “strong authN” to the point where it’s really no better than a password or an email address? And, are they considering integration with RMS and their XrML-based identity assertions (which are the more logical entry point for Web Services, if only because an RMS “account certificate” already uses XrML to format the identity “message”)?
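
For reference, the CAPI store is already reachable programmatically; as one illustration, Python’s stdlib `ssl` module can enumerate the Windows “MY” certificate store (a Windows-only call), so “native interop” would presumably mean the InfoCard client reading this same store:

```python
# Enumerate the Windows "MY" (personal) certificate store via CAPI.
# Windows-only: ssl.enum_certificates doesn't exist on other platforms.
import ssl

for cert_der, encoding, trust in ssl.enum_certificates("MY"):
    pem = ssl.DER_cert_to_PEM_cert(cert_der)
    print(encoding, trust)        # e.g. "x509_asn" and a set of usage OIDs
    print(pem.splitlines()[0])    # -----BEGIN CERTIFICATE-----
```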

There’s a bit more detail (including mention of the “WinFX” dependency of the InfoCard client) here.

Bottom line: This is something to keep an eye on for your B2C projects, but I’ll continue to seek out additional authentication & identity technologies for the enterprise space until I get a stronger understanding of this…

Data security as a superior approach to perimeter, network, host or application security?

I’ve always believed that the network is a poor substitute for protecting a host, its apps or (ultimately) the Data made available via the network, server and its apps. To me, the other layers of perimeter, network, host & applications are necessary means to the end, which is to access, manipulate and sometimes even add to that Data. Similarly, security enforced at the perimeter, network, host or application layers, while necessary, is not sufficient when what you’re really trying to accomplish is to secure the Data.

Disclosure: I started my IT career in the desktop arena, and I’ve grown into the server and enterprise data arena over time. It’s been a great journey, but I’m probably still biased towards the PC at heart.

Certainly your ability to protect each set of data scales much better the further out towards the perimeter you enforce the protection. However, in my experience, you sacrifice the ability to provide very specific protections for the data inside the perimeter. For example (a toy sketch after this list makes the contrast concrete):

  • the firewall rule which blocks or allows 80/tcp access doesn’t provide an ability to authorize access for specific groups of users (only for IP addresses or subnets)
  • the host security that allows/denies a user the right to log on to the host (e.g. “Access this computer from the network”) doesn’t give you the ability to grant the user access to one set of data on the host while denying access to all the other sets of data on the host
  • the application protections (e.g. SQL per-user/group roles assignment, Samba per-folder ACLs, email per-server/mailstore ACLs) don’t allow you to protect individual pieces of data accessed through the application that “front-ends” the data, though often the application can also provide built-in “bulk data protection” mechanisms. However, if another application were used to access the same on-disk data, it is often possible for the application-level protections to be completely bypassed.
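
Here is that toy sketch. No real product APIs, just the question each layer is capable of answering; only the last one can pair a specific user with a specific data set:

```python
# What each layer can and can't express, in miniature.
from ipaddress import ip_address, ip_network

ALLOWED_NET = ip_network("10.0.0.0/24")      # firewall layer: IPs only
HOST_LOGON  = {"alice", "bob"}               # host layer: who may log on
DATA_ACL    = {"/data/payroll": {"alice"}}   # data layer: who may read what

def firewall_allows(src_ip: str) -> bool:
    # Can say *where from*, but knows nothing of *who* or *which data*.
    return ip_address(src_ip) in ALLOWED_NET

def host_allows(user: str) -> bool:
    # Can say *who may connect*, but not *which data* they may touch.
    return user in HOST_LOGON

def data_allows(user: str, path: str) -> bool:
    # Only this layer pairs a specific user with a specific data set.
    return user in DATA_ACL.get(path, set())

# bob gets past the firewall and the host, but not the payroll data:
assert firewall_allows("10.0.0.5") and host_allows("bob")
assert not data_allows("bob", "/data/payroll")
```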

In my opinion, the defining characteristic of data-level protection is that it only allows the authorized user(s) access to the authorized data set(s), no matter what application or OS is used to access the data.

In the extreme, this definition would only include data that has been encrypted such that only the key that is in the authorized user’s possession can unlock the data (e.g. PGP, S/MIME).
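
A minimal, PGP-style illustration of that extreme (Python with the third-party `cryptography` package; the data and names are invented): the data is sealed with a fresh symmetric key, and that key is wrapped to the recipient’s public key, so no application or OS in the middle can read it without the private key.

```python
# PGP-style hybrid encryption in miniature: only the holder of the
# private key can unlock the data, whatever app or OS touches the file.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Encrypt the data with a fresh symmetric key...
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"quarterly payroll figures")

# ...and wrap that key to the recipient's public key (as PGP/S/MIME do).
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = recipient_key.public_key().encrypt(data_key, oaep)

# Only the private-key holder can reverse the wrap and read the data:
plaintext = Fernet(recipient_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
assert plaintext == b"quarterly payroll figures"
```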

Examine for a moment what happens when you rely only on application-level security:

  • Windows file share-level permissions can be bypassed by users who can access the admin shares, a Remote Desktop session or an FTP session
  • SQL roles-based access can be bypassed by anyone who can mount the .mdf file as a new database instance, or who has one of the privileged server or database roles such as sysadmin or db_owner.

In each example, the application protections that were meant to protect the data itself have not been changed, but the data could still be accessed by the user (attacker) even though those protections were configured to block the user.
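
The SQL example is easy to reproduce in miniature. Substitute SQLite for SQL Server, and “mounting the .mdf as a new database instance” becomes nothing more than opening the same file with a different client:

```python
# Miniature version of the .mdf bypass: the "application" enforces a
# role check, but nothing stops another program opening the same file.
import sqlite3

con = sqlite3.connect("app.db")
con.execute("CREATE TABLE IF NOT EXISTS salaries (name TEXT, amount INT)")
con.execute("INSERT INTO salaries VALUES ('alice', 100000)")
con.commit()
con.close()

ROLES = {"bob": "readonly_public"}  # the app's own access model

def app_query(user: str) -> list:
    if ROLES.get(user) != "hr":     # application-level protection
        raise PermissionError(f"{user} may not read salaries")
    with sqlite3.connect("app.db") as c:
        return c.execute("SELECT * FROM salaries").fetchall()

try:
    app_query("bob")                # the app blocks bob, as configured
except PermissionError as e:
    print("app says:", e)

rogue = sqlite3.connect("app.db")   # any other client, same on-disk file
print(rogue.execute("SELECT * FROM salaries").fetchall())  # bypassed
```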

The principle of “defense in depth” implies that the more layers of defense, the better the asset is protected against a failure at any one layer. However, in my experience, a valued asset will “grow” new methods of access over its lifetime – e.g. a mainframe application may start out with access only from terminal emulators; then PC access is added via Communications Server & PComm; then later a Web Services interface is added to the mainframe for direct browser-based access.

The mainframe application may not have changed in the slightest throughout all this, but the “DiD” model that was suitable at first may no longer be appropriate for protecting against the threats that are now possible via the new interfaces.

Data-level protection may not strictly require encryption to prevent access, but encryption is the only significant data-level protection I can think of, and it seems to be the protection to which most people turn. You can also add DRM (or ERM, RMS, etc.), but if it doesn’t also include encryption, to me it’s pretty much ineffective at actually protecting content you’ve given to someone else from being exploited beyond the rights you wished to assign.