Occupied Neurons, early July 2016: security edition

Who are you, really: Safer and more convenient sign-in on the web – Google I/O 2016

The Google Chrome team shared some helpful tips for web developers on making it as easy as possible for users to sign in securely to your web site:

  • Simple (if annoying that we still have to use these) attributes to add to your forms to assist password manager apps
  • A Credential Management API that (though cryptically explained) smooths out some of the steps in retrieving creds from the Chrome Credential Manager
  • The same API also addresses some of the security threats (plaintext networks, JavaScript-in-the-middle, XSS)
  • A discussion of the FIDO UAF and U2F specs – where the U2F “security key” signs the server’s secondary challenge with a private key whose public key is already enrolled with the online identity the server is authenticating

The U2F “security key” USB dongle idea is cute and useful – it requires the user’s interaction with the button (so it can’t be silently scraped by malware), uses ECDSA signatures to provide strong proof of possession, and can’t be duplicated. But as with any physical “token”, it can be lost, and it requires a physical interface (e.g. USB) that not all devices have. Smart cards and RSA tokens (the one-time-key generators) never entirely caught on either, despite their laudable security properties.
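The challenge-signing idea at the core of U2F can be sketched in a few lines. Here's a toy Python model using textbook RSA with tiny primes (real U2F tokens use ECDSA over P-256, and nothing here is production crypto): the server keeps the enrolled public key, the token signs a fresh challenge, and the server verifies.

```python
# Toy sketch of the U2F flow: server sends a random challenge, the token
# signs it with its private key, the server verifies with the enrolled
# public key. Textbook RSA with tiny primes stands in for real ECDSA.
import hashlib

# Tiny RSA keypair: n = p*q, e public, d private
p, q = 61, 53
n = p * q       # 3233
e = 17          # public exponent (the "enrolled" key)
d = 413         # private exponent: 17 * 413 = 7021 ≡ 1 (mod lcm(60, 52) = 780)

def sign(challenge: bytes, priv: int = d) -> int:
    """'Token' side: sign the hashed challenge with the private key."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(digest, priv, n)

def verify(challenge: bytes, signature: int, pub: int = e) -> bool:
    """'Server' side: check the signature against the enrolled public key."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, pub, n) == digest

challenge = b"server-nonce-1234"
sig = sign(challenge)
assert verify(challenge, sig)               # enrolled public key accepts
assert not verify(challenge, (sig + 1) % n) # a tampered signature fails
```

Since only the token holds `d`, a server that sees a valid signature over its own fresh challenge has strong proof of possession, which is the property the talk leans on.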

The Credential Manager API discussion reminds me of the Internet Explorer echo chamber from 10-15 years ago – Microsoft browser developers adding in all these proprietary hooks because they couldn’t imagine anyone *not* fully embracing IE as the one and only browser they would use everywhere. Disturbing to see Google slip into that same lazy arrogance – assuming that web developers will assume that their users will (a) always use Chrome and (b) be using Chrome’s Credential Manager (not an external password manager app) to store passwords.

Disappointing navel-gazing for the most part.

Google’s password-free logins may arrive on Android apps by year-end

Project Abacus creates a “Trust Score API” – an interesting concept that intends to supplant passwords and other explicit authentication demands by taking ambient readings from the device’s sensors and the user’s interaction patterns, to determine how likely it is that the current holder/user is the identity being asserted/authenticated.

This is certainly more interesting technology, if only because it allows any organization/entity to set its own tolerance/threshold per usage, requiring different “Trust Scores” depending on how valuable the data/API/interaction the user is attempting happens to be. A simple lookup of a bank balance could require a lower score than a transfer of money out of the account, for example.

The only trick to this is the user must allow Google to continuously measure All The Thingz from the device – listen on the microphone, watch all typing, observe all location data, see what’s in front of the camera lens. Etc. Etc. Etc.

If launched today, I suspect this would trip most users’ “freak-out” instinct and fail, so kudos to Google for taking it slow. They’re going to need to shore up the reputation of Android phones – their inscrutably cryptic (if comprehensive) permissions model, and how well apps are sandboxed – before they’ll ever earn widespread trust for Google to watch everything you’re doing.


Looks like Microsoft is incorporating “widely-used hacked passwords” into the set of password rules that Active Directory can enforce against users trying to establish a weak password. Hopefully this’ll be less frustrating than the “complex passwords” rules that AD and some of Microsoft’s more zealous customers like to enforce – rules that make it nigh-impossible to know what they are, let alone give a sentient human a chance of choosing a password you might want to type 20-50 times a day. [Not that I have any PTSD from that…]

Unfortunately, they do a piss-poor job of explaining how “Smart Password Lockout” works. I’m going to take a guess at how it works, and hopefully someday it’ll be spelled out. It appears they’ve added some extra smarts to the AD password authentication routine that runs server-side – it can effectively determine whether a bad password attempt came from an already-known device or not. This implies that AD is keeping a rolling cache of “familiar environments” – likely one that ages out older records (e.g. flushing anything older than 30 days). What’s unclear is whether they’re recording remote IP addresses, remote computer names/identities, remote IP subnets, or some new “cookie”-like data that wasn’t traditionally sent with the authentication stream.

If this is based on Kerberos/SAML exchanges, then it’s quite possible to capture the remote identity of the computer from which the exchange occurred (at least for machines that are part of the Active Directory domain). However, if this is meant as a more general-purpose mitigation for accounts used in Internet (not Active Directory domain) settings, then unless Active Directory has added cookie-tracking capabilities it didn’t have a decade ago, I’d imagine they’re operating strictly on the remote IP address enveloping any authentication request (Kerberos, NTLM, Basic, Digest).
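A toy Python sketch of the guess above, assuming the cache is keyed on remote IP address and flushes records older than 30 days (both assumptions, as noted):

```python
# A guess at the shape of a "familiar environments" cache: record where
# successful sign-ins came from, age out stale entries, and only count
# failed attempts toward lockout when they come from unfamiliar places.
import time

MAX_AGE = 30 * 24 * 3600  # flush anything older than ~30 days (assumed)

class FamiliarEnvironments:
    def __init__(self):
        self._seen = {}  # remote address -> time of last successful auth

    def record_success(self, addr: str, now: float) -> None:
        self._seen[addr] = now

    def is_familiar(self, addr: str, now: float) -> bool:
        last = self._seen.get(addr)
        if last is None or now - last > MAX_AGE:
            self._seen.pop(addr, None)  # age out the stale record
            return False
        return True

env = FamiliarEnvironments()
t0 = time.time()
env.record_success("203.0.113.7", t0)                 # my usual laptop
assert env.is_familiar("203.0.113.7", t0 + 3600)      # don't lock me out
assert not env.is_familiar("198.51.100.9", t0)        # attacker: counts toward lockout
assert not env.is_familiar("203.0.113.7", t0 + 40 * 24 * 3600)  # aged out
```

The big win described below falls out of one branch: a failed attempt from a familiar address leaves the legitimate user able to keep trying, while unfamiliar addresses accumulate toward lockout.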

Still, it seems a worthwhile effort – if it allows AD to lock out attackers trying to brute-force my account from locations where no successful authentication has taken place, AND continues to let me proceed past the “account lockout” at the same time, this is a big win for end users, especially where AD is used in Internet-facing settings like Azure.

Can non-Microsoft ERM (electronic rights management) be integrated into MOSS 2007?

Fascinating question: can an organization that has deployed MOSS 2007 plug another ERM/IRM (Electronic Rights Management) technology into the MOSS back-end, so that documents downloaded from MOSS are automatically protected with that non-Microsoft ERM technology?

MOSS 2007 (aka SharePoint 2007) provides integration with the Microsoft Information Rights Management (IRM) technology – any documents that are uploaded to an “IRM-enabled” Document Library will automatically be (encrypted and) protected with a specific IRM policy whenever that document is downloaded again.  This depends both on the Microsoft implementation of IRM (RMS) policies (known as “Information Management Policy” in the MOSS SDK) as well as the inclusion of the Microsoft IRM “lockbox” (security processor) library on the MOSS server farm.  As I understand it, the procedure is basically:

  1. MOSS receives the download request from a remote client
  2. MOSS looks up the information management policy that is associated with the document’s List or Content Type (depending where the policy is applied)
  3. MOSS calls an instance of the IRM security processor (installed with the RMS Client on the front-end servers) to (a) encrypt the document, (b) generate the IRM license based on the associated policy, and (c) encrypt the content-encryption key with the appropriate RM server’s public key.
  4. MOSS delivers the protected document to the remote client – otherwise the same way that it would deliver an unprotected document.
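The steps above can be caricatured in a few lines of Python; XOR stands in for the real RMS encryption, and every name, key, and policy field here is invented for illustration:

```python
# Toy walk-through of the four-step flow: look up the policy bound to the
# document's list, "encrypt" the content, build a license from the policy,
# and wrap the content key for the RM server. XOR stands in for real crypto.
POLICIES = {"Contracts": {"print": False, "expire_days": 30}}  # per-list policy

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher: XOR with a repeating key (self-inverse)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def protect_on_download(doc: bytes, list_name: str, server_pub_key: bytes):
    policy = POLICIES[list_name]                          # step 2: policy lookup
    content_key = b"\x5a\x13\x77\x21"                     # per-document key
    ciphertext = xor_bytes(doc, content_key)              # step 3a: encrypt doc
    license_ = {"rights": policy}                         # step 3b: build license
    wrapped_key = xor_bytes(content_key, server_pub_key)  # step 3c: wrap the key
    return ciphertext, license_, wrapped_key              # step 4: deliver

ct, lic, wk = protect_on_download(b"Q3 contract terms", "Contracts", b"\x99")
assert ct != b"Q3 contract terms"                     # content leaves encrypted
assert xor_bytes(ct, b"\x5a\x13\x77\x21") == b"Q3 contract terms"
assert lic["rights"]["print"] is False                # policy rides along
```

The point of the sketch is the division of labor: MOSS owns the policy lookup and delivery, while the security processor owns the cryptography, which is exactly the seam a third party would need to slide into.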

Guessing How Third-Party ERM Could Integrate Into MOSS

So theoretically, for a third-party ERM solution to properly intercept the steps in this sequence:

  • the MOSS server would have to expose a “pluggable” method/API for the protection step
  • the MOSS server would have to support the ability to “plug” alternative ERM policy services in place of the native Microsoft IRM policy services
  • the MOSS server would have to support the ability to “plug” an alternative security processor in place of the native Microsoft RM security processor
  • the ERM solution would have to implement the pluggable responder for the “policy lookup” service, as well as a replacement UI and business logic framework for the server-side ERM policy “creation/assignment” capability that MOSS provides for IRM
  • the ERM solution would have to support a thread-safe, multi-threaded, rock-solid-stable security processor that could run in a potentially high-volume server environment
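That theorized pluggable model might look something like this Python sketch; the interface, class names, and string prefixes are all invented for illustration:

```python
# Sketch of the theorized "pluggable" shape: an abstract protector interface
# the server calls, with the Microsoft implementation as the default and
# room to swap in a third-party one. All names here are invented.
from abc import ABC, abstractmethod

class RightsProtector(ABC):
    @abstractmethod
    def lookup_policy(self, library: str) -> dict: ...
    @abstractmethod
    def protect(self, doc: bytes, policy: dict) -> bytes: ...

class DefaultIrmProtector(RightsProtector):
    def lookup_policy(self, library: str) -> dict:
        return {"library": library, "engine": "ms-irm"}
    def protect(self, doc: bytes, policy: dict) -> bytes:
        return b"MSIRM|" + doc   # placeholder for the RMS lockbox call

class ThirdPartyProtector(RightsProtector):
    def lookup_policy(self, library: str) -> dict:
        return {"library": library, "engine": "acme-erm"}
    def protect(self, doc: bytes, policy: dict) -> bytes:
        return b"ACME|" + doc    # placeholder for a vendor security processor

def download(doc: bytes, library: str, protector: RightsProtector) -> bytes:
    """The server's download path only ever talks to the interface."""
    policy = protector.lookup_policy(library)
    return protector.protect(doc, policy)

assert download(b"memo", "Contracts", DefaultIrmProtector()).startswith(b"MSIRM|")
assert download(b"memo", "Contracts", ThirdPartyProtector()).startswith(b"ACME|")
```

As the "Aha!" section below confirms, the real seam Microsoft shipped (custom IRM protectors) differs in the details, but the interface-swap idea is the same.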

Given how much effort Microsoft has put in over the past couple of years (not without external incentives, of course) to make available and document the means for ISVs to interoperate with Microsoft client and server technologies, I figured there must be some “open protocol” documentation describing how an ISV would create compatible ERM components to plug into the appropriate locations in a MOSS environment.

I scoured the SharePoint protocol specifications, but there were no specific protocol documents, nor any mention of “information management” in any of the overview documents.

There are occasional references in the Microsoft Forums and elsewhere that hint at details that might be relevant to a third-party ERM plugin for MOSS, but I can’t tell whether they’re actually related or whether I’m just chasing spectres.

Aha!  It Appears the Answer is “Yes”

(I thought about erasing and rewriting the above, but there’s probably someone somewhere who thinks the same way I do about this, so I’ll leave it and just share my new insight below.)

As always, I really should’ve started with the WSS 3.0 SDK and then branched out into the MOSS SDK and other far-off lands.

It turns out that the WSS SDK had the “secret” locked up in a page entitled “Custom IRM Protectors” (not to be confused with the forum post linked above).  My theory above didn’t guess everything right, but what it most closely resembles is the “Autonomous Protector”:

Create an autonomous protector if you want the protector to have total control over how the protected files are rights-managed. The autonomous protector has full control over the rights-management process, and can employ any rights-management platform. Unlike the process with an integrated protector, when Windows SharePoint Services invokes an autonomous protector, it passes the specific rights that the user has to the document. Based upon these rights, an autonomous protector is responsible for generating keys for the document and creating rights-managed metadata in the correct format.

The autonomous protector and the client application must use the same rights-management platform.

So for a third-party ERM vendor to support an integrated experience in MOSS, while still using its non-Microsoft ERM client (i.e. not the Microsoft RMS Client), it would have to:

  • provide a COM component on each MOSS web server that implements the I_IrmProtector interface and an I_IrmPolicyInfo_Class object (analogous to my theorized “alternative ERM policy service”).
  • provide a rights management platform that protects (at the server) in a way that’s compatible with protections enforced by their rights management client (e.g. an alternative security processor available either locally or remotely from each MOSS web server)
  • override the default “integrated protectors” for Microsoft Office document types, and (presumably) support the ability to protect the Microsoft Office document types with the autonomous protector(s)

If I’m reading this right, then with a server-accessible rights management platform and one or more autonomous protectors, MOSS would be able to handle the rest of the required functionality: policy storage, UI, management interfaces (business logic), etc.

Now I wonder if anyone has actually implemented this support in their ERM solution…

ComponentChk.cpp – I resolved my first C++ compiler "fatal error"!

Yes, for those of you with even more grey in your beard/hair than I, this is probably a “meh” event.  However, for me, who’s always looked at C++ code and wondered how the H*** anyone could ever make sense of it, this is a big event.  [Patting myself on the back.]


I’m working on finishing up the Setup bootstrapper for my VSTO add-in Word2MediaWiki.NET.  I’m targeting Word 2003 and Word 2007 (despite Microsoft’s best efforts/worst neglect to the contrary), and I’m trying to achieve what many have done before, but which seemed impossible for me: allow the Setup.exe to detect which version of Word is installed, and then conditionally install the prerequisite software (i.e. the so-called Office PIAs) for whichever version of Word is discovered.

Problem 1: most folks in the VSTO space seem to think that, despite the fact that a version of the .NET framework is required to support VSTO, the Setup.exe that goes with your VSTO add-in install package should use UNmanaged code to detect whether the pre-requisites are installed.

Problem 2: I know squat about writing unmanaged code.

Problem 3: Microsoft’s VSTO & Office teams left it up as an exercise to the VSTO app developer to figure out how to assemble the amazing number of parts necessary to make a VSTO install work transparently to the end user.

Problem 4: There’ve been many articles written posthumously (I mean, long after VSTO v2 was released) on various aspects of automating VSTO add-in installation, but none of them addressed the inevitable scenario where the app developer needs to support both Word 2003 and Word 2007.  [Don’t even *think* about pre-2003 versions of Office — you’d have to be clinically insane.]

Manna from heaven

One ridiculously brave soul, Mark Hogan, actually took the time to figure out how to build such a conditional prerequisite-detection app in C++ – and he was so overcome with glee at beating Microsoft at something even they were too scared to do that he published the full set of source code and XML configuration files for the entire world to use.

Now, masochist that I am, I took it upon myself to try to integrate the code that Mark Hogan published into the existing code that I’d already slotted into my Office PIA bootstrapper files.  However, I didn’t anticipate the foreseeable result that, because I’m coding this stuff in my spare time, and usually late at night, I would (a) not finish the integration work in one sitting, (b) forget what I’d originally set out to do and (c) forget to integrate any of the critical code that actually performed the conditional logic.

Mike chases his tail

Miraculously, I was left with a .CPP file that compiled, that appeared significantly different from the original file I started from, and that threw an error code that set off a spectacular wild goose chase:

“Unable to satisfy all prerequisites for Word2MediaWikiDotNET.  Setup cannot continue until all system components have been successfully installed.”


“Prerequisite check for system component Microsoft Office 2003/2007 Primary Interop Assemblies (Word Only) failed.

See the setup log file located at ‘C:\DOCUME~1\msmithlo\LOCALS~1\Temp\VSD33.tmp\install.log’ for more information.”

And in the INSTALL.LOG file was the illusory answer:

Running external check with command ‘C:\DOCUME~1\msmithlo\LOCALS~1\Temp\VSD33.tmp\Office2003PIAor2007_WordOnly\ComponentChk.exe’ and parameters ‘/delete /save /word {1EBDE4BC-9A51-4630-B541-2561FA45CCC5}’

“Process exited with code 1607”

Setting value ‘1607 {int}’ for property ‘SupportedWordVersionInstalled’

Setting value ‘1607 {int}’ for property ‘PIAsRegistered’

This led me down the path of chasing InstallShield errors, since the only Google search results that looked at all related were things like these.  After a few days of trying to figure out how the InstallShield scripting engine could be related to a Visual Studio SETUP.EXE or a custom C++ application, I finally got my brain back and tried to isolate where in the code or configuration files the problem existed.  That led me to a line of C++ code that threw ERROR_UNKNOWN_COMPONENT, which as it turns out shares the same 1607 error/exit value.
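The trap in miniature, as a Python sketch: the same numeric code carries two unrelated meanings depending on which tool emitted it. (ERROR_UNKNOWN_COMPONENT really is 1607 in winerror.h; the InstallShield text is the commonly reported engine error of the same number.)

```python
# Same exit code, two unrelated diagnoses -- context decides which is real.
INTERPRETATIONS = {
    "win32": {1607: "ERROR_UNKNOWN_COMPONENT: component ID not registered"},
    "installshield": {1607: "unable to install InstallShield Scripting Runtime"},
}

def explain(exit_code: int, source: str) -> str:
    """Map an exit code to its meaning for a given emitting tool."""
    return INTERPRETATIONS[source].get(exit_code, "unknown")

# Searching on the bare number 1607 surfaces the InstallShield meaning,
# even when the code actually came from a Win32 API failure.
assert explain(1607, "win32") != explain(1607, "installshield")
```

Which is exactly why the search results pointed at InstallShield: a bare exit code in a log carries no hint of which error-number namespace it belongs to.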

[The whole gruesome story is illustrated in the Bug I filed to myself here.]

Back to the original goal, with a new mission

Once I realized what I’d left out of the C++ code, I quickly got it to the point where I could try compiling it.  Now a new mission emerged – how to debug a C++ compiler error?

C:\>cl.exe /Oxs /MT /GS ComponentChk.cpp advapi32.lib
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 15.00.21022.08 for 80x86
Copyright (C) Microsoft Corporation.  All rights reserved.

Microsoft (R) Incremental Linker Version 9.00.21022.08
Copyright (C) Microsoft Corporation.  All rights reserved.

ComponentChk.obj : error LNK2019: unresolved external symbol __imp__WaitForInputIdle@8 referenced in function "int __cdecl MyStart(char const *,char const *)" (?MyStart@@YAHPBD0@Z)
ComponentChk.exe : fatal error LNK1120: 1 unresolved externals

That command line (cl.exe /Oxs /MT /GS ComponentChk.cpp advapi32.lib) was derived from the original article from which we were all working (or stumbling around half-blindly, it felt to me).  I just ran this without a second thought, assuming that since Mark Hogan didn’t seem to mention any modifications to it, any errors it showed must’ve been my fault.

Where, I wondered, is “__imp__WaitForInputIdle@8”?  It didn’t show up in the source code when I searched for it, nor did “int __cdecl MyStart”.

After staring at this for a while longer, I figured some substring searches would help, and after a few attempts they finally showed me that I was actually looking for this code:

WaitForInputIdle(pi.hProcess, INFINITE);

which is called in the function MyStart().  I tried some silly things first, of course, but I eventually realized that if WaitForInputIdle() didn’t exist as a named function in the source code, perhaps it existed in another library that wasn’t yet linked into the code?  A quick MSDN Library search told me this function actually lives in USER32.DLL, and it wasn’t too painful a leap of logic to try adding USER32.LIB to the compilation parameters, like so:

cl.exe /Oxs /MT /GS ComponentChk.cpp advapi32.lib user32.lib

And when I loaded up the now-successfully-compiled COMPONENTCHK.EXE into DEPENDS.EXE, it confirmed exactly what I expected to see:


Mark, many thanks to you, and my apologies for calling into question your mad C++ skillz.

Certificate enrollment in .NET managed code: a primer on gathering the right tools

[y’know, I always thought that was pronounced “pr(eye)mer”, not “pr(imm)er”…]

So here I am with Visual Studio 2008 and .NET Framework 3.5, and I can’t for the life of me find a managed class that would submit an enrollment request to a Windows Certificate Server.  I know that the System.Security namespace has had a bunch of classes available (e.g. .Cryptography.Pkcs, .Cryptography.X509Certificates) that assist in the digital certificates arena.  However, there’s nothing in the framework (at least through v3.5) that appears to map in any way to the functionality in XENROLL (the Win32 container for certificate enrollment, which included COM interfaces) or CERTENROLL (the same control renamed in Windows Vista).

The only way I knew so far to take advantage of Win32 functionality in .NET was to use p/invoke to map/marshal the unmanaged call with a .NET “wrapper”.  However, when I went looking on pinvoke.net for anything mentioning XENROLL or CERTENROLL, I came up snake eyes.

The clue that finally broke the mystery wide open for me was this article (“XENROLLLib reference for C# and VB.NET”), that seemed to indicate there was some way of getting a set of classes that exposed the XENROLL functionality.  I tried pasting some of the example code into a new Visual Studio project but that didn’t work, and when I searched the Object Browser for any default classes that contained XENROLL, XENROLLLib, CENROLL or other terms that looked like they would be base classes, I still didn’t succeed.

I don’t recall how anymore, but somewhere I connected a few dots between that article and “adding a reference” to the COM object, and then it clicked!  Damn, it seems so obvious once you know what you were supposed to do.  All I had to do was look in the Solution Explorer view, right-click on References, choose “Add Reference…”, choose the COM tab, and select the Component name “xenroll 1.0 Type Library”.

Oh, that’s right, now I remember: there was some discussion about XENROLL being a COM object, and other discussions that helped me piece together various ways to “wrap” a COM object for use by managed code.

Your best bet would be to find the Primary Interop Assembly (PIA) — i.e. the “official” RCW for the COM object in which you’re interested.  [Unfortunately, the developers of XENROLL have so far refused to acknowledge any need for a PIA for XENROLL.]

Another option would be to build an interop assembly to wrap the COM object, which requires some degree of hand-coding, but not as bad as writing the p/invoke signatures around each and every function exposed by that COM object.

However, the easiest of the “do-it-yourself” options is adding a Reference to the COM object and letting Visual Studio “magically”, automatically create the interop assembly (aka RCW) that’s needed to access the functions.

It turns out that the XENROLLLib article I’d found was really just documenting what was exposed automatically by Visual Studio when you add the Reference to “xenroll 1.0 Type Library”, but that wasn’t obvious to me (and may not be obvious to most folks who happened to stumble on the article).

Tip: the IC* interfaces are COM (scripting) oriented, and the I* interfaces are C++ oriented.  Thus, any manual work to “wrap” the native cert enrollment code in .NET interop assemblies should probably focus on IEnroll4 (which inherits from IEnroll2 and IEnroll), rather than ICEnroll4 and its predecessors.

So if I’m going to add COM references in my code to create a cert enrollment assembly, do I just need to reference XENROLLLib?  Apparently not — this discussion indicates that I’ll also need a reference to “CERTCLIENTLib” to kick off the actual submission of the certificate enrollment request to the Windows CA’s web interface.

And which COM reference will this require then?  A quick Google search on CERTCLIENTLib.CCertRequest should give me a lead pretty quick.  And bingo!  The second link is to this article, which mentions “…code below that uses the CertCli library get the certificate from the CA”, and this corresponds in Visual Studio’s “Add Reference” COM tab to “CertCli 1.0 Type Library”.  Now I’m getting the hang of this!

One open question: how do I go about actually using the Interfaces vs. the Class implementations such as CEnroll or CEnrollClass?

Summary of Findings

So there we go: to write a certificate enrollment client in C# requires a whole lot of COM interop, using COM References to “xenroll 1.0 Type Library” and “CertCli 1.0 Type Library”.  That, plus leaning heavily on samples from others who’ve gone down this path before (like here and here), should enable a pretty reusable .NET certificate enrollment library.  [Now if only Microsoft’s CERTENROLL team would add “releasing a CERTENROLL PIA” to their development schedule.]
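A Python stub of the round trip the two libraries divide between them; the real calls would be ICEnroll4::createPKCS10, ICertRequest::Submit, and ICEnroll4::acceptPKCS7, and everything below (strings, the fake CA, the function bodies) is a placeholder for those:

```python
# Stub of the enrollment round trip: xenroll builds the PKCS#10 request and
# later installs the issued cert; certcli submits the request to the CA.
def create_pkcs10(subject: str, usage_oid: str) -> str:
    # xenroll side: generate a keypair and a signed certificate request
    return f"PKCS10({subject},{usage_oid})"

def submit_to_ca(request: str) -> str:
    # certcli side: hand the request to the CA, collect the issued response
    return f"PKCS7[{request}]"

def accept_pkcs7(response: str) -> bool:
    # xenroll side: match the response to the pending request, install cert
    return response.startswith("PKCS7[PKCS10(")

req = create_pkcs10("CN=mike", "1.3.6.1.5.5.7.3.2")  # client-auth EKU OID
resp = submit_to_ca(req)
assert accept_pkcs7(resp)
```

The split explains why both COM references are needed: neither type library covers the whole sequence on its own.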

Additional References

These will be useful at least for myself, even if they’re not useful to anyone else.

MSDN: Mapping XENROLL functions to CERTENROLL

MSDN: implementation of initializing a certificate request using a cert template (including C# code in Community section)

some native code & discussion of using IEnroll “class” (though I don’t know if the concept is known as a “class” in Win32)

An MSDN newsgroup article on requesting a certificate in managed code

MyPicasaPictures Part 4: Smart Client Software Factory — a deeper look

Having determined that of all the Patterns and Practices guidance, the SCSF is likely the one with the most to offer my project, I decided to see what following the Getting Started steps would get me.

And just my luck, I can’t even get past the first steps — to create a Hello World application, I need to be able to select from the “Guidance Packages Project” types in Visual Studio, but it appears they didn’t even get installed.  I’ve run the SCSF (May 2007) installer twice and selected everything, but it still doesn’t add any entries to the New Project dialog.

What’s a little disturbing is that when I started to peruse the discussions to see if anyone else had run into this, the detailed steps that people had to go through to troubleshoot problems with this guidance were shocking (see this thread, for example).

I really didn’t expect this “factory” to be this complex — I was hoping for a library of code and simple steps to piece different bits together.  What I seem to have gotten instead is another “learning opportunity”, much like the MCML rabbit hole — some limited set of benefits that’s masked by a steep up-front learning curve, and that only benefits your use of that proprietary set of tools.

Does anyone ever just try to create a set of code snippets or object libraries?  Why does everything require learning a whole new development paradigm, and some new hierarchy of newly-overloaded terminology that doesn’t relate to any of the stuff upon which it purports to build?


Dependency Checker: decoupled from install?

Wow, so I decided to go right back to the beginning, and see if there were any hidden steps I missed the first couple of times.  It turns out that, without making too much of a big deal of it, the SCSF team has written a separate downloadable tool that checks for all the required and “optional” (?) dependencies [who ever heard of an optional dependency?  That’s a new one on me].  After all this time, I’ve come to expect that the installer for the program I’m after would check those dependencies as a first step, and would tell me to go get the pieces I’m missing if it detects that it won’t be able to work if it installs.

No, for whatever reason these guys seem to have decided that they want their end users to “educate” themselves on what it really means to use their guidance, so they seem to have intentionally left at least one land mine in place for most newbies to trip over.  I dunno, but if most .NET applications can test for the presence of the required .NET Framework (and halt the install if it’s missing), how much harder could it possibly be to add this dependency checker code as a pre-install step and direct users to the pieces they’re missing?

This reminds me of the days when you downloaded Eudora or Mosaic to “access the Internet”, and then later learned that you needed to download Trumpet Winsock or some other TCP/IP stack before they’d work.

YAFR (Yet Another Freakin Runtime): Guidance Automation Extensions

Well, so it looks like I’m missing something called the “Guidance Automation Extensions“, which sound reminiscent of the Office PIAs — yet another set of libraries that are necessary before the specialized development software will ever work.  Why does everyone seem to want to carve off a set of “foundation libraries”, rather than just include them in their software package itself?  Why do these scenarios inevitably require the end user to become an expert in the internals of the application, rather than just make it as easy as possible to get up and running in one step?

Maybe it’s the Micro-skeptic in me, seeing how often people at Microsoft fooled themselves into thinking that *their* little piece of the platform was the *most* important thing around, and why *shouldn’t* our users spend some time learning how our stuff is supposed to be used?  It’s just such ridiculous arrogance, and in a lot of ways, I suspect it stems from the hiring process itself — hire these geeks straight out of college (with no previous real-world experience or humility, i.e. having never failed at anything in their lives), tell them they’re the absolute best of the best, and then set them loose on a project.  Of *course* they’re going to believe that whatever comes from their mind is the product of god.

Now, after installing the so-called “GAX”, re-running the SCSF installer enables me to select the “Automated Guidance” option (which had been greyed out before – which in other installers simply means that you’ve already installed those bits and don’t need to worry about them again).

Why CAB in the first place?

However, this is just the beginning of the confusion – for a tool that’s supposed to make it easy to build a well-patterned application, the discussions and blogs around SCSF make it clear that this is no library – this is yet another “only once you’re expert with it should you use it” tool.

Bil Simser says, “The point is that SCSF made me more efficient in how I could leverage CAB, just like ReSharper makes me a more efficient developer when I do refactorings.”  However, what he pointedly fails to say is that SCSF makes him more efficient as a developer.  The debate assumes that you have already determined you must use CAB, and once you’ve fallen down that crevasse, what’s the best way to crawl back out?

I guess I need to take this one step further back, and understand the value of CAB to my current application development needs.  Until that time, I’m going to be very skeptical of having to learn the internal logic and complexity of yet another proprietary tool, and be very resistant to investing the time needed to learn enough to make it useful.

A little earlier in his blog, Bil Simser has this to say:

I’ll be the first to admit that CAB is complex. EntLib is large. There is a lot there. As Chris said this morning in what I think was an excellent response to the entire discussion, CAB for example is not just about building maintainable WinForm apps. I like CAB as it gives me a bunch of things and they all work together in a fairly harmonious way. EventBroker is a nice way to message between views and keeping the views separate; CommandHandlers allow me to hook up UI elements indirectly to code to execute them; the ActionCatalog let’s me security trim my commands (and in turn my UI); and the implementation of the MVP pattern using views lets me write presenter tests and keep my UI thin. This all makes me feel good. Did it take me a while to get here? Absolutely. I’ve spent the better part of a year learning CAB, EntLib, ObjectBuilder, WorkItems, and all that jargon but it’s no different than learning a dozen different 3rd party libraries. I simply chose the MS path because it was there and everything was in one neat package. If you packaged up Castle, NHibernate, StructureMap, and others together in a single package maybe I would have chosen that path (and is there really two different paths here? I use both tools together anyways).

Yipes!  Better part of a year?  [And that’s not even counting the SCSF learning curve!]

Not to mention that even the experts wouldn’t want to try to explain it in a short amount of time:

Unfortunately [Entlib/CAB is] not something I could introduce at a conference or User Group session and describe the entire stack in an hour, so I tend to avoid showing off applications and concepts using it as it just turns into a discussion of what (SmartPart) means instead of the main goal like describing MVP which I can do with my own code.

And the promise that even with all of what’s bundled into Entlib and CAB, you could still find yourself dragging in all sorts of other crap:

Eventually I could have a really ugly monster on my hands with copies of Castle, StructureMap, CAB, EntLib, NHibernate, log4net, and who knows what else all living (hopefully) together in happy existence. I don’t want that.

More words to give me pause, from Chris Holmes:

I’ll also say this about ObjectBuilder : I agree with Jeremy; I wouldn’t use it on its own. It’s hideous. It is overly complicated, undocumented and cumbersome. It is the worst part of EntLib and CAB. Fortunately, when using EntLib and CAB I don’t have to manipulate ObjectBuilder. If I did, I’d have abandon both tools a long time ago.

So we come to this question: If I don’t like ObjectBuilder, then why am I using CAB or EntLib?

Now, some people might say “that’s Big Design Up Front”. I’d argue otherwise. Our ultimate goal was testability. We wanted to adopt an MVP architecture so that we could have a very testable UI layer. We wanted automated testing; we wanted automated builds; we wanted the confidence that comes from having a vast suite of unit tests; we wanted the confidence to refactor our code without fear of breaking a tightly coupled system. So our goal was not to have “nifty tools”, but to have a framework to build upon that would give us our ultimate goal: a testable UI layer that we all would have confidence in.

My assumption was that [rolling my own code] would take a considerable amount of time (and not as BDUF; but as time spent building the pieces when necessary. The bottom line is, it still takes time, no matter how you allocate it, and I wasn’t certain how much). That assumption was probably unfortunately influenced by looking at the CAB as a frame of reference and saying, “Holy cow, there’s a lot there…”

So we went with CAB. The idea was that we thought it would help us get out of the gate faster. I thought it would take less time to grok the CAB and make use of it than roll my own code. Maybe that was an error in judgment. It’s certainly possible, I am human and very flawed. But I did manage to grok the CAB pretty fast compared to other adopters, so it seemed like a good decision to me at the time.

I chose CAB because I thought it was the best solution at the time, given everything I knew about software design and weighing all of the other factors involved. I might make a different decision today, given the amount of knowledge I’ve accumulated and the tools that have emerged. I might do just what Jeremy and Oren propose, and grab an IoC tool like Castle and grow my own solutions as necessary.

SCSF, CAB & Entlib: just not ready for the Media Center developer crowd

While having an IM conversation with one of my respected colleagues today, I complained about all the time I was investing in learning about yet another framework:

I just dove deep into the CAB/SCSF/MVC-P crevasse, and it’s amazing how much of learning the bit you need depends on learning the next-layer-down foundational piece – and how many f’n foundational layers there are once you start looking closely.

The thing that J.D. Meier & co did well was provide some automated “wizards” that setup all the skeleton code once you know which pattern you want to use.

The part they addressed poorly is to provide any “entry-level” patterns for weekend hackers like me – everything is all about “steep learning curve, big architecture, lots of pieces to re-use once you figure out the puzzle up front”.  I’m not even convinced I ever want to do this kind of project again, and the amount of setup to even try it is making it less likely I’ll even do it once.

It strikes me that if I were to adopt one of these mammoth frameworks for the MyPicasaPictures project, the only advantage to the pre-defined pieces that are available to fire up at will is that other people (already familiar with the same framework) would have an easier time finding and customizing the pieces they wanted to bolt in.

For the rest of the Media Center community, if they wanted to extend the application or plug in a new module, they’d be forced to suffer the same learning curve as I’m facing right now.  This is not a community that I’d expect to be familiar with these enterprise development frameworks, nor do I feel particularly cruel enough to try to foist such a hairball upon them.

Fascinating quote to end this research vein with:

The end result is that this [anti-Agile programming] architect and I could not possibly coexist on the same project.  My longstanding opinion that traditional architects are useless if not downright harmful remains completely intact.  I guess you wouldn’t be slightly surprised to find out this guy is a big proponent of software factories as well.  Of course, this guy had more people in his talks and better evaluations than me, so what do I know?

If his description is any indication, and if I assume the likelihood that factories-based development suits the Media Center developer crowd is slim to none, then let’s just forget about the whole idea of using factories/frameworks for VMC-oriented development.



Next stop: back to the drawing board.  Perhaps I’ll work on a few crisp use cases to anchor the development of one or more rounds of Agile-esque coding.

Porting Word2MediaWikiPlus to VB.NET: Part 14 (Mysteries Abound)

[Previous articles in this series: Prologue, Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7, Part 8, Part 9, Part 10, Part 11 (The Return), Part 12 (Initialization continued), Part 13 (VBA Oddities).]

Mysterious Character: Vertical Tab (VT) — Do These Still Show Up in Word Documents?

In working through the code in MediaWikiConvert_Lists(), I ran across a block of code that purports to “replace manual page breaks”, and is using the Chr(11) construct to do so.  I must’ve been feeling extra-curious ’cause I went digging into what this means, and the harder I looked, the more puzzled I became.

According to ASCIITables.com, the character represented by decimal “11” is the so-called “vertical tab”.  I’ve never heard of this before (but then, there’s a whole host of ASCII & Unicode characters I’ve never paid attention to before), so I had to check with a half-dozen other references on the ‘net before I was sufficiently convinced that this wasn’t some “off-by-one” problem where the VBA coders were intending to look for Chr(10) (aka “line feed”) or Chr(12) (aka “form feed”).

On the assumption that we’re really and truly looking for “vertical tab”, I had to do some deep digging to figure out what this might actually represent in a Word document.  There’s the obligatory Wikipedia entry, which only said that “The vertical tab is  but is not allowed in SGML (including HTML) or XML 1.0.”  Then I found this amusing reference to one of the Perl RFCs, which quotes Russ Allbery as saying, “The last time I used a vertical tab intentionally and for some productive purpose was about 1984.”  [Sometimes these quotes get better with age…]

OK, so if the vertical tab is so undesirable and irrelevant, what could our VBA predecessors be thinking?  What is the intended purpose of looking for an ASCII character that is so unappreciated?
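Having since poked at this some more, my best guess is that the original comment is simply mislabeled: in a Word document, Chr(11) is the character stored for a manual line break (the thing you get with Shift+Enter, and what Find/Replace matches as “^l”), while a manual page break is Chr(12) (“^m”).  So the VBA code was most likely normalizing manual line breaks, not page breaks.  A sketch of that replace in Word-interop VB.NET (Doc is assumed to be a Word.Document from the interop assembly):

```vb
' Chr(11) in a Word document is a manual line break (Shift+Enter),
' matched in Find/Replace as "^l" -- not a page break (Chr(12) / "^m").
' This sketch normalizes manual line breaks to paragraph marks:
With Doc.Content.Find
    .ClearFormatting()
    .Text = Chr(11)            ' vertical tab = manual line break
    .Replacement.Text = "^p"   ' replace with a paragraph mark
    .Execute(Replace:=Word.WdReplace.wdReplaceAll)
End With
```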

Mysterious Code Fragment: “If 1 = 2” – WTF?

I started to notice these odd little appendages growing out of some of the newer code in the VBA macro.  At first I figured there must be some special property of VBA that makes “If 1=2” a valid statement under some circumstances, and I just had to ferret out what that was.

Instead, the more I looked at it, the more puzzled I became.  What the hell could this possibly mean?  Under what circumstances would *any* logical programming language ever treat “If 1 = 2” as anything but a comparison of two absolute numbers, that will ALWAYS evaluate to False?

Eventually I had to find out what greater minds than mine thought about this, and so off to Google I go.  As you might expect, there’s not much direct evidence of any programming practices that include adding this “If 1 = 2” statement.  In fact, though it appears in the odd piece of code here and there, it’s surprisingly infrequent.  However, I finally ran across what I take to be the best lesson on what this really means (even if I had to unearth it through the infamous “Google cache”):

>>>Anyone know how to comment out a whole section in VBA rather than just
>>>line by line with a ” ‘ “?
>>If the code is acceptable (won’t break because some control doesn’t
>>exist, etc), I sometimes do
>>If 1 = 2 Then
>> ….existing code
>> End If
>>The code will never fire until the day 1 = 2.
> Thanks, think Id prefer the first option. The second option might
> confuse any programmers that try and read my code.

Now that’s the understatement of the year.

So as far as I’m concerned, I’m going to go back and comment out any and all instances where I find this statement, as it tells me the original programmer didn’t want this code to fire, and was thinking of coming back to it someday after their last check-in.
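For the record, here’s the pattern as I keep finding it, plus what strikes me as a less confusing way to park dead code in VB.NET (SomeLegacySub is a made-up stand-in):

```vb
' The pattern as found in the macro -- the body can never execute:
If 1 = 2 Then
    SomeLegacySub()   ' hypothetical routine the author meant to revisit
End If

' A clearer equivalent in VB.NET: conditional compilation.  The compiler
' skips the block entirely, and the "this is parked code" intent is explicit.
#If False Then
    SomeLegacySub()
#End If
```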

Mysterious Approach: Localization via Macro?  No way.

There are a few routines that attempt to implement localization at runtime.  While that approach makes sense for a VBA macro, it makes little if any sense in VB.NET, where the English strings can simply live in the Resources file that will accompany this code.

Thus, the MW_LanguageTexts() routine will be skipped, since it had little if any effect anyway.
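For what it’s worth, the VB.NET replacement is close to a one-liner.  A sketch, assuming a string resource named ConvertButtonCaption has been added to the project’s Resources file:

```vb
' Strings come from the compiled Resources file rather than a runtime
' lookup routine like MW_LanguageTexts():
Dim caption As String = My.Resources.ConvertButtonCaption

' If localization is ever needed, satellite resource files (e.g.
' Resources.de.resx) are picked up automatically based on the thread's
' CurrentUICulture -- no macro-style string tables required.
```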

Mysterious Exception: “add-in could not be found or could not be loaded”

I’ve been struggling for a few days to try to actually run this add-in, and after finding out why, I can say with confidence that there was no good troubleshooting guide for this.

Here’s the setup:

  • I could Build the add-in just fine — no build-time errors, only two compiler warnings (about unused variables).
  • However, when I tried to either (a) Debug the project from within Visual Studio, or (b) add the add-in manually to Word, I was completely stymied.
  • When I started the Debug sequence (F5) from Visual Studio, it would launch Word 2003, which created all its default menus and toolbars, and then threw this error dialog:
    Office document customization is not available - An add-in could not be found or could not be loaded.
  • The details of this exception read:

    ************** Exception Text **************
    Microsoft.VisualStudio.Tools.Applications.Runtime.CannotCreateStartupObjectException: Could not create an instance of startup object Word2MediaWiki__.ThisAddIn in assembly Word2MediaWikiPlusPlus, Version=, Culture=neutral, PublicKeyToken=1a75eafd9e81be84. ---> System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.NullReferenceException: Object reference not set to an instance of an object.
       at Word2MediaWiki__.Word2MediaWikiPlusPlus.Convert..ctor() in C:\VS2005 Projects\Word2MediaWiki++\Word2MediaWiki++\Convert.vb:line 44
       at Word2MediaWiki__.ThisAddIn..ctor(IRuntimeServiceProvider RuntimeCallback) in C:\VS2005 Projects\Word2MediaWiki++\Word2MediaWiki++\ThisAddIn.vb:line 29
       --- End of inner exception stack trace ---

  • If I tried to load the add-in from within Word (using the Tools > COM Add-ins… menu — which you can add with these instructions), Word would only tell me:
    Load Behavior: Not loaded. A runtime error occurred during the loading of the COM Add-in.

    I won’t even bore you with the details of all the stuff I tried to do to debug this issue.   It turned out that I was instantiating my Application object too early in the code (at least, the way I’d constructed it).

    Broken Code

    ThisAddin.vb (relevant chunk)

    Imports Office = Microsoft.Office.Core
    Imports Word2MediaWiki__.Word2MediaWikiPlusPlus.Convert
    Public Class ThisAddIn
    #Region " Variables "
        Private W2MWPPBar As Office.CommandBar
        WithEvents uiConvert As Office.CommandBarButton
        WithEvents uiUpload As Office.CommandBarButton
        WithEvents uiConfig As Office.CommandBarButton
        Dim DocumentConversion As Word2MediaWikiPlusPlus.Convert = New Word2MediaWikiPlusPlus.Convert ' Line 29
    #End Region

    Convert.vb (relevant chunk)

    Imports Word = Microsoft.Office.Interop.Word
    Namespace Word2MediaWikiPlusPlus
    Public Class Convert
    #Region "Variables"
            Dim App As Word.Application = Globals.ThisAddIn.Application 'PROBLEM - Line 44
            Dim Doc As Word.Document = App.ActiveDocument 'PROBLEM
    #End Region
    #Region "Public Subs"
            Public Sub InitializeActiveDocument()
                If Doc Is Nothing Then
                    Exit Sub
                End If
            End Sub

    #End Region


    Fixed Code

    Convert.vb (relevant chunk)

    Imports Word = Microsoft.Office.Interop.Word
    Namespace Word2MediaWikiPlusPlus
    Public Class Convert
    #Region "Variables"
            Dim App As Word.Application 'FIXED 
            Dim Doc As Word.Document 'FIXED 
    #End Region
    #Region "Public Subs"
            Public Sub InitializeActiveDocument()
                App = Globals.ThisAddIn.Application 'NEW
                Doc = App.ActiveDocument 'NEW
                If Doc Is Nothing Then
                    Exit Sub
                End If
            End Sub
    #End Region

    What I Think Went Wrong

    As much as I understand of this, it seems that when the ThisAddIn class tries to create a new instance of the Convert class as its DocumentConversion member, the ThisAddIn object itself hasn’t been instantiated yet, so the reference in the Convert class to Globals.ThisAddIn.Application can’t be resolved (how can you get the ThisAddIn.Application object if its parent object — ThisAddIn — doesn’t exist yet?).  That unresolved reference causes the NullReferenceException that is the heart of the problem.

    By pulling out that instantiation code from the App variable declaration, and delaying it instead to one of the Convert class’s Subs, there was no need for the managed code to “chase its tail” — trying to resolve an object reference back through the calling code, which hadn’t been instantiated yet.

    Y’know, I’m sure I read somewhere over the last year that combining the declaration with the instantiation of a variable is bound to lead to subtle debugging issues, but man.  Losing three days to this?  What a disaster.

    Lesson for the day: It never pays to take shortcuts.

    SharePoint Development resources

    I’ve run across some great sets of resources for SharePoint developers, so I figured I’d share them rather than bookmark ’em and never remember them again:

    I’m sure there’s lots of other great resources, but these were the ones I found today.  Enjoy.

    Porting Word2MediaWikiPlus to VB.NET: Part 7

    [Previous articles in this series: Prologue, Part 1, Part 2, Part 3, Part 4 (no Part 5 – apparently I lost the ability to count), Part 6.]

    Troubleshooting ThisAddIn.Startup() continued…

    Still struggling with getting the CommandBar and CommandBarButton instantiated in the ThisAddIn.Startup() Sub.  I’m finding that the initial exploration of the CommandBar to see if there is a pre-existing instance of the “W2MWPP Convert” button is not working.  The code starts off like this:

            Dim MyControl As Microsoft.Office.Core.CommandBarButton
            MyControl = Application.CommandBars("W2MWPPBar").FindControl(Tag:="W2MWPP Convert")

    Then when I debug (F5) this addin, Word reports an unhandled exception with the error “Value does not fall within the expected range”.  I seem to recall having this same problem with my previous VSTO Word AddIn until I first had the button created — then, the next time I ran the addin, it had something to FindControl().  At present, since the button doesn’t exist, it appears that FindControl() is getting “jammed” and I’m never going to get anywhere (kind of a chicken-and-egg problem).

    It will be easy to get around this problem on my computer, but I’m afraid that when I build and release this add-in for others to install, if I start the code with a FindControl() call when there’s no button to find, no one else will be able to use this addin either.

    Alternative approach to creating the CommandBarButton?

    I have to imagine that there’s another way to skin the cat: if we need to determine if the button exists before attempting to create it, but trying to find it by name isn’t working, then perhaps there’s some CommandBar control collection that we could iterate through, and compare the Tag value for each (if any) to find the one we want.  That should go something like this:

    Dim commandBarControlsCollection As Office.CommandBarControls = W2MWPPBar.Controls
    Dim buttonExists As Boolean

    For Each control As Microsoft.Office.Core.CommandBarControl In commandBarControlsCollection
        If control.Tag = "W2MWPP Convert" Then
            MyControl = control
            buttonExists = True
            Exit For
        End If
    Next
    If buttonExists = False Then
        'Create a new ControlButton
        MyControl = Application.CommandBars("W2MWPPBar").Controls.Add(Type:=Microsoft.Office.Core.MsoControlType.msoControlButton)
    End If
    Is it a Variable Scope issue?

    This still doesn’t resolve the error, so I’m continuing to search for good example code from folks who should know how to construct VSTO code.  This blog entry from the VSTO team has an interesting thing to say:

    You should declare your variables for the command bar and buttons at the class level so that your buttons don’t suddenly stop working.

    The referenced article (which I’ve linked from Archive.org, the “Internet Wayback Machine”) says:

    The solution is to always declare your toolbar/menu/form variables at the class level instead of inside the method where they’re called. This ensures that they will remain in scope as long as the application is running.

    I wonder whether this advice was more relevant to document-based VSTO projects rather than the application-level add-ins that are possible today — but something tells me it can’t hurt either way, and it’s worth trying to see if it changes anything about the errors above.
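The failure mode that advice guards against is worth spelling out: if the CommandBarButton variable is local, nothing managed holds a reference to the button’s runtime-callable wrapper once the method returns, the garbage collector reclaims it, and the Click handler silently stops firing.  A sketch of the class-level shape (the names here are mine, not from the article):

```vb
Public Class ThisAddIn
    ' Held at class level so the button's wrapper -- and the event
    ' hookup -- live as long as the add-in does.
    Private WithEvents convertButton As Office.CommandBarButton

    Private Sub WireUpToolbar()
        ' If convertButton were declared locally here, it could be
        ' garbage collected after this Sub returns, taking the
        ' Click event subscription with it.
        convertButton = CType( _
            Application.CommandBars("W2MWPPBar").Controls.Add( _
                Type:=Office.MsoControlType.msoControlButton), _
            Office.CommandBarButton)
        convertButton.Caption = "W2MWPP Convert"
    End Sub
End Class
```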

    Result: unfortunately, this isn’t a problem of variable scope.  In taking a closer look at the exception, here’s the first-level exception:

    System.ArgumentException: Value does not fall within the expected range.
       at Microsoft.Office.Core.CommandBarsClass.get_Item(Object Index)

    Am I looking at the wrong object?

    What exactly is the problem?  Is this saying that get_Item() is failing to get the CommandBar, or the CommandBarButton?  I’ve assumed up to now that it’s a problem referencing the CommandBarButton, since the CommandBar is getting created in Word each time I Debug this add-in.  However, now that I’m looking at it, CommandBarsClass.get_Item() seems more likely to be acting on the CommandBar than the button (or else it’d refer to something like CommandBarButtonsClass.get_Item(), no?).

    What’s odd, however, is that the VS Object Browser doesn’t even have an entry for CommandBarsClass — when I search for that term, no results come up, and when I search on “CommandBars”, the closest thing I can find is the “Class CommandBars” entry, which doesn’t have a get_Item() method.

    Searching in MSDN, I found the entry for CommandBarsClass Members, which doesn’t reference the get_Item() method but does mention an Item Property.  That page says a very curious thing:

    This property supports the .NET Framework infrastructure and is not intended to be used directly from your code.

    I wonder what that’s all about then?  In fact, the documentation for the CommandBarsClass Class also says the same thing.  I can understand that there are some “internal functions” generated by the compiler that aren’t really meant for use in my code, but it’s really tough to debug a problem when these constructs barely get a stub page and there’s no information to explain what I should think when one of these things pops up in my day-to-day work.

    I feel like I’m chasing my tail here — now I’m back on the Members page, hoping that one of the Properties or Methods that are documented will help me deduce whether this class references the CommandBar or the CommandBarButton when it calls get_Item() [and maybe even help me figure out why a just-created CommandBar object can’t be referenced in code].

    The best clue I’m coming up with so far is that the page documenting the Parent Property shows that under J#, there’s what appears to be a get_Parent() method, not existing in the other languages mentioned, which leads me to believe that the get_Item() method is something generated by the compiler when it needs to get the value of the Item Property.  [At least I’m learning something for all my trouble…]

    The only other tantalizing tidbit so far is that the CommandBarsClass page indicates that this class implements interfaces that all refer to CommandBar, not to any controls associated with the CommandBar: _CommandBars, CommandBars, _CommandBarsEvents_Event.  I can’t tell the difference between the first two (at least from the docs), but obviously the Event interface is its own beast.

    Success: it’s the CommandBar!

    I think I have confirmation, finally: the docs for _CommandBars.Item state that the Item Property “Returns a CommandBar object from the CommandBars collection.”  OK, so now I finally know: my code is barfing on trying to access the just-created CommandBar, not the CommandBarButton as I thought all along.  Whew!
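Given that diagnosis, the defensive fix is to stop indexing CommandBars by name and scan the collection instead, creating the bar only when it’s genuinely absent.  A sketch (not yet tested against my actual add-in):

```vb
' Avoid the ArgumentException from CommandBars("W2MWPPBar") when the
' bar doesn't exist: scan the collection, then create on a miss.
Dim W2MWPPBar As Office.CommandBar = Nothing
For Each bar As Office.CommandBar In Application.CommandBars
    If bar.Name = "W2MWPPBar" Then
        W2MWPPBar = bar
        Exit For
    End If
Next
If W2MWPPBar Is Nothing Then
    W2MWPPBar = Application.CommandBars.Add(Name:="W2MWPPBar", Temporary:=True)
End If
```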

    Aside: Re-Using Variables

    I’m not much of a code snob yet — there are very few things I know how to do “right”.  However, I’ve found something that just doesn’t seem right in the original VBA code, and I’m going to change it.

    The original code sets up three separate buttons in the CommandBar, and each time it looks for a button, creates it, and configures it, it re-uses the exact same variable: MyControl.  I know this obviously worked (at least in the VBA code), so it’s hardly illegal, but it seems much safer to create three variables and instantiate them separately.  Maybe it’s just so that I can follow the code more easily, I don’t know.  In any case, I’m having a hard time with it, so I’m going to call them each something else.

    However, I’m not so much of a snob that I’ll create three boolean variables to track whether I’ve found an existing instance of the button, so I’m going to re-use the buttonExists variable.


    Keep tuning in… someday I’ll actually start getting into Wiki-related code (I swear!)

    EFS Certificate Configuration Updater tool is released!

    After weeks of battling with Visual Studio over some pretty gnarly code issues, I’ve released the first version of a tool that will make IT admins happy the world over (well, okay, only those few sorry IT admins who’ve struggled to make EFS predictable and recoverable for the past seven years).

    EFS Certificate Configuration Updater is a .NET 2.0 application that will examine the digital certificates a user has enrolled and will make sure that the user is using a certificate that was issued by a Certificate Authority (CA).

    “Yippee,” I hear from the peanut gallery. “So what?”

    While this sounds pretty freakin lame to most of the planet’s inhabitants, for those folks who’ve struggled to make EFS work in a large organization, this should come as a great relief.

    Here’s the problem: EFS is supposed to make it easy to migrate from one certificate to the next, so that if you start using EFS today but decide later to take advantage of a Certificate Server, then the certs you issue later will replace the ones that were first enrolled. [CIPHER /K specifically tried to implement this.]

    Unfortunately, there are some persistent but subtle bugs in EFS that prevent the automatic migration from self-signed EFS certificates to what are termed “version 2” certificates. Why are “version 2” certificates so special? Well, they’re the “holy grail” of easy recovery for encrypted files – they allow an administrator to automatically and centrally archive the private key that is paired with the “version 2” certificate.

    So: the EFS Certificate Configuration Updater provides a solution to this problem, by finding a version 2 EFS certificate that the user has enrolled and forcing it to be the active certificate for use by EFS. [Sounds pretty simple eh? Well, there’s plenty of organizations out there that go to a lot of trouble to try to do it themselves.]
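To make the “finding a version 2 EFS certificate” step concrete: this is not the tool’s actual code, just a minimal .NET 2.0 sketch of the first half of the job — enumerating the user’s Personal store for certificates carrying the EFS enhanced-key-usage OID (1.3.6.1.4.1.311.10.3.4):

```vb
Imports System.Security.Cryptography
Imports System.Security.Cryptography.X509Certificates

Module EfsCertSketch
    Sub Main()
        ' Open the current user's Personal ("MY") certificate store.
        Dim store As New X509Store(StoreName.My, StoreLocation.CurrentUser)
        store.Open(OpenFlags.ReadOnly)
        For Each cert As X509Certificate2 In store.Certificates
            For Each ext As X509Extension In cert.Extensions
                Dim eku As X509EnhancedKeyUsageExtension = _
                    TryCast(ext, X509EnhancedKeyUsageExtension)
                If eku IsNot Nothing Then
                    For Each oid As Oid In eku.EnhancedKeyUsages
                        ' 1.3.6.1.4.1.311.10.3.4 = Encrypting File System EKU
                        If oid.Value = "1.3.6.1.4.1.311.10.3.4" Then
                            Console.WriteLine("EFS-capable cert: " & cert.Subject)
                        End If
                    Next
                End If
            Next
        Next
        store.Close()
    End Sub
End Module
```

Deciding whether a given candidate is truly “version 2” (issued from a v2 template on a Windows Server 2003 CA) means inspecting the certificate-template extension as well, which I’ve left out of the sketch.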

    Even though this application fills a significant need, it doesn’t (at present, anyway) do everything that might be needed in all scenarios. The additional steps that you might need to cover include:

    • Enrolling a version 2 EFS certificate. [You can automate this with autoenrollment policy and the Windows Server 2003-based CA that is already in place for issuing v2 certificates and Key Archival.]
    • Updating EFS’d files to use the new certificate. [You can automate this by using CIPHER /U, but it’ll take a while if the user has a lot of encrypted files. The good news, however, is that the update only has to re-encrypt the FEK, not re-encrypt the entire file, so it’s much quicker than encrypting the same set of files from scratch.]
    • Ensuring that the user’s EFS certificate doesn’t expire before a new or renewed certificate is enrolled. [This is very easy to accomplish with Autoenrollment policy, but without the use of Autoenrollment, there is a significant risk that when the user’s preferred EFS certificate expires, the EFS component driver could enroll for a self-signed EFS certificate.]
    • Archiving unwanted EFS certificates. [This is different from deleting a digital certificate – which also invalidates the associated private key, which is NOT recommended. This would keep the certificates in the user’s certificate store, and preserve the private key — so that any files encrypted with that old certificate were still accessible. This is hard to do from UI or script, but is a feature I’m hoping to add to the EFS Certificate Configuration Updater in the near future. This is also optional – it just minimizes the chances of a pre-existing EFS certificate being used if the preferred certificate fails for some reason.]
    • Publishing the user’s current EFS certificate to Active Directory. [This is also optional. It is only necessary to make it possible — though still hardly scalable — to use EFS to encrypt files for access by multiple users (see MSDN for more information). This can be automated during Autoenrollment, but some organizations choose to disable publishing a 2nd or subsequent EFS certificate since the EFS component driver may get confused by multiple EFS certificates listed for a single user in Active Directory.]
    • Synchronizing the user’s EFS certificate and private key across all servers where encrypted files must be stored. [This is not needed if you’re merely ensuring that all sensitive data on the user’s notebook/laptop PC is encrypted, so that the loss or theft of that PC doesn’t lead to a data breach. However, if you must also enforce EFS encryption on one or more file servers, the EFS Certificate Configuration Updater will not help at all in this scenario.]

    Try it out — Tell your friends (you have friends who’d actually *use* this beast? Man, your friends are almost as lame as mine – no offense) — Let me know what you think (but no flaming doo-doo on my front porch, please). And have a very crypto-friendly day. 😉

    Funny Microsoft link of the day/month/year (whatever)

    One in a very infrequent and unpredictable series:


    I’m pretty sure I’ve seen this performance — and if not, it’s damned similar to other occasions when I didn’t know whether to be amused, embarrassed or downright frightened.