Codeplex Licensing: Who controls license to my projects? Not I, apparently

Man, does this ever kill the relationship I thought I was building with the Codeplex team.   A few weeks ago I got another RSS notification from one of my Codeplex projects that the License to the project had changed (AGAIN).

In my interpretation of the world of software licensing, the License Agreement is a contract between the author of the software and the users of it.  I have the impression that these License Agreements carry the weight of law behind them, and that as such there should be restrictions on who can and cannot change the terms of the contract.

The way I see it, and unless I can find some text to the contrary in the Codeplex Terms of Use, the person who creates the Codeplex project is the only one who should be able to make any changes to the License — and even then, they should be very careful not to make frequent or arbitrary changes to the License, so as to give their users as stable a set of expectations as possible.

Codeplex: No Respect For Licenses

So why is it that the License for my CacheMyWork project has changed (TWICE), without my consent or participation?  I’ve looked over the Terms of Use published on the Codeplex site, and I can’t find any warning or mention of Microsoft or the Codeplex team monkeying with the License for any project:

“Microsoft does not claim ownership of the materials you provide to Microsoft…or post, upload, input or submit to any Services or its associated services for review by the general public, or by the members of any public or private community, (each a “Submission” and collectively “Submissions”).”


“Microsoft does not control, review, revise, endorse or distribute third-party Submissions. Microsoft is hosting the CodePlex site solely as a web storage site as a service to the developer community.”


“…Microsoft may remove any Submission at any time in its sole discretion.”

I see nothing claiming that Codeplex or Microsoft reserves the right to alter, update or otherwise affect any License agreement attached to any current projects.  I do not believe that this is an implicit right of the Terms they have published, nor of the kinds of services that are being provided here.

SourceForge: The Opposite Extreme

Out of curiosity, I decided to examine the Sourceforge Terms of Service:

“ reserves the right to update and change the Terms, including without limitation the Privacy Statement, Policies and/or Service-Specific Rules, from time to time.”


“In addition, the content on the Website, except for all Content, including without limitation, the text, graphics, photos, sounds, sayings and the like (“Materials”) and the trademarks, service marks and logos of COMPANY contained therein (“Marks”), are owned by or licensed to COMPANY, subject to copyright and other intellectual property rights under United States and foreign laws and international conventions.”


“Except for Feedback, which you agree to grant COMPANY any and all intellectual property rights owned or controlled by you relating to the Feedback, COMPANY claims no ownership or control over any Content. You or your third party licensor, as applicable, retain all intellectual property rights to any Content and you are responsible for protecting those rights, as appropriate.

With respect to Public Content, the submitting user retains ownership of such Public Content, except that publicly-available statistical content which is generated by COMPANY to monitor and display project activity is owned by COMPANY.

By submitting, posting or displaying Content on or through, you grant COMPANY a worldwide, non-exclusive, irrevocable, perpetual, fully sublicensable, royalty-free license to use, reproduce, adapt, modify, translate, create derivative works from, publish, perform, display, rent, resell and distribute such Content (in whole or part) on and incorporate Content in other works, in any form, media, or technology developed by COMPANY, though COMPANY is not required to incorporate Feedback into any COMPANY products or services. COMPANY reserves the right to syndicate Content submitted, posted or displayed by you on or through and use that Content in connection with any service offered by COMPANY.

With respect to Content posted to private areas of (e.g., private development tools or Mail), the submitting user may grant to COMPANY or other users such rights and licenses as the submitting user deems appropriate.”


“ fosters software development and content creation under Open-Source Initiative (“OSI”)-approved licenses or other arrangements relating to software and/or content development that may be approved by COMPANY. For more information about OSI, and OSI-approved licenses, visit

Use, reproduction, modification, and ownership of intellectual property rights to data stored in CVS, SVN or as a file release and posted by any user on (“Source Code”) shall be governed by and subject to the OSI-approved license, or to such other licensing arrangements approved by COMPANY, applicable to such Source Code.

Content located on any subdomain which is subject to the sole editorial control of the owner or licensee of such subdomain, shall be subject to the OSI-approved license, or to such other licensing arrangements that may be approved by COMPANY, applicable to such Content.”

My interpretation of these Sourceforge terms is that they’re at least as generous as the Codeplex terms, and they are much better specified.  It’s as if the Sourceforge leaders took their responsibilities seriously, whereas the Codeplex leaders still consider this a *Microsoft* project, where they can make arbitrary decisions just as Microsoft so often does, with little regard for their customers’ welfare.

Would you feel comfortable hosting your source code on the Sourceforge site?  I would.

Google: Somewhere in the Between-Space

Compare also to the Google Code Terms of Service:

“Google claims no ownership or control over any Content submitted, posted or displayed by you on or through Google services. You or a third party licensor, as appropriate, retain all patent, trademark and copyright to any Content you submit, post or display on or through Google services and you are responsible for protecting those rights, as appropriate. By submitting, posting or displaying Content on or through Google services which are intended to be available to the members of the public, you grant Google a worldwide, non-exclusive, royalty-free license to reproduce, adapt, modify, publish and distribute such Content on Google services for the purpose of displaying, distributing and promoting Google services. Google reserves the right to syndicate Content submitted, posted or displayed by you on or through Google services and use that Content in connection with any service offered by Google. Google furthermore reserves the right to refuse to accept, post, display or transmit any Content in its sole discretion.”


“Google reserves the right at any time and from time to time to modify or discontinue, temporarily or permanently, Google services (or any part thereof) with or without notice. You agree that Google shall not be liable to you or to any third party for any modification, suspension or discontinuance of Google services.”

Pretty spartan, but their terms still spell out more specifics about what rights you receive and give up than the Codeplex terms do.

If I was starting over…

If I had to choose it all over again, and if I knew I had equivalent IDE integration for any of the source code management add-ins, I’d have a hard time choosing among them.  Currently, the fact that all four of my projects are hosted on Codeplex is a significant barrier to hosting any project on any other site.  The next strongest reason keeping me on Codeplex is that I understand the basics of how to check in and check out my code using the Visual Studio integration with TFS — not that it means I find it brainless or foolproof, but it happens to be the source code management system I know best right now.  [As well, I continue to figure that I’ll focus my development skills in .NET, which likely will work better in Visual Studio, which is investing heavily in TFS integration and adoption – and I suspect if I ever get into coding for work, that I’ll have the TFS infrastructure available to really exploit for our projects.]

What about Sourceforge?  I like that it’s the first one out there, and I imagine it’s been held to a higher standard (by the zealots of the open-source community) than any other site — which is goodness.  However, I don’t “get” Subversion or (the older source code management tool), and I must admit I’m put off by the number of stale or poorly-used projects littering the Sourceforge site.  I imagine that Codeplex will look similar, given the same amount of time, so this isn’t a knock against Sourceforge but rather just a reflection of its age and roots.

What about Google Code?  Despite my better [i.e. paranoid] judgment, I’ve become a fan of Google’s services, and I really appreciate the ability to easily leverage all their “properties” in what appears to me to be a cohesive fashion.  I’m also thrilled that most of their properties are frequently updated, and that the updates focus on the kinds of changes that I usually appreciate seeing.  At this point I haven’t looked much deeper into it, but if it’s possible to test it out, I probably will.

If I was running Codeplex…

If I could sit down with the Codeplex team, I’d tell them this:

  • You should feel free to publish any updated licenses you wish, and make them available at any time
  • You should NEVER alter ANY of the properties of any of the hosted projects on Codeplex, let alone the License for any project

    (It’s possible the database admins are simply updating database records for the existing licenses, rather than inserting new records for each new or updated license.)

  • If you don’t want to make available “older” licenses because you think that would be too confusing for Codeplex users, then at least leave the existing licenses in place, but mark them such that they cannot be assigned to new projects.  There’s NO excuse for not leaving existing projects alone and allowing the project coordinators to choose when, or whether, to update the license on their project.
  • If you’re simply updating the existing licenses for the purpose of streamlining, constraining or clarifying the licenses attached to Microsoft projects, then you ought to be publishing Microsoft-specific versions of these licenses, since the changes you’re making to the existing licenses are having unintended consequences on those non-Microsoft projects that also use those licenses.
  • Ideally, however, you should be able to publish and maintain multiple versions of each license, and to publish new licenses as well.  If the GPL’s stewards can manage three different versions of their license, and if these really are simply database records (with “published on” and “version” metadata attached to each), then there should be no excuse why Codeplex cannot keep multiple versions of its licenses online at the same time, and leave the existing projects’ licenses ALONE.
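For what it’s worth, the versioned-license scheme in that last bullet is trivial to model.  Here’s a minimal sketch in Python/SQLite — the license name, dates, and texts are made up for illustration — where each published license version is an immutable row, and a project pins the exact version it was created under:

```python
import sqlite3

# Hypothetical data model: license versions are immutable rows; a project
# references a specific (name, version) pair instead of mutable license text.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE licenses (
    name         TEXT NOT NULL,
    version      INTEGER NOT NULL,
    published_on TEXT NOT NULL,
    body         TEXT NOT NULL,
    retired      INTEGER NOT NULL DEFAULT 0,  -- can't be assigned to NEW projects
    PRIMARY KEY (name, version)
);
CREATE TABLE projects (
    project         TEXT PRIMARY KEY,
    license_name    TEXT NOT NULL,
    license_version INTEGER NOT NULL,
    FOREIGN KEY (license_name, license_version)
        REFERENCES licenses (name, version)
);
""")

# Publishing an updated license INSERTS a new row (old one merely retires)...
conn.execute("INSERT INTO licenses VALUES ('Ms-PL', 1, '2006-06-01', 'v1 text', 1)")
conn.execute("INSERT INTO licenses VALUES ('Ms-PL', 2, '2007-10-01', 'v2 text', 0)")
# ...and an existing project keeps pointing at the version it chose.
conn.execute("INSERT INTO projects VALUES ('CacheMyWork', 'Ms-PL', 1)")

row = conn.execute("""
    SELECT l.body FROM projects p
    JOIN licenses l ON l.name = p.license_name AND l.version = p.license_version
    WHERE p.project = 'CacheMyWork'
""").fetchone()
print(row[0])  # still 'v1 text' -- untouched by the v2 publication
```

The point of the composite key is that publishing a new version can never mutate what an existing project’s users agreed to.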

Having worked at Microsoft for six years, I saw firsthand how arrogant and ignorant Microsoft can be about the impact of sweeping decisions they make, since most of those folks have never been responsible for large populations of users before, and they rarely have had exposure to an environment where their decisions (no matter how foolish or under-thought) are openly challenged.

I wouldn’t be surprised if the Codeplex team was no different; in fact, I’d be surprised if they were even measured on such end-user impact.  There’d sure be no incentive for Codeplex management to care, since (a) there’s no revenue to be gained or lost through good or poor service, and (b) there’s no revenue to be “stolen” from a competing service (since all the major online code repositories charge nothing for their service, and their advertising revenue must be pretty thin).

I’m hopeful that the Codeplex folks wise up and take their responsibilities more seriously, but after the catastrophic database failure of last year, plus this half-arsed attitude towards the legal aspects of this service, I’m not going to make any bets on it.

If anyone has any suggestions on how I could help remedy either the Codeplex situation or my own desire for a better service (to which I could quickly adapt, given my sorry skills), please speak up – I’d love to hear it.

Windows Update — talk about shooting yourself in both feet…

Microsoft update - nonsense

For the love of Pete (who’s Pete you ask?  It’s a joke, son), who’s keeping watch over Microsoft customers’ safety and security?  For well over a year now, I’ve encountered Windows XP SP2 PC after PC, dutifully configured to automatically download and install all high-priority updates.  Some of these PCs, I’ve mothered over multiple times, hoping that I was seeing just a one-time problem that would be magically resolved the next time I arrived.

Microsoft even makes a big deal in its advertising about the fact that Windows Update (or Microsoft Update, if you’ve opted-in to this long-overdue expansion of updates across many Microsoft consumer and business products) “…helps keep your PC running smoothly — automatically”.  [And if you don’t believe me, check it out for yourself.]

Hogwash, I say.

Windows Update?  It’s more like “Rarely Update”, or “Windows Downtime”.

In almost every single case (and I suspect the rare PCs that weren’t this way had been similarly mothered by some other poor lackey of the Beast from Redmond), I’ve found that I had to visit the Windows Update web site, download yet another update to the “Windows Genuine Validation” ActiveX control, and install this piece o’ quicksand.  Only then could I subject my friend’s (or family member’s) PC to the agony of between one and three sessions (depending on how long it’d been since I last visited) of downloading and installing the very updates that they (and I) continued to falsely believe were being downloaded “automatically”.

In those cases where it’d been a year or more since the last occasion of hand-holding by me, the cycle of abuse wasn’t complete with a single session — I had to reboot after all “available” updates were installed, and re-visit Windows Update to find yet *another* batch of updates that magically appeared on this subsequent go-around.

How does this happen?  How could a service that is supposed to minimize the occurrence of unpatched PCs turn against itself so horribly?

I have to imagine that the WU (Windows Update) team doesn’t have any oversight or centralized control over the content that’s being hosted on their site.  If they did (and assuming they’re the folks who paid for the above ad), then they’d take their responsibilities more seriously, and make sure their site could deliver on the promise being advertised.

As it stands, it appears that the team responsible for Windows Genuine Validation feels it’s more important to ensure that their software is being explicitly installed by the end user, than to ensure that Microsoft’s customers are being adequately protected from the constant onslaught of Windows-targeting malware.

Each and every time I have visited the Windows/Microsoft Update site on these “under-managed” PCs (i.e. PCs owned by those folks who have left their PCs alone, as Microsoft promised them they could), I’ve found that I had to perform the “Custom” scan, then accept the only-via-the-web download of the Windows Genuine Validation software, and only then is the computer capable of automatically downloading the remaining few dozen updates that were queued up while the PC was blocked by the requirement to install the validation control.

It seems like the Windows Genuine Validation team isn’t satisfied with their software getting onto every Windows PC in existence; they also seem bound and determined to ensure that every user is explicitly aware that they’re being surveilled by the Microsoft “licensing police”.

Why is it that Windows Update (or Microsoft Update) can update every other piece of software on my Windows PC automatically, but the license police can’t (or won’t) get its act together and make their (unwanted but unavoidable) software available automatically as well?  And don’t tell me it’s a “privacy” thing, or that it wasn’t explicitly allowed in the Windows XP SP2 EULA.  We’ve had plenty of opportunities to acknowledge updated privacy notifications or EULA addenda (hell, there’s at least one of those to acknowledge every year via WU, it seems), so that don’t fly.

So here’s my proposition: I’d love to see the Windows Genuine Validation team fall in line with the rest of the Microsoft “internal ecosystem” and figure out a way to make it so that WU/MU automatic updates actually become automatic again.  Wouldn’t it be grand if Windows systems around the world were still able to keep on top of all the emerging threats on behalf of all those individuals who’ve filled Microsoft’s coffers over the years?

Let’s get the current WGA control packaged up like any other High-Priority update and pushed down on the next Patch Tuesday (pitch it as if it’s similar to the monthly malware scanning tool).  If you have to, add in one of those EULA addenda (with or without a prominent privacy notification up front), and if you’re really worried, run a big press “push” that gets the word out that a privacy notification is coming.  C’mon Microsoft!  You’ve conquered bigger engineering problems before.  This one (at least to my naive viewpoint) can’t possibly be that hard…

Nerd alert – my D&D character is…

I Am A: Chaotic Good Human Wizard (5th Level)


Chaotic Good: A chaotic good character acts as his conscience directs him with little regard for what others expect of him. He makes his own way, but he’s kind and benevolent. He believes in goodness and right but has little use for laws and regulations. He hates it when people try to intimidate others and tell them what to do. He follows his own moral compass, which, although good, may not agree with that of society. Chaotic good is the best alignment you can be because it combines a good heart with a free spirit. However, chaotic good can be a dangerous alignment because it disrupts the order of society and punishes those who do well for themselves.

Humans are the most adaptable of the common races. Short generations and a penchant for migration and conquest have made them physically diverse as well. Humans are often unorthodox in their dress, sporting unusual hairstyles, fanciful clothes, tattoos, and the like.

Wizards are arcane spellcasters who depend on intensive study to create their magic. To wizards, magic is not a talent but a difficult, rewarding art. When they are prepared for battle, wizards can use their spells to devastating effect. When caught by surprise, they are vulnerable. The wizard’s strength is her spells, everything else is secondary. She learns new spells as she experiments and grows in experience, and she can also learn them from other wizards. In addition, over time a wizard learns to manipulate her spells so they go farther, work better, or are improved in some other way. A wizard can call a familiar- a small, magical, animal companion that serves her. With a high Intelligence, wizards are capable of casting very high levels of spells.

Find out What Kind of Dungeons and Dragons Character Would You Be?, courtesy of Easydamus.

MyPicasaPictures Part 5: Hacking the XP development environment for Vista Media Center applications

So I’m going through the Windows Media Center Application Step by Step guide (which, incidentally, is in the XPS format — Microsoft’s pretender to the PDF throne, and a risky format to publish in ahead of widespread adoption of the underlying platform).  I’ve gotten as far as adding the References to the MediaCenter assemblies, when I realize (for the first time) that developing a Media Center application might very well mean having to install the development environment on a Media Center PC.

This seems like an odd dependency to have, considering all the other types of development projects that don’t necessarily require you to run your IDE on the target platform.  So I’m wondering whether (a) I can just drop the assembly files (i.e. the DLLs) into the expected directory, reference them and go, or (b) I’ll have to figure out how to register those assemblies on my non-Media Center system.

On pages 5 and 6 of the Step by Step guide the instructions indicate to add references to the Microsoft.MediaCenter.dll and Microsoft.MediaCenter.UI.dll assemblies.  While it assumes you’re working from a Vista Premium/Ultimate machine and have those assemblies in the %SYSTEMROOT%\ehome folder, I found that copying those files from the VMC box and browsing to their copied location on my Windows XP SP2 system seemed to resolve the references just fine.  [I’m not convinced there won’t be other issues later on, but it’s at least worth a try.]

What’s The Sample Code Doing?

The code going into Application.cs looks interesting, but without any Comments to help us understand the purpose of each line, it’s not very instructive.  For me, this is all I’m able to infer:

  • we’re creating an Application class (though I don’t know what VMC developers would think of as an “application” — an add-in?  a piece of UI?  an assembly?  the whole Media Center process and children?)
  • we’re creating three Properties, each with a private member and its public accessor (though I don’t really have any specific idea what these properties are meant to do)
  • the Application “object” is overloaded, but I’m not even sure if it’s a Property or something else
  • there are references to this, but I don’t really know what the “this” is referencing — is it the Application class, or is it something else?  [It doesn’t help that I’m not an expert at coding, so I don’t intuitively recognize what each of these things are, but it sure would help to reinforce my tentative steps into this if I had Comments that told me what I was looking at.]

I’m also puzzled by why this code doesn’t follow the Design Guidelines for Class Library Developers, when nearly all managed code published by Microsoft adheres to these standards.  For example,

public void DialogTest(string strClickedText)

does not need “str” to prefix the ClickedText variable (aka Hungarian notation) — Visual Studio will always indicate the datatype with tooltips.

As another example, the private member variables

            int timeout = 5;
            bool modal = true;
            string caption = Resources.DialogCaption;

are all named using Camel case but without any prefix (such as a leading “_” character).  I understand that the Design Guidelines don’t specifically call out prefixing rules for member (? or private ?) variables, but apparently the guidelines do recommend the use of “this.” to prefix instance variables (which is awful confusing, as you can see above).  Personally I’d prefer to see “_” or “m_” prefixes on private member variables, but I’d at least like to see *some* consistency in the use (or non-use) of prefixes on variables.

While prefixing is a religious fight, the use of Hungarian is not — it’s clearly not recommended.

Compile Errors

I followed the Step by Step through to Build the Solution, at which point I got back one error in the compiler, at Line 55 of my code:

‘MyProject.Resources’ does not contain a definition for ‘DialogCaption’

If I followed the code adequately, the Step by Step has a bug (or at least an ambiguity) in the step Add Resources.resx/Add Strings.  Whereas the Step by Step guide actually wanted me to create one String called “DialogCaption” in the Resources file, I understood it to instruct me to create two Strings — one called “Name” and the other called “Value”.

It seems pretty obvious now, but at the time my interpretation seemed intuitive.


After resolving that more trivial issue, the next Build confronted me with this error:

The command “%windir%\eHome\McmlVerifier.exe -verbose -assemblyredirect:”C:\VS2005 Projects\MyProject\MyProject\bin\Debug” -directory:”C:\VS2005 Projects\MyProject\MyProject\Markup”” exited with code 9009.

This turned out to be harder (and dumber) to diagnose.  There were no specific errors related to McmlVerifier and 9009, and the first real lead from Google referred to path quoting issues.  I finally realized that the McmlVerifier.exe file didn’t even exist on my XP (development) system.  Strangely though, it didn’t exist on the VMC system either – is this one of the SDK tools?  Yes.  However, for some reason the SDK didn’t install it into the %windir%\ehome directory.

Once I copied McmlVerifier.exe to the ehome directory, the Build completed successfully.
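For reference, exit code 9009 is cmd.exe’s generic “command is not recognized” failure; in other words, the build task simply couldn’t find McmlVerifier.exe where it expected it.  Here’s a hedged sketch (in Python, with the SDK path shown only as an assumption; adjust for your install) of the check-and-copy step that fixed it:

```python
import os
import shutil

def ensure_tool(tool_src: str, dest_dir: str) -> str:
    """Copy a build tool into dest_dir if it isn't already there.

    Exit code 9009 from an MSBuild exec task usually just means cmd.exe
    couldn't find the command -- i.e. the .exe is missing from the
    expected directory, as McmlVerifier.exe was here.
    """
    dest = os.path.join(dest_dir, os.path.basename(tool_src))
    if not os.path.exists(dest):
        shutil.copy2(tool_src, dest)
    return dest

# Hypothetical paths -- the SDK drop location may differ on your machine:
# ensure_tool(r"C:\Program Files\Microsoft SDKs\Windows Media Center"
#             r"\v5.0\Tools\McmlVerifier.exe", r"C:\Windows\ehome")
```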

Strangely, most of the steps under Enable UI Testing and MCML Verification (steps 7-17) were redundant – they were already completed on my behalf.


When I tried Debugging the solution, I got yet another error:

System.DllNotFoundException was unhandled

Unable to load DLL “EhUI.dll”: The specified module could not be found.
(Exception from HRESULT: 0x8007007E)

However, this time when I merely copied EhUI.dll to the %windir%\ehome directory, the Debugger threw another exception, slightly more impenetrable:

System.DllNotFoundException was unhandled

Unable to load DLL “EhUI.dll”: The specified procedure could not be found.
(Exception from HRESULT: 0x8007007F)

The stack trace for this was:

at Microsoft.MediaCenter.Interop.RenderApi.SpWrapBufferProc(ProcessBufferProc pfnProcessBufferProc, IntPtr* ppNativeProc)\r\n
at Microsoft.MediaCenter.Interop.RenderApi.InitArgs..ctor(MessageCookieLayout layout, ContextID idContextNew, ProcessBufferProc pfnProcessBufferProc)\r\n
at Microsoft.MediaCenter.UI.MessagingSession..ctor(UiSession parentSession, ContextID idLocalContext, TimeoutHandler handlerTimeout, UInt32 nTimeoutSec)\r\n
at Microsoft.MediaCenter.UI.UiSession..ctor(ContextID idLocalContext, IServiceProvider parentProvider, RenderingInfo renderingInfo, EventHandler rendererConnectedCallback, TimeoutHandler handlerTimeout, UInt32 nTimeoutSec)\r\n
at Microsoft.MediaCenter.UI.UiSession..ctor(RenderingInfo renderingInfo)\r\n
at Microsoft.MediaCenter.Tools.StandAlone.Startup(String[] args)\r\n
at Microsoft.MediaCenter.Tools.StandAlone.Startup()\r\n
at Microsoft.MediaCenter.Tools.McmlPad.Main(String[] args)

Given that I’ve got the Vista version of EHUI.DLL in the right place, I assumed I’d have to “register” it (I forget the equivalent in the .NET Framework world) so that its procedures could be “found”.  However, before going down a “.NET theory” path I decided to google the error message.  The first five or so forum posts that came back finally clued me in that this might in fact represent a second-order dependency on yet another missing DLL.  With that, I decided to just copy in the entire contents of my VMC’s %WINDIR%\eHome directory and retry debugging.

Debugging exponentially: Process Monitor, Dependency Walker

That wasn’t any more successful, so next I fired up Process Monitor and watched for any “NOT FOUND” errors that followed the EHUI.DLL load event (hoping that this was merely a case of a missing filesystem reference, and not something more sinister).  That helped me discover the following errors (presumably, as is often the case in these situations, mostly a list of false positives):

  • McmlPad.exe called RegCreateKey() for HKLM\Software\Microsoft\Fusion\GACChangeNotification\Default
  • McmlPad.exe failed to find OLE32.DLL under C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727, and oddly didn’t try searching the environment’s PATH locations
  • devenv.exe searched multiple times for C:\temp\symbols.flat.txt and McmlPad.pdb (symbols) under C:\temp\symbols\McmlPad.pdb\6B1042D64F29479FA1C07939AE072D941\, C:\Windows\symbols\exe, C:\Windows\exe, C:\Windows
  • McmlPad.exe failed to find C:\WINDOWS\assembly\GAC\PublisherPolicy.tme (and didn’t look anywhere else)
  • McmlPad.exe tried looking for a couple of files in the GAC search paths before it went looking directly for the file in C:\Windows\ehome:
    • Microsoft.MediaCenter\6.0.6000.0__31bf3856ad364e35
    • Microsoft.MediaCenter.UI\6.0.6000.0__31bf3856ad364e35
  • Once McmlPad.exe successfully loaded EHUI.DLL from C:\Windows\ehome, it (successfully) opened these files:
    • C:\WINDOWS\assembly\GAC_32\mscorlib\\sorttbls.nlp
    • C:\WINDOWS\assembly\GAC_32\mscorlib\\sortkey.nlp
  • Then devenv.exe successfully loaded C:\Program Files\Microsoft Visual Studio 8\Common7\Packages\Debugger\cscompee.dll
  • Then McmlPad.exe loaded C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\System.Configuration\eee9b48577689e92db5a7b5c5de98d9b\

Hmmm, I’m not sure I learned much from that.  It looks like the application being debugged (McmlPad.exe) was looking for some GAC registration info and a few obscure files.  However, it’s likely that this is expected behaviour even on a working system.

So I went back to my google search results, which convinced me to try DEPENDS.EXE to see what it would say.  I expected nothing, frankly, but it gave me two new leads I wouldn’t have otherwise found: it indicated that EHUI.DLL is looking for DWMAPI.DLL and DXGI.DLL, neither of which were present anywhere on my system.
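Incidentally, the file-presence half of what Dependency Walker does (minus the export-level analysis) is easy to approximate yourself.  This little Python sketch just reports which required DLL names are present in none of the given directories:

```python
import os

def missing_dlls(required, search_dirs):
    """Return the subset of `required` DLL names found in NONE of search_dirs.

    A poor man's Dependency Walker: it only checks whether the files exist
    on disk, not whether they export the procedures a caller needs.
    """
    found = set()
    for d in search_dirs:
        try:
            names = {n.lower() for n in os.listdir(d)}
        except OSError:
            continue  # skip directories that don't exist
        found |= {r for r in required if r.lower() in names}
    return sorted(set(required) - found)

# On my XP box, something like this would have flagged the two culprits:
# missing_dlls(["DWMAPI.DLL", "DXGI.DLL"],
#              [r"C:\Windows\system32", r"C:\Windows\ehome"])
```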

Fascinating – one of the first Forum posts I found referencing DWMAPI.DLL indicates this file may not be needed when “linked” by a multi-OS-targeting application.  However, I suspect that for VMC libraries like EHUI.DLL, these two missing DLLs are not “load on demand” libraries – they’re just libraries that don’t appear on non-Vista platforms.

Once I grabbed copies of these two files from my Vista machine and dropped them into the %windir%\ehome folder, DEPENDS.EXE warned me that D3D10.DLL and NUCLEUS.DLL were required for DXGI.DLL, and that EHTRACE.DLL (a demand-load library) was missing.  Okey-dokey… and then I’m “warned” that even with all these files in place…:

Error: At least one module has an unresolved import due to a missing export function in an implicitly dependent module.
Warning: At least one module has an unresolved import due to a missing export function in a delay-load dependent module.

Feeling lucky despite all evidence to the contrary, I re-ran MyProject from Visual Studio, but no love was had – still and all, “the specified procedure could not be found”.  OK, let’s continue…

The unresolved imports were flagged in the following libraries on my XP development PC:


Unresolved Import(s):

  • MSVCRT.DLL: _except_handler4_common
  • D3D9.DLL: Direct3DCreate9Ex
  • USER32.DLL: IsThreadDesktopComposited
  • MPR.DLL: WNetRestoreConnectionA

So, the obvious question is: if I copy *these* libraries from my VMC computer to C:\windows\ehome on my development XP computer, will that suffice to allow McmlPad.exe + Visual Studio 2005 to successfully debug my VMC app?

And the answer is: not quite yet.  Unfortunately, when loading EHUI.DLL, Windows will still end up loading the standard libraries (such as MSVCRT.DLL) from the PATH (i.e. %windir%\system32).  I realize that Windows is just trying to behave nicely, not being “tricked” into loading rogue versions of already-installed libraries, so I am not too upset by this.  However, this has turned into a much more difficult problem than I first anticipated.
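The behaviour I hit follows from the Windows loader’s search rules: “known DLLs” such as MSVCRT.DLL are always served from system32 before any per-directory search happens, so a copy dropped next to the application never wins.  Here’s a toy Python model of that ordering (deliberately simplified — the real rules also involve SafeDllSearchMode, manifests, and more — and the known-DLL list shown is just a sample):

```python
import os

def find_dll(name, app_dir, system32, windir, path_dirs):
    """Very simplified model of the classic Windows DLL search order.

    "Known DLLs" (msvcrt.dll among them) are always resolved from
    system32 before any directory search happens -- which is why copying
    a Vista msvcrt.dll into the app directory changes nothing.
    """
    known_dlls = {"msvcrt.dll", "user32.dll", "advapi32.dll"}  # sample only
    if name.lower() in known_dlls:
        return os.path.join(system32, name)
    # Otherwise: application dir, then system dirs, then PATH.
    for d in [app_dir, system32, windir] + list(path_dirs):
        candidate = os.path.join(d, name)
        if os.path.exists(candidate):
            return candidate
    return None
```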

Files copied to my development system so far:

  • %windir%\ehome\Microsoft.MediaCenter.dll (from VMC system)
  • %windir%\ehome\Microsoft.MediaCenter.UI.dll (from VMC system)
  • %windir%\ehome\ehui.dll (from VMC system)
  • %programfiles%\Microsoft SDKs\Windows Media Center\v5.0\Tools\McmlVerifier.exe (to %windir%\ehome)
  • %programfiles%\Microsoft SDKs\Windows Media Center\v5.0\Tools\McmlPad.exe (to %windir%\ehome)
  • %windir%\system32\dwmapi.dll (from VMC system)
  • %windir%\system32\dxgi.dll (from VMC system)
  • %windir%\system32\d3d10.dll (from VMC system)
  • nucleus.dll (from I don’t remember where)
  • %windir%\system32\msvcrt.dll (from VMC system)
  • %windir%\system32\d3d9.dll (from VMC system)
  • %windir%\system32\user32.dll (from VMC system)
  • %windir%\system32\advapi32.dll (from VMC system)
  • %windir%\system32\mpr.dll (from VMC system)


  • 😦 “I will say that if you are a hobbyist then MCML is not likely to be very friendly to you.”
  • 😦 “It took me a long while to create a gallery that displays a list of images from an ArrayListDataSet.”
  • 🙂 “I have persevered in the days since my last post and have managed to get a repeater/scroller working to emulate the behaviour of the ‘my music’ section of media center.”
  • 🙂 “has a good tutorial on making the sliding menu at the top of the screen.”
  • 😦 “Following the rumour about Microsoft encouraging their internal teams to deprecate the use of an underscore prefix for private fields, I have decided to do the same.”  [I just got my head wrapped around the use of underscores for private fields, and now I need to unlearn that behaviour just as quickly.  At least I haven’t done too much damage with it.]
  • 🙂 “If you can’t easily work out where the variable is defined (local scope, local parameter, member variable) then your code is too complex and needs refactoring.  It’s that simple.”  [I can handle that idea, yeah.]

MyPicasaPictures Part 4: Smart Client Software Factory — a deeper look

Having determined that of all the Patterns and Practices guidance, the SCSF is likely the one with the most to offer my project, I decided to see what following the Getting Started steps would get me.

And just my luck, I can’t even get past the first steps — to create a Hello World application, I need to be able to select the “Guidance Packages Project” type in Visual Studio, but it appears the packages didn’t even get installed.  I’ve run the SCSF (May 2007) installer twice and selected everything, but still it doesn’t add any entries to the New Project dialog.

What’s a little disturbing is that when I started to peruse the discussions to see if anyone else had run into this, the detailed steps that people had to go through to troubleshoot problems with this guidance were shocking (see this thread, for example).

I really didn’t expect this “factory” to be this complex — I was hoping for a library of code and simple steps to piece different bits together.  What I seem to have gotten instead is another “learning opportunity”, much like the MCML rabbit hole — some limited set of benefits that’s masked by a steep up-front learning curve, and that only benefits your use of that proprietary set of tools.

Does anyone ever just try to create a set of code snippets or object libraries?  Why does everything require learning a whole new development paradigm, and some new hierarchy of newly-overloaded terminology that doesn’t relate to any of the stuff upon which it purports to build?


Dependency Checker: decoupled from install?

Wow, so I decided to go right back to the beginning, and see if there were any hidden steps I missed the first couple of times.  It turns out that, without making too much of a big deal of it, the SCSF team has written a separate downloadable tool that checks for all the required and “optional” (?) dependencies [who ever heard of an optional dependency?  That’s a new one on me].  After all this time, I’ve come to expect that the installer for the program I’m after would check those dependencies as a first step, and tell me to go get the pieces I’m missing if it detects that it won’t work once installed.

No, for whatever reason these guys seem to have decided that they want their end users to “educate” themselves on what it really means to use their guidance, so they seem to have intentionally left at least one land mine in place for most newbies to trip over.  I dunno, but if most .NET applications can test for the presence of the required .NET Framework (and halt the install if it’s missing), how much harder could it possibly be to add this dependency checker code as a pre-install step and direct users to the pieces they’re missing?
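For the record, the pre-install check I’m asking for is trivial to sketch: a console stub like this (the paths are made-up placeholders, not the SCSF’s real prerequisite list) is about all it would take.

```csharp
using System;
using System.IO;

// Toy illustration of a pre-install dependency check. The paths below are
// hypothetical placeholders; a real installer would probe its actual
// prerequisites (files, registry keys, installed runtimes).
class DependencyCheck
{
    static readonly string[] RequiredFiles =
    {
        @"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\mscorlib.dll",
        @"C:\Program Files\SomePrerequisite\SomePrerequisite.dll",
    };

    static int Main()
    {
        foreach (string path in RequiredFiles)
        {
            if (!File.Exists(path))
            {
                // Halt the install and point the user at the missing piece.
                Console.WriteLine("Missing prerequisite: " + path);
                return 1;
            }
        }
        Console.WriteLine("All prerequisites found.");
        return 0;
    }
}
```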

This reminds me of the days when you downloaded Eudora or Mosaic to “access the Internet”, and then later learned that you needed to download Trumpet Winsock or some other TCP/IP stack before they’d work.

YAFR (Yet Another Freakin Runtime): Guidance Automation Extensions

Well, so it looks like I’m missing something called the “Guidance Automation Extensions”, which sound reminiscent of the Office PIAs — yet another set of libraries that are necessary before the specialized development software will ever work.  Why does everyone seem to want to carve off a set of “foundation libraries”, rather than just include them in their software package itself?  Why do these scenarios inevitably require the end user to become an expert in the internals of the application, rather than just make it as easy as possible to get up and running in one step?

Maybe it’s the Micro-skeptic in me, seeing how often people at Microsoft fooled themselves into thinking that *their* little piece of the platform was the *most* important thing around, and why *shouldn’t* our users spend some time learning how our stuff is supposed to be used?  It’s just such ridiculous arrogance, and in a lot of ways, I suspect it stems from the hiring process itself — hire these geeks straight out of college (with no previous real-world experience or humility, i.e. never having failed at anything in their lives), tell them they’re the absolute best of the best, and then set them loose on a project.  Of *course* they’re going to believe that whatever comes from their mind is the product of god.

Now, after installing the so-called “GAX”, re-running the SCSF installer enables me to select the “Automated Guidance” option (which had been greyed out before – something that, in other installers, simply means you’ve already installed those bits and don’t need to worry about them again).

Why CAB in the first place?

However, this is just the beginning of the confusion – for a tool that’s supposed to make it easy to build a well-patterned application, the discussions and blogs around SCSF make it clear that this is no library – this is yet another “only once you’re expert with it should you use it” tool.

Bil Simser says, “The point is that SCSF made me more efficient in how I could leverage CAB, just like ReSharper makes me a more efficient developer when I do refactorings.”  However, what he pointedly fails to say is that SCSF makes him more efficient as a developer.  The debate assumes that you have already determined you must use CAB, and once you’ve fallen down that crevasse, what’s the best way to crawl back out?

I guess I need to take this one step further back, and understand the value of CAB to my current application development needs.  Until that time, I’m going to be very skeptical of having to learn the internal logic and complexity of yet another proprietary tool, and be very resistant to investing the time needed to learn enough to make it useful.

A little earlier in his blog, Bil Simser has this to say:

I’ll be the first to admit that CAB is complex. EntLib is large. There is a lot there. As Chris said this morning in what I think was an excellent response to the entire discussion, CAB for example is not just about building maintainable WinForm apps. I like CAB as it gives me a bunch of things and they all work together in a fairly harmonious way. EventBroker is a nice way to message between views and keeping the views separate; CommandHandlers allow me to hook up UI elements indirectly to code to execute them; the ActionCatalog let’s me security trim my commands (and in turn my UI); and the implementation of the MVP pattern using views lets me write presenter tests and keep my UI thin. This all makes me feel good. Did it take me a while to get here? Absolutely. I’ve spent the better part of a year learning CAB, EntLib, ObjectBuilder, WorkItems, and all that jargon but it’s no different than learning a dozen different 3rd party libraries. I simply chose the MS path because it was there and everything was in one neat package. If you packaged up Castle, NHibernate, StructureMap, and others together in a single package maybe I would have chosen that path (and is there really two different paths here? I use both tools together anyways).

Yipes!  Better part of a year?  [And that’s not even counting the SCSF learning curve!]

Not to mention that even the experts wouldn’t want to try to explain it in a short amount of time:

Unfortunately [Entlib/CAB is] not something I could introduce at a conference or User Group session and describe the entire stack in an hour, so I tend to avoid showing off applications and concepts using it as it just turns into a discussion of what (SmartPart) means instead of the main goal like describing MVP which I can do with my own code.

And the promise that even with all of what’s bundled into Entlib and CAB, you could still find yourself dragging in all sorts of other crap:

Eventually I could have a really ugly monster on my hands with copies of Castle, StructureMap, CAB, EntLib, NHibernate, log4net, and who knows what else all living (hopefully) together in happy existence. I don’t want that.

More words to give me pause, from Chris Holmes:

I’ll also say this about ObjectBuilder : I agree with Jeremy; I wouldn’t use it on its own. It’s hideous. It is overly complicated, undocumented and cumbersome. It is the worst part of EntLib and CAB. Fortunately, when using EntLib and CAB I don’t have to manipulate ObjectBuilder. If I did, I’d have abandon both tools a long time ago.

So we come to this question: If I don’t like ObjectBuilder, then why am I using CAB or EntLib?

Now, some people might say “that’s Big Design Up Front”. I’d argue otherwise. Our ultimate goal was testability. We wanted to adopt an MVP architecture so that we could have a very testable UI layer. We wanted automated tested; we wanted automated builds; we wanted the confidence that comes from having a vast suite of unit tests; we wanted the confidence to refactor our code without fear of breaking a tightly coupled system. So our goal was not to have “nifty tools”, but to have a framework to build upon that would give us our ultimate goal: a testable UI layer that we all would have confidence in.

My assumption was that [rolling my own code] would take a considerable amount of time (and not as BDUF; but as time spent building the pieces when necessary. The bottom line is, it still takes time, no matter how you allocate it, and I wasn’t certain how much). That assumption was probably unfortunately influenced by looking at the CAB as a frame of reference and saying, “Holy cow, there’s a lot there…”

So we went with CAB. The idea was that we thought it would help us get out of the gate faster. I thought it would take less time to grok the CAB and make use of it than roll my own code. Maybe that was an error in judgment. It’s certainly possible, I am human and very flawed. But I did manage to grok the CAB pretty fast compared to other adopters, so it seemed like a good decision to me at the time.

I chose CAB because I thought it was the best solution at the time, given everything I knew about software design and weighing all of the other factors involved. I might make a different decision today, given the amount of knowledge I’ve accumulated and the tools that have emerged. I might do just what Jeremy and Oren propose, and grab an IoC tool like Castle and grow my own solutions as necessary.

SCSF, CAB & Entlib: just not ready for the Media Center developer crowd

While having an IM conversation with one of my respected colleagues today, I complained about all the time I was investing in learning about yet another framework:

I just dove deep into the CAB/SCSF/MVC-P crevasse, and it’s amazing how much of learning the bit you need depends on learning the next-layer-down foundational piece – and how many f’n foundational layers there are once you start looking closely.

The thing that J.D. Meier & co did well was provide some automated “wizards” that set up all the skeleton code once you know which pattern you want to use.

The part they addressed poorly is to provide any “entry-level” patterns for weekend hackers like me – everything is all about “steep learning curve, big architecture, lots of pieces to re-use once you figure out the puzzle up front”.  I’m not even convinced I ever want to do this kind of project again, and the amount of setup to even try it is making it less likely I’ll even do it once.

It strikes me that if I were to adopt one of these mammoth frameworks for the MyPicasaPictures project, the only advantage to the pre-defined pieces that are available to fire up at will is that other people (already familiar with the same framework) would have an easier time finding and customizing the pieces they wanted to bolt in.

For the rest of the Media Center community, if they wanted to extend the application or plug in a new module, they’d be forced to suffer the same learning curve as I’m facing right now.  This is not a community that I’d expect to be familiar with these enterprise development frameworks, nor do I feel particularly cruel enough to try to foist such a hairball upon them.

Fascinating quote to end this research vein with:

The end result is that this [anti-Agile programming] architect and I could not possibly coexist on the same project.  My longstanding opinion that traditional architects are useless if not downright harmful remains completely intact.  I guess you wouldn’t be slightly surprised to find out this guy is a big proponent of software factories as well.  Of course, this guy had more people in his talks and better evaluations than me, so what do I know?

If his description is any indication, and if I assume the likelihood that factories-based development suits the Media Center developer crowd is slim to none, then let’s just forget about the whole idea of using factories/frameworks for VMC-oriented development.



Next stop: back to the drawing board.  Perhaps I’ll work on a few crisp use cases to anchor the development of one or more rounds of Agile-esque coding.

MyPicasaPictures Part 3: Abstraction, Encapsulation, Model-View pattern — starting on the design

…I’ve never actually implemented a design pattern before, so this’ll be a learning experience all to itself.

However, I have implemented in previous apps what I’ve understood as the distributed application n-tier logical separation approach.  That is, I’ve tried to implement a basic separation/abstraction between the data access code and the “display” (presentation layer) code.  I’ve also tried to encapsulate some of the more substitutable functionality into utility classes, or wrap the access to a particular application behind a generic function call, so that I (or someone else) could substitute an alternative approach or target later on.

With that in mind, here are a few ideas that would make sense for MyPicasaPictures (without an understanding of what the Model-View pattern recommends):

  • provide a class for accessing the remote web service (Picasa Web Albums at first, but others later on), which would provide methods like GetMetaData() & SetMetaData() [both overloaded for one or many Pictures], UploadPicture(), UploadPictures(), DeleteRemotePicture(), RenameRemotePicture(), and Authenticate().
  • provide a class for the local Pictures and Folders, if needed, to create any custom Properties or Methods that aren’t entirely suited to this web services work.  Perhaps there’s some class in the VMC SDK for managing Pictures that includes the specific Properties & metadata that’s exposed by VMC?
  • provide a class for tying together the boring stuff like Startup, Shutdown, cleaning up connections, releasing resources.
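The three classes above could be sketched as interfaces along these lines; every name here is my own invention (nothing from the VMC SDK or the Picasa API):

```csharp
using System.Collections.Generic;

// Hypothetical interfaces sketching the separation described above.
// All names are illustrative, not taken from any real SDK.

public interface IRemoteAlbumService
{
    void Authenticate(string user, string password);
    void UploadPicture(string localPath);
    void UploadPictures(IEnumerable<string> localPaths);
    void DeleteRemotePicture(string remoteId);
    void RenameRemotePicture(string remoteId, string newName);
    IDictionary<string, string> GetMetaData(string remoteId);
    void SetMetaData(string remoteId, IDictionary<string, string> metadata);
}

public interface ILocalPictureStore
{
    IEnumerable<string> GetFolders();
    IEnumerable<string> GetPictures(string folder);
}

public interface IAppLifecycle
{
    void Startup();
    void Shutdown();   // close connections, release resources
}
```

Swapping Picasa Web Albums for some other photo service later would then mean writing a second IRemoteAlbumService implementation, not touching the UI.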


Funny how, once I reviewed the VMC SDK’s Model-View Separation discussion, it turned out to be pretty much what I’ve described above.  I was hoping for something more granular, more sophisticated — something I could really sink my teeth into to get a head-start on this web services stuff.

Maybe the…simplicity is because this is just a subset of the Model-View-Presenter pattern, which itself has evolved into more sophisticated Supervising Controller and Passive View patterns.
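For comparison, the Passive View flavor boils down to a dumb view interface plus a presenter that owns all the logic, so the presenter can be unit-tested against a fake view.  A minimal sketch (my own names, no framework involved):

```csharp
// Passive View sketch: the view exposes only display operations, and the
// presenter decides what to show. A test can pass in a fake IAlbumView
// and assert on which method the presenter called.
public interface IAlbumView
{
    void ShowPictures(string[] names);
    void ShowError(string message);
}

public class AlbumPresenter
{
    private readonly IAlbumView view;

    public AlbumPresenter(IAlbumView view)
    {
        this.view = view;
    }

    public void Load(string[] pictures)
    {
        if (pictures == null || pictures.Length == 0)
            view.ShowError("No pictures found");
        else
            view.ShowPictures(pictures);
    }
}
```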

Model-View, and the VMC-specific code

I’ve been thinking about this for a few days, and I’ve been hammering pretty hard on the MediaCenterSandbox community (trying to get my head around the MCML development model — NOT an intuitive thing, let me tell you).  The more I’ve been digging into this, the more planning and learning I realize I’ll need to do before diving into UI development.  That was the most obvious area to start with (or at least the area I usually start with, giving me more time to mull over how the underlying classes will be divvied up and various functions will be implemented).  Media Center UI work also seems to be where all the ‘talk’ is.

However, I finally had a revelation today, that’ll make my life easier and let me get down to building something: why not start at the bottom, and build my way up?  I was thinking about the Model-View pattern, and it seemed obvious all of a sudden, to first develop “…the logic of the application (the code and data)…”.  Why not generate the web services client code, the data send and receive code, then create the local proxy stubs that call the web services functions, and then the functions for accessing and manipulating the local Pictures and Folders?

While I stabilize the “model” code, I can continue to investigate the MCML UI option, as well as other ways that I might “surface” the functionality I’m building in the “model” portion of the code.  [What if I just built a normal WinForms app?  Or a command-line batch processor?]  [Or why not just fall back to Picasa and be done with it?]

And since the “model” code would all be native C#, no MCML or VMC-isms, there’d be no lost work — all good ol’ fashioned reusable lessons, plus getting to know the Google Data API while I’m at it.  Can’t really lose with this approach, now can I?
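As a first “model” step, fetching a user’s public album feed from the Picasa Web Albums Data API might look something like this (the URL follows Google’s documented feed scheme of the era; the class name is mine):

```csharp
using System.IO;
using System.Net;

// Rough sketch of the model-first idea: pull a user's public album feed
// (an Atom XML document) with no UI involved. The feed URL shape follows
// the Picasa Web Albums Data API as documented circa 2007.
class PicasaFeedClient
{
    public static string GetAlbumFeed(string userId)
    {
        string url = "http://picasaweb.google.com/data/feed/api/user/" + userId;
        WebRequest request = WebRequest.Create(url);
        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();   // raw Atom XML, to be parsed later
        }
    }
}
```

Parsing that Atom feed into local Picture objects would then be the next layer up in the model, still with zero MCML involved.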



  • Apparently, Background Add-ins are *not* persistent processes — i.e. they’re not intended to run the whole time from startup of the Media Center UI until it’s closed, or the box is rebooted.  [God only knows what the hell they’re good for if they’re ephemeral *and* invisible to the user…]
  • MCE Controller — a SourceForge C# project that enables programmatic control of the MCE UI.  Last updated a couple of years ago, so it’s hard to know how much of it is still relevant to VMC, and whether much of it has been replaced by the Media Center class libraries.