Occupied Neurons, early July 2016: security edition

Who are you, really: Safer and more convenient sign-in on the web – Google I/O 2016

The Google Chrome team shared some helpful tips for web developers on making it as easy as possible for users to sign in securely to your web site:

  • Simple (if annoying that we still have to use them) attributes to add to your forms to assist password-manager apps
  • A Credential Management API that (though cryptically explained) smooths out some of the steps in retrieving creds from the Chrome Credential Manager
  • This API also addresses some of the security threats (plaintext networks, JavaScript-in-the-middle, XSS)
  • Then they discuss the FIDO UAF and U2F specs – where the U2F “security key” signs the server’s secondary challenge with a private key whose public key is already enrolled with the online identity the server is authenticating
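From the talk, the Credential Management API flow boils down to: ask the browser’s credential store for a saved password, and sign the user in silently if one comes back. Here’s a minimal sketch in TypeScript – the `CredentialStore` shape and the `signIn` callback are my own stand-ins for `navigator.credentials` and your site’s login POST, so treat those names as illustrative:

```typescript
// Minimal shape of the credential returned by the browser's store
// (a PasswordCredential exposes `id` and `password`).
interface StoredCredential {
  id: string;       // typically the username
  password: string;
}

// Minimal shape of the browser's credential container that we rely on.
interface CredentialStore {
  get(options: { password: boolean; mediation?: string }): Promise<StoredCredential | null>;
}

// Attempt a silent sign-in using whatever the credential manager has
// stored; fall back to the login form when nothing comes back.
// `signIn` is a hypothetical callback that POSTs creds to your server.
async function trySilentSignIn(
  store: CredentialStore,
  signIn: (id: string, password: string) => Promise<boolean>
): Promise<boolean> {
  const cred = await store.get({ password: true, mediation: "silent" });
  if (!cred) return false; // nothing stored, or the browser wants a user gesture
  return signIn(cred.id, cred.password);
}
```

In a real page you’d pass `navigator.credentials` (whose `get({ password: true, mediation: "silent" })` resolves to a `PasswordCredential` or `null`) plus a function that POSTs to your own login endpoint.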

The U2F “security key” USB dongle idea is cute and useful – it requires the user’s interaction with the button (so it can’t be automatically scraped by silent malware), uses public-key signatures (ECDSA, in the U2F spec – not RSA) to provide strong proof of possession, and can’t be duplicated. But as with any physical “token”, it can be lost, and it requires a physical interface (e.g. USB) that not all devices have. Smart cards and RSA tokens (the one-time code generators) never entirely caught on either, despite their laudable security merits.
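The challenge-signing round trip is easy to model with ordinary crypto primitives. A sketch in TypeScript, using node’s crypto module to stand in for the hardware (in real U2F the private key never leaves the dongle, and signing requires the button press):

```typescript
// Sketch of the U2F-style round trip: server issues a fresh challenge,
// the "token" signs it, the server verifies against the enrolled public key.
import { createSign, createVerify, generateKeyPairSync, randomBytes } from "node:crypto";

// "Enrollment": generate the keypair the token would hold. prime256v1 is
// the P-256 curve, which the U2F spec mandates for its ECDSA signatures.
const { publicKey, privateKey } = generateKeyPairSync("ec", { namedCurve: "prime256v1" });

// Server side: a fresh random challenge for this sign-in attempt.
const challenge = randomBytes(32);

// "Token" side: sign the challenge with the private key.
const signer = createSign("SHA256");
signer.update(challenge);
const signature = signer.sign(privateKey);

// Server side: verify the signature with the enrolled public key.
const verifier = createVerify("SHA256");
verifier.update(challenge);
const verified = verifier.verify(publicKey, signature);
```

Replaying the captured signature against a different challenge fails verification, which is why a fresh random challenge per attempt defeats simple credential replay.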

The Credential Manager API discussion reminds me of the Internet Explorer echo chamber from 10-15 years ago – Microsoft browser developers adding in all these proprietary hooks because they couldn’t imagine anyone *not* fully embracing IE as the one and only browser they would use everywhere. Disturbing to see Google slip into that same lazy arrogance – assuming that web developers will assume that their users will (a) always use Chrome and (b) be using Chrome’s Credential Manager (not an external password manager app) to store passwords.

Disappointing navel-gazing for the most part.

Google’s password-free logins may arrive on Android apps by year-end

Project Abacus creates a “Trust Score API” – an interesting concept that intends to supplant the need for passwords or other explicit authentication demands by taking ambient readings from the device’s sensors and the user’s interaction patterns, to determine how likely it is that the current holder/user matches the identity being asserted/authenticated.

This is certainly more interesting technology, if only because it allows any organization/entity to set its own tolerance/threshold per usage, requiring a different “Trust Score” depending on the value of the data/API/interaction the user is attempting. A simple lookup of a bank balance could require a lower score than a transfer of money out of an account, for example.
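A per-operation threshold scheme like that is trivial to express. The sketch below (TypeScript) is purely illustrative – Google hasn’t published what a Trust Score looks like, so I’m assuming a score in [0, 1] and inventing the operation names and thresholds:

```typescript
// Hypothetical thresholds per operation; the numbers and names are invented.
// The ambient trust score would come from something like Abacus's Trust
// Score API; here it's just a number in [0, 1].
const requiredScore: Record<string, number> = {
  viewBalance: 0.4,   // low-risk, read-only
  transferFunds: 0.9, // high-risk, moves money
};

// Fail closed: unknown operations and insufficient scores are both denied.
function isAllowed(operation: string, trustScore: number): boolean {
  const needed = requiredScore[operation];
  return needed !== undefined && trustScore >= needed;
}
```

A score of 0.6 would let the same user check a balance but bounce them to stronger explicit authentication before a transfer.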

The only trick is that the user must allow Google to continuously measure All The Thingz on the device – listen on the microphone, watch all typing, observe all location data, see what’s in front of the camera lens. Etc. Etc. Etc.

If launched today, I suspect this would trip most users’ “freak-out” instinct and fail, so kudos to Google for taking it slow. They’re going to need to shore up the reputation of Android phones – the inscrutably cryptic (if comprehensive) permissions model, and how well apps are sandboxed – before they’ll ever earn widespread trust for Google to watch everything you’re doing.


Looks like Microsoft is incorporating “widely-used hacked passwords” into the set of password rules that Active Directory can enforce against users trying to establish a weak password. Hopefully this’ll be less frustrating than the “complex passwords” rules that AD and some of Microsoft’s more zealous customers like to enforce – rules that make it nigh-impossible to know what the requirements are, let alone give a sentient human a chance of choosing a password you might want to type 20-50 times/day. [Not that I have any PTSD from that…]

Unfortunately, they do a piss-poor job of explaining how “Smart Password Lockout” works. I’m going to take a guess at how it works, and hopefully someday it’ll be spelled out. It appears they’ve got some extra smarts in the AD password authentication routine on the server side – it can effectively determine whether a bad password attempt came from an already-known device or not. This implies that AD is keeping a rolling cache of “familiar environments” – likely one that ages out older records (e.g. flushing anything older than 30 days). What’s unclear is whether they’re recording remote IP addresses, remote computer names/identities, remote IP address subnets, or some new “cookie”-like data that wasn’t traditionally sent with the authentication stream.

If this is based on Kerberos/SAML exchanges, then it’s quite possible to capture the remote identity of the computer from which the exchange occurred (at least for machines that are part of the Active Directory domain). However, if this is meant as a more general-purpose mitigation for accounts used in Internet-facing (not Active Directory domain) settings, then unless Active Directory has added cookie-tracking capabilities it didn’t have a decade ago, I’d imagine they’re operating strictly on the remote IP address enveloping any authentication request (Kerberos, NTLM, Basic, Digest).
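Whatever the identifier turns out to be (IP, machine identity, or some cookie-like token), my guess amounts to a rolling cache with age-out, roughly like this sketch (TypeScript; the 30-day window and every name here are my speculation, not anything Microsoft has documented):

```typescript
// Speculative model of a "familiar environments" cache: remember where
// successful sign-ins came from, age entries out after 30 days, and only
// count failed attempts from unfamiliar sources toward account lockout.
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

class FamiliarEnvironments {
  // source (e.g. remote IP or machine identity) -> time of last success
  private lastSeen = new Map<string, number>();

  recordSuccess(source: string, now: number): void {
    this.lastSeen.set(source, now);
  }

  isFamiliar(source: string, now: number): boolean {
    const seen = this.lastSeen.get(source);
    return seen !== undefined && now - seen <= THIRTY_DAYS_MS;
  }

  // A failed attempt only counts toward lockout when the source is
  // unfamiliar -- so a brute-forcer elsewhere can't lock out the real user.
  countsTowardLockout(source: string, now: number): boolean {
    return !this.isFamiliar(source, now);
  }
}
```

This captures why the feature could lock out a remote brute-forcer while still letting the legitimate user sign in from their usual environment.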

Still, it seems a worthwhile effort – if it allows AD to lock out attackers trying to brute-force my account from locations where no successful authentication has taken place, AND continues to let me proceed past the “account lockout” at the same time, this is a big win for end users, especially where AD is used in Internet-facing settings like Azure.

Stalk Google’s stalking of you

What with all the attention on Google right now because of the sweeping and selfish privacy policy changes they just publicized, it’s smart, fun and sometimes a little disturbing to see just what Google’s advertising arm thinks it knows about you.  Play along at home – browse to this page from whatever browser(s) you like to use, and compare with what I found.


(1) Here’s what Google thinks based on the cookie in my iPhone’s Twitter cache:

Your categories
Below you can review the interests and inferred demographics that Google has associated with your cookie. You can remove or edit these at any time.
Arts & Entertainment – Comics & Animation – Comics
Arts & Entertainment – Humor – Live Comedy

Your demographics
We infer your age and gender based on the websites you’ve visited. You can remove or edit these at any time.
Age: 35-44
Gender: Male

(2) Here’s what Google says about me (the same user) according to my iPhone’s browser cookie (i.e. in Safari):

Your categories
Below, you can review the interests and inferred demographics that Google has associated with your cookie. You can remove or edit these at any time.
Beauty & Fitness
Jobs & Education – Jobs
Reference – General Reference
World Localities – North America – USA – Pacific Northwest – Oregon – Portland (OR)

Your demographics
No demographic categories are associated with your ad preferences so far. You can add or edit demographics at any time.

(3) Here’s what Google thinks of me according to Chrome on my laptop:

Your categories and demographics
No interest or demographic categories are associated with your ads preferences so far. You can add or edit interests and demographics at any time.

(4) Here’s who they think I am according to IE on my laptop:

Your categories
Below you can review the interests and inferred demographics that Google has associated with your cookie. You can remove or edit these at any time.
Computers & Electronics – Computer Security – Antivirus & Malware
Computers & Electronics – Software – Freeware &
Computers & Electronics – Software – Software
Online Communities – File Sharing & Hosting
Reference – General Reference – Time &

Your demographics
We infer your age and gender based on the websites you’ve visited. You can remove or edit these at any time.
Age: 25-34
Gender: Male

Frankly, I’m glad that Google has multiple fragmented profiles on me (at least for this usage model of the ad service they sell). If I were to see the exact same data across all apps and devices, I’d be chilled to the bone.

But I’m not naive enough to think they’re not deliberately and inevitably taking us (along for the ride) there. The closer they get to a single, unified, aggregated view of my behaviour and interests, the more they’ll be able to charge their advertisers for every impression and click-through (or whatever new metrics they’re currently stewing up).

Share that revenue stream with me?  Then I’m all for the kind of surreptitious gathering that you’re doing.  Steal that from me and pretend that “more relevant advertising” is fair compensation (legal: “consideration”)?  You know where you can stuff it.

Codeplex Licensing: Who controls license to my projects? Not I, apparently

Man, does this ever kill the relationship I thought I was building with the Codeplex team.   A few weeks ago I got another RSS notification from one of my Codeplex projects that the License to the project had changed (AGAIN).

In my interpretation of the world of software licensing, the License Agreement is a contract between the author of the software and the users of it.  I have the impression that these License Agreements carry the weight of law behind them, and that as such there should be restrictions on who can and cannot change the terms of the contract.

The way I see it, and unless I can find some text to the contrary in the Codeplex Terms of Use, the person who creates the Codeplex project is the only one who should be able to make any changes to the License — and even then, they should be very careful not to make frequent or arbitrary changes to the License (so as to provide as stable a set of expectations as you can with your users).

Codeplex: No Respect For Licenses

So why is it that the License for my CacheMyWork project has changed (TWICE), without my consent or participation?  I’ve looked over the Terms of Use published on the Codeplex site, and I can’t find any warning or mention of Microsoft or the Codeplex team monkeying with the License for any project:

“Microsoft does not claim ownership of the materials you provide to Microsoft…or post, upload, input or submit to any Services or its associated services for review by the general public, or by the members of any public or private community, (each a “Submission” and collectively “Submissions”).”


“Microsoft does not control, review, revise, endorse or distribute third-party Submissions. Microsoft is hosting the CodePlex site solely as a web storage site as a service to the developer community.”


“…Microsoft may remove any Submission at any time in its sole discretion.”

I see nothing claiming that Codeplex or Microsoft reserves the right to alter, update or otherwise affect any License agreement attached to any current projects.  I do not believe that this is an implicit right of the Terms they have published, nor of the kinds of services that are being provided here.

SourceForge: The Opposite Extreme

Out of curiosity, I decided to examine the Sourceforge Terms of Service:

“SourceForge.net reserves the right to update and change the Terms, including without limitation the Privacy Statement, Policies and/or Service-Specific Rules, from time to time.”


“In addition, the content on the Website, except for all Content, including without limitation, the text, graphics, photos, sounds, sayings and the like (“Materials”) and the trademarks, service marks and logos of COMPANY contained therein (“Marks”), are owned by or licensed to COMPANY, subject to copyright and other intellectual property rights under United States and foreign laws and international conventions.”


“Except for Feedback, which you agree to grant COMPANY any and all intellectual property rights owned or controlled by you relating to the Feedback, COMPANY claims no ownership or control over any Content. You or your third party licensor, as applicable, retain all intellectual property rights to any Content and you are responsible for protecting those rights, as appropriate.

With respect to SourceForge.net Public Content, the submitting user retains ownership of such SourceForge.net Public Content, except that publicly-available statistical content which is generated by COMPANY to monitor and display SourceForge.net project activity is owned by COMPANY.

By submitting, posting or displaying Content on or through SourceForge.net, you grant COMPANY a worldwide, non-exclusive, irrevocable, perpetual, fully sublicensable, royalty-free license to use, reproduce, adapt, modify, translate, create derivative works from, publish, perform, display, rent, resell and distribute such Content (in whole or part) on SourceForge.net and incorporate Content in other works, in any form, media, or technology developed by COMPANY, though COMPANY is not required to incorporate Feedback into any COMPANY products or services. COMPANY reserves the right to syndicate Content submitted, posted or displayed by you on or through SourceForge.net and use that Content in connection with any service offered by COMPANY.

With respect to Content posted to private areas of SourceForge.net (e.g., private SourceForge.net development tools or SourceForge.net Mail), the submitting user may grant to COMPANY or other users such rights and licenses as the submitting user deems appropriate.”


“SourceForge.net fosters software development and content creation under Open-Source Initiative (“OSI”)-approved licenses or other arrangements relating to software and/or content development that may be approved by COMPANY. For more information about OSI, and OSI-approved licenses, visit www.opensource.org.

Use, reproduction, modification, and ownership of intellectual property rights to data stored in CVS, SVN or as a file release and posted by any user on SourceForge.net (“Source Code”) shall be governed by and subject to the OSI-approved license, or to such other licensing arrangements approved by COMPANY, applicable to such Source Code.

Content located on any SourceForge.net-hosted subdomain which is subject to the sole editorial control of the owner or licensee of such subdomain, shall be subject to the OSI-approved license, or to such other licensing arrangements that may be approved by COMPANY, applicable to such Content.”

My interpretation of these Sourceforge terms is that they’re at least as generous as the Codeplex terms, and they’re much better specified.  It’s as if the Sourceforge leaders took their responsibilities seriously, whereas the Codeplex leaders still consider this a *Microsoft* project, where they can make arbitrary decisions – just as Microsoft has always done with its customers’ welfare.

Would you feel comfortable hosting your source code on the Sourceforge site?  I would.

Google: Somewhere in the Between-Space

Compare also to the Google Code Terms of Service:

“Google claims no ownership or control over any Content submitted, posted or displayed by you on or through Google services. You or a third party licensor, as appropriate, retain all patent, trademark and copyright to any Content you submit, post or display on or through Google services and you are responsible for protecting those rights, as appropriate. By submitting, posting or displaying Content on or through Google services which are intended to be available to the members of the public, you grant Google a worldwide, non-exclusive, royalty-free license to reproduce, adapt, modify, publish and distribute such Content on Google services for the purpose of displaying, distributing and promoting Google services. Google reserves the right to syndicate Content submitted, posted or displayed by you on or through Google services and use that Content in connection with any service offered by Google. Google furthermore reserves the right to refuse to accept, post, display or transmit any Content in its sole discretion.”


“Google reserves the right at any time and from time to time to modify or discontinue, temporarily or permanently, Google services (or any part thereof) with or without notice. You agree that Google shall not be liable to you or to any third party for any modification, suspension or discontinuance of Google services.”

Pretty spartan, but their terms still spell out more specifics about what rights you receive and give up than the Codeplex service does.

If I was starting over…

If I had it to choose all over again, and if I knew I had equivalent IDE integration for any of the source code management add-ins, I’d have a hard time choosing among them.  Currently, the fact that all four of my projects are hosted on Codeplex is a significant barrier to hosting any project on any other site.  The next strongest reason keeping me on Codeplex is that I understand the basics of how to check in and check out my code using the Visual Studio integration with TFS — not that I find it brainless or foolproof, but it happens to be the source code management system I know best right now.  [As well, I continue to figure that I’ll focus my development skills on .NET, which likely works best in Visual Studio, which is investing heavily in TFS integration and adoption – and I suspect that if I ever get into coding for work, I’ll have the TFS infrastructure available to really exploit for our projects.]

What about Sourceforge?  I like that it’s the first one out there, and I imagine it’s been held to a higher standard (by the zealots of the open-source community) than any other site — which is goodness.  However, I don’t “get” Subversion or (the older source code management tool), and I must admit I’m put off by the number of stale or poorly-used projects littering the Sourceforge site.  I imagine that Codeplex will look similar, given the same amount of time, so this isn’t a knock against Sourceforge but rather just a reflection of its age and roots.

What about Google Code?  Despite my better [i.e. paranoid] judgment, I’ve become a fan of Google’s services, and I really appreciate the ability to easily leverage all their “properties” in what appears to me to be a cohesive fashion.  I’m also thrilled that most of their properties are frequently updated, and that the updates focus on the kinds of changes I usually appreciate seeing.  I haven’t looked much deeper into it yet, but if it’s possible to test it out, I probably will.

If I was running Codeplex…

If I could sit down with the Codeplex team, I’d tell them this:

  • You should feel free to publish any updated licenses you wish, and make them available at any time
  • You should NEVER alter ANY of the properties of any of the hosted projects on Codeplex, let alone the License for any project.  [It’s possible the database admins are simply updating database records for the existing licenses, rather than inserting new records for each new or updated license.]

  • If you don’t want to make available “older” licenses because you think that would be too confusing for Codeplex users, then at least leave the existing licenses in place, but mark them such that they cannot be assigned to new projects.  There should be NO excuse why you can’t leave existing projects alone, and allow the project coordinators to choose when or if to update the license on their project.
  • If you’re simply updating the existing licenses for the purpose of streamlining, constraining or clarifying the licenses attached to Microsoft projects, then you ought to be publishing Microsoft-specific versions of these licenses, since the changes you’re making to the existing licenses are having unintended consequences on those non-Microsoft projects that also use those licenses.
  • Ideally, however, you should be able to publish and maintain multiple versions of each license, and to be able to publish new licenses as well.  If the GPL can manage three different versions of their license, and if these are really simply database records (with “published on” and “version” metadata attached to each) then there should be no excuse why Codeplex cannot keep multiple versions of its licenses online at the same time, and leave the existing projects’ licenses ALONE.

Having worked at Microsoft for six years, I saw firsthand how arrogant and ignorant Microsoft can be about the impact of sweeping decisions they make, since most of those folks have never been responsible for large populations of users before, and they rarely have had exposure to an environment where their decisions (no matter how foolish or under-thought) are openly challenged.

I wouldn’t be surprised if the Codeplex team was no different; in fact, I’d be surprised if they were even measured on such end-user impact.  There’d sure be no incentive for Codeplex management to care, since (a) there’s no revenue to be gained or lost through good or poor service, and (b) there’s no revenue to be “stolen” from a competing service (since all the major online code repositories charge nothing for their service, and their advertising revenue must be pretty thin).

I’m hopeful that the Codeplex folks wise up and take their responsibilities more seriously, but after the catastrophic database failure of last year, plus this half-arsed attitude towards the legal aspects of this service, I’m not going to make any bets on it.

If anyone has any suggestions on how I could help remedy either the Codeplex situation or my own desire for a better service (to which I could quickly adapt, given my sorry skills), please speak up – I’d love to hear it.

MyPicasaPictures Part 3: Abstraction, Encapsulation, Model-View pattern — starting on the design

…I’ve never actually implemented a design pattern before, so this’ll be a learning experience all to itself.

However, I have implemented in previous apps what I’ve understood as the distributed application n-tier logical separation approach.  That is, I’ve tried to implement a basic separation/abstraction between the data access code and the “display” (presentation layer) code.  I’ve also tried to encapsulate some of the more substitutable functionality into utility classes, or wrap the access to a particular application behind a generic function call, so that I (or someone else) could substitute an alternative approach or target later on.

With that in mind, here are a few ideas that would make sense for MyPicasaPictures (without yet understanding what the Model-View pattern recommends):

  • provide a class for accessing the remote web service (Picasa Web Albums at first, but others later on), which would provide methods like GetMetaData() & SetMetaData() [both overloaded for one or many Pictures], UploadPicture(), UploadPictures(), DeleteRemotePicture(), RenameRemotePicture(), and Authenticate()
  • provide a class for the local Pictures and Folders, if needed, to create any custom Properties or Methods that aren’t entirely suited to this web services work.  Perhaps there’s some class in the VMC SDK for managing Pictures that includes the specific Properties & metadata exposed by VMC?
  • provide a class for tying together the boring stuff like Startup, Shutdown, cleaning up connections, and releasing resources
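To make the first bullet concrete: the service abstraction might look like the interface below. It’s sketched in TypeScript for brevity (the actual project would target C#/.NET), and the in-memory fake shows how a substitute target drops in behind the same methods – any names beyond those listed above are invented:

```typescript
interface PictureMetadata {
  title: string;
  tags: string[];
}

// The abstraction: one interface per photo-sharing service, so Flickr and
// friends could be added later as alternate implementations.
interface PhotoService {
  authenticate(user: string, secret: string): Promise<boolean>;
  getMetaData(pictureId: string): Promise<PictureMetadata>;
  setMetaData(pictureId: string, meta: PictureMetadata): Promise<void>;
  uploadPicture(localPath: string, album: string): Promise<string>;
  deleteRemotePicture(pictureId: string): Promise<void>;
  renameRemotePicture(pictureId: string, newName: string): Promise<void>;
}

// An in-memory fake: handy for exercising the "model" layer without
// talking to Picasa at all.
class FakePhotoService implements PhotoService {
  private pictures = new Map<string, PictureMetadata>();
  private nextId = 1;

  async authenticate(_user: string, _secret: string): Promise<boolean> {
    return true; // a real implementation would hit the provider's auth endpoint
  }
  async uploadPicture(localPath: string, _album: string): Promise<string> {
    const id = `pic-${this.nextId++}`;
    this.pictures.set(id, { title: localPath, tags: [] });
    return id;
  }
  async getMetaData(pictureId: string): Promise<PictureMetadata> {
    const meta = this.pictures.get(pictureId);
    if (!meta) throw new Error(`unknown picture ${pictureId}`);
    return meta;
  }
  async setMetaData(pictureId: string, meta: PictureMetadata): Promise<void> {
    this.pictures.set(pictureId, meta);
  }
  async deleteRemotePicture(pictureId: string): Promise<void> {
    this.pictures.delete(pictureId);
  }
  async renameRemotePicture(pictureId: string, newName: string): Promise<void> {
    (await this.getMetaData(pictureId)).title = newName;
  }
}
```

The point of the fake is exactly the substitutability argued for above: UI code written against `PhotoService` never needs to know whether it’s talking to Picasa, Flickr, or a test double.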


Funny how, once I reviewed the VMC SDK’s Model-View Separation discussion, it turned out to be pretty much what I’ve described above.  I was hoping for something more granular, more sophisticated — something I could really sink my teeth into to get a head-start on this web services stuff.

Maybe the…simplicity is because this is just a subset of the Model-View-Presenter pattern, which itself has evolved into more sophisticated Supervising Controller and Passive View patterns.

Model-View, and the VMC-specific code

I’ve been thinking about this for a few days, and I’ve been hammering pretty hard on the MediaCenterSandbox community (trying to get my head around the MCML development model — NOT an intuitive thing, let me tell you).  The more I dig into this, the more planning and learning I’ll need to do before diving into UI development.  The UI was the most obvious area to start with (or at least the area I usually start with, since it gives me more time to mull over how the underlying classes will be divvied up and how the various functions will be implemented).  Media Center UI work also seems to be where all the ‘talk’ is.

However, I finally had a revelation today that’ll make my life easier and let me get down to building something: why not start at the bottom and build my way up?  I was thinking about the Model-View pattern, and it suddenly seemed obvious to first develop “…the logic of the application (the code and data)…”.  Why not generate the web services client code and the data send/receive code, then create the local proxy stubs that call the web services functions, and then the functions for accessing and manipulating the local Pictures and Folders?

While I stabilize the “model” code, I can continue to investigate the MCML UI option, as well as other ways that I might “surface” the functionality I’m building in the “model” portion of the code.  [What if I just built a normal WinForms app?  Or a command-line batch processor?]  [Or why not just fall back to Picasa and be done with it?]

And since the “model” code would all be native C#, no MCML or VMC-isms, there’d be no lost work — all good ol’ fashioned reusable lessons, plus getting to know the Google Data API while I’m at it.  Can’t really lose with this approach, now can I?



  • Apparently, Background Add-ins are *not* persistent processes — i.e. they’re not intended to run the whole time from startup of the Media Center UI until it’s closed, or the box is rebooted.  [God only knows what the hell they’re good for if they’re as ephemeral *and* invisible to the user…]
  • MCE Controller — a SourceForge C# project that enables programmatic control of the MCE UI.  Last updated a couple of years ago, so it’s hard to know how much of it is still relevant to VMC, and whether much of it has been replaced by the Media Center class libraries.

MyPicasaPictures Part 2: Rolling up my sleeves, sketching out a Spec

I find that the more time I spend detailing exactly what features and behaviours I expect from something I’m building, the faster and more successfully I’m able to actually build it.  This is a truism of technology development (and, as an old boss would say, of project management), whether you’re a one-person shop or a team of thousands.  However, it’s a lesson each person has to learn and digest for themselves, to figure out exactly what works best for them.

mini “Functional Spec” for MyPicasaPictures

Here’s a list of the features I’ve dreamed up so far, and the priority (1=high, 3=low) I’m personally assigning to them:

  • Pri1: upload a currently-selected Picture to (the default, if such a thing exists) folder/album/library (or whatever Google calls its collections) on the photo-sharing server
  • Pri1: enumerate and select the working folder/album/library when there are multiple folders to choose from
  • Pri1: authenticate to remote server
    • Pri3: cache remote server’s credentials securely, using DPAPI
  • Pri2: view names and/or thumbnails of pictures already in the selected remote folder/album/library
    • Pri3: enumerate and present the other metadata associated with each picture (e.g. Tags) and each folder/album/library
  • Pri2: write the code so that it abstracts the service-specific logic – enabling future versions of this app to easily add support for other server-based Photo sharing sites such as Flickr, Windows Live Spaces, Facebook, etc.
  • Pri2: Assign a new tag for a photo that has already been uploaded
    • Pri3: enumerate existing tags from remote server, and allow user to assign tag(s) from that set (along with new tags not part of the existing enumeration, in a mix-and-match formation)
  • Pri2: collect tags from the user for photo about to be uploaded, and submit the tags simultaneously (if supported by the photo-sharing service) with the upload, or right afterwards (if simultaneous isn’t supported)
  • Pri2: upload a batch of photos all at once (i.e. entire contents of a local folder that has been explicitly selected by the user).  Note: this may not be possible, as buttons or other controls cannot be added to the existing VMC UI.
    • Pri3: pick & choose a set of photos to be uploaded (e.g. subset of a single local folder, subset of photos in > 1 folders)
  • Pri2: delete existing photos in online albums (one at a time)
    • Pri3: delete existing photos in online albums in a batch (e.g. a subset of one album, or a subset of photos that span multiple albums)
  • Pri3: resize the photo(s) before they’re uploaded
    • Pri3: enumerate and present “spinner”-based choices control for the user to select one of the photo sizes “supported” by the remote server (if there’s any sort of default/preferred sizes that the server chooses to assert)
  • Pri3: rename the to-be-posted picture where another of the same name already exists on the remote site

Questions that keep coming to mind

  1. How do I implement client-side support for a SOAP- or REST-based Web Service in MCML apps?
  2. What “model” should I aspire to use for this kind of development? [cf. Gang of Four]
  3. Where will the entry point(s) for this add-in be located?  Steven Harding crystallized a suspicion I had been gathering on this topic – “Unfortunately, you can’t add anything to an existing Media Center interface.  So there’s no “Send This Folder to Picasa/Flickr” possible.”
  4. What exactly is a Start Menu Strip, and why are only two of them allowed at one time (and why does VMC seem to “punt” the oldest one out unceremoniously when a third is added)?
  5. What is the real difference between a VMC application and a VMC “background application” (other than the obvious visibility issue)?  i.e. Under what circumstances would I want to use one approach and not the other?
  6. What does “running on the public platform” mean in terms of (a) additional functionality that’s possible, and (b) what kinds of security restrictions are lifted on apps running on that “public platform”?
  7. If the More Information (“i” button) is so strongly discouraged, how should we provide that kind of “added functionality” in context of the application, ONLY when the user wants it, and WITHOUT cluttering up the UI in a way that makes it harder for most users to get their basic needs met?
    • Should we use a horizontal, multi-layer menu that’s presented on the Recorded TV screen?
    • Should we try the vertical stack of buttons that stay resident for all contexts, such as you see when you browse the detailed info for a Movie?
  8. Why is it that 3rd parties can’t add controls to the VMC UI, but Microsoft gets to change the user experience whenever they please (e.g. with the “Internet TV (beta)” object that quietly inserted itself – without asking, and without giving me any visible way to opt out – on the TV menu strip a few months ago)?
  9. What parts of the VMC UI are off-limits to third-parties?
    • Adding new objects to existing Start Menu Strips?
    • Adding new objects to existing “More Information” context menus?
    • Adding new sorting/filtering options to existing collections of content e.g.
      • I’d love to add “Show only Movies” to the “By Title” and “By Date” choices currently enabled in the Recorded TV collection
      • I’d give my left…toe to be able to sort a TV show’s episodes by the “Originally Broadcast” date, so I could accumulate a bunch of episodes of some show and then watch them in the order they were meant to be seen, not in the order they happen to have been recorded
    • Adding new tiles to the “More Programs” collections that are buried one click away from the Start Menu?


  • 🙂 “I’ve watched a video on Channel 9 where Charlie Owen and a programmer (Mark Finocchio?) demonstrate how to do basic MCML.  That was slightly more illuminating (though it took 35 minutes to get to the programming), but then I discover that you basically have to create your own buttons from total scratch. It really is back to the ark stuff.”


  • 😦 “I’ll check in to the ability to catch the More Information button — last I recall the handler is there but didn’t work exactly as planned.”  [sounds like one of those famous understatements of the year…]
  • 😦 “There are only the very basic visual elements – ‘Graphic’, ‘Text’, ‘Colorfill’ and ‘Clip’. Everything complex – like a button, menu, scroller etc. must be built from those four visual elements.” [also see 😦 ]
  • 😦 “This question comes up a LOT.  Enough that I covered it in my blog… Please check out the article called ‘Scope in MCML’.”
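To give a flavor of that "build it from the four primitives" complaint, here's a rough MCML sketch of a hand-rolled button, reconstructed from memory of the SDK samples — element and attribute names are approximate, and all the focus/input handling a real button needs is omitted entirely:

```
<Mcml xmlns="http://schemas.microsoft.com/2006/mcml">

  <!-- A "button" built by hand from the primitives: a ColorFill
       background with a Text child.  No rules, no input handlers —
       a usable button needs all of that wired up too. -->
  <UI Name="SimpleButton">
    <Content>
      <ColorFill Content="DarkSlateBlue" Padding="20,8,20,8">
        <Children>
          <Text Content="Upload to Picasa" Color="White" Font="Segoe UI,22"/>
        </Children>
      </ColorFill>
    </Content>
  </UI>

</Mcml>
```

Compare that to dropping a `Button` control onto a WinForms surface and you can see why "back to the ark" doesn't feel like an exaggeration.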

Next Steps

Here’s the list of beginner articles I’ve seen multiple folks point newbies at, to get a feel for what I’m about to take on:

  1. VMC SDK's "Step-by-Step" walkthrough (linked from the Windows Start Menu once you install the v5.2 or v5.3 SDK)
  2. MCML: UI’s (Steven Harding)
  3. UI Properties: Making UI’s Flexible (Steven Harding)
  4. Model-View Separation (Steven Harding)
  5. Stage 1: A Basic Layout (Steven Harding) through Stage 11
  6. GData .NET client library “Getting Started” guide
  7. cURL client for testing upload of files to Picasa

Source code to investigate (as it might have some hooks already worked out that I won't have to learn from scratch):

Picasa Web Albums Data API + Vista Media Center = MyPicasaPictures

I’ve found over and over, if I don’t upload pictures to the web when I first download them from my camera, then there’s little or no chance I’ll do it later.  Out of sight, out of mind.

But I wonder…

If I had an add-in command in the My Pictures sub-app of the Vista Media Center, would I be more inclined to upload pictures to the web then (and give my mom at least a glimpse into my life more than once a year)?

Google developer help: plenty

Since I’m already personally using Picasa’s companion Web Albums for what few pictures I’m sharing, I’m inclined to tie the online component of such an add-on to Google.

Having already investigated the Google APIs, I figured there was likely an API tuned for the Picasa Web Albums, and so there is.  Further, it appears that it provides the ability to post photos, create albums, update & delete photos, manipulate tags, and review existing albums/photos/comments/tags.  Sounds like it provides more than I was even looking for.

From what I gather with a quick skim, the basis for the “PWA” Data API is the Google Data API.  And while I don’t know whether it’s absolutely necessary, Google provides a .NET Client Library (currently v1.1.2) to speed web services client development.
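Since I'll probably poke at the raw protocol before committing to the .NET library, here's a quick standard-library Python sketch of the two HTTP requests I expect to need, based on my skim of the GData docs — nothing is actually sent over the wire, and the user and album names are obviously placeholders:

```python
# Sketch: building (not sending) the two Picasa Web Albums Data API
# requests — list a user's public albums, and upload a photo.
# URL scheme and header names are my reading of the GData protocol
# docs; "myuser" and "MyAlbum" are placeholders.
import urllib.request

USER = "myuser"  # placeholder account name

# 1. Listing a user's public albums is a plain GET against the album feed.
feed = urllib.request.Request(
    f"http://picasaweb.google.com/data/feed/api/user/{USER}?kind=album"
)

# 2. Uploading a photo is a POST of the raw image bytes to an album feed,
#    with the MIME type and a Slug header naming the new photo.
photo_bytes = b"\xff\xd8\xff"  # stand-in for real JPEG data
upload = urllib.request.Request(
    f"http://picasaweb.google.com/data/feed/api/user/{USER}/album/MyAlbum",
    data=photo_bytes,
    headers={"Content-Type": "image/jpeg", "Slug": "hello.jpg"},
    method="POST",
)

print(feed.get_method(), upload.get_method(), upload.get_header("Slug"))
```

If that shape holds up, the .NET Client Library should just be a convenience wrapper over the same feeds.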

Similar apps

Big Screen Photos v2 — the Yahoo Flickr client for Vista Media Center.  It doesn’t enable you to upload the pictures hosted in Media Center, but rather to view the pictures that are already available on the Flickr site.

Yougle Vista — provides access to online Video, Music and Picture sites including Flickr.  Still no upload capability.

PhotoConnect — extends My Pictures with some interesting functionality including the ability to email pictures to others.  However, it looks like this vanished in a puff of smoke and synthetic ash.

FlickrMCE — another viewing app, only for Windows XP MCE 2005.

PictureBook 1.0 — screensaver add-in, not really all that similar after all.

Windows Vista Media Center add-ons: sorting through all this new technology

Most add-ons for Windows Media Center 2005 (i.e. Windows XP) or earlier were written as “hosted HTML” apps that ran outside the Media Center shell.  Now, while hosted HTML apps may still work in Vista, the drive is towards either the MCPL/MCML or XBAP/XAML:

  • MCPL: Media Center Presentation Layer (the programming model)
  • MCML: Media Center Markup Language (the declarative, XML-based markup language)
  • XBAP: WinFX XAML Browser Application (the programming model)
  • XAML: eXtensible Application Markup Language (the language in which XBAP is more-or-less implemented)

However, it appears that the XBAP/XAML model has already been dumped by the eHome team.  Lovely: not even a year out of the gate and they already drop support for the "forward-looking" programming model.  I wonder what hurdles they had to clear to be able to explicitly drop support for the whole WPF (aka .NET 3.0) (aka WinFX) company-wide drive?  Oh well, at least that's one less choice to confuse me… but I have to bitch just this once: when can we abandon app-specific languages?  Any language or programming model that starts with the name of the vendor-specific technology makes me feel more than a little soiled.

Other resources that should prove useful

Some specific coding issues that I’ve noted for future reference

Interesting Tidbits

  • 🙂 Add-ins run out-of-proc in a hosted process called ehExtHost.exe, and communicate with the main Media Center process using .NET Remoting over named pipes (the IPC channel in .NET).
  • 🙂 Media Center unloads the add-in when the Launch( ) method exits.
  • 🙂 The entry in More Programs is created by running RegisterMceApp.exe, not by the add-in itself.
  • 🙂 In an MCML application, you can only embed a handful of primitive visual elements, NOT including ActiveX controls (i.e. no native Flash rendering).
  • 🙂 You can’t embed a WPF app / window inside of MCML.
  • 🙂 Charlie Owen has invited anyone to provide good/bad developer feedback on Media Center and the SDK, and left his contact info here.
  • 🙂 You can’t override ANYTHING on the Windows Media Center screens — if you want any new functions, they have to be added on completely new screens, which would mean re-building the whole ‘Videos’ library.


I’m not 100% convinced I’m going forward with this yet — I’m inclined to spec out the features, dig into the SDK/MCMLookalike/Codeplex-hosted sources to get a feel for just how much (a) volume and (b) complexity of code I’m facing, and then decide whether to keep pushing forward.  I’d sure appreciate feedback from anyone who has ever delved into any of these areas for encouragement or “there be dragons” warnings…

  • MCML programming
  • web services integration in managed code — Google or otherwise

Cheers all!