Converting from Microsoft Money to Quicken — finally, a documented solution!

After years of fighting and losing the battle to use Microsoft Money to stay on top of my finances, and after reading that Microsoft Money has only 1/4 of the market share that Quicken has (even after all these years of Microsoft trying to take over), I finally gave in to sense (with a really sweet $33 deal) and decided to switch my financial management to Quicken.

Unfortunately, as much as I would’ve figured that Quicken would want to make it dead-easy to convert MS Money data to the Quicken format, it turns out that not only is their conversion tool buggy and not well documented, but the conversion process is pretty damned fragile (and requires a lossy and manual process of exporting into an XML format).

I spent much of this weekend fighting with this software, troubleshooting all the various errors (mostly using Process Monitor — the best friend of anyone fighting with crappy, cryptic software) and finally came up with a solution that is (a) successful (unlike every report I’ve been able to find on the Internet) and (b) repeatable.

This is my gift to you, the many who have wisely determined that their foolish experiments with Microsoft’s finance software are best left behind.  If you’d like to hear the whole gory story, just keep reading.  If you’d prefer just the step-by-step recipe, just skip to the end of the article and enjoy!

First Error

CreateQWPAManager could not be found in qwutil.dll

I dug around Google for a while, which was not encouraging (it sounds like not one customer had ever found a tech support rep at Intuit that had any clue what this error message really means).

However, to confirm my suspicions I fired up DEPENDS.EXE (no, not an adult undergarment utility) and confirmed that the version of QWUTIL.DLL that ships with Quicken Deluxe 2007 does not include the CreateQWPAManager() API.

I tried to find a more up-to-date version of this file, which I would’ve expected Intuit to make available for users of the Data Converter utility, but so far they’ve published nothing.  So I fired up my favourite torrent utility and “borrowed” a copy of Quicken 2008 Home & Business (which at the time was the only version of Quicken 2008 being “shared”).

In the DISK1 folder of the CD image there’s a file called DATA1.CAB.  Opening that up with WinZip, I was able to extract a copy of QWUTIL.DLL.

I then renamed the Quicken 2007 version, copied the Quicken 2008 version to the C:\Program Files\Quicken folder, and confirmed with DEPENDS.EXE that this 2008 version of QWUTIL.DLL indeed does include the CreateQWPAManager() API.

[Now, because the Data Converter utility loads QWUTIL.DLL directly from the Quicken program directory, it isn’t possible to just copy the QWUTIL.DLL into the C:\Program Files\Data Converter folder (which is what would normally work with most Windows apps).  Instead we have to “replace” the old file with the new one in the same location (and then restore the old file once we’re done).]

Second Error

The data converter can not find the Microsoft Money Account Transactions*.xml report.
Try re-creating the Account Transactions*.xml report.

I fired up Process Monitor, and spotted the MNY2QDF.EXE process failing to QueryDirectory for C:\Documents and Settings\mikesl\Desktop\Investment Transactions*.xml and C:\Documents and Settings\mikesl\Desktop\Account Transactions*.xml.

So after a bit of head-scratching, I renamed the report XML files to “Investment Transactions.xml” and “Account Transactions.xml”.

Third Error

The data converter does not recognize the report as a Microsoft Money Account Transactions report.
This could be because the report was renamed or because it is not the correct report.

I edited the contents of the report-title element in the XML file(s) to replace the unexpected value (e.g. “Quicken export”) with the required value (the file name without the .xml suffix):

  • Investment Transactions.xml → Investment Transactions
  • Account Transactions.xml → Account Transactions

Note that this error can be caused by problems with either the Account Transactions or the Investment Transactions report.

Fourth Error

The Microsoft Money Investment Transactions report must include certain fields.
Make sure that the Investment Transactions report includes the Account, Date, Investment, Activity, Shares, Price and Amount fields, and create it again.

I temporarily renamed the “Investment Transactions.xml” file, and the Data Converter was then able to complete its work on “Account Transactions.xml”.

<Edit> A little more detail… This error message was caused by the simple fact that I didn’t include all of the fields it mentions here.  At first I was confused by this error because one of the many guides on the Quicken site spelled out exactly which checkboxes to include, and (I checked afterwards) it definitely missed one of the fields, which I dutifully did not check.  Later, once I re-ran the report with all the fields included, the data converter successfully converted the Investment Transactions report as expected.

(koty, a reader, asked me to clarify what I meant by “temporarily renamed” — it probably would’ve been less confusing if I’d said I temporarily removed the file so that the data converter would only focus on the Account Transactions file.) </Edit>

Fifth Error

The converter tried to launch Quicken 2007 automatically once it completed the conversion, but Quicken 2007 immediately stops when the 2008 QWUTIL.DLL is present.

While I’d hoped that Quicken 2007 could operate properly with the 2008 version of this file, I had to remove the 2008 version and replace it with the 2007 version of QWUTIL.DLL.

Then I had to locate where the converter had dumped the converted Accounts data.  I’d assumed it would copy the results into the currently-configured Quicken data file (the one I’d created when I first launched Quicken 2007), but that was still empty.  It turns out that, even though I’d made a point of storing the Quicken data file that I’d created originally in a non-default location, the converter didn’t find that file, nor try to put its conversion results in the same folder.  Instead, it created a file named QDATAMNY.QDF in My Documents\Quicken.

Sixth Error

When I asked Quicken 2007 Deluxe to open the QDATAMNY.QDF file, it gave me this warning:

You are currently using Quicken 2007 Deluxe.
The current data file was last used with Quicken Home & Business.
To add the additional features of Quicken 2007 Home & Business again, click Unlock Again.

I chose Unlock Later.

This is the result of me temporarily “borrowing” the only Quicken 2008 version I could get my hands on, and just manually swapping the QWUTIL.DLL files back & forth depending on what I needed.  [Hopefully this doesn’t cause any long-term compatibility issues with the converted data.]

Once I switched back to the original version of QWUTIL.DLL, Quicken started up fine, and was able to open the QDATAMNY.QDF file that the Data Converter had just created.


Observations

  1. Intuit should be more forgiving when seeking the named reports on the Desktop — it’s hard-coded to look for the default names of those reports, and while that would work in most cases, I was one of those crazies that renamed the reports (in the “Title” field of the Customize Report UI).  Without any documentation that the Data Converter only looks for the default Report names, it was not at all easy for me to figure out the source of the Second Error.
  2. Intuit needs to clearly test and document the dependencies for the Data Converter, and ensure that anyone who downloads it can use it in the prescribed environment.  Currently, the version of the Data Converter that is available from Intuit only works with the 2008 version of QWUTIL.DLL.
  3. Intuit needs to ensure that what the Data Converter requires from the MS Money Reports is what’s documented in their Help files.  For example, in this page‘s Step 4, #3, they fail to include the Account field.
  4. Intuit should be more forgiving about the naming of the MS Money Reports that it’s trying to convert.  There’s no good reason to both look for hard-coded file names and require that the report-title element inside each file exactly match the file’s name.
  5. MS Money is not required to be installed on the same computer where the Data Converter is run (Process Monitor watched MNY2QDF.EXE during startup and during the conversion, and never did I see Data Converter try to find, open or use any of the Money files or Registry settings).  However, you do have to have a copy of MS Money so you can create the specified Reports, and you need to get the reports onto the Desktop of the user running the Data Converter.

Bottom Line: Recipe for Converting MS Money transactions data to Quicken

  1. Start with the instructions here; wherever I haven’t specified something, do whatever the Quicken help says.
  2. Install your version of Quicken on your current computer.
  3. Download the Microsoft Money to Quicken Data Converter and install it on the same computer.
  4. Get a copy of QWUTIL.DLL from any Quicken 2008 edition (if you can’t find one elsewhere, you should be able to find a Quicken 2008 Home & Business version here).
  5. Rename the installed Quicken’s QWUTIL.DLL to QWUTIL.DLL.ORIGINAL (by default, it’s found under C:\Program Files\Quicken\qwutil.dll).
  6. Copy the 2008 version of QWUTIL.DLL to the folder where you renamed the original one (e.g. copy it to C:\Program Files\Quicken).
  7. Follow the instructions for creating the Account Transactions Report and Investment Transactions Report (e.g. Step 3 and Step 4 from the online Help), and ensure you preserve the default Title of each report.  If the resulting XML file(s) on your Desktop are not named Account Transactions.xml and/or Investment Transactions.xml, rename them accordingly and edit each file’s XML to match the settings in the table above.
  8. Click the “Import Into Quicken” button in Data Converter.  You’ll eventually see a flash of the Quicken splash screen, after which neither the Data Converter nor Quicken will be running.
  9. Delete the 2008 version of QWUTIL.DLL from the Quicken folder, and rename QWUTIL.DLL.ORIGINAL back to qwutil.dll.
  10. Launch Quicken, open the My Documents\Quicken\QDATAMNY.QDF file and party on with your old MS Money data!  [Oh, and I believe you can safely ignore any warning about the Quicken data file having been opened last time with a version of Quicken you don’t have installed; that’s a one-time warning that is the result of this workaround.]

BBC newsman stung by his own arrogance; glass houses, anyone?

I read this, and while I wanted to laugh along with the original blogger whose post led me to this, I have to feel sorry for all the rest of us who aren’t so arrogant:

BBC NEWS | Entertainment | Clarkson stung after bank prank

He was reporting on the loss of 25 million bank account numbers, and thought the controversy was so silly that he actually read out his account information to his viewers.  He honestly believed that it wouldn’t be possible to withdraw money from his account with that information.

Of course, the only reason why I’m writing anything about this is that he was spectacularly wrong — someone donated £500 to a charity from his account, and it cannot be traced.

I have to believe that there are plenty of reasonably technical folks out there who’d believe the same thing.  Hell, I’ve been guilty of similarly arrogant statements in the past, and I can only thank the fates that I wasn’t similarly embarrassed by them.  One of the core encryption technologies with which I worked turned out to have a brutal backdoor (that I don’t believe has yet been fixed) that I had claimed for years was patently impossible.

Am I embarrassed now?  You bet.

What can I do to avoid that in the future?  Well, I won’t pretend it didn’t happen.  I can’t make reality go away.  However, I can continue to remind myself (preferably with sharpened sticks) of these failures when I’m about to say:

  • “There’s much bigger problems to worry about than that little security hole”
  • “That’s a perfectly reasonable way to mitigate against the script kiddies”
  • “Sure, there’s always the risk of some determined attacker spending unlimited time and resources, but the odds of that happening in this case is vanishingly small”

I know these sound like stupid statements when they’re presented in this fashion, but stop for a moment and ask: have you never said anything like this?  What makes that different from this?  Is it a perfectly sound security solution, or is it just that no one’s discovered the circumvention approach yet?

Codeplex Licensing: Who controls license to my projects? Not I, apparently

Man, does this ever kill the relationship I thought I was building with the Codeplex team.   A few weeks ago I got another RSS notification from one of my Codeplex projects that the License to the project had changed (AGAIN).

In my interpretation of the world of software licensing, the License Agreement is a contract between the author of the software and the users of it.  I have the impression that these License Agreements carry the weight of law behind them, and that as such there should be restrictions on who can and cannot change the terms of the contract.

The way I see it, and unless I can find some text to the contrary in the Codeplex Terms of Use, the person who creates the Codeplex project is the only one who should be able to make any changes to the License — and even then, they should be very careful not to make frequent or arbitrary changes to the License (so as to provide as stable a set of expectations as you can with your users).

Codeplex: No Respect For Licenses

So why is it that the License for my CacheMyWork project has changed (TWICE), without my consent or participation?  I’ve looked over the Terms of Use published on the Codeplex site, and I can’t find any warning or mention of Microsoft or the Codeplex team monkeying with the License for any project:

“Microsoft does not claim ownership of the materials you provide to Microsoft…or post, upload, input or submit to any Services or its associated services for review by the general public, or by the members of any public or private community, (each a “Submission” and collectively “Submissions”).”


“Microsoft does not control, review, revise, endorse or distribute third-party Submissions. Microsoft is hosting the CodePlex site solely as a web storage site as a service to the developer community.”


“…Microsoft may remove any Submission at any time in its sole discretion.”

I see nothing claiming that Codeplex or Microsoft reserves the right to alter, update or otherwise affect any License agreement attached to any current projects.  I do not believe that this is an implicit right of the Terms they have published, nor of the kinds of services that are being provided here.

SourceForge: The Opposite Extreme

Out of curiosity, I decided to examine the Sourceforge Terms of Service:

“SourceForge.net reserves the right to update and change the Terms, including without limitation the Privacy Statement, Policies and/or Service-Specific Rules, from time to time.”


“In addition, the content on the Website, except for all Content, including without limitation, the text, graphics, photos, sounds, sayings and the like (“Materials”) and the trademarks, service marks and logos of COMPANY contained therein (“Marks”), are owned by or licensed to COMPANY, subject to copyright and other intellectual property rights under United States and foreign laws and international conventions.”


“Except for Feedback, which you agree to grant COMPANY any and all intellectual property rights owned or controlled by you relating to the Feedback, COMPANY claims no ownership or control over any Content. You or your third party licensor, as applicable, retain all intellectual property rights to any Content and you are responsible for protecting those rights, as appropriate.

With respect to Public Content, the submitting user retains ownership of such Public Content, except that publicly-available statistical content which is generated by COMPANY to monitor and display project activity is owned by COMPANY.

By submitting, posting or displaying Content on or through SourceForge.net, you grant COMPANY a worldwide, non-exclusive, irrevocable, perpetual, fully sublicensable, royalty-free license to use, reproduce, adapt, modify, translate, create derivative works from, publish, perform, display, rent, resell and distribute such Content (in whole or part) on SourceForge.net and incorporate Content in other works, in any form, media, or technology developed by COMPANY, though COMPANY is not required to incorporate Feedback into any COMPANY products or services. COMPANY reserves the right to syndicate Content submitted, posted or displayed by you on or through SourceForge.net and use that Content in connection with any service offered by COMPANY.

With respect to Content posted to private areas of SourceForge.net (e.g., private development tools or Mail), the submitting user may grant to COMPANY or other users such rights and licenses as the submitting user deems appropriate.”


“SourceForge.net fosters software development and content creation under Open-Source Initiative (“OSI”)-approved licenses or other arrangements relating to software and/or content development that may be approved by COMPANY. For more information about OSI, and OSI-approved licenses, visit the OSI web site.

Use, reproduction, modification, and ownership of intellectual property rights to data stored in CVS, SVN or as a file release and posted by any user on SourceForge.net (“Source Code”) shall be governed by and subject to the OSI-approved license, or to such other licensing arrangements approved by COMPANY, applicable to such Source Code.

Content located on any subdomain which is subject to the sole editorial control of the owner or licensee of such subdomain, shall be subject to the OSI-approved license, or to such other licensing arrangements that may be approved by COMPANY, applicable to such Content.”

My interpretation of these Sourceforge terms is that they’re at least as generous as the Codeplex terms, and they are much better specified.  It’s as if the Sourceforge leaders took their responsibilities seriously, whereas the Codeplex leaders still consider this a *Microsoft* project, where they can make arbitrary decisions just as Microsoft always does with their customers’ welfare.

Would you feel comfortable hosting your source code on the Sourceforge site?  I would.

Google: Somewhere in the Between-Space

Compare also to the Google Code Terms of Service:

“Google claims no ownership or control over any Content submitted, posted or displayed by you on or through Google services. You or a third party licensor, as appropriate, retain all patent, trademark and copyright to any Content you submit, post or display on or through Google services and you are responsible for protecting those rights, as appropriate. By submitting, posting or displaying Content on or through Google services which are intended to be available to the members of the public, you grant Google a worldwide, non-exclusive, royalty-free license to reproduce, adapt, modify, publish and distribute such Content on Google services for the purpose of displaying, distributing and promoting Google services. Google reserves the right to syndicate Content submitted, posted or displayed by you on or through Google services and use that Content in connection with any service offered by Google. Google furthermore reserves the right to refuse to accept, post, display or transmit any Content in its sole discretion.”


“Google reserves the right at any time and from time to time to modify or discontinue, temporarily or permanently, Google services (or any part thereof) with or without notice. You agree that Google shall not be liable to you or to any third party for any modification, suspension or discontinuance of Google services.”

Pretty spartan, but their terms are still more specific than Codeplex’s about what rights you receive and give up.

If I was starting over…

If I had to choose it all over again, and if I knew I had equivalent IDE integration for any of the source code management add-ins, I’d have a hard time choosing among them.  Currently, the fact that all four of my projects are hosted on Codeplex is a significant barrier to hosting any project on any other site.  The next strongest reason keeping me on Codeplex is that I understand the basics of how to check-in and check-out my code using the Visual Studio integration with TFS — not that it means I find it brainless or foolproof, but it happens to be the source code management system I know best right now.  [As well, I continue to figure that I’ll focus my development skills in .NET, which likely will work better in Visual Studio, which is investing heavily in TFS integration and adoption – and I suspect if I ever get into coding for work, that I’ll have the TFS infrastructure available to really exploit for our projects.]

What about Sourceforge?  I like that it’s the first one out there, and I imagine it’s been held to a higher standard (by the zealots of the open-source community) than any other site — which is goodness.  However, I don’t “get” Subversion or CVS (the older source code management tool), and I must admit I’m put off by the number of stale or poorly-used projects littering the Sourceforge site.  I imagine that Codeplex will look similar, given the same amount of time, so this isn’t a knock against Sourceforge but rather just a reflection of its age and roots.

What about Google Code?  Despite my better [i.e. paranoid] judgment, I’ve become a fan of Google’s services, and I really appreciate the ability to easily leverage all their “properties” in what appears to me to be a cohesive fashion.  I’m also thrilled that most of their properties are frequently updated, and that the updates focus on the kinds of changes that I usually appreciate seeing.  At this point I haven’t looked much deeper into it, but if it’s possible to test it out, I probably will.

If I was running Codeplex…

If I could sit down with the Codeplex team, I’d tell them this:

  • You should feel free to publish any updated licenses you wish, and make them available at any time
  • You should NEVER alter ANY of the properties of any of the hosted projects on Codeplex, let alone the License for any project.  [It’s possible the database admins are simply updating database records for the existing licenses, rather than inserting new records for each new or updated license.]

  • If you don’t want to make available “older” licenses because you think that would be too confusing for Codeplex users, then at least leave the existing licenses in place, but mark them such that they cannot be assigned to new projects.  There should be NO excuse why you can’t leave existing projects alone, and allow the project coordinators to choose when or if to update the license on their project.
  • If you’re simply updating the existing licenses for the purpose of streamlining, constraining or clarifying the licenses attached to Microsoft projects, then you ought to be publishing Microsoft-specific versions of these licenses, since the changes you’re making to the existing licenses are having unintended consequences on those non-Microsoft projects that also use those licenses.
  • Ideally, however, you should be able to publish and maintain multiple versions of each license, and to be able to publish new licenses as well.  If the GPL can manage three different versions of their license, and if these are really simply database records (with “published on” and “version” metadata attached to each) then there should be no excuse why Codeplex cannot keep multiple versions of its licenses online at the same time, and leave the existing projects’ licenses ALONE.

Having worked at Microsoft for six years, I saw firsthand how arrogant and ignorant Microsoft can be about the impact of sweeping decisions they make, since most of those folks have never been responsible for large populations of users before, and they rarely have had exposure to an environment where their decisions (no matter how foolish or under-thought) are openly challenged.

I wouldn’t be surprised if the Codeplex team was no different; in fact, I’d be surprised if they were even measured on such end-user impact.  There’d sure be no incentive for Codeplex management to care, since (a) there’s no revenue to be gained or lost by poor- or high-quality service, and (b) there’s no revenue to be “stolen” from a competing service (since all the major online code repositories charge nothing for their service, and their advertising revenue must be pretty thin).

I’m hopeful that the Codeplex folks wise up and take their responsibilities more seriously, but after the catastrophic database failure of last year, plus this half-arsed attitude towards the legal aspects of this service, I’m not going to make any bets on it.

If anyone has any suggestions on how I could help remedy either the Codeplex situation or my own desire for a better service (to which I could quickly adapt, given my sorry skills), please speak up – I’d love to hear it.

Windows Update — talk about shooting yourself in both feet…

Microsoft update - nonsense

For the love of Pete (who’s Pete you ask?  It’s a joke, son), who’s keeping watch over Microsoft customers’ safety and security?  For well over a year now, I’ve encountered Windows XP SP2 PC after PC, dutifully configured to automatically download and install all high-priority updates.  Some of these PCs, I’ve mothered over multiple times, hoping that I was seeing just a one-time problem that would be magically resolved the next time I arrived.

Microsoft even makes a big deal in its advertising about the fact that Windows Update (or Microsoft Update, if you’ve opted-in to this long-overdue expansion of updates across many Microsoft consumer and business products) “…helps keep your PC running smoothly — automatically”.  [And if you don’t believe me, check it out for yourself.]

Hogwash, I say.

Windows Update?  It’s more like “Rarely Update”, or “Windows Downtime”.

In almost every single case (and I suspect the rare PCs that weren’t this way, had been similarly mothered by some other poor lackey of the Beast from Redmond), I’ve found that I had to visit the Windows Update web site, download yet another update to the “Windows Genuine Validation” ActiveX control, install this piece o’ quicksand, and then subject my friend’s (or family member’s) PC to the agony of between one and three (depending on how long it’d been since I last visited) sessions of downloading and installing the very updates that they (and I) continued to falsely believe were being downloaded “automatically”.

In those cases where it’d been a year or more since the last occasion of hand-holding by me, the cycle of abuse wasn’t complete with a single session — I had to reboot after all “available” updates were installed, and re-visit Windows Update to find yet *another* batch of updates that magically appeared on this subsequent go-around.

How does this happen?  How could a service that is supposed to minimize the occurrence of unpatched PCs turn against itself so horribly?

I have to imagine that the WU (Windows Update) team doesn’t have any oversight or centralized control over the content that’s being hosted on their site.  If they did (and assuming they’re the folks who paid for the above ad), then they’d take their responsibilities more seriously, and make sure their site could deliver on the promise being advertised.

As it stands, it appears that the team responsible for Windows Genuine Validation feels it’s more important to ensure that their software is being explicitly installed by the end user, than to ensure that Microsoft’s customers are being adequately protected from the constant onslaught of Windows-targeting malware.

Each and every time I have visited the Windows/Microsoft Update site on these “under-managed” PCs (i.e. PCs owned by those folks who have left their PCs alone, as Microsoft has promised them they can), I’ve found that I had to perform the “Custom” scan, then accept the only-via-the-web download for the Windows Genuine Validation software, and only then is the computer capable of automatically downloading the remaining few dozen updates that queued up while the PC was blocked by the requirement to download the validation control.

It seems like the Windows Genuine Validation team isn’t satisfied with their software getting onto every Windows PC in existence; they also seem bound & bent to ensure that every user is explicitly aware that they’re being surveilled by the Microsoft “licensing police”.

Why is it that Windows Update (or Microsoft Update) can update every other piece of software on my Windows PC automatically, but the license police can’t (or won’t) get its act together and make their (unwanted but unavoidable) software available automatically as well?  And don’t tell me it’s a “privacy” thing, or that it wasn’t explicitly allowed in the Windows XP SP2 EULA.  We’ve had plenty of opportunities to acknowledge updated privacy notifications or EULA addenda (hell, there’s at least one of those to acknowledge every year via WU, it seems), so that don’t fly.

So here’s my proposition: I’d love to see the Windows Genuine Validation team fall in line with the rest of the Microsoft “internal ecosystem” and figure out a way to make it so that WU/MU automatic updates actually become automatic again.  Wouldn’t it be grand if Windows systems around the world were still able to keep on top of all the emerging threats on behalf of all those individuals who’ve filled Microsoft’s coffers over the years?

Let’s get the current WGA control packaged up like any other High-Priority update and pushed down on the next Patch Tuesday (pitch it as if it’s similar to the monthly malware scanning tool).  If you have to, add in one of those EULA addenda (with or without a prominent privacy notification up front), and if you’re really worried, run a big press “push” that gets the word out that a privacy notification is coming.  C’mon Microsoft!  You’ve conquered bigger engineering problems before.  This one (at least to my naive viewpoint) can’t possibly be that hard…

What if Google Reader had a sucky API…?

I’ve been thinking more about the idea of building a VSTO add-in for Outlook that could synchronize between Attensa for Outlook and Google Reader (I wrote previously about the idea here).

I’ve had a heckuva time trying to figure out how to build Web Services client code into a Windows client application.  There’s a ton of books out there documenting how to build the server side of Web Services; even the books and articles that talk about building Web Services client code are really talking about consuming Web Services in another server application.  I finally figured out a query or two that gave me some leads, but it wasn’t exactly a slam dunk.

I’ve also had more trouble than I expected to figure out what the story is with Google Reader’s API.  Back in 2005 there was an effort by Niall Kennedy to reverse-engineer the then-current Reader API; at the time, the Reader team even commented a couple of times on the imminent release of the API.  Ever since then, there’ve been various inquiries but no further details to let folks know where they stood.  [Well, at least from the SOAP API standpoint.]

I’ve been getting more and more suspicious that Google no longer intends to document and release a SOAP-based API for Google Reader.  I finally became convinced of this when I stumbled on a not-terribly-related article by Scoble, where he caught us up on Google’s inauspicious switch from their original SOAP Search API to an AJAX Search API.

My take: despite some of the comments on Scoble’s article that imply he’s just being paranoid, I happen to think there’s merit to the theory that Google has no interest in developing APIs that don’t further their advertising revenue.  It’s not that I particularly dislike Google (I’m a big fan of many of their services), but two years after the Reader team said their SOAP API was imminent, it’s pretty hard to imagine that the great disappearing act isn’t in some way deliberate.

The fact that they developed and released the Reader AJAX API in less time demonstrates that they’re capable of getting a Reader API out in that timeframe, and the distinct lack of mention of any further API development (either on the Reader group discussion boards or on the Reader team’s blog) pretty much seals the deal for me.

So what’s an in-their-spare-time developer supposed to do at this point?

Google Reader: toss it…

From what I see of the Reader AJAX API documentation, it’s not meant as a full-fidelity API for manipulating the contents of a user’s Reader configuration, article read/unread status or even the list of blogs to which they’re subscribed.  I’m not terribly interested in placing an eight-article list of links (+ Google ads) in a VSTO add-in to Outlook.  Nor am I all that interested in trying to use an undocumented and unsupported (not to mention still-in-flux) SOAP API — it’s enough trouble for me to do something once, let alone every week or month when they decide to rip out something else.  It sure doesn’t help to hear someone who’s been keeping an eye on this space say things like Google is embarking on the “creation of de jure and de facto standards which are pitched as being great for customers but seem suspiciously like attempts at Fire & Motion”.

Attensa Online reader: no more

I probably wouldn’t even be going down this road if Attensa had kept up their own online web-based feed server.  However, despite multiple positive reviews and different URLs that used to point to it, it’s gone the way of the dodo bird.  Heck, I’ve even lost access to Attensa’s own corporate feed server (to which they gave me an account last year after all the bug reports I’d sent them).

MyYahoo: no published RSS APIs

Many news reports repeat the “fact” that MyYahoo is one of the most popular online RSS readers on the ‘net.  I dug into its interface and documentation, and searched around Yahoo’s site for a while, but it was so obtuse it was hard to actually find any references to terms like RSS or OPML.  I’ve come to the conclusion that either (a) there’s no official API for MyYahoo feed reading, or (b) it’s just not something Yahoo has invested any effort into.  However, Niall Kennedy has documented some API characteristics, and I stumbled upon Yahoo’s general developer documentation (among which they mention that most APIs use the REST approach).  The Yahoo SDK provides C# & VB.NET samples for Yahoo’s various search properties, but nothing more.

NewsGator Online: in a word, wow

According to a trip report from ETech 2006, the Newsgator API “uses standard protocols (SOAP) which can be consumed from any platform” and it “supports granular operations [add feed, mark item read, rename feed, etc]”.  The Newsgator Online API has documentation and even C# sample code here.  Newsgator also provides a web-based feed reader that’s optimized for iPhone (which is my ultimate online target) and a Windows Media Center-integrated web reader (which’d be a bonus for me with Vista Ultimate controlling my TV universe at home).  Hey, they even provide a Desktop client that integrates with the IE7/Windows RSS Store (though it’s unclear whether it syncs with NewsGator Online).  Plus there’s an Outlook-integrated client that has some of the capabilities of the Attensa Outlook client (though not the free price tag).

Rojo: unpublished API (or less?)

This one was mentioned on Wikipedia‘s article for Google Reader.  I wasn’t able to find anything on Rojo’s site about an API, but I did see a few sporadic mentions on the ‘net (e.g. Yuan.CC’s archives through 2005, a Rojo forum suggestion).  Unfortunately, as the discussion seems to center around “undocumented” APIs, I’m even less interested in this than in Google’s limited interface.

Bloglines: “it sucks a lot”

According to the ETech 2006 trip report, “The Bloglines Sync API didn’t work for synchronization since it is mainly a read-only API for fetching unread items from Bloglines. However it is complicated by the fact that fetching unread items from Bloglines marks them as read even if you don’t read them in retrieving applications.”

NewsAlloy: cool but not there yet

Currently in Beta, this free web-based feed reader implements the “Blogroll API” (whatever that means), and provides both a rich web-based and a mobile interface.  Unfortunately, the developer has been struggling off and on to keep the project going, but it’s a really cool-looking project.

Feedlounge: cancelled

Read here.

Gritwire: no idea

Gritwire Labs (where I’d expect mention of APIs to show up) doesn’t mention anything about APIs.  The author once mentioned “It relies on an XML API to receive and provide data to the backend”, but there’s little if anything else that would help me understand what this “API” is used for.

Conclusion #1: Build a Generic Sync add-in

I’d started my solution as a Google-Attensa app (heck, I’d named it GoogleReaderSyncToAttensa).  Now, after looking into this, and not seeing any one clear “winner” among the online web-based feed readers, I have decided that the only responsible thing to do is to build this add-in on the assumption that others’ll want to add their own web-based readers to the codebase.  Thus, all the direct button-to-codebehind and code-to-Attensa calls should be abstracted as much as possible to not tie the code to any one web-based reader.  The idea is to make it as *easy* as possible for someone to add their own assembly/class that encapsulates the necessary site-specific functionality, and (ideally) not have to rewrite any of the existing code, other than to call into the appropriate assembly/class.
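A minimal sketch of that plug-in abstraction (in Python for brevity rather than the C#/VB.NET the add-in would actually use, and with all class and method names invented for illustration):

```python
from abc import ABC, abstractmethod

class WebReaderProvider(ABC):
    """Hypothetical plug-in interface: each online reader (Google Reader,
    NewsGator, ...) would supply its own implementation of these calls."""

    @abstractmethod
    def list_subscriptions(self):
        """Return the feed URLs the user is subscribed to online."""

    @abstractmethod
    def read_article_ids(self, feed_url):
        """Return IDs of articles already marked read online."""

    @abstractmethod
    def mark_read(self, feed_url, article_ids):
        """Push local read status up to the online reader."""

def sync_read_status(local_read, provider, feed_url):
    """Generic sync core: knows nothing about any specific reader.
    Pushes locally-read articles up, and returns the set of articles
    read remotely that the local client should now mark read."""
    remote_read = set(provider.read_article_ids(feed_url))
    to_push = set(local_read) - remote_read
    if to_push:
        provider.mark_read(feed_url, sorted(to_push))
    return remote_read - set(local_read)
```

The point of the shape: adding a new web-based reader means writing one new `WebReaderProvider` subclass, with zero changes to the sync core or the Outlook/Attensa side.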

Heck, if I was feeling really generous, I’d even consider abstracting the Attensa interface so that Microsoft’s IE7 Feeds interface could be used as the local client, or even the inevitable Google Apps offline reader.  However, I’m getting waaayyy ahead of myself with this kind of thinking.  I’m going to have a hard enough time abstracting the remote site interfaces, let alone try to wire the classes for “plug and play” with various local feed reading apps.

Conclusion #2: I Should Just Cough Up the $30 for Newsgator Inbox

At my core, I’m just a lazy guy.  If I can find a pre-built application that does most of what I want, then I’ll suffer the pain of that rather than go off and “build” something myself.  That was the core lesson I learned from Comp Sci 1A03 (first-year programming) in undergrad, and my computing bias has been solidified that way ever since.

After looking at the breadth and depth of offerings from Newsgator (SOAP API, REST API, Outlook client, Windows Feed Store integration, iPhone web client, Windows Media Center client, and broad support for synchronization among all the feed readers) it feels like a tough choice for me: spend a few months developing a synchronization framework between Attensa and one or more online feed readers, or spend the $30 on Newsgator Inbox and postpone development indefinitely.

I guess the big question is, am I more interested in learning how to code Outlook VSTO and Web Services (SOAP and/or REST), or am I really just interested in getting on with my RSS reading and working on something else?  I do have plenty of other development projects I can continue, and it’s not like I have all this spare time to devote to multiple concurrent development projects.  On the other hand, this is an interesting space for development, and (for those folks not paying $$ to Newsgator yet) there’s something to be said for laying the groundwork for a generic offline-online RSS synchronization framework.

I’m going to have to sleep on this for a while — this is not an easy choice.

Just one of the many reasons why Vista pisses me off…

I’ve spent the better part of three nights a week, for at least a month, trying to figure out how to reinstall my Linksys WUSB54G USB Network Adapter.  I’d bought this nice little device a little while ago, and I was foolish (!?!) enough to think that I could disconnect it and plug it into any old USB port on my Vista PC, and have it work again.  [After this many years of working with USB devices in this manner, what was I thinking?!?]

Instead, I found out when I plugged it back in that its attempts to “reinstall the driver” (during creation of the “new” device — oops, I guess plugging it into a different USB port was NOT to Vista’s liking) were being stymied by one of the most impenetrable errors I’ve ever encountered: ERROR_DUPLICATE_SERVICE_NAME.  Oh sure, you’d think this would be an easy one to resolve, eh?  Sure – just try to find the duplicated name anywhere in the Services hive of the Registry.  There was nothing with “Linksys” in the name, and simply deleting anything with “Linksys” or “WUSB54G” in any key name, value name or data didn’t cut it.  Vista still bitched about the duplicate name.

The error has plenty of references online (e.g. peruse here or here), but no one seemed to have any decent solutions for resolving this with any of the Linksys network devices at all similar to mine.  Plenty of speculation, just no good results.

Yes, I tried KB 823771, I’ve tried crawling through the SETUPAPI.LOG file, and I’ve tried a number of other brick walls to bang my head against.  The closest I got with the SETUPAPI.LOG was to look for references to “xxxxx” (can’t recall what that said exactly anymore), as in:

#E279 Add Service: Failed to create service “xxxxxx”. Error 1078: The name is already in use as either a service name or a service display name.
#E033 Error 1078: The name is already in use as either a service name or a service display name.
#E275 Error while installing services. Error 1078: The name is already in use as either a service name or a service display name.
#E122 Device install failed. Error 1078: The name is already in use as either a service name or a service display name.
#E154 Class installer failed. Error 1078: The name is already in use as either a service name or a service display name.
#I060 Set selected driver.
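To save anyone else the eyeball-grep, here’s a hypothetical little helper (names invented, just a sketch) that pulls the failing service name out of SETUPAPI.LOG lines like the ones above — the quoted name is what you’d then hunt for in the Services hive:

```python
import re

# Matches SETUPAPI.LOG lines of the form:
#   #E279 Add Service: Failed to create service "SomeSvc". Error 1078: ...
ADD_SERVICE_FAIL = re.compile(
    r'Add Service: Failed to create service "([^"]+)"\. Error 1078')

def duplicate_service_names(log_text):
    """Return every service name SetupAPI failed to create with
    Error 1078 ("name is already in use")."""
    return ADD_SERVICE_FAIL.findall(log_text)
```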

Aside: Why I Hate Vista

I’m having a bitch of a time trying to get Vista to preserve a network connection through its Sleep & Resume states.  I know that part of it is the fact that the networking hardware vendors haven’t written solid, stable drivers for Vista, but considering how widespread this issue is (even to this day — what, almost a year since release?), it’s really making me more frustrated with Vista [or perhaps it’s really that I’m just pissed off at myself for having bought into the hype around Vista, when all it’s been for me since bringing it home has been needless hardware replacement and constant crashes, freezes, and troubleshooting].

This is the third network device I’ve purchased for my Vista box, and the third one that has had driver issues.  The first one just didn’t have a Vista driver, and the claimed “should be compatible” XP driver just gave Vista too many bluescreens.  The second one had a Vista driver and really good reviews, but the device would lose its driver as soon as Vista went to Sleep (and then resumed), and wouldn’t reload it until I rebooted the box.  I’m not kidding — I spent a month trying to get that one to work like it should’ve.

I’ve been a Windows bigot for most of my adult life, and I even spent six years working for Microsoft, every day spent trying to make sure that Windows would work reliably and securely for my customers.  If *I* have this much trouble with Vista, my sympathies to those of you who’ve been trying to get by on just being a *part*-time Windows geek.  [And my sarcasm should be apparent, as I am firmly of the belief that *no* one should have to learn the ins-and-outs of a computer, just to be able to operate it.  If you *want* to geek out, by all means c’mon aboard.  But if you have *other* interests, then the device should be your servant — not the other freakin’ way around.]

Resolution (?)

What did I finally do that did (or seems to have done) the trick?

I finally went through the Registry and deleted any key that in any way, shape or form referred to “USB\VID_13B1”.  The HARDWAREID for the Linksys WUSB54G USB Network Adapter is USB\VID_13B1&PID_000D (or some derivative thereof), and while this was never mentioned as the source of the error in any of the logs I crawled through, it finally seemed to me the most likely commonality among all the “duplicate names” that must’ve been detected by Vista during the attempted install of the device.  I only found a few such entries, but obviously they were the underlying showstopper for re-introducing this wireless device into my setup.
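Rather than deleting blind, it’s safer to export the hives first (e.g. `reg export HKLM\SYSTEM system.reg`) and scan the resulting text for the hardware ID.  A hypothetical sketch (helper name invented) of that scan:

```python
def keys_mentioning(reg_export_text, needle=r"USB\VID_13B1"):
    """Scan the text of a .reg export and return the key paths whose
    key name or values mention the given hardware ID (case-insensitive).
    These are the candidates to review (and back up) before deleting."""
    hits, current_key = set(), None
    lowered = needle.lower()
    for line in reg_export_text.splitlines():
        line = line.strip()
        if line.startswith("[") and line.endswith("]"):
            # A [Key\Path] line starts a new key section
            current_key = line[1:-1]
            if lowered in current_key.lower():
                hits.add(current_key)
        elif current_key and lowered in line.lower():
            # A value line under the current key mentions the ID
            hits.add(current_key)
    return sorted(hits)
```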


Another VSTO app idea? Man, I can’t keep up!

I’m an avid user of Attensa for Outlook, a free Outlook add-in for aggregating RSS feeds as folders of “messages” in Outlook.  I like it because it (a) allows me to search my feeds quickly via Windows Desktop Search, and (b) lets me read my feeds whether I’m connected to the ‘net or not.

However, there isn’t currently a free way to read my feeds via a web browser (e.g. from my new iPhone – hee hee!).  Well, I should say I can read my feeds via Google Reader, but my read/unread status doesn’t get sync’ed from Attensa to Google or back.  That means if I bravely skim through a bunch of articles in one place, I’ll likely have to wade through them (or get distracted by them) again in the other.

I had a brainwave today (stand back, that could be contagious) about how to add functionality to be able to sync back & forth, and I think I’ve just dreamt up yet another coding project for myself:

I have a pretty reasonable idea how to write managed C# or VB.NET that can integrate with Office via the Visual Studio Tools for Office model.  I’m not unfamiliar with web services, or with the basics of a .NET-based HTTP client [having just wasted a weekend authoring a very rudimentary web site parser].  I am bright enough to imagine that the Attensa add-in exposes a more abstract approach to addressing feeds & articles than just crawling the raw PST file, enumerating folders and addressing message objects directly.

Now what I’d need to know is: is there an Attensa SDK and/or API which I could leverage in an Outlook application add-in using VSTO?  Would there be any advantage to using that abstraction layer, as opposed to just enumerating the PST folders and messages directly?  If the Attensa team only exposed an unmanaged API, would I be creating a performance nightmare to code through that (with all the PInvoke‘ing that is required) rather than just take my chances with the native Outlook object model?

I can even imagine that the Attensa client might provide me a way of finding the translation between “articles from feed ‘x'” and “messages in folder ‘y'”, that relied on Attensa’s internal database, and then I could grind through the Outlook folders themselves.  That’d be a damn sight easier than trying to match up (a) feeds from the Google Reader API (article, wiki) to the folders as they’re named in the PST file, and (b) articles from the Google Reader API to the messages stored in the PST file.  It’d sure help if there was an indexed search capability in (a) the Google Reader API and (b) the Outlook PST object model.

Oh, it’s fun to imagine all the ways I could make my life easier…after six months of hard dev work to get there.  Madman I am.

Memory leaks, GDI Objects and Desktop Heap – Windows registry changes for high-memory systems

In case I haven’t blogged about it this year, I wanted to share the usual fix-up that needs to be done to make full use of more than, say, 512 MB of RAM:

I had to swap “shells” recently, dropping my laptop’s hard drive into a replacement chassis. I realized later that it had half the usual RAM, and to get back to the 2 GB I was supposed to have took a few weeks.

On the suspicion that Windows might readjust its memory allocation parameters if it detects less memory than it started with, I figured I’d check on it after getting the RAM upgraded back to 2 GB. Sure enough, things are back to the defaults:

  • the “Windows SharedSection” portion of the SubSystems\Windows Registry setting was configured to 1024,3072,512, and like Matt I boosted it to 1024,8192,2048
  • the “SessionViewSize” Registry setting was configured to 48 MB, and I boosted it to 64 MB (just another multiple of 16, and figured a little more probably goes a long way).
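For reference, here’s roughly where those values live (a hedged sketch — back up the keys before touching them, and note the exact defaults vary by Windows version):

```
Windows Registry Editor Version 5.00

; The "Windows" value under this key is a long REG_EXPAND_SZ string;
; only its SharedSection=x,y,z portion changes (e.g. to 1024,8192,2048):
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems]

; SessionViewSize is a REG_DWORD expressed in MB (0x40 = 64):
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"SessionViewSize"=dword:00000040
```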

Now go and do likewise.

Shameful: "Hotmail Fails To Deliver Up To 81% Of All Attachment Emails"

Outrageous.  First it took them FOREVER to come through on their promise to deliver 2 GB email accounts in the first place – my wife waited 3 or 4 months for her PAID account to get converted from the time they said it would happen “immediately”, and a year or more for her unpaid accounts. 

And now we see clear, damned-hard-to-refute evidence of how they’re not even providing a reliable mail delivery (let alone storage) service?

Hotmail Fails To Deliver Up To 81% Of All Attachment Emails

I can’t wait to see how they treat the stuff you put in their “reliable online storage services” – I sure ain’t gonna rely on Windows Live to back up my photos or documents.  I mean, how many of them would just “disappear” after six months of inactivity (or even just randomly, when no one’s looking)?

XSLT 1.0 defies the laws of physics – sucks AND blows simultaneously…

Holy crap, whoever came up with XSLT 1.0 must’ve really wanted me to suffer 🙂

I have spent the better part of a couple of weeks fighting with a simple piece of XSLT code to be able to generate some short, organized reports using the Microsoft Threat Analysis and Modelling tool (currently at v2.1.2).

It doesn’t help that the data model used by this tool has made a really poor choice in the way it classifies Threats:

  • The logical model would be to define an XPath like /ThreatModel/Threats/Threat[x], and then use an attribute or sub-element to assign the category of Threat
  • Instead, MS TAM v2.1.2 defines an XPath like this for each threat: /ThreatModel/Threats/$ThreatCategorys/Threats/$ThreatCategory
  • Thus, for a Confidentiality Threat, you get /ThreatModel/Threats/ConfidentialityThreats/Threats/ConfidentialityThreat
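In practice, that per-category nesting means you either hard-code an XPath for every threat category or match with wildcards two levels deep.  A small stdlib-Python illustration (the document below is a made-up minimal example, and the second category name is invented for illustration):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<ThreatModel>
  <Threats>
    <ConfidentialityThreats>
      <Threats>
        <ConfidentialityThreat Name="Eavesdropping"/>
      </Threats>
    </ConfidentialityThreats>
    <IntegrityThreats>
      <Threats>
        <IntegrityThreat Name="Tampering"/>
      </Threats>
    </IntegrityThreats>
  </Threats>
</ThreatModel>
""")

# The sane schema would allow ./Threats/Threat; instead you need a
# wildcard at each category level: Threats/<Category>s/Threats/<Category>
threats = doc.findall("./Threats/*/Threats/*")
names = [(t.tag, t.get("Name")) for t in threats]
```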

It certainly also doesn’t help that Microsoft has fallen behind all the other major XML engine vendors in implementing XSLT 2.0.  This article here indicates that not only did they wait to start any of the work until after the Recommendation was completed, but that they have NO planned ship vehicle or release date (despite the fact that XSLT 2.0 has been in the works for five years).

But really the fundamental problem that I (and about a million other people out there, over the last 7-8 years) am challenged by is the fact that you can’t use what’s called an RTF (“Result Tree Fragment”) in an XPath 1.0 expression – the XSLT 1.0 standard just doesn’t allow dynamic evaluation of such an expression, AND it didn’t provide any reasonable way to get around the problem.  This means that all the vendors providing XSLT engines had to come up with their own extensions to handle it (e.g. [1], [2], [3]), and many people have also come up with creative (but horribly obtuse) ways to work around the problem.
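Those vendor extensions all follow the same shape.  Here’s a hedged sketch of the MSXML flavour (the msxsl namespace URI is the important part): you build the fragment into a variable, then launder it through node-set() before applying XPath to it.

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:msxsl="urn:schemas-microsoft-com:xslt">

  <!-- $frag is a Result Tree Fragment: XPath 1.0 can't navigate into it -->
  <xsl:variable name="frag">
    <item priority="1"/>
    <item priority="2"/>
  </xsl:variable>

  <!-- msxsl:node-set() converts the RTF into a real node-set -->
  <xsl:template match="/">
    <xsl:value-of select="count(msxsl:node-set($frag)/item)"/>
  </xsl:template>
</xsl:stylesheet>
```

Other engines expose the same idea under exsl:node-set() or similar, which is exactly the portability mess described above.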

So it goes – I’m stuck with (a) XSLT 1.0 & XPath 1.0 + proprietary “extension functions” [1], [2] in MSXML, because (b) the Microsoft TAM tool only uses the MSXML engine (which is fair – it’s gotta default to something).

What’s REALLY painful is learning that not only did I spend weeks banging my head against a wall, learning some very obtuse and shouldn’t-be-necessary coding hacks for what in other languages are fairly trivial problems – but now I discover that it wasn’t even a question of RTFs at all, but rather that XSLT just ends up taking what I think is a reasonably well-thought-out design and dumping it on the floor.

Oh, and how did the XSLT overlords solve this problem in XSLT 2.0?  They just eliminated the limitation on RTFs in XPath expressions.  Done.  And done.

Ugh – that’ll teach me to ever get lured into using a [functional?] programming language again.  Back to C# – that seems positively trivial by comparison…

SO: if you happen to have a masochistic streak in you, or you find that you absolutely must use XSLT 1.0 and not either XSLT 2.0 or System.Xml.Xsl, then (a) you have my sympathies and (b) here are some resources that I recommend you consult sooner rather than later: