DLL Injection in Windows: what security countermeasures can you use?

Manage the Administrators group

Examine any default install of Windows since NT4 SP6.  You’ll notice that the SeDebugPrivilege is assigned by default only to the .\Administrators local group of the Windows host.  While it isn’t exactly unusual for users to be members of Administrators on their own PC, don’t assume that every user or process automatically gets this capability in Windows.

Countermeasure: If you want to assert an explicit distinction between those who do and do not have the SeDebugPrivilege on a Windows system, explicitly manage the membership of the Administrators local group.  This is especially useful (and applicable) to Windows Servers, where most of your users won’t have (or have need for) this membership.

How to implement:

  • run the net localgroup command locally e.g. with these parameters: “net localgroup Administrators NAME_OF_USER_OR_GROUP_TO_REMOVE /Delete” (or run it remotely via a remote-shell tool such as psexec.exe)
  • configure a Group Policy (e.g. using Active Directory group policy) that sets the membership of the Administrators group using the Restricted Groups security setting (either overwriting the existing membership or incrementally adding/deleting specified security principals)
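If you’d rather script that group cleanup than run net localgroup by hand, here’s a minimal C# sketch of the same idea.  It assumes .NET 3.5’s System.DirectoryServices.AccountManagement namespace is available, and “someuser” is just a placeholder for whichever principal you want out of the group:

    using System.DirectoryServices.AccountManagement;

    class TrimAdministrators
    {
        static void Main()
        {
            // Bind to the local (machine) SAM, not the domain.
            using (PrincipalContext machine = new PrincipalContext(ContextType.Machine))
            using (GroupPrincipal admins =
                   GroupPrincipal.FindByIdentity(machine, "Administrators"))
            {
                // "someuser" is a placeholder account name to remove.
                admins.Members.Remove(machine, IdentityType.SamAccountName, "someuser");
                admins.Save();
            }
        }
    }

Like the net localgroup command, this needs to run with administrative rights on the target machine.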

Manage the SeDebugPrivilege

The obvious flipside of the default SeDebugPrivilege assignment is that you can change the security principals to whom the privilege is assigned.  In fact, if you review (or have implemented) the Microsoft Security Guides for Windows (Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008), you’ll find they recommend exactly this:

Countermeasure: remove the Administrators group from the SeDebugPrivilege.

How to implement:

  • run the ntrights.exe Resource Kit command-line tool locally e.g. with these parameters: “ntrights.exe -u Administrators -r SeDebugPrivilege” (or run it remotely via a remote-shell tool such as psexec.exe)
  • configure a Group Policy (e.g. using Active Directory group policy) that removes all security principals that are assigned the SeDebugPrivilege privilege
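To double-check that the change actually took effect, a quick-and-dirty verification is to ask the token itself.  This C# sketch just shells out to whoami.exe (in the box on Windows Server 2003 and later, or available to XP via the support tools) and looks for the privilege by name:

    using System;
    using System.Diagnostics;

    class CheckSeDebug
    {
        static void Main()
        {
            // whoami /priv lists every privilege held by the current token,
            // whether it's currently enabled or not.
            ProcessStartInfo psi = new ProcessStartInfo("whoami.exe", "/priv");
            psi.UseShellExecute = false;
            psi.RedirectStandardOutput = true;

            using (Process p = Process.Start(psi))
            {
                string output = p.StandardOutput.ReadToEnd();
                p.WaitForExit();
                Console.WriteLine(output.Contains("SeDebugPrivilege")
                    ? "SeDebugPrivilege is present in this token"
                    : "SeDebugPrivilege is NOT present in this token");
            }
        }
    }

Run it from a logon session created after the change; user rights assignments are only picked up at logon.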

Run Apps, Services as lesser-privileged user

So the first two BKMs are great and all, but there are still lots of situations where you can’t make these blanket changes to the entire OS (though thankfully virtualization is reducing these “shared system” problems).  You may have to find ways to launch one or more processes with a different security context or privileges than the rest of the system – sometimes having to run something with more privilege than the rest of the system (e.g. try Sudo for Windows), but usually wanting to strip privilege and permissions away from specific processes.

Countermeasure: use Windows’ Software Restriction Policy (aka SRP or “SAFER”) to strip the Token of as many groups and privileges as the application can tolerate.  You don’t have to set a restrictive policy for the whole system – you can set this on an application-by-application basis (which can be practical in server environments, where you may only have a few critical applications to have to protect from each other).

How to implement:

  • Download and use Michael Howard’s SetSAFER application, which will strip varying levels of privileges and groups from the security token assigned to the process (thus making it more difficult for the process to access privileged objects in Windows).  If you want to dig into the code and the source isn’t available, you can look at the code included in the original article (on which SetSAFER was based), or fire up .NET Reflector and inspect the MSIL in the SetSAFER executable.
  • You could also try “psexec -l” (which implements one of the approaches taken by SetSAFER – one of the “stripped-down profiles”).
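For the curious, here’s roughly what the SAFER approach looks like in C# – the same pattern SetSAFER and “psexec -l” use: ask SAFER for the “Normal User” level, compute a restricted copy of your own token, and launch the target process with that stripped-down token.  This is a sketch with minimal error handling and no cleanup, and notepad.exe is just a stand-in for whatever application you want to constrain:

    using System;
    using System.ComponentModel;
    using System.Runtime.InteropServices;

    class SaferLaunch
    {
        // Documented SAFER constants (winsafer.h).
        const uint SAFER_SCOPEID_USER = 2;
        const uint SAFER_LEVELID_NORMALUSER = 0x20000;
        const uint SAFER_LEVEL_OPEN = 1;

        [DllImport("advapi32.dll", SetLastError = true)]
        static extern bool SaferCreateLevel(uint scopeId, uint levelId, uint openFlags,
            out IntPtr levelHandle, IntPtr reserved);

        [DllImport("advapi32.dll", SetLastError = true)]
        static extern bool SaferComputeTokenFromLevel(IntPtr levelHandle, IntPtr inToken,
            out IntPtr outToken, uint flags, IntPtr reserved);

        [DllImport("advapi32.dll", SetLastError = true)]
        static extern bool SaferCloseLevel(IntPtr levelHandle);

        [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern bool CreateProcessAsUser(IntPtr token, string applicationName,
            string commandLine, IntPtr processAttributes, IntPtr threadAttributes,
            bool inheritHandles, uint creationFlags, IntPtr environment,
            string currentDirectory, ref STARTUPINFO startupInfo,
            out PROCESS_INFORMATION processInformation);

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        struct STARTUPINFO
        {
            public int cb;
            public string lpReserved, lpDesktop, lpTitle;
            public int dwX, dwY, dwXSize, dwYSize, dwXCountChars, dwYCountChars,
                       dwFillAttribute, dwFlags;
            public short wShowWindow, cbReserved2;
            public IntPtr lpReserved2, hStdInput, hStdOutput, hStdError;
        }

        [StructLayout(LayoutKind.Sequential)]
        struct PROCESS_INFORMATION
        {
            public IntPtr hProcess, hThread;
            public int dwProcessId, dwThreadId;
        }

        static void Main()
        {
            IntPtr level, restrictedToken;
            if (!SaferCreateLevel(SAFER_SCOPEID_USER, SAFER_LEVELID_NORMALUSER,
                                  SAFER_LEVEL_OPEN, out level, IntPtr.Zero))
                throw new Win32Exception();

            // IntPtr.Zero for the input token means "start from the caller's own
            // token"; the result has the powerful groups and privileges stripped.
            if (!SaferComputeTokenFromLevel(level, IntPtr.Zero, out restrictedToken,
                                            0, IntPtr.Zero))
                throw new Win32Exception();
            SaferCloseLevel(level);

            STARTUPINFO si = new STARTUPINFO();
            si.cb = Marshal.SizeOf(si);
            PROCESS_INFORMATION pi;

            // Launch the target application under the restricted token.
            if (!CreateProcessAsUser(restrictedToken, @"C:\Windows\notepad.exe",
                    null, IntPtr.Zero, IntPtr.Zero, false, 0, IntPtr.Zero, null,
                    ref si, out pi))
                throw new Win32Exception();
        }
    }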

 

Something about this feels like I’ve missed another approach that should be mentioned in this context, but I’m sure there’s smarter folks than I reading this who can add any missing details to the picture.  Thanks, and have fun with this!


Gotta love it when Microsoft catches Microsoft doing something insecure…


The Office document conversion utility (that converts Office 2007 documents for use in Office 2003) seems to have gotten caught trying to execute (stack? heap?  I still don’t know which) memory devoted to data.  Naughty naughty, wonder how they missed that?  🙂

Which Security Event Log audit categories are most useful on a Windows client?

Let’s say you’re looking to maximize the value of the data logged to the Security Event Log on a huge number of Windows clients (say, Windows XP SP2).

Further, let’s assume that you’re not inspecting such logs on a regular basis, but instead you just want to keep the most critical events in case you have to track down some “suspicious activity” later.  [Suspicious activity would probably include such things as successful intrusions into the PC (whether by attackers or malware), which is going to be a losing battle but worth trying.]

You have two different sets of knobs to twiddle: which categories of security events will be logged, and how the security Event Log will be configured.  The categories require the more involved thinking, so let’s start with the Event Log configuration, shall we?

Security Event Log configuration

The default Security Event Log size on Windows XP is a paltry 512 KB.  [It got boosted on Windows Vista, so don’t go yelling at Microsoft — they heard ya already.]  The question isn’t whether you should increase its size, but by how much.

When it comes down to a “best practice”, I’ve always found it to be an arbitrary choice.  This choice should be informed by the level of activity you expect (or tend) to see — many customers who turn on all logging options can fill up a 10 MB log in the space of a week, but those who make more judicious choices can survive on 2048 KB for sometimes a month.

The upper limit is somewhere in the neighbourhood of 300 MB, but that limit includes all Event Logs (even custom event logs created by other applications, I believe) — this is documented in Chapter 6 of Threats and Countermeasures.  So for example, if you’ve already set the System and Application logs to 50 MB apiece, I would strongly advise a Maximum log size of somewhere around 150-200 MB for the Security event log.  [Note: there is a bug that causes problems with Security event logs over 50 MB; hopefully it has been fixed not only in Windows Server 2003 but also in Windows XP SP2.]

Aside: I’m sure there are others you might find, but on my own Windows XP box I’ve got four additional custom Event Logs:

  • Microsoft Office Diagnostics
  • Microsoft Office Sessions
  • Virtual Server
  • Windows PowerShell

The next setting to consider is how the logs will respond when they (inevitably) fill up.  There’s a setting innocuously labelled When maximum log size is reached, and there’s no perfect selection for everyone.  I’ve generally advised people to choose Overwrite events as needed, since most times, my customers would be interested in having a record of the most recent activity on the PC (e.g. tracking down details of a recent virus outbreak or suspected break-in attempt).
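Both of those knobs (log size and overwrite behaviour) can be pushed out via Group Policy, but if you’re only scripting a handful of machines, the System.Diagnostics.EventLog class exposes them directly.  A minimal C# sketch, assuming you run it with administrative rights (51200 KB is the 50 MB example from above; the value must be a multiple of 64 KB):

    using System.Diagnostics;

    class ConfigureSecurityLog
    {
        static void Main()
        {
            EventLog security = new EventLog("Security");

            // 50 MB maximum size for the Security log.
            security.MaximumKilobytes = 51200;

            // "Overwrite events as needed": keep the most recent activity.
            security.ModifyOverflowPolicy(OverflowAction.OverwriteAsNeeded, 0);
        }
    }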

Finally, if you’re really anal about your Security Event logs (and what security geek doesn’t ideally want to keep them around forever?), you can enable one or two other specialized settings created just for you — but should you?

  • WarningLevel: recent versions of Windows can warn the Administrator when the Security Event log is nearly full (the usual recommendation is 80 or 90% threshold).  Windows will record a single System event with EventID = 523.  However, this is really only useful in cases where the Administrator wants to archive all Security Event Log records for later analysis or compliance checking, and they don’t already have an infrastructure for collecting and centralizing this logging info.  Warning someone of imminent failure, when they have no way to avert disaster, is really just a tease.  Thus, the more useful setting is…
  • AutoBackupLogFiles: Rather than let the log files overwrite themselves, some would prefer to archive all log entries.  This registry setting enables Windows to automatically backup and empty the specified Event Log, so that all the entries are stored in a local file on disk.  This isn’t perfect (a malicious attacker could wipe them out, for instance) but in cases where you just can’t imagine copying the security Event log between the time the 90% alarm goes off and you get the time to deal with it, this can be an effective alternative.  The most significant consequence of this is, over time, you may end up filling the OS volume with these archived files.  However, shunting such saved data to a separate, non-OS volume — or monitoring for disk space — are the kinds of problems that aren’t difficult to solve.
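Both of those are plain registry values under the Security log’s service key, so they’re easy to push out with a script.  A hedged C# sketch (run as an administrator; 90 is the warning threshold percentage, and check Microsoft’s KB article on AutoBackupLogFiles for the retention-setting prerequisites before you rely on it):

    using Microsoft.Win32;

    class SecurityLogArchiving
    {
        static void Main()
        {
            const string keyPath =
                @"SYSTEM\CurrentControlSet\Services\Eventlog\Security";

            using (RegistryKey key = Registry.LocalMachine.OpenSubKey(keyPath, true))
            {
                // Log a warning event when the Security log is 90% full.
                key.SetValue("WarningLevel", 90, RegistryValueKind.DWord);

                // Archive and clear the log automatically when it fills up.
                key.SetValue("AutoBackupLogFiles", 1, RegistryValueKind.DWord);
            }
        }
    }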

Security Event Log Category choices

Now the tough part: deciding which Success & Failure event categories to enable.  Leaning on Eric Fitzgerald and Randy Franklin Smith, here’s the current thinking I’m advising my customers on, for keeping the noise down (and which you’re welcome to leverage, if our thinking seems to fit):

Account Logon

  • This’ll identify the local (i.e. SAM-based) accounts that users have attempted to log on with at this PC
  • If you’re interested in tracking actual user activity and successful break-ins, then enable Success auditing.
  • If you’re interested in (and plan to actually investigate) attempted but failed break-ins, and if your users don’t use local accounts (and thus won’t be the overwhelming cause of failed account logon attempts due to fat-fingering their password), then enable Failure auditing.  Under such circumstances, this shouldn’t be a significant contributor to the security logs.
  • Recommendation: enable Success and Failure auditing.

Account Management

  • This’ll identify such things as account creation, password reset and group membership changes.
  • Under normal circumstances these should be highly useful records (both the successful changes and the attempts) — especially if you don’t often manipulate local accounts on your XP clients.
  • Recommendation: enable Success and Failure auditing.

Directory Service Access

  • pointless — this only applies to Domain Controllers
  • Recommendation: No Auditing

Logon events

  • In a non-domain context, this doesn’t add much value over and above Account Logon auditing
  • Recommendation: No Auditing

Object Access auditing

  • This is a tricky one.  It logs little or nothing by default, even when Success and Failure auditing are enabled for this.
  • Used correctly, you can collect information with a fairly high signal-to-noise ratio.
  • Used incorrectly, however (and I was as guilty of this as anyone in my early career, and am still guilty today), you’ll wipe out any useful information that the security log might’ve otherwise kept for you.
  • For example, I’m currently recording “Handle Closed” and “Object Access Attempted” events dozens or hundreds of times an hour.  What is being accessed?  LSASS.  Why?  Because of a single “Everyone: Full Control” auditing entry I added to the EFS\Current Keys registry key, to try to track down some odd behaviour a few months ago.  I’d forgotten about this ever since, and now I’m filling my 10 MB security log every 36 hours.
  • If you follow a VERY specific set of SACLs as in the EricFitz article linked above, then you will get some real value out of this category.
  • Recommendation: only enable Success and Failure auditing if you have specific activity you’re looking for, but be VERY careful when setting any SACLs on the system.

Policy Change

  • I’ve never seen anything in this category that really helps track down malicious behaviour
  • While it may be interesting to highlight attempted (or successful) changes to Audit policy or assigned user rights, I’m extremely skeptical that any of this information would be conclusive.
  • However, with Windows XP SP2 and the use of Windows Firewall, there are a number of very specific audit records (e.g. Event IDs 851, 852, 860) that track changes in the Windows Firewall configuration.  [It’s unfortunate that there’s not better info on the source of those changes.]
  • If you’re using the Windows Firewall in XP SP2, these records could well be useful in isolating the source, cause, or spread of a malware outbreak.
  • Recommendation: enable Success and Failure auditing when using Windows Firewall.

Privilege Use auditing

  • One of the greatest sources of log pollution, with little practical application.
  • This looks very useful to a security geek on paper, but in practice 99% of the recorded events will be (a) legitimate behaviour and (b) completely harmless.
  • Recommendation: No Auditing

Process Tracking

  • Aka “Detailed Tracking” (which is how these events are labelled in the security Event Log)
  • A great way to swell the size of your security logs, unless your PCs run a very small number of applications for very long periods of time.
  • However, when you’re using Windows Firewall, Failure auditing will record (in Event ID 861) a number of potentially useful pieces of information about any application that attempts to open an exception in the Firewall rules.
  • This logging can be very frequent (I show over 2000 events in the last 36 hours on my PC), but it will give very detailed information on the Port opened, the process that bound it, and whether the process is a service or RPC application.
  • (One good non-security use for this auditing capability is to troubleshoot unknown application behaviours.)
  • Recommendation: enable Failure auditing when using Windows Firewall.

System events

  • The only semi-useful information I’ve ever found from this auditing is the startup and shutdown events, and they’re much more useful for determining uptime statistics (and otherwise unseen BSOD events) than they are for security.
  • Unfortunately, these events get buried under the amazing number of 514, 515 and 518 events that accumulate in the space of a few days.
  • Recommendation: No Auditing

Summary: Windows XP Security Event Log auditing category recommendations

Security Event Log Category | Recommended Audit Level
Account Logon | Success, Failure
Account Management | Success, Failure
Directory Service Access | No auditing
Logon events | No auditing
Object Access auditing | No auditing*
Policy Change | No auditing*
Privilege Use auditing | No auditing
Process Tracking | No auditing*
System events | No auditing

* except in unusual circumstances, see above.

Advanced Oddities

Per-user Auditing

  • As of Windows XP SP2, auditing can be enabled or disabled for any or all users
  • Each category can be separately configured as well
  • On a PC with many user accounts, this would be useful to help remove the less interesting entries
  • However, where few accounts exist, and for PCs not joined to a domain, per-user auditing is not advised

Windows Firewall auditing

  • As I hinted above, there are some aspects of Windows Firewall’s operations that can be logged to the Security Event Log, and which don’t get logged to the pFirewall.log.
  • For organizations using Windows Firewall, and especially those that don’t have a perfect idea of all the exceptions they need to open up on their users’ systems, this auditing can be extremely useful.
  • Recommendation: To capture this data, you should enable Policy Change (success and failure) and Process Tracking (failure) auditing on the target systems
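Once those categories are turned on, pulling the firewall-related records back out is straightforward.  Here’s a small C# sketch that scans the Security log for the Event IDs mentioned above (851, 852 and 860 from Policy Change, 861 from Process Tracking); reading the Security log requires administrative rights:

    using System;
    using System.Diagnostics;

    class FirewallAuditScan
    {
        static void Main()
        {
            long[] interesting = { 851, 852, 860, 861 };
            EventLog security = new EventLog("Security");

            foreach (EventLogEntry entry in security.Entries)
            {
                // InstanceId carries the Event ID for these records.
                if (Array.IndexOf(interesting, entry.InstanceId) >= 0)
                {
                    Console.WriteLine("{0}  {1}  {2}",
                        entry.TimeGenerated, entry.InstanceId, entry.Source);
                }
            }
        }
    }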

File/Registry access auditing

  • If you’re interested in detecting attacks that tamper with system files, then EricFitz has some fascinating work you should examine
  • His work became the input for the Security Configuration Wizard in Windows Server 2003 SP1
  • Having had a quick look at it, there’s nothing that looks dangerous or unsuitable for an XP client
  • Recommendation: if you’d like a quick & dirty way to detect changes to system files, cut and paste those “file access auditing” settings from the SCW templates, and make sure that you’ve also enabled Object Access auditing (success and/or failure, depending on whether you’re after actual changes or just attempted changes)

Full Privilege Auditing

  • You can toggle a Registry setting known as (duh) FullPrivilegeAuditing, but be warned: it is disabled by default for good reason
  • Recommendation: do NOT enable this setting

Audit the access of global system objects

  • Ever since this got added late in the NT4 service pack cycle, I’ve never quite figured out what this really tells me.  Eric doesn’t seem too interested in this either, at least for most of us.
  • Recommendation: turn this setting Off

Audit the use of Backup and Restore privilege

  • This setting blows me away — it’ll fill up the most generous security event log, ’cause it creates an entry for each file that is backed up or restored
  • Recommendation: do NOT enable this setting

CrashOnAuditFail aka “Shut down system immediately if unable to log security audits”

  • Are you nuts?  Have you ever met a sysadmin that voluntarily puts in place a predictable Denial of Service attack?
  • If you’re that one-in-a-million organization that can actually implement this setting, I want to hear from you.  Yours is a tale I just gotta hear…
  • Recommendation: duh, do NOT enable this setting

For More Information…

Eric Fitzgerald is an old colleague of mine from my days at Microsoft, and I have an incredible amount of respect for the depth and persistence with which he pursued issues in the Auditing subsystem of Windows over the years.  He’s like the Rain Man of Windows security eventing, except I don’t think he’s much of a fan of Wapner. 😉  Eric’s “Windows Security Logging and Other Esoterica” blog is chock full of Windows security auditing goodness.

Windows Security Log Encyclopedia — Randy Franklin Smith’s take on Security Event Logs

Technet Events & Errors Message Center — detailed information backing up each security Event ID and what it means.

Deciphering Account Logon Events — in case you wonder what “Logon Type 5” really means…

Account Management — disabling the noise — and we’re done!

 

[Apologies to anyone monitoring my external blog, as this is a straight repost.  However, I’m assuming very few of you know about both, so I’m going to start reposting anything that’s applicable to both audiences.]

BBC newsman stung by his own arrogance; glass houses, anyone?

I read this, and while I wanted to laugh along with the original blogger whose post led me to this, I have to feel sorry for all the rest of us who aren’t so arrogant:

BBC NEWS | Entertainment | Clarkson stung after bank prank

He was reporting on the loss of 25 million bank account numbers, and thought the controversy was so silly that he actually read out his account information to his viewers.  He honestly believed that it wouldn’t be possible to withdraw money from his account with that information.

Of course, the only reason why I’m writing anything about this is that he was spectacularly wrong — someone donated £500 to a charity from his account, and it cannot be traced.

I have to believe that there are plenty of reasonably technical folks out there who’d believe the same thing.  Hell, I’ve been guilty of similarly arrogant statements in the past, and I can only thank the fates that I wasn’t similarly embarrassed by them.  One of the core encryption technologies with which I worked turned out to have a brutal backdoor (that I don’t believe has yet been fixed) that I had claimed for years was patently impossible.

Am I embarrassed now?  You bet.

What can I do to avoid that in the future?  Well, I won’t pretend it didn’t happen.  I can’t make reality go away.  However, I can continue to remind myself (preferably with sharpened sticks) of these failures when I’m about to say:

  • “There’s much bigger problems to worry about than that little security hole”
  • “That’s a perfectly reasonable way to mitigate against the script kiddies”
  • “Sure, there’s always the risk of some determined attacker spending unlimited time and resources, but the odds of that happening in this case are vanishingly small”

I know these sound like stupid statements when they’re presented in this fashion, but stop for a moment and ask: have you never said anything like this?  What makes that different from this?  Is it a perfectly sound security solution, or is it just that no one’s discovered the circumvention approach yet?

Windows Update — talk about shooting yourself in both feet…

[Image: Microsoft Update advertisement]

For the love of Pete (who’s Pete you ask?  It’s a joke, son), who’s keeping watch over Microsoft customers’ safety and security?  For well over a year now, I’ve encountered Windows XP SP2 PC after PC, dutifully configured to automatically download and install all high-priority updates.  Some of these PCs, I’ve mothered over multiple times, hoping that I was seeing just a one-time problem that would be magically resolved the next time I arrived.

Microsoft even makes a big deal in its advertising about the fact that Windows Update (or Microsoft Update, if you’ve opted-in to this long-overdue expansion of updates across many Microsoft consumer and business products) “…helps keep your PC running smoothly — automatically”.  [And if you don’t believe me, check it out for yourself.]

Hogwash, I say.

Windows Update?  It’s more like “Rarely Update”, or “Windows Downtime”.

In almost every single case (and I suspect the rare PCs that weren’t this way had been similarly mothered by some other poor lackey of the Beast from Redmond), I’ve found that I had to visit the Windows Update web site, download yet another update to the “Windows Genuine Validation” ActiveX control, install this piece o’ quicksand, and then subject my friend’s (or family member’s) PC to the agony of between one and three (depending on how long it’d been since I last visited) sessions of downloading and installing the very updates that they (and I) continued to falsely believe were being downloaded “automatically”.

In those cases where it’d been a year or more since the last occasion of hand-holding by me, the cycle of abuse wasn’t complete with a single session — I had to reboot after all “available” updates were installed, and re-visit Windows Update to find yet *another* batch of updates that magically appeared on this subsequent go-around.

How does this happen?  How could a service that is supposed to minimize the occurrence of unpatched PCs turn against itself so horribly?

I have to imagine that the WU (Windows Update) team doesn’t have any oversight or centralized control over the content that’s being hosted on their site.  If they did (and assuming they’re the folks who paid for the above ad), then they’d take their responsibilities more seriously, and make sure their site could deliver on the promise being advertised.

As it stands, it appears that the team responsible for Windows Genuine Validation feels it’s more important to ensure that their software is being explicitly installed by the end user, than to ensure that Microsoft’s customers are being adequately protected from the constant onslaught of Windows-targeting malware.

Each and every time I have visited the Windows/Microsoft Update site on these “under-managed” PCs (i.e. PCs owned by those folks who have left their PCs alone, as they’ve been promised to be able to by Microsoft), I’ve found that I had to perform the “Custom” scan, then accept the only-via-the-web download for the Windows Genuine Validation software, and only then is the computer capable of automatically downloading the remaining few dozen updates that have been queued up while the PC was blocked by the requirement to download the validation control.

It seems like the Windows Genuine Validation team isn’t satisfied with their software getting onto every Windows PC in existence; they also seem bound & bent to ensure that every user is explicitly aware that they’re being surveilled by the Microsoft “licensing police”.

Why is it that Windows Update (or Microsoft Update) can update every other piece of software on my Windows PC automatically, but the license police can’t (or won’t) get its act together and make their (unwanted but unavoidable) software available automatically as well?  And don’t tell me it’s a “privacy” thing, or that it wasn’t explicitly allowed in the Windows XP SP2 EULA.  We’ve had plenty of opportunities to acknowledge updated privacy notifications or EULA addenda (hell, there’s at least one of those to acknowledge every year via WU, it seems), so that don’t fly.

So here’s my proposition: I’d love to see the Windows Genuine Validation team fall in line with the rest of the Microsoft “internal ecosystem” and figure out a way to make it so that WU/MU automatic updates actually become automatic again.  Wouldn’t it be grand if Windows systems around the world were still able to keep on top of all the emerging threats on behalf of all those individuals who’ve filled Microsoft’s coffers over the years?

Let’s get the current WGA control packaged up like any other High-Priority update and pushed down on the next Patch Tuesday (pitch it as if it’s similar to the monthly malware scanning tool).  If you have to, add in one of those EULA addenda (with or without a prominent privacy notification up front), and if you’re really worried, run a big press “push” that gets the word out that a privacy notification is coming.  C’mon Microsoft!  You’ve conquered bigger engineering problems before.  This one (at least to my naive viewpoint) can’t possibly be that hard…

What does it really mean to Prevent Buffer Overruns in Managed Code, Michael Howard?

One of the reasons I’m spending so much of my free time writing code (and neglecting my wife and dogs, much to their chagrin and my isolation) is that I’m trying to personalize the lessons of developing code, and developing secure code, that I preach as part of my day-to-day job.

I’ve been seeing a lot of references to “don’t trust user input”, and I’ve been trying to figure out what I’m supposed to do in managed code.  What I’m really after are some code samples or some prescriptive guidelines.

Of all the resources I know of on the subject, I suspect the best guidance I’ll find is in the book 19 Deadly Sins Of Software Security: Programming Flaws and How To Fix Them (Howard, LeBlanc, Viega).  I flipped through this a couple of months ago and while it seemed heavily weighted towards unmanaged code (C and C++), I seem to remember a reasonable amount of mention of managed code as well.

When I dug into the table of contents, there wasn’t any one chapter entitled “don’t trust user input”.  Instead there are titles like “Sin 1: Buffer Overruns“, “Sin 2: Format String Problems“, “Sin 3: Integer Overflows“, “Sin 4: SQL Injection“, “Sin 5: Command Injection” and “Sin 14: Improper File Access“.  [I believe these are all the sins that relate to trusting user input, but I’m sure that’s hardly all the ways that trusting user input can be harmful to your code’s health!]

Sin 1: Buffer Overruns

So it looks like this is the most significant of all the Sins to consider when developing managed code.  Not only does it encapsulate the kind of thinking that should be applied to the other Sins, but it’s also the most prevalent issue to expect in managed code, and it applies to all types of managed code applications.

While I’ve understood for years what a buffer overrun means in general, I’ve never paid too much attention to thinking through exactly how to implement protections against buffer overruns.  What’s worse, the guidance for managed code developers in this book isn’t exactly crystal-clear (at least, not to a relative novice like me):

C# enables you to perform without a net by declaring unsafe sections; however, while it provides easier interoperability with the underlying operating system and libraries written in C/C++, you can make the same mistakes you can in C/C++. If you primarily program in higher-level languages, the main action item for you is to continue to validate data passed to external libraries, or you may act as the conduit to their flaws.

So what does this mean to the managed code developer?  Am I reading this right, that we should only have to worry about calls to unmanaged code, and that all managed code functions are perfectly fine as-is?  Or is this also trying to say that any calls between assemblies, whether managed-managed code or managed-unmanaged code, should be equally guarded so that all passed buffers are checked?

Let’s assume for the moment that it’s the former, and that only when we’re calling into an unmanaged code (PInvoke) function do we need to worry about protecting against buffer overruns.  Should we assume that every single PInvoke needs to be protected against buffer overruns, no matter what?  Or should we focus instead on following external user inputs, tracing them through our code, and only put guard code in place at one or more of those chained calls, when that external input will actually intersect with a PInvoke function?

Put another way, does this advice mean we should focus on the “back end” (protecting every PInvoke), or should we focus on the “front end” (tracing external input to any PInvoke)?

I have no real appreciation for this space, and I can imagine good reasons for taking either approach.  However, I also don’t relish the thought of either approach.  I’d hate to have to try to trace every external input all the way through the twisty paths that it’ll often take — what a nightmare for a large codebase (what a grueling code review that’d be)!  On the other hand, it seems really inefficient to have to wrap every PInvoke in some form of guard code (or worse, wrap every call to the PInvoke – thus duplicating the extra code over and over, and still leaving yourself open to overlooking one or more critical calls).

And hey — if every PInvoke should always be wrapped in anti-overrun guard code, then shouldn’t the Microsoft employee who runs PInvoke.net be aware of that, and be ensuring that such guard code is included in every PInvoke signature that’s documented on that site?  Based on this reasoning, I’d have to believe that it’s not practical — or not even theoretically effective — to try to protect against buffer overruns in the PInvoke signatures.
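To make the “guard code” idea concrete (whichever end you decide to put it on), here’s a hedged sketch of the wrapper approach.  Note that nativelib.dll and its native_log export are made-up stand-ins, not a real library; the point is that the validation lives in exactly one managed wrapper, so every caller goes through the guard instead of each call site re-implementing (or forgetting) it:

    using System;
    using System.Runtime.InteropServices;
    using System.Text;

    static class NativeLog
    {
        // Whatever the real size of the unmanaged buffer is.
        const int MaxMessageBytes = 1024;

        // Hypothetical unmanaged export; stands in for any P/Invoke you call.
        [DllImport("nativelib.dll", CharSet = CharSet.Ansi)]
        static extern void native_log(string message);

        public static void Write(string messageFromOutside)
        {
            if (messageFromOutside == null)
                throw new ArgumentNullException("messageFromOutside");

            // Validate against the *native* limit (bytes after ANSI conversion),
            // not just the managed character count.
            int byteCount = Encoding.Default.GetByteCount(messageFromOutside);
            if (byteCount >= MaxMessageBytes)
                throw new ArgumentException("Message too long for the native buffer.");

            native_log(messageFromOutside);
        }
    }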

Quick Analysis of the Rest of the “User Input” Sins

Sin 2: Format String Problems

It sounds like the only significant effect of this Sin on managed code is when reading in input from external files.  The recommended “guard code” is to try to be sure you’re reading the file you want (and not some path- or filename-spoofed substitute).
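A hedged sketch of that kind of guard, using nothing fancier than path canonicalization (the C:\MyApp\Templates folder is just a placeholder for wherever your trusted files actually live):

    using System;
    using System.IO;

    class TemplateReader
    {
        static StreamReader OpenTemplate(string fileNameFromUser)
        {
            string baseDir = Path.GetFullPath(@"C:\MyApp\Templates")
                             + Path.DirectorySeparatorChar;

            // Canonicalize first, then confirm the result is still under the
            // folder we expect (catches "..\..\" and similar tricks).
            string fullPath = Path.GetFullPath(Path.Combine(baseDir, fileNameFromUser));
            if (!fullPath.StartsWith(baseDir, StringComparison.OrdinalIgnoreCase))
                throw new ArgumentException("Path escapes the template folder.");

            return new StreamReader(fullPath);
        }
    }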

Sin 3: Integer Overflows

It sounds like the only time this is a problem in managed code is when performing calculations inside unmanaged code.  If I’m reading this right, the recommended “guard code” would check that the integer values passed into the unmanaged code call are in fact valid (i.e. within the range you expect).
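In C# that mostly comes down to range-checking the values and doing any size arithmetic inside a checked block, so a wrapped-around result throws instead of silently feeding a bogus length to the unmanaged call.  A small sketch:

    using System;

    class IntegerGuards
    {
        // Validate externally supplied counts before they reach unmanaged code.
        static int BufferSizeFor(int recordCount, int recordSize)
        {
            if (recordCount < 0) throw new ArgumentOutOfRangeException("recordCount");
            if (recordSize < 0) throw new ArgumentOutOfRangeException("recordSize");

            checked
            {
                // Throws OverflowException instead of wrapping around to a
                // small (and exploitable) allocation size.
                return recordCount * recordSize;
            }
        }
    }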

Sin 4: SQL Injection

I’m not touching any SQL databases or data access libraries, so this is irrelevant to my current investigations.  If it’s relevant for you, go read everything you can on the subject — it’s a doozy.

Sin 5: Command Injection

No .NET languages are mentioned in this chapter, but I would imagine that anytime a “shell execute” type command is instantiated, this vulnerability could be present.  In such cases, I would follow the same advice they give: “You can either validate everything you’re going to ship off to the external process, or you can just validate the parts that are input from untrusted sources. Either one is fine, as long as you’re thorough about it.”
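Here’s a hedged C# example of the “validate the untrusted parts” route for the shell-execute case (ping.exe is just a stand-in for whatever external process you launch):

    using System;
    using System.Diagnostics;
    using System.Text.RegularExpressions;

    class SafeLauncher
    {
        static void PingHost(string hostNameFromUser)
        {
            // Whitelist the characters we're willing to pass along; reject
            // anything that could be interpreted as extra arguments or commands.
            if (!Regex.IsMatch(hostNameFromUser, @"^[A-Za-z0-9\.\-]{1,255}$"))
                throw new ArgumentException("Host name contains unexpected characters.");

            ProcessStartInfo psi = new ProcessStartInfo("ping.exe", "-n 1 " + hostNameFromUser);
            psi.UseShellExecute = false;   // don't let the shell re-interpret the arguments
            Process.Start(psi);
        }
    }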

Sin 14: Improper File Access

It sounds like there’s no easy “rules” to implement as guard code for this class of flaw, but rather to be hyper-vigilant anytime managed code calls System.IO.File or StreamReader methods.

Five Ways to Use Visual Studio to Avoid Secure Coding Mistakes

I was talking with a colleague recently, and we got on the subject of static analysis and why we all have to suffer with the problem of first making the mistakes in code, and then fixing them later.  She challenged me to come up with some ways that we could avoid the mistakes in the first place, and here’s what I told her:

  1. IntelliSense — the Visual Studio IDE is pretty smart about providing as-you-type hints and recommendations on all sorts of common coding flaws (or at least, it catches me on a lot of the mistakes that I frequently make), and they’re enabled out of the box (at least for Visual Basic.NET — I can’t recall if that’s true for C# as well).  [But I wonder why IntelliSense doesn’t handle some of the basic code maintenance?]
  2. Code snippets — Visual Studio has a very handy feature that allows you to browse a self-describing tree of small chunks of code, that are meant to accomplish very specific purposes.  These snippets save lots of time on repetitive or rarely-used routines, and reduce the likelihood of introducing errors in similar hand-coded blocks of code.
  3. PInvoke.net — if you ever need to P/Invoke to Win32 APIs (aka unmanaged code), this free Visual Studio add-on gives you as definitive a library as exists of recommended code constructs for doing this right.
  4. Code Analysis (cf. FxCop) — this is a bit of a cheat, as these technologies at first are simply about scanning your code (MSIL in fact) to identify flaws in your code (including a wide array of security-related flaws).  However, with the very practical tips they provide on how to resolve the coding flaw, this quickly becomes a teaching tool to reinforce better coding behaviours so you (and I) can avoid making those mistakes again in the future.
  5. Community resources — F1 is truly this coder’s best friend.  Banging on the F1 key in Visual Studio brings up a multi-tabbed search UI that gives you access not only to local and online versions of MSDN Library, but also to two collections that I personally rely on heavily: the CodeZone community (a group of MS-friendly code-junkie web sites with articles, samples and discussions) and the MSDN Forums (Microsoft’s dazzling array of online Forums for discussing every possible aspect of developing for the Microsoft platform).  If there’s one complaint I have about the MSDN Forums, it’s that there are so freakin’ many of them, it’s very easy to end up posting your question to the wrong Forum, only to have the right one pointed out to you later (sometimes in very curt, exasperated, “why do these morons keep showing up?” form).

However, if like me you’re not satisfied with just the default capabilities of Visual Studio, then try out some of these add-ons to enhance your productivity:

  • Code snippets: there are a large number of third-party code snippets available from http://www.gotcodesnippets.net as well (though the quality of these is totally unverified, and they should be approached with caution).

 

  • Code Analysis (FxCop):
    • JSL FxCop — a coding tool that eases the difficulty of developing custom rules, as well as a growing library of additional rules that weren’t shipped by Microsoft.
    • Detecting and Correcting Managed Code Defects — MSDN Team System walkthrough articles for the Code Analysis features of Visual Studio.

I’m also working on trying to figure out how to add a set of custom sites to the Community search selections (e.g. to add various internal Intel web sites as targets for search).

Intriguing question: What standards does Authenticode use?

A colleague of mine just asked a very interesting if potentially misleading question: what standards are used/implemented by Microsoft Authenticode?

I felt pretty dumb at first, because I couldn’t even grok the question.  Authenticode is implemented by a set of APIs, mostly shipped with IE and its dependencies – that’s the nearest I can come to a reasonably relevant answer to the question.

When pressed for details, it turns out the context was a security investigation of a particular set of software being developed by a non-security group.  The security auditor was looking for answers to questions like which digital signature standards are implemented by their code, what crypto algorithms, etc, and the responses from the developers were of the form “don’t worry, we’re using crypto, it’s all well-understood”.

I’ve been in this situation many times, and I have a permanent forehead scar from the amount of time I’ve beaten my head against a wall trying to get answers to such questions out of Developers.  I have learned (the hard way) that this is a fruitless exercise – it’s like asking my mom whether her favourite game is written in managed or unmanaged code.  Either way, the only response you should expect is a blank stare.  [Yes, there are a small minority of developers who actually understand what is going on deep beneath the APIs of their code, but with the growth of 3rd & 4th-generation languages, that’s a rapidly dying breed.]

Advice: Look for Code, not Standards

My advice to my colleague, which I’m sharing with you as well, is this: don’t pursue the security constructs implemented (or not) by their code.  If you don’t get an immediate answer to such questions, then switch as fast as possible to unearthing where in their code they’re “using crypto”:

  • What component/library/assembly, or which source file(s), are “doing the crypto operations”?
  • Which specific API calls/classes are they using that they believe are implementing the crypto?

With a narrowed-down list of candidate APIs, we can quickly search the SDKs & other documentation for those APIs and find out everything that’s publicly known or available about that functionality.  This is a key point:

  • once the developers have implemented a piece of code that they believe meets their security requirements, often they cannot advance the discussion any further
  • once you’ve found the official documentation (and any good presentations/discussions/reverse-engineering) for that API, there’s usually no further you can take the investigation either.
  • If you’re lucky, they’re using an open-source development language and you can then inspect the source code for the language implementation itself.  However, I’ve usually found that this doesn’t give you much more information about the intended/expected behaviour of the code (though sometimes that’s the only way to supplement poorly-documented APIs), and a security evaluation at this level is more typically focused on the design than on finding implementation flaws.  [That’s the realm of such techniques as source code analysis, fuzzing & pen testing, and those aren’t usually activities that are conducted by interviewing the developers.]

Specific Case: Authenticode

Let’s take the Authenticode discussion as one example:

  • the developers are almost certainly using Win32 APIs, not managed code, since managed code developers more often refer to the System.Security namespace & classes – however, ShawnFa makes it clear that Authenticode also plays in the managed code space, so watch out.
  • Authenticode is implemented by a number of cryptographic APIs in the Win32 space
  • This page leads one to think they ought to read works from such esteemed authorities as CCITT, RSA Labs and Bruce Schneier, but as with most Microsoft stuff you’re better off looking first at how Microsoft understands and has interpreted the subject.
  • My understanding of Authenticode is that it’s more or less a set of tools and common approaches for creating and validating digital signatures for a wide array of binary files
  • However, its most common (or perhaps I should say most attention-generating) usage is for digitally signing ActiveX controls, so let’s pursue that angle
  • A search of MSDN Library for “activex authenticode” leads to an array of articles (including some great historical fiction – “We are hard at work taking ActiveX to the Macintosh® and UNIX“)
  • One of the earliest (and still one of the easiest to follow) was an article written in 1996 (!) entitled “Signing and Marking ActiveX Controls“.  This article states:
    • Once you obtain the certificate, use the SIGNCODE program provided with the ActiveX software development kit (SDK) to sign your code. Note that you’ll have to re-sign code if you modify it (such as to mark it safe for initializing and scripting). Note also that signatures are only checked when the control is first installed—the signature is not checked every time Internet Explorer uses the control.
  • Another article indicates “For details on how to sign code, check the documentation on Authenticode™ in the ActiveX SDK and see Signing a CAB File.”  The latter also says to use SIGNCODE; the former wasn’t linked anywhere I looked on the related pages.

Further searches for the ActiveX SDK led to many pages that mention but do not provide a link to this mysterious SDK. [sigh…]  However, I think we can safely assume that all APIs in use are those implemented by SIGNCODE and its brethren.  [If you’re curious which ones specifically, you could use Dependency Walker (depends.exe) to make that determination.]

  • However, one of the articles I found has led me to this, which I think provides the answers we’re after: Signing and Checking Code with Authenticode
    • “The final step is to actually sign a file using the SignCode program. This program will:
      • 1. Create a Cryptographic Digest of the file.
        2. Sign the digest with your private key.
        3. Copy the X.509 certificates from the SPC into a new PKCS #7 signed-data object. The PKCS #7 object contains the serial numbers and issuers of the certificates used to create the signature, the certificates, and the signed digest information.
        4. Embed the object into the file.
        5. Optionally, it can add a time stamp to the file. A time stamp should always be added when signing a file. However, SignCode also has the ability to add a time stamp to a previously signed file subject to some restrictions (see the examples that follow the options table).”
    • -a = “The hashing algorithm to use. Must be set to either SHA1 or MD5. The default is MD5.
    • -ky = “Indicates the key specification, which must be one of three possible values:

      1. Signature, which stands for AT_SIGNATURE key specification.
      2. Exchange, which stands for AT_KEYEXCHANGE key specification.
      3. An integer, such as 3.
      See notes on key specifications below.”

    • “The ChkTrust program checks the validity of a signed file by:
      1. Extracting the PKCS #7 signed-data object.
      2. Extracting the X.509 certificates from the PKCS #7 signed-data object.
      3. Computing a new hash of the file and comparing it with the signed hash in the PKCS #7 object.

      If the hashes agree, ChkTrust then verifies that the signer’s X.509 certificate is traceable back to the root certificate and that the correct root key was used.

      If all these steps are successful, it means that the file has not been tampered with, and that the vendor who signed the file was authenticated by the root authority.”

    • “The MakeCTL utility creates a certificate trust list (CTL) and outputs the encoded CTL to a file. MakeCTL is supported in Internet Explorer 4.0 and later.

      The input to MakeCTL is an array of certificate stores. MakeCTL will build a CTL which includes the SHA1 hash of all of the certificates in the certificate stores. A certificate store can be one of the following:

      • A serialized store file
      • A PKCS #7
      • An encoded certificate file
      • A system store”

 

For those who still want more detail, I’d recommend digging into CryptoAPI and especially reviewing the FIPS submissions Microsoft has made for the Windows components that FIPS evaluated.
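If you just want to see what’s actually embedded in a signed file, the managed libraries can get you partway there.  This sketch only extracts the signer’s certificate from the PKCS #7 blob; the full chain-and-trust check described above is the job of the WinVerifyTrust API (which is what ChkTrust and IE rely on):

    using System;
    using System.Security.Cryptography.X509Certificates;

    class ShowSigner
    {
        static void Main(string[] args)
        {
            // args[0]: path to an Authenticode-signed file (EXE, DLL, CAB, ...)
            X509Certificate signer = X509Certificate.CreateFromSignedFile(args[0]);

            Console.WriteLine("Subject: " + signer.Subject);
            Console.WriteLine("Issuer:  " + signer.Issuer);
            Console.WriteLine("Hash:    " + signer.GetCertHashString());
        }
    }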

 

Aside: Here’s a really neat easter egg I stumbled on: the Internet Explorer Application Compatibility VPC Image.  You can download a Virtual PC image pre-installed with the stuff you need to troubleshoot compatibility issues for IE apps, add-ins etc.  Very helpful – it’ll save you a few hours of setting up a clean testing environment every time you run into a problem (or if you’re like me, it’ll save you weeks of hair pulling when trying to troubleshoot such issues using your own over-polluted browser).

Troubleshooting RunOnce entries: Part One

I’ve been investigating the root cause of a critical issue affecting my CacheMyWork app (for those of you paying attention, it has come up in the past in this column).  Ever since I received my (heavily-managed) corporate laptop at work, I’ve been unable to get Windows XP to launch any of the entries that CacheMyWork populates in RunOnce.

Here’s what I knew up front

  • On other Windows XP and Windows Vista systems, the same version of CacheMyWork will result in RunOnce entries that all launch at next logon
  • On the failing system, the entries are still being properly populated into the Registry – after running the app, I’ve checked and the entries under RunOnce are there as expected and appear to be well-formatted
  • The Event Log (System and Application) doesn’t log any records that refer even peripherally to RunOnce, let alone that there are any problems or what might be causing them
  • The entries are still present as late as just before I perform a logoff (i.e. they’re not being deleted during my pre-reboot session).

Here’s what I tried

UserEnv logging
  • I added HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\UserEnvDebugLevel = 30002 (hex).
  • This is able to show that the processes I’m observing are firing up correctly, but there is nothing in the log that contains “runonce” or the names of the missing processes, and I haven’t spotted any entries in the log that point me to any problems with the RunOnce processing.
ProcMon boot-time logging
  • I’ve got over 3.3 million records to scan through, so while I haven’t found anything really damning, I may never be 100% sure there wasn’t something useful.
  • After a lot of analysis, I found a few interesting entries in the ProcMon logs:
Process | Request | Path | Data
mcshield.exe | RegQueryValue | HKLM\SOFTWARE\Network Associates\TVD\Shared Components\On Access Scanner\BehaviourBlocking\FileBlockRuleName_2 | Prevent Outlook from launching anything from the Temp folder
mcshield.exe | RegQueryValue | HKLM\SOFTWARE\Network Associates\TVD\Shared Components\On Access Scanner\BehaviourBlocking\FileBlockRuleName_10 | Prevent access to suspicious startup items (.exe)
mcshield.exe | RegQueryValue | HKLM\SOFTWARE\Network Associates\TVD\Shared Components\On Access Scanner\BehaviourBlocking\FileBlockWildcard_10 | **\startup\**\*.exe
BESClient.exe | RegOpenKey | HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce | Query Value
Explorer.exe | RegEnumValue | HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce | Index: 0, Length: 220
waatservice.exe | RegOpenKey | HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce | Desired Access: Maximum Allowed
Windows Auditing

I finally got the bright idea to put a SACL (audit entry) on the HKCU\…\RunOnce registry key (auditing any of the Successful “Full Control” access attempts for the Everyone special ID). After rebooting, I got a hit on the HKCU\…\RunOnce key:

Event Log data:

Field | First log entry | Second log entry | Third log entry
Category | Object Access | Object Access | Object Access
Event ID | 560 | 567 | 567
User | (my logon ID) | (my logon ID) | (my logon ID)

And here are the interesting bits of Description data for each:

Description field | First log entry | Second log entry | Third log entry
Title | Object Open: | Object Access Attempt: | Object Access Attempt:
Object Type | Key | Key | Key
Object Name | \REGISTRY\USER\S-1-5-21-725345543-602162358-527237240-793951\Software\Microsoft\Windows\CurrentVersion\RunOnce | [n/a] | [n/a]
Image File Name | C:\WINDOWS\explorer.exe | C:\WINDOWS\explorer.exe | C:\WINDOWS\explorer.exe
Accesses | DELETE, READ_CONTROL, WRITE_DAC, WRITE_OWNER, Query key value, Set key value, Create sub-key, Enumerate sub-keys, Notify about changes to keys, Create Link | [n/a] | [n/a]
Access Mask | [n/a] | Query key value | Set key value

Not that I’ve ever looked this deep into RunOnce behaviour (nor can I find any documentation to confirm), but this seems like the expected behaviour for Windows. Except for the fact that something is preventing the RunOnce commands from executing, of course.

Blocking the Mystery App?

Then I thought of something bizarre: maybe Explorer is checking for RunOnce entries to run during logon, and it isn’t finding any. Is it possible some process has deleted them during boot-up or logon, but before Explorer gets to them?

This flies in the face of my previous theory, that the entries were still there when Windows attempted to execute them, but something was blocking their execution. Now I wondered if the entries are even there to find – whether some earlier component hasn’t already deleted them (to “secure” the system).

If so, the only way to confirm my theory (and catch this component “in the act”) is if the component performs its actions on the Registry AFTER the LSA has initialized and is protecting the contents of the Registry. [It’s been too long since I read Inside Windows NT, so I don’t recall whether access to securable objects is by definition blocked until the LSA is up and ready.]

Hoping this would work, I enabled “Deny” permission for Everyone on the HKCU\…\RunOnce key for both “Set Value” and “Delete” (not knowing which one controls the deletion of Registry values in the key). This also meant that I had to enable Failure “Full Control” auditing for the Everyone group on this key as well.
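(For anyone who wants to script that Deny ACE instead of clicking through regedit, here’s roughly what it looks like in C#; the Failure audit entry itself is easier to add through the key’s Permissions > Advanced > Auditing dialog.)

    using System.Security.AccessControl;
    using System.Security.Principal;
    using Microsoft.Win32;

    class LockDownRunOnce
    {
        const string RunOncePath = @"Software\Microsoft\Windows\CurrentVersion\RunOnce";

        static void Main()
        {
            SecurityIdentifier everyone =
                new SecurityIdentifier(WellKnownSidType.WorldSid, null);

            using (RegistryKey key = Registry.CurrentUser.OpenSubKey(
                RunOncePath,
                RegistryKeyPermissionCheck.ReadWriteSubTree,
                RegistryRights.ReadKey | RegistryRights.ChangePermissions))
            {
                // Deny "Set Value" and "Delete" to Everyone on this key.
                RegistrySecurity acl = key.GetAccessControl();
                acl.AddAccessRule(new RegistryAccessRule(
                    everyone,
                    RegistryRights.SetValue | RegistryRights.Delete,
                    AccessControlType.Deny));
                key.SetAccessControl(acl);
            }
        }
    }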

However, while I’ve firmly confirmed that the Deletion takes place when I remove this Deny ACE, I can’t get Windows to log any information to indicate what process or driver is deleting the registry entries (and thus preventing Windows from executing them). It looks like – beyond what I’ve already found – there’s nothing else for which the LSA is asked to make any access control decisions for the HKCU\…\RunOnce key.

“Run Away!”

That’s all for now – I’m beat and need to regroup. If anyone has any bright ideas on ways to try to dig deeper into this system and figure out what’s missing, I’d love to hear it.

To be continued…

Threat Modeling Google group is now available

I’ve been using Microsoft’s Threat Analysis and Modeling (TAM) tool for about a year now, and I’ve gotten to really love how much easier and more user-friendly this tool is than anything else I’ve found so far on the ‘net.  I’ve tried to find anything that was as comprehensive, easy for beginners, flexible and extensible as TAM is (let alone free), and there’s nothing else that even comes close.  Anytime I’m asked now to do any Threat Modeling for a product or technology, the only tool I would seriously consider is TAM.

That said, the more I work with it, I’m finding there are enhancements I’d like to make, or things I’d like to better understand:

  • What are the key steps that I should never skip?
  • What tools are useful for generating additional XSLT Report templates?
  • How does TAM merge overlapping content when importing Attack Libraries?
  • What extensibility classes are available for .NET-friendly developers to add to this tool?
  • What’s a reasonable number of Components or Attacks to include in any one threat model?

 I’ve worked with the TAM team at Microsoft to get some ideas on this, but they’re pretty much working flat-out on the Security Assessments for which they built this tool in the first place.  I’ve scoured their old blog entries (here, here and here) to glean tidbits, but I’d really like to work with more folks who are also using this – share what I’ve learned and get their input and ideas as well.

I’d hoped that Microsoft would have a Community forum for this great tool, but since they don’t, I’ve taken the bull by the horns and created one myself.  You can find it here on the Google Groups site.  Yes, Google.  Horrors!

I’ve tried to use MSN Spaces in the past as a collaboration workspace, but I’ve found Google Groups and Yahoo Groups are both better platforms for this sort of thing.  They give you more control, with less futzing around trying to make things “look right”, and they’re investing significant effort into these platforms.  Frankly, I’m a lazy guy at heart, and it was really freakin’ easy to setup the Google Group.  Sue me.

Call to Action: if you’re using Microsoft’s TAM tool already, or you know someone who’s responsible for things like “Secure Coding”, “Risk Assessments” or “Threat Modeling”, I’d encourage them to check out the Group, post some sample Files, start some Discussions or even just lurk for good ideas!