Wednesday 18 May 2011

Out vs Ref For TryXxx Style Methods

None of the popular programming languages that I know of allow you to overload a method based on error semantics. A common pattern to work around this is to provide two overloads – one named normally that throws an exception on failure and another no-throw version whose name is prefixed with “Try” and returns a bool instead (with any additional return values handled by output parameters). A classic example is the parsing functions on the C# DateTime type:-

DateTime Parse(string value);
bool     TryParse(string value, out DateTime result);

In principle it’s easy enough to move from the exception throwing form:-

{
  DateTime output = DateTime.Parse(input);
  . . .
}

…to the alternate non-throwing form when you decide you need the different error handling semantics:-

{
  DateTime output;

  if (DateTime.TryParse(input, out output))
  {
    . . .
  }
}

But what about when you don’t care if the method succeeded or not? On a number of occasions I have used a TryXxx style method and not cared about the boolean return code – I just want it to use my default value if it fails:-

{
  DateTime output = DateTime.Now; // default

  DateTime.TryParse(input, output);
  . . .
}

Unfortunately this won’t have the desired effect (it actually won’t compile as is, but hold on) because your default value gets clobbered. Consider the following method on my IConfiguration interface that attempts to retrieve a configuration setting, if it exists:-

bool TryGetSetting(string key, out string value);

If I use the ‘out’ keyword as part of the interface I am forced to provide a value for all code paths. Consequently the implementation will probably look like this:-

bool TryGetSetting(string key, out string value)
{
  // Attempt to retrieve the setting 
  if (. . .)
  { 
    value = . . .; 
    return true;
  }
  else
  {
    // Must initialise the output value on all paths
    value = null;
    return false;
  }
}

The only general default value you can provide for a reference is null. Yes, for a string (or any other class that defines it) you could use the ‘Empty’ value, but that still clobbers any input from the caller. And so you force the caller to acknowledge the failure and write the slightly more verbose:-

{
  string value;

  if (!config.TryGetSetting("setting", out value))
    value = "my default value";
  . . .
}

The alternative is to use ‘ref’ instead, which allows the caller to provide a default value and you no longer have to clobber it in your implementation:-

bool TryGetSetting(string key, ref string value)
{
  // Attempt to retrieve the value
  if (. . .)
  { 
    value = . . .; 
    return true;
  }
  else
  {
    // Leave caller’s value untouched
    return false;
  }
}

Finally, as a caller I can now just write this:-

{
  string value = "my default value";

  config.TryGetSetting("setting", ref value);
  . . .
}

So I wonder why the C# designers picked ‘out’ over ‘ref’ in the first place. Perhaps they felt it was safer. But is it that much safer? If you use ‘out’ and don’t check the return code you’ll probably end up either accessing a null reference or continuing with the equivalent of 0 for a value type, i.e. whatever default(T) returns. OK, this is far superior to the C++ world where an uninitialised variable could be anything[*]. If you use ‘ref’ then you let the caller choose the initial value, which, if they are following best practice, will result in the same effect because they won’t be reusing existing variables for other purposes.
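As a minimal illustration (my example, not one from the original post), ignoring the return value when using ‘out’ just leaves you with whatever the callee was forced to assign:-

{
  int number;

  // TryParse fails here, but 'number' must still be assigned by the callee,
  // so we silently carry on with default(int), i.e. 0.
  int.TryParse("not a number", out number);

  Console.WriteLine(number); // prints 0
}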

There is of course a semantic difference between ‘out’ and ‘ref’, but I think what I’m suggesting blurs the line between them. If you look at ‘out’ and ‘ref’ through COM’s eyes and put a network in the middle then it’s all about whether you need to marshal the value to the callee, and that is not the behaviour we want. The callee doesn’t need the value and we certainly don’t want it to be able to modify it, so ‘ref’ is out (if you’ll pardon the pun). What we want ‘out’ to mean in this scenario is “don’t clobber the existing variable if no output value was provided, and don’t bother marshalling the value into the callee either”.

It’s great that C# points out where you have attempted to use an uninitialised variable, but sadly I think it’s that same mechanism that also gets in the way sometimes.

[*] I once got bitten by an uninitialised ‘bool’ during my C++ days. Somewhat ironically it went unnoticed precisely because we were using all the debug settings during development, which very cleverly initialise stack variables and heap memory to a non-zero value – the “uninitialised” value was therefore always reinterpreted as ‘true’ for a ‘bool’. There is a reason why you always write a failing unit test first…

Monday 16 May 2011

Testing Drives the Need for Flexible Configuration

If you look at our system’s production configuration settings you would be fooled into thinking that we only need a simple configuration mechanism that supports a single configuration file. In production it’s always easier because things have settled down, but it is during testing that the flexibility of your configuration mechanism really comes into play.

I work on distributed systems which naturally have quite a few moving parts, and one of the biggest hurdles to development and maintenance in the past has been that the various components could not be independently configured to let you cherry-pick which services you run locally and which you draw from your integration/system test environment. Local (as in on your desktop) integration testing puts the biggest strain on your configuration mechanism as you can probably only afford to run a few of the services that you might need, unless your company also provides those big iron boxes for developer workstations[#].

In the past I’ve found the need to override component settings using a variety of criteria and the following list is definitely not exhaustive, but gives the most common reasons I have encountered:-

Per-Environment

The most obvious candidate is environmental as there is usually a need to have multiple copies of the system running for different reasons. I would hazard a guess that most teams generally have separate DEV, TEST & PROD environments to cover each aspect of the classic software lifecycle. For small systems, or systems with top-notch test coverage, the DEV & TEST environments may serve the same purpose. Conversely I have worked on a team that had 7 DEV environments (one per development stream), a couple of TEST environments and a number of other special environments used for regulatory purposes, all in addition to the single production one.

What often distinguishes these environments is the instances of the external services that you can use. Generally speaking all production environments are ring-fenced so that you only have PROD talking to PROD to ensure isolation. In some cases you may be lucky enough to have UAT talking to PROD, perhaps to support parallel running. But DEV environments are often in a sorry state and highly untrusted so are ring-fenced for the same reason as PROD, but this time for the stability of everyone else’s systems.

Where possible I like the non-production environments to be a true mirror of the production one, with the minimum changes required to work around environmental differences. Ideally we’d have infinite hardware so that we could deploy every continuous build to multiple environments configured for different purposes, such as stress testing, fault injection, DR failover etc. But we don’t. So we have to settle for continuous deployment to DEV to run through some basic scenarios, followed by promotion to UAT to provide some stability testing. What this means is that our inputs are often the same as for production, but naturally our outputs have to be different. But you don’t want to have to configure each output folder separately, so you need some variable-based mechanism to keep it manageable.

The Disaster Recovery (DR) environment is an interesting special case because it should look and smell just like production. A common technique for minimising configuration changes during a failover is to use DNS Common Names (CNAMEs) for the important servers, but that isn’t always foolproof. Kerberos delegation in combination with CNAMEs is a horribly complicated affair. And that’s when you have no control over the network infrastructure.

Per-Machine

Next up is machine specific settings. Even in a homogeneous Windows environment you often have a mix of 64-bit and 32-bit hardware, slightly different hard disk partitioning, or different performance characteristics for different services. Big corporations love their “standard builds”, which helps minimise the impact, but even those change over time as the hardware and OS changes – just look at where user data has been stored in Windows over the last few releases. The ever changing security landscape also means that best practices change and these will, on occasion, have a knock-on effect on your system set up.

By far the biggest use for per-machine overrides though is during development, i.e. when running on the developer’s workstation. While unit testing makes a significant contribution to the overall testing process you still need the ability to easily cobble together a local sandbox in which you can do some integration testing. I believe the DEV environment cannot be a free-for-all and should be treated with almost the same respect as production, because if the DEV environment is stable (and running the latest code) you can often reduce the setup time for your integration testing sandbox by drawing on the DEV services instead of running them locally.

Per-Process-Type

Virtually all processes in the system will probably share the same basic configuration, but certain processes will have specific tasks to do and so they may need to be reconfigured to work around transient problems. One of the reasons for using lots of processes (that share logic via libraries) is exactly to make configuration easier because you can use the process name as a “configuration variable”.

The command line is probably the default mechanism most people think of when you want to control the behaviour of a process, but I find it’s useful to distinguish between task specific parameters, which you’ll likely always be providing, and background parameters that remain largely static. This means that when you use the “--help” switch you are not inundated with pages of options. For example a process that always needs an input file will take that on the command line, as it might an optional output folder; but the database that provides all the background data will be defaulted using, say, an .ini file.

Per-User

My final category is down to the user (or service account) under which the process runs. I’m not talking about client-side behaviour which could well be entirely dynamic, but server-side where you often run all your services under one or more special accounts. There is often an element of crossover here with the environment as there may be separate DEV, TEST and PROD service accounts to help with isolation. Support is another scenario where the user account can come into play as I may want to enable/disable certain features to help avoid tainting the environment I’m inspecting, such as using a different logging configuration.

Getting permissions granted is one of those tasks that often gets forgotten until the last minute (unless DEV is treated like PROD). Before you know it you switch from DEV (where everyone has way too many rights) to UAT and suddenly find things don’t work. A number of times in the past I’ve worked on systems where a developer’s account has been temporarily used to run a process in DEV or UAT to keep things moving whilst the underlying change requests bounce around the organisation. Naturally security is taken pretty seriously and so permissions changes always seem to need three times as many signatures as other requests.

Hierarchical Configuration

Although most configuration differences I’ve encountered tend to fall into one specific category per setting, there are some occasions where I’ve needed to override the same setting based on two categories, say, environment and machine (or user and process). However, because the hardware and software is itself partitioned (e.g. by environment/user), it’s usually been the same as overriding on just the latter (e.g. machine/process).

What this has all naturally led to is a hierarchical configuration mechanism, something like what .Net provides, but where <machine> does not mean all software on that machine, just my system. It may also take in multiple configuration providers, such as a database, .ini files, the registry[*], etc. My current system only uses .config style[$] files at present and on start-up each process will go looking for them in the assembly folder in the following order:-

  1. System.Global.config
  2. System.<environment>.config
  3. System.<machine>.config
  4. System.<process>.config
  5. System.<user>.config

Yes, this means that every process will hit the file-system looking for up to 5 files, but in the grand scheme of things the hit is minimal. In the past I have also allowed config settings and the config filename to be overridden on the command line by using a global command line handler that processes the common settings. This has been invaluable when you want to run the same process side-by-side during support or debugging and you need slightly different configurations, such as forcing them to write to different output folders.
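For what it’s worth, the lookup itself doesn’t need to be anything clever. The following is only a sketch of the idea – the dictionary merge policy and the ParseFile helper are my own assumptions, not the actual implementation:-

// Assumes: using System, System.Collections.Generic, System.Diagnostics, System.IO.
static IDictionary<string, string> LoadSettings(string folder, string environment)
{
  var candidates = new[]
  {
    "System.Global.config",
    String.Format("System.{0}.config", environment),
    String.Format("System.{0}.config", Environment.MachineName),
    String.Format("System.{0}.config", Process.GetCurrentProcess().ProcessName),
    String.Format("System.{0}.config", Environment.UserName),
  };

  var settings = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

  foreach (var name in candidates)
  {
    var path = Path.Combine(folder, name);

    if (!File.Exists(path))
      continue;

    // Later (more specific) files override earlier (more general) ones.
    foreach (var pair in ParseFile(path))
      settings[pair.Key] = pair.Value;
  }

  return settings;
}

// Format-specific parsing of key/value pairs elided for brevity.
static IEnumerable<KeyValuePair<string, string>> ParseFile(string path) { yield break; }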

Use Sensible Defaults

It might appear from this post that I’m configuration mad. On the contrary, I like the ability to override settings when it’s appropriate, but I don’t want to be forced to provide settings that have an obvious default. I don’t like seeing masses of configuration entries just because someone may need to change them one day – that’s what source code and documentation is for.

I once worked on a system where all configuration settings were explicit. This was intentional according to the lead developer because you then knew what settings were being used without having to rummage around source code or find some (probably out-of-date) documentation. I understand this desire but it made testing so much harder as there was a single massive configuration object to bootstrap before any testable code ran. I shouldn’t need to provide a valid setting for some obscure business rule when I’m trying to test changes to the messaging layer – it just obscures the test.

Configuration Formats

I’m a big fan of simple string key/value pairs for the configuration format – the old fashioned Windows .ini file still does it for me. Yes XML may be more flexible but it’s also far more verbose. Also, once you get into hierarchical configurations (such as .Net .config files), its behaviour becomes unintuitive as you have to question whether sections are merged at the section level, or individual entries within each section. These little things just make integration/systems testing harder.

I mentioned configuration variables earlier and they make a big difference during testing. You could specify, say, all your input folders individually, but when they are related that’s a real pain when it comes to environmental changes, e.g.

[Feeds]
SystemX=\\Server\PROD\Imports\SystemX
SystemY=\\Server\PROD\Imports\SystemY

One option is to generate your configuration from some sort of template, but I find that a little too invasive. It’s pretty easy to emulate the environment variable syntax so you only have one setting to change:-

[Variables]
SharedData=\\Server\PROD
FeedsRoot=%SharedData%\Imports

[Feeds]
SystemX=%FeedsRoot%\SystemX
SystemY=%FeedsRoot%\SystemY

You can even chain onto the environment variables collection so that you can use %TEMP% and %ProgramFiles% when necessary.
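The expansion logic itself is only a handful of lines. This is my own sketch rather than the production code, but it shows the idea of falling back to the process environment block when a variable isn’t defined in the configuration (note there’s no protection against cyclic definitions):-

// Assumes: using System, System.Collections.Generic, System.Text.RegularExpressions.
static string Expand(string value, IDictionary<string, string> variables)
{
  return Regex.Replace(value, "%([^%]+)%", match =>
  {
    var name = match.Groups[1].Value;

    string replacement;
    if (variables.TryGetValue(name, out replacement)
        || (replacement = Environment.GetEnvironmentVariable(name)) != null)
    {
      return Expand(replacement, variables); // allow chained definitions
    }

    return match.Value; // leave unknown tokens untouched
  });
}

// e.g. Expand(@"%FeedsRoot%\SystemX", variables) => \\Server\PROD\Imports\SystemX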

 

[#] Quite how anyone was ever expected to develop solid, reliable, multi-threaded services on a machine with only a single or dual hyper-threaded CPU is beyond me. I remember 10 years ago when we had a single dual-CPU box in the corner of the room which was used “for multi-threaded testing”. Things are better now, but sadly not by that much.

[*] Environment variables are great for controlling local processes but are unsuitable when it comes to Windows services because a machine reboot is required when they change. This is because the environment variables that a service process receives are inherited from the SCM (Service Control Manager), so you’d need to restart the SCM as well as the service (it doesn’t notice changes like the Explorer shell does). So, in this scenario I would favour using the Registry instead as you can get away with just bouncing the service.

[$] Rather than spend time up-front learning all about the .Net ConfigurationManager I just created a simple mechanism that happened to use files with a .config extension and that also happened to use the same XML format as for the <appSettings> section. The intention was always to switch over to the real .Net ConfigurationManager, but we haven’t needed to yet – even our common client-side WCF settings use our hierarchical mechanism.

Wednesday 11 May 2011

The Public Interface of a Database

The part of my recent ACCU London talk on database unit testing that generated the most chatter was the notion that a database can have a formal public interface. Clearly we’re not talking about desktop databases such as Access, but the big iron products like SQL Server and Oracle. It is also in the Enterprise arena that this distinction is most sorely needed because it is too easy to bypass any Data Access Layer and directly hit the tables with tools like SQLCMD and BCP. I’m not suggesting that this is ever done with any malicious intent; on the contrary, it may well be done as a workaround or Tactical Fix[*] until a proper solution can be developed.

In any Enterprise there are often a multitude of in-house and external systems all connected and sharing data. Different consumers will transform that data into whatever form they need and successful systems can in turn find themselves becoming the publishers of data they only intended to consume because they can provide it in a more digestible form. The tricky part is putting some speed bumps in place to ensure that the development team can see when they have violated the design by opening up the internals to the outside world.

Building Abstractions

So what does this Public Interface look like? If you look at tables and stored procedures in a similar light to structs and functions in C, or classes and methods in C++/C# you naturally look for a way to stop direct access to any data members and this means stopping the caller performing a SELECT, INSERT, UPDATE or DELETE directly on the table. You also need to ensure that any implementation details remain private and do not leak out by clearly partitioning your code so that the clients can’t [accidentally] exploit them.

I’ll admit the analogy is far from perfect because there are significant differences between a table of data rows and a single instance of a class, but the point is to try and imagine what it would do for you if you could, and how you might be able to achieve some of the more desirable effects without sacrificing others such as productivity and/or performance. As you tighten the interface you will gain more flexibility in how you can implement a feature and more importantly open yourself up to the world of database refactoring (assuming you have a good unit test suite behind you) and even consider using TDD.

Stored Procedures

The most obvious conclusion to the encapsulation of logic is the blanket use of Stored Procedures and User Defined Functions so that all access is done using parameterised functions. These two constructs provide the most flexibility in the way that you design your abstractions because they allow for easy composition which becomes more essential as you start to acquire more and more logic. They also provide a good backdoor for when the unanticipated performance problems start to appear as you can often remediate a SQL query without disturbing the client.

Of course they are not a panacea, but they do provide a good fit for most circumstances and are a natural seam for writing unit tests and using TDD. The unit test aspect has probably been the biggest win for us because it has allowed us to develop and test the behaviour in isolation and then just plug it in knowing it works. It has also allowed us to refactor our data model more easily because we have the confidence that we are in control of it.
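As a trivial (and entirely made up) example of the shape this takes, the caller sees one parameterised operation and none of the tables behind it:-

CREATE PROCEDURE SubmitTrade
(
  @tradeId  int,
  @quantity int
)
AS
  -- The tables, any validation and the audit side-effect all stay private.
  UPDATE dbo.Trade
  SET    Quantity = @quantity
  WHERE  TradeId  = @tradeId;

  INSERT INTO dbo.AuditLog (TradeId, ChangedOn)
  VALUES (@tradeId, GETUTCDATE());
GO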

Views

Stored procedures are great when you have a very well defined piece of functionality to encapsulate, such as updating a record that has other side-effects that are inconsequential to the client, but they are more of a burden when it comes to querying data. If you try and encapsulate every read query using stored procedures you can easily end up with a sea of procedures all with subtly different names, or a single behemoth procedure that takes lots of nullable parameters and has a ‘where’ clause that no one wants to touch. Views solve this problem by giving the power back to the client, but only insofar as to let them control the query predicates – the list of columns (and their names) should be carefully decided up front to avoid just exposing the base tables indirectly, as then you’re back to where you started.
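Something along these lines (again, the names are purely illustrative) – a deliberately chosen set of columns with the predicates left to the caller:-

CREATE VIEW TradeSummary
AS
  -- Expose only what the client needs, not the base tables themselves.
  SELECT t.TradeId,
         t.Quantity,
         i.Name AS Instrument
  FROM   dbo.Trade t
  JOIN   dbo.Instrument i ON i.InstrumentId = t.InstrumentId;
GO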

User Defined Types

User defined types seem to have a bad name for themselves but I couldn’t find much literature on what the big issues really are[#][$]. I find them to be a very good way to create aliases for the primitive types in the same way as you would use ‘typedef’ in C/C++. A classic source of inconsistencies that I’ve seen in SQL code is where a varchar(100) is used in some places and a varchar(25) in others because the developer just knows it has to be “big enough” and so picks an arbitrary size; it’s then left to the maintainer to puzzle out the reason for the mismatch. UDTs allow your interface to be consistent in its use of types, which makes comprehension easier, and they can also be used by the client to ensure they pass compatible types across the boundary.

My current system started out on SQL Server 2005 which didn’t have the native Date and Time types so we created aliases for them. We managed to move to SQL Server 2008 before release and so could simply change the aliases. The system also has a number of different inputs and outputs that are not simple integers and so it’s important that we use the same scale and precision for the same type of value.
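For example (the names and sizes here are illustrative, not our actual definitions):-

-- Aliases in the spirit of a C/C++ 'typedef'.
CREATE TYPE dbo.CustomerName FROM varchar(100) NOT NULL;
CREATE TYPE dbo.Quantity     FROM decimal(19, 4) NOT NULL;
GO

CREATE TABLE dbo.Customer
(
  CustomerId int              NOT NULL PRIMARY KEY,
  Name       dbo.CustomerName NOT NULL
);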

Schemas 

Another tool in the box is Schemas. These are a way to partition your code, much like you would with namespaces (or packages), although they don’t provide the hierarchic behaviour that you get with, say, C# or C++. A very common database model is to separate the input (or staging) tables from the final ones whilst the data is validated and cleansed. One way to apply that might be to use the default ‘dbo’ schema for the production tables and use a separate ‘staging’ schema for the input tables. You can then overload table and object names without having to resort to ugly prefixes or suffixes.

You can of course go much further than this if you start out by treating the default ‘dbo’ schema as synonymous with ‘internal’ or ‘private’; then all objects in that default schema are considered purely implementation details and not directly exposed to the client. You then create separate schemas for the major subsystems, such as a ‘backend’ schema for the server-side code and a ‘ui’ schema for the client. These allow you to create multiple interfaces tailored for each subsystem which, if you use traditional functional composition, avoid duplicate code by just internally layering stored procedures. This natural layering also quickly highlights design problems, such as when you see an object in the ‘dbo’ schema referencing one in the ‘staging’ schema. There is often a natural flow, such as from staging to production, and schemas help you remain consistent about how you achieve that flow, i.e. push from staging to production or pull from staging by production.
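A sketch of that partitioning (the schema and table names are mine, not a prescription):-

CREATE SCHEMA staging;
GO
CREATE SCHEMA backend;
GO
CREATE SCHEMA ui;
GO

-- The same logical name can now live in more than one schema, e.g. the raw
-- input rows and their cleansed production equivalents.
CREATE TABLE staging.Trade (TradeId int NULL,     Quantity varchar(20) NULL);
CREATE TABLE dbo.Trade     (TradeId int NOT NULL, Quantity int NOT NULL);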

Permissions

Ultimately if you want to deny access to something, just don’t grant it in the first place :-). Databases offer a fine-grained permissions system that allows you to grant access to specific objects such as individual stored procedures, functions and views. This means that your core definition of what constitutes the public interface is “anything you’ve been granted access to”. However it is not always obvious what may or may not be permissioned, and to what role, so the other mechanisms, such as schemas, can be used to provide a more visible means of scoping.

Part of the permissions structure will be based around the roles the various clients play, such as server or UI, and so you will probably see a strong correlation between a role and the set of objects you then choose to expose in your various interfaces, rather than just blindly exposing the same set for everyone. For example it is common to have a “Service Account” under which the back-end runs; this could be mapped to a specific Service Role which is then only granted permission to those objects it genuinely invokes, all of which exist in a specific Service schema. In contrast a Support Role may be granted read-only access to all objects, plus a bunch of special objects in a separate Support schema that can do some serious damage but are needed occasionally to fix certain problems. Different ‘public’ interfaces for different roles.
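In SQL Server terms that boils down to something like the following (the roles and objects are the illustrative ones from the earlier sketches):-

CREATE ROLE ServiceRole;
CREATE ROLE SupportRole;
GO

-- No blanket table access; each role sees only its slice of the interface.
GRANT EXECUTE ON SubmitTrade  TO ServiceRole;
GRANT SELECT  ON TradeSummary TO SupportRole;

-- Or, more coarsely, grant at the schema level.
GRANT EXECUTE ON SCHEMA::backend TO ServiceRole;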

Oranges Are Not The Only Fruit

Jeff Atwood wrote a post for his Coding Horror blog way back in 2004 titled “Who Needs Stored Procedures, Anyways?” that rails against the blanket use of stored procedures. One of his points is aimed at dispelling the performance myth of procedures, and he’s right that embedded parameterised queries can perform equally well – but as I wrote last year, “Stored Procedures Are About More Than Just Performance”. Hopefully my comment about the use of views shows that I also feel the blanket use of procedures is not necessarily the best choice.

There are a lot of comments to his post, many of which promote the silver bullet that is an O/RM, such as Hibernate or Entity Framework. And that is why I was careful to describe the context in my opening paragraphs. If you’re working in a closed shop on an isolated system you can probably get away with a single Data Access Layer through which everything is channelled, but that is often not possible in a large corporation.

I would say that Jeff’s main point is that if you want to expose “A Service” then do just that, say, as a Web Service, so that you truly abstract away the persistent store. I agree with the sentiment, but corporate systems just don’t change technology stacks on a whim; they seem to try and drain every last ounce out of them. Hence I would question the real value in building (and more importantly maintaining) that proverbial “extra level of indirection” up front. Clearly one size does not fit all, but sadly many commenters appear to fail to appreciate that. Heterogeneous systems are also wonderful in theory, but when a company purposefully restricts itself to a small set of vendors and products what does it really buy you?

For me personally the comments that resonated most closely were the ones advocating the use of independent unit testing for the database code. I believe the ideal of using TDD to develop the database layer adds an interesting modern twist that should at least give Jeff some pause for thought.

 

[*] The term tactical is almost always synonymous with quick-and-dirty, whereas strategic is seen as the right way to do it. However the number of tactical fixes, and even entire tactical systems, that have lived longer than many strategic solutions is huge. It just goes to show that growing systems organically is often a better way to approach the problem.

[#] The only real issue I’ve come across is that you can’t define a temporary table using UDTs on SQL Server – you have to add them to the model database or something which all sounds a bit ugly. I’ve not tried it out but I recently read that you can use “SELECT TOP(0) X,Y,Z INTO #Tmp FROM Table” or “SELECT X,Y,Z INTO #Tmp FROM Table WHERE 1 = 0” as a fast way of creating a temporary table based on a result set because the optimiser should know it doesn’t need to do any I/O. Describing your temporary table this way makes perfect sense as it avoids the need to specify any types or column names explicitly; so long as it incurs a minimal performance cost that is.

[$] Another more minor grumble seems to be to do with binding rules, in that they are effectively copied at table creation time instead of referencing the UDT master definition. You also can’t alias the XML type on SQL Server which would have been useful to us as we had performance concerns about using the XML type on a particular column. In the end we got close to what we wanted by using an alias to varchar(8000) with a rule that attempts a conversion of the value to XML – we could then just drop the rule if we noticed a performance problem in testing.

Wednesday 4 May 2011

A StyleCop/FxCop For Databases

My team all use the most excellent ReSharper during day-to-day C# coding as it performs some great on-the-fly static code analysis. We also occasionally run FxCop to provide some further insights into our C# code. However we didn’t really have anything for the database side of things, so a colleague of mine wrote one and called it ‘DbCop’. OK, so it’s clearly not in the same league as ReSharper, but it has got me wondering if there are any commercial products out there that fill this space. I’ve not specifically hunted for one, but you still see things like this mentioned (if they’re worth anything) on sites like StackOverflow and personal blogs. I also dropped some not-so-subtle hints at the Red Gate stand at this year’s ACCU Conference but they didn’t give anything away if they do have anything like this in the pipeline...

Our tool is really nothing fancy, just some simple SQL scripts that spelunk the system tables and look at the schema metadata for some common mistakes and coding convention violations. It runs at the end of our database Continuous Integration build and generates a report; but it doesn’t fail the build if it finds a problem because we currently have no exclusion mechanism in place (besides manually hard-coding one into the script). We also have a separate schema/namespace called “build” for the DbCop objects so that they are not applied to integration test/system test/production databases by accident.

So far it only checks the following:-

  • No reserved words have been used for the names of our objects
  • The names of tables, columns and parameters adhere to our coding conventions
  • Table columns are defined using our UDTs, not primitive types
  • Each table has a primary key
  • Each table has a clustered index

Clearly rules like the last two are made to be broken, but as a general rule they are pretty sound. Personally I tend to avoid a Primary Key rule violation by writing a unit test in the first place that ensures duplicates are disallowed when appropriate. It would be great to include spell checking into the process (just like FxCop) because I’m forever misspelling identifiers and with a database it’s much harder to change table and column names after they’ve gone into production and contain millions of rows.
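To give a flavour of the sort of thing involved (this is a simplified sketch using the SQL Server catalog views, not the actual DbCop script), the “every table has a primary key” check boils down to little more than:-

-- Tables that have no primary key defined.
SELECT s.name AS [schema], t.name AS [table]
FROM   sys.tables t
JOIN   sys.schemas s ON s.schema_id = t.schema_id
WHERE  NOT EXISTS (SELECT *
                   FROM   sys.key_constraints k
                   WHERE  k.parent_object_id = t.object_id
                   AND    k.type = 'PK')
ORDER  BY s.name, t.name;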

Tuesday 3 May 2011

PowerShell, Throwing Exceptions & Exit Codes

[I raised a cut down version of this as a question via a comment to a related post on the Windows PowerShell blog back in March. I’ve not seen a response yet and doubt I ever will as it’s an old post and so I very much doubt anyone is monitoring it.]

I’ve got a bit of a love/hate relationship with PowerShell at the moment. Naturally whilst learning any new language the books steer you nicely towards the things that work, but as you start to “do your own thing” you step outside that comfort zone and the warts and inconsistencies start to appear. I should point out that this particular affliction affects more than just PowerShell[*], but it’s worse here because the behaviour is inconsistent and so appears to work – sometimes.

The Process Exit Code

Every process can return an exit code to signal to its caller something about the outcome of the task it was asked to perform. I don’t believe there is a formal definition anywhere about what constitutes “success” and “failure”[#] but the established convention is that zero means success and non-zero means unsuccessful. Of course what “unsuccessful” then means opens a whole new can of worms but if you’re writing a Windows batch file then the following construct is probably embedded in your head:-

<execute some process>
IF ERRORLEVEL 1 (
  ECHO ERROR: <some error message>
  EXIT /B 1
)

The ERRORLEVEL test is read as “if the last process exit code was greater than or equal to 1”. This assumes that the exit code will always be positive which is pretty much the norm, but it doesn’t have to be. So what does this have to do with PowerShell then?

The PowerShell EXIT & THROW Keywords

If you transliterate a Windows batch file into PowerShell you will probably end up writing code like this:-

<do something>
if ( <not some condition> )
{
  write-output "something bad occurred"
  exit 1
}

This is because “echo” is an alias for “write-output” and PowerShell still has an equivalent “exit” keyword to terminate the script[+]. So far so good, but PowerShell can do so much more and it also supports a “throw” keyword to allow you to use a more modern style of exception handling in your code that is especially useful when combined with functions. So you might expect that you could avoid the hard-coded exit code and use something like this instead (which is what I wanted to do):-

<do something>
if ( <not some condition> )
{
  throw "something bad occurred"
}

Yes, I know that I’m using exceptions for error handling and that might be considered bad form, but in the places I was using this idiom the errors were truly non-recoverable and so the effect would be the same – the script should terminate and the caller be signalled that a fatal error occurred.

Unhandled Exceptions

The way that I test all my scripts and processes to ensure that they exit with a well-formed result code is by using the following Windows batch file, which I name “RUN.CMD”:-

@ECHO OFF
CALL %*
ECHO.
ECHO ExitCode=[%ERRORLEVEL%]

So I ran my PowerShell script to test the error handling and I noticed that it always returned 0, irrespective of whether it terminated with a throw or not. The output showed the details of the exception as I expected, but with the exit code being 0 my calling parent batch scripts and job scheduler would not be able to detect a failure (without some really ugly scraping of the output streams). So I tried a few experiments with the throw construct. First an ‘inline’ command:-

C:\Temp>run PowerShell -command "throw 'my error'"
my error
At line:1 char:6
+ throw <<<<  'my error'
    + CategoryInfo : OperationStopped: (my error:String) [], RuntimeException
    + FullyQualifiedErrorId : my error

ExitCode=[1]

Great, an unhandled exception in an inline script causes PowerShell.exe to return a non-zero exit code. What about if I put the same one-liner in a .ps1 script file and execute it:-

C:\Temp>run PowerShell -file test.ps1
my error
At C:\Temp\test.ps1:1 char:6
+ throw <<<<  'my error'
    + CategoryInfo : OperationStopped: (my error:String) [], RuntimeException
    + FullyQualifiedErrorId : my error

ExitCode=[0]

Not so good. Yes we get the error message, but PowerShell.exe exited with a code that signals success. I have always specified the –File switch when running a script to avoid the need to do the whole .\ relative path thing. So what about running the same script file as a Command, surely that would be the same, wouldn’t it?

C:\Temp>run PowerShell -command .\test.ps1
my error
At C:\Temp\test.ps1:1 char:6
+ throw <<<<  'my error'
    + CategoryInfo : OperationStopped: (my error:String) [], RuntimeException
    + FullyQualifiedErrorId : my error

ExitCode=[1]

Meh? So, depending on whether I use the “–File” or “–Command” switch to execute the .ps1 script I get different behaviour. Is this a PowerShell bug or is there something fundamental about the execution model that differs between –File and –Command that I’ve yet to understand? Either way I wouldn’t want to rely on someone not being helpful and “fixing” the command line by switching the use of –Command to –File, especially as it affects error handling and we all know how hard people test their changes to verify the error handling still works as designed…

Trap To The Rescue

I have found a somewhat invasive workaround that at least ensures a sensible exit code at the expense of less pretty output. It relies on adding a Trap handler at the top of the script to catch all errors, output them and then manually exit the script:-

trap
{
  write-output $_
  exit 1
}

throw 'my error'

Here’s the output from it when run with the previously unhelpful “-File” switch:-

C:\Temp>run PowerShell -file test.ps1
my error
At C:\Temp\test.ps1:7 char:6
+ throw <<<<  'my error'
    + CategoryInfo : OperationStopped: (my error:String) [], RuntimeException
    + FullyQualifiedErrorId : my error

ExitCode=[1]

Now that’s better. The obvious change would be to use Write-Error as the output cmdlet but that has a side-effect when using redirection[+]. There is probably some way I can have my cake and eat it but my PowerShell skills are less than stellar at the moment and my Googling has turned up nothing positive so far either.
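For completeness, the same workaround can also be written with try/catch rather than trap (assuming PowerShell 2.0 or later); it behaves the same way and shares the same cosmetic downside:-

try
{
  throw 'my error'
}
catch
{
  write-output $_
  exit 1
}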

 

[*] Matthew Wilson wrote a Quality Matters column for one of the ACCU journals where he showed that C++, Java & C# all treated an unhandled exception in main() as a “successful” execution as far as reporting the process result code goes.

[#] In C/C++ you can use the two constants EXIT_SUCCESS & EXIT_FAILURE to avoid hard-coding a return code that is platform dependent. On Windows these equate to 0 and 1 respectively, although the latter could be any non-zero value. I seem to recall that these constants are defined by <stdlib.h>.

[+] Let’s ignore the fact that you might use “write-error” and are keeping to a similar model to cmd.exe. I have another post queued up that shows that the output mechanism is annoyingly broken in PowerShell when using file redirection if you’re considering using it to replace batch files that run under, say, a job scheduler.