Friday, 30 September 2011

SQL Server Cursors - Avoiding the Duplicate FETCH NEXT

The example below comes from SQL Server Books Online (or just BOL as it’s affectionately known) and shows how to use a cursor. I have trimmed the predicates and the body of the cursor loop to remove some unnecessary noise, but it essentially remains the same:-

DECLARE @au_id varchar(11), @au_fname varchar(20), @au_lname varchar(40)

DECLARE authors_cursor CURSOR FOR
SELECT au_id, au_fname, au_lname
FROM authors

OPEN authors_cursor

-- Get the first author.
FETCH NEXT FROM authors_cursor
INTO @au_id, @au_fname, @au_lname

WHILE (@@FETCH_STATUS = 0)
BEGIN
  -- stuff done here with inner cursor elided

  -- Get the next author.
  FETCH NEXT FROM authors_cursor
  INTO @au_id, @au_fname, @au_lname
END

CLOSE authors_cursor
DEALLOCATE authors_cursor

What bugs me about this code, and other examples I’ve come across[*], is the obvious duplication; the FETCH NEXT statement is specified twice - once before entering the loop and again at the bottom.

Now, as software developers we are taught that cut-and-paste coding is frowned upon because it increases the cost of maintenance. Yes, it’s just a simple loop and no one is really going to forget to keep them in sync, are they (especially if SQL is their bread and butter)... But, if we did want to remove the duplication, could we?

Of course we can and the revised version below is how a colleague has tackled this for many years - we just need to move the FETCH NEXT inside the loop and then change the loop condition. But this is where we swap one dubious construct (duplicate code) for another - the infinite loop:-

OPEN authors_cursor

WHILE (1 = 1)
BEGIN
  -- Get the next author.
  FETCH NEXT FROM authors_cursor
  INTO @au_id, @au_fname, @au_lname

  IF (@@FETCH_STATUS <> 0)
    BREAK

  -- stuff done here with inner cursor elided
END

CLOSE authors_cursor
DEALLOCATE authors_cursor

If this code were written in a static language like C++ and you ran a Lint tool over it, it would probably wince at the (1 = 1) expression - I certainly did when I first came across it. There are some constructs that send shivers down your spine and the infinite loop is one of them because we expect the loop condition to have a natural point of exit[#]. Here is an attempt to set that record straight:-

DECLARE @status int = 0

WHILE (@status = 0)
BEGIN
  -- Get the next author.
  FETCH NEXT FROM authors_cursor
  INTO @au_id, @au_fname, @au_lname

  SET @status = @@FETCH_STATUS

  IF (@status = 0)
  BEGIN
    -- stuff done here with inner cursor elided
  END
END

Single fetch statement, check. No early loop exit, check. No duplication, che... Sadly we’ve just swapped duplication of the FETCH NEXT for duplication of the loop test. Hmmm, what about swapping the BREAK for a CONTINUE and then changing the loop tests:-

DECLARE @done bit = 0

WHILE (@done = 0)
BEGIN
  -- Get the next author.
  FETCH NEXT FROM authors_cursor
  INTO @au_id, @au_fname, @au_lname

  IF (@@FETCH_STATUS <> 0)
  BEGIN
    SET @done = 1
    CONTINUE
  END

  -- stuff done here with inner cursor elided
END

To be honest I think that’s cheating because we’ve still got two tests to control the loop. But the switch from a BREAK to CONTINUE somehow feels a little less dirty. Sadly my SQL skills are fairly average and so perhaps someone more experienced can come along and finish the job off properly.
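For contrast, the loop-and-a-half shape is trivial to express in a language like C++, where the fetch can live in the loop condition itself. Here is a minimal sketch (fetchNext and the row source are hypothetical stand-ins for the cursor):

```cpp
#include <cstddef>
#include <optional>
#include <string>
#include <vector>

// Hypothetical row source standing in for the cursor.
std::optional<std::string> fetchNext(std::vector<std::string>& rows)
{
    if (rows.empty())
        return std::nullopt;

    std::string row = rows.front();
    rows.erase(rows.begin());
    return row;
}

// The fetch appears exactly once, in the loop condition itself.
std::size_t processAll(std::vector<std::string> rows)
{
    std::size_t processed = 0;

    while (auto row = fetchNext(rows))
        ++processed;                 // stuff done here with each row

    return processed;
}
```

The reason the duplication arises in T-SQL at all is that FETCH is a statement, not an expression, so it cannot be embedded in the WHILE condition the way a function call can.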


[*] Hey, it’s just example code and examples are meant to be simple so that people can understand them! True, but I’m just saying that I’ve not seen it done any other way when Googling for stuff that utilised cursors, which is what prompted me to write this up.

[#] Even services that sit there and spin indefinitely handling requests still need to be coded to terminate gracefully at some point and so in the very least you’ll be waiting on a synchronisation object of some sort.

Thursday, 29 September 2011

What’s the Check-In Frequency, Kenneth?

Last week saw two unrelated events come together to provide the basis for this post. The first was the demise of R.E.M., a rock band that I would never describe myself as being a massive fan of, but via osmosis[*] aware enough for them to provide a particularly bad pun for this post’s title. The second, and far more relevant event was the discussion on site about whether it’s possible to check-in too frequently or not.

Let’s be clear from the outset that we’re talking specifically here about integration branches. Private branches are, by definition, personal and so the frequency with which a developer chooses to commit is entirely their own choice (feature branches are usually synonymous with private branches). However an integration branch serves different purposes depending on the point in time that you wish to mine it.

Feature Development

As a developer I want to integrate my changes as fast as possible to ensure that my codebase always remains fresh. The longer I leave uncommitted changes outstanding, the more I open myself up to the possibility of tricky merges and integration problems. That’s the theory.

In practice one hopes that the entire codebase is not so volatile that this actually happens. If it does then I’d be worried that there is so much refactoring going on that no one is actually getting any real work done! Of course the team size will play a large part in this, but even then you just can’t expect developers to make sensible progress if they keep tripping over each other’s heels - you can’t build a house on quicksand.

For new features, where there is the greater chance of a high check-in rate as the developer goes through various twists and turns, the check-ins are mostly of interest to them and anyone reviewing. Once a feature reaches v1.0 and enters maintenance mode then every check-in becomes of interest to a larger audience because you now have something to break.

Integration/System Testing

When a feature is complete the code will likely be in a state where all the unit tests pass and hopefully some amount of integration and system testing has been done too (and acceptance testing, if you’re lucky enough to have them). Depending on the testability of the system though it may only be complete to unit and rudimentary integration test level and so still have to go through some form of system testing where you get to throw higher volumes of data at it.

At the point of deployment[+] into a system test environment I find that the integration branch history takes on a different perspective. Now I’m interested in what changes are going to be deployed so that I have an idea of where trouble is likely to show up, and when it does I want to be able to drill into the changes to triage the problem quickly. If you’re working on a batch processing system, nothing kills a day more than finding that a bug has caused the system to keel over at the first hurdle as you’ve effectively lost an entire test run.

Software Archaeology

Once a feature is mature it hopefully just sits in the background doing its thing. Until it breaks, or meets a condition it never expected. If it’s a regression, or you don’t understand why the problem has only just surfaced, then you may need to go back in time and trawl the VCS logs piecing together any related changes to build a picture of what’s happened. Once again this is where noise from frequent check-ins makes the job much harder and you’ll almost certainly only be interested in changes since the feature went live. Any experimentation at this point is just noise, you’re only interested in version N and N-1 and the net effects of those changes.

If you’re new to a team then you won’t have any history at all to draw on and so the VCS logs may be your only way of finding out about how the system has evolved. I’ve seen plenty of strange code and it has only been through the power of the VCS time machine that I’ve managed to work out why it’s in that state and, more importantly, how I can move it forward in the intended direction[#]. However this is also one scenario where going back over the changes in a private branch can actually be useful.

Personally I believe that after the text editor, the next most important tool you really need to grok is the version control system. This is especially true when working on legacy systems as you probably have none of the original team around.

It’s Just a Tooling Problem

My case for a little restraint may well appear weak with the counter-argument being that frequent check-ins are A Good Thing and what I describe is just a problem with the VCS tool (or my inability to drive it sufficiently well). And you’d have a point. But the VCS tools I’ve used to date are all focused around managing single (hopefully atomic) change-sets; they do not provide an easy way for you to view the multiple change-sets that constitute “a feature” as a single set. To me “multiple change-sets” sounds awfully like what you’d use a feature/private branch for… The question then becomes “how many check-ins before I should have created a separate branch”?

The Extreme Case

In case I still haven’t made a compelling argument then perhaps the following extreme case will help you see where I’m coming from, and remember that I’m talking about doing this in an integration branch:-

Every check-in should be atomic and what is checked in shouldn’t break the build or destabilise the codebase if at all possible. That means that when writing a new feature, using a process like TDD, I could check in my changes every time I write a passing test.

I would contend that doing this adds a significant amount of noise to an integration branch, and that if you want to keep every little step you make then you should create your own branch and commit to that instead. Then integrate your changes on a less frequent basis, but still frequent enough that you avoid integration issues.

DVCS to the Rescue?

One technique I’ve yet to explore is to layer a DVCS like Git or Mercurial on top of Subversion to use as a private branch repository. This sounds attractive from the noise perspective, but I wouldn’t want to give up the check-in history of the small steps entirely in case it’s useful some time in the future. I know there are a few ACCU members who are well versed in this area and so I shall have to see whether there is more to this way of working than meets the eye.

Show Me the Money

So, putting my Tom Gilb hat on I feel compelled to somehow quantify what I feel is too often and what is too little. When you’re in maintenance mode a lot of features may literally only take minutes or hours to write and so there are natural points of integration already built-in. The basis for this post though was about significant new features because there are far fewer checks-and-balances (e.g. regression testing) in play to stem the natural flow of commits because you have nothing to break. In this case I would look to break the task down to see if there are any natural points where I can deliver sizeable chunks (1/2 day - 1 day) of complete functionality so that it can be pushed into system test sooner and I can then reap the system testing benefits in the background. If there is absolutely no way I can deliver something concrete without destabilising the system, or if I know I could be experimenting, only then will I resort to a private branch and only for the destabilising change. Once I can get back to delivering large chunks directly into the integration branch I will.

In essence what seems to have driven the upper limit of the commit frequency for me in recent years is the rate at which system testing occurs. Even in this day and age of unit testing and continuous integration “getting something in the build” is still something to aim for when a build, deployment and end-to-end test takes hours.


[*] I used to share a house with a number of people who listened to R.E.M. a fair bit. Personally I was happy enough to leave it at Eponymous and the 12” of The One I Love; for the wife of course...

[+] This is as often as possible. In theory, on the sorts of systems I work on, it would be great for it to happen automatically after each end-to-end run so that we have a continuous cycle of build, deploy, do an end-to-end run, rinse & repeat. In practice it ends up being largely automated, with the manual element being the decision to run the cycle to ensure the highest chance of getting a successful end-to-end run.

[#] I remember one very curious piece of code that changed rapidly and had lots of simple check-ins that seemed to be trying to workaround some kind of memory management problem. I came across it whilst trying to fix a very large memory leak and from the VCS log I could deduce that the programmer was trying to emulate a weak pointer with a non-weak one, hence the eventual leak.

Monday, 26 September 2011

From C++ to C# – Two Years On

Two years ago I interviewed for what was originally a C++ position, but which for internal reasons switched to being a C# role at the last minute. I was still offered the contract even though I knew virtually no C#, with the expectation that I would pick it up pretty quickly as the two languages aren’t exactly worlds apart. After doing 15 years of C++ I was somewhat apprehensive about switching my main language, but as I wrote back then in “Why is C++ Getting Left on the Bench” the market for C++ is slowly diminishing. So of course I took it!

Comparing languages is somewhat of a dubious pursuit as the reason we have so many is exactly because they are intended to solve different types of problems. However these two cover similar ground and I’ll be using them to solve the same kinds of problems so I think it’s probably an interesting comparison on some level...


I’ve only ever worked on Windows and Visual Studio has pretty much been the only IDE I’ve used for the past 15 years, so naturally I was instantly at home with the default toolset. The only big difference I’ve noticed is that IntelliSense actually seems to work with C#, unlike C++[#]. However our system is still quite small so it’ll be interesting to see how that scales. Build times are also pretty good, but the same caveats about project size apply.

I’m sure this is obvious to any seasoned C# programmer but the biggest win seems to come from Resharper[*], a plug-in that provides on-the-fly static code analysis to point out potential bugs, stylistic violations and all round good practices. This tool just makes the refactoring step in TDD that bit more pleasurable and easy. There is a tool in the C++ world vying for similar acclaim (Visual Assist) that I used some years ago on another contract and it does add value to Visual C++, but I suspect that C++ is just so much harder to parse than C# and so I doubt it will match something like Resharper.

Something you should bear in mind is that I only write services and so have no use for all the GUI related features. After all, I still code the few html pages I do write the way I’ve always done - with a text editor.


Resharper aside the other big win has been the bundled libraries. C++ gives you a basic set of libraries to cover simple I/O needs, string formatting etc, but avoids giving you the other things a modern developer needs - networking, xml support, threading. Ok, so this is being rectified to some degree in the new standard and Boost can fill in a huge number of gaps in the other areas - but fundamentally, out-of-the-box, you’re on your own. Being able to do so much more from the get go means the project can progress in many areas without getting bogged down in technology problems quite so soon.

Of course Windows and .Net come from the same stable and so any notion of being “cross-platform” (Mono notwithstanding) is a drop in the ocean compared to the vastly differing platforms that C++ aims to support. But as you should already have gleaned by now that is of little interest to me, at least professionally.


At a basic language level there are so many similarities between the two, constructs such as classes and enumerations are roughly equivalent and the private/protected/public access levels are pretty close too. Yes there are various small notational differences, such as using a . instead of :: to separate namespace names, but I’ve covered this in an earlier post “C# + C++ + C++ CLI – Context Switch Hell”. I’ve also documented my comments on resource management in the past too in “IDisposable’s Should Assert In Their Finalizer”.

This means that by-and-large I’m designing and writing code in a very similar manner. But then that was to be expected as the expression of OO is through interfaces and classes and that was never going to be much different. However the inclusion of Closures (which C++ has recently gained too) has meant the Functional Programming element probably feels more natural now - Functors and Boost::Function are just very clunky mechanisms in comparison.

There are many small touches that make certain common C++ pitfalls a thing of the past, but most seasoned C++ programmers have long since learnt how to avoid making them. There are a couple of things in C# though that are still bugging me:-

Sorely Missing Const

The thing that I miss the most by far is the concept of “constness” in C++. In C# immutability is a property of a type (assuming you choose to design the class that way), not a trait of an instance of a type. One place where this really comes into play is when concurrency rears its head because immutable instances are shareable across threads. Another is when dishing data out from a cache - you want to return a const reference to ensure the instance is not accidentally modified. I have written a number of bits of code in C# already where I know that if a cache was introduced the compiler would not be able to help highlight the violations - that scares me[^].
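To illustrate the cache scenario, here is a minimal C++ sketch (the class and names are hypothetical) of the kind of compile-time protection I’m describing:

```cpp
#include <map>
#include <string>

// A cache that dishes out const references, so callers cannot
// accidentally modify the shared instance.
class PriceCache
{
public:
    void update(const std::string& key, double value)
    {
        m_prices[key] = value;
    }

    // Returning const& means an attempted assignment through the
    // result fails to compile, instead of silently corrupting the cache.
    const double& lookup(const std::string& key) const
    {
        return m_prices.at(key);
    }

private:
    std::map<std::string, double> m_prices;
};
```

At the call site, `cache.lookup("VOD") = 170.0;` is rejected by the compiler - exactly the early warning that a C# property or getter cannot give without wrapping the type in a read-only facade.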

It didn’t take me long to notice the loss of constness, but that may be largely due to me regressing to old habits such as returning a const reference to an internal container rather than exposing the functionality on the type (the proverbial Law of Demeter), or exposing iterators via begin()/end() methods (ala Extended STL). In C# IEnumerable covers similar ground and more besides, so careful thought eliminates many cases in the end.

Don’t even mention ReadOnlyCollection! That is an abomination. I’m using a statically compiled language because I want the compiler to point out my mistakes at compile time, not to rely on discovering them at run time. Yes, good unit test coverage does minimise the chance of a bug slipping through the net, but I don’t understand why there isn’t an IReadOnlyList from which IList derives.

My current system uses a complex in-house container type that can be very expensive to copy and it too provides a single interface for reading and writing. The later addition in C# of the “initialiser syntax” seems to actively promote the creation of mutable types by default[+] which is not the de facto stance to take when you want thread-safe code to be the norm and not the exception. Even if you’re not bothered about immutability, this syntax still relies on you supporting 2-phase construction which is another thing to avoid by default.

Equality Comparison is Hard

In C++ everything is essentially a value type and so comparison is simple to implement. If you try and implement it on a type and anything it aggregates doesn’t support it you’ll get notified at compile time. C# is the other way round, which is both a blessing and a curse. Manually implementing the Equals() method correctly is non-trivial and that’s why everyone uses a tool like Resharper. Steve Love and Roger Orr gave an excellent talk on the subject at this year’s ACCU conference which is what made me realise the subject is way deeper than I ever imagined.
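By way of comparison, here is a minimal C++ sketch (the Author type is hypothetical) of how cheaply a simple value type gains equality - and if a member lacked operator== you would hear about it at compile time, not at run time:

```cpp
#include <string>
#include <tuple>

// A simple value type: memberwise equality is the obvious, correct default.
struct Author
{
    std::string id;
    std::string firstName;
    std::string lastName;
};

// std::tie lines the members up so the comparison is memberwise;
// adding a member to the struct only means adding it in two places here.
bool operator==(const Author& lhs, const Author& rhs)
{
    return std::tie(lhs.id, lhs.firstName, lhs.lastName)
        == std::tie(rhs.id, rhs.firstName, rhs.lastName);
}
```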

So what’s the blessing? Well it’s caused me to try and properly understand the difference between value types and reference types. This in turn has forced me to question whether I should be providing a comparison operator on a type in the first place or am I just doing it because it used to be easy. Unit testing makes this question harder because you often see a need for comparison as your unit test asserts are usually implemented by comparing the actual and expected outcomes. Implementing Equals() once in the type seems easier, but you can always implement it as a test helper method instead and refactor later if it turns out you got it wrong.


So do I feel more productive in C# than in C++? Probably. Slightly.

I’m one of those people that believes most time is spent in the designing phase, not in the actual coding. I spend way more time thinking about the problem and when the moment comes I probably write it in much the same way in either language. I’ve mentioned how nice it is to have a decent set of libraries on tap, and we definitely managed to leverage that in the early part of the project, but once the infrastructure was written it was pretty much business as normal.

If anything, the biggest win so far has been the tight integration between PowerShell and .Net as that has provided us with an ability to work around production issues you could only dream about in the days of C++/COM/VBScript.


[#] Throw something macro heavy like STLport into the mix and IntelliSense seems to fall apart.

[*] Nope, I have no relationship with them whatsoever, just a happy [corporate] customer.

[^] After what I said about TODOs the other day, this is one example where I have turned a TODO into a formal non-functional requirement. This is because the code is dangerous and building a facade over the mutable type in question is non-trivial, as is creating copies.

[+] The C# initialiser syntax relies on using public setter methods to assign values to properties. Curiously, when creating an anonymous type, you use the same syntax, but the compiler does not generate the public setters. Of course the new named argument syntax in C# 4 will hopefully push people back onto the right course once again.

Thursday, 22 September 2011

TODO or TODO Not - There Is No Later

The “TODO” style comment is a blight on the modern codebase. Not so much the wizard generated ones that MFC programmers never bother to remove, but ones blindly added in some vain hope that they are going to be understood and then actioned by some other programmer in their spare time. I once remember seeing a comment like this, which through the power of the version control system I discovered had been unanswered for nearly 10 years:-

// TODO: See if this is right.

One would hope that after so long it’s either right or on a dead code path that no one cares about. Hey, how about writing “a test” to answer the question instead of writing the comment? Going back to the wishful thinking style TODO, I particularly liked this one I saw in a piece of code I then had to work on:-

  // TODO: How to handle errors?

I answered the question by removing the catch-all handler and letting errors propagate to the caller. Funnily enough this unearthed an occasional error that then explained some other mysterious behaviour. Performance-related TODOs always amuse me because they are so likely to be wrong when the time comes:-

// TODO: Switch to a map if this needs to be faster
std::list<. . .> something;

What are the chances that this comment will still be correct by the time performance becomes a problem? I would never blindly make a change based on someone’s gut instinct 5 years ago about how the code was likely to evolve. If you’re worried enough that you’ve chosen the wrong container type up front to comment on it then change it. Otherwise let the profiler do the talking.

One of the first things I disable in Visual Studio is the auto-detection of TODOs being added to the Task window. After all, I don’t want to get bogged down in a sea of unscheduled work left by my predecessors. In the modern era of programming where we’re all doing TDD our entire workload is already planned out over the next few weeks in stories and any refactoring will be done along with it. You simply don’t have the time to mark a point in the code with the intention of coming back some time later to finish it off - done has to mean done.

If TODOs in the source code comments are bad enough, some teams go one step further and make use of compiler features like #pragma message in Visual C++ to make sure they also appear in the build output just in case you managed to avoid them in the code. This just makes a clean build output an impossibility as you’re constantly drawn to check out the distracting build message as it passes by each time in case it wasn’t the TODO this time but an important message…

On the last few projects I’ve worked on I’ve instigated a “No TODO can live past a release” policy. This means I grep the codebase just before a release and either promote the TODO to a fully fledged work item if it is still worthy of attention (probably with the lowest priority in the hope it’ll just go away), or more commonly, delete it. So many are just as superfluous as the code comments and will be just as apparent when the code is next visited - refactoring will happen naturally if it needs to be done.

Tuesday, 20 September 2011

Copy & Rename (Like Copy & Swap But For File-Systems)

In C++ one of the most popular idioms for writing exception safe code is the “copy & swap” idiom. The idea is that you perform all “dangerous” work off to the side and then “commit” the changes using an exception safe mechanism[*] such as std::swap[#]. The idiom is not exactly intuitive because what you want to do most of the time is commit by assignment, but that will likely cause a copy and therefore open you up to the possibility of an exception (e.g. out of memory) which is exactly what you’re trying to avoid. In contrast the implementation of std::swap (assuming it’s been implemented efficiently for the type in question) is usually very simple and just swaps some internal members. The swapped out value is then cleaned up later.
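As a reminder of the shape of the idiom, here is a minimal sketch (the Widget type is hypothetical):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

class Widget
{
public:
    Widget() = default;
    explicit Widget(std::vector<int> values) : m_values(std::move(values)) {}

    Widget(const Widget&) = default;

    // Taking the argument by value means the copy (which can throw)
    // happens before we touch *this; the commit is a no-throw swap.
    Widget& operator=(Widget rhs)
    {
        swap(*this, rhs);
        return *this;            // old state is cleaned up when rhs dies
    }

    friend void swap(Widget& lhs, Widget& rhs) noexcept
    {
        using std::swap;
        swap(lhs.m_values, rhs.m_values);
    }

    std::size_t size() const { return m_values.size(); }

private:
    std::vector<int> m_values;
};
```

By the time swap runs there is nothing left that can fail, so the object is never observed half-assigned.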

In the world of distributed systems it’s very common to receive data in “flat files”, published by some external system that you then consume as inputs, perhaps during some batch process, before publishing your results to someone else. These files can often be quite large and take a non-trivial amount of time to write by the publishing system and so it’s important that you don’t start reading them before writing has finished[^]. If you do happen to start processing a file before it’s completely written you can end up with some real head scratching moments as you try and diagnose it later (after writing has then finished and the file is intact!).

The solution is to use something akin to the copy & swap idiom, but using a file rename in place of std::swap to commit the change. The rename is just a metadata change and so torn files are avoided. It will also be safe in the face of a potential out-of-disk-space scenario as no “visible” data will be committed if the write terminates abruptly - the subscriber always gets all or nothing.
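A sketch of the technique using C++’s std::filesystem (a later addition to the standard library; the file names are illustrative):

```cpp
#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

// "Copy & rename": write the whole file under a temporary name, then
// commit it with a metadata-only rename. Subscribers polling for the
// final name see either nothing or the complete file, never a torn one.
void publishFile(const fs::path& target, const std::string& contents)
{
    fs::path temp = target;
    temp += ".tmp";                        // e.g. prices.csv.tmp

    {
        std::ofstream out(temp, std::ios::binary);
        out << contents;
    }                                      // writer closed before the commit

    fs::rename(temp, target);              // the commit step
}
```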

File renaming can also be used in a manner similar to the atomic CompareExchange function. When writing a service that caches data (in the file-system) and you get two simultaneous requests for the same data you have the potential for a race condition as both could try and update the cache together. What you can do is let both continue as normal but write their data to a temporary file or folder. Then you “commit” by renaming this temporary file/folder to its final name at the end. The winner gets to be oblivious whereas the loser gets an error on renaming to signal it lost and has to clean up before returning results; possibly out of the winner’s cache. If at any point an error occurs the temporary cache can be cleaned without affecting other requests.


[*] Yes it’s possible that std::swap could generate an Access Violation (or the equivalent thereof) and still cause corruption but that is no doubt considered to be covered under the realms of Undefined Behaviour.

[#] The new C++ standard introduces the concept of r-value references and std::move() which should make for a more intuitive solution in future.

[^] Assuming the file share mode allows it and you have a format that supports it you can do this but you’ll be adding a lot of complexity that you probably don’t need or can avoid through other means.

Monday, 19 September 2011

NTL/Virgin Emails Going Missing

The problem with being a software developer is that sometimes you [think you] know too much and, as the old saying goes, a little knowledge can be dangerous. Such was the case with a bizarre email problem my wife seemed to be having with her new iPhone…

She told me that emails were going missing. Now we all know that end users see the world in a different light to developers and so the first thing I did was to put aside her interpretation of the problem and get some cold hard facts. She swore there were emails she’d seen on her iPhone that hadn’t come through to Outlook on the desktop machine, even though a cursory glance didn’t show up any obvious casualties. So we left it that she would keep an eye out and investigate again when the problem knowingly resurfaced.

No doubt at this point anyone who knows anything about setting up email accounts will be shouting “Leave messages on the server”. This was my gut instinct too - that the iPhone was defaulting (or had been overridden) to delete messages from the server after retrieval. Up to this point I’d had zero experience with smart phones and so it took a little longer than usual to convince me that this wasn’t happening. I naturally did a little Googling too to see if this was a known issue with iPhones but nothing showed up.

A month or so later she commented that it was still happening and so we took another look. This time she did an extensive comparison between her various email accounts (she has many because of her freelance nature) and it transpired that it was only happening with her Virgin Media account (what was originally called NTL). This new information made the problem much more specific and it triggered me remembering something about Virgin changing the email system a while back so that they were using Google under the covers, or something like that. And so once again I fired up Google, this time armed with some additional keywords.

Many of the results still seemed to be about the opposite problem caused by the “Leave messages on server” setting. But finally I found one that seemed to mirror my wife’s problem and it talked about something called “recent mode”. This didn’t look like a standard POP feature but sounded more like a workaround NTL/Virgin might have added to the email system for the problem of multiple devices trying to access the same mailbox, e.g. desktop + smart phone. At the time I couldn’t find an official word on the subject, but I’ve since revisited the problem with my new phone and found this page on the Virgin Media site[*].

On the one hand I was somewhat relieved to discover that my reasonable knowledge of POP and email wasn’t totally flawed, but my diagnostic skills certainly took more of a bashing. My early assumption was that internet mail is well understood and so the problem must be device specific. Add to this my wife’s complaint about iPhones only supporting one Exchange server and it only strengthened my prejudice against Apple’s implementation. However once the finger was pointing to a single email provider as well things were easier. Of course now I know about “recent mode” it’s a doddle to find those articles again and the relevant page of Virgin’s web site :-)


[*] This workaround is not without its own problems. One side-effect is that you can see the same email multiple times, even after the initial fetch, depending on what email client you’re using.