Month: January 2008

Microsoft Managed Services Engine

I’m trying to work out why the Microsoft Managed Services Engine project isn’t getting more attention.

The short answer (I guess) is that the concept is pretty dry unless you are an architect who has encountered these sorts of problems 🙂

It is a great tool for virtualizing your web services. One previous project I worked on had an internal IT cost of around $40,000 to open a new port in their firewall for a new version of their client. This is an issue that MSE would have solved easily.


MSE allows you to host multiple services with the same names on a single endpoint.

It inspects the format of each incoming request to decide which version of the web service should handle it.
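MSE itself is configured through its management console rather than in code, but the core idea — a single endpoint dispatching to one of several service versions based on the shape of the incoming message — can be sketched generically. The namespaces and handlers below are hypothetical, and real MSE routing works on SOAP message structure rather than a simple dictionary key:

```python
# Hypothetical sketch: one virtual endpoint, many service versions.
# The dispatcher inspects the request's declared namespace (its
# "format") and routes it to the matching implementation.

def get_quote_v1(body):
    return {"quote": body["symbol"].upper(), "price": 10.0}

def get_quote_v2(body):
    # v2 of the contract adds a currency field
    return {"quote": body["symbol"].upper(), "price": 10.0, "currency": "USD"}

# Routing table keyed by the message's namespace/version
HANDLERS = {
    "urn:quotes:v1": get_quote_v1,
    "urn:quotes:v2": get_quote_v2,
}

def endpoint(request):
    """Single virtual endpoint: picks the handler from the request format."""
    handler = HANDLERS[request["namespace"]]
    return handler(request["body"])

result = endpoint({"namespace": "urn:quotes:v2", "body": {"symbol": "msft"}})
print(result)
```

The point of the indirection is that clients only ever see one address; retiring or adding a version is a change to the routing table, not to the firewall.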


When you decide that you no longer want to maintain an older service, you can deactivate it from the management console.

Furthermore, you can redirect requests for the retired service to another one by providing an XSLT transformation.


I admit that I don’t fully understand the capability of this feature. I know you can apply a ‘policy’ to a service, such as running a regular expression against its output. There is also a policy to ‘throttle’ a WCF service, which could be useful if you want to give priority to the newer one. It seems straightforward to define new policies… I can think of some security policies that might work well.
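I don’t know MSE’s actual policy model in detail, but the general idea — policies as composable wrappers around a service call, such as a regex check on the output or a crude throttle — can be sketched like this (the policy names and behaviour here are my own illustration, not MSE’s API):

```python
import re

# Hypothetical sketch of a policy chain: each policy wraps the
# service call and may inspect or reject the message passing through.

def regex_output_policy(pattern):
    """Reject responses whose body matches a forbidden pattern."""
    compiled = re.compile(pattern)
    def policy(call):
        def wrapped(request):
            response = call(request)
            if compiled.search(response):
                raise ValueError("response blocked by output policy")
            return response
        return wrapped
    return policy

def throttle_policy(max_calls):
    """Allow at most max_calls invocations (a very crude throttle)."""
    def policy(call):
        count = 0
        def wrapped(request):
            nonlocal count
            if count >= max_calls:
                raise RuntimeError("throttled")
            count += 1
            return call(request)
        return wrapped
    return policy

def service(request):
    return f"echo: {request}"

# Apply policies around the service, innermost first
guarded = throttle_policy(2)(regex_output_policy(r"secret")(service))

print(guarded("hello"))
```

Because each policy only sees a request/response pair, new ones (logging, schema validation, auth checks) slot in without touching the service itself.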

Reading around, it seems this was actually developed by Microsoft for some customers and then released as an open source CTP project.

The team do not seem to have a blog or anything around this, so it is hard to know where it is going. Nevertheless, I hope it gets a formal roadmap soon since this is something I’d like to consider using on customer projects.

Some questions I have that I will endeavor to answer myself over the coming weeks:

  1. Do virtualized services perform as well as ‘real’ ones?
  2. Are there ‘best practice’ guidelines?
  3. How can load balancing be applied?

Framework Design Guidelines – 2nd Edition

Software developers are hard people to work with. You can have running religious wars inside an organization over the position of the curly brackets, and people often mistake dangerous code for ‘efficient’ or ‘innovative’.

There aren’t many weapons to bring to bear on the belligerent programmer, but there is the second edition of Framework Design Guidelines!

The first edition was such a great book. The knowledge and rules within apply to any kind of .NET development, not just your own framework. You have all the arguments in black and white from the smartest source. No questions.

It isn’t, however, an arrogant book; the authors go to great lengths to justify their rules, or even disagree with each other over a point in question. This way you can be sure it is all deeply considered rather than arbitrary.

People often use agile philosophy as an excuse that quality code no longer matters: you can write bad, ugly code because if you need to fix something it can be ‘refactored’ or just written again.

Agile has its place but every programmer should strive to write elegant readable code. It is a craft that should be mastered and not seen as an overhead on project time.

Elegant code pays for itself many times in the lifetime of the codebase.

OMG! MFC Lives!!!

I started off my career writing apps and controls in C++ with the MFC library… despite what many say, I found it gave flexibility that Visual Basic could not, and allowed you to build a complex GUI with relative ease.

Considering that even WinForms looks obsolete with the advent of WPF, it is good to see that Microsoft have not abandoned MFC, but have in fact updated it!

I personally have no compelling reason to go back to MFC… WPF has sold me totally. But with so much legacy code out there, it isn’t inconceivable that I’ll get involved again in the future…

Database Flash Memory Storage

Hard disks have become so huge that less than $200 will now buy you a 500GB external drive in a store. Still, flash drives are catching up in size whilst providing far better access speeds and power usage. (Dell are charging $800 for a 32GB flash drive for my laptop – enough to be usable.)

Today, major SQL Server installations are unlikely to use flash drives, but the common belief is that this will change in the near future.

But before you start planning for these great 10x performance gains, you should spend some time understanding the underlying differences, and see that it is in fact the nature of your data reads and writes that will determine performance. It really isn’t as uniform as you might think, and it is worth reading the following paper on the subject:


Update: Spoke too soon… the flash drive for database systems is just around the corner
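The access-pattern point is easy to demonstrate crudely yourself: sequential and random reads over the same file can differ wildly on spinning disks, while flash narrows the gap. The sketch below is only illustrative — on a small file the OS page cache will blur the difference, so treat the numbers as hardware- and cache-dependent:

```python
import os
import random
import tempfile
import time

# Crude illustration: time sequential vs random reads over one file.
# On spinning disks random access is dramatically slower; on flash
# the gap narrows, which is why access patterns drive real-world gains.

BLOCK = 4096
BLOCKS = 2048  # 8 MB file

path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(os.urandom(BLOCK * BLOCKS))

def timed_read(offsets):
    """Read one block at each offset and return the elapsed time."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

sequential = [i * BLOCK for i in range(BLOCKS)]
shuffled = sequential[:]
random.shuffle(shuffled)

t_seq = timed_read(sequential)
t_rand = timed_read(shuffled)
print(f"sequential: {t_seq:.4f}s, random: {t_rand:.4f}s")
```

A database workload is effectively a mix of these two patterns, which is why a blanket "10x faster" claim for flash doesn’t survive contact with a real query profile.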

SQL Server 2005 Table Partitioning – Links

Table partitions have new and improved functionality in SQL Server 2005, and it is now actually very easy to define them.

A simple take is that partitions enable you to spread table data over multiple locations.

In one scenario, your application may have 10 years’ worth of data in a ‘Sales’ table. If you create a partition for each year and place each on a separate disk, you will in theory have fewer reads/writes per disk, resulting in more efficient queries.

In addition, many bulk operations can be further optimised by dropping indexes and other constraints on individual table partitions.
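To make the ‘Sales’ scenario concrete, the SQL Server 2005 DDL looks something like the fragment below. The table, column, and filegroup names are hypothetical (and the filegroups would need to exist already, each typically on its own disk); note that `RANGE RIGHT` with three boundary values produces four partitions, so the scheme maps four filegroups:

```sql
-- Hypothetical 'Sales' table partitioned by year (SQL Server 2005 DDL).

CREATE PARTITION FUNCTION pfSalesYear (datetime)
AS RANGE RIGHT FOR VALUES
    ('2000-01-01', '2001-01-01', '2002-01-01');  -- one boundary per year

CREATE PARTITION SCHEME psSalesYear
AS PARTITION pfSalesYear
TO (fgSales1999, fgSales2000, fgSales2001, fgSales2002);

CREATE TABLE Sales
(
    SaleID   int      NOT NULL,
    SaleDate datetime NOT NULL,
    Amount   money    NOT NULL
)
ON psSalesYear (SaleDate);  -- rows are placed in a partition by SaleDate
```

Once the table is on a partition scheme, operations like switching a whole year in or out become metadata-level moves rather than bulk copies.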

It should be noted that your database will have to be very large to make this a worthwhile exercise. Even then, other factors might mean little performance benefit, or even a degradation.

If you are interested, take a look at the following links to get started:

Partitioned Tables and Indexes in SQL Server 2005
How to Implement an Automatic Sliding Window in a Partitioned Table on SQL Server 2005
Top 10 SQL Server 2005 Performance Issues for Data Warehouse and Reporting Applications
Partition Elimination in SQL Server 2005

And for some more detail:

Loading Bulk Data into a Partitioned Table
