Version Properly using AssemblyVersion and AssemblyFileVersion

By jay at July 18, 2010 15:25

This article is also available in French.

We geeks talk quite easily about new technologies and things we just learned about, because that's the way we work. But for newcomers, this is not always easy to follow. This is a large recurring debate, but I find that it is good to step back from time to time and talk about good practices for these newcomers.

 

The AssemblyVersion and AssemblyFileVersion Attributes

When we want to give a version to a .NET assembly, we can choose between two common ways:

 

Most of the time, and by default in the Visual Studio 2008 templates, we can find the AssemblyInfo.cs file in the Properties section of a project. This file generally only contains an AssemblyVersion attribute, which forces the AssemblyFileVersion value to the AssemblyVersion's value. The AssemblyFileVersion attribute is now added by default in the Visual Studio 2010 C# project templates, and this is a good thing.

It is possible to see the value of the AssemblyFileVersion attribute in the file properties window of the Windows Explorer, or by adding the "File Version" column, still in the Windows Explorer.
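Both values can also be read programmatically. Here is a minimal sketch, inspecting System.dll (any loaded assembly would do):

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

class ShowVersions
{
    static void Main()
    {
        // The assembly identity version (AssemblyVersion), used by the CLR for binding
        Assembly assembly = typeof(Uri).Assembly;
        Console.WriteLine(assembly.GetName().Version);

        // The Win32 file version (AssemblyFileVersion), displayed by the Windows Explorer
        FileVersionInfo info = FileVersionInfo.GetVersionInfo(assembly.Location);
        Console.WriteLine(info.FileVersion);
    }
}
```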

We can also use the automatic numbering provided by the C# compiler, through the use of:

[assembly: AssemblyVersion("1.0.0.*")]

 

Each new compilation will create a new version.

This feature is enough at first, but when your projects start getting somewhat complex, you may need to introduce continuous integration and nightly builds. You will then want to version the assemblies in such a way that it is easy to find which source control revision was used to compile them.

You can then modify the Team Build scripts to use tasks such as the AssemblyInfo task from the MSBuild Community Tasks, and generate a new AssemblyInfo.cs file that contains the proper version.
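As an illustration only, the generation could look like this in the build script; the $(BuildRevision) property and the target wiring are hypothetical and depend on how your build computes the revision:

```xml
<Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

<Target Name="BeforeBuild">
  <!-- Regenerate AssemblyInfo.cs with the revision computed by the build -->
  <AssemblyInfo CodeLanguage="CS"
                OutputFile="Properties\AssemblyInfo.cs"
                AssemblyVersion="1.0.0.$(BuildRevision)"
                AssemblyFileVersion="1.0.0.$(BuildRevision)" />
</Target>
```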

 

Publishing a new version of an Assembly

To come back to the subject of versioning an assembly properly: when a project build has been published, we generally want to know quickly which version has been installed on the client's systems. Most of the time, we want to know which version is in use because there is an issue, and we will need to provide an updated assembly that contains a fix. Particularly when the software cannot be completely reinstalled on those systems.

A somewhat real-world example

Let's consider a solution with two assemblies signed with a strong name, Assembly1 and Assembly2, where Assembly1 uses types available in Assembly2, and where both have their AssemblyVersion set to 1.0.0.458. These assemblies are part of an official build published on the client's systems.

If we want to provide a fix in Assembly2, we will create a branch in source control from the revision that produced 1.0.0.458, and make the fix in that branch, which will give revision 460, hence the version 1.0.0.460.

If we let the build system compile that revision, we will get assemblies marked as 1.0.0.460. If we take only Assembly2 and place it on the client's systems, the CLR will refuse to load this new version of the assembly, because Assembly1 requires Assembly2 to have the version 1.0.0.458. We can use a bindingRedirect in the configuration file to get around that, but this is not always convenient, particularly when we update a lot of assemblies.
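For reference, such a redirect looks roughly like this in the application configuration file; the publicKeyToken here is a placeholder for the actual token of Assembly2:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Tell the CLR to load 1.0.0.460 when 1.0.0.458 is requested -->
        <assemblyIdentity name="Assembly2" publicKeyToken="0123456789abcdef" culture="neutral" />
        <bindingRedirect oldVersion="1.0.0.458" newVersion="1.0.0.460" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```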

We could also compile this new version with the AssemblyVersion of Assembly2 set back from 1.0.0.460 to 1.0.0.458, but this has the disadvantage of lying about the actual version of the file, which will make diagnostics more complex in case another issue happens later.

Adding AssemblyFileVersion

To avoid those issues with the resolution of assembly dependencies, it is possible to keep the AssemblyVersion constant, and use the AssemblyFileVersion to provide the actual version of the assembly.

The version specified in the AssemblyFileVersion is not used by the .NET Runtime, but is displayed in the file properties in the Windows Explorer.

We will then keep the AssemblyVersion set to the originally published version of the application, initially set the AssemblyFileVersion to the same version, and later change only the AssemblyFileVersion when we publish fixes to these assemblies.
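With the numbers of the example above, the AssemblyInfo.cs of the fixed Assembly2 would then contain something like:

```csharp
using System.Reflection;

// Stays at the originally published version, so that Assembly1 still binds
[assembly: AssemblyVersion("1.0.0.458")]

// Bumped for each published fix; this is what the Windows Explorer displays
[assembly: AssemblyFileVersion("1.0.0.460")]
```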

Microsoft uses this technique to version the .NET runtime and BCL assemblies: if we take a look at System.dll for .NET 2.0, we can see that the AssemblyVersion is set to 2.0.0.0, and that the AssemblyFileVersion is set, for instance, to 2.0.50727.4927.

 

Other examples of versioning issues

We can find other cases of loading issues linked to a mismatch between the version of the loaded assembly and the version that was expected.

Custom Behaviors in WCF

WCF gives the developer a way to provide custom behaviors to alter the default behaviors of the out-of-the-box bindings, and it is necessary to provide the fully qualified type name, without any error. This is a pretty annoying bug in WCF 3.x, because it is somewhat complex to debug, and it is a very good use case for deactivating "Just My Code" to find out why the assembly is not being loaded.
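To make the constraint concrete, here is roughly what registering a custom behavior extension looks like in configuration; the names are hypothetical, and the point is that the type attribute must match the assembly-qualified name exactly, version included:

```xml
<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <!-- The type attribute must be the exact assembly-qualified name,
           version, culture and public key token included -->
      <add name="myInspector"
           type="MyCompany.MyInspectorElement, MyCompany.Wcf, Version=1.0.0.458, Culture=neutral, PublicKeyToken=null" />
    </behaviorExtensions>
  </extensions>
</system.serviceModel>
```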

Good news though, this pretty old bug has been fixed in WCF 4.0!

Dynamic Proxy Generators

Some dynamic proxy generators like Castle Dynamic Proxy 2 or Spring.NET use fully qualified type names to generate the code for the proxies, and loading issues can occur if the assembly referenced by the proxy is not exactly the one being loaded, with or without a strong name. These libraries are heavily used for AOP, or by frameworks like NHibernate, Castle ActiveRecord or iBATIS.

To be a bit more precise, the use of the ProxyGenerator.CreateInterfaceProxyWithTarget method generates a proxy that targets the assembly that was referenced when the code for the proxied interface was generated.

To give an example, let's take an interface I1 in an assembly A1 (1.0.0.0), which has a method that uses a type T1 from an assembly A2 (1.0.0.0). If we change the assembly A2 so that its version becomes 2.0.0.0, the proxy will not be generated properly, because the reference T1/A2 (1.0.0.0) compiled into A1 (1.0.0.0) will be used, regardless of the fact that we loaded A2 (2.0.0.0).

The best practice of not changing the AssemblyVersion avoids loading issues of this kind. These issues are not blocking, but they mean more work to get around them.

And You ?

This is only an example of a "best practice", which seems to have worked properly so far.

And you ? What do you do ? Which practices do you use to version your assemblies ?

[VS2010] On the Impacts of Debugging with “Just My Code”

By jay at July 05, 2010 19:58

This article is also available in French.

The “Just My Code” feature has been there for a while in Visual Studio. Since Visual Studio 2005 actually. And it's fairly easy to miss its details...

High level, this feature only shows you the part of the stack that contains your code, mostly the assemblies that are compiled in debug mode and have debugging symbols (PDB files). Most of the time, this is interesting, particularly if you're debugging fairly simple code.

But if you're debugging somewhat complex issues, where you want to intercept exceptions that may be rethrown in some parts of the code that are not "Just Your Code", then you have to disable it.

If you’re an experienced .NET developer, chances are you disabled it because it annoyed you at some point. I did, until a while back.

 

Debugger Exception Handling

The “Just my Code” (I’ll call it JMC for the rest of the article) feature changes a few things in the way the debugger handles exceptions.

If it is enabled, you’ll notice two columns in the “Debug / Exceptions” menu :

  • Thrown, which means that if you check that box, the debugger will break on the least deep rethrow in the stack of the exception
  • User-unhandled, which means that if you check that box the debugger will break if the exception has not been handled by any user code exception handler in the current stack.

 

If it is not enabled, then the same dialog box will display one column :

  • Thrown, which means that the debugger will break as soon as the exception is thrown

 

You’ll probably notice a big difference in the way the debugger handles the “Thrown” option. To be a bit more clear about that difference, let’s consider this code sample :

    static void Main(string[] args) 
    { 
        try 
        { 
            var t = new Class1(); 
            t.Throw(); 
        } 
        catch (Exception e) 
        { 
            Console.WriteLine(e); 
        } 
      }
    

Main executable, in debug configuration with debugging symbols enabled

    public class Class1 
    { 
        public void Throw() 
        { 
            try 
            { 
                Throw2(); 
            } 
            catch (Exception e) 
            { 
                throw; 
            } 
        }
        private void Throw2() 
        { 
            throw new InvalidOperationException("Test"); 
        } 
      }

Different assembly, in debug configuration without debugging symbols.

If we execute this code under the debugger with JMC enabled and with the "Thrown" column checked for InvalidOperationException, here is the stack trace:

     NotMyCode.dll!NotMyCode.Class1.Throw() + 0x51 bytes
  > MyCode.exe!MyCode.Program.Main(string[] args = {string[0]}) Line 15 + 0xb bytes

 

And here is the stack trace without the JMC feature :

     NotMyCode.dll!NotMyCode.Class1.Throw2() + 0x46 bytes
     NotMyCode.dll!NotMyCode.Class1.Throw() + 0x3d bytes
  > MyCode.exe!MyCode.Program.Main(string[] args = {string[0]}) Line 15 + 0xb bytes

 

You'll notice the impact of the "least deep in the stack rethrow": if you enable JMC, you will not get the original location of the exception.

Then you may wonder why it is interesting to have the original location of the exception in the debugger. It is a debugging technique that is commonly used to find tricky issues that throw exceptions deep in code you do not own, and one of these exceptions is often TypeInitializationException. It can be useful to break at the original location to have the proper context, or the stack that led to the exception.
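To illustrate with TypeInitializationException: the CLR wraps a failing type initializer, and breaking only on the outer wrapper loses the context of the original failure. A small self-contained sketch:

```csharp
using System;

static class Config
{
    // The field initializer runs in the type initializer; if it throws,
    // every use of the type surfaces a TypeInitializationException
    public static readonly int Value = Parse("not a number");

    static int Parse(string s)
    {
        return int.Parse(s); // throws FormatException
    }
}

class Program
{
    static void Main()
    {
        try
        {
            Console.WriteLine(Config.Value);
        }
        catch (TypeInitializationException e)
        {
            // The original exception, thrown deep in the initializer
            Console.WriteLine(e.InnerException.GetType().Name); // FormatException
        }
    }
}
```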

Lately, I've been using this technique of "break on all exceptions" without JMC to troubleshoot the loading of 32-bit assemblies in a 64-bit CLR. You don't exactly know which exception you're looking for in the first place, and having JMC "hiding" some exceptions is not of great help.

Also, to be fair, deeper and more intense debugging often leads to the use of WinDbg and the SOS extension (and here is a good SOS cheat sheet). But that's another topic.

 

Step Into “Debugging Experience” with JMC

If you’ve read this far, you may now ask yourself why you would ever want to enable JMC. After all, you can handle your code yourself and with enough experience, you can easily mentally ignore pieces of the stack that are not yours. Actually, the gray font used for code that does not have debugging symbols helps a lot for that.

Well, there’s one example of good use of JMC : The debugger “Step into” feature. A very simple feature that allows step by step debugging of the software.

If you’re in debugging mode, you’ll step into the code that is called on the next line, if that’s possible, and see what’s in there.

To demonstrate this, let's consider this example:

    static void Main(string[] args) 
    { 
        var myObject = new MyObject();

        Console.WriteLine(myObject); 
    }
    
    class MyObject 
    { 
        public override string ToString() 
        { 
            return "My object";
        } 
    }
      

This is a very simple program that will use the fact that Console.WriteLine will call the ToString method on the object that is passed as a parameter.

The point of this sample is to make "My Code" (Main) call some of "Not My Code" (Console.WriteLine) that will call back into "My Code" (MyObject.ToString). Easy.

Now if you run this sample under the debugger with JMC disabled and try to "Step Into" Console.WriteLine, you'll actually step over. This is not very helpful from the point of view of debugging your own code.

A very concrete example of that lack of "Step Into" shows up when you have proxies like the ones found in Spring.NET or Castle's DynamicProxy: they get in the way of simple debugging. You can't step into objects that have been proxied to perform some AOP, for instance.

But if you enable JMC, well, you can actually "Step Into" your own code, even if the next method called when you step into is not one of yours.

 

Final Words

Using JMC in this context is very useful, natural I would say. And the feature has been there for so long that I had missed its original goals. It originally got in my way for deep debugging purposes, and I dismissed it as a "junior", even cosmetic, feature. Well, I was wrong...

Anyway, in Visual Studio 2010, JMC has been improved a bit, and the way to enable and disable it is now far easier to reach, since it is in the IntelliTrace "Show Calls View".

Time to switch to Visual Studio 2010, people ! :)

[WP7Dev] Using the WebClient with Reactive Extensions for Effective Asynchronous Downloads

By jay at June 22, 2010 21:07

There’s a very cool framework that has slipped into the Windows Phone SDK : The Reactive Extensions.

It's actually quite a misunderstood framework, mainly because it is a bit hard to harness, but when you get a handle on it, it is very handy! I particularly like the MemoizeAll extension, a tricky one, but very powerful.

But I digress.

 

A Non-Reactive String Download Sample

On Windows Phone 7, the WebClient class only has a DownloadStringAsync method and a corresponding DownloadStringCompleted event. That means that you're forced to be asynchronous, to be nice to the UI and not make the application freeze on the user, instead of falling into the bad coding habit of making synchronous remote calls.

In a world without the reactive extensions, you would use it like this :

public void StartDownload()
{
    var wc = new WebClient();
    wc.DownloadStringCompleted += 
      (e, args) => DownloadCompleted(args.Result);
                  
    // Start the download
    wc.DownloadStringAsync(new Uri("http://www.data.com/service"));
}

public void DownloadCompleted(string value)
{
    myLabel.Text = value;
}

Pretty easy. But you soon find out that the DownloadStringCompleted event is raised on the UI thread. That means that if, for some reason, you need to perform some expensive computation after you've received the string, you'll freeze the UI for the duration of that computation. And since Windows Phone 7 is all about fluidity, and you don't want to be the bad guy, you then have to queue the work on the ThreadPool.

But you also have to update the UI in the dispatcher, so you have to come back from the thread pool.

You then have :

 public void StartDownload()
 {
     WebClient wc = new WebClient();
     wc.DownloadStringCompleted += 
        (e, args) => ThreadPool.QueueUserWorkItem(d => DownloadCompleted(args.Result));

     // Start the download
     wc.DownloadStringAsync(new Uri("http://www.data.com/service"));
  }

 public void DownloadCompleted(string value)
 {
     // Some expensive calculation
     Thread.Sleep(1000);

     Dispatcher.BeginInvoke(() => myLabel.Text = value);
 }

That’s a bit more complex. And then you notice that you also have to handle exceptions because, well, it’s the Web. It’s unreliable.

So, let’s add the exception handling :

public void StartDownload()
{
    WebClient wc = new WebClient();

    wc.DownloadStringCompleted += (sender, args) => {
        try {
            // Reading args.Result rethrows any download error, so read it
            // here, where it can be caught, and not inside the pool lambda
            var result = args.Result;
            ThreadPool.QueueUserWorkItem(d => DownloadCompleted(result));
        }
        catch (Exception) {
            myLabel.Text = "Error !";
        }
    };
   
    // Start the download
    wc.DownloadStringAsync(new Uri("http://www.data.com/service"));
}

public void DownloadCompleted(string  value)
{
    // Some expensive calculation
    Thread.Sleep(1000);
    Dispatcher.BeginInvoke(() => myLabel.Text = value);
}

That's starting to be a bit complex. But then you also have to wait for another call from another WebClient to end, and show both results.

Oh well. Fine, I'll spare you that one.

 

The Same Using the Reactive Extensions

The Reactive Extensions treat asynchronous events like a stream of events. You subscribe to the stream of events and leave, and you let the reactive framework do the heavy lifting for you.

I’ll spare you the explanation of the duality between IObservable and IEnumerable, because Erik Meijer explains it very well.

So, I'll start again with the simple example; after adding the System.Observable and System.Reactive references, I can download a string:

public void StartDownload()
{
    WebClient wc = new WebClient();

    var o = Observable.FromEvent<DownloadStringCompletedEventArgs>(wc, "DownloadStringCompleted")

                      // When the event fires, just select the string and make
                      // an IObservable<string> instead
                      .Select(newString => newString.EventArgs.Result);

    // Subscribe to the observable, and set the label text
    o.Subscribe(s => myLabel.Text = s);


    // Start the download
    wc.DownloadStringAsync(new Uri("http://www.data.com/service"));
}

This does the same thing as the very first example. You'll notice the use of Observable.FromEvent to transform the event into an observable built from the DownloadStringCompleted event args. For this example, the event stream will only contain one event, since the download only occurs once. Each occurrence of the event is then "projected", using the Select statement, to a string that represents the result of the web request.

It’s a bit more complex for the simple case, because of the additional plumbing.

But now we want to handle the thread context changes. The Reactive Extensions have the concept of schedulers, used to observe an IObservable in a specific context.

So, we use the scheduler like this :

public void StartDownload()
{
    WebClient wc = new WebClient();

    var o = Observable.FromEvent<DownloadStringCompletedEventArgs>(wc, "DownloadStringCompleted")

                      // Let's make sure that we’re on the thread pool
                      .ObserveOn(Scheduler.ThreadPool)

                      // When the event fires, just select the string and make
                      // an IObservable<string> instead
                      .Select(newString => ProcessString(newString.EventArgs.Result))

                      // Now go back to the UI Thread
                      .ObserveOn(Scheduler.Dispatcher)

                      // Subscribe to the observable, and set the label text
                      .Subscribe(s => myLabel.Text = s);

    wc.DownloadStringAsync(new Uri("http://www.data.com/service"));
}

public string ProcessString(string s)
{
    // A very very very long computation
    return s + "1";
}
 

In this example, we’ve changed contexts twice to suit our needs, and now, it’s getting a bit less complex than the original sample.

And if we want to handle exceptions, well, easy :

    .Subscribe(s => myLabel.Text = s, e => myLabel.Text = "Error ! " + e.Message);

And you have it !

 

Combining the Results of Two Downloads

Combining two or more asynchronous operations can be very tricky: you have to handle exceptions, rendez-vous and complex states. That makes for a very complex piece of code that I won't write here (I promised), but instead I'll give you a sample using the Reactive Extensions:

public IObservable<string> StartDownload(string uri)
{
    WebClient wc = new WebClient();

    var o = Observable.FromEvent<DownloadStringCompletedEventArgs>(wc, "DownloadStringCompleted")

                      // Let's make sure that we're not on the UI Thread
                      .ObserveOn(Scheduler.ThreadPool)

                      // When the event fires, just select the string and make
                      // an IObservable<string> instead
                      .Select(newString => ProcessString(newString.EventArgs.Result));

    wc.DownloadStringAsync(new Uri(uri));

    return o;
}

public string ProcessString(string s)
{
    // A very very very long computation
    return s + "<!-- Processing End -->";
}

public void DisplayMyString()
{
    var asyncDownload = StartDownload("http://bing.com");
    var asyncDownload2 = StartDownload("http://google.com");

    // Take both results and combine them when they'll be available
    var zipped = asyncDownload.Zip(asyncDownload2, (left, right) => left + " - " + right);

    // Now go back to the UI Thread
    zipped.ObserveOn(Scheduler.Dispatcher)

          // Subscribe to the observable, and set the label text
          .Subscribe(s => myLabel.Text = s);
}

You'll get a very interesting combination of Google and Bing :)

[WP7Dev] Beware of the [ThreadStatic] attribute on Silverlight for Windows Phone 7

By Admin at June 19, 2010 21:36

This article is also available in French.

In other words, it is not supported !

And the worst part of all this is that you don't even get warned that it's not supported... The code compiles, but the attribute has no effect at all! Granted, you can read the MSDN article about the differences between Silverlight on Windows and on Windows Phone, but well, you may still miss it. Maybe a custom code analysis rule could prevent this.
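As a reminder, here is the behavior [ThreadStatic] gives you on the desktop CLR, and that silently disappears on the phone: each thread sees its own copy of the field.

```csharp
using System;
using System.Threading;

class Program
{
    [ThreadStatic]
    private static int _value;

    static void Main()
    {
        _value = 42;

        int seenFromOtherThread = -1;
        var thread = new Thread(() => seenFromOtherThread = _value);
        thread.Start();
        thread.Join();

        Console.WriteLine(_value);              // 42: the main thread's slot
        Console.WriteLine(seenFromOtherThread); // 0: each thread has its own slot
    }
}
```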

Still, you want to use ThreadStatic because you probably need it, somehow. Since it is not supported, you could try Thread.GetNamedDataSlot instead, mind you.

Well, too bad. It’s not supported either.

That leaves us implementing our own TLS substitute, by hand...

 

Updating Umbrella for Silverlight on Windows Phone

I'm a big fan of Umbrella, and the first time I had to use Dictionary<,>.TryGetValue and its magically awful out parameter in my attempt to rewrite my Remote Control app for Windows Phone 7, I decided to port Umbrella to it, so I could use GetValueOrDefault without rewriting it, again.

I managed to get almost all the desktop unit tests to pass, except for those that emit code, use web features, use the XML and binary serializers, call private methods using reflection, and so on.

There are a few parts where the code needed to be updated: because the TypeDescriptor class is not available on WP7, you have to "crash and burn" to see if a value is convertible from one type to the other. But that's not too bad, and it works as expected.

 

Umbrella’s ThreadLocalSource

Umbrella has this nice ThreadLocalSource class that wraps the TLS behavior, and you can easily create a static variable of that type instead of the ThreadStatic static variable.

The Umbrella quick start samples use it this way:

    ISource<int> threadLocal = new ThreadLocalSource<int>(1);

    int valueOnOtherThread = 0;

    Thread thread = new Thread(() => valueOnOtherThread = threadLocal.Value);
    thread.Start();
    thread.Join();

    Assert.Equal(1, threadLocal.Value);
    Assert.Equal(0, valueOnOtherThread);

The main thread sets the value to 1, and the other thread tries to get the same value from its own context, which should be different (the default value of an int, which is 0).

 

Updating the ThreadLocalSource to avoid the use of ThreadStatic

The TLS in .NET is basically a dictionary of string/object pairs attached to each running thread. So, to mimic this, we just need to keep a list of all the threads that want to store something for themselves, and wrap it nicely.

We can create a variable of this type :

    private static Tuple<WeakReference, IDictionary<string, T>>[] _tls = new Tuple<WeakReference, IDictionary<string, T>>[0];

That variable is intentionally an array, to try to take advantage of memory locality; since on that platform we won't get a lot of threads, this should be fine when we go through the array to find one. This approach tries to be lock-free, by using a retry mechanism to update the array. The WeakReference is used to avoid keeping a reference to a thread after it has terminated.

So, to update the array, we can do as follows :

    private static IDictionary<string, T> GetValuesForThread(Thread thread)
    {
        // Find the TLS for the specified thread
        var query = from entry in _tls

                    // Only get threads that are still alive
                    let t = entry.T.Target as Thread

                    // Get the requested thread
                    where t != null && t == thread
                    select entry.U;

        var localStorage = query.FirstOrDefault();

        if (localStorage == null)
        {
            bool success = false;

            // The storage for the new Thread
            localStorage = new Dictionary<string, T>();

            while(!success)
            {
                // store the original array so we can check later if there has not
                // been anyone that has updated the array at the same time we did
                var originalTls = _tls;

                var newTls = new List<Tuple<WeakReference, IDictionary<string, T>>>();

                // Add the slots for which threads references are still alive
                newTls.AddRange(_tls.Where(t => t.T.IsAlive));

                var newSlot = new Tuple<WeakReference, IDictionary<string, T>>()
                {
                    T = new WeakReference(thread),
                    U = localStorage
                };

                newTls.Add(newSlot);

                // If no other thread has changed the array, replace it.
                // CompareExchange returns the previous value of _tls; the swap
                // happened only if that previous value is still originalTls.
                success = Interlocked.CompareExchange(ref _tls, newTls.ToArray(), originalTls) == originalTls;
            }
        }

        return localStorage;
    }

Instead of the array, another dictionary could be used, but I'm not sure of the actual performance improvement it would provide, particularly for very small arrays.

Using a lock-free approach like this one will most likely limit the contention around the use of that TLS-like class. From time to time, computations may be performed multiple times in case of race conditions on the update of the _tls array, but that is completely acceptable. Additionally, livelocks are also out of the picture on this kind of preemptive system.
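Stripped of the TLS specifics, the retry loop above is the classic compare-and-swap publication pattern; here is a small self-contained sketch of the same idea:

```csharp
using System;
using System.Linq;
using System.Threading;

class LockFreeList
{
    private static int[] _items = new int[0];

    public static int Count { get { return _items.Length; } }

    // Lock-free append: rebuild the array, then publish it only if nobody
    // else replaced it in the meantime; otherwise retry
    public static void Append(int value)
    {
        while (true)
        {
            var original = _items;
            var updated = original.Concat(new[] { value }).ToArray();

            if (Interlocked.CompareExchange(ref _items, updated, original) == original)
                return;
        }
    }

    static void Main()
    {
        var threads = Enumerable.Range(0, 4)
            .Select(i => new Thread(() => { for (var j = 0; j < 100; j++) Append(j); }))
            .ToArray();

        foreach (var t in threads) t.Start();
        foreach (var t in threads) t.Join();

        Console.WriteLine(Count); // 400: no lost updates
    }
}
```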

I think developing on that platform is going to be full of little workarounds like this one... This is going to be fun!

[VS2010] How to disable the Power Tools Ctrl+Click Go to Definition

By Admin at June 13, 2010 17:17

Last week, Microsoft released the Visual Studio 2010 Productivity Power Tool Extensions, which include a lot of features that probably should have made it into the VS2010 RTM, but somehow did not.

 

A must install, really. Just for the fixed Add Reference dialog that includes a search filter. A big time saver.

 

But there's also another feature, the Ctrl+Click Go to Definition, that allows going to the definition with a single Ctrl + left click (instead of the F12 key in the default keyboard bindings).

 

If you're like me, you may be lazy enough to let the text editor select complete words, and you're probably using Ctrl + left click to select words so you don't have to aim too much with the mouse pointer. That particular Power Tools feature conflicts directly with this, and you end up constantly going to the definition of types when you just want to select text... And that's pretty annoying.

 

There does not seem to be a way to disable that feature from the IDE, so you may as well disable it the hard way:

  • Go to C:\Users\USER_NAME\AppData\Local\Microsoft\VisualStudio\10.0\Extensions\Microsoft\Visual Studio 2010 Pro Power Tools\10.0.10602.2200
  • Remove or rename the GoToDefProPack.dll file.
  • Enjoy your complete word selection again !

Have fun :)

[LINQ] Finding the next available file name

By jay at June 10, 2010 20:16

This article is also available in French.


Sometimes, the most simple examples are the best.

 

Let's say you have a configuration file, but you want to make a copy of it before you modify it. Easy: you copy that file to "filename.bak". But what happens if that file already exists? Well, either you replace it, or you create an auto-incremented file name.

 

If you want to do the latter, you could do it using a for loop. But since you're a happy functional programming guy, you want to do it using LINQ.

 

You then can do it like this :

    public static string CreateNewFileName(string filePath)
    {
        if (!File.Exists(filePath))
            return filePath;

        // Compute these once, not for each candidate file name.
        var directory = Path.GetDirectoryName(filePath);
        var name = Path.GetFileNameWithoutExtension(filePath);
        var extension = Path.GetExtension(filePath);

        // Now find the next available file
        var fileQuery = from index in Enumerable.Range(2, 10000)

                        // Build the file name, in the original directory
                        let fileName = Path.Combine(directory, string.Format("{0} ({1}){2}", name, index, extension))

                        // Does it exist ?
                        where !File.Exists(fileName)

                        // No ? Select it.
                        select fileName;

        // Return the first one.
        return fileQuery.First();
    }

Note the use of the let operator, which introduces a new "range variable" that can be reused in the following clauses. In this case, it avoids repeating the string.Format call.
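For the curious, the let clause is translated by the compiler into an extra Select that carries the new range variable along in an anonymous type, which is why the computed value is not re-evaluated. The two queries below are roughly equivalent:

```csharp
using System;
using System.Linq;

class LetDemo
{
    static void Main()
    {
        // The 'let' form:
        var q1 = from i in Enumerable.Range(2, 3)
                 let s = string.Format("({0})", i)
                 select s;

        // ...is roughly what the compiler generates in method syntax:
        var q2 = Enumerable.Range(2, 3)
                           .Select(i => new { i, s = string.Format("({0})", i) })
                           .Select(x => x.s);

        Console.WriteLine(string.Join(",", q1)); // (2),(3),(4)
        Console.WriteLine(string.Join(",", q2)); // (2),(3),(4)
    }
}
```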

 

The case of Infinity

There's actually one problem with this implementation: the arbitrary "10000". This might be fine if you don't intend to make more than 10000 backups of your configuration file. But if you do, to lift that limit, we can write this iterator method:

    public static IEnumerable<int> InfiniteRange(int start)
    {
         while(true)
         {
             yield return start++;
         }
    }

Which basically returns a new value each time one is asked for. To use that method, you have to make sure that you have an exit condition (the file does not exist, in the previous example), or you may well be enumerating until the end of time... Actually up to int.MaxValue, for the nit-pickers, but .NET 4.0 adds System.Numerics.BigInteger to be sure to get to the end of time. You never know.

 

To use this iterator, just replace :

        var fileQuery = from index in Enumerable.Range(2, 10000)

by

        var fileQuery = from index in InfiniteRange(2)

And you’re done.

[VS2010] Configure Code Analysis for the Whole Solution

By jay at March 06, 2010 17:38

This article is also available in French.

In Visual Studio, configuring Code Analysis used to be a bit cumbersome. If you had more than a bunch of projects (say 10), it could take a long time to manage them and keep a single set of rules for the whole solution. You had to resort to updating all the project files by hand, or use a small tool that edited the csproj files to set the same rules everywhere.

Not very pleasant, nor efficient. Particularly when you have hundreds of projects.

In Visual Studio 2010, the product team added two things :

  1. Rules are now in external files, and not embedded in the project file anymore. That makes the rules reusable in the other projects of the solution. Nice.
  2. There's a new section in the Solution properties named "Code Analysis Settings", which allows setting the rule files to use for single projects, and even better, for all projects! Very nice.

That option is also available from the “Analyze” menu, with “Configure Code Analysis for Solution”.

One gotcha there, though: to be able to select all the files, you can't use Ctrl+A; you have to select the first item, then hold Ctrl while selecting the last one. Maybe the product team will fix that for the release...

Migrating Rules from VS2008

If you're migrating your projects from VS2008, and were using Code Analysis there, you'll notice that the converter generates a file named "Migrated rules for MyProject.ruleset" for every project in the solution. That's fine if your projects don't all have the same rules. But if they do, you'll have to manage all of them...

Like all programmers, I’m lazy, and I wrote a little macro that will remove all generated ruleset files for the current solution, and use a single rule set.

This is not a very efficient macro, but since it won’t be used that often... You’ll probably live with the bad performance, and bad VB.NET code :)

Here it is :

Sub RemoveAllRuleset()

    For Each project As Project In DTE.Solution.Projects
        FindRuleSets(project)
    Next

End Sub

Sub FindRuleSets(ByVal project As Project)

    For Each item As ProjectItem In project.ProjectItems

        If Not item.SubProject Is Nothing Then
            If Not item.SubProject.ProjectItems Is Nothing Then

                ' Recurse into solution folders and sub-projects
                FindRuleSets(item.SubProject)

                ' Collect the generated ruleset files, then remove them
                Dim ruleSets As List(Of ProjectItem) = New List(Of ProjectItem)

                For Each subItem In item.SubProject.ProjectItems
                    If subItem.Name.StartsWith("Migrated rules for ") Then
                        ruleSets.Add(subItem)
                    End If
                Next

                For Each ruleset In ruleSets
                    ruleset.Remove()
                Next
            End If
        End If
    Next

End Sub

Reactive Framework: MemoizeAll

By jay at February 04, 2010 18:21 Tags: , , ,

This article is also available in French.

For some time now, with the release of the Rx Framework and reactive/interactive programming, some new features have been highlighted through a very good article by Bart De Smet dealing with System.Interactive and “lazy caching”.

When using LINQ, one can find two sorts of operators: the “lazy” operators that take elements one by one and forward them when they are requested (Select, Where, SelectMany, First, …), and the operators that I would call “rendez-vous”, for which the entirety of the elements of the source needs to be enumerated (Count, ToArray/ToList, OrderBy, …) to produce a result.

 

“Lazy” Operators

Lazy operators are pretty useful as they offer good performance when it is not required to enumerate all the elements of an enumerable. They are also useful when enumerating each element takes a very long time and we only want the first few elements.

For instance this :
         

static IEnumerable<int> GetItems()
{
    for (int i = 0; i < 5; i++)
    {
        Console.WriteLine(i);
        yield return i + 1;
    }
}

static void Main()
{
   Console.WriteLine(GetItems().First());
   Console.WriteLine(GetItems().First());
}

Will output :


0
1
0
1

Only the first element of the enumerator will be enumerated from GetItems().

However, these operators expose a behavior that is important to know about: each time they are enumerated, they also enumerate their source again. That can either be an advantage (enumerating a changing source multiple times) or a problem (enumerating a resource-intensive source multiple times).

 

“Rendez-vous” Operators

These operators are also interesting because they force the enumeration of all the elements of the enumerable, and in the particular case of ToArray, this allows the creation of an immutable copy of the content of the enumerable. They are useful in conjunction with lazy operators, to prevent the latter from enumerating their source again when enumerated multiple times.

If we take the previous sample and update it a bit:


static void Main()
{
   var items = GetItems().ToArray();

   Console.WriteLine(items.Count());
   Console.WriteLine(items.Count());
}

We get this result :


0
1
2
3
4
5
5

Because ToArray() needs to enumerate all the elements of the source to build the array; Count() then simply returns the length of that array.

Like lazy operators, these operators enumerate their source on each use, but enumerating the result of a ToArray/ToList does not enumerate the original source again.

The case of multiple enumerations

A concrete example of the problem posed by multiple enumerations is the creation of an enumerable partitioning operator. In this example, we can see that the enumerable passed as the source is used by two different "Where" operators, which implies that the source enumerable will be enumerated twice. Storing the whole content of the source enumerable by means of a ToArray/ToList is possible, but that would be a potential waste of resources, mainly because we can't know whether the output enumerable will be enumerated completely (and in the case of an infinite enumerable, ToArray is not applicable at all).

An intermediate operator between "Lazy" and "Rendez-vous" would be useful.

EnumerableEx.MemoizeAll

The EnumerableEx class brings us an extension method, MemoizeAll (built on the memoization concept), that is just the middle ground we're looking for: it caches elements from the source enumerator as they are requested. A sort of "lazy" ToArray.
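To illustrate the idea, here is a simplified, non-thread-safe sketch of such an operator (this is not the actual Rx implementation; the CacheAll name is hypothetical, chosen to avoid clashing with the real EnumerableEx.MemoizeAll). An iterator shares one cache list and one source enumerator between all its consumers:

```csharp
using System;
using System.Collections.Generic;

public static class LazyCacheDemo
{
    // Counts how many elements the "expensive" source actually produced
    public static int Pulls;

    public static IEnumerable<int> Source()
    {
        for (int i = 1; i <= 5; i++)
        {
            Pulls++;
            yield return i;
        }
    }

    // Simplified MemoizeAll-style operator: elements are pulled from the
    // source only once, then served from the shared cache list.
    public static IEnumerable<T> CacheAll<T>(IEnumerable<T> source)
    {
        return CacheAllImpl(source.GetEnumerator(), new List<T>());
    }

    private static IEnumerable<T> CacheAllImpl<T>(IEnumerator<T> source, List<T> cache)
    {
        int index = 0;
        while (true)
        {
            if (index == cache.Count)
            {
                if (!source.MoveNext())
                    yield break;

                cache.Add(source.Current);
            }

            yield return cache[index++];
        }
    }

    public static void Main()
    {
        var cached = CacheAll(Source());

        int sum = 0;
        foreach (var i in cached) sum += i; // first pass pulls from the source
        foreach (var i in cached) sum += i; // second pass reads the cache

        Console.WriteLine(sum);   // 30
        Console.WriteLine(Pulls); // 5 : the source was enumerated only once
    }
}
```

Both passes yield the same elements, but the source iterator only ran once; the real MemoizeAll additionally takes care of thread safety and enumerator disposal.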

If we take the example of Mark Needham, we would modify it like this :


var evensAndOdds = Enumerable.Range(1, 10)
                             .MemoizeAll()
                             .Partition(x => x % 2 == 0);

In this example, MemoizeAll does not have a real benefit on the performance side, since Enumerable.Range is not a very expensive operator. But when the source of the "Partition" operator is a more expensive enumerable, like a Linq2Sql query, the lazy caching can be very effective.

One of the comments suggests that a GroupBy-based implementation could be written, but that operator also evaluates its source again each time a group is enumerated. MemoizeAll is then again appropriate for better performance but, as always, this is a tradeoff between processing and memory.

By the way, Bart De Smet also discusses the elimination of side effects linked to the multiple enumeration of enumerables, by using Memoize and MemoizeAll. This is not really an issue in the previous example, but it is nonetheless a very interesting subject.

 

.NET 4.5 ?

On a side note, I find it regrettable that the EnumerableEx extensions did not make their way into .NET 4.0... They are very useful, and not very complex. They may have arrived too late in the development cycle of .NET 4.0... Maybe in .NET 4.5 :)

WinForms, DataBinding and Updates from multiple Threads

By jay at January 02, 2010 23:08 Tags: ,

This article is also available in French.

When trying to use the MVC pattern with WinForms, it is possible to use the INotifyPropertyChanged interface to enable data binding between the controller and the form.

It is then possible to write a controller like this :

    

public class MyController : INotifyPropertyChanged
{
    // Register a default handler to avoid having to test for null
    public event PropertyChangedEventHandler PropertyChanged = delegate { };

    public void ChangeStatus()
    {
        Status = DateTime.Now.ToString();
    }

    private string _status;

    public string Status
    {
        get { return _status; }
        set
        {
            _status = value;

            // Notify that the property has changed
            PropertyChanged(this, new PropertyChangedEventArgs("Status"));
        }
    }
}

The form is defined like this :


public partial class MyForm : Form
{
    private MyController _controller = new MyController();

    public MyForm()
    {
        InitializeComponent();

        // Make a link between labelStatus.Text and _controller.Status
        labelStatus.DataBindings.Add("Text", _controller, "Status");
    }

    private void buttonChangeStatus_Click(object sender, EventArgs e)
    {
        _controller.ChangeStatus();
    }
}


The form will update “labelStatus” when the “Status” property of the controller changes.

All of this code is executed in the main thread, where the message pump of the main form is located.

 

A touch of asynchronism

Let’s imagine now that the controller is going to perform some operations asynchronously, using a timer for instance.

We update the controller by adding this :


private System.Threading.Timer _timer;

public MyController()
{
    _timer = new Timer(
        d => ChangeStatus(),
        null,
        TimeSpan.FromSeconds(1), // Start in one second
        TimeSpan.FromSeconds(1)  // Every second
    );
}


By altering the controller this way, the “Status” property is going to be updated regularly.

The operation model of System.Threading.Timer implies that the ChangeStatus method is called from a different thread than the one that created the main form. Thus, when the code executes, the update of the label is halted by the following exception :

   Cross-thread operation not valid: Control 'labelStatus' accessed from a thread other than the thread it was created on.

The solution is quite simple: the update of the UI must be performed on the main thread, using Control.Invoke().

That said, in our example, it is the data binding engine that hooks onto the PropertyChanged event. We must therefore make sure that the PropertyChanged event itself is raised “decorated” by a call to Control.Invoke().

We could update the controller to invoke the event on the main Thread:


set
{
    _status = value;

    // Notify that the property has changed
    Action action = () => PropertyChanged(this, new PropertyChangedEventArgs("Status"));
    _form.Invoke(action);
}


But that would require adding WinForms-dependent code to the controller, which is not acceptable. If we wanted to put the controller under unit test, calling Control.Invoke() would be problematic, as we would need a Form instance that we do not have in that context.

 

Delegation by Interface

The idea is to delegate to the view (here, the form) the responsibility of placing the call to the event handler on the main thread. We can do so with an interface passed as a parameter of the controller’s constructor. It could be an interface like this one :


public interface ISynchronousCall
{
    void Invoke(Action a);
}


The form would implement it:


void ISynchronousCall.Invoke(Action action)
{
    // Call the provided action on the UI thread using Control.Invoke()
    Invoke(action);
}


We would then raise the event like this :


_synchronousInvoker.Invoke(
    () => PropertyChanged(this, new PropertyChangedEventArgs("Status"))
);

But like every efficient (read: lazy) programmer, we want to avoid writing an interface.

 

Delegation by Lambda

We will instead use lambda functions to call the Control.Invoke() method. For this, we will update the constructor of the controller and, instead of taking an interface as a parameter, we will use :


public MyController(Action<Action> synchronousInvoker)
{
    _synchronousInvoker = synchronousInvoker;
    ...
}

To clarify, we give the constructor an action whose responsibility is to invoke the action passed to it as a parameter.

This allows building the controller like this :


_controller = new MyController(a => Invoke(a));

Here, there is no need to implement an interface; we just pass a small lambda that invokes an action on the main thread. It is used like this :


_synchronousInvoker(
    () => PropertyChanged(this, new PropertyChangedEventArgs("Status"))
);

This means that the lambda specified as a parameter will be called on the UI Thread, in the proper context to update the associated label.

The controller is still isolated from the view, yet adopts the view’s behavior when updating “databound” properties.

If we wanted to use the controller in a unit test, it would be constructed this way :


_controller = new MyController(a => a());

The passed lambda would only need to call the action directly.
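As a sketch of what such a test could look like, here is a trimmed-down controller with only the Status property (TestableController is a hypothetical name, and plain assertions stand in for a real test framework), showing that the a => a() invoker makes notifications run inline:

```csharp
using System;
using System.ComponentModel;

public class TestableController : INotifyPropertyChanged
{
    // Register a default handler to avoid having to test for null
    public event PropertyChangedEventHandler PropertyChanged = delegate { };

    private readonly Action<Action> _synchronousInvoker;
    private string _status;

    public TestableController(Action<Action> synchronousInvoker)
    {
        _synchronousInvoker = synchronousInvoker;
    }

    public string Status
    {
        get { return _status; }
        set
        {
            _status = value;
            _synchronousInvoker(
                () => PropertyChanged(this, new PropertyChangedEventArgs("Status"))
            );
        }
    }
}

public static class ControllerTest
{
    public static void Main()
    {
        // In a unit test, the invoker simply runs the action inline
        var controller = new TestableController(a => a());

        string raisedProperty = null;
        controller.PropertyChanged += (s, e) => raisedProperty = e.PropertyName;

        controller.Status = "ready";

        Console.WriteLine(raisedProperty); // prints "Status"
    }
}
```

No Form, no message pump: the notification is observed synchronously, which is exactly what makes the controller testable.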

 

Bonus: Easier writing of the notification code

A drawback of using INotifyPropertyChanged is that the name of the property must be written as a string. This is a problem for many reasons, mainly when using refactoring or obfuscation tools.

C# 3.0 brings expression trees, a pretty interesting feature that can be used in this context. The idea is to use an expression tree as a hypothetical “memberof” that gets the MemberInfo of a property, much like typeof gets the System.Type of a type.

Here is a small helper method that raises events :


private void InvokePropertyChanged<T>(Expression<Func<T>> expr)
{
    var body = expr.Body as MemberExpression;

    if (body != null)
    {
        PropertyChanged(this, new PropertyChangedEventArgs(body.Member.Name));
    }
}

A method that can be used like this :


_synchronousInvoker(
    () => InvokePropertyChanged(() => Status)
);

The “Status” property is used as a property in the code, not as a string. It is then easier to rename it with a refactoring tool without breaking the code logic.

Note that the lambda () => Status is never called. It is only analyzed by the InvokePropertyChanged method as being able to provide the name of a property.

 

The Whole Controller


public class MyController : INotifyPropertyChanged
{
    // Register a default handler to avoid having to test for null
    public event PropertyChangedEventHandler PropertyChanged = delegate { };

    private System.Threading.Timer _timer;
    private readonly Action<Action> _synchronousInvoker;

    public MyController(Action<Action> synchronousInvoker)
    {
        _synchronousInvoker = synchronousInvoker;

        _timer = new Timer(
            d => Status = DateTime.Now.ToString(),
            null,
            1000, // Start in one second
            1000  // Every second
        );
    }

    public void ChangeStatus()
    {
        Status = DateTime.Now.ToString();
    }

    private string _status;

    public string Status
    {
        get { return _status; }
        set
        {
            _status = value;

            // Notify that the property has changed
            _synchronousInvoker(
                () => InvokePropertyChanged(() => Status)
            );
        }
    }

    /// <summary>
    /// Raises the PropertyChanged event for the property “get” specified in the expression
    /// </summary>
    /// <typeparam name="T">The type of the property</typeparam>
    /// <param name="expr">The expression to get the property from</param>
    private void InvokePropertyChanged<T>(Expression<Func<T>> expr)
    {
        var body = expr.Body as MemberExpression;

        if (body != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(body.Member.Name));
        }
    }
}

WCF Streamed Transfers, IIS6 and IIS7 HTTP KeepAlive

By Jay at July 10, 2009 21:18 Tags: ,

This post is also available in French.

A while back, I was working on a client issue where I was getting a rather unusual socket exception from a WCF client connecting to an IIS6-hosted WCF service.

To make a long story short: if you're using .NET 3.5 WCF streamed transfers on IIS6 and making a lot of transfers in a short time, disable the KeepAlive feature on your web site. The performance will be lower, but it will last longer (without a client support call).

Still here with me ? :) If you have a bit more time to read, here are some details about what I found on this issue...

The setup is pretty simple : a WCF client sends a stream to a WCF service that has its transferMode set to streamed. This allows the transfer of a lot of data using genuine streaming, which means that the client writes to a System.IO.Stream instance and the server reads from another System.IO.Stream, so the data does not need to be transferred all at once, as in a "normal" buffered SOAP communication. I'm using the required basicHttpBinding on both ends.
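For reference, the server-side configuration for such a setup looks roughly like this; a hedged sketch where the binding name, size limit and contract name are illustrative, and only transferMode="Streamed" is the point:

```xml
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <!-- "streamedBinding" is an illustrative name -->
      <binding name="streamedBinding"
               transferMode="Streamed"
               maxReceivedMessageSize="2147483647" />
    </basicHttpBinding>
  </bindings>
  <services>
    <service name="StreamServiceTest.Service1">
      <endpoint address=""
                binding="basicHttpBinding"
                bindingConfiguration="streamedBinding"
                contract="StreamServiceTest.IService1" />
    </service>
  </services>
</system.serviceModel>
```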

The strange thing is that after having made more than 15000 requests to transfer streams, I was receiving this exception :

System.ServiceModel.CommunicationException: Could not connect to http://server/streamtest/StreamServiceTest.Service1.svc.
TCP error code 10048: Only one usage of each socket address (protocol/network address/port) is normally permitted 10.0.0.1:80. 
---> System.Net.WebException: Unable to connect to the remote server
---> System.Net.Sockets.SocketException: Only one usage of each socket address (protocol/network address/port) is normally permitted 10.0.0.1:80

This is a rather common issue, mostly found when an application tries to bind to a TCP port but cannot do so, either because the port is already bound by another application, or because it does not use the SO_REUSEADDR socket option and the port was closed very recently.

What is rather unusual is that this exception is raised on the client side and not on the server side !

After a few netstat -an, I found out that an awful lot of sockets were lingering with the following state :

TCP    the.client:50819     the.server:80          TIME_WAIT

There were something like 15000 lines of this, with incrementing local port numbers. This state is normal, TCP is meant to work that way, but it is generally found on a server, much less often on a client.

That could mean only one thing, considering that IIS 6.0 is an HTTP/1.1 compliant web server: WCF is requesting that the connection be closed after a streamed transfer.

Wireshark being my friend, I started looking at the content of the dialog between IIS6 and my client application :

POST /streamtest/StreamServiceTest.Service1.svc HTTP/1.1
MIME-Version: 1.0
Content-Type: multipart/related; type="application/xop+xml";start="<http://tempuri.org/0>";boundary="uuid:41d2cf74-aaa6-4a80-a6c4-0ec37692a437+id=1";start-info="text/xml"
SOAPAction: "http://tempuri.org/IService1/Operation1"
Host: the.server
Transfer-Encoding: chunked
Expect: 100-continue
Connection: Keep-Alive

The server answers this :

HTTP/1.1 100 Continue

Then the stream transfer takes place, the SOAP response comes back, and at the end :

HTTP/1.1 200 OK
Date: Sat, 11 Jul 2009 01:40:16 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
Connection: close
MIME-Version: 1.0
Transfer-Encoding: chunked
Cache-Control: private

I quickly found out that IIS 6.0, or the WCF handler, forces the connection to close on this last request. That is not particularly unusual, since a server may explicitly deny an HTTP client the ability to keep the connection alive.

What is even more unusual is that, by sheer luck, when I tried deactivating the IIS 6.0 keep-alive setting on my web site, I noticed that all the connections were properly closed on the client...!

I tried analyzing the dialog between the client and the server a bit deeper, and noticed two differences :

  1. The content of the final answer from IIS contains two "Connection: close" headers, possibly one from the WCF handler and one from IIS itself. I'm not sure whether repeating headers is forbidden by the RFC; I'd have to read it again to be sure.
  2. It looks like the order of the FIN/ACK, ACK packets is a bit different, but I'm not sure either what that implies. Both the client and the server are sending FIN packets to the other side, probably the result of calling Socket.Close().

But then I found out something even stranger : it all works on IIS7 ! And best of all, the KeepAlive status is honored by the web server. This obviously means that the overall performance of the web service is better on IIS7 than on IIS6, since a single connection is reused for all my 15000 calls, which is rather good. Too bad my client cannot switch to IIS7 for now...

It also seems that the WCF client does not behave the same way as it does with IIS6: at the TCP level, only the client sends a TCP FIN packet, and the server does not, when keep-alive is disabled.

I think I'll be posting this on Microsoft Connect soon, but I'm not sure where the problem lies, whether it is in IIS6, the WCF client or the WCF server handler, but there is definitely an issue here.

About me

My name is Jerome Laban. I am a Software Architect, C# MVP and .NET enthusiast from Montréal, QC. You will find my blog on this site, where I add my thoughts on current events, or the things I'm working on, such as the Remote Control for Windows Phone.