
Dependency Property Basics

5 Mar

When working with WPF it can be necessary to extend a control to enable data-binding to a property that isn’t bind-able (because it is not a dependency property, for example). The following code demonstrates an example where I override a Text property and back it with a dependency property to enable data-binding.


public static readonly DependencyProperty TextProperty =
    DependencyProperty.Register(
        "Text",
        typeof(string),
        typeof(TextEditorControl),
        new FrameworkPropertyMetadata(OnTextPropertyChanged));

private static void OnTextPropertyChanged(DependencyObject dp, DependencyPropertyChangedEventArgs e)
{
    //TODO: action handler here
}

public override string Text
{
    get
    {
        return (string)GetValue(TextProperty) ?? string.Empty;
    }
    set
    {
        base.Text = value;
        SetValue(TextProperty, value);
    }
}


The code also shows how to handle the event raised when the binding source (the view model) is updated. I recommend using dependency properties for WPF controls; for view model classes I recommend implementing the INotifyPropertyChanged interface instead.
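For the view model side of that recommendation, a minimal INotifyPropertyChanged implementation might look like the sketch below (the class and property names are illustrative):

```csharp
using System.ComponentModel;

// Illustrative view model: raising PropertyChanged lets WPF bindings
// (such as one targeting the Text dependency property above) refresh.
public class EditorViewModel : INotifyPropertyChanged
{
    private string _text;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Text
    {
        get { return _text; }
        set
        {
            if (_text == value) return;
            _text = value;
            OnPropertyChanged("Text");
        }
    }

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

The guard against setting an unchanged value avoids redundant change notifications.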


Using Castle’s Dynamic Proxy

17 Dec

As a user of Rhino Mocks, Ninject and certain parts of the Castle Project, I found myself wondering what the Castle Project’s dynamic proxy was. I have since learned about, and love the idea of, dynamic proxies.

What does a dynamic proxy do?

A dynamic proxy is a generated subclass of a class or interface, typically a model class. That subclass overrides every method it can (mark your methods virtual to allow this). This provides the ability to intercept calls to all methods on your class/interface, because each sub-classed method routes the call through an interceptor interface which dictates whether the call can proceed. You could implement that functionality yourself, however you would need to cater for every method call by hand. The dynamic proxy provides one interceptor handler for all methods, and you can have many interceptors on one class.

Cross cutting concerns

One of the major benefits of using proxy objects is the ability to separate cross-cutting concerns such as logging. As an example, a logging interceptor can log calls made to a person object without the person object holding a reference to, or knowing about, the logging interceptor. The person object also doesn’t know about the dynamic proxy library.


An argument for separating logging from our classes is that we might want to turn logging on or off, or replace our logging library at a later date. Both of those scenarios could also be catered for by a logging class which is directly called by our Person object. The advantage of doing that might be minor performance gains; the disadvantage of not using a proxy is that the person object would be polluted with unnecessary code. Ideally a class has a single purpose, and in my opinion logging does not belong to the purpose of the person business object: logging is an environmental aspect around a person, as opposed to part of defining what a person is. A strong example of this is using a dynamic proxy to add permission-based access to calls on a third-party library without modifying the external library.
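As a sketch of that permission-based idea, an interceptor can veto calls instead of forwarding them (the permission check itself is hypothetical; IInterceptor and IInvocation come from Castle.DynamicProxy):

```csharp
using System;
using Castle.DynamicProxy;

// Sketch: only let the call through when a (hypothetical) permission
// check allows the current method; otherwise the call is rejected.
public class PermissionInterceptor : IInterceptor
{
    private readonly Func<string, bool> _isAllowed; // e.g. a rights lookup

    public PermissionInterceptor(Func<string, bool> isAllowed)
    {
        _isAllowed = isAllowed;
    }

    public void Intercept(IInvocation invocation)
    {
        if (_isAllowed(invocation.Method.Name))
            invocation.Proceed(); // forward to the real object
        else
            throw new UnauthorizedAccessException(
                "Access denied to " + invocation.Method.Name);
    }
}
```

The third-party class being proxied needs no changes at all; the policy lives entirely in the interceptor.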


The following code shows how a logging interceptor might look. It logs the name of the method currently being accessed, then allows the method request to proceed. Without the call to invocation.Proceed(), the call would not reach the person object in the previous example, and would instead be consumed.

    public class LoggingInterceptor : IInterceptor
    {
        public void Intercept(IInvocation invocation)
        {
            Console.Write("Log: Method Called: " + invocation.Method.Name);
            invocation.Proceed();
        }
    }


Boxing and Un-boxing proxies

To create a proxy you can do the following:

ProxyGenerator generator = new ProxyGenerator();

Person person = generator.CreateClassProxy<Person>(new LoggingInterceptor());


If you are developing in a situation where you have a network boundary and you need to send your business objects across a network you will probably want to shed your proxies and possibly recreate the proxies on the other side of the network boundary.

var proxyTarget = personProxyObject as IProxyTargetAccessor;

var person = proxyTarget.DynProxyGetTarget() as Person;


More Information

Boosting Visual Studio Performance

6 Mar

Visual Studio has always worked fairly well for me at home, but at work, where many of our projects/solutions are very large, Visual Studio struggles. My Visual Studio at home still performs better than the one I use at work, and I believe this is because at home my hard-drives are in a performance RAID (RAID 0) and my OS is Vista x64 rather than XP x86.

Two things I suspect are:

Vista manages memory much better and doesn’t struggle with the out-of-memory exceptions that I frequently get at work. Note: out-of-memory exceptions are actually to do with the .NET runtime being unable to garbage collect its generations properly, but the size of Visual Studio and the number of objects it uses is the cause of this.

Hard-drive speed affects Visual Studio more than CPU and potentially even RAM. When I get out-of-memory exceptions at work (we all get them, by the way) my PC is not using all of its 2 GB of RAM. Visual Studio is limited by default to 2 GB, but this can be overcome.



Faster, lower-latency hard drives:

On Vista with my RAID 0 my experience is better, but still latent. My IDE can pull through large amounts of data quickly, yet often takes a while to start doing so. In other words, the remaining small waits are down to latency; raw throughput is not really an issue.

I am starting to think that the ideal solution is a performance RAID of Solid State Drives (SSDs).  A link on performance improvements for a Java IDE is available here:

If you have an infinite money pool, a RAID card is recommended too. These can have RAM on them to cache disk access, which further reduces latency.

Upgrade OS:

Upgrade to Vista and make sure you get the 64-bit version. It is silly not to take advantage of an x64 CPU.




Myth: Resharper significantly degrades performance and causes out-of-memory exceptions.

Interestingly, Resharper surfaces the out-of-memory exceptions (which can be a pain) whereas Visual Studio virtually hides them from the user. Resharper is not causing these issues, and from what I have seen there is no significant performance difference except while a project is being loaded. That is because Resharper builds its cache at that point by analysing assemblies.


Myth: Increasing RAM will increase Visual Studio’s performance

I think that if Visual Studio is running out of memory then a RAM increase will help, but at work our RAM is not being fully utilized, so on XP increasing it won’t help us as the OS will not use it. On Vista, however, the OS endeavours to use as much free RAM as it can for caching and pre-allocation, so it is possible this will help a little. I think the biggest myth in personal computing is that upgrading CPU and RAM makes the biggest difference. My computer at home is 4 years old and my computer at work is 6 months old, yet the older home machine is faster in practical usage. If it came down to a number-crunching war the new computer would win, but there is virtually no use case for that in personal computing.

Beyond Relational Databases

14 Feb

I am no expert with databases, however I can definitely see the scalability issues that relational databases present when they need to grow beyond one database. Relational databases are also strongly typed, and explicit relationships exist between entities. This explicit relationship specification and strong typing often requires developers to write migration scripts, which can be a pain and could potentially be avoided or minimized.

There is an interesting article which discusses the benefits of key-value databases, or cloud-oriented databases.

Unfortunately most of these databases are in beta and standards between them are lacking, but from my experience developing payroll software I can definitely see that a key-value database would help us in many areas.

The downside of key-value databases is that they are weakly typed and hence there is no schema, which means bad data can get into the database; code then has to deal with that possibility, which could lead to more mistakes. I do feel, however, that weaker typing and not specifying relationships between entities is the way of the future, because if we look at C# and other modern languages we can notice a trend towards more weakly typed constructs.

I currently prefer a strongly typed language, however as technology progresses I can see that dynamic typing will lead to more general code that is more flexible to change. My biggest concern at present is that dynamically typed languages catch errors at runtime rather than compile time, which is a blow for software reliability. I am confident that this can be overcome.

It is important to state that there is an overhead in specifying types/schemas and converting between types. I also think that developers in strongly typed languages can assume that objects will be populated the way they expect, which sometimes is not the case. With dynamically typed objects this assumption would not be exercised, as it would soon become clear that less can be assumed. The development direction of .NET 3.5 and 4.0 shows that functional programming and dynamic objects are the current trend. I also like the hybrid approach .NET is taking: using the power of strong typing while providing facilities for the gradual introduction of dynamic typing. This will hopefully allow existing functionality that applies to strongly typed objects to be used against dynamically typed objects.
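As a sketch of that hybrid direction, the dynamic keyword in C# 4.0 defers member resolution to runtime, much like reading fields out of a schema-less key-value record (the record contents here are illustrative):

```csharp
using System;
using System.Dynamic;

// Sketch of C# 4.0 dynamic typing: members are resolved at runtime,
// much like fields read from a schema-less key-value store record.
class DynamicSketch
{
    static void Main()
    {
        dynamic record = new ExpandoObject();
        record.Name = "Alice";   // no compile-time schema:
        record.Salary = 50000;   // properties appear as they are assigned

        Console.WriteLine(record.Name);
        // record.Address would compile fine, but throw at runtime,
        // because nothing verifies the member exists until the call is made.
    }
}
```

This illustrates both sides of the trade-off discussed above: flexibility, at the cost of errors surfacing only at runtime.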

Event Handling Techniques

13 Feb

As .NET grows so do the number of ways of tackling the same or similar problems. Event Handling is no exception.

button1.Click += new RoutedEventHandler(button1_Click);        // 1
button1.Click += button1_Click;                                // 2
button1.Click += delegate(System.Object o, RoutedEventArgs e)  // 3
                     { MessageBox.Show("Click!"); };
button1.Click += delegate { MessageBox.Show("Click!"); };      // 4
button1.Click += (o, e) => MessageBox.Show("Click!");          // 5


Personally I use approaches 2 and 5 depending on the situation. Approaches 3, 4 and 5 are anonymous handlers, which means their name is generated at creation (so we don’t know what it is). This can sometimes be an issue if you want to unsubscribe from an event, see here. As with the previous post, Resharper will suggest using approaches 2 and 5.
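The unsubscription issue can be sketched with a plain .NET event (the Publisher type is illustrative): to detach an anonymous handler you must keep a reference to the exact delegate instance you attached.

```csharp
using System;

class Publisher
{
    public event EventHandler Something;

    public void Raise()
    {
        var h = Something;
        if (h != null) h(this, EventArgs.Empty);
    }
}

class UnsubscribeSketch
{
    static void Main()
    {
        var pub = new Publisher();

        // Keep a reference to the anonymous handler so it can be removed later.
        EventHandler handler = (o, e) => Console.WriteLine("handled");
        pub.Something += handler;
        pub.Raise();                // the handler runs

        pub.Something -= handler;   // works: same delegate instance
        pub.Raise();                // nothing runs

        // This removes nothing: the lambda below is a *new* delegate instance,
        // not the one that was subscribed.
        pub.Something -= (o, e) => Console.WriteLine("handled");
    }
}
```

Holding the handler in a variable partly defeats the brevity of the anonymous forms, which is why named methods (approach 2) still have their place.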

I feel there is an important difference between approaches 1 and 2 that should be elaborated on. Approach 1 declares the RoutedEventHandler delegate twice, because the delegate type is already specified in, and hence can be inferred from, the event declaration. This is shown below:

internal event RoutedEventHandler Click;


Inferring types rather than duplicating definitions is important: it means you don’t repeat yourself, and when you refactor your code it is easier to update something that is specified only once.

For those unfamiliar with delegates: delegates are method signature contracts. In the case of RoutedEventHandler, the contract specifies that an implementer will have the parameters (Object, RoutedEventArgs).

C# variable naming

15 Jan

From working in a team environment, I have learnt a number of different ways of naming variables. Reading this post, it could be thought that variable naming schemes are purely preference; however I believe there are arguments against certain schemes. Take for example:

        private String str;

        public void Method1(String str)
        {
            this.str = str;
        }

If you use a plugin called Resharper, like I do, then the method input is highlighted because it hides str. I have more of an issue with the following potential mistake:

        private String str;

        public void Method1()
        {
            this.str = str;
        }

The previous code is still valid to the compiler, even though it is clear that str is not going to be populated; a tool such as Resharper will not make you aware that this.str is never assigned a real value.

The approach that I use for code sample 1 would be:

        private String _str;

        public void Method1(String str)
        {
            _str = str;
        }

The underscore ‘_’ is used for member/class wide variables. If we are to replicate code sample 2 with the underscore naming scheme we will get the following result:

        private String _str;

        public void Method1()
        {
            _str = str;
        }

We can clearly see that this code will not compile, because str does not exist in scope. We can also observe that the underscore notation is more compact by not requiring the ‘this’ keyword.

Another two reasons I like the underscore naming convention are:


  1. Intellisense grouping. Press underscore key and intellisense will group all member variables together.

  2. No confusion as to the availability of a variable. If it has underscore it is available class-wide. If it doesn’t it is only available within that method.

I have seen other people use m_VariableName, but I think that just using an underscore is better because it is shorter, and the intellisense grouping works better because the underscore sits at the start of the intellisense list. Another alternative naming scheme involves putting the type in the variable name. I would advise against this because tools like Resharper can tell you what type a variable is at design time, and implicit typing will likely break the naming scheme.

List Conversion

9 Jan

One of my favourite use cases for lambda expressions in .NET is converting lists. Take the following code as an example:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows;

namespace LambdaComparison
{
    public partial class Window1 : Window
    {
        public Window1()
        {
            InitializeComponent();

            IList<Person> people = new List<Person>();

            Example1(people);
            Example2(people);
        }

        public void Example1(IList<Person> people)
        {
            var names = people
                          .Select(item => item.Name).ToList();
        }

        public void Example2(IList<Person> people)
        {
            var names = new List<String>();

            foreach (var person in people)
            {
                names.Add(person.Name);
            }
        }
    }

    public class Person
    {
        public String Name
        {
            get; set;
        }
    }
}

From using this approach for converting lists between types I have found that it leads to more readable code, as it is one line rather than many.

One other thing worth noting, and a very important tip when using LINQ or lambda expressions, is that the return type of a query is IEnumerable<T>. Often people expect a list, or use the var keyword. The var keyword is not actually a type; it is implicit typing, which means the compiler infers the type from the right-hand side of the assignment. I usually use var, but I had issues with more complex LINQ statements until I learned to work out what type var was representing and how it was being deduced (which is not immediately obvious for beginners with many LINQ statements). LINQ also offers the ability to create anonymous types, which have no name you can write down and therefore require implicit typing.
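Those points can be sketched in a few lines: a deferred IEnumerable<T> result, forcing it into a list with ToList(), and an anonymous type that only var can hold (the sample data is illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class VarSketch
{
    static void Main()
    {
        var people = new List<string> { "Ann", "Bob" };

        // Select returns a lazily evaluated IEnumerable<string>, not a list.
        IEnumerable<string> upper = people.Select(n => n.ToUpper());

        // ToList() forces evaluation and gives a concrete List<string>.
        List<string> upperList = upper.ToList();

        // An anonymous type has no name to write down, so var is required here.
        var pairs = people.Select(n => new { Name = n, Length = n.Length });

        foreach (var p in pairs)
            Console.WriteLine(p.Name + ": " + p.Length);
    }
}
```

Hovering over var in the IDE (or with Resharper) shows the inferred type, which is the quickest way to understand what a complex LINQ statement actually returns.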