Thursday, December 31, 2009

The "Fancy Proxy" - having fun with WCF - 1-Simple TCP

This article is part of The "Fancy Proxy" tutorial - implementing a few advanced ideas in WCF in one solution.

This is the 1st step - Simple TCP

Ok... so let's start then...

1st step, let's implement the easiest WCF "hello world" - just to warm up.

We'll start with the contract:

[ServiceContract]
public interface ISampleContract
{
[OperationContract]
string GetData(Guid identifier);

[OperationContract]
void Execute(Guid identifier);
}


Next we'll implement this interface on the server side - our "business logic":

public class testFancyProxyService:ISampleContract
{
public string GetData(Guid identifier)
{
Console.WriteLine("recieved GetData request (id {0})", identifier);

//in real life we'll probably get data using the identifier
return "hello world..";
}

public void Execute(Guid identifier)
{
Console.WriteLine("recieved Execute request (id {0})", identifier);
}
}


Next, let's host the service:

class Program
{
private static ServiceHost serviceHost = null;

static void Main(string[] args)
{
try
{
startListening();

Console.WriteLine("Server is up, press any key to stop it...");
Console.Read();
}
catch (Exception ex)
{
Console.WriteLine(string.Format("Error: {0}\n\n Stack:{1}", ex.Message, ex.StackTrace));
Console.Read();
}
finally
{
stopListening();
}
}

private static void startListening()
{
serviceHost = new ServiceHost(typeof(testFancyProxyServer.testFancyProxyService));

// listening for messages.
serviceHost.Open();
}

private static void stopListening()
{
if (serviceHost != null)
{
if (serviceHost.State == CommunicationState.Opened)
{
serviceHost.Close();
}
}
}
}


Configure the server's endpoint (inside its <service> element) to listen on a TCP address - port 8080 here - and that's it for the server.

<endpoint address="net.tcp://localhost:8080/testFancyProxy" binding="netTcpBinding"
contract="testFancyProxyContracts.ISampleContract" />


Same thing on the client side - we'll code a small proxy (yes, you could instead use "Add Service Reference" in Visual Studio :-)):


public class testFancyProxyProxy:ISampleContract
{
private ChannelFactory<ISampleContract> channelFactory;
private ISampleContract proxy;

public testFancyProxyProxy()
{
channelFactory = new ChannelFactory<ISampleContract>("tcpEndPoint");
proxy = channelFactory.CreateChannel();
}

#region ISampleContract Members

public string GetData(Guid identifier)
{
return proxy.GetData(identifier);
}

public void Execute(Guid identifier)
{
proxy.Execute(identifier);
}

#endregion
}


Configure the client and test it:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<system.serviceModel>
<client>
<endpoint address="net.tcp://localhost:8080/testFancyProxy/"
binding="netTcpBinding"
contract="testFancyProxyContracts.ISampleContract"
name="tcpEndPoint"/>
</client>
</system.serviceModel>
</configuration>


Testing...:

public class testFancyProxyConsumer
{
private testFancyProxyClient.testFancyProxyProxy proxy;

public void Run()
{
proxy = new testFancyProxyProxy();

Console.WriteLine(proxy.GetData(Guid.NewGuid()));

Console.WriteLine("calling Execute..");
proxy.Execute(Guid.NewGuid());

}
}







That's it!! 1st step - nice & simple... this project will be the base for the next steps.

Till next step...
Diego

PS: source download

The "Fancy Proxy" - having fun with WCF

A little project I was doing at home lately led me to search for a solution for disconnected applications.

In my imagination I pictured an application that knows how to work both "online" and "offline"... not just that, but one that also knows how to switch between them when needed and, of course, is controlled from WCF configuration or other infrastructure that is as "transparent" as possible for the programmer who writes the application.

Sounds a little like science fiction? Not quite...

Gathering information from all sorts of good articles & blogs, I've arrived at a nice project which I've decided to share with you in a kind of tutorial structure, to help whoever finds this interesting, step by step.
I know the code could & should get a bit of polish, but hey! remember it's just an idea, not production material :-)

Each step in this "tutorial" represents a step on the way to the full solution; this is only to help understand each concept separately, so feel free to jump over a few steps or go directly to the final step...

1- Simple TCP
2- Simple MSMQ
3- Simple Duplex
4- MSMQ Duplex
5- Simple Dynamic proxy
6- Dynamic & Duplex proxy

Feel free to ask or comment...

Diego

Tuesday, December 22, 2009

XmlDataDocument - Synchronizing DataSet and XmlDocument

Introduction

The XmlDataDocument object combines the relational data model (DataSet) and the hierarchical data model (XML document) and performs transparent synchronization between them to ensure that at any point in time both models contain the same data.
This means that any change made to the DataSet is automatically reflected in the corresponding XmlDocument.

XmlDataDocument Usage Example

In order to demonstrate how to work with the XmlDataDocument object, we will create a simple dataset (representing a books database):


DataSet newDS = new DataSet("BooksDS");

DataTable authorsTable = new DataTable("Authors");
authorsTable.Columns.Add("AuthorID", typeof(int));
authorsTable.Columns.Add("FirstName", typeof(string));
authorsTable.Columns.Add("LastName", typeof(string));
authorsTable.PrimaryKey = new DataColumn [] { authorsTable.Columns["AuthorID"]};

authorsTable.Rows.Add(1, "Benjamin", "Franklin");
authorsTable.Rows.Add(2, "Herman", "Melville");

newDS.Tables.Add(authorsTable);

DataTable booksTable = new DataTable("Books");
booksTable.Columns.Add("BookID", typeof(int));
booksTable.Columns.Add("AuthorID", typeof(int));
booksTable.Columns.Add("BookTitle", typeof(string));
booksTable.Columns.Add("Genre", typeof(string));
booksTable.Columns.Add("PublicationDate", typeof(DateTime));
booksTable.Columns.Add("ISBN", typeof(string));
booksTable.PrimaryKey = new DataColumn[] { booksTable.Columns["BookID"] };

booksTable.Rows.Add(1, 1, "The Autobiography of Benjamin Franklin", "autobiography", DateTime.Now, "1-2-234234-4");
booksTable.Rows.Add(2, 2, "The Confidence Man", "novele", DateTime.Now, "5-2-234234-4");

newDS.Tables.Add(booksTable);

newDS.Relations.Add("AuthorsBooks",
newDS.Tables["Authors"].Columns["AuthorID"],
newDS.Tables["Books"].Columns["AuthorID"]);
newDS.Relations[0].Nested = true;



Now we can create an instance of XmlDataDocument by feeding its constructor with newDS:



XmlDataDocument xmlDoc = new XmlDataDocument(newDS);


Since XmlDataDocument inherits from XmlDocument, we have all the XML-related functionality within it.
In addition, XmlDataDocument exposes a DataSet property which references our newDS instance and allows access to the relational view.

We can persist our XmlDataDocument into an XML file and check how the inner XML looks:



xmlDoc.Save("books.xml");


The following XML file is generated:



<BooksDS>
  <Authors>
    <AuthorID>1</AuthorID>
    <FirstName>Benjamin</FirstName>
    <LastName>Franklin</LastName>
    <Books>
      <BookID>1</BookID>
      <AuthorID>1</AuthorID>
      <BookTitle>The Autobiography of Benjamin Franklin</BookTitle>
      <Genre>autobiography</Genre>
      <PublicationDate>2009-12-22T12:15:06.7278538+02:00</PublicationDate>
      <ISBN>1-2-234234-4</ISBN>
    </Books>
  </Authors>
  <Authors>
    <AuthorID>2</AuthorID>
    <FirstName>Herman</FirstName>
    <LastName>Melville</LastName>
    <Books>
      <BookID>2</BookID>
      <AuthorID>2</AuthorID>
      <BookTitle>The Confidence Man</BookTitle>
      <Genre>novele</Genre>
      <PublicationDate>2009-12-22T12:15:06.7278538+02:00</PublicationDate>
      <ISBN>5-2-234234-4</ISBN>
    </Books>
  </Authors>
</BooksDS>


Let's change some field value in newDS and save the XML again:


newDS.Tables["Books"].Rows[0]["BookTitle"] = "The Revised biography of Benjamin Franklin";
xmlDoc.Save("books.xml");


As we can see, the change is automatically reflected in xmlDoc:



<BooksDS>
  <Authors>
    <AuthorID>1</AuthorID>
    <FirstName>Benjamin</FirstName>
    <LastName>Franklin</LastName>
    <Books>
      <BookID>1</BookID>
      <AuthorID>1</AuthorID>
      <BookTitle>The Revised biography of Benjamin Franklin</BookTitle>
      <Genre>autobiography</Genre>
      <PublicationDate>2009-12-22T12:15:06.7278538+02:00</PublicationDate>
      <ISBN>1-2-234234-4</ISBN>
    </Books>
  </Authors>
  <Authors>
    <AuthorID>2</AuthorID>
    <FirstName>Herman</FirstName>
    <LastName>Melville</LastName>
    <Books>
      <BookID>2</BookID>
      <AuthorID>2</AuthorID>
      <BookTitle>The Confidence Man</BookTitle>
      <Genre>novele</Genre>
      <PublicationDate>2009-12-22T12:15:06.7278538+02:00</PublicationDate>
      <ISBN>5-2-234234-4</ISBN>
    </Books>
  </Authors>
</BooksDS>
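
The synchronization works in the other direction as well - a change made through the XML view shows up in the DataSet. A minimal sketch, assuming the xmlDoc and newDS instances from above (the XPath follows the element names generated from the table and column names):

XmlNode titleNode = xmlDoc.SelectSingleNode("//Books[BookID=2]/BookTitle");
if (titleNode != null)
{
    //editing the XML node is expected to update the underlying DataRow as well
    titleNode.InnerText = "Moby-Dick";
}

//prints "Moby-Dick" - the relational view sees the change made through the XML view
Console.WriteLine(newDS.Tables["Books"].Rows[1]["BookTitle"]);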


You can find the detailed info about this powerful object here:
http://msdn.microsoft.com/en-us/library/1t4362sd.aspx
http://msdn.microsoft.com/en-us/library/system.xml.xmldatadocument.aspx

That's it,
Mark.

Saturday, November 28, 2009

Partial Methods

Partial classes were a great feature added in .NET Framework 2.0.
Mainly, this feature allows us to add custom code when working with auto-generated code, e.g. after adding a new form, DataSet, web service etc.

Microsoft has extended this feature and introduced partial methods in C# 3.0.
With a partial method we can define a method signature in one part of a partial class and its implementation in another part of the same type.
It enables class creators to provide method hooks that developers may decide to implement, very similar to the provider-subscriber relationship of events.

Example:

public partial class Department
{
//partial method signature
partial void OnCreated();

public Department()
{
OnCreated();
}
}

//could be placed in separate file.
public partial class Department
{
partial void OnCreated()
{
Console.WriteLine("OnCreated");
}
}
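
One detail worth adding: if no implementing part is ever provided, the compiler simply removes the partial method and every call to it, so the hook costs nothing at runtime. A small sketch illustrating this (hypothetical class, same shape as the example above):

public partial class Order
{
    //declared but never implemented anywhere
    partial void OnValidate();

    public Order()
    {
        //this call is removed entirely at compile time - no exception, no runtime cost
        OnValidate();
    }
}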


A partial method must comply with the following rules:

1. Signatures in both parts of the partial type must match.

2. The method must return void.

3. No access modifiers or attributes are allowed. Partial methods are implicitly private.

You can find additional info about partial methods here:

http://msdn.microsoft.com/en-us/library/6b0scde8.aspx
http://geekswithblogs.net/sdorman/archive/2007/12/24/c-3.0---partial-methods.aspx

Thursday, November 26, 2009

Handling Errors in SQL Server Transactions

I've decided to write this post because of a misunderstanding I had about the default error handling when writing T-SQL statements enclosed within a single transaction.

Generally, we put our T-SQL statements in a transaction in order to perform all tasks as an atomic unit of work.

For instance:


BEGIN TRANSACTION
INSERT INTO t2 VALUES (1);
INSERT INTO t2 VALUES (2);
INSERT INTO t2 VALUES (3);
COMMIT TRANSACTION;


When we look at this block, we expect that if some statement raises a run-time error, the entire block is aborted - but that's not what actually happens.

By default, SQL Server aborts the erroneous statement only, and normally completes the rest of the statements.

I'm pretty sure that was not our intention :)

We have several solutions to deal with this situation:

1. Use try-catch block introduced in SQL Server 2005


BEGIN TRANSACTION
BEGIN TRY

INSERT INTO t2 VALUES (1);
INSERT INTO t2 VALUES (2);
INSERT INTO t2 VALUES (3);

COMMIT

END TRY
BEGIN CATCH
ROLLBACK
END CATCH


2. Check @@ERROR variable after each statement and commit only when the value of @@ERROR is zero.

3. Turn on XACT_ABORT option - actually the most convenient way to achieve the desired behaviour:


SET XACT_ABORT ON


This option simply tells SQL Server to terminate and roll back the current transaction when a run-time error occurs.

You can find the detailed description of "SET XACT_ABORT" command here.

That's it...

Mark.

Monday, November 23, 2009

Handling Data Concurrency Using ADO.NET

I encountered this article while studying the concurrency mechanism implemented in SQL Server 2005.

I found it nice because it discusses various concurrency aspects of using the ADO.NET disconnected model.

http://msdn.microsoft.com/en-us/magazine/cc163924.aspx

Enjoy,

Mark.

Wednesday, November 4, 2009

Cross-Tab in Crystal reports

Introduction

Cross-tabs are special objects, designed in a spreadsheet-style format (such as Excel), that you can place in your Crystal Reports. The cross-tab object refers to rows and columns on the grid as groups of data in order to generate summary data. This provides the user with an advanced data-analysis tool supported by an easy to read and use report format.

Before starting with the explanation, here are some general facts:

- You cannot create a cross-tab without a summarized field.

- All columns in a cross-tab must be the same width.

- You can pivot cross-tabs (swap the position of the rows and columns) .

- Cross-tabs don't support RTL.

To illustrate the need for such an object, consider the following example:
A marketing analyst wants to generate a report that will show how many products (in units) were sold in each US state and what the total cost was.

The simple (but not easy) way is to generate a report that is grouped first by US state and then by product type, as appears in Table 1.



It should be noted that this kind of report may make it difficult for the analyst to compare certain totals. For instance, comparing the total income from selling Mountain bikes with the total income from selling Kids bikes.

A much simpler view of the totals can be achieved by filling a grid using a cross-tab object, as appears in Table 2.


It is easy to see that the display in Table 2 is much easier to read and use; in fact, the comparison between the total income from selling Mountain bikes and Kids bikes is immediate ($9,850 vs. $15,360).

Cross-Tab object wizard

The wizard can be divided into two main sections - a Data section and a Design section. The data section is handled under the tab named "Cross-Tab" and the design section is handled in the tabs named "Style" and "Customize Style".

The "Cross-Tab" tab is used to define the database fields or formulas that make up the rows and columns of the cross-tab. The "Style tab" lets you choose a predefined formatting style for the grid on the cross-tab. And, the "Customize Style" tab displays a large number of custom formatting options to precisely control the appearance of the cross-tab.

How do we choose the data to be displayed in the cross-tab?

The "Cross-Tab" tab contains several data cubes:

- The "Available Fields" cube displays a list of the available report fields (for display).

- The "Columns" cube displays the list of fields that should be presented as columns.

- The "Rows" cube displays the list of fields that should be presented as rows.

- The "Summarized Fields" cube displays the fields to be summarized in each cell.

The summarized fields in the previous example were "Total Income" and "Total sold units". It should be noted that a summarized field can also be placed in the row total or column total of the cross-tab object.

The fields can be chosen by dragging & dropping a field from one cube to another or by using the arrows. The summary type of each summarized field can be changed by clicking the "Change Summary" button.

What should the data source of the cross-tab report look like?

In order to display the data properly on the cross-tab report, each cell of the cross-tab object should be represented by a different row. The general structure of each row in the data source looks like:

For example, consider the data source for the previous example:

This structure of the data source allows the cross-tab object to build rows and columns dynamically, based only on the data source rows.
The cross-tab object searches for a set of distinct row values and distinct column values and, based on those sets, builds the relevant table in the report.

Example of a dynamic cross-tab report

An international shipping company delivers commodities between countries every day. For control-management purposes, the company wants to generate a monthly report that will display the number of deliveries made each month on each route. A route is a path from a specific country to a specific country.

1. The data source for this report is in the form:

2. The Cross-Tab object wizard:

- Rows cube: "Source Country".

- Columns cube: "Target Country".

- Summarized Fields cube: "Sum of Total Monthly Deliveries".

3. The Cross-Tab object design:

Summary

The cross-tab object is a handy tool that displays data in a spreadsheet format and allows us to generate summary data for further analysis.

I hope that this post will help you get started with this powerful Crystal Reports feature.

Ehud

Tuesday, November 3, 2009

Flags Attribute

Introduction

The Flags attribute is a useful C# feature that allows treating enumeration members as bit fields, and therefore combining multiple options within a single enum variable by using the bitwise OR operation.

Example

We simply decorate our enum with the Flags attribute and assign each member a number that follows a base-2 sequence, like 1, 2, 4, 8 and so on:


[Flags]
public enum MoneyPart
{

A = 1,
B = 2,
C = 4
}


Combine the appropriate options:


MoneyPart moneyPartVar = MoneyPart.A | MoneyPart.B;


Now we can add a method that receives MoneyPart variable and checks its value by using bitwise AND operation:


private decimal CalculatePolicyRedemption(MoneyPart moneyParts)
{
decimal total = 0M;
if ((moneyParts & MoneyPart.A) != 0)
{
total += 100;
}
if ((moneyParts & MoneyPart.B) != 0)
{
total += 200;
}
if ((moneyParts & MoneyPart.C) != 0)
{
total += 300;
}

return total;
}
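
A short usage sketch (my own illustrative values) tying the pieces above together - note that [Flags] also gives a readable ToString output:

MoneyPart parts = MoneyPart.A | MoneyPart.B;
Console.WriteLine(parts); //prints "A, B" thanks to [Flags]
Console.WriteLine(CalculatePolicyRedemption(parts)); //prints 300 (100 + 200)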


Isn't it a handy feature? :)

Mark.

Saturday, October 31, 2009

HashSet Collection

Introduction

I believe that many of us find ourselves writing code that checks whether an object already exists in a collection before we add it. The purpose of this check is creating a collection which holds unique objects - or simply a mathematical set.

HashSet Collection

A HashSet is a collection that contains no duplicate elements, with no particular order. It was introduced in .NET framework 3.5 and provides high-performance set operations.

Since this object is modeled as a mathematical set, it supports common set operations like union and intersection.

A HashSet has an Add method which returns a boolean result indicating whether the element was added to the collection or not (in case it already exists).

HashSet Usage Example

HashSet<int> set1 = new HashSet<int>();
set1.Add(1);
set1.Add(2);
set1.Add(3);
set1.Add(1); //won't be added

HashSet<int> set2 = new HashSet<int>();
set2.Add(3);
set2.Add(5);
set2.Add(6);

//Produces a united set by using the Union method.
HashSet<int> unionSet = new HashSet<int>(set1.Union(set2));
//UnionWith - modifies set1 itself.
set1.UnionWith(set2);

//Produces an intersected set by using the Intersect method.
HashSet<int> interectSet = new HashSet<int>(set1.Intersect(set2));
//IntersectWith - modifies set1 itself.
set1.IntersectWith(set2);
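
And the Add return value mentioned above, in a quick illustrative one-liner (continuing with set1 from the example):

bool added = set1.Add(7); //true - 7 was not in the set
bool addedAgain = set1.Add(7); //false - 7 is already there, the set is unchanged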

You can see the complete HashSet documentation here.

Mark.

Wednesday, October 28, 2009

12 Steps to Better Code (by Joel Spolsky)

According to Joel Spolsky, we can ask ourselves a couple of simple questions in order to decide how good our team is:

1. Do you use source control?
2. Can you make a build in one step?
3. Do you make daily builds?
4. Do you have a bug database?
5. Do you fix bugs before writing new code?
6. Do you have an up-to-date schedule?
7. Do you have a spec?
8. Do programmers have quiet working conditions?
9. Do you use the best tools money can buy?
10. Do you have testers?
11. Do new candidates write code during their interview?
12. Do you do hallway usability testing?

The nice thing about this test is that it's easy to get a quick yes or no to each question.

Give your team 1 point for each "yes" answer.

A score of 12 is perfect, 11 is tolerable, but 10 or lower and you've got serious problems. The truth is that most software organizations are running with a score of 2 or 3, and they need serious help, because companies like Microsoft run at 12 full-time.

Of course, these are not the only factors that determine success or failure: in particular, if you have a great software team working on a product that nobody wants, well, people aren't going to want it. And it's possible to imagine a team of "gunslingers" that doesn't do any of this stuff that still manages to produce incredible software that changes the world. But, all else being equal, if you get these 12 things right, you'll have a disciplined team that can consistently deliver.

You can read the complete Joel's post here.

Mark.

Wednesday, October 21, 2009

New features in C# 4.0

Introduction

As we all know, Microsoft is about to release the new version of Visual Studio and the .NET Framework - Visual Studio 2010 with .NET Framework 4.0.
If you're interested in getting an early look at the upcoming C# 4.0 features, this post is for you.
Generally, there are four main additions introduced in the new version:

1. Dynamic Lookup

2. Named and Optional Arguments

3. Improved COM Interoperability

4. Covariance and Contravariance


We will take a look at the first two features: Dynamic Lookup and Named and Optional Arguments.

Dynamic Lookup

Dynamic lookup allows you to write method, operator and indexer calls, property and field accesses, and even object invocations that bypass C# static type checking and instead get resolved at runtime.
With dynamic lookup, when you have an object in your hand you do not need to worry about whether it comes from COM, IronPython, the HTML DOM or reflection.
A dynamic object is assumed at compile time to support any operation, and only at runtime will you get an error if it wasn't so.

The dynamic type

C# 4.0 introduces a new static type called dynamic.

When you have an object of type dynamic you can “do things to it” that are resolved only at runtime:

dynamic d = GetDynamicObject(…);

d.M(7);
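
To illustrate the "error only at runtime" point, here is a tiny hedged sketch (the string is just a stand-in for whatever GetDynamicObject would return):

dynamic d = "hello";
int len = d.Length;   // compiles and works - resolved against string at runtime
d.NoSuchMethod();     // also compiles, but throws RuntimeBinderException when executed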

Dynamic operations

Not only method calls, but also field and property accesses, indexer and operator calls and even delegate invocations can be dispatched dynamically:

dynamic d = GetDynamicObject(…);
d.M(7); // calling methods
d.f = d.P; // getting and setting fields and properties
d["one"] = d["two"]; // getting and setting through indexers
int i = d + 3; // calling operators
string s = d(5,7); // invoking as a delegate

Named and Optional Arguments


Named and optional parameters are really two distinct features, but they are often useful together. Optional parameters allow you to omit arguments in member invocations, whereas named arguments are a way to provide an argument using the name of the corresponding parameter instead of relying on its position in the parameter list.

Optional parameters

A parameter is declared optional simply by providing a default value for it:
public void M(int x, int y = 5, int z = 7);
Here y and z are optional parameters and can be omitted in calls:
M(1, 2, 3); // ordinary call of M
M(1, 2); // omitting z – equivalent to M(1, 2, 7)
M(1); // omitting both y and z – equivalent to M(1, 5, 7)

Named arguments

C# 4.0 does not permit you to omit arguments between commas as in M(1,,3). This could lead to highly unreadable comma-counting code. Instead any argument can be passed by name. Thus if you want to omit only y from a call of M you can write:
M(1, z: 3); // passing z by name
or
M(x: 1, z: 3); // passing both x and z by name
or even
M(z: 3, x: 1); // reversing the order of arguments

Overload resolution

Named and optional arguments affect overload resolution, but the changes are relatively simple:
A signature is applicable if all its parameters are either optional or have exactly one corresponding argument (by name or position) in the call which is convertible to the parameter type.

You can find the detailed description of the rest of the features in this document.


Mark.

Tuesday, October 20, 2009

Automatic Setting of MaxLength Property for Bound Controls

The following example demonstrates smart setting of a TextBox's MaxLength property, according to the length of the bound data source field.
It might be useful in order to prevent exceptions caused by entering text which exceeds the specified maximum field size in your data source.
All you should do is subscribe to the DataBindings.CollectionChanged event of your control and put in this simple code:

private void DataBindings_CollectionChanged(object sender, CollectionChangeEventArgs e)
{
if (e != null && e.Action == CollectionChangeAction.Add)
{
int bindedFieldMaxLength = this.MaxLength;
Binding bindingObj = (e.Element as Binding);
if (bindingObj != null)
{
if (bindingObj.DataSource != null &&
bindingObj.BindingMemberInfo != null)
{
DataTable sourceTable = (bindingObj.DataSource as DataTable);
DataView sourceView = (bindingObj.DataSource as DataView);
BindingMemberInfo bindingMemberInfoObj =
bindingObj.BindingMemberInfo;
if ((sourceTable != null || sourceView != null) &&
bindingMemberInfoObj != null)
{
string bindedFieldName = bindingMemberInfoObj.BindingField;
if (!string.IsNullOrEmpty(bindedFieldName))
{
if ( sourceTable != null &&
sourceTable.Columns[bindedFieldName].MaxLength > 0)
{
bindedFieldMaxLength =
sourceTable.Columns[bindedFieldName].MaxLength;
}
if(sourceView != null &&
sourceView.Table.Columns[bindedFieldName].MaxLength > 0)
{
bindedFieldMaxLength =
sourceView.Table.Columns[bindedFieldName].MaxLength;
}
if (this.MaxLength != bindedFieldMaxLength)
{
this.MaxLength = bindedFieldMaxLength;
}
}
}
}
}
}
}
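
One way to wire the handler up - a hedged sketch, written here as a TextBox-derived control since the handler above uses this.MaxLength (the class name is mine):

public class BoundTextBox : TextBox
{
    public BoundTextBox()
    {
        // the handler shown above lives in this class, so this.MaxLength refers to this text box
        this.DataBindings.CollectionChanged +=
            new CollectionChangeEventHandler(DataBindings_CollectionChanged);
    }

    // ... DataBindings_CollectionChanged exactly as shown above ...
}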


That's it...

Monday, October 19, 2009

Assembly binding error

Recently I had an error after changing the Enterprise Library version (to 4.1) in a project I was developing.

The error was:

System.IO.FileLoadException was unhandled by user code
Message="Could not load file or assembly 'Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference.(Exception from HRESULT: 0x80131040)"

I had a similar error in the past which I somehow solved, and I remembered it was related to the location of the configuration file.

Anyhow...

After reading the error description above, I changed the version number in the configuration file thinking that was the problem - but couldn't get past this error.

Searching for a solution I found a nice & simple tool from Microsoft that clearly tells you what the problem is (if only all problems were solved so easily...).
The tool's name is the Assembly Binding Log Viewer (Fuslogvw.exe).

Running the application while this tool is in the background will log every assembly binding (you can choose between logging all bindings and/or only exceptions).




Double clicking the 'bad' binding will open up a detailed log looking something like this:

LOG: This bind starts in default load context.
LOG: Using application configuration file: D:\...\DiegJukeboxManagerLoader.vshost.exe.Config
LOG: Using machine configuration file from C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\config\machine.config.
LOG: Post-policy reference: Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
LOG: GAC Lookup was unsuccessful.
LOG: Attempting download of new URL file:///D:/../Microsoft.Practices.EnterpriseLibrary.Logging.DLL.
LOG: Assembly download was successful. Attempting setup of file: D:\...\Microsoft.Practices.EnterpriseLibrary.Logging.dll
LOG: Entering run-from-source setup phase.
LOG: Assembly Name is: Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=null
WRN: Comparing the assembly name resulted in the mismatch: PUBLIC KEY TOKEN
ERR: The assembly reference did not match the assembly definition found.
ERR: Failed to complete setup of assembly (hr = 0x80131040). Probing terminated.

As you can see, in plain English, the error was that the DLL was missing the publicKeyToken (since I downloaded the sources of EL and the installation program compiled it...) - so the binding failed, since the configuration file contained Microsoft's public key token (which remained there from the previous version...).

Conclusion: next time you encounter any kind of assembly binding error - don't work too hard, download this tool and use it!!

Diego.

Saturday, October 3, 2009

MSMQ & WCF

A very interesting sample I ran into showing how to use MSMQ with WCF.

Going through the 'right' way to implement this, I encountered many samples where the queuing & messaging stuff was embedded in the business logic - I couldn't understand how this meets WCF's goal of separating transport from business logic.

Finally I encountered this article.

The interesting part of the sample was the elegant way the writer shows how to narrow the MSMQ code down to the binding code (configuration), leaving the code nice and clean and allowing you to change transport without touching the business logic - this may sound easy when moving from TCP to HTTP or similar, but since MSMQ is message oriented it is more of a challenge.

For the full story, including a short & sweet video - see:
SOA'izing MSMQ with WCF (and Why It's Worth It)

Diego.

Monday, August 31, 2009

CAST and CONVERT (Transact-SQL) - Truncating and Rounding Results

Here's some "puzzle" for you dudes .
Before you opening SQL Management Studio and pasting
from clipboard , answer -
what is the result for lines below?


DECLARE @D INT
SET @D = 12345

SELECT CAST(@D AS VARCHAR(2))

Thursday, August 20, 2009

Using CTE


This post is part of Mohd Nizamuddin's very interesting article
Sending multiple rows to the Database from an Application
Albert

Table valued function using Number List



First I will explain the pieces of code, which form the building blocks of the final table valued function.
We need to create a number list using the CTE as below
      ;WITH
            L0 AS(SELECT 1 AS c UNION ALL SELECT 1),
            L1 AS(SELECT 1 AS c FROM L0 AS A, L0 AS B),
            L2 AS(SELECT 1 AS c FROM L1 AS A, L1 AS B),
            L3 AS(SELECT 1 AS c FROM L2 AS A, L2 AS B),
            L4 AS(SELECT 1 AS c FROM L3 AS A, L3 AS B),
            Numbers AS(SELECT ROW_NUMBER() OVER(ORDER BY c) AS Number FROM L4)
      SELECT * FROM Numbers

This CTE is creating the list of numbers from 1 to POWER(POWER(POWER(POWER(2, 2), 2), 2), 2), i.e. until 65536.
Now Consider the below code snippet, where @list and @delim variables have been assigned.

DECLARE @list NVARCHAR(MAX), @delim NCHAR(1)
SELECT @list = 'aaa,bbbbb,cccc,dddd', @delim = ','

;WITH
      L0 AS(SELECT 1 AS c UNION ALL SELECT 1),
      L1 AS(SELECT 1 AS c FROM L0 AS A, L0 AS B),
      L2 AS(SELECT 1 AS c FROM L1 AS A, L1 AS B),
      L3 AS(SELECT 1 AS c FROM L2 AS A, L2 AS B),
      L4 AS(SELECT 1 AS c FROM L3 AS A, L3 AS B),
      Numbers AS(SELECT ROW_NUMBER() OVER(ORDER BY c) AS Number FROM L4)
SELECT
      @list List,
      SUBSTRING(@list, Number, CHARINDEX(@delim, @list + @delim, Number) - Number) AS Value,
      Number AS StartingFrom,
      CHARINDEX(@delim, @list + @delim, Number) AS DelimeterPosition
FROM Numbers
WHERE Number <= CONVERT(INT, LEN(@list))
  AND SUBSTRING(@delim + @list, Number, 1) = @delim

The SUBSTRING statement, cuts characters from @list starting from character position (1, 5, 11 and 16).
SUBSTRING(@list, Number, CHARINDEX(@delim, @list + @delim, Number) - Number)

The number of characters to be cut is decided by CHARINDEX which will
return 4, 10, 15, 20 in each row, where it finds the delimiter character.
CHARINDEX(@delim, @list + @delim, Number) - Number
The above SELECT only works until the number of characters present in the
@list variable due to the condition
Number <= CONVERT(INT, LEN(@list))

The duplicate values are filtered out from the output list by the "WHERE" condition created
using the SUBSTRING function which will only return a value when it finds the delimiter
SUBSTRING(@delim + @list, Number, 1) = @delim

The output of the code snippet above would be:

List                 Value  Starting From  Delimiter Position
aaa,bbbbb,cccc,dddd  aaa    1              4
aaa,bbbbb,cccc,dddd  bbbbb  5              10
aaa,bbbbb,cccc,dddd  cccc   11             15
aaa,bbbbb,cccc,dddd  dddd   16             20


Table valued function using Numbered List: Implementation
Now combining all the above explained pieces of SQL, we create our
table valued function which will parse the string and return a table having two columns viz. ID and Data.
CREATE FUNCTION [dbo].[TableFormDelimetedString]
(
      @param      NVARCHAR(MAX),
      @delimeter  NCHAR(1)
)
RETURNS @tmp TABLE
(
      ID      INT IDENTITY(1, 1),
      Data    VARCHAR(MAX)
)
BEGIN

      ;WITH
            L0 AS(SELECT 1 AS c UNION ALL SELECT 1),
            L1 AS(SELECT 1 AS c FROM L0 AS A, L0 AS B),
            L2 AS(SELECT 1 AS c FROM L1 AS A, L1 AS B),
            L3 AS(SELECT 1 AS c FROM L2 AS A, L2 AS B),
            L4 AS(SELECT 1 AS c FROM L3 AS A, L3 AS B),
            Numbers AS(SELECT ROW_NUMBER() OVER(ORDER BY c) AS Number FROM L4)
      INSERT INTO @tmp (Data)
      SELECT
            LTRIM(RTRIM(CONVERT(NVARCHAR(4000),
                  SUBSTRING(@param, Number,
                  CHARINDEX(@delimeter, @param + @delimeter, Number) - Number)
            ))) AS Value
      FROM Numbers
      WHERE Number <= CONVERT(INT, LEN(@param))
        AND SUBSTRING(@delimeter + @param, Number, 1) = @delimeter
      RETURN
END
Table valued function using Numbered List: Usage
So if we now invoke the above function like
SELECT * FROM [TableFormDelimetedString]('Andy:Roger:Thomas:Rob:Victor',':')

We will obtain the following result set:

ID  Data
1   Andy
2   Roger
3   Thomas
4   Rob
5   Victor

Table valued function using recursive CTE



Here again I will first explain the pieces of code, which form the building blocks of the final table valued function.
As we know, in a recursive CTE, we have one anchor part and one recursive part.
But if we create a CTE having only the anchor part, it would look something like
DECLARE @list NVARCHAR(MAX), @delim NCHAR(1)
SELECT @list = 'aaa,bbbbb,cccc,dddd', @delim = ','

;WITH CTETable (start, stop) AS
(
      SELECT start = CONVERT(bigint, 1), stop = CHARINDEX(@delim, @list + @delim, 1)
)
SELECT @list List,
       LTRIM(RTRIM(SUBSTRING(@list, start,
            CASE
            WHEN stop > 0
            THEN stop - start
            ELSE 0
            END
       ))) AS Data,
       start AS StartingFrom,
       stop AS DelimiterPosition
FROM CTETable
The output of the SQL above will be like:

List                 Value  Starting From  Delimiter Position
aaa,bbbbb,cccc,dddd  aaa    1              4



Now by adding a recursive member to the above CTE, which iterates over the stop variable, the SQL looks like
DECLARE @list NVARCHAR(MAX), @delim NCHAR(1)

SELECT @list = 'aaa,bbbbb,cccc,dddd', @delim = ','
;WITH CTETable (start, stop) AS
(
      SELECT start = CONVERT(bigint, 1), stop = CHARINDEX(@delim, @list + @delim, 1)
      UNION ALL       -- added for recursive part of CTE
      SELECT start = stop + 1, stop = CHARINDEX(@delim, @list + @delim, stop + 1)
      FROM CTETable WHERE stop > 0 -- added for recursive part of CTE
)
SELECT @list List,
       LTRIM(RTRIM(SUBSTRING(@list, start,
            CASE
            WHEN stop > 0
            THEN stop - start
            ELSE 0
            END
       ))) AS Data,
       start AS StartingFrom,
       stop AS DelimiterPosition
FROM CTETable
WHERE stop > 0

And gives the following result set:

List                 Value  Starting From  Delimiter Position
aaa,bbbbb,cccc,dddd  aaa    1              4
aaa,bbbbb,cccc,dddd  bbbbb  5              10
aaa,bbbbb,cccc,dddd  cccc   11             15
aaa,bbbbb,cccc,dddd  dddd   16             20


Table valued function using recursive CTE: Implementation
Finally we create a table valued function from the above code blocks, which looks like
      CREATE FUNCTION [dbo].[TableFormDelimetedStringWithoutNumberList]
      (@list NVARCHAR(MAX),
      @delim  NCHAR(1)   = ','
      )
       RETURNS @tmp TABLE
       (
        ID  INT IDENTITY   (1, 1),
        Data Varchar(MAX)
       )
      BEGIN
       ;WITH CTETable (start, stop)
        AS
        (
         SELECT start = CONVERT(bigint, 1),
           stop = CHARINDEX(@delim, @list + @delim)
         UNION ALL   -- added for recursive part of CTE
         SELECT start = stop + 1,
         stop = CHARINDEX(@delim, @list + @delim, stop + 1) -- added for recursive part of CTE
         FROM CTETable
         WHERE  stop > 0
        )
       INSERT INTO @tmp (Data)
        SELECT LTRIM(RTRIM(SUBSTRING(@list,
             start,
            CASE
            WHEN stop > 0
            THEN
            stop - start
            ELSE
            0
            END))) AS Data
            FROM CTETable
        WHERE stop > 0
     RETURN
    END
Table valued function using recursive CTE: Usage
So if we now invoke the above function like
     SELECT * FROM [TableFormDelimetedStringWithoutNumberList]('Andy:Roger:Thomas:Rob:Victor',':')
We will obtain the following result set:

ID  Data
1   Andy
2   Roger
3   Thomas
4   Rob
5   Victor
The reason I like these two implementations is that the looping is handled by the SQL Server database engine itself, which would definitely be more efficient than explicit SQL looping code written by a developer.
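
Since the original article is about sending multiple rows from an application, here is a hedged C# sketch of how the first function might be consumed from ADO.NET (the connection string and database are assumptions of mine):

using System;
using System.Data;
using System.Data.SqlClient;

class DelimitedListSample
{
    static void Main()
    {
        //assumed connection string - adjust to your environment
        string connectionString = "Data Source=.;Initial Catalog=TestDB;Integrated Security=True";
        string names = string.Join(":", new[] { "Andy", "Roger", "Thomas", "Rob", "Victor" });

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT ID, Data FROM dbo.TableFormDelimetedString(@list, ':')", connection))
        {
            //one round trip carries the whole list as a single NVARCHAR(MAX) parameter
            command.Parameters.Add("@list", SqlDbType.NVarChar, -1).Value = names;
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0} - {1}", reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }
}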

Sunday, August 9, 2009

The Open Closed Principle

Introduction

The open closed principle of object oriented design states:
"Software entities like classes, modules and functions should be open for extension but closed for modification."
The Open Closed Principle encourages software developers to design and write code in such a fashion that adding new functionality involves minimal changes to existing code.
Most changes will be handled as new methods and new classes.
Designs following this principle result in resilient code which does not break when new functionality is added.

The Open Closed Principle Violation Example

The code below shows a resource allocator. The resource allocator currently handles timeslot and spaceslot resource allocation:



public class ResourceAllocator
{
public enum ResourceType
{
Time,
Space
}
public int Allocate(ResourceType resourceType)
{
int resourceId = default(int);
switch (resourceType)
{
case ResourceType.Time:
resourceId = FindFreeTimeSlot();
MakeTimeSlotBusy(resourceId);
break;
case ResourceType.Space:
resourceId = FindFreeSpaceSlot();
MakeSpaceSlotBusy(resourceId);
break;
default:
throw new InvalidOperationException("Attempted to allocate invalid resource");
}
return resourceId;
}
}



It is clear from the code above that it does not follow the Open Closed Principle.
The code of the resource allocator will have to be modified for every new resource type that needs to be supported.

This has several disadvantages:

  • The resource allocator code needs to be unit tested whenever a new resource type is added.
  • Adding a new resource type introduces considerable risk in the design as almost all aspects of resource allocation have to be modified.
  • A developer adding a new resource type has to understand the inner workings of the resource allocator.

Modified Code to Support Open Closed Principle

The following code presents a new design where the resource allocator is completely transparent to the actual resource types being supported.
This is accomplished by adding a new abstraction, resource pool.
The resource allocator directly interacts with the abstract class resource pool:




public enum ResourceType
{
Time,
Space
}
public class ResourceAllocator
{
Dictionary<ResourceType, ResourcePool> resourcePools = new Dictionary<ResourceType, ResourcePool>();

public void AddResourcePool(ResourceType resourceType, ResourcePool pool)
{
if (!resourcePools.ContainsKey(resourceType))
{
resourcePools.Add(resourceType, pool);
}
}
public int Allocate(ResourceType resourceType)
{
int resourceId = default(int);
if (resourcePools.ContainsKey(resourceType))
{
resourceId = resourcePools[resourceType].FindFree();
resourcePools[resourceType].MarkBusy(resourceId);
}
else
{
throw new InvalidOperationException("Attempted to allocate invalid resource");
}
return resourceId;
}
public int Free(ResourceType resourceType, int resourceId)
{
if (resourcePools.ContainsKey(resourceType))
{
return resourcePools[resourceType].Free(resourceId);
}
else
{
throw new InvalidOperationException("Attempted to free invalid resource\n");
}
}
}

public abstract class ResourcePool
{
public abstract int FindFree();
public abstract void MarkBusy(int resourceId);
public abstract int Free(int resourceId);
}

public class TimeSlotPool : ResourcePool
{
public override int FindFree()
{ /*finds free time slot */ }
public override void MarkBusy(int resourceId)
{ /*marks slot as busy */ }
public override int Free(int resourceId)
{ /*releases slot */}
}

public class SpaceSlotPool : ResourcePool
{
public override int FindFree()
{ /*finds free space slot */ }
public override void MarkBusy(int resourceId)
{ /*marks slot as busy */ }
public override int Free(int resourceId)
{ /*releases slot */}
}
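
A short usage sketch of the new design (hypothetical wiring code, assuming the pool classes above are fully implemented):

ResourceAllocator allocator = new ResourceAllocator();
allocator.AddResourcePool(ResourceType.Time, new TimeSlotPool());
allocator.AddResourcePool(ResourceType.Space, new SpaceSlotPool());

// a new resource type only requires a new ResourcePool subclass plus one AddResourcePool call
int timeSlotId = allocator.Allocate(ResourceType.Time);
allocator.Free(ResourceType.Time, timeSlotId);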

This has several advantages:

  • The resource allocator code need not be unit tested whenever a new resource type is added.
  • Adding a new resource type is fairly low risk as adding a new resource type does not involve changes to the resource allocator.
  • A developer adding a new resource type does not need to understand the inner workings of the resource allocator.



Thursday, August 6, 2009

The Unit Of Work Pattern

When you're pulling data in and out of a database, it's important to keep track of what you've changed; otherwise, that data won't be written back into the database.

One of the most common design patterns that helps form the unit responsible for data persistence is the Unit of Work.

A Unit of Work keeps track of everything you do during a business transaction that can affect the database. When you're done, it figures out everything that needs to be done to alter the database as a result of your work.

The key thing about Unit of Work is that when it comes time to commit, the Unit of Work decides what to do. It carries out the inserts, updates, and deletes in the right order.
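
As a rough illustration only (these names are mine, not taken from the article linked below), a Unit of Work contract often boils down to something like this:

public interface IUnitOfWork
{
    void RegisterNew(object entity);      // insert on commit
    void RegisterDirty(object entity);    // update on commit
    void RegisterDeleted(object entity);  // delete on commit
    void Commit();                        // plays the changes against the database in the right order
}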

The article at the following link discusses various aspects of this pattern and examines the issues around persistence ignorance.

http://msdn.microsoft.com/en-us/magazine/dd882510.aspx#id0420003

Enjoy...

Monday, June 1, 2009

Using SoapExtension to manage sessions

Intro

When scalability is a main issue in your application, it is common to design and build a stateless solution.
A stateless design allows our application to be duplicated to several servers as the number of users grows, load balancing can be taken to the max, plus it saves the resources "wasted" on session management.
The downside is that in most applications having a session identifier or other session-level parameters is really convenient (sometimes even necessary).

So if scalability is part of your design - distributed session management is probably something you are considering...
But if you don't really need a big session management solution that will cost you in performance (no matter what..) and you only want 2-3 parameters that will help you identify some user preference without all the fuss, consider - a SOAP extension.

Implementation

We start implementing by building a class that represents these parameters we want to pass constantly from client to server.
Surprisingly this class will inherit from SoapHeader.


[XmlRoot("Keystone", Namespace = "urn:com-sample-dpe:soapextension")]
public class SessionSoapHeader : SoapHeader
{
private string _customer;
private string _version;

public string Version
{
get { return _version; }
set { _version = value; }
}

public string Customer
{
get { return _customer; }
set { _customer = value; }
}
}


Next we'll create an attribute that allows our proxy to use this extension.


// Create a SoapExtensionAttribute for the SOAP Extension that can be
// applied to an XML Web service method.
[AttributeUsage(AttributeTargets.All)]
public class SessionSoapHeaderExtensionAttribute : SoapExtensionAttribute
{
public override Type ExtensionType
{
get { return typeof(SessionSoapHeaderExtension); }
}

public override int Priority
{
get
{
return 100;
}
set
{
}
}
}


And now to the actual SOAP extension...


public class SessionSoapHeaderExtension : SoapExtension
{
//static members referenced below (not shown in the original snippet - simplified here)
public static string Customer { get; set; }
public static string Version { get; set; }

public override object GetInitializer(LogicalMethodInfo methodInfo, SoapExtensionAttribute attribute)
{
return null;
}

public override object GetInitializer(Type WebServiceType)
{
return null;
}

public override void Initialize(object initializer)
{
return;
}

public override Stream ChainStream(Stream stream)
{
return stream;
}

public override void ProcessMessage(SoapMessage message)
{
switch (message.Stage)
{
case SoapMessageStage.BeforeSerialize:
//Add the CustomSoapHeader to outgoing client requests
if (message is SoapClientMessage)
{
AddHeader(message);
}
break;

case SoapMessageStage.AfterSerialize:
break;

case SoapMessageStage.BeforeDeserialize:
break;

case SoapMessageStage.AfterDeserialize:
if (message.Headers.Count > 0)
{
XmlElement headerXml = ((((System.Web.Services.Protocols.SoapUnknownHeader)
(message.Headers[0]))).Element as XmlElement);
if (headerXml != null)
{
if (headerXml.ChildNodes.Count > 0)
{
foreach (XmlNode headerItem in headerXml.ChildNodes)
{
if (headerItem.Name.ToLower().IndexOf("customer") != -1)
{
HttpContext.Current.Items.Add(headerItem.Name, headerItem.InnerText);
SessionSoapHeaderExtension.Customer = HttpContext.Current.Items[headerItem.Name].ToString();
}
if (headerItem.Name.ToLower().IndexOf("version") != -1)
{
HttpContext.Current.Items.Add(headerItem.Name, headerItem.InnerText);
SessionSoapHeaderExtension.Version = HttpContext.Current.Items[headerItem.Name].ToString();
}
}
}
}
}
if (message is SoapClientMessage)
{
//could be useful
}
break;
}
}

private void AddHeader(SoapMessage message)
{
SessionSoapHeader header = new SessionSoapHeader();
header.Customer = (!string.IsNullOrEmpty(Customer) ? Customer : string.Empty);
header.Version = (!string.IsNullOrEmpty(Version) ? Version : string.Empty);
header.MustUnderstand = false;
message.Headers.Add(header);
}
}


Final step, configure both server & client to use this extension.

In server:


<webServices>
<soapExtensionTypes>
<add type="myNS.SessionSoapHeaderExtension, myNS"/>
</soapExtensionTypes>
</webServices>


On the client you can configure it similarly (using a strong name & registering the DLL in the GAC) or add the attribute to the Reference.cs of the required proxy.

client config:


<system.web>
<webServices>
<soapExtensionTypes>
<add type="myNS.SessionSoapHeaderExtension, myNS,Version=1.0,Culture=neutral, PublicKeyToken=f54c79bbbb6454bc" />
</soapExtensionTypes>
</webServices>
</system.web>
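
A hypothetical client-side usage sketch: set the "session" values once, and every call made through the proxy carries them in the SOAP header (SampleService stands for whatever generated proxy you use, and this assumes the extension is registered in the client config as shown above):

SessionSoapHeaderExtension.Customer = "SomeCustomer";
SessionSoapHeaderExtension.Version = "1.0";

SampleService proxy = new SampleService(); // generated web service proxy (name is illustrative)
proxy.DoWork();                            // AddHeader runs in the BeforeSerialize stage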



That's it - this simple implementation will allow you to pass user preferences or identification elegantly in the header and save you from using a heavy session management solution when it's not needed.

Any kind of feedback or questions would be appreciated.

Till next time.
Diego

Sunday, May 10, 2009

Distributed Cache - Simplified

Following the Google search engine or advertising in technology sites and magazines, we can easily find a lot of sophisticated solutions for caching.
I found/know: MemCached, NCache, ScaleOut StateServer,
Shared Cache & even a Microsoft implementation named Velocity.

Sometimes these solutions are the best thing for your project; they have a lot of bells and whistles, they were written by professionals who dedicated a lot of thought to all sorts of cache situations (or states..) & some of them have already proved themselves in production environments - so why reinvent the wheel?
But...
If you think about it... cache is really simple - at least the basics, so why not take full control?

A cache mechanism is one of the basic pieces of infrastructure in every medium-sized & above project, whether a web application or a WinForms application.
The main goal of caching is to save roundtrips: from client to server, from application server to database server & in some cases from web server to application server.
The last one (web server to application server) is mainly used in web sites to save the HTML result of common queries instead of redirecting these again & again to the application.
In this article I will concentrate on the other two.

In every application there are lookup tables (mainly to fill combo-boxes with lists of values), decision tables and other static or semi-static tables. These tables are read again & again whenever a form containing them is opened and/or the application flow needs their values - these roundtrips from client to server & from application server to database are a waste of resources that could be saved by keeping these tables in the client's & application server's memory.

As a first step you need to decide how far you can go with this.

1. Which static tables are commonly used?

2. How big are they? (memory on server & client are not endless...).

3. What kind of access is needed to these tables? (if you search a lot in these tables using joins it won't be very efficient to cache them).

4. If these tables can be updated by the user, how critical is it to refresh them & if so - how often is sufficient?

The answers to these questions are different in every application, but the main guidelines are to cache most (if not all) small static tables that are used in the application & never to cache big tables, tables that are updated often, or tables where you can't 'live' with the fact that you're querying a slightly old snapshot of the real one (I'm talking about a few seconds old..).
You are left with the question of what the size limit of a medium table is & what counts as often updated - as said before, these change from application to application.

So let's start building...

We'll start with the database:

1. sysApplicationServers table: this table will function as a registration table; every application server will 'register' itself here on load and unregister itself when unloaded. Columns: IpAddress, FromDate.

2. CacheItemQueue table: this table will contain the tables that need a refresh.

3. trig[table name]Cache trigger: every dynamic & cached table will have a trigger; this trigger will add a row to CacheItemQueue for every sysApplicationServers row (on insert, update & delete).

For example: we have two application servers registered in sysApplicationServers; when we update the Users table, the result of the trigger is a simple insert with the result:
Table, IpAddress, FromDate
-------------------------------------------
Users, appServer1IP, now
Users, appServer2IP, now

So this simple mechanism will 'let us know' when a table is modified so we can refresh its in memory snapshot (the insert will insert a new row only if there is no existing row for the same table+server combination).

Next we'll write a thread that will sample the CacheItemQueue table at the required interval; this thread will run in an endless loop from application load.
When it identifies a new table to load, it loads it and deletes the row from CacheItemQueue.


...
CacheListenerThread cacheListenerThread = new CacheListenerThread();
thread = new Thread(cacheListenerThread.RunListener);
thread.Start();

while (true)
{
Thread.Sleep(Convert.ToInt32(AppConfigManager.GetAppSettingsValue("CacheRefreshInterval")));
RefreshCache();
}

public void RefreshCache()
{
string ipAddress = BasicUtil.GetLocalIp();
SqlCommand command = new SqlCommand("spCache_AsyncTablesLoader");
DatabaseManager.AddSqlParameter(command, "@ipAddress", ipAddress);

RefreshCacheInnerImpl(command);
}
...


The actual cache can be built using the ASP.NET cache, which has a nice implementation; I chose Microsoft's Enterprise Library to allow the cache to also work under a non-web application server (in my case a Windows service).

Cache manager interface explains itself:


public interface ICacheManager
{
object Get(string key);

bool Add(string key, object value);

bool Contains(string key);

void LoadDataTable(string tableName);
}


LoadDataTable method will allow us to maintain cache tables that are loaded only on first use or reloaded if needed.

I mainly use this infrastructure to hold all sorts of DataTables, but as you can see it's built to contain any object & also to cache items with an expiration date/time.
The Server implementation of cache manager:


..
using Microsoft.Practices.EnterpriseLibrary.Caching;
using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;
..

public class ServerCache : ICacheManager
{
public delegate bool LoadDataTableDelegate(string tableName);

private static ServerCache serverCacheManager;
private CacheManager cacheManager;

private event LoadDataTableDelegate loadDataTableEvent;

private ServerCache()
{
try
{
cacheManager = CacheFactory.GetCacheManager();
}
catch (Exception ex)
{
throw new ApplicationException("Failed to initilize cache manager", ex);
}
}

public static void InitCache(LoadDataTableDelegate loadDataTable)
{
serverCacheManager = new ServerCache();
serverCacheManager.loadDataTableEvent += loadDataTable;
}

/// <summary>
/// get the server cache in a lazy fashion.
/// </summary>
public static ServerCache GetServerCache()
{
if (serverCacheManager == null)
{
string message = "cache was not loaded (should call InitCache)";
throw new ApplicationException(message);
}
return serverCacheManager;
}

/// <summary>
/// get value from the cache by the given key
/// </summary>
public object Get(string key)
{
return cacheManager.GetData(key);
}

/// <summary>
/// check if object with given key exists in cache
/// </summary>
public bool Contains(string key)
{
return cacheManager.Contains(key);
}

/// <summary>
/// add item to cache
/// </summary>
/// <returns>true if the key was overridden</returns>
public bool Add(string key, object value)
{
//if the key already exists - overwrite the value (returns true so the caller knows it was overridden)
bool result = (cacheManager.Contains(key));
cacheManager.Add(key, value);
return result;
}

/// <summary>
/// add item to cache with timeout
/// </summary>
/// <returns>true if the key was overridden</returns>
public bool Add(string key, object value, TimeSpan expirationTime)
{
//check if there is an object which is already cached with the same key
bool result = (cacheManager.Contains(key));
cacheManager.Add(key, value, CacheItemPriority.Normal, null, new SlidingTime(expirationTime));
return result;
}

public void LoadDataTable(string tableName)
{
loadDataTableEvent(tableName);
}
}
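
A hedged usage sketch of the server-side cache (LoadTableFromDatabase is an assumed data-access method, matching the LoadDataTableDelegate signature, that reads the table and adds it to the cache):

// on application load
ServerCache.InitCache(LoadTableFromDatabase);

// anywhere in the server code
ICacheManager cache = ServerCache.GetServerCache();
if (!cache.Contains("Users"))
{
    cache.LoadDataTable("Users"); // fires the registered delegate, which fills the cache
}
DataTable users = (DataTable)cache.Get("Users");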



The client implementation of ICacheManager is even simpler; it holds a static dictionary of objects. The LoadDataTable method can point to the server's gateway delegate, or can be left alone if you download only static tables to the client side.



public class ClientCache : ICacheManager
{
private static ClientCache clientCacheManager;

private static Dictionary<string, object> cacheMap;

private ClientCache()
{
cacheMap = new Dictionary<string, object>();
}

public static ClientCache GetClientCache()
{
if (clientCacheManager == null)
{
clientCacheManager = new ClientCache();
}
return clientCacheManager;
}

public object Get(string key)
{
object result;
cacheMap.TryGetValue(key, out result);
return result;
}

/// <summary>
/// check if object with given key exists in cache
/// </summary>
public bool Contains(string key)
{
return cacheMap.ContainsKey(key);
}

public bool Add(string key, object value)
{
bool overrideKey = cacheMap.ContainsKey(key);
if (overrideKey)
{
lock (cacheMap)
{
cacheMap.Remove(key);
cacheMap.Add(key, value);
}
}
else
{
cacheMap.Add(key, value);
}
return overrideKey;
}

public void LoadDataTable(string tableName)
{
string message = string.Format("table {0} was not loaded to client cache", tableName);
throw new ValidationException(message);
}

}



To allow the cache to be configured (beyond the interval at which we sample the CacheItemQueue) we use a simple XML file that contains the list of tables to be cached.
Every element in this XML contains three attributes (besides the name of the table, of course):
1. loadOnStart: load on application load or on first call.
2. loadToClient: include the table in the response to the client's "getCache" method on client load.
3. refreshOnUpdate: true if a cached table can be updated (to be sure all tables marked "refreshOnUpdate" have a trigger, we have a deployment utility that uses this same XML to automatically create the triggers, and we check that everything matches on application load).

To summarize the main idea:

1. On application load the cache data is retrieved into the application server's memory, the server registers itself to receive updates & starts the thread checking for updates.

2. Each dynamic table in the cache has a trigger that is used to eventually notify the application server about the update and force it to refresh; the refresh can be done for the whole table (usually small tables with a small number of writes) or you can use a timestamp column to identify which rows were updated and selectively refresh the cache.

3. Every client retrieves a snapshot of the static cached tables on load, to save roundtrips to the server.

That's it for now, till next time...

Diego

Sunday, April 26, 2009

Using NLB for low budget load-balancing



In a project I worked on a couple of years ago I used NLB to experiment with & prove the statelessness of the architecture I had planned.
NLB is a service that comes with Windows servers; it allows you to load balance an application for better availability.
I'm a fan of hardware load balancing myself - more reliable, a dedicated machine for this important task and all sorts of programmatic traffic manipulation, but... I realized this information could be useful for lower-budget projects... so... here it is.

The Basics

I built the experiment on two Windows 2003 machines; one of them acted as the NLB node itself
and application server (machine name "asavasrv01") and the 2nd machine as an application machine (machine name "pensionsrv2").
In a real project, if you have an old machine that cannot perform as an application server, you can always use it as the load balancer.
The sample application is a WinForms client with a web service as the server.


Setting the NLB cluster

The 1st step is adding the NLB provider to the network adapter.



2nd step - set the NLB manager for multicast and add each server to the server list of the NLB manager.




3rd and final step - set the load (usually equal for each server, in this case 50%-50%...).



If you set everything up correctly, your result should look something like this:



Testing load-balancing

I built the simplest web service, returning a kind of "hello world" message but with a little tweak - the server name.
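
The test service was trivial - something along these lines (a hedged reconstruction, not the original code):

[WebMethod]
public string HelloWorld()
{
    //returning the machine name makes it obvious which node answered the call
    return "hello world from " + Environment.MachineName;
}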

The client calls the VIP or cluster name given earlier, and we can see the calls are divided between both servers:



Testing Availability

I unplugged one of the servers from the network to simulate a server failure.
We can see the result in the NLB Manager:



On the client side, since it calls the server in a loop, we experienced a failure of a few calls, but after 2-3 seconds the application recovered.

We can see the NLB detected the server failure and passed the load to one server.



Conclusions

After a short setup we saw how NLB balances the load between the servers, how it detects server failure and how it keeps the application available!

but...

NLB cannot (as other solutions can...) detect application failure and does not analyze any server parameters for best performance and availability (and... and...);
it simply divides the network load between two or more servers in a round-robin order.
Hardly the best load balancing solution, but hey! it's free! If you don't have the budget for a fancy load-balancer solution, you can at least keep your application more reliable & available.

Think about it!
Diego