Wednesday, November 26, 2008

ReSharper Fails to Load with a Generic Error

After installing ReSharper 4.1 on Windows Server 2003, I got this message when I started Visual Studio 2008:

The Add-in 'JetBrains ReSharper 4.1' failed to load or caused an exception.

Error Message: The system cannot find the file specified.

Error Number: 80070002

I emailed JetBrains support and got a quick response.  It turns out my system did not have Extensibility.dll installed in the GAC (Global Assembly Cache), and ReSharper requires this dll to run. 

To figure this out, I opened Windows Explorer to the path "%windir%\assembly", which brings up a shell extension that displays the contents of your GAC.

The fix was easy: I just had to find the DLL, which is installed with Visual Studio. Once I knew the location, I opened the Visual Studio command prompt and ran

gacutil /i "C:\Program Files\Common Files\Microsoft Shared\MSENV\PublicAssemblies\Extensibility.dll"

Hopefully this saves someone else some time.

Thursday, September 4, 2008

Making the ASP.NET DataGrid Usable

If you have ever used an ASP.NET DataGrid (or a GridView without a DataSource control), then you know there is a lot of boilerplate code needed. The project I am working on right now has a bunch of grids that need to edit, add and page over a certain data type, so we wanted to avoid duplicating all that code across every page.
A little research and a prototype later, I came up with what I call the AutoGrid. For the most part, it just handles the editing, paging and error handling internally. Arguably more interesting is the fact that it is a generic control; I was under the impression that web controls could not have type parameters. While this is technically true, there is a cool workaround, which is detailed in a blog post by Eilon Lipton.

Basically, you create a control with generic arguments and whatever functionality you are after. Then you create a second control to be used in the ASP.NET markup; it inherits from the first control and has a string property for each generic argument. Finally, you attach a custom ControlBuilder to the markup control that reads those string properties and swaps in a closed generic Type built from the generic control definition. The code for the non-generic markup control is below:
[ControlBuilder(typeof(GenericControlBuilder))]
public class GenericGrid : AutoGrid<Object>
{
  private string _objectType;

  public string ObjectType
  {
      get
      {
          if (_objectType == null)
          {
              return String.Empty;
          }
          return _objectType;
      }
      set
      {
          _objectType = value;
      }
  }
}

As you can see, it doesn't do much by itself.  The ControlBuilder is where the magic happens (this code is straight from Eilon's example):
public class GenericControlBuilder : ControlBuilder
{
  public override void Init(TemplateParser parser, ControlBuilder parentBuilder,
      Type type, string tagName, string id, IDictionary attribs)
  {

      Type newType = type;

      if (attribs.Contains("objecttype"))
      {
          // If objecttype is specified, create a generic type that is bound
          // to that argument and then hide the objecttype attribute.
          Type genericArg = Type.GetType(
              (string)attribs["objecttype"], true, true);
          Type genericType = typeof(AutoGrid<>);
          newType = genericType.MakeGenericType(genericArg);
          attribs.Remove("objecttype");
      }

      base.Init(parser, parentBuilder, newType, tagName, id, attribs);
  }
}

The other thing that makes this work is an IoC container. In this case I'm using StructureMap, but any container would work. The idea is that since the AutoGrid knows the type it is editing, it can ask the container for the appropriate data access class. Chad Myers helped me get the configuration right, using a custom ITypeScanner:
public class RepositoryConventionScanner : ITypeScanner
{
  public void Process(Type type, Registry registry)
  {
      Type repoForType = GetGenericParamFor(type.BaseType, typeof(Repository<>));

      if (repoForType != null)
      {
          var genType = typeof(IRepository<>).MakeGenericType(repoForType);
          registry
              .ForRequestedType(genType)
              .AddInstance(new ConfiguredInstance(type));
      }
  }

  private static Type GetGenericParamFor(Type typeToInspect, Type genericType)
  {
      if (typeToInspect != null
          && typeToInspect.IsGenericType
          && typeToInspect.GetGenericTypeDefinition().Equals(genericType))
      {
          return typeToInspect.GetGenericArguments()[0];
      }

      return null;
  }
}

Then in your Application_Start event, just tell StructureMap to run this scanner on your assembly:
StructureMapConfiguration
  .ScanAssemblies()
  .IncludeAssemblyContainingType<GenericGrid>()
  .With(new RepositoryConventionScanner());
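
To give an idea of how the pieces fit together, here is a rough sketch of how the AutoGrid can ask the container for its repository. This is a simplification rather than the real control; I'm assuming here that AutoGrid derives from GridView and that IRepository<T> exposes a GetAll method:
public class AutoGrid<T> : GridView where T : class
{
  private IRepository<T> _repository;

  protected IRepository<T> Repository
  {
      get
      {
          if (_repository == null)
          {
              // Resolve whatever the convention scanner registered
              // for this entity type.
              _repository = ObjectFactory.GetInstance<IRepository<T>>();
          }
          return _repository;
      }
  }

  protected void BindData()
  {
      DataSource = Repository.GetAll(); // GetAll is an assumed method
      DataBind();
  }
}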

I wrote this code for a pretty specific situation, so your mileage may vary. I would love to hear thoughts about this; if you care to see the whole solution, you can get it here.

Wednesday, June 25, 2008

Linq To SQL Caching

I ran into a weird behavior while trying out different usage patterns of Linq To SQL: I noticed that some queries were not hitting the database! I knew that Linq To SQL object tracking keeps cached copies of the entities it retrieves, but my understanding was that it only used them for identity mapping and would never return stale results. After some Googling, and then looking at the internals of the System.Data.Linq.Table class with Reflector, I came to the conclusion that it was indeed returning cached results. This makes sense once you understand the way the data context works; I just hadn't realized the implications of object tracking. Once an object has been retrieved by a data context, its values will not be updated from the database. This is key to the way optimistic concurrency works in Linq to SQL, but if you are used to writing simple CRUD applications where you ignore concurrency, it is easy to overlook.

One thing still puzzles me though: if I change my call from

context.Products;

to

context.Products.ToList();

I always hit the database. It turns out that ToList calls GetEnumerator (which causes a query to be fired), whereas when I databind directly against the Table, it calls IListSource.GetList, which will return the cached table if it can. Still, why wouldn't GetList query the database to check for new objects that might have been added to the results, and why couldn't the same query use the cache when I call ToList on it?
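
To see the difference for yourself, here is a minimal sketch (using the same TestDataContext and Products table as my other posts; IListSource lives in System.ComponentModel):
var context = new TestDataContext();

// ToList enumerates the query, so each of these calls
// fires a SELECT against the database.
var first = context.Products.ToList();
var second = context.Products.ToList();

// Databinding against the Table goes through IListSource.GetList,
// which can hand back the cached table without a round trip.
var bindable = ((IListSource)context.Products).GetList();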

Wednesday, June 18, 2008

Deferred Execution in Linq to SQL

Just like the last post, this one is motivated by a comment I got from someone identified as merlin981. Since we seem to have a running dialog, do you have a blog or other online presence? In any case, I wanted to explain my understanding of how Linq to SQL uses deferred execution, because merlin and I seemed to have very different ideas.

Let's take a look at a simple query like the one below.

var dbContext = new TestDataContext();
var result = from x in dbContext.Products
         select x;
At this point, the query is just an expression tree. When you iterate over the results, the following single query executes against the database:
SELECT [t0].[Id], [t0].[Name], [t0].[Price], [t0].[CategoryId]
FROM [dbo].[Product] AS [t0]
Once that query has run, I can access the Id, Name and CategoryId of all the products that were in the database without any further connections to the database. On the other hand, if you were to do something like this:
foreach (var product in result)
{
  Response.Write(product.Category.Name);
}

This block of code is going to hit the database once for each product. Obviously we want to avoid that, and there are several ways to do so. One is to return an anonymous type containing just the columns we need:

var result = from x in dbContext.Products
         select new
         {
             x.Name,
             CategoryName = x.Category.Name
         };

foreach (var product in result)
{
    Response.Write(product.CategoryName);
}

This method will do an inner join and pull back just the columns we asked for. Another way is to specify load options for our original query:
var dbContext = new TestDataContext();
// LoadOptions starts out null, so you must build up a
// DataLoadOptions instance and assign it before querying.
var loadOptions = new DataLoadOptions();
loadOptions.LoadWith<Product>(p => p.Category);
dbContext.LoadOptions = loadOptions;
var result = from x in dbContext.Products
         select x;

This tells the Linq to SQL execution engine to load all the fields of the Category entity for each product. The generated SQL is below.

SELECT [t0].[Id], [t0].[Name], [t0].[Price], [t0].[CategoryId], [t1].[Name] AS [Name2]
FROM [dbo].[Product] AS [t0]
INNER JOIN [dbo].[Category] AS [t1] ON [t1].[Id] = [t0].[CategoryId]

I hope this has been a helpful example of how Linq To SQL uses deferred execution.

Monday, June 9, 2008

Stored Procedures, a Best Practice?

I just saw merlin981's comment on my LINQ to SQL post, thanks for taking the time to leave it! That said, I think the term "Best Practice" is something of a misnomer here. There has been much written on both sides of this debate. One thing is for sure, though: a parameterized query is compiled just like a stored procedure on SQL Server version 7.0 and later. From Frans Bouma's blog, I found this article in SQL Server Books Online:

SQL Server 2000 and SQL Server version 7.0 incorporate a number of changes to statement processing that extend many of the performance benefits of stored procedures to all SQL statements. SQL Server 2000 and SQL Server 7.0 do not save a partially compiled plan for stored procedures when they are created. A stored procedure is compiled at execution time, like any other Transact-SQL statement. SQL Server 2000 and SQL Server 7.0 retain execution plans for all SQL statements in the procedure cache, not just stored procedure execution plans.

So I think it is clear that sprocs will not be significantly faster than ad hoc SQL for simple cases. This is not to say that you should never use sprocs; on the contrary, there are situations where sprocs are the only good solution (for instance, complex data manipulation that requires temporary tables). The point is that using an ORM can make development easier by allowing you to ignore the SQL for the majority of cases. If you see that parts of your application are slow, then you can fix that.
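
To make that concrete, here is a minimal sketch of a parameterized query against the Product table from my Linq to SQL posts (connectionString and productId are assumed to be defined elsewhere). Because the statement text stays identical from call to call, SQL Server can cache and reuse its execution plan just as it would for a stored procedure:
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "SELECT [Price] FROM [dbo].[Product] WHERE [Id] = @Id", connection))
{
    // The parameter keeps the SQL text constant across calls.
    command.Parameters.AddWithValue("@Id", productId);
    connection.Open();
    var price = (decimal)command.ExecuteScalar();
}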

Merlin also mentioned that running queries directly against tables uses deferred execution, as if that were a bad thing. Deferred execution is what allows LINQ to work at all, and it can improve performance in many scenarios. Of course, like any tool, it can get you into trouble if you don't understand it.

Thursday, May 29, 2008

Linq To SQL

Linq to SQL is an object relational mapper (ORM) that is included in .NET 3.5. The Linq to SQL designer allows you to create objects from your database tables simply by dragging the tables from the Server Explorer to the design surface. You can customize the generated types in the designer or write code in partial classes. Once you have defined your model, you can query it with Linq syntax and the SQL provider will generate the appropriate database calls for you.
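
For example, assuming the designer generated a TestDataContext with a Products table (the same names I use in my other posts), a query is as simple as this, and the provider only sends the SELECT to the database when the results are enumerated:
using (var context = new TestDataContext())
{
    var cheapProducts = from p in context.Products
                        where p.Price < 10m
                        select p;

    // The query executes here, not above.
    foreach (var product in cheapProducts)
    {
        Response.Write(product.Name);
    }
}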

One limitation of Linq to SQL is that a given object can only be populated from one table. This means that your objects will map almost one to one to your database tables. For anything but the simplest applications, you don't want to work directly with objects shaped like your database tables; in those situations, you could use Linq to SQL to populate proper domain objects instead. Keep in mind that there is a performance overhead in creating these objects as opposed to using raw ADO.NET. Rico Mariani has a good series of blog posts about Linq to SQL performance, but I suggest profiling your application if you think it will be a problem.

Thursday, May 15, 2008

ASP.NET 2.0 Databinding, Part III

In part I, I covered datasource controls and two way databinding.  In part II, I went into how to use table adapters for handling CRUD operations with two way binding.  Today I want to explain when it makes sense to use these features.

It is important to keep in mind that any technology, feature or pattern will only be useful in a certain context. Two way databinding is a good fit for single table editing scenarios in applications that will not get heavy user loads. For more complex joined data you have to write code anyway, and it forces you into a very data-centric, hard-to-test design. For applications that have complex editing scenarios, you are often better off handling the GridView events directly, as in the sketch below. If you are designing a system with lots of business rules, you should probably use a rich domain model. I intend to write a post about using domain driven design patterns soon.
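
As a rough illustration of handling the grid events yourself, a RowUpdating handler might look like this (the control, field and repository names are all made up):
protected void ProductsGrid_RowUpdating(object sender, GridViewUpdateEventArgs e)
{
    GridViewRow row = ProductsGrid.Rows[e.RowIndex];
    string name = ((TextBox)row.FindControl("NameTextBox")).Text;

    // Call into your own data access or domain layer instead of
    // relying on a datasource control.
    _productRepository.UpdateName((int)ProductsGrid.DataKeys[e.RowIndex].Value, name);

    ProductsGrid.EditIndex = -1;
    BindGrid(); // re-bind the grid with fresh data
}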

By the same token, there are other options for handling data presentation. Depending on your needs, you might want to consider using a commercial grid control from Telerik or Infragistics, or a library like Ext JS. Many of these controls will handle updating your datasource object for you, which gives you more freedom in the design of your data access strategy.

If you found this series useful, saw a mistake or have any questions please let me know.  I am always interested in feedback, so feel free to leave me a comment.

Monday, May 5, 2008

ASP.NET 2.0 Databinding, Part II

In part I, I talked about DataSource controls and how they enable two way databinding. Table adapters are another new feature of ASP.NET 2.0 that was designed to work with the ObjectDataSource to allow declarative databinding.

What do I mean by declarative? At a high level, declarative simply means you state what you want, not how you want it done (as opposed to imperative, where you say how something should be accomplished). When these terms are applied to ASP.NET, it usually means aspx markup (declarative) or C# code in your code-behind classes (imperative).

Table adapters generate ADO.NET code, specifically SqlDataAdapters, and handle all the database connections for you. Using the designer in Visual Studio, we can drag and drop tables or specify stored procedures or ad hoc SQL, and VS will generate a strongly typed DataSet based on the shape of the query, and a SqlDataAdapter for each query you add.
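
Using a generated table adapter looks something like this (I'm assuming a typed DataSet named NorthwindDataSet with a Products table; GetData and Update are the method names the designer generates by default):
var adapter = new NorthwindDataSetTableAdapters.ProductsTableAdapter();

// GetData opens a connection, fills a typed DataTable
// and closes the connection for you.
NorthwindDataSet.ProductsDataTable products = adapter.GetData();

products[0].ProductName = "Renamed product";

// Update sends any pending inserts, updates and deletes
// back to the database.
adapter.Update(products);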

The code generated by the designer consists of partial classes, so you can add methods and properties, or handle events, on the table adapters or the DataSets they return. The usual example is adding validation logic, which is a nice idea, but implementing the validation with the ASP.NET validation controls generally provides a better user experience.
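
For completeness, such a validation rule could look like the sketch below (again assuming the Northwind dataset; the typed DataTable classes are generated as partial classes nested inside the DataSet class):
public partial class NorthwindDataSet
{
    public partial class ProductsDataTable
    {
        protected override void OnColumnChanging(DataColumnChangeEventArgs e)
        {
            base.OnColumnChanging(e);

            // Reject negative prices before the row accepts the change.
            if (e.Column.ColumnName == "UnitPrice"
                && !Convert.IsDBNull(e.ProposedValue)
                && (decimal)e.ProposedValue < 0)
            {
                throw new ArgumentException("UnitPrice cannot be negative.");
            }
        }
    }
}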

The table adapter classes and the query methods they expose are decorated with attributes (DataObject and DataObjectMethod) that tell the databinding designers they support certain operations. You could implement your own objects that work with two way binding but persist to a completely different database, or call a set of web services.

The first query you add to a table adapter defines the shape of the DataTable that is generated for you. If this query is a single table select statement, the designer can generate the insert, update and delete methods for you; otherwise you must write the SQL manually.

In part III, I will put the pieces together and look at where two way databinding and table adapters are a good fit.

ASP.NET 2.0 Databinding, Part I

This is the first of a series of posts about databinding in ASP.NET 2.0. These posts are mostly for my understanding and writing practice, but hopefully someone gets value out of them as well. To whoever does read this, I would love to hear some feedback.

The features I will be talking about are covered in a set of tutorials by Scott Mitchell that are much deeper and more detailed than these posts.

In ASP.NET 1.1, databinding was a manual, one way operation. You set the DataSource of a UI control to a collection and call DataBind. This is great, but what about getting the data back from the UI control? You have to manually access the values from the control and write code to map the values back to some kind of object.
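
In code, the 1.1 pattern looks something like this DataGrid update handler (UpdateProduct and BindGrid are made-up methods standing in for your own code):
private void ProductsGrid_UpdateCommand(object source, DataGridCommandEventArgs e)
{
    // Digging the edited values back out of the control is entirely manual.
    int id = (int)ProductsGrid.DataKeys[e.Item.ItemIndex];
    string name = ((TextBox)e.Item.Cells[1].Controls[0]).Text;

    UpdateProduct(id, name);

    ProductsGrid.EditItemIndex = -1;
    BindGrid();
}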

In ASP.NET 2.0, there is support for two way databinding. What do I mean by two way? When a command is issued to a bound control, it will check if the DataSource object it is bound to supports the given command. If it does, the data will be passed back to the datasource control which will handle the operation (update, insert, delete).

This implementation has several parts. First, all data-bound controls now have a DataSourceID property. This property should correspond to the ID property of a DataSourceControl object. These controls handle calling DataBind on the control and can be configured to handle most of the command events for a bound control (Edit, Update, Cancel, Delete, and Insert). Keep in mind that although DataSourceControl objects are part of the System.Web.UI namespace, they do not render anything themselves.

For the datasource to work, you must at least configure it to handle fetching data. For an ObjectDataSource, this means pointing it to an instance method that it should call to fetch the data. For a SqlDataSource, you have to give it a connection string and a select statement.

In part II, I will show how to use an ObjectDataSource in conjunction with TableAdapters (another ASP.NET 2.0 feature), to allow CRUD operations against a SQL Server database without writing a single line of C# code.

Thursday, May 1, 2008

Took the Plunge

I have been thinking about starting a blog for a while, so I am finally writing my first post. I recently attended the alt.net open spaces in Seattle, and some conversations I had there made me decide to start a blog. A little background: I have been a computer geek since I was a kid, but I never did any amount of programming until I started college in 2003. I have been working as an ASP.NET developer since January 2006, and I graduated from school in the beginning of 2007. I'm not sure what the goal of this blog is yet; for the time being I am going to post problems I run into (and hopefully resolve :-)). Hopefully other people will get some benefit from it as well, but if nothing else, it will be a good reference for me.