Monday, August 16, 2010

Dependency Injection Patterns

I had a Twitter conversation a while back with Paul Hammant about different ways to use an IoC container. Paul is one of the leads on PicoContainer, one of the Java containers. It all started with him coming across the Common Service Locator, which is an adapter that lets frameworks leverage a container without forcing consumers to use a specific one. He had a big problem with one part of the implementation, the ServiceLocator class. This class is a static gateway to the container, and he pointed out that this is the antithesis of dependency injection and inversion of control. While this is true, there has to be at least one place (usually a few places) in your application that gets access to the container to inject everything else.

In asp.net, this is usually achieved by storing your container in the ApplicationState dictionary, and then accessing it where it is needed. This pattern ends up being recreated in every application that uses a container, so container authors hid this functionality behind a static gateway. Is this a bad thing? If used as intended, I don’t see much real difference between using a global dictionary and using a static gateway (in fact, you could argue that the gateway is cleaner). Paul’s point, however, was that giving developers access to this static gateway will lead to it being used in all sorts of places it shouldn’t be. In practice, I have seen applications where this happens to some extent, but in many cases it has more to do with the lack of DI support in asp.net.
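For context, the gateway under discussion is essentially this pattern (a sketch from memory, not the actual Common Service Locator source):

```csharp
// Sketch of a static gateway: any code, anywhere, can resolve services.
// The real Common Service Locator exposes a similar static entry point.
public static class ServiceLocator
{
    private static Func<IServiceLocator> currentProvider;

    // Called once at startup by whoever configures the container
    public static void SetLocatorProvider(Func<IServiceLocator> newProvider)
    {
        currentProvider = newProvider;
    }

    // Anything can call this -- which is exactly Paul's objection
    public static IServiceLocator Current
    {
        get { return currentProvider(); }
    }
}
```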

Paul proposed a way to control access to the container using packages in Java. I came up with something similar in .net, although I had to use a separate assembly to enforce separation. For asp.net mvc, it boils down to putting your controller factory and other extension points in their own assembly, and wrapping the container to make the various resolution methods internal. That way, they can only be used in this infrastructure assembly. I am not crazy about having to wrap the container, but it’s not that much code:

public class ContainerWrapper
{
 public const string ContainerKey = "Container";
 Container container;

 public void InitializeContainer()
 {
     container = new Container();
     container.Configure(c => c.Scan(s =>
     {
         s.WithDefaultConventions();
         s.AddAllTypesOf<IController>();
     }));

     HttpContext.Current.Application[ContainerKey] = this;
 }

 internal object GetInstance(Type type)
 {
     return container.GetInstance(type);
 }

 internal T GetInstance<T>()
 {
     return container.GetInstance<T>();
 }
}

This example uses StructureMap, and is very simplistic. In a real application you would probably want to scan for registries containing the registration logic (those would live in your main web assembly). The ControllerFactory looks like this:

public class StructureMapControllerFactory : DefaultControllerFactory
{
 protected override IController GetControllerInstance(
     RequestContext requestContext, Type controllerType)
 {
     var application = requestContext.HttpContext.Application;
     var container = (ContainerWrapper)application[ContainerWrapper.ContainerKey];
     return container.GetInstance(controllerType) as IController;
 }
}
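To wire this up, the main web assembly only needs to initialize the wrapper and register the factory at startup; roughly like this (a sketch using the class names above — Global.asax details will vary by project):

```csharp
// Global.asax.cs in the main web assembly (a sketch)
public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Stores the wrapper in ApplicationState under ContainerWrapper.ContainerKey
        new ContainerWrapper().InitializeContainer();

        // The factory is the only code that can reach the internal
        // GetInstance methods, since it lives in the infrastructure assembly
        ControllerBuilder.Current.SetControllerFactory(
            new StructureMapControllerFactory());
    }
}
```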

A complete sample project showing this off is on Github.

So I’m curious about two things. First, in your experience, do developers abuse service locators when they could be using dependency injection? Second, would it help to hide the container in an infrastructure assembly, only exposing the registration and base classes that access the container internally?

Tuesday, January 19, 2010

IIS 7.5 Required File System Permissions

To host a site on IIS 7.5, you need to grant the application pool account access to the source files. By default the pool identity is ApplicationPoolIdentity, which is a member of the built-in IIS_IUSRS group. If you don't do this, you will get the following error:
HTTP Error 500.19 - Internal Server Error The requested page cannot be accessed because the related configuration data for the page is invalid.
If you want to allow anonymous access to the site, you also need to grant the anonymous user identity access to these files. By default this is the built-in system account IUSR. If you don't do this, you will get this error:
HTTP Error 401.3 - Unauthorized You do not have permission to view this directory or page because of the access control list (ACL) configuration or encryption settings for this resource on the Web server.
Hopefully this saves someone else (and me!) time.
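For reference, both grants can be scripted with icacls; a sketch assuming the site content lives at C:\inetpub\mysite (adjust the path for your own site):

```
:: Run from an elevated command prompt
:: Grant the app pool group read access to the content
icacls "C:\inetpub\mysite" /grant "IIS_IUSRS:(OI)(CI)R"

:: Grant the anonymous user read access (only needed for anonymous sites)
icacls "C:\inetpub\mysite" /grant "IUSR:(OI)(CI)R"
```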

Wednesday, August 19, 2009

WebPart Notes

Here are some notes from building a SharePoint web part.
  • It seems when you deploy a feature containing web parts, you still have to add the web parts to the web part gallery for each site.
  • If you want to reference controls in the CONTROLTEMPLATES directory, you can use the virtual path ~/_controltemplates.
  • If your controls have a code behind that is strong named, you must fully qualify the Inherits attribute of the @Control directive.
  • To remove broken webparts from a page, add the query string ?contents=1 to the URL. (Via http://sharepointinsight.blogspot.com/2008/06/remove-bad-or-broken-web-parts-from.html)
  • To disable friendly error messages (and see the yellow screen of death)
    • set the CallStack attribute of the SafeMode node to true
    • set CustomErrors mode attribute to Off
    • set debug attribute of the Compilation node to true
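In the SharePoint web application's web.config, those three settings look roughly like this (a sketch; the other attributes on these nodes are omitted):

```xml
<!-- inside the <SharePoint> section -->
<SafeMode CallStack="true" ... />

<!-- inside <system.web> -->
<customErrors mode="Off" />
<compilation debug="true" ... />
```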

SharePoint Workflow Notes

Here are some notes from my experience building a SharePoint state machine workflow.

  • All fields declared on the workflow object must be a serializable type or marked [NonSerialized]
  • Any activities before the workflow activated event will not have access to workflow properties
  • Remember to remove DLLs from the GAC when building; GAC'ed versions will supersede local versions
  • Emailing from SharePoint requires the network service account to have local activation permission for "IIS WAMREG Admin Service" (a dcom service): http://www.cleverworkarounds.com/2007/10/25/dcom-fun-with-sharepoint/
  • There is a bug in the delay activity, here are some patches: http://blogs.msdn.com/sharepoint/archive/2008/07/15/announcing-availability-of-infrastructure-updates.aspx, http://support.microsoft.com/kb/953630/
  • SetState is not like a return; the rest of the activities in that phase will still execute
  • You cannot have more than one list template in a feature
  • To set the workflow logging level use this command:
    • stsadm -o setlogginglevel -category "Workflow Infrastructure" -tracelevel verbose
  • To get and set timer settings use this command:
  • To allow unsigned powershell scripts to run use this command:
    • Set-ExecutionPolicy Unrestricted

Workflow Versioning

  • There is no concept of versioning for Visual Studio created SharePoint workflows
  • Instead, include the version in the feature path, name, description and workflow name and description
  • Change the assembly version, and the version in each of the other places
  • Deploy as a new feature
  • Make sure the old workflow is marked "No new instances" after you associate the new version (this should be automatic)
  • Based on this, it seems like workflows should be in their own features to allow for simpler versioning
  • Delay activities were not working correctly
  • Worked with MS Premier support for weeks to try and resolve this
  • Resetting the SharePoint timer service made the delay activity work correctly
  • When delay activity timeouts are initialized in the designer, setting the value to something else in the initialize event does not work
  • If you bind the timeout property to a dependency field then it can be set in the initialize timeout handler, much like SendEmailActivity properties

Wednesday, November 26, 2008

ReSharper Fails to Load with a Generic Error

After installing ReSharper 4.1 on a Windows Server 2003 machine, I got this message when starting Visual Studio 2008:

The Add-in 'JetBrains ReSharper 4.1' failed to load or caused an exception.

Error Message: The system cannot find the file specified.

Error Number: 80070002

I emailed JetBrains support and got a quick response.  It turns out my system did not have Extensibility.dll installed in the GAC (Global Assembly Cache), and ReSharper requires this dll to run. 

To figure this out, I opened explorer to the path "%windir%\assembly", which brings up a shell extension that displays the contents of your GAC.

The fix was easy: I just had to find the dll, which is installed with Visual Studio.  Once I knew the location, I opened the Visual Studio command prompt and ran:

gacutil /i "C:\Program Files\Common Files\Microsoft Shared\MSENV\PublicAssemblies\Extensibility.dll"

Hopefully this saves someone else some time.

Thursday, September 4, 2008

Making the ASP.NET DataGrid Usable

If you have ever used an ASP.NET DataGrid (or GridView without a DataSource control), then you know there is a lot of boilerplate code needed.  The project I am working on right now has a bunch of grids that need to edit, add and page over a certain data type, so we wanted to avoid duplicating all that code across every page.

A little research and a prototype later, I came up with what I call the AutoGrid.  For the most part, it just handles the editing, paging and error handling internally.  Arguably more interesting is the fact that it is a generic control; I was under the impression that web controls could not have type parameters.  While this is technically true, there is a cool work around, which is detailed in a blog post by Eilon Lipton. 

Basically, you create a control with generic arguments and whatever functionality you are after. Then you create a second, non-generic control to be used in asp.net markup; it inherits from the first control and has a string property for each generic argument.  Finally, you override the ControlBuilder for your first control and construct the closed generic type by passing the type arguments to MakeGenericType.  The code for the non-generic markup control is below:

[ControlBuilder(typeof(GenericControlBuilder))]
public class GenericGrid : AutoGrid<Object>
{
  private string _objectType;

  public string ObjectType
  {
      get
      {
          if (_objectType == null)
          {
              return String.Empty;
          }
          return _objectType;
      }
      set
      {
          _objectType = value;
      }
  }
}

As you can see, it doesn't do much by itself.  The ControlBuilder is where the magic happens (this code is straight from Eilon's example):

public class GenericControlBuilder : ControlBuilder
{
  public override void Init(TemplateParser parser, ControlBuilder parentBuilder,
      Type type, string tagName, string id, IDictionary attribs)
  {

      Type newType = type;

      if (attribs.Contains("objecttype"))
      {
          // If objecttype is specified, create a generic type that is bound
          // to that argument and then hide the objecttype attribute.
          Type genericArg = Type.GetType(
              (string)attribs["objecttype"], true, true);
          Type genericType = typeof(AutoGrid<>);
          newType = genericType.MakeGenericType(genericArg);
          attribs.Remove("objecttype");
      }

      base.Init(parser, parentBuilder, newType, tagName, id, attribs);
  }
}
The other thing that makes this work is an IoC container.  In this case I'm using StructureMap, but any IoC container would work.  The idea is that since the AutoGrid knows the type it is editing, it can ask for the appropriate data access class.  Chad Myers helped me get the configuration right, using a custom TypeScanner:
public class RepositoryConventionScanner : ITypeScanner
{
  public void Process(Type type, Registry registry)
  {
      Type repoForType = GetGenericParamFor(type.BaseType, typeof(Repository<>));

      if (repoForType != null)
      {
          var genType = typeof(IRepository<>).MakeGenericType(repoForType);
          registry
              .ForRequestedType(genType)
              .AddInstance(new ConfiguredInstance(type));
      }
  }

  private static Type GetGenericParamFor(Type typeToInspect, Type genericType)
  {
      if (typeToInspect != null
          && typeToInspect.IsGenericType
          && typeToInspect.GetGenericTypeDefinition().Equals(genericType))
      {
          return typeToInspect.GetGenericArguments()[0];
      }

      return null;
  }
}
Then in your Application_Start event, just tell StructureMap to run this scanner on your assembly:
StructureMapConfiguration
  .ScanAssemblies()
  .IncludeAssemblyContainingType<GenericGrid>()
  .With(new RepositoryConventionScanner());
I wrote this code for a pretty specific situation, so your mileage may vary.  I would love to hear thoughts about this; if you care to see the whole solution you can get it here.

Wednesday, June 25, 2008

Linq To SQL Caching

I ran into some weird behavior while trying out different usage patterns of Linq To SQL. I noticed that some queries were not hitting the database! I knew that Linq To SQL object tracking keeps cached copies of the entities it retrieves, but my understanding was that it only used this cache for identity mapping and would never return stale results. After some Googling and then looking at the internals of the System.Data.Linq.Table class with Reflector, I came to the conclusion that it was indeed returning cached results. This makes sense once you understand the way the data context works; I just didn't realize the implications of object tracking. Once an object has been retrieved by a data context, its values will not be updated from the database. This is key to the way optimistic concurrency works in Linq to SQL, but if you are used to writing simple CRUD applications where you ignore concurrency, it would be easy to overlook.
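The behavior is easy to see in code; a sketch (NorthwindDataContext, the Products table, and the external update are assumptions for illustration):

```csharp
using (var context = new NorthwindDataContext())
{
    var first = context.Products.Single(p => p.ProductID == 1);

    // ...suppose another process updates this row in the database here...

    // Same context, same identity: this query runs against the database,
    // but the tracked instance with its ORIGINAL values is what comes back
    var second = context.Products.Single(p => p.ProductID == 1);

    // first and second are reference-equal; the new column values are discarded
}
```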

One thing still puzzles me though: if I change my call from

context.Products;

to

context.Products.ToList();

it would always hit the database. It turns out that ToList calls GetEnumerator (which causes a query to be fired), whereas when I databind directly against the Table, it calls IListSource.GetList, which returns the cached table if it can. Why wouldn't it query the database to check for new objects that might have been added to the results, and why couldn't the same query use the cache when I call ToList on it?