Monday, February 6, 2012
Using Entity Framework to Query Dynamic Types

I ran across a unique scenario recently, and I’m not sure if I should be proud of the solution, or appalled at the Frankenstein result.  Either way, I thought I’d post about it to possibly help others in the same situation.

I was developing a website leveraging jQuery / AJAX to query views of data at runtime.  The kicker?  I didn’t know which views.  The choice of database view was determined via configuration and user choice.

I was leveraging jqGrid for the client-side grid presentation, sorting, filtering, etc.  I was also proving out an approach using the MVC library for jqGrid to bind and query for results (this isn't necessary for this discussion, but was the original reason to create this solution, as we’ll soon see).

The jqGrid MVC library uses IQueryable data sources to bind to the grid.  IQueryable in and of itself isn’t that hard to produce, but what about when you don’t know what the type will look like (read: the view you’re binding to is unknown until runtime)?  It sure would be nice to leverage an existing technology like Entity Framework, but EF assumes you know the tables / views in advance.  It’s not a bad assumption, but I definitely have an edge case here.

So, the summary is: how do you return an IQueryable<T> when T is unknown at compile-time?

There are some constraints here, all of which make this solution more palatable (or scarier):

  1. I don’t need to worry about database updates.  Views are hard to update, anyway, usually requiring INSTEAD OF triggers.
  2. I could eliminate EF and just run a straight database query, mapping the datareader into a List<T>, where T is dynamically typed at runtime.  I could then return List<T>.AsQueryable(), but I lose something significant:  deferred execution.  I’m returning the entire resultset and then using Linq in memory to sort and filter the result.  What if I have a million rows?  Awful.
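
To make constraint 2 concrete, here’s a hedged sketch of that EF-free approach (MapRow<T> and the runtime-generated type T are hypothetical placeholders, not real APIs).  Note that every row is materialized before any Linq sorting or filtering happens:

```csharp
// Sketch only -- MapRow<T> (reader-to-object mapping) is assumed, not a real API.
public static IQueryable<T> QueryViewInMemory<T>(string connectionString, string viewName)
{
    var results = new List<T>();
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT * FROM " + viewName, connection))
    {
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                results.Add(MapRow<T>(reader)); // every row crosses the wire
            }
        }
    }

    // Any later Where/OrderBy runs in memory (LINQ to Objects);
    // the database has already returned the entire view.
    return results.AsQueryable();
}
```

With a million-row view, the sort and filter happen only after the full resultset is sitting in memory, which is exactly the cost deferred execution avoids.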

So, I set about to force EF into helping me solve the problem.  Nothing like forcing a square peg into a round hole.

First off, how do I create an EF context that will return dynamic results?  This part is fairly simple.

/// <summary>
/// Context class for dynamic queries and catalog information
/// </summary>
public class QueryCatalogContext : DbContext
{

    private readonly bool modelProvided; // set when an external compiled model is supplied

    /// <summary>
    /// Initializes a new instance of the <see cref="QueryCatalogContext" /> class.
    /// </summary>
    /// <param name="model">The model.</param>
    public QueryCatalogContext(DbCompiledModel model)
        : base("DomainDB", model)
    {
        modelProvided = true;
    }

EF allows us to provide a model at runtime, so assuming I can define a model based on my run-time view, I’m OK there.  Now how do I return an IQueryable<T> from this context?
/// <summary>
/// Gets a set of generalized query results.
/// </summary>
/// <param name="query">The query.</param>
/// <returns>A deferred, no-tracking query over the dynamic type.</returns>
public IQueryable<T> GetDynamicQueryResults<T>(QueryMetadata query)
{

    List<ColumnMetadata> columns = this.GetColumnMetadata(query);

    string className = QueryTypeFactory.GetType(query, columns).Name;

    // Build the query and execute
    string sql = string.Format("SELECT VALUE {0} FROM CodeFirstContainer.{0} AS {0}", className);
    return new ObjectQuery<T>(sql, ((IObjectContextAdapter)this).ObjectContext, MergeOption.NoTracking);

}

GetColumnMetadata is an internal function that returns a list of column (class property) metadata for the given query metadata (it does this by querying the database for view column metadata).  QueryTypeFactory uses the Dynamic Linq Library to stamp out (and cache) types based on a set of properties (the columns just retrieved).  I then turn that type name into an Entity SQL statement and return an ObjectQuery<T> with no tracking.

OK, that’s fine and dandy, but how do I provide the model and, more importantly, how do I call the generic method if I don’t know what T is in advance?

The service class that actually executes the query using the context passes in the model and calls the generic method.  That mechanism follows:

/// <summary>
/// Gets the dynamic query results.
/// </summary>
/// <param name="query">The query.</param>
/// <param name="columns">The columns.</param>
/// <returns></returns>
public IQueryable<T> GetDynamicQueryResults<T>(QueryMetadata query, List<ColumnMetadata> columns) where T : class
{

    // Define a dynamic type and create a configuration for it
    DbModelBuilder builder = new DbModelBuilder(DbModelBuilderVersion.V4_1);
    EntityTypeConfiguration<T> configuration = (EntityTypeConfiguration<T>)Activator.CreateInstance(typeof(DynamicViewConfiguration<>).MakeGenericType(typeof(T)), query, columns);

    // Add configuration for dynamic type
    builder.Configurations.Add(configuration);

    using (SqlConnection connection = new SqlConnection(ConfigurationManager.ConnectionStrings["MyDB"].ConnectionString)) // Only need the connection for building the model
    {
        // Can't use implicit IDisposable because the context is needed when the IQueryable is evaluated.
        QueryCatalogContext context = new QueryCatalogContext(builder.Build(connection).Compile());
        return context.GetDynamicQueryResults<T>(query);
    }
}

First, the service class creates a DbModelBuilder.  The magic happens in the next line.  Because I don’t know the type T at compile time, I need to use reflection to create an instance of the DynamicViewConfiguration<T> class (which is an EntityTypeConfiguration<T>).  I then add this configuration to the model builder.  I open a new connection for building the model (EF has to query the view for metadata), compile the model, and pass it into the context constructor.  Finally, I call GetDynamicQueryResults<T> on the context, which returns a true IQueryable<T> object.

The DynamicViewConfiguration<T> class is used to define the model for EF.  At a minimum, models must define what table they map to and what constitutes the primary key.  This is what the DynamicViewConfiguration<T> class does.

/// <summary>
/// A dynamic entity configuration for views specified at run-time.
/// </summary>
public class DynamicViewConfiguration<TDynamic> : EntityTypeConfiguration<TDynamic> where TDynamic : class
{

    /// <summary>
    /// Initializes a new instance of the <see cref="DynamicViewConfiguration" /> class.
    /// </summary>
    /// <param name="query">The query.</param>
    /// <param name="columns">The columns.</param>
    public DynamicViewConfiguration(QueryMetadata query, List<ColumnMetadata> columns)
        : base()
    {

        // Locate key column (TODO: Handle multiple key fields)
        ColumnMetadata key = columns.FirstOrDefault(c => c.IsKey) ?? columns[0];
        Activator.CreateInstance(typeof(DynamicViewLambdaGenerator<,>).MakeGenericType(typeof(TDynamic), key.DataType.ClrType), this, key.Name);

        ToTable(query.Name);

    }

}

DynamicViewLambdaGenerator<TDynamic, TKey> is an internal class that builds the lambda for the PK (the key field) and registers it with the configuration.

/// <summary>
/// Internal class to generate lambda expressions for the configuration
/// </summary>
internal class DynamicViewLambdaGenerator<TDynamic, TKey> where TDynamic : class
{

    // TODO: Change to a stateful approach and cache reflection and expressions if usage of this class increases
    private readonly Type type;
    private readonly string keyPropertyName;

    /// <summary>
    /// Initializes a new instance of the <see cref="DynamicViewLambdaGenerator{TDynamic, TKey}" /> class.
    /// </summary>
    /// <param name="configuration">The entity configuration to register the key with.</param>
    /// <param name="keyPropertyName">Name of the key property.</param>
    public DynamicViewLambdaGenerator(DynamicViewConfiguration<TDynamic> configuration, string keyPropertyName)
    {
        this.keyPropertyName = keyPropertyName;
        this.type = typeof(TDynamic);

        configuration.HasKey(this.Generate());
    }

    /// <summary>
    /// Generates the key-selector lambda for the key property.
    /// </summary>
    /// <returns>An expression selecting the key property.</returns>
    private Expression<Func<TDynamic, TKey>> Generate()
    {
        PropertyInfo property = type.GetProperty(keyPropertyName);
        ParameterExpression thing = Expression.Parameter(type);
        MemberExpression keyExpression = Expression.Property(thing, property);

        return Expression.Lambda<Func<TDynamic, TKey>>(keyExpression, thing);
    }
}

OK, bear with me.  This is where all the real funny stuff goes on.  In the DynamicViewConfiguration<T> constructor, we find the primary key column.  We use that (again, through reflection, because we don’t know the generic types at compile time) to create the lambda expression for the EF entity configuration’s HasKey method.  Within the Generate() method, we use the key property name, reflection, and an expression tree to build a selector for that property on the type.

After that, we call ToTable passing in the name of the view.  Voila.  We’ve defined the underlying view name and the key field.

With the context (and model) defined for our runtime type, the service class can then call GetDynamicQueryResults<T>, and as mentioned earlier, I get a true IQueryable<T> back.  I can use Linq to filter or sort the results and the view is not queried until the set is iterated.
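
As a hedged illustration of that deferral (Row and its Name column are hypothetical stand-ins for the runtime-generated type), composition over the returned IQueryable<T> is only translated to SQL when the results are enumerated:

```csharp
// Hypothetical: Row is the runtime-generated view type with a Name column.
IQueryable<Row> rows = service.GetDynamicQueryResults<Row>(query, columns);

IQueryable<Row> page = rows
    .Where(r => r.Name != null)  // becomes a WHERE clause
    .OrderBy(r => r.Name)        // becomes an ORDER BY clause
    .Skip(50)                    // paging pushed to the database
    .Take(25);

foreach (Row row in page)        // the view is queried here, returning only 25 rows
{
    Console.WriteLine(row.Name);
}
```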

The last missing piece is how to call the service class.  This is done in an MVC controller.

// Generate a type based on the return columns
Type type = QueryTypeFactory.GetType(query, columns);

MethodInfo generic = typeof(IQueryCatalogDynamic).GetMethod("GetDynamicQueryResults").MakeGenericMethod(type);
var data = generic.Invoke(repository, new object[] { query, columns }); // an IQueryable<T> boxed as object

The controller gets the runtime type from our type factory class, builds a closed generic MethodInfo for GetDynamicQueryResults, and invokes it on the service instance, passing in our parameters.  The controller hands the resulting IQueryable<T> to the grid databinding function, which actually performs the sort and filter.  With literally millions of rows in some tables, the jqGrid paging and sorting was nearly instant.

Now I know that this is sort of a bastardization of technology, but the use case is narrow, and it worked great for my scenario.  The purpose isn’t really how to uglify EF, but more so how some solutions can be solved with a little ingenuity (and a lot of head-banging against keyboards).

Posted On Monday, February 6, 2012 4:21 PM | Comments (0)
Monday, January 16, 2012
NuCon 2012, Irvine, CA

If you’re in southern California on February 16, 2012, come out to NuCon 2012.  Spend the day building an IT strategy for your organization through business and technical tracks.  Meet with peers and experts in your field, demo technology, and win prizes.

Posted On Monday, January 16, 2012 10:20 AM | Comments (0)
Thursday, September 15, 2011
Sharepoint Conference 2011, Anaheim, CA

If you’ll be at the Sharepoint Conference this year, here’s something you’ll want to tuck into your back pocket.  See the following information on some cool Sharepoint solutions and some fun stuff to boot, courtesy of my employer.


From Oct. 3 to 6 at the SharePoint Conference 2011 in Anaheim, Calif., Neudesic experts will be on hand to demonstrate innovative “SharePoint Plus” solutions.  “SharePoint Plus” solutions add social, mobile, business intelligence and user experience capabilities to Microsoft’s popular business collaboration platform. One of Neudesic’s central “SharePoint Plus” solutions is Pulse, a software product that transforms SharePoint’s My Site experience into a captivating, socially collaborative environment.

To take a closer look at the benefits associated with integrating Neudesic Pulse and Neudesic SharePoint Services into SharePoint 2010, Neudesic will host “Space X Launches SharePoint 2010 with Neudesic Pulse for Out of this World Business Collaboration” on Thursday, Oct. 6 from 9 – 10:15 a.m. By implementing Neudesic products and services, Space X broke down existing paradigms of information and created more direct lines of communication in order to facilitate ideation and shared knowledge across the organization.  http://www.spacex.com/

Neudesic will also sponsor an evening celebration at Disneyland Park from 8 p.m.-1 a.m. featuring unlimited drinks and refreshments, access to many of the park’s rides and attractions, and a private viewing of Fantasmic.

Posted On Thursday, September 15, 2011 5:50 PM | Comments (0)
Monday, October 25, 2010
Azure Storage Explorer 4 Beta 1 Released

If you use Windows Azure simple storage, there’s a tool that you can use to browse your artifacts in storage.  Go grab the latest version.

Posted On Monday, October 25, 2010 10:04 AM | Comments (0)
Monday, September 20, 2010
ASP.Net Security Vulnerability Discovered

A significant security vulnerability was discovered in ASP.Net (all versions) over the weekend.  To learn about the issue and how to protect any sites you may have running on ASP.Net, check Scott Gu's link.  http://weblogs.asp.net/scottgu/archive/2010/09/18/important-asp-net-security-vulnerability.aspx

Posted On Monday, September 20, 2010 9:16 AM | Comments (0)
Wednesday, February 3, 2010
Firefox Delicious Add-in, RefControl, Missing Bookmarks, and Poor Software Design

I have been experiencing an ongoing issue with the Delicious Firefox add-in where the list of local bookmarks didn’t jibe with the list on the Delicious website – I had more locally than on the Delicious website.  After an unfruitful exchange with Delicious support (does anyone ever really get any help from first-level support?  No offense intended, people), I set out to figure out what was going on myself.

Turns out that another Firefox add-in, RefControl, caused the post to the Delicious website to fail when adding new bookmarks.  The Delicious add-in uses a local SQLite database to cache your bookmarks offline to improve performance, so the add-in added the bookmark locally, but the post to the website failed, and the bookmark was never really ‘saved’.  What’s the rub?  The add-in received an ‘authentication failed’ response from the Delicious services, but didn’t notify the user that an error occurred.  Happily, I kept on, thinking my bookmarks were safe and ubiquitously stored on Delicious’ servers.  Bad, bad design, that decision to swallow an exception and not notify the user.

It wasn’t until I noticed the discrepancy between the add-in and the web that I realized there was a problem.  To make matters worse, there’s no way to sync between your local cache and the web, only a one-way sync from the web to your local cache.  So I had to hack up the SQLite database of bookmarks, cull the ones that were missing from the web, and re-add them one by one.  Not a fun thing to make your users do.  Again, shame on you, Delicious.

Incidentally, the fix for using RefControl and Delicious together in Firefox is to add an exception to RefControl:  Allow del.icio.us as Normal.

Posted On Wednesday, February 3, 2010 4:24 PM | Comments (0)
Friday, October 9, 2009
Windows Azure Dev Fabric and the PathTooLongException

I’ve been working on converting a complex, distributed, real-world application to the cloud.  It’s been an exciting journey, but also one filled with little gotchas.  One such gotcha was a failure on startup of dev fabric when running a web role containing a series of WCF services.  The specific exception I got ended up being a PathTooLongException.

I dug into the problem and discovered that the folder hierarchy for the ‘deployed’ artifacts running on the local dev fabric is crazy long.  As my project artifacts were deployed, the dev fabric was throwing the PathTooLongException when it ran into an assembly whose full path exceeded the OS limit.  Because of the length of the root dev fabric deployment path, the flexibility of naming your web/worker roles and dependent assemblies is significantly limited.  Here’s what I found:

The root path of deployments made to dev fabric is something like (on Win 2008, anyway):

C:\Users\{username}\AppData\Local\dftmp\s0\

This isn’t crazy (yet), but each deployment consists of a series of hierarchical folders depending on the roles.  Within each role, dependent assemblies are located still deeper within the folder structure.  So, as an example, one of the assemblies might be located at (this was an actual path; names changed to protect the innocent):

C:\Users\{username}\AppData\Local\dftmp\s0\deployment(27)\res\deployment(27).{ServiceDefName}.{RoleName}.0\AspNetTemp\aspNetTemp\root\07ac5e60\bd64b387\assembly\dl3\2cbe1cb5\591b1f0f_1842ca01\{AssemblyName}

This is crazy long!  Without the username, service definition name, role name, and assembly name, I’m already up to 156 characters.  Considering the path length maximums are 248 for the folder name, and 260 for the complete filename, we’re already pretty far along.

Now, consider this:  sometimes, you use 3rd party assemblies, whose names can’t easily be changed.  Take, for example, one of the Enterprise Library assemblies: Microsoft.Practices.EnterpriseLibrary.Validation.Integration.WCF.dll.  This just happened to be one of the assemblies being used by my WCF services.  Taking that assembly name into account, that’s chewing through another 69 characters, leaving just 23/35 characters total for the username, service definition name, and role name before hitting the directory/path limits, respectively.  That’s not a lot of characters, especially when you consider that some usernames are a concatenation of first and last, such as david.barrett.  Another 13 characters lost.
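
The character budget above works out as follows (a rough sketch; the 156-character overhead and the 69-character assembly figure are taken straight from this post’s estimates):

```csharp
// Illustrative arithmetic only; constants come from the post's estimates.
const int maxDirectory = 248;  // directory name limit
const int maxPath = 260;       // complete filename limit

int devFabricOverhead = 156;   // dev fabric path without the variable tokens
int assemblyAndSeparator = 69; // the Enterprise Library assembly name plus separator

Console.WriteLine(maxDirectory - devFabricOverhead - assemblyAndSeparator); // 23
Console.WriteLine(maxPath - devFabricOverhead - assemblyAndSeparator);      // 35
```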

How do you minimize these tokens?

The ServiceDefName token in the path above can be modified by manually updating the value in ServiceDefinition.csdef and ServiceConfiguration.cscfg without any effect on the code (in other words, you don’t have to actually change the name of the project or folder):

<ServiceDefinition name="A" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">…

The RoleName is taken from the project name.  This is the same name that shows in Visual Studio, and is not necessarily the name of the project file itself.  I typically name projects (and assemblies) with a ‘namespace’ flavor, such as Company.System.Subsystem/Feature.  In this case, I can’t afford that length, so I had to shorten it to something like Svcs after creating the project initially as Company.System.Subsystem.  Blech.

Posted On Friday, October 9, 2009 3:55 PM | Comments (0)
Monday, August 3, 2009
Conferencing Add-In for Outlook: “An error occurred while executing this command”

I experienced this error recently and spent an eternity trying to resolve it.  I finally found the solution to my problem and wanted to document it for others.

For me, the issue only appeared when I tried to invite people to a scheduled meeting.  Under normal circumstances, blank (no invitees) scheduled meetings worked fine.  Ad-hoc meetings worked fine.  Some invitees worked, but others did not.  When an invitee caused the issue to appear, this message would be displayed either when I saved/sent the meeting, or tried to set an invitee as a meeting leader.

It turned out to be caused by the format of the invitee information.  Typically, on an Exchange-based email network, your global address book contains all the people within your company.  Sometimes, emails get sent around that result in employee names being displayed as something like My Name <my.name@company.com> instead of just My Name.  These entries can get cached in a frequent-recipient list that pops down when entering addressees.

Well, it turns out this format causes the Conferencing Add-In to barf.  When entering recipients into a LiveMeeting request, entries in this format (as opposed to the usual Exchange My Name format) will cause something somewhere to fail, resulting in this message.  Keep in mind that the name will look correct in the To: field, even though “behind the scenes” it’s actually the lengthy format above.

To resolve, either select a user from the Global Address Book when entering attendees, or remove the entry from the cached list by hitting Ctrl-Delete while the name is highlighted in the list.  Then enter a name, force a lookup on the user (“Check Names”), and all should be well.

Posted On Monday, August 3, 2009 12:18 PM | Comments (6)
Tuesday, May 12, 2009
Azure Local Storage Endpoints

I’m constantly having to search for the local development storage endpoint Uris when I create a new Azure storage project from scratch, not to mention the account name and shared key, so I’m posting them here for my own reference.  Hopefully others will also find it useful.

<ConfigurationSettings>
        <Setting name="AccountName"
            value="devstoreaccount1"/>
        <Setting name="AccountSharedKey"
            value="Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="/>
        <Setting name="BlobStorageEndpoint"
            value="http://127.0.0.1:10000/"/>
        <Setting name="QueueStorageEndpoint"
            value="http://127.0.0.1:10001/"/>
        <Setting name="TableStorageEndpoint"
            value="http://127.0.0.1:10002/"/>
</ConfigurationSettings>
Posted On Tuesday, May 12, 2009 9:40 AM | Comments (0)
Tuesday, March 10, 2009
Team Build: Part 1

I’ve had the fortune of being involved in some pretty complex and elegant Team Build configurations.  I’ve started to blog about individual points several times, but ended up figuring that I should instead write a series of posts about common goals and general Team Build approaches.  I can’t take credit for all this content.  Some I’ve developed over time, some I’ve learned from others on the web, and yet other stuff I’ve learned from co-workers.  That said, here we go with Part 1.

General Approach

Every person has their personal preference, and this is typically taken to the nth degree with programmers.  What follows are my personal preferences.  Yours may vary.

First, I like to separate any custom team build stuff from the default TFSBuild.proj file.  I’ll update properties already existing in that file, or available via the Microsoft.TeamFoundation.Build.targets file, directly in TFSBuild.proj.  But all my custom stuff exists in a separate file, or files.  For example, I’ll aggregate all my custom tasks into a CustomTasks.targets file, and import that at the bottom of TFSBuild.proj:

<ItemGroup>
    <!--  ADDITIONAL REFERENCE PATH
 The list of additional reference paths to use while resolving references. For example:

     <AdditionalReferencePath Include="C:\MyFolder\" />
     <AdditionalReferencePath Include="C:\MyFolder2\" />
-->
</ItemGroup>

<!-- Add our custom target overrides -->
<Import Project="Environment.proj"/>
<Import Project="CustomTasks.targets" />

</Project>

Here, we’ve imported the CustomTasks.targets file that contains all of our customizations, and we’ve also imported an Environment.proj file that contains metadata about the environment to which we typically deploy this build configuration.  In this way, we can create additional builds for additional environments and only need to change the Environment.proj file.

Within the CustomTasks.targets file, I declare any custom MSBuild tasks I need, as well as define my overrides and custom targets.  (See http://msdn.microsoft.com/en-us/library/aa337604.aspx for a list of overridable team build targets).  An example skeleton is as follows:

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5">

<UsingTask TaskName="XmlMassUpdate" AssemblyFile="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.dll" />
<UsingTask TaskName="Attrib" AssemblyFile="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.dll" />
<UsingTask TaskName="XmlFile.GetValue" AssemblyFile="$(MSBuildExtensionsPath)\Microsoft.Sdc\Microsoft.Sdc.Tasks.dll" />
<UsingTask TaskName="XmlFile.SetValue" AssemblyFile="$(MSBuildExtensionsPath)\Microsoft.Sdc\Microsoft.Sdc.Tasks.dll" />

<Target Name="BuildNumberOverrideTarget">
  <CallTarget Targets="CalculateCustomBuildNumber" />
</Target>

<Target Name="GenerateDocumentation">
  <CallTarget Targets="CreateCustomDocumentation" />
</Target>

<!-- Custom targets removed for brevity. -->

</Project>

For custom build tasks, I typically rely heavily on the Sdc and MSBuild Community tasks (http://www.codeplex.com/sdctasks and http://msbuildtasks.tigris.org/).  There’s not a lot they can’t do, but if you find something missing, you can always develop a custom task yourself.

Adjusting Application Settings During a Deploy

One of the custom tasks I rely *heavily* on is the XmlMassUpdate task.  There are many scenarios where you want to tweak configuration settings when you deploy to an environment, and this task is perfect for it.  In a nutshell, you provide a set of configuration data (in this case, Xml), and the task dynamically updates/removes/adds the config content to match your desired settings.  I typically embed the configuration directly in the custom build file, but you could also declare it externally.  MSBuild supports including “random” Xml in the build file via the <ProjectExtensions/> node (anything within this node is ignored by MSBuild).  So, using this and the XmlMassUpdate task, we can do the following (example only):

<ProjectExtensions xmlns:xmu="urn:msbuildcommunitytasks-xmlmassupdate">
  <MyAppConfig xmlns="">
    <configuration>
      <appSettings>
        <add xmu:key="key" key="MyDeployedValue" value="One" />
        <add xmu:key="key" key="MySecondDeployedValue" value="Two" />
      </appSettings>
    </configuration>
  </MyAppConfig>
</ProjectExtensions>

<Target Name="ConfigOverride">
  <Attrib Files="$(SolutionRoot)\MyApp.config" ReadOnly="false" />
  <XmlMassUpdate ContentFile="$(SolutionRoot)\MyApp.config"
                 ContentRoot="/configuration"
                 SubstitutionsFile="$(BuildDirectory)\BuildType\CustomTasks.targets"
                 SubstitutionsRoot="/msb:Project/msb:ProjectExtensions/MyAppConfig/configuration"
                 NamespaceDefinitions="msb=http://schemas.microsoft.com/developer/msbuild/2003" />
</Target>

The Attrib task is there because, by default, files retrieved from source control for use during a build are read-only.  You’ll want to check the docs on how to use XmlMassUpdate, but in short, this says (assuming you call the ConfigOverride target somewhere else in your build process):

On the file MyApp.config in my solution path, replace/modify content starting at the /configuration node, using the content included in the CustomTasks.targets file (this same file) starting with the node located at /Project/ProjectExtensions/MyAppConfig/configuration.  In this case, it will update the appSettings/add node with a “key” attribute of ‘MyDeployedValue’ to have a “value” attribute of ‘One’, and similarly for the second entry.
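
To make that concrete, suppose the MyApp.config checked into source control starts out like this (values hypothetical):

```xml
<configuration>
  <appSettings>
    <add key="MyDeployedValue" value="Zero" />
    <add key="MySecondDeployedValue" value="Ninety" />
  </appSettings>
</configuration>
```

After the ConfigOverride target runs, the two value attributes read “One” and “Two”; nodes not matched via the xmu:key attribute are left alone.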

I use this for all manner of things, from updating configs, to updating project files (they’re just Xml after all), to updating Sandcastle help file docs, to adjusting ClickOnce settings on a deploy.  We’ll see more about these in the next several posts.

Posted On Tuesday, March 10, 2009 4:32 PM | Comments (0)
Comment and opinions are my own and do not reflect on my company in any way.