Lessons Learned from the Death of CodeSpaces

For several years, I had been a customer of CodeSpaces, a popular host for Subversion source control repositories.  (If you’re not a techie – Subversion is a server application used to manage program code, allowing collaboration between teams of programmers.)  I used CodeSpaces to host the code repositories for many of my consulting clients, until…

A few weeks ago, I was trying to check in some code and I got a strange error message from Subversion.  Then I tried going to the CodeSpaces website to log into my account, and I was met with a page titled “CodeSpaces Is Down!”  The page explained that someone had compromised their Amazon EC2 hosting account, threatened to destroy their business if they didn’t pay a large sum of money, and then deleted many of their virtual servers and backups when they didn’t comply.  Because the damage to the business was so great (both financially and to their brand reputation), CodeSpaces decided to cease operations entirely.

I contacted CodeSpaces support, and thankfully they were able to provide me with backups of my customers’ Subversion repositories.  Then I spent 2 days setting up my own Subversion server (with VisualSVN Server), importing all of the backups into SVN, setting up a Git server and migrating from SVN to Git.

Lesson #1 – Always Use Multi-Factor Authentication

If you have the option, always take advantage of multi-factor authentication, sometimes called “two-factor authentication” or “two-step verification.”  Many websites and service providers offer it.  In addition to your password, multi-factor authentication normally uses a physical device (typically a phone) to verify that you are really the account owner.  When you log in with your password, you must also enter a single-use verification code, delivered via text message to your phone or generated by an app on your Android/iOS device.  Some services even support a physical key that plugs into your computer’s USB port, but most often websites rely on an authenticator app such as Google Authenticator or Authy.
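To make those verification codes less mysterious, here is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238, which is what authenticator apps like Google Authenticator and Authy implement.  The secret below is the RFC’s test key; real services share a per-account secret with your app (usually via a QR code).

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// A minimal sketch of TOTP (RFC 6238), the algorithm behind most
// authenticator-app verification codes. Illustrative only.
public static class Totp
{
    public static int Compute(byte[] secret, DateTimeOffset time, int digits = 6, int stepSeconds = 30)
    {
        // Count 30-second intervals since the Unix epoch (big-endian 8 bytes)
        long counter = time.ToUnixTimeSeconds() / stepSeconds;
        byte[] counterBytes = BitConverter.GetBytes(counter);
        if (BitConverter.IsLittleEndian) Array.Reverse(counterBytes);

        using (var hmac = new HMACSHA1(secret))
        {
            byte[] hash = hmac.ComputeHash(counterBytes);

            // Dynamic truncation (RFC 4226): pick 4 bytes at an offset
            // taken from the low nibble of the last hash byte
            int offset = hash[hash.Length - 1] & 0x0F;
            int binary = ((hash[offset] & 0x7F) << 24)
                       | (hash[offset + 1] << 16)
                       | (hash[offset + 2] << 8)
                       | hash[offset + 3];

            return binary % (int)Math.Pow(10, digits);
        }
    }

    public static void Main()
    {
        byte[] secret = Encoding.ASCII.GetBytes("12345678901234567890");
        // RFC 6238 test vector: at t = 59s this yields 287082
        Console.WriteLine(Compute(secret, DateTimeOffset.FromUnixTimeSeconds(59)).ToString("D6"));
    }
}
```

Both your phone and the server run this same calculation over the shared secret and the current time, which is why the codes match without any network connection.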

Amazon EC2 offers multi-factor authentication.  If CodeSpaces had used it, it is highly unlikely that the hacker could have compromised their account.

I use multi-factor authentication on a number of services, but I’ve been lazy about enabling it everywhere.  I plan to add it to more services whenever possible.  There is a great list of websites that offer two-factor authentication; I’ll be going through it and adding them to my Authy app.

Lesson #2 – Use a Unique Password for Each Website

Security experts have recommended this for years, but few people actually do it.  You should never use the same password for more than one website.  You’re taking an awful risk if you’re using your banking website’s password for dozens of other sites.

Lately I’ve been migrating all of my passwords into 1Password.  If you haven’t heard of it, 1Password is a fantastic password manager for desktop and mobile devices.  It securely stores and syncs all of your passwords across all of your devices (if you so choose).  It plugs into your web browsers to generate secure passwords, and it can automatically fill them in when you need to log into a website.

This means you can have a unique complex password for every website, but you only need to remember a single master password (“one password”).  If one of your accounts gets compromised, hackers won’t be able to get into any of your other accounts.  This is extremely important for any website that has access to your financial info (bank accounts, credit cards).
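To illustrate what “a unique complex password” means in practice, here is a rough sketch of the kind of generator a password manager uses.  The alphabet and length are my own illustrative choices, not 1Password’s actual algorithm.

```csharp
using System;
using System.Security.Cryptography;

// Illustrative sketch of cryptographically random password generation;
// not any particular password manager's real implementation.
public static class PasswordGenerator
{
    // Ambiguous characters (l, 1, I, O, 0) omitted for readability
    public const string Alphabet =
        "abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ23456789!@#$%^&*";

    public static string Create(int length = 20)
    {
        using (var rng = RandomNumberGenerator.Create())
        {
            var chars = new char[length];
            var buffer = new byte[4];
            for (int i = 0; i < length; i++)
            {
                // 4 fresh random bytes per character; the tiny modulo
                // bias is acceptable for an illustration
                rng.GetBytes(buffer);
                uint value = BitConverter.ToUInt32(buffer, 0);
                chars[i] = Alphabet[(int)(value % (uint)Alphabet.Length)];
            }
            return new string(chars);
        }
    }
}
```

A 20-character password drawn from a 66-character alphabet has far more entropy than anything you would memorize, and the manager remembers it so you don’t have to.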

Lesson #3 – Keep Backups in Multiple Places

CodeSpaces had regular backups of their Amazon virtual machines, but they were doing all backups to Amazon S3.  In other words, once the hacker had access to their Amazon web services account, they were free to delete the backups along with the virtual servers.  Had CodeSpaces backed up to a different third party service, or even done regular FTP downloads to their local office, they could have recovered from the disaster.

I personally use Windows Backup to regularly image my desktop to an external hard drive.  I also use Carbonite to back up all of my important files in the cloud.  I also back up all of my customers’ source code and database images on third party servers.  When it comes to backups, it’s good to be “redundantly redundant.”

Lesson #4 – Git is “Safer” than Subversion

The software development world has been moving toward Git source control in recent years, and I have been lagging behind.  I first tried Git a few years ago and found the tools for Windows and Visual Studio to be lacking.  I missed having a simple GUI interface.  I also found that the “distributed repository” model encouraged developers to do a lot of work on their own machines before checking their code into the central repository.  This led to a lot of headaches, with developers breaking each other’s code because they were sharing their work too infrequently.

I kept my opinion that “Subversion is better / easier” right up until CodeSpaces went out of business.  A major drawback to Subversion and other “central repository” systems is that the server has the only complete copy.  If the server breaks, or your Subversion host goes out of business, you’re out of luck.  With Git, every computer has a complete copy of the entire repository.  If the server breaks, or your Git host goes out of business, you just connect to a new server and push the repository.

I have found in recent months that the Git tools for Windows and Visual Studio have improved.  There is now a TortoiseGit tool that is very similar to TortoiseSVN.  There is also a decent Git Source Control Provider for Visual Studio, although it requires you to install 3 other tools first (msysGit, Git Extensions and TortoiseGit).

I also discovered that there is a very easy-to-install Git server (Bonobo Git Server) that works on Microsoft web servers.  It works like a normal website inside IIS, so you don’t need to install Apache or any other Linux-derived software.  I believe it even works on shared Windows hosts as long as it has the right privileges.

In Summary

I suggest that everyone should:

  • Use multi-factor authentication whenever possible.
  • Use distinct secure passwords for every website, especially those with financial info.
  • Use a password manager to keep track of your secure passwords.
  • Keep backups of important files in multiple places, including in your home/office and with cloud-based backup services.  A single backup location is not enough.
  • If you’re a software developer, move to a distributed repository system like Git.

People believe that “it could never happen to me…” right up until it happens to them.  Learn from other people’s mistakes.  It’s better to be safe than sorry.


Performance Enhancements with LINQ and PLINQO


I recently completed a performance review for a client’s ASP.NET MVC web app that was running slowly.  Thankfully, their application was using the LINQ-to-SQL version of PLINQO, so it was easy to identify and resolve the data access bottlenecks.  Below, I have explained some of the technologies and techniques that I used to solve their server-side performance issues.

Why I Still Use LINQ-to-SQL

You may be wondering why anyone would continue to use LINQ-to-SQL (L2S) when Microsoft is pushing Entity Framework (EF) on .NET developers.  There are several reasons:

  1. Lightweight – Compared to EF, L2S is much lighter.  There is less code and less overhead.
  2. Simplicity – I don’t mind complexity as long as it serves a purpose, but the bloat of EF is usually not necessary.
  3. Performance – Performance tests have consistently shown faster application performance with L2S.
  4. Cleaner SQL – The actual T-SQL generated by L2S tends to be simpler and cleaner than that of EF.
  5. Enhancements – PLINQO adds a ton of features to L2S, including caching, future queries, batched queries, bulk update/delete, auditing, a business rule engine and more.  Many of these features are not available with Entity Framework out-of-the-box.

Didn’t Microsoft say LINQ-to-SQL was dead?

Contrary to popular belief, Microsoft has never said that LINQ to SQL is dead, and improvements are being made according to Damien Guard’s LINQ to SQL 4.0 feature list.

Microsoft also released the following statement about their plan to continue support for LINQ to SQL:

Question #3: Where does Microsoft stand on LINQ to SQL?

Answer: We would like to be very transparent with our customers about our intentions for future innovation with respect to LINQ to SQL and the Entity Framework.

In .NET 4.0, we continue to invest in both technologies. Within LINQ to SQL, we made a number of performance and usability enhancements, as well as updates to the class designer and code generation. Within the Entity Framework, we listened to a great deal of customer feedback and responded with significant investments including better foreign key support, T4 code generation, and POCO support.

Moving forward, Microsoft is committing to supporting both technologies as important parts of the .NET Framework, adding new features that meet customer requirements. We do, however, expect that the bulk of our overall investment will be in the Entity Framework, as this framework is built around the Entity Data Model (EDM). EDM represents a key strategic direction for Microsoft that spans many of our products, including SQL Server, .NET, and Visual Studio. EDM-based tools, languages and frameworks are important technologies that enable our customers and partners to increase productivity across the development lifecycle and enable better integration across applications and data sources.

Reasons to Use Entity Framework

The main reasons to use EF are:

  • Support for Other Databases – It can be used with databases other than Microsoft SQL Server, including MySQL and Oracle.  Of course, if you’re programming an application in .NET, you’re probably using MSSQL anyway.
  • POCO Support – This stands for “Plain Old CLR Objects” (often rendered as “Plain Old C# Objects”).  The gist is that you can take a code-first approach to designing your data entities, as opposed to the database-first approach used by many ORMs.
  • Abstraction – EF lets you add greater abstraction to your data objects so they aren’t as tightly coupled with your database schema.  With L2S, all of your entities are locked into the Active Record Pattern where objects and properties are mapped directly to database tables and columns.

In my opinion, those arguments aren’t good enough to outweigh the benefits of the L2S version of PLINQO in most business scenarios.  If you really need to use EF, you should check out the Entity Framework version of PLINQO.  It doesn’t have all of the features of the L2S version, but it provides some of the same benefits.

What About NHibernate?

There is also an NHibernate version of PLINQO.  NHibernate is just as bloated as EF, but it is the most mature Object Relational Mapper (ORM) available.  As far as I know, it has more features than any other ORM, and there are many extensions available, along with a ton of documentation and support forums.  This is the only flavor of PLINQO that supports multiple database technologies including MSSQL, MySQL, Oracle, DB2 and others.

How to Identify Query Performance Issues

There are two common tools available to intercept the underlying SQL activity that is being executed by LINQ:

  • SQL Server Profiler – This SQL client tool is included with Developer, Standard and Enterprise editions of SQL Server.  It works best when using a local copy of the database.  Tip – It helps to add a filter to the trace settings to only display activity from the SQL login used by the web application.
  • LINQ to SQL Profiler – This is a code-based logging tool.  It is less convenient than SQL Profiler, but it works well if you are connecting to a remote SQL Server.  http://www.codesmithtools.com/product/frameworks/plinqo/tour/profiler

Eager Loading versus Lazy Loading

By default, LINQ-to-SQL uses “lazy loading” in database queries.  This means that only the data model(s) specifically requested in the query will be returned.  Related entities (objects or collections related by foreign key) are not immediately loaded.  However, the related entities will be automatically loaded later if they are referenced.  Essentially, only the primary object in the object graph will be hydrated during the initial query.  Related objects are hydrated on-demand only if they are called later.

In general, lazy loading performs well because it limits the amount of data that is fetched from the database.  But in some scenarios it introduces a large volume of redundant queries back to SQL Server.  This often occurs when the application encounters a loop which references related entities.  Following is an example of that scenario.

Lazy loading sample controller code (C#):

public ActionResult Index() { 
    MyDataContext db = new MyDataContext();
    List<User> users = db.Users.ToList();
    return View(users);  
}

Lazy loading sample view code (Razor):

@model IEnumerable<User>
@foreach (var user in Model) {
    <b>Username:</b> @user.Username <br />
    <ul>
    @foreach (var role in user.RoleList) {
        <li>@role.RoleName - @role.RoleType.RoleTypeName</li> 
    }
    </ul>
}

With the above code, the List<User> object will initially be filled with data from the Users table.  When the MVC Razor view is executed, the code loops through each User to output the Username and the list of Roles.  Lazy loading will automatically pull in the RoleList for each User, executing a separate SQL query per User record.  Then another query for RoleType will be executed for every Role (@role.RoleType.RoleTypeName).

In our hypothetical example, let’s assume there are 100 Users in the database.  Each User has 3 Roles attached (N:N relationship).  Each Role has 1 RoleType attached (N:1 relationship).  In this case, 401 total queries will be executed:  1 query returning 100 User records, then 100 queries fetching Role records (one per User, 3 Roles each), then 300 queries fetching RoleType records (one per Role).  L2S is not even smart enough to cache the RoleType records, even though the same records are requested many times.  Granted, the lookup queries are simple and efficient (they fetch related data by primary key), but the large volume of round-trips to SQL Server is unnecessary.

Instead of relying on lazy loading to pull related entities, “eager loading” can be used to hydrate the object graph ahead of time.  Eager loading can proactively fetch entities of any relationship type (N:1, 1:N, 1:1, or N:N).  With L2S, eager loading is fully configurable so that developers can limit which relationships are loaded.

There are two methods of eager loading with L2S, both of which are part of Microsoft’s default implementation (i.e. these are not PLINQO-specific features).  The first is to use a projection query to return results into a flattened data object called a “view model.”  The second is to use DataLoadOptions to specify the relationship(s) to load.

View Model Approach

This approach to eager loading works well for many-to-one (N:1) and one-to-one (1:1) relationships but it does not handle many-to-many (N:N) relationships.  However, N:1 is the most common type of lookup, so it works for the majority of scenarios.

The idea is to cast the result set into a “view model” data object which is a simple container for all of the required data output.  This can be an anonymous type, but it is generally recommended that developers use a defined data type (see the PLINQO Future Queries section below for details).

Because our first example used an N:N relationship (Users to Roles), it cannot be improved using this methodology.  However, the approach could be useful for loading other User relationships.  Following is an example of eager loading data from a related UserProfile entity (1:1) and a UserType entity (N:1).

View model sample controller code (C#):

public class UserViewModel {
    public string Username { get; set; }
    public string UserTypeName { get; set; }
    public string Email { get; set; }
    public string TwitterHandle { get; set; }
}

public ActionResult Index() { 
    MyDataContext db = new MyDataContext();
    List<UserViewModel> users = db.Users
        .Select(u => new UserViewModel() { 
            Username = u.Username,
            UserTypeName = u.UserType.UserTypeName,
            Email = u.UserProfile.Email,
            TwitterHandle = u.UserProfile.TwitterHandle
        })
        .ToList();
    return View(users);  
}

View model sample view code (Razor):

@model IEnumerable<UserViewModel>
@foreach (var user in Model) {
    <b>Username:</b> @user.Username <br />
    <b>User Type:</b> @user.UserTypeName <br />
    <b>Email:</b> @user.Email <br />
    <b>Twitter:</b> @user.TwitterHandle <br />
}

With this controller code, the LINQ query loads all of the UserViewModel properties in one SQL query (the query performs the appropriate joins with the UserType and UserProfile tables).  The actual SQL output performs inner or outer joins (depending on whether the foreign key column is nullable) to collect all of the data in one round-trip.

Another benefit to the view model approach is a reduction of the volume of data sent over the network.  Ordinarily, LINQ pulls all columns from each database table, regardless of how many data columns are actually displayed later.  When a simplified return type is used, you are explicitly specifying the columns that should be returned from SQL Server.  This is beneficial if your database table contains a large number of columns, or it has columns with large data sizes like varchar(MAX) or other BLOBs.

DataLoadOptions Approach

An easier way to eager load data is to specify DataLoadOptions for the LINQ database context.  Each relationship is added to the DataLoadOptions via the LoadWith method.  Note:  This can only be done before any queries are executed.  The DataLoadOptions property may not be set or modified once any objects are attached to the context.  See http://msdn.microsoft.com/en-us/library/system.data.linq.dataloadoptions%28v=vs.110%29.aspx for details.

All relationship types are allowed in DataLoadOptions, therefore it is the only way to eager load N:N relationships.  Also, there is no limit to the number of association types that can be eager loaded.  LINQ is also intelligent enough to ignore any associations that do not apply to the query.  Therefore, a single “master” DataLoadOptions can be applied to multiple queries throughout the application.

Revisiting the first example (Users and Roles), here is a slightly modified version.

DataLoadOptions sample controller code (C#):

public ActionResult Index() { 
    MyDataContext db = new MyDataContext();

    DataLoadOptions options = new DataLoadOptions();
    options.LoadWith<User>(u => u.RoleList); //Load all Roles for each User
    options.LoadWith<Role>(r => r.RoleType); //Load the RoleType for each Role
    db.LoadOptions = options;

    List<User> users = db.Users.ToList();
    return View(users);  
}

DataLoadOptions sample view code (Razor):

@model IEnumerable<User>
@foreach (var user in Model) {
    <b>Username:</b> @user.Username <br />
    <ul>
    @foreach (var role in user.RoleList) {
        <li>@role.RoleName - @role.RoleType.RoleTypeName</li> 
    }
    </ul>
}

Note that the Razor view code has not changed at all.  The only difference is the controller code which adds DataLoadOptions to the LINQ data context.  Although the view code is identical, this time only a single SQL query is executed, compared to 401 queries for the original controller code sample.

It is also worth noting that view model and DataLoadOptions approaches can be used together (i.e. they are not mutually exclusive).  Any LoadWith relationships will be processed even when used inside a projection query.

View model with DataLoadOptions sample controller code (C#):

public class UserViewModel {
    public string Username { get; set; }
    public string UserTypeName { get; set; }
    public string Email { get; set; }
    public string TwitterHandle { get; set; }
    public IEnumerable<Role> RoleList { get; set; }
}

public ActionResult Index() { 
    MyDataContext db = new MyDataContext();

    DataLoadOptions options = new DataLoadOptions();
    options.LoadWith<User>(u => u.RoleList); //Load all Roles for each User
    options.LoadWith<Role>(r => r.RoleType); //Load the RoleType for each Role
    db.LoadOptions = options;

    List<UserViewModel> users = db.Users
        .Select(u => new UserViewModel() { 
            Username = u.Username,
            UserTypeName = u.UserType.UserTypeName,
            Email = u.UserProfile.Email,
            TwitterHandle = u.UserProfile.TwitterHandle,
            RoleList = u.RoleList
        })
        .ToList();
    return View(users);  
}

View model with DataLoadOptions sample view code (Razor):

@model IEnumerable<UserViewModel>
@foreach (var user in Model) {
    <b>Username:</b> @user.Username <br />
    <b>User Type:</b> @user.UserTypeName <br />
    <b>Email:</b> @user.Email <br />
    <b>Twitter:</b> @user.TwitterHandle <br />
    <b>Roles:</b>
    <ul>
    @foreach (var role in user.RoleList) {
        <li>@role.RoleName - @role.RoleType.RoleTypeName</li> 
    }
    </ul>
}

This example is “the best of both worlds” because it pulls all required data in a single query, but it does not pull unnecessary columns from the User, UserType or UserProfile tables.  It would still pull all columns from the Role and RoleType tables.

PLINQO Caching

PLINQO adds intelligent caching features to the LINQ data context.  Caching is implemented as a LINQ query extension.  The .FromCache() extension is used for collections, and .FromCacheFirstOrDefault() is used for single return objects.

PLINQO caching example (C#):

//Returns one user, or null if not found
User someUser = db.Users.Where(u => u.Username == "administrator").FromCacheFirstOrDefault(); 

//Returns a collection of users, or an empty collection if not found
IEnumerable<User> adminUsers = db.Users.Where(u => u.Username.Contains("admin")).FromCache();

Note:  These query extension methods change the return type for collections.  Normally LINQ returns an IQueryable<T>, but the cache extension methods return IEnumerable<T> instead.  However, it is still possible to convert the output with .ToList() or .ToArray().

Converting cached collections sample (C#):

//Convert IEnumerable<User> to list
List<User> userList = db.Users.FromCache().ToList();

//Convert IEnumerable<User> to array
User[] userArray = db.Users.FromCache().ToArray();

The caching in PLINQO also allows developers to specify a cache duration.  This can be passed as an integer (the number of seconds to retain the cache), or as a string that refers to a caching profile in the web.config or app.config.

Cache duration sample (C#):

//Cache for 5 minutes (300 seconds)
IEnumerable<User> users = db.Users.FromCache(300);

//Cache according to the "Short" profile in web.config
IEnumerable<Role> roles = db.Roles.FromCache("Short");

The “Short” value above refers to a cache profile name in the web.config or app.config settings.

Cache settings in web.config (XML):

I recommend creating three caching profiles in the web.config.  The first, called “Short,” has a five-minute duration.  Because “Short” is set as the default profile, any .FromCache() calls will use it unless another profile is specified.

<configSections>
    <section name="cacheManager" type="CodeSmith.Data.Caching.CacheManagerSection, CodeSmith.Data" />
</configSections>

<cacheManager defaultProvider="HttpCacheProvider" defaultProfile="Short">
    <profiles>
        <add name="Brief" description="Brief cache" duration="0:0:10" />
        <add name="Short" description="Short cache" duration="0:5:0" />
        <add name="Long" description="Long cache" duration="1:0:0" />
    </profiles>
    <providers>
        <add name="HttpCacheProvider" description="HttpCacheProvider" type="CodeSmith.Data.Caching.HttpCacheProvider, CodeSmith.Data" />
    </providers>
</cacheManager>

The “Short” profile will cache items for 5 minutes.  This is a good choice for most scenarios.  There is also a profile called “Long” which will cache for 1 hour, best suited for reference data which rarely changes.  There is also a special-case profile called “Brief” which is useful in scenarios where a single data set is requested repeatedly in a single page load.

CacheProfile class (C#):

You may want to create a CacheProfile.cs class to standardize your references.  Here is an example that can be customized to your needs.

/// <summary>
/// Cache duration names correspond to caching profiles in the web.config or app.config
/// </summary>
public static class CacheProfile
{
    /// <summary>
    /// 10 second cache duration.
    /// </summary>
    public const string Brief = "Brief";

    /// <summary>
    /// 5 minute cache duration.
    /// </summary>
    public const string Short = "Short";

    /// <summary>
    /// 1 hour cache duration.
    /// </summary>
    public const string Long = "Long";
}

CacheProfile class usage (C#):

IEnumerable<User> briefUsers = db.Users.FromCache(CacheProfile.Brief); //Uses "Brief" profile
IEnumerable<User> shortUsers = db.Users.FromCache();                   //Uses the default "Short" profile
IEnumerable<User> longUsers = db.Users.FromCache(CacheProfile.Long);   //Uses "Long" profile

PLINQO’s caching system has additional features like cache groups, explicit cache invalidation, and a cache manager for non-LINQ objects.  See http://www.codesmithtools.com/product/frameworks/plinqo/tour/caching for full documentation.

PLINQO Future Queries

The “futures” capability of PLINQO allows for intelligent batching of queries to reduce round-trips to SQL Server.  This is particularly helpful for MVC because multiple objects are often sent to the same view.  Because the view is executed after the controller action, often all of the objects passed can be batched in a single SQL query.

Futures usage in controller action (C#):

public ActionResult EditUser(int userId) { 
    MyDataContext db = new MyDataContext();
    var userFuture = db.Users.Where(u => u.UserID == userId).FutureFirstOrDefault(); //Deferred
    var roleFuture = db.Roles.Future();   //Deferred
    var stateFuture = db.States.Future(); //Deferred

    //Accessing any of the future results executes all pending futures in one batch
    User user = userFuture.Value;
    ViewBag.RoleOptions = new SelectList(roleFuture, "RoleID", "RoleName");
    ViewBag.StateOptions = new SelectList(stateFuture, "StateID", "StateName", user.StateID);
    return View(user);
}

Instead of executing these three queries independently, PLINQO batches them into a single round-trip to SQL Server.  The Future() extension builds on LINQ-to-SQL’s deferred execution feature: none of the queries actually runs until one of the results is enumerated.

Enumeration occurs when:

  1. Any code attempts to access the contents of the collection, usually by looping through the collection (for each item in collection), or…
  2. The objects in the collection are counted.  For example: RoleOptions.Count(), or…
  3. The collection is converted to another type through .ToList() or .ToArray().
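The deferred-execution semantics behind these triggers are not unique to PLINQO; plain LINQ-to-Objects behaves the same way, as this small self-contained sketch (my own illustration, not PLINQO code) demonstrates:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Deferred execution demo: the Where predicate does not run until the
// sequence is enumerated (here, by Count()).
class DeferredDemo
{
    static void Main()
    {
        var log = new List<int>();
        int[] ids = { 1, 2, 3, 4 };

        // Building the query executes nothing yet
        IEnumerable<int> evens = ids.Where(i => { log.Add(i); return i % 2 == 0; });
        Console.WriteLine(log.Count);     // 0 - query defined but not enumerated

        // Counting enumerates the sequence (trigger #2 above)
        Console.WriteLine(evens.Count()); // 2
        Console.WriteLine(log.Count);     // 4 - predicate ran for every element
    }
}
```

Future() exploits exactly this window: as long as nothing enumerates the results, PLINQO can keep collecting pending queries and ship them to SQL Server together.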

Caveats of future queries:

  • If you want to use .Future() then you should not use .ToList() or .ToArray().  This causes immediate enumeration of the query.  Frankly, there is rarely a need to immediately convert to a list or an array.  The .Future() return type of IEnumerable<T> works perfectly well for most scenarios.
  • Razor views tend to throw errors if futures are used with an anonymous type.

If you want to use futures, make sure that your results are returned into a known data type.  In other words, the following will throw a runtime error when the Razor view is executed.

Anonymous type which causes runtime error (C#):

//Fails at runtime when the view enumerates the collection
ViewBag.UserOptions = new SelectList(db.Users.Select(u => new { ID = u.UserID, FirstName = u.FirstName }).Future(), "ID", "FirstName");

However, the same Future query will work fine if a type is specified.

Future query with known type (C#):

//Works fine at runtime because UserViewModel is a known type
ViewBag.UserOptions = new SelectList(db.Users.Select(u => new UserViewModel() { ID = u.UserID, FirstName = u.FirstName }).Future(), "ID", "FirstName");

Another nice feature is that PLINQO allows caching and futures together.  There are extension methods which combine both features:

Futures with caching sample (C#):

//Returns one user, or null if not found
User someUser = db.Users.Where(u => u.Username == "administrator").FutureCacheFirstOrDefault(); 

//Returns a collection of users 
IEnumerable<User> adminUsers = db.Users.Where(u => u.Username.Contains("admin")).FutureCache();

The term “FromCache” is simply changed to “FutureCache.”  These extension methods support the same cache duration settings as FromCache().

For more details on PLINQO Future Queries, see http://www.codesmithtools.com/product/frameworks/plinqo/tour/futurequeries

Best Practices for Query Reuse

Often, there are a lot of common queries that are repeated throughout an application.  Ideally, your application should have a data service layer or business logic layer to house common queries for easy reuse.  However, if that is not currently in place, the simplest solution is to add methods to each controller as appropriate.  This would still improve code reuse with minimal programming changes.

Sample redundant controller code (C#):

public ActionResult FirstPage() { 
    ViewBag.CustomerList = new SelectList(db.Customers.OrderBy(c => c.CustomerID), "CustomerID", "CustomerName");
    return View();
}

public ActionResult SecondPage() { 
    ViewBag.CustomerList = new SelectList(db.Customers.OrderBy(c => c.CustomerID), "CustomerID", "CustomerName");
    return View();
}

Suggested replacement code (C#):

public IEnumerable<SelectListItem> GetCustomerList() {
    return (from c in db.Customers
            orderby c.CustomerName
            select new SelectListItem() { Value = c.CustomerID.ToString(), Text = c.CustomerName })
           .FutureCache(CacheProfile.Short);
}

public ActionResult FirstPage() { 
    ViewBag.CustomerList = new SelectList(GetCustomerList(), "Value", "Text");
    return View();
}

public ActionResult SecondPage() { 
    ViewBag.CustomerList = new SelectList(GetCustomerList(), "Value", "Text");
    return View();
}

This reduces the amount of redundant copy-paste query code while also implementing Futures and Caching.  Because the return type is IEnumerable<SelectListItem> it avoids the anonymous type issues described above.  It can also handle lists where a selected value must be specified.

Sample selected value in a SelectList (C#):

public IEnumerable<SelectListItem> GetCountryList() {
    return (from c in db.Countries
            orderby c.CountryName
            select new SelectListItem() { Value = c.CountryID.ToString(), Text = c.CountryName })
           .FutureCache(CacheProfile.Short);
}

public ActionResult EditUser(int userId) { 
    User user = db.Users.GetByKey(userId);
    ViewBag.CountryList = new SelectList(GetCountryList(), "Value", "Text", user.CountryID);
    return View(user); 
}

On a similar note, controllers often reuse the same relationships in multiple actions.  Because of this, it is usually beneficial to create a private method that specifies the common DataLoadOptions.  This is preferably done in a business layer, but the controller is acceptable in a pinch.

MVC Controller DataLoadOptions sample (C#):

private DataLoadOptions EagerLoadUserRelationships() {
    DataLoadOptions options = new DataLoadOptions();
    options.LoadWith<User>(u => u.Country);
    options.LoadWith<User>(u => u.RoleList);
    options.LoadWith<Role>(r => r.RoleType);
    return options;
}

public ActionResult Index() {
    MyDataContext db = new MyDataContext();
    db.LoadOptions = EagerLoadUserRelationships();
    IEnumerable<User> userList = db.Users.Where(...).Future();
    return View(userList);
}

Other Performance Improvements

PLINQO has a number of other performance improvements including:

  • Bulk updates (update many records with a single command).
  • Bulk deletion (delete many records with a single command).
  • Stored procedures with multiple result sets.
  • Batch queries.

These features are documented here:  http://www.codesmithtools.com/product/frameworks/plinqo/tour/performance

I hope you found this useful.  Happy coding!

Fix for Login Error After DNN 7 Upgrade

Short Version

When upgrading from DNN 6.x to DNN 7.x, the installation wizard seems to miss an important setting in the web.config.  It also does not delete old DLLs from the /bin/ folder.  Because you are performing an upgrade rather than a clean installation, there are two versions of the AspNetMembershipProvider available, and the web.config may point to the wrong one.

Here is the fix.  Update your web.config to change the AspNetMembershipProvider section so that it uses the DotNetNuke.dll assembly instead of the DotNetNuke.Provider.AspNetProvider.dll assembly.

    <members defaultProvider="AspNetMembershipProvider">
      <providers>
        <clear />
        <add name="AspNetMembershipProvider"
          type="DotNetNuke.Security.Membership.AspNetMembershipProvider, DotNetNuke"
          providerPath="~\Providers\MembershipProviders\AspNetMembershipProvider\" />
      </providers>
    </members>

After removing “.Provider.AspNetProvider” from the configuration, everything should work fine.

The Full Story

Yesterday I spent several hours debugging and fixing a DotNetNuke installation after upgrading from version 6.2.6 to 7.2.1.  The upgrade appeared to work without any errors.  However, when I tried to log in, I got an error.  I tried searching around for a resolution.  I found a number of similar complaints on the DNN forums but nobody had solved the problem.

Since I’m a software engineer, I decided to try to dig into the source code.  I found the underlying error message and figured out that there was a problem occurring inside the UserController.cs class file.  Whenever it would call MembershipProvider.Instance().UpdateUser(user) it would blow up with the error message, “System.ArgumentException: Parameter count does not match Parameter Value count.”  I determined that this was due to a mismatch of C# code sending the wrong parameters to a stored procedure (dbo.UpdateUser).

I looked through the source and found that the only membership provider instance was AspNetMembershipProvider.cs.  I tried setting breakpoints in that membership provider, but Visual Studio never reached them.  I tried adding logging but nothing was output.  I searched around Google some more, and here is the only workaround that I could find:  http://www.itfunk.com/2013/08/26/dot-net-nuke-a-critical-error-has-occurred-an-unexpected-error-has-occurred/  That blogger suggested changing the stored procedure definition to fix the mismatch.  But I was looking right at the database, and the C# code matched the stored procedure parameters.

Finally, a light bulb came on:  The new AspNetMembershipProvider code was not being reached at all.  I pulled up the source code for DNN 6 and compared it with DNN 7, and I found that the membership provider had been moved into the “main” DotNetNuke.dll assembly.  Previously it was in its own assembly (DotNetNuke.Provider.AspNetProvider.dll).  So there were two versions of the membership provider in the /bin/ folder, and the web.config was still pointing at the old DNN 6 version.  I updated my web.config to point to the right assembly and VOILA!  Problem solved.

In summary, upgraded DNN 7 installations may still be referencing the old DNN 6 authentication provider.  This produces an error when the authentication provider calls the UpdateUser stored procedure during login.  The DNN 7 version of the stored procedure has more parameters than DNN 6 did, causing the “parameter count” error.  A simple web.config change (shown above) updates DNN to use the correct AspNetMembershipProvider.

Maybe I can send a bill for my consulting work to DNN Corp.  🙂

Why I Avoid Stored Procedures (And You Should Too)

Subtitle:  Object Relational Mappers are the New Standard Practice for Application Development

Okay – DBAs are already upset after reading the title.  I understand.  Stored procedures have been the standard practice of most professional software developers for more than a decade.  But like my recommendations on Flash, there are newer and better options than some old tried-and-true technologies.

I’ll have you know that I’m not an extremist on this subject.  I have personally worked on many applications that use stored procedures, and I have written plenty of them myself to follow a client’s “established best practices.”  Stored procedures still have a place in some scenarios.  However, given a choice, I avoid stored procedures as often as possible.

Why Do Developers Use Stored Procedures?

Traditionally, there have been a lot of reasons given for the use of stored procedures.  The most popular arguments include:

  • Performance – The query plan for stored procedures is compiled in SQL Server so that subsequent requests can run faster.  Also, a single stored procedure can perform multiple SQL commands, reducing traffic between an application and the database server.
  • Security – Stored procedures are well defined database objects that can be locked down with security measures.  Use of typed parameters can help prevent SQL injection attacks.
  • Code Re-Use – Database queries can be written once and re-used multiple times without writing the same SQL commands over and over again.
  • Business Logic Encapsulation – The database can house a lot of business logic so that the brain of your application is kept “neatly in one place.”

Years ago, I was completely on-board with this train of thought.  But around 2005, I started working with Object Relational Mappers (ORMs) and it changed my whole thinking about stored procedures.

What is an Object Relational Mapper?

An ORM is a code-based tool for application developers to work with databases.  The purpose of an ORM is to create a code representation of the data model.  Once you have this code in place you can access your database without writing a line of SQL code.  No stored procedures (or ad-hoc SQL) needed.  A lengthy study is beyond the scope of this article, but you can read about ORMs on Wikipedia.


ORMs are very efficient at Create [INSERT], Read [SELECT], Update and Delete (CRUD) operations.  Usually at least 90% of SQL commands used in an application are simple CRUD operations.  ORMs automatically write parameterized queries for you so that you never have to spend time writing CRUD from scratch.

Writing stored procedures for simple operations is a waste of time and it muddies up the database.  I cringe when I see a list of 500 stored procedures in a single database.  I cringe further when I discover that most of the stored procedures contain mind-numbingly simple queries like “select * from [tablename] where [col1]=@param” and “update [tablename] set [col1]=@param where [id]=@userId”.

It drives me batty when I know that someone took the time to write all of those stored procedures, then an application developer manually wrote code to call the stored procedure, convert and pass in parameters, check the result and pass back a DataSet.  I commonly see ~20 lines of C# or VB application code to call a simple one-line stored procedure.

With LINQ, I can accomplish more functionality with less code.  By writing “var result = context.TableName.Where(tn => tn.UserId == userId);” the data context writes a parameterized SQL query, the query is executed, the result is returned and translated into a strongly-typed set of objects (much better than a DataSet).  I get more benefit out of one line of LINQ code than a stored procedure coupled with 20 lines of code to call it.
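To make that contrast concrete, here is a minimal, self-contained LINQ-to-Objects sketch.  The User class and sample list are hypothetical stand-ins for a generated LINQ-to-SQL entity and data context; against a real data context, the same Where() call is translated into a parameterized SELECT:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical entity class standing in for a generated LINQ-to-SQL entity.
class User {
    public int UserId;
    public string UserName;
}

class LinqQueryDemo {
    static void Main() {
        // In-memory stand-in for db.Users; a real data context would
        // translate the query below into parameterized SQL.
        var users = new List<User> {
            new User { UserId = 1, UserName = "alice" },
            new User { UserId = 2, UserName = "bob" }
        };

        int userId = 2;
        var result = users.Where(u => u.UserId == userId).ToList();

        // Strongly-typed objects come back: no DataSet, no manual mapping.
        Console.WriteLine(result[0].UserName); // bob
    }
}
```

Note that the filter value is captured as a variable, which is exactly what makes the generated SQL parameterized (and injection-safe) in the database scenario.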

Good N-Tier Application Design

[Figure: N-Tier Application Diagram]

If you are familiar with n-tier (or multi-tier) application design, you know that the business layer belongs in the middle of your application stack.  In other words, business logic should be compiled application code that is testable via automated unit tests.

Your data access layer should be comprised of an ORM and basic repository methodology.  Some people would throw stored procedures into the diagram as a second data access layer, but this is unnecessary and (IMHO) incorrect.

A database should be limited to the role of a “persistence layer” – a technology agnostic storage mechanism.  When business logic is scattered through stored procedures, database triggers and application code, the n-tier model is broken.

Since T-SQL is technology-specific, your stored procedures would need to be re-written in order to migrate to MySQL, Oracle or another database.  If you avoid stored procedures and use an ORM like NHibernate, moving to a new database architecture is basically as simple as changing a connection string (okay, maybe not quite that simple in the real world).

Benefits of Object Relational Mappers

Every ORM is different, some with more benefits than others.  My personal favorite is still PLINQO for LINQ-to-SQL, and there are versions for Entity Framework and NHibernate now.

These are the most common benefits to look for in a good ORM:

  • Less Hand-Written Code – There is substantial time and cost savings when using a good ORM versus hand-writing stored procedures and stored procedure calls.
  • Rapid Application Development – Developers are all under pressure to build more features with less time.  Tools like PLINQO offer code generation with intelligent re-generation.  This is extremely important because PLINQO can update your data layer to match the database with two clicks of a mouse.  If you’ve been using stored procedures and your data model changes, you’re in for a lot of fun tracking down and fixing all of the references.
  • Maintainability & Refactoring – The foundational principle of agile development is the expectation that applications will need to be changed in the future.  The data model and business rules will change.  If you use a good ORM, these changes can be handled quickly and easily with a refactoring tool (like ReSharper).  Compiling your application is often enough to verify that your changes were done properly.  But if you used stored procedures as the basis of your data access and/or business logic layer, you will be frightened to make any changes for fear of breaking something.  This “stored procedure paralysis” leads to hacks and workarounds instead of properly addressing new requirements.  I just saw this happen a few weeks ago on a client’s project.
  • Performance – A LINQ data context uses deferred execution for selects, and it batches record updates and deletes in a single database round-trip on Submit().  Good ORMs offer additional performance enhancements like caching, futures and batch operations (select, update and delete).  These can have a dramatic impact on application speed.
  • Proper N-Tier Design – Your database should merely be a “persistence layer.”  It should not contain the business logic of your application.  Using an ORM helps you architect a solution where your business logic resides in strongly typed managed code (instead of being scattered through stored procedures and triggers).  This keeps the separation of concerns more clear, avoiding a broken or inverted n-tier structure.
  • Business Logic Encapsulation – The data entities that are generated by your ORM are effectively business objects.  Instead of working with “dumb DataSets,” these entities can enforce business rules that can interact with your presentation layer and data layer.
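The deferred execution mentioned under Performance is easy to demonstrate, even with plain LINQ-to-Objects: defining a query does no work, and nothing executes until the results are enumerated.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class DeferredDemo {
    static void Main() {
        var numbers = new List<int> { 1, 2, 3 };

        // Defining the query does no work yet.
        var query = numbers.Where(n => n > 1);

        // Mutate the source after the query is defined...
        numbers.Add(4);

        // ...execution happens here, so the new element is included.
        var result = query.ToList();

        Console.WriteLine(string.Join(",", result)); // 2,3,4
    }
}
```

Against a database, the same behavior means a select is not sent to the server until you actually enumerate the results, which is what allows features like Futures to batch several pending queries into one round-trip.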

Common Objections to Object Relational Mappers

  • “Stored procedures are faster.” – In speed tests that I’ve seen, parameterized queries perform just as well as stored procedures (sometimes better).  From what I’ve read, SQL Server does cache the execution plan of parameterized queries.
  • “LINQ syntax is not as flexible as T-SQL.” – This is true but misleading.  95% of T-SQL queries I’ve seen can be accomplished easily in LINQ syntax (even aggregate functions).  If you look at some sample LINQ commands you may be surprised what is possible.  Also, you can execute custom SQL or call a stored procedure from LINQ and return the results in a strongly-typed object.
  • “ORMs add too much overhead.” – Compared to what?  If you’re comparing with straight ADO.NET and a DataReader, there is a slight performance hit.  If you’re comparing with explicit hydration of custom business objects, there is likely a performance gain.  Plus, good use of caching, futures and batch operations more than make up for any overhead.  Remember that data access speed is not the only measure of good application design.
  • “ORMs pull unnecessary data.” – By default most ORMs will return every column in the table(s) that you requested.  However, most ORMs support projection queries where you can limit the columns returned.  The results of your projection are returned into a strongly typed collection.
  • “Stored procedures still provide better security.” – If you need to give data access to someone other than your development team, you should build a secure web service instead of leaving your database open to direct access by multiple clients.  As for SQL injection attacks, parameterized queries are just as safe as stored procedures, if not more so.
  • “ORMs should only be used to call stored procedures.” – Most ORMs can call stored procedures, so these two technologies are not mutually exclusive.  However, calling stored procedures from an ORM negates many of the benefits of ORM technology and design patterns.  It adds work and complexity while reducing benefits.
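As a quick illustration of the “flexibility” objection, here is a self-contained sketch of an aggregate query with a projection (the order data is hypothetical).  Against a database, the same query shape compiles to a GROUP BY with SUM, returning only the projected columns:

```csharp
using System;
using System.Linq;

class LinqAggregateDemo {
    static void Main() {
        // Hypothetical in-memory order data.
        var orders = new[] {
            new { Customer = "Ann", Total = 10m },
            new { Customer = "Ann", Total = 20m },
            new { Customer = "Bob", Total = 5m }
        };

        // Aggregate (Sum) plus a projection returning only two "columns."
        var summary = orders
            .GroupBy(o => o.Customer)
            .Select(g => new { Customer = g.Key, Spent = g.Sum(o => o.Total) })
            .OrderBy(x => x.Customer)
            .ToList();

        foreach (var row in summary)
            Console.WriteLine(row.Customer + ": " + row.Spent);
        // Ann: 30
        // Bob: 5
    }
}
```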

When Should I Use Stored Procedures?

I see several cases where stored procedures are still acceptable:

  • You’re doing data warehousing / ETL / data aggregation.  An ORM is usually not the right tool for this kind of bulk data management.  You should consider a tool like SQL Server Integration Services (SSIS), BizTalk, or another dedicated ETL tool.
  • You’re working on a legacy application where everything is already built using stored procedures.  Sometimes you have to go with the flow.  Sometimes adding a second tool is worse than supporting the wrong tool.
  • You need to feed data to SQL Server Reporting Services (SSRS).  Strangely, the CLR integration for SQL Server does not have LINQ support, so T-SQL is your only option with SSRS.  But frankly, SSRS isn’t always the best choice either.  There are a lot of good third party tools for report generation to HTML, PDF and Excel that can interface with your ORM.  Since SSRS is not available in the Express Edition of SQL Server (and definitely not available in MySQL or Oracle) you are limiting your deployment options again.
  • Your query absolutely cannot be performed with LINQ syntax.  This is rare, but does happen occasionally.  Thankfully, stored procedures or custom SQL can be called from a LINQ data context.
  • You need to perform a lot of complex SQL statements from a small set of input.  Although there are lots of ways to handle batch operations with PLINQO, sometimes “you gotta do what you gotta do” to address a performance issue.
  • You need to push thousands of records to SQL in a single statement.  This is most efficient through table-valued stored procedure parameters in SQL Server.
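For that last scenario, here is a minimal sketch of preparing a table-valued parameter in ADO.NET.  The dbo.IntList table type and dbo.ImportIds procedure are hypothetical, and the SqlCommand portion is shown as comments because it requires a live connection:

```csharp
using System;
using System.Data;

class TvpDemo {
    static void Main() {
        // Build an in-memory table whose shape matches a (hypothetical)
        // user-defined table type, e.g.:
        //   CREATE TYPE dbo.IntList AS TABLE (Id int)
        var table = new DataTable();
        table.Columns.Add("Id", typeof(int));
        for (int i = 0; i < 5000; i++)
            table.Rows.Add(i);

        Console.WriteLine(table.Rows.Count); // 5000

        // Passing all 5000 rows in a single statement (sketch only;
        // requires a live SqlConnection and the table type above):
        //
        // using (var cmd = new SqlCommand("dbo.ImportIds", connection)) {
        //     cmd.CommandType = CommandType.StoredProcedure;
        //     var p = cmd.Parameters.AddWithValue("@ids", table);
        //     p.SqlDbType = SqlDbType.Structured;
        //     p.TypeName = "dbo.IntList";
        //     cmd.ExecuteNonQuery();
        // }
    }
}
```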

I hope you find this information useful.  Happy coding!


Is Adobe Flash Support Really Coming to the iPhone and iPad?

Today, someone posted a statement on Wikipedia saying:

On Mar 8, 2011, it was announced that Flash support would be coming to the iPad, iPad 2 and iPhone.

When I read that, I immediately wondered:

  • Is the battle between Apple and Adobe over?
  • Has Adobe come up with a way to install Flash without violating Apple’s iOS license?
  • Has someone released a great third-party Flash plug-in on the Apple Store?

The answer to all 3 questions is a resounding NO!!! The statement on Wikipedia is completely misleading.  The footnote citation references an article titled “Flash is coming to the iPad, iPad 2 and iPhone.”  The cited article is really about Wallaby, a tool for converting basic Flash animations to HTML 5.  Flash is not coming to the iPad, iPad 2, iPhone or iPod.  In other words, this is actually an example of HTML 5 being used to replace Flash.

I anticipate that Flash is going to decrease in popularity over the next 5 years.  I’m well aware that HTML 5 currently has a lower adoption rate than Flash Player, but that won’t last forever.  And many of the UI components built on Flash could be easily replaced with simpler controls using HTML 4 / CSS / JavaScript.

I’m not saying that HTML is a direct replacement for all the fancy animation that Flash can do.  I’m saying that a well-designed HTML/JavaScript interface is a better choice than Flash because:

  1. HTML/JavaScript works on more devices and browsers (including 90 million iPhones).
  2. HTML/JavaScript behaves like the rest of the web without special effort (example:  no complicated programming is needed to build scroll-bars that respond to a mouse wheel).
  3. Users don’t need or want super-fancy animation and “clever” interfaces.  They want something simple that gets the job done.

Overall, the Wallaby announcement is further confirmation of the conclusions in my recent article:  Top 10 Reasons Web Developers Should Avoid Flash.  I took some heat from Flash developers over that article, but I believe my recommendation is sound advice.

If you’re a Flash developer, please read my entire article (including the conclusion) and do your own research.  I’m not a “Flash hater.”  I’m an IT consultant who is trying to help people find the right tool for the job.  If “the job” is a public-facing website, Flash is usually the wrong tool (with exceptions noted in my other Flash article).

Happy coding!


Top 10 Reasons Web Developers Should Avoid Flash

Subtitle:  Is Adobe Flash Still Relevant in Web 2.0?

I remember when I first saw Macromedia Flash (now Adobe Flash) more than a decade ago.  I was blown away seeing smooth animation and vector-based graphics running in a web browser.  I thought to myself, “This is the future of the web.”  And it was…  for a while…

During the early years of the Web, Flash was the only good option for animation and “sprucing up” a website.  Your choice was to either “have a boring HTML website” or use Flash, so it became wildly popular.  But today, its popularity is diminishing.  I’ll tell you why.

Here are the top 10 reasons why Flash is becoming irrelevant:

  • Device Incompatibility
  • Poor Search Engine Optimization (SEO)
  • Not an Open Web Standard
  • Better Alternatives Exist
  • Poor Maintainability
  • Complicated Client / Server Support
  • Poor Accessibility
  • Poor Usability
  • Poor Stability / Performance / Security
  • Apple Rejects Flash

Device Incompatibility

The Internet is no longer limited to desktops and laptops.  Today people access the web from mobile phones (iPhone, Android, Windows Mobile and BlackBerry, among others), gaming consoles (Xbox 360, PS3, Wii), and various TV-based browsers (set-top boxes and even TVs with built-in web browsers).  With most of these devices, Flash support is either nonexistent or severely lacking.

Flash isn’t officially available for 64-bit browsers.  If you buy a brand-new Windows 7 laptop, open 64-bit Internet Explorer, and visit the Adobe Flash Player download page, you’ll get a message saying “Flash Player 10.1 is not currently available for your 64-bit web browser.”  You’re stuck either using a 32-bit browser, or using the Flash Player “Square” beta (which has been in beta for years).

Poor Search Engine Optimization (SEO)

Flash is not fully readable by search engines like Google, Bing and Yahoo.  It’s true that you can embed some meta information, but nothing comparable to real HTML content.  Search engines cannot infer the meaning, structure and relevance of a website built entirely using Flash.

Not an Open Web Standard

The Adobe Flash format is closed and proprietary.  It is not an open standard like HTML 5, CSS or JavaScript.  Adobe solely controls the future of the Flash format, its feature set and the Flash Player plug-in.  Adobe claims that 95% of website visitors have Flash Player, but third party studies show that it may be closer to 50% when factoring in all Internet-capable devices.

Better Alternatives Exist

Browsers have come a long way since Flash was introduced.  So have HTML, CSS and JavaScript.  Today, developers can take advantage of JavaScript frameworks like jQuery.  These libraries provide nice animation, effects and UI controls that facilitate a dynamic, AJAX-driven Web 2.0 user experience.

Flash is not the only option for video.  HTML 5 supports embedding videos in a web browser without Flash.  The H.264 video format has already been adopted by many websites, most notably YouTube.  H.264 provides much better quality video than Flash (FLV format) at a smaller file size [yes, I know that Flash can play H.264 videos; my point is that Flash Player won’t be required to watch H.264 videos].  Many of the devices mentioned above (under “Device Incompatibility”) already support embedded video, or will support it soon.  Devices like the iPhone/iPod/iPad even have H.264 decoders built into the hardware so you can watch high definition videos with minimal CPU / battery power.

Poor Maintainability

The only comprehensive tool for Flash development is Adobe Flash from (you guessed it) Adobe.  There are other shortcut tools for making Flash animations (like Swish) but only Adobe’s proprietary tools give you full control.  By comparison, there are many high-quality tools for editing HTML, CSS and JavaScript, including free and open-source options (heck, you could even use Notepad).

After you’ve released your Flash website, it’s also a hassle to maintain.  Changing a Flash animation can be complicated work, and it requires editing the original uncompiled .FLA file.  Then it has to be recompiled into a .SWF before being released back to the web server.

There are also human resource issues to consider.  Application development with Flash requires a very specific skill-set, and it’s rare to find a developer who is an expert at graphic design, animation, ActionScript programming, data-driven client/server interaction, and server-side application architecture.  Even when they find a great Flash developer, businesses are often left relying on that single person to handle all updates.  Pray that your Flash developer doesn’t leave or misplace the latest .FLA file.

Complicated Client / Server Support

Flash was created primarily for showing pretty animations in a web browser.  It was not intended to handle client/server scenarios where a database is involved.  Adobe has done a lot of work in this arena, and Flash can communicate with server-side data, but it’s a major hassle compared to other options.  Flash is generally not the best option for data-driven applications.

It is considerably faster, easier and more cost-effective to develop applications using a web language.  For example, ASP.NET and PHP can easily retrieve data from SQL Server and generate HTML for a web browser.  Flash introduces additional layers of complexity and more points of failure, making the application development process harder than it should be.

Poor Stability / Performance / Security

Flash is known to have issues in the areas of stability, performance and security.  It has been the cause of many browser crashes.  It requires a lot of CPU power, and can bring low-powered computers/devices to their knees.  I’ve seen mobile phones, netbooks and gaming consoles completely freeze simply because a user tried to watch a Flash video.  I’ll grant that Flash developers can influence performance, but it shouldn’t take an expert to make something that works well on all devices.

Poor Usability

Flash websites (i.e. the whole website is one big Flash object)  introduce several usability problems:

  1. Normal browser navigation doesn’t work. If you click on something inside the Flash animation, you can’t click the back button to return to the previous section.  This leaves users confused or frustrated.
  2. Bookmarks don’t work. You can’t bookmark a specific section of a Flash website.
  3. Touch devices aren’t fully supported. Many Flash applications rely on a mouse rollover for interaction.  This rules out most mobile phones, tablet devices and touch-screen PC’s.
  4. The “Find in page” feature doesn’t work. You can’t use the browser’s in-page search.
  5. Multilingual / localization support is complicated to implement. Any multilingual support must be built from scratch.  Automated translation tools (Google Translate, Yahoo BabelFish) do not work on Flash content.
  6. The user interface is often awkward. This is not the fault of Adobe, but of many Flash developers.  It’s common for Flash developers to add long intro animations (yawn) and special effects that look pretty but waste the user’s time.  Instead of a normal menu, a Flash developer may try to get fancy and create a spinning orb for navigation.  Simplicity = usability (look at CraigsList.com), and Flash was created to be “fancy” not “simple.”

Poor Accessibility

Because Flash .SWF files are compiled (binary, not text), screen readers cannot read them.  That is, text-based web browsers for the sight-impaired do not work.  This is not a concern for some people, but large corporations and government websites care about accessibility.

Apple Rejects Flash

The most intriguing article I’ve read about the future of Adobe Flash came from Steve Jobs (co-founder of Apple).  His article “Thoughts on Flash” sums up the reasons why the iPhone, iPod and iPad do not (and never will) support Adobe Flash.  I’d think twice before building a website that leaves 90 million iPhones out in the cold.

Steve Jobs is not the first to reject Flash.  Industry experts have expressed concerns for many years.  Usability expert Jakob Nielsen published an article in October 2000 titled “Flash: 99% Bad” stating that “99% of the time, the presence of Flash on a website constitutes a usability disease…  it encourages design abuse, it breaks with the Web’s fundamental interaction principles, and it distracts attention from the site’s core value.”

Most of the issues I’ve mentioned are also described in detail at Wikipedia’s Adobe Flash article.  Someone posted a statement on Wikipedia saying, “On Mar 8, 2011, it was announced that Flash support would be coming to the iPad, iPad 2 and iPhone.”  This is completely untrue.  The citation references an article about Wallaby, a tool for converting basic Flash animations to HTML 5.  In other words, this is actually an example of HTML 5 being used to replace Flash.


Flash was a cool technology, but it’s not the future of web development.  It’s time for web developers to move on.

I don’t hate Flash, and I’m not ignorant of its feature set.  I think Flash is a powerful technology with a lot of capabilities.  It can be used in a lot of scenarios.  I just don’t think it should be used in many of them.  Flash use should be limited to instances where HTML/JavaScript/CSS can’t do the job.

I see three legitimate reasons to use Flash:

  • Display of video (until the HTML 5 standard has sufficient adoption)
  • Banner ads (because Flash sure beats GIF/JPG for advertisement)
  • Browser-based games (because Flash beats Java in this arena)

My argument is that Flash should not be used for things like:

  • Development of an entire website.
  • Development of complex data-driven applications.
  • UI components such as data grids, content rotators, tree views, input forms, etc.

Some of my readers have expressed that HTML/CSS/JavaScript/JQuery are not a substitute for all of the animation power of Flash.  I completely agree – Flash is pretty unbeatable in terms of fancy animation, transitions and effects.  My point is that users don’t care about a super-fancy interface.  They care about one that works on their device and is simple to use.

When Flash is used instead of HTML/CSS/JavaScript on a public-facing website, you are guaranteeing that some users will not be able to use it.  To me, device incompatibility is the most important reason to avoid Flash.  If you’re a web developer, you should aim to produce a site that everyone can use from any device.  Note:  This article is intended for web developers building public websites, not in-house applications (where your organization can control adoption and ensure each user has Flash).

I would rather invest my time developing a website that everyone can use, even if it’s not as fancy.  That’s my opinion, and my recommendation to my clients.  My readers are welcome to form their own opinions and make a different recommendation to their clients.

PS – Microsoft’s Silverlight has many of the same shortcomings.  My recommendation is the same:  use HTML, CSS and JavaScript instead.  Then use whichever server-side technology you like.


Free DotNetNuke Modules: Part 1

I’ve been a fan of DotNetNuke for several years, but I haven’t had many opportunities to use it in the real world until recently. As I run into different business needs, I often face the eternal question about DotNetNuke modules: “Should I build it or buy it?” True, there are a lot of great modules available for sale, but there are also a surprising number of free modules available. I intend to catalog some of my favorites through this series.

When looking for a DotNetNuke module, most people start at SnowCovered.com, the official DotNetNuke Marketplace. This is the most common place to find modules available for purchase. However, it occurred to me that if you go to their Modules category and sort by price, there are a number of modules listed for $0.00. If you click this link, you’ll see that the first 3-4 pages of results include dozens of free modules.

Another popular place to look is the DotNetNuke Forge which has a mix of free open-source modules and commercial modules for sale. It also provides downloads for official DotNetNuke projects that are not included in the main DNN installation process. On the main Forge screen, click the “Filter By” drop-down and select “Core DotNetNuke Projects” then click “Go” to search. You can also find Core Project updates on the New Releases page (note: you may need to register or log in to see the module downloads).

There are quite a number of open-source DotNetNuke projects available on CodePlex. Beware that these projects are in various stages of development, so unless you’re a developer willing to do some of your own quality assurance, I would stick to popular projects that have an active development community. If a project hasn’t been updated in a year, you should probably find an alternative.

In addition to these well-known freebie sources, there are quite a number of free modules and upgrades available from individual developers and companies. Following are a handful of my current favorites:

  • Advanced Control Panel – by Oliver Hine. This is a replacement for the standard DotNetNuke control panel. It is a great step forward in ease of use for non-technical site administrators. The author also published several other free modules including a photo gallery, weather, file upload, Google Analytics enhanced tracking, and an enhanced permissions/workflow for content editing.
  • Friendly URL Provider – by iFinity. While DotNetNuke did incorporate “friendly URLs” some time ago, this free module produces much shorter and cleaner URLs than the standard DNN provider. It even supports “extensionless” URLs and 301 redirects for non-friendly URLs. The author also sells the iFinity URL Master module for greater fine-tuning.
  • NB_Store – on CodePlex. In my experience, the official DotNetNuke Store module has been clunky and flaky (and it caused my portal to be painfully slow until I manually deleted the module). NB_Store is a nice open-source alternative.
  • DNN Menu – by DNN Garden. This is a search-engine friendly alternative to the default SolPartMenu and DNNMenu. There are commercial alternatives (like Snapsis Menu) but free is hard to beat.
  • Amazon S3 Folders Provider – by Brandon Haynes. This is a folder provider that plugs into DotNetNuke's regular file management but stores the files remotely in Amazon S3, essentially giving a website unlimited file storage.
  • DNN SiteMap Module – by Derek Trauger. This module displays a real-time HTML site map which is useful both for end users and search engines to find relevant content.

That’s nowhere near comprehensive, but it’s a good start. I’ll add more articles as I discover more noteworthy freebie modules. Please use the comments area to suggest your own favorite free DotNetNuke modules (no commercial advertisements, please). Happy coding!


Microsoft Announces Continued Support for LINQ-to-SQL

I got my weekly MSDN Flash email today and saw an article titled “Top Ten Questions to Microsoft on Data.” I was pleasantly surprised to read the following:

Question #3: Where does Microsoft stand on LINQ to SQL?

Answer: We would like to be very transparent with our customers about our intentions for future innovation with respect to LINQ to SQL and the Entity Framework.

In .NET 4.0, we continue to invest in both technologies. Within LINQ to SQL, we made a number of performance and usability enhancements, as well as updates to the class designer and code generation. Within the Entity Framework, we listened a great deal to customer feedback and responded with significant investments including better foreign key support, T4 code generation, and POCO support.

Moving forward, Microsoft is committing to supporting both technologies as important parts of the .NET Framework, adding new features that meet customer requirements. We do, however, expect that the bulk of our overall investment will be in the Entity Framework, as this framework is built around the Entity Data Model (EDM). EDM represents a key strategic direction for Microsoft that spans many of our products, including SQL Server, .NET, and Visual Studio. EDM-based tools, languages and frameworks are important technologies that enable our customers and partners to increase productivity across the development lifecycle and enable better integration across applications and data sources.

This is great news for fans of LINQ-to-SQL and PLINQO. There has been much debate over Microsoft’s “official” stance on L2S, and it’s nice to finally see something definitive. I was personally concerned for a while, but my reservations have been put to rest.

If you’re still unsure about using LINQ-to-SQL, please check out my other articles on PLINQO. I’ve tried a slew of OR/M systems (NHibernate, ActiveRecord, NetTiers, Wilson OR Mapper, Table Adapters) and I still find PLINQO to be my best option in most cases. Happy coding!


PLINQO 5.0 is Released

Hey – I’m actually not behind-the-times with this announcement. Yesterday, the CodeSmith team announced the arrival of PLINQO 5.0. It’s getting hard to keep up with all the new versions. Who says LINQ-to-SQL is dead?

Some feature highlights:

  • Support for Visual Studio 2010 and .NET 4
  • New SQL Cache Dependency option
  • Improved eager loading features
  • Numerous bug fixes

Check out the official announcement from the CodeSmith team for the full details, and also see my other articles on PLINQO for background.


Verizon FIOS Blocking SMTP Port (Outgoing Email)

The Long Story…

I recently found out the hard way that Verizon FIOS, my Internet Service Provider (ISP), has implemented port blocking for outgoing (SMTP) email to non-Verizon mail servers. I use the third-party POP3/SMTP mail server that comes with my web hosting account (I avoid the free email accounts bundled with ISPs like Verizon so that I don't have to change my email address every time I switch Internet providers). Yesterday, I noticed that Outlook 2007 was unable to send an email. The message just sat in my Outbox after repeated send-and-receive attempts. I gave up and shut down my computer for the night, assuming a reboot would fix the problem.

This morning, I tried sending the same email with the same result. Outlook kept saying it was unable to access the outgoing SMTP mail server. I was able to download email via POP3 and I was able to access my webmail. So I tried sending email through one of my web applications which uses SMTP, and to my surprise, that worked fine.

I was starting to write an email to my web host complaining that they must have changed their SMTP authentication to block external IP addresses from sending email. Then it hit me – maybe my web host isn’t the one blocking traffic. The problem could be with Verizon. So I searched around a bit and found an announcement from Verizon that explained the whole thing. I changed my Outlook settings and now I’m back in business. I’m glad I didn’t send that angry support email to my web host. 🙂
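In hindsight, a quick connectivity check would have pointed at the ISP right away. Here's a minimal Python sketch (the host name is a placeholder, not my actual mail server) that tests whether a TCP port is reachable; if port 25 times out but 587 connects, the block is somewhere between you and your mail server, not on the server itself:

```python
import socket

def port_open(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers both "connection refused" and timeouts.
        return False

# Example (placeholder host):
# port_open("mail.example.com", 25)   # times out if your ISP blocks it
# port_open("mail.example.com", 587)  # usually still reachable
```

If the two calls disagree, you can skip the angry email to your web host.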

Long Story Short…

Verizon FIOS is blocking outgoing (SMTP) email on Port 25. They’ve been rolling out this change since 2009 but apparently didn’t inform all of their customers. They say they’re doing it to reduce the spread of email viruses.

The “solution” is to change your SMTP port. Thankfully, many email providers (including mine) support port 587, the standard mail submission port, as an alternative to port 25. Unfortunately, some users won't have that option and will be stuck unable to send mail at all. Those unlucky folks will need to talk to their email provider or drop Verizon as their ISP.
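For Outlook this is just a settings change, but if you send mail from your own scripts, the same fix applies there. Here's a minimal Python sketch (host and credentials are placeholders) of sending through port 587 with STARTTLS instead of port 25:

```python
import smtplib
from email.message import EmailMessage

def build_message(sender, recipient, subject, body):
    # Assemble a simple plain-text message.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_via_submission_port(host, user, password, msg):
    # Port 587 is the mail submission port; ISPs that block port 25
    # generally leave it open. STARTTLS upgrades the connection to
    # TLS before credentials are sent.
    with smtplib.SMTP(host, 587, timeout=10) as server:
        server.starttls()
        server.login(user, password)
        server.send_message(msg)

# Example (placeholder values):
# msg = build_message("me@example.com", "you@example.com", "Hi", "It works!")
# send_via_submission_port("mail.example.com", "me@example.com", "secret", msg)
```

The only change from a port-25 version is the port number (plus STARTTLS, which most providers require on 587 anyway).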

If you’re having trouble sending mail on Verizon FIOS, Verizon’s announcement about the port 25 block and your email provider’s documentation on alternate SMTP ports are the places to look.
