Version 4.0.1 of PLINQO Released

Okay, I’m a little behind on blogging. Since I previously wrote multiple posts regarding PLINQO (Professional LINQ to Objects), I wanted to drop a quick update that version 4.0.1 was recently released. I skipped writing about the 4.0.0 release because I couldn’t find a definitive enhancement list, but I stumbled across it today, so I’m posting both lists here.

Version 4.0.0 Highlights:

  • Futures Support – Lets you build a queue of queries to be executed all at once. This differs from the old Multiple Result Sets feature in that it defers execution until the data is actually needed, and it makes it easier to combine more than two result sets in a single call (see the sketch after this list).
  • Caching Improvements – Added support for various caching providers including Memcached.
  • Detach/Attach Entities – Added more methods for serialization/deserialization so detached entities can be stored as binary or XML.
  • More Details – Click here for the full set of enhancements
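
Here’s a rough sketch of how futures look in practice, based on the release notes above. The Future() extension and the Task and User entities are assumptions for illustration:

using (var db = new MyDataContext())
{
    // Neither line touches the database yet; the queries are just queued.
    var tasks = db.Task.Where(t => t.Status == 1).Future();
    var users = db.User.Where(u => u.IsActive).Future();

    // The first enumeration executes both queries in a single round trip.
    foreach (var task in tasks) { /* ... */ }
    foreach (var user in users) { /* ... */ }  // already loaded, no extra call
}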

Version 4.0.1 Highlights:

  • DataContextName – You can finally control the name of the DataContext that is generated. This is long overdue and greatly appreciated.
  • Pagination Improvements – Added NextPage, PreviousPage and GoToPage methods to PagedList (see the sketch after this list).
  • Null Handling – Added NotNull rule and attribute and improved SQL queries that use null comparisons.
  • More Details – Click here for the full set of enhancements
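
A minimal sketch of the new paging helpers, assuming PLINQO’s ToPagedList() extension and a hypothetical Product entity (the exact signatures are my guess from the release notes):

var page = db.Product.OrderBy(p => p.Name).ToPagedList(1, 20);  // page 1, 20 rows per page
page.NextPage();     // advance to page 2
page.PreviousPage(); // back to page 1
page.GoToPage(5);    // jump directly to page 5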

I know that some people are a little hesitant about continuing to use LINQ-to-SQL (L2S) given Microsoft’s shift in direction to LINQ-to-Entities (L2E). However, Microsoft has not dropped support for L2S in .NET 4.0; they actually added some features to it in the recent release of .NET 4.0 and Visual Studio 2010. L2S is widely adopted and (from what I can tell) Microsoft intends to continue supporting it in future versions of .NET, even though they aren’t going to develop it further.

At this time, the PLINQO team intends to provide LINQ-to-Entities support in a future release, which means that switching to L2E should require little to no work for PLINQO users. In the meantime, I’m happy using PLINQO as my primary OR/M on new projects.


PLINQO 3.0 – Even Better

I wrote a blog article a few months ago giving kudos to the guys at CodeSmith after my discovery of PLINQO 2.0. Since then, I haven’t done much with LINQ-to-SQL because the legacy projects at my day job use Castle ActiveRecord with NHibernate. But I recently started a new project (giving me the freedom to investigate other technologies) and was pleasantly surprised to find PLINQO 3.0. In addition to the new major release, CodeSmith has launched a new website with more information about the benefits of PLINQO and sample usage.

In case you’re not familiar with PLINQO, it’s a set of code-generation templates designed to enhance the LINQ-to-SQL development experience. They’re not only a time-saver like most code-generation templates; they also let you overcome many of the limitations of “raw” LINQ-to-SQL. See Does LINQ StinQ? Not with PLINQO!

My first article covered some of the main benefits of PLINQO 2.0, including:

  • Generates one file per entity instead of one massive DBML file.
  • Generates partial classes where custom code can be written and won’t be overwritten.
  • Generated entity files are added to the project as code-behind files to their corresponding custom entity files.
  • Adds customizable business rules engine to enforce entity validation, business and security rules.
  • Generation of entity manager classes… Provides access to common queries based on primary keys, foreign keys, and indexes.
  • Ability to automatically remove object prefixes and suffixes (e.g. tbl and usp) [based on RegEx].

In addition to those features, PLINQO 3.0 has the following benefits:

  • Entity Detach – Detach entities from one DataContext and attach to another (very useful for caching scenarios).
  • Entity Clone – Create copies of entities in-memory, set only the properties that need to be changed and persist as a new object with a new primary key.
  • Many-to-Many Relationships – Yes, M:M can be done in LINQ-to-SQL without writing goofy code to manage the link tables.
  • Auditing – The app can track all property changes complete with a copy of the old value and new value. Tracked changes can be read iteratively or dumped to an XML string.
  • Batch Updates and Deletes – You can perform updates and deletes on records based on criteria on the SQL Server without pulling each record into your app first. I’d already been using another implementation of this concept, but it’s nice to have it built into PLINQO (see the sketch after this list).
  • Multiple Result Sets – PLINQO can pull multiple recordsets back in a single request, either by using a stored procedure or by using the ExecuteQuery method, passing a list of queries as parameters.
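
To make a couple of these concrete, here’s the rough shape of entity detach and a batch delete. These are sketches based on the feature descriptions above, using a hypothetical Task entity; the method names are my guesses, not PLINQO’s documented API:

var task = db.Task.GetByKey(42);
task.Detach();                  // disconnect the entity from its DataContext
db2.Task.Attach(task);          // attach it to another context (handy for caching)

// Batch delete: a single DELETE statement runs on SQL Server;
// no entities are pulled into the app first.
db.Task.Delete(t => t.IsComplete);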

I think some of those features may have existed in the 2.0 release but weren’t documented. I’m glad to see they’re starting to provide more documentation and samples. More would still be nice; however, it occurs to me that your custom PLINQO code really sits on top of LINQ-to-SQL, so all of the standard LINQ documentation applies.

I do have some suggestions for CodeSmith to implement in future versions of PLINQO:

  • I’m fond of the IRepository pattern because it makes unit testing with mocking frameworks such as Rhino Mocks much easier. I’ve seen a couple of implementations of IRepository with LINQ (example 1, example 2). This should be a code-generation option (see the sketch after this list).
  • I’d like to see a DataContext session factory with per-web-request lifestyle. This is available in other ORM systems like ActiveRecord. After some digging, I found an example of this that also demonstrates integration with Microsoft’s MVC and Castle Windsor (IoC). Sweet.
  • There are some helpful LINQ libraries out there, such as LINQKit and the LINQ Dynamic Query Library. It would be nice to include these and/or other free libraries with PLINQO.
  • I’ve gotten the impression that Microsoft is going to favor the Entity Framework (LINQ-to-Entities) over LINQ-to-SQL. I’d love to see PLINQO adapted to support the Entity Framework. That would certainly placate the domain-driven design fans along with those who use databases other than MS SQL.
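
For the IRepository suggestion, here’s one possible shape a generated repository could take, along the lines of the linked examples (a sketch, not anything PLINQO generates today):

public interface IRepository<T> where T : class
{
    IQueryable<T> All();                    // composable query root
    T GetByKey(params object[] keyValues);  // primary key lookup
    void Add(T entity);
    void Delete(T entity);
    void SubmitChanges();                   // persist pending changes (L2S terminology)
}

In unit tests, Rhino Mocks can stub IRepository<T> to return an in-memory IQueryable so the code under test never touches the database.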

Finally, a bit of a rant: I’m kind of annoyed that PLINQO has only one way to select the tables you want to include in code generation: you have to write a RegEx that identifies the tables to exclude. I’ve worked on several projects where I wanted to generate entities for fewer than half of the tables in my database. For instance, when writing modules for DotNetNuke, I only want to generate entities for my 5 tables, not the 100+ tables that come with a DNN installation.

NetTiers had a dialog for selecting tables for code generation, and it sure would be nice to bring that back in PLINQO. If a dialog box is too much trouble, at least there could be a switch to specify whether my RegEx is an include list or an exclude list. I submitted a ticket to CodeSmith on this one; please vote and add comments on their website if you support this idea. How about it, CodeSmith? 🙂

See the new PLINQO website at http://www.plinqo.com/ for downloads, documentation and an offer to get a free copy of CodeSmith. I also suggest that you watch both introductory videos: Video 1 and Video 2.


Does LINQ StinQ? Not with PLINQO!

There has been much debate over Microsoft’s first major Object Relational Mapper (ORM), LINQ (Language-Integrated Query). Microsoft has released several flavors of LINQ suitable for different purposes, most notably LINQ-to-SQL and LINQ-to-Entities. Most developers use LINQ-to-SQL, since LINQ-to-Entities is brand new and most of the information available online covers LINQ-to-SQL. So for the purposes of this article (and in most people’s minds), “LINQ” == “LINQ-to-SQL”. [Yes, I know LINQ is really an expression language and not an ORM, but let’s not get technical on a technology blog. :-)]

LINQ has been met with mixed reactions from the development community. Some are enthralled that Microsoft has finally built its own “supported” ORM. Some love the ease of use of dragging tables onto a designer surface in Visual Studio 2008 (reminiscent of TableAdapters). Others like the (usually) clean and efficient SQL queries generated by the ORM.

To me, the best feature of LINQ is a strongly-typed data query language integrated directly into .NET. Gone are the days of building queries from strings. Even an impressive ORM like NHibernate, with its own query language (HQL), suffers from the lack of a strongly-typed query language (leading to hybrid attempts like LINQ-to-NHibernate). Altogether, LINQ is a powerful and efficient system of data access when you understand it and use it properly.

Of course, LINQ is not without its problems and critics. I’ve heard a lot of complaints about the “lack of support” for dynamic queries, that is, creating queries with joins and other non-comparison criteria at runtime. There are some solutions for this, like the LINQ Dynamic Query Library or Predicate Builder.

One of the biggest concerns from enterprise-level developers is that LINQ is a “black box”: the Visual Studio designer writes a ton of “hidden code” to map the entities to the database. Bear in mind that all ORMs are a black box to some degree. Even though NHibernate is open source, most developers use a DLL from a release and never tinker with its inner workings. NHibernate extensions like Castle ActiveRecord even hide the XML column mappings from the developer. But in all honesty, LINQ-to-SQL does a poor job of giving the developer access to the mysterious column-mapping code, and it has a number of issues when you make changes to the underlying database schema.

A couple of months ago, I decided to try out LINQ-to-SQL on one of my pet projects. It took some getting used to, but after pulling some hair out, learning some important lessons and finding some handy tools, it worked pretty well. I even found that I can usually avoid many dynamic query issues by chaining sub-queries together so that only a single query executes, thanks to deferred execution.
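
Here’s the pattern I mean, with a hypothetical Order entity and nullable filter parameters. Each Where() call merely composes the query; nothing is sent to the database until the results are enumerated:

IQueryable<Order> orders = db.Order;

if (customerId.HasValue)
    orders = orders.Where(o => o.CustomerId == customerId.Value);

if (minTotal.HasValue)
    orders = orders.Where(o => o.Total >= minTotal.Value);

// The single combined query executes against SQL Server here.
var results = orders.OrderByDescending(o => o.OrderDate).ToList();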

My biggest gripes centered on the creation and maintenance of the entity mappings in the DBML file. Microsoft’s O/R Designer documentation openly admits:

…the designer is a one-way code generator. This means that only changes that you make to the designer surface are reflected in the code file. Manual changes to the code file are not reflected in the O/R Designer. Any changes that you make manually in the code file are overwritten when the designer is saved and code is regenerated.

More than that, when you make changes to your database schema, you either need to manually update the entity through the O/R Designer or you need to delete and re-add the entity, losing any customizations you’ve made (including entity relationships). Suddenly the code-generation time savings don’t make up for the customization frustration.

Enter “Professional LINQ to Objects,” a.k.a. PLINQO, a code-generation tool for LINQ that does what you want. I’ve used other ORM frameworks from CodeSmith before, particularly NetTiers, and I believe in the value of code generation when it’s implemented properly. One of the most important attributes of a good code-gen tool is that it lets you re-generate code without overwriting your customizations. PLINQO brings intelligent regeneration to LINQ-to-SQL.

PLINQO works by generating your DBML file for you, but surprisingly this doesn’t break Visual Studio’s O/R Designer. In fact, you can still open and modify the .dbml file with the Designer, and your changes will not be overwritten the next time you generate. You can also modify many aspects of the entities via code (including validation attributes), and your code customizations will be untouched by re-generation. Sweet!

Some other benefits of PLINQO (as noted on their site):

  • Generates one file per entity instead of one massive [DBML] file.
  • Generates partial classes where custom code can be written and won’t be overwritten.
  • Generated entity files are added to the project as code-behind files to their corresponding custom entity files.
  • Adds customizable business rules engine to enforce entity validation, business and security rules.
  • Generation of entity manager classes… Provides access to common queries based on primary keys, foreign keys, and indexes.
  • Ability to automatically remove object prefixes and suffixes (e.g. tbl and usp) [based on RegEx].
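
To illustrate the partial-class point above: your custom members live in their own file, so re-running the generator never touches them. A minimal sketch, assuming a hypothetical Task entity with a nullable DueDate column:

public partial class Task
{
    // Custom logic lives alongside the generated code-behind file;
    // regeneration won't overwrite it.
    public bool IsOverdue
    {
        get { return DueDate.HasValue && DueDate.Value < DateTime.Now; }
    }
}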

Another nice bonus is that you configure and execute the code generation right inside VS 2008. CodeSmith Pro’s Visual Studio integration lets you add a .csp (CodeSmith Project) file to your VS project and manage all the settings from Solution Explorer. Just right-click your .csp, select “Generate Output,” and your .dbml, entities, managers and query classes appear in your solution.

If you’ve been hesitant to try LINQ because of “black box” or code re-generation concerns, now is the time to give it a shot. Download a trial of CodeSmith and try the PLINQO 2.0 release or latest development trunk. Happy coding!


How To: Flush Response Buffer in Castle MonoRail

I recently had a MonoRail project where client-side performance was becoming a problem. A Portlet (an extended View Component) in our master template was calling a web service multiple times during page generation, which often delayed delivery of the page to the browser, prompting user complaints, which (naturally) led to our client complaining. For a number of reasons, there was no simple way to make the web service call asynchronous.

The “quick fix” was to implement as many client-side performance improvements as possible. I reviewed Yahoo’s “Best Practices for Speeding Up Your Website” and realized there was something I hadn’t tried. Yahoo suggests “flushing the response early,” recommending that you flush between the </head> and <body> tags. The example they give is in PHP, which would be great if I were using PHP. 🙂

By default, ASP.NET buffers the response until the whole page is generated (web service requests and all). This delay causes the browser to appear to freeze for a couple of seconds after the user clicks a link. Users would click a link, think it didn’t work, and click it a second time, a third time, etc. If I could flush the response after the </head> and again after the header/main-menu HTML, the browser would show something happening sooner and users would stop being click-happy (or click-angry).

To disable response buffering for an individual page in ASP.NET, you can add Buffer="false" to the page directive as shown here:
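
<%@ Page Language="C#" Buffer="false" %>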

To disable response buffering for an entire web application, you can add buffer="false" to the <pages> setting under <system.web> as shown here:

<system.web>
    <pages buffer="false" />
</system.web>

Since MonoRail is an HTTP handler on top of ASP.NET, the same web.config setting applies. It should also be possible to change the setting for a single MonoRail controller action by adding the following line inside the action method (shown here on a sample Index action):

public void Index()
{
    // Disable response buffering for this action only
    Context.UnderlyingContext.Response.Buffer = false;
}

The next trick is to explicitly force the flush within the page, inline with the HTML that should be broken up. In ASP.NET you can simply insert Response.Flush() into the page as shown here:

</head>
<% Response.Flush(); %>
<body>

In MonoRail the same idea applies, but with slightly different syntax. The Response.Flush() method can still be reached, just through a longer path in MonoRail’s object model. With NVelocity, the code looks like the following:

</head>
$Context.UnderlyingContext.Response.Flush()
<body>

After applying those changes to my project, response flushing appeared to work in MonoRail… until I rolled it out to our production environment. The final gotcha was that our HTTP compression (GZip & Deflate) settings in IIS were interfering. I opened ZipEnable and told IIS to stop compressing dynamic content, and suddenly response flushing worked. So apparently the HTTP compression in IIS buffers the entire page content before compressing it and delivering it to the client browser.

Lesson of the Day:

Even though I disabled HTTP compression for dynamic content, and the total download technically took about a second longer, users thought the site was far faster. Actual speed is not as important as perceived speed. If the browser seems to respond immediately to a click, the user is more satisfied, because something appears to be happening even if the page doesn’t completely fill in right away.

Browser Performance Tip:

Response.Flush() can be used strategically in multiple places within a page. I use it immediately after the </head> tag, again after the header/main-menu HTML, and sometimes after large div containers like sidebar navigation.

Another quick tip is to move as many JavaScript includes as possible to the bottom of your HTML (just before the closing </body> tag), then put a Response.Flush() before the <script> tags. Why? Because browsers often pause rendering while downloading JS files in case there is inline content to be rendered. Flushing the response before the JavaScript file references lets the content render sooner. Of course, some scripts can’t be moved due to inline rendering or dependency issues, but you can experiment with which ones can be pushed to the bottom to load last (see the sketch below).
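
In ASP.NET syntax, the tail of a page might look like this (the script path is hypothetical):

    ...page content...
    <% Response.Flush(); %>
    <script type="text/javascript" src="/scripts/site.js"></script>
</body>
</html>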

I hope this post is helpful to you MonoRail fans out there.  Happy coding!


Should the web be object-oriented?

The enterprise-level development community has long held the stance that Object-Oriented Programming (OO or OOP) is the best method of developing software.  But does this really apply to web development?

For almost a decade, Microsoft has devoted its web technologies to the principle that HTML should be OO. ASP.NET WebForms is still their Internet flagship. As a VB developer 10 years ago, I had the thought, “It’d be great if there were a way to publish my VB Windows apps to the web!” A couple of years later, voila! Microsoft took the WinForms principles and applied them to HTML. In the words of Chris Daughtry, “Be careful what you wish for, ’cause you just might get it all… and then some you don’t want.”

Web pages are composed of HTML, which is nothing more than text with markup for presentation. The browser does translate that text into a Document Object Model (DOM), but the server has no direct connection to the DOM. The server should only care about generating HTML, not about tracking client-side state.

However, an ASP.NET WebForm does exactly that. It uses form postbacks and hidden form fields (like ViewState) to keep the server synchronized with the client-side DOM. In some ways this is a brilliant idea, and it brings a real OO feel to web development. Unfortunately, it is not without its drawbacks. The developer has less control over the resulting HTML. The document often becomes bloated with a large ViewState that must be passed back and forth. Some HTML magically appears at run-time, such as references to JavaScript libraries. Element IDs you named in your markup (e.g. <asp:Panel id="myelement">) are renamed at runtime by ASP.NET (e.g. <div id="ctl00_formcontainer_row5_myelement">), meaning you have to jump through hoops to write custom JavaScript. Seemingly simple objects become a mess of HTML, with ASP.NET deciding how to render the output for each <asp:___> tag.
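
For example, the usual hoop for the JavaScript problem is to emit the server-generated ID into your script. A minimal sketch, assuming the Panel above has runat="server":

<script type="text/javascript">
    // ClientID resolves to the mangled runtime ID, e.g. ctl00_..._myelement
    var panel = document.getElementById('<%= myelement.ClientID %>');
</script>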

This isn’t a rant about the perils of ASP.NET but rather a chance to examine the philosophy of bringing OOP to web development. In my opinion, it doesn’t belong there. The server is a document generator and the browser is a document reader. One could argue that WebForms is an attempt to simulate the browser’s DOM on the server, though most would argue it’s an attempt to simulate server-side objects in the DOM. In either case, it’s clever voodoo that makes HTML generation more complex than it needs to be.

Before moving to the .NET world about 6 years ago, I worked primarily with “Classic” ASP and ColdFusion. While these were limited languages compared to .NET, I have grown to miss the simplicity and flexibility of dynamic HTML creation that they offered. Both were largely procedural; some OO concepts existed, and OO support has grown in ColdFusion. Still, it occurs to me that procedural code fits the bill for HTML generation far better than forcing OOP onto a simple document scripting language.

Don’t get me wrong – OOP is invaluable for development of the lower tiers of an n-tier application, namely the repository (hopefully implementing an object-relational mapper) and services.  Data persistence and business logic are perfect for OOP.  But the same principles don’t apply to the presentation layer if your goal is to create HTML.

What’s my point?  Well, don’t fall for the assumption that OOP is always superior to procedural design, especially when it comes to HTML generation.  When I want to dynamically create HTML without surprises, I prefer a simple web language (ColdFusion, PHP) over WebForms any day.

But, alas, I’m committed to the .NET world. What can I do? Okay, technically I could eliminate all <asp:___> tags and use plain HTML in my ASPX pages. Then I’d skip the whole postback lifecycle, essentially treating ASP.NET like “Classic” ASP. But that would be the “wrong” way to build sites with .NET (even though I’ve considered it many times).

I have personally been using Castle MonoRail for 2 years. MonoRail is a .NET implementation of the increasingly popular Model-View-Controller (MVC) design pattern. For HTML creation it lets you plug in various view engines; I typically prefer NVelocity, a very simple template language. There are some drawbacks (such as the lack of Intellisense for NVelocity), but HTML is treated as a simple document once again.

Microsoft is seeing the light as well. They have branched ASP.NET off in two directions: WebForms and MVC. Like MonoRail, Microsoft’s MVC doesn’t force server-side representations of client-side objects. It also addresses MonoRail’s glaring shortcomings: the lack of strongly-typed views, views being interpreted at run-time (instead of compiled), and the lack of third-party support, documentation and human resources (developers). I’m sticking with MonoRail for now, but as MS continues to improve their framework and adoption of Microsoft’s MVC grows, the choice will become obvious.


Welcome

Welcome to my blog. I’ll be talking about life as a technologist navigating the IT trifecta of developers, management and technology itself. It’s quite an experience being the hub between these factions, so I’ll share some of my experiences and tips & tricks as I discover them. I don’t claim to be an expert in any area (“We don’t know a millionth of one percent about anything.” – Thomas Edison), but I hope that sharing solutions to my trials and tribulations may save you some of the same frustration.