How To: Flush Response Buffer in Castle MonoRail

I recently had a MonoRail project where client-side performance was becoming a problem.  A Portlet (an extended View Component) in our master template was calling a web service multiple times during page generation, which often delayed the browser download and caused user complaints, which (naturally) led to our client complaining.  For a number of reasons, there was no simple way to make the web service call asynchronous.

The “quick fix” solution was to implement as many client-side performance improvements as possible.  I reviewed Yahoo’s “Best Practices for Speeding Up Your Website” and realized there was something I hadn’t tried.  Yahoo suggests “flushing the response early”, recommending that you flush between the </head> and <body> tags.  Their example is in PHP, which would be great if I were using PHP.  🙂

By default ASP.NET buffers the response until the whole page is generated (web service requests and all).  This delay causes the browser to appear to freeze for a couple of seconds after the user clicks a link.  Users would click a link, think it hadn’t worked, and click it a second time, a third time, and so on.  If I could flush the response after the </head> and again after the header/main-menu HTML, the browser would show something happening sooner and users would stop being click-happy (or click-angry).

To disable response buffering for an individual page in ASP.NET you can add Buffer="false" to the page directive as shown here:

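<%@ Page Buffer="false" %>
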
To disable response buffering for an entire web application you can add buffer="false" to the <pages> setting under <system.web> as shown here:

<system.web>
    <pages buffer="false" />
</system.web>

Since MonoRail runs as an HttpHandler within ASP.NET, the same web.config setting applies.  It should also be possible to change the setting for a single MonoRail controller action by setting it inside the action method, as shown here for a sample Index action:

public void Index()
{
    Context.UnderlyingContext.Response.Buffer = false;
}

The next trick is to explicitly force a flush inline, at the points in the HTML where the response should be broken up.  In ASP.NET you can simply insert Response.Flush() into the page as shown here:

</head>
<% Response.Flush(); %>
<body>

In MonoRail the same idea applies, but with slightly different syntax.  The Response.Flush() method is still available, but you have to reach it through the underlying HttpContext that MonoRail exposes to the view.  With NVelocity, the code looks like the following:

</head>
$Context.UnderlyingContext.Response.Flush()
<body>

After applying those changes to my project, response flushing appeared to work in MonoRail…  until I rolled it out to our production environment.  The final gotcha was that our HTTP compression (GZip & Deflate) settings in IIS were interfering.  I opened ZipEnable, told IIS to stop compressing dynamic content, and suddenly response flushing worked.  Apparently the HTTP compression in IIS buffers the entire page content before compressing it and delivering it to the client browser.

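For what it’s worth, I was on IIS 6 with ZipEnable.  If you are on IIS 7 or later, my understanding is that the equivalent switch lives in web.config; this is a sketch of that setting rather than something from my environment:

<system.webServer>
    <urlCompression doStaticCompression="true" doDynamicCompression="false" />
</system.webServer>
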
Lesson of the Day:

Even though I disabled HTTP compression for dynamic content, and the total download technically took about a second longer, users thought the site was far faster.  Actual speed is not as important as perceived speed.  If the browser seems to respond immediately to a click, the user is more satisfied, because something appears to be happening even if the page doesn’t completely fill in right away.

Browser Performance Tip:

Response.Flush() can be used strategically in multiple places within a page.  I use it immediately after the </head> tag, again after the header/main-menu HTML, and sometimes after large div containers such as sidebar navigation.

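As a rough sketch, an NVelocity layout with those flush points might look like this (the element names are just placeholders; $childContent is where the layout renders the view content):

</head>
$Context.UnderlyingContext.Response.Flush()
<body>
    <div id="header">...header and main menu...</div>
    $Context.UnderlyingContext.Response.Flush()
    <div id="sidebar">...sidebar navigation...</div>
    $Context.UnderlyingContext.Response.Flush()
    <div id="content">
        $childContent
    </div>
</body>
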
Another quick tip is to move as many JavaScript includes as possible to the bottom of your HTML (just before the closing </body> tag), then put a Response.Flush() before the <script> tags.  Why?  Because browsers often pause rendering while they are downloading JS files in case there is inline content to be rendered.  Flushing the response before the JavaScript file references lets the content render sooner.  Of course, some scripts can’t be moved due to inline rendering or dependency issues, but you can experiment with which scripts can be pushed to the bottom to load last.

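The bottom of the layout then ends up looking something like this (the script path is just an example):

    ...page content...
    $Context.UnderlyingContext.Response.Flush()
    <script type="text/javascript" src="/Content/scripts/site.js"></script>
</body>
</html>
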
I hope this post is helpful to you MonoRail fans out there.  Happy coding!


Should the web be object-oriented?

The enterprise-level development community has long held the stance that Object-Oriented Programming (OO or OOP) is the best method of developing software.  But does this really apply to web development?

For almost a decade, Microsoft has devoted its web technologies to the principle that HTML should be OO.  ASP.NET WebForms is still their Internet flagship.  As a VB developer 10 years ago, I had the thought, “It’d be great if there was a way to publish my VB Windows apps to the web!”  A couple of years later, voila! Microsoft took the WinForms principles and applied them to HTML.  In the words of Chris Daughtry, “Be careful what you wish for, ’cause you just might get it all…  and then some you don’t want.”

Web pages are composed of HTML, which is nothing more than text with markup for presentation.  The browser does translate that text into a Document Object Model (DOM), but the server has no direct connection to the DOM.  The server should only care about generating the HTML rather than worrying about client-side state.

However, an ASP.NET WebForm does exactly that.  It utilizes form postbacks and hidden form fields (like ViewState) to keep the server synchronized with the client-side DOM.  In some ways this is a brilliant idea and brings a real OO feel to web development.  Unfortunately it is not without its drawbacks.  The developer has less control over the resulting HTML.  Often the document becomes bloated with a large ViewState that needs to be passed back and forth.  Some HTML magically appears at run-time, such as references to JavaScript libraries.  Element IDs you named in your markup (e.g. <asp:Panel id="myelement">) are renamed at runtime by ASP.NET (e.g. <div id="ctl00_formcontainer_row5_myelement">), meaning that you need to jump through hoops to write custom JavaScript.  Seemingly simple controls become a mess of HTML, with ASP.NET deciding how to render the output for each <asp:___> tag.

This isn’t a rant about the perils of ASP.NET but rather a chance to examine the philosophy of bringing OOP to web development.  In my opinion, it doesn’t belong there.  The server is a document generator and the browser is a document reader.  One could argue that WebForms is an attempt to simulate the browser’s DOM on the server, though most would argue that it’s an attempt to simulate server-side objects in the DOM.  In either case, it’s clever voodoo that makes HTML generation more complex than it needs to be.

Before moving to the .NET world about 6 years ago I had worked primarily with “Classic” ASP and ColdFusion.  While these were limited languages compared to .NET, I have grown to miss the simplicity and flexibility of dynamic HTML creation that they offered.  Both were largely procedural; some OO concepts existed, and OO support has since grown in ColdFusion.  Still, it has occurred to me that procedural code fits the bill for HTML generation far better than forcing OOP onto a simple document scripting task.

Don’t get me wrong – OOP is invaluable for development of the lower tiers of an n-tier application, namely the repository (hopefully implementing an object-relational mapper) and services.  Data persistence and business logic are perfect for OOP.  But the same principles don’t apply to the presentation layer if your goal is to create HTML.

What’s my point?  Well, don’t fall for the assumption that OOP is always superior to procedural design, especially when it comes to HTML generation.  When I want to dynamically create HTML without surprises, I prefer a simple web language (ColdFusion, PHP) over WebForms any day.

But, alas, I’m committed to the .NET world.  What can I do?  Okay, technically I could eliminate all <asp:___> tags and use simple HTML in my ASPX pages.  Then I’d skip the whole postback lifecycle, essentially treating ASP.NET like “Classic” ASP.  But that would be the “wrong” way to build sites with .NET (even though I’ve considered it many times).

I have personally been using Castle MonoRail for 2 years.  MonoRail is a .NET implementation of the increasingly popular Model-View-Controller (MVC) design pattern.  For creating HTML it allows plugging in various view engines.  I typically prefer NVelocity, which is a very simple template language.  There are some drawbacks (such as the lack of Intellisense for NVelocity), but HTML is treated as a simple document once again.

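To give a feel for it, here is a rough sketch of an NVelocity view (the $products collection and its properties are made up for the example):

<ul>
#foreach($product in $products)
    <li>$product.Name: $product.Price</li>
#end
</ul>
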
Microsoft is seeing the light as well.  They have branched ASP.NET in two directions: WebForms and MVC.  Like MonoRail, Microsoft’s MVC doesn’t force server-side representations of client-side objects.  It will also address MonoRail’s glaring shortcomings: lack of strongly-typed views, views being interpreted at run-time (instead of compiled), lack of 3rd-party support, lack of documentation, and lack of human resources (developers).  I’m sticking with MonoRail for now, but as MS continues to improve their framework and adoption of Microsoft’s MVC grows, the choice will become obvious.
