Much of what drives consumer behavior on the Web is also true for corporate intranets, and nothing kills a business Web site like poor performance.
E-commerce consultants cite an attention span on the order of eight seconds as the threshold for abandoning a slow retail site.
On an intranet, where broadband connections are the norm, user expectations are higher still. Pages that don’t appear instantly stand a good chance of never being seen.
Different norms apply, of course, to research sites and rich media, such as online training videos.
But for an important class of intranet applications – portals, discussion forums, sales force automation and priority content delivery (e.g., regulatory updates) – slow pages might as well be no pages.
Yet performance is often overlooked during intranet development.
There are several reasons for this:
- Some performance factors, such as slow remote-office links or overall network traffic, lie outside the developer’s control;
- Response times are rarely part of an application’s requirements; conditions for load testing tend to be guesswork;
- Coordination between those responsible for building, administering and maintaining distributed systems is the exception, not the norm.
Nevertheless, developers interested in building usable systems have an obligation to take account of performance during the design process.
System performance is a pervasive design consideration that impacts every step of the development life cycle.
From specification and analysis to design, testing and deployment, decisions are made that influence application speed and scalability.
Because Web applications are distributed and can be complex, optimizing end-to-end performance is a challenging problem that requires both science and art to solve.
Best practices for building well performing intranets fall into five categories: planning for performance, performance-aware design, testing and optimizing, deployment and configuration, and system tuning.
Planning For Performance
It seems obvious but it’s rarely stated: systems can only be designed to meet performance goals if those goals have been identified.
The first step toward meeting user performance expectations is to ask users early on what range of response times will be acceptable.
Do this while specifying the system, in your Use Cases (or User Stories, or wherever client expectations are captured in your methodology).
Including performance criteria in Use Cases provides guidance for all subsequent optimizations: it helps developers choose between alternative implementations, informs deployment and configuration decisions, and sets criteria for acceptance and load testing.
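Response-time criteria captured this way can even be expressed as executable acceptance checks. The sketch below is illustrative only: the 2,000 ms budget, the class names, and the placeholder operation are assumptions standing in for a real Use Case's figures and a real page render.

```java
// A minimal sketch of turning a Use Case's response-time goal into an
// executable check. The 2000 ms budget and the placeholder operation are
// illustrative assumptions, not figures from any particular project.
public class ResponseTimeCheck {

    // Goal captured during specification, e.g. "Portal home page
    // renders in under 2 seconds for a single user."
    static final long BUDGET_MILLIS = 2000;

    // Times one execution of the operation under test (a page render,
    // a query, a script invocation, etc.).
    static long timeOperation(Runnable operation) {
        long start = System.currentTimeMillis();
        operation.run();
        return System.currentTimeMillis() - start;
    }

    static boolean meetsGoal(Runnable operation) {
        return timeOperation(operation) <= BUDGET_MILLIS;
    }

    public static void main(String[] args) {
        // Trivial placeholder workload; in practice this would drive the page.
        boolean ok = meetsGoal(() -> { /* render page here */ });
        System.out.println(ok ? "PASS" : "FAIL"); // prints "PASS"
    }
}
```

Checks like this make the Use Case's performance clause something the build can verify, rather than a sentence in a document.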
Performance-Aware Design
“Getting your web applications to do what you want is one thing; getting them to do it fast and efficiently is often another” (Professional JSP 2nd Edition, Wrox Press, Inc., 2001).
It’s hard enough making your pages work. Learning the factors that affect performance on a given platform can seem like studying for a Ph.D.
The way to look at this somewhat daunting task, particularly in adverse economic times, is as an opportunity.
One way of distinguishing yourself from the growing mass of developers who know ASP, JSP or SQL syntax is by digging deep to understand the performance impacts of your design decisions.
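A classic Java example of such a hidden performance impact, a staple of the tuning literature listed in the Resources below, is building a large string by repeated concatenation instead of with a StringBuffer. The timings below will vary by machine and JVM; the point is the quadratic versus linear behavior.

```java
// Two ways to build the same string. Repeated concatenation copies the
// accumulated string on every pass (quadratic work); a StringBuffer
// grows one buffer in place (roughly linear work).
public class ConcatDemo {

    static String byConcat(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s += "x";                    // allocates a new String each pass
        }
        return s;
    }

    static String byBuffer(int n) {
        StringBuffer sb = new StringBuffer(n);  // one growable buffer
        for (int i = 0; i < n; i++) {
            sb.append('x');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        int n = 20000;
        long t0 = System.currentTimeMillis();
        byConcat(n);
        long concatMs = System.currentTimeMillis() - t0;

        t0 = System.currentTimeMillis();
        byBuffer(n);
        long bufferMs = System.currentTimeMillis() - t0;

        // Same result, very different cost at scale.
        System.out.println("concat: " + concatMs + " ms, buffer: " + bufferMs + " ms");
    }
}
```

Both methods produce identical output; only a developer who has dug into how String and StringBuffer behave will know why one of them falls over as n grows.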
The Resources section below is a good place to start.
Testing And Optimizing
Even experts can be surprised. The impact of some design choices is hard to predict and may not become clear until testing.
In addition, the only way to know whether a system meets user expectations is to test it, not just in isolation but under conditions that simulate real patterns of use.
Start by testing your individual pages, scripts and applications, a process called unit testing that should go hand-in-glove with performance-aware design.
If a page takes twenty seconds to render on your local machine, it’s not going to get faster over a remote link with dozens of users hitting the server.
Fix it before the source of the problem gets lost in a large distributed system.
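A unit-level timing harness need not be elaborate. The sketch below is an illustrative pattern, not a particular framework: it runs the operation several times and reports the best and average figures, so a single cold run doesn't mislead you.

```java
// A bare-bones unit timing harness. Repeating the run and keeping both
// the best and the average time guards against one-off anomalies such
// as class loading or cache warm-up on the first pass.
public class UnitTimer {

    // Returns { best ms, average ms } over the given number of runs.
    static long[] time(Runnable op, int runs) {
        long best = Long.MAX_VALUE, total = 0;
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            op.run();
            long elapsed = (System.nanoTime() - start) / 1000000L; // to ms
            best = Math.min(best, elapsed);
            total += elapsed;
        }
        return new long[] { best, total / runs };
    }

    public static void main(String[] args) {
        long[] result = time(() -> { /* invoke the page under test */ }, 5);
        System.out.println("best: " + result[0] + " ms, avg: " + result[1] + " ms");
    }
}
```

If the best of five local runs is already twenty seconds, no amount of server tuning downstream will save the page.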
While unit testing can expose flagrant design flaws, subtler bottlenecks can remain hidden until the system is tested as a whole.
Even then, so-called intermittent problems – a troubleshooter’s nightmare – might appear only under certain, hard-to-repeat conditions, requiring extensive testing under simulated load to discover, isolate and repair.
Intranet applications rarely warrant this level of scrubbing, of course – unless they drive corporate revenue, as for instance an executive dashboard or trading application might.
In other words, where the cost of poor performance is high, it pays to invest in rigorous testing, whether the system is a B2C shopping cart application or a decision-maker’s intranet portal.
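At its simplest, simulating load means many concurrent "users" issuing requests at once. The sketch below shows the shape of such a test; the thread and request counts are arbitrary assumptions, and a real load test should replay realistic usage patterns rather than a uniform hammering (see the server-log mining article in the Resources).

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// A bare-bones load simulation: a fixed pool of worker threads, each
// standing in for a user, issues a batch of requests against the
// operation under test and counts completions.
public class LoadSketch {

    static int run(Runnable request, int users, int requestsPerUser)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        AtomicInteger completed = new AtomicInteger();
        for (int u = 0; u < users; u++) {
            pool.submit(() -> {
                for (int r = 0; r < requestsPerUser; r++) {
                    request.run();        // in practice: one HTTP request
                    completed.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(60, TimeUnit.SECONDS);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        int done = run(() -> { /* issue one request */ }, 10, 100);
        System.out.println(done + " requests completed");
    }
}
```

Intermittent bugs that never show up single-threaded (lock contention, pool exhaustion, race conditions) often surface only under this kind of concurrent pressure.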
Deployment And Configuration
Decisions in the deployment phase can have a strong effect on overall system performance.
Many of these decisions actually occur during the planning phase, such as the decision to use a particular server architecture (e.g., Microsoft IIS/ASP versus JSP/Servlets) or a mandate to use a company-standard browser version (e.g., IE 5.5 versus NN 6.0).
The performance impact of such decisions is implicit, and assessing that impact early in the lifecycle is a strong argument for using a rapid, iterative development process in combination with frequent performance testing.
Other deployment decisions, such as the selection of a native code compiler for Java or a particular clustering product, are more flexible and can be made with specified performance goals in mind.
Beyond selection, most system elements need to be configured, and small changes in memory allocation, cache sizes, locking parameters and the like can hugely impact response times. As with design, getting configurations right is a matter of education and of experimenting with particular platforms and versions.
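To see how sharply one configuration knob can cut, consider a bounded LRU cache whose hit rate depends entirely on its configured size. The cache and workload below are toy assumptions standing in for a server-side cache, built on the standard access-ordered LinkedHashMap.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration of configuration sensitivity: the same workload against
// the same cache code, with only the size parameter changed. A cyclic
// access pattern larger than the cache defeats LRU completely.
public class CacheSizing {

    static double hitRate(int cacheSize, int[] requests) {
        // Access-ordered LinkedHashMap evicting the least recently used
        // entry once the configured size is exceeded.
        Map<Integer, Integer> cache =
                new LinkedHashMap<Integer, Integer>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > cacheSize;
            }
        };
        int hits = 0;
        for (int key : requests) {
            if (cache.get(key) != null) hits++;   // get() refreshes LRU order
            else cache.put(key, key);             // simulate fetch-and-cache
        }
        return (double) hits / requests.length;
    }

    public static void main(String[] args) {
        // 1000 requests cycling over 100 distinct keys.
        int[] requests = new int[1000];
        for (int i = 0; i < requests.length; i++) requests[i] = i % 100;

        System.out.println("size  50: " + hitRate(50, requests));   // prints 0.0
        System.out.println("size 100: " + hitRate(100, requests));  // prints 0.9
    }
}
```

Halving the cache size doesn't halve the hit rate here; it destroys it. That is the kind of cliff a configuration journal helps you remember.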
A good rule of thumb for optimizing configurations is, “Don’t reinvent the wheel.” Participate in platform-specific forums and newsgroups to take advantage of others’ hard-won experiences.
Then, keep a journal of what works to leverage your own experiences into corporate best practices.
System Tuning
Enhancing performance in the final stages of development and subsequently in production is largely a matter of tuning.
Whether of an application server, load balancer, database, or Java Virtual Machine (JVM), tuning can be a powerful means of bolstering throughput and scalability.
That power is no excuse for relying on late-phase tuning to cover a lack of up-front planning, however. As a remedy for performance bottlenecks, tuning is often too little, too late.
Consider a database application found late in the development cycle to scale poorly.
By that point, database tuning may seem like the best (or only) cure, and the DBAs happily bring down the staging system – postponing further testing and development – to work their magic.
But do the database caches really need to be resized, or is the bottleneck due to an archaic ODBC driver? Would connection pooling help?
Or is the culprit an inappropriate use of container-managed persistence? The best way to avoid this kind of last-minute floundering is to follow a program of performance-aware design and testing throughout the lifecycle.
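Connection pooling, one of the remedies raised above, is easy to sketch: pay the setup cost for a fixed set of expensive objects once, then hand them out and take them back per request. The Connection class below is a stand-in, not JDBC; names and sizes are illustrative.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A stripped-down connection pool: a fixed number of reusable
// connections held in a blocking queue. Callers borrow with acquire()
// and must return with release().
public class PoolSketch {

    static class Connection {            // placeholder for a real connection
        void query(String sql) { /* ... */ }
    }

    static class ConnectionPool {
        private final BlockingQueue<Connection> idle;

        ConnectionPool(int size) {
            idle = new LinkedBlockingQueue<>();
            for (int i = 0; i < size; i++) {
                idle.add(new Connection());  // pay the setup cost once
            }
        }

        Connection acquire() throws InterruptedException {
            return idle.take();              // blocks if the pool is exhausted
        }

        void release(Connection c) {
            idle.add(c);                     // return the connection for reuse
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ConnectionPool pool = new ConnectionPool(5);
        Connection c = pool.acquire();
        c.query("SELECT 1");
        pool.release(c);
    }
}
```

If opening a connection costs hundreds of milliseconds, pooling removes that cost from every request after the first, which is exactly the kind of fix that is cheaper than a week of database tuning.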
Planning for performance, adhering to best coding practices, testing and optimizing, deploying with an eye to performance, and judicious system tuning constitute the means to meeting system performance expectations.
Often, executing this plan requires expertise across a wide range of products and technologies.
By taking account of the guidelines introduced here and by studying performance-aware design on the platforms you use, you’ll be doing your part in keeping the intranet a vibrant, responsive resource.
Resources
“Maximizing the Performance of Your Active Server Pages,” by Nancy Winnick Cluts, MSDN Library
“25+ ASP Tips to Improve Performance and Style,” by Len Cardinal, MSDN Library
“Exchange 2000 Application Development: Performance Optimizations,” by Loren Curtis and Dale Koetke, MSDN Library
“Performance and Scalability Testing,” by James Francisco, MSDN Library. One of the first articles to address performance testing of Web services.
“Performance,” Chapter 20, Professional JSP 2nd Edition (see Books listing below for details).
Java Performance Tuning Tips
“Web Load Test Planning: Predicting how your Web site will respond to stress,” article by Alberto Savoia, STQE Magazine
“Mining Gold from Server Logs: Using existing site data to improve Web testing,” by Karen Johnson, STQE Magazine
Books
Scaling for E-Business: Technologies, Models, Performance, and Capacity Planning, by Daniel A. Menasce and Virgilio A. F. Almeida (Prentice Hall, May 2000).
Quality Web Systems: Performance, Security, and Usability, by Elfriede Dustin, Jeff Rashka, Douglas McDiarmid (Addison-Wesley, August 2001).
Web Performance Tuning: Speeding Up the Web, by Patrick Killelea (O’Reilly & Associates, October 1998). The material is aging and gives Microsoft platforms short shrift, but it remains interesting for its coverage of general principles.
Professional JSP 2nd Edition, by Simon Brown, et al. (Wrox Press, Inc., April 2001)
Practical Java: Programming Language Guide, by Peter Haggar (Addison-Wesley, February 2000). Pages 97-160 focus on performance considerations for object-oriented programming languages in general, Java in particular.
Java Performance Tuning, by Jack Shirazi (O’Reilly & Associates, September 2000).
ASP Application Performance and Tuning, by Jeff Niblack (New Riders Publishing, January 2002). Coming soon.