Related Links
Home Page for Improving .NET Application Performance and Scalability
Chapter 5, Improving Managed Code Performance
Send feedback to Scale@microsoft.com
Summary: This chapter describes common issues, design guidelines, and coding techniques that improve the performance of your ASP.NET pages and controls. It provides a common language runtime thread pool tuning formula for reducing contention, threading guidelines, specific techniques for reducing page size, scalable solutions for managing state, data caching guidelines, and more.
Contents
- Objectives
- Overview
- How to Use This Chapter
- Architecture
- Performance and Scalability Issues
- Design Considerations
- Implementation Considerations
- Threading Explained
- Threading Guidelines
- Resource Management
- Pages
- Server Controls
- Data Binding
- Caching Explained
- Caching Guidelines
- State Management
- Application State
- Session State
- View State
- HTTP Modules
- String Management
- Exception Management
- COM Interop
- Data Access
- Security Considerations
- IIS 6.0 Considerations
- Deployment Considerations
- Summary
- Additional Resources
Objectives
- Improve page response times.
- Design scalable Web applications.
- Use server controls efficiently.
- Use efficient caching strategies.
- Analyze and apply appropriate state management techniques.
- Minimize view state impact.
- Improve performance without impacting security.
- Minimize COM interop scalability issues.
- Optimize threading.
- Optimize resource management.
- Avoid common data binding mistakes.
- Use security settings to reduce server load.
- Avoid common deployment mistakes.
Overview
To build ASP.NET applications that meet your performance objectives, you need to understand the places where bottlenecks typically occur, the causes of the bottlenecks, and the steps to take to prevent the bottlenecks from occurring in your application. A combination of sound architecture and design, best practice coding techniques, and optimized platform and Microsoft® .NET Framework configuration is required. This chapter addresses each of these areas.
The chapter starts by examining the architecture of an ASP.NET application and then explains the anatomy of a Web request as it progresses through the HTTP and ASP.NET pipeline. The chapter explains the processing that occurs at each stage and identifies common performance bottlenecks. The chapter then provides a series of ASP.NET design guidelines. By following the guidelines in this section, you can help ensure that your top-level design does not create performance issues that can only be corrected by costly reengineering efforts. Finally, the chapter provides a series of sections that discusses top ASP.NET performance issues. These issues include page and control issues, caching, resource management, session and view state issues, threading, exception and string management, COM interop, and more.
How to Use This Chapter
Use this chapter to help improve the performance of your ASP.NET applications. You can apply the design considerations, coding techniques, and optimized platform and .NET Framework configuration information in this chapter to new and existing applications. To get the most out of this chapter, do the following:
- Jump to topics or read from beginning to end. The main headings in this chapter help you locate the topics that interest you. Alternatively, you can read the chapter from beginning to end to gain a thorough appreciation of performance and scalability design issues.
- Use the checklist. Use the checklist that accompanies this chapter to quickly view and evaluate the guidelines presented here.
- Know the ASP.NET runtime infrastructure. Understanding the runtime infrastructure can help you write code that is optimized for performance.
- Know the major performance and scalability issues. Read "Performance and Scalability Issues" in this chapter to learn about the major issues that affect the performance and scalability of your ASP.NET application. It is important to understand these key issues so that you can effectively identify performance and scalability problems and apply the recommendations presented in this chapter.
- Design with performance in mind. Read "Design Considerations" in this chapter to learn about best practice design guidelines.
- Use the "Architecture" section of this chapter. This section helps you understand how ASP.NET works. By understanding the architecture, you can make better design and implementation choices.
- Use the "Design Considerations" section of this chapter. This section helps you understand the high-level decisions that affect implementation choices for ASP.NET code.
- Read Chapter 13 of this guide. It complements this chapter with related performance guidance.
- Measure your application performance. Read the "ASP.NET" and ".NET Framework Technologies" sections of Chapter 15 to learn how to measure the performance of your application.
- Test your application performance. Read Chapter 16 to learn how to test the performance of your application.
- Tune your application performance. Read the "ASP.NET" section of Chapter 17, "Tuning .NET Application Performance," to learn how to tune performance-related settings.
Architecture
ASP.NET requires a host. On Windows Server™ 2003, the default host is the Internet Information Services (IIS) 6.0 worker process (W3wp.exe). When you use the ASP.NET Process Model, the host is the ASP.NET worker process (Aspnet_wp.exe).
When a request is received by ASP.NET, the request is handled by the HttpRuntime object. The HttpRuntime is responsible for application creation and initialization, managing the request queue and thread pool, and dispatching the incoming requests to the correct application. After the request is dispatched to the appropriate application, the request is passed through a pipeline. This pipeline is a staged, event-based execution framework consisting of multiple HttpModule objects and a single HttpHandler object. This architecture is shown in Figure 6.1.
Figure 6.1: ASP.NET runtime infrastructure
HttpModule objects participate in the pipeline by handling predefined events that ASP.NET exposes. These events include BeginRequest, AuthenticateRequest, and EndRequest. The request flows through the pipeline of HttpModule objects and is then run by a single HttpHandler. After the event handler is completed, the request then flows back through the pipeline and is sent to the client.
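As a minimal sketch of how a module participates in the pipeline (the class name, header name, and registration entry are illustrative, not part of the framework), an HttpModule subscribes to the events ASP.NET exposes and can carry per-request state in the HttpContext:

```csharp
using System;
using System.Web;

// Illustrative module that stamps each request's duration into a response header.
public class TimingModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // Subscribe to pipeline events exposed by ASP.NET.
        app.BeginRequest += new EventHandler(OnBeginRequest);
        app.EndRequest   += new EventHandler(OnEndRequest);
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        // HttpContext.Items carries per-request state through the pipeline.
        app.Context.Items["startTicks"] = DateTime.Now.Ticks;
    }

    private void OnEndRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        long start = (long)app.Context.Items["startTicks"];
        long elapsedMs = (DateTime.Now.Ticks - start) / TimeSpan.TicksPerMillisecond;
        app.Context.Response.AppendHeader("X-Elapsed-Ms", elapsedMs.ToString());
    }

    public void Dispose() {}
}
```

The module is wired into the pipeline through the `<httpModules>` section of Web.config, for example `<add name="Timing" type="TimingModule" />`.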
Throughout the entire lifetime of a request, a context is exposed. The HttpContext object encapsulates information about individual requests and their associated responses.
Performance and Scalability Issues
The main issues that can adversely affect the performance and scalability of your ASP.NET application are summarized below. Subsequent sections in this chapter provide strategies and technical information to prevent or resolve each of these issues.
- Resource affinity. Resource affinity can prevent you from adding more servers, or resource affinity can reduce the benefits of adding more CPUs and memory. Resource affinity occurs when code needs a specific thread, CPU, component instance, or server.
- Excessive allocations. Applications that allocate memory excessively on a per-request basis consume memory and create additional work for garbage collection. The additional garbage collection work increases CPU utilization. These excessive allocations may be caused by temporary allocations. For example, the excessive allocations may be caused by excessive string concatenation that uses the += operator in a tight loop.
- Failure to share expensive resources. Failing to call the Dispose or Close method to release expensive resources, such as database connections, may lead to resource shortages. Closing or disposing resources permits the resources to be reused more efficiently.
- Blocking operations. The single thread that handles an ASP.NET request is blocked from servicing additional user requests while the thread is waiting for a downstream call to return. Calls to long-running stored procedures and remote objects may block a thread for a significant amount of time.
- Misusing threads. Creating threads for each request incurs thread initialization costs that can be avoided. Also, using single-threaded apartment (STA) COM objects incorrectly may cause multiple requests to queue up. Multiple requests in the queue slow performance and create scalability issues.
- Making late-bound calls. Late-bound calls require extra instructions at runtime to identify and load the code to be run. Whether the target code is managed or unmanaged, you should avoid these extra instructions.
- Misusing COM interop. COM interop is generally very efficient, although many factors affect its performance. These factors include the size and type of the parameters that you pass across the managed/unmanaged boundary and crossing apartment boundaries. Crossing apartment boundaries may require expensive thread switches.
- Large pages. Page size is affected by the number and the types of controls on the page. Page size is also affected by the data and images that you use to render the page. The more data you send over the network, the more bandwidth you consume. When you consume high levels of bandwidth, you are more likely to create a bottleneck.
- Failure to use data caching appropriately. Failure to cache static data, caching too much data so that the items get flushed out, caching user data instead of application-wide data, and caching infrequently used items may limit your system's performance and scalability.
- Failure to use output caching appropriately. If you do not use output caching or if you use it incorrectly, you can add avoidable strain to your Web server.
- Inefficient rendering. Interspersing HTML and server code, running unnecessary initialization code on page postback, and late-bound data binding can all cause significant rendering overhead. This may decrease both the perceived and the actual page performance.
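As a sketch of the excessive-allocation issue listed above, concatenating strings with += in a loop allocates a new string and copies the old contents on every iteration, while System.Text.StringBuilder appends into a reusable buffer (the loop bounds and initial capacity here are illustrative):

```csharp
using System.Text;

// Inefficient: each += allocates a new string and copies the old contents,
// creating heavy garbage collection pressure on a per-request basis.
string html = "";
for (int i = 0; i < 1000; i++)
{
    html += "<td>" + i + "</td>";
}

// Better: StringBuilder grows an internal buffer and allocates far less.
StringBuilder sb = new StringBuilder(16 * 1024);  // preset a rough capacity
for (int i = 0; i < 1000; i++)
{
    sb.Append("<td>").Append(i).Append("</td>");
}
string html2 = sb.ToString();
```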
Design Considerations
Building high-performance ASP.NET applications is significantly easier if you design with performance in mind. Make sure you develop a performance plan from the outset of your project. Never try to add performance as a post-build step. Also, use an iterative development process that incorporates constant measuring between iterations.
By following best practice design guidelines, you significantly increase your chances of creating a high-performance Web application. Consider the following design guidelines:
- Consider security and performance.
- Partition your application logically.
- Evaluate affinity.
- Reduce round trips.
- Avoid blocking on long-running tasks.
- Use caching.
- Avoid unnecessary exceptions.
Consider Security and Performance
Your choice of authentication scheme can affect the performance and scalability of your application. You need to consider the following issues:
- Identities. Consider the identities you are using and the way that you flow identity through your application. To access downstream resources, you can use the ASP.NET process identity or another specific service identity. Or, you can enable impersonation and flow the identity of the original caller. If you connect to Microsoft SQL Server™, you can also use SQL authentication. However, SQL authentication requires you to store credentials in the database connection string. Storing credentials in the database connection string is not recommended from a security perspective. When you connect to a shared resource, such as a database, by using a single identity, you benefit from connection pooling. Connection pooling significantly increases scalability. If you flow the identity of the original caller by using impersonation, you cannot benefit from efficient connection pooling, and you have to configure access control for multiple individual user accounts. For these reasons, it is best to use a single trusted identity to connect to downstream databases.
- Managing credentials. Consider the way that you manage credentials. You have to decide if your application stores and verifies credentials in a database, or if you want to use an authentication mechanism provided by the operating system, where credentials are stored for you in the Active Directory® directory service.
You should also determine the number of concurrent users that your application can support and the number of users that your credential store (database or Active Directory) can handle. Perform capacity planning for your application to determine whether the system can handle the anticipated load.
- Protecting credentials. Encrypting and decrypting credentials as they are sent over the network costs additional processing cycles. If you use authentication schemes such as Forms authentication or SQL authentication, credentials flow in clear text and can be read by network eavesdroppers, so decide how important it is to protect them as they cross the network. Where possible, consider authentication schemes provided by the operating system, such as NTLM or the Kerberos protocol, in which credentials are not sent over the network, avoiding the encryption overhead.
- Cryptography. If your application only needs to ensure that information is not tampered with during transit, you can use keyed hashing. Encryption is not required in this case, and it is relatively expensive compared to hashing. If you need to hide the data that you send over the network, you require encryption and probably keyed hashing to ensure data validity. When both parties can share the keys, using symmetric encryption provides improved performance in comparison to asymmetric encryption. Although larger key sizes provide greater encryption strength, performance is slower relative to smaller key sizes. You must consider this type of performance and balance the larger key sizes against security tradeoffs at design time.
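To make the keyed-hashing point above concrete, a keyed hash (HMAC) lets a receiver that shares the key detect tampering without the cost of encryption. This is only a sketch; the key and message values are illustrative, and in practice the key should come from secure configuration, not source code:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

byte[] key = Encoding.UTF8.GetBytes("shared-secret-key");        // illustrative key
byte[] message = Encoding.UTF8.GetBytes("order=42;total=100.00"); // illustrative data

// Sender computes a keyed hash and transmits it alongside the message.
HMACSHA1 hmac = new HMACSHA1(key);
byte[] mac = hmac.ComputeHash(message);

// Receiver recomputes the hash with the same shared key;
// a mismatch means the message was modified in transit.
byte[] check = new HMACSHA1(key).ComputeHash(message);
bool intact = Convert.ToBase64String(mac) == Convert.ToBase64String(check);
```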
More Information
For more information, see "Performance Comparison: Security Design Choices" on MSDN at http://msdn.microsoft.com/library/en-us/dnbda/html/bdadotnetarch15.asp.
Partition Your Application Logically
Use layering to logically partition your application logic into presentation, business, and data access layers. Not only does this help you create maintainable code, it also permits you to monitor and optimize the performance of each layer separately. A clear logical separation also offers more choices for scaling your application. Try to reduce the amount of code in your code-behind files to improve maintenance and scalability.
Do not confuse logical partitioning with physical deployment. A logical separation enables you to decide whether to locate presentation and business logic on the same server and clone the logic across servers in a Web farm, or to decide to install the logic on servers that are physically separate. The key point to remember is that remote calls incur a latency cost, and that latency increases as the distance between the layers increases.
For example, in-process calls are the quickest calls, followed by cross-process calls on the same computer, followed by remote network calls. If possible, try to keep the logical partitions close to each other. For optimum performance you should place your business and data access logic in the Bin directory of your application on the Web server.
For more information about these and other deployment issues, see "Deployment Considerations" later in this chapter.
Evaluate Affinity
Affinity can improve performance. However, affinity may affect your ability to scale. Common coding practices that introduce resource affinity include the following:
- Using in-process session state. To avoid server affinity, maintain ASP.NET session state out of process in a SQL Server database or use the out-of-process state service running on a remote machine. Alternatively, design a stateless application, or store state on the client and pass it with each request.
- Using computer-specific encryption keys. Using computer-specific encryption keys to encrypt data in a database prevents your application from working in a Web farm because common encrypted data needs to be accessed by multiple Web servers. A better approach is to use computer-specific keys to encrypt a shared symmetric key. You use the shared symmetric key to store encrypted data in the database.
More Information
For more information about how to encrypt and decrypt data in a shared database, without introducing affinity, see Chapter 14, "Building Secure Data Access," in Improving Web Application Security: Threats and Countermeasures on MSDN at http://msdn.microsoft.com/library/en-us/dnnetsec/html/ThreatCounter.asp.
Reduce Round Trips
Use the following techniques and features in ASP.NET to minimize the number of round trips between a Web server and a browser, and between a Web server and a downstream system:
- HttpResponse.IsClientConnected. Consider using the HttpResponse.IsClientConnected property to verify if the client is still connected before processing a request and performing expensive server-side operations. However, this call may need to go out of process on IIS 5.0 and can be very expensive. If you use it, measure whether it actually benefits your scenario.
- Caching. If your application is fetching, transforming, and rendering data that is static or nearly static, you can avoid redundant hits by using caching.
- Output buffering. Reduce round trips when possible by buffering your output. This approach batches work on the server and avoids chatty communication with the client. The downside is that the client does not see any rendering of the page until it is complete. If partial results help perceived responsiveness, you can call the Response.Flush method to send the output generated so far to the client. Note that if buffering is turned off, clients that connect over slow networks affect your server's response time, because your server must wait for acknowledgements from the client after the client has received the content.
- Server.Transfer. Where possible, use the Server.Transfer method instead of the Response.Redirect method. Response.Redirect sends a response header to the client that causes the client to send a new request to the redirected server by using the new URL. Server.Transfer avoids this level of indirection by simply making a server-side call.
You cannot always just replace Response.Redirect calls with Server.Transfer calls because Server.Transfer uses a new handler during the handler phase of request processing. If you need authentication and authorization checks during redirection, use Response.Redirect instead of Server.Transfer because the two mechanisms are not equivalent. When you use Response.Redirect, ensure you use the overloaded method that accepts a Boolean second parameter, and pass a value of false to ensure an internal exception is not raised.
Also note that you can only use Server.Transfer to transfer control to pages in the same application. To transfer to pages in other applications, you must use Response.Redirect.
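The guidance above can be sketched as follows in a code-behind method; the page names and URL are illustrative, and the two calls are alternatives, not a sequence:

```csharp
// Server-side transfer: no extra round trip to the client.
// Only valid for pages in the same application, and the target
// runs under a new handler without re-running authentication checks.
Server.Transfer("Results.aspx");

// Client-side redirect: use when crossing applications, or when the
// redirected request must pass back through authentication and
// authorization. Passing false suppresses the internal Response.End
// call and the ThreadAbortException it would otherwise raise.
Response.Redirect("/OtherApp/Login.aspx", false);
```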
More Information
For more information, see Knowledge Base article 312629, "PRB: ThreadAbortException Occurs If You Use Response.End, Response.Redirect, or Server.Transfer," at http://support.microsoft.com/default.aspx?scid=kb;en-us;312629.
Avoid Blocking on Long-Running Tasks
If you run long-running or blocking operations, consider using the following asynchronous mechanisms to free the Web server to process other incoming requests:
- Use asynchronous calls to invoke Web services or remote objects when there is an opportunity to perform additional parallel processing while the Web service call proceeds. Where possible, avoid synchronous (blocking) calls to Web services because outgoing Web service calls are made by using threads from the ASP.NET thread pool. Blocking calls reduce the number of threads available for processing other incoming requests.
For more information, see "Avoid Asynchronous Calls Unless You Have Additional Parallel Work" later in this chapter.
- Consider using the OneWay attribute on Web methods or remote object methods if you do not need a response. This "fire and forget" model allows the Web server to make the call and continue processing immediately. This choice may be an appropriate design choice for some scenarios.
- Queue work, and then poll for completion from the client. This permits the Web server to invoke code and then let the Web client poll the server to confirm that the work is complete.
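As a sketch of the "fire and forget" option above (the service class, method, and parameter names are illustrative), setting OneWay on a Web method's SOAP attribute tells ASP.NET to return to the caller as soon as the request is received:

```csharp
using System.Web.Services;
using System.Web.Services.Protocols;

public class AuditService : WebService
{
    // The client's call returns as soon as the server receives the
    // request; no response body is sent, so the method must return void,
    // and failures cannot be reported back to the caller.
    [WebMethod]
    [SoapDocumentMethod(OneWay = true)]
    public void LogEvent(string message)
    {
        // ... write to an audit store on the server's own time.
    }
}
```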
More Information
For more information about how to implement these mechanisms, see "Threading Guidelines" later in this chapter.
Use Caching
A well-designed caching strategy is probably the single most important performance-related design consideration. ASP.NET caching features include output caching, partial page caching, and the cache API. Design your application to take advantage of these features.
Caching can be used to reduce the cost of data access and rendering output. Knowing how your pages use or render data enables you to design efficient caching strategies. Caching is particularly useful when your Web application constantly relies on data from remote resources such as databases, Web services, remote application servers, and other remote resources. Applications that are database intensive may benefit from caching by reducing the load on the database and by increasing the throughput of the application. As a general rule, if caching is cheaper than the equivalent processing, you should use caching. Consider the following when you design for caching:
- Identify data or output that is expensive to create or retrieve. Caching data or output that is expensive to create or retrieve can reduce the costs of obtaining the data. Caching the data reduces the load on your database server.
- Evaluate the volatility. For caching to be effective, the data or output should be static or infrequently modified. Lists of countries, states, or zip codes are some simple examples of the type of data that you might want to cache. Data or output that changes frequently is usually less suited to caching but can be manageable, depending upon the need. Caching user data is typically only recommended when you use specialized caches, such as the ASP.NET session state store.
- Evaluate the frequency of use. Caching data or output that is frequently used can provide significant performance and scalability benefits. You can obtain performance and scalability benefits when you cache static or frequently modified data and output alike. For example, frequently used, expensive data that is modified on a periodic basis may still provide large performance and scalability improvements when managed correctly. If the data is used more often than it is updated, the data is a candidate for caching.
- Separate volatile data from nonvolatile data. Design user controls to encapsulate static content such as navigational aids or help systems, and keep them separate from more volatile data. This permits them to be cached. Caching this data decreases the load on your server.
- Choose the right caching mechanism. There are many different ways to cache data. Depending on your scenario, some are better than others. User-specific data is typically stored in the Session object. Static pages and some types of dynamic pages such as non-personalized pages that are served to large user sets can be cached by using the ASP.NET output cache and response caching. Static content in pages can be cached by using a combination of the output cache and user controls. The ASP.NET caching features provide a built-in mechanism to update the cache. Application state, session state, and other caching means do not provide a built-in mechanism to update the cache.
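To make the mechanisms above concrete, here is a hedged sketch combining the OutputCache page directive with the Cache API in code-behind. The durations, cache key, and LoadCountriesFromDatabase helper are illustrative assumptions, not part of the framework:

```csharp
// Page directive (placed in the .aspx file): cache the rendered output
// for 60 seconds, with one cached copy per distinct set of parameters.
// <%@ OutputCache Duration="60" VaryByParam="*" %>

// Cache API (in code-behind): cache expensive, infrequently changing
// data with a sliding expiration, rebuilding it only on a cache miss.
DataSet countries = (DataSet)Cache["countries"];
if (countries == null)
{
    countries = LoadCountriesFromDatabase();   // illustrative helper
    Cache.Insert("countries", countries, null,
                 Cache.NoAbsoluteExpiration,
                 TimeSpan.FromMinutes(30));    // evict after 30 idle minutes
}
```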
Avoid Unnecessary Exceptions
Exceptions add significant overhead to your application. Do not use exceptions to control logic flow, and design your code to avoid exceptions where possible. For example, validate user input, and check for known conditions that can cause exceptions. Also, design your code to fail early to avoid unnecessary processing.
If your application does not handle an exception, it propagates up the stack and is ultimately handled by the ASP.NET exception handler. When you design your exception handling strategy, consider the following:
- Design code to avoid exceptions. Validate user input and check for known conditions that can cause exceptions.
- Avoid using exceptions to control logic flow. Do not use exception handling to control regular application flow; reserve exceptions for genuinely exceptional conditions.
- Avoid relying on global handlers for all exceptions. Exceptions cause the runtime to manipulate and walk the stack. The further the runtime traverses the stack searching for an exception handler, the more expensive the exception is to process.
- Catch and handle exceptions close to where they occur. When possible, catch and handle exceptions close to where they occur. This avoids excessive and expensive stack traversal and manipulation.
- Do not catch exceptions you cannot handle. If your code cannot handle an exception, use a try/finally block to ensure that you close resources, regardless of whether an exception occurs. When you use a try/finally block, your resources are cleaned up in the finally block if an exception occurs, and the exception is permitted to propagate up to an appropriate handler.
- Fail early to avoid expensive work. Design your code to avoid expensive or long-running work if a dependent task fails.
- Log exception details for administrators. Implement an exception logging mechanism that captures detailed information about exceptions so that administrators and developers can identify and remedy any issues.
- Avoid showing too much exception detail to users. Avoid displaying detailed exception information to users, to help maintain security and to reduce the amount of data that is sent to the client.
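The "do not catch exceptions you cannot handle" guideline above looks like this in code (the connection string variable is illustrative): a try/finally block guarantees cleanup while letting the exception propagate to a handler that can actually deal with it.

```csharp
using System.Data.SqlClient;

SqlConnection conn = new SqlConnection(connectionString);  // illustrative
try
{
    conn.Open();
    // ... run commands; a SqlException raised here propagates up the
    // stack to a handler that can respond meaningfully.
}
finally
{
    // Runs whether or not an exception occurs, so the pooled
    // connection is always returned for reuse.
    conn.Close();
}
```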
Implementation Considerations
When you move from application design to application development, consider the technical details of your ASP.NET application. Key ASP.NET performance measures include response times, speed of throughput, and resource management.
You can improve response times by reducing page sizes, reducing your reliance on server controls, and using buffering to reduce chatty communication with the client. You can avoid unnecessary work by caching resources.
Throughput can be improved by making effective use of threads. Tune the thread pool to reduce connections, and to avoid blocking threads because blocking threads reduce the number of available worker threads.
Poor resource management can place excessive loads on server CPU and memory. You can improve resource utilization by effectively using pooled resources, by explicitly closing or disposing resources you open, and by using efficient string management.
When you follow best practice implementation guidelines, you increase the performance of your application by using well-engineered code and a well-configured application platform. The following sections describe performance considerations for ASP.NET features and scenarios.
Threading Explained
ASP.NET processes requests by using threads from the .NET thread pool. The thread pool maintains a pool of threads that have already incurred the thread initialization costs. Therefore, these threads are easy to reuse. The .NET thread pool is also self-tuning. It monitors CPU and other resource utilization, and it adds new threads or trims the thread pool size as needed. You should generally avoid creating threads manually to perform work. Instead, use threads from the thread pool. At the same time, it is important to ensure that your application does not perform lengthy blocking operations that could quickly lead to thread pool starvation and rejected HTTP requests.
Formula for Reducing Contention
The formula for reducing contention can give you a good empirical start for tuning the ASP.NET thread pool. Consider using the Microsoft product group-recommended settings that are shown in Table 6.1 if the following conditions are true:
- You have available CPU.
- Your application performs I/O bound operations such as calling a Web method or accessing the file system.
- The ASP.NET Applications/Requests In Application Queue performance counter indicates that you have queued requests.
Table 6.1: Recommended Threading Settings for Reducing Contention
| Configuration setting | Default value (.NET Framework 1.1) | Recommended value |
|---|---|---|
| maxconnection | 2 | 12 * #CPUs |
| maxIoThreads | 20 | 100 |
| maxWorkerThreads | 20 | 100 |
| minFreeThreads | 8 | 88 * #CPUs |
| minLocalRequestFreeThreads | 4 | 76 * #CPUs |
To apply the formula, configure the following items in the Machine.config file. Apply the recommended changes described below across all the settings, not in isolation. For a detailed description of each of these settings, see "Thread Pool Attributes" in Chapter 17, "Tuning .NET Application Performance."
- Set maxconnection to 12 * # of CPUs. This setting controls the maximum number of outgoing HTTP connections that you can initiate from a client. In this case, ASP.NET is the client.
- Set maxIoThreads to 100. This setting controls the maximum number of I/O threads in the .NET thread pool. This number is automatically multiplied by the number of available CPUs.
- Set maxWorkerThreads to 100. This setting controls the maximum number of worker threads in the thread pool. This number is also automatically multiplied by the number of available CPUs.
- Set minFreeThreads to 88 * # of CPUs. The worker process uses this setting to queue all incoming requests if the number of available threads in the thread pool falls below this value. The setting effectively limits the number of requests that can run concurrently to maxWorkerThreads minus minFreeThreads, that is, to 12 requests per CPU with the recommended values.
- Set minLocalRequestFreeThreads to 76 * # of CPUs. The worker process uses this setting to queue requests from localhost (where a Web application sends requests to a local Web service) if the number of available threads in the thread pool falls below this number. This setting is similar to minFreeThreads, but it only applies to requests that originate on the local computer.
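Applied together, the settings above land in three different sections of Machine.config. This fragment is a sketch for a single-CPU computer (maxIoThreads and maxWorkerThreads are per-CPU values that the runtime multiplies automatically; minFreeThreads, minLocalRequestFreeThreads, and maxconnection are absolute values you must compute per computer):

```xml
<configuration>
  <system.web>
    <!-- Per-CPU values; the runtime multiplies by the CPU count. -->
    <processModel maxIoThreads="100" maxWorkerThreads="100" />
    <!-- Absolute values: 88 and 76 multiplied by the number of CPUs (1 shown). -->
    <httpRuntime minFreeThreads="88" minLocalRequestFreeThreads="76" />
  </system.web>
  <system.net>
    <connectionManagement>
      <!-- 12 * number of CPUs (1 CPU shown). -->
      <add address="*" maxconnection="12" />
    </connectionManagement>
  </system.net>
</configuration>
```

Only the relevant attributes are shown; preserve the other attributes already present on these elements in your Machine.config.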
Note The recommendations that are provided in this section are not rules. They are a starting point. Test to determine the appropriate settings for your scenario. If you move your application to a new computer, ensure that you recalculate and reconfigure the settings based on the number of CPUs in the new computer.
If your ASPX Web page makes multiple calls to Web services on a per-request basis, apply the recommendations.
The recommendation to limit the ASP.NET runtime to 12 threads for handling incoming requests is most applicable for quick-running operations. The limit also reduces the number of context switches. If your application makes long-running calls, first consider the design alternatives presented in the "Avoid Blocking on Long-Running Tasks" section. If the alternative designs cannot be applied in your scenario, start with 100 maxWorkerThreads, and keep the defaults for minFreeThreads. This ensures that requests are not serialized in this particular scenario. Next, if you see high CPU utilization and context-switching when you test your application, test by reducing maxWorkerThreads or by increasing minFreeThreads.
The following occurs if the formula has worked:
- CPU utilization increases.
- Throughput increases according to the ASP.NET Applications\Requests/Sec performance counter.
- Requests in the application queue decrease according to the ASP.NET Applications/Requests In Application Queue performance counter.
If using the recommended settings does not improve your application performance, you may have a CPU bound scenario. By adding more threads you increase thread context switching. For more information, see "ASP.NET Tuning" in Chapter 17, "Tuning .NET Application Performance."
More Information
For more information, see Knowledge Base article 821268, "PRB: Contention, Poor Performance, and Deadlocks When You Make Web Service Requests from ASP.NET Applications," at http://support.microsoft.com/default.aspx?scid=kb;en-us;821268.
Threading Guidelines
This section discusses guidelines that you can use to help improve threading efficiency in ASP.NET. The guidelines include the following:
- Tune the thread pool by using the formula to reduce contention.
- Consider minIoThreads and minWorkerThreads for burst load.
- Do not create threads on a per-request basis.
- Avoid blocking threads.
- Avoid asynchronous calls unless you have additional parallel work.
Tune the Thread Pool by Using the Formula to Reduce Contention
If you have available CPU and if requests are queued, configure the ASP.NET thread pool. For more information about how to do this, see "Formula for Reducing Contention" in the preceding "Threading Explained" section. The recommendations in "Threading Explained" are a starting point.
When your application uses the common language runtime (CLR) thread pool, it is important to tune the thread pool correctly. Otherwise, you may experience contention issues, performance problems, or possible deadlocks. Your application may be using the CLR thread pool if the following conditions are true:
- Your application makes Web service calls.
- Your application uses the WebRequest or HttpWebRequest classes to make outgoing Web requests.
- Your application explicitly queues work to the thread pool by calling the QueueUserWorkItem method.
More Information
For more information, see Knowledge Base article 821268, "PRB: Contention, Poor Performance, and Deadlocks When You Make Web Service Requests from ASP.NET Applications," at http://support.microsoft.com/default.aspx?scid=kb;en-us;821268.
Consider minIoThreads and minWorkerThreads for Burst Load
If your application experiences burst loads where there are prolonged periods of inactivity between the burst loads, the thread pool may not have enough time to reach the optimal level of threads. A burst load occurs when a large number of users connect to your application suddenly and at the same time. The minIoThreads and minWorkerThreads settings enable you to configure a minimum number of worker threads and I/O threads for load conditions.
At the time of this writing, you need a supported fix to configure these settings. For more information, see the following Knowledge Base articles:
- 810259, "FIX: SetMinThreads and GetMinThreads API Added to Common Language Runtime ThreadPool Class," at http://support.microsoft.com/default.aspx?scid=kb;en-us;810259
- 827419, "PRB: Sudden Requirement for a Larger Number of Threads from the ThreadPool Class May Result in Slow Computer Response Time," at http://support.microsoft.com/default.aspx?scid=kb;en-us;827419
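After the fix described in Knowledge Base article 810259 is applied, the minimum thread counts can also be raised programmatically through the ThreadPool.SetMinThreads API that the fix adds. The following sketch is illustrative only; the values shown are not recommendations, so test to find the right numbers for your burst-load profile.

```csharp
using System;
using System.Threading;

class ThreadPoolWarmup
{
    static void Main()
    {
        int workerThreads, completionPortThreads;
        ThreadPool.GetMinThreads(out workerThreads, out completionPortThreads);
        Console.WriteLine("Current minimums: {0} worker, {1} I/O",
                          workerThreads, completionPortThreads);

        // Raise the minimums so a sudden burst load does not have to wait
        // for the thread pool's gradual thread-injection algorithm.
        // The values here are illustrative; test to determine yours.
        if (!ThreadPool.SetMinThreads(10, 10))
        {
            Console.WriteLine("SetMinThreads rejected the requested values.");
        }
    }
}
```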
Do Not Create Threads on a Per-Request Basis
Creating threads is an expensive operation that requires initialization of both managed and unmanaged resources. You should avoid manually creating threads on each client request for server-based applications such as ASP.NET applications and Web services.
Consider using asynchronous calls if you have work that is not CPU bound that can run in parallel with the call. For example, this might include disk I/O bound or network I/O bound operations such as reading or writing files, or making calls to another Web method.
You can use the infrastructure provided by the .NET Framework to perform asynchronous operations by calling the BeginOperationName and EndOperationName methods (where OperationName is the name of the corresponding synchronous method). If this asynchronous calling pattern is not an option, consider using threads from the CLR thread pool. The following code fragment shows how you queue a method to run on a separate thread from the thread pool.
WaitCallback methodTarget = new WaitCallback(myClass.UpdateCache);
bool isQueued = ThreadPool.QueueUserWorkItem(methodTarget);
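If the queued method needs input, the two-argument overload of QueueUserWorkItem passes a state object to it. The following sketch expands the fragment above; the CacheUpdater class and the state value are hypothetical names for illustration.

```csharp
using System.Threading;

class CacheUpdater
{
    // WaitCallback requires this signature: void (object state).
    public static void UpdateCache(object state)
    {
        string region = (string)state;
        // Refresh the cached data for the given region here.
    }

    public static void QueueUpdate()
    {
        // The second argument is handed to UpdateCache as its state parameter.
        bool isQueued = ThreadPool.QueueUserWorkItem(
            new WaitCallback(UpdateCache), "northwind");
    }
}
```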
Avoid Blocking Threads
Any operation that you perform from an ASP.NET page that causes the current request thread to block means that one less worker thread from the thread pool is available to service other ASP.NET requests.
Avoid Asynchronous Calls Unless You Have Additional Parallel Work
Make asynchronous calls from your Web application only when your application has additional parallel work to perform while it waits for the completion of the asynchronous calls, and the work performed by the asynchronous call is not CPU bound. Internally, the asynchronous calls use a worker thread from the thread pool; in effect, you are using additional threads.
At the same time that you make asynchronous I/O calls, such as calling a Web method or performing file operations, the thread that makes the call is released so that it can perform additional work, such as making other asynchronous calls or performing other parallel tasks. You can then wait for completion of all of those tasks. Making several asynchronous calls that are not CPU bound and then letting them run simultaneously can improve throughput.
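As a sketch of this pattern, the following fragment starts an asynchronous Web request with HttpWebRequest.BeginGetResponse, performs other work while the network I/O is in flight, and blocks only when the response is actually needed. The URL is an illustrative placeholder.

```csharp
using System;
using System.Net;

class ParallelWork
{
    static void Main()
    {
        HttpWebRequest request =
            (HttpWebRequest)WebRequest.Create("http://localhost/service.asmx");

        // Start the I/O without blocking the current thread.
        IAsyncResult asyncResult = request.BeginGetResponse(null, null);

        // Perform other, non-CPU-bound parallel work here while the
        // network I/O completes in the background.

        // Block only at the point where the response is required.
        using (WebResponse response = request.EndGetResponse(asyncResult))
        {
            Console.WriteLine(response.ContentLength);
        }
    }
}
```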
More Information
For more information about ASP.NET threading and asynchronous communication, see "ASP.NET Pipeline: Use Threads and Build Asynchronous Handlers in Your Server-Side Web Code" at http://msdn.microsoft.com/msdnmag/issues/03/06/Threading/default.aspx.
Resource Management
Poor resource management from pages and controls is one of the primary causes of poor Web application performance. Poor resource management can place excessive loads on CPUs and can consume vast amounts of memory. When CPU or memory thresholds are exceeded, applications might be recycled or blocked until the load on the server is lower. For more information, see "Resource Management" in Chapter 3, "Design Guidelines for Application Performance." Use the following guidelines to help you manage your resources efficiently:
- Pool resources.
- Explicitly call Dispose or Close on resources you open.
- Do not cache or block on pooled resources.
- Know your application allocation pattern.
- Obtain resources late and release them early.
- Avoid per-request impersonation.
Pool Resources
ADO.NET provides built-in database connection pooling that is fully automatic and requires no specific coding. Make sure that you use the same connection string for every request to access the database.
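The following sketch illustrates the point: because every caller uses the identical connection string, all callers draw connections from the same pool. The connection string itself is a placeholder.

```csharp
using System.Data.SqlClient;

class OrderData
{
    // One shared connection string means one shared connection pool.
    // Building the string differently per request (for example, varying
    // the casing or ordering of its keys) creates separate pools.
    private const string ConnString =
        "server=(local);database=Northwind;Integrated Security=SSPI";

    public static void LoadOrders()
    {
        using (SqlConnection conn = new SqlConnection(ConnString))
        {
            conn.Open();
            // Execute commands here; Dispose returns the connection to the pool.
        }
    }
}
```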
Make sure you release pooled resources so that they can be returned to the pool as soon as possible. Do not cache pooled resources or make lengthy blocking calls while you own the pooled resource, because this means that other clients cannot use the resource in the meantime. Also, avoid holding objects across multiple requests.
Explicitly Call Dispose or Close on Resources You Open
If you use objects that implement the IDisposable interface, make sure you call the Dispose method of the object, or the Close method if one is provided. Failing to call Close or Dispose prolongs the life of the object in memory long after the client stops using it. This defers the cleanup and can contribute to memory pressure. Database connections and files are examples of shared resources that should be explicitly closed. The finally clause of a try/finally block is a good place to ensure that the Close or Dispose method of the object is called. This technique is shown in the following Visual Basic® .NET code fragment.
Try
    conn.Open()
    ...
Finally
    If Not (conn Is Nothing) Then
        conn.Close()
    End If
End Try
In Visual C#®, you can wrap resources that should be disposed in a using block. When the using block completes, Dispose is called on the object listed in the parentheses of the using statement, as the following code fragment shows.
SqlConnection conn = new SqlConnection(connString);
using (conn)
{
conn.Open();
. . .
} // Dispose is automatically called on the connection object conn here.
More Information
For more information, see "Finalize and Dispose Guidelines" in Chapter 5, "Improving Managed Code Performance."
Do Not Cache or Block on Pooled Resources
If your application uses resources that are pooled, release the
resource back to the pool. Caching the pooled resources or making
blocking calls from a pooled resource reduces the availability of the
pooled resource for other users. Pooled resources include database
connections, network connections, and Enterprise Services pooled
objects.
Know Your Application Allocation Pattern
Poor memory allocation patterns may cause the garbage collector to
spend most of its time collecting objects from Generation 2. Collecting
objects from Generation 2 leads to poor application performance and
high loads on the CPU. Coding techniques that cause large numbers of temporary allocations
during a short interval put pressure on the garbage collector. For
example, when you perform a large number of string concatenation
operations by using the += operator in a tight loop, or when you use String.Split
for every request, you may put pressure on the garbage collector. All
of these operations create hidden objects (temporary allocations). Use
tools such as the CLR Profiler and System Monitor to better understand
allocation patterns in your application.
More Information
For more information about the mechanics of garbage collection and generations, see "Garbage Collection Explained" in Chapter 5, "Improving Managed Code Performance."
Obtain Resources Late and Release Them Early
Open critical, limited, and shared resources just before you need
them, and release them as soon as you can. Critical, limited, and
shared resources include resources such as database connections,
network connections, and transactions.
Avoid Per-Request Impersonation
Identify and, if necessary, authorize the caller at the Web server.
Obtain access to system resources or application-wide resources by
using the identity of the Web application process or by using a fixed
service account. System resources are resources such as event logs.
Application-wide resources are resources such as databases. Avoiding
per-request impersonation minimizes security overhead and maximizes
resource pooling.
Pages
The efficiency of your ASP.NET page and code-behind page logic plays
a large part in determining the overall performance of your Web
application. The following guidelines relate to the development of
individual .aspx and .ascx Web page files.
Trim Your Page Size
Processing large page sizes increases the load on the CPU, increases
the consumption of network bandwidth, and increases the response times
for clients. Avoid designing and developing large pages that accomplish
multiple tasks, particularly where only a few tasks are normally
executed for each request. Where possible, logically partition your pages.
To trim your page size, you can remove white space characters, such as tabs and spaces, that appear between HTML elements before you send a response to the client. For example, consider two versions of the same HTML table: one that contains white space between elements and one that does not. Save the two tables in separate text files by using Notepad, and
then view the size of each file. The second table saves several bytes
simply by removing the white space. If you had a table with 1,000 rows,
you could make a significant saving just by removing the white space.
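To reconstruct the kind of comparison the text describes, the two fragments below render identically in the browser, but the first carries extra tab, space, and line-break characters in the response. The table content is illustrative.

```html
<!-- With white space between elements -->
<table>
    <tr>
        <td>hello</td>
    </tr>
</table>

<!-- Without white space -->
<table><tr><td>hello</td></tr></table>
```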
In intranet scenarios, removing white space may not represent a huge
saving. However, in an Internet scenario that involves slow clients,
removing white space can improve response times dramatically. You can
also consider HTTP compression; however, HTTP compression affects CPU
utilization. You cannot always expect to design your pages in this way.
Therefore, the most effective method for removing the white space is to
use an Internet Server API (ISAPI) filter or an HttpModule object. An ISAPI filter is faster than an HttpModule;
however, the ISAPI filter is more complex to develop and increases CPU
utilization. You might also consider IIS compression. IIS compression
can be added by using a metabase entry.
Additionally, you can trim page size by disabling view state when you do not need it and by limiting the use of graphics.
More Information
For more information about IIS compression, see Knowledge Base article 322603, "HOW TO: Enable ASPX Compression in IIS," at http://support.microsoft.com/default.aspx?scid=kb;en-us;322603.
Enable Buffering
Because buffering is enabled by default, ASP.NET batches work on the server and avoids chatty communication with the client. The disadvantage of this approach is that, for a slow page, the client does not see any
rendering of the page until it is complete. You can use Response.Flush to mitigate this situation because Response.Flush
sends output up to that point to the client. Clients that connect over
slow networks affect the response time of your server because your
server has to wait for acknowledgements from the client to proceed.
Because the response headers are sent with the first flush, you cannot modify them afterward. If buffering is turned off, you can enable it by setting the buffer attribute of the <pages> element in configuration, by setting the Buffer attribute of the @ Page directive, or by setting Response.BufferOutput to true in code.
When you run your ASP.NET application by using the ASP.NET process
model, it is even more important to have buffering enabled. The ASP.NET
worker process first sends responses to IIS in the form of response
buffers. After the ISAPI filter is running, IIS receives the response
buffers. These response buffers are 31 KB in size. After IIS receives
the response buffers, it then sends that actual response back to the
client. With buffering disabled, instead of using the entire 31-KB
buffer, ASP.NET can only send a few characters to the buffer. This
causes extra CPU processing in both ASP.NET as well as in IIS. This may
also cause memory consumption in the IIS process to increase
dramatically.
Use Page.IsPostBack to Minimize Redundant Processing
Use the Page.IsPostBack property to ensure that you only perform page initialization logic when a page is first loaded and not in response to client postbacks. The following code fragment shows how to use the Page.IsPostBack property.
private void Page_Load(object sender, System.EventArgs e)
{
    if (!Page.IsPostBack)
    {
        // Perform first-request initialization here, such as data binding.
    }
}
Partition Page Content Efficiently
Partition the content in your page to increase caching potential.
Partitioning your page content enables you to make different decisions
about how you retrieve, display, and cache the content. You can use
user controls to segregate static content, such as navigational items,
menus, advertisements, copyrights, page headers, and page footers. You
should also separate dynamic content and user-specific content for
maximum flexibility when you want to cache content.
More Information
For more information, see "Partial Page or Fragment Caching" later in this chapter.
Ensure Pages Are Batch Compiled
As the number of assemblies that are loaded in a process grows, the
virtual address space can become fragmented. When the virtual address
space is fragmented, out-of-memory conditions are more likely to occur.
To prevent a large number of assemblies from loading in a process,
ASP.NET tries to compile all pages that are in the same directory into
a single assembly. This occurs when the first request for a page in
that directory occurs. To make sure that pages are batch compiled, do not mix multiple languages, such as C# and Visual Basic .NET, in pages in the same directory, because ASP.NET compiles a separate assembly for each language in a directory.
Ensure Debug Is Set to False
When debug is set to true, batch compilation is disabled, pages do not time out, and extra files are generated in the Temporary ASP.NET Files folder. Before you run performance tests and before you move your application into production, be sure that debug is set to false in the Web.config file and at the page level. By default, debug is set to false
at the page level. If you do need to set this attribute during
development time, it is recommended that you set it at the Web.config
file level, as shown in the following fragment.
<compilation debug="false" />
You can also set debug to false at the page level by using the @ Page directive.
<%@ Page debug="false" %>
Optimize Expensive Loops
Expensive loops in any application can cause performance problems.
To reduce the overhead that is associated with code inside loops, avoid repeated field or property access inside the loop, move loop-invariant code out of the loop, and prefer for over foreach in performance-critical code paths.
More Information
For more information about the recommendations in this section, see "Iterating and Looping" in Chapter 5, "Improving Managed Code Performance."
Consider Using Server.Transfer Instead of Response.Redirect
Response.Redirect sends an HTTP redirect to the client that makes the client send a new request to the server by using the new URL. Server.Transfer avoids this round trip by making a server-side call. When you use Server.Transfer,
the URL in the browser does not change, and load test tools may
incorrectly report the page size because different pages are rendered
for the same URL.
The Server.Transfer, Response.Redirect, and Response.End methods all raise ThreadAbortException exceptions, because they internally call Response.End. Consider using the overloaded Response.Redirect method and passing false as the second parameter to suppress the internal call to Response.End.
More Information
For more information, see Knowledge Base article 312629, "PRB:
ThreadAbortException Occurs If You Use Response.End, Response.Redirect,
or Server.Transfer," at http://support.microsoft.com/default.aspx?scid=kb;en-us;312629.
Use Client-Side Validation
Prevalidating data can help reduce
to process a user's request. In ASP.NET, you can use validation
controls to implement client-side validation of user input. Validation controls perform client-side validation to reduce round trips and also perform server-side validation to guard against spoofed requests.
Server Controls
You can use server controls to encapsulate and to reuse common
functionality. Server controls provide a clean programming abstraction
and are the recommended way to build ASP.NET applications. When server
controls are used properly, they can improve output caching and code
maintenance. The main areas you should review for performance
optimizations are view state and control composition. Use the following
guidelines when you develop server controls:
- Identify the use of view state in your server controls.
- Use server controls where appropriate.
- Avoid creating deep hierarchies of controls.
Identify the Use of View State in Your Server Controls
View state is serialized and deserialized on the server. To save CPU
cycles, reduce the amount of view state that your application uses.
Disable view state if you do not need it; for example, if your page does not post back, if you do not handle server control events, or if you repopulate the control on every request.
More Information
For more information about view state, see "View State" later in this chapter.
Use Server Controls Where Appropriate
The HTTP protocol is stateless; however, server controls provide a
rich programming model that manages state between page requests by
using view state. Server controls require a fixed amount of processing
to establish the control and all of its child controls. This makes
server controls relatively expensive compared to HTML controls or
possibly static text. Server controls are particularly expensive when a page contains a large number of them or when they carry a large amount of view state.
When you do not need rich interaction, replace server controls with
an inline representation of the user interface that you want to
present. You might be able to replace a server control under the
following conditions: you do not need to retain state across postbacks, the control displays read-only data, or you do not need programmatic server-side access to the control. Alternatives to server controls include simple rendering, HTML elements, inline Response.Write
calls, and raw inline angle brackets (<% %>). It is essential to
balance your tradeoffs. Avoid over optimization if the overhead is
acceptable and if your application is within the limits of its
performance objectives. Deeply nested hierarchies of controls compound the cost of creating
a server control and its child controls. Deeply nested hierarchies
create extra processing that could be avoided by using a different
design that uses inline controls, or by using a flatter hierarchy of
server controls. This is especially important when you use list
controls such as Repeater, DataList, and DataGrid, because they create additional child controls in the container. For example, consider a Repeater control whose ItemTemplate contains a handful of server controls. Assuming there are 50 items in the data source, if you enable tracing for the page that contains the Repeater control, you would see that the page actually contains more than 200 controls.
Table 6.2: Partial Repeater Control Hierarchy
The ASP.NET list controls are designed to handle many different
scenarios and may not be optimized for your scenario. In situations
where performance is critical, you can choose from the following
options: use a lighter-weight control, such as Repeater instead of DataGrid, or render the output yourself by using Response.Write.
More Information
For general background information about server controls, see
Knowledge Base article 306459, "INFO: ASP.NET Server Controls
Overview," at http://support.microsoft.com/default.aspx?scid=kb;en-us;306459.
Data Binding
Data binding is another common area that often leads to performance
problems if it is used inefficiently. If you use data binding, consider
the following recommendations:
- Avoid using Page.DataBind.
- Minimize calls to DataBinder.Eval.
Avoid Using Page.DataBind
Calling Page.DataBind invokes the page-level method. The page-level method in turn calls the DataBind method of every control on the page that supports data binding. Instead of calling the page-level DataBind, call DataBind on specific controls. Both approaches are shown in the following examples. The following line calls the page-level DataBind, which in turn recursively calls DataBind on each control.
DataBind();
The following line calls DataBind on a specific control.
yourServerControl.DataBind();
Minimize Calls to DataBinder.Eval
The DataBinder.Eval method uses reflection to
evaluate the arguments that are passed in and to return the results. If
you have a table that has 100 rows and 10 columns, you call DataBinder.Eval 1,000 times if you use DataBinder.Eval on each column. Limiting the use of DataBinder.Eval during data binding operations significantly improves page performance. Consider an ItemTemplate element within a Repeater control that calls DataBinder.Eval for each field of each row. Instead of using DataBinder.Eval, you can use explicit casting. For example, when you bind to a DataView, you can replace DataBinder.Eval(Container.DataItem, "field1") with ((DataRowView)Container.DataItem)["field1"]. You can gain even better performance with explicit casting if you use a DataReader to bind your control and use the specialized methods to retrieve your data; in that case, cast the Container.DataItem as a DbDataRecord. The explicit casting depends on the type of data source you are binding to.
Caching Explained
Caching avoids redundant work. If you use caching properly, you can
avoid unnecessary database lookups and other expensive operations. You
can also reduce latency. The ASP.NET cache is a simple, scalable, in-memory caching service
provided to ASP.NET applications. It provides a time-based expiration
facility, and it also tracks dependencies on external files,
directories, or other cache keys. It also provides a mechanism to
invoke a callback function when an item expires in the cache. The cache
automatically removes items based on a least recently used (LRU)
algorithm, a configured memory limit, and the CacheItemPriority enumerated value of the items in the cache. Cached data is also lost when your application or worker process recycles. ASP.NET provides the following three caching techniques:
- Cache API
- Output caching
- Partial page or fragment caching
These caching techniques are briefly summarized in the following sections.
Cache API
You should use the cache API to programmatically cache
application-wide data that is shared and accessed by multiple users.
The cache API is also a good place for data that you need to manipulate
in some way before you present the data to the user. This includes data
such as strings, arrays, collections, and data sets. The cache API works well for shared, frequently accessed data that is expensive to create or retrieve. Avoid using the cache API for data that is specific to a single user (use session state instead) or for data that you need only for the lifetime of a single request.
The cache API permits you to insert items in the cache that have a
dependency upon external conditions. Cached items are automatically
removed from the cache when the external conditions change. You use
this feature by using a third parameter on the Cache.Insert method that accepts an instance of a CacheDependency class. The CacheDependency
class has eight different constructors that support various dependency
scenarios. These constructors include file-based, time-based, and
priority-based dependencies, together with dependencies that are based
on existing dependencies. You can also run code before serving data from the cache. For
example, you might want to serve cached data for certain customers, but
for others you might want to serve data that is updated in real time.
You can perform this type of logic by using the HttpCachePolicy.AddValidationCallback method.
Output Caching
The output cache enables you to cache the contents of entire pages
for a specific duration of time. It enables you to cache multiple
variations of the page based on query strings, headers, and userAgent
strings. The output cache also enables you to determine where to cache
the content, for example on a proxy, server, or a client. Like the
cache API, output caching enables you to save time retrieving data.
Output caching also saves time rendering content. You should enable
output caching on dynamically generated pages that do not contain
user-specific data in scenarios where you do not need to update the
view on every request. Output caching is ideal for pages whose content is relatively static or varies on a small, predictable set of criteria. Avoid output caching for pages that contain user-specific data or content that must be regenerated on every request; for those pages, consider fragment caching of the static portions instead.
Partial Page or Fragment Caching
Partial page or fragment caching is a subset of output caching. It
includes an additional attribute that allows you to cache a variation
based on the properties of the user control (.ascx file). Fragment caching is implemented by using user controls in conjunction with the @OutputCache
directive. Use fragment caching when caching the entire content of a
page is not practical. If you have a mixture of static, dynamic, and
user-specific content in your page, partition your page into separate
logical regions by creating user controls. These user controls can then
be cached, independent of the main page, to reduce processing time and
to increase performance. Good candidates for fragment caching include page headers, footers, menus, and other content that is shared across pages or users. Avoid fragment caching for content that is unique to each user or that changes on every request. If your application uses the same user control on multiple pages, make the pages share the same instance by setting the Shared attribute of the user control @ OutputCache directive to true. This can save a significant amount of memory.
Caching Guidelines
Consider the following guidelines when you are designing a caching strategy:
- Separate dynamic data from static data in your pages.
- Configure the memory limit.
- Cache the right data.
- Refresh your cache appropriately.
- Cache the appropriate form of data.
- Use output caching to cache relatively static pages.
Separate Dynamic Data from Static Data in Your Pages
Partial page caching enables you to cache parts of a page by using
user controls. Use user controls to partition your page. For example,
consider a simple page that contains static, dynamic, and user-specific information. You can partition the page by moving each type of content into its own user control and then applying the @OutputCache directive to the user controls that can be cached. By partitioning the content in this way, you can cache
selected portions of the page to reduce processing and rendering time.
Configure the Memory Limit
Configuring and tuning the memory limit is critical for the cache to
perform optimally. The ASP.NET cache starts trimming the cache based on
an LRU algorithm and the CacheItemPriority enumerated
value assigned to the item after memory consumption is within 20
percent of the configured memory limit. If the memory limit is set too
high, it is possible for the process to be recycled unexpectedly. Your
application might also experience out-of-memory exceptions. If the
memory limit is set too low, it could increase the amount of time spent
performing garbage collections, which decreases overall performance. Empirical testing shows that the likelihood of receiving
out-of-memory exceptions increases when private bytes exceed 800
megabytes (MB). Keep in mind that the 800-MB figure applies only to .NET Framework 1.0; with .NET Framework 1.1 and the /3GB switch, you can go up to 1,800 MB.
When you use the ASP.NET process model, you configure the memory limit in the Machine.config file as follows.
<processModel enable="true" memoryLimit="50" />
This value controls the percentage of physical memory that the
worker process is allowed to consume. The process is recycled if this
value is exceeded. In the previous sample, if there are 2 gigabytes
(GB) of RAM on your server, the process recycles when the memory used by the worker process exceeds 50 percent of the physical RAM; in this case, 1 GB. You monitor the worker process memory
by using the Process performance counter object and the Private Bytes counter.
More Information
For more information about how to tune the memory limit and about
the /3GB switch, see "Configure the Memory Limit" and "/3GB Switch" in
Chapter 17, "Tuning .NET Application Performance."
Cache the Right Data
It is important to cache the right data. If you cache the wrong data, you may adversely affect performance. Cache application-wide data and data that is used by multiple users.
Cache static data and dynamic data that is expensive to create or
retrieve. Data that is expensive to retrieve and that is modified on a
periodic basis can still provide performance and scalability
improvements when managed properly. Caching data even for a few seconds
can make a big difference to high volume sites. Datasets or custom
classes that use optimized serialization for data binding are also good
candidates for caching. If the data is used more often than it is
updated, it is also a candidate for caching. Do not cache expensive resources that are shared, such as database connections, because this creates contention. Avoid storing DataReader
objects in the cache because these objects keep the underlying
connections open. It is better to pool these resources. Do not cache
per-user data that spans requests; use session state for that. If you
need to store and to pass request-specific data for the life of the
request instead of repeatedly accessing the database for the same
request, consider storing the data in the HttpContext.Current.Items collection instead of the cache.
Refresh Your Cache Appropriately
Just because your data updates every ten minutes does not mean that
your cache needs to be updated every ten minutes. Determine how
frequently you have to update the data to meet your service level
agreements. Avoid repopulating caches for data that changes frequently.
If your data changes frequently, that data may not be a good candidate
for caching.
Cache the Appropriate Form of Data
If you want to cache rendered output, you should consider using
output caching or fragment caching. If the rendered output is used
elsewhere in the application, use the cache API to store the rendered
output. If you need to manipulate the data, then cache the data by
using the cache API. For example, if you need the data to be bound to a
combo box, convert the retrieved data to an ArrayList object before you cache it.
Use Output Caching to Cache Relatively Static Pages
If your page is relatively static across multiple user requests,
consider using page output caching to cache the entire page for a
specified duration. You specify the duration based on the nature of the
data on the page. A dynamic page does not always have to be rebuilt for
every request just because it is a dynamic page. For example, you might
be able to cache Web-based reports that are expensive to generate for a
defined period. Caching dynamic pages for even a minute or two can
increase performance drastically on high volume pages.
If you need to remove an item from the cache instead of waiting until the item expires, you can use the HttpResponse.RemoveOutputCacheItem method. This method accepts the absolute virtual path of the page that you want to remove, as shown in the following code fragment, in which the page path is illustrative.
HttpResponse.RemoveOutputCacheItem("/MyApp/MyPage.aspx");
The caveat here is that this is specific to a server, because the
cache is not shared across a Web farm. Also, it cannot be used from a
user control.
Choose the Right Cache Location
The @OutputCache directive allows you to determine the cache location of the page by using the Location attribute. The Location attribute provides the following values: Any, Client, Downstream, Server, ServerAndClient, and None. Unless you know for certain that your clients or your proxy server will cache responses, it is best to keep the Location attribute set to Any, Server, or ServerAndClient. Otherwise, if there is not a downstream cache available, the attribute effectively negates the benefits of output caching.
Use VaryBy Attributes for Selective Caching
The VaryBy attributes allow you to cache different versions of the same page. ASP.NET provides four VaryBy attributes: VaryByParam, VaryByHeader, VaryByCustom, and VaryByControl. The VaryBy attribute determines the data that is cached. The following sample shows how to use the VaryByParam attribute.
<%@ OutputCache Duration="30" VaryByParam="a" %>
With this setting, the following pages have the same cached version, because the query string parameter a is the same in each case:
http://localhost/cache.aspx?a=1
http://localhost/cache.aspx?a=1&b=1
http://localhost/cache.aspx?a=1&b=2
If you add b to the VaryByParam attribute,
you would have three separate versions of the page rather than one
version. It is important for you to be aware of the number of
variations of the cached page that could be cached. If you have two
variables (a and b), and a has 5 different combinations, and b
has 10 different combinations, you can calculate the total number of
cached pages that could exist by using the following formula: (MAX a × MAX b) + (MAX a + MAX b) = 65 total variations When you make the decision to use a VaryBy
attribute, make sure that there are a finite number of variations
because each variation increases the memory consumption on the Web
server. Windows Server 2003 and IIS 6.0 provide kernel caching. ASP.NET
pages can automatically benefit from the IIS 6.0 kernel cache. Kernel
caching produces significant performance gains because requests for
cached responses are served without switching to user mode. More Information For more information, see "Kernel Mode Caching" in "IIS 6.0 Considerations" later in this chapter. For more information on caching in general, see the following Knowledge Base articles: For more information about programmatic caching, see "Using
Programmatic Caching" in "Understanding Caching Technologies" of the Caching Architecture Guide for .NET Framework Applications at http://msdn.microsoft.com/library/en-us/dnbda/html/CachingArchch2.asp. Web applications present specific challenges for state management.
This is especially true for Web applications that are deployed in Web
farms. The choices that you make regarding where and how state is
stored have a significant impact on the performance and scalability of
your application. There are several different types of state: Guidelines that are specific to application state, session state,
and view state are included in later sections. The following are
guidelines that address the broad issues that concern state management
in general: Use cookies, query strings, and hidden controls for storing
lightweight, user-specific state that is not sensitive such as
personalization data. Do not use them to store security-sensitive
information because the information can be easily read or manipulated. More Information For more information about the security implications of using these
various state management techniques, see Chapter 10, "Building Secure
ASP.NET Pages and Controls" in Improving Web Application Security: Threats and Countermeasures on MSDN at http://msdn.microsoft.com/library/en-us/dnnetsec/html/thcmch10.asp. If you need to serialize state, consider the serialization costs.
For example, you might want to serialize state to store in a remote
state store. Only store what is absolutely necessary, and prefer simple
types rather than complex objects to reduce the impact of serialization. Application state is used to store application-wide static
information. ASP.NET includes application state primarily for
compatibility with classic Active Server Pages (ASP) technology so that
it is easier to migrate existing applications to ASP.NET. If you use application state, use the following guidelines to ensure your application runs optimally: You should store data in static members of the application class instead of in the Application object. This increases performance because you can access a static variable faster than you can access an item in the Application dictionary. The following is a simplified example. Application state is application-wide and specific to a server. Even
though you can store read-write data, it is advisable to store only
read-only data to avoid server affinity. Consider using the Cache object. The Cache object is a better alternative for read-only data. Storing STA COM objects in application state bottlenecks your
application because the application uses a single thread of execution
when it accesses the component. Avoid storing STA COM objects in
application state. For more information about application state, see Knowledge Base
article, 312607, "INFO: Application Instances, Application Events, and
Application State in ASP.NET," at http://support.microsoft.com/default.aspx?scid=kb;en-us;312607. If you need session state in ASP.NET, there are three session state
modes that you can choose from. Each mode offers varying degrees of
performance and scalability as described in the following list: For more information, see Knowledge Base article 323262, "INFO:
ASP.NET Session State with SqlServer Mode in a Failover Cluster," at http://support.microsoft.com/default.aspx?scid=kb;en-us;323262. The in-process state store provides excellent performance and scales
well. However, most high volume Web applications run in a Web farm. To
be able to scale out, you need to choose between the session state
service and the SQL Server state store. With either of these choices,
you have to understand the associated impact of network latency and
serialization, and you have to measure them to ensure that your
application meets its performance objectives. Use the following
information to help choose a state store: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters. To ensure optimized session state performance, follow these guidelines: You incur serialization overhead if you use the StateServer or the SQLServer
out-of-process state stores. The simpler the object graph, the faster
it should serialize. To minimize serialization costs, use basic types
such as Int, Byte, Decimal, String, DateTime, TimeSpan, Guid, IntPtr, and UIntPtr.
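To illustrate (the key names are arbitrary), values like the following use the optimized internal serializer, whereas a complex object graph would fall back to the slower BinaryFormatter:

```csharp
// Basic types: serialized by ASP.NET's optimized internal method
// when an out-of-process session state store is used.
Session["UserId"]    = 42;
Session["LastVisit"] = DateTime.UtcNow;

// A complex type would be serialized with the slower BinaryFormatter:
// Session["Profile"] = currentProfile;   // hypothetical complex object
```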
ASP.NET uses an optimized internal serialization method to serialize
basic types. Complex types are serialized using a relatively slow BinaryFormatter object. For complex types, you can use the Serializable attribute, or you can implement the ISerializable interface. Using this interface provides you with more precise control and may speed up serialization. Minimize what you serialize. Disable serialization when you do not
use it, and mark specific fields from a serializable class that you
want to exclude with the NonSerialized attribute. Alternatively, control the serialization process by using the ISerializable interface. More Information For more information, see "
If you do not use session state, disable session state to eliminate
redundant session processing performed by ASP.NET. You might not use
session state because you store simple state on the client and then
pass it to the server for each request. You can disable session state
for all applications on the server, for specific applications, or for
individual pages, as described in the following list: You can also remove the session state module from <httpModules> to completely remove session processing overhead. Storing STA COM objects in session state causes thread affinity.
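For reference, the page-level and configuration-level options described above might be sketched as follows (the attribute and element names are standard ASP.NET 1.x; treat this as an illustrative fragment rather than a drop-in configuration):

```aspx
<%-- Per page: --%>
<%@ Page EnableSessionState="false" %>
```

```xml
<!-- Per application (Web.config) or server-wide (Machine.config): -->
<configuration>
  <system.web>
    <sessionState mode="Off" />
  </system.web>
</configuration>
```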
Thread affinity severely affects performance and scalability. If you do
use STA COM objects in session state, be sure to set the AspCompat attribute of the @ Page directive. More Information For more information, see "COM Interop" later in this chapter. Page requests that use session state internally use a ReaderWriterLock
object to manage session data. This allows multiple reads to occur at
the same time when no lock is held. When the writer acquires the lock
to update session state, all read requests are blocked. Normally two
calls are made to the database for each request. The first call
connects to the database, marks the session as locked, and executes the
page. The second call writes any changes and unlocks the session. For
pages that only read session data, consider setting EnableSessionState to ReadOnly as shown in the following sample. Setting EnableSessionState to ReadOnly is particularly useful when you use frames. In this event, the default setting serializes the execution of the page because a ReaderWriterLock is used. By setting EnableSessionState to ReadOnly,
you avoid blocking, and you send fewer calls to the database. One
option is to disable sessions in the configuration file as shown
earlier, and to set the ReadOnly attribute on a page-by-page basis. For more information about session state, see "Underpinnings of the Session State Implementation in ASP.NET" on MSDN at http://msdn.microsoft.com/library/en-us/dnaspp/html/ASPNetSessionState.asp. For additional information on session state, see the following Knowledge Base articles: View state is used primarily by server controls to retain state only
on pages that post data back to themselves. The information is passed
to the client and read back in a specific hidden variable called __VIEWSTATE. ASP.NET
makes it easy to store any types that are serializable in view state.
However, this capability can easily be misused and performance reduced.
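For example, a page that explicitly reloads its data on every request might turn view state off at the page level, or only for individual controls (an illustrative sketch):

```aspx
<%-- Disable view state for the whole page: --%>
<%@ Page EnableViewState="false" %>

<%-- Or disable it for a single control: --%>
<asp:DataGrid id="MyGrid" EnableViewState="false" runat="server" />
```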
View state is an unnecessary overhead for pages that do not need it. As
the view state grows larger, it affects performance in the following
ways: Transmitting a huge amount of view state can significantly affect
application performance. The change in performance becomes more marked
when your Web clients use slow, dial-up connections. Consider testing
for different bandwidth conditions when you work with view state.
Optimize the way your application uses view state by following these
recommendations: View state is turned on in ASP.NET by default. Disable view state if
you do not need it. For example, you might not need view state because
your page is output-only or because you explicitly reload data for each
request. You do not need view state when the following conditions are
true: There are several ways to disable view state at various levels: This approach allows you to selectively enable view state just for those pages that need it by using the EnableViewState attribute of the @ Page directive. As you increase the number of objects you put into view state, the
size of your view state dictionary grows, and the processing time
required to serialize and to deserialize the objects increases. Use the
following guidelines when you put objects into view state: By enabling tracing for the page, you can monitor the view state
size for each control. The view state size for each control appears in
the leftmost column in the control tree section of the trace output.
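Tracing is enabled with the Trace attribute of the @ Page directive; the control tree, including the view state size for each control, then appears in the trace output appended to the rendered page (a minimal sketch):

```aspx
<%@ Page Trace="true" %>
```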
Use this information as a guide to determine whether there are controls
for which you can reduce the amount of view state, or controls for
which you can disable view state entirely. More Information For related information, see "Taking a Bite out of ASP.NET ViewState" on MSDN at http://msdn.microsoft.com/library/en-us/dnaspnet/html/asp11222001.asp. HTTP modules are filters that allow you to add preprocessing and
postprocessing to HTTP request and response messages as they flow
through the ASP.NET pipeline. They are commonly used for authorization,
authentication, logging, and machine-level error handling. HTTP modules
run on every request, so whatever processing they perform has global
impact either on the application or the computer, depending on where
you register them. If you develop HTTP modules, consider the following: Avoid placing long-running code in an HTTP module for the following reasons: Long-running or blocking code reduces the concurrent requests that can run in ASP.NET. For every synchronous event, there is also an asynchronous version
of that event. Although asynchronous events still logically block the
request for the duration of the asynchronous work, they do not block
the ASP.NET thread. For more information on HTTP modules, see the MSDN Magazine article, "ASP.NET Pipeline: Use Threads and Build Asynchronous Handlers in Your Server-Side Web Code," at http://msdn.microsoft.com/msdnmag/issues/03/06/threading/default.aspx. In addition, see Knowledge Base article 307985, "INFO: ASP.NET HTTP Modules and HTTP Handlers Overview," at http://support.microsoft.com/default.aspx?scid=kb;en-us;307985. When you build output, you often need to concatenate strings. This
is an expensive operation because it requires temporary memory
allocation and subsequent collection. As a result, you should minimize
the amount of string concatenation that you perform. There are three
common ways to concatenate strings in your pages to render data: The most effective way to choose among these options is to
measure the performance of each. If your application relies
heavily on temporary buffers, consider implementing a reusable buffer
pool of character arrays or byte arrays. Use the following guidelines when you are managing your strings: Where possible, avoid using loops to concatenate strings for formatting page layout. Consider using Response.Write
instead. This approach writes output to the ASP.NET response buffers.
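As an illustrative sketch (the reader variable and markup are hypothetical), streaming rows with Response.Write avoids building one large temporary string with +=:

```csharp
// Stream each row straight into the ASP.NET response buffer.
// 'reader' is assumed to be an open SqlDataReader (illustrative).
while (reader.Read())
{
    Response.Write("<tr><td>");
    Response.Write(Server.HtmlEncode(reader.GetString(0)));
    Response.Write("</td></tr>");
}
```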
When you are looping through datasets or XML documents, using Response.Write is a highly efficient approach. It is more efficient than concatenating the content by using the += operator before writing the content back to the client. Response.Write
internally appends strings to a reusable buffer so that it does not
suffer the performance overhead of allocating memory and subsequently
cleaning it up. In many cases it is not feasible to use Response.Write. For example, you might need to create strings to write to a log file or to build XML documents. In these situations, use a StringBuilder
object as a temporary buffer to hold your data. Measure the performance
of your scenario by trying various initial capacity settings for the StringBuilder object. When you are building custom controls, the Render, RenderChildren, and RenderControl methods provide access to the HtmlTextWriter object. The HtmlTextWriter writes to the same reusable buffer as Response.Write and, in the same way, does not suffer the performance overhead of allocating and cleaning up memory. For more information about strings, see "String Operations" in Chapter 5, "Improving Managed Code Performance."
To determine if your application is creating excessive temporary
memory allocation due to inefficient string concatenations, use
performance counters, as discussed in the "Memory" topic in "CLR and
Managed Code" in Chapter 15, "
Exceptions are expensive. By knowing the causes of exceptions, and
by writing code that avoids exceptions and that handles exceptions
efficiently, you can significantly improve the performance and
scalability of your application. When you design and implement
exception handling, consider the following guidelines to ensure optimum
performance: The first step in managing exceptions is to implement a global error
handler in the Global.asax file or in the code-behind file.
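A minimal sketch of such a handler follows (the logging call is illustrative; log to whichever store your application uses):

```csharp
// Global.asax code-behind (sketch)
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    // Log exception details to a database, the Windows event log,
    // or a log file, for example:
    // EventLog.WriteEntry("MyApp", ex.ToString(), EventLogEntryType.Error);
}
```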
Implementing a global error handler traps all unhandled exceptions in
your application. Inside the handler, you should, at a minimum, log the
following information to a data store such as a database, the Windows
event log, or a log file: In your Global.asax file or your code-behind page, use the Application_Error event to handle your error logic, as shown in the following code sample: More Information For more information, see "Rich Custom Error Handling with ASP.NET" on MSDN at http://msdn.microsoft.com/library/en-us/dnaspp/html/customerrors.asp. Also, see Knowledge Base article 306355, "HOW TO: Create Custom Error Reporting Pages in ASP.NET by Using Visual C# .NET," at http://support.microsoft.com/default.aspx?scid=kb;en-us;306355. To reduce the number of exceptions occurring in your application,
you need to effectively monitor your application for exceptions. You
can do the following: To guarantee that resources are cleaned up when an exception occurs, use a try/finally block. Close the resources in the finally clause. Using a try/finally block ensures that resources are disposed even if an exception occurs. The following code fragment demonstrates this. The following is a list of common techniques you can use to avoid exceptions: Instead, use the following code to access session state information. The following code is used to call the Login method. It is better to create an enumeration of possible values and then change the Login method to return that enumeration, as follows. The following code is used to call Login. For
more information, see Knowledge Base article 312629, "PRB:
ThreadAbortException Occurs If You Use Response.End, Response.Redirect,
or Server.Transfer," at http://support.microsoft.com/default.aspx?scid=kb;en-us;312629. Page timeouts that are set too high can cause problems if parts of
your application are operating slowly. For example, page timeouts that
are set too high may cause the following problems: The default page timeout is 90 seconds. You can change this value to accommodate your application scenario. Consider the following scenario where an ASP.NET front-end
application makes calls to a remote Web service. The remote Web service
then calls a mainframe database. If, for any reason, the Web service
calls to the mainframe start blocking, your front-end ASP.NET pages
continue to wait until the back end calls time out, or the page timeout
limit is exceeded. As a result, the current request times out, ASP.NET
starts to queue incoming requests, and those incoming requests may time
out, too. It is more efficient for your application to time out these
requests in less than 90 seconds. Additionally, timing out the requests
in less than 90 seconds improves the user experience. In most Internet and intranet scenarios, 30 seconds is a very
reasonable timeout limit. For high traffic pages such as a home page,
you might want to consider lowering the timeout limit. If your
application takes a long time to generate certain pages, such as report
pages, increase the timeout limit for those pages. For more information about exception handling, see the following MSDN articles: For more information about the various timeout parameters and how to
configure them, see "Configure Timeouts Aggressively" in Chapter 17, "
Calling COM objects from ASP.NET may present performance challenges
because you have to deal with threading issues, marshaling data types,
and transitions across the boundary between managed and unmanaged code.
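As a point of reference, the page-level directive discussed in the guidelines below takes this form (a minimal sketch):

```aspx
<%@ Page Language="C#" AspCompat="true" %>
```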
Because ASP.NET runs requests on multithreaded apartment (MTA) threads,
working with STA COM components may be especially challenging. Use the following guidelines to improve COM interop performance: When you call an STA object, such as a Visual Basic 6.0 component, from an ASP.NET page, use the page-level ASPCOMPAT attribute. Use the ASPCOMPAT
attribute, as shown in the following sample, to denote that the events
in your page should run using a thread from the STA thread pool rather
than a default MTA thread. STA object calls require an STA thread. If you do not use the ASPCOMPAT attribute, all STA object calls are serialized on the host STA thread and a serious bottleneck occurs. Avoid storing COM objects in state containers such as session state
or application state. COM objects are not serializable, and although
calling the object may work with a single-server deployment, affinity
and serialization issues will prevent your application from working
when it is moved to a Web farm. Even though it is technically possible to store STA objects in
session state, do not do so because it causes thread affinity issues.
If you do so, requests to the STA object have to be run on the same
thread that created the object, and this quickly becomes a bottleneck
as the number of users increases. More Information For more information, see Knowledge Base article 817005, "FIX:
Severe Performance Issues When You Bind Session State to Threads in
ASPCompat Mode," at http://support.microsoft.com/default.aspx?scid=kb;en-us;817005. Do not create STA objects in a page constructor because this causes
a thread switch to the host STA and causes all calls to be serialized.
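As an illustrative sketch (the COM type name is hypothetical), create the object in Page_Load, which runs on an STA thread when AspCompat is set, rather than in the constructor, which runs on an MTA thread:

```csharp
// Requires <%@ Page AspCompat="true" %>
private MyComLib.LegacyComponent comObj;  // hypothetical STA COM type

private void Page_Load(object sender, EventArgs e)
{
    // Created here, on an STA thread, so calls are not serialized
    // through the host STA.
    comObj = new MyComLib.LegacyComponent();
}
```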
Although the ASPCOMPAT attribute ensures that an STA thread from the STA thread pool is used for page events such as onload and button_click, other parts of your page, such as the constructor, are run by using an MTA thread. Late binding requires extra instructions to locate the target code,
whether that means activating a COM class or invoking a method by name. Methods such
as Server.CreateObject, Activator.CreateInstance and MethodInfo.Invoke allow late bound execution of code. When you migrate ASP code, use the new keyword to allow early bound calls. The following example uses early binding. The new operator is used to create a classic ActiveX® Data Objects (ADO) connection. The following example uses late binding. The <object> tag along with the class attribute is used to create an ADO connection object. ADODB.Connection represents the namespace and class name. The second ADODB represents the assembly name. The following example also uses late binding. GetType is used to obtain the type and this is passed to the overloaded Server.CreateObject method that is provided by ASP.NET. For these code samples to work, add a reference to the Microsoft ActiveX Data Objects X.X Library in Visual Studio®.NET. Replace X.X
with the version number that you want to use. This approach causes an
interop assembly to be used if one exists or creates one automatically
for you. If you are not using Visual Studio .NET, use the TlbImp.exe
file to generate the interop assembly. It is recommended that you look
for and use a primary interop assembly. Copy the generated interop
assembly to the Bin directory of your application. For more information about COM interop performance and issues, see Chapter 7, "
Almost all ASP.NET applications use some form of data access. Data
access is typically a focal point for improving performance because the
majority of application requests require data that comes from a
database. Use the following guidelines to improve your data access: Paging large query result sets can significantly improve the
performance of an application. If you have large result sets, implement
a paging solution that achieves the following: Several paging solutions are available; each solution solves the
problems that are inherent to specific scenarios. The following
paragraphs briefly summarize the solutions. For implementation-specific
details, see the "How To: Page Records in .NET Applications" in the
"How To" section of this guide. A relatively quick and easy solution is to use the automatic paging provided by the DataGrid
object. However, this solution works only for tables that have unique
incrementing columns; it is not suitable for large tables. With the
custom paging approach, you set AllowPaging and AllowCustomPaging properties to true, and then set the PageSize and VirtualItemCount properties. Then the StartIndex (the last browsed row) and NextIndex (StartIndex + PageSize) properties are calculated. The StartIndex and NextIndex
values are used as ranges for the identity column to retrieve and
display the requested page. This solution does not cache data; it pulls
only the relevant records across the network. There are several solutions available for tables that do not have
unique incrementing column numbers. For tables that have a clustered
index and that do not require special server-side coding, use the subquery
solution to track the number of rows to skip from the start. From the
resulting records, use the TOP keyword in conjunction with the <pagesize>
element to retrieve the next page of rows. Only the relevant page
records are retrieved over the network. Other solutions use either the Table
data type or a global temporary table with an additional IDENTITY
column to store the queried results. This column is used to limit the
range of rows fetched and displayed. This requires server-side coding. For more information and implementation details for paging solutions, see Knowledge Base article 318131 at http://support.microsoft.com/default.aspx?scid=kb;en-us;318131. Use a DataReader object if you do not need to cache
data, if you are displaying read-only data, and if you need to load
data into a control as quickly as possible. The DataReader is the optimum choice for retrieving read-only data in a forward-only manner. Loading the data into a DataSet object and then binding the DataSet to the control moves the data twice. This method also incurs the relatively significant expense of constructing a DataSet. In addition, when you use the DataReader, you can use the specialized type-specific methods to retrieve the data for better performance. Allowing users to request and retrieve more data than they can
consume puts an unnecessary strain on your application resources. This
unnecessary strain causes increased CPU utilization, increased memory
consumption, and decreased response times. This is especially true for
clients that have a slow connection speed. From a usability standpoint,
most users do not want to see thousands of rows presented as a single
unit. Limit the amount of data that users can retrieve by using one of the following techniques: If you have application-wide data that is fairly static and
expensive to retrieve, consider caching the data in the ASP.NET cache. For more information about data access, see Chapter 12, "
Security and performance are often at the center of design
tradeoffs, because additional security mechanisms often negatively
impact performance. However, you can reduce server load by filtering
unwanted, invalid, or malicious traffic, and by constraining the
requests that are allowed to reach your Web server. The earlier that
you block unwanted traffic, the greater the processing overhead that
you avoid. Consider the following recommendations: Constrain the traffic to your Web Server to avoid unnecessary
processing. For example, block invalid requests at your firewall to
limit the load on your Web server. In addition, do the following: Partition pages that require authenticated access from pages that
support anonymous access. To avoid authentication overhead, set the
authentication mode to None in the Web.config file in
the directory that contains the anonymous pages. The following line
shows how to set the authentication mode in the Web.config file. Consider using client-side validation to avoid sending unwanted
traffic to the server. However, do not trust client-side validation
alone because it can easily be bypassed. For security reasons, you
should implement the equivalent server-side checks for every client
check. Per-request impersonation, where you use the original caller's
identity to access the database, places severe scalability constraints
on your application. Per-request impersonation prevents the effective
use of database connection pooling. The trusted subsystem model is the
preferred and scalable alternative. With this approach, you use a fixed
service account to access the database and to pass the identity of the
original caller at the application level if the identity of the
original caller is required. For example, you might pass the identity
of the original caller through stored procedure parameters. More Information For more information about the trusted subsystem model, see Chapter 3, "Authentication and Authorization," in Building Secure ASP.NET Applications on MSDN at http://msdn.microsoft.com/library/en-us/secmod/html/secmod00.asp. Instead of caching sensitive data, retrieve the data when you need
it. When you measure application performance, if you discover that
retrieving the data on a per-request basis is very costly, measure the
cost to encrypt, cache, retrieve, and decrypt the data. If the cost to
retrieve the data is higher than the cost to encrypt and decrypt the
data, consider caching encrypted data. When you design the folder structure of your Web site, clearly
differentiate between the publicly accessible areas and restricted
areas that require authenticated access and Secure Sockets Layer (SSL).
Use separate subfolders beneath the virtual root folder of your
application to hold restricted pages such as forms logon pages,
checkout pages, and any other pages to which users transmit sensitive
information that needs to be secured by using HTTPS. By doing so,
you can use HTTPS for specific pages without incurring the SSL
performance overhead across your entire site. Using SSL is expensive. Only use SSL for pages that require it. This
includes pages that contain or capture sensitive data, such as pages
that accept credit card numbers and passwords. Use SSL only if the
following conditions are true: For pages where you must use SSL, follow these guidelines: Navigating between HTTP and HTTPS using redirects uses the protocol
of the current page instead of the protocol of the target page. When
your redirects use relative links (..\publicpage.aspx) to sites that
are not secure from a site that uses HTTPS, these public pages are
served by using the HTTPS protocol. This use of the HTTPS protocol
incurs unnecessary overhead. To avoid this problem, use the absolute
link instead of the relative link for your redirects. For example, use
an absolute link such as http://yourserver/publicpage.aspx.
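For example (the server and page names are illustrative), use absolute URLs in both directions so that each target page is served with its intended protocol:

```csharp
// From an HTTPS page to a public page: absolute HTTP URL.
Response.Redirect("http://yourserver/publicpage.aspx");

// From an HTTP page to a secure page: absolute HTTPS URL.
Response.Redirect("https://yourserver/securepage.aspx");
```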
The same applies when you navigate from pages that use HTTP to pages
that use HTTPS. The following code fragment shows how to create a
redirect from a page that uses HTTP to a page that uses HTTPS. Consider a hardware solution for SSL processing. Terminating SSL
sessions at a load balancer by using a hardware accelerator generally
offers better performance, particularly for sites that experience heavy
use. If you are not using SSL hardware, tune the ServerCacheTime
registry entry to avoid having to renegotiate the SSL handshakes with browser
clients. The largest use of resources when you use SSL occurs during
the initial handshake, where asymmetric public/private-key encryption
is used. After a secure session key is generated and exchanged, faster,
symmetric encryption is used to encrypt application data. Monitor your SSL connections and increase the value of the ServerCacheTime registry entry if you find that a longer time is better for your scenario. More Information For more information about how to change the ServerCacheTime value, see Knowledge Base article 247658, "HOW TO: Configure Secure Sockets Layer Server and Client Cache Elements," at http://support.microsoft.com/default.aspx?scid=kb;en-us;247658. For more information about security-related performance considerations, see the following: On Microsoft Windows Server 2003, the IIS 6.0 architecture is
different from IIS 5.0 on Windows 2000 Server. IIS 6.0 enables multiple
processes to be used to host separate Web applications. This is shown
in Figure 6.2.

Figure 6.2: IIS 6.0 architecture

IIS 6.0 includes a new HTTP listener (HTTP.Sys)
that is implemented in the kernel. Requests are routed to one of the
multiple worker process instances (W3wp.exe) that host ASP.NET
applications and Web services. The primary difference between the ASP.NET architecture under
Windows 2000 and Windows Server 2003 is that under Windows Server 2003, you
can use separate IIS worker process instances to host Web applications. By default, the IIS worker process instances run using the NT
Authority\NetworkService account. This account is a least-privileged
local account that acts as the computer account over the network. A Web
application that runs in the context of the Network Service account
presents the computer's credentials to remote servers for
authentication. IIS 6.0 also supports a backwards-compatibility mode that supports the IIS 5.0 ASP.NET worker process model. If you deploy your application on Windows Server 2003, ASP.NET pages
automatically benefit from the IIS 6.0 kernel cache. The kernel cache
is managed by the HTTP.sys kernel-mode device driver. This driver
handles all HTTP requests. Kernel mode caching may produce significant
performance gains because requests for cached responses are served
without switching to user mode. The following default setting in the Machine.config file ensures
that dynamically generated ASP.NET pages can use kernel mode caching,
subject to the requirements listed below. Dynamically generated ASP.NET pages are automatically cached subject to the following restrictions: More Information For more information about IIS 6.0 and kernel caching, see the IIS 6.0 Resource Kit at http://www.microsoft.com/downloads/details.aspx?FamilyID=80a1b6e6-829e-49b7-8c02-333d9c148e69&DisplayLang=en. By default, ASP.NET uses all CPUs available. In Web garden mode,
ASP.NET creates one process per CPU. Each process creates an affinity
to a single CPU. Web gardens offer an additional layer of reliability and
robustness. If a process crashes, there are other processes that still
service incoming requests. Web gardens may perform better under the following scenarios: To determine the effectiveness of Web gardens for your application,
run performance tests, and then compare your results with and without
Web gardens. Typically, in the two scenarios that are described in this
section, you are likely to notice a greater benefit with servers that
contain four or eight CPUs. By default, the ASP.NET process model is not enabled in IIS 6.0. If you enable Web gardens, you may adversely affect the performance of the garbage collector, because the server version of the garbage collector is still used even though each process is bound to a single CPU. A further disadvantage is that Web gardens create one worker process per CPU, and each additional worker process consumes additional system resources. You can enable Web gardens in IIS 6.0 by using the Internet Information Services Manager. To do so, follow these steps: In the <processModel> section of the Machine.config file, set the webGarden attribute to true, and then configure the cpuMask attribute as follows. The cpuMask attribute specifies the CPUs on a
multiprocessor server that are eligible to run ASP.NET processes. By
default, all CPUs are enabled and ASP.NET creates one process for each
CPU. If the webGarden attribute is set to false, the cpuMask attribute is ignored, and only one worker process runs. The value of the cpuMask
attribute specifies a bit pattern that indicates the CPUs that are
eligible to run ASP.NET threads. Table 6.3 shows some examples. Table 6.3: Processor Mask Bit Patterns More Information For more information about how to use ASP.NET Web gardens, see
Knowledge Base article 815156, "HOW TO: Restrict ASP.NET to Specific
Processors in a Multiprocessor System," at http://support.microsoft.com/default.aspx?scid=kb;en-us;815156. By default, if you have a multiple processor server, the server GC
is loaded. If you have a single processor server, the workstation GC is
loaded. At the time of writing, .NET Framework version 1.1 Service Pack
1 (SP1) provides a switch that enables you to configure whether ASP.NET
loads the server or the workstation GC. You can use this switch to
configure ASP.NET to load the workstation GC even on a
multiple-processor server. If you host several isolated worker processes on Windows Server 2003
on a multiprocessor computer, use the workstation GC and nonconcurrent
mode. The server GC is optimized for throughput, memory consumption, and
multiprocessor scalability. However, using the workstation GC when you
are running Windows Server 2003 on a multiprocessor server can
dramatically reduce the memory used per worker process and greatly
increase the number of isolated worker processes that you can host.
Disabling concurrent garbage collection further increases the number of
isolated worker processes that you can run. To configure ASP.NET to use the workstation GC, add the following
configuration to the Aspnet.config file. The Aspnet.config file is in
the same directory as the Aspnet_isapi.dll file. More Information For more information about garbage collection in general and about
server and workstation GCs, see "Garbage Collection Explained" in
Chapter 5, "Improving Managed Code Performance."
Physical deployment plays a key role in determining the performance
and scalability characteristics of your application. Unless you have a
compelling reason to introduce a remote middle tier, you should deploy
your Web application's presentation, business, and data access layers
on the Web server. The only remote hop should be the hop to the
database. This section discusses the following key deployment
considerations: Although process hops are not as expensive as machine hops, you
should avoid process hops where possible. Process hops cause added
overhead because they require interprocess communication (IPC) and
marshaling. For example, if your solution uses Enterprise Services, use
library applications where possible, unless you need to put your
Enterprise Services application on a remote middle tier. If possible, avoid the overhead of interprocess and intercomputer
communication. Unless your business requirements dictate the use of a
remote middle tier, keep your presentation, business, and data access
logic on the Web server. Deploy your business and data access
assemblies to the Bin directory of your application. However, you might
require a remote middle tier for any of the following reasons: If you do have to deploy by using a remote middle tier, ensure you
recognize this early so that you can measure and test by using the same
environment. The HTTP pipeline sequence is determined by settings in the
Machine.config file. Comment out the modules that you do not use. For example, if you do not use Forms authentication, you should either comment out the entry for Forms authentication in the Machine.config file or explicitly remove the entry in the Web.config file of a particular application. The following sample
shows how to comment out an entry. The following sample from a Web.config file shows how to remove the entry for a specific application. If other applications on your Web server use an HTTP module that your application does not use, remove the HTTP module from your application's Web.config file instead of commenting it out in the Machine.config file. Before you deploy your application, configure the memory limit.
Configuring the memory limit ensures optimal ASP.NET cache performance
and server stability. More Information For more information, see "Configure the Memory Limit" in the "Caching Guidelines" section of this chapter. Before you deploy your application, disable tracing and debugging.
Tracing and debugging add overhead and are not recommended while your application is running in production. Disable tracing and debugging in the Machine.config and Web.config files, as shown in the following sample. More Information For more information, see Knowledge Base article 815157, "HOW TO: Disable Debugging for ASP.NET Applications," at http://support.microsoft.com/default.aspx?scid=kb;en-us;815157. Problems may arise when updates to .aspx or .ascx pages occur
without an application restart. Consider the following scenario. Assume
you have five pages in a directory as follows. When a page in the Mydir directory is first requested, all pages in
that directory are compiled into a single assembly, as shown in the
following sample. If Page1.aspx is updated, a new single assembly is created for
Page1.aspx. Now there are two assemblies, as shown in the following
sample. If Page2.aspx is updated, a new single assembly is created for Page2.aspx. Now there are three assemblies. To ensure that you do not experience this problem and generate
multiple assemblies, follow these steps when you want to update
content: This approach to updating content also solves another problem. If a
server is put into rotation before batch compilation is complete, some
pages may be compiled as a single assembly. If another request is made
during batch compilation for a page in the same directory that is being
batch compiled, that page is compiled as a single assembly. Taking the
Web server out of rotation and then putting it back in rotation helps
you avoid this problem. XCOPY deployment is designed to make deployment easy because you do
not have to stop your application or IIS. However, for production
environments you should remove a server from rotation, stop IIS,
perform the XCOPY update, restart IIS, and then put the server back
into rotation. It is particularly important to follow this sequence under heavy
load conditions. For example, if you copy 50 files to a virtual
directory, and each file copy takes 100 milliseconds, the entire file
copy takes 5 seconds. During that time, the application domain of your
application may be unloaded and loaded more than once. Also, certain
files may be locked by the XCOPY process (Xcopy.exe). If the XCOPY
process locks certain files, the worker process and the compilers
cannot access the files. If you do want to use XCOPY deployment for updates, the .NET Framework version 1.1 includes the waitChangeNotification and maxWaitChangeNotification settings. You can use these settings to help resolve the XCOPY issues described in this section. The value of the waitChangeNotification setting should be based on the amount of time that it takes to use XCOPY to copy your largest file. The maxWaitChangeNotification setting should be based on the total amount of time that XCOPY uses to copy all the files plus a small amount of extra time. More Information For more information, see the following Knowledge Base articles: So that your users do not have to experience the batch compile of
your ASP.NET files, you can initiate batch compiles by issuing one
request to a page per directory and then waiting until the processor
idles again before putting the Web server back into rotation. This improves the response times that your users experience, and it avoids the burden of batch compiling directories while handling requests at the same time. Consider using Web gardens if your application uses STA objects
heavily or if your application accesses a pool of resources that are
bound by the number of processes. To determine the effectiveness of Web gardens for your application,
run performance tests, and then compare the results with and without
Web gardens. HTTP compression is supported by most modern browsers and by IIS.
HTTP compression is likely to improve performance when the client
accesses the Web server over a low bandwidth connection. A perimeter network protects your intranet from intrusion by
controlling access from the Internet or from other large networks. It
consists of a combination of systems such as proxy servers, packet
filtering, gateways, and other systems that enforce a boundary between
two or more networks. If your perimeter network includes a proxy server, consider enabling caching on your proxy server to improve performance. This chapter discusses the common pitfalls and bottlenecks that can
occur during ASP.NET application development. It shows you the steps
you need to take to avoid and overcome these issues. By following the
guidance and advice in this chapter, you can build extremely high
performance ASP.NET applications. Building high performance applications of any type requires you to
consider performance from the outset. You have to create a sound
architecture and design that takes into account any restrictions that
might be imposed by your physical deployment environment. During
development, you need to ensure that you adopt best practice coding
techniques. You have to continually measure performance to ensure that
your application operates within the boundaries determined by your
performance objectives. Measuring should continue throughout the life
cycle. Finally, at deployment time you have to consider the
configuration of the environment that your application will run in. For more information, see the following resources: For more information about IIS 6.0, see the following resource: Do Not Cache or Block on Pooled Resources
Know Your Application Allocation Pattern
Obtain Resources Late and Release Them Early
Avoid Per-Request Impersonation
Note Impersonation on
its own does not cause performance issues. However, impersonation often
prevents efficient resource pooling. This is a common cause of
performance and scalability problems.
Pages
Trim Your Page Size
// with white space
<table>
  <tr>
    <td>hello</td>
    <td>world</td>
  </tr>
</table>

// without white space
<table>
  <tr><td>hello</td><td>world</td></tr>
</table>
Note When using the ASP.NET process model, the ASP.NET worker process sends responses back to the client through IIS in 31-kilobyte (KB) chunks. This applies to .NET Framework 1.1, but it could change in
future versions. The more 31-KB chunks that ASP.NET has to send through
IIS, the slower your page runs. You can determine how many chunks
ASP.NET requires for your page by browsing the page, viewing the
source, and then saving the file to disk. To determine the number of
chunks, divide the page size by 31.
Enable Buffering
// Response.Buffer is available for backwards compatibility; do not use.
Response.BufferOutput = true;

Use Page.IsPostBack to Minimize Redundant Processing
if (Page.IsPostBack == false) {
// Initialization logic
} else {
// Client post-back logic
}

Partition Page Content to Improve Caching Efficiency and Reduce Rendering
Ensure Pages Are Batch Compiled
Ensure Debug Is Set to False
Note A common pitfall is
to set this attribute at the page level during development and then
forget to set it back when the application is moved to production.
Optimize Expensive Loops
Consider Using Server.Transfer Instead of Response.Redirect
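As an illustrative sketch (the page name is hypothetical), the difference is that Response.Redirect costs an extra client round trip, while Server.Transfer executes the target page on the server:

```
// Response.Redirect sends an HTTP 302 to the browser, which then
// issues a second request for the new page (an extra round trip).
Response.Redirect("Results.aspx");

// Server.Transfer ends processing of the current page and executes
// the target page on the server, with no extra round trip.
// Note that the URL shown in the browser does not change.
Server.Transfer("Results.aspx");
```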
Use Client-Side Validation
Note Ensure that you also use server-side validation for security reasons.
Server Controls
Identify the Use of View State in Your Server Controls
Use Server Controls Where Appropriate
Avoid Creating Deep Hierarchies of Controls
<asp:repeater id=r runat=server>
<itemtemplate>
<asp:label runat=server><%# Container.DataItem %><br></asp:label>
</itemtemplate>
</asp:repeater>
Control ID             Type
Repeater               System.Web.UI.WebControls.Repeater
repeater:_ctl0         System.Web.UI.WebControls.RepeaterItem
repeater_ctl0:_ctl1    System.Web.UI.LiteralControl
repeater_ctl0:_ctl0    System.Web.UI.WebControls.Label
repeater_ctl0:_ctl2    System.Web.UI.LiteralControl
...
repeater:_ctl49        System.Web.UI.WebControls.RepeaterItem
repeater_ctl49:_ctl1   System.Web.UI.LiteralControl
repeater_ctl49:_ctl0   System.Web.UI.WebControls.Label
repeater_ctl49:_ctl2   System.Web.UI.LiteralControl
More Information
Data Binding
Avoid Using Page.DataBind
DataBind();
yourServerControl.DataBind();
Minimize Calls to DataBinder.Eval
<ItemTemplate>
  <tr>
    <td><%# DataBinder.Eval(Container.DataItem, "field1") %></td>
    <td><%# DataBinder.Eval(Container.DataItem, "field2") %></td>
  </tr>
</ItemTemplate>

<ItemTemplate>
<tr>
<td><%# ((DbDataRecord)Container.DataItem).GetString(0) %></td>
<td><%# ((DbDataRecord)Container.DataItem).GetInt32(1) %></td>
</tr>
</ItemTemplate>

More Information
Caching Explained
Cache API
Output Caching
Partial Page or Fragment Caching
Caching Guidelines
Separate Dynamic Data from Static Data in Your Pages
[main.aspx]
<html>
<body>
<table>
<tr><td colspan=3>Application Header – Welcome John Smith</td></tr>
<tr><td>Menu</td><td>Dynamic Content</td><td>Advertisements</td></tr>
<tr><td colspan=3>Application Footer</td></tr>
</table>
</body>
</html>

Footer
Configure the Memory Limit
More Information
Cache the Right Data
Refresh Your Cache Appropriately
Cache the Appropriate Form of the Data
Use Output Caching to Cache Relatively Static Pages
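For example, a relatively static page can be output cached with the @ OutputCache directive; the duration shown here is an arbitrary illustration, and VaryByParam="None" means a single cached version is served regardless of query string or form parameters:

```
<%@ OutputCache Duration="60" VaryByParam="None" %>
```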
Note The next version of
ASP.NET (code-named "Whidbey") is likely to support a database cache
dependency. If it is implemented, this database cache dependency will
allow you to remove items from the cache when data changes in the
database.
Choose the Right Cache Location
Note The Location attribute does not apply to user controls.
Use VaryBy Attributes for Selective Caching
Use Kernel Caching on Windows Server 2003
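The Machine.config setting referenced earlier in this chapter is the enableKernelOutputCache attribute on the <httpRuntime> element. A sketch of the default follows, assuming .NET Framework 1.1 running on Windows Server 2003:

```
<configuration>
  <system.web>
    <httpRuntime enableKernelOutputCache="true" />
  </system.web>
</configuration>
```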
More Information
State Management
Store Simple State on the Client Where Possible
Consider Serialization Costs
Application State
Use Static Properties Instead of the Application Object to Store Application State
<%
private static string[] _states;
private static object _lock = new object();
public static string[] States
{
get {return _states;}
}
public static void PopulateStates()
{
//ensure this is thread safe
if(_states == null)
{
lock(_lock)
{
//populate the states…
}
}
}
public void Application_OnStart(object sender, EventArgs e)
{
PopulateStates();
}
%>

Use Application State to Share Static, Read-Only Data
Do Not Store STA COM Objects in Application State
More Information
Session State
Choosing a State Store
Prefer Basic Types to Reduce Serialization Costs
Note You should only implement the ISerializable
interface as a last resort. New formatters provided by future versions
of the .NET Framework and improvements to framework-provided
serialization will not be utilized once you take this approach. Prefer
the NonSerialized attribute.
Disable Session State If You Do Not Use It
<sessionState mode='Off'/>
<sessionState mode='Off'/>
Avoid Storing STA COM Objects in Session State
Use the ReadOnly Attribute When You Can
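For example, a page that reads session state but never updates it can declare read-only access in its @ Page directive, which avoids holding the session writer lock for the duration of the request:

```
<%@ Page EnableSessionState="ReadOnly" %>
```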
More Information
View State
Disable View State If You Do Not Need It
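View state can be disabled at the page level or per control; the control shown is only an illustration:

```
<%@ Page EnableViewState="false" %>

<asp:DataGrid id="grid" runat="server" EnableViewState="false" />
```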
Minimize the Number of Objects You Store In View State
Determine the Size of Your View State
HTTP Modules
Avoid Long-Running and Blocking Calls in Pipeline Code
Consider Asynchronous Events
More Information
String Management
Use Response.Write for Formatting Output
Use StringBuilder for Temporary Buffers
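A minimal sketch of the idea (the rows collection is hypothetical): build markup in a temporary StringBuilder buffer instead of repeatedly concatenating immutable strings, and then write the result once.

```
// Build output in a temporary buffer to avoid the intermediate
// string allocations that repeated concatenation would cause.
StringBuilder sb = new StringBuilder(256);
for (int i = 0; i < rows.Count; i++)
{
    sb.Append("<td>");
    sb.Append(rows[i]);
    sb.Append("</td>");
}
Response.Write(sb.ToString());
```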
Use HtmlTextWriter When Building Custom Controls
More Information
Exception Management
Implement a Global.asax Error Handler
void Application_Error(object sender, EventArgs e)
{
  Exception ex = Server.GetLastError().GetBaseException();
  string message = String.Format("{0}\n{1}\n{2}",
    ex.GetType().FullName,
    ex.Message,
    ex.StackTrace);
  //Log the exception and inner exception information.
}

Monitor Application Exceptions
Use Try/Finally on Disposable Resources
try
{
conn.Open();
...
}
finally
{
  if(null != conn)
    conn.Close();
}

Write Code That Avoids Exceptions
public void Login(string UserName, string Password) {}
try
{
Login(userName,password);
}
catch (InvalidUserNameException ex)
{…}
catch (InvalidPasswordException ex)
{…}

public enum LoginResult
{
  Success, InvalidUserName, InvalidPassword, AccountLockedOut
}
public LoginResult Login(string UserName, string Password) {}

LoginResult result = Login(userName, password);
switch(result)
{
  case LoginResult.Success:
    . . .
  case LoginResult.InvalidUserName:
    . . .
  case LoginResult.InvalidPassword:
    . . .
}

Set Timeouts Aggressively
More Information
COM Interop
Use ASPCOMPAT to Call STA COM Objects
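Pages that call STA COM objects should declare the AspCompat attribute so that they run on an STA thread and avoid cross-apartment marshaling:

```
<%@ Page AspCompat="true" %>
```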
Avoid Storing COM Objects in Session State or Application State
Avoid Storing STA Objects in Session State
Do Not Create STA Objects in a Page Constructor
Supplement Classic ASP Server.CreateObject with Early Binding
… Connection con = new Connection();
More Information
Data Access
Use Paging for Large Result Sets
More Information
Use a DataReader for Fast and Efficient Data Binding
Prevent Users from Requesting Too Much Data
Consider Caching Data
More Information
Security Considerations
Constrain Unwanted Web Server Traffic
Turn Off Authentication for Anonymous Access
Validate User Input on the Client
Avoid Per-Request Impersonation
Avoid Caching Sensitive Data
Segregate Secure and Non-Secure Content
Only Use SSL for Pages That Require It
Use Absolute URLs for Navigation
Consider Using SSL Hardware to Offload SSL Processing
Tune SSL Timeout to Avoid SSL Session Expiration
More Information
IIS 6.0 Considerations
Process Model
Kernel Mode Caching
Web Gardens
Note Do not use the in-process session state store or any technique that causes process affinity if Web gardens are enabled.
IIS 6.0 vs. the ASP.NET Process Model
Enabling Web Gardens by Using IIS 6.0
Enabling Web Gardens by Using the ASP.NET Process Model
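A sketch of the relevant <processModel> attributes in Machine.config follows; the cpuMask value is illustrative, and the element's other attributes are omitted here:

```
<processModel
    enable="true"
    webGarden="true"
    cpuMask="0xF" />
```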
Configuring the cpuMask Attribute
CPUs  Hex   Bit Pattern  Results
2     0x3   11           2 processes, uses CPU 0 and 1.
4     0xF   1111         4 processes, uses CPU 0, 1, 2, and 3.
4     0xC   1100         2 processes, uses CPU 2 and 3.
4     0xD   1101         3 processes, uses CPU 0, 2, and 3.
8     0xFF  11111111     8 processes, uses CPU 0, 1, 2, 3, 4, 5, 6, and 7.
8     0xF0  11110000     4 processes, uses CPU 4, 5, 6, and 7.
Garbage Collector Configuration Flag
When to Use the Workstation GC
Configuring the Workstation GC
<configuration>
  <runtime>
    <gcServer enabled="false"/>
    <gcConcurrent enabled="false"/>
  </runtime>
</configuration>
Deployment Considerations
Avoid Unnecessary Process Hops
Understand the Performance Implications of a Remote Middle Tier
Short Circuit the HTTP Pipeline
<httpModules>
  <!--
  <add name="FormsAuthentication" type="System.Web.Security.FormsAuthenticationModule"/>
  -->
</httpModules>

<httpModules>
  <remove name="FormsAuthentication"/>
</httpModules>
Configure the Memory Limit
Disable Tracing and Debugging
<configuration>
  <system.web>
    <trace enabled="false" pageOutput="false"/>
    <compilation debug="false"/>
  </system.web>
</configuration>
Note You may also want
to verify that tracing and debugging are not enabled on individual
pages. Individual page settings override the settings in the Web.config
file.
Ensure Content Updates Do Not Cause Additional Assemblies to Be Loaded
\mydir
Page1.aspx
Page2.aspx
Page3.aspx
Page4.aspx
Page5.aspx

Assembly1.dll {page1.aspx, page2.aspx, page3.aspx, page4.aspx, page5.aspx}

Assembly1.dll {page1.aspx, page2.aspx, page3.aspx, page4.aspx, page5.aspx}
Assembly2.dll {page1.aspx}

Assembly1.dll {page1.aspx, page2.aspx, page3.aspx, page4.aspx, page5.aspx}
Assembly2.dll {page1.aspx}
Assembly3.dll {page2.aspx}
Avoid XCOPY Under Heavy Load
Note These settings are
also available in a hotfix for .NET Framework version 1.0. For more
information, see Knowledge Base article 810281, "Error Message: Cannot
Access File AssemblyName Because It Is Being Used by Another Process,"
at http://support.microsoft.com/default.aspx?scid=kb;en-us;810281.
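A sketch of the <httpRuntime> settings mentioned above follows; the values are illustrative only and should be derived from your measured XCOPY copy times, as described earlier:

```
<configuration>
  <system.web>
    <httpRuntime
        waitChangeNotification="5"
        maxWaitChangeNotification="10" />
  </system.web>
</configuration>
```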
Consider Precompiling Pages
Consider Web Garden Configuration
Consider Using HTTP Compression
Consider Using Perimeter Caching
Summary
Additional Resources