Ultra-Fast ASP.NET

Rick Kiessig

Provides information on building scalable Web sites using ASP.NET.

Mentioned in questions and answers.

I have an application that consists of a database and several services. One of these services adds information to the database (triggered by a user).

Another service periodically queries the database for changes and uses the new data as input for processing.

Until now I have used a configurable timer that queries the database every 30 seconds or so. I read that SQL Server 2005 featured notification of changes; however, in SQL Server 2008 this feature is deprecated.

What is the best way of getting notified of changes that occurred in the database directly in code? What are the best practices?

Notification Services was deprecated, but you don't want to use that anyway.

You might consider Service Broker messages in some scenarios; the details depend on your app.

In most cases, you can probably use SqlDependency or SqlCacheDependency. The way they work is that you include a SqlDependency object with your query when you issue it. The query can be a single SELECT or a complex group of commands in a stored procedure.

Sometime later, if another web server or user or web page makes a change to the DB that might cause the results of the previous query to change, then SQL Server will send a notification to all servers that have registered SqlDependency objects. You can either register code to run when those events arrive, or the event can simply clear an entry in the Cache.
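
If it helps to see the shape of it, here's a minimal console-style sketch of registering a SqlDependency and reacting to its OnChange event. The connection string, table, and column names are placeholders, and SqlDependency.Start() has to be called once (typically at application startup) against a database with Service Broker enabled:

using System;
using System.Data.SqlClient;

class SqlDependencyExample
{
    const string ConnString =
        "Data Source=.;Initial Catalog=MyDb;Integrated Security=True"; // placeholder

    static void Main()
    {
        SqlDependency.Start(ConnString);    // once per AppDomain; needs Service Broker
        using (var conn = new SqlConnection(ConnString))
        using (var cmd = new SqlCommand(
            "SELECT OrderId, Status FROM dbo.Orders", conn))  // two-part names, no SELECT *
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (sender, e) =>
            {
                // Fires once when the results may have changed; re-run the query and
                // create a new SqlDependency to keep listening.
                Console.WriteLine("Data changed: " + e.Info);
            };
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* consume the results */ }
            }
        }
        Console.ReadLine();                 // keep the process alive to receive events
        SqlDependency.Stop(ConnString);
    }
}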

Although you need to enable Service Broker to use SqlDependency, you don't need to interact with it explicitly. However, you can also use it as an alternative mechanism; think of it more as a persistent messaging system that guarantees message order and once-only delivery.

The details of how to use these systems are a bit long for a forum post. You can Google for them, or I also provide examples in my book (Ultra-Fast ASP.NET).

I’m interested in knowing some different approaches for retrieving data from Analysis Services, to use in either objects in code, or for end-user reporting.

I’ve used two different approaches in the past: one was using ADOMD to pull results and put them into a dataset; the other was using SQL OPENQUERY against a linked SSAS server to get results out as a SQL stored procedure result set. Both of these had advantages and disadvantages.

Over the years I’ve seen various questions along this line, so forgive me for any duplication, but what other methods are there for getting SSAS data into a format where other people’s code could use it?

I’ve considered XML result sets from SSAS over HTTP, then Linq to XML – Anyone have any experience with that?

Ideally I’d like a dataset with typed columns, or objects with properties, but I’m more interested in general approach than code samples. How have you got data from SSAS, apart from SSRS/Other dashboard controls?

XMLA is the "high power" approach -- but I'm not aware of a toolkit or library that really exposes the full capabilities of XMLA; I think you would have to craft it up yourself. For the projects I've done, that's just way too much work.

Instead, I used ADOMD.NET for retrieving results in code; the CellSet class in particular is fairly rich. For end-user analysis (slice and dice), most often I use Excel Pivot Charts (which are fabulous!); sometimes I also use Visio Pivot Diagrams. For fixed reporting, Reporting Services can access SSAS directly, and it even has its own query builder.
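
As a rough sketch of the ADOMD.NET approach (the server, cube, measure, and dimension names below are just placeholders):

using System;
using Microsoft.AnalysisServices.AdomdClient;

class CellSetExample
{
    static void Main()
    {
        using (var conn = new AdomdConnection(
            "Data Source=localhost;Catalog=SalesCube"))   // placeholder connection
        {
            conn.Open();
            const string mdx =
                "SELECT [Measures].[Sales Amount] ON COLUMNS, " +
                "[Date].[Calendar Year].Members ON ROWS " +
                "FROM [Sales]";
            using (var cmd = new AdomdCommand(mdx, conn))
            {
                CellSet cs = cmd.ExecuteCellSet();
                var rows = cs.Axes[1].Set.Tuples;          // the ROWS axis
                for (int r = 0; r < rows.Count; r++)
                {
                    Console.WriteLine("{0}: {1}",
                        rows[r].Members[0].Caption,
                        cs.Cells[0, r].FormattedValue);    // cell at (column 0, row r)
                }
            }
        }
    }
}

From there it's straightforward to project the cells into typed objects or a DataTable with typed columns.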

BTW, in case it helps, I have a chapter in my book about integrating SSAS with web sites as a way of offloading SQL Server: Ultra-Fast ASP.NET. My code examples use ADOMD; I also walk through building a simple cube, configuring automatic updates with SSIS, using proactive caching, building simple MDX queries, etc.

ORMs seem to be a fast-growing model, with both pros and cons on their side. From Ultra-Fast ASP.NET by Richard Kiessig (http://www.amazon.com/Ultra-Fast-ASP-NET-Build-Ultra-Scalable-Server/dp/1430223839/ref=pd_bxgy_b_text_b):

"I love them because they allow me to develop small, proof-of-concept sites extremely quickly. I can side step much of the SQL and related complexity that I would otherwise need and focus on the objects, business logic and presentation. However, at the same time, I also don't care for them because, unfortunately, their performance and scalability is usually very poor, even when they're integrated with a comprehensive caching system (the reason for that becomes clear when you realize that when properly configured, SQL Server itself is really just a big data cache"

My questions are:

  • What do you think of Richard's idea? Do you agree with him or not? If not, please explain why.

  • Which scenarios are best suited to an ORM, and which to traditional database queries? In other words, where should you use an ORM and where should you use traditional database queries :) -- for which kinds/sizes of applications should you undoubtedly choose an ORM, and for which traditional database queries?

Thanks in advance

I can't agree with the common complaint about ORMs that they perform badly. I've seen many plain-SQL applications by now. While it is theoretically possible to write optimized SQL, in reality those applications ruin any performance gain by writing unoptimized business logic.

When using plain SQL, the business logic gets tightly coupled to the DB model, and database operations and optimizations are left up to the business logic. Because there is no OO model, you can't pass around whole object structures. I've seen many applications which pass around primary keys and retrieve the data from the database again and again on each layer. I've seen applications which access the database in loops. And so on. The problem is: because the business logic is already hard to maintain, there is no room for any more optimizations. Often, when you try to reuse at least some of your code, you accept that it is not optimized for each case. Performance ends up poor by design.

An ORM usually doesn't require the business logic to care much about data access. Some optimizations are implemented in the ORM; there are caches and the ability to batch. These automatic (and runtime-dynamic) optimizations are not perfect, but they decouple the business logic from them. For instance, if a piece of data is used only conditionally, the ORM loads it using lazy loading on request (exactly once). You don't need to do anything to make this happen.
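
To illustrate what that lazy loading looks like in practice, here is a small, hedged sketch using Entity Framework Code First (EF 4.1+) as one example ORM; the Order/OrderLine model and ShopContext are made up, and other ORMs such as NHibernate behave similarly:

using System;
using System.Collections.Generic;
using System.Data.Entity;   // Entity Framework, assumed available
using System.Linq;

public class Order
{
    public int Id { get; set; }
    public virtual ICollection<OrderLine> Lines { get; set; } // virtual enables lazy loading
}

public class OrderLine
{
    public int Id { get; set; }
    public decimal Amount { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
    public DbSet<OrderLine> OrderLines { get; set; }
}

class Program
{
    static void Main()
    {
        using (var db = new ShopContext())
        {
            var order = db.Orders.First();      // one round-trip: the order row only
            // Lines is fetched lazily here, exactly once, because this code touches it.
            Console.WriteLine(order.Lines.Sum(l => l.Amount));
        }
    }
}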

On the other hand, ORMs have a steep learning curve. I wouldn't use an ORM for trivial applications unless the ORM is already in use by the same team.

Another disadvantage of an ORM (actually not of the ORM itself, but of the fact that you'll be working with both a relational database and an object model) is that the team needs to be strong in both worlds, the relational as well as the OO.

Conclusion:

  • ORMs are powerful for business-logic-centric applications with data structures complex enough that having an OO model is advantageous.
  • ORMs usually have a (somewhat) steep learning curve. For small applications, that cost could be too high.
  • Applications based on simple data structures, with little logic to manage them, are most likely easier and more straightforward to write in plain SQL.
  • Teams with a high level of database knowledge and not much experience with OO technologies will most likely be more efficient using plain SQL. (Of course, depending on the applications they write, it could be advisable for the team to switch focus.)
  • Teams with a high level of OO knowledge and only basic database experience are most likely more efficient using an ORM. (Same here: depending on the applications they write, it could be advisable for the team to switch focus.)

I have many operations in the database that need to trigger application code. Currently I am using database polling, but I hear that SQL Server Service Broker can give me MSMQ-like functionality.

  1. Can I listen to SQL Server Service Broker queues from .NET applications running on a different machine?
  2. If so, should I do it?
  3. If not, what would you recommend?

To answer your questions:

Can I listen to SQL Server Service Broker queues from .NET applications running on a different machine?

Yes.

If so, should I do it?

If not, what would you recommend?

You might consider using SqlDependency. It uses Service Broker behind the scenes, but not explicitly.

You can register a SqlDependency object with a SELECT query or a stored procedure. If another command changes the data that was returned from the query, then an event will fire. You can register an event handler, and run whatever code you like at that time. Or, you can use SqlCacheDependency, which will just remove the associated object from the Cache when the event fires.
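
If it helps, here's a rough sketch of the SqlCacheDependency flavor; the connection string and query are placeholders, and SqlDependency.Start() must already have been called once (typically in Application_Start):

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Web;
using System.Web.Caching;

public static class ProductCache
{
    const string ConnString =
        "Data Source=.;Initial Catalog=MyDb;Integrated Security=True"; // placeholder

    public static List<string> GetProductNames()
    {
        var products = (List<string>)HttpRuntime.Cache["products"];
        if (products == null)
        {
            using (var conn = new SqlConnection(ConnString))
            using (var cmd = new SqlCommand(
                "SELECT ProductId, Name FROM dbo.Products", conn))  // placeholder query
            {
                var dependency = new SqlCacheDependency(cmd);  // create before executing
                conn.Open();
                products = new List<string>();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        products.Add(reader.GetString(1));
                }
                // Evicted automatically when SQL Server signals that the results changed.
                HttpRuntime.Cache.Insert("products", products, dependency);
            }
        }
        return products;
    }
}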

You can also use Service Broker directly. However, in that case you will need to send and receive your own messages, as with MSMQ.

In load-balanced environments, SqlDependency is good for cases where code needs to run on every web server (such as flushing the cache). Service Broker messages are better for code that should only run once -- such as sending an email.

In case it helps, I cover both systems in detail with examples in my book (Ultra-Fast ASP.NET).

I am looking for information on how to create an ASP.NET web farm - that is, how to make an ASP.NET application (initially designed to work on a single web server) work on 2, 3, 10, etc. servers?

We created a web application which works fine when, say, there are 500 users at the same time. But now we need to make it work for 10 000 users (working with the web app at the same time).

So we need to set up 20 web servers and make something so that 10 000 users could work with the web app by typing "www.MyWebApp.ru" in their web browsers, though their requests would be handled by 20 web-servers, without their knowing that.

1) Is there special standard software to create an ASP.NET web farm?

2) Or should we create a web farm ourselves, by transferring requests between different web servers manually (using ASP.NET / C#)?

I found very little information on ASP.NET web farms and scalability on the web: in most cases, articles on scalability tell how to optimize an ASP.NET app and make it run faster. But I found no example of a "Hello world"-like ASP.NET web app running on 2 web servers.

It would be great if someone could post a link to an article or, better, tell about their own experience with ASP.NET "web farming" and addressing scalability issues.

Thank you, Mikhail.

1) Is there special standard software to create an ASP.NET web farm?

No.

2) Or should we create a web farm ourselves, by transferring requests between different web servers manually (using ASP.NET / C#)?

No.

To build a web farm, you will need some form of load balancing. For up to 8 servers or so, you can use Network Load Balancing (NLB), which is built in to Windows. For more than 8 servers, you should use a hardware load balancer.

However, load balancing is really just the tip of the iceberg. There are many other issues that you should address, including things like:

  1. State management (cookies, ViewState, session state, etc)
  2. Caching and cache invalidation
  3. Database loading (managing round-trips, partitioning, disk subsystem, etc)
  4. Application pool management (WSRM, pool resets, partitioning)
  5. Deployment
  6. Monitoring

In case it might be helpful, I cover many of these issues in my book: Ultra-Fast ASP.NET: Build Ultra-Fast and Ultra-Scalable web sites using ASP.NET and SQL Server.

I will be deploying my web application this weekend on a testing server. I have already had a couple of attempts at putting it up and have found trouble with:

  • Database connection
  • Authentication
  • Masterpage references

What major/minor pitfalls have you found and how would I go about avoiding or fixing them?

Or is there a one stop fix all for deploying web applications?

In the end, easy deployment should be part of the architectural-level design. It's one of those things that can be tricky to shoe-horn in at the end of a project. In addition to just getting the site running, you also need to include things like versioning, configuration changes, build process, support for multiple servers (if appropriate), etc.

A few guidelines:

  1. Centralize as many of your configuration parameters as you can
  2. Use a build process that lets you switch from local to production mode
  3. Flag config parameters with "debug" or "production", to make it easy to know which is which
  4. It's generally a good idea to pre-build a site in your dev environment, and deploy in binary form
  5. There are add-ins for Visual Studio that can help simplify / streamline the process
  6. Consider using image-based deployment for larger multi-server environments
  7. Consider using a staging environment, where things are 99% the same as your production site
  8. Don't forget to include IIS configuration details as part of your deployment process

In case it's of any interest, I cover deployment issues in my book: Ultra-Fast ASP.NET.

Here's my particular situation...

I have a decent amount of experience with webforms, and I must say, a lot of it has been pretty frustrating. I like that there are lots of built-in controls, but then I discover that they don't quite do what I want, out of the box. I end up rolling my own controls (that inherit from the built-in controls), such as GridViewThatCanSortItself or GridViewThatHasASelectionColumn (these may not have been the actual names, but you get the idea). I've often wondered, while struggling mightily to build such classes, whether figuring out the often convoluted event model was worth it. My attempts to use css to style things have been frustrating as well. There are some ASP.NET controls that will result in one html tag for one set of attributes and a different tag with another set of attributes. You don't realize this until you notice your css only works half the time.

So, my brain starts to wonder, could ASP.NET MVC be the answer? Reading some of the posts on SO has basically given me the impression that, while webforms definitely has its issues, I'd only be trading one set of problems for another. It even seems like Microsoft is trying to talk me out of it:

Quote from the asp.net site (http://www.asp.net/learn/mvc/tutorial-01-cs.aspx)

ASP.NET MVC...works well for Web applications that are supported by large teams of developers and Web designers who need a high degree of control over the application behavior.

That is really not me. Most of my projects are relatively small, and I'm usually the only programmer. I sometimes need to create very custom or unusual UI's, but I definitely don't have a team of programmers who can build components for me.

There is also the issue of javascript. I have a definite working knowledge of html and css, but I can't say the same for javascript. As clumsy and bloated as they are, I've been able to do some smooth enough looking things with UpdatePanels. I'm not sure how much time I'd need to spend just learning the javascript to be able to handle even simple AJAX scenarios in ASP.NET MVC.

I'm about to start working on a relatively simple and small web app, so now would be the time to take the plunge if I'm going to take the plunge. This app will use a SQL Server Express (2005 or 2008) back-end, and I'm thinking of also trying out SqlMetal as an ORM solution. So, that's already one thing I'm going to have to learn, although I at least have experience with--and really like--LinqToXml and LinqToObject. The pages of the web app will have some data grids (some with link columns), input boxes, labels, drop-down lists, check boxes, radio buttons, and submit/action buttons. I don't foresee anything more complicated than that. There will be about six or seven pages total.

Questions:

Given my experience, how painful will it be to learn ASP.NET MVC? Will it be worth it?

I've read some earlier questions comparing webforms to MVC, so I'm curious: how has MVC evolved over the past year or so? Is there anything new that would make the learning curve less steep?

Do I literally have to write code to generate all html by hand or are there code/libraries readily available in the community to assist with the process? (I know I read something about "html helpers"--that may be what I'm asking about here.)

Any other advice?

Update

Another question that occurred to me: Is the transition from ASP.NET webforms to MVC anything like going from standard WPF (using code-behind) to MVVM? I found learning WPF itself to be pretty challenging (and I still couldn't say I really get everything about it), but learning to work with WPF using the MVVM pattern was a relatively painless transition. So, I'm wondering how similar a jump it is to go from webforms to ASP.NET MVC.

Some developers seem to have an aversion to component-oriented programming. For others, it feels natural. If you find yourself constantly fighting the standard components, then it's easy enough to roll your own from scratch--which you would basically end up doing in MVC anyway. If you find yourself fighting the unit test model with web forms, you will find things easier with MVC.

However, MVC isn't a cure-all; there's a lot to learn. Some apps will be less complex than with web forms, and some will be much more complex.

I've found that web forms don't really gel with many developers until they deeply understand the page life cycle and use of ViewState. Until that point, there seems to be a lot of trial and error -- but it's easier to learn that than MVC with IOC, etc. As far as customizing output, it's often easier to use control adapters than to subclass the control. In case it helps, I walk through these issues from the web forms side in my book: Ultra-Fast ASP.NET.

In the end, I think it's partly a mindset thing, and which model fits the way you solve problems and think about your application better.

I have asp.net website name http://www.go4sharepoint.com

I have tried almost every way to improve the performance of this site. I have even checked it with Firebug and the Page Speed add-on in Firefox, but somehow I am not pleased with the result.

I have also tried things like removing whitespace, removing ViewState, optimizing the code that renders the page, and applying GZip, and there are no heavy session variables in use, but when I compare the site with other popular websites it is still not up to the mark.

I checked the CodeProject website and was surprised that even though they display a lot of stuff, their website loads fast and they also have a good loading rate.

To all experts: please suggest where I am going wrong in my development.

Thank you.

A stackoverflow user actually has a good book on this subject:

http://www.amazon.com/gp/product/1430223839?ie=UTF8&tag=mfgn21-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=1430223839

A couple of recommendations after looking at your site:

  1. Put some of your static files (images, js, etc.) on different domains so that they can be downloaded at the same time. (also turn off cookies for those domains)
  2. Use image sprites instead of separate images.
  3. Change when things are loaded. It looks like the script files for the ads are holding up content. You should make the content of the site load first by putting it in the HTML before the ads. Also, make sure that the height and width of things are specified so that the layout doesn't change as things are downloaded; layout shifts make the site feel slow. Use Google Chrome's developer tools to look at the download order and timeline of all your object downloads.
  4. Most of the slowness looks like it's coming from downloading items from sharepointads.com. Perhaps use fewer ads, or have them use space already reserved for them by specifying height and width.
  5. Add a far-future expiration time to the headers for all static content (there's a sketch of doing this from code after the list).
  6. Serve scaled images. Currently the browser is resizing the images. You could save tons of bandwidth by serving the images already the proper size.
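
On the far-future expiration point (item 5), here's a hedged sketch of setting those headers from code for content that goes through an ASP.NET handler; for plain static files served directly by IIS, the equivalent setting lives in IIS Manager / web.config instead. The content type and file path below are placeholders:

using System;
using System.Web;

public class CachedImageHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "image/png";
        // Publicly cacheable, expiring one year out (the practical HTTP 1.1 maximum).
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetExpires(DateTime.UtcNow.AddYears(1));
        context.Response.Cache.SetMaxAge(TimeSpan.FromDays(365));
        context.Response.WriteFile(context.Server.MapPath("~/images/logo.png")); // placeholder path
    }
}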

Also, download YSlow (from Yahoo) and Page Speed (from Google).

In my pre-ASP.NET development environment, there was a near-universal best practice:

* NEVER use the native controls!
* Instead, subclass ALL the controls, and ALWAYS use the subclassed version.

Why? Because that gave you a hook... one place to write code and have it applied throughout your application.

For example: Suppose you decide that you want a question mark icon to appear to the right of every TextBox in your webforms app. The icon is rendered, and hovering over it pops up bubble help -- iff there is text in the TextBox.ToolTip property.

How would you accomplish that, if you're using the MS-provided TextBox control?

If you consistently used a subclassed version of TextBox in your application, then you could go to that object, and add the method that renders the icon, stocked with your favorite bubblehelp javascript.

Presto! All of your app's TextBoxes sprout little question mark icons -- or they will, when you set their ToolTip text.

Over time, you can easily adapt and enhance ALL your TextBoxes, because they all have a base class that you can modify. You add a feature where ToolTips are set from a resource file. Next, you add a ShowOnLeft property that presents the icon on the left side of the TextBox. Do you like how the iPhone password control shows the last character you type, obscuring the earlier characters? Override your subclassed TextBox's default behavior for passwords with a method to implement that behavior.

I have never encountered advocates for this practice, in ASP.NET. Have I just missed it? An article describing two dozen ASP.NET design patterns doesn't have anything related. The posts about how to subclass server controls describe special-purpose one-offs, like a TextBox that only accepts digits -- but none of them recommend the pervasive "ALWAYS use subclassed controls!" policy that I subscribed to in olden days.

Does it make sense, to apply this ancient wisdom, when working in ASP.NET? To always use the subclassed equivalent of the native server controls?

If not -- why not? Are there other ways to skin this cat? A technique that provides you with just one place where you can augment ALL your application's instances of a given control?

I'd love to hear about that. I want my TextBoxQMark control. :-)

TIA - Hoytster

Most of the time, subclasses aren't needed.

You can use tag mapping or control adapters instead to accomplish many types of customizations.
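
For example, here's a hedged sketch of a control adapter that appends a question-mark help icon after any TextBox that has ToolTip text; the icon path is a placeholder, and the adapter is wired up through an App_Browsers/*.browser file rather than by subclassing:

using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.Adapters;

public class HelpIconTextBoxAdapter : WebControlAdapter
{
    protected override void Render(HtmlTextWriter writer)
    {
        base.Render(writer);    // render the TextBox itself exactly as usual

        var textBox = (TextBox)Control;
        if (!string.IsNullOrEmpty(textBox.ToolTip))
        {
            // Append a question-mark icon whose title carries the bubble help text.
            writer.AddAttribute(HtmlTextWriterAttribute.Src,
                Control.ResolveUrl("~/images/qmark.png"));      // placeholder icon path
            writer.AddAttribute(HtmlTextWriterAttribute.Title, textBox.ToolTip);
            writer.AddAttribute(HtmlTextWriterAttribute.Alt, "help");
            writer.RenderBeginTag(HtmlTextWriterTag.Img);
            writer.RenderEndTag();
        }
    }
}

A .browser file entry mapping System.Web.UI.WebControls.TextBox to this adapter type then applies it to every TextBox in the application, which gives you the single hook point you're describing without touching any markup.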

In case it helps, I discuss both approaches in my book, along with sample code: Ultra-Fast ASP.NET.

I would like to learn ASP.NET and just wanted some input as to which book to read.

Just a quick selection of a few good ones I can suggest for easy intros and good writing style.

Andrew Troelsen gives a nice front-to-back treatment of the full 3.5 framework. It's not ASP.NET specific or too deep a dive on anything, but it can be a nice general reference for lots of things you run into. It also gives a decent backstory on the framework and language.

If you're diving into ASP.NET for the first time, you may as well do it right. The Ultra-Fast ASP.NET book gives a step-by-step treatment like many others, but it focuses on the performance side and the options to consider for different site types, as well as common pitfalls.

enjoy.

If you are a newcomer to programming in general and want to learn ASP.NET MVC, get this book: ASP.NET MVC Cookbook. It's for beginners and a good book to start with. If you are an experienced programmer, Steven Sanderson's book is the best choice. So I would recommend getting the ASP.NET MVC 2.0 Cookbook to start, and Steve Sanderson's book to go deeper and become a real master.

I have a web application that uses ASP.NET with "InProc" session handling. Normally, everything works fine, but a few hundred requests each day take significantly longer to run than normal. In the IIS logs, I can see that these pages (which usually require 2-5 seconds to run) are running for 20+ seconds.

I enabled Failed Request Tracing in Verbose mode, and found that the delay is happening in the AspNetSessionData section. In the example shown below, there was a 39-second gap between AspNetSessionDataBegin and AspNetSessionDataEnd.

I'm not sure what to do next. I can't find any reason for this delay, and I can't find any more logging features that could be enabled to tell me what's happening here. Does anyone know why this is happening, or have any suggestions for additional steps I can take to find the problem?

My app usually stores 1-5MB in session for each user, mostly cached data for searches. The server has plenty of available memory, and only runs about 50 users.

Screenshot of Failed Request Trace

It could be caused by lock contention for the session state. Take a look at the last paragraph of MSDN's ASP.NET Session State Overview. See also K. Scott Allen's helpful post on this subject.

If a page is annotated with EnableSessionState="True" (or inherits the web.config default), then all requests for that page will acquire a write lock on the session state. All other requests that use session state -- even if they do not acquire a write lock -- are blocked until that request finishes.

If a page is annotated with EnableSessionState="ReadOnly", then the page will not acquire a write lock and so will not block other requests. (Though it may be blocked by another request holding the write lock.)

To eliminate this lock contention, you may want to implement your own [finer grained] locking around the HttpContext.Cache object or static WeakReferences. The latter is probably more efficient. (See pp. 118-122 of Ultra-Fast ASP.NET by Richard Kiessig.)
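
To make that last suggestion a bit more concrete, here is a hedged sketch of per-user locking around the Cache; the key scheme and the use of ConcurrentDictionary (which needs .NET 4) are my own assumptions, not a prescribed pattern:

using System;
using System.Collections.Concurrent;
using System.Web;

public static class PerUserStore
{
    // One lock object per cache key.
    static readonly ConcurrentDictionary<string, object> Locks =
        new ConcurrentDictionary<string, object>();

    public static object GetOrAdd(string userId, string key, Func<object> create)
    {
        string cacheKey = userId + ":" + key;
        object item = HttpRuntime.Cache[cacheKey];
        if (item == null)
        {
            // Only requests for the same user and key contend for this lock,
            // unlike session state's single per-session write lock.
            lock (Locks.GetOrAdd(cacheKey, _ => new object()))
            {
                item = HttpRuntime.Cache[cacheKey];
                if (item == null)
                {
                    item = create();
                    HttpRuntime.Cache.Insert(cacheKey, item);
                }
            }
        }
        return item;
    }
}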

Currently, my entire website does updating from SQL parameterized queries. It works, we've had no problems with it, but it can occasionally be very slow.

I was wondering if it makes sense to refactor some of these SQL commands into classes so that we would not have to hit the database so often. I understand hitting the database is generally the slowest part of any web application. For example, say we have a class structure like this:

Project (comprised of) Tasks (comprised of) Assignments

Where Project, Task, and Assignment are classes.

At certain points in the site you are only working on one project at a time, and so creating a Project class and passing it among pages (using Session, Profile, something else) might make sense. I imagine this class would have a Save() method to save value changes.

Does it make sense to invest the time into doing this? Under what conditions might it be worth it?

I work in a company which is primarily concerned with desktop apps not served over the internet.

Part of my value is I have a web based background and proficiencies in ASP.NET, JavaScript, JQuery etc.

The issue I'm having is that compared to a traditional desktop application building a rich web based app is more time consuming. This is understandable in terms of the hoops that need to be jumped through for web development. This is a cause of frustration for those not familiar with the ways of the web.

However, because people where I work are used to the non-web world, I need to utilise every possible tool and technique to be as productive as possible while building web-based applications.

As such I'm looking for what other people do to be as productive as possible while making web based applications. I'm primarily thinking of ASP.NET (not using MVC) but apart from that everything is open.

The best choice of tools tends to vary quite a bit from one developer to another. It's also very dependent on the details of the type of application you're building.

For me, Visual Studio Team Suite is indispensable. It has a rich set of visual designers and other features that have a big impact on my productivity. Deep integration with bug reporting and source control is another huge time saver (I use Team Foundation Server).

With web forms, you can of course often use drag-and-drop components, which improves productivity for some people (although not me). I can't say I'm a fan of most third-party components. They can be quirky, take a long time to learn well, and then still not do exactly what I want them to. It's also a skill that often isn't portable from one job to the next.

You can leverage jQuery to help simplify scripting -- although even with jQuery I find scripting to be one of the most time-consuming and error prone aspects of web development.

If it's suitable for your environment, you might explore Silverlight. That way, you can often have the best of both worlds, building desktop-like apps using desktop-like tools, but with web-based deployment (Expression Blend is an awesome tool). You can also use Silverlight as a replacement for JavaScript in many cases, with code that's compiled and type-safe.

Good tools on the data side are critical, such as SQL Server Profiler. Visual Studio Team Data (part of Team Suite) is invaluable for its data generation, unit testing, deployment, and management aspects.

Something very underrated by many web developers is building up an appropriate infrastructure: things like logging and performance counters, that can help you track down problems quickly when they occur. A solid configuration system is also important.

In case it helps, I put together a longer list of tools. I also write about something I call the "ultra-fast approach" in my book, which is about more than just building web sites that run fast; it's also about how to build them quickly and reliably: Ultra-Fast ASP.NET.

One of my Clients has a reservation based system. Similar to air lines. Running on MS SQL 2005.

The way the previous company has designed it is to create an allocation as a set of rows.

Simple Example Being:

AllocationId | SeatNumber | IsSold

1234         | A01        | 0

1234         | A02        | 0

In the process of selling a seat the system will establish an update lock on the table.

We have a problem at the moment where the locking process is running slow and we are looking at ways to speed it up.

The table is already efficiently indexed, so we are looking at a hardware solution to speed up the process. The table has about 5 million active rows and sits on a RAID 50 SAS array.

I am assuming hard disk seek time is going to be the limiting factor in speeding up update locks when you have 5 million rows and are updating 2-5 rows at a time (I could be wrong).

I've heard about people using index partitioning over several disk arrays; has anyone had similar experiences with trying to speed up locking? Can anyone give me some advice on a possible solution, on what hardware might be worth upgrading, or on what technology we could take advantage of in order to speed up the update locks (without moving to a cluster)?

Here are a few ideas:

  1. Make sure your data and logs are on separate spindles, to maximize write performance.
  2. Configure your drives to only use the first 30% or so for data, and have the remainder be for backups (minimize seek / random access times).
  3. Use RAID 10 for the log volume; add more spindles as needed for performance (write performance is driven by the speed of the log)
  4. Make sure your server has enough RAM. Ideally, everything needed for a transaction should be in memory before the transaction starts, to minimize lock times (consider pre-caching). There are a bunch of performance counters you can check for this.
  5. Partitioning may help, but it depends a lot on the details of your app and data...

I'm assuming that the T-SQL, indexes, transaction size, etc, have already been optimized.

In case it helps, I talk about this subject in detail in my book (including SSDs, disk array optimization, etc) -- Ultra-Fast ASP.NET.

Situation: ASP.NET live website that's occasionally too busy.

Adding full profiling to the code will have too heavy an impact on performance. Using Performance Monitor, we quickly found a sawtooth pattern in the "bytes in all heaps" counter, which pairs with the GC counters. We consider certain pieces of code to be the culprit.

Is there a way or possibility to temporarily "inject" profiling, either for certain pages, libs, functions or whatever? Preferably as lightweight as possible, as any additional overhead might bring down this fragile system. I'm aware that .NET does not support method callback hooks (as is common with AOP).

A few ideas:

  1. Use custom Windows performance counters. They are very lightweight (1 to 2%), and you can use them not just to count, but also for timing measurements, to look at how often certain operations are slower than a threshold, etc. There's a small sketch of this after the list.
  2. If you're using IIS 7, you can enable Failed Request Tracing. To limit the perf impact, be careful to not enable it for too many pages. Those traces can provide lots of detail, and you can inject more info into them programmatically if you need to.
  3. Use the Windows Event log to write custom details under exceptional conditions. Perf impact is minimal as long as you don't over-do it.
  4. One cause of sawtooth memory behavior can be not calling Dispose() when you should (or wrapping IDisposable objects in using statements, which will call it for you); you might want to review your code to look for that.
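
For item 1, here is a minimal sketch of a custom counter; the category and counter names are placeholders, and creating the category requires admin rights, so it's normally done once at install time:

using System;
using System.Diagnostics;

public static class RequestCounters
{
    const string Category = "MyApp";                       // placeholder category name
    const string SlowRequests = "Slow Requests/sec";       // placeholder counter name

    // Run once, with admin rights (e.g. from an installer), to create the category.
    public static void Install()
    {
        if (PerformanceCounterCategory.Exists(Category))
            return;
        var counters = new CounterCreationDataCollection();
        counters.Add(new CounterCreationData(SlowRequests,
            "Requests that took longer than the threshold",
            PerformanceCounterType.RateOfCountsPerSecond32));
        PerformanceCounterCategory.Create(Category, "Counters for MyApp",
            PerformanceCounterCategoryType.SingleInstance, counters);
    }

    static readonly PerformanceCounter SlowCounter =
        new PerformanceCounter(Category, SlowRequests, false);   // false = writable

    // Call at the end of a request (e.g. from Application_EndRequest) with its elapsed time.
    public static void RecordRequest(TimeSpan elapsed)
    {
        if (elapsed > TimeSpan.FromSeconds(2))
            SlowCounter.Increment();        // shows up immediately in perfmon
    }
}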

In case it's helpful, you might also be interested in the performance tips from my book: Ultra-Fast ASP.NET.

Edit: you might also try using .NET Memory Profiler (free trial available) to attach to the running process. It's fairly invasive compared to counters, but if you need to capture a snapshot of the current state of memory to debug your problem, there may not be a better choice.

Background

My business is getting more and more requests for web development and as such I'm adding another .NET Developer to the team.

My current development environment is really poor, just a single PC (Windows XP) with a local IIS installation hosting web development projects. I open those with Visual Studio 2008 Professional, with no version control in use.

Problems with this Approach

This approach doesn't fit any requirements of team based development and hinders Test Driven Development, Versioning and Deploying. Because of this, I plan to install a domain environment with Windows Server 2008 and an additional Web server devoted to development.

Business Requirements

  • Team Development on 3-tiered Web Applications
  • Capability for Test Driven Development
  • Source Control Server
  • Easy deployment process

Questions

  • What do I need to make this happen?
  • What approaches do you use for team based web development?
  • How many servers do you have?
  • Can all of these run on the same server?
  • What other pitfalls should I watch out for?

I'm a big fan of the Microsoft tools, in part because they're well integrated with each other. Subversion may not cost anything to buy, but that doesn't mean it's free: you still have to pay your team while they learn, use and maintain it, and to fix any problems that might come up along the way. In my experience, software costs tend to be small when you take the long-term view.

Without knowing anything about your team, your product, your budget, your software process, and many other factors that should be folded into a decision like this, I can offer the following very general recommendations:

  1. Use Visual Studio Team Edition for your development. Team Test provides load test capabilities. Team Data provides database versioning and automated deployment and update, as well as data generation and database unit testing. Team Developer provides mechanisms for unit testing web apps, plus profiling and static code analysis. Team Suite provides all features in one package.
  2. Use Team Foundation Server for source control; it includes automated builds, a SharePoint-based team portal, transactional check-ins, full integration with Visual Studio, integrated bug reporting, uses SQL Server as the repository, provides team-oriented statistics (bugs per day, etc), facilitates hand-off of test data from QA to Devs, etc.
  3. For hardware, get one dedicated server for TFS and builds. Use a Windows domain, and have a separate server as the domain controller and for misc storage and to hold backups from the TFS server (Exchange might run there, too). Each dev has their own machine, including enough resources to run a local copy of SQL Developer, assuming your app uses a database. Gigabit networks connecting everything.
  4. For larger apps or teams, it's also a good idea to have an internal test environment of one or more servers that's configured somewhat similarly to your production servers, in terms of things like separate volumes for DB data and logs, load balancing / web garden, etc. Nightly builds get deployed there, for testing the next day. That can be one server, or in some environments, possibly a virtual server.
  5. Automated deployment gets more important the larger your project becomes. There are some cool options there in multi-server environments, such as Windows Deployment Services (WDS).
  6. Don't forget about monitoring and logging.

In case it's helpful, I cover a number of these issues and options in the Infrastructure and Operations section of my book: Ultra-Fast ASP.NET.

I am making a request to an image and the response headers that I get back are:

Accept-Ranges:bytes
Content-Length:4499
Content-Type:image/png
Date:Tue, 24 May 2011 20:09:39 GMT
ETag:"0cfe867f5b8cb1:0"
Last-Modified:Thu, 20 Jan 2011 22:57:26 GMT
Server:Microsoft-IIS/7.5
X-Powered-By:ASP.NET

Note the absence of the Cache-Control header.

On subsequent requests on Chrome, Chrome knows to go to the cache to retrieve the image. How does it know to use the cache? I was under the impression that I would have to tell it with the Cache-Control header.

To set Cache-Control, you have to specify it yourself. You can do it in web.config, in IIS Manager for selected folders (static content, images, ...), or set it in code. The HTTP 1.1 standard recommends one year in the future as the maximum expiration time.

Setting an expiration date one year in the future is considered good practice for all static content on your site. Not having it in the headers results in If-Modified-Since requests, which can take longer than first-time requests for small static files. The ETag header is used in those calls.

When you have Cache-Control: max-age=31536000, plain full responses will outnumber If-Modified-Since calls, and because of that it is good to remove the ETag header, which also gives smaller response headers for static files. IIS doesn't have a setting for that, so you have to call Response.Headers.Remove("ETag"); in the PreSendRequestHeaders event.
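
A minimal sketch of that, in Global.asax.cs (it assumes the IIS 7+ integrated pipeline, since Response.Headers isn't writable under classic mode or Cassini):

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_PreSendRequestHeaders(object sender, EventArgs e)
    {
        // Strip the ETag so clients rely on the far-future Cache-Control header alone.
        HttpContext.Current.Response.Headers.Remove("ETag");
    }
}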

And if you want to optimize your headers further, you can remove X-Powered-By: ASP.NET in the IIS settings, and remove the X-AspNet-Version header (although I don't see it in your response) in web.config, via enableVersionHeader="false" on the system.web/httpRuntime element.

For more tips I suggest a great book: http://www.amazon.com/Ultra-Fast-ASP-NET-Build-Ultra-Scalable-Server/dp/1430223839

I am working on a website and this is my first web project.

Scenario for Session

I have created a database for my project with a fairly high security level. I want to manage the session for each and every user who logs in to my website. Session state can be maintained using either a cookie or the URL, only one at a time.

Now I have gone over all four session state modes, i.e. 1. InProc, 2. State Server, 3. SQL Server, 4. Custom.

After reviewing all these modes, I am confused about which one I should use: SQL Server or Custom.

Basically I want to store session-related information in my own database instead of Aspnet_db, the default database provided by Microsoft. I have created all the tables related to login and registration, but I don't know how to store the session in my database, or what tables I need to create to maintain it there.

I want to create a complete log of session and login related information in my database (persistent for at least 1 year). I want to use a machineKey with AES and SHA1.

<sessionState mode="Custom" cookieless="AutoDetect" timeout="15" regenerateExpiredSessionId="true" stateNetworkTimeout="10" >
    </sessionState>
    <machineKey decryption="AES" 
                validation="SHA1"  
                decryptionKey="7E047D50A7E430181CCAF7E0D1771330D15D8A58AEDB8A1158F97EEF59BEB45D" 
                validationKey="68B439A210151231F3DBB3F3985E220CFEFC0662196B301B84105807E3AD27B6475DFC8BB546EC69421F38C1204ACFF7914188B5003C1DCF3E903E01A03C8578"/>

<add name="conString" connectionString="Data Source=192.168.1.5; Initial Catalog=dbName; Integrated Security=True;" providerName="System.Data.SqlClient" />

What do I need to specify in web.config?

My Data Source= 192.168.1.5 Database name= db.mdf

What I need to know about

  1. What tables do I need to add to my database to store session-related information? E.g. session ID (is anything else stored as well?), session time, session start time, session end time, session expire time. I don't know what is usually kept.
  2. Do I need to encrypt the session ID before storing it in the database? If yes, will the encryption be automatic, or do I need to write some code beyond what I put in web.config above?
  3. How will mode="Custom" be used in web.config with my database, in the following code?

<sessionState mode="Custom" cookieless="AutoDetect" timeout="15" regenerateExpiredSessionId="true" stateNetworkTimeout="10" > 
</sessionState> 

If you're using the SQL Server session provider, you should run aspnet_regsql to create the tables you need:

aspnet_regsql -E -S localhost -ssadd -sstype p

(replace localhost with .\SQLEXPRESS if you're using SQL Express)

You can also specify a custom DB name with the -d flag if you don't want the command to create the aspnetdb database. You can also run the command without flags to use wizard mode.
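
With the tables in place, web.config then points the built-in SQL provider at them with something along these lines (the connection string and database name are placeholders; allowCustomSqlDatabase is only needed if you used -d to target your own database):

<sessionState mode="SQLServer"
              allowCustomSqlDatabase="true"
              sqlConnectionString="Data Source=192.168.1.5;Initial Catalog=MySessionDb;Integrated Security=True"
              cookieless="AutoDetect"
              timeout="15" />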

If you want to build a custom session provider (not a small task), you might start by looking at the script that's run by the command above:

C:\Windows\Microsoft.NET\Framework\v2.0.50727\InstallPersistSqlState.sql

Although it depends on your requirements, generally encryption of session state doesn't add much value. However, if your data is particularly sensitive, then it might be worth considering. Note, though, that the biggest risk with session state normally isn't on the DB side, rather it's on the client side, with one user being able to steal a session from another user, by getting access to their session cookie. Because of that, before I resorted to encrypting on the DB side, I would at least use SSL for all pages that reference the session cookie.

In case it helps, I cover many aspects of customizing session state in my book, although I stop short of demonstrating a full custom provider: Ultra-Fast ASP.NET.

I'm building an ASP.NET website. I have a "cart" class which stores the items in the user's cart. I don't want to re-query the database every time the page reloads to populate the cart items stored in this object. Is the best way to store/persist instantiated objects to put them in a session and store the session in a database (we're on SQL Server 2k8)? It seems like that's what most are recommending from reading other posts on StackOverflow. Our site has a pretty high amount of traffic, so it's easy to imagine 1000s of these objects being active at any given time.

I'm new to building ASP.Net websites. Is it common practice to persist user objects (not just simple variables in a session or cookie, but class objects)... also along the lines of persistent objects, I plan on creating a static class which stores commonly used site-wide data such as a List of U.S. states... are there any pitfalls with doing this? I don't want to shoot myself in the foot.

Update:

We are in a farm environment, so storing sessions in a database seems out of the question... if one server goes down we roll over to another, in which case the session data may be lost. We were considering using a separate server for storing sessions, which would work in our farm environment, but I'm iffy about storing that many instantiated objects in memory.

Since you said that you don't want to requery the DB each time a page loads, Session state would be a poor choice -- since that's exactly what it does (assuming you're using SQL mode, since InProc mode won't work for a web farm). In fact, there are normally two round-trips to the DB for each request: one at the beginning to read the session object and update the session expiration time, and another at the end to update it. Sessions also impose locks on your pages while they're active, which can be an issue for sites that use Ajax or frames or where users often use multiple windows.

In general, you will be much better off from a performance and scalability perspective by storing the objects in SQL Server yourself. With that approach, you can do things like cache the objects using a SqlDependency or SqlCacheDependency to avoid round-trips. In a web farm, it also often helps to use cookies to help ensure that everything is in sync, since there can be a slight delay from when the DB is updated to when the cache entries are cleared on all servers by way of notifications.

In case it helps, I cover these types of issues in detail in my book: Ultra-Fast ASP.NET.

I’m looking for the best way of using threads considering scalability and performance.

In my site I have two scenarios that need threading:

  1. UI trigger: for example the user clicks a button, the server should read data from the DB and send some emails. Those actions take time and I don’t want the user request getting delayed. This scenario happens very frequently.

  2. Background service: when the app starts, it triggers a thread that runs every 10 minutes, reads from the DB, and sends emails.

The solutions I found:

A. Use the thread pool - BeginInvoke: This is what I use today for both scenarios. It works fine, but it uses the same threads that serve the pages, so I think I may run into scalability issues; can this become a problem?

B. No use of the pool – ThreadStart: I know starting a new thread takes more resources than using a thread pool. Can this approach work better for my scenarios? What is the best way to reuse the opened threads?

C. Custom thread pool: Because my scenarios occur frequently, maybe the best way is to start a new thread pool?

Thanks.

Out of your three solutions, don't use BeginInvoke. As you said, it will have a negative impact on scalability.

Between the other two, if the tasks are truly background and the user isn't waiting for a response, then a single, permanent thread should do the job. A thread pool makes more sense when you have multiple tasks that should be executing in parallel.
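
As one hedged way of doing that (BlockingCollection needs .NET 4, and EmailJob/SendEmail here are made-up placeholders):

using System;
using System.Collections.Concurrent;
using System.Threading;

public class EmailJob
{
    public string To { get; set; }
    public string Body { get; set; }
}

public static class BackgroundMailer
{
    static readonly BlockingCollection<EmailJob> Queue = new BlockingCollection<EmailJob>();

    // Call once at startup (e.g. from Application_Start).
    public static void Start()
    {
        var worker = new Thread(() =>
        {
            // Blocks when the queue is empty; wakes up as soon as work arrives.
            foreach (EmailJob job in Queue.GetConsumingEnumerable())
            {
                try { SendEmail(job); }
                catch (Exception) { /* log and keep the worker alive */ }
            }
        });
        worker.IsBackground = true;     // don't prevent AppDomain shutdown
        worker.Start();
    }

    // Called from page code; returns immediately, so the user's request isn't delayed.
    public static void Enqueue(EmailJob job)
    {
        Queue.Add(job);
    }

    static void SendEmail(EmailJob job)
    {
        // Placeholder: use System.Net.Mail.SmtpClient (or similar) here.
    }
}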

However, keep in mind that web servers sometimes crash, AppPools recycle, etc. So if any of the queued work needs to be reliably executed, then moving it out of process is probably a better idea (such as into a Windows Service). One way of doing that, which preserves the order of requests and maintains persistence, is to use Service Broker. You write the request to a Service Broker queue from your web tier (with an async request), and then read those messages from a service running on the same machine or a different one. You can also scale nicely that way by simply adding more instances of the service (or more threads in it).

In case it helps, I walk through using both a background thread and Service Broker in detail in my book, including code examples: Ultra-Fast ASP.NET.

I've developed an ASP.NET web application with YUI as the JavaScript library. My site was so slow that it took three minutes to view my page on the first visit.

  1. When inspected through Firebug, my YUI file was too heavy, at 278 KB.
  2. What can be done to improve performance?

I agree that YUI is a bit too heavy for many sites.

In case it helps, you might want to have a look at my book for some ideas on how to make things run faster: Ultra-Fast ASP.NET: Build Ultra-Fast and Ultra-Scalable web sites using ASP.NET and SQL Server.

We have a 500 GB database that performs about 10,000 writes per minute.

This database has a requirement for real-time reporting. To service this need we have 10 reporting databases hanging off the main server.

The 10 reporting databases are all fed from the 1 master database using transactional replication.

The issue is that the server and replication are starting to fail with PAGEIOLATCH_SH errors - these seem to be caused by the master database being overworked. We are upgrading the server to a quad-proc / quad-core machine.

As this database and the need for reporting are only going to grow (20% growth per month), I wanted to know if we should start looking at hardware (or another 3rd-party application) to manage the replication (and if so, what should we use), OR whether we should change the replication from the master database replicating to each of the reporting databases to the master replicating to reporting server 1, reporting server 1 replicating to reporting server 2, and so on.

Ideally the solution will cover us up to a 1.5 TB database with 100,000 writes per minute.

Any help greatly appreciated

Depending on what you're inserting, a load of 100,000 writes/min is pretty light for SQL Server. In my book, I show an example that generates 40,000 writes/sec (2.4M/min) on a machine with simple hardware. So one approach might be to see what you can do to improve the write performance of your primary DB, using techniques such as batch updates, multiple writes per transaction, table valued parameters, optimized disk configuration for your log drive, etc.

If you've already done as much as you can on that front, the next question I have is what kind of queries are you doing that require 10 reporting servers? That seems unusual, even for pretty large sites. There may be a bunch you can do to optimize on that front, too, such as offloading aggregation queries to Analysis Services, or improving disk throughput. For as long as you can, scaling up is usually a better way to go than scaling out.

I tend to view replication as a "solution of last resort." Once you've done as much optimization as you can, I would look into horizontal or vertical partitioning for your reporting requirements. One reason is that partitioning tends to result in better cache utilization, and therefore higher total throughput.

If you finally get to the point where you can't escape replication, then the hierarchical approach suggested by fyjham is definitely a reasonable one.

In case it helps, I cover most of these issues in depth in my book: Ultra-Fast ASP.NET.

We have an application that hits a web service successfully, and the data returned updates our DB. What I'm trying to do is allow the user to continue using other parts of our web app while the web service processes their request and returns the necessary data.

Is this asynchronous processing? I've seen some console app samples on the msdn site, but considering this is a web form using a browser I'm not sure those samples apply. What if the user closes the browser window mid request? Currently we're using the Message Queue which "waits" for the web service to respond then handles the DB update, but we'd really like to get rid of that.

I'm (obviously) new to async requests and could use some help figuring this out. Does anyone have some code samples or pertinent articles I could check out?

Yes, what you're describing is async processing.

The best solution depends to some degree on the nature of the web services call and how you want to handle the results. A few tips that might help:

  1. One approach is to send a request from the initial web request to a background thread. This works best if your users don't need to see the results of the call as soon as it completes.
  2. Another approach is to have your server-side code make an async web services call. This is the way to go if your users do need to see the results. The advantage of an async call on the server side is that it doesn't tie up an ASP.NET worker thread waiting for results (which can seriously impair scalability); there's a sketch of this approach after the list.
  3. Your server-side code can be structured either as a web page (*.aspx) or a WCF service, depending on what you want to have it return. Both forms support async.
  4. From the client, you can use an async XMLHTTP request (Ajax). That way, you will receive a notification event when the call completes.
  5. Another approach for long-running tasks is to write them to a persistent queue using Service Broker. This works best for things that you'd like users to be able to start and then walk away from and see the results later, with an assurance that the task will be completed.

In case it helps, I cover each of these techniques in detail in my book, along with code examples: Ultra-Fast ASP.NET.

I am new to threading. My boss gave me a scenario: we have a list of objects that is about 70 GB, and we want to load it from a database. That takes time, and I want to utilize the CPU (multi-threading) in this scenario, i.e. partition the data, load the first part, then the second part, and while the second part is loading, process the first part. What should I do? Please guide me.

Do you have control over how the data is loaded into the database, and what the hardware looks like?

To load 70GB of data, you will be I/O bound at first. If the data lives in a single volume, trying to use multiple threads will just cause the disk heads to thrash as they seek back-and-forth across the drive.

That means your first step should be to maximize the performance of your disk subsystem. You can do that by doing things like:

  1. Limiting your disk partition size to the first third of the drive
  2. Putting as many spindles as you can into a single large RAID volume, up to the speed of your disk controller
  3. Using SSDs instead of rotating magnetic drives
  4. Using a high-speed disk controller
  5. Using multiple disk controllers
  6. Spreading your drives out among multiple controllers

Once you have that part done, the next step is to partition your data among as many disks and controllers as possible, while still allowing your log file to be on a volume by itself. If you can fill two entire controllers with fast RAID volumes, then divide your data among them. In some cases, it can help to use SQL Server's table partitioning mechanisms to help with the process and to force certain parts of the table to be on certain physical volumes.

After the partitioning is done, then you can build your app to have one thread read from each physically separate partition, to avoid disk thrashing.

Once you're past being I/O bound, then you can start thinking about ways to optimize the CPU side of things -- but it's pretty unusual to get to that point.

Depending on how much speed you need, this type of thing can get complex (and expensive) quickly....

In case it helps, I discuss a number of these infrastructure issues in detail in my book: Ultra-Fast ASP.NET.

I am using Context.RewritePath in Application_BeginRequest to make my URLs user friendly. Everything works fine on my local machine, but on the (shared) server I get 404 errors. Do you have any idea how I can fix this problem?

thanks

Under Cassini, Application_BeginRequest runs for all files. Under IIS, it only runs for files with managed handlers, such as *.aspx files.

For the general case, you will need to create your own HttpModule. Here's an example (based off of a similar one from my book: Ultra-Fast ASP.NET):

using System;
using System.Web;

namespace Samples
{
    public class RewriteExample : IHttpModule
    {
        public void Init(HttpApplication context)
        {
            context.BeginRequest += OnBeginRequest;
        }

        void OnBeginRequest(object sender, EventArgs e)
        {
            HttpApplication application = (HttpApplication)sender;
            HttpContext context = application.Context;
            // re-write URL here...
        }

        public void Dispose()
        {
        }
    }
}

Then register it in web.config (this is for IIS; using Cassini is slightly different):

<system.webServer>
  <modules>
    . . .
    <add name="RewriteExample" type="Samples.RewriteExample" />
  </modules>
</system.webServer>

I just want some tricks for improving ASP.NET application performance. This question is a little broad, but if you can give me some general tips, I will appreciate it.

I'm running an ASP.NET app in which I have added an insert/update query to the [global] Page_Load. So, each time the user hits any page on the site, it updates the database with their activity (session ID, time, page they hit). I haven't implemented it yet, but this was the only suggestion given to me as to how to keep track of how many people are currently on my site.

Is this going to kill my database and/or IIS in the long run? We figure that the site averages between 30,000 and 50,000 users at one time. I can't have my site constantly locking up over a database hit with every single page hit for every single user. I'm concerned that's what will happen; however, this is the first time I have attempted a solution like this, so I may just be overly paranoid.

As a direct answer to your question, yes, running a database query in-line with every request is a bad idea:

  1. Synchronous requests will tie up a thread, which will reduce your scalability (fewer simultaneous activities)
  2. DB inserts (or updates) are writes to the DB, which will put a load on your log volume
  3. DB accesses shouldn't be required in a single server / single AppPool scenario

I answered your question about how to count users in the other thread:

Best way to keep track of current online users

If you are operating in a multi-server / load-balanced environment, then DB accesses may in fact be required. In that case:

  1. Queue them to a background thread so the foreground request thread doesn't have to wait
  2. Use Resource Governor in SQL 2008 to reduce contention with other DB accesses
  3. Collect several updates / inserts together into a single batch, in a single transaction, to minimize log disk I/O pressure
  4. Return the current count with each DB access, to minimize round-trips

In case it's of any interest, I cover sync/async threading issues and the techniques above in detail in my book, along with code examples: Ultra-Fast ASP.NET.

I'm working on an ASP.NET application. The application is reasonably large and involves a lot of pages with many referenced images and scripts. I have hosted that content on a cookie-free sub-domain.

My problem is that I have to manually update the paths for all images and scripts upon deployment, changing them from relative references on the actual domain to absolute references to the cookie-free domain's content. How do I automate this? Has anybody done this?

StackOverflow also uses a cookie-free domain for the various images on the site. Here's an example: the upvote image, which is loaded from http://sstatic.net/so/img/vote-arrow-up.png


You can use tag transforms or control adapters.

In case it helps, I cover both approaches in my book, along with code examples: Ultra-Fast ASP.NET

My web application requires as little lag as possible. I have tried hosting it on a dedicated server, but users on the other side of the world have complained about latency issues.

So I am considering using a CDN or Amazon services... would either help resolve this?

The application uses a lot of AJAX, so latency can be an issue.

A CDN will only improve the performance of your static content -- if your Ajax code requires active content, then it won't help for that.

Amazon AWS might help, but it depends on the details of your application. Amazon isn't particularly well-known for delivering a low-latency solution.

Most apps that require low latency end up addressing the issue from many directions. A combination of a CDN and dedicated servers is certainly one approach. One key there is choosing the right data center for your servers (a low-latency hub).

In case it might help, I wrote a book about this subject: Ultra-Fast ASP.NET, which includes a discussion of client-side issues, hardware infrastructure, CDNs, caching, and many other issues that can impact latency.