Tips for ULS Logging Part 2

In part 1 of ULS logging tips (http://blogs.technet.com/b/speschka/archive/2011/01/23/tips-for-uls-logging.aspx) I included some code to add a custom ULS logging Area and demonstrated how to use it within your project.  After working with it a bit I noticed a few things:

1.       There was some code in there that didn’t jibe with the latest SDK – it implemented some pieces the SDK implies shouldn’t be needed, and it did some things slightly differently than the SDK example (which, by the way, needs to be updated and I will work on that separately).

2.       Logging would only occur when I set the TraceSeverity to Medium; there was not an effective way to layer in different trace levels.

3.       The most super annoying part – when I tried to invoke my custom class that is required for my custom Area, it would fail when I tried doing any logging during a POST.  It triggered the all too common error about not being able to update an SPWeb during a POST unless you set the AllowUnsafeUpdates property to true.  Setting that on an SPWeb during a log event, just to make my log event work, seemed borderline insane (my apologies to all the insane people out there by the way).  So I decided there must be a better way.

In this posting then, I’m going to improve upon the previous example and add to it along the way.  Here’s what we’re going to show:

1.       An improved class for the custom Area – it actually slims down some in this version.

2.       Integration with the Configure diagnostic logging page in the Monitoring section of Central Admin.  This integration also lets you configure trace levels for the custom Area and Category.

3.       Registration information – a very simple way to both register and unregister the custom ULS logging class so it integrates with Central Admin, and can be removed from Central Admin.

To begin with, let’s look at the new streamlined version of the class.  There are many similarities between this and the version I showed in the first posting, but this is a little slimmer and simpler.  Here’s what it looks like now.
    [Guid("833B687D-0DD1-4F17-BF6A-B64FBC1AC6A8")]
    public class SteveDiagnosticService : SPDiagnosticsServiceBase
    {
        private const string LOG_AREA = "Steve Area";

        public enum LogCategories
        {
            SteveCategory
        }

        public SteveDiagnosticService()
        {
        }

        public SteveDiagnosticService(string name, SPFarm parent)
            : base(name, parent)
        {
        }

        public static SteveDiagnosticService Local
        {
            get
            {
                return SPDiagnosticsServiceBase.GetLocal<SteveDiagnosticService>();
            }
        }

        public void LogMessage(ushort id, LogCategories LogCategory,
            TraceSeverity traceSeverity, string message,
            params object[] data)
        {
            if (traceSeverity != TraceSeverity.None)
            {
                SPDiagnosticsCategory category
                 = Local.Areas[LOG_AREA].Categories[LogCategory.ToString()];
                Local.WriteTrace(id, category, traceSeverity, message, data);
            }
        }

        protected override IEnumerable<SPDiagnosticsArea> ProvideAreas()
        {
            //note: Log.LOG_ID has to be visible to this class (e.g. public or internal in the Log class)
            yield return new SPDiagnosticsArea(LOG_AREA, 0, 0, false,
                new List<SPDiagnosticsCategory>()
                {
                    new SPDiagnosticsCategory(LogCategories.SteveCategory.ToString(),
                        TraceSeverity.Medium, EventSeverity.Information, 1,
                        Log.LOG_ID)
                });
        }
    }
 
Here are the important changes from the first version:

1.       Added the Guid attribute to the class:

[Guid("833B687D-0DD1-4F17-BF6A-B64FBC1AC6A8")]
 
I added a Guid attribute to the class because SharePoint requires one in order to uniquely identify the class in the configuration database.

2.       Changed the default constructor:

        public SteveDiagnosticService()
        {
        } 
 
Now it’s just a standard empty constructor.  Before, it called the other constructor overload, which takes a name for the service and an SPFarm.  It’s just less code, which is a good thing when you can get away with it.

3.       Deleted the HasAdditionalUpdateAccess override.  Again, it turned out I wasn’t really using it, so continuing with the “less is more” theme I removed it.

 

4.       Shortened the ProvideAreas method significantly; now it matches the pattern used in the SDK:

 

yield return new SPDiagnosticsArea(LOG_AREA, 0, 0, false,
    new List<SPDiagnosticsCategory>()
    {
        new SPDiagnosticsCategory(LogCategories.SteveCategory.ToString(),
            TraceSeverity.Medium, EventSeverity.Information, 1,
            Log.LOG_ID)
    });

So that addressed problem #1 above – my code is now a little cleaner and meaner.  The other problems – lack of tracing levels, throwing an exception when logging during a POST, and integration with Central Admin – were all essentially fixed by taking the code a bit further and registering it with the farm.  The example in the SDK is currently kind of weak in this area but I was able to get it to do what we need.  To simplify matters, I created a new feature for the assembly that contains the custom ULS class I described above.  I added a feature receiver to it, and in it I register the custom ULS logging class during the FeatureInstalled event and unregister it during the FeatureUninstalling event.  This was a good solution in my case, because I made the feature Farm scoped so it activates automatically when the solution is added and deployed.  As it turns out, the code to do this register and unregister is almost criminally simple; here it is:
public override void FeatureInstalled(SPFeatureReceiverProperties properties)
{
    try
    {
        SteveDiagnosticService.Local.Update();
    }
    catch (Exception ex)
    {
        throw new Exception("Error installing feature: " + ex.Message);
    }
}


public override void FeatureUninstalling(SPFeatureReceiverProperties properties)
{
    try
    {
        SteveDiagnosticService.Local.Delete();
    }
    catch (Exception ex)
    {
        throw new Exception("Error uninstalling feature: " + ex.Message);
    }
}
 
The other thing to point out here is that I was able to refer to SteveDiagnosticService because a) I added a reference to the assembly containing my custom logging class in the project where I packaged up the feature, and b) I added a using statement for that assembly's namespace to my feature receiver class.
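
For what it's worth, that just means the top of the feature receiver file ends up looking something like this (SteveLogging is a made-up placeholder for whatever namespace actually contains SteveDiagnosticService):

using System;
using Microsoft.SharePoint;
using SteveLogging;   //hypothetical namespace that contains SteveDiagnosticService
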
By registering my custom ULS logging class I get a bunch of benefits:

·         I no longer get errors about updating an SPWeb when I write to the ULS log during a POST

·         My custom logging Area and Category show up in Central Admin, so I can go into Configure diagnostic logging and change the trace and event level.  For example, when the Area is at the default trace level of Medium, all of my writes to the ULS log at TraceSeverity.Medium are written, but those at TraceSeverity.Verbose are not.  If I want my TraceSeverity.Verbose entries to start showing up in the log, I simply go into Configure diagnostic logging and change the trace level there to Verbose, as the sketch below illustrates.
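
To make that concrete, here's a rough sketch of two calls from code that consumes the class; the event id 11100 is just an arbitrary value for illustration.  With the Area at its default Medium trace level only the first call lands in the log; bump the trace level to Verbose in Central Admin and both do.

//written at Medium - logged at the default trace level
SteveDiagnosticService.Local.LogMessage(11100,
    SteveDiagnosticService.LogCategories.SteveCategory,
    TraceSeverity.Medium, "Something happened worth noting", null);

//written at Verbose - only logged once the trace level is raised to Verbose
SteveDiagnosticService.Local.LogMessage(11100,
    SteveDiagnosticService.LogCategories.SteveCategory,
    TraceSeverity.Verbose, "Extra detail for troubleshooting", null);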

Overall the solution is simpler and more functional.  I think this is one of those win-win things I keep hearing about.
P.S.  I want to record my protest vote here for LaMarcus Aldridge of the Portland Trail Blazers.  His failure to get added to the Western Conference NBA All-Star team is tragically stupid.  Okay, enough personal commentary, I know y’all are looking for truly Share-n-Dipity-icious info when you come here.  Hope this is useful.

Tips for ULS Logging

UPDATE 2-4-2011:  I recommend taking a look at the updated example for this at http://blogs.technet.com/b/speschka/archive/2011/02/04/tips-for-uls-logging-part-2.aspx.  The new example is better and more functional.

When I added some ULS logging to a recent project I noticed one annoying little side effect.  In the ULS logs the Area was showing up as “Unknown”.  I realize that some aspects of this have been covered other places but I just wanted to pull together a quick post that describes what I found to be the most expedient way to deal with it (certainly much less painful than some of the other solutions I’ve seen described).  One thing worth pointing out is that if you don’t want to do this at all, I believe the Logging Framework that the best practices team puts up on CodePlex may do this itself.  But I usually write my own code as much as possible so I’ll briefly describe the solution here.

The first thing to note is that most of the SDK documentation operates on the notion of creating a new SPDiagnosticsCategory.  The constructor for a new instance of the class allows you to provide the Category name, and when you do that you will definitely see any custom Category name you want to use show up in the ULS log in the Category column.  In most cases if you’re doing your own custom logging you also want to have a custom Area to go hand in hand with your custom Category(s).  Unfortunately, the SDK puts you through a considerably more complicated exercise to make that happen, because you cannot use a simple constructor to create a new Area and use it – you have to write your own class that derives from SPDiagnosticsServiceBase.

The way I’ve chosen to implement the whole thing is to create one CS file that contains both my logging class and my diagnostic service base class.  I’ll start first with the diagnostic base class – below I’ve pasted in the entire class, and then I’ll walk through the things that are worth noting:

    public class SteveDiagnosticService : SPDiagnosticsServiceBase
    {
        private const string LOG_AREA = "Steve Area";

        public enum LogCategories
        {
            SteveCategory
        }

        public SteveDiagnosticService()
            : base("Steve Diagnostics Service", SPFarm.Local)
        {
        }

        public SteveDiagnosticService(string name, SPFarm parent)
            : base(name, parent)
        {
        }

        protected override bool HasAdditionalUpdateAccess()
        {
            return true;
        }

        public static SteveDiagnosticService Local
        {
            get
            {
                return SPDiagnosticsServiceBase.GetLocal<SteveDiagnosticService>();
            }
        }

        public void LogMessage(ushort id, LogCategories LogCategory, TraceSeverity traceSeverity,
            string message, params object[] data)
        {
            if (traceSeverity != TraceSeverity.None)
            {
                SPDiagnosticsCategory category
                 = Local.Areas[LOG_AREA].Categories[LogCategory.ToString()];
                Local.WriteTrace(id, category, traceSeverity, message, data);
            }
        }

        protected override IEnumerable<SPDiagnosticsArea> ProvideAreas()
        {
            List<SPDiagnosticsCategory> categories = new List<SPDiagnosticsCategory>();

            categories.Add(new SPDiagnosticsCategory(
             LogCategories.SteveCategory.ToString(),
             TraceSeverity.Medium, EventSeverity.Information));

            SPDiagnosticsArea area = new SPDiagnosticsArea(
             LOG_AREA, 0, 0, false, categories);

            List<SPDiagnosticsArea> areas = new List<SPDiagnosticsArea>();

            areas.Add(area);

            return areas;
        }
    }

 

Let’s look at the interesting parts now:

private const string LOG_AREA = "Steve Area";

Here’s where I define the name of the Area that I’m going to write to the ULS log.

public enum LogCategories
{
    SteveCategory
}

This is the list of Categories that I’m going to add to my custom Area.  In this case I only have one Category I’m going to use with this Area, but if you wanted several of them you would just expand the contents of this enum.

public void LogMessage(ushort id, LogCategories LogCategory, TraceSeverity traceSeverity,
    string message, params object[] data)
{
    if (traceSeverity != TraceSeverity.None)
    {
        SPDiagnosticsCategory category
         = Local.Areas[LOG_AREA].Categories[LogCategory.ToString()];

        Local.WriteTrace(id, category, traceSeverity, message, data);
    }
}

 

This is one of the two important methods we implement – it’s where we actually write to the ULS log.  The first line is where I get the SPDiagnosticsCategory, and I reference the Area it belongs to when I do that.  In the second line I’m just calling the base class method on the local SPDiagnosticsServiceBase class to write to the ULS log, and as part of that I pass in my Category, which is associated with my Area.

protected override IEnumerable<SPDiagnosticsArea> ProvideAreas()
{
    List<SPDiagnosticsCategory> theCategories = new List<SPDiagnosticsCategory>();

    theCategories.Add(new SPDiagnosticsCategory(
        LogCategories.SteveCategory.ToString(),
        TraceSeverity.Medium, EventSeverity.Information));

    SPDiagnosticsArea theArea = new SPDiagnosticsArea(
        LOG_AREA, 0, 0, false, theCategories);

    List<SPDiagnosticsArea> theAreas = new List<SPDiagnosticsArea>();

    theAreas.Add(theArea);

    return theAreas;
}

 

In this override I return to SharePoint all of my custom Areas.  In this case I’m only going to use my one Area, so that’s all I send back.  Also, as I mentioned above, I’m only using one custom Category with my Area.  If I wanted to use multiple custom Categories then I would 1) add them to the enum I described earlier and 2) add each one to the theCategories List instance I’ve defined in this method, roughly as sketched below.
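
For example, a second Category would look roughly like this; SteveOtherCategory is just a made-up name for illustration:

public enum LogCategories
{
    SteveCategory,
    SteveOtherCategory   //hypothetical second Category
}

//...and in ProvideAreas, add one SPDiagnosticsCategory per enum value:
theCategories.Add(new SPDiagnosticsCategory(
    LogCategories.SteveCategory.ToString(),
    TraceSeverity.Medium, EventSeverity.Information));

theCategories.Add(new SPDiagnosticsCategory(
    LogCategories.SteveOtherCategory.ToString(),
    TraceSeverity.Medium, EventSeverity.Information));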

That’s really the main magic behind adding a custom Area and getting it to show up in the appropriate column in the ULS logs.  The logging class implementation is pretty straightforward too; I’ll paste in the main part of it here and then explain a little further:

    public class Log
    {
        private const int LOG_ID = 11100;

        public static void WriteLog(string Message, TraceSeverity TraceLogSeverity)
        {
            try
            {
                //in this simple example, I'm always using the same category
                //you could of course pass that in as a method parameter too
                //and use it when you call SteveDiagnosticService
                SteveDiagnosticService.Local.LogMessage(LOG_ID,
                    SteveDiagnosticService.LogCategories.SteveCategory,
                    TraceLogSeverity, Message, null);
            }
            catch (Exception writeEx)
            {
                //ignore
                Debug.WriteLine(writeEx.Message);
            }
        }
    }

 

This code is then pretty easy to call from the methods that reference it.  Since WriteLog is a static method, my code is just Log.WriteLog("This is my error message", TraceSeverity.Medium); for example.  In this example, the ULS log gets an entry in the Area "Steve Area" and the Category "SteveCategory" with the message "This is my error message".
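
For example, calling it from a web part override might look roughly like this (the override and the message text are just for illustration):

protected override void OnLoad(EventArgs e)
{
    base.OnLoad(e);

    //ends up in the ULS log under the "Steve Area" Area and "SteveCategory" Category
    Log.WriteLog("This is my error message", TraceSeverity.Medium);
}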

How Your Client Application can know when your SharePoint 2010 Farm is Busy

SharePoint 2010 has a number of new features built in to keep your farm up and running as healthily and happily as possible.  One of the features it uses to do that is called Http Throttling.  It allows you to set thresholds for different performance counters.  You decide which counters to use and what thresholds are “healthy” for your environment.  I covered this concept in more detail in this post:  http://blogs.technet.com/speschka/archive/2009/10/27/adding-throttling-counters-in-sharepoint-2010.aspx.

One of the things you may not know is that client applications get information about the health of the farm based on those throttling rules every time they make a request to the farm.  When SharePoint responds, it adds a special response header called X-SharePointHealthScore.  If you’re writing a client application that works with data in a SharePoint server, you should consider regularly reading the value of this header to determine whether it’s okay to continue at your current pace, or as the score gets higher, consider dialing back the number of requests you are making.
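
Here's a minimal sketch of what reading that header could look like from a client application.  It assumes a plain HttpWebRequest (System.Net) against a placeholder URL; the score ranges from 0 (healthiest) to 10, and the threshold used to back off here is just an example:

HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://yourfarm/sites/test/");
request.UseDefaultCredentials = true;

using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
    string rawScore = response.Headers["X-SharePointHealthScore"];

    int healthScore;
    if (!string.IsNullOrEmpty(rawScore) && int.TryParse(rawScore, out healthScore))
    {
        //0 is healthiest; the higher the score, the more stressed the farm is
        if (healthScore > 7)
        {
            //dial back - add a delay between requests, batch them up, etc.
        }
    }
}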

To demonstrate this concept I’ve created a web part for admins that I’ll be releasing later this year (I haven’t decided exactly how I’m going to distribute it).  It’s designed to let you monitor one to many farms based on the response header I’ve described above.  It’s a nice quick view of the health of your farm.  Here’s an example of what the web part looks like – in this case it’s retrieving health data for two different farms I have:

The web part is built using a combination of a web part, Silverlight, jQuery, and an ASPX page.  When it receives an error, like the “server is busy” error you can get with Http Throttling, you can click the plus sign at the bottom of the web part to see the error details.  Here’s an example of what that looks like:

Take a look at this when you’re writing your client applications, and understand how best to respond when your farm is getting busy.

Creating Health Monitor Rules for SharePoint 2010

Hey all, I saw an interesting discussion today about creating new health rules in SharePoint 2010 and thought I would share.  All the credit for this one goes to Chris C., who was good enough to share this information.  The discussion started because of some confusion about how to add a new health rule.  In the public beta you can go into Central Admin, open the rule definitions and add a new one.  Unfortunately you will find that doesn’t really work.  By RTM the new rule definition button should be disabled, because that isn’t the right way to get a rule into the system.  Instead, here, in Chris’ words, is how you should approach this.

A health rule is basically a class (derived from either SPHealthAnalysisRule or SPRepairableHealthAnalysisRule) which implements a Check() method, some string properties that describe the problem to the administrator, and optionally a Repair() method (if an SPRepairableHealthAnalysisRule).
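
Just to make that concrete, here is a rough sketch of what such a rule class can look like.  The class name, strings, schedule and check logic are all invented for illustration, so double check the member names against the SDK before relying on this:

using Microsoft.SharePoint.Administration;
using Microsoft.SharePoint.Administration.Health;

public class MyExampleHealthRule : SPHealthAnalysisRule
{
    //strings shown to the administrator in the Problems and Solutions list
    public override string Summary
    {
        get { return "Example rule: something in the farm needs attention."; }
    }

    public override string Explanation
    {
        get { return "A longer description of the problem for the administrator."; }
    }

    public override string Remedy
    {
        get { return "What the administrator should do to fix it."; }
    }

    public override SPHealthCategory Category
    {
        get { return SPHealthCategory.Configuration; }
    }

    public override SPHealthCheckErrorLevel ErrorLevel
    {
        get { return SPHealthCheckErrorLevel.Error2; }
    }

    //controls the schedule the health analyzer timer job uses to run this rule
    public override SPHealthAnalysisRuleAutomaticExecutionParameters AutomaticExecutionParameters
    {
        get
        {
            SPHealthAnalysisRuleAutomaticExecutionParameters p =
                new SPHealthAnalysisRuleAutomaticExecutionParameters();
            p.Schedule = SPHealthCheckSchedule.Hourly;
            p.Scope = SPHealthCheckScope.All;
            p.ServiceType = typeof(SPTimerService);
            return p;
        }
    }

    //the actual check; replace the placeholder condition with a real test
    public override SPHealthCheckStatus Check()
    {
        bool problemFound = false;
        return problemFound ? SPHealthCheckStatus.Failed : SPHealthCheckStatus.Passed;
    }
}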

For adding health rules to your farm:

  • Create your class and compile it into a dll with a strong name
  • GAC the dll on each of the machines in your farm
  • Run the following in PowerShell (via the object model):
    $loc = "C:\mydlllocation.dll"
    $asm = [System.Reflection.Assembly]::LoadFrom($loc)   # the dll should also be in the GAC
    [Microsoft.SharePoint.Administration.Health.SPHealthAnalyzer]::RegisterRules($asm)

Once these steps are complete, you should see your rule(s) in the Health Rules List (i.e. http://centraladmin:8080/Lists/HealthRules/AllItems.aspx).  This is the full list of rules.  Each of these runs on a different schedule (defined in the class) inside of a timer job.  Once a rule is run, a result list item is stored in the health reports (or “Problems and Solutions”) list.  The reason you are not seeing any generated reports is most likely because your OOB rules have not been run yet.

Thanks again Chris for sharing this valuable information.

Playing with Large List Throttling

If you’ve been following my blog entries then by now you’ve seen quite a bit of information on throttling for large lists in SharePoint 2010.  One of the things you may find is that you have scenarios where you want to be able to toggle the enforcement of the throttling on a list by list basis.  As I explained in a previous post, an SPList object does have an EnableThrottling property.  With that useful bit of information in hand, I wrote a little web part that lets you manage list throttling in different site collections pretty easily, as shown here:

 

As you can see, it’s a pretty simple web part, with a fairly easy to use interface.  You can toggle the list throttling on or off for any individual list, or turn it off or on for all lists in a site collection.  I’ve attached the WSP for this web part to this posting.  Feel free to use it and abuse it; there are really only a couple of important things to remember at this point.  1) This web part MUST BE RUN IN THE CENTRAL ADMIN SITE!  This restriction *might* be lifted by the time we RTM, but we’ll just have to wait and see.  2)  This web part will only work with Windows (classic) auth sites for now.  Hope you find it useful.

You can download the attachment here:

Using the Developer Dashboard in SharePoint 2010

The developer dashboard is a new feature in SharePoint 2010 that is designed to provide additional performance and tracing information that can be used to debug and troubleshoot issues with page rendering time.  The dashboard is turned off by default, but can be enabled via the object model or stsadm (and PowerShell too, I just haven’t put together the script for it yet).  When the dashboard is turned on you will find information about the controls, queries and execution time that occur as part of the page rendering process; this information appears at the bottom of the page.  Here’s an example of what the “short” version of the output looks like (NOTE: this screen shot is from a build prior to the public beta so your bits will look a little different):

 

As you can see, it provides information from the perspective of the event pipeline, the web server and the database.  On the left side you can see the different events that fired in the page processing pipeline, and within that, how long individual web parts took within those events.  On the top right hand side you see information about the page processing as a whole, including the overall execution time, the amount of memory used in processing the page request, and the correlation ID, which can be of great value when trying to link the page render to entries in the ULS log.  Underneath the server information you will find a list of the different database calls that were made through the object model by various components in the page itself as well as the controls it hosts – all useful information.

You may also notice the database calls are actually hyperlinks.  This is another pretty cool feature: when you click on one, it shows the call stack that triggered that particular database call, the SQL that was executed and the IO stats:

 

Enabling the developer dashboard is fairly easy.  If you’re doing it via the object model, the code looks something like this; to turn it on:

SPWebService cs = SPWebService.ContentService;
cs.DeveloperDashboardSettings.DisplayLevel = SPDeveloperDashboardLevel.On;
cs.DeveloperDashboardSettings.Update();

 

NOTE:  This code will not work in a web part if the web part is hosted in any site except the central admin site.  We specifically check for and block that scenario because the developer dashboard is a farm-wide setting.  If you code it up in a web part and try to execute it in a non-central admin site, it will throw a security exception.

To turn it off you set the DisplayLevel to SPDeveloperDashboardLevel.Off; for on demand use of the dashboard you can set the value to SPDeveloperDashboardLevel.OnDemand.   When you set it to OnDemand, it adds a small icon to the upper right hand corner of the page; you click the icon to toggle the dashboard on and off.  The icon looks like this:

 

You can also turn it off and on with stsadm; you just need to make sure you are running the command as a farm administrator:

stsadm -o setproperty -pn developer-dashboard -pv ondemand (or "on" or "off")

 

The on demand setting is really the optimal setting in my opinion.  Here’s what it gives you:  once it is set to on demand, site collection admins can turn it on or off.  When they do, it only turns it on or off for that particular site collection.  Equally good, only the person that turned it on sees the output – your everyday users will not see the output from the developer dashboard, so this becomes a really valuable troubleshooting tool.  Even more interesting, if you have multiple site collection admins and one of them toggles it on, the output is displayed only for that person, not for every site collection admin.   Want more flexibility?  Well, you can even change the permissions that are required to see the dashboard output.  The DeveloperDashboardSettings class has a property called RequiredPermissions.  You can assign a combination of base permissions (like EditListItems, CreateGroups, ManageAlerts, or whatever you want) to it; only those people that have those permissions will be able to see the output.  So you have a great deal of flexibility and granularity in deciding when to use the dashboard output and who will see it.
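
For example, limiting the output to people who can manage alerts and create groups might look something like this; I believe RequiredPermissions takes a combination of SPBasePermissions flags, and the particular flags here are just an illustration:

SPWebService cs = SPWebService.ContentService;

//only users holding these base permissions will see the dashboard output
cs.DeveloperDashboardSettings.RequiredPermissions =
    SPBasePermissions.ManageAlerts | SPBasePermissions.CreateGroups;
cs.DeveloperDashboardSettings.Update();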

 

Okay, so this all seems good – all my web parts and code I run within the page will just show up and we’ll have this great troubleshooting info, right?  Well, not exactly, unfortunately.  Take a look at the output from the dashboard again – you’ll notice a finite set of events that are reported.  Those are tied to events in the base web part class, so they cannot be expanded for any random click event, for example.  Any code you have in your override for OnInit or Render will automatically be captured in this pipeline, but code in other places will not.  All is not lost however!  We’ve also introduced a new class to the object model called SPMonitoredScope.  Among other things, it keeps track of the same kind of usage and tracing information the developer dashboard displays.

 

In order to get the rest of your code included in the developer dashboard output, you need to wrap it up in a new monitored scope, with something like this:

 

using (SPMonitoredScope GetListsBtnScope = new
     SPMonitoredScope("GetListsBtn_Click"))
{
    //your code goes here
}

The name I used here – “GetListsBtn_Click” – is what will appear in the developer dashboard output.  Here’s an example:

 

 

 

This should be one of your first best practices for developing code for SharePoint 2010 – use SPMonitoredScope!   This can only help you understand and manage the performance of your components as you deploy from development to production.

 

There’s a ton of great out of the box value here, but there is also one piece missing that is worth mentioning.  Even if you use SPMonitoredScope, if your code is a sandbox component (i.e. a User Solution), the output from it will not be captured in Developer Dashboard.  The reason it doesn’t get captured is that sandbox components execute in a completely different process from the page request – that’s why it’s sandboxed.  As a result though, we can’t pipe the tracing information back into the page processing event pipeline.  Sorry about that one folks, but we still have a lot of capabilities here that we should be taking advantage of.

 

I hope after reading this you will see the value in the developer dashboard, understand how to turn it on and off, and know what you have to do to get all of your code to be managed through this pipeline.

 

Working with Large Lists in SharePoint 2010 – List Throttling

List throttling is one of the new options in SharePoint 2010 that enables you to set limits on how severely users can put the beat down on your servers.  In a nutshell, what it does is allow you to set a limit on how many rows of data can be retrieved for a list or library at any one time.  The most basic example of this would be if you had a list with thousands of items, and someone created a view that would return all of the items in the list in a single page.  List throttling ensures that such a request would not be allowed to execute.  The hit on the server is alleviated, and the user gets a nice little message that says sorry, we can’t retrieve all of the data you requested because it exceeds the throttle limit for this list.

The kinds of operations that can trigger hitting this limit though aren’t limited to viewing data – that’s just the easiest example to demonstrate.  There are other actions that can impact a large number of rows whose execution would fall into the list throttle limits.  For example, suppose you had a list with 6000 items and a throttle limit of 5000.  You create a view that only displays 50 items at a time, but it does a sort on a non-indexed column.  Behind the scenes, this means that we need to sort all 6000 items and then fetch the first 50.  If you are going to delete a web with large flat lists you potentially have the same problem.  We need to select all of the items for all of the lists as part of the site deletion, so we could again hit the throttling limit.  These are just a few examples but you can start to imagine some of the others.

So how does this work and how do we manage it?  It all starts in Central Admin.  List throttling is an attribute that you will generally manage at the web application level.  Go into Central Admin, click on Application Management, then click on Manage Web Applications.  Click a single web application to select it, then in the ribbon click on the General Settings drop down and select the Resource Throttling menu item.  It displays a dialog with several options; I’ll only cover the ones related to list item throttling in this blog:

·        List View Threshold – this is the maximum number of items that can be retrieved in one request.  The default value is 5,000.  Important to note as well: the smallest value you can set here is 2,000.

·        Object Model Override – this option needs to be enabled in order to allow super users to retrieve items through the object model, up to the amount defined in the List View Threshold for Auditors and Administrators described below.

·        List View Threshold for Auditors and Administrators – this is a special limit for “super users”.  It is important to understand that this DOES NOT allow these super users to see more items in a list view.  This property is used in conjunction with the Object Model Override property described above.  That means that if Object Model Override is set to Yes, these super users can retrieve up to the number of items set in this property, but only via the object model.  The way you become a “super user” is a farm admin creates a web application policy for you that grants you a permission level that includes the Site Collection Administrator and/or Site Collection Auditor rights.  By default both the Full Read and Full Control permission levels include these rights, and you can create your own custom policy permission levels that do as well.  Creating this policy is done the same way as it was in SharePoint 2007 (there’s a short sketch of creating one in code after this list).

·        List View Lookup Threshold – again, this has nothing to do with the maximum number of rows returned from a list, but it’s right in the middle of these settings so I couldn’t leave it out.  It limits the number of lookup, person/group, and workflow status columns that can be included in a single query.

·        Daily Time Window for Large Queries – this option allows you to create a block of time during the day, typically when usage is low, during which the list throttling limits are not enforced.  The one thing to remember here is that if you execute a query during that time period, it will run until it completes.  So if it’s still running when the daily window closes, the query will continue to execute until all results have been returned.
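
As an aside, here's a rough sketch of granting someone a Full Read web application policy through the object model; the URL and account are placeholders, and the same thing can be done through the Central Admin UI:

SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://yourwebapp"));

//add a policy for the user and bind it to the built-in Full Read policy role
SPPolicy policy = webApp.Policies.Add("DOMAIN\\someuser", "Some User");
policy.PolicyRoleBindings.Add(
    webApp.PolicyRoles.GetSpecialRole(SPPolicyRoleType.FullRead));

webApp.Update();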

There are a couple of additional exceptions to the information above:

1.       If you are a box administrator on the WFE where the data is being requested, and you have at least Read rights to the list data, then you will see ALL the rows.  That means if you have 10,000 rows in a list and you execute a query or view that has no filters, you will get back all 10,000 rows.

2.       In the object model a list (and library) is represented by the SPList class.  It has a new property in SharePoint 2010 called EnableThrottling.  On a list by list basis you can set this property to false, as shown in the sketch below.  When you do that, throttling will not be enforced for views or object model queries.  So again, if your list has 10,000 items and you execute a query or view that has no filters, you will get back all 10,000 rows.
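
Here's a quick sketch of flipping that property for a single list; the URL and list name are placeholders, and you need to run it as an account with sufficient rights to change the setting (typically a farm administrator):

using (SPSite site = new SPSite("http://yourserver"))
{
    using (SPWeb web = site.OpenWeb())
    {
        SPList list = web.Lists["My Large List"];

        //turn off list view threshold enforcement for just this list
        list.EnableThrottling = false;
        list.Update();
    }
}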

To use the object model to retrieve up to the number of items specified in the List View Threshold for Auditors and Administrators property, there is a property you need to set on your query object.  The property is called QueryThrottleMode and it applies to the SPQuery and SPSiteDataQuery classes.  You simply set this property to Override and then use your class instance to query.  Here’s a simplified example:

using (SPSite theSite = new SPSite("http://foo"))
{
    using (SPWeb theWeb = theSite.RootWeb)
    {
        SPList theList = theWeb.Lists["My List Name"];

        SPQuery qry = new SPQuery();
        qry.QueryThrottleMode = SPQueryThrottleOption.Override;

        //set the Query property as needed to retrieve your data

        SPListItemCollection coll = theList.GetItems(qry);

        //do something with the data
    }
}

Now that you know what the properties are about, let’s talk about the ways in which you use them.  Assume the following scenario:

  • # of items:  3000
  • EnableThrottling property:  true
  • Default view:  display items in batches of 3000
  • List View Threshold property:  2500
  • List View Threshold for Auditors and Administrators:  2800
  • Object Model Override:  Yes
  • Method for OM Query:  SPQuery with QueryThrottleMode = Override, query retrieves all items

Here’s how users would be able to access the data in the list:

User Type  | List View                       | Object Model
Reader     | No items shown; over threshold  | No items returned; over threshold
Super User | No items shown; over threshold  | No items returned; over admin and auditor threshold
Box Admin  | 3000 items shown per page       | 3000 items returned

Now let’s change the rules; the only difference from the original scenario is that the List View Threshold for Auditors and Administrators is raised from 2800 to 3500:

  • # of items:  3000
  • EnableThrottling property:  true
  • Default view:  display items in batches of 3000
  • List View Threshold property:  2500
  • List View Threshold for Auditors and Administrators:  3500
  • Object Model Override:  Yes
  • Method for OM Query:  SPQuery with QueryThrottleMode = Override, query retrieves all items

User Type  | List View                       | Object Model
Reader     | No items shown; over threshold  | No items returned; over threshold
Super User | No items shown; over threshold  | 3000 items returned
Box Admin  | 3000 items shown per page       | 3000 items returned

Another scenario; this time EnableThrottling on the list is set to false:

  • # of items:  3000
  • EnableThrottling property: false
  • Default view:  display items in batches of 3000
  • List View Threshold property:  2500
  • List View Threshold for Auditors and Administrators :  3500
  • Object Model Override :  Yes
  • Method for OM Query:  SPQuery with  QueryThrottleMode = Override, query retrieves all items

User Type  | List View                  | Object Model
Reader     | 3000 items shown per page  | 3000 items returned
Super User | 3000 items shown per page  | 3000 items returned
Box Admin  | 3000 items shown per page  | 3000 items returned

Final scenario; EnableThrottling is back on and the default view now displays items in batches of 2500:

  • # of items:  3000
  • EnableThrottling property: true
  • Default view:  display items in batches of 2500
  • List View Threshold property:  2500
  • List View Threshold for Auditors and Administrators :  3500
  • Object Model Override :  Yes
  • Method for OM Query:  SPQuery with  QueryThrottleMode = Override, query retrieves all items

User Type  | List View                  | Object Model
Reader     | 2500 items shown per page  | No items returned; over threshold
Super User | 2500 items shown per page  | 3000 items returned
Box Admin  | 2500 items shown per page  | 3000 items returned

List throttling is a powerful tool but there are a few rules and roles you need to remember when planning your implementation.  Hopefully this blog will help identify and clarify the functionality for you so that you can implement a design that makes sense for your scenario.