Monitoring Large Lists in SharePoint Online with Office365Mon

One of the enduring realities I saw over and over in my many years working at Microsoft with SharePoint is that customers LOVE the lists they can create in SharePoint.  They're super easy to set up, you can quickly do all kinds of sorting, filtering and searching on them, and it requires no technical expertise to get up and running with them.  This led to another enduring reality, which is that many customers started loving lists TOO much.  I saw many, many customers over the years whose lists had just exploded in size.  As these lists grew larger, their performance tended to get worse and worse.  The problem was also compounded by the fact that many developers saw SharePoint lists as a quick and easy place to store data for their applications.  That meant even bigger list sizes, and more people using them more often.

Over the years, we developed lots of documentation and options for improving the performance of your lists.  As customers have moved to SharePoint Online in Office 365 though, we would occasionally hear people ask if it has the same large list limitations as SharePoint on-premises does…and the answer is yes, it does.  Now, as more customers are moving their SharePoint on-premises data to SharePoint Online, we see increasing concern about how the lists they have are going to perform once it's all up in Office 365.  Fortunately, at Office365Mon, we've just released a new feature designed specifically to help you stay on top of this issue.

List Monitoring is a feature that lets you select one or more lists in SharePoint Online for us to monitor.  For each monitored list we'll do a couple of things:  first, we'll issue health probes that render the list's default view to see what the performance is like.  That's typically one of the first places where you'll see performance issues with a large list.  You can configure List Monitoring so that it will send you a notification if it takes longer than "x" seconds to render the default view, where "x" is a number that you decide.

The second thing we’ll do is keep tabs on how large the list is, i.e. how many items it contains.  Again, you can set a threshold for us to look for, and when a monitored list gets bigger than that threshold, we’ll send you a notification to alert you to it.  So, for example, if you’re worried about a large list approaching that magic 5,000 item limit, you can have us notify you when it’s getting close.  Here’s a screen shot of where you configure the monitoring thresholds:

MonLargeLists1

Selecting the lists to be monitored is also quite simple – we provide you with a collection of all of the lists in the SharePoint Online site that we’re monitoring, and you can just check boxes next to the lists you want us to monitor for you.  It can be any of the lists that come out of the box with SharePoint, or any custom list that you’ve created:

MonLargeLists2

Once we’ve started monitoring lists for you, not only will we notify you according to the thresholds you’ve configured, but as you’ve come to expect from Office365Mon, we also have a nice set of reports you can use to see where you stand.  To begin with, you can see the performance of the recent health probes we’ve issued against monitored lists in our Average Response Time report.  It shows the performance of all of the resources that we’re monitoring for you, including monitored lists.  We also have a new report that shows you the average performance each month just for your monitored lists:

MonLargeLists3

In addition to that, we have a report that shows you the size of your monitored lists each day, so you can visualize any growth trends that you need to get in front of:

MonLargeLists4

We also provide a monthly view of the average size of each monitored list, so you have a longer-term view of how rapidly your lists are growing:

MonLargeLists5

Being aware of large lists and their impact on performance is one of the best ways to ensure a good experience for your users.  I've heard many, many times from customers who say "our site is slow".  There are lots of reasons why that might be, but a couple of the most common ones are slow query times and large lists.  At Office365Mon we've provided monitoring for your query execution time for nearly a year now.  With the new List Monitoring feature, you can now also know when you have large list performance problems.  Once you know that, you can start working on a mitigation strategy – splitting the data out into multiple lists, creating customized views of the data, and so on.  There are a lot of different things you can do to improve performance, but if you don't know you have a problem, you'll forever be stuck wondering why your users keep telling you that your "site is slow".  Take advantage of features like these to stay in the know and stay in control of your Office 365 tenant, and keep everyone happy.  Start by visiting us at https://www.office365mon.com and clicking the Configure…List Monitoring menu.

This is yet another feature at Office365Mon that was driven by feedback from our customers.  I hope you'll take a look at it and, as always, let us know how we can make it better, as well as other ways we might help you do Office 365 monitoring even better.

From Sunny Phoenix,

Steve

 

Tips for ULS Logging Part 2

In part 1 of ULS logging tips (http://blogs.technet.com/b/speschka/archive/2011/01/23/tips-for-uls-logging.aspx) I included some code to add a custom ULS logging Area and demonstrated how to use it within your project.  After working with it a bit I noticed a few things:

1. There was some code in there that didn't jibe with the latest SDK – I had implemented some pieces that the SDK implied I shouldn't need, and I was doing some things slightly differently than the SDK example (which, by the way, needs to be updated, and I will work on that separately).

2. Logging would only occur when I set the TraceSeverity to Medium; there was not an effective way to layer in different trace levels.

3. The most super annoying part – when I tried to invoke the custom class that is required for my custom Area, it would fail whenever I tried doing any logging during a POST.  It triggered the all-too-common error about not being able to update an SPWeb during a POST unless you set the AllowUnsafeUpdates property to true.  Setting that on an SPWeb during a log event, just to make my log event work, seemed borderline insane (my apologies to all the insane people out there, by the way).  So I decided there must be a better way.

In this posting then, I’m going to improve upon the previous example and add to it along the way.  Here’s what we’re going to show:

1. An improved class for the custom Area – it actually slims down some in this version.

2. Integration with Configure diagnostic logging in the Monitoring section of Central Admin.  This integration also adds support for configuring tracing levels on the custom Area and Category.

3. Registration information – a very simple way to both register and unregister the custom ULS logging class, so it can be integrated with and removed from Central Admin.

To begin with, let’s look at the new streamlined version of the class.  There are many similarities between this and the version I showed in the first posting, but this is a little slimmer and simpler.  Here’s what it looks like now.
    using System.Collections.Generic;
    using System.Runtime.InteropServices;
    using Microsoft.SharePoint.Administration;

    [Guid("833B687D-0DD1-4F17-BF6A-B64FBC1AC6A8")]
    public class SteveDiagnosticService : SPDiagnosticsServiceBase
    {
        private const string LOG_AREA = "Steve Area";

        public enum LogCategories
        {
            SteveCategory
        }

        public SteveDiagnosticService()
        {
        }

        public SteveDiagnosticService(string name, SPFarm parent)
            : base(name, parent)
        {
        }

        public static SteveDiagnosticService Local
        {
            get
            {
                return SPDiagnosticsServiceBase.GetLocal<SteveDiagnosticService>();
            }
        }

        public void LogMessage(ushort id, LogCategories LogCategory,
            TraceSeverity traceSeverity, string message,
            params object[] data)
        {
            if (traceSeverity != TraceSeverity.None)
            {
                SPDiagnosticsCategory category
                 = Local.Areas[LOG_AREA].Categories[LogCategory.ToString()];
                Local.WriteTrace(id, category, traceSeverity, message, data);
            }
        }

        protected override IEnumerable<SPDiagnosticsArea> ProvideAreas()
        {
            // LOG_ID is defined in the companion Log class shown in part 1;
            // it needs to be internal or public to be referenced here.
            yield return new SPDiagnosticsArea(LOG_AREA, 0, 0, false,
                new List<SPDiagnosticsCategory>()
                {
                    new SPDiagnosticsCategory(LogCategories.SteveCategory.ToString(),
                        TraceSeverity.Medium, EventSeverity.Information, 1,
                        Log.LOG_ID)
                });
        }
    }
 
Here are the important changes from the first version:

1. Added the Guid attribute to the class:

[Guid("833B687D-0DD1-4F17-BF6A-B64FBC1AC6A8")]

I added a Guid attribute to the class because SharePoint requires it in order to uniquely identify the class in the configuration database.

2. Changed the default constructor:

        public SteveDiagnosticService()
        {
        }

Now it's just a standard empty constructor.  Before, I called the other constructor overload that takes a name for the service and an SPFarm.  This is just less code, which is a good thing when you can get away with it.

3. Deleted the HasAdditionalUpdateAccess override.  Again, it turned out I wasn't really using it, so continuing with the "less is more" theme I removed it.

 

4. Shortened the ProvideAreas method significantly; now it matches the same pattern that is used in the SDK:

yield return new SPDiagnosticsArea(LOG_AREA, 0, 0, false,
    new List<SPDiagnosticsCategory>()
    {
        new SPDiagnosticsCategory(LogCategories.SteveCategory.ToString(),
            TraceSeverity.Medium, EventSeverity.Information, 1,
            Log.LOG_ID)
    });
So that addressed problem #1 above – my code is now a little cleaner and meaner.  The other problems – lack of tracing levels, throwing an exception when logging during a POST, and integration with central admin – were all essentially fixed by taking the code a bit further and registering it.  The example in the SDK is currently kind of weak in this area, but I was able to get it to do what we need.  To simplify matters, I created a new feature for the assembly that contains the custom ULS class I described above.  I added a feature receiver to it; in the receiver I register the custom ULS assembly during the FeatureInstalled event, and I unregister it during the FeatureUninstalling event.  This was a good solution in my case, because I made it a Farm-scoped feature, so it auto-activates when the solution is added and deployed.  As it turns out, the code to do this register and unregister is almost criminally simple; here it is:
public override void FeatureInstalled(SPFeatureReceiverProperties properties)
{
    try
    {
        // Registers the custom diagnostics service in the farm's configuration database
        SteveDiagnosticService.Local.Update();
    }
    catch (Exception ex)
    {
        throw new Exception("Error installing feature: " + ex.Message);
    }
}

public override void FeatureUninstalling(SPFeatureReceiverProperties properties)
{
    try
    {
        // Removes the custom diagnostics service registration
        SteveDiagnosticService.Local.Delete();
    }
    catch (Exception ex)
    {
        throw new Exception("Error uninstalling feature: " + ex.Message);
    }
}
 
The other thing to point out here is that I was able to refer to SteveDiagnosticService because I a) added a reference to the assembly containing my custom logging class in the project where I packaged up the feature, and b) added a using statement to my feature receiver class for the namespace of my custom logging class.
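For example, the top of the feature receiver file might look something like this; MyCompany.Diagnostics is a hypothetical namespace, so use whatever namespace your logging class actually lives in:

    using System;
    using Microsoft.SharePoint;
    using MyCompany.Diagnostics;  // hypothetical namespace that contains SteveDiagnosticService
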
By registering my custom ULS logging class I get a bunch of benefits:

· I no longer get any errors about updating an SPWeb when I write to the ULS log during a POST

· My custom logging Area and Category show up in central admin, so I can go into Configure diagnostic logging and change the trace and event level.  For example, when the Area is at the default trace level of Medium, all of my writes to the ULS log that are of TraceSeverity.Medium get written, but those that are TraceSeverity.Verbose do not.  If I want my TraceSeverity.Verbose entries to start showing up in the log, I simply go into Configure diagnostic logging and change the Trace level there to Verbose; see the sketch below.
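To make the trace level behavior concrete, here's a minimal sketch of a couple of calls from calling code; the event IDs and messages are made up for illustration:

    // With the Area's trace level at the default of Medium, only the first
    // call is written to the ULS log. Raise the trace level to Verbose in
    // Configure diagnostic logging and both calls get written.
    SteveDiagnosticService.Local.LogMessage(1001,
        SteveDiagnosticService.LogCategories.SteveCategory,
        TraceSeverity.Medium, "Something noteworthy happened");

    SteveDiagnosticService.Local.LogMessage(1002,
        SteveDiagnosticService.LogCategories.SteveCategory,
        TraceSeverity.Verbose, "Chatty diagnostic detail");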

Overall the solution is simpler and more functional.  I think this is one of those win-win things I keep hearing about.
P.S.  I want to record my protest vote here for LaMarcus Aldridge of the Portland Trail Blazers.  His failure to get added to the Western Conference NBA All Star team is tragically stupid.  Okay, enough personal commentary, I know y'all are looking for truly Share-n-Dipity-icous info when you come here.  Hope this is useful.

Tips for ULS Logging

UPDATE 2-4-2011:  I recommend taking a look at the updated example for this at http://blogs.technet.com/b/speschka/archive/2011/02/04/tips-for-uls-logging-part-2.aspx.  The new example is better and more functional.

When I added some ULS logging to a recent project I noticed one annoying little side effect.  In the ULS logs the Area was showing up as "Unknown".  I realize that some aspects of this have been covered in other places, but I just wanted to pull together a quick post that describes what I found to be the most expedient way to deal with it (certainly much less painful than some of the other solutions I've seen described).  One thing worth pointing out is that if you don't want to do this at all, I believe the Logging Framework that the best practices team puts up on CodePlex may do this itself.  But I usually write my own code as much as possible, so I'll briefly describe the solution here.

The first thing to note is that most of the SDK documentation operates on the notion of creating a new SPDiagnosticsCategory.  The constructor for a new instance of the class allows you to provide the Category name, and when you do that you will definitely see any custom Category name you want to use show up in the ULS log in the Category column.  In most cases if you’re doing your own custom logging you also want to have a custom Area to go hand in hand with your custom Category(s).  Unfortunately, the SDK puts you through a considerably more complicated exercise to make that happen, because you cannot use a simple constructor to create a new Area and use it – you have to write your own class that derives from SPDiagnosticsServiceBase.
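Just for contrast, here's roughly what that simple Category-only approach looks like (a minimal sketch; the category name and message are placeholders):

    SPDiagnosticsCategory myCategory = new SPDiagnosticsCategory(
        "My Custom Category", TraceSeverity.Medium, EventSeverity.Information);

    // This writes fine, but the Area column in the ULS log still shows
    // "Unknown" -- fixing that is what the rest of this post covers.
    SPDiagnosticsService.Local.WriteTrace(0, myCategory,
        TraceSeverity.Medium, "Hello from my custom category");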

The way I’ve chosen to implement the whole thing is to create one CS file that contains both my logging class and my diagnostic service base class.  I’ll start first with the diagnostic base class – below I’ve pasted in the entire class, and then I’ll walk through the things that are worth noting:

    using System.Collections.Generic;
    using Microsoft.SharePoint.Administration;

    public class SteveDiagnosticService : SPDiagnosticsServiceBase
    {
        private const string LOG_AREA = "Steve Area";

        public enum LogCategories
        {
            SteveCategory
        }

        public SteveDiagnosticService()
            : base("Steve Diagnostics Service", SPFarm.Local)
        {
        }

        public SteveDiagnosticService(string name, SPFarm parent)
            : base(name, parent)
        {
        }

        protected override bool HasAdditionalUpdateAccess()
        {
            return true;
        }

        public static SteveDiagnosticService Local
        {
            get
            {
                return SPDiagnosticsServiceBase.GetLocal<SteveDiagnosticService>();
            }
        }

        public void LogMessage(ushort id, LogCategories LogCategory, TraceSeverity traceSeverity,
            string message, params object[] data)
        {
            if (traceSeverity != TraceSeverity.None)
            {
                SPDiagnosticsCategory category
                 = Local.Areas[LOG_AREA].Categories[LogCategory.ToString()];
                Local.WriteTrace(id, category, traceSeverity, message, data);
            }
        }

        protected override IEnumerable<SPDiagnosticsArea> ProvideAreas()
        {
            List<SPDiagnosticsCategory> categories = new List<SPDiagnosticsCategory>();

            categories.Add(new SPDiagnosticsCategory(
             LogCategories.SteveCategory.ToString(),
             TraceSeverity.Medium, EventSeverity.Information));

            SPDiagnosticsArea area = new SPDiagnosticsArea(
             LOG_AREA, 0, 0, false, categories);

            List<SPDiagnosticsArea> areas = new List<SPDiagnosticsArea>();
            areas.Add(area);

            return areas;
        }
    }

Let’s look at the interesting parts now:

private const string LOG_AREA = "Steve Area";

Here's where I define the name of the Area that I'm going to write to in the ULS log.

public enum LogCategories
{
    SteveCategory
}

This is the list of Categories that I'm going to add to my custom Area.  In this case I only have one Category to use with this Area, but if you wanted several of them, you would just expand the contents of this enum.

public void LogMessage(ushort id, LogCategories LogCategory, TraceSeverity traceSeverity,
    string message, params object[] data)
{
    if (traceSeverity != TraceSeverity.None)
    {
        SPDiagnosticsCategory category
         = Local.Areas[LOG_AREA].Categories[LogCategory.ToString()];

        Local.WriteTrace(id, category, traceSeverity, message, data);
    }
}

 

This is one of the two important methods we implement – it's where we actually write to the ULS log.  In the first line I get the SPDiagnosticsCategory, and I make reference to the Area it belongs to when I do that.  In the second line I'm just calling the base class method on the local SPDiagnosticsServiceBase instance to write to the ULS log, and as part of that I pass in my Category, which is associated with my Area.

protected override IEnumerable<SPDiagnosticsArea> ProvideAreas()
{
    List<SPDiagnosticsCategory> theCategories = new List<SPDiagnosticsCategory>();

    theCategories.Add(new SPDiagnosticsCategory(
     LogCategories.SteveCategory.ToString(),
     TraceSeverity.Medium, EventSeverity.Information));

    SPDiagnosticsArea theArea = new SPDiagnosticsArea(
     LOG_AREA, 0, 0, false, theCategories);

    List<SPDiagnosticsArea> theAreas = new List<SPDiagnosticsArea>();
    theAreas.Add(theArea);

    return theAreas;
}

 

In this override I return to SharePoint all of my custom Areas.  In this case I'm only going to use my one Area, so that's all I send back.  Also, as I mentioned above, I'm only using one custom Category with my Area.  If I wanted to use multiple custom Categories, I would 1) add them to the enum I described earlier and 2) add each one to the theCategories list instance defined in this method.
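For example, a two-Category version of those pieces might look like this (the second Category name is made up for illustration):

    public enum LogCategories
    {
        SteveCategory,
        AnotherCategory  // hypothetical second Category
    }

    // ...and inside ProvideAreas():
    theCategories.Add(new SPDiagnosticsCategory(
        LogCategories.SteveCategory.ToString(),
        TraceSeverity.Medium, EventSeverity.Information));
    theCategories.Add(new SPDiagnosticsCategory(
        LogCategories.AnotherCategory.ToString(),
        TraceSeverity.Medium, EventSeverity.Information));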

That's really the main magic behind adding a custom Area and getting it to show up in the appropriate column in the ULS logs.  The logging class implementation is pretty straightforward too; I'll paste in the main part of it here and then explain a little further:

    using System;
    using System.Diagnostics;
    using Microsoft.SharePoint.Administration;

    public class Log
    {
        private const int LOG_ID = 11100;

        public static void WriteLog(string Message, TraceSeverity TraceLogSeverity)
        {
            try
            {
                //in this simple example, I'm always using the same category
                //you could of course pass that in as a method parameter too
                //and use it when you call SteveDiagnosticService
                SteveDiagnosticService.Local.LogMessage(LOG_ID,
                    SteveDiagnosticService.LogCategories.SteveCategory,
                    TraceLogSeverity, Message, null);
            }
            catch (Exception writeEx)
            {
                //ignore
                Debug.WriteLine(writeEx.Message);
            }
        }
    }

 

This code is then pretty easy to call from my methods that reference it.  Since WriteLog is a static method, my code is just Log.WriteLog("This is my error message", TraceSeverity.Medium); for example.  In this example, the ULS log then gets an entry in the Area "Steve Area" and the Category "SteveCategory" with the message "This is my error message".

How Your Client Application Can Know When Your SharePoint 2010 Farm Is Busy

SharePoint 2010 has a number of new features built in to keep your farm up and running as happily and healthily as possible.  One of the features it uses to do that is called Http Throttling.  It allows you to set thresholds for different performance counters.  You decide which counters to use and what thresholds are "healthy" for your environment.  I covered this concept in more detail in this post:  http://blogs.technet.com/speschka/archive/2009/10/27/adding-throttling-counters-in-sharepoint-2010.aspx.

One of the things you may not know is that client applications get information about the health of the farm, based on those throttling rules, every time they make a request to it.  When SharePoint responds, it adds a special response header called X-SharePointHealthScore.  If you're writing a client application that works with data in a SharePoint server, you should consider regularly reading the value of this header to determine whether it's okay to continue at your current pace or, as the score gets higher, to dial back the number of requests you are making.
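Here's a minimal sketch of what checking the header might look like from a .NET client; the site URL is a placeholder, and you'd wire the result into whatever back-off logic makes sense for your application:

    using System;
    using System.Net;

    class HealthScoreCheck
    {
        static void Main()
        {
            // Placeholder URL -- point this at your own SharePoint site.
            HttpWebRequest request =
                (HttpWebRequest)WebRequest.Create("http://sharepoint/sites/team");
            request.Credentials = CredentialCache.DefaultCredentials;

            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                // 0 means healthy; as the score climbs toward 10, dial back your requests.
                string score = response.Headers["X-SharePointHealthScore"];
                Console.WriteLine("Farm health score: " + score);
            }
        }
    }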

To demonstrate this concept I've created a web part for admins that I'll be releasing later this year (I haven't decided exactly how I'm going to distribute it).  It's designed to let you monitor one or more farms based on the response header I've described above.  It's a nice little quick view of the health of your farm.  Here's an example of what the web part looks like – in this case it's retrieving health data for two different farms I have:

The web part is built using a combination of a web part, Silverlight, jQuery, and an ASPX page.  When it receives an error, like the server-too-busy error you can get with Http Throttling, you can click the plus sign at the bottom of the web part to see the error details.  Here's an example of what that looks like:

Take a look at this when you’re writing your client applications, and understand how best to respond when your farm is getting busy.

Creating Health Monitor Rules for SharePoint 2010

Hey all, I saw an interesting discussion today about creating new health rules in SharePoint 2010 and thought I would share.  All the credit for this one goes to Chris C., who was good enough to share this information.  The discussion started because of some confusion about how to add a new health rule.  In the public beta you can go into Central Admin, hit up the rule definitions and add a new one.  Unfortunately, you will find that doesn't really work.  By RTM the new rule definition button should be disabled, because that isn't the right way to get a rule into the system.  Instead, here, in Chris's words, is how you should approach it.

A health rule is basically a class (derived from either SPHealthAnalysisRule or SPRepairableHealthAnalysisRule) which implements a Check() method, some string properties that describe the problem to the administrator, and optionally a Repair() method (if an SPRepairableHealthAnalysisRule).
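To make that concrete, here's a bare-bones sketch of what such a class might look like; the class name and rule logic are made up, and a real rule would do an actual check:

    using Microsoft.SharePoint.Administration.Health;

    public class MyExampleHealthRule : SPHealthAnalysisRule
    {
        public override string Summary
        {
            get { return "A short description of the problem."; }
        }

        public override string Explanation
        {
            get { return "More detail about what was detected."; }
        }

        public override string Remedy
        {
            get { return "What the administrator should do about it."; }
        }

        public override SPHealthCategory Category
        {
            get { return SPHealthCategory.Performance; }
        }

        public override SPHealthCheckResult Check()
        {
            // Your detection logic goes here; return Failed to generate a report.
            return SPHealthCheckResult.Passed;
        }
    }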

For adding health rules to your farm:

  • Create your class and compile it into a dll with a strong name
  • GAC the dll on each of the machines in your farm
  • Run the following in OM/PowerShell:
    $loc = "C:\mydlllocation.dll"
    $asm = [System.Reflection.Assembly]::LoadFrom($loc)   # $asm should be GAC'd
    [Microsoft.SharePoint.Administration.Health.SPHealthAnalyzer]::RegisterRules($asm)

Once these steps are complete, you should see your rule(s) in the Health Rules List (i.e. http://centraladmin:8080/Lists/HealthRules/AllItems.aspx).  This is the full list of rules.  Each of these runs on a different schedule (defined in the class) inside of a timer job.  Once a rule is run, a result list item is stored in the health reports (or "Problems and Solutions") list.  The reason you are not seeing any generated reports is most likely because your OOB rules have not been run yet.

Thanks again Chris for sharing this valuable information.

Playing with Large List Throttling

If you've followed my blog entries, then by now you've seen quite a bit of information on throttling for large lists in SharePoint 2010.  One of the things you may find is that you have scenarios where you want to be able to toggle the enforcement of throttling on a list-by-list basis.  As I explained in a previous post, an SPList object does have an EnableThrottling property.  With that useful bit of information in hand, I wrote a little web part that allows you to manage list throttling in different site collections pretty easily.  It's implemented as a web part, as shown here:

 

As you can see, it's a pretty simple web part with a fairly easy to use interface.  You can toggle the list throttling on or off for any individual list, or turn it off or on for all lists in a site collection.  I've attached the WSP for this web part to this posting.  Feel free to use it and abuse it; there are really only a couple of important things to remember at this point.  1) This web part MUST BE RUN IN THE CENTRAL ADMIN SITE!  This restriction *might* be lifted by the time we RTM, but we'll just have to wait and see.  2) This web part will only work with Windows (classic) auth sites for now.  Hope you find it useful.
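If you want to script the same thing rather than use the web part, the property itself is a one-liner; here's a quick sketch with a placeholder URL and list name:

    using (SPSite site = new SPSite("http://sharepoint/sites/team"))  // placeholder URL
    using (SPWeb web = site.OpenWeb())
    {
        SPList list = web.Lists["Big List"];  // placeholder list name
        list.EnableThrottling = false;        // exempt this list from throttling
        list.Update();
    }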

You can download the attachment here:

Using the Developer Dashboard in SharePoint 2010

The developer dashboard is a new feature in SharePoint 2010 that is designed to provide additional performance and tracing information that can be used to debug and troubleshoot issues with page rendering time.  The dashboard is turned off by default, but can be enabled via the object model or stsadm (and PowerShell too, I just haven't put together the script for it yet).  When the dashboard is turned on you will find information about the controls, queries and execution time that occur as part of the page rendering process; this information appears at the bottom of the page.  Here's an example of what the "short" version of the output looks like (NOTE: this screen shot is from a build prior to the public beta so your bits will look a little different):

 

As you can see, it provides information from the perspective of the event pipeline, the web server and the database.  On the left side you can see the different events that fired in the page processing pipeline and, within that, how long individual web parts took within those events.  On the top right hand side you see information about the page processing as a whole, including the overall execution time, the amount of memory used in processing the page request, and the correlation ID, which can be of great value when trying to link the page render to entries in the ULS log.  Underneath the server information you will find a list of the different database calls that were made through the object model by various components in the page itself as well as the controls it hosts – all useful information.

You may also notice the database calls are actually hyperlinks.  This is another pretty cool feature in that when you click on one, it shows the call stack that triggered that particular database call, the SQL that was executed, and the IO stats:

 

Enabling the developer dashboard is fairly easy.  If you’re doing it via the object model, the code looks something like this; to turn it on:

SPWebService cs = SPWebService.ContentService;
cs.DeveloperDashboardSettings.DisplayLevel = SPDeveloperDashboardLevel.On;
cs.DeveloperDashboardSettings.Update();

 

NOTE:  This code will not work in a web part if the web part is hosted in any site except the central admin site.  We specifically check for and block that scenario because the developer dashboard is a farm-wide setting.  If you code it up in a web part and try to execute it in a non-central admin site, it will throw a security exception.

To turn it off you set the DisplayLevel to SPDeveloperDashboardLevel.Off; for on-demand use of the dashboard you set the value to SPDeveloperDashboardLevel.OnDemand.  When you set it to OnDemand, it adds a small icon to the upper right hand corner of the page; you click the icon to toggle the dashboard on and off.  The icon looks like this:

 

You can also turn it off and on with stsadm; you just need to make sure you are running the command as a farm administrator:

stsadm -o setproperty -pn developer-dashboard -pv ondemand (or "on" or "off")

 

The on demand setting is really the optimal setting in my opinion.  Here's what it gives you:  once it is set to on demand, site collection admins can turn it on or off.  When they do, it only turns it on or off for that particular site collection.  Equally as good, only the person that turned it on sees the output – your everyday users will not see the output from developer dashboard, so this becomes a really valuable troubleshooting tool.  Even more interesting is that if you have multiple site collection admins and one of them toggles it on, the output is displayed only for that person, not for every site collection admin.  Want more flexibility?  Well, you can even change the permissions that are required to see the dashboard output.  The DeveloperDashboardSettings class has a property called RequiredPermissions.  You can assign a collection of base permissions (like EditListItems, CreateGroups, ManageAlerts, or whatever you want) to it; only those people that have those permissions will be able to see the output.  So you have a great deal of flexibility and granularity in deciding when to use the dashboard output and who will see it.
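For example, limiting the dashboard output to people who can manage lists and alerts might look something like this (a sketch; pick whatever permission mask fits your environment):

    SPWebService cs = SPWebService.ContentService;
    cs.DeveloperDashboardSettings.RequiredPermissions =
        SPBasePermissions.ManageLists | SPBasePermissions.ManageAlerts;
    cs.DeveloperDashboardSettings.Update();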

 

Okay, so this all seems good – all my web parts and code I run within the page will just show up and we'll have this great troubleshooting info, right?  Well, not exactly, unfortunately.  Take a look at the output from the dashboard again – you'll notice a finite set of events that are reported.  Those are tied to events in the base web part class, so they cannot be expanded for any random click event, for example.  Any code you have in your override for OnInit or Render will automatically be captured in this pipeline, but code in other places will not.  All is not lost however!  We've also introduced a new class to the object model called SPMonitoredScope.  Among other things, it helps keep track of useful usage and tracing information, just like the developer dashboard does.

 

In order to get the rest of your code included in the developer dashboard output, you need to wrap it up in a new monitored scope, with something like this:

 

using (SPMonitoredScope GetListsBtnScope = new
    SPMonitoredScope("GetListsBtn_Click"))
{
    //your code goes here
}

The name I used here – "GetListsBtn_Click" – is what will appear in the developer dashboard output.  Here's an example:


This should be one of your first best practices for developing code for SharePoint 2010 – use SPMonitoredScope!   This can only help you understand and manage the performance of your components as you deploy from development to production.

 

There’s a ton of great out of the box value here, but there is also one piece missing that is worth mentioning.  Even if you use SPMonitoredScope, if your code is a sandbox component (i.e. a User Solution), the output from it will not be captured in Developer Dashboard.  The reason it doesn’t get captured is that sandbox components execute in a completely different process from the page request – that’s why it’s sandboxed.  As a result though, we can’t pipe the tracing information back into the page processing event pipeline.  Sorry about that one folks, but we still have a lot of capabilities here that we should be taking advantage of.

 

I hope after reading this you will see the value in the developer dashboard, understand how to turn it on and off, and know what you have to do to get all of your code to be managed through this pipeline.