Monitoring Large Lists in SharePoint Online with Office365Mon

One of the enduring realities I saw over and over in my many years working at Microsoft with SharePoint is that customers LOVE the lists they can create in SharePoint.  They're super easy to set up, you can quickly do all kinds of sorting, filtering and searching on them, and it requires no technical expertise to get up and running with them.  This led to another enduring reality, which is that many customers started loving lists TOO much.  I saw many, many customers over the years who had lists that had just exploded in size.  As these lists grew larger, the performance of using them tended to get worse and worse.  This problem was compounded by the fact that many developers saw SharePoint lists as a quick and easy place to store data for their applications.  That meant even bigger list sizes, and more people using them more often.

Over the years, we developed lots of documentation and options for improving the performance of your lists.  As customers have moved to SharePoint Online in Office 365, though, we would occasionally hear people ask if it has the same large list limitations as SharePoint on-premises does… and the answer is yes, it does.  Now, as more customers are moving their on-premises SharePoint data to SharePoint Online, we see increasing concern about how the lists they have are going to perform once it's all up in Office 365.  Fortunately, at Office365Mon, we've just released a new feature designed specifically to help you stay on top of this issue.

List Monitoring is a feature that lets you select one or more lists in SharePoint Online for us to monitor.  For the lists that we’re monitoring, we will do a couple of things:  first, we’ll issue health probes for each list that we’re monitoring and render the default view for it to see what the performance is like.  That’s typically one of the first places where you’ll see performance issues with a large list.  You can configure List Monitoring so that it will send you a notification if it takes longer than “x” seconds to render the default view, where “x” is a number that you decide.

The second thing we’ll do is keep tabs on how large the list is, i.e. how many items it contains.  Again, you can set a threshold for us to look for, and when a monitored list gets bigger than that threshold, we’ll send you a notification to alert you to it.  So, for example, if you’re worried about a large list approaching that magic 5,000 item limit, you can have us notify you when it’s getting close.  Here’s a screen shot of where you configure the monitoring thresholds:

[Screenshot: MonLargeLists1]

Selecting the lists to be monitored is also quite simple – we provide you with a collection of all of the lists in the SharePoint Online site that we're monitoring, and you can just check the boxes next to the lists you want us to monitor for you.  These can be any of the lists that come out of the box with SharePoint, or any custom lists that you've created:

[Screenshot: MonLargeLists2]

Once we’ve started monitoring lists for you, not only will we notify you according to the thresholds you’ve configured, but as you’ve come to expect from Office365Mon, we also have a nice set of reports you can use to see where you stand.  To begin with, you can see the performance of the recent health probes we’ve issued against monitored lists in our Average Response Time report.  It shows the performance of all of the resources that we’re monitoring for you, including monitored lists.  We also have a new report that shows you the average performance each month just for your monitored lists:

[Screenshot: MonLargeLists3]

In addition to that, we have a report that shows you the size of your monitored lists each day, so you can visualize any growth trends that are happening and get in front of them:

[Screenshot: MonLargeLists4]

We also provide a monthly view of the average size of each monitored list, so you have a longer-term view of how rapidly your lists are growing:

[Screenshot: MonLargeLists5]

Being aware of large lists and their impact on performance is one of the best ways to ensure a good experience for your users.  I've heard many, many times from customers who say "our site is slow".  There are lots of reasons why that might be, but a couple of the most common ones are slow query times and large lists.  At Office365Mon we've provided monitoring for your query execution time for nearly a year now.  With the new List Monitoring feature, you can now also know when you have large list performance problems.  Once you know that, you can start working on a mitigation strategy – splitting the data out into multiple lists, creating customized views of the data, and so on.  There are a lot of different things you can do to improve the performance, but if you don't know you have a problem then you'll forever be stuck wondering why your users keep telling you that your "site is slow".  Take advantage of features like these to stay in the know and stay in control of your Office 365 tenant, and keep everyone happy.  Start by visiting us at https://www.office365mon.com and clicking the Configure…List Monitoring menu.

This is yet another feature at Office365Mon that was driven by feedback from our customers.  I hope you'll take a look at it and, as always, let us know how we can make it better, as well as other ways we might be able to help you do Office 365 monitoring even better.

From Sunny Phoenix,

Steve

 

Introducing Monitoring for Azure Secured Web Sites from Office365Mon

Today we're excited to announce another new monitoring feature at Office365Mon.  Beginning today, we offer the capability to monitor virtually any web site or REST API using the same proven, enterprise-grade monitoring capabilities of Office365Mon.  The same service we use for Office 365 monitoring can now be used to monitor the sites you deploy to Azure web sites, your SharePoint hosted apps for on-prem or Office 365 sites, or virtually any other site!  You now get the power of a monitoring infrastructure that sends out 10 to 20 million health probes a month to keep you in the know about your own web sites and REST APIs.

All of this comes with the same kind of integration that you're used to seeing at Office365Mon.  Setup is extremely quick and easy, all done in your browser as well as via our Office365Mon Management API.  You'll get the same sort of alert notifications as you do with the other resources we monitor for you – text messages, emails, and webhook notifications.  You'll also see all of the data we capture about the health and performance of these sites in the exact same reports you use today, whether that's one of our Standard or Advanced reports, report data you download from our site, or Power BI with the Office365Mon Content Pack.  Here's an example of a performance report covering both Office 365 sites and other sites we have hosted in Microsoft Azure:

[Screenshot: websitemonitoring]

As you can see from the chart above, we're monitoring both Office 365 sites and other sites hosted in Microsoft Azure.

Your sites and REST APIs can be hosted anywhere, of course, as long as they have a public endpoint we can connect to.  A site can be anonymous, or it can be secured with Azure Active Directory.  We can also monitor REST APIs as long as they don't require any parameters.

This feature is available in Preview today and ready for you to begin trying out.  Get started by creating your Office365Mon subscription and then adding some sites to monitor here.  Pricing and licensing have not been set yet, but the good news is that, like all new features, all existing customers will get 90 days to try it out for free.  Especially while this is in Preview, it's a great opportunity to take a look and give us any feedback you have so we can fine-tune it to meet your needs.  Like many of the things you see at Office365Mon, this is another feature that was created based on feedback from our customers.

We hope you enjoy this new feature and will take the time to try it out.

From Sunny Phoenix,

Steve

 

Using the Office 365 Batch Operations API

As I was looking around for a way to batch certain operations with the Office 365 API the other day, I stumbled upon a Preview of just such a thing, called "Batch Outlook REST Requests (Preview)" – https://msdn.microsoft.com/en-us/office/office365/api/batch-outlook-rest-requests.  The fundamentals of how it works are fairly straightforward, but the documentation is completely lacking implementation details for those using .NET.  So, I decided to write a small sample application that demonstrates using this new API / feature / whatever you want to call it.

First, let's figure out why you might want to use this.  The most common reason is that you are doing a bunch of operations and don't want to go through the overhead of creating, establishing, and tearing down an HTTP session for each one.  That can slow things down quickly and burn up a lot of resources.  Now, when I was first looking at this, I was also interested in how it might impact the throttling limits that Office 365 imposes.  Turns out I had a little misunderstanding of that, but fortunately Abdel B. and Venkat A. explained Exchange throttling to me, and so now I will share it with you.

My confusion about the impact batch operations might have on throttling was born of the fact that SharePoint Online has an API throttling limit that has been somewhat ubiquitously defined as no more than 1 REST call per second over an extended time.  So… kind of specific, but also a little vague.  Exchange Online throttling is arguably even less specific, but they do have some good information about how to know when it happens and what to do about it.

In Exchange Online, different operations may have a different impact on the system, and the system may also be affected by demands from other clients.  So when making REST API calls to Exchange Online, your code should account for getting a throttled response back.  A throttled response in Exchange Online returns a standard HTTP status code 429 (Too Many Requests).  The service also returns a Retry-After header with the number of seconds to wait before resubmitting the request.  Now that you know what a throttled response from Exchange Online looks like, you can develop your code to include a process for retry and resubmission.
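Just to illustrate what that retry process could look like, here's a minimal sketch (not part of the sample project) of honoring the Retry-After header; the method and variable names are placeholders, and it assumes the System.Net.Http, System.Net.Http.Headers and System.Threading.Tasks namespaces:

//minimal retry sketch for a throttled (429) response from Exchange Online
//"requestUri" and "accessToken" are placeholders you'd supply yourself
async Task<HttpResponseMessage> SendWithRetryAsync(HttpClient hc, string requestUri, string accessToken)
{
    hc.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

    while (true)
    {
        HttpResponseMessage resp = await hc.GetAsync(requestUri);

        //429 = Too Many Requests; wait for the number of seconds in Retry-After, then try again
        if ((int)resp.StatusCode == 429)
        {
            TimeSpan wait = resp.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(5);
            await Task.Delay(wait);
            continue;
        }

        return resp;
    }
}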

The batching feature lets you work around the overhead of multiple calls by allowing you to send in up to 20 operations in a single request.  That means 1 connection to create, establish and tear down instead of 20.  This is goodness.

The basic process of doing batch operations using this feature is to create what I’ll call a “container” operation.  In it, you will put all of the individual operations you want to perform against a particular mailbox.  Note that I said mailbox – this is important to remember for two reasons:  1) the batch feature only works today with Outlook REST APIs and 2) the individual operations should all target the same mailbox.  That makes sense as well when you consider that you have to authenticate to do these operations, and since they are all wrapped up in this “container” operation, you’re doing so in the context of that operation.

The “container” operation that I’m talking about is POST’ed to the $batch endpoint in Outlook:  https://outlook.office.com/api/beta/$batch.  The Url is hard-coded to the “beta” path for now because this API is still in preview.  In order for you to POST to the $batch endpoint you need to provide an access token in the authorization header, the same way as you would if you were making each of the individual calls contained in your container operation.  I’m not going to cover the process of getting an access token in this post because it’s not really in scope, but if you’re curious you can just look at the sample code included with this post or search my blog for many posts on that type of topic.

While I’m not going to cover getting an access token per se, it’s important to describe one higher level aspect of your implementation, which is to create an application in your Azure Active Directory tenant.  Generally speaking, you don’t access an Office 365 REST API directly; instead, you create an application and configure it with the permissions you need to execute the various Outlook REST APIs you’ll be using.  In my case, I wanted to be able to read emails, send emails and delete emails, so in my application I selected the following permissions:

batchop1

So with that background, here are the basic steps you’ll go through; I’ll include more details on each one below:

  1. If you don’t have an access token, go get one.
  2. Create your “container” operation – this is a MultipartContent POST.
  3. Create your individual operations – add each one to your MultipartContent.
  4. POST the “container” operation to the $batch endpoint.
  5. Enumerate the results for each individual operation.

 

Step 1 – Get an Access Token

As I described above, I'm not going to cover this in great detail here.  Suffice it to say, you'll need to create an application in Azure Active Directory as I briefly alluded to above.  As part of that, you'll also need to do the "standard Azure apps for Office 365" stuff in order to get an access token.  Namely, you'll need to create a client secret, i.e. a "Key", and copy it along with the client ID to your client application in order to convert the authorization code you get from Azure into an AuthenticationResult, which contains the access token.  This assumes you are using ADAL; if you are not, then you'll have your own process to get the access token.
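For what it's worth, here's a rough sketch of what that exchange looks like with ADAL (Microsoft.IdentityModel.Clients.ActiveDirectory); the authority, client ID, key, redirect Uri and authorization code below are all placeholder values you'd swap for your own:

//rough ADAL sketch; every string below is a placeholder for your own app's values
AuthenticationContext authContext = new AuthenticationContext("https://login.microsoftonline.com/common");

ClientCredential clientCred = new ClientCredential("your-client-id", "your-client-secret-key");

AuthenticationResult authResult = await authContext.AcquireTokenByAuthorizationCodeAsync(
    authorizationCode,                         //the code Azure AD returned to your redirect Uri
    new Uri("https://localhost/myredirect"),   //must match the redirect Uri configured on the app
    clientCred,
    "https://outlook.office.com");             //the resource we want a token for

string accessToken = authResult.AccessToken;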

 

Step 2 – Create Your Container Operation

The “container” operation is really just a MultipartContent object that you’ll POST to the $batch endpoint.  Unfortunately, there is scarce documentation on how to create these, which is in large part why I wrote this post.  The code to get you started though is just this simple:

 

//create a new batch ID
string batchId = Guid.NewGuid().ToString();

//create the multipart content that is used for a batch process
MultipartContent mpc = new MultipartContent("mixed", "batch_" + batchId);

The main thing to note here is just that each “container” operation requires a unique batch identifier.  A Guid is perfect for this, so that’s what I’m using to identify my batch operation.

 

Step 3 – Create Individual Operations and Add to the Container Operation

The actual code you write here will vary somewhat, depending on what your operation is.  For example, a request to send an email message is going to be different from one to get a set of messages.  The basic set of steps, though, is similar:

  1. Create a new HttpRequestMessage. This is how you define whether the individual operation is a GET, a POST, or something else, what Url to use, etc.  Here's the code I used for the operation to send a new email:  HttpRequestMessage rqMsg = new HttpRequestMessage(HttpMethod.Post, BATCH_URI_BASE + "me/sendmail");  It's worth noting that you ALWAYS send your individual operations to the $batch endpoint to be included in the batch process.  For example, if you were using v2 of the Outlook API, to send a message you would use the Url https://outlook.office.com/api/v2.0/me/sendmail.  However, to use the $batch endpoint, since it's in beta, you use the Url https://outlook.office.com/api/beta/me/sendmail.
  2. Create the content for your operation. In my case I used a custom class I created to represent a mail message, I "filled it all out", and then I serialized it to a JSON string.  I then used my string to create the content for the operation, like this:  StringContent sc = new StringContent(msgData, Encoding.UTF8, "application/json");  So in this case I'm saying I want some string content that is encoded as UTF8 and whose content type is application/json.
  3. Add your content to the HttpRequestMessage:  rqMsg.Content = sc;
  4. Wrap up your HttpRequestMessage in an instance of the HttpMessageContent class. Note that you'll need to add a reference to System.Net.Http.Formatting in order to use this class.  Here's what it looks like:  HttpMessageContent hmc = new HttpMessageContent(rqMsg);  We're doing this so that we can set the appropriate headers on this operation when it's executed as part of the batch.
  5. Set the headers on the HttpMessageContent object:  hmc.Headers.ContentType = new MediaTypeHeaderValue("application/http"); and also hmc.Headers.Add("Content-Transfer-Encoding", "binary");  You now have a single operation that you can add to the "container" operation.
  6. Add your individual operation to the "container" operation:  mpc.Add(hmc);  That's it – now just repeat these steps for each operation you want to execute in your batch; a consolidated sketch of these steps follows below.

Side note:  I realize some of this code may be difficult to follow when it's intertwined with comments like I've done above.  If you're getting squinty-eyed, just download the ZIP file that accompanies this post, and you can see all of the code end to end.
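To make those six steps easier to scan in one place, here's a consolidated sketch of building a single operation and adding it to the "container" operation; msgData is assumed to already hold the JSON for the message you want to send, and BATCH_URI_BASE is just the beta base Url described earlier:

//the beta base Url used for everything sent to the $batch endpoint
const string BATCH_URI_BASE = "https://outlook.office.com/api/beta/";

//1. define the individual operation – a POST to me/sendmail in this case
HttpRequestMessage rqMsg = new HttpRequestMessage(HttpMethod.Post, BATCH_URI_BASE + "me/sendmail");

//2. and 3. create the JSON content and attach it to the request
StringContent sc = new StringContent(msgData, Encoding.UTF8, "application/json");
rqMsg.Content = sc;

//4. wrap the request in an HttpMessageContent (requires System.Net.Http.Formatting)
HttpMessageContent hmc = new HttpMessageContent(rqMsg);

//5. set the headers the $batch endpoint expects on each individual operation
hmc.Headers.ContentType = new MediaTypeHeaderValue("application/http");
hmc.Headers.Add("Content-Transfer-Encoding", "binary");

//6. add the operation to the "container" MultipartContent created earlier
mpc.Add(hmc);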

 

Step 4 – Post the Container Operation to the $Batch Endpoint

There's not a lot to step 4.  You can just POST it now, but since your "container" operation may contain many individual operations, there are a couple of points worth remembering.  First, the individual operations are not guaranteed to be performed in any specific order.  If you need them to be performed in a specific order, then either don't do them in a batch or do them in separate batches.  Second, by default, at the point that any individual operation encounters an error, execution stops and no further operations in the batch will be executed.  However, you can override this behavior by setting a Prefer header on your "container" operation.  Here's how you do that:

mpc.Headers.Add("Prefer", "odata.continue-on-error");

With that done (or not, depending on your requirements), you can go ahead and POST your “container” operation to the $batch endpoint, like this:

HttpResponseMessage hrm = await hc.PostAsync(BATCH_URI_BASE + "$batch", mpc);
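For completeness, hc in that line is an HttpClient that already carries the access token from step 1 in its Authorization header; a minimal sketch of that setup looks something like this (accessToken being the token string from step 1):

//minimal sketch of the HttpClient used above; accessToken comes from step 1
HttpClient hc = new HttpClient();
hc.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);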

With that done, it’s time to look at the results, which is covered in the next step.

 

Step 5 – Enumerate the Results for Each Individual Operation

At a high level, you can see if the overall batch operation worked the same way you would if it were just one operation:

if (hrm.IsSuccessStatusCode)

The important thing to understand though, is that even though the “container” POST may have worked without issue, one or more of the individual operations contained within may have had issues.  So how do you pull them all out to check?  Well, using the MultipartMemoryStreamProvider class is how I did it.  This is another class that requires a reference to System.Net.Http.Formatting in order to use, but you should already have it from the other steps above so that shouldn’t be a problem.

So we start out by getting all of the responses from each individual operation back like this:

MultipartMemoryStreamProvider responses = await hrm.Content.ReadAsMultipartAsync();

You can then enumerate over the array of HttpContent objects to look at the individual operations.  The code to do that looks like this:

for (int i = 0; i < responses.Contents.Count; i++)
{
    string results = await responses.Contents[i].ReadAsStringAsync();
}

It’s a little different from having an HttpResponseMessage for each one in that you have to do a little parsing.  For example, in my sample batch I sent two emails and then got the list of all of the emails in the inbox.  As I enumerate over the content for each one, here’s what ReadAsStringAsync returns for sending a message:

HTTP/1.1 202 Accepted

Okay, so you get to parse the return status code…should be doable.  It can get a little more cumbersome depending on the operation type.  For example, here’s what I got back when I asked for the list of messages in the inbox as part of the batch:

HTTP/1.1 200 OK

OData-Version: 4.0

Content-Type: application/json;odata.metadata=minimal;odata.streaming=true;IEEE754Compatible=false;charset=utf-8

{"@odata.context":"https://outlook.office.com/api/beta/$metadata#Me/MailFolders('Inbox')/Messages","value":[{"@odata.id":"https://outlook.office.com/api/beta/Users('05d6cc47-5a79-4906-88e6-c39fcd595e15@b098aeb9-ce11-43ce-a49f-ee4b5a4b0a71')/Messages('AAMkADEyMzQ3MzllLWM2NmItNGY2ZS04MWE1LTQwNjdiZDc1ZGYxNwBGAAAAAADRpmW4I…}}]}

Okay, so I trimmed a bunch of detail out of the middle there, but the gist is this – you would have to parse out the HTTP status code that was returned, and then parse out where your data begins.  Both are quite doable; I just kind of hate having to do the 21st century version of screen scraping, but it is what it is.  The net is that you can at least go look at each and every individual operation you submitted and figure out whether it worked, retrieve and process data, etc.
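If it helps, here's a rough sketch (not from the sample project) of one way to pull the status code and any JSON payload out of the string that comes back for an individual operation; you could drop it inside the loop above, and it assumes the simple response shapes shown here:

//rough sketch of parsing one individual operation's response string
string raw = await responses.Contents[i].ReadAsStringAsync();

//the first line looks like "HTTP/1.1 200 OK"; the status code is the second token on it
string firstLine = raw.Split(new[] { '\r', '\n' })[0];
int statusCode = int.Parse(firstLine.Split(' ')[1]);

//any JSON payload starts at the first '{' after the part's headers (sendmail returns none)
int jsonStart = raw.IndexOf('{');
string json = (jsonStart >= 0) ? raw.Substring(jsonStart) : null;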

Summary

That's the short tour of using the Outlook batch API.  There are a handful of things you need to know about how it works and what its limitations are, and I've pointed out all of the ones I know about in this post.  The trickier part by far is understanding how to create a batch request using the .NET Framework, as well as how to parse the results, and I covered both of those aspects here as well.

As I mentioned a few times in this post, I just zipped up my entire sample project and have attached it to this post so you can download it and read through it to your heart’s content.  It does contain the details specific to my application in Azure AD, so you’ll need to create your own and then update the values in the app if you want to run this against your own tenant.  The ZIP file with the code is below: