Teams and Skype Jitter and Call Quality Monitoring from Office365Mon

Today we are announcing a new set of features for monitoring Microsoft Teams and Skype for Business call quality at Office365Mon.Com.  This package of features is now included with our Distributed Probe and Diagnostics agent, which we allow you to install in as many different geographic locations as you like.  The agent now has new features that will monitor call quality metrics for things like:

  • Jitter
  • Packet loss
  • Packet reorder ratio
  • Round trip time latency
  • Calling firewall issues
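For readers curious about what a couple of these metrics actually measure, here's a minimal Python sketch — purely illustrative, not our agent's implementation — using the standard interarrival jitter estimator from RFC 3550:

```python
def interarrival_jitter(transit_times):
    """Estimate jitter from per-packet transit times (in ms), using the
    smoothed estimator from RFC 3550: J += (|D| - J) / 16."""
    jitter = 0.0
    for prev, cur in zip(transit_times, transit_times[1:]):
        d = abs(cur - prev)          # change in transit time between packets
        jitter += (d - jitter) / 16  # exponential smoothing, gain 1/16
    return jitter

def packet_loss_ratio(sent, received):
    """Fraction of packets lost in a probe burst."""
    return (sent - received) / sent if sent else 0.0
```

A perfectly steady stream (constant transit time) yields zero jitter; the more the packet spacing wanders, the higher the estimate climbs.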

As always with our agent, from each different location you can choose notification thresholds for these different call quality metrics, as you see here:


The notification values we start with are pre-configured for the minimum performance requirements recommended by Microsoft.  As always though, you can set up these notification thresholds on a location-by-location basis to match the network performance characteristics of each of your different deployment areas.

In addition to performance monitoring, each time we do a check we also test network connectivity to a variety of calling service endpoints that Microsoft uses in the region your agent is deployed.  Each one of these endpoints is defined by an IP address, port and protocol that your Teams and Skype clients may need to access.  We test every one of these to ensure that there aren’t any network configuration issues that could block you from making calls from a particular location, as well as to be able to alert you when a service endpoint is unavailable.  This can also help you identify potential issues when users are unable to make or sustain calls with the Teams or Skype clients.
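A simplified version of that connectivity test can be sketched in Python.  The endpoint list below is purely illustrative — the real addresses, ports, and protocols come from Microsoft's published Office 365 endpoint data:

```python
import socket

# Hypothetical sample of calling-service endpoints (IP, port, protocol);
# the actual list is published by Microsoft and varies by region.
ENDPOINTS = [
    ("52.112.0.1", 443, "tcp"),
    ("52.113.0.1", 5061, "tcp"),
]

def can_reach(ip, port, timeout=3.0):
    """Return True if a TCP connection to ip:port succeeds within timeout."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_endpoints(endpoints, connect=can_reach):
    """Probe each endpoint; return the subset that could not be reached."""
    return [(ip, port, proto) for ip, port, proto in endpoints
            if proto == "tcp" and not connect(ip, port)]
```

Anything returned by `check_endpoints` is a candidate firewall or network-configuration issue at that location.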

Once you turn on call quality monitoring, we’ll feed the data we collect from our testing back to your Office365Mon subscription, so you can dig into reporting data later on to see what the call quality metrics are like across the different locations where you’ve installed the agent.  For example, here’s what the jitter looks like from two different locations:


Here’s what the packet loss looks like from those same locations:


And finally, here’s what the round trip time latency looks like:


Each of the preceding three charts shows you data from recent tests – meaning the last couple of hours or so – but we also roll up this data by day as well as by month.  That means you always have a current, short-term, and long-term view of the call quality at your different office locations.

For firewall issues, we keep a log of each time we encounter issues reaching a particular IP address, port and protocol.  So in addition to the notifications you get, you’ll also see a history of those issues, as shown here:


All of the data is sortable so you can see where you’re having the most problems by location, address, port, or protocol type.


Another Feature Based on Your Feedback

The Call Quality monitoring feature from Office365Mon is yet another example of where your feedback to our team has resulted in new and enhanced capabilities.  We’re always interested in what you think about our service – good and bad – and in ideas and suggestions you may have for ways in which we can do a better job of helping you Stay In the Know, and Stay In Control.  Call Quality monitoring is available for customers with our Enterprise Platinum license.  You can find out more about our licenses and features by viewing the product matrix on our site.  In addition, if you don’t have an Office365Mon subscription for Office 365 monitoring yet, you can get a free 90-day trial of our Enterprise Platinum license by visiting Office365Mon.Com.  I hope you’ll consider taking a few minutes to download and install our latest agent and try out the Call Quality monitoring feature, along with all of the many other features available from Office365Mon.Com.

From Sunny Phoenix,

Microsoft Teams Monitoring Now Available from Office365Mon.Com

I know that it seems like many of you have been waiting a long time for this but…we’re happy to announce that Office365Mon is now offering monitoring for Microsoft Teams!  We have just released a couple of new monitoring options for Microsoft Teams into Preview mode at Office365Mon.Com.  Our support is launching with two specific functional areas of Teams we’re monitoring – 1) the overarching Teams service, and 2) the Channels feature of Microsoft Teams, which is one of the core pillars of the platform.

To do this we’re starting with two new features, which we call Microsoft Teams Monitoring and Microsoft Teams Advanced Monitoring.  Over time, as Microsoft adds new features to the Microsoft Teams service, we’ll roll those into our Teams monitoring packages so you can stay on top of them.  As with all the other features at Office365Mon, our goal is to make the setup and management for monitoring Microsoft Teams as easy and painless as possible.  To get started, you’ll just go into our Configure Office365Mon page, scroll down and then simply click on the Enable button in the Microsoft Teams Info section of the page.  That’s it – that’s all you need to do to get started.  Just like we do with other Office 365 services that we monitor – like SharePoint, Exchange, and OneDrive – we also give you the flexibility to focus in on a specific resource for monitoring purposes.  So we’ll go out and get a list of all the Microsoft Teams you are joined to, and then you can pick which one you want us to monitor.

We also simplify the process of configuring monitoring, in that you don’t have to go pick which Teams and Channels to monitor – just clicking the Enable button is enough to get you started.  After that, if you have both Microsoft Teams monitoring features, then we’ll monitor both Teams and Channels; if you only have one, then we’ll just monitor Teams.  We figure that out for you so you don’t have to hassle with it.

From a reporting standpoint, Microsoft Teams data then begins to show up automatically in reports, just like any other Office 365 resource.  So you don’t need to go find special reports just for Microsoft Teams, because the Teams monitoring data shows up in all of the reports you already know and love.  We’re also able to do more in-depth analysis of your Microsoft Teams performance, meaning as we look at the performance for your Teams tenant, we can break down where time is spent in processing requests.  Is it happening in your tenant itself, so there is slowness in the cloud?  Or is there perhaps a performance issue on your network as your users utilize the Teams feature sets?  Here’s a quick look at one of the reports from our Distributed Probes and Diagnostics agent – it ALSO now includes support for Microsoft Teams so you can see what the Teams performance is like across all of the different geographic locations where you have users:


You’ll also see data from Teams now integrated into our Tenant Performance and Health reports.  This is one that we’ve recently updated so that it shows you not only the tenant-level performance of the recent health probes that have been issued, but also an underlay that displays the historical data for the same point in time as when you are viewing the report.  So if you’re looking at the report at 2PM on a Thursday, then you’ll see historically what the performance has been like at 2PM on a Thursday, so you know if your current performance is in line with what you normally see, or if you’ve hit a performance issue in your tenant.  Here’s an example of that:


Just like many of the other services we monitor, you can also compare what the overall performance is like for your Teams tenant versus all other Office365Mon customers that are monitoring Microsoft Teams.  Here’s what that looks like:


Start Monitoring Teams Today

As we do every time we release a new feature, we’ve enabled Microsoft Teams and Teams Channel monitoring for all existing Office365Mon customers.  You just need to go to the Configure Office365Mon page and click the Enable button in the Microsoft Teams Info section.  Once you do that, you’ll be able to see Teams performance data begin to show up in the reports mentioned above along with several others.  You’ll also get notifications when we detect an outage with your Microsoft Teams service.

For new customers, we turn on Microsoft Teams monitoring along with all of our other features for free for 90 days when you create a trial subscription.  Just visit our site and click on the big Start Now link on the home page.  We don’t require any payment information up front, so you can be up and running in about two minutes to monitor Office 365.

As I always say, if you have feedback on this feature or any other feature you would like to see, please do not hesitate to contact me.  You can always email us, and be assured that I read every piece of feature feedback and every suggestion we get through there.

I hope you’ll find our new Microsoft Teams monitoring features to be a welcome addition to the wide array of monitoring services we offer here at Office365Mon.Com.

From Sunny Phoenix,



Deeper Real Time Office 365 Performance Data from Office365Mon.Com

We’ve just released some new and improved reports that I think many of you will find valuable and interesting at Office365Mon.Com.  A couple of these reports give you performance information about your own Office 365 tenant that’s even deeper and more insightful than we’ve ever had before.  In addition to that, we’ve simplified some of our outage reports, and also added additional historical outage reporting options that let you drill down even further into the stability of the different services in your Office 365 subscription.

For real time performance data on your Office 365 tenant, we started monitoring your tenant health scores and request durations in June 2018.  For a quick refresher, the health score is something we get for SharePoint Online and OneDrive for Business that is between 0 and 10 and represents the overall health of your tenant.  When your score is 0, things are as healthy as possible; the more the score increases, the less healthy your tenant is.  Request duration is the amount of time that it takes to process synthetic transactions that we send to your tenant while we monitor it.  As request durations increase, users begin to see it as “Office 365 is slow today” or “our network is slow”.

We’ve provided you a real-time view of the scores and durations since June 2018, and while there is a lot of value in that, the problem is that there wasn’t a good way for you to look at those numbers and understand if what you were seeing was really bad, or if those numbers are actually typical of what you have been getting from your tenant.  The new reports we have today plug that gap and give you some great and easy to understand insight so you know exactly how you’re doing.  To get started, here’s a screenshot of the new request duration report:


The first thing that probably catches your eye is the wavy blue and yellow data in the background.  That data is actually historical data for what the request duration has been during the exact time frame that you’re currently looking at.  So if you look at this report at 8AM and then again at 5PM, that wavy data will be different.  The idea is that you can see not only what your current performance is like, but what it has historically been like at this same exact time of day.  Not only that, but as we gather more historical data, we fine-tune what you see here even further – down to the same exact day of week with the historical data.  So that means, for example, if you look at the current performance at 2PM on a Tuesday, then eventually the wavy data you see in the background will also be for 2PM on a Tuesday.  This is incredibly useful for doing as much of an “apples to apples” comparison as possible when you are looking for performance issues in your tenant, as well as for understanding if the data you are currently seeing is “typical” for what you normally get in your tenant.  That’s exactly what you see in the chart above – at the point in time we took this report screenshot, our tenant was actually performing a little better than past Wednesdays at this same time of day.
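The idea behind the historical underlay can be illustrated with a small sketch: bucket past request durations by weekday and hour, then compare a current reading against its bucket's average.  This is an illustration of the concept only, not our actual reporting code:

```python
from collections import defaultdict
from statistics import mean

def baseline(history):
    """Build an average request duration per (weekday, hour) bucket.

    `history` is a list of (weekday, hour, duration_ms) samples,
    e.g. weekday 2 = Wednesday, hour 14 = 2PM.
    """
    buckets = defaultdict(list)
    for weekday, hour, ms in history:
        buckets[(weekday, hour)].append(ms)
    return {k: mean(v) for k, v in buckets.items()}

def compare(current_ms, weekday, hour, baselines):
    """Say how the current reading relates to the historical norm."""
    typical = baselines.get((weekday, hour))
    if typical is None:
        return "no history"
    return "better than usual" if current_ms < typical else "worse than usual"
```

A reading of 550 ms at 2PM on a Wednesday is "better than usual" if past Wednesdays at 2PM averaged 600 ms — the same apples-to-apples comparison the chart makes visually.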

As with all of our graphical reports at Office365Mon.Com, you can also drill into this further by just clicking on items in the chart’s legend.  So for example, if you just want to look at what’s going on with SharePoint Online only, here’s what the chart looks like:


Again, it’s very interesting in how you see some similar changes in performance to what we’ve previously seen on Wednesdays during this time frame, but overall still looking a little better than usual.

We also do the same thing with health scores, so for comparison, here’s the new health score report with this same historical data underlay:


Again, as you can see from this, it’s consistent with the request duration data in that overall the health scores are right about where they normally are at this time of day on a Wednesday.


Easier to Understand Outage Information

In addition to the deeper real time data, we’ve also simplified and improved the usability and level of detail around outages and the reasons they happen.  First, we’ve changed the existing reports we’ve had from the beginning on outages and outage reasons so that now they only show information on outages that have occurred in the last 60 days.  Previously, they contained data on every outage we ever monitored for you.

Next, we added a couple of new reports  – Outage History and Outage Reason History.  The Outage History report lets you see all of the outages that ever occurred, but it breaks it down by resource, so you can view all of the times you had a SharePoint outage, or an Exchange outage, etc.  As you can see from the screenshot here, it’s a much simpler way to view this data – in this case for OneDrive for Business:


For Outage Reason History, the data actually gets a lot more interesting and insightful.  First of all, we give you a breakdown of the different reasons for outages broken down by calendar quarter.  This allows you to see trends in where the supportability issues have been for your tenant.  Here’s an example:


Just at a quick glance you can see that originally back when we first started monitoring our Office 365 tenant, a lot of outages were Internal Server Errors and Unauthorized (usually meaning there was an Azure AD availability issue).  Again though, just like with the Outage History report, you can drill down into specific Office 365 services to see what the outage history is like for each one.  Here’s an example of that:


In this case we’re looking at the outage history for SharePoint Online.  Again, you can also see here how earlier the outages were primarily Unauthorized (meaning an Azure AD issue), but now we see more Service Unavailable errors.  Overall though, what’s also interesting is that you see we have far fewer outages with SharePoint Online than we did when we first started monitoring.  This is a great historical perspective to have and understand, as you see changes to the service and changes in the reliability in your tenant over time.


Some More Data, Some Better Data

I think these new reports will continue to help you Stay in the Know and Stay in Control of your Office 365 tenant.  You get deeper and more meaningful data than ever before, and we continue to build and expand the service to try and help keep you on top of your tenant at all times.  As always, if you have feedback on these or any other feature, I strongly encourage you to contact us.  I read every customer recommendation and piece of feedback we receive.  If you haven’t started monitoring Office 365 yet, then please stop by our site and click the big Start Now link on the home page for a free 90-day trial so you can see exactly how you can put monitoring to work for you.

From Sunny Phoenix,


New Centralized Office 365 Storage Monitoring from Office365Mon.Com

Today we’re announcing the release of a new centralized storage monitoring feature for Office 365 resources.  In addition to that, as part of this same feature we are providing Office 365 usage trends across your tenant over a rolling 52-week period.

Our new feature is called Usage Monitoring.  In Office 365 today, when resources get close to their allocated storage, notifications can be sent to the resource owner.  For example, when you reach a certain percentage of your allocated storage for a SharePoint site or OneDrive for Business site, the site collection admins can get email notifications.  With Exchange Online, the mailbox owner can get a notification when their mailbox reaches its issue warning quota.  There isn’t a capability, though, for your company administrators to stay connected to all of these events when resource storage reaches critical levels.  This is important because when individual users get these messages, the first thing they normally do is call the help desk and escalate the situation into something much more urgent than it needs to be.

With Usage Monitoring from Office365Mon.Com, we’ve made it super simple to set up centralized notifications for these events.  You simply tell us what types of storage you want us to monitor – Exchange, SharePoint sites, and/or OneDrive sites – and then you tell us at what percentage of the allocated storage you want us to notify you.  You can see the configuration options below:


Once configured, any time any resource in your organization reaches these levels we’ll send out emails to everyone that you’ve included for notification in your Office365Mon subscription.  In addition to that, we also send out a webhook notification that includes the type of resource that has triggered the notification, along with a list of all the resources that are exceeding the configured amount.  Like many of our features at Office365Mon.Com, this keeps you In the Know and In Control so you can get in front of these situations before they become a problem!
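If you consume that webhook in your own tooling, handling it might look something like the sketch below.  The payload field names here are hypothetical — we're just illustrating the idea of a notification carrying a resource type plus a list of over-threshold resources:

```python
import json

# Hypothetical shape of the Usage Monitoring webhook body; the exact
# field names used by the service may differ.
sample_payload = json.dumps({
    "resourceType": "OneDriveSite",
    "threshold": 80,
    "resources": [
        {"url": "https://contoso-my.sharepoint.com/personal/alice", "percentUsed": 91},
        {"url": "https://contoso-my.sharepoint.com/personal/bob", "percentUsed": 84},
    ],
})

def summarize(body):
    """Turn a webhook body into a one-line summary for a ticket or chat alert."""
    data = json.loads(body)
    names = ", ".join(r["url"] for r in data["resources"])
    return f"{data['resourceType']} storage over {data['threshold']}%: {names}"
```

From here you could route the summary into a ticketing system or team chat so administrators see the storage event before the help-desk calls start.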

In addition to letting you know when you’re hitting storage limits, you also get several reports that fill you in on which services your users are using.  Other reports give you trend information about what sorts of activities they are performing within the major services, and how frequently.  For example, you can see how many active users you’re getting per week in each of the major Office 365 services:


You can see the message activity from Exchange:


The feature usage within Skype for Business:


As well as the adoption rate of Microsoft Teams features:


In addition to this, we also create reports for the Top 100 SharePoint sites, Top 100 OneDrive for Business sites, and Top 100 Mailboxes.  The resources are ordered by the amount of storage they are consuming.  Because the report data is updated once a week, you can easily spot trends that are occurring in usage and adoption of the different services and features in your Office 365 subscription.

The Usage Monitoring feature is included with the Enterprise Platinum license from Office365Mon.Com.  Existing customers can take advantage of this new set of features today.  New customers who are interested in this or other features of Office365Mon can go to our web site and click the big Start Now link on the home page to begin a free 90-day trial.  We don’t ask for any payment information up front, so you can just go right in and start creating your new Office365Mon subscription to try out all of the amazing features of the service.

As always, we’d love to hear any feedback you have on this feature, or suggestions for new ways to support your needs with Office 365.  I personally read every single suggestion that comes into our mailbox, so let us know what you think!

From Sunny Phoenix,




Monitoring Geographically Distributed Office 365 Tenants at Office365Mon.Com

One of the questions we see from time to time is around where and how we monitor Office 365, and what all gets monitored.  In a nutshell, what we deliver out of the box is monitoring from a set of Azure cloud services that will then do performance and availability monitoring of your Office 365 tenant, wherever those resources may be.

In terms of what gets monitored, that really boils down to “how many” of a particular resource type we are going to watch for you.  Some people have misconceptions around that, thinking that they may want to monitor every single SharePoint site, or every single mailbox, in an entire organization.  While some companies may have used solutions like that when all of their services were on premises, that is a model that doesn’t really make sense when the data is hosted in the cloud.  The reasons why, and the way we DO look at ensuring coverage for a tenant, dovetail nicely into a broader conversation about “how do we do that” when you have parts of your tenant distributed geographically.

To begin with, let’s talk a little bit about why you really don’t need to – and shouldn’t even try to – monitor every single resource in your tenant.  Here are a few of the most obvious reasons:

  • Security – in order to monitor a resource, you need to have access to the resource. Opening up your tenant and granting a monitoring service (or any application, for that matter) access to your entire corporate email and document repository is about as bad an idea as you’ll come across.  This is how data gets leaked, people.
  • Signal overload – over time, you’ll see many short, transient errors with your cloud resources. It doesn’t mean the service or even your tenant is going down; it’s just the nature of life on the Internet.  If you try and wade through tens, hundreds or thousands of these signals a day, you’ll be worn out probably before your first coffee break.  You need to be able to establish an appropriate cross section of your service offering to sample, not gorge yourself on data.
  • Necessity – frankly, it just isn’t necessary. What I’ve seen from my many years working at Microsoft, and then subsequently at Office365Mon, is that you will get an excellent view of the health of your tenant by monitoring one to a few resources per data center.  For example, in most cases (but not all), if one SharePoint site is up, they are ALL up.  It’s incredibly rare to have only one or two site collections within a tenant down and all the others up, or vice versa.  Exchange is much the same – while it’s more likely there that you may randomly have issues with one or two mailboxes now and again, in most cases when one mailbox is up within a data center, they’re all up; when one is down, they’re all down.  Again, this is not true in all cases (and Exchange itself has some architectural factors that can contribute to more of these outages than SharePoint), but it’s a good general rule to follow.
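The "signal overload" point above is usually handled by requiring several consecutive failed probes before raising an alert, so short transient blips are ignored.  A minimal sketch of that idea — illustrative only, not our monitoring code:

```python
def alert_indices(results, consecutive=3):
    """Given a list of probe outcomes (True = success), return the indices
    where an alert should fire: only after `consecutive` failures in a row,
    so isolated transient errors never page anyone."""
    alerts, streak = [], 0
    for i, ok in enumerate(results):
        streak = 0 if ok else streak + 1
        if streak == consecutive:
            alerts.append(i)
    return alerts
```

With the default of three, a single failed probe sandwiched between successes produces no alert at all, while a sustained outage fires exactly once when the streak threshold is crossed.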

So how does that pertain to monitoring geographically distributed Office 365 resources?  Like this – today, Exchange can have mailboxes for a single tenant split between different data centers.  In the future, SharePoint Online will support having different segments of its service split across multiple data centers.  For more details on what’s happening with SharePoint Online in this respect, see this TechNet article:  Multi-Geo Capabilities in OneDrive and SharePoint Online in Office 365.   At Office365Mon, we tackle this in a couple of different ways:

  1. Cloud probes – when your data is split across multiple data centers, create multiple Office365Mon subscriptions. With each subscription we can target a different resource or resources in the different geographic locations where you have resources.  All of our licenses at Office365Mon support having multiple subscriptions; for more details see our Pricing page.  By pulling a representative resource or two from each different data center and configuring Office365Mon subscriptions to monitor them, we can track all of your data centers with our cloud-based probes.
  2. Distributed Probes – we also have a feature that you can download and install locally called Distributed Probes and Diagnostics. This can be installed on as many different devices in as many different locations as you want.  So you can install the agent on different physical or virtual machines that are in or near the same regions where your Office 365 resources are at.  Each of these devices issues health probes from the location at which it’s installed, and then it “reports back” with both performance and availability data so you can keep track of what’s happening with your Office 365 tenant worldwide.

When you start breaking down your monitoring plan by mapping it to the geographic regions in which you have data, and then matching that to Office365Mon subscriptions and Distributed Probes, you can pretty easily and pretty quickly develop and deploy your Office 365 monitoring in a way that will keep you in the know and in control, no matter how big your organization is.  As always, the first step is to create your Office365Mon subscription, which you can do on our site.  The first 90 days are free and you don’t need to provide any payment information up front.  You can continue to add additional subscriptions during your trial period and map out a workable, sensible monitoring strategy.

As always, we love to hear feedback, so if you have questions feel free to send them to our support staff.

From Sunny Phoenix,



Monitoring Large Lists in SharePoint Online with Office365Mon

One of the enduring realities I saw over and over in my many years working at Microsoft with SharePoint is that customers LOVE the lists they can create in SharePoint.  They’re super easy to set up, you can quickly do all kinds of sorting, filtering and searching on them, and it requires no technical expertise to get up and running with them.  This led to another enduring reality, which is that many customers started loving lists TOO much.  I saw many, many customers over the years whose lists had just exploded in size.  As these lists grew larger, the performance of using them tended to get worse and worse.  This problem was also compounded by the fact that many developers saw SharePoint lists as a quick and easy place to store data for their applications.  That meant even bigger list sizes, and more people using them more often.

Over the years, we developed lots of documentation and options for improving the performance of your lists.  As customers have moved to SharePoint Online in Office 365 though, we would occasionally hear people ask if it had the same large list limitations as SharePoint on premises does…and the answer is yes, it does.  Now as more customers are moving their SharePoint on premises data to SharePoint Online, we see increasing concern about how the lists they do have are going to perform once it’s all up in Office 365.  Fortunately, at Office365Mon, we’ve just released a new feature designed specifically to help you stay on top of this issue.

List Monitoring is a feature that lets you select one or more lists in SharePoint Online for us to monitor.  For the lists that we’re monitoring, we will do a couple of things:  first, we’ll issue health probes for each list that we’re monitoring and render the default view for it to see what the performance is like.  That’s typically one of the first places where you’ll see performance issues with a large list.  You can configure List Monitoring so that it will send you a notification if it takes longer than “x” seconds to render the default view, where “x” is a number that you decide.

The second thing we’ll do is keep tabs on how large the list is, i.e. how many items it contains.  Again, you can set a threshold for us to look for, and when a monitored list gets bigger than that threshold, we’ll send you a notification to alert you to it.  So, for example, if you’re worried about a large list approaching that magic 5,000 item limit, you can have us notify you when it’s getting close.  Here’s a screen shot of where you configure the monitoring thresholds:
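If you want to spot-check a list's size yourself, SharePoint Online exposes an ItemCount property through its REST API.  Here's a rough Python sketch; obtaining a valid access token for the `auth_header` value is out of scope here, and the 4,500 default threshold is just an example of flagging lists before they hit the 5,000-item mark:

```python
import json
import urllib.request

def list_item_count(site_url, list_title, auth_header):
    """Read a list's item count via the SharePoint REST API.

    Endpoint shape: /_api/web/lists/getbytitle('Title')?$select=ItemCount
    `auth_header` is a valid Authorization header value (e.g. a bearer token).
    """
    url = f"{site_url}/_api/web/lists/getbytitle('{list_title}')?$select=ItemCount"
    req = urllib.request.Request(url, headers={
        "Accept": "application/json;odata=nometadata",
        "Authorization": auth_header,
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["ItemCount"]

def over_threshold(count, threshold=4500):
    """Flag lists that are approaching the 5,000-item list view threshold."""
    return count >= threshold
```

Running a check like this on a schedule is essentially a do-it-yourself version of the size-threshold notification described above.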


Selecting the lists to be monitored is also quite simple – we provide you with a collection of all of the lists in the SharePoint Online site that we’re monitoring, and you can just check boxes next to the lists you want us to monitor for you.  It can be any of the lists that come out of the box with SharePoint, or any custom list that you’ve created:


Once we’ve started monitoring lists for you, not only will we notify you according to the thresholds you’ve configured, but as you’ve come to expect from Office365Mon, we also have a nice set of reports you can use to see where you stand.  To begin with, you can see the performance of the recent health probes we’ve issued against monitored lists in our Average Response Time report.  It shows the performance of all of the resources that we’re monitoring for you, including monitored lists.  We also have a new report that shows you the average performance each month just for your monitored lists:


In addition to that, we have a report that shows you the size of your monitored lists each day, so you can visualize any growth trends that might be happening that you need to get in front of:


We also provide a monthly view of the average size of each monitored list, so you have a longer-term view of how rapidly your lists are growing:


Being aware of large lists and their impact on performance is one of the best ways to ensure a good experience for your users.  I’ve heard many, many times from customers that say “our site is slow”.  There are lots of reasons why that might be, but a couple of the most common are slow query times and large lists.  At Office365Mon we’ve provided monitoring for your query execution time for nearly a year now.  With the new List Monitoring feature, now you can also know when you have large list performance problems.  Once you know that, you can start working on a mitigation strategy – splitting the data out into multiple lists, creating customized views of the data, and so on.  There are a lot of different things you can do to improve the performance, but if you don’t know you have a problem then you’ll forever be stuck wondering why your users keep telling you that your “site is slow”.  Take advantage of features like these to stay in the know and stay in control of your Office 365 tenant, and keep everyone happy.  Start by visiting our site and clicking the Configure…List Monitoring menu.

This is yet another feature at Office365Mon that was driven from feedback by our customers.  I hope you’ll take a look at this feature and as always, let us know how we can make it better as well as ways in which we might be able to help you to do Office 365 monitoring even better.

From Sunny Phoenix,



Introducing Monitoring for Azure Secured Web Sites from Office365Mon

Today we’re excited to announce another new monitoring feature at Office365Mon.  Beginning today, we are offering you the capability to monitor virtually any web site or REST API using the same proven, enterprise-grade monitoring capabilities of Office365Mon.  The same service we use for Office 365 monitoring can now be used to monitor sites you deploy to Azure web sites, your SharePoint hosted apps for on-prem or Office 365 sites, or virtually any other site!  You now get the power of a monitoring infrastructure that sends out 10 to 20 million health probes a month to keep you in the know about your own web sites and REST APIs.

All of this will come with the same kind of integration that you’re used to seeing at Office365Mon.  Setup will be extremely quick and easy, all done in your browser as well as via our Office365Mon Management API.  You’ll get the same sort of alert notifications as you do with the other resources we monitor for you – text messages, emails, and webhook notifications.  You’ll also see all of the data we capture about the health and performance of these sites in the same exact reports you use today, whether that’s one of our Standard or Advanced reports, you download your report data from our site, or you use Power BI with the Office365Mon Content Pack.  Here’s an example of a performance report that’s monitoring both Office 365 sites as well as other sites we have hosted in Microsoft Azure:


As you can see from the chart above, we’re monitoring:

Your sites and REST APIs can of course be hosted anywhere, as long as they have a public endpoint we can connect to.  A site can be anonymous, or it can be secured with Azure Active Directory.  We can also monitor REST APIs as long as they don't require any parameters.

This feature is available in Preview today and ready for you to begin trying out.  Get started by creating your Office365Mon subscription and then adding some sites to monitor here.  Pricing and licensing have not been set yet, but the good news is that, like all new features, all existing customers will get 90 days to try it out for free.  Especially while this is in Preview, it's a great opportunity to take a look and give us any feedback you have so we can fine-tune it to meet your needs.  Like many of the things you see at Office365Mon, this is another feature that was created based on feedback from our customers.

We hope you enjoy this new feature and will take the time to try it out.

From Sunny Phoenix,



Using the Office 365 Batch Operations API

As I was looking around for a way to batch certain operations with the Office 365 API the other day, I stumbled upon a Preview of just such a thing, called "Batch Outlook REST Requests (Preview)".  The fundamentals of how it works are fairly straightforward, but the documentation is completely lacking implementation details for those using .NET.  So, I decided to write a small sample application that demonstrates using this new API / feature / whatever you want to call it.

First, let's figure out why you might want to use this.  The most common reason is that you are doing a bunch of operations and don't want to go through the overhead of creating, establishing, and tearing down an HTTP session for each operation.  That overhead can add up quickly and burn up a lot of resources.  Now when I was first looking at this, I was also interested in how it might impact the throttling limits that Office 365 imposes.  It turns out I had a little misunderstanding of that, but fortunately Abdel B. and Venkat A. explained Exchange throttling to me, and so now I will share it with you.

My confusion about the impact batch operations might have on throttling was borne out of the fact that SharePoint Online has an API throttling limit that has been somewhat loosely defined as no more than 1 REST call per second over an extended time.  So…kind of specific, but also a little vague.  Exchange Online throttling is arguably even less specific, but they do have some good information about how to know when it happens and what to do about it.

In Exchange Online, different operations may have a different impact on the system, and the system may also be under load from other clients.  So when making REST API calls to Exchange Online, your code should account for getting a throttled response back.  A throttled response in Exchange Online returns a standard HTTP status code 429 (Too Many Requests).  The service also returns a Retry-After header with the number of seconds after which to resubmit the request.  Now that you know what a throttled response from Exchange Online looks like, you can develop your code to include a process for retry and resubmission.
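To make that concrete, here's a minimal sketch of a retry loop that honors the Retry-After header.  The helper name and the stub handler (which just simulates a throttled first response so the sketch is self-contained) are mine, not part of the batch API; the retry pattern is what matters:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class ThrottleDemo
{
    // Retries a request when the service returns 429, waiting as long as
    // the Retry-After header asks.  A fresh HttpRequestMessage is built for
    // each attempt because a request message cannot be sent twice.
    public static async Task<HttpResponseMessage> SendWithRetryAsync(
        HttpClient client, Func<HttpRequestMessage> makeRequest, int maxRetries = 3)
    {
        for (int attempt = 0; ; attempt++)
        {
            HttpResponseMessage resp = await client.SendAsync(makeRequest());
            if ((int)resp.StatusCode != 429 || attempt >= maxRetries)
                return resp;

            // Wait the number of seconds the service asked for (default 2).
            TimeSpan wait = resp.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(2);
            await Task.Delay(wait);
        }
    }

    // Illustrative stub: throttles the first call, then succeeds.
    public class StubHandler : HttpMessageHandler
    {
        int calls = 0;
        protected override Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken ct)
        {
            if (calls++ == 0)
            {
                var throttled = new HttpResponseMessage((HttpStatusCode)429);
                throttled.Headers.Add("Retry-After", "1");
                return Task.FromResult(throttled);
            }
            return Task.FromResult(new HttpResponseMessage(HttpStatusCode.OK));
        }
    }

    static async Task Main()
    {
        var client = new HttpClient(new StubHandler());
        var resp = await SendWithRetryAsync(client,
            () => new HttpRequestMessage(HttpMethod.Get, "https://example.invalid/"));
        Console.WriteLine((int)resp.StatusCode); // prints 200 after one retry
    }
}
```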

The batching feature lets you work around the overhead of multiple calls by allowing you to send in up to 20 operations in a single request.  That means 1 connection to create, establish and tear down instead of 20.  This is goodness.

The basic process of doing batch operations using this feature is to create what I’ll call a “container” operation.  In it, you will put all of the individual operations you want to perform against a particular mailbox.  Note that I said mailbox – this is important to remember for two reasons:  1) the batch feature only works today with Outlook REST APIs and 2) the individual operations should all target the same mailbox.  That makes sense as well when you consider that you have to authenticate to do these operations, and since they are all wrapped up in this “container” operation, you’re doing so in the context of that operation.

The "container" operation that I'm talking about is POST'ed to the $batch endpoint in Outlook.  The Url is hard-coded to the "beta" path for now because this API is still in preview.  In order to POST to the $batch endpoint you need to provide an access token in the authorization header, the same way you would if you were making each of the individual calls contained in your container operation.  I'm not going to cover the process of getting an access token in this post because it's not really in scope, but if you're curious you can just look at the sample code included with this post or search my blog for many posts on that type of topic.

While I’m not going to cover getting an access token per se, it’s important to describe one higher level aspect of your implementation, which is to create an application in your Azure Active Directory tenant.  Generally speaking, you don’t access an Office 365 REST API directly; instead, you create an application and configure it with the permissions you need to execute the various Outlook REST APIs you’ll be using.  In my case, I wanted to be able to read emails, send emails and delete emails, so in my application I selected the following permissions:


So with that background, here are the basic steps you’ll go through; I’ll include more details on each one below:

  1. If you don’t have an access token, go get one.
  2. Create your “container” operation – this is a MultipartContent POST.
  3. Create your individual operations – add each one to your MultipartContent.
  4. POST the “container” operation to the $batch endpoint.
  5. Enumerate the results for each individual operation.


Step 1 – Get an Access Token

As I described above, I’m not going to cover this in great detail here.  Suffice to say, you’ll need to create an application in Azure Active Directory as I briefly alluded to above.  As part of that, you’ll also need to do “standard Azure apps for Office 365” stuff in order to get an access token.  Namely, you’ll need to create a client secret, i.e. “Key”, and copy it along with the client ID to your client application in order to convert the access code you get from Azure into an AuthenticationResult, which contains the access token.  This assumes you are using ADAL; if you are not, then you’ll have your own process to get the access token.


Step 2 – Create Your Container Operation

The “container” operation is really just a MultipartContent object that you’ll POST to the $batch endpoint.  Unfortunately, there is scarce documentation on how to create these, which is in large part why I wrote this post.  The code to get you started though is just this simple:


//create a new batch ID
string batchId = Guid.NewGuid().ToString();

//create the multipart content that is used for a batch process
MultipartContent mpc = new MultipartContent("mixed", "batch_" + batchId);

The main thing to note here is just that each “container” operation requires a unique batch identifier.  A Guid is perfect for this, so that’s what I’m using to identify my batch operation.
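If you're curious what that actually produces, you can inspect the container's Content-Type header; the batch identifier becomes the MIME boundary that separates the individual operations on the wire.  A quick sketch:

```csharp
using System;
using System.Net.Http;

class BoundaryDemo
{
    static void Main()
    {
        string batchId = Guid.NewGuid().ToString();
        MultipartContent mpc = new MultipartContent("mixed", "batch_" + batchId);

        // The header the $batch endpoint will see, in the form:
        // multipart/mixed; boundary="batch_<guid>"
        Console.WriteLine(mpc.Headers.ContentType);
    }
}
```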


Step 3 – Create Individual Operations and Add to the Container Operation

The actual code you write here will vary somewhat, depending on what your operation is.  For example, a request to send an email message is going to be different from one to get a set of messages.  The basic set of steps though are similar:

  1. Create a new HttpRequestMessage. This is how you define whether the individual operation is a GET, a POST, or something else, what Url to use, etc.  Here's the code I used for the operation to send a new email:  HttpRequestMessage rqMsg = new HttpRequestMessage(HttpMethod.Post, BATCH_URI_BASE + "me/sendmail");  It's worth noting that you ALWAYS send your individual operations to the $batch endpoint to be included in the batch process.  For example, if you were using v2 of the Outlook API you would send a message to the v2 sendmail Url; to use the $batch endpoint, however, since it's in beta, you use the beta Url instead.
  2. Create the content for your operation. In my case I used a custom class I created to represent a mail message, I "filled it all out", and then I serialized it to a JSON string.  I then took my string and created the content for the operation, like this:  StringContent sc = new StringContent(msgData, Encoding.UTF8, "application/json");  So in this case I'm saying I want some string content that is encoded as UTF8 and whose content type is application/json.
  3. Add your content to the HttpRequestMessage: rqMsg.Content = sc;
  4. Wrap up your HttpRequestMessage in an instance of the HttpMessageContent class. Note that you'll need to add a reference to System.Net.Http.Formatting in order to use this class.  Here's what it looks like:  HttpMessageContent hmc = new HttpMessageContent(rqMsg);  We're doing this so that we can set the appropriate headers on this operation when it's executed as part of the batch.
  5. Set the headers on the HttpMessageContent object: hmc.Headers.ContentType = new MediaTypeHeaderValue("application/http"); and also hmc.Headers.Add("Content-Transfer-Encoding", "binary");  You now have a single operation that you can add to the "container" operation.
  6. Add your individual operation to the "container" operation: mpc.Add(hmc);  That's it – now just repeat these steps for each operation you want to execute in your batch.
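Pulled together, the steps above look something like this.  Note that BATCH_URI_BASE here is a placeholder for the beta Outlook API base Url the sample uses (the real value is in the attached code), and the message JSON is just a stand-in; HttpMessageContent still needs the System.Net.Http.Formatting reference mentioned in step 4:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

public class BatchBuildDemo
{
    // Placeholder only -- substitute the beta Outlook API base Url.
    const string BATCH_URI_BASE = "https://example.invalid/api/beta/";

    public static MultipartContent BuildBatch(string messageJson)
    {
        // The "container" operation with its unique batch identifier.
        var mpc = new MultipartContent("mixed", "batch_" + Guid.NewGuid());

        // Step 1: one individual operation -- a POST to me/sendmail.
        var rqMsg = new HttpRequestMessage(HttpMethod.Post, BATCH_URI_BASE + "me/sendmail");

        // Steps 2-3: JSON content for the operation.
        rqMsg.Content = new StringContent(messageJson, Encoding.UTF8, "application/json");

        // Steps 4-5: wrap it and set the batch-specific headers.
        var hmc = new HttpMessageContent(rqMsg);
        hmc.Headers.ContentType = new MediaTypeHeaderValue("application/http");
        hmc.Headers.Add("Content-Transfer-Encoding", "binary");

        // Step 6: add it to the container; repeat for up to 20 operations.
        mpc.Add(hmc);
        return mpc;
    }

    static void Main()
    {
        var mpc = BuildBatch("{\"Message\":{}}");
        Console.WriteLine(mpc.Headers.ContentType.MediaType); // prints multipart/mixed
    }
}
```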

Side note:  I realize some of this code may be difficult to follow when it's intertwined with comments like I've done above.  If you're getting squinty-eyed, just download the ZIP file that accompanies this post, and you can see all of the code end to end.


Step 4 – Post the Container Operation to the $Batch Endpoint

There’s not a lot to step 4.  You can just POST it now, but there’s one other point I want to make.  Your “container” operation may contain many individual operations.  There are a couple of points about that worth remembering.  First, the individual operations are not guaranteed to be performed in any specific order.  If you need them to be performed in a specific order, then either don’t do them in a batch or do them in separate batches.  Second, by default, at the point that any individual operation encounters an error, execution stops and no further operations in the batch will be executed.  However, you can override this behavior by setting a Prefer header in your “container” operation.  Here’s how you do that:

mpc.Headers.Add("Prefer", "odata.continue-on-error");

With that done (or not, depending on your requirements), you can go ahead and POST your “container” operation to the $batch endpoint, like this:

HttpResponseMessage hrm = await hc.PostAsync(BATCH_URI_BASE + "$batch", mpc);

With that done, it’s time to look at the results, which is covered in the next step.


Step 5 – Enumerate the Results for Each Individual Operation

At a high level, you can see if the overall batch operation worked the same way you would if it were just one operation:

if (hrm.IsSuccessStatusCode)

The important thing to understand though, is that even though the “container” POST may have worked without issue, one or more of the individual operations contained within may have had issues.  So how do you pull them all out to check?  Well, using the MultipartMemoryStreamProvider class is how I did it.  This is another class that requires a reference to System.Net.Http.Formatting in order to use, but you should already have it from the other steps above so that shouldn’t be a problem.

So we start out by getting all of the responses from each individual operation back like this:

MultipartMemoryStreamProvider responses = await hrm.Content.ReadAsMultipartAsync();

You can then enumerate over the array of HttpContent objects to look at the individual operations.  The code to do that looks like this:

for (int i = 0; i < responses.Contents.Count; i++)
{
    string results = await responses.Contents[i].ReadAsStringAsync();
}


It’s a little different from having an HttpResponseMessage for each one in that you have to do a little parsing.  For example, in my sample batch I sent two emails and then got the list of all of the emails in the inbox.  As I enumerate over the content for each one, here’s what ReadAsStringAsync returns for sending a message:

HTTP/1.1 202 Accepted

Okay, so you get to parse the return status code…should be doable.  It can get a little more cumbersome depending on the operation type.  For example, here’s what I got back when I asked for the list of messages in the inbox as part of the batch:

HTTP/1.1 200 OK

OData-Version: 4.0

Content-Type: application/json;odata.metadata=minimal;odata.streaming=true;IEEE754Compatible=false;charset=utf-8


Okay, so I trimmed a bunch of detail out of the middle there, but the gist is this – you would have to parse out your HTTP status code that was returned, and then parse out where your data begins.  Both quite doable, I just kind of hate having to do the 21st century version of screen scraping, but it is what it is.  The net is you can at least go look at each and every individual operation you submitted and figure out if they worked, retrieve and process data, etc.
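Since each individual result comes back as a raw HTTP message string like the ones above, a small helper is enough to do that "screen scraping".  The parsing rules here are my own sketch, based only on the response shapes shown above:

```csharp
using System;

public class BatchResponseParser
{
    // Pulls the status code out of the status line, e.g. "HTTP/1.1 202 Accepted".
    public static int ParseStatusCode(string rawResponse)
    {
        int nl = rawResponse.IndexOf('\n');
        string statusLine = (nl < 0 ? rawResponse : rawResponse.Substring(0, nl)).Trim();
        return int.Parse(statusLine.Split(' ')[1]);
    }

    // The body, if any, starts after the first blank line (the header/body separator).
    public static string ParseBody(string rawResponse)
    {
        int split = rawResponse.IndexOf("\r\n\r\n");
        if (split < 0) split = rawResponse.IndexOf("\n\n");
        return split < 0 ? "" : rawResponse.Substring(split).Trim();
    }

    static void Main()
    {
        string raw = "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n\r\n{\"value\":[]}";
        Console.WriteLine(ParseStatusCode(raw)); // prints 200
        Console.WriteLine(ParseBody(raw));       // prints {"value":[]}
    }
}
```

From there you can hand the extracted JSON to whatever deserializer you're already using for the non-batched calls.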


That's the short tour of using the Outlook batch API.  There are a handful of things you need to know about how it works and what its limitations are, and I've pointed out all of the ones I know about in this post.  The trickier part by far is understanding how to create a batch request using the .NET framework, as well as how to parse the results, and I covered both of those aspects here as well.

As I mentioned a few times in this post, I just zipped up my entire sample project and have attached it to this post so you can download it and read through it to your heart’s content.  It does contain the details specific to my application in Azure AD, so you’ll need to create your own and then update the values in the app if you want to run this against your own tenant.  The ZIP file with the code is below: