Office365Mon Now Offers Monitoring Out of Regional Data Centers Starting with Germany


Be in the know. Be in control

Office365Mon is now available to be run out of regional Azure data centers around the globe.  In addition to the US data centers, Office365Mon is available today via Germany data centers.


At Office365Mon.Com we’ve heard from customers of our Office 365 monitoring service that have data sovereignty restrictions placed upon them.  In some regions, government regulations require that a company’s data be stored in the region where the company operates.

In addition to that, today Office365Mon issues its health probes out of a US-based data center.  Customers can always (and in fact are encouraged to) augment that with our Distributed Probes and Diagnostics feature, which lets you issue probes from any location where you have users.  However, some folks were hoping to have our cloud-based probes issued from a data center closer to where their users live.

Today we’re happy to announce an option that solves both problems – you can now create Office365Mon subscriptions in regional data centers.  Our first data center is available now, and is located in Germany.  This means that not only is your subscription created in Germany and your health probes issued out of Germany, but all of the data we capture is also stored in that same German data center.  That allows you to meet data sovereignty requirements in that region, as well as potentially get cloud-based probes closer to your users.

One of the first things to call out is that all of the features you get from our US-based service, you will also find in our German-based service.  So no matter where you go, you will find feature parity.  If you have users spread across the globe and want to create Office365Mon subscriptions in different geographies, that’s okay too.  One of the cool things we do is pull together data from all of your subscriptions in every data center when we show you our Power BI reports.  For example, here’s a screenshot of the Outages report with data from every subscription in every region for our tenant:


Of course, one of the cool things about Power BI is that you also have a slew of filtering and report building features available to you.  Here’s a screenshot of that same report, only this time it’s filtered to show only data from the Office365Mon subscription that we have in Germany:


The net of this is that if you know Office365Mon today, it will look, work, and act exactly the same no matter what data center your subscription is located in.

The next question we expect to get of course is – that’s great, but what if I want Office365Mon in a different geography?  The answer is…that’s great!  Email us and let us know what you are interested in, and we can provide you with an idea of when it can be done.  As long as the different regions have parity in the Azure features that Office365Mon requires, it should be possible to deploy Office365Mon to virtually any regional Azure data center.

There’s only one real difference with using Office365Mon in a regional data center – you have to work with our support team to get going.  If you contact us at, we will give you all of the details about the on-boarding process, pricing, etc.

We know that this has been an important issue for many of our European friends and customers, so I hope you’ll take a few minutes to take a look at our PDF from our US site or our German site.  Of course, you can also visit our US site directly at, or our new German site at

Thanks as always for your great suggestions and ideas!
From Sunny Phoenix,


Now You Can Monitor OneDrive and Power BI with Office365Mon

Today we’ve released two new features that customers have been asking us about – monitoring for OneDrive for Business and Power BI.  These are just the latest set of services in an ever increasing stable of monitored features and services from Office365Mon.  Let’s take a quick look at each.

OneDrive for Business

Office365Mon has actually been able to monitor OneDrive for Business since our first release over two years ago.  However, many customers weren’t really aware of that and didn’t realize that when you configure monitoring for SharePoint Online, you could just tell it to go monitor a OneDrive site.  What we’ve done is to break OneDrive out as a separate monitored item on your Office365Mon subscription.  This gets you a couple of things – first, now you can monitor both a SharePoint Online site and a OneDrive site with the same subscription.  Second, it also allows us to give you a more granular breakdown of both performance and availability with your SharePoint Online tenant.  For example, here’s a report with the average performance across all Office365Mon customers for the primary Office 365 Services:


The yellow bar shows what performance is like just on OneDrive sites vs. SharePoint Online sites across all of our customers.  Of course, you’ll also be able to see what performance is like just for your tenant in OneDrive.  We think breaking OneDrive out in this way will give you an even deeper view into the health and performance of your tenant, which is a great thing.

Power BI in Office 365

The second feature we added is brand new, and something that we’ve heard customers ask us about for a while now.  Power BI is increasingly becoming a critical part of the toolset that customers are using, so of course it’s important to know how well it’s performing and when it goes down.  Our new monitoring for Power BI does all of that, just as you’ve come to expect from Office365Mon.  The configuration of Power BI monitoring is drop dead simple, which is also something you’ve come to expect from us.  You literally click a single button – that’s it – as shown in this screenshot below:


Once you’ve enabled it, health probes begin a minute or two later; can’t get much easier than that!  Once we start collecting the data, details on performance and availability of Power BI show up in all of the same reports that you’ve already been using.  For example, here’s a screenshot of the recent health probes report, showing you the performance of your monitored resources:


As you can see, you get the performance of each individual resource being monitored – including both Power BI and OneDrive – overlaid on a bar chart showing the average performance for Exchange Online, SharePoint Online, OneDrive and Power BI across all Office365Mon customers.  Our goal is always to show you as much relevant data as possible and then give you the opportunity to drill down and filter as best suits your requirements.  Also, like all of our reports, you can add this to your own corporate intranet for anyone you want to be able to see it, by using our Dashboard Reports feature.  Meanwhile, should Power BI go down, you can take comfort in the fact that you’ll get all of the same notifications from us as you do today for all of your other monitored services.  Power BI is just another member of “the team” now, and is monitored just like all of the other features and services we’re keeping an eye on for you.


Try it Out

Both the OneDrive and Power BI monitoring features are available for you to try out now.  All existing Office365Mon customers as well as all new customers will have access to these features for 90 days.  New customers can go create an Office365Mon subscription at  No payment information is required up front, and if you choose not to use it there’s nothing you need to do – after 90 days we just stop monitoring stuff for you.

For all Office365Mon paid subscriptions, we will continue to monitor OneDrive for you.  Power BI monitoring requires the Enterprise Platinum license.  If you have that, monitoring Power BI will continue for you; if you don’t have that license, you can upgrade to it at any time by going to our Payments page at


Thanks Again

As always, thanks again for the many terrific suggestions you all have provided.  We’re always looking for ways in which we can do a better job meeting your needs, and the best way to do that is for you to let us know what’s important to you.  Office365Mon really has a wide range of monitoring features now, so I hope if you haven’t tried it yet, or maybe if it’s been a while since you’ve looked at it, you’ll come by and take it out for a spin for 90 days.  We think you’ll find that it’s well worth the 2 minutes of your time to get things set up and going.


From Sunny Phoenix,


Monitoring Exchange Online Errors You Might Not Otherwise Notice with Office365Mon

We introduced monitoring for the email transport for Office 365 customers back in February of this year after strong demand from our customers (  We had provided monitoring of the email service itself since Day 1, but if your mailbox is up and the messages you send don’t go out, or the messages people send you don’t make it in, then email is of pretty limited value.  As we moved the email transport monitoring service into production and began gathering more and more data, we started finding a few common errors that many tenants were experiencing, but which are innocuous enough that you might not even know they are happening.

We decided to create some new reports around the email transport monitoring feature to help customers understand better when these situations occur.  Sometimes they point out problems that customers can resolve themselves, and other times it’s just good to understand better when and where the hiccups are in your email service.  In a nutshell, as we capture more data around errors in the transport, we are bubbling up the common errors into these new reports so you can see for yourself when they happen and how frequently they happen.

If you go into the Advanced Reports gallery on our site at, there are new “Recent Email Transport Errors” and “Monthly Email Transport Errors” reports.  The Recent Email Transport Errors report is a simple tabular list of the 100 most recent instances of these common errors described above.  Here’s a sample report:


As you can see, we indicate whether the issue was with an inbound or outbound email, when it happened (most recent first), and what the problem was.  This report has already paid off big for one of our customers, because with the information in it they were able to determine that they had an old MX record in DNS, and that MX record pointed to a server that was no longer available.  This showed up in our reports every time that server was selected for us to send mail to it, and as a result they were able to clean that up in their environment.  Some of the other common problems we see include:  the Outlook API service being temporarily unavailable; the anti-spam features incorrectly marking an outbound message as spam, so it gets stuck in the Drafts folder (this happens most frequently with email accounts, by the way); and requests to the service being unauthorized (which could mean your access token has expired or Azure Active Directory is temporarily unavailable).  Some things you may be able to fix yourself; for others, it’s just good information to have so you are aware of how well your transport features are performing.

The Monthly Email Transport Errors report helps you stay on top of this by presenting a bar chart with a count of each type of error, so you can see how frequently each type of error is occurring in your tenant each month.  Here’s an example of that:


At Office365Mon our position is you can never have too much good information.  Based on the early results of this reporting, we’re already seeing good outcomes and actionable data for our customers.  If you aren’t set up for Office 365 monitoring yet, please visit our web site at to get started.  Once you’ve configured basic monitoring, you can turn on email transport monitoring at to get these reports yourself.  We’ll be continuing to mine through the data and expand out the collection of common errors as we see them.

As always, please feel free to send us your feedback at and thank you for all of the great ideas you’ve sent us already.

From Sunny Phoenix,


Monitoring Large Lists in SharePoint Online with Office365Mon

One of the enduring realities I saw over and over in my many years working at Microsoft with SharePoint is that customers LOVE the lists they can create in SharePoint.  They’re super easy to set up, you can quickly do all kinds of sorting, filtering and searching on them, and it requires no technical expertise to get up and running with them.  This led to another enduring reality, which is that many customers started loving lists TOO much.  I saw many, many customers over the years that had lists that had just exploded in size.  As these lists grew larger, the performance of using them tended to get worse and worse.  This problem was also compounded by the fact that many developers saw SharePoint lists as a quick and easy place to store data for their application.  That meant even bigger list sizes, and more people using them more often.

Over the years, we developed lots of documentation and options for improving the performance of your lists.  As customers have moved to SharePoint Online in Office 365 though, we would occasionally hear people ask if it had the same large list limitations as SharePoint on premises does…and the answer is yes, it does.  Now as more customers are moving their SharePoint on premises data to SharePoint Online, we see increasing concern about how the lists they do have are going to perform once it’s all up in Office 365.  Fortunately, at Office365Mon, we’ve just released a new feature designed specifically to help you stay on top of this issue.

List Monitoring is a feature that lets you select one or more lists in SharePoint Online for us to monitor.  For the lists that we’re monitoring, we will do a couple of things:  first, we’ll issue health probes for each list that we’re monitoring and render the default view for it to see what the performance is like.  That’s typically one of the first places where you’ll see performance issues with a large list.  You can configure List Monitoring so that it will send you a notification if it takes longer than “x” seconds to render the default view, where “x” is a number that you decide.

The second thing we’ll do is keep tabs on how large the list is, i.e. how many items it contains.  Again, you can set a threshold for us to look for, and when a monitored list gets bigger than that threshold, we’ll send you a notification to alert you to it.  So, for example, if you’re worried about a large list approaching that magic 5,000 item limit, you can have us notify you when it’s getting close.  Here’s a screen shot of where you configure the monitoring thresholds:


Selecting the lists to be monitored is also quite simple – we provide you with a collection of all of the lists in the SharePoint Online site that we’re monitoring, and you can just check boxes next to the lists you want us to monitor for you.  It can be any of the lists that come out of the box with SharePoint, or any custom list that you’ve created:


Once we’ve started monitoring lists for you, not only will we notify you according to the thresholds you’ve configured, but as you’ve come to expect from Office365Mon, we also have a nice set of reports you can use to see where you stand.  To begin with, you can see the performance of the recent health probes we’ve issued against monitored lists in our Average Response Time report.  It shows the performance of all of the resources that we’re monitoring for you, including monitored lists.  We also have a new report that shows you the average performance each month just for your monitored lists:


In addition to that, we have a report that shows you the size of your monitored lists each day, so you can visualize any growth trends that might be happening that you need to get in front of:


We also provide a monthly view of the average size of each monitored list, so you have a longer-term view of how rapidly your lists are growing:


Being aware of large lists and their impact on performance is one of the best ways to ensure a good experience for your users.  I’ve heard many, many times from customers that say “our site is slow”.  There are lots of reasons why that might be, but a couple of the most common reasons are slow query times and large lists.  At Office365Mon we’ve provided monitoring for your query execution time for nearly a year now.  With the new List Monitoring feature, now you can also know when you have large list performance problems.  Once you know that, you can start working on a mitigation strategy – splitting the data out into multiple lists, creating customized views of the data, etc., etc., etc.  There are a lot of different things you can do to work on improving the performance, but if you don’t know you have a problem then you’ll forever be stuck wondering why your users keep telling you that your “site is slow”.  Take advantage of features like these to stay in the know and stay in control of your Office 365 tenant, and keep everyone happy.  Start by visiting us at and clicking the Configure…List Monitoring menu.

This is yet another feature at Office365Mon that was driven from feedback by our customers.  I hope you’ll take a look at this feature and as always, let us know how we can make it better as well as ways in which we might be able to help you to do Office 365 monitoring even better.

From Sunny Phoenix,



Migrating an Office Task Pane App to a Toolbar Add In

If you’re one of those less-informed people, like myself apparently, you may have recently received an email or otherwise found out that the Office task pane app you published in the Office Store a while ago is no longer in compliance.  It is now a requirement that any task pane apps that were previously submitted – back when that was the only app form factor available – be refactored to support the Add In toolbar model.  This announcement was easy to miss since it only appeared, rather randomly, once on some Office team blog.  Rather ironic given that this comes from the team that provides so many office communications tools.

But I digress…once you find out you need to update your app to support the new model (as well as the old model, since not every single customer is on Office 365 and/or Office 2016), how do you do it?  Unfortunately the fine folks that made this policy change neglected to provide a guide on how to do so.  They merely provide links to very general and fairly vague documentation, so since I just went through this process I thought I would capture it for others who may need to do the same thing.

Here are the steps I went through to move my Outlook task pane app to the Add In toolbar model.

Step 1 – Get the latest version of the Office Tools for Visual Studio

The actual location to get this may vary depending on your locale and version of Visual Studio, so I’ll just leave it at “go get it and install it”.

Step 2 – Start Working with Your Application’s Manifest File

Interestingly enough, once the latest tools are installed, the tooling has actually gone a step backwards.  When I double-click on my app manifest in Visual Studio 2015 now, it no longer opens the visual designer that lets you set attributes in your manifest.  Instead it just opens the underlying Xml file and you get to edit Xml by hand.  I could swear I’ve seen this before…oh wait – I did – like 15 years ago.  Visual Studio does at least provide some editing advantages over Notepad, so there is a certain amount of forward progress.

Step 3 – Update the top-level OfficeApp Element

The main change to the manifest file is adding a VersionOverrides node and populating it with the correct data.  Note that you can keep all of your existing content in your Xml file, which you’ll actually want to do to ensure that your app is supported on clients that do not have Office 2016.  In order to add the VersionOverrides node though, you first need to update the top-level OfficeApp element to add some namespaces that are needed to support the content in VersionOverrides.  Here’s an example of my new OfficeApp element:
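For an Outlook add-in it takes roughly this shape – treat this as a sketch rather than my literal manifest; the schema namespaces are the standard ones, and the child content is elided:

```xml
<OfficeApp xmlns="http://schemas.microsoft.com/office/appforoffice/1.1"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:bt="http://schemas.microsoft.com/office/officeappbasictypes/1.0"
           xmlns:mailappor="http://schemas.microsoft.com/office/mailappversionoverrides"
           xsi:type="MailApp">
  <!-- all of your existing manifest content stays here -->
</OfficeApp>
```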


One important point to note:  my element includes the “mailappor” namespace and is of type “MailApp”.  If your app is not an Outlook add-in, then the values you will need to add will be different.  While I could try and point you to some documentation, I actually recommend doing this to make sure you have it exactly right (I found the documentation to be a bit painful):  just create a new project in Visual Studio for an Office Add In of the same type as the one you are updating.  Then you can grab the correct namespaces and types from the Xml file for the manifest it creates.  Much quicker and easier than searching for and through sketchy documentation.

Step 4 – Add the VersionOverrides Node at the END of Your Existing Content

Here’s something that isn’t at all obvious – you can’t just add the VersionOverrides node anywhere in your app’s manifest.  You need to add it at the end – after any Rules you may have (i.e., after your Rule node), which in turn comes after your Permissions node.  Your VersionOverrides node may be perfectly formed, but if you place it any higher in the manifest, you will get an Xml validation error and your life will suck.
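As a sketch, the required ordering looks like this (the Permissions value and Rule attributes are illustrative – yours will match your existing manifest):

```xml
<OfficeApp xmlns="http://schemas.microsoft.com/office/appforoffice/1.1"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:type="MailApp">
  <!-- Id, Version, DisplayName, Description, Hosts, FormSettings, etc. -->
  <Permissions>ReadWriteMailbox</Permissions>
  <Rule xsi:type="ItemIs" ItemType="Message" FormType="Read"/>
  <!-- VersionOverrides goes here, at the very end -->
</OfficeApp>
```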

Step 5 – Add Your VersionOverrides Node and Start Editing

For this step, I’m just going to paste in a big bunch of Xml and then start talking through what it is and how it’s all linked together.  So to begin, here’s the Xml – both some stuff from my original manifest file and the new VersionOverrides node:


Okay, let’s start breaking some of this down.

Step 5.1 – The VersionOverrides Node

My VersionOverrides node looks like this:
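For an Outlook Add In it is along these lines – a sketch, but the namespace shown is the standard mail add-in VersionOverrides schema:

```xml
<VersionOverrides xmlns="http://schemas.microsoft.com/office/mailappversionoverrides"
                  xsi:type="VersionOverridesV1_0">
  <!-- Requirements, Hosts, and Resources all go inside here -->
</VersionOverrides>
```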


The main thing I want to point out here is the namespace.  Again, my app is an Add In for Outlook, so it uses that namespace.  If your app is not for Outlook, it will be different.  Again, look at the content in the sample application you created above to get the correct value.

Step 5.2 – Set All of the Common Property Values

Within the VersionOverrides node, there’s a chunk of properties that will likely be used in different form factors – for example reading a message, composing a message, reading an appointment, etc.   So let’s look at this chunk of Xml:
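Roughly, it looks like the following sketch – the URLs and strings are placeholders, and the bt prefix maps to the http://schemas.microsoft.com/office/officeappbasictypes/1.0 namespace declared on the OfficeApp element:

```xml
<Resources>
  <bt:Images>
    <bt:Image id="icon16" DefaultValue="https://contoso.com/images/icon16.png"/>
    <bt:Image id="icon32" DefaultValue="https://contoso.com/images/icon32.png"/>
    <bt:Image id="icon80" DefaultValue="https://contoso.com/images/icon80.png"/>
  </bt:Images>
  <bt:Urls>
    <bt:Url id="functionFile" DefaultValue="https://contoso.com/index.html"/>
    <bt:Url id="messageReadTaskPaneUrl" DefaultValue="https://contoso.com/AppRead/Home.html"/>
  </bt:Urls>
  <bt:ShortStrings>
    <bt:String id="groupLabel" DefaultValue="My Add In"/>
    <bt:String id="paneReadButtonLabel" DefaultValue="Open Add In"/>
    <bt:String id="paneReadSuperTipTitle" DefaultValue="Open My Add In"/>
  </bt:ShortStrings>
  <bt:LongStrings>
    <bt:String id="paneReadSuperTipDescription" DefaultValue="Opens the add in task pane."/>
  </bt:LongStrings>
</Resources>
```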


As you can see, these are the resources for your app – icons, icon titles, tool tips, that sort of thing.  You’ll notice that you need icons that are 16×16, 32×32 and 80×80.  Add those to your Add In’s web site and then plug in the Url.

For the functionFile I cheated a bit – I don’t have any UI-less triggers, so I just plugged in the home page for my Add In’s site.  If you have a javascript file for this, then you would put its Url here.

The ShortStrings and LongStrings section are where you put the strings that will be used for the icon titles and tool tips.  Plug in whatever is appropriate for your app.

Now, here’s an important one – look at the item with the id “messageReadTaskPaneUrl” – THIS is where you plug in the Url that you used before in your original task pane app.  For example, I’m going to paste in side by side here a couple of chunks of Xml from above.  The first one is for my original task pane definition, and the second one is my messageReadTaskPaneUrl node:
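As a sketch, with contoso.com standing in for the real site, the pairing looks like this:

```xml
<!-- original task pane app definition -->
<DesktopSettings>
  <SourceLocation DefaultValue="https://contoso.com/AppRead/Home.html"/>
</DesktopSettings>

<!-- new resource entry inside VersionOverrides -->
<bt:Url id="messageReadTaskPaneUrl" DefaultValue="https://contoso.com/AppRead/Home.html"/>
```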


Hey look – the Urls are the same!  Finally, starting to make some sense.

Here’s the big thing that is not clearly called out in the documentation – look at the id’s of all of the elements in the resources section.  You’re going to plug these same id’s in when you declare the various form factors your application supports – such as message read, message compose, etc.  Let’s see how that works next.

Step 5.3 – Add Your Form Factor Implementations

This step is what ties it all together.  You create a definition for each form factor you want to support, like Message Read, Message Compose, etc.  I’m going to paste in the Xml for one of my form factors below and then talk through it:
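A representative Message Read definition looks like this – a sketch in which the element ids are illustrative, but the resid values match the resources section described above:

```xml
<Hosts>
  <Host xsi:type="MailHost">
    <DesktopFormFactor>
      <FunctionFile resid="functionFile"/>
      <ExtensionPoint xsi:type="MessageReadCommandSurface">
        <OfficeTab id="TabDefault">
          <Group id="msgReadGroup">
            <Label resid="groupLabel"/>
            <Control xsi:type="Button" id="msgReadOpenPaneButton">
              <Label resid="paneReadButtonLabel"/>
              <Supertip>
                <Title resid="paneReadSuperTipTitle"/>
                <Description resid="paneReadSuperTipDescription"/>
              </Supertip>
              <Icon>
                <bt:Image size="16" resid="icon16"/>
                <bt:Image size="32" resid="icon32"/>
                <bt:Image size="80" resid="icon80"/>
              </Icon>
              <Action xsi:type="ShowTaskpane">
                <SourceLocation resid="messageReadTaskPaneUrl"/>
              </Action>
            </Control>
          </Group>
        </OfficeTab>
      </ExtensionPoint>
    </DesktopFormFactor>
  </Host>
</Hosts>
```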


Let’s look at some of these individual elements now:

  • ExtensionPoint xsi:type – this is what defines the form factor, in this case Message Read
  • Label resid=”groupLabel” – this corresponds to the ShortString I have in the resources section with the id “groupLabel”
  • Label resid=”paneReadButtonLabel” – this corresponds to the ShortString I have in the resources section with the id “paneReadButtonLabel”
  • Title resid=”paneReadSuperTipTitle” – you probably get this by now – corresponds to the id in ShortStrings
  • Description resid=”paneReadSuperTipDescription” – same pattern, except this id comes from the LongStrings section. What you’ll see is the icons you’ve defined show up in the toolbar (in the Outlook 2016 client) or in the Outlook Web Access UI.  In the toolbar the icon will have the title I put in the “paneReadSuperTipTitle”, and when you hover over it you will get a tooltip that contains the paneReadSuperTipDescription.
  • Icon node – you’ll see the three different icons, with each one pointing to the different id’s that were added in the resource section.
  • Action – this is the money node – the SourceLocation has a resid value that points to the id in our Resources section with the Url that we used in our task pane app; in my example that’s

That’s really it – once you understand how all the pieces relate to each other in the manifest file, then it’s really pretty simple to add other form factors (or ExtensionPoint nodes as they’re called in the Xml).  After that, you just need to click the Publish item on the shortcut menu for your App project and click the Package the add-in button.  It’s a little silly because all it does is open a window with your manifest’s Xml file.  Then you can click the button at the bottom of the page to Visit the Seller Dashboard, update your manifest, and you’re done!  Just wait for it to get through the validation process successfully and your Office Store application will be updated.

That’s it – hopefully you can use this guide to walk through the process of updating your own applications.  I’ve uploaded my own manifest file so you can see the entire thing and hopefully it will help you see how it all plugs in together – you can get it from!AEFcZajG99GWlIYw.  In case it doesn’t automatically download for you, look for the file. Good luck!!


Using the Office 365 Batch Operations API

As I was looking around for a way to batch certain operations with the Office 365 API the other day, I stumbled upon a Preview of just such a thing, called “Batch Outlook REST Requests (Preview)” –  The fundamentals of how it works are fairly straightforward, but the documentation is completely lacking implementation details for those using .NET.  So, I decided to write a small sample application that demonstrates using this new API / feature / whatever you want to call it.

First, let’s figure out why you might want to use this.  The most common reason is that you are doing a bunch of operations and don’t want to go through the overhead of creating, establishing, and tearing down an HTTP session for each operation.  That can slow things down quickly and burn up a lot of resources.  Now when I was first looking at this, I was also interested in how it might impact the throttling limits that Office 365 imposes.  It turns out I had a little misunderstanding of that, but fortunately Abdel B. and Venkat A. explained Exchange throttling to me, and so now I will share it with you.

My confusion about the impact batch operations might have on throttling was born of the fact that SharePoint Online has an API throttling limit that has been somewhat loosely defined as no more than 1 REST call per second over an extended time.  So…kind of specific, but also a little vague.  Exchange Online throttling is arguably even less specific, but they do have some good information about how to know when it happens and what to do about it.

In Exchange Online, different operations may have a different impact on the system, and they may also be impacted by demands from other clients.  So when making REST API calls to Exchange Online, your code should account for getting a throttling response back.  A throttled response in Exchange Online returns a standard http status code 429 (Too Many Requests).  The service also returns a Retry-After header with the number of seconds to wait before resubmitting the request.  Now that you know what a throttled response from Exchange Online looks like, you can develop your code to include a process for retry and resubmission.
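A retry helper along these lines is one way to handle it – a sketch using HttpClient, where the attempt cap and the fallback delay are my own choices rather than anything the service mandates:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ThrottleSketch
{
    // Retry a request when Exchange Online returns 429, honoring the
    // Retry-After header the service sends back with the response.
    static async Task<HttpResponseMessage> SendWithRetryAsync(
        HttpClient client, Func<HttpRequestMessage> makeRequest, int maxAttempts = 3)
    {
        HttpResponseMessage response = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++)
        {
            // an HttpRequestMessage can't be re-sent, so build a fresh one each time
            response = await client.SendAsync(makeRequest());
            if ((int)response.StatusCode != 429)
                return response;

            // wait however long the service asked us to before trying again
            TimeSpan delay = response.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(5);
            await Task.Delay(delay);
        }
        return response;
    }
}
```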

The batching feature lets you work around the overhead of multiple calls by allowing you to send in up to 20 operations in a single request.  That means 1 connection to create, establish and tear down instead of 20.  This is goodness.

The basic process of doing batch operations using this feature is to create what I’ll call a “container” operation.  In it, you will put all of the individual operations you want to perform against a particular mailbox.  Note that I said mailbox – this is important to remember for two reasons:  1) the batch feature only works today with Outlook REST APIs and 2) the individual operations should all target the same mailbox.  That makes sense as well when you consider that you have to authenticate to do these operations, and since they are all wrapped up in this “container” operation, you’re doing so in the context of that operation.

The “container” operation that I’m talking about is POST’ed to the $batch endpoint in Outlook:$batch.  The Url is hard-coded to the “beta” path for now because this API is still in preview.  In order for you to POST to the $batch endpoint you need to provide an access token in the authorization header, the same way as you would if you were making each of the individual calls contained in your container operation.  I’m not going to cover the process of getting an access token in this post because it’s not really in scope, but if you’re curious you can just look at the sample code included with this post or search my blog for many posts on that type of topic.

While I’m not going to cover getting an access token per se, it’s important to describe one higher level aspect of your implementation, which is to create an application in your Azure Active Directory tenant.  Generally speaking, you don’t access an Office 365 REST API directly; instead, you create an application and configure it with the permissions you need to execute the various Outlook REST APIs you’ll be using.  In my case, I wanted to be able to read emails, send emails and delete emails, so in my application I selected the following permissions:


So with that background, here are the basic steps you’ll go through; I’ll include more details on each one below:

  1. If you don’t have an access token, go get one.
  2. Create your “container” operation – this is a MultipartContent POST.
  3. Create your individual operations – add each one to your MultipartContent.
  4. POST the “container” operation to the $batch endpoint.
  5. Enumerate the results for each individual operation.
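The steps above, sketched end to end in C# – the $batch address is the preview endpoint mentioned earlier, while the access token, the inner request, and the choice of operation are illustrative (HttpMessageContent and ReadAsMultipartAsync require a reference to System.Net.Http.Formatting):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class BatchSketch
{
    static async Task RunBatchAsync(string accessToken)
    {
        // 2. create the "container" operation - a multipart/mixed POST
        string batchId = Guid.NewGuid().ToString();
        MultipartContent mpc = new MultipartContent("mixed", "batch_" + batchId);

        // 3. create an individual operation and add it to the container;
        // the inner Url uses the beta base path since $batch is in preview
        HttpRequestMessage getMsgs = new HttpRequestMessage(HttpMethod.Get,
            "https://outlook.office.com/api/beta/me/messages?$top=5");
        mpc.Add(new HttpMessageContent(getMsgs));

        // 4. POST the container to the $batch endpoint with your access token
        HttpClient client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);
        HttpResponseMessage batchResponse = await client.PostAsync(
            "https://outlook.office.com/api/beta/$batch", mpc);

        // 5. the response is itself multipart - enumerate each inner result
        MultipartMemoryStreamProvider parts =
            await batchResponse.Content.ReadAsMultipartAsync();
        foreach (HttpContent part in parts.Contents)
        {
            Console.WriteLine(await part.ReadAsStringAsync());
        }
    }
}
```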


Step 1 – Get an Access Token

As I described above, I’m not going to cover this in great detail here.  Suffice to say, you’ll need to create an application in Azure Active Directory as I briefly alluded to above.  As part of that, you’ll also need to do “standard Azure apps for Office 365” stuff in order to get an access token.  Namely, you’ll need to create a client secret, i.e. “Key”, and copy it along with the client ID to your client application in order to convert the authorization code you get from Azure into an AuthenticationResult, which contains the access token.  This assumes you are using ADAL; if you are not, then you’ll have your own process to get the access token.


Step 2 – Create Your Container Operation

The “container” operation is really just a MultipartContent object that you’ll POST to the $batch endpoint.  Unfortunately, there is scarce documentation on how to create these, which is in large part why I wrote this post.  The code to get you started though is just this simple:


//create a new batch ID

string batchId = Guid.NewGuid().ToString();

//create the multipart content that is used for a batch process

MultipartContent mpc = new MultipartContent("mixed", "batch_" + batchId);

The main thing to note here is just that each “container” operation requires a unique batch identifier.  A Guid is perfect for this, so that’s what I’m using to identify my batch operation.


Step 3 – Create Individual Operations and Add to the Container Operation

The actual code you write here will vary somewhat, depending on what your operation is.  For example, a request to send an email message is going to be different from one to get a set of messages.  The basic set of steps though are similar:

  1. Create a new HttpRequestMessage. This is going to be how you define whether the individual operation is a GET, a POST, or something else, what Url to use, etc.  Here's the code I used for the operation to send a new email:  HttpRequestMessage rqMsg = new HttpRequestMessage(HttpMethod.Post, BATCH_URI_BASE + "me/sendmail");  It's worth noting that you ALWAYS send your individual operations to the $batch endpoint to be included in the batch process.  For example, if you were using v2 of the Outlook API on its own, you would send a message via the v2.0 version of the "me/sendmail" Url; however, to use the $batch endpoint, since it's in beta, you use the beta version of that Url instead.
  2. Create the content for your operation. In my case I used a custom class I created to represent a mail message, I "filled it all out", and then I serialized it to a JSON string.  I then took my string and created the content for the operation, like this:  StringContent sc = new StringContent(msgData, Encoding.UTF8, "application/json");  So in this case I'm saying I want some string content that is encoded as UTF8 and whose content type is application/json.
  3. Add your content to the HttpRequestMessage:  rqMsg.Content = sc;
  4. Wrap up your HttpRequestMessage into an instance of the HttpMessageContent class. Note that you'll need to add a reference to System.Net.Http.Formatting in order to use this class.  Here's what it looks like:  HttpMessageContent hmc = new HttpMessageContent(rqMsg);  We're doing this so that we can set the appropriate headers on this operation when it's executed as part of the batch.
  5. Set the headers on the HttpMessageContent object:  hmc.Headers.ContentType = new MediaTypeHeaderValue("application/http"); and also hmc.Headers.Add("Content-Transfer-Encoding", "binary");  You now have a single operation that you can add to the "container" operation.
  6. Add your individual operation to the "container" operation:  mpc.Add(hmc);  That's it – now just repeat these steps for each operation you want to execute in your batch.
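Pulled out of the prose, the steps above can be sketched end to end like this. BATCH_URI_BASE is assumed to be the beta Outlook endpoint root, and msgData stands in for the JSON-serialized mail message from my custom message class:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

const string BATCH_URI_BASE = "https://outlook.office.com/api/beta/";
string msgData = "{\"Message\":{\"Subject\":\"test\"}}";  //placeholder JSON

//the "container" operation from step 2 of the overall process
MultipartContent mpc = new MultipartContent("mixed", "batch_" + Guid.NewGuid().ToString());

//1. define the individual operation - a POST to send a mail message
HttpRequestMessage rqMsg = new HttpRequestMessage(HttpMethod.Post, BATCH_URI_BASE + "me/sendmail");

//2. create the JSON content for the operation
StringContent sc = new StringContent(msgData, Encoding.UTF8, "application/json");

//3. add the content to the request
rqMsg.Content = sc;

//4. wrap the request in an HttpMessageContent
//   (requires a reference to System.Net.Http.Formatting)
HttpMessageContent hmc = new HttpMessageContent(rqMsg);

//5. set the headers the batch endpoint expects on each operation
hmc.Headers.ContentType = new MediaTypeHeaderValue("application/http");
hmc.Headers.Add("Content-Transfer-Encoding", "binary");

//6. add the operation to the "container" operation
mpc.Add(hmc);
```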

Side note:  I realize some of this code may be difficult to follow when it's intertwined with comments like I've done above.  If you're getting squinty-eyed, just download the ZIP file that accompanies this post, and you can see all of the code end to end.


Step 4 – Post the Container Operation to the $Batch Endpoint

There's not a lot to step 4.  You can just POST the "container" now, but there are a couple of points about batches worth remembering first.  First, the individual operations are not guaranteed to be performed in any specific order.  If you need them to be performed in a specific order, then either don't do them in a batch or do them in separate batches.  Second, by default, at the point that any individual operation encounters an error, execution stops and no further operations in the batch will be executed.  However, you can override this behavior by setting a Prefer header on your "container" operation.  Here's how you do that:

mpc.Headers.Add("Prefer", "odata.continue-on-error");

With that done (or not, depending on your requirements), you can go ahead and POST your “container” operation to the $batch endpoint, like this:

HttpResponseMessage hrm = await hc.PostAsync(BATCH_URI_BASE + "$batch", mpc);

With that done, it’s time to look at the results, which is covered in the next step.


Step 5 – Enumerate the Results for Each Individual Operation

At a high level, you can see if the overall batch operation worked the same way you would if it were just one operation:

if (hrm.IsSuccessStatusCode)

The important thing to understand though, is that even though the “container” POST may have worked without issue, one or more of the individual operations contained within may have had issues.  So how do you pull them all out to check?  Well, using the MultipartMemoryStreamProvider class is how I did it.  This is another class that requires a reference to System.Net.Http.Formatting in order to use, but you should already have it from the other steps above so that shouldn’t be a problem.

So we start out by getting all of the responses from each individual operation back like this:

MultipartMemoryStreamProvider responses = await hrm.Content.ReadAsMultipartAsync();

You can then enumerate over the array of HttpContent objects to look at the individual operations.  The code to do that looks like this:

for (int i = 0; i < responses.Contents.Count; i++)
{
    string results = await responses.Contents[i].ReadAsStringAsync();
    //examine the result of this individual operation here
}


It’s a little different from having an HttpResponseMessage for each one in that you have to do a little parsing.  For example, in my sample batch I sent two emails and then got the list of all of the emails in the inbox.  As I enumerate over the content for each one, here’s what ReadAsStringAsync returns for sending a message:

HTTP/1.1 202 Accepted

Okay, so you get to parse the return status code…should be doable.  It can get a little more cumbersome depending on the operation type.  For example, here’s what I got back when I asked for the list of messages in the inbox as part of the batch:

HTTP/1.1 200 OK

OData-Version: 4.0

Content-Type: application/json;odata.metadata=minimal;odata.streaming=true;IEEE754Compatible=false;charset=utf-8


Okay, so I trimmed a bunch of detail out of the middle there, but the gist is this – you would have to parse out your HTTP status code that was returned, and then parse out where your data begins.  Both quite doable, I just kind of hate having to do the 21st century version of screen scraping, but it is what it is.  The net is you can at least go look at each and every individual operation you submitted and figure out if they worked, retrieve and process data, etc.
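If it helps, here's a tiny sketch of that parsing. ParseStatusCode is just a helper name I made up for illustration, not part of any API:

```csharp
using System;

public static class BatchResponseHelper
{
    //pull the HTTP status code off the first line of the raw response
    //text that ReadAsStringAsync returns, e.g. "HTTP/1.1 202 Accepted"
    public static int ParseStatusCode(string rawResponse)
    {
        string firstLine = rawResponse.Split('\n')[0].Trim();
        string[] parts = firstLine.Split(' ');
        return int.Parse(parts[1]);
    }
}
```

From there you can decide per operation whether to retry, log an error, or go on to parse out the JSON body that follows the headers.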


That's the short tour of using the Outlook batch API.  There are a handful of things you need to know about how it works and what its limitations are, and I've pointed out all of the ones I know about in this post.  The trickier part by far is understanding how to create a batch request using the .NET framework, as well as how to parse the results, and I covered both of those aspects as well.

As I mentioned a few times in this post, I just zipped up my entire sample project and have attached it to this post so you can download it and read through it to your heart’s content.  It does contain the details specific to my application in Azure AD, so you’ll need to create your own and then update the values in the app if you want to run this against your own tenant.  The ZIP file with the code is below:


The New Azure Converged Auth Model and Office 365 APIs

Microsoft is currently working on a new authentication and authorization model that is simply referred to as “v2” in the docs that they have available right now (UPDATE:  they prefer the “v2 auth endpoint”). I decided to spend some time taking a look around this model and in the process have been writing up some tips and other useful information and code samples to help you out as you start making your way through it. Just like I did with other reviews of the new Azure B2B and B2C features, I’m not going to try and explain to you exactly what they are because Microsoft already has a team of humans doing that for you. Instead I’ll work on highlighting other things you might want to know when you actually go about implementing this.

One other point worth noting – this stuff is still pre-release and will change, so I’m going to “version” my blog post here and I will update as I can. Here we go.

Version History

1.0 – Feb. 3, 2016

What it Does and Doesn’t Do

One of the things you should know first and foremost before you jump in is exactly what it does today, and more importantly, what it does not. I’ll boil it down for you like this – it does basic Office 365: email, calendar and contacts. That’s pretty much it today; you can’t even query Azure AD as it stands today. So if you’re building an Office app, you’re good to go.

In addition to giving you access to these Microsoft services, you can secure your own site using this new model. One of the other terms you will see used to describe the model (besides “v2”) is the “converged model”. It’s called the converged model really for two reasons:

  • You no longer need to distinguish between "Work and School Accounts" and "Microsoft Accounts". This is probably the biggest achievement in the new model. Either type of account can authenticate and access your applications.
  • You have a single URL starting point to use for accessing Microsoft cloud services like email, calendar and contacts. That’s nice, but not exactly enough to make you run out and change your code up.

Support for different account types may be enough to get you looking at this. Microsoft has a great article on how to configure your web application using the new model, as well as a corresponding article on securing a Web API.

Once you have that set up, you can see here what it looks like to have both types of accounts available for sign in:


The accounts with the faux blue badges are “Work or School Accounts” and the Microsoft logo account is the “Microsoft Account”. Equally interesting is the fact that you can remain signed in with multiple accounts at the same time. Yes! Bravo, Microsoft – well done.  🙂


Knowing now what it does and doesn’t do, it’s a good time to talk about some tips that will come in handy as you start building applications with it. A lot of what I have below is based on the fact that I found different sites and pages with different information as I was working through building my app. Some pages are in direct contradiction with each other, some are typos, and some are just already out of date. This is why I added the “version” info to my post, so you’ll have an idea about whether / when this post has been obsoleted.


Tip #1 – Use the Correct Permission Scope Prefix

You will see information about using permission scopes, which is something new to the v2 auth endpoint. In short, instead of defining the permissions your application requires in the application definition itself, as you do with the current model, you tell Azure what permissions you require when you go through the process of acquiring an access token. A permission scope looks something like this: https://outlook.office.com/Mail.Read. This is the essence of the first tip – make sure you are using the correct prefix for your permission scope. In this case the permission prefix is "https://outlook.office.com/" and is used to connect to the Outlook API resource endpoint.  Alternatively your app can also call the Microsoft Graph endpoint, in which case you should use "https://graph.microsoft.com/" as the prefix for your permission scope.  The net of this tip is fairly simple – make sure you are using the same host name for your permission scopes as you do for your API endpoints.  So if you want to read email messages using the Outlook API endpoint, make sure you use a permission scope with the outlook.office.com prefix.  If you mix and match – use a permission scope with one host name but try to access an API at the other – you will get an access token; however, when you try and use it you will get a 401 unauthorized response.

Also…I found one or more places that used a prefix of "http://outlook.office.com/"; note that it is missing the "s" after "http". This is another example of a typo, so just double check if you are copying and pasting code from somewhere.
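To make the tip concrete, here's a small sanity check you could run on your own values before requesting a token; the two URLs below are just illustrative examples:

```csharp
using System;

//an Outlook API permission scope and an Outlook API endpoint - note the
//host names match, which is exactly what this tip is about
string scope = "https://outlook.office.com/Mail.Read";
string apiEndpoint = "https://outlook.office.com/api/v2.0/me/messages";

//if the hosts don't match, the access token you get will draw a
//401 unauthorized response when you try to use it
bool hostsMatch = new Uri(scope).Host == new Uri(apiEndpoint).Host;
```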


Tip #2 – Not All Permission Scopes Require a Prefix

After all the talk in the previous tip about scope prefixes, it's important to note that not all permission scopes require a prefix. There are a handful that work today without one: "openid", "email", "profile", and "offline_access". To learn what each of these permission scopes grants you, see the Microsoft documentation.  One of the more annoying things in reviewing that documentation is that I didn't see a simple example that demonstrated exactly what these scopes should look like. So…here's an example of the scopes and how they're used to request a code, which you can use to get an access token (and a refresh token if you include the "offline_access" permission – see, I snuck that explanation in there). Note that they are "delimited" with a space between each one:

string mailScopePrefix = "https://outlook.office.com/";

string scopes =
    "openid email profile offline_access " +
    mailScopePrefix + "Mail.Read " + mailScopePrefix + "Contacts.Read " +
    mailScopePrefix + "Calendars.Read";

string authorizationRequest = String.Format(
    "https://login.microsoftonline.com/common/oauth2/v2.0/authorize?" +
    "response_type=code+id_token&client_id={0}&scope={1}&redirect_uri={2}" +
    "&state={3}&response_mode=form_post&nonce={4}",
    APP_CLIENT_ID,
    Uri.EscapeDataString(scopes),
    Uri.EscapeDataString(redirUri),
    Uri.EscapeDataString("Index;" + scopes + ";" + redirUri),
    Guid.NewGuid().ToString());

//return a redirect response
return new RedirectResult(authorizationRequest);

Also worth noting is that the response_type in the example above is “code+id_token”; if you don’t include the “openid” permission then Azure will return an error when it redirects back to your application after authentication.

When you log in you see the standard sort of Azure consent screen that models the permission scopes you asked for:


Again, worth noting, the consent screen actually will look different based on what kind of account you’ve authenticated with. The image above is from a “Work or School Account”. Here’s what it looks like when you use a “Microsoft Account” (NOTE: it includes a few more permission scopes than the image above):


For completeness, since I’m sure most folks today use ADAL to get an access token from a code, here’s an example of how you can take the code that was returned and do a POST yourself to get back a JWT token that includes the access token:

public ActionResult ProcessCode(string code, string error, string error_description, string resource, string state)
{
    string viewName = "Index";

    if (!string.IsNullOrEmpty(code))
    {
        string[] stateVals = state.Split(";".ToCharArray(),
            StringSplitOptions.RemoveEmptyEntries);

        viewName = stateVals[0];
        string scopes = stateVals[1];
        string redirUri = stateVals[2];

        //create the collection of values to send to the POST
        List<KeyValuePair<string, string>> vals =
            new List<KeyValuePair<string, string>>();

        vals.Add(new KeyValuePair<string, string>("grant_type", "authorization_code"));
        vals.Add(new KeyValuePair<string, string>("code", code));
        vals.Add(new KeyValuePair<string, string>("client_id", APP_CLIENT_ID));
        vals.Add(new KeyValuePair<string, string>("client_secret", APP_CLIENT_SECRET));
        vals.Add(new KeyValuePair<string, string>("scope", scopes));
        vals.Add(new KeyValuePair<string, string>("redirect_uri", redirUri));

        string loginUrl = "https://login.microsoftonline.com/common/oauth2/v2.0/token";

        //make the request
        HttpClient hc = new HttpClient();

        //form encode the data we're going to POST
        HttpContent content = new FormUrlEncodedContent(vals);

        //plug in the post body and make the request
        HttpResponseMessage hrm = hc.PostAsync(loginUrl, content).Result;

        if (hrm.IsSuccessStatusCode)
        {
            //get the raw token
            string rawToken = hrm.Content.ReadAsStringAsync().Result;

            //deserialize it into this custom class I created
            //for working with the token contents
            AzureJsonToken ajt =
                JsonConvert.DeserializeObject<AzureJsonToken>(rawToken);
        }
        else
        {
            //some error handling here
        }
    }
    else
    {
        //some error handling here
    }

    return View(viewName);
}


One note – I'll explain more about the custom AzureJsonToken class below.  To reiterate Tips #1 and #3: since my permission scopes use the Outlook host name, I can only use the access token from this code against the Outlook API endpoints.  You should, however, be able to use the access token against either version (v1.0 or v2.0) of the Outlook API – yes, that is not a typo.


Tip #3 – Use the Correct Service Endpoint

Similar to the first tip, I saw examples of articles that talked about using the v2 auth endpoint but used a Graph API permission scope with an Outlook API endpoint. So again, to be clear, make sure your host names are consistent between the permission scopes and API endpoints, as explained in Tip #1.


Tip #4 – Be Aware that Microsoft Has Pulled Out Basic User Info from Claims

One of the changes that Microsoft made in v2 of the auth endpoint is that they have arbitrarily decided to pull most meaningful information about the user out of the token that is returned to you. They describe it like this:

In the original Azure Active Directory service, the most basic OpenID Connect sign-in flow would provide a wealth of information about the user in the resulting id_token. The claims in an id_token can include the user’s name, preferred username, email address, object ID, and more.

We are now restricting the information that the openid scope affords your app access to. The openid scope will only allow your app to sign the user in, and receive an app-specific identifier for the user. If you want to obtain personally identifiable information (PII) about the user in your app, your app will need to request additional permissions from the user.

This is sadly classic Microsoft behavior – we know what's good for you better than you know what's good for you – and this horse has already left the barn; you can get this info today using the v1 auth endpoint. Never one to be deterred by logic however, they are pressing forward with this plan. That means more work for you, the developer. What you'll want to do in order to get basic user info is a couple of things:

  1. Include the “email” and “profile” scopes when you are obtaining your code / token.
  2. Ask for both a code and id_token when you redirect to Azure asking for a code (as shown in the C# example above).
  3. Extract out the id_token results into a useable set of claims after you exchange your code for a token.

On the last point, the Microsoft documents say that they provide a library for doing this, but they don't include any further detail about what it is or how you use it, so that was only marginally helpful. But hey, this is a tip, so here's a little more. Here's how you can extract the contents of the id_token:

  1. Add the System.IdentityModel.Tokens.Jwt NuGet package to your application.
  2. Create a new JwtSecurityToken instance from the id_token. Here’s a sample from my code:

JwtSecurityToken token = new JwtSecurityToken(ajt.IdToken);

  3. Use the Payload property of the token to find the individual user attributes that were included. Here's a quick look at the most useful ones:

token.Payload[“name”] //display name
token.Payload[“oid”] //object ID
token.Payload[“preferred_username”] //upn usually
token.Payload[“tid”] //tenant ID

As noted above though in my description of the converged model, remember that not every account type will have every value. You need to account for circumstances when the Payload contents are different based on the type of account.
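One defensive pattern for that – and GetClaimOrDefault is just my own helper name here, not anything from the library – is to probe the Payload dictionary rather than index into it directly:

```csharp
using System.IdentityModel.Tokens.Jwt;

public static class ClaimHelper
{
    //JwtPayload derives from Dictionary<string, object>, so we can probe it
    //safely instead of indexing and risking a KeyNotFoundException when a
    //particular account type doesn't carry a claim
    public static string GetClaimOrDefault(JwtSecurityToken token, string claim)
    {
        object value;
        return token.Payload.TryGetValue(claim, out value)
            ? value.ToString()
            : string.Empty;
    }
}

//usage: string displayName = ClaimHelper.GetClaimOrDefault(token, "name");
```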


Tip #5 – Feel Free to Use the Classes I Created for Deserialization

I created some classes for deserializing the JSON returned from various service calls into objects that can be used for referring to things like the access token, refresh token, id_token, messages, events, contacts, etc. Feel free to take the examples attached to this posting and use them as needed.

Microsoft is updating its helper libraries to use Office 365 objects with v2 of the app model so you can use them as well; there is also a pre-release version of ADAL that you can use to work with access tokens, etc. For those of you rolling your own however (or who aren’t ready to fully jump onto the pre-release bandwagon) here are some code snippets from how I used them:

//to get the access token, refresh token, and id_token
AzureJsonToken ajt = JsonConvert.DeserializeObject<AzureJsonToken>(rawToken);
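For reference, here's roughly the shape of a class like AzureJsonToken. The JSON property names are the standard OAuth2 token response fields, but treat this as a sketch – my actual class is in the attached sample code:

```csharp
using Newtonsoft.Json;

//a simple deserialization target for the v2 token response
public class AzureJsonToken
{
    [JsonProperty("token_type")]
    public string TokenType { get; set; }

    [JsonProperty("scope")]
    public string Scope { get; set; }

    [JsonProperty("expires_in")]
    public int ExpiresIn { get; set; }

    [JsonProperty("access_token")]
    public string AccessToken { get; set; }

    [JsonProperty("refresh_token")]
    public string RefreshToken { get; set; }

    [JsonProperty("id_token")]
    public string IdToken { get; set; }
}
```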

This next set is from some REST endpoints I created to test everything out:

//to get emails
Emails theMail = JsonConvert.DeserializeObject<Emails>(data);

//to get events
Events theEvent = JsonConvert.DeserializeObject<Events>(data);

//to get contacts
Contacts theContact = JsonConvert.DeserializeObject<Contacts>(data);

You end up with a nice little dataset on the client this way. Here’s an example of the basic email details that comes down to my client side script:


Finally, here it all is pulled together, in a UI that only a Steve could love – using the v2 app model, some custom REST endpoints, and a combination of jQuery and Angular to show the results (yes, I really did combine jQuery and Angular, which I know Angular people hate):








The new auth endpoint is out and available for testing now, but it's not production ready. There's actually a good amount of documentation out there – hats off to dstrockis at Microsoft, the writer who's been pushing out all the very good content. There are also a lot of mixed and conflicting documents though, so try and follow some of the tips included here. The attachment to this posting includes sample classes and code so you can give it a spin yourself, but make sure you start out by at least reading Microsoft's article on how to set up your new v2 applications, along with their Getting Started tutorials.
You can download the attachment here: