Office365Mon Now Provides Reports Anywhere with Dashboard Reports

One of the most common requests we receive at Office365Mon has to do with the different types of reporting we provide on the availability and performance of a customer’s Office 365 tenant.  There are dozens of reports out of the box, as well as snapshot reports with up-to-the-minute availability of these services, and many of our customers have asked how they could get these reports into their own internal web sites.  Today we are announcing a feature to do exactly that – the Dashboard Reports feature from Office365Mon.

The Dashboard Reports feature is designed to let you take virtually all of the reports available on our site when we monitor Office 365 for you, and add them to any web site you wish with literally a single HTML tag.  In addition to being simple though, it’s also secure.  In order to use Dashboard Reports, you need to go to the Office365Mon site and get the secret key that is required to display them.  That prevents anyone else from displaying your Office365Mon data.  In addition, you can remove and add keys as needed, so if a key is compromised at some point you can quickly and easily remove it.

In addition to the reports that we’ve had in our report gallery, the two most common requests from customers who want to display data in their own internal sites have been for the current status information widgets from the My Info page (https://www.office365mon.com/Features/MyInfo).  To enable the display of these widgets in the Dashboard Reports feature, we created two new reports and added them to the report gallery.  They display the same information as you find on the My Info page, and look like this:

Service Status

[Image: serviceinforpt]

Tenant Status

[Image: tenantinforpt]

Complete instructions for adding reports to your site using the Dashboard Reports feature can be found here:  https://www.office365mon.com/Configure/Dashboard.  It takes just a couple of minutes to create a key to use with your Dashboard Reports, and then to create the HTML tag to actually render it.  The documentation also describes how you can do things like color the report background as well as select which report to show.  Here’s an example of a Dashboard Report being shown in a completely different site from Office365Mon:

[Image: dashboardreport]

The new Dashboard Reports feature makes it simple to build your own custom Office 365 health dashboards with just a few lines of HTML.  Visit our site at https://www.office365mon.com to learn about it and all of the other Office 365 monitoring features available from Office365Mon.Com.

From Sunny Phoenix,

Steve

Monitor for Changes in Your Version of Exchange Online with Office365Mon

At Office365Mon we monitor all sorts of things about Office 365.  One of the early customer requests that we incorporated into our service was the ability to monitor for changes in the version of SharePoint that has been pushed out to your SharePoint Online tenant.  We have had several customers ask for the same type of monitoring and notification when changes occur in Exchange Online, and today we’re happy to announce the availability of just such a feature.

New versions can bring with them changes in the user interface and potential issues for customizations and apps.  That’s why it’s critical for many customers to know when the version of SharePoint Online or Exchange Online changes.  Enabling Office365Mon to stay on top of these changes is quite simple.  When you go in to configure your Office365Mon subscription, there is a box that you can check to monitor for version changes:

Monitor for Exchange Online Version Changes

[Image: exoversionchange]

Monitor for SharePoint Online Version Changes

[Image: spoversionchange]

One of the main differences between the two is that monitoring for version changes in Exchange Online requires a higher level of access to the mailbox being monitored than simply monitoring for availability and performance.  As a result, we strongly recommend that you set up a separate mailbox just for monitoring if you want us to check for version changes.  This is one of our standard security best practices in any case – Least Privileged Access – and the extra permissions needed to monitor for version changes only reinforce operating with that philosophy.

One of the other changes we made as part of this feature release is to add webhook notifications for changes in version for both SharePoint Online and Exchange Online.  A webhook allows you to get a programmatic notification, and then execute your own workflows when you receive it.  For example, when you get a webhook notification that the version has changed, you can kick off a process to validate the features in your tenant or your custom applications, to ensure they all still work as expected with the new version.
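Just to make that concrete, here’s a minimal sketch of what a webhook receiver might look like in an ASP.NET MVC controller.  The payload shape shown here is hypothetical – check the Office365Mon webhook documentation for the actual schema:

using System.Diagnostics;
using System.Web.Mvc;

public class VersionWebhookController : Controller
{
    //hypothetical payload shape; the actual Office365Mon schema may differ
    public class VersionChangeNotification
    {
        public string Resource { get; set; }    //e.g. "Exchange Online"
        public string NewVersion { get; set; }
    }

    [HttpPost]
    public ActionResult Notify(VersionChangeNotification notification)
    {
        //this is where you would kick off your own workflow, e.g. running
        //validation tests against your tenant or custom applications
        Trace.TraceInformation("Version changed: {0} is now {1}",
            notification.Resource, notification.NewVersion);

        return new HttpStatusCodeResult(200);
    }
}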

We hope you find these new capabilities useful to the management of your Office 365 tenant.  It’s yet another feature set that has been driven entirely based on feedback from our customers.  Please try it out and let us know what you think!

From Sunny Phoenix,

Steve

 

Introducing Monitoring for Azure Secured Web Sites from Office365Mon

Today we’re excited to announce another new monitoring feature at Office365Mon.  Beginning today, we offer the capability to monitor virtually any web site or REST API using the same proven, enterprise-grade monitoring capabilities of Office365Mon.  The same service we use for Office 365 monitoring can now be used to monitor sites you deploy to Azure web sites, SharePoint-hosted apps for on-prem or Office 365 sites, or virtually any other site!  You now get the power of a monitoring infrastructure that sends out 10 to 20 million health probes a month, keeping you in the know about your own web sites and REST APIs.

All of this comes with the same kind of integration that you’re used to seeing at Office365Mon.  Setup is extremely quick and easy, done in your browser or via our Office365Mon Management API.  You’ll get the same sort of alert notifications as you do with the other resources we monitor for you – text messages, emails, and webhook notifications.  You’ll also see all of the data we capture about the health and performance of these sites in the exact same reports you use today, whether you use one of our Standard or Advanced reports, download your report data from our site, or use Power BI with the Office365Mon Content Pack.  Here’s an example of a performance report that’s monitoring both Office 365 sites and other sites we have hosted in Microsoft Azure:

[Image: websitemonitoring]

As you can see from the chart above, we’re monitoring Office 365 sites side by side with other sites hosted in Microsoft Azure.

Your sites and REST APIs can be hosted anywhere, of course, as long as they have a public endpoint we can connect to.  A site can be anonymous, or it can be secured with Azure Active Directory.  We can also monitor REST APIs, as long as they don’t require any parameters.

This feature is available in Preview today and ready for you to begin trying out.  Get started by creating your Office365Mon subscription and then adding some sites to monitor here.  Pricing and licensing have not been set yet, but the good news is that, like all new features, all existing customers get 90 days to try it out for free.  Especially while this is in Preview, it’s a great opportunity to take a look and give us any feedback you have so we can fine-tune it to meet your needs.  Like many of the things you see at Office365Mon, this is another feature that was created based on feedback from our customers.

We hope you enjoy this new feature and will take the time to try it out.

From Sunny Phoenix,

Steve

 

Cloud Solution Services Now Available from Office365Mon and TechStar Group

Today I’m very excited to announce a new offering from Office365Mon and our newest Gold Star Partner, TechStar Group.  We are teaming up to bring you the skills and experience we’ve built while creating one of the most scalable and extensible cloud service solutions you’ll find for monitoring Office 365 – Office365Mon, of course!

At Office365Mon we send out tens of millions of health probe queries every single month.  Now, as part of our partnership with TechStar Group, we’re bringing those skills to help you with your own move to the cloud.  Our architecture and design demonstrate superb quality at a scale beyond what most of even the largest enterprises do, and we do it utilizing all of the leading-edge technologies needed to accomplish such a huge task – Azure Active Directory, Office 365 APIs, Azure web apps, Azure cloud services, Azure storage and queueing, SQL Azure storage, and more.

Now, we’ve teamed up with our Gold Star Partner TechStar Group to unleash our team of experts on your cloud projects. We cover everything from architecture and design for new Azure and Office 365 projects, development, cloud migration, and much more. All projects are given oversight by former Microsoft Senior Principal Architect and Office365Mon founder Steve Peschka, so you can rest assured that the work being done will meet the highest quality bar. In addition, TechStar Group brings a number of former Microsoft employees and others with many years of successful technology delivery projects to the table. Together we offer a shared team that’s capable of delivering on the biggest and most challenging cloud projects.

If you are working on cloud applications, services, migrations, etc., we hope you’ll consider contacting us to find out how our battle-tested team of cloud-savvy architects and engineers can help make your own move to the cloud a successful one.  Please contact us to talk about your needs:

  • Office365Mon:  Steve Peschka, speschka@office365mon.com
  • TechStar Group:  Garrison Walls, garrison@techstargroup.com

Thanks as always.

From Sunny Phoenix,

Steve

 

YARBACS or Yet Another RBAC Solution

RBAC is all the rage these days (as it should be), but it’s not really new.  The concepts have been in use for many years; it’s just a case of bringing them into modern cloud-based scenarios.  Azure continues to invest in this concept so that you can increasingly lock down the control and use of the various cloud-based services it offers.  There are other “interesting” aspects to these scenarios, though, that can impact the availability and usefulness of it – things such as existing systems and identities, cross-tenant applications, etc.

It’s these interesting aspects that bring me to this post today.  At Office365Mon we are ALL about integration with Azure Active Directory and all of the goodness it provides.  From day 1 we have also had the concept of allowing users from different tenants to work on a set of resources using a common security model.  That means that when you create Office365Mon subscriptions, you can assign subscription administrators to anyone with an Azure AD account.  It doesn’t require Azure B2C, or Azure B2B, or any kind of trust relationship between organizations.  It all “just works”, which is exactly what you want.  IMPORTANT DISCLAIMER:  the RBAC space frequently changes.  You should check in often to see what’s provided by the service and decide for yourself what works best for your scenario.

“Just works” in this case started out as a binary type of operation – depending on who you were, you either had access or you didn’t.  There was no limited access based on one or more roles that you held.  As we got into more complex application scenarios, it became increasingly important to develop some type of RBAC support, but there really wasn’t an out-of-the-box solution for us.  That led us to develop the fairly simple framework I call YARBACS, or “Yet Another RBAC Solution”.  It’s not a particularly complicated approach (I don’t think), so I thought I’d share the high-level details here in case it helps those of you living with the same sort of constraints that we have:  a large existing user base, everyone effectively a cloud user to us, no trust relationships with other Azure AD tenants, and yet a need for a single resource (an Office365Mon subscription) to be managed by users from any number of tenants and with varying sets of permissions.

With that in mind, these are the basic building blocks that we use for our YARBACS:

  1. Enable support for Roles in our Azure AD secured ASP.NET application
  2. Create application roles in our AD tenant that will be used for access control
  3. Store user and role relationships so that they can be used in our application
  4. Add role attributes to views, etc.
  5. Use a custom MVC action filter to properly populate users from different tenants with application roles from our Azure AD tenant

Let’s walk through each of these in a little more detail.

 

Enable Support for Roles

Enabling support for roles in your Azure AD secured application is something that has been explained very nicely by Dushyant Gill, who works as a PM on the Azure team.  You can find his blog explaining this process in more detail here:  http://www.dushyantgill.com/blog/2014/12/10/roles-based-access-control-in-cloud-applications-using-azure-ad/.   Rather than trying to plagiarize or restate his post here, just go read his.  The net effect is that you will end up with code in the Startup.Auth.cs class of your ASP.NET project that looks something like this:

app.UseOpenIdConnectAuthentication(
    new OpenIdConnectAuthenticationOptions
    {
        ClientId = clientId,
        Authority = Authority,
        TokenValidationParameters = new System.IdentityModel.Tokens.TokenValidationParameters
        {
            //for role support
            RoleClaimType = "roles"
        },
        … //other unrelated stuff here

Okay, step 1 accomplished – your application supports the standard use of ASP.NET roles now – permission demands, IsInRole, etc.

 

Create Application Roles in Azure AD

This next step is also covered in Dushyant’s blog post mentioned above.  I’ll quickly recap here, but for complete steps and examples see Dushyant’s post.  Briefly, you’ll go to the application for which you want to use YARBACS and create application roles.  Download the app manifest locally and add roles for the application – as many as you want.  Upload the manifest back to Azure AD, and now you’re ready to start using them.  For your internal users, you can navigate to the app in the Azure management portal and click on the Users tab.  You’ll see all of the users that have used your app there.  You can select whichever user(s) you want to assign to an app role and then click on the Assign button at the bottom.  The trick, of course, is adding users to these groups (i.e. roles) when the user is not part of your Azure AD tenant.

 

Store User and Role Relationships for the Application

This is another step that requires no rocket science.  I’m not going to cover any implementation details here because they’re likely to be different for every company, in terms of both how and what they implement.  The goal is simple though – you need something – like a SQL database – to store the user identifier and the role(s) that user is associated with.

In terms of identifying your user in code so you can look up the roles associated with them, I use the UPN.  When you’ve authenticated to Azure AD you’ll find the UPN in the ClaimsPrincipal.Current.Identity.Name property.  To be fair, it’s possible for a user’s UPN to change, so if you find that to be a scenario that concerns you, then you should use something else.  As an alternative, I typically have code in my Startup.Auth.cs class that creates an AuthenticationResult as part of registering the application in the user’s Azure AD tenant after consent has been given.  You can always go back and do something with the AuthenticationResult’s UserInfo.UniqueId property, which is basically just a GUID that identifies the user in Azure AD and never changes, even if the UPN does.

Now that you have the information stored, when we build our MVC action filter we’ll pull this data out to plug into application roles.
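Just to make the shape of this concrete, here’s a minimal sketch of what that lookup might look like – note that the SqlHelper class name and the UserRoles table are hypothetical, since your storage design will differ:

using System.Collections.Generic;
using System.Configuration;
using System.Data.SqlClient;

public class SqlHelper
{
    //minimal sketch: assumes a UserRoles table with UserName and RoleName columns
    public List<string> GetRoles(string userName)
    {
        List<string> roles = new List<string>();
        string cs = ConfigurationManager.ConnectionStrings["RolesDb"].ConnectionString;

        using (SqlConnection cn = new SqlConnection(cs))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT RoleName FROM UserRoles WHERE UserName = @UserName", cn))
        {
            cmd.Parameters.AddWithValue("@UserName", userName);
            cn.Open();
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                    roles.Add(rdr.GetString(0));
            }
        }

        return roles;
    }
}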

 

Add Role Attributes to Views, Etc.

This is where the power of the ASP.NET permissions model and roles really comes into play.  By following the steps from Dushyant’s blog in step 1, you are basically telling ASP.NET that anything in the claim type “roles” should be considered a role claim.  As such, you can start using it the way you would any other kind of ASP.NET role check.  Here are a couple of examples:

  • As a PrincipalPermission demand – you can add a permission demand to a view in your controller (like an ActionResult) with a simple attribute, like this:  [PrincipalPermission(SecurityAction.Demand, Role = "MyAppAdmin")].  So if someone tries to access a view and they have not been added to the MyAppAdmin role (i.e. if they don’t have a “roles” claim with that value), then they will be denied access to that view.
  • Using IsInRole – you can use the standard ASP.NET method to determine if a user is in a particular role and then, based on that, make options available, hide UI, redirect the user, etc.  Here’s an example of that:  if (System.Security.Claims.ClaimsPrincipal.Current.IsInRole("MyAppAdmin"))…  In this case if the user has the “roles” claim with a value of MyAppAdmin then I can do whatever – enable a feature, etc.

Those are just a couple of the simplest and most common examples (a consolidated sketch follows below), but as I said, anything you can do with ASP.NET roles, you can now do with this YARBACS.
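For instance, combining both patterns in a single controller might look something like this – the controller itself is hypothetical, and MyAppAdmin is just the example role name from above:

using System.Security.Claims;
using System.Security.Permissions;
using System.Web.Mvc;

public class AdminController : Controller
{
    //deny access entirely unless the user has the MyAppAdmin role claim
    [PrincipalPermission(SecurityAction.Demand, Role = "MyAppAdmin")]
    public ActionResult Settings()
    {
        return View();
    }

    public ActionResult Home()
    {
        //conditionally light up features based on role membership
        ViewBag.ShowAdminLinks =
            ClaimsPrincipal.Current.IsInRole("MyAppAdmin");

        return View();
    }
}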

 

Use a Custom Action Filter to Populate Roles for Users

This is really where the magic happens – we look at an individual user and determine what roles we want to assign to them.   To start with, we create a new class and have it inherit from ActionFilterAttribute.  Then we override the OnActionExecuting method and plug in our code to look up the roles for the user and assign them.

Here’s what the skeleton of the class looks like:

public class Office365MonSecurity : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        //code here to assign roles
    }
}

Within the override, we plug in our code.  To begin with, this code isn’t going to do any good unless the user is already authenticated.  If they aren’t, then I have no way to identify them and look up their roles (short of something like a cookie, which would be a really bad approach for many reasons).  So first I check to make sure the request is authenticated:

if (filterContext.RequestContext.HttpContext.Request.IsAuthenticated)
{
    //assign roles if user is authenticated
}

 

Inside this block, then, I can do my lookup and assign roles, because I know the user has been authenticated and I can identify them.  So here’s a little pseudo code to demonstrate:

SqlHelper sql = new SqlHelper();
List<string> roles = sql.GetRoles(ClaimsPrincipal.Current.Identity.Name);

foreach(string role in roles)
{
    Claim roleClaim = new Claim("roles", role);
    ClaimsPrincipal.Current.Identities.First().AddClaim(roleClaim);
}

 

So that’s a pretty good, generic example of how you can implement it.  Also, unlike application roles that you define through the UI in the Azure management portal, you can assign multiple roles to a user this way.  It works because they just show up as role claims.  Here’s an example of some simple code to demonstrate:

 

//assign everyone to the app reader role
Claim readerClaim = new Claim("roles", "MyAppReader");
ClaimsPrincipal.Current.Identities.First().AddClaim(readerClaim);

//add a role I just made up
Claim madeUpClaim = new Claim("roles", "BlazerFan");
ClaimsPrincipal.Current.Identities.First().AddClaim(madeUpClaim);

This code actually demonstrates a couple of interesting things.  First, as I pointed out above, it adds multiple roles to the user.  Second, it demonstrates using a completely “made up” role.  What I mean by that is that the “BlazerFan” role does not exist in the list of application roles in Azure AD for my app.  Instead I just created it on the fly, but it all works because it’s added as a standard Claim of type “roles”, which is what we’ve configured our application to use as a role claim.  Here’s a partial snippet of what my claims collection looks like after running through this demo code:

[Image: yarbacs]

To actually use the action filter, I just need to add it as an attribute on the controller(s) where I’m going to use it.  Here’s an example – Office365MonSecurity is the class name of my action filter:

[Office365MonSecurity]
public class SignupController : RoutingControllerBase

There you have it.  All of the pieces to implement your own RBAC solution when the current service offering is not necessarily adequate for your scenario.  It’s pretty simple to implement and maintain, and should support a wide range of scenarios.

Using the Office 365 Batch Operations API

As I was looking around for a way to batch certain operations with the Office 365 API the other day, I stumbled upon a Preview of just such a thing, called “Batch Outlook REST Requests (Preview)” – https://msdn.microsoft.com/en-us/office/office365/api/batch-outlook-rest-requests.  The fundamentals of how it works are fairly straightforward, but it’s completely lacking implementation details for those using .NET.  So, I decided to write a small sample application that demonstrates using this new API / feature / whatever you want to call it.

First, let’s figure out why you might want to use this.  The most common reason is that you are doing a bunch of operations and don’t want to go through the overhead of creating, establishing, and tearing down an HTTP session for each one.  That can slow things down quickly and burn up a lot of resources.  Now, when I was first looking at this, I was also interested in how it might impact the throttling limits that Office 365 imposes.  It turns out I had a little misunderstanding of that, but fortunately Abdel B. and Venkat A. explained Exchange throttling to me, and so now I will share it with you.

My confusion about the impact batch operations might have on throttling was born of the fact that SharePoint Online has an API throttling limit that has been somewhat ubiquitously defined as no more than 1 REST call per second over an extended time.  So…kind of specific, but also a little vague.  Exchange Online throttling is arguably even less specific, but they do have some good information about how to know when it happens and what to do about it.

In Exchange Online, different operations may have a different impact on the system, and throttling may also be influenced by demands from other clients.  So when making REST API calls to Exchange Online, your code should account for getting a throttling response back.  A throttled response in Exchange Online returns the standard HTTP status code 429 (Too Many Requests).  The service also returns a Retry-After header with the number of seconds after which to resubmit the request.  Now that you know what a throttled response from Exchange Online looks like, you can develop your code to include a process for retry and resubmission.
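Here’s a minimal sketch of that retry pattern – it assumes an HttpClient (hc) that already has its authorization header set, and a simple GET, but the same idea applies to any request:

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class ThrottlingHelper
{
    //retry a GET when Exchange Online responds with 429, honoring Retry-After
    public static async Task<HttpResponseMessage> GetWithRetryAsync(
        HttpClient hc, string url, int maxRetries = 3)
    {
        for (int attempt = 0; ; attempt++)
        {
            HttpResponseMessage response = await hc.GetAsync(url);

            if ((int)response.StatusCode != 429 || attempt >= maxRetries)
                return response;

            //Retry-After tells us how many seconds to wait before resubmitting
            TimeSpan delay = TimeSpan.FromSeconds(5);
            if (response.Headers.RetryAfter != null &&
                response.Headers.RetryAfter.Delta.HasValue)
            {
                delay = response.Headers.RetryAfter.Delta.Value;
            }

            await Task.Delay(delay);
        }
    }
}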

The batching feature lets you work around the overhead of multiple calls by allowing you to send in up to 20 operations in a single request.  That means 1 connection to create, establish and tear down instead of 20.  This is goodness.

The basic process of doing batch operations using this feature is to create what I’ll call a “container” operation.  In it, you will put all of the individual operations you want to perform against a particular mailbox.  Note that I said mailbox – this is important to remember for two reasons:  1) the batch feature only works today with Outlook REST APIs and 2) the individual operations should all target the same mailbox.  That makes sense as well when you consider that you have to authenticate to do these operations, and since they are all wrapped up in this “container” operation, you’re doing so in the context of that operation.

The “container” operation that I’m talking about is POSTed to the $batch endpoint in Outlook:  https://outlook.office.com/api/beta/$batch.  The Url is hard-coded to the “beta” path for now because this API is still in preview.  In order to POST to the $batch endpoint, you need to provide an access token in the authorization header, the same way you would if you were making each of the individual calls contained in your container operation.  I’m not going to cover the process of getting an access token in this post because it’s not really in scope, but if you’re curious you can look at the sample code included with this post or search my blog for many posts on that type of topic.
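Setting that header is just the standard bearer token pattern; here’s a short sketch (accessToken is whatever token you acquired, and hc is the HttpClient used later to POST the batch):

using System.Net.Http;
using System.Net.Http.Headers;

//hc is used below to POST the "container" operation to the $batch endpoint
HttpClient hc = new HttpClient();
hc.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", accessToken);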

While I’m not going to cover getting an access token per se, it’s important to describe one higher level aspect of your implementation, which is to create an application in your Azure Active Directory tenant.  Generally speaking, you don’t access an Office 365 REST API directly; instead, you create an application and configure it with the permissions you need to execute the various Outlook REST APIs you’ll be using.  In my case, I wanted to be able to read emails, send emails and delete emails, so in my application I selected the following permissions:

[Image: batchop1]

So with that background, here are the basic steps you’ll go through; I’ll include more details on each one below:

  1. If you don’t have an access token, go get one.
  2. Create your “container” operation – this is a MultipartContent POST.
  3. Create your individual operations – add each one to your MultipartContent.
  4. POST the “container” operation to the $batch endpoint.
  5. Enumerate the results for each individual operation.

 

Step 1 – Get an Access Token

As I described above, I’m not going to cover this in great detail here.  Suffice it to say, you’ll need to create an application in Azure Active Directory as I briefly alluded to above.  As part of that, you’ll also need to do the “standard Azure apps for Office 365” stuff in order to get an access token.  Namely, you’ll need to create a client secret, i.e. a “Key”, and copy it along with the client ID to your client application in order to convert the access code you get from Azure into an AuthenticationResult, which contains the access token.  This assumes you are using ADAL; if you are not, then you’ll have your own process for getting the access token.
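If you are using ADAL, the exchange of the access code for an AuthenticationResult looks roughly like this – the redirect URI and the authorizationCode variable here are assumptions based on your own app’s configuration:

using Microsoft.IdentityModel.Clients.ActiveDirectory;

//clientId and clientSecret come from your Azure AD application; this code
//runs inside an async method
AuthenticationContext authContext =
    new AuthenticationContext("https://login.microsoftonline.com/common");

AuthenticationResult authResult = await authContext.AcquireTokenByAuthorizationCodeAsync(
    authorizationCode,                              //the code Azure AD returned
    new Uri("https://localhost/myapp"),             //your app's redirect URI
    new ClientCredential(clientId, clientSecret),   //i.e. the "Key" you created
    "https://outlook.office.com");                  //resource for the Outlook API

string accessToken = authResult.AccessToken;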

 

Step 2 – Create Your Container Operation

The “container” operation is really just a MultipartContent object that you’ll POST to the $batch endpoint.  Unfortunately, documentation on how to create these is scarce, which is in large part why I wrote this post.  The code to get you started, though, is just this simple:

 

//create a new batch ID
string batchId = Guid.NewGuid().ToString();

//create the multipart content that is used for a batch process
MultipartContent mpc = new MultipartContent("mixed", "batch_" + batchId);

The main thing to note here is just that each “container” operation requires a unique batch identifier.  A Guid is perfect for this, so that’s what I’m using to identify my batch operation.

 

Step 3 – Create Individual Operations and Add to the Container Operation

The actual code you write here will vary somewhat, depending on what your operation is.  For example, a request to send an email message is going to be different from one to get a set of messages.  The basic set of steps, though, is similar (a consolidated sketch follows the list):

  1. Create a new HttpRequestMessage. This is how you define whether the individual operation is a GET, a POST, or something else, what Url to use, etc.  Here’s the code I used for the operation to send a new email:  HttpRequestMessage rqMsg = new HttpRequestMessage(HttpMethod.Post, BATCH_URI_BASE + "me/sendmail");  It’s worth noting that you ALWAYS send your individual operations to the $batch endpoint to be included in the batch process.  For example, if you were using v2 of the Outlook API, to send a message you would use the Url https://outlook.office.com/api/v2.0/me/sendmail.  However, to use the $batch endpoint, since it’s in beta, you use the Url https://outlook.office.com/api/beta/me/sendmail.
  2. Create the content for your operation. In my case I used a custom class I created to represent a mail message, I “filled it all out”, and then I serialized it to a JSON string.  I then used my string to create the content for the operation, like this:  StringContent sc = new StringContent(msgData, Encoding.UTF8, "application/json");  So in this case I’m saying I want some string content that is encoded as UTF8 and whose content type is application/json.
  3. Add your content to the HttpRequestMessage:  rqMsg.Content = sc;
  4. Wrap up your HttpRequestMessage in an instance of the HttpMessageContent class. Note that you’ll need to add a reference to System.Net.Http.Formatting in order to use this class.  Here’s what it looks like:  HttpMessageContent hmc = new HttpMessageContent(rqMsg);  We’re doing this so that we can set the appropriate headers on this operation when it’s executed as part of the batch.
  5. Set the headers on the HttpMessageContent object:  hmc.Headers.ContentType = new MediaTypeHeaderValue("application/http"); and also hmc.Headers.Add("Content-Transfer-Encoding", "binary");  You now have a single operation that you can add to the “container” operation.
  6. Add your individual operation to the “container” operation:  mpc.Add(hmc);  That’s it – now just repeat these steps for each operation you want to execute in your batch.
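Putting those six steps together in one place, here’s a minimal sketch.  It assumes the mpc object from step 2, a BATCH_URI_BASE constant of https://outlook.office.com/api/beta/, and a msgData string holding the JSON-serialized message:

//requires references to System.Net.Http, System.Net.Http.Headers,
//System.Text and System.Net.Http.Formatting
HttpRequestMessage rqMsg = new HttpRequestMessage(
    HttpMethod.Post, BATCH_URI_BASE + "me/sendmail");

//msgData is the JSON-serialized message described in step 2
StringContent sc = new StringContent(msgData, Encoding.UTF8, "application/json");
rqMsg.Content = sc;

//wrap the request so it can be carried inside the batch
HttpMessageContent hmc = new HttpMessageContent(rqMsg);
hmc.Headers.ContentType = new MediaTypeHeaderValue("application/http");
hmc.Headers.Add("Content-Transfer-Encoding", "binary");

//add this individual operation to the "container" operation
mpc.Add(hmc);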

Side note:  I realize some of this code may be difficult to follow when it’s intertwined with comments like I’ve done above.  If you’re getting squinty-eyed, just download the ZIP file that accompanies this post, and you can see all of the code end to end.

 

Step 4 – Post the Container Operation to the $Batch Endpoint

There’s not a lot to step 4.  You could just POST it now, but there’s one other point I want to make.  Your “container” operation may contain many individual operations, and there are a couple of points worth remembering about that.  First, the individual operations are not guaranteed to be performed in any specific order.  If you need them to be performed in a specific order, then either don’t do them in a batch or do them in separate batches.  Second, by default, at the point that any individual operation encounters an error, execution stops and no further operations in the batch will be executed.  However, you can override this behavior by setting a Prefer header on your “container” operation.  Here’s how you do that:

mpc.Headers.Add("Prefer", "odata.continue-on-error");

With that done (or not, depending on your requirements), you can go ahead and POST your “container” operation to the $batch endpoint, like this:

HttpResponseMessage hrm = await hc.PostAsync(BATCH_URI_BASE + "$batch", mpc);

With that done, it’s time to look at the results, which is covered in the next step.

 

Step 5 – Enumerate the Results for Each Individual Operation

At a high level, you can see if the overall batch operation worked the same way you would if it were just one operation:

if (hrm.IsSuccessStatusCode)

The important thing to understand, though, is that even though the “container” POST may have worked without issue, one or more of the individual operations contained within may have had issues.  So how do you pull them all out to check?  Well, using the MultipartMemoryStreamProvider class is how I did it.  This is another class that requires a reference to System.Net.Http.Formatting, but you should already have that from the steps above, so it shouldn’t be a problem.

So we start out by getting all of the responses from each individual operation back like this:

MultipartMemoryStreamProvider responses = await hrm.Content.ReadAsMultipartAsync();

You can then enumerate over the array of HttpContent objects to look at the individual operations.  The code to do that looks like this:

for (int i = 0; i < responses.Contents.Count; i++)
{
    string results = await responses.Contents[i].ReadAsStringAsync();
}

It’s a little different from having an HttpResponseMessage for each one in that you have to do a little parsing.  For example, in my sample batch I sent two emails and then got the list of all of the emails in the inbox.  As I enumerate over the content for each one, here’s what ReadAsStringAsync returns for sending a message:

HTTP/1.1 202 Accepted

Okay, so you get to parse the return status code…should be doable.  It can get a little more cumbersome depending on the operation type.  For example, here’s what I got back when I asked for the list of messages in the inbox as part of the batch:

HTTP/1.1 200 OK

OData-Version: 4.0

Content-Type: application/json;odata.metadata=minimal;odata.streaming=true;IEEE754Compatible=false;charset=utf-8

{"@odata.context":"https://outlook.office.com/api/beta/$metadata#Me/MailFolders('Inbox')/Messages","value":[{"@odata.id":"https://outlook.office.com/api/beta/Users('05d6cc47-5a79-4906-88e6-c39fcd595e15@b098aeb9-ce11-43ce-a49f-ee4b5a4b0a71')/Messages('AAMkADEyMzQ3MzllLWM2NmItNGY2ZS04MWE1LTQwNjdiZDc1ZGYxNwBGAAAAAADRpmW4I…}}]}

Okay, so I trimmed a bunch of detail out of the middle there, but the gist is this – you have to parse out the HTTP status code that was returned, and then parse out where your data begins.  Both are quite doable; I just kind of hate having to do the 21st-century version of screen scraping, but it is what it is.  The net is that you can at least go look at each and every individual operation you submitted and figure out whether they worked, retrieve and process data, etc.
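If you wanted to do that parsing by hand, a rough sketch (happy path only – real code should validate what it finds) might look like this:

using System;

//results is the string returned by ReadAsStringAsync for one individual operation
string[] lines = results.Split(new string[] { "\r\n" }, StringSplitOptions.None);

//the first line looks like "HTTP/1.1 200 OK" – pull the status code out of it
int statusCode = int.Parse(lines[0].Split(' ')[1]);

//for operations that return data, the JSON body starts at the first '{'
int jsonStart = results.IndexOf('{');
string json = (jsonStart >= 0) ? results.Substring(jsonStart) : null;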

Summary

That’s the short tour of using the Outlook batch API.  There are a handful of things you need to know about how it works and what its limitations are, and I’ve pointed out all of the ones I know about in this post.  The trickier part by far is understanding how to create a batch request using the .NET Framework, as well as how to parse the results, and I covered both of those aspects as well.

As I mentioned a few times in this post, I just zipped up my entire sample project and have attached it to this post so you can download it and read through it to your heart’s content.  It does contain the details specific to my application in Azure AD, so you’ll need to create your own and then update the values in the app if you want to run this against your own tenant.  The ZIP file with the code is below:

 

Expanding the Office365Mon Subscription Management API

Today we’re happy to announce a slew of new APIs that have been added to our Subscription Management API tool set.  The Subscription Management API at Office365Mon has long been a market differentiator from other solutions in the Office 365 monitoring space.  Our first releases allowed you to manage the basics of the core monitoring features of Office365Mon.  Based on customer demand, we have just released a significant expansion of those APIs, taking the total feature set for managing your Office365Mon subscription from 28 APIs to 46.

The new API support allows you to do things like configure the cloud service for your Distributed Probes and Diagnostics deployments (https://www.office365mon.com/Configure/OnPremProbes), which lets you issue health probes in conjunction with our cloud service from any geographic location where you have users.  You can also configure the integration with the Office 365 Service Communications API (https://www.office365mon.com/Signup/Status), which keeps you up to date with any changes in the status of the services and features in your Office 365 tenant.  And you can manage Office365Mon’s monitoring of the SharePoint Online search service (https://www.office365mon.com/Configure/SearchMon).  This is critical for virtually all SharePoint customers, since so much content is driven by search – Content by Query web parts, search-based navigation, etc. – in addition to search being used by many, many custom applications.  With these new APIs, you can now manage virtually every single thing through our API that you can do in the browser on our site.

Today’s announcement marks another set of innovative features that were developed based on feedback from you, our customers.  We hope you’ll find these to be valuable additions to the management of your Office365Mon subscriptions.  As always, if you have other requests or ideas for features you would like to see in our service, please just send us a note at support@office365mon.com.

From Sunny Phoenix,

Steve