YARBACS or Yet Another RBAC Solution

RBAC is all the rage these days (as it should be), but it’s not really new.  The concepts have been in use for many years; it’s just a case of bringing them into modern cloud-based scenarios.  Azure continues to invest in this concept so that we can increasingly lock down the control and use of the various cloud-based services it offers.  There are other “interesting” aspects to these scenarios though that can impact the availability and usefulness of it – things such as existing systems and identities, cross-tenant applications, etc.

It’s these interesting aspects that bring me to this post today.  At Office365Mon we are ALL about integration with Azure Active Directory and all of the goodness it provides.  We have also had, from day 1, the concept of allowing users from different tenants to work on a set of resources using a common security model.  That really means that when you create Office365Mon subscriptions, you can assign anyone with an Azure AD account as a subscription administrator.  It doesn’t require Azure B2C, or Azure B2B, or any kind of trust relationship between organizations.  It all “just works”, which is exactly what you want.  IMPORTANT DISCLAIMER:  the RBAC space frequently changes.  You should check in often to see what’s provided by the service and decide for yourself what works best for your scenario.

“Just works” in this case started out as a really binary type of operation – depending on who you were, you either had access or you didn’t.  There was no limited access based on one or more roles that you held.  As we got into more complex application scenarios it became increasingly important to develop some type of RBAC support, but there really wasn’t an out-of-the-box solution for us.  That led us to develop the fairly simple framework I call YARBACS, or “Yet Another RBAC Solution”.  It’s not a particularly complicated approach (I don’t think), so I thought I’d share the high-level details here in case it helps those of you living with the same sort of constraints that we have:  a large existing user base, everyone effectively a cloud user to us, no trust relationships with other Azure AD tenants, and the need to support a single resource (an Office365Mon subscription) being managed by users from any number of tenants and with varying sets of permissions.

With that in mind, these are the basic building blocks that we use for our YARBACS:

  1. Enable support for Roles in our Azure AD secured ASP.NET application
  2. Create application roles in our AD tenant that will be used for access control
  3. Store user and role relationships so they can be used in our application
  4. Add role attributes to views, etc.
  5. Use a custom MVC ActionFilter to properly populate users from different tenants with application roles from our Azure AD tenant

Let’s walk through each of these in a little more detail.

 

Enable Support for Roles

Enabling support for roles in your Azure AD secured application is something that has been explained very nicely by Dushyant Gill, who works as a PM on the Azure team.  You can find his blog post explaining this process in more detail here:  http://www.dushyantgill.com/blog/2014/12/10/roles-based-access-control-in-cloud-applications-using-azure-ad/.   Rather than trying to plagiarize or restate his post here, just go read his.  🙂   The net effect is that you will end up with code in the Startup.Auth.cs class in your ASP.NET project that looks something like this:

app.UseOpenIdConnectAuthentication(
    new OpenIdConnectAuthenticationOptions
    {
        ClientId = clientId,
        Authority = Authority,
        TokenValidationParameters = new System.IdentityModel.Tokens.TokenValidationParameters
        {
            //for role support
            RoleClaimType = "roles"
        },
        … //other unrelated stuff here

Okay, step 1 accomplished – your application supports the standard use of ASP.NET roles now – permission demands, IsInRole, etc.

 

Create Application Roles in Azure AD

This next step is also covered in Dushyant’s blog post mentioned above.  I’ll quickly recap here, but for complete steps and examples see his post.  Briefly, you’ll go to the application for which you want to use YARBACS and create application roles.  Download the app manifest locally and add roles for the application – as many as you want.  Upload the manifest back to Azure AD, and now you’re ready to start using them.  For your internal users, you can navigate to the app in the Azure management portal and click on the Users tab.  You’ll see all of the users that have used your app there.  You can select whichever user(s) you want to assign to an app role and then click on the Assign button at the bottom.  The trick of course is adding users to these groups (i.e. roles) when the user is not part of your Azure AD tenant.
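For reference, an appRoles entry in the downloaded manifest looks roughly like this – treat it as a sketch:  the display name and description are placeholders, the id must be a unique GUID you generate yourself, and the value property is the string that will show up in the “roles” claim (here it matches the MyAppAdmin role used in the examples later in this post):

"appRoles": [
  {
    "allowedMemberTypes": [ "User" ],
    "description": "Admins can manage all aspects of the subscription",
    "displayName": "MyApp Administrator",
    "id": "00000000-0000-0000-0000-000000000001",
    "isEnabled": true,
    "value": "MyAppAdmin"
  }
]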

 

Store User and Role Relationships for the Application

This is another step that requires no rocket science.  I’m not going to cover any implementation details here because it’s likely going to be different for every company in terms of how they implement it.  The goal is simple though – you need something – like a SQL database – to store the user identifier and the role(s) that user is associated with.

In terms of identifying your user in code so you can look up what roles are associated with them, I use the UPN.  When you’ve authenticated to Azure AD you’ll find the UPN in the ClaimsPrincipal.Current.Identity.Name property.  To be fair, it’s possible for a user’s UPN to change, so if you find that to be a scenario that concerns you then you should use something else.  As an alternative, I typically have code in my Startup.Auth.cs class that creates an AuthenticationResult as part of registering the application in the user’s Azure AD tenant after consent has been given.  You can always go back and look at doing something with the AuthenticationResult’s UserInfo.UniqueId property, which is basically just a GUID that identifies the user in Azure AD and never changes, even if the UPN does.
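As a quick illustration, both identifiers are easy to get at in code.  The object identifier claim shown here carries the same immutable GUID that UserInfo.UniqueId does, so you don’t have to hold on to the AuthenticationResult – consider this a sketch rather than the only way to do it:

//the UPN of the signed-in user, e.g. someone@contoso.com
string upn = ClaimsPrincipal.Current.Identity.Name;

//the immutable object ID also surfaces as a claim on the principal
Claim oid = ClaimsPrincipal.Current.FindFirst(
    "http://schemas.microsoft.com/identity/claims/objectidentifier");
string objectId = (oid != null) ? oid.Value : null;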

Now that you have the information stored, when we build our MVC ActionFilter we’ll pull this data out to plug into application roles.

 

Add Role Attributes to Views, Etc.

This is where the power of the ASP.NET permissions model and roles really comes into play.  In step 1 after you follow the steps in Dushyant’s blog, you are basically telling ASP.NET that anything in the claim type “roles” should be considered a role claim.  As such, you can start using it the way you would any other kind of ASP.NET role checking.  Here are a couple of examples:

  • As a PrincipalPermission demand – you can add a permission demand to a view in your controller (like an ActionResult) with a simple attribute, like this:  [PrincipalPermission(SecurityAction.Demand, Role = "MyAppAdmin")].  So if someone tries to access a view and they have not been added to the MyAppAdmin role (i.e. if they don’t have a “roles” claim with that value), then they will be denied access to that view.
  • Using IsInRole – you can use the standard ASP.NET method to determine if a user is in a particular role and then based on that make options available, hide UI, redirect the user, etc. Here’s an example of that:  if (System.Security.Claims.ClaimsPrincipal.Current.IsInRole("MyAppAdmin"))…  In this case if the user has the “roles” claim with a value of MyAppAdmin then I can do whatever – enable a feature, etc.

Those are just a couple of the simplest and most common examples, but as I said, anything you can do with ASP.NET roles, you can now do with this YARBACS.
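To make that concrete, here’s a minimal sketch of both checks working together in one hypothetical controller action (the action name and ViewBag property are made up for illustration):

[PrincipalPermission(SecurityAction.Demand, Role = "MyAppAdmin")]
public ActionResult AdminSettings()
{
    //the demand above already blocked anyone without the MyAppAdmin
    //role claim; IsInRole is still handy for toggling individual features
    ViewBag.ShowAdminTools =
        System.Security.Claims.ClaimsPrincipal.Current.IsInRole("MyAppAdmin");

    return View();
}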

 

Use a Custom ActionFilter to Populate Roles for Users

This is really where the magic happens – we look at an individual user and determine what roles we want to assign to them.   To start with, we create a new class and have it inherit from ActionFilterAttribute.  Then we override the OnActionExecuting method and plug in our code to look up the roles for the user and assign them.

Here’s what the skeleton of the class looks like:

public class Office365MonSecurity : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        //code here to assign roles
    }
}

Within the override, we plug in our code.  To begin with, this code isn’t going to do any good unless the user is already authenticated.  If they aren’t, I have no way to identify them and look up their roles (short of something like a cookie, which would be a really bad approach for many reasons).  So first I check to make sure the request is authenticated:

if (filterContext.RequestContext.HttpContext.Request.IsAuthenticated)
{
    //assign roles if user is authenticated
}

 

Inside this block, then, I can do my lookup and assign roles, because I know the user has been authenticated and I can identify them.  So here’s a little pseudocode to demonstrate:

SqlHelper sql = new SqlHelper();
List<string> roles = sql.GetRoles(ClaimsPrincipal.Current.Identity.Name);

foreach (string role in roles)
{
    Claim roleClaim = new Claim("roles", role);
    ClaimsPrincipal.Current.Identities.First().AddClaim(roleClaim);
}

 

So that’s a pretty good, generic example of how you can implement it.  Also, unlike application roles that you assign through the UI in the Azure management portal, you can assign multiple roles to a user this way.  It works because they just show up as role claims.  Here’s an example of some simple code to demonstrate:

 

//assign everyone to the app reader role
Claim readerClaim = new Claim("roles", "MyAppReader");
ClaimsPrincipal.Current.Identities.First().AddClaim(readerClaim);

//add a role I just made up
Claim madeUpClaim = new Claim("roles", "BlazerFan");
ClaimsPrincipal.Current.Identities.First().AddClaim(madeUpClaim);

This code actually demonstrates a couple of interesting things.  First, as I pointed out above, it adds multiple roles to the user.  Second, it demonstrates using a completely “made up” role.  What I mean by that is that the “BlazerFan” role does not exist in the list of application roles in Azure AD for my app.  Instead I just created it on the fly, but again it all works because it’s added as a standard Claim of type “roles”, which is what we’ve configured our application to use as a role claim.  Here’s a partial snippet of what my claims collection looks like after running through this demo code:

[Screenshot: the claims collection after running the demo code, including the “roles” claims for MyAppReader and BlazerFan]

To actually use the ActionFilter, I just need to add it as an attribute on the controller(s) where I’m going to use it.  Here’s an example – Office365MonSecurity is the class name of my ActionFilter:

[Office365MonSecurity]
public class SignupController : RoutingControllerBase

There you have it.  All of the pieces to implement your own RBAC solution when the current service offering is not necessarily adequate for your scenario.  It’s pretty simple to implement and maintain, and should support a wide range of scenarios.

Using the Office 365 Batch Operations API

As I was looking around for a way to batch certain operations with the Office 365 API the other day, I stumbled upon a Preview of just such a thing, called “Batch Outlook REST Requests (Preview)” – https://msdn.microsoft.com/en-us/office/office365/api/batch-outlook-rest-requests.  The fundamentals of how it works are fairly straightforward, but the documentation is completely lacking implementation details for those using .NET.  So, I decided to write a small sample application that demonstrates using this new API / feature / whatever you want to call it.

First, let’s figure out why you might want to use this.  The most common reason is that you are doing a bunch of operations and don’t want to go through the overhead of creating, establishing, and tearing down an HTTP session for each one.  That can slow things down quickly and burn up a lot of resources.  Now when I was first looking at this, I was also interested in how it might impact the throttling limits that Office 365 imposes.  Turns out I had a little misunderstanding of that, but fortunately Abdel B. and Venkat A. explained Exchange throttling to me, and so now I will share it with you.

My confusion about impact on throttling that batch operations might have was borne out of the fact that SharePoint Online has an API throttling limit that has been somewhat ubiquitously defined as no more than 1 REST call per second over an extended time.  So…kind of specific, but also a little vague.  Exchange Online throttling is arguably even less specific, but they do have some good information about how to know when it happens and what to do about it.

In Exchange Online, different operations may have a different impact on the system, and the system may also be impacted by demands from other clients.  So when making REST API calls to Exchange Online, your code should account for getting a throttling response back.  A throttled response in Exchange Online returns a standard HTTP status code 429 (Too Many Requests).  The service also returns a Retry-After header with the number of seconds after which to resubmit the request.  Now that you know what a throttled response from Exchange Online looks like, you can develop your code to include a process for retry and resubmission, along the lines of the sketch below.
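Here’s a minimal sketch of that retry logic.  The buildRequest parameter is a hypothetical factory you’d supply to recreate the request, since an HttpRequestMessage instance can’t be sent twice:

//send a request, and if Exchange Online throttles it (429), wait for
//the Retry-After interval and resubmit it once
async Task<HttpResponseMessage> SendWithRetryAsync(
    HttpClient hc, Func<HttpRequestMessage> buildRequest)
{
    HttpResponseMessage resp = await hc.SendAsync(buildRequest());

    if ((int)resp.StatusCode == 429)
    {
        //honor the Retry-After header, falling back to a few seconds
        TimeSpan wait = resp.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(5);
        await Task.Delay(wait);
        resp = await hc.SendAsync(buildRequest());
    }

    return resp;
}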

The batching feature lets you work around the overhead of multiple calls by allowing you to send in up to 20 operations in a single request.  That means 1 connection to create, establish and tear down instead of 20.  This is goodness.

The basic process of doing batch operations using this feature is to create what I’ll call a “container” operation.  In it, you will put all of the individual operations you want to perform against a particular mailbox.  Note that I said mailbox – this is important to remember for two reasons:  1) the batch feature only works today with Outlook REST APIs and 2) the individual operations should all target the same mailbox.  That makes sense as well when you consider that you have to authenticate to do these operations, and since they are all wrapped up in this “container” operation, you’re doing so in the context of that operation.

The “container” operation that I’m talking about is POST’ed to the $batch endpoint in Outlook:  https://outlook.office.com/api/beta/$batch.  The Url is hard-coded to the “beta” path for now because this API is still in preview.  In order for you to POST to the $batch endpoint you need to provide an access token in the authorization header, the same way as you would if you were making each of the individual calls contained in your container operation.  I’m not going to cover the process of getting an access token in this post because it’s not really in scope, but if you’re curious you can just look at the sample code included with this post or search my blog for many posts on that type of topic.

While I’m not going to cover getting an access token per se, it’s important to describe one higher level aspect of your implementation, which is to create an application in your Azure Active Directory tenant.  Generally speaking, you don’t access an Office 365 REST API directly; instead, you create an application and configure it with the permissions you need to execute the various Outlook REST APIs you’ll be using.  In my case, I wanted to be able to read emails, send emails and delete emails, so in my application I selected the following permissions:

[Screenshot: the Outlook API permissions selected for the application – reading, sending, and deleting mail]

So with that background, here are the basic steps you’ll go through; I’ll include more details on each one below:

  1. If you don’t have an access token, go get one.
  2. Create your “container” operation – this is a MultipartContent POST.
  3. Create your individual operations – add each one to your MultipartContent.
  4. POST the “container” operation to the $batch endpoint.
  5. Enumerate the results for each individual operation.

 

Step 1 – Get an Access Token

As I described above, I’m not going to cover this in great detail here.  Suffice it to say, you’ll need to create an application in Azure Active Directory as I briefly alluded to above.  As part of that, you’ll also need to do “standard Azure apps for Office 365” stuff in order to get an access token.  Namely, you’ll need to create a client secret, i.e. a “Key”, and copy it along with the client ID to your client application in order to convert the authorization code you get from Azure into an AuthenticationResult, which contains the access token.  This assumes you are using ADAL; if you are not, then you’ll have your own process to get the access token.

 

Step 2 – Create Your Container Operation

The “container” operation is really just a MultipartContent object that you’ll POST to the $batch endpoint.  Unfortunately, there is scarce documentation on how to create these, which is in large part why I wrote this post.  The code to get you started though is just this simple:

 

//create a new batch ID
string batchId = Guid.NewGuid().ToString();

//create the multipart content that is used for a batch process
MultipartContent mpc = new MultipartContent("mixed", "batch_" + batchId);

The main thing to note here is just that each “container” operation requires a unique batch identifier.  A Guid is perfect for this, so that’s what I’m using to identify my batch operation.

 

Step 3 – Create Individual Operations and Add to the Container Operation

The actual code you write here will vary somewhat, depending on what your operation is.  For example, a request to send an email message is going to be different from one to get a set of messages.  The basic set of steps though are similar:

  1. Create a new HttpRequestMessage. This is going to be how you define whether the individual operation is a GET, a POST, or something else, what Url to use, etc.  Here’s the code I used for the operation to send a new email:  HttpRequestMessage rqMsg = new HttpRequestMessage(HttpMethod.Post, BATCH_URI_BASE + "me/sendmail");  It’s worth noting that you ALWAYS send your individual operations to the $batch endpoint to be included in the batch process.  For example, if you were using v2 of the Outlook API, to send a message you would use the Url https://outlook.office.com/api/v2.0/me/sendmail.  However, to use the $batch endpoint, since it’s in beta, you use the Url https://outlook.office.com/api/beta/me/sendmail.
  2. Create the content for your operation. In my case I used a custom class I created to represent a mail message, I “filled it all out”, and then I serialized it to a JSON string.  I then took my string and created the content for the operation, like this:  StringContent sc = new StringContent(msgData, Encoding.UTF8, "application/json");  So in this case I’m saying I want some string content that is encoded as UTF8 and whose content type is application/json.
  3. Add your content to the HttpRequestMessage:  rqMsg.Content = sc;
  4. Wrap up your HttpRequestMessage into an instance of the HttpMessageContent class. Note that you’ll need to add a reference to System.Net.Http.Formatting in order to use this class.  Here’s what it looks like:  HttpMessageContent hmc = new HttpMessageContent(rqMsg);  We’re doing this so that we can set the appropriate headers on this operation when it’s executed as part of the batch.
  5. Set the headers on the HttpMessageContent object:  hmc.Headers.ContentType = new MediaTypeHeaderValue("application/http"); and also hmc.Headers.Add("Content-Transfer-Encoding", "binary");  You now have a single operation that you can add to the “container” operation.
  6. Add your individual operation to the “container” operation:  mpc.Add(hmc);  That’s it – now just repeat these steps for each operation you want to execute in your batch; the sketch after this list pulls all six steps together.
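Here are those six steps in one place for the send-mail case – a sketch assuming msgData already holds the JSON-serialized message, mpc is the “container” operation from step 2 of the overall process, and BATCH_URI_BASE is the https://outlook.office.com/api/beta/ base Url:

//1. define the individual operation as a POST to me/sendmail
HttpRequestMessage rqMsg = new HttpRequestMessage(
    HttpMethod.Post, BATCH_URI_BASE + "me/sendmail");

//2. create the JSON content for the operation…
StringContent sc = new StringContent(msgData, Encoding.UTF8, "application/json");

//3. …and add it to the request message
rqMsg.Content = sc;

//4. wrap the request so it can carry its own headers in the batch
//(requires a reference to System.Net.Http.Formatting)
HttpMessageContent hmc = new HttpMessageContent(rqMsg);

//5. set the headers for the wrapped operation
hmc.Headers.ContentType = new MediaTypeHeaderValue("application/http");
hmc.Headers.Add("Content-Transfer-Encoding", "binary");

//6. add the operation to the “container” operation
mpc.Add(hmc);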

Side note:  I realize some of this code may be difficult to follow when it’s intertwined with commentary like I’ve done above.  If you’re getting squinty-eyed, just download the ZIP file that accompanies this post and you can see all of the code end to end.

 

Step 4 – Post the Container Operation to the $Batch Endpoint

There’s not a lot to step 4.  You can just POST it now, but there’s one other point I want to make.  Your “container” operation may contain many individual operations.  There are a couple of points about that worth remembering.  First, the individual operations are not guaranteed to be performed in any specific order.  If you need them to be performed in a specific order, then either don’t do them in a batch or do them in separate batches.  Second, by default, at the point that any individual operation encounters an error, execution stops and no further operations in the batch will be executed.  However, you can override this behavior by setting a Prefer header in your “container” operation.  Here’s how you do that:

mpc.Headers.Add("Prefer", "odata.continue-on-error");

With that done (or not, depending on your requirements), you can go ahead and POST your “container” operation to the $batch endpoint, like this:

HttpResponseMessage hrm = await hc.PostAsync(BATCH_URI_BASE + "$batch", mpc);

Once the POST completes, it’s time to look at the results, which is covered in the next step.

 

Step 5 – Enumerate the Results for Each Individual Operation

At a high level, you can see if the overall batch operation worked the same way you would if it were just one operation:

if (hrm.IsSuccessStatusCode)

The important thing to understand though, is that even though the “container” POST may have worked without issue, one or more of the individual operations contained within may have had issues.  So how do you pull them all out to check?  Well, using the MultipartMemoryStreamProvider class is how I did it.  This is another class that requires a reference to System.Net.Http.Formatting in order to use, but you should already have it from the other steps above so that shouldn’t be a problem.

So we start out by getting all of the responses from each individual operation back like this:

MultipartMemoryStreamProvider responses = await hrm.Content.ReadAsMultipartAsync();

You can then enumerate over the array of HttpContent objects to look at the individual operations.  The code to do that looks like this:

for (int i = 0; i < responses.Contents.Count; i++)
{
    string results = await responses.Contents[i].ReadAsStringAsync();
}

It’s a little different from having an HttpResponseMessage for each one in that you have to do a little parsing.  For example, in my sample batch I sent two emails and then got the list of all of the emails in the inbox.  As I enumerate over the content for each one, here’s what ReadAsStringAsync returns for sending a message:

HTTP/1.1 202 Accepted

Okay, so you get to parse the return status code…should be doable.  It can get a little more cumbersome depending on the operation type.  For example, here’s what I got back when I asked for the list of messages in the inbox as part of the batch:

HTTP/1.1 200 OK
OData-Version: 4.0
Content-Type: application/json;odata.metadata=minimal;odata.streaming=true;IEEE754Compatible=false;charset=utf-8

{"@odata.context":"https://outlook.office.com/api/beta/$metadata#Me/MailFolders('Inbox')/Messages","value":[{"@odata.id":"https://outlook.office.com/api/beta/Users('05d6cc47-5a79-4906-88e6-c39fcd595e15@b098aeb9-ce11-43ce-a49f-ee4b5a4b0a71')/Messages('AAMkADEyMzQ3MzllLWM2NmItNGY2ZS04MWE1LTQwNjdiZDc1ZGYxNwBGAAAAAADRpmW4I…}}]}

Okay, so I trimmed a bunch of detail out of the middle there, but the gist is this – you have to parse out the HTTP status code that was returned, and then parse out where your data begins.  Both are quite doable; I just kind of hate having to do the 21st-century version of screen scraping, but it is what it is.  The net is you can at least go look at each and every individual operation you submitted and figure out whether they worked, retrieve and process data, etc.
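For what it’s worth, pulling the status code out of each raw response only takes a couple of lines.  ParseStatusCode here is just an illustrative helper I made up, not part of any library:

//extract the status code from the first line of a raw batch response,
//e.g. "HTTP/1.1 202 Accepted" returns 202
static int ParseStatusCode(string rawResponse)
{
    string statusLine = rawResponse.Split(
        new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries)[0];

    return int.Parse(statusLine.Split(' ')[1]);
}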

Summary

That’s the short tour of using the Outlook batch API.  There are a handful of things you need to know about how it works and what its limitations are, and I’ve pointed out all of the ones I know about in this post.  The trickier part by far is understanding how to create a batch request using the .NET Framework, as well as how to parse the results, and I covered both of those aspects as well.

As I mentioned a few times in this post, I just zipped up my entire sample project and have attached it to this post so you can download it and read through it to your heart’s content.  It does contain the details specific to my application in Azure AD, so you’ll need to create your own and then update the values in the app if you want to run this against your own tenant.  The ZIP file with the code is below:

 

Analyzing and Fixing Azure Web Sites with the SCM Virtual Directory

There are so many things you do every day as part of operating and maintaining your Azure web sites.  They’re a common target for developers because you get 10 free sites with your Azure subscription, and if you know what you’re doing you can spin that up into even more applications by using custom virtual directories as I’ve previously explained here:  https://samlman.wordpress.com/2015/02/28/developing-and-deploying-multiple-sharepoint-2013-apps-to-a-single-azure-web-site/.  That example is specific to using them for SharePoint Apps, but you can follow the same process to use them for standard web apps as well.

Typically, you go through your publishing and management process using two out-of-the-box tools – Visual Studio and the Azure browser management pages.  What happens though when you need to go beyond the simple deploy and configure features of these tools?  Yes, there are third-party tools out there that can help with these scenarios, but many folks don’t realize that there’s also a LOT that you can do with something that ships out of the box in Azure, which is the Kudu Services, or as I’ve called it above, the SCM virtual directory.

The SCM virtual directory is present in every Azure web site.  To access it, you merely insert “scm” between your web name and the host name.  For example, if you have an Azure web site at “contoso.azurewebsites.net”, then you would navigate to “contoso.scm.azurewebsites.net”.  Once you authenticate and get in, you’ll arrive at the home page for what they call the Kudu Services.  In this post I really just wanted to give you an overview of some of the features of the Kudu Services and how to find them, which I kind of just did.  🙂  At the end though I’ll include a link to more comprehensive documentation for Kudu.

Going back to my example, I found out about all of the tools and analysis available with the Kudu Services a few months ago when I was trying to publish an update to an Azure web site.  Try as I might, the deployment kept failing because it said a file in the deployment was being used by another process on the server.  Now of course, I don’t own the “server” in this case, because it’s an Azure server running the IIS service.  So that’s how I started down this path of “how am I gonna fix that” in Azure.  SCM came to the rescue.

To begin with, here’s a screenshot of the Kudu home page:

[Screenshot: the Kudu Services home page]

As you can see right off the bat, you get some basic information about the server and version on the home page.  The power of these features come as you explore some of the other menu options available.  When you hop over to the Environment link, you get a list of the System Info, App Settings, Connection Strings, Environment variables, PATH info, HTTP headers, and the ever popular Server variables.  As a long time ASP.NET developer I will happily admit that there have been many times when I’ve done a silly little enumeration of all of the Server variables when trying to debug some issue.  Now you can find them all ready to go for you, as shown in this screenshot:

[Screenshot: the Environment page, including the full list of server variables]

Now back to that pesky “file in use” problem I was describing above.  After trying every imaginable hack I could think of back then, I eventually used the “Debug console” in the Kudu Services.   These guys really did a nice job on this and offer both a Command prompt shell as well as a PowerShell prompt.  In my case, I popped open the Command prompt and quickly solved my issue.  Here’s an example:

[Screenshot: the Debug console command prompt]

One of the things that’s cool about this as well is that as I motored around the directory structure with my old-school DOS skills, i.e. “cd wwwroot”, the graphical display of the directory structure was kept in sync above the command prompt.  This really worked out magnificently; I had no idea how else I was going to get that issue fixed.

Beyond the tools I’ve shown already, there are several additional tools you will find, oddly enough, under the Tools menu.  Want to get the IIS logs?  No problem, grab the Diagnostic Dump.  You can also get a log stream, a dashboard of web jobs, a set of web hooks, the deployment script for your web site, and open a Support case.

Finally, you can also add Site Extensions to your web site.  There are actually a BUNCH of them that you can choose from.  Here’s the gallery from the Site Extensions menu:

[Screenshot: the Site Extensions gallery]

Of course, there’s many more than fit on this single screen shot.  All of the additional functionality and the ease with which you can access it is pretty cool though.  Here’s an example of the Azure Websites Event Viewer.  You can launch it from the Installed items in your gallery and it pops open right in the browser:

[Screenshot: the Azure Websites Event Viewer site extension running in the browser]

So that’s a quick overview of the tools.  I used them some time ago and then when I needed them a couple of months ago I couldn’t remember the virtual directory name.  I Bing’d my brains out unsuccessfully trying to find it, until it hit me when I looked at one of my site deployment scripts – they go to the SCM vdir as well.  Since I had such a hard time finding it I thought I would capture it here, and hopefully your favorite search engine will find enough keywords in this post to help you track it down when you need it.

Finally, for a full set of details around what’s in the Kudu Services, check out their GitHub wiki page at https://github.com/projectkudu/kudu/wiki.

The New Azure Converged Auth Model and Office 365 APIs

Microsoft is currently working on a new authentication and authorization model that is simply referred to as “v2” in the docs that they have available right now (UPDATE:  they prefer the “v2 auth endpoint”). I decided to spend some time taking a look around this model and in the process have been writing up some tips and other useful information and code samples to help you out as you start making your way through it. Just like I did with other reviews of the new Azure B2B and B2C features, I’m not going to try and explain to you exactly what they are because Microsoft already has a team of humans doing that for you. Instead I’ll work on highlighting other things you might want to know when you actually go about implementing this.

One other point worth noting – this stuff is still pre-release and will change, so I’m going to “version” my blog post here and I will update as I can. Here we go.

Version History

1.0 – Feb. 3, 2016

What it Does and Doesn’t Do

One of the things you should know first and foremost before you jump in is exactly what it does today, and more importantly, what it does not. I’ll boil it down for you like this – it does basic Office 365: email, calendar and contacts. That’s pretty much it today; you can’t even query Azure AD as it stands today. So if you’re building an Office app, you’re good to go.

In addition to giving you access to these Microsoft services, you can secure your own site using this new model. One of the other terms you will see used to describe the model (besides “v2”) is the “converged model”. It’s called the converged model really for two reasons:

  • You no longer need to distinguish between “Work and School Accounts” (i.e. fred@contoso.com) and “Microsoft Accounts” (i.e. fred@hotmail.com). This is probably the biggest achievement in the new model. Either type of account can authenticate and access your applications.
  • You have a single URL starting point to use for accessing Microsoft cloud services like email, calendar and contacts. That’s nice, but not exactly enough to make you run out and change your code up.

Support for different account types may be enough to get you looking at this. You will find a great article on how to configure your web application using the new model here: https://azure.microsoft.com/en-us/documentation/articles/active-directory-v2-devquickstarts-dotnet-web/. There is a corresponding article on securing Web API here: https://azure.microsoft.com/en-us/documentation/articles/active-directory-devquickstarts-webapi-dotnet/.

Once you have that set up, you can see here what it looks like to have both types of accounts available for sign in:

[Screenshot: the converged sign-in page listing both Work or School Accounts and a Microsoft Account]

The accounts with the faux blue badges are “Work or School Accounts” and the Microsoft logo account is the “Microsoft Account”. Equally interesting is the fact that you can remain signed in with multiple accounts at the same time. Yes! Bravo, Microsoft – well done.  🙂

Tips

Knowing now what it does and doesn’t do, it’s a good time to talk about some tips that will come in handy as you start building applications with it. A lot of what I have below is based on the fact that I found different sites and pages with different information as I was working through building my app. Some pages are in direct contradiction with each other, some are typos, and some are just already out of date. This is why I added the “version” info to my post, so you’ll have an idea about whether / when this post has been obsoleted.

 

Tip #1 – Use the Correct Permission Scope Prefix

You will see information about using permission scopes, which is something new to the v2 auth endpoint. In short, instead of defining the permissions your application requires in the application definition itself as you do with the current model, you tell Azure what permissions you require when you go through the process of acquiring an access token. A permission scope looks something like this: https://outlook.office.com/Mail.Read. This is the essence of the first tip – make sure you are using the correct prefix for your permission scope. In this case the permission prefix is “https://outlook.office.com/” and is used to connect to the Outlook API resource endpoint.  Alternatively, your app can also call the Microsoft Graph endpoint under https://graph.microsoft.com, in which case you should use the prefix https://graph.microsoft.com/ for your permission scope.  The net of this tip is fairly simple – make sure you are using the same host name for your permission scopes as you do for your API endpoints.  So if you want to read email messages using the API endpoint of https://outlook.office.com/api/v2.0/me/messages, make sure you use a permission scope of https://outlook.office.com/Mail.Read.  If you mix and match – use a permission scope with a prefix of https://graph.microsoft.com but try to access an API at https://outlook.office.com… – you will get an access token; however, when you try and use it you will get a 401 unauthorized response.

Also…I found one or more places where a prefix of “http://outlook.office.com/” was used; note that it is missing the “s” after “http”. This is another example of a typo, so just double-check if you are copying and pasting code from somewhere.

 

Tip #2 – Not All Permission Scopes Require a Prefix

After all the talk in the previous tip about scope prefixes, it’s important to note that not all permission scopes require a prefix. There are a handful that work today without one: “openid”, “email”, “profile”, and “offline_access”. To learn what each of these permission scopes grants, you can see this page: https://azure.microsoft.com/en-us/documentation/articles/active-directory-v2-compare/#scopes-not-resources.  One of the more annoying things in reviewing the documentation is that I didn’t see a simple example that demonstrated exactly what these scopes should look like. So…here’s an example of the scopes and how they’re used to request a code, which you can use to get an access token (and a refresh token if you include the “offline_access” permission – see, I snuck that explanation in there). Note that they are “delimited” with a space between each one:

string mailScopePrefix = "https://outlook.office.com/";

string scopes =
    "openid email profile offline_access " +
    mailScopePrefix + "Mail.Read " + mailScopePrefix + "Contacts.Read " +
    mailScopePrefix + "Calendars.Read ";

string authorizationRequest = String.Format(
    "https://login.microsoftonline.com/common/oauth2/v2.0/authorize?response_type=code+id_token&client_id={0}&scope={1}&redirect_uri={2}&state={3}&response_mode=form_post&nonce={4}",
    Uri.EscapeDataString(APP_CLIENT_ID),
    Uri.EscapeDataString(scopes),
    Uri.EscapeDataString(redirUri),
    Uri.EscapeDataString("Index;" + scopes + ";" + redirUri),
    Uri.EscapeDataString(Guid.NewGuid().ToString()));

//return a redirect response
return new RedirectResult(authorizationRequest);

Also worth noting is that the response_type in the example above is “code+id_token”; if you don’t include the “openid” permission then Azure will return an error when it redirects back to your application after authentication.

When you log in you see the standard sort of Azure consent screen that models the permission scopes you asked for:

[Screenshot: the consent screen shown for a Work or School Account]

Again, worth noting, the consent screen actually will look different based on what kind of account you’ve authenticated with. The image above is from a “Work or School Account”. Here’s what it looks like when you use a “Microsoft Account” (NOTE: it includes a few more permission scopes than the image above):

[Screenshot: the consent screen shown for a Microsoft Account, with a few additional permission scopes]

For completeness, since I’m sure most folks today use ADAL to get an access token from a code, here’s an example of how you can take the code that was returned and do a POST yourself to get back a JWT token that includes the access token:

public ActionResult ProcessCode(string code, string error, string error_description, string resource, string state)
{
    string viewName = "Index";

    if (!string.IsNullOrEmpty(code))
    {
        string[] stateVals = state.Split(";".ToCharArray(),
            StringSplitOptions.RemoveEmptyEntries);

        viewName = stateVals[0];
        string scopes = stateVals[1];
        string redirUri = stateVals[2];

        //create the collection of values to send to the POST
        List<KeyValuePair<string, string>> vals =
            new List<KeyValuePair<string, string>>();

        vals.Add(new KeyValuePair<string, string>("grant_type", "authorization_code"));
        vals.Add(new KeyValuePair<string, string>("code", code));
        vals.Add(new KeyValuePair<string, string>("client_id", APP_CLIENT_ID));
        vals.Add(new KeyValuePair<string, string>("client_secret", APP_CLIENT_SECRET));
        vals.Add(new KeyValuePair<string, string>("scope", scopes));
        vals.Add(new KeyValuePair<string, string>("redirect_uri", redirUri));

        string loginUrl = "https://login.microsoftonline.com/common/oauth2/v2.0/token";

        //make the request
        HttpClient hc = new HttpClient();

        //form encode the data we're going to POST
        HttpContent content = new FormUrlEncodedContent(vals);

        //plug in the post body
        HttpResponseMessage hrm = hc.PostAsync(loginUrl, content).Result;

        if (hrm.IsSuccessStatusCode)
        {
            //get the raw token
            string rawToken = hrm.Content.ReadAsStringAsync().Result;

            //deserialize it into this custom class I created
            //for working with the token contents
            AzureJsonToken ajt =
                JsonConvert.DeserializeObject<AzureJsonToken>(rawToken);
        }
        else
        {
            //some error handling here
        }
    }
    else
    {
        //some error handling here
    }

    return View(viewName);
}

One note – I’ll explain more about the custom AzureJsonToken class below.  To reiterate Tips #1 and #3, since my permission scope is based on the host name outlook.office.com, I can only use the access token from this code with the Outlook API endpoint, such as https://outlook.office.com/api/v1.0/me/messages or https://outlook.office.com/api/v2.0/me/messages.  Yes, that is not a typo – you should be able to use the access token against either version of the API.

 

Tip #3 – Use the Correct Service Endpoint

Similar to the first tip, I saw examples of articles that talked about using the v2 auth endpoint but used a Graph API permission scope with an Outlook API endpoint. So again, to be clear, make sure your host names are consistent between the permission scopes and API endpoints, as explained in Tip #1.

 

Tip #4 – Be Aware that Microsoft Has Pulled Out Basic User Info from Claims

One of the changes that Microsoft made in v2 of the auth endpoint is that they have arbitrarily decided to pull out most meaningful information about the user from the token that is returned to you. They describe this here – https://azure.microsoft.com/en-us/documentation/articles/active-directory-v2-compare/:

In the original Azure Active Directory service, the most basic OpenID Connect sign-in flow would provide a wealth of information about the user in the resulting id_token. The claims in an id_token can include the user’s name, preferred username, email address, object ID, and more.

We are now restricting the information that the openid scope affords your app access to. The openid scope will only allow your app to sign the user in, and receive an app-specific identifier for the user. If you want to obtain personally identifiable information (PII) about the user in your app, your app will need to request additional permissions from the user.

This is sadly classic Microsoft behavior – we know what’s good for you better than you know what’s good for you – and this horse has already left the barn; you can get this info today using the v1 auth endpoint. Never one to be deterred by logic however, they are pressing forward with this plan. That means more work for you, the developer. What you’ll want to do in order to get basic user info is a couple of things:

  1. Include the “email” and “profile” scopes when you are obtaining your code / token.
  2. Ask for both a code and id_token when you redirect to Azure asking for a code (as shown in the C# example above).
  3. Extract out the id_token results into a useable set of claims after you exchange your code for a token.

On the last point, the Microsoft documents say that they provide a library for doing this, but did not include any further detail about what it is or how you use it, so that was only marginally helpful. But hey, this is a tip, so here’s a little more. Here’s how you can extract out the contents of the id_token:

  1. Add the System.IdentityModel.Tokens.Jwt NuGet package to your application.
  2. Create a new JwtSecurityToken instance from the id_token. Here’s a sample from my code:

JwtSecurityToken token = new JwtSecurityToken(ajt.IdToken);

  3. Use the Payload property of the token to find the individual user attributes that were included. Here’s a quick look at the most useful ones:

token.Payload["name"] //display name
token.Payload["oid"] //object ID
token.Payload["preferred_username"] //upn usually
token.Payload["tid"] //tenant ID

As noted above in my description of the converged model, though, remember that not every account type will have every value. You need to account for circumstances where the Payload contents are different based on the type of account.
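Since the Payload property behaves like a dictionary, one simple pattern – just a sketch – is to probe for each claim instead of indexing it directly, so a missing value comes back as null rather than throwing:

//guard against claims that are absent for some account types
object tidValue;
string tenantId = token.Payload.TryGetValue("tid", out tidValue)
    ? (string)tidValue : null;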

 

Tip #5 – Feel Free to Use the Classes I Created for Deserialization

I created some classes for deserializing the JSON returned from various service calls into objects that can be used for referring to things like the access token, refresh token, id_token, messages, events, contacts, etc. Feel free to take the examples attached to this posting and use them as needed.

Microsoft is updating its helper libraries to work with Office 365 objects and v2 of the app model, so you can use those as well; there is also a pre-release version of ADAL that you can use to work with access tokens, etc. For those of you rolling your own, however (or who aren’t ready to fully jump onto the pre-release bandwagon), here are some code snippets showing how I used them:

//to get the access token, refresh token, and id_token
AzureJsonToken ajt =
JsonConvert.DeserializeObject<AzureJsonToken>(rawToken);

This next set is from some REST endpoints I created to test everything out:

//to get emails
Emails theMail = JsonConvert.DeserializeObject<Emails>(data);

//to get events
Events theEvent = JsonConvert.DeserializeObject<Events>(data);

//to get contacts
Contacts theContact = JsonConvert.DeserializeObject<Contacts>(data);

You end up with a nice little dataset on the client this way. Here’s an example of the basic email details that comes down to my client side script:

[Screenshot: the basic email details returned to the client-side script]

Finally, here’s it all pulled together, in a UI that only a Steve could love – using the v2 app model, some custom REST endpoints and a combination of jQuery and Angular to show the results (yes, I really did combine jQuery and Angular, which I know Angular people hate):

Emails

[Screenshot: emails rendered in the sample app]

Events

[Screenshot: events rendered in the sample app]

Contacts

[Screenshot: contacts rendered in the sample app]

Summary

The new auth endpoint is out and available for testing now, but it’s not production ready. There’s actually a good amount of documentation out there – hats off to dstrockis at Microsoft, the writer who’s been pushing out all the very good content. There are also a lot of mixed and conflicting documents though, so try and follow some of the tips included here. The attachment to this posting includes sample classes and code so you can give it a spin yourself, but make sure you start out by at least reading about how to set up your new v2 applications here: https://azure.microsoft.com/en-us/documentation/articles/active-directory-v2-app-registration/. You can also see all the Getting Started tutorials here: https://azure.microsoft.com/en-us/documentation/articles/active-directory-appmodel-v2-overview/#getting-started. Finally, one last article that I recommend reading (short but useful): https://msdn.microsoft.com/en-us/office/office365/howto/authenticate-Office-365-APIs-using-v2.
You can download the attachment here:

Using Roles in Azure Applications

I was spending some time today (finally) looking at how to get what I really consider the baseline functionality of claims – apps, users and roles – all working together with one of my Azure AD apps.  Azure has been pushing out pieces of an RBAC-based infrastructure for a few months now, and I wanted to see about integrating my old friend Roles into my app.  Alex Simmons posted a great overview and introduction to the concept here:  http://blogs.technet.com/b/ad/archive/2014/12/18/azure-active-directory-now-with-group-claims-and-application-roles.aspx.  As I began rolling out some code and testing some different scenarios, I wanted to capture a few notes about this process for those of you heading down this path as well:

  • It’s important to distinguish and understand that you can use two different types of “roles” – the Azure AD groups you belong to, as well as custom application roles you create for your application.  Technically you can also mix and match them, by adding AD groups to an application role, but that’s a little outside the scope of where I want to take this.
  • One of the big differences in using Azure AD groups instead of an application role is that your application has to also get consent for rights to Read the Directory.
  • Dushyant Gill put together a really excellent blog post on adding application roles to your Azure AD app here:  http://www.dushyantgill.com/blog/2014/12/10/roles-based-access-control-in-cloud-applications-using-azure-ad/.  You really should take a look at that before you start building this yourself.
  • One thing that is NOT obvious at all – if you only add one role, then you won’t get a role picker when you assign users to the role in your app.  Instead when you click on the Assign User link it will just “do it” without any other dialog.  The behavior is a little odd because you’re not really sure what happened when you first do it.  If you create more than one app role then you do get the picker to select which role.
  • You can only assign a person to one role unless you have Azure AD Premium.
  • You really MUST make the change to your Startup.Auth.cs file to configure what claim type should be considered the “Role” claim type.  If you don’t do this, none of the standard role checks work because the claim value is not contained in a standard role type – the type it uses is either “groups” (if using Azure AD groups) or “roles” (if using app roles).  The exact syntax for this change is at the end of Dushyant’s blog mentioned above, in the TokenValidationParameters code snippet (specifically the RoleClaimType attribute).
  • Once you have this all integrated you can use all my favorite role checking and demand options, like:
    • [PrincipalPermission(SecurityAction.Demand, Role = "yourRoleName")]
    • [Authorize(Roles = "yourRoleName")]
    • if (ClaimsPrincipal.Current.IsInRole("yourRoleName")) { //do something }

It’s good stuff, try it out!  🙂

Access Denied Error with App Only Access Token When Reading Profile Info

This is yet another rather strange error that I ran across and couldn’t find any info out on the interwebs about, so I thought I would document it here.  Suppose you have a SharePoint App that needs to access some User Profile information.  You will probably use the PeopleManager class and ask for user profile properties using the PersonProperties class or one of the methods off of the PeopleManager class.  You write your code up using the standard TokenHelper semantics to get a user + app token to retrieve this information, i.e. something like var clientContext = spContext.CreateUserClientContextForSPHost().  In your AppManifest file you ask for (at a minimum) Read rights to User Profiles (Social).   Works great, okay, good start.

Now you determine that you need to retrieve that same information but use an App Only token.  So you use whatever method you want to get an App Only token.  You use the same code, but now you get an Access Denied error message.  Why is that – App Only tokens are supposed to have the same or greater rights than user + app tokens.  Well…for right now…I don’t know why not.  NOTE:  I DO understand needing to be a tenant admin to install an app that requires access to User Profiles, but this is different; it happens after the app is installed.  But I do know how I fixed it.  I added Tenant…Read rights to my AppManifest file.  Now my App Only token is able to read properties from the User Profile in o365.  Just thought I would share this “not at all obvious” tip so that if you get stuck hopefully your favorite search engine will find this post.  Happy coding!
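For reference, here’s roughly what that combination looks like in the AppManifest’s permission requests.  Treat it as a sketch and verify the scopes against your own manifest – the social scope is the User Profiles (Social) Read right I started with, the tenant scope is the Tenant Read right that unblocked the App Only token, and AllowAppOnlyPolicy is what permits app-only calls in the first place:

<AppPermissionRequests AllowAppOnlyPolicy="true">
  <AppPermissionRequest Scope="http://sharepoint/social/tenant" Right="Read" />
  <AppPermissionRequest Scope="http://sharepoint/content/tenant" Right="Read" />
</AppPermissionRequests>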

More Info on an Old Friend – the Custom Claims Provider, Uri Parameters and EntityTypes in SharePoint 2013

Back to an oldie but a goodie – the custom claims provider for SharePoint.  I believe this applies to SharePoint 2010 as well, but honestly I have only tested what I’m about to describe on SharePoint 2013 and don’t have the bandwidth to go back and do a 2010 test as well.  What I wanted to describe today are the values you may expect to get, and the values you actually get, in the custom claims provider methods FillClaimsForEntity, FillResolve and FillSearch.  Chances are they may not be what you expect.

  • FillClaimsForEntity – this method is invoked at login time and is used for claims augmentation.  The Uri parameter that you get (“context”) will NOT tell you what site a person is authenticating into.  The value you will get back is the root of the web application zone.  So if a user is trying to get into https://www.contoso.com/sites/accounting, the value for the “context” parameter will be https://www.contoso.com.
  • FillResolve – this method has two different overrides, but both include a Uri parameter (“context”).  The value you get for this parameter varies.  When you’re in a site collection the Uri will be for the site collection you are in.  When you are in central admin and you are invoked when picking a site collection administrator, “it varies”.  When you are first selecting a site collection administrator, the Uri value will be of the root site collection again – NOT the site collection you are trying to set the admin for.  Unfortunately, this gets a bit more convoluted yet – when you click on the OK button to save your changes, the FillResolve method will get invoked again.  This time however, the Uri value WILL BE the actual site collection Url for which you just changed the site collection admin.
  • FillSearch – this method is invoked when someone is doing a search in people picker.  It exhibits the same behavior as FillResolve above – when you’re in a site collection you get the correct site collection Uri.  When you’re in central admin and you are trying to search for an entity to add as a site collection admin, the value for the Uri parameter is the root site collection – NOT the site collection for which you are setting the admin.

All of the details above apply to the Uri parameter in those methods.  There’s one other thing to be aware of as well however, and that’s the array of entity types that is provided in these methods.  The string[] entityTypes parameter is important to understand because it lets you know what type of result (i.e. PickerEntity) you should return when FillResolve or FillSearch is invoked.  The main scenario where it matters is when you are setting the site collection administrator.  To this day, SharePoint has a limitation that only an individual user (or better stated – only an identity claim) can be added as a site collection administrator.  What I’ve found is that when you are trying to set the site collection administrator and your custom claims provider is invoked from within central admin, the entityTypes array correctly contains only one value – SPClaimEntityTypes.User.  HOWEVER…if you are in a site collection and you try to change the site collection administrators from within it, the entityTypes array contains five different entity types instead of just User.  At this point I don’t know of a good way to distinguish that from a legitimate request within the site collection where all of those entity types would be appropriate – such as when you’re adding a user / claim to a SharePoint group or permission level.  A sketch of checking the array follows below.
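Here’s a minimal sketch of using the entityTypes array in one of the FillResolve overrides (the actual lookup logic is elided, and the Contains call assumes a using for System.Linq):

protected override void FillResolve(Uri context, string[] entityTypes,
    string resolveInput, List<PickerEntity> resolved)
{
    //only identity claims are valid when picking a site collection admin,
    //so do nothing unless the User entity type was requested
    if (!entityTypes.Contains(SPClaimEntityTypes.User))
        return;

    //normal lookup logic here – create and add PickerEntity results
}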

I don’t know if or how much any of this may change in the future (i.e. whether they’re considered bugs or not), so for now I just wanted to document the behavior, so that as you’re creating your designs for your farm implementations you know exactly what info you’ll have within your custom claims provider to help make fine-grained security decisions.