YARBACS or Yet Another RBAC Solution

RBAC is all the rage (as it should be) these days, but it’s not really new.  The concepts have been in use for many years; it’s just a case of bringing them into modern cloud-based scenarios.  Azure continues to invest in this concept so that we can increasingly lock down the control and use of the various cloud-based services they offer.  There are other “interesting” aspects to these scenarios though that can impact the availability and usefulness of it – things such as existing systems and identities, cross-tenant applications, etc.

It’s these interesting aspects that bring me to this post today.  At Office365Mon we are ALL about integration with Azure Active Directory and all of the goodness it provides.  We have also had, from day 1, the concept of allowing users from different tenants to work on a set of resources using a common security model.  That really means that when you create Office365Mon subscriptions, you can assign subscription administrators to anyone with an Azure AD account.  It doesn’t require Azure B2C, or Azure B2B, or any kind of trust relationship between organizations.  It all “just works”, which is exactly what you want.  IMPORTANT DISCLAIMER:  the RBAC space frequently changes.  You should check in often to see what’s provided by the service and decide for yourself what works best for your scenario.

“Just works” in this case started out being a really binary type of operation – depending on who you were, you either had access or you didn’t.  There was no limited access based on one or more roles that you had.  As we got into more complex application scenarios it became increasingly important to develop some type of RBAC support, but there really wasn’t an out-of-the-box solution for us.  That led us to develop this fairly simple framework of YARBACS as I call it, or “Yet Another RBAC Solution”.  It’s not a particularly complicated approach (I don’t think), so I thought I’d share the high-level details here in case it may help those of you who are living with the same sort of constraints that we have:  a large existing user base, everyone effectively a cloud user to us, no trust relationship of any kind with other Azure AD tenants, and the need to support a single resource (an Office365Mon subscription) being managed by users from any number of tenants, with varying sets of permissions.

With that in mind, these are the basic building blocks that we use for our YARBACS:

  1. Enable support for Roles in our Azure AD secured ASP.NET application
  2. Create application roles in our AD tenant that will be used for access control
  3. Store user and role relationships so that they can be used in our application
  4. Add role attributes to views, etc.
  5. Use a custom MVC ActionFilter to properly populate users from different tenants with application roles from our Azure AD tenant

Let’s walk through each of these in a little more detail.


Enable Support for Roles

Enabling support for roles in your Azure AD secured application is something that has been explained very nicely by Dushyant Gill, who works as a PM on the Azure team.  You can find his blog post explaining this process in more detail here:  http://www.dushyantgill.com/blog/2014/12/10/roles-based-access-control-in-cloud-applications-using-azure-ad/.  Rather than trying to plagiarize or restate his post here, just go read his.  The net effect is that you will end up with code in your Startup.Auth.cs class in your ASP.NET project that looks something like this:


new OpenIdConnectAuthenticationOptions
{
    ClientId = clientId,
    Authority = Authority,
    TokenValidationParameters = new System.IdentityModel.Tokens.TokenValidationParameters
    {
        //for role support
        RoleClaimType = "roles"
    },
    //... other unrelated stuff here
}

Okay, step 1 accomplished – your application supports the standard use of ASP.NET roles now – permission demands, IsInRole, etc.


Create Application Roles in Azure AD

This next step is also covered in Dushyant’s blog post mentioned above.  I’ll quickly recap here, but for complete steps and examples see Dushyant’s blog post.  Briefly, you’ll go to the application for which you want to use YARBACS and create application roles.  Download the app manifest locally and add roles for the application – as many as you want.  Upload the manifest back to Azure AD, and now you’re ready to start using them.  For your internal users, you can navigate to the app in the Azure management portal and click on the Users tab.  You’ll see all of the users that have used your app there.  You can select whichever user(s) you want to assign to an app role and then click on the Assign button at the bottom.  The trick of course is now adding users to these groups (i.e. roles) when the user is not part of your Azure AD tenant.
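For reference, an app role in the downloaded manifest is just an entry in the appRoles collection.  A sketch of one is below – the id GUID, names and description are made up for illustration, and the exact manifest schema can change as the portal evolves, so treat this as a shape rather than something to paste verbatim:

```json
"appRoles": [
  {
    "allowedMemberTypes": [ "User" ],
    "description": "Admins have full control of the app",
    "displayName": "MyAppAdmin",
    "id": "a1b2c3d4-0000-4000-8000-1234567890ab",
    "isEnabled": true,
    "value": "MyAppAdmin"
  }
],
```

The value property is what ends up in the “roles” claim when a token is issued for an assigned user.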


Store User and Role Relationships for the Application

This is another step that requires no rocket science.  I’m not going to cover implementation details here because they’re likely to be different for every company, in terms of how and what they implement.  The goal here is simple though – you need something – like a SQL database – to store the user identifier and the role(s) that user is associated with.

In terms of identifying your user in code so you can look up what roles are associated with it, I use the UPN.  When you’ve authenticated to Azure AD you’ll find the UPN is in the ClaimsPrincipal.Current.Identity.Name property.  To be fair, it’s possible for a user’s UPN to change, so if you find that to be a scenario that concerns you then you should use something else.  As an alternative, I typically have code in my Startup.Auth.cs class that creates an AuthenticationResult as part of registering the application in the user’s Azure AD tenant after consent has been given.  You can always go back and look at doing something with the AuthenticationResult’s UserInfo.UniqueId property, which is basically just a GUID that identifies the user in Azure AD and never changes, even if the UPN does.
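To make that concrete, here’s a minimal sketch of such a lookup, matching the shape used later in this post.  The SqlHelper class name, the UserRoles table, its columns and the connection string are all hypothetical – use whatever storage makes sense for you:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

public class SqlHelper
{
    //hypothetical connection string; pull this from config in real code
    private const string ConnectionString =
        "Server=.;Database=MyAppDb;Integrated Security=true";

    //look up the role names stored for a user, keyed here by UPN
    public List<string> GetRoles(string userUpn)
    {
        List<string> roles = new List<string>();

        using (SqlConnection con = new SqlConnection(ConnectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT RoleName FROM UserRoles WHERE UserUpn = @upn", con))
        {
            cmd.Parameters.AddWithValue("@upn", userUpn);
            con.Open();

            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    roles.Add(rdr.GetString(0));
                }
            }
        }

        return roles;
    }
}
```

If you go with the UserInfo.UniqueId GUID instead of the UPN, the only change is what you key the table on.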

Now that you have the information stored, when we build our MVC ActionFilter we’ll pull this data out to plug into application roles.


Add Role Attributes to Views, Etc.

This is where the power of the ASP.NET permissions model and roles really comes into play.  In step 1 after you follow the steps in Dushyant’s blog, you are basically telling ASP.NET that anything in the claim type “roles” should be considered a role claim.  As such, you can start using it the way you would any other kind of ASP.NET role checking.  Here are a couple of examples:

  • As a PrincipalPermission demand – you can add a permission demand to a view in your controller (like an ActionResult) with a simple attribute, like this:  [PrincipalPermission(SecurityAction.Demand, Role = "MyAppAdmin")].  So if someone tries to access a view and they have not been added to the MyAppAdmin role (i.e. if they don’t have a “roles” claim with that value), then they will be denied access to that view.
  • Using IsInRole – you can use the standard ASP.NET method to determine if a user is in a particular role and then, based on that, make options available, hide UI, redirect the user, etc.  Here’s an example of that:  if (System.Security.Claims.ClaimsPrincipal.Current.IsInRole("MyAppAdmin"))…  In this case, if the user has the “roles” claim with a value of MyAppAdmin then I can do whatever – enable a feature, etc.

Those are just a couple of the simplest and most common examples, but as I said, anything you can do with ASP.NET roles, you can now do with this YARBACS.
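Putting those two together, a controller using YARBACS roles might look something like this – the controller, view and role names here are just for illustration:

```csharp
using System.Security.Claims;
using System.Security.Permissions;
using System.Web.Mvc;

public class ReportsController : Controller
{
    //only users with a "roles" claim of MyAppAdmin can reach this view
    [PrincipalPermission(SecurityAction.Demand, Role = "MyAppAdmin")]
    public ActionResult AdminDashboard()
    {
        return View();
    }

    public ActionResult Index()
    {
        //light up admin-only UI for admins; everyone else gets read-only
        if (ClaimsPrincipal.Current.IsInRole("MyAppAdmin"))
        {
            ViewBag.ShowAdminLinks = true;
        }

        return View();
    }
}
```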


Use a Custom ActionFilter to Populate Roles for Users

This is really where the magic happens – we look at an individual user and determine what roles we want to assign to them.   To start with, we create a new class and have it inherit from ActionFilterAttribute.  Then we override the OnActionExecuting method and plug in our code to look up the roles for the user and assign them.

Here’s what the skeleton of the class looks like:

public class Office365MonSecurity : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        //code here to assign roles
    }
}
Within the override, we plug in our code.  To begin with, this code isn’t going to do any good unless the user is already authenticated.  If they aren’t, then I have no way to identify them and look up their roles (short of something like a cookie, which would be a really bad approach for many reasons).  So first I check to make sure the request is authenticated:

if (filterContext.RequestContext.HttpContext.Request.IsAuthenticated)
{
    //assign roles if user is authenticated
}

Inside this block, I can do my lookup and assign roles, because I know the user has been authenticated and I can identify them.  So here’s a little pseudo code to demonstrate:

SqlHelper sql = new SqlHelper();
List<string> roles = sql.GetRoles(ClaimsPrincipal.Current.Identity.Name);

foreach (string role in roles)
{
    Claim roleClaim = new Claim("roles", role);

    //add the new role claim to the current user's claims identity
    ((ClaimsIdentity)ClaimsPrincipal.Current.Identity).AddClaim(roleClaim);
}

So that’s a pretty good and generic example of how you can implement it.  Also, unlike application roles that you define through the UI in the Azure management portal, you can assign multiple roles to a user this way.  It works because they just show up as role claims.  Here’s an example of some simple code to demonstrate:


//assign everyone to the app reader role
Claim readerClaim = new Claim("roles", "MyAppReader");
((ClaimsIdentity)ClaimsPrincipal.Current.Identity).AddClaim(readerClaim);

//add a role I just made up
Claim madeUpClaim = new Claim("roles", "BlazerFan");
((ClaimsIdentity)ClaimsPrincipal.Current.Identity).AddClaim(madeUpClaim);
This code actually demonstrates a couple of interesting things.  First, as I pointed out above, it adds multiple roles to the user.  Second, it demonstrates using a completely “made up” role.  What I mean by that is that the “BlazerFan” role does not exist in the list of application roles in Azure AD for my app.  Instead I just created it on the fly, but again it all works because it’s added as a standard Claim of type “roles”, which is what we’ve configured our application to use as a role claim.  After running through this demo code, the user’s claims collection contains both of those “roles” claims, alongside the standard claims issued by Azure AD.


To actually use the ActionFilter, I just need to add it as an attribute on the controller(s) where I’m going to use it.  Here’s an example – Office365MonSecurity is the class name of my ActionFilter:

[Office365MonSecurity]
public class SignupController : RoutingControllerBase

There you have it.  All of the pieces to implement your own RBAC solution when the current service offering is not necessarily adequate for your scenario.  It’s pretty simple to implement and maintain, and should support a wide range of scenarios.

Bug Alert for April CU and Migrating Users

Just heard about a nasty little bug in the April CU from my friend Syed.  He was using the SPWebApplication.MigrateUsers method to migrate accounts from one claim value to another (i.e. like if you were migrating from Windows claims to SAML claims, or in his case, changing identity claim values).  Turns out after doing the migration the actual claim value in the content databases was not getting updated.  He managed to track this down to a bug in the April CU, but found that it was fixed again in the June CU.  So…if you are doing any kind of claims migrations – including updating an identity claim value – make sure that you are NOT on the April CU, but instead apply one after that.

Thanks Syed for passing this info along.

SAML in a Box – Inviting External Users to Your SharePoint Farm

Those of you who follow the Share-n-Dipity blog know that I don’t really do much in the way of product endorsements.  However, you have probably also figured out that I’m a big SAML fan, so when a friend of mine recently released a new product for this market it really caught my eye.  If you’ve followed the blog for a while you may have seen me reference a friend of mine by the name of Israel Vega, Jr.  He worked with me at Microsoft for many years and last year decided to head out on his own.  “Iz”, as we call him, has done a ton of work with SharePoint over the years and was really one of the go-to guys for it in Microsoft Services when he left.

He has just released a new product called CloudExtra.NET, which is obviously a play on “cloud extranet”.  What I really like about his solution is that it’s SAML in a box – basically a turn-key product that brings a lot of things customers frequently ask me about to a packaged product.  He uses SAML authentication and a custom claims provider to enable you to use any number of cloud-based directories out of the box.  That includes Azure Active Directory for internal users, and anything you can connect to ACS for your external users.  That part is pretty cool because he has basically built “invited users” for on-premises SharePoint farms, a feature that currently is only available in Office 365.  For those of you who aren’t ready to move there yet, this is a pretty slick way to get it in your own farms now too.

He’s added some nice features that round out his solution.  He has a Terms and Conditions agreement for it so you can make invited users agree to your terms of use before gaining access to your farm.  He also has a product that’s cool in its own right and is bundled with CloudExtra.NET, which is a business impact application.  It lets you assign an impact rating to sites (like High, Medium and Low) and then also use that to apply a site policy to it when you do so.  He ties it all together with farm-wide reporting on sites based on business impact.  There are also a bunch of reporting and management features built in for invited users that make it a very solid offering.

I gleaned all this from about a 30 minute demo that Iz gave me, and I’m hoping to spend some time in the not too distant future doing a hands-on review of the product to check it out for myself.  If it sounds like something that might be of interest to you, then you should go visit his web site at http://cloudextra.net.  Thanks Iz for the SAML love!

Remote SharePoint Index, Hybrid, and SAML

Today’s post combines a few different topics that, dare I say, started as a dare (I think).  The gauntlet thrown down was whether or not Remote SharePoint Index would work for SAML users as well as Windows users.  For those of you not completely familiar with Remote SharePoint Index, I covered it in some detail in my blog post here:  https://samlman.wordpress.com/2015/03/01/setting-up-an-oauth-trust-between-farms-in-sharepoint-2013/.  The gist of it is that you set up an OAuth trust between two SharePoint farms, and it lets you get search results from both farms into a single search center results page.  All of our examples in this area have been around using Windows users, but when the question was posed my best guess was “yeah, seems like that should work”.  As it turns out, it really does.  In order to get this to work it requires the following:

  • Two SharePoint farms
  • Both farms have an SPTrustedIdentityTokenIssuer configured (ideally using the same Name, but in theory should not be required.  In theory.)
  • Both farms have at least one web app configured to use the SPTrustedIdentityTokenIssuer
  • Both farms have UPA configured and running
  • Both farms are doing a profile import from the same Active Directory forest.
  • Both farms have the profile import Authentication Provider Type configured to be Trusted Claims Provider Authentication, and the Authentication Provider Instance configured to be their SPTrustedIdentityTokenIssuer.

After you’ve set all of that up, you also need to go into Manage User Properties in the UPA and map both the Claim User Identifier and (usually) the Work email properties.  In my case I used email address as the identity claim, so I mapped both of those properties to the “mail” attribute in Active Directory.  If you are using a different identity claim then you would map the Claim User Identifier accordingly and would not need to map Work email.  By default, the Claim User Identifier is mapped to samAccountName, so you’ll want to Remove that mapping, then add one for mail.  There’s nothing too complicated about doing this – you literally just type “mail” in the Attribute field and then click the Add button.

You’ll probably want to do a full import after that (unless you have a really huge directory).  The biggest gotcha here is that you can really only set up one profile connection per Active Directory domain (at least when using Active Directory import instead of FIM).  The reason that may be a problem is that most folks do an import to get all their Windows users into the UPA.  However if you want to use the same domain to import users for both Windows and SAML auth into the UPA, it won’t work.  You’ll get an error when you try to create a second connection to the same domain.  So you basically have to be all in – all SAML or all Windows; otherwise you’ll end up getting all sorts of seemingly random results.

Once you’ve got everything configured and you’ve run a profile import in both farms, you can set up the first part of this scenario, which is doing cross-farm searches by SAML users using Remote SharePoint Index.  I literally used the same exact steps that I described in the blog post I referenced at the start of this posting (setting up an OAuth trust between farms).  Once that trust is set up, it’s just a matter of creating a new result source that is configured as a Remote SharePoint Index, using the Url of the web application in the other farm (the one you used when creating the trust between the farms), and then creating a query rule to execute your query against the remote farm.  When it’s all said and done you end up being able to authenticate into your local farm as a SAML user and get search results from the remote farm.  When I tested this logged in as a SAML user, the user’s full display name even showed up in the results, because it was brought in when I did the profile sync.

Now, the real beauty of this solution is that we can actually get this to work when using the SharePoint 2013 hybrid search features with Office 365.  In this case I have a farm where I already had the hybrid features configured and working.  When I created my SPTrustedIdentityTokenIssuer, I made sure to include 3 of the 4 claim types (the only ones I populate) that are used in rehydrating users for OAuth calls – email, Windows account, and UPN.  Since those values are also synchronized up to my Office 365 tenant (and thus put into the UPA for my tenant up there), I’m able to rehydrate the user in Office 365 and get search results from there as well, even though I’m logged in locally as a SAML user.  Very cool!!  The same SAML user ends up getting both local search results as well as hybrid search results from Office 365.

Hopefully this helps to clear up some of the mystery about SAML users and OAuth, as well as how it interacts with some of the built-in features of SharePoint 2013 like Remote SharePoint Index and Hybrid Search.  Clearly the complicating factor in all of this is the UPA and managing your user profile imports.  That would be the first planning item I would be thinking about if you want to use SAML authentication on site, in addition to SharePoint Hybrid.  It’s also clearly a sticking point if you wanted to use both Windows and SAML auth in the same farm with the same users from the same domain.

SAML Support for SharePoint-Hosted Apps with ADFS 3.0

This is another case where I’m just passing information along here, based on the great work of others.  As you probably know, we did not have a good story for SharePoint-hosted apps in a web application that uses SAML authentication with ADFS 2.0.  However, I have had reports from a couple of different teams now that they ARE working with ADFS 3.0.  The main differences that are needed to make this work include:

  • In ADFS you need to define a wildcard WS-Fed endpoint.  For example, normally for a SharePoint web application, in ADFS you create a relying party and set the WS-Fed endpoint to be something like https://www.foo.com/_trust/.  To do the same thing with apps, you take your apps namespace – assume it’s “contosoapps.com” – and add a WS-Fed endpoint like this:  https://*.contosoapps.com/_trust/.
  • Configure the SharePoint STS to send the wreply parameter.  You can do that with PowerShell that looks like this:

$sts = Get-SPTrustedIdentityTokenIssuer
$sts.UseWReplyParameter = $true
# persist the change to the config database
$sts.Update()

One other thing to note – the behavior to use the wreply parameter is supposed to be turned on by default in an upcoming CU.  I heard it was the April 2014 CU actually but have not had a chance to see if that is really in there or not.  It won’t hurt to run the PowerShell above though.

This is good news, thanks for those of you that shared your experiences!

More Info on an Old Friend – the Custom Claims Provider, Uri Parameters and EntityTypes in SharePoint 2013

Back to an oldie but a goodie – the custom claims provider for SharePoint.  I believe this applies to SharePoint 2010 as well, but honestly I have only tested what I’m about to describe on SharePoint 2013 and don’t have the bandwidth to go back and do a 2010 test as well.  What I want to describe today is the values you may expect to get, and the values you actually get, in the custom claims provider methods FillClaimsForEntity, FillResolve and FillSearch.  Chances are they may not be what you expect.

  • FillClaimsForEntity – this method is invoked at login time and is used for claims augmentation.  The Uri parameter that you get (“context”) will NOT tell you what site a person is authenticating into.  The value you will get back is the root of the web application zone.  So if a user is trying to get into https://www.contoso.com/sites/accounting, the value for the “context” parameter will be https://www.contoso.com.
  • FillResolve – this method has two different overrides, but both include a Uri parameter (“context”).  The value you get for this parameter varies.  When you’re in a site collection the Uri will be for the site collection you are in.  When you are in central admin and you are invoked when picking a site collection administrator, “it varies”.  When you are first selecting a site collection administrator, the Uri value will be of the root site collection again – NOT the site collection you are trying to set the admin for.  Unfortunately, this gets a bit more convoluted yet – when you click on the OK button to save your changes, the FillResolve method will get invoked again.  This time however, the Uri value WILL BE the actual site collection Url for which you just changed the site collection admin.
  • FillSearch – this method is invoked when someone is doing a search in people picker.  It exhibits the same behavior as FillResolve above – when you’re in a site collection you get the correct site collection Uri.  When you’re in central admin and you are trying to search for an entity to add as a site collection admin, the value for the Uri parameter is the root site collection – NOT the site collection for which you are setting the admin.

All of the details above apply to the Uri parameter in those methods.  There’s one other thing to be aware of as well however, and that’s the array of entity types that are provided in these methods.  The string[] entityTypes parameter is important to understand because it lets you know what type of result (i.e. PickerEntity) you should return when FillResolve or FillSearch is invoked.  The main scenario where it matters is when you are setting the site collection administrator.  To this day, SharePoint has a limitation that only an individual user (or better stated really – only an identity claim) can be added as a site collection administrator.  What I’ve found is that when you are trying to set the site collection administrator and your custom claims provider is invoked from within central admin, the entityTypes array correctly contains only one value – SPClaimEntityTypes.User.  HOWEVER…if you are in a site collection and you try to change the site collection administrators from within it, the entityTypes array contains five different entity types, instead of just User.  At this point I don’t know of a good way to distinguish that from a legitimate request within the site collection where all of those entity types would be appropriate – such as when you’re adding a user / claim to a SharePoint group or permission level.
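To put those parameters in context, here’s a skeletal FillResolve override showing where the context Uri and entityTypes values described above land.  This is just a sketch – the class name is made up, and the other required SPClaimProvider overrides are omitted:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.SharePoint.Administration.Claims;
using Microsoft.SharePoint.WebControls;

public class DemoClaimsProvider : SPClaimProvider
{
    public DemoClaimsProvider(string displayName) : base(displayName) { }

    protected override void FillResolve(Uri context, string[] entityTypes,
        string resolveInput, List<PickerEntity> resolved)
    {
        //from within a site collection, context is that site collection's Uri;
        //from central admin it is the root site collection of the web app
        //zone, NOT necessarily the site collection being administered
        Uri callingLocation = context;

        //when the only entity type is User, return identity claims only –
        //e.g. when a site collection administrator is being picked
        bool identityOnly = (entityTypes.Length == 1) &&
            (entityTypes[0] == SPClaimEntityTypes.User);

        //...resolve logic based on callingLocation and identityOnly goes here...
    }

    //(FillClaimsForEntity, FillSearch, FillSchema, FillClaimTypes, etc. are
    //required overrides too, but are omitted from this sketch)
}
```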

I don’t know if or how much any of this may change in the future (i.e. are they considered bugs or not), so for now I just wanted to document the behavior so as you’re creating your designs around your farm implementations you know better exactly what info you’ll have within your custom claims provider to help make fine grained security decisions.

Claim Type Exceptions with Custom Claims Providers in SharePoint 2013

This issue applies to SharePoint 2010 as well but…suppose you have created a custom claims provider and one of the things you want to do is to have some custom claim types that you use.  What I mean by custom claim types is just that they are not one of the standard out of the box types like email, upn, role, etc.  Now in this posting here: http://blogs.technet.com/b/speschka/archive/2011/04/02/how-to-add-additional-claims-in-adfs-2-0-that-can-be-consumed-in-sharepoint-2010.aspx – from way back in 2011, I described the format that you need to use for your claim type in order to be accepted by SharePoint.

Suppose that you have built and deployed your custom claims provider though and you decide you really need another claim type – we’ll call it http://www.foo.com/bar. So you start using the claim by creating a new SPClaim and that works just fine.  But then you want to use the claim with something where you require an encoded value – like adding it to a web app policy or a SharePoint group or whatever.  So you create an instance of the SPClaimProviderManager and you try the EncodeClaim method on the value of your SPClaim.  Instead of happiness, what you get is an ArgumentException whose param name is claimType, and in the stack trace you see it happened in the EncodeClaimIntoFormsSuffix method.

You may think hey – I just need to add this claim type to the list of claims my provider supports.  I do that by modifying my code in the FillClaimTypes override of my custom claims provider.  That actually is correct – you DO need to do that.  So you do that, recompile and redeploy the assembly…and continue to get the same error.  Well the fact of the matter is that you’ve done everything correctly from a coding standpoint.  The problem is that this is a different flavor of the same problem that SharePoint SPTrustedIdentityTokenIssuers have, which is that the claims collection is immutable.  That just means that after you create the claims mappings for an SPTrustedIdentityTokenIssuer, you cannot go back and change them afterwards.  Yes, I know people have posted code on how to do that; it is also unsupported to do so.
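For what it’s worth, the FillClaimTypes change itself is trivial – something like the sketch below, where the first claim type stands in for whatever your provider already supported (both claim type Urls here are just examples):

```csharp
//inside your custom claims provider class
protected override void FillClaimTypes(List<string> claimTypes)
{
    if (claimTypes == null)
        throw new ArgumentNullException("claimTypes");

    //claim type(s) the provider already supported (hypothetical)
    claimTypes.Add("http://www.foo.com/identity");

    //the new custom claim type; remember, adding it here alone is not
    //enough once the provider has already been deployed
    claimTypes.Add("http://www.foo.com/bar");
}
```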

So, how do you fix this?  If you are NOT the default claims provider, you should be able to just uninstall and remove the custom claims provider feature, remove all instances of it from the GAC and deploy all over again.  If you ARE the default claims provider, then unfortunately the fix is the same way you would as if you needed to change the claims mappings for your SPTrustedIdentityTokenIssuer.  You need to change the configuration for any web apps that are using your SPTrustedIdentityTokenIssuer so that they no longer use it, and then remove your custom claims provider.  Then you need to remove the SPTrustedIdentityTokenIssuer, redeploy your custom claims provider, recreate your SPTrustedIdentityTokenIssuer, make your custom claims provider the default provider, then go back and add the SPTrustedIdentityTokenIssuer to the web apps that need it.  It wears me out just typing out that sentence, let alone doing the work.

Once you’ve done your redeployment though, SharePoint should pick up on your new collection of claims that you will be using.  Until you need to change them again.