Analyzing and Fixing Azure Web Sites with the SCM Virtual Directory

There are so many things you do every day as part of operating and maintaining your Azure web sites.  They’re a common target for developers because you get 10 free sites with your Azure subscription, and if you know what you’re doing you can spin that up into even more applications by using custom virtual directories, as I’ve previously explained here:  https://samlman.wordpress.com/2015/02/28/developing-and-deploying-multiple-sharepoint-2013-apps-to-a-single-azure-web-site/.  That example is specific to using them for SharePoint Apps, but you can follow the same process to use them for standard web apps as well.

Typically, you go through your publishing and management process using two out of the box tools – Visual Studio and the Azure browser management pages.  What happens though when you need to go beyond the simple deploy and configure features of these tools?  Yes, there are third-party tools out there that can help with these scenarios, but many folks don’t realize that there’s also a LOT that you can do with something that ships out of the box in Azure, which is the Kudu Services, or as I’ve called it above, the SCM virtual directory.

The SCM virtual directory is present in every Azure web site.  To access it, you merely insert “scm” between your web name and the host name.  For example, if you have an Azure web site at “contoso.azurewebsites.net”, then you would navigate to “contoso.scm.azurewebsites.net”.  Once you authenticate and get in, you’ll arrive at the home page for what they call the Kudu Services.  In this post I really just wanted to give you an overview of some of the features of the Kudu Services and how to find them, which I kind of just did.  🙂  At the end though I’ll include a link to more comprehensive documentation for Kudu.
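There’s nothing more to the naming convention than that, but if you ever need to build the Kudu URL from code or a script, it’s a one-line transform.  Here’s a tiny, purely illustrative snippet (the site name is just an example):

// derive the Kudu (SCM) URL for an Azure web site from its public URL
string siteUrl = "https://contoso.azurewebsites.net";
string kuduUrl = siteUrl.Replace(".azurewebsites.net", ".scm.azurewebsites.net");
// kuduUrl is now "https://contoso.scm.azurewebsites.net"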

Going back to my example, I found out about all of the tools and analysis available with the Kudu Services a few months ago when I was trying to publish an update to an Azure web site.  Try as I might, the deployment kept failing because it said a file in the deployment was being used by another process on the server.  Now of course, I don’t own the “server” in this case, because it’s an Azure server running the IIS service.  So that’s how I started down this path of “how am I gonna fix that” in Azure.  SCM came to the rescue.

To begin with, here’s a screenshot of the Kudu home page:

[Screenshot: the Kudu home page]

As you can see right off the bat, you get some basic information about the server and version on the home page.  The power of these features comes as you explore some of the other menu options available.  When you hop over to the Environment link, you get a list of the System Info, App Settings, Connection Strings, Environment variables, PATH info, HTTP headers, and the ever popular Server variables.  As a long-time ASP.NET developer I will happily admit that there have been many times when I’ve done a silly little enumeration of all of the Server variables when trying to debug some issue.  Now you can find them all ready to go for you, as shown in this screenshot:

[Screenshot: the Environment page in Kudu, including the Server variables]
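If you’ve never written that silly little enumeration yourself, it’s about this much code.  This is just a from-memory sketch of the old-school approach that the Environment page now saves you from:

// dump every server variable onto the page (ASP.NET Web Forms style)
foreach (string key in Request.ServerVariables.AllKeys)
{
    Response.Write(Server.HtmlEncode(key) + " = " +
        Server.HtmlEncode(Request.ServerVariables[key]) + "<br/>");
}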

Now back to that pesky “file in use” problem I was describing above.  After trying every imaginable hack I could think of back then, I eventually used the “Debug console” in the Kudu Services.  These guys really did a nice job on this and offer both a Command prompt shell and a PowerShell prompt.  In my case, I popped open the Command prompt and quickly solved my issue.  Here’s an example:

[Screenshot: the Kudu Debug console]

One of the things that’s cool about this as well is that as I motored around the directory structure with my old-school DOS skills, i.e. “cd wwwroot”, the graphical display of the directory structure was kept in sync above the command prompt.  This really worked out magnificently; I had no idea how else I was going to get that issue fixed.

Beyond the tools I’ve shown already, there are several additional tools you will find, oddly enough, under the Tools menu.  Want to get the IIS logs?  No problem, grab the Diagnostic Dump.  You can also get a log stream, a dashboard of web jobs, a set of web hooks, the deployment script for your web site, and open a Support case.

Finally, you can also add Site Extensions to your web site.  There are actually a BUNCH of them that you can choose from.  Here’s the gallery from the Site Extensions menu:

[Screenshot: the Site Extensions gallery]

Of course, there are many more than fit on this single screenshot.  All of the additional functionality and the ease with which you can access it is pretty cool though.  Here’s an example of the Azure Websites Event Viewer.  You can launch it from the Installed items in your gallery and it pops open right in the browser:

[Screenshot: the Azure Websites Event Viewer open in the browser]

So that’s a quick overview of the tools.  I used them some time ago and then when I needed them a couple of months ago I couldn’t remember the virtual directory name.  I Bing’d my brains out unsuccessfully trying to find it, until it hit me when I looked at one of my site deployment scripts – they go to the SCM vdir as well.  Since I had such a hard time finding it I thought I would capture it here, and hopefully your favorite search engine will find enough keywords in this post to help you track it down when you need it.

Finally, for a full set of details around what’s in the Kudu Services, check out their GitHub wiki page at https://github.com/projectkudu/kudu/wiki.

Azure B2B First Look

This is a companion piece to the Azure B2C First Look I published last week. This post is some first thoughts around the new Azure B2B features that were recently announced. The goal of Azure B2B is to reduce the complexity and rigidity of managing business-to-business relationships and sharing data. The announcement around the B2B feature correctly characterizes this problem as typically being solved either by federating between two companies, or creating a directory (or OU or whatever) in your own internal directory for external users. Both solutions have pros and cons to them. The Azure B2B solution is what I would consider something of a hybrid approach between the two. External users continue to use their one and only external account for authentication so they don’t have to remember multiple account names and passwords. However, they also get added (maybe “linked” is a better way to describe it) within the Azure AD tenant into which they are “added” (a process I’ll explain below). Overall though, even in preview it’s a pretty painless and easy way to connect people up from different organizations.

As with the Azure B2C post, I’m not going to try and explain every bit of the feature and how to configure it – Microsoft has a team of writers tasked with just that. Instead I’ll give you a few links to get started and then share some real world experience now with how it looks and works. So…to get started I would recommend taking a look at this content:

 

Getting Started

The process for getting started is a little peculiar, but there is a certain logic to it. In short, when you want to allow users from other companies to access your resources, you have to create a CSV file with information about them and import that into Azure AD, which you can do via the portal. In fact PowerShell support for doing this is not there at this time, but I’ll be a little surprised if it doesn’t show up by RTM. This is where the second link above will be useful to you – it explains the format of the CSV file you need to create and provides an example of what it looks like.
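Since the whole thing is driven by that CSV file (and the PowerShell story isn’t there yet), one stopgap option is to generate the file from code if you have more than a handful of people to invite.  Here’s a minimal sketch; the column header is the format described in that link, and the invitee row is just hypothetical example data:

// minimal sketch: write out a B2B invitation CSV for import into Azure AD
using System.IO;

class InviteCsvWriter
{
    static void Main()
    {
        string[] lines =
        {
            "Email,DisplayName,InviteAppID,InviteReplyUrl,InviteAppResources,InviteGroupResources,InviteContactUsUrl",
            // one row per invited user; fields you don't need are simply left empty
            "someone@partner.com,Partner Someone,11111111-2222-3333-4444-555555555555,,66666666-7777-8888-9999-000000000000,,https://www.contoso.com/contact"
        };

        File.WriteAllLines("b2b-invites.csv", lines);
    }
}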

In my case, for my first test I started with my SamlMan.Com tenant and built out a simple ASP.NET MVC application secured with Azure AD. I got that all working and then invited my Office365Mon.Com account. The CSV for it looked something like this:

Email,DisplayName,InviteAppID,InviteReplyUrl,InviteAppResources,InviteGroupResources,InviteContactUsUrl

speschka@office365mon.com,Office365Mon Steve,FFEAF5BD-528B-4B53-8324-E4A94C1F0F06,,65b11ba2-50f7-4ed2-876d-6a29515904b2,,http://azure.microsoft.com/services/active-directory/

The main thing worth pointing out in this simple CSV is the display name I gave – Office365Mon Steve – because that’s how it’s going to show up in the SamlMan.Com AD tenant. Also, the GUID that starts with 65b11… is the Client ID of my MVC application that I described above. Once you import the CSV file, Azure sends an email to everyone in it, and then each person has to open the email, click a link and accept the invitation. You can see a report in Azure of the status of each person you invited.

Here’s what the report looks like:

When you get your invitation email it looks something like this (some additional customization options are available, but for testing I’m waaaaay too lazy to make it look pretty):

You’ll notice the email is addressed to the display name I put into the CSV file. I’ll show the invite pages in Azure a little later. Now, once the person has accepted the invitation, as I noted above, they show up in the Azure AD tenant from which the invitation was sent. Here’s what my SamlMan.Com directory looks like after the invite was accepted:

Now with the invitation accepted I can log into the ASP.NET application as speschka@office365mon.com and it works just perfectly. Here’s yet another screenshot of my favorite home page, with all of the current user’s claims enumerated on the page:

You can see again that display name of “Office365Mon Steve” getting carried throughout, so make sure you use something that won’t require constant tinkering.

 

Using it with Other Cloud Applications

One of the things I thought was nice was the integration with other cloud applications. Obviously the images above illustrate integrating with my own custom apps, but what about the apps people frequently use in the cloud, like Office 365? Well, it works pretty slick for that as well. To work through this scenario I flipped things around and used my Office365Mon tenant and invited my account at SamlMan.Com. In the SharePoint spirit of things…I started out by creating a new group in my Azure AD tenant, like this:

Next I went to my SharePoint Online site and added this Azure AD group to the SharePoint Viewers group:

Next I got the object ID of the Azure AD group by running this bit of PowerShell: Get-MsolGroup | fl DisplayName, ObjectId. With that in hand, I created a new CSV file to invite my SamlMan.Com user. The CSV file looked like this:

Email,DisplayName,InviteAppID,InviteReplyUrl,InviteAppResources,InviteGroupResources,InviteContactUsUrl

steve@samlman.com,SamlMan Steve,6787E4D5-0159-4B74-B56D-AA7C36715C0E,,,95419596-ff8a-4db1-c406-ed4ad13fd272,https://www.office365mon.com

The most interesting part of this CSV is the GUID that starts with 95419… It’s the object ID of the Azure AD group I created above and added to the SharePoint Viewers group. Now when my invited user accepts the invitation, he or she will get added to that group. I get the standard email invite that I showed above. When I click on the link to accept it, I am treated to a page in Azure that looks like this (you can also customize the appearance of this, which I chose not to do):

After I went through the process to accept the invitation, I went back to the Office365Mon SharePoint Online team site and was able to successfully authenticate and get in with Viewer rights:

Wrap Up

Overall everything with the B2B feature went incredibly smoothly with no hiccups or glitches of any kind; most of the Azure guys really seem to have their act together these days. 🙂 Honestly, it’s impressive and I know a lot of teams at Microsoft that could really take a lesson from how engaged the Azure team is with the community, taking feedback, and delivering great products. I think the biggest challenge I’m imagining after my first look is just the long-term user management; for example, when an invited user’s account is deleted from its home Azure AD tenant it would be nice to have it trigger deletes in all of the other AD tenants where it’s been invited. Stuff like that…but I think it will come in time. This is a very good start however.

Azure B2C First Look

I’ve spent some time the last couple of days working with the new Azure B2C features and have had the chance to get some things working, as well as note a couple of things that you want to be aware of as you start playing with this yourself. The Azure B2C feature is the first really strong play towards obsoleting Microsoft ACS, or Access Control Services. Prior to B2C, the ACS service was the only Microsoft-hosted identity service that let you create identity provider relationships with social identity providers such as Facebook and Google. It’s an aging platform however, and one that Microsoft has publicly said is on the way out. Since they came out with that statement, folks have been caught a bit between a rock and a hard place when wanting to use these social identity providers. You could write your own, use a paid identity hub service like Auth0, or spin “something” up on ACS, knowing full well that its end of life was coming. Even at that, on ACS the connectivity with Google to create new identity provider relationships broke last year, so things were not looking good.

The Azure B2C feature now slides into public preview up on Azure. It provides hooks today to let you authenticate using Facebook, Google, Amazon and LinkedIn, as well as any local accounts you have in your new B2C Azure Active Directory. Support for Windows Live accounts ironically is not there yet, but the team has said that it is coming. Along those lines, here are a couple of links that you should check out to really get started and dig your teeth into this:

While there are some good links to get you started, I wanted to focus on some of the early rough edges so you can hopefully get up and running just a little bit quicker. I’ll preface this by saying the Azure team really has their stuff together pretty well. There are videos, sample apps, blog posts, and some good documentation. They also say that they are supporting it via forums on StackOverflow.Com, although so far my one post up there has been getting just a bit lonely. We’ll see. In the meantime, let’s move on to some things to keep in mind.

Sample Code Problems

You may work through the sample applications to build out your own site, which is the logical way to test it. In my case I built a standard VS 2015 ASP.NET MVC application so I followed along with that sample. I actually only had to copy one set of files over from the GitHub sample that they use, which is a folder called PolicyAuthHandlers and the set of classes it contains. When I thought I had it all built out, I ran it and quite quickly got the dreaded yellow screen of death (i.e. an unhandled exception). I spent some time poking through this and realized that there is a problem in the Startup.Auth.cs file, in the OnRedirectToIdentityProvider method. Specifically, this line of code fires when you first authenticate as we try and figure out which Azure B2C policy to use (you can read more about policies in the links above):

OpenIdConnectConfiguration config = await gr.GetConfigurationByPolicyAsync(CancellationToken.None, notification.OwinContext.Authentication.AuthenticationResponseChallenge.Properties.Dictionary[Startup.PolicyKey]);

The problem is that the notification.OwinContext.Authentication.AuthenticationResponseChallenge property is null, so the whole thing blows up. For now, I’m working around it by replacing this parameter (which is just the name of the policy to use) with this other static string you’ll find in the sample code: SignUpPolicyId.
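To make the workaround concrete, here’s roughly what my hacked-up version of that call looks like. This is a sketch against the sample’s own names (gr is its policy configuration manager, and Startup.PolicyKey and SignUpPolicyId are its static strings), not an official fix:

// fall back to the sign-up policy when the challenge is null instead of blowing up
var challenge = notification.OwinContext.Authentication.AuthenticationResponseChallenge;

string policy = (challenge != null &&
                 challenge.Properties.Dictionary.ContainsKey(Startup.PolicyKey))
    ? challenge.Properties.Dictionary[Startup.PolicyKey]
    : SignUpPolicyId;

OpenIdConnectConfiguration config =
    await gr.GetConfigurationByPolicyAsync(CancellationToken.None, policy);

The downside of hard coding the fallback to the sign up policy shows up a little later in this post.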

You Get a 404 Response When You Try and Authenticate

After you fix the sample code problem, then you will likely find that when your browser gets redirected to authenticate, it throws a 404 error (or more precisely, Azure throws the exception). After much digging around on that one, I finally realized that they had created a bad query string for me to log in with. Specifically, it contained two question marks. In web land, that means it ignores all parameters after that second question mark. The second question mark needs to be an ampersand, and then everything moves right along. Here’s a screenshot of what the URL looks like:

I’ve highlighted the two question marks to make them easier to spot. The way to fix it? So far, each time I authenticate, I wait for the 404 error and then I manually go in and change it in the browser address bar. So I just change the second question mark to an ampersand, hit enter, and away I go.

UPDATE 9/23/15:  Clarky888 was good enough to point out on StackOverflow.Com that this problem is fixed if you update your reference for Microsoft.IdentityModel.Protocol.Extensions to v1.0.2.206221351.  I did this and found that everything works quite well now.

You Get a Login Error About the User Already Exists

One of the problems with my “fix” for the first problem I described in this post is that I set the policy that should be used to one that is only supposed to be fired when you are creating a new account. By “creating a new account”, I mean you are logging into the site for the first time. You can log in with whatever social identity provider you want, and then Azure B2C creates what for now I’ll call a “shadow account” in the Azure B2C AD tenant. For example, after I logged in with all of my different social and local accounts, here’s what my list of users looks like in my Azure B2C AD tenant:

So in the fix to the first problem, I described changing the second question mark to an ampersand. Well, the first query string parameter is the name of your B2C policy that’s going to be used. So in my case, I’m hard coding it in my Startup.Auth.cs file to use my sign “up” policy. However, I also have a different policy for accounts that already exist, called my sign “in” policy. In the address bar picture above you can see the name of my sign up policy is b2c_1_firstdemosignup. So for a user that has already signed up, when I get that 404 error I change the name of the policy as well as the second question mark. I just type over the last two characters so my policy name is b2c_1_firstdemosignin and then change the next character from a question mark to an ampersand, hit enter, and we’re working again. This brings up the “money” shot so to speak – here you can see all of my identity providers alive on my Azure B2C sign in page:

For this post I decided to log in with Amazon…mostly because I’ve never used Amazon as an identity provider before so I thought I would breathe deeply on all the oauth2 goodness. Here’s the login dialog from Amazon after I select it as my identity provider:

Once I sign in then I have a little code that just enumerates the set of claims I have and spits them all out on the page for me. Of course it’s big and ugly, which I think is exactly the kind of work you all have come to expect from me.  🙂

There you go – there’s your end to end login experience with Azure B2C. Pretty cool stuff.

Other Miscellaneous Issues

There are a couple of other little “things” that I noticed that might be worth remembering as well; here they are:

  • It doesn’t recognize the username / password of my account that’s a global admin on my Azure AD B2C tenant.  This is a bit of a weird one, but I think I know why.  To back up a little, as I mentioned earlier, you have to create a new Azure AD tenant and when you do, configure it as a B2C tenant.  So you log into the Azure management portal and go through that exercise.  When you do, it adds the account you are currently logged in with as the global admin of the new Azure AD B2C tenant.  However, it doesn’t actually create an account (and corresponding password) for it in that tenant – it just notes that it’s from a different Azure AD tenant.  That’s why I think it won’t let me log in with that account.  Kind of sorta makes sense in a certain Microsoft “by design” kind of way.
  • You can add custom claim types in your B2C tenant.  So for example, I added “favoriteTeam” and “favoriteSport”.  However the claim type that Azure sends to you precedes all of your custom attributes with “extension_”, so “favoriteTeam” becomes “extension_favoriteTeam”.  I mostly mention this so you are aware of it when making authorization decisions based on custom claim values; there’s a small sketch of what that looks like right after this list.  I don’t know if this will change for RTM or not, but I guess we’ll find out together.
  • It doesn’t seem to pass along the access tokens (if any) provided by these social identity providers.  For example, with ACS if you logged in via Facebook it also passed along an access token from Facebook in your claims collection.  That was really sweet because you could then extract out that access token and use it to make calls to the Facebook Graph API.  That functionality is not in there today; I’m not sure if it will be when it’s released.  It was a very nice feature, so hopefully it will make a comeback.
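Here’s that sketch of checking a custom claim – nothing fancy, just standard ClaimsIdentity code, with “favoriteTeam” as the example attribute and a made-up value for the comparison:

// custom B2C attributes come back prefixed with "extension_", so look for that name,
// not the name you gave the attribute in the portal
var identity = (System.Security.Claims.ClaimsIdentity)HttpContext.Current.User.Identity;
var favoriteTeam = identity.FindFirst("extension_favoriteTeam");

if (favoriteTeam != null && favoriteTeam.Value == "Seahawks")
{
    // make whatever authorization decision you need to make here
}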

That’s it – there’s my first look at Azure B2C. I’ll be working with it more and adding some other posts about it in the future. As always, you can reach out to me at steve@samlman.com with your own observations, comments and questions…or just post ‘em here so everyone can see them.  😉

Do You Need An Account In Azure Active Directory if Using ADFS?

Today’s topic is a little spin on a question that seems to be coming up more frequently, specifically when folks are using a combination of Azure Active Directory and ADFS. That question is, if I’m using ADFS do I really need to have an account in an Azure Active Directory (AAD) tenant? Well, of course, the answer is it depends.

If all you are going to use AAD for is as some kind of application registry, but all of your applications are hosted elsewhere, you really don’t need an account in AAD. You can just add your domain to the list of domains in your Azure subscription, set up the domain as a federated domain, and configure the ADFS endpoint where folks should get redirected to in order to authenticate.

Where this scenario breaks is if you are securing your applications with AAD. For example, in your AAD tenant you go to the Applications tab, add your application, give it some configuration information, and add one or more permissions that your application is going to need. In this case, AAD is being used to secure access to the application, and so at a minimum you have to ask for at least the right to log in and read the profile information – this is needed for AAD to be able to send your set of claims off to your application after you authenticate. In that case, there is going to be a consent process that users go through. The first time they log into the application, Azure will present a page to them that describes all of the permissions the application is asking for, and the user has to consent to allow the application to have access in order to continue. This is “the consent process” (as simply explained by Steve).

In this case, if there is not a corresponding user object in AAD, the user ends up getting an error page after authenticating into AAD. The error message, which is in the fine print near the bottom of the page, will say something like hey, this external user has to be added to the directory. What this really means is hey, I need to be able to track if this user consents to letting this app get at his information, and if I can’t track his consent then we’re going to have to stop here. The way to fix this issue in this case is just to set up directory synchronization. That will populate AAD with all of the users and groups in the on-premises AD, even though users will not be using AAD to authenticate. Once you do that, your error will go away and the whole end to end process will work for you.

So the net – if you are trying to use AAD to secure your application – you need users in AAD too. If not, you don’t need to populate the directory. In most cases people will end up wanting to secure apps in AAD so DirSync will be needed. If you end up going that way, the Azure AD Connect tool is used to do the synchronization for you: https://msdn.microsoft.com/en-us/library/azure/dn832695.aspx.

Fun with Azure Key Vault Services

I was able to spend a little time recently with a new Azure service, the Key Vault service, for some work I was doing. It’s a pretty valuable and not too difficult service that solves an age-old problem – where can I securely keep secrets for my applications in Windows Azure. Actually, because of the way it’s implemented you really don’t even need to have your application hosted in Azure…but I’m getting a little ahead of myself. Let’s start with the basics. As a precursor to what I have here, I’ll just point out that there’s actually some pretty good documentation on this service available at http://azure.microsoft.com/en-us/services/key-vault/.

Getting Started

Before you start trying to build anything, you really need to have the latest version of the Azure PowerShell cmdlets, as well as the new cmdlets they’ve built for working with Key Vault. You can get the very latest of the Azure PowerShell cmdlets by going here: https://github.com/Azure/azure-powershell/releases. You can get the Key Vault cmdlets by going here: https://gallery.technet.microsoft.com/scriptcenter/Azure-Key-Vault-Powershell-1349b091.

Create a New Vault and Secret(s)

The next step is to crack open your Azure PowerShell window and load up the Key Vault cmdlets. You can do that like this:

Set-ExecutionPolicy Bypass -Scope Process

import-module C:\DirectoryYouExtractedKeyVaultCmdletsTo\KeyVaultManager

I’m just turning off policy to only run signed cmdlets with the first line of code (and just in this process), and then loading up the cmdlets with the next line of code. After that you need to connect to your Azure AD tenant like this:

add-azureaccount

If you have multiple subscriptions and you want to target the specific subscription where you want to create your Key Vault and secrets and keys, then you can do this:

Set-AzureSubscription -SubscriptionId some-guid-here

You’ll see a list of guids for your subscription after you log in with the add-azureaccount cmdlet. Now that you’re logged in and set in your subscription, you can do the first step, which is to create a new vault. The PowerShell for it is pretty easy – just this one line of code:

New-AzureKeyVault -VaultName "SteveDemo" -ResourceGroupName "SteveResources" -Location "West US"

There are a few things worth noting here:

  • The VaultName must be unique amongst ALL vaults in Azure.  It’s just like an Azure storage account in that sense.  The name will become part of the unique Url you use to address it later.
  • The ResourceGroupName can be whatever you want.  If it doesn’t exist, the cmdlets will create it.
  • The locations are limited right now.  In the US you can create a vault in the West and East but not Central.  Azure should have some documentation somewhere on which locations you can use (i.e. I’m sure they do, I just haven’t gone looking for it yet).

Okay cool – we got a vault created, now we can create keys and secrets. In this case I’m going to use the scenario where I need some kind of database connection string. Assume as well that like in most cases, I have a separate database for dev and production. So what I’m going to do is just create two secrets, one for each. Here’s what that looks like:

$conStrKey = ConvertTo-SecureString 'myDevConStr' -AsPlainText -Force
$devSecret = Set-AzureKeyVaultSecret -VaultName 'SteveDemo' -Name 'DevConStr' -SecretValue $conStrKey

$conStrKey = ConvertTo-SecureString 'myProdConStr' -AsPlainText -Force
$prodSecret = Set-AzureKeyVaultSecret -VaultName 'SteveDemo' -Name 'ProdConStr' -SecretValue $conStrKey

Awesome, we’re ready to go, right? Mmm, almost, but not quite. Like all things Azure, you need to configure an application that you’ll use to access these secrets. But actually it’s different (of course) from other apps, in that you don’t pick it from a list of services and select the features you want to use. That may come later, but it’s not here yet.

Grant Your Application Rights to the Secrets

There are a few steps here to get your app rights to the secrets you created.

  1. Create a new application in Azure AD.  I won’t go through all the steps here because they’re documented all over the place.  But the net is you create a new app, say it’s one you are developing, and type in whatever values you want for the sign-on URL and app ID URI.  After that you need to go copy the client ID and save it somewhere, then create a new key and save it somewhere.  You’ll use these later.
  2. Go back to your PowerShell window and grant rights to read keys and/or secrets to your application.  You can do that with a line of PowerShell that looks like this:

Set-AzureKeyVaultAccessPolicy -VaultName 'SteveDemo' -ServicePrincipalName theClientIdOfYourAzureApp -PermissionsToKeys all -PermissionsToSecrets all

In this case I kind of cheated and took the easy way out by granting rights to all possible permissions to my app. If you just wanted it to be able to read secrets then you could configure it that way. One other thing worth noting – one of the most common errors I have seen from folks using this service is that they only remember to grant PermissionsToKeys or PermissionsToSecrets but not both. If you do that then you will get these kinds of weird errors that say something like “operation ‘foo’ is not allowed”. Well, yeah, technically that’s correct. What’s really happening is that you’re forbidden from doing something that you have not expressly granted your application rights to do. So be on the lookout for that.

Use Your Secrets

Okay cool, now that we’ve got our app all set up we can start using these secrets, right? Yes! 🙂  The first thing I would recommend doing is downloading the sample application that shows off performing all of the major Key Vault functions. The latest sample is at http://www.microsoft.com/en-us/download/details.aspx?id=45343. I just know they will change the location any day now and my blog will be out of date, but hey, that’s what it is today. Now there is supposed to be a Nuget package that would allow you to use this more easily from .NET managed code, but I currently cannot find it. If you go get the code download though you will see a sample project that you can easily compile for a nice assembly-based access to the vault.

Going back to something I mentioned at the beginning though – about not “even need(ing) to have your application hosted in Azure” – one of the really great things about this service is that it’s all accessible via REST. So…that gives you LOTS of options, both for working with the content in the vault, as well as the tools, platforms, and hosting options for your applications that use it. For me, I feel that the most common use case by a wide, wide margin is an application that needs to read a secret – like a connection string. So what I did was go through the sample code and reverse engineer out a few things. The result was a much smaller, simpler piece that I use just for retrieving a secret. The pseudo-code goes something like this:

  1. Get an Azure AD access token
    1. Use the client ID and client secret you obtained earlier, and combine that with the Azure AD resource ID for the Key Vault service – which is https://vault.azure.net.
    2. Get the access token by creating a ClientCredential (with the client ID and client secret), and using the resource ID.  The code looks something like this:

var clientCredential = new ClientCredential(client_id, client_secret);
var context = new AuthenticationContext(Authority + authorityContext, null);
var result = context.AcquireToken("https://vault.azure.net", clientCredential);

There’s one other REALLY important thing to note here. For those of you familiar with Azure AD and AuthenticationResults, you may be used to getting them using the Common Login authority. In this case, you will get an AuthenticationResult, but you will get a 403 Forbidden when trying to use it. Mucho painful debugging to figure this out. You MUST use the tenant ID of the Azure AD instance where your application is registered. The tenant ID is just another GUID you can get out of the Azure portal. So for instance in the code above where it’s getting the AuthenticationContext, those two variables look like this:

const string Authority = "https://login.windows.net/";
string authorityContext = "cc64c719-d217-4c44-dd24-42c18f9cb9f2";

  2. Create a new HttpClient and plug in the access token from step 1 into an authorization header.  That code looks something like this:

HttpClient hc = new HttpClient();

hc.DefaultRequestHeaders.Authorization =
    new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", accessToken);

  3. Configure the request headers to indicate that you accept application/json, like this:

hc.DefaultRequestHeaders.Accept.Add(
    new MediaTypeWithQualityHeaderValue("application/json"));

  4. Create the Url you are going to use to access your secret.  Here’s my example:

string targetUri = "https://stevedemo.vault.azure.net/secrets/prodconstr?api-version=2014-12-08-preview";

A couple of key things to remember about this Url:

  • “stevedemo” is the name of my vault.  This is why it has to be unique in all of Azure.
  • “prodconstr” is the name of the secret I created.  There is also an ability to add a GUID after that which corresponds to a specific version of a secret if needed.  I won’t cover it here, but it’s in the documentation if you need it.
  5. Request your secret.  My code looks like this:

//NOTE:  Using await here without ConfigureAwait causes the
//thing to disappear into the ether
HttpResponseMessage hrm = await hc.GetAsync(new Uri(targetUri)).ConfigureAwait(false);

  6. Get the results.  My code:

if (hrm.IsSuccessStatusCode)
{
    Secret data = await DeserializeAsync<Secret>(hrm);
    result = data.Value;
}

In the spirit of being honest… 🙂 …I borrowed the code to deserialize the results into a class. It’s extremely short and the class is quite simple, so it only takes about 30 seconds of copy and paste.
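If it helps to see all of those steps in one place, here’s a condensed sketch of the whole retrieval path.  It assumes the ADAL (Microsoft.IdentityModel.Clients.ActiveDirectory) and Json.NET packages, and the Secret class here is just a minimal stand-in for the one in the sample code:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Newtonsoft.Json;

public class Secret
{
    // the Key Vault REST response carries the secret in a "value" property
    [JsonProperty("value")]
    public string Value { get; set; }
}

public static class VaultSecret
{
    const string Authority = "https://login.windows.net/";

    public static async Task<string> GetSecret(string vaultName, string secretName,
        string tenantId, string clientId, string clientSecret)
    {
        // step 1: get an access token for the Key Vault resource from the app's own tenant
        var clientCredential = new ClientCredential(clientId, clientSecret);
        var context = new AuthenticationContext(Authority + tenantId, null);
        var authResult = context.AcquireToken("https://vault.azure.net", clientCredential);

        using (HttpClient hc = new HttpClient())
        {
            // steps 2 and 3: bearer token plus application/json
            hc.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", authResult.AccessToken);
            hc.DefaultRequestHeaders.Accept.Add(
                new MediaTypeWithQualityHeaderValue("application/json"));

            // step 4: build the secret Url from the vault and secret names
            string targetUri = "https://" + vaultName + ".vault.azure.net/secrets/" +
                secretName + "?api-version=2014-12-08-preview";

            // steps 5 and 6: request the secret and pull the value out of the response
            HttpResponseMessage hrm = await hc.GetAsync(new Uri(targetUri)).ConfigureAwait(false);

            if (!hrm.IsSuccessStatusCode)
                return null;

            string json = await hrm.Content.ReadAsStringAsync().ConfigureAwait(false);
            return JsonConvert.DeserializeObject<Secret>(json).Value;
        }
    }
}

In my actual class the tenant ID, client ID and client secret come out of configuration rather than being passed in as parameters, which is how the two-parameter call shown in the next section works.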

Using the Secret

So there you have it – a pretty quick and easy way to get at your secrets. Using it within my code then is really simple. I have one line of code to retrieve the secret using the code shown above:

conStr = VaultSecret.GetSecret(KEY_VAULT_NAME, KEY_VAULT_SECRET_NAME).Result;

Where VaultSecret is the name of my class where I put the code I showed above, GetSecret is the method I created that contains that code, and, well, hopefully the two parameters are self-explanatory. Overall – good stuff; all things being equal, it’s relatively quick to get up and going with it.

Debugging SharePoint Apps That Are Hosted In Windows Azure Web Sites

Today, I’m going to be the lazy human I’m so frequently accused of being by my somewhat faithful dog Shasta, and bring together two posts written by two other folks into one uber “ain’t it cool how this all works together” post by me.  Here are the two concepts we’re combining today:

  • Publishing a SharePoint provider-hosted app to a Windows Azure web site, as walked through in Vesa’s video.
  • Remotely debugging a Windows Azure web site from Visual Studio, as described in the remote debugging post I reference below.

Now, once our SharePoint App has been published to a Windows Azure web site, the error-prone and/or forward-thinking amongst you may be wondering…um, great…so what do I do to track down bugs?  Well that’s where the second piece of brilliant advice that I had nothing to do with comes in.

Now, let’s briefly walk through the steps to combine these two nuggets of goodness:

  1. Create a SharePoint provider hosted app and verify that it works.
  2. Create an Azure web site and download publishing profile. (in Vesa’s video)
  3. Use appregnew.aspx to get a client ID and client secret. (in Vesa’s video)
  4. Publish the App to your Windows Azure site using the publishing profile, client ID and client secret retrieved in the previous steps. (in Vesa’s video)
  5. Create the app package, install it to your app catalog, and add it to your site. (in Vesa’s video)
  6. Open Server Explorer in Visual Studio 2013, right-click on the Windows Azure node and select Connect to Windows Azure…
  7. Expand to see all the Azure services, and then expand the collection of Web Sites.
  8. Right-click on the Azure web site where you published your provider-hosted app and select Attach Debugger.
  9. The browser opens to your Azure web site, and VS.NET starts up in debugging mode.  Set your breakpoints in your code and start debugging!

See the remotely debugging Azure web sites post for the details on pre-requisites, but in short you need Visual Studio 2013 and the Azure 2.2 SDK for VS 2013; you will find a link to that in the blog post. (NOTE:  that same post also describes how to do this with Visual Studio 2012 but I have not tried that)  This actually works pretty great and I was able to get a first-hand experience using it when I went through the steps for this blog post.  As it turns out, the SharePoint site where I installed my sample application uses the Url https://sps2.  Well, the problem of course is that in my Azure Web site, my code was trying to make a CSOM call to an endpoint at “sps2”.  That works great when I’m in my lab environment, but out on the interwebs where Azure lives it of course cannot resolve a simple NetBIOS name (remember, this code is running server side, not client side).  So as a result it was blowing up.  By using this cool new debugging feature I was able to find my issue, appropriately for this debugging post.  Here’s a screenshot of it in action:

 

Creating and Using a Certificate for the CSUpload Tool with Azure IaaS Services

In my posting on using SharePoint up in Azure IaaS services (http://blogs.technet.com/b/speschka/archive/2012/06/17/creating-an-azure-persistent-vm-for-an-isolated-sharepoint-farm.aspx), one of my friends – Mike Taghizadeh, who demands that he be mentioned   🙂  – noticed that I didn’t have instructions for how to create a certificate and use that with the csupload command line tool.   So to help those that may be having the same issue I am going to make a quick run through that process here.

To begin with, really the easiest way to create a certificate that you can use for this purpose is to open the IIS Manager on Windows 7 or Windows Server 2008 or later, and create a self-signed certificate in there.  You can find it in IIS Manager by clicking on the server name, then double-clicking on Server Certificates in the middle pane.  That shows all of the installed certificates, and in the right task pane you will see an option to Create Self-Signed Certificate…

After you create your certificate you need to export it twice – once with the private key, and once without.  The reason you do it twice is because you need to upload the certificate without the private key to Azure.  The certificate with the private key needs to be added to your personal certificate store on the computer where you are making your connection to Azure with csupload.  When you create the certificate in the IIS Manager it puts the certificate in the machine’s personal store, which is why you need to export it and add it to your own personal store.

Exporting the certificates is fairly straightforward – just click on it in the IIS Manager then click on the Details tab of the certificate properties and click the Copy to File… button.  I’m confident you can use the wizard to figure out how to export it with and without the private key.  Once you have the export with the private key (the .pfx file), open the Certificates MMC snap-in and import it into the Personal store for your user account.  For the export without the private key, just navigate to the Azure portal and upload it there.  You want to click on the Hosted Services, Storage Accounts and CDN link in the bottom left navigation, and then click on the Management Certificates in the top left navigation.  Note that if you don’t see these navigation options you are probably viewing the new preview Azure management portal, and you need to switch back to the current Azure management portal.  You can do that by hovering over the green PREVIEW button in the top center of the page, then clicking the link to “Take me to the previous portal”.

When you’re in the Management Certificates section you can upload the certificate you exported (the .cer file).  What’s nice about doing it this way is that you can also copy the subscription ID and certificate thumbprint right out of the portal.  You’ll need both of these when you create the connection string for csupload.  If you click on the subscription, or click on the certificate, you’ll see these values in the right info pane in the Azure Management Portal.  Once you’ve copied the values out, you can plug them into a connection string for csupload like this:

csupload Set-Connection SubscriptionID=yourSubscriptionID;CertificateThumbprint=yourThumbprint;ServiceManagementEndpoint=https://management.core.windows.net

Once you do that you are good to go and start using csupload.  If you get an error message that says:  “Cannot access the certificate specified in the connection string. Verify that the certificate is installed and accessible.  Cannot find a connection string.” – that means that the certificate cannot be found in your user’s Personal certificate store.  Make sure that you have imported the .pfx file into it and try again.

 

Creating an Azure Persistent VM for an Isolated SharePoint Farm

The first step in being able to create a persistent VM in Azure is to get your account upgraded to take advantage of these features, which are all in preview.  Once the features are enabled you can follow this process to get the various components configured to support running an isolated SharePoint farm. 

In this case, by “isolated farm” I mean one in which there are two virtual images.  In my scenario one image is running Active Directory, DNS and ADFS.  The other image is running SharePoint 2010 and SQL 2012.  The second image is joined to the forest running on the first image.  The IP address used by the domain controller (SRDC) is 192.168.30.100; the IP addresses for the SharePoint server (SRSP) are 192.168.30.150 and 192.168.30.151.

IMPORTANT:  Make sure you enable Remote Desktop on your images before uploading them to Azure (it is disabled by default).  Without it, you will find it very difficult to manage your farm, specifically things like Active Directory and ADFS.

  1. Create a new Virtual Network.  Since I’m talking about an isolated farm here, that implies that I do not want to connect or integrate with my corporate identity or name services (like AD and DNS).  Instead I’m going to configure a new virtual network for my small farm.  Here are the sub steps for creating a virtual network for this scenario:
    1. Click New…Network…Custom Create.
    2. Type in a Name for the network, then from the Affinity Group drop down select Create a new affinity group.  Select either West US or East US as the region and type a name for the affinity group.  In this example I use the name SamlAffinity for the affinity group and   SamlNetwork for the network name.  Click the Next button.
    3. Enter the address space in CIDR format for the IP address you want to use then click the Next button.  Now your first question may be what is CIDR format?  That’s beyond the scope of this, but suffice to say that you can figure out CIDR format by going to a web site that will calculate it for you, like http://ip2cidr.com/.  In this case I wanted to use the entire 192.168.30.0 subnet, so I entered 192.168.30.0/24.  Note that you can optionally also create different subnets of use within your virtual network, but it is not needed for this scenario so I did not do it.  Click the Next button.
    4. For this particular scenario you can skip selecting a DNS server because we’ll configure it on the servers themselves once they’re up and running.  Click the Finish button to complete this task and create the virtual network.
  2. Create a new storage account to store your virtual machines.  This step is fairly straightforward – in the new management portal click on New…Storage…Quick Create.  Give it a unique name and click on the Region/Affinity Group drop down.  Select the affinity group you created in the previous step, then click the Create Storage Account button.  For this example, I’ve called my new storage account samlvms.
  3. Upload your images that you will be using.  In this case I have two images – SRDC and SRSP – that I need to push up to the storage account I created earlier – samlvms.  Uploading the image can be done with the csupload tool that is part of the Windows Azure 1.7 SDK, which you can get from https://www.windowsazure.com/en-us/develop/other/.  The documentation for using csupload can be found at http://msdn.microsoft.com/en-us/library/windowsazure/gg466228.aspx.  Detailed instructions on creating and uploading a VHD image using this new command can also be found at http://www.windowsazure.com/en-us/manage/windows/common-tasks/upload-a-vhd/.  A few other notes:
    1. UPDATE:  I added instructions for how to create the certificate used with the csupload tool.  You can find it at http://blogs.technet.com/b/speschka/archive/2012/09/15/creating-and-using-a-certificate-for-the-csupload-tool-with-azure-iaas-services.aspx.
    2. In this case I’m using images that already are ready to go – they are not sysprepped, they have a domain controller or SharePoint and SQL installed and just need to be started up.  You can use that as the basis for a new virtual machine but you need to use the Add-Disk command instead of the Add-PersistedVMImage command.  Use the latter if you have a sysprepped image upon which you want to base new images.
    3. Figuring out your management certificate thumbprint when you create the connection can be somewhat mystical.  The detailed instructions above include information on how to get this.  In addition, if you have already been publishing applications with Visual Studio then you can use the same certificate it does.  You have to go into Visual Studio, select Publish, then in the account drop down click on the Manage… link.  From there you can get the certificate that’s used.  If you are trying to use csupload on a different machine then you’ll also need to copy it (including the private key) and then move it to wherever you are using csupload.  Once you copy it over you need to add it to your personal certificate store; otherwise csupload will complain that it is unable to find a matching thumbprint or certificate.
    4. Here’s an example of the commands I used:
      1. csupload Set-Connection "SubscriptionID=mySubscriptionID;CertificateThumbprint=myThumbprintDetails;ServiceManagementEndpoint=https://management.core.windows.net"
      2. csupload Add-Disk -Destination "http://samlvms.blob.core.windows.net/srsp.vhd" -Label "SAML SharePoint" -LiteralPath "C:\srsp.vhd" -OS Windows -Overwrite
      3. csupload Add-Disk -Destination "http://samlvms.blob.core.windows.net/srdc.vhd" -Label "SAML DC" -LiteralPath "C:\srdc.vhd" -OS Windows -Overwrite
  4. Once the images are uploaded, you can create new virtual machines based on them.
  5. Click on the New…Virtual Machine…From Gallery.
  6. Click on My Disks on the left, and then select the image you want to create from your image library on the right, then click the Next button.
  7. Type a machine name and select a machine size, then click the Next button.
    1. Select standalone virtual machine (unless you are connecting to an existing one) and enter an available DNS name, select your region and subscription, then click the Next button
    2. Either use no availability set, select an existing one, or create a new one; when finished, click the Finish button to complete the wizard.

Your images may go through multiple states, including “Stopped”, before they finally enter the running state.  Once an image starts running, you need to give it a couple of minutes or so to boot up, and then you can select it in the Azure portal and click the Connect button on the bottom of the page.  That creates and downloads an RDP connection that you can use to connect to your image and work with it.

It’s also important to note that your network settings are not preserved.  What I mean by that is my images were using static IP addresses, but after restarting the images in Azure they were using DHCP and getting local addresses, so the images require some reconfiguration to work.

Networking Changes

The networking configuration is changed for the images once they are started in Azure.  Azure persistent VMs use DHCP, but the leases last indefinitely so it acts very similar to fixed IP addresses.  One of the big limits though is that you can only have one IP address per machine, so that means the second lab for the SAML Ramp will not be feasible.

To begin with though you need to correct DNS and the domain controller, so RDP into the domain controller first (SRDC in my scenario).  Restart the Net Logon service, either through the Services applet or in a command prompt by typing net stop netlogon followed by net start netlogon.  This will register your new DHCP address as one of the host addresses for the domain.  Next you need to delete the old host address for the domain, which for me was 192.168.30.100.  Open up DNS Manager and then double-click on the Forward Lookup Zone for your domain.  Find the host (A) record with the old address, 192.168.30.100 in my case (it will also say “(same as parent folder)” in the Name column), and delete it.

Next you need to change the DNS server for your network adapter to point to the DHCP address that was assigned to the image.  Open a command prompt and type ipconfig and press Enter.  The IPv4 Address that is shown is what needs to be used as the DNS server address.  To change it, right click on the network icon in the taskbar and select Open Network and Sharing Center.  Click on the change adapter settings link.  Right-click on the adapter and choose Properties.

When the Properties dialog opens, uncheck the box next to Internet Protocol Version 6.  Click on Internet Protocol Version 4 but DO NOT uncheck the box, then click on the Properties button.  In the DNS section click on the radio button that says Use the following DNS server addresses and for the Preferred DNS server enter the DHCP address for the SRDC server that you retrieved using ipconfig.  Click the OK button to close the Internet Protocol Version 4 Properties dialog, then click the OK button again to close the network adapter Properties dialog.  You can now close the Network Connections window.

Now if you open a command prompt and type ping your Active Directory forest name it should resolve the name and respond with a ping; on my image it responded with address 192.168.30.4.

On the SharePoint server you just need to change the Primary DNS server IP address to the IP address of the domain controller, which in this example was 192.168.30.4.  After doing so you should be able to ping your domain controller name and Active Directory forest name.  Once this is working you need to get the new IP address that’s been assigned to the SharePoint server and update DNS on the domain controller if you used any static host names for your SharePoint sites.  One limitation that could NOT be addressed in this scenario is the fact that my SharePoint server used multiple IP addresses; persistent images in Azure currently only support a single IP address.

The Azure Custom Claim Provider for SharePoint Project Part 3

In Part 1 of this series, I briefly outlined the goals for this project, which at a high level is to use Windows Azure table storage as a data store for a SharePoint custom claims provider.  The claims provider is going to use the CASI Kit to retrieve the data it needs from Windows Azure in order to provide people picker (i.e. address book) and type in control name resolution functionality. 

In Part 2, I walked through all of the components that run in the cloud – the data classes that are used to work with Azure table storage and queues, a worker role to read items out of queues and populate table storage, and a WCF front end that lets a client application create new items in the queue as well as do all the standard SharePoint people picker stuff – provide a list of supported claim types, search for claim values and resolve claims.

In this, the final part in this series, we’ll walk through the different components used on the SharePoint side.  It includes a custom component built using the CASI Kit to add items to the queue as well as to make our calls to Azure table storage.  It also includes our custom claims provider, which will use the CASI Kit component to connect SharePoint with those Azure functions.

To begin with let’s take a quick look at the custom CASI Kit component.  I’m not going to spend a whole lot of time here because the CASI Kit is covered extensively on this blog.  This particular component is described in Part 3 of the CASI Kit series.  Briefly though, what I’ve done is created a new Windows class library project.  I’ve added references to the CASI Kit base class assembly and the other required .NET assemblies (that I describe in part 3).  I’ve added a Service Reference in my project to the WCF endpoint I created in Part 2 of this project.  Finally, I added a new class to the project and have it inherit the CASI Kit base class, and I’ve added the code to override the ExecuteRequest method.  As you have hopefully seen in the CASI Kit series, here’s what my code looks like to override ExecuteRequest:

public class DataSource : AzureConnect.WcfConfig
{
    public override bool ExecuteRequest()
    {
        try
        {
            //create the proxy instance with bindings and endpoint the base class
            //configuration control has created for this
            AzureClaims.AzureClaimsClient cust =
                new AzureClaims.AzureClaimsClient(this.FedBinding,
                this.WcfEndpointAddress);

            //configure the channel so we can call it with
            //FederatedClientCredentials.
            SPChannelFactoryOperations.ConfigureCredentials<AzureClaims.IAzureClaims>
                (cust.ChannelFactory,
                Microsoft.SharePoint.SPServiceAuthenticationMode.Claims);

            //create a channel to the WCF endpoint using the
            //token and claims of the current user
            AzureClaims.IAzureClaims claimsWCF =
                SPChannelFactoryOperations.CreateChannelActingAsLoggedOnUser
                <AzureClaims.IAzureClaims>(cust.ChannelFactory,
                this.WcfEndpointAddress,
                new Uri(this.WcfEndpointAddress.Uri.AbsoluteUri));

            //set the client property for the base class
            this.WcfClientProxy = claimsWCF;
        }
        catch (Exception ex)
        {
            Debug.WriteLine(ex.Message);
        }

        //now that the configuration is complete, call the method
        return base.ExecuteRequest();
    }
}

 

“AzureClaims” is the name of the Service Reference I created, and it uses the IAzureClaims interface that I defined in my WCF project in Azure.  As explained previously in the CASI Kit series, this is basically boilerplate code; I’ve just plugged in the name of my interface and class that is exposed in the WCF application.  The other thing I’ve done, as is also explained in the CASI Kit series, is to create an ASPX page called AzureClaimProvider.aspx.  I just copied and pasted in the code I describe in Part 3 of the CASI Kit series and substituted the name of my class and the endpoint it can be reached at.  The control tag in the ASPX page for my custom CASI Kit component looks like this:

<AzWcf:DataSource runat="server" id="wcf" WcfUrl="https://spsazure.vbtoys.com/AzureClaims.svc" OutputType="Page" MethodName="GetClaimTypes" AccessDeniedMessage="" />

 

The main things to note here are that I created a CNAME record for “spsazure.vbtoys.com” that points to my Azure application at cloudapp.net (this is also described in Part 3 of the CASI Kit).  I’ve set the default MethodName that the page is going to invoke to be GetClaimTypes, which is a method that takes no parameters and returns a list of claim types that my Azure claims provider supports.  This makes it a good test to validate the connectivity between my Azure application and SharePoint.  I can simply go to http://anySharePointsite/_layouts/AzureClaimProvider.aspx and if everything is configured correctly I will see some data in the page.  Once I’ve deployed my project by adding the assembly to the Global Assembly Cache and deploying the page to SharePoint’s _layouts directory that’s exactly what I did – I hit the page in one of my sites and verified that it returned data, so I knew my connection between SharePoint and Azure was working.

Now that I have the plumbing in place, I finally get to the “fun” part of the project, which is to do two things:

  1. Create "some component" that will send information about new users to my Azure queue
  2. Create a custom claims provider that will use my custom CASI Kit component to provide claim types, name resolution and search for claims.

This is actually a good point to stop and step back a little.  In this particular case I just wanted to roll something out as quickly as possible.  So what I did is create a new web application and enable anonymous access.  As I'm sure you all know, just enabling it at the web app level does NOT enable it at the site collection level.  So for this scenario I also enabled it at the root site collection only, and I granted anonymous access to everything in that site.  All the other site collections, which contain the members-only content, do NOT have anonymous access enabled, so users have to be granted rights to see them.

The next thing to think about is how to manage the identities that are going to use the site.  Obviously, managing those accounts myself is not something I want to do.  I could have come up with a number of different methods to sync accounts into Azure or something goofy like that, but as I explained in Part 1 of this series, there's a whole bunch of providers that do that already, so I'm going to let them keep doing what they do.  What I mean by that is that I took advantage of another Microsoft cloud service called ACS, or Access Control Service.  In short, ACS acts as an identity provider to SharePoint.  So I just created a trust between my SharePoint farm and the instance of ACS that I created for this POC.  In ACS I added SharePoint as a relying party, so ACS knows where to send users once they've authenticated.  I also configured ACS to let users sign in using their Gmail, Yahoo, or Facebook accounts.  Once they've signed in, ACS gets back a single claim that I'll use – email address – and sends it along to SharePoint.

Okay, so that’s all of the background on the plumbing – Azure is providing table storage and queues to work with the data, ACS is providing authentication services, and CASI Kit is providing the plumbing to the data.

So with all that plumbing described, how are we going to use it?  Well, I still wanted the process of becoming a member to be pretty painless, so what I did is write a web part that adds users to my Azure queue.  It checks to see whether the request is authenticated (i.e. the user has clicked the Sign In link that you get in an anonymous site, signed into one of the providers I mentioned above, and ACS has sent me back their claim information).  If the request is not authenticated the web part doesn't do anything.  If the request is authenticated, it renders a button that, when clicked, takes the user's email claim and adds it to the Azure queue.  This is the part I said we should step back and think about for a moment.  For a POC that's all fine and good; it works.  However, you can think about other ways in which you could process this request.  For example, maybe you write the information to a SharePoint list (see the sketch after this paragraph).  You could write a custom timer job (which the CASI Kit works with very nicely) and periodically process new requests out of that list.  You could use the SPWorkItem to queue the requests up to process later.  You could store it in a list and add a custom workflow that goes through some approval process, and once the request has been approved uses a custom workflow action to invoke the CASI Kit to push the details up to the Azure queue.  In short, there's a LOT of power, flexibility and customization possible here – it's all up to your imagination.  At some point I may write another version of this that writes the request to a custom list, processes it asynchronously, adds the data to the Azure queue and then automatically adds the account to the Visitors group in one of the sub sites, so the user would be signed up and ready to go right away.  But that's for another post if I do that.
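As a quick illustration of that first alternative, here's a minimal sketch of writing the request to a SharePoint list instead of the Azure queue.  It assumes a hypothetical "Membership Requests" list with an "Email" column in the current web; it is not part of the attached sample code.

//hedged sketch: store the membership request in a SharePoint list for later processing
//requires Microsoft.SharePoint; assumes a "Membership Requests" list with an "Email" column
SPWeb web = SPContext.Current.Web;
SPList list = web.Lists["Membership Requests"];

SPListItem item = list.Items.Add();
item["Title"] = ret.Value;   //ret is the email claim found in the web part code shown below
item["Email"] = ret.Value;
item.Update();

In a real implementation you would likely wrap this in SPSecurity.RunWithElevatedPrivileges, since the signed-in external user may not have rights to write to the list directly.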

So, all that being said – as I described above, I'm just letting the user click a button if they've signed in, and then I'll use my custom CASI Kit component to call out to the WCF endpoint and add the information to the Azure queue.  Here's the code for the web part – pretty simple, courtesy of the CASI Kit:

public class AddToAzureWP : WebPart
{
       //button whose click event we need to track so that we can
       //add the user to Azure
       Button addBtn = null;
       Label statusLbl = null;

       protected override void CreateChildControls()
       {
              if (this.Page.Request.IsAuthenticated)
              {
                     addBtn = new Button();
                     addBtn.Text = "Request Membership";
                     addBtn.Click += new EventHandler(addBtn_Click);
                     this.Controls.Add(addBtn);

                     statusLbl = new Label();
                     this.Controls.Add(statusLbl);
              }
       }

       void addBtn_Click(object sender, EventArgs e)
       {
              try
              {
                     //look for the claims identity
                     IClaimsPrincipal cp = Page.User as IClaimsPrincipal;

                     if (cp != null)
                     {
                            //get the claims identity so we can enum claims
                            IClaimsIdentity ci = (IClaimsIdentity)cp.Identity;

                            //look for the email claim
                            //see if there are claims present before running through this
                            if (ci.Claims.Count > 0)
                            {
                                   //look for the email address claim
                                   var eClaim = from Claim c in ci.Claims
                                          where c.ClaimType == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"
                                          select c;

                                   Claim ret = eClaim.FirstOrDefault<Claim>();

                                   if (ret != null)
                                   {
                                          //create the string we're going to send to the Azure queue:  claim, value, and display name
                                          //note that I'm using "#" as delimiters because there is only one parameter, and CASI Kit
                                          //uses ; as a delimiter so a different value is needed.  If ; were used CASI would try and
                                          //make it three parameters, when in reality it's only one
                                          string qValue = ret.ClaimType + "#" + ret.Value + "#" + "Email";

                                          //create the connection to Azure and upload
                                          //create an instance of the control
                                          AzureClaimProvider.DataSource cfgCtrl = new AzureClaimProvider.DataSource();

                                          //set the properties to retrieve data; must configure cache properties since we're using it programmatically
                                          //cache is not actually used in this case though
                                          cfgCtrl.WcfUrl = AzureCCP.SVC_URI;
                                          cfgCtrl.OutputType = AzureConnect.WcfConfig.DataOutputType.None;
                                          cfgCtrl.MethodName = "AddClaimsToQueue";
                                          cfgCtrl.MethodParams = qValue;
                                          cfgCtrl.ServerCacheTime = 10;
                                          cfgCtrl.ServerCacheName = ret.Value;
                                          cfgCtrl.SharePointClaimsSiteUrl = this.Page.Request.Url.ToString();

                                          //execute the method
                                          bool success = cfgCtrl.ExecuteRequest();

                                          if (success)
                                          {
                                                 //if it worked tell the user
                                                 statusLbl.Text = "<p>Your information was successfully added.  You can now contact any of " +
                                                        "the other Partner Members or our Support staff to get access rights to Partner " +
                                                        "content.  Please note that it takes up to 15 minutes for your request to be " +
                                                        "processed.</p>";
                                          }
                                          else
                                          {
                                                 statusLbl.Text = "<p>There was a problem adding your info to Azure; please try again later or " +
                                                        "contact Support if the problem persists.</p>";
                                          }
                                   }
                            }
                     }
              }
              catch (Exception ex)
              {
                     statusLbl.Text = "There was a problem adding your info to Azure; please try again later or " +
                            "contact Support if the problem persists.";
                     Debug.WriteLine(ex.Message);
              }
       }
}

 

So a brief rundown of the code looks like this:  I first make sure the request is authenticated; if it is, I add the button to the page along with an event handler for its click event.  In the button's click event handler I get an IClaimsPrincipal reference to the current user, and then look at the user's claims collection.  I run a LINQ query against the claims collection to look for the email claim, which is the identity claim for my SPTrustedIdentityTokenIssuer.  If I find the email claim, I create a concatenated string with the claim type, claim value and friendly name for the claim.  Again, this isn't strictly required in this scenario, but since I wanted this to be usable in a more generic scenario I coded it up this way.  That concatenated string is the value for the method I have on the WCF that adds data to the Azure queue.  I then create an instance of my custom CASI Kit component and configure it to call the WCF method that adds data to the queue, then I call the ExecuteRequest method to actually fire off the data.

If I get a response indicating the data was added to the queue successfully then I let the user know; otherwise I let them know there was a problem and that they may need to try again later.  In a real scenario of course I would have even more error logging so I could track down exactly what happened and why.  Even with this as is though, the CASI Kit will write any error information to the ULS logs in an SPMonitoredScope, so everything it does for the request will have a unique correlation ID with which we can view all activity associated with the request.  So I'm actually in a pretty good state right now.
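If you did want to add that extra logging yourself, a minimal sketch along these lines would work from the web part's catch block; the category name "Azure Claims Web Part" is just an illustration, not something the attached sample defines.

//hedged sketch: write additional error details to the ULS log from the web part
//requires Microsoft.SharePoint.Administration; the category name is an assumption
SPDiagnosticsService.Local.WriteTrace(0,
    new SPDiagnosticsCategory("Azure Claims Web Part", TraceSeverity.Unexpected, EventSeverity.Error),
    TraceSeverity.Unexpected,
    "Error adding user to Azure queue: {0}",
    ex.Message);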

Okay – we’ve walked through all the plumbing pieces, and I’ve shown how data gets added to the Azure queue and from there pulled out by a worker process and added into table storage.  That’s really the ultimate goal because now we can walk through the custom claims provider.  It’s going to use the CASI Kit to call out and query the Azure table storage I’m using.  Let’s look at the most interesting aspects of the custom claims provider.

First let’s look at a couple of class level attributes:

//the WCF endpoint that we'll use to connect for address book functions
//test url:  https://az1.vbtoys.com/AzureClaimsWCF/AzureClaims.svc
//production url:  https://spsazure.vbtoys.com/AzureClaims.svc
public static string SVC_URI = "https://spsazure.vbtoys.com/AzureClaims.svc";

//the identity claimtype value
private const string IDENTITY_CLAIM =
       "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress";

//the collection of claim types we support; it won't change over the course of
//the STS (w3wp.exe) lifetime so we'll cache it once we get it
AzureClaimProvider.AzureClaims.ClaimTypeCollection AzureClaimTypes =
       new AzureClaimProvider.AzureClaims.ClaimTypeCollection();

 

First, I use a static field to refer to the WCF endpoint the CASI Kit should connect to.  You'll note that I have both my test endpoint and my production endpoint in the comments.  When you're using the CASI Kit programmatically, like we will in our custom claims provider, you always have to tell it which WCF endpoint it should talk to.

Next, as I've described previously, I'm using the email claim as my identity claim.  Since I will refer to it a number of times throughout my provider, I've just plugged it into a constant at the class level.

Finally, I have a collection of AzureClaimTypes.  I explained in Part 2 of this series why I’m using a collection, and I’m just storing it here at the class level so that I don’t have to go and re-fetch that information each time my FillHierarchy method is invoked.  Calls out to Azure aren’t cheap, so I minimize them where I can.

Here’s the next chunk of code:

internal static string ProviderDisplayName
{
       get
       {
              return "AzureCustomClaimsProvider";
       }
}

internal static string ProviderInternalName
{
       get
       {
              return "AzureCustomClaimsProvider";
       }
}

//*******************************************************************
//USE THIS PROPERTY NOW WHEN CREATING THE CLAIM FOR THE PICKERENTITY
internal static string SPTrustedIdentityTokenIssuerName
{
       get
       {
              return "SPS ACS";
       }
}

public override string Name
{
       get
       {
              return ProviderInternalName;
       }
}

 

The reason I wanted to point this code out is that since my provider is issuing identity claims, it MUST be the default provider for the SPTrustedIdentityTokenIssuer.  Explaining how to do that is outside the scope of this post, but I've covered it elsewhere in my blog.  The main thing to remember is that there must be a strong relationship between the name you use for your provider and the name used for the SPTrustedIdentityTokenIssuer.  The value I used for the ProviderInternalName is the name that I must plug into the ClaimProviderName property of the SPTrustedIdentityTokenIssuer.  Also, I need to use the name of the SPTrustedIdentityTokenIssuer when I'm creating identity claims for users.  So I've created an SPTrustedIdentityTokenIssuer called "SPS ACS" and I've added that to my SPTrustedIdentityTokenIssuerName property.  That's why I have these values coded in here.
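For reference, a minimal sketch of that wiring in the server object model might look like the following; this is an assumption based on the standard pattern (the same thing is commonly done with the Get-SPTrustedIdentityTokenIssuer PowerShell cmdlet), not code from this project.

//hedged sketch: make this claims provider the default provider for the "SPS ACS" token issuer
//requires Microsoft.SharePoint.Administration.Claims and System.Linq; assumes the issuer already exists
SPSecurityTokenServiceManager mgr = SPSecurityTokenServiceManager.Local;

SPTrustedLoginProvider issuer = mgr.TrustedLoginProviders
       .Where(p => p.Name == "SPS ACS")
       .First();

issuer.ClaimProviderName = "AzureCustomClaimsProvider";
issuer.Update();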

Since I’m not doing any claims augmentation in this provider, I have not written any code to override FillClaimTypes, FillClaimValueTypes or FillEntityTypes.  The next chunk of code I have is FillHierarchy, which is where I tell SharePoint what claim types I support.  Here’s the code for that:

try
{
       if (
               (AzureClaimTypes.ClaimTypes == null) ||
               (AzureClaimTypes.ClaimTypes.Count() == 0)
          )
       {
              //create an instance of the control
              AzureClaimProvider.DataSource cfgCtrl = new AzureClaimProvider.DataSource();

              //set the properties to retrieve data; must configure cache properties since we're using it programmatically
              //cache is not actually used in this case though
              cfgCtrl.WcfUrl = SVC_URI;
              cfgCtrl.OutputType = AzureConnect.WcfConfig.DataOutputType.None;
              cfgCtrl.MethodName = "GetClaimTypes";
              cfgCtrl.ServerCacheTime = 10;
              cfgCtrl.ServerCacheName = "GetClaimTypes";
              cfgCtrl.SharePointClaimsSiteUrl = context.AbsoluteUri;

              //execute the method
              bool success = cfgCtrl.ExecuteRequest();

              if (success)
              {
                     //if it worked, get the list of claim types out
                     AzureClaimTypes =
                            (AzureClaimProvider.AzureClaims.ClaimTypeCollection)cfgCtrl.QueryResultsObject;
              }
       }

       //make sure picker is asking for the type of entity we return; site collection admin won't for example
       if (!EntityTypesContain(entityTypes, SPClaimEntityTypes.User))
              return;

       //at this point we have whatever claim types we're going to have, so add them to the hierarchy
       //check to see if the hierarchyNodeID is null; it will be when the control
       //is first loaded but if a user clicks on one of the nodes it will return
       //the key of the node that was clicked on.  This lets you build out a
       //hierarchy as a user clicks on something, rather than all at once
       if (
               (string.IsNullOrEmpty(hierarchyNodeID)) &&
               (AzureClaimTypes.ClaimTypes.Count() > 0)
          )
       {
              //enumerate through each claim type
              foreach (AzureClaimProvider.AzureClaims.ClaimType clm in AzureClaimTypes.ClaimTypes)
              {
                     //when it first loads add all our nodes
                     hierarchy.AddChild(new
                            Microsoft.SharePoint.WebControls.SPProviderHierarchyNode(
                            ProviderInternalName, clm.FriendlyName, clm.ClaimTypeName, true));
              }
       }
}
catch (Exception ex)
{
       Debug.WriteLine("Error filling hierarchy: " + ex.Message);
}

 

So here I’m looking to see if I’ve grabbed the list of claim types I support already.  If I haven’t then I create an instance of my CASI Kit custom control and make a call out to my WCF to retrieve the claim types; I do this by calling the GetClaimTypes method on my WCF class.  If I get data back then I plug it into the class-level variable I described earlier called AzureClaimTypes, and then I add it to the hierarchy of claim types I support.

The next methods we'll look at are the FillResolve methods.  The FillResolve methods have two different signatures because they do two different things.  In one scenario we have a specific claim with value and type and SharePoint just wants to verify that it is valid.  In the second case a user has just typed some value into the SharePoint type in control, so it's effectively the same thing as doing a search for claims.  Because of that, I'll look at them separately.

In the case where I have a specific claim and SharePoint wants to verify the values, I call a custom method I wrote called GetResolveResults.  In that method I pass in the Uri where the request is being made as well as the claim type and claim value SharePoint is seeking to validate.  The GetResolveResults then looks like this:

//Note that claimType is being passed in here for future extensibility; in the
//current case though, we're only using identity claims
private AzureClaimProvider.AzureClaims.UniqueClaimValue GetResolveResults(string siteUrl,
       string searchPattern, string claimType)
{
       AzureClaimProvider.AzureClaims.UniqueClaimValue result = null;

       try
       {
              //create an instance of the control
              AzureClaimProvider.DataSource cfgCtrl = new AzureClaimProvider.DataSource();

              //set the properties to retrieve data; must configure cache properties since we're using it programmatically
              //cache is not actually used in this case though
              cfgCtrl.WcfUrl = SVC_URI;
              cfgCtrl.OutputType = AzureConnect.WcfConfig.DataOutputType.None;
              cfgCtrl.MethodName = "ResolveClaim";
              cfgCtrl.ServerCacheTime = 10;
              cfgCtrl.ServerCacheName = claimType + ";" + searchPattern;
              cfgCtrl.MethodParams = IDENTITY_CLAIM + ";" + searchPattern;
              cfgCtrl.SharePointClaimsSiteUrl = siteUrl;

              //execute the method
              bool success = cfgCtrl.ExecuteRequest();

              //if the query encountered no errors then capture the result
              if (success)
                     result = (AzureClaimProvider.AzureClaims.UniqueClaimValue)cfgCtrl.QueryResultsObject;
       }
       catch (Exception ex)
       {
              Debug.WriteLine(ex.Message);
       }

       return result;
}

 

So here I'm creating an instance of the custom CASI Kit control and then calling the ResolveClaim method on my WCF.  That method takes two parameters, so I pass them in as semi-colon delimited values (because that's how the CASI Kit distinguishes between different param values).  I then just execute the request and if it finds a match it will return a single UniqueClaimValue; otherwise the return value will be null.  Back in my FillResolve method this is what my code looks like:

protected override void FillResolve(Uri context, string[] entityTypes, SPClaim resolveInput, List<PickerEntity> resolved)
{
       //make sure picker is asking for the type of entity we return; site collection admin won't for example
       if (!EntityTypesContain(entityTypes, SPClaimEntityTypes.User))
              return;

       try
       {
              //look for matching claims
              AzureClaimProvider.AzureClaims.UniqueClaimValue result =
                     GetResolveResults(context.AbsoluteUri, resolveInput.Value,
                     resolveInput.ClaimType);

              //if we found a match then add it to the resolved list
              if (result != null)
              {
                     PickerEntity pe = GetPickerEntity(result.ClaimValue, result.ClaimType,
                            SPClaimEntityTypes.User, result.DisplayName);
                     resolved.Add(pe);
              }
       }
       catch (Exception ex)
       {
              Debug.WriteLine(ex.Message);
       }
}

 

So I'm checking first to make sure that the request is for a User claim, since that's the only type of claim my provider returns.  If the request is not for a User claim then I drop out.  Next I call my method to resolve the claim and if I get back a non-null result, I process it.  To process it I call another custom method I wrote called GetPickerEntity.  Here I pass in the claim type and value to create an identity claim, and then I can add the PickerEntity that it returns to the List of PickerEntity instances passed into my method.  I'm not going to go into the GetPickerEntity method because this post is already incredibly long and I've covered how to do so in other posts on my blog.
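For readers who don't want to dig through those other posts, here's a minimal sketch of what a GetPickerEntity helper along those lines typically looks like; it follows the standard claims provider pattern and is an assumption, not the exact code shipped with this project.  It lives inside the claims provider class so it can call the CreatePickerEntity base method.

//hedged sketch of a typical GetPickerEntity helper for an identity claim
private PickerEntity GetPickerEntity(string claimValue, string claimType, string entityType, string displayName)
{
       PickerEntity pe = CreatePickerEntity();

       //create an identity claim issued by the trusted token issuer ("SPS ACS")
       pe.Claim = new SPClaim(claimType, claimValue,
              Microsoft.IdentityModel.Claims.ClaimValueTypes.String,
              SPOriginalIssuers.Format(SPOriginalIssuerType.TrustedProvider, SPTrustedIdentityTokenIssuerName));

       pe.Description = displayName + ": " + claimValue;
       pe.DisplayText = claimValue;
       pe.EntityData[PeopleEditorEntityDataKeys.DisplayName] = claimValue;
       pe.EntityType = entityType;
       pe.IsResolved = true;

       return pe;
}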

Now let’s talk about the other FillResolve method.  As I explained earlier, it basically acts just like a search so I’m going to combine the FillResolve and FillSearch methods mostly together here.  Both of these methods are going to call a custom method I wrote called SearchClaims, that looks like this:

private AzureClaimProvider.AzureClaims.UniqueClaimValueCollection SearchClaims(string claimType, string searchPattern,
       string siteUrl)
{
       AzureClaimProvider.AzureClaims.UniqueClaimValueCollection results =
              new AzureClaimProvider.AzureClaims.UniqueClaimValueCollection();

       try
       {
              //create an instance of the control
              AzureClaimProvider.DataSource cfgCtrl = new AzureClaimProvider.DataSource();

              //set the properties to retrieve data; must configure cache properties since we're using it programmatically
              //cache is not actually used in this case though
              cfgCtrl.WcfUrl = SVC_URI;
              cfgCtrl.OutputType = AzureConnect.WcfConfig.DataOutputType.None;
              cfgCtrl.MethodName = "SearchClaims";
              cfgCtrl.ServerCacheTime = 10;
              cfgCtrl.ServerCacheName = claimType + ";" + searchPattern;
              cfgCtrl.MethodParams = claimType + ";" + searchPattern + ";200";
              cfgCtrl.SharePointClaimsSiteUrl = siteUrl;

              //execute the method
              bool success = cfgCtrl.ExecuteRequest();

              if (success)
              {
                     //if it worked, get the array of results
                     results =
                            (AzureClaimProvider.AzureClaims.UniqueClaimValueCollection)cfgCtrl.QueryResultsObject;
              }
       }
       catch (Exception ex)
       {
              Debug.WriteLine("Error searching claims: " + ex.Message);
       }

       return results;
}

 

In this method, as you’ve seen elsewhere in this post, I’m just creating an instance of my custom CASI Kit control.  I’m calling the SearchClaims method on my WCF and I’m passing in the claim type I want to search in, the claim value I want to find in that claim type, and the maximum number of records to return.  You may recall from Part 2 of this series that SearchClaims just does a BeginsWith on the search pattern that’s passed in, so with lots of users there could easily be over 200 results.  However 200 is the maximum number of matches that the people picker will show, so that’s all I ask for.  If you really think that users are going to scroll through more than 200 results looking for a result I’m here to tell you that ain’t likely.

So now that we have our collection of UniqueClaimValues back, let's look at how we use it in our two override methods in the custom claims provider.  First, here's what the FillResolve method looks like:

protected override void FillResolve(Uri context, string[] entityTypes, string resolveInput, List<PickerEntity> resolved)
{
       //this version of resolve is just like a search, so we'll treat it like that
       //make sure picker is asking for the type of entity we return; site collection admin won't for example
       if (!EntityTypesContain(entityTypes, SPClaimEntityTypes.User))
              return;

       try
       {
              //do the search for matches
              AzureClaimProvider.AzureClaims.UniqueClaimValueCollection results =
                     SearchClaims(IDENTITY_CLAIM, resolveInput, context.AbsoluteUri);

              //go through each match and add a picker entity for it
              foreach (AzureClaimProvider.AzureClaims.UniqueClaimValue cv in results.UniqueClaimValues)
              {
                     PickerEntity pe = GetPickerEntity(cv.ClaimValue, cv.ClaimType, SPClaimEntityTypes.User, cv.DisplayName);
                     resolved.Add(pe);
              }
       }
       catch (Exception ex)
       {
              Debug.WriteLine(ex.Message);
       }
}

It just calls the SearchClaims method, and for each result it gets back (if any), it creates a new PickerEntity and adds it to the List of them passed into the override.  All of them will then show up in the type in control in SharePoint.  The FillSearch method uses it like this:

protected override void FillSearch(Uri context, string[] entityTypes, string searchPattern, string hierarchyNodeID, int maxCount, SPProviderHierarchyTree searchTree)
{
       //make sure picker is asking for the type of entity we return; site collection admin won't for example
       if (!EntityTypesContain(entityTypes, SPClaimEntityTypes.User))
              return;

       try
       {
              //do the search for matches
              AzureClaimProvider.AzureClaims.UniqueClaimValueCollection results =
                     SearchClaims(IDENTITY_CLAIM, searchPattern, context.AbsoluteUri);

              //if there was more than zero results, add them to the picker
              if (results.UniqueClaimValues.Count() > 0)
              {
                     foreach (AzureClaimProvider.AzureClaims.UniqueClaimValue cv in results.UniqueClaimValues)
                     {
                            //node where we'll stick our matches
                            Microsoft.SharePoint.WebControls.SPProviderHierarchyNode matchNode = null;

                            //get a picker entity to add to the dialog
                            PickerEntity pe = GetPickerEntity(cv.ClaimValue, cv.ClaimType, SPClaimEntityTypes.User, cv.DisplayName);

                            //add the node where it should be displayed too
                            if (!searchTree.HasChild(cv.ClaimType))
                            {
                                   //create the node so we can show our match in there too
                                   matchNode = new
                                          SPProviderHierarchyNode(ProviderInternalName,
                                          cv.DisplayName, cv.ClaimType, true);

                                   //add it to the tree
                                   searchTree.AddChild(matchNode);
                            }
                            else
                                   //get the node for this claim type
                                   matchNode = searchTree.Children.Where(theNode =>
                                          theNode.HierarchyNodeID == cv.ClaimType).First();

                            //add the match to our node
                            matchNode.AddEntity(pe);
                     }
              }
       }
       catch (Exception ex)
       {
              Debug.WriteLine(ex.Message);
       }
}

 

In FillSearch I'm calling my SearchClaims method again.  For each UniqueClaimValue I get back (if any), I look to see if I've added the claim type to the results hierarchy node.  Again, in this case I'll only ever return one claim type (email), but I wrote this to be extensible so you could use more claim types later.  So I add the hierarchy node if it doesn't exist, or find it if it does.  I take the PickerEntity that I created from the UniqueClaimValue and add it to the hierarchy node.  And that's pretty much all there is to it.

I'm not going to cover the FillSchema method or any of the four Boolean property overrides that every custom claims provider must have, because there's nothing special in them for this scenario and I've covered the basics in other posts on this blog.  I'm also not going to cover the feature receiver that's used to register this custom claims provider because, again, there's nothing special for this project and I've covered it elsewhere.  After you compile it, you just need to make sure that the assemblies for the custom claims provider and the custom CASI Kit component are registered in the Global Assembly Cache on each server in the farm, and you need to configure the SPTrustedIdentityTokenIssuer to use your custom claims provider as the default provider (also explained elsewhere in this blog).

That’s the basic scenario end to end.  When you are in the SharePoint site and you try and add a new user (email claim really), the custom claim provider is invoked first to get a list of supported claim types, and then again as you type in a value in the type in control, or search for a value using the people picker.  In each case the custom claims provider uses the custom CASI Kit control to make an authenticated call out to Windows Azure to talk to our WCF, which uses our custom data classes to retrieve data from Azure table storage.  It returns the results and we unwrap that and present it to the user.  With that you have your complete turnkey SharePoint and Azure “extranet in a box” solution that you can use as is, or modify to suit your purposes.  The source code for the custom CASI Kit component, web part that registers the user in an Azure queue, and custom claims provider is all attached to this posting.  Hope you enjoy it, find it useful, and can start to visualize how you can tie these separate services together to create solutions to your problems.  Here’s some screenshots of the final solution:

Root site as anonymous user:

Here’s what it looks like after you’ve authenticated; notice that the web part now displays the Request Membership button:

Here's an example of the people picker in action, after searching for claim values that start with "sp":

You can download the attachment here:

The Azure Custom Claim Provider for SharePoint Project Part 2

In Part 1 of this series, I briefly outlined the goals for this project, which at a high level is to use Windows Azure table storage as a data store for a SharePoint custom claims provider.  The claims provider is going to use the CASI Kit to retrieve the data it needs from Windows Azure in order to provide people picker (i.e. address book) and type in control name resolution functionality. 

In Part 3 I create all of the components used in the SharePoint farm.  That includes a custom component based on the CASI Kit that manages all the communication between SharePoint and Azure.  There is a custom web part that captures information about new users and pushes it into an Azure queue.  Finally, there is a custom claims provider that communicates with Azure table storage through a WCF – via the CASI Kit custom component – to enable the type in control and people picker functionality.

Now let’s expand on this scenario a little more.

This type of solution plugs in pretty nicely to a fairly common scenario, which is when you want a minimally managed extranet.  So for example, you want your partners or customers to be able to hit a website of yours, request an account, and then be able to automatically “provision” that account…where “provision” can mean a lot of different things to different people.  We’re going to use that as the baseline scenario here, but of course, let our public cloud resources do some of the work for us.

Let's start by looking at the cloud components we're going to develop ourselves:

  • A table to keep track of all the claim types we’re going to support
  • A table to keep track of all the unique claim values for the people picker
  • A queue where we can send data that should be added to the list of unique claim values
  • Some data access classes to read and write data from Azure tables, and to write data to the queue
  • An Azure worker role that is going to read data out of the queue and populate the unique claim values table
  • A WCF application that will be the endpoint through which the SharePoint farm communicates to get the list of claim types, search for claims, resolve a claim, and add data to the queue

Now we’ll look at each one in a little more detail.

Claim Types Table

The claim types table is where we're going to store all the claim types that our custom claims provider can use.  In this scenario we're only going to use one claim type, which is the identity claim – that will be email address in this case.  You could use other claims, but to simplify this scenario we're just going to use the one.  In Azure table storage you add instances of classes to a table, so we need to create a class to describe the claim types.  Again, note that you can add instances of different class types to the same table in Azure, but to keep things straightforward we're not going to do that here.  The class this table is going to use looks like this:

namespace AzureClaimsData
{
    public class ClaimType : TableServiceEntity
    {
        public string ClaimTypeName { get; set; }
        public string FriendlyName { get; set; }

        public ClaimType() { }

        public ClaimType(string ClaimTypeName, string FriendlyName)
        {
            this.PartitionKey = System.Web.HttpUtility.UrlEncode(ClaimTypeName);
            this.RowKey = FriendlyName;

            this.ClaimTypeName = ClaimTypeName;
            this.FriendlyName = FriendlyName;
        }
    }
}

 

I'm not going to cover all the basics of working with Azure table storage because there are lots of resources out there that have already done that.  So if you want more details on what a PartitionKey or RowKey is and how you use them, your friendly local Bing search engine can help you out.  The one thing that is worth pointing out here is that I am Url encoding the value I'm storing for the PartitionKey.  Why is that?  Well in this case, my PartitionKey is the claim type, which can take a number of formats:  urn:foo:blah, http://www.foo.com/blah, etc.  In the case of a claim type that includes forward slashes, Azure cannot store the PartitionKey with those values.  So instead we encode them into a friendly format that Azure likes.  As I stated above, in our case we're using the email claim, so the claim type for it is http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress.
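Just to make that concrete, here's a small illustration (not part of the attached code) of what the encoded PartitionKey ends up looking like for the email claim type:

//illustration only: encoding the email claim type so it is safe to use as a PartitionKey
string claimType = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress";
string partitionKey = System.Web.HttpUtility.UrlEncode(claimType);

//partitionKey now contains no forward slashes, e.g.
//"http%3a%2f%2fschemas.xmlsoap.org%2fws%2f2005%2f05%2fidentity%2fclaims%2femailaddress"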

Unique Claim Values Table

The Unique Claim Values table is where all the unique claim values we collect are stored.  In our case we are only storing one claim type – the identity claim – so by definition all claim values are going to be unique.  However, I took this approach for extensibility reasons.  For example, suppose down the road you wanted to start using Role claims with this solution.  It wouldn't make sense to store the Role claim "Employee" or "Customer" or whatever a thousand different times; the people picker just needs to know the value exists so it can make it available in the picker.  After that, whoever has it, has it – we just need to let it be used when granting rights in a site.  So, based on that, here's what the class looks like that will store the unique claim values:

namespace AzureClaimsData
{
    public class UniqueClaimValue : TableServiceEntity
    {
        public string ClaimType { get; set; }
        public string ClaimValue { get; set; }
        public string DisplayName { get; set; }

        public UniqueClaimValue() { }

        public UniqueClaimValue(string ClaimType, string ClaimValue, string DisplayName)
        {
            this.PartitionKey = System.Web.HttpUtility.UrlEncode(ClaimType);
            this.RowKey = ClaimValue;

            this.ClaimType = ClaimType;
            this.ClaimValue = ClaimValue;
            this.DisplayName = DisplayName;
        }
    }
}

 

There are a couple of things worth pointing out here.  First, like the previous class, the PartitionKey uses a UrlEncoded value because it will be the claim type, which will have the forward slashes in it.  Second, as I frequently see when using Azure table storage, the data is denormalized because there isn’t a JOIN concept like there is in SQL.  Technically you can do a JOIN in LINQ, but so many things that are in LINQ have been disallowed when working with Azure data (or perform so badly) that I find it easier to just denormalize. If you folks have other thoughts on this throw them in the comments – I’d be curious to hear what you think.  So in our case the display name will be “Email”, because that’s the claim type we’re storing in this class.

The Claims Queue

The claims queue is pretty straightforward – we're going to store requests for "new users" in that queue, and then an Azure worker process will read them off the queue and move the data into the unique claim values table.  The primary reason for doing this is that working with Azure table storage can sometimes be fairly latent, while sticking an item in a queue is quick.  Taking this approach means we can minimize the impact on our SharePoint web site.

Data Access Classes

One of the rather mundane aspects of working with Azure table storage and queues is that you always have to write your own data access classes.  For table storage, you have to write a data context class and a data source class.  I'm not going to spend a lot of time on that because you can read reams about it on the web, plus I'm also attaching my source code for the Azure project to this posting so you can look at it all you want.

There is one important thing I would point out here though, which is just a personal style choice.  I like to break all my Azure data access code out into a separate project.  That way I can compile it into its own assembly, and I can use it even from non-Azure projects.  For example, in the sample code I'm uploading you will find a Windows Forms application that I used to test the different parts of the Azure back end.  It knows nothing about Azure, other than having a reference to some Azure assemblies and to my data access assembly.  I can use it in that project and just as easily in my WCF project that I use to front-end the data access for SharePoint.

Here are some of the particulars about the data access classes though:

  • I have a separate "container" class for the data I'm going to return – the claim types and the unique claim values.  What I mean by a container class is that I have a simple class with a public property of type List<>.  I return this class when data is requested, rather than just a List<> of results.  The reason I do that is because when I return a List<> from Azure, the client only gets the last item in the list (when you do the same thing from a locally hosted WCF it works just fine).  So to work around this issue I return claim types in a class that looks like this:

namespace AzureClaimsData
{
    public class ClaimTypeCollection
    {
        public List<ClaimType> ClaimTypes { get; set; }

        public ClaimTypeCollection()
        {
            ClaimTypes = new List<ClaimType>();
        }
    }
}

 

And the unique claim values return class looks like this:

namespace AzureClaimsData
{
    public class UniqueClaimValueCollection
    {
        public List<UniqueClaimValue> UniqueClaimValues { get; set; }

        public UniqueClaimValueCollection()
        {
            UniqueClaimValues = new List<UniqueClaimValue>();
        }
    }
}

 

 

  • The data context classes are pretty straightforward – nothing really brilliant here (as my friend Vesa would say); it looks like this:

 

namespace AzureClaimsData
{
    public class ClaimTypeDataContext : TableServiceContext
    {
        public static string CLAIM_TYPES_TABLE = "ClaimTypes";

        public ClaimTypeDataContext(string baseAddress, StorageCredentials credentials)
            : base(baseAddress, credentials)
        { }

        public IQueryable<ClaimType> ClaimTypes
        {
            get
            {
                //this is where you configure the name of the table in Azure Table Storage
                //that you are going to be working with
                return this.CreateQuery<ClaimType>(CLAIM_TYPES_TABLE);
            }
        }
    }
}

 

  • In the data source classes I do take a slightly different approach to making the connection to Azure.  Most of the examples I see on the web want to read the credentials out with some reg settings class (that's not the exact name, I just don't remember what it is).  The problem with that approach here is that I have no Azure-specific context because I want my data class to work outside of Azure.  So instead I just create a Setting in my project properties and in that I include the account name and key that is needed to connect to my Azure account.  So both of my data source classes have code that looks like this to create that connection to Azure storage:

 

        private static CloudStorageAccount storageAccount;
        private ClaimTypeDataContext context;

        //static constructor so it only fires once
        static ClaimTypesDataSource()
        {
            try
            {
                //get storage account connection info
                string storeCon = Properties.Settings.Default.StorageAccount;

                //extract account info
                string[] conProps = storeCon.Split(";".ToCharArray());

                string accountName = conProps[1].Substring(conProps[1].IndexOf("=") + 1);
                string accountKey = conProps[2].Substring(conProps[2].IndexOf("=") + 1);

                storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            }
            catch (Exception ex)
            {
                Trace.WriteLine("Error initializing ClaimTypesDataSource class: " + ex.Message);
                throw;
            }
        }

        //new constructor
        public ClaimTypesDataSource()
        {
            try
            {
                this.context = new ClaimTypeDataContext(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials);
                this.context.RetryPolicy = RetryPolicies.Retry(3, TimeSpan.FromSeconds(3));
            }
            catch (Exception ex)
            {
                Trace.WriteLine("Error constructing ClaimTypesDataSource class: " + ex.Message);
                throw;
            }
        }

 

  • The actual implementation of the data source classes includes a method to add a new item for both a claim type as well as a unique claim value.  It's very simple code that looks like this:

 

        //add a new item
        public bool AddClaimType(ClaimType newItem)
        {
            bool ret = true;

            try
            {
                this.context.AddObject(ClaimTypeDataContext.CLAIM_TYPES_TABLE, newItem);
                this.context.SaveChanges();
            }
            catch (Exception ex)
            {
                Trace.WriteLine("Error adding new claim type: " + ex.Message);
                ret = false;
            }

            return ret;
        }

 

One important difference to note in the Add method for the unique claim values data source is that it doesn't throw an error or return false when there is an exception saving changes.  That's because I fully expect that people will mistakenly (or otherwise) try to sign up multiple times.  Once we have a record of their email claim though, any subsequent attempt to add it will throw an exception.  Since Azure doesn't provide us the luxury of strongly typed exceptions, and since I don't want the trace log filling up with pointless goo, I don't worry about it when that situation occurs.
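To make that concrete, the Add method for unique claim values probably looks something like the following sketch; the context field and table name constant here are assumptions, not necessarily the exact names in the attached source.

        //hedged sketch of the unique claim value Add method described above
        public bool AddClaimValue(UniqueClaimValue newItem)
        {
            bool ret = true;

            try
            {
                this.context.AddObject(UniqueClaimValuesDataContext.CLAIM_VALUES_TABLE, newItem);
                this.context.SaveChanges();
            }
            catch (Exception ex)
            {
                //a duplicate sign-up lands here (same PartitionKey/RowKey); that's expected,
                //so don't flood the trace log or report a failure back to the caller
                Debug.WriteLine("Claim value was not added (it may already exist): " + ex.Message);
            }

            return ret;
        }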

  • Searching for claims is a little more interesting, only to the extent that it exposes again some things that you can do in LINQ, but not in LINQ with Azure.  I'll add the code here and then explain some of the choices I made:

 

        public UniqueClaimValueCollection SearchClaimValues(string ClaimType, string Criteria, int MaxResults)
        {
            UniqueClaimValueCollection results = new UniqueClaimValueCollection();
            UniqueClaimValueCollection returnResults = new UniqueClaimValueCollection();

            //cache time to live, in minutes
            const int CACHE_TTL = 10;

            try
            {
                //look for the current set of claim values in cache
                if (HttpRuntime.Cache[ClaimType] != null)
                    results = (UniqueClaimValueCollection)HttpRuntime.Cache[ClaimType];
                else
                {
                    //not in cache so query Azure

                    //Azure doesn't support StartsWith, so pull all the data for the claim type
                    var values = from UniqueClaimValue cv in this.context.UniqueClaimValues
                                 where cv.PartitionKey == System.Web.HttpUtility.UrlEncode(ClaimType)
                                 select cv;

                    //you have to assign it first to actually execute the query and return the results
                    results.UniqueClaimValues = values.ToList();

                    //store it in cache for CACHE_TTL minutes
                    HttpRuntime.Cache.Add(ClaimType, results, null,
                        DateTime.Now.AddMinutes(CACHE_TTL), TimeSpan.Zero,
                        System.Web.Caching.CacheItemPriority.Normal,
                        null);
                }

                //now query based on criteria, for the max results
                returnResults.UniqueClaimValues = (from UniqueClaimValue cv in results.UniqueClaimValues
                                                   where cv.ClaimValue.StartsWith(Criteria)
                                                   select cv).Take(MaxResults).ToList();
            }
            catch (Exception ex)
            {
                Trace.WriteLine("Error searching claim values: " + ex.Message);
            }

            return returnResults;
        }

 

The first thing to note is that you cannot use StartsWith against Azure table data.  That means you need to retrieve all the data locally and then apply your StartsWith expression.  Since retrieving all that data can be an expensive operation (it's effectively a table scan to retrieve all rows), I do that once and then cache the data.  That way I only have to make a "real" call out to Azure every 10 minutes.  The downside is that if users are added during that time then we won't be able to see them in the people picker until the cache expires and we retrieve all the data again.  Make sure you remember that when you are looking at the results.

Once I actually have my data set, I can do the StartsWith, and I can also limit the amount of records I return.  By default SharePoint won’t display more than 200 records in the people picker so that will be the maximum amount I plan to ask for when this method is called.  But I’m including it as a parameter here so you can do whatever you want.

The Queue Access Class

Honestly there’s nothing super interesting here.  Just some basic methods to add, read and delete messages from the queue.
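For completeness, here's a minimal sketch of what such a queue access class might look like using the same StorageClient SDK as the rest of the project; the class and queue names are illustrative assumptions, not necessarily what's in the attached source.

using System.Collections.Generic;
using Microsoft.WindowsAzure;                 //CloudStorageAccount
using Microsoft.WindowsAzure.StorageClient;   //CloudQueueClient, CloudQueue, CloudQueueMessage

//hedged sketch of a simple queue access class for the claims queue
public class ClaimsQueueAccess
{
    private const string QUEUE_NAME = "claimsqueue";   //assumed queue name
    private CloudQueue queue;

    public ClaimsQueueAccess(CloudStorageAccount account)
    {
        CloudQueueClient client = account.CreateCloudQueueClient();
        queue = client.GetQueueReference(QUEUE_NAME);
        queue.CreateIfNotExist();
    }

    //add a semi-colon delimited claim entry to the queue
    public void AddMessage(string content)
    {
        queue.AddMessage(new CloudQueueMessage(content));
    }

    //read up to maxMessages items off the queue
    public IEnumerable<CloudQueueMessage> GetMessages(int maxMessages)
    {
        return queue.GetMessages(maxMessages);
    }

    //remove a message once it has been processed
    public void DeleteMessage(CloudQueueMessage msg)
    {
        queue.DeleteMessage(msg);
    }
}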

Azure Worker Role

The worker role is also pretty nondescript.  It wakes up every 10 seconds and looks to see if there are any new messages in the queue.  It does this by calling the queue access class.  If it finds any items in there, it splits the content out (which is semi-colon delimited) into its constituent parts, creates a new instance of the UniqueClaimValue class, and then tries adding that instance to the unique claim values table.  Once it does that it deletes the message from the queue and moves on to the next item, until it either reaches the maximum number of messages that can be read at one time (32) or there are no more messages remaining.
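A rough sketch of that loop, sitting in the worker role's RoleEntryPoint class, might look like this; it is an illustration of the flow described above, not the author's exact code, and the queueAccess and claimValuesDataSource fields are assumed helper instances.

//hedged sketch of the worker role processing loop
public override void Run()
{
    while (true)
    {
        //read up to the queue maximum of 32 messages per pass
        foreach (CloudQueueMessage msg in queueAccess.GetMessages(32))
        {
            //queue content is semi-colon delimited: claim type; claim value; display name
            string[] parts = msg.AsString.Split(';');

            if (parts.Length == 3)
            {
                UniqueClaimValue cv = new UniqueClaimValue(parts[0], parts[1], parts[2]);
                claimValuesDataSource.AddClaimValue(cv);
            }

            //remove the message whether or not the add succeeded, so it isn't reprocessed forever
            queueAccess.DeleteMessage(msg);
        }

        //wake up every 10 seconds and check for new messages
        System.Threading.Thread.Sleep(10000);
    }
}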

WCF Application

As described earlier, the WCF application is what the SharePoint code talks to in order to add items to the queue, get the list of claim types, and search for or resolve a claim value.  Like a good trusted application, it has a trust established between it and the SharePoint farm that is calling it.  This prevents any kind of token spoofing when asking for the data.  At this point there isn’t any finer grained security implemented in the WCF itself.  For completeness, the WCF was tested first in a local web server, and then moved up to Azure where it was tested again to confirm that everything works.
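A hedged sketch of what the WCF service contract might look like, inferred from the method names used throughout this series (GetClaimTypes, SearchClaims, ResolveClaim, AddClaimsToQueue), is shown below; the exact signatures are assumptions rather than the author's published interface.

using System.ServiceModel;

//hedged sketch of the IAzureClaims service contract
[ServiceContract]
public interface IAzureClaims
{
    [OperationContract]
    ClaimTypeCollection GetClaimTypes();

    [OperationContract]
    UniqueClaimValueCollection SearchClaims(string claimType, string searchPattern, int maxResults);

    [OperationContract]
    UniqueClaimValue ResolveClaim(string claimType, string claimValue);

    [OperationContract]
    bool AddClaimsToQueue(string claimData);
}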

So that’s the basics of the Azure components of this solution.  Hopefully this background explains what all the moving parts are and how they’re used.  In the next part I’ll discuss the SharePoint custom claims provider and how we hook all of these pieces together for our “turnkey” extranet solution.  The files attached to this posting contain all of the source code for the data access class, the test project, the Azure project, the worker role and WCF projects.  It also contains a copy of this posting in a Word document, so you can actually make out my intent for this content before the rendering on this site butchered it.

You can download the attachment here: