The SPMigrateUsers Tool for Changing Account Identities in SharePoint 2010

There are times in SharePoint when you want or need to change an account identity.  The best example is with SAML claims.  In virtually all of my examples I use email address as the identity claim for users.  I do this because a) most people have an email address and b) an email address is something that most users understand.  However, I get pushback every now and then about using email address because people can change it.  When your email address changes, obviously all the permissions granted to the old address break.  To be honest, I don’t consider this to be a frequent scenario, or I wouldn’t use email address to begin with.  However I will grant you that it does occasionally happen, so what do you do when all of your SharePoint content is secured with email addresses?

The key to making this work is the IMigrateUserCallback interface, which I covered in a previous blog post:  http://blogs.technet.com/b/speschka/archive/2011/01/27/migrating-user-accounts-from-windows-claims-to-saml-claims.aspx.  In that post I described how to migrate identities using this interface and provided an example of converting a Windows claims identity to a SAML claims identity.  What I decided to do is write a little Windows application that lets you enter the credentials that you want to change and makes the modification for you.  Its goal in life is to be used as a simple one-off tool for making these changes; however, you could easily take the source code (which I’m including with this post) and modify it to do something more creative, such as reading in a list of users from a file or database and doing the comparisons yourself.
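If you do decide to modify the source, the heart of any of these migrations is an IMigrateUserCallback implementation that SharePoint invokes for every account it finds.  Here’s a minimal sketch, assuming the ConvertUserAccount signature described in the post linked above (return the new account name to change it, or the original value to leave it alone).  The class name, provider name and encoded claim values are all placeholders – use the tool (or SPClaimProviderManager) to generate the exact encodings for your farm:

using System;
using Microsoft.SharePoint.Administration;

//hypothetical example: migrate one SAML email identity to another
public class EmailMigrator : IMigrateUserCallback
{
    //placeholder encoded claim values - generate the real ones for your farm
    private const string OldLogin = "i:05.t|saml provider|steve@contoso.com";
    private const string NewLogin = "i:05.t|saml provider|stevep@contoso.com";

    public string ConvertUserAccount(string userAccount, string userName, bool isGroup)
    {
        //return the new account name for the user we want to change;
        //returning the original value leaves that account untouched
        if (!isGroup &&
            string.Equals(userAccount, OldLogin, StringComparison.OrdinalIgnoreCase))
        {
            return NewLogin;
        }

        return userAccount;
    }
}

SharePoint then pushes every account through the callback when you kick off the migration, which the earlier post does roughly like this:  SPWebService.ContentService.MigrateUsers(new EmailMigrator());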

What’s also nice about this tool though is that it can actually be used for multiple scenarios.  So not only can you use it to convert from one email address to another, you can also use it to convert from one group name to another.  This is another case where certain Ops folks tell me, hey, you should use SIDs for the group names (i.e. Role claims in SAML) because if you rename the group the SID remains the same.  While that’s also true, again, a) I don’t see that happen a ton, b) how many of you want to start typing in group SID names and adding them to SharePoint groups (please seek therapy now if you answer yes) and c) SIDs mean nothing outside of your local Active Directory – as soon as you move into a cloud-based service like Azure, Google, Yahoo, Facebook, etc. your SID will be as useless as [fill in your own “useless as a …” joke here].

The other thing that’s nice about this tool is that it doesn’t restrict you to just making changes within a single type of provider.  Want to change a Windows group to a SAML role claim?  You can do that.  Want to change a SAML identity claim to an FBA membership user?  You can do that.  Want to change an FBA role to an AD group?  You can do that.  You get the idea – I’ve tried just about every combination of different “users” and “groups” between different providers and so far all have converted back and forth successfully.

The tool itself is hopefully pretty straightforward to use; here’s a picture of it:

When the application first starts up it loads a list of all the web applications.  For each web application it populates the two combo boxes below it with a list of all the providers being used on that web application.  If you have multiple SAML providers or multiple FBA providers, each one will be listed in the drop down.  You simply choose which provider you are migrating from and which you are migrating to.  In the Claim Value section you type in the value that you want to migrate, and what you want to migrate it to.  Just type the value in the Plain Text Value edit fields and click either the identity claim button (the one on the left) or the group claim button (the one on the right).  The descriptive text in the application gives a full explanation of this, and the text on the buttons changes so it makes more sense depending on which identity provider you are using.

For example, suppose you are only using SAML authentication and want to migrate the email address “steve@contoso.com” to “stevep@contoso.com”.  You would pick your web application, and the SAML authentication provider would be selected by default in each drop down.  Then in the Before Values section you would type “steve@contoso.com” in the Plain Text Value edit and click the ID Claim button; that puts the correct encoded claim value in the Encoded Value edit.  Next you would type “stevep@contoso.com” in the After Values Plain Text Value edit.  Click the ID Claim button again and it puts the correct value in the Encoded Value edit box (NOTE:  In the picture above the button in the After Values section says “User” instead of “ID Claim” because in that example it is migrating from SAML claims to Windows claims).  Once all of your values have been provided just click the Migrate button to complete the process; a message box will appear informing you when the migration is complete.
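Incidentally, the encoded values the tool generates are just SharePoint’s standard claim encoding.  If you’re curious, here’s a rough sketch of the equivalent object model calls; the provider name “SAML Provider” is a placeholder for whatever your SPTrustedIdentityTokenIssuer is actually called:

using Microsoft.SharePoint.Administration.Claims;

//build the claim for the "before" value and ask SharePoint to encode it
SPClaim before = new SPClaim(
    "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
    "steve@contoso.com",
    "http://www.w3.org/2001/XMLSchema#string",
    SPOriginalIssuers.Format(SPOriginalIssuerType.TrustedProvider, "SAML Provider"));

//yields an encoded value along the lines of i:05.t|saml provider|steve@contoso.com
string encoded = SPClaimProviderManager.Local.EncodeClaim(before);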

In the process of testing this across several different web applications and several different authentication types I did run across a couple of issues that I want to raise here in case you see them as well.  In one case I got an Access Denied error message when trying to migrate the users for one particular web application.  I was never able to track down why this was occurring, so the best I can say is that something is wonky in that web app; I’m not sure what, because it worked on the four or five other web apps I tried in my farm.

The second thing is that in one case the migration said it completed successfully but I could not log in as the migrated user.  In digging into it further I found that the account I was migrating from was never being pushed through the IMigrateUserCallback function (i.e. it’s a SharePoint problem, not a coding problem with this application).  If that happens to you I recommend taking the source code and stepping through it in the Visual Studio debugger to make sure the account you’re migrating from is actually getting passed to the callback.  Unfortunately I had one lonely FBA membership user that got stuck alone in the wilderness.

Finally, one last thing to note – don’t freak out if you migrate an account from one value to another, then log in as the new user and see the old account name, etc. in the welcome control in the top right corner of the page.  The migrate function just changes the account name.  If any of the other user information changes, then as long as you update the user profiles the correct information should get pushed down to all the site collections on the next sync with the profile system.

That’s it – have at it and I hope it helps.  As I said above, the complete source code is included so feel free to play with it and modify it as needed for your scenario.

Here’s a link to the source code:

 

Creating an Azure Persistent VM for an Isolated SharePoint Farm

The first step in being able to create a persistent VM in Azure is to get your account upgraded to take advantage of these features, which are all in preview.  Once the features are enabled you can follow this process to get the various components configured to support running an isolated SharePoint farm. 

In this case, by “isolated farm” I mean one in which there are two virtual images.  In my scenario one image is running Active Directory, DNS and ADFS.  The other image is running SharePoint 2010 and SQL 2012.  The second image is joined to the forest running on the first image.  The IP address used by the domain controller (SRDC) is 192.168.30.100; the IP addresses for the SharePoint server (SRSP) are 192.168.30.150 and 192.168.30.151.

IMPORTANT:  Make sure you enable Remote Desktop on your images before uploading them to Azure (it is disabled by default).  Without it, you will find it very difficult to manage your farm, specifically things like Active Directory and ADFS.

  1. Create a new Virtual Network.  Since I’m talking about an isolated farm here, that implies that I do not want to connect or integrate with my corporate identity or name services (like AD and DNS).  Instead I’m going to configure a new virtual network for my small farm.  Here are the sub steps for creating a virtual network for this scenario:
    1. Click New…Network…Custom Create.
    2. Type in a Name for the network, then from the Affinity Group drop down select Create a new affinity group.  Select either West US or East US as the region and type a name for the affinity group.  In this example I use the name SamlAffinity for the affinity group and SamlNetwork for the network name.  Click the Next button.
    3. Enter the address space in CIDR format for the IP addresses you want to use, then click the Next button.  Now your first question may be what is CIDR format?  That’s beyond the scope of this post, but suffice it to say that you can figure out CIDR format by going to a web site that will calculate it for you, like http://ip2cidr.com/.  In this case I wanted to use the entire 192.168.30.0 subnet, so I entered 192.168.30.0/24.  Note that you can optionally also create different subnets for use within your virtual network, but it is not needed for this scenario so I did not do it.  Click the Next button.
    4. For this particular scenario you can skip selecting a DNS server because we’ll configure it on the servers themselves once they’re up and running.  Click the Finish button to complete this task and create the virtual network.
  2. Create a new storage account to store your virtual machines.  This step is fairly straightforward – in the new management portal click on New…Storage…Quick Create.  Give it a unique name and click on the Region/Affinity Group drop down.  Select the affinity group you created in the previous step, then click the Create Storage Account button.  For this example, I’ve called my new storage account samlvms.
  3. Upload the images that you will be using.  In this case I have two images – SRDC and SRSP – that I need to push up to the storage account I created earlier – samlvms.  Uploading the images can be done with the csupload tool that is part of the Windows Azure 1.7 SDK, which you can get from https://www.windowsazure.com/en-us/develop/other/.  The documentation for using csupload can be found at http://msdn.microsoft.com/en-us/library/windowsazure/gg466228.aspx.  Detailed instructions on creating and uploading a VHD image using this new command can also be found at http://www.windowsazure.com/en-us/manage/windows/common-tasks/upload-a-vhd/.  A few other notes:
    1. UPDATE:  I added instructions for how to create the certificate used with the csupload tool.  You can find it at http://blogs.technet.com/b/speschka/archive/2012/09/15/creating-and-using-a-certificate-for-the-csupload-tool-with-azure-iaas-services.aspx.
    2. In this case I’m using images that are already ready to go – they are not sysprepped; they have a domain controller or SharePoint and SQL installed and just need to be started up.  You can use an image like that as the basis for a new virtual machine, but you need to use the Add-Disk command instead of the Add-PersistedVMImage command.  Use the latter if you have a sysprepped image upon which you want to base new images.
    3. Figuring out your management certificate thumbprint when you create the connection can be somewhat mystical.  The detailed instructions above include information on how to get this.  In addition, if you have already been publishing applications with Visual Studio then you can use the same certificate it does.  Go into Visual Studio, select Publish, then in the account drop down click on the Manage… link.  From there you can get the certificate that’s used.  If you are trying to use csupload on a different machine then you’ll also need to export the certificate (including the private key) and move it to wherever you are using csupload.  Once you copy it over you need to add it to your personal certificate store; otherwise csupload will complain that it is unable to find a matching thumbprint or certificate.
    4. Here’s an example of the commands I used:
      1. csupload Set-Connection "SubscriptionID=mySubscriptionID;CertificateThumbprint=myThumbprintDetails;ServiceManagementEndpoint=https://management.core.windows.net"
      2. csupload Add-Disk -Destination "http://samlvms.blob.core.windows.net/srsp.vhd" -Label "SAML SharePoint" -LiteralPath "C:\srsp.vhd" -OS Windows -Overwrite
      3. csupload Add-Disk -Destination "http://samlvms.blob.core.windows.net/srdc.vhd" -Label "SAML DC" -LiteralPath "C:\srdc.vhd" -OS Windows -Overwrite
  4. Once the images are uploaded, you can create new virtual machines based on them.
  5. Click on the New…Virtual Machine…From Gallery.
  6. Click on My Disks on the left, and then select the image you want to create from your image library on the right, then click the Next button.
  7. Type a machine name and select a machine size, then click the Next button.
    1. Select standalone virtual machine (unless you are connecting to an existing one), enter an available DNS name, select your region and subscription, then click the Next button.
    2. Either use no availability set, select an existing one, or create a new one; when finished, click the Finish button to complete the wizard.

Your images may go through multiple states, including “Stopped”, before they finally enter the running state.  Once a VM starts running, you need to give it a couple of minutes or so to boot up, and then you can select it in the Azure portal and click the Connect button on the bottom of the page.  That creates and downloads an RDP connection that you can use to connect to your image and work with it.

It’s also important to note that your network settings are not preserved.  What I mean by that is my images were using static IP addresses, but after restarting the images in Azure they were using DHCP and getting local addresses, so the images require some reconfiguration to work.

Networking Changes

The networking configuration is changed for the images once they are started in Azure.  Azure persistent VMs use DHCP, but the leases last indefinitely so it acts very similar to fixed IP addresses.  One of the big limits though is that you can only have one IP address per machine, so that means the second lab for the SAML Ramp will not be feasible.

To begin with you need to correct DNS on the domain controller, so RDP into it first (SRDC in my scenario).  Restart the Net Logon service, either through the Services applet or in a command prompt by typing net stop netlogon followed by net start netlogon.  This registers your new DHCP address as one of the host addresses for the domain.  Next you need to delete the old host address for the domain, which for me was 192.168.30.100.  Open up DNS Manager and then double-click on the Forward Lookup Zone for your domain.  Find the host (A) record with the old address, 192.168.30.100 in my case (it will also say “(same as parent folder)” in the Name column), and delete it.

Next you need to change the DNS server for your network adapter to point to the DHCP address that was assigned to the image.  Open a command prompt, type ipconfig and press Enter.  The IPv4 Address that is shown is what needs to be used as the DNS server address.  To change it, right-click on the network icon in the taskbar and select Open Network and Sharing Center.  Click on the Change adapter settings link.  Right-click on the adapter and choose Properties.

When the Properties dialog opens, uncheck the box next to Internet Protocol Version 6.  Click on Internet Protocol Version 4 but DO NOT uncheck the box, then click on the Properties button.  In the DNS section click on the radio button that says Use the following DNS server addresses and for the Preferred DNS server enter the DHCP address for the SRDC server that you retrieved using ipconfig.  Click the OK button to close the Internet Protocol Version 4 Properties dialog, then click the OK button again to close the network adapter Properties dialog.  You can now close the Network Connections window.

Now if you open a command prompt and ping your Active Directory forest name, it should resolve the name and respond; on my image it responded with the address 192.168.30.4.

On the SharePoint server you just need to change the Primary DNS server IP address to the IP address of the domain controller, which in this example was 192.168.30.4.  After doing so you should be able to ping your domain controller name and Active Directory forest name.  Once this is working you need to get the new IP address that’s been assigned to the SharePoint server and update DNS on the domain controller if you used any static host names for your SharePoint sites.  One limitation that could NOT be addressed in this scenario is the fact that my SharePoint server used multiple IP addresses; persistent images in Azure currently only support a single IP address.

403 Forbidden Errors When Failing Over a SQL 2012 Availability Group with SharePoint 2010

I just had a heck of a time getting failover of a SQL 2012 Availability Group to work correctly with SharePoint 2010, so I thought I would share the outcome in case it helps anyone else.  In short, I had my SQL 2012 Availability Group all set up and it appeared to be working correctly.  I created a new content database on the primary node in the group, then backed it up and added it to the list of databases managed by the Availability Group (AG).  So far, so good.  I could hit the SharePoint site and it rendered just fine.  However, after I failed over the AG to a new node, my SharePoint site would not come up any more; I would get a 403 Forbidden error instead of the page content.  What was really vexing though is that I could open up SQL Server Management Studio and connect to my AG Listener just fine – I could query and get results for any of the tables in my content database that was now hosted on a different server.

After spending mucho time trying to figure this out, my friend and resident SQL nut job (in a good way!) Bryan P. pointed out that while the database user for my app pool account had moved over with my database, the SQL login had not.  What I mean by that is if I look at the content database in SQL Server Management Studio under Security…Users, I see the SQL account for the app pool.  However, if I look at the top-level Security node for the server and then Logins, there is no corresponding login for the app pool account.  So, I just created the login for the app pool account and then granted it rights to the content databases I was managing with the AG.  After making that change, everything worked fine on the SharePoint side – I can now fail over to any node in the cluster and my SharePoint site continues to work just fine.

This is a good fact to be aware of, especially as you create app pools with new accounts and want your content databases to be protected with an AG – make sure you add those new accounts to the logins on each SQL 2012 server that is participating in your AGs.

Getting Welcome Emails to Work with a Custom Claims Provider in SharePoint 2010

A good “friend of the blog”, Israel V., was good enough to point out to me recently that pretty much all of the code samples we have for custom claims providers contain an irritating little flaw – if you follow these samples, the welcome emails that should be sent out when you add a new person to a site never go out.  I, of course, am as guilty as anyone of this, so I took a closer look at the situation, along with a quick review of some code that Israel had developed to work around this problem.

In a nutshell, you will see this problem happen if you are adding a user to a site collection for the very first time and there is no email address associated with that person yet – because a profile sync hasn’t occurred or whatever.  So, as you can imagine, the key here (and I’m boiling this down to the simplest-case scenario) is to grab an email address for the user at the time they are added and then plug it into the appropriate property in your PickerEntity class.  Now let’s talk about some of the particulars.

WHERE you get the email address from is going to totally depend on your claims provider.  If you’re pulling your data from Active Directory then you can query AD to get it.  If you’re using SAML and email address is the identity claim, then you can simply reuse that.  Basically, “it depends” so you’ll need to make the call here.

WHEN you want to use it is when the FillResolve method is called.  As you know, this method can be called either after someone adds an entry via the People Picker or when they type a value in the type-in control and click the resolve button.  As I’ve shown in many of my code samples, during that process you will create an instance of the PickerEntity class so that you can add it to the List<PickerEntity> that is passed into the method.

HOW you add it is just to set the property on the PickerEntity instance like so:

//needed to make welcome emails work:
pe.EntityData[PeopleEditorEntityDataKeys.Email] = "steve@stevepeschka.com";

In this example “pe” is just the instance of the PickerEntity class that I created and returned from my FillResolve method.
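To put that line in context, here’s a minimal sketch of a FillResolve override with the email property wired up.  The claim type shown and the GetEmailForUser helper are placeholders for whatever your provider actually uses:

using System;
using System.Collections.Generic;
using Microsoft.SharePoint.Administration.Claims;
using Microsoft.SharePoint.WebControls;

//inside your SPClaimProvider subclass:
protected override void FillResolve(Uri context, string[] entityTypes,
    string resolveInput, List<PickerEntity> resolved)
{
    PickerEntity pe = CreatePickerEntity();

    //placeholder claim type; use whatever your identity claim really is
    pe.Claim = CreateClaim(
        "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
        resolveInput,
        "http://www.w3.org/2001/XMLSchema#string");
    pe.DisplayText = resolveInput;
    pe.Description = resolveInput;
    pe.EntityType = SPClaimEntityTypes.User;
    pe.IsResolved = true;

    //the fix: populate the email property so welcome emails get sent
    pe.EntityData[PeopleEditorEntityDataKeys.Email] = GetEmailForUser(resolveInput);

    resolved.Add(pe);
}

//hypothetical lookup; if email is your identity claim you can just
//hand back the input value
private string GetEmailForUser(string input)
{
    return input;
}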

That’s all there really is to it.  The biggest trick may very well be just getting the email address value.  Once you have it, though, it’s pretty easy to add it to the PickerEntity to ensure your welcome emails will work.  I tested this out and verified that a) the welcome emails were not getting sent out with my original custom claims provider and b) they DID start going out after incorporating this change.  Thanks again to Israel V. for the heads up and code sample on this issue.

The Issuer of a Token is not a Trusted Issuer Craziness with SAML Claims in SharePoint 2010

Let’s be honest – every now and then SharePoint lies to us.

Case in point – I was working with my friend Nidhish today, getting SAML working on a SharePoint site.  We started out by getting a strange HTTP 500 error when we hit the site.  That in and of itself is unusual in my experience.  So to try and understand the issue better we cracked open the ULS logs and found this error:  “The issuer of the token is not a trusted issuer.”  Now having set up SAML in SharePoint approximately 3,492,234 times, I was fairly confident that we had configured the certificates correctly.  Nonetheless, we then spent a fair amount of time looking at the certificates we had registered with the SPTrustedRootAuthority, comparing certificate thumbprints, double-checking the certificates in ADFS, recycling services and boxes, etc.  It just made absolutely no sense, because every aspect of the certificate configuration appeared to be correct.

Finally I decided to review all of the relying party settings in ADFS again, and that’s where I found the “real” problem.  Turns out the WS-Fed endpoint for the relying party was mistakenly set to “https://foo”, instead of “https://foo/_trust”.  All the certificates were in fact correct, but the request was getting redirected to the root instead of the _trust directory.  Once the WS-Fed endpoint was updated everything began working.  Just a little nugget that you may find helpful sometime.

Finally A USEFUL Way to Federate With Windows Live and SharePoint 2010 Using OAuth and SAML

Lots of folks have talked to me in the past about federating SharePoint with Windows Live.  On the surface it seems like a pretty good idea – Windows Live has millions of users, everyone logs in with their email address (which is something we use a lot as an identity claim), it’s a big scalable service, and we have various instructions out there for how to do it – either directly or via ACS (Access Control Service).  So why might I be so grumpy about using it with SharePoint?  Well, those of you that have tried it before know – when you federate with Windows Live you never get a user’s email address back as a claim.  All you get is a special Windows Live identifier called a PUID.  As far as I know, “PUID” should stand for “Practically GUID”, because that’s pretty much what it looks like and about how useful it is.

For example, if you DO federate with Windows Live, how do you add someone to a site?  You have to get their PUID, and then add the PUID to a SharePoint group or permission level.  Do you seriously know anyone that knows what their PUID is?  (If you are such a person, it’s time to find something else to do with your free time.)  Even if you did magically happen to know what your PUID is, how useful do you think that is if you’re trying to grant users rights to different site collections?  Do you really think anyone else could pick you out of a PUID lineup (or people picker, as the case may be)?  Of course not!  And thus my frustration with it grows.

I actually thought that we might have a shot here at a more utopian solution with ACS.  ACS is really great in terms of providing out of the box hooks to several identity providers like Windows Live, Google, Yahoo, and Facebook.  With Facebook they even sprinkle a little magic on it and actually use OAuth to authenticate and then return a set of SAML claims.  Very cool!  So why don’t they do that with Windows Live as well?  Windows Live supports OAuth now so it seems like there’s an opportunity for something valuable to finally happen.  Well despite wishing it were so, the ACS folks have not come to the rescue here.  And therein lies the point of this preamble – I finally decided to just write one myself, and that is the point of this posting.

So why do we care about OAuth?  Well, contrary to the PUID you get when federating directly with Windows Live, OAuth support in Windows Live allows you to get a LOT more information about the user, including – wait for it – their email address.  So the plan of attack here is basically this:

  1. Write a custom Identity Provider using the Windows Identity Foundation (WIF).
  2. When a person is redirected to our STS, if they haven’t authenticated yet we redirect them again to Windows Live.  You have to create “an application” with Windows Live in order to do this, but I’ll explain more about that later.
  3. Once they are authenticated they get redirected back to the custom STS.  When they come back, the query string includes a login code; that login code can be exchanged for an access token.
  4. The STS then makes another request to Windows Live with the login code and asks for an access token (steps 4 and 5 are sketched in code right after this list).
  5. When it gets the access token back, it makes a final request to Windows Live with the access token and asks for some basic information about the user (I’ll explain what we get back later).
  6. Once we have the user information back from Windows Live, we use our custom STS to create a set of SAML claims for the user and populate it with the user info.  Then we redirect back to whatever application asked us to authenticate to begin with to let it do what it wants with the SAML tokens.  In this particular case I tested my STS with both a standard ASP.NET application as well as a SharePoint 2010 web app.
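
To make steps 4 and 5 concrete, here’s a hedged sketch of the token exchange and profile request.  It assumes the Live Connect OAuth 2.0 endpoints that were current when this was written, and the parameter and field names should be treated as illustrative – check the Windows Live SDK docs for the authoritative shapes:

using System.IO;
using System.Net;
using System.Web;

public static class LiveOAuthHelper
{
    //assumed Live Connect endpoints; verify against the current SDK docs
    const string TokenUrl = "https://login.live.com/oauth20_token.srf";
    const string ProfileUrl = "https://apis.live.net/v5.0/me?access_token=";

    public static string GetUserInfoJson(string loginCode, string clientId,
        string clientSecret, string redirectUri)
    {
        //step 4: exchange the login code for an access token
        string body = string.Format(
            "client_id={0}&redirect_uri={1}&client_secret={2}&code={3}&grant_type=authorization_code",
            HttpUtility.UrlEncode(clientId), HttpUtility.UrlEncode(redirectUri),
            HttpUtility.UrlEncode(clientSecret), HttpUtility.UrlEncode(loginCode));

        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(TokenUrl);
        req.Method = "POST";
        req.ContentType = "application/x-www-form-urlencoded";
        using (StreamWriter sw = new StreamWriter(req.GetRequestStream()))
        {
            sw.Write(body);
        }

        string tokenJson;
        using (StreamReader sr = new StreamReader(req.GetResponse().GetResponseStream()))
        {
            tokenJson = sr.ReadToEnd();
        }

        //crude extraction of the access_token field; a real STS should
        //use a proper JSON parser here
        string marker = "\"access_token\":\"";
        int start = tokenJson.IndexOf(marker) + marker.Length;
        string accessToken = tokenJson.Substring(start, tokenJson.IndexOf('"', start) - start);

        //step 5: ask Windows Live for the basic user info with the access token
        using (WebClient wc = new WebClient())
        {
            return wc.DownloadString(ProfileUrl + HttpUtility.UrlEncode(accessToken));
        }
    }
}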

So…all the source code is attached to this posting, but there’s still some configuration to do, and you will have to recompile the application with the app ID and secret that you get from Windows Live.  Other than doing that copy and paste, though, there really isn’t any code you need to write to get going.  Now let’s walk through everything you need to use it.

Create a Token Signing Certificate

You will need to create a certificate that you will use to sign your SAML tokens.  There’s nothing special about this certificate, other than you need to make sure you have the private key for it.  In my case I have Certificate Services installed in my domain, so I just opened the IIS Manager and selected the option to create a Domain Certificate.  I followed the wizard and before you know it I had a new certificate complete with private key.  For this project, I created a certificate called livevbtoys.

As I’ll explain in the next section, when requests initially come into the STS the user is anonymous.  In order to use that certificate to sign SAML tokens, we need to grant the IIS process access to the private key for the certificate.  When an anonymous request comes in, the IIS process identity is Network Service.  To give it rights to the key you need to:

  1. Start the MMC
  2. Add the Certificates snap-in.  Select the Computer store for the local computer.
  3. Open up the Personal…Certificates store and find the certificate you created for signing SAML tokens.  If you created it as I explained above, the certificate will be in there by default.  If you created it some other way you may need to add it to that store.
  4. Right click on the certificate and choose the option to Manage Private Keys.
  5. In the list of users that have rights to the keys, add Network Service and give it Read rights to it.

Note that if you don’t do this correctly, when you try running the application you may get an error that says something like “keyset does not exist”.  That just means that the IIS process did not have sufficient rights to the private key, so it could not use it to sign the SAML token.

Install the Application and Required Assemblies

Installing the application in this sense really just means creating an ASP.NET application in IIS, copying the bits, and making sure the latest version of WIF is installed.  Once you get it configured and working on one server of course, you would want to add one or more additional servers to make sure you have a fault tolerant solution.  But I’ll just walk through the configuration needed on the single server.

I won’t go into how you create an ASP.NET application in IIS.  You can do that with Visual Studio, in the IIS Manager, etc.

NOTE:  If you use the code that’s provided here and just open the project in Visual Studio, it will complain about the host or site not existing. That’s because it’s using the name from my server.  The easiest way to fix this is just to manually edit the WindowsLiveOauthSts.sln file and change the https values in there to ones that actually exist in your environment.

Once it’s actually created there are a few things you want to make sure you do.

  1. Add PassiveSTS.aspx as the default document in the IIS Manager for the STS web site.
  2. Change the Authentication settings for the application in IIS so that all authentication types are disabled except for Anonymous Authentication.
  3. The STS needs to run over SSL, so you will need to acquire an appropriate certificate for that and make sure you update the bindings on the IIS virtual server where the custom STS application is used.
  4. Make sure you put the thumbprint of your token signing certificate in the thumbprint attribute of the add element in the trustedIssuers section of the web.config of your relying party (if you are NOT using SharePoint to test).  If you use the Add STS Reference wizard in Visual Studio it will do this for you.

That should be all of the configuration needed in IIS.

Update and Build the Custom STS Project

The attached zip file includes a Visual Studio 2010 project called WindowsLiveOauthSts.  Once IIS is configured and you’ve updated the WindowsLiveOauthSts.sln file as described above, you should be able to open the project successfully in Visual Studio.  One of the first things you’ll need to do is update the CLIENT_ID and CLIENT_SECRET constants in the PassiveSTS.aspx.cs class.  You get these when you create a new Windows Live application.  While I’m not going to cover that step-by-step (because there are folks at Windows Live who can help you with it), let me just point you to the location where you can go to create your Windows Live app:  https://manage.dev.live.com/Applications/Index?wa=wsignin1.0.  Also, when you create your application, make sure you set the Redirect Domain to the location where your custom STS is hosted, i.e. https://myserver.foo.com.

Now that you have your ID and secret here’s what needs to be updated in the application:

  1. Update the CLIENT_ID and CLIENT_SECRET constants in the PassiveSTS.aspx.cs class.
  2. In the web.config file update the SigningCertificateName in the appSettings section.  Note that you don’t have to change the IssuerName setting but you obviously can if you want.
  3. Update the token signing certificate for the FederationMetadata.xml document in the STS project.  Once you’ve selected the certificate you’re going to use, you can use the test.exe application included in this posting to get the string value for the certificate.  It needs to be copied in to replace the two X509Certificate element values in federationmetadata.xml.

There’s one other thing worth pointing out here – in the CustomSecurityTokenService.cs file you have the option of setting a variable called enableAppliesToValidation to true and then providing a list of Urls that can use this custom STS.  In my case I have chosen not to restrict it in any way, so that variable is false.  If you do want to lock down your custom STS then you should change that now.  Once all of these changes have been made you can recompile the application and it’s ready to go.
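For reference, in the WIF custom STS template those settings look roughly like the following; treat the member names as approximations of what ships in CustomSecurityTokenService.cs, and substitute your own relying party URLs:

//approximated from the WIF custom STS template: when
//enableAppliesToValidation is true, GetScope checks the incoming
//AppliesTo address against this list and rejects anything else
static bool enableAppliesToValidation = false;

static readonly string[] PassiveRedirectBasedClaimsAwareWebApps =
    { "https://yoursharepointsite.foo.com/" };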

One other note here – I also included a sample ASP.NET application that I used for testing while I was building this.  It’s in a project called LiveRP.  I’m not really going to cover it in here; suffice to say it’s there if you want to try testing things out.  Just remember to change the thumbprint for the STS token signing certificate as described above.

SharePoint Configuration

At this point everything is configured and should be working for the custom STS.  The only thing left to do is create a new SPTrustedIdentityTokenIssuer in SharePoint and configure a new or existing web application to use it.  There are a few things you should know about configuring the SPTrustedIdentityTokenIssuer though; I’m going to give you the PowerShell that I used to create mine and then explain it:

$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("c:\livevbtoys.cer")
New-SPTrustedRootAuthority -Name "SPS Live Token Signing Certificate" -Certificate $cert

$map = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming
$map2 = New-SPClaimTypeMapping -IncomingClaimType "http://blogs.technet.com/b/speschka/claims/id" -IncomingClaimTypeDisplayName "WindowsLiveID" -SameAsIncoming
$map3 = New-SPClaimTypeMapping -IncomingClaimType "http://blogs.technet.com/b/speschka/claims/full_name" -IncomingClaimTypeDisplayName "FullName" -SameAsIncoming
$map4 = New-SPClaimTypeMapping -IncomingClaimType "http://blogs.technet.com/b/speschka/claims/first_name" -IncomingClaimTypeDisplayName "FirstName" -SameAsIncoming
$map5 = New-SPClaimTypeMapping -IncomingClaimType "http://blogs.technet.com/b/speschka/claims/last_name" -IncomingClaimTypeDisplayName "LastName" -SameAsIncoming
$map6 = New-SPClaimTypeMapping -IncomingClaimType "http://blogs.technet.com/b/speschka/claims/link" -IncomingClaimTypeDisplayName "Link" -SameAsIncoming
$map7 = New-SPClaimTypeMapping -IncomingClaimType "http://blogs.technet.com/b/speschka/claims/gender" -IncomingClaimTypeDisplayName "Gender" -SameAsIncoming
$map8 = New-SPClaimTypeMapping -IncomingClaimType "http://blogs.technet.com/b/speschka/claims/locale" -IncomingClaimTypeDisplayName "Locale" -SameAsIncoming
$map9 = New-SPClaimTypeMapping -IncomingClaimType "http://blogs.technet.com/b/speschka/claims/updated_time" -IncomingClaimTypeDisplayName "WindowsLiveLastUpdatedTime" -SameAsIncoming
$map10 = New-SPClaimTypeMapping -IncomingClaimType "http://blogs.technet.com/b/speschka/claims/account" -IncomingClaimTypeDisplayName "AccountName" -SameAsIncoming
$map11 = New-SPClaimTypeMapping -IncomingClaimType "http://blogs.technet.com/b/speschka/claims/accesstoken" -IncomingClaimTypeDisplayName "WindowsLiveAccessToken" -SameAsIncoming

$realm = "https://spslive.vbtoys.com/_trust/"
$ap = New-SPTrustedIdentityTokenIssuer -Name "SpsLive" -Description "Windows Live OAuth Identity Provider for SAML" -realm $realm -ImportTrustCertificate $cert -ClaimsMappings $map,$map2,$map3,$map4,$map5,$map6,$map7,$map8,$map9,$map10,$map11 -SignInUrl "https://spr200.vbtoys.com/WindowsLiveOauthSts/PassiveSTS.aspx" -IdentifierClaim "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"

Here are the things worth noting:

  1. As I stated above, I created a certificate called livevbtoys.cer to sign my tokens with, so I added it to my SPTrustedRootAuthority list and then associated it with my token issuer.
  2. I created claims mappings for all of the claims that my custom STS is returning.  As you can see, it’s SIGNIFICANTLY MORE AND BETTER than you would ever get if you just federated directly to Windows Live.  One other thing to note here – I include the access token that I got from Windows Live as a claim here.  While that works with Facebook, I haven’t tested it so I can’t say for sure if Windows Live will let you reuse it or not.  But maybe that will be the topic of a future post.
  3. The $realm value is critically important.  It must point to the root site of your web application, and include the /_trust/ directory. If you do this wrong, you will just get 500 errors from SharePoint when you get redirected back after authentication.
  4. The –SignInUrl parameter when creating the token issuer is the absolute Url to PassiveSTS.aspx page for my custom STS.

That’s pretty much it – once it’s set up you are still using the out of the box people picker and claims providers so you won’t have any lookup capabilities, as you would expect.  You grant rights to people with the email addresses that they use to sign into Windows Live.  You could actually extend this example and also use the Azure claims provider I blogged about here:  http://blogs.technet.com/b/speschka/archive/2012/02/11/the-azure-custom-claim-provider-for-sharepoint-project-part-1.aspx.  That means you would be using this STS to enable you to authenticate with Windows Live and get some real SAML claims back, and then using the Azure custom claims provider project to add those authenticated users into your Azure directory store and the people picker to choose them.

The pictures tell it all, so here’s what it looks like when you first hit the SharePoint site and authenticate with Windows Live:

When you first sign in it will ask you if it’s okay to share your information with the custom STS application.  There’s nothing to be concerned about here – that’s standard OAuth permissions at work.  Here’s what that looks like; note that it shows the data I’m asking for in the STS – you could ask for an entirely different set of data if you wanted.  You just need to look at the Windows Live OAuth SDK to figure out what you need to change and how:

Once you accept, you get redirected back to the SharePoint site.  In this example I am using the SharePoint Claims web part I blogged about here:  http://blogs.technet.com/b/speschka/archive/2010/02/13/figuring-out-what-claims-you-have-in-sharepoint-2010.aspx.  You can see all the claims I got from Windows Live via OAuth that I now have as SAML claims thanks to my custom STS, as well as the fact that I’m signed in with my Windows Live email address that I created for this project (from the sign in control, top right corner):

 

You can download the attachment here:

One More Claims Migration Gotcha For SharePoint 2010

Hey folks, I’ve written previously about how to migrate claims users (such as Windows claims to SAML claims) in this post about the IMigrateUserCallback interface:  http://blogs.technet.com/b/speschka/archive/2011/01/27/migrating-user-accounts-from-windows-claims-to-saml-claims.aspx.  Just as with that post, our good friend Raju S. had some interesting information to add to this content today.  One of our other “friends of the blog”, Israel V., noticed that after a recent migration he did, the identities for workflows were not updated.  Turns out Raju had seen this before in a previous version of SharePoint (when migrating between different domains) and had written some code to fix up that issue.  The net of what you need to do is go through your workflow associations and update the accounts that are associated with them.

Each content type, list and web has a property called WorkflowAssociations where it stores this information.  It’s just a collection, so you can enumerate through each one; but as you can imagine, this may take some time to walk through an entire web application, so plan accordingly.  A specific workflow association is really just a chunk of XML, so it’s probably best to retrieve the AssociationData property and take a look at the XML to get familiar with it.  As you review it, you should notice nodes for person, account ID and display name – those are the values that you want to change.  After you change the XML you can push it back into the AssociationData property and call the UpdateWorkflowAssociation method on the workflow association.  A minimal sketch of that loop for the lists in one web follows.
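Here’s roughly what that could look like.  The account values are placeholders, a blunt string Replace stands in for a real XML edit of the person/account ID/display name nodes, and I’m pushing the change back through the association collection’s Update method:

using Microsoft.SharePoint;
using Microsoft.SharePoint.Workflow;

public static class WorkflowAccountFixup
{
    //placeholder account values - substitute your real before/after accounts
    const string OldAccount = "contoso\\steve";
    const string NewAccount = "i:05.t|saml provider|steve@contoso.com";

    public static void UpdateWeb(SPWeb web)
    {
        foreach (SPList list in web.Lists)
        {
            foreach (SPWorkflowAssociation wa in list.WorkflowAssociations)
            {
                string data = wa.AssociationData;

                if (!string.IsNullOrEmpty(data) && data.Contains(OldAccount))
                {
                    //in real code, load the XML and rewrite just the
                    //person, account ID and display name nodes
                    wa.AssociationData = data.Replace(OldAccount, NewAccount);
                    list.WorkflowAssociations.Update(wa);
                }
            }
        }
    }
}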

Thanks again to Israel for calling this problem out and to Raju for sharing his solution.

When Do You Need to Install a Custom Claims Provider for Search in SharePoint 2010

We’ve been having a few good (meaning “interesting”) discussions lately about custom claims providers and search.  As it turns out, there are instances when you need to install your custom claim provider on a search box (“box” being something I’ll define down below) in order to get security trimming working correctly in your search results.  When this applies though is different depending on whether you are using FAST Search 2010 or SharePoint Search 2010.

First a little explanation is probably useful here.  Every time a user makes a query, the claims in the user token are decoded.  However the SiteData Web Service, which is what the crawler uses to retrieve security information about content, always returns encoded claims.  So how do we reconcile that?

In FAST Search 2010, the claims are always decoded before they are stored by the FAST Search 2010 indexer.  We do this because the FAST Query servers don’t have the SharePoint bits installed, so they can’t encode the claims.  Remember the user claims come over decoded, so in order to match the encoded claims the SiteData Web Service returns, the user claims would have to be encoded to do a comparison.  Since we can’t install a custom claim provider on a FAST Query server we have to decode the claims that we get from the SiteData Web Service, and in order to do that any custom claims provider you are using must be installed in the FAST Content SSA.  Doing so allows us to decode the claims, store the claims decoded, and then when the decoded claims for a user are presented we can do a comparison.  So in the case of FAST, we are concerned with using any custom claims providers at crawl time.

For SharePoint Search 2010 it’s kind of the opposite problem.  SharePoint expects that it will be installed everywhere, so it works on the premise that at query time it can encode the claims from the user so that a comparison can be made to the ACLs that are stored for the content.  Where you will find this breaks down though is in the scenario where you have not deployed the custom claims provider to servers that are running the query processor, also known as the Query and Site Settings Service.  In most cases, you install your custom claims provider on all servers in your farm – WFEs as well as application servers.  The query processor needs this custom claims provider installed in order to encode the claims.  So if everything is running in a single farm and you install the custom claims provider on all servers, you should be good.  The situation that came up recently (and that prompted this post) is a scenario where you have a separate services farm and you are consuming SharePoint Search services from that farm.  In that case you need to make sure that any custom claims providers are installed on every server in the services farm that is running the Query and Site Settings Service.  If you do not do that you will find that your users’ custom claims cannot be evaluated, and as a result they will typically see no search results returned.

This was an interesting scenario and took some great troubleshooting by a cast of characters here…especially calling out my brilliant little brother Luca for helping us get over the final hurdle here, as well as Sanjeev and Michael P.  Thanks all for helping us understand this better.

The Azure Custom Claim Provider for SharePoint Project Part 3

In Part 1 of this series, I briefly outlined the goals for this project, which at a high level is to use Windows Azure table storage as a data store for a SharePoint custom claims provider.  The claims provider is going to use the CASI Kit to retrieve the data it needs from Windows Azure in order to provide people picker (i.e. address book) and type in control name resolution functionality. 

In Part 2, I walked through all of the components that run in the cloud – the data classes that are used to work with Azure table storage and queues, a worker role to read items out of queues and populate table storage, and a WCF front end that lets a client application create new items in the queue as well as do all the standard SharePoint people picker stuff – provide a list of supported claim types, search for claim values and resolve claims.

In this, the final part in this series, we’ll walk through the different components used on the SharePoint side.  It includes a custom component built using the CASI Kit to add items to the queue as well as to make our calls to Azure table storage.  It also includes our custom claims provider, which will use the CASI Kit component to connect SharePoint with those Azure functions.

To begin with let’s take a quick look at the custom CASI Kit component.  I’m not going to spend a whole lot of time here because the CASI Kit is covered extensively on this blog.  This particular component is described in Part 3 of the CASI Kit series.  Briefly though, what I’ve done is created a new Windows class library project.  I’ve added references to the CASI Kit base class assembly and the other required .NET assemblies (that I describe in part 3).  I’ve added a Service Reference in my project to the WCF endpoint I created in Part 2 of this project.  Finally, I added a new class to the project and have it inherit the CASI Kit base class, and I’ve added the code to override the ExecuteRequest method.  As you have hopefully seen in the CASI Kit series, here’s what my code looks like to override ExecuteRequest:

public class DataSource : AzureConnect.WcfConfig
{
    public override bool ExecuteRequest()
    {
        try
        {
            //create the proxy instance with the bindings and endpoint that the
            //base class configuration control has created for this
            AzureClaims.AzureClaimsClient cust =
                new AzureClaims.AzureClaimsClient(this.FedBinding,
                this.WcfEndpointAddress);

            //configure the channel so we can call it with
            //FederatedClientCredentials
            SPChannelFactoryOperations.ConfigureCredentials<AzureClaims.IAzureClaims>
                (cust.ChannelFactory,
                Microsoft.SharePoint.SPServiceAuthenticationMode.Claims);

            //create a channel to the WCF endpoint using the
            //token and claims of the current user
            AzureClaims.IAzureClaims claimsWCF =
                SPChannelFactoryOperations.CreateChannelActingAsLoggedOnUser
                <AzureClaims.IAzureClaims>(cust.ChannelFactory,
                this.WcfEndpointAddress,
                new Uri(this.WcfEndpointAddress.Uri.AbsoluteUri));

            //set the client property for the base class
            this.WcfClientProxy = claimsWCF;
        }
        catch (Exception ex)
        {
            Debug.WriteLine(ex.Message);
        }

        //now that the configuration is complete, call the method
        return base.ExecuteRequest();
    }
}

 

“AzureClaims” is the name of the Service Reference I created, and it uses the IAzureClaims interface that I defined in my WCF project in Azure.  As explained previously in the CASI Kit series, this is basically boilerplate code; I’ve just plugged in the name of my interface and the class that is exposed in the WCF application.  The other thing I’ve done, as is also explained in the CASI Kit series, is to create an ASPX page called AzureClaimProvider.aspx.  I just copied and pasted in the code I describe in Part 3 of the CASI Kit series and substituted the name of my class and the endpoint it can be reached at.  The control tag in the ASPX page for my custom CASI Kit component looks like this:

<AzWcf:DataSource runat="server" id="wcf" WcfUrl="https://spsazure.vbtoys.com/AzureClaims.svc" OutputType="Page" MethodName="GetClaimTypes" AccessDeniedMessage="" />

 

The main things to note here are that I created a CNAME record for “spsazure.vbtoys.com” that points to my Azure application at cloudapp.net (this is also described in Part 3 of the CASI Kit).  I’ve set the default MethodName that the page is going to invoke to be GetClaimTypes, which is a method that takes no parameters and returns a list of claim types that my Azure claims provider supports.  This makes it a good test to validate the connectivity between my Azure application and SharePoint.  I can simply go to http://anySharePointsite/_layouts/AzureClaimProvider.aspx and if everything is configured correctly I will see some data in the page.  Once I’ve deployed my project by adding the assembly to the Global Assembly Cache and deploying the page to SharePoint’s _layouts directory that’s exactly what I did – I hit the page in one of my sites and verified that it returned data, so I knew my connection between SharePoint and Azure was working.

Now that I have the plumbing in place, I finally get to the “fun” part of the project, which is to do two things:

  1. Create “some component” that will send information about new users to my Azure queue.
  2. Create a custom claims provider that will use my custom CASI Kit component to provide claim types, name resolution and search for claims.

This is actually a good point to stop and step back a little.  In this particular case I just wanted to roll something out as quickly as possible.  So what I did is create a new web application with anonymous access enabled.  As I’m sure you all know, just enabling it at the web app level does NOT enable it at the site collection level.  So for this scenario, I also enabled it at the root site collection only, and granted access to everything in the site.  All the other site collections, which would contain member-only information, do NOT have anonymous enabled, so users have to be granted rights to join.

The next thing to think about is how to manage the identities that are going to use the site.  Obviously, managing accounts myself is not something I want to do.  I could have come up with a number of different methods to sync accounts into Azure or something goofy like that, but as I explained in Part 1 of this series, there’s a whole bunch of providers that do that already, so I’m going to let them keep doing what they do.  What I mean by that is that I took advantage of another Microsoft cloud service called ACS, or Access Control Service.  In short, ACS acts like an identity provider to SharePoint.  So I just created a trust between my SharePoint farm and the instance of ACS that I created for this POC.  In ACS I added SharePoint as a relying party, so ACS knows where to send users once they’ve authenticated.  Inside of ACS, I also configured it to let users sign in using their Gmail, Yahoo, or Facebook accounts.  Once they’ve signed in, ACS gets back a single claim that I’ll use – email address – and sends it on to SharePoint.

Okay, so that’s all of the background on the plumbing – Azure is providing table storage and queues to work with the data, ACS is providing authentication services, and CASI Kit is providing the plumbing to the data.

So with all that plumbing described, how are we going to use it?  Well, I still wanted the process of becoming a member to be pretty painless, so what I did is write a web part to add users to my Azure queue.  What it does is check to see whether the request is authenticated (i.e. the user has clicked the Sign In link that you get in an anonymous site, signed into one of the providers I mentioned above, and ACS has sent me back their claim information).  If the request is not authenticated the web part doesn’t do anything.  However, if the request is authenticated, it renders a button that, when clicked, takes the user’s email claim and adds it to the Azure queue.  This is the part about which I said we should step back a moment and think.  For a POC that’s all fine and good; it works.  However, you can think about other ways in which you could process this request.  For example, maybe you write the information to a SharePoint list.  You could write a custom timer job (which the CASI Kit works with very nicely) and periodically process new requests out of that list.  You could use the SPWorkItem to queue the requests up to process later.  You could store it in a list and add a custom workflow that goes through some approval process, and once the request has been approved, use a custom workflow action to invoke the CASI Kit to push the details up to the Azure queue.  In short – there’s a LOT of power, flexibility and customization possible here – it’s all up to your imagination.  At some point I may write another version of this that writes the request to a custom list, processes it asynchronously, adds the data to the Azure queue and then automatically adds the account to the Visitors group in one of the sub sites, so the user would be signed up and ready to go right away.  But that’s for another post if I do it.

So, all that being said – as I described above, I’m just letting the user click a button if they’ve signed in, and then I use my custom CASI Kit component to call out to the WCF endpoint and add the information to the Azure queue.  Here’s the code for the web part – pretty simple, courtesy of the CASI Kit:

public class AddToAzureWP : WebPart
{
       //button whose click event we need to track so that we can
       //add the user to Azure
       Button addBtn = null;
       Label statusLbl = null;

       protected override void CreateChildControls()
       {
              if (this.Page.Request.IsAuthenticated)
              {
                     addBtn = new Button();
                     addBtn.Text = "Request Membership";
                     addBtn.Click += new EventHandler(addBtn_Click);
                     this.Controls.Add(addBtn);

                     statusLbl = new Label();
                     this.Controls.Add(statusLbl);
              }
       }

       void addBtn_Click(object sender, EventArgs e)
       {
              try
              {
                     //look for the claims identity
                     IClaimsPrincipal cp = Page.User as IClaimsPrincipal;

                     if (cp != null)
                     {
                            //get the claims identity so we can enum claims
                            IClaimsIdentity ci = (IClaimsIdentity)cp.Identity;

                            //look for the email claim
                            //see if there are claims present before running through this
                            if (ci.Claims.Count > 0)
                            {
                                   //look for the email address claim
                                   var eClaim = from Claim c in ci.Claims
                                                where c.ClaimType == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"
                                                select c;

                                   Claim ret = eClaim.FirstOrDefault<Claim>();

                                   if (ret != null)
                                   {
                                          //create the string we're going to send to the Azure queue:  claim, value, and display name
                                          //note that I'm using "#" as the delimiter because there is only one parameter, and CASI Kit
                                          //uses ; as a delimiter so a different value is needed.  If ; were used CASI would try and
                                          //make it three parameters, when in reality it's only one
                                          string qValue = ret.ClaimType + "#" + ret.Value + "#" + "Email";

                                          //create the connection to Azure and upload
                                          //create an instance of the control
                                          AzureClaimProvider.DataSource cfgCtrl = new AzureClaimProvider.DataSource();

                                          //set the properties to retrieve data; must configure cache properties since we're using it programmatically
                                          //cache is not actually used in this case though
                                          cfgCtrl.WcfUrl = AzureCCP.SVC_URI;
                                          cfgCtrl.OutputType = AzureConnect.WcfConfig.DataOutputType.None;
                                          cfgCtrl.MethodName = "AddClaimsToQueue";
                                          cfgCtrl.MethodParams = qValue;
                                          cfgCtrl.ServerCacheTime = 10;
                                          cfgCtrl.ServerCacheName = ret.Value;
                                          cfgCtrl.SharePointClaimsSiteUrl = this.Page.Request.Url.ToString();

                                          //execute the method
                                          bool success = cfgCtrl.ExecuteRequest();

                                          if (success)
                                          {
                                                 //if it worked tell the user
                                                 statusLbl.Text = "<p>Your information was successfully added.  You can now contact any of " +
                                                        "the other Partner Members or our Support staff to get access rights to Partner " +
                                                        "content.  Please note that it takes up to 15 minutes for your request to be " +
                                                        "processed.</p>";
                                          }
                                          else
                                          {
                                                 statusLbl.Text = "<p>There was a problem adding your info to Azure; please try again later or " +
                                                        "contact Support if the problem persists.</p>";
                                          }
                                   }
                            }
                     }
              }
              catch (Exception ex)
              {
                     statusLbl.Text = "There was a problem adding your info to Azure; please try again later or " +
                            "contact Support if the problem persists.";
                     Debug.WriteLine(ex.Message);
              }
       }
}

 

So a brief rundown of the code looks like this:  I first make sure the request is authenticated; if it is, I add the button to the page and add an event handler for its click event.  In the button’s click event handler I get an IClaimsPrincipal reference to the current user, and then look at the user’s claims collection.  I run a LINQ query against the claims collection to look for the email claim, which is the identity claim for my SPTrustedIdentityTokenIssuer.  If I find the email claim, I create a concatenated string with the claim type, claim value and friendly name for the claim.  Again, this isn’t strictly required in this scenario, but since I wanted this to be usable in a more generic scenario I coded it up this way.  That concatenated string is the value for the method I have on the WCF that adds data to the Azure queue.  I then create an instance of my custom CASI Kit component and configure it to call the WCF method that adds data to the queue, then I call the ExecuteRequest method to actually fire off the data.

If I get a response indicating the data was successfully added to the queue then I let the user know; otherwise I let him know there was a problem and he may need to check back later.  In a real scenario of course I would have even more error logging so I could track down exactly what happened and why.  Even as is though, the CASI Kit will write any error information to the ULS logs in an SPMonitoredScope, so everything it does for the request will have a unique correlation ID with which we can view all activity associated with the request.  So I’m actually in a pretty good state right now.

Okay – we’ve walked through all the plumbing pieces, and I’ve shown how data gets added to the Azure queue and from there pulled out by a worker process and added into table storage.  That’s really the ultimate goal because now we can walk through the custom claims provider.  It’s going to use the CASI Kit to call out and query the Azure table storage I’m using.  Let’s look at the most interesting aspects of the custom claims provider.

First let’s look at a couple of class level attributes:

//the WCF endpoint that we'll use to connect for address book functions
//test url:  https://az1.vbtoys.com/AzureClaimsWCF/AzureClaims.svc
//production url:  https://spsazure.vbtoys.com/AzureClaims.svc
public static string SVC_URI = "https://spsazure.vbtoys.com/AzureClaims.svc";

//the identity claimtype value
private const string IDENTITY_CLAIM =
       "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress";

//the collection of claim types we support; it won't change over the course of
//the STS (w3wp.exe) lifetime so we'll cache it once we get it
AzureClaimProvider.AzureClaims.ClaimTypeCollection AzureClaimTypes =
       new AzureClaimProvider.AzureClaims.ClaimTypeCollection();

 

First, I use a static class-level value to refer to the WCF endpoint to which the CASI Kit should connect.  You’ll note that I have both my test endpoint and my production endpoint (the test one is kept in a comment).  When you’re using the CASI Kit programmatically, as we will in our custom claims provider, you always have to tell it which WCF endpoint it should talk to.

Next, as I’ve described previously, I’m using the email claim as my identity claim.  Since I will refer to it a number of times throughout my provider, I’ve just plugged it into a constant at the class level.

Finally, I have a collection of AzureClaimTypes.  I explained in Part 2 of this series why I’m using a collection, and I’m just storing it here at the class level so that I don’t have to go and re-fetch that information each time my FillHierarchy method is invoked.  Calls out to Azure aren’t cheap, so I minimize them where I can.

Here’s the next chunk of code:

internal static string ProviderDisplayName
{
       get
       {
              return "AzureCustomClaimsProvider";
       }
}

internal static string ProviderInternalName
{
       get
       {
              return "AzureCustomClaimsProvider";
       }
}

//*******************************************************************
//USE THIS PROPERTY NOW WHEN CREATING THE CLAIM FOR THE PICKERENTITY
internal static string SPTrustedIdentityTokenIssuerName
{
       get
       {
              return "SPS ACS";
       }
}

public override string Name
{
       get
       {
              return ProviderInternalName;
       }
}

 

The reason I wanted to point this code out is that, since my provider is issuing identity claims, it MUST be the default provider for the SPTrustedIdentityTokenIssuer.  Explaining how to do that is outside the scope of this post, but I’ve covered it elsewhere in my blog.  The main thing to remember is that you must have a strong relationship between the name you use for your provider and the name used for the SPTrustedIdentityTokenIssuer.  The value I used for ProviderInternalName is the name that I must plug into the ClaimProviderName property of the SPTrustedIdentityTokenIssuer.  Also, I need to use the name of the SPTrustedIdentityTokenIssuer when I’m creating identity claims for users.  So I’ve created an SPTrustedIdentityTokenIssuer called “SPS ACS” and I’ve added that to my SPTrustedIdentityTokenIssuerName property.  That’s why I have these values coded in here.

Since I’m not doing any claims augmentation in this provider, I have not written any code to override FillClaimTypes, FillClaimValueTypes or FillEntityTypes.  The next chunk of code I have is FillHierarchy, which is where I tell SharePoint what claim types I support.  Here’s the code for that:

try
{
       if (
               (AzureClaimTypes.ClaimTypes == null) ||
               (AzureClaimTypes.ClaimTypes.Count() == 0)
          )
       {
              //create an instance of the control
              AzureClaimProvider.DataSource cfgCtrl = new AzureClaimProvider.DataSource();

              //set the properties to retrieve data; must configure cache properties since we're using it programmatically
              //cache is not actually used in this case though
              cfgCtrl.WcfUrl = SVC_URI;
              cfgCtrl.OutputType = AzureConnect.WcfConfig.DataOutputType.None;
              cfgCtrl.MethodName = "GetClaimTypes";
              cfgCtrl.ServerCacheTime = 10;
              cfgCtrl.ServerCacheName = "GetClaimTypes";
              cfgCtrl.SharePointClaimsSiteUrl = context.AbsoluteUri;

              //execute the method
              bool success = cfgCtrl.ExecuteRequest();

              if (success)
              {
                     //if it worked, get the list of claim types out
                     AzureClaimTypes =
                            (AzureClaimProvider.AzureClaims.ClaimTypeCollection)cfgCtrl.QueryResultsObject;
              }
       }

       //make sure picker is asking for the type of entity we return; site collection admin won't for example
       if (!EntityTypesContain(entityTypes, SPClaimEntityTypes.User))
              return;

       //at this point we have whatever claim types we're going to have, so add them to the hierarchy
       //check to see if the hierarchyNodeID is null; it will be when the control
       //is first loaded but if a user clicks on one of the nodes it will return
       //the key of the node that was clicked on.  This lets you build out a
       //hierarchy as a user clicks on something, rather than all at once
       if (
               (string.IsNullOrEmpty(hierarchyNodeID)) &&
               (AzureClaimTypes.ClaimTypes.Count() > 0)
          )
       {
              //enumerate through each claim type
              foreach (AzureClaimProvider.AzureClaims.ClaimType clm in AzureClaimTypes.ClaimTypes)
              {
                     //when it first loads add all our nodes
                     hierarchy.AddChild(new
                            Microsoft.SharePoint.WebControls.SPProviderHierarchyNode(
                            ProviderInternalName, clm.FriendlyName, clm.ClaimTypeName, true));
              }
       }
}
catch (Exception ex)
{
       Debug.WriteLine("Error filling hierarchy: " + ex.Message);
}

 

So here I’m looking to see if I’ve grabbed the list of claim types I support already.  If I haven’t then I create an instance of my CASI Kit custom control and make a call out to my WCF to retrieve the claim types; I do this by calling the GetClaimTypes method on my WCF class.  If I get data back then I plug it into the class-level variable I described earlier called AzureClaimTypes, and then I add it to the hierarchy of claim types I support.

The next methods we’ll look at are the FillResolve methods.  They have two different signatures because they do two different things.  In one scenario we have a specific claim with value and type, and SharePoint just wants to verify that it is valid.  In the second case a user has just typed some value into the SharePoint type in control, so it’s effectively the same thing as doing a search for claims.  Because of that, I’ll look at them separately.

In the case where I have a specific claim and SharePoint wants to verify the values, I call a custom method I wrote called GetResolveResults.  In that method I pass in the Uri where the request is being made as well as the claim type and claim value SharePoint is seeking to validate.  The GetResolveResults then looks like this:

//Note that claimType is being passed in here for future extensibility; in the
//current case though, we're only using identity claims
private AzureClaimProvider.AzureClaims.UniqueClaimValue GetResolveResults(string siteUrl,
       string searchPattern, string claimType)
{
       AzureClaimProvider.AzureClaims.UniqueClaimValue result = null;

       try
       {
              //create an instance of the control
              AzureClaimProvider.DataSource cfgCtrl = new AzureClaimProvider.DataSource();

              //set the properties to retrieve data; must configure cache properties since we're using it programmatically
              //cache is not actually used in this case though
              cfgCtrl.WcfUrl = SVC_URI;
              cfgCtrl.OutputType = AzureConnect.WcfConfig.DataOutputType.None;
              cfgCtrl.MethodName = "ResolveClaim";
              cfgCtrl.ServerCacheTime = 10;
              cfgCtrl.ServerCacheName = claimType + ";" + searchPattern;
              cfgCtrl.MethodParams = IDENTITY_CLAIM + ";" + searchPattern;
              cfgCtrl.SharePointClaimsSiteUrl = siteUrl;

              //execute the method
              bool success = cfgCtrl.ExecuteRequest();

              //if the query encountered no errors then capture the result
              if (success)
                     result = (AzureClaimProvider.AzureClaims.UniqueClaimValue)cfgCtrl.QueryResultsObject;
       }
       catch (Exception ex)
       {
              Debug.WriteLine(ex.Message);
       }

       return result;
}

 

So here I’m creating an instance of the custom CASI Kit control and then calling the ResolveClaim method on my WCF.  That method takes two parameters, so I pass them in as semi-colon delimited values (because that’s how the CASI Kit distinguishes between different param values).  I then just execute the request; if it finds a match it will return a single UniqueClaimValue, otherwise the return value will be null.  Back in my FillResolve method this is what my code looks like:

protected override void FillResolve(Uri context, string[] entityTypes, SPClaim resolveInput, List<PickerEntity> resolved)
{
       //make sure picker is asking for the type of entity we return; site collection admin won't for example
       if (!EntityTypesContain(entityTypes, SPClaimEntityTypes.User))
              return;

       try
       {
              //look for matching claims
              AzureClaimProvider.AzureClaims.UniqueClaimValue result =
                     GetResolveResults(context.AbsoluteUri, resolveInput.Value,
                     resolveInput.ClaimType);

              //if we found a match then add it to the resolved list
              if (result != null)
              {
                     PickerEntity pe = GetPickerEntity(result.ClaimValue, result.ClaimType,
                            SPClaimEntityTypes.User, result.DisplayName);
                     resolved.Add(pe);
              }
       }
       catch (Exception ex)
       {
              Debug.WriteLine(ex.Message);
       }
}

 

So I’m checking first to make sure that the request is for a User claim, since that’s the only type of claim my provider returns.  If the request is not for a User claim then I drop out.  Next I call my method to resolve the claim, and if I get back a non-null result, I process it.  To process it I call another custom method I wrote called GetPickerEntity.  Here I pass in the claim type and value to create an identity claim, and then I can add the PickerEntity it returns to the List of PickerEntity instances passed into my method.  I’m not going to go into the GetPickerEntity method because this post is already incredibly long and I’ve covered how to do so in other posts on my blog.

Now let’s talk about the other FillResolve method.  As I explained earlier, it basically acts just like a search so I’m going to combine the FillResolve and FillSearch methods mostly together here.  Both of these methods are going to call a custom method I wrote called SearchClaims, that looks like this:

private AzureClaimProvider.AzureClaims.UniqueClaimValueCollection SearchClaims(string claimType, string searchPattern,
       string siteUrl)
{
       AzureClaimProvider.AzureClaims.UniqueClaimValueCollection results =
              new AzureClaimProvider.AzureClaims.UniqueClaimValueCollection();

       try
       {
              //create an instance of the control
              AzureClaimProvider.DataSource cfgCtrl = new AzureClaimProvider.DataSource();

              //set the properties to retrieve data; must configure cache properties since we're using it programmatically
              //cache is not actually used in this case though
              cfgCtrl.WcfUrl = SVC_URI;
              cfgCtrl.OutputType = AzureConnect.WcfConfig.DataOutputType.None;
              cfgCtrl.MethodName = "SearchClaims";
              cfgCtrl.ServerCacheTime = 10;
              cfgCtrl.ServerCacheName = claimType + ";" + searchPattern;
              cfgCtrl.MethodParams = claimType + ";" + searchPattern + ";200";
              cfgCtrl.SharePointClaimsSiteUrl = siteUrl;

              //execute the method
              bool success = cfgCtrl.ExecuteRequest();

              if (success)
              {
                     //if it worked, get the array of results
                     results =
                            (AzureClaimProvider.AzureClaims.UniqueClaimValueCollection)cfgCtrl.QueryResultsObject;
              }
       }
       catch (Exception ex)
       {
              Debug.WriteLine("Error searching claims: " + ex.Message);
       }

       return results;
}

 

In this method, as you’ve seen elsewhere in this post, I’m just creating an instance of my custom CASI Kit control.  I’m calling the SearchClaims method on my WCF and I’m passing in the claim type I want to search in, the claim value I want to find in that claim type, and the maximum number of records to return.  You may recall from Part 2 of this series that SearchClaims just does a BeginsWith on the search pattern that’s passed in, so with lots of users there could easily be over 200 results.  However 200 is the maximum number of matches that the people picker will show, so that’s all I ask for.  If you really think that users are going to scroll through more than 200 results looking for a result I’m here to tell you that ain’t likely.

So now that we have our collection of UniqueClaimValues back, let’s look at how we use it in the two override methods in the custom claims provider.  First, here’s what the FillResolve method looks like:

protected override void FillResolve(Uri context, string[] entityTypes, string resolveInput, List<PickerEntity> resolved)
{
       //this version of resolve is just like a search, so we'll treat it like that
       //make sure picker is asking for the type of entity we return; site collection admin won't for example
       if (!EntityTypesContain(entityTypes, SPClaimEntityTypes.User))
              return;

       try
       {
              //do the search for matches
              AzureClaimProvider.AzureClaims.UniqueClaimValueCollection results =
                     SearchClaims(IDENTITY_CLAIM, resolveInput, context.AbsoluteUri);

              //go through each match and add a picker entity for it
              foreach (AzureClaimProvider.AzureClaims.UniqueClaimValue cv in results.UniqueClaimValues)
              {
                     PickerEntity pe = GetPickerEntity(cv.ClaimValue, cv.ClaimType, SPClaimEntityTypes.User, cv.DisplayName);
                     resolved.Add(pe);
              }
       }
       catch (Exception ex)
       {
              Debug.WriteLine(ex.Message);
       }
}

It just calls the SearchClaims method, and for each result it gets back (if any), it creates a new PickerEntity and adds it to the List passed into the override.  All of them will then show up in the type in control in SharePoint.  The FillSearch method uses it like this:

protected override void FillSearch(Uri context, string[] entityTypes, string searchPattern, string hierarchyNodeID, int maxCount, SPProviderHierarchyTree searchTree)
{
       //make sure picker is asking for the type of entity we return; site collection admin won't for example
       if (!EntityTypesContain(entityTypes, SPClaimEntityTypes.User))
              return;

       try
       {
              //do the search for matches
              AzureClaimProvider.AzureClaims.UniqueClaimValueCollection results =
                     SearchClaims(IDENTITY_CLAIM, searchPattern, context.AbsoluteUri);

              //if there were more than zero results, add them to the picker
              if (results.UniqueClaimValues.Count() > 0)
              {
                     foreach (AzureClaimProvider.AzureClaims.UniqueClaimValue cv in results.UniqueClaimValues)
                     {
                            //node where we'll stick our matches
                            Microsoft.SharePoint.WebControls.SPProviderHierarchyNode matchNode = null;

                            //get a picker entity to add to the dialog
                            PickerEntity pe = GetPickerEntity(cv.ClaimValue, cv.ClaimType, SPClaimEntityTypes.User, cv.DisplayName);

                            //add the node where it should be displayed too
                            if (!searchTree.HasChild(cv.ClaimType))
                            {
                                   //create the node so we can show our match in there too
                                   matchNode = new
                                          SPProviderHierarchyNode(ProviderInternalName,
                                          cv.DisplayName, cv.ClaimType, true);

                                   //add it to the tree
                                   searchTree.AddChild(matchNode);
                            }
                            else
                                   //get the node for this claim type
                                   matchNode = searchTree.Children.Where(theNode =>
                                          theNode.HierarchyNodeID == cv.ClaimType).First();

                            //add the match to our node
                            matchNode.AddEntity(pe);
                     }
              }
       }
       catch (Exception ex)
       {
              Debug.WriteLine(ex.Message);
       }
}

 

In FillSearch I’m calling my SearchClaims method again.  For each UniqueClaimValue I get back (if any), I look to see if I’ve added the claim type to the results hierarchy node.  Again, in this case I’ll only ever return one claim type (email), but I wrote this to be extensible so you could use more claim types later.  So I add the hierarchy node if it doesn’t exist, or find it if it does.  I take the PickerEntity that I created from the UniqueClaimValue and add it to the hierarchy node.  And that’s pretty much all there is to it.

I’m not going to cover the FillSchema method or any of the four Boolean property overrides that every custom claims provider must have, because there’s nothing special in them for this scenario and I’ve covered the basics in other posts on this blog.  I’m also not going to cover the feature receiver that’s used to register this custom claims provider because – again – there’s nothing special for this project and I’ve covered it elsewhere.  After you compile it you just need to make sure that the assemblies for the custom claims provider and the custom CASI Kit component are registered in the Global Assembly Cache on each server in the farm, and you need to configure the SPTrustedIdentityTokenIssuer to use your custom claims provider as the default provider (also explained elsewhere in this blog).

That’s the basic scenario end to end.  When you are in the SharePoint site and you try and add a new user (email claim really), the custom claims provider is invoked first to get a list of supported claim types, and then again as you type a value into the type in control, or search for a value using the people picker.  In each case the custom claims provider uses the custom CASI Kit control to make an authenticated call out to Windows Azure to talk to our WCF, which uses our custom data classes to retrieve data from Azure table storage.  It returns the results and we unwrap them and present them to the user.  With that you have your complete turnkey SharePoint and Azure “extranet in a box” solution that you can use as is, or modify to suit your purposes.  The source code for the custom CASI Kit component, the web part that registers the user in an Azure queue, and the custom claims provider is all attached to this posting.  Hope you enjoy it, find it useful, and can start to visualize how you can tie these separate services together to create solutions to your problems.  Here are some screenshots of the final solution:

Root site as anonymous user:

Here’s what it looks like after you’ve authenticated; notice that the web part now displays the Request Membership button:

Here’s an example of the people picker in action, after searching for claim values that start with “sp”:


You can download the attachment here:

The Azure Custom Claim Provider for SharePoint Project Part 2

In Part 1 of this series, I briefly outlined the goals for this project, which at a high level are to use Windows Azure table storage as a data store for a SharePoint custom claims provider.  The claims provider is going to use the CASI Kit to retrieve the data it needs from Windows Azure in order to provide people picker (i.e. address book) and type in control name resolution functionality.

In Part 3 I create all of the components used in the SharePoint farm.  That includes a custom component based on the CASI Kit that manages all the communication between SharePoint and Azure.  There is a custom web part that captures information about new users and gets it pushed into an Azure queue.  Finally, there is a custom claims provider that communicates with Azure table storage through a WCF – via the CASI Kit custom component – to enable the type in control and people picker functionality.

Now let’s expand on this scenario a little more.

This type of solution plugs in pretty nicely to a fairly common scenario, which is when you want a minimally managed extranet.  So for example, you want your partners or customers to be able to hit a website of yours, request an account, and then be able to automatically “provision” that account…where “provision” can mean a lot of different things to different people.  We’re going to use that as the baseline scenario here, but of course, let our public cloud resources do some of the work for us.

Let’s start by looking at the cloud components we’re going to develop ourselves:

  • A table to keep track of all the claim types we’re going to support
  • A table to keep track of all the unique claim values for the people picker
  • A queue where we can send data that should be added to the list of unique claim values
  • Some data access classes to read and write data from Azure tables, and to write data to the queue
  • An Azure worker role that is going to read data out of the queue and populate the unique claim values table
  • A WCF application that will be the endpoint through which the SharePoint farm communicates to get the list of claim types, search for claims, resolve a claim, and add data to the queue

Now we’ll look at each one in a little more detail.

Claim Types Table

The claim types table is where we’re going to store all the claim types that our custom claims provider can use.  In this scenario we’re only going to use one claim type, which is the identity claim – that will be email address in this case.  You could use other claims, but to simplify this scenario we’re just going to use the one.  In Azure table storage you add instances of classes to a table, so we need to create a class to describe the claim types.  Again, note that you can add instances of different class types to the same table in Azure, but to keep things straightforward we’re not going to do that here.  The class this table is going to use looks like this:

namespace AzureClaimsData
{
    public class ClaimType : TableServiceEntity
    {
        public string ClaimTypeName { get; set; }
        public string FriendlyName { get; set; }

        public ClaimType() { }

        public ClaimType(string ClaimTypeName, string FriendlyName)
        {
            this.PartitionKey = System.Web.HttpUtility.UrlEncode(ClaimTypeName);
            this.RowKey = FriendlyName;

            this.ClaimTypeName = ClaimTypeName;
            this.FriendlyName = FriendlyName;
        }
    }
}

 

I’m not going to cover all the basics of working with Azure table storage because there are lots of resources out there that have already done that.  So if you want more details on what a PartitionKey or RowKey is and how you use them, your friendly local Bing search engine can help you out.  The one thing that is worth pointing out here is that I am Url encoding the value I’m storing for the PartitionKey.  Why is that?  Well in this case, my PartitionKey is the claim type, which can take a number of formats:  urn:foo:blah, http://www.foo.com/blah, etc.  In the case of a claim type that includes forward slashes, Azure cannot store the PartitionKey with those values. So instead we encode them out into a friendly format that Azure likes.  As I stated above, in our case we’re using the email claim so the claim type for it is http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress.
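
As a quick illustration of what that encoding buys us, here’s a minimal sketch showing what happens to the email claim type (the resulting string is what gets stored as the PartitionKey):

string claimType = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress";
string partitionKey = System.Web.HttpUtility.UrlEncode(claimType);

//partitionKey is now
//"http%3a%2f%2fschemas.xmlsoap.org%2fws%2f2005%2f05%2fidentity%2fclaims%2femailaddress"
//no forward slashes left, so Azure table storage will accept it as a key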

Unique Claim Values Table

The Unique Claim Values table is where all the unique claim values we get are stored.  In our case, we are only storing one claim type – the identity claim – so by definition all claim values are going to be unique.  However I took this approach for extensibility reasons.  For example, suppose down the road you wanted to start using Role claims with this solution.  Well it wouldn’t make sense to store the Role claim “Employee” or “Customer” or whatever a thousand different times; for the people picker, it just needs to know the value exists so it can make it available in the picker.  After that, whoever has it, has it – we just need to let it be used when granting rights in a site.  So, based on that, here’s what the class looks like that will store the unique claim values:

namespace AzureClaimsData
{
    public class UniqueClaimValue : TableServiceEntity
    {
        public string ClaimType { get; set; }
        public string ClaimValue { get; set; }
        public string DisplayName { get; set; }

        public UniqueClaimValue() { }

        public UniqueClaimValue(string ClaimType, string ClaimValue, string DisplayName)
        {
            this.PartitionKey = System.Web.HttpUtility.UrlEncode(ClaimType);
            this.RowKey = ClaimValue;

            this.ClaimType = ClaimType;
            this.ClaimValue = ClaimValue;
            this.DisplayName = DisplayName;
        }
    }
}

 

There are a couple of things worth pointing out here.  First, like the previous class, the PartitionKey uses a UrlEncoded value because it will be the claim type, which will have the forward slashes in it.  Second, as I frequently see when using Azure table storage, the data is denormalized because there isn’t a JOIN concept like there is in SQL.  Technically you can do a JOIN in LINQ, but so many things that are in LINQ have been disallowed when working with Azure data (or perform so badly) that I find it easier to just denormalize. If you folks have other thoughts on this throw them in the comments – I’d be curious to hear what you think.  So in our case the display name will be “Email”, because that’s the claim type we’re storing in this class.

The Claims Queue

The claims queue is pretty straightforward – we’re going to store requests for “new users” in that queue, and then an Azure worker process will read them off the queue and move the data into the unique claim values table.  The primary reason for doing this is that working with Azure table storage can sometimes be pretty latent, while sticking an item in a queue is pretty fast.  Taking this approach means we can minimize the impact on our SharePoint web site.

Data Access Classes

One of the rather mundane aspects of working with Azure table storage and queues is that you always have to write your own data access class.  For table storage, you have to write a data context class and a data source class.  I’m not going to spend a lot of time on that because you can read reams about it on the web, plus I’m also attaching my source code for the Azure project to this posting so you can look at it all you want.

There is one important thing I would point out here though, which is just a personal style choice.  I like to break out all my Azure data access code out into a separate project.  That way I can compile it into its own assembly, and I can use it even from non-Azure projects.  For example, in the sample code I’m uploading you will find a Windows form application that I used to test the different parts of the Azure back end.  It knows nothing about Azure, other than it has a reference to some Azure assemblies and to my data access assembly.  I can use it in that project and just as easily in my WCF project that I use to front-end the data access for SharePoint.

Here are some of the particulars about the data access classes though:

  • I have a separate “container” class for the data I’m going to return – the claim types and the unique claim values.  What I mean by a container class is that I have a simple class with a public property of type List<>.  I return this class when data is requested, rather than just a List<> of results.  The reason I do that is because when I return a List<> from Azure, the client only gets the last item in the list (when you do the same thing from a locally hosted WCF it works just fine).  So to work around this issue I return claim types in a class that looks like this:

namespace AzureClaimsData
{
    public class ClaimTypeCollection
    {
        public List<ClaimType> ClaimTypes { get; set; }

        public ClaimTypeCollection()
        {
            ClaimTypes = new List<ClaimType>();
        }
    }
}

 

And the unique claim values return class looks like this:

namespace AzureClaimsData
{
    public class UniqueClaimValueCollection
    {
        public List<UniqueClaimValue> UniqueClaimValues { get; set; }

        public UniqueClaimValueCollection()
        {
            UniqueClaimValues = new List<UniqueClaimValue>();
        }
    }
}

 

 

  • The data context classes are pretty straightforward – nothing really brilliant here (as my friend Vesa would say); it looks like this:

 

namespace AzureClaimsData
{
    public class ClaimTypeDataContext : TableServiceContext
    {
        public static string CLAIM_TYPES_TABLE = "ClaimTypes";

        public ClaimTypeDataContext(string baseAddress, StorageCredentials credentials)
            : base(baseAddress, credentials)
        { }

        public IQueryable<ClaimType> ClaimTypes
        {
            get
            {
                //this is where you configure the name of the table in Azure Table Storage
                //that you are going to be working with
                return this.CreateQuery<ClaimType>(CLAIM_TYPES_TABLE);
            }
        }
    }
}

 

  • In the data source classes I do take a slightly different approach to making the connection to Azure.  Most of the examples I see on the web want to read the credentials out with some reg settings class (that’s not the exact name, I just don’t remember what it is).  The problem with that approach here is that I have no Azure-specific context because I want my data class to work outside of Azure.  So instead I just create a Setting in my project properties and in that I include the account name and key that is needed to connect to my Azure account.  So both of my data source classes have code that looks like this to create that connection to Azure storage:

 

        private static CloudStorageAccount storageAccount;
        private ClaimTypeDataContext context;

        //static constructor so it only fires once
        static ClaimTypesDataSource()
        {
            try
            {
                //get storage account connection info
                string storeCon = Properties.Settings.Default.StorageAccount;

                //extract account info
                string[] conProps = storeCon.Split(";".ToCharArray());

                string accountName = conProps[1].Substring(conProps[1].IndexOf("=") + 1);
                string accountKey = conProps[2].Substring(conProps[2].IndexOf("=") + 1);

                storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            }
            catch (Exception ex)
            {
                Trace.WriteLine("Error initializing ClaimTypesDataSource class: " + ex.Message);
                throw;
            }
        }

        //new constructor
        public ClaimTypesDataSource()
        {
            try
            {
                this.context = new ClaimTypeDataContext(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials);
                this.context.RetryPolicy = RetryPolicies.Retry(3, TimeSpan.FromSeconds(3));
            }
            catch (Exception ex)
            {
                Trace.WriteLine("Error constructing ClaimTypesDataSource class: " + ex.Message);
                throw;
            }
        }
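
        //note: the Split and IndexOf parsing in the static constructor above assumes the
        //StorageAccount setting uses the standard Azure storage connection string format,
        //for example (values illustrative, not from the real project):
        //DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=bXlrZXk=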

 

  • The actual implementation of the data source classes includes a method to add a new item for both a claim type as well as a unique claim value.  It’s very simple code that looks like this:

 

        //add a new item
        public bool AddClaimType(ClaimType newItem)
        {
            bool ret = true;

            try
            {
                this.context.AddObject(ClaimTypeDataContext.CLAIM_TYPES_TABLE, newItem);
                this.context.SaveChanges();
            }
            catch (Exception ex)
            {
                Trace.WriteLine("Error adding new claim type: " + ex.Message);
                ret = false;
            }

            return ret;
        }

 

One important difference to note in the Add method for the unique claim values data source is that it doesn’t throw an error  or return false when there is an exception saving changes.  That’s because I fully expect that people mistakenly or otherwise try and sign up multiple times.  Once we have a record of their email claim though any subsequent attempt to add it will throw an exception.  Since Azure doesn’t provide us the luxury of strongly typed exceptions, and since I don’t want the trace log filling up with pointless goo, I don’t worry about it when that situation occurs.
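
For contrast, here’s a minimal sketch of what that unique claim value version can look like; the table name and member names here are illustrative rather than the exact ones from the attached source:

        //add a new unique claim value; duplicates are expected and ignored
        public bool AddClaimValue(UniqueClaimValue newItem)
        {
            try
            {
                this.context.AddObject("UniqueClaimValues", newItem);  //illustrative table name
                this.context.SaveChanges();
            }
            catch (Exception)
            {
                //duplicate sign-ups land here; Azure doesn't surface a strongly
                //typed exception for this, so don't log or rethrow – just carry on
            }

            return true;
        }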

  • Searching for claims is a little more interesting, only to the extent that it exposes again some things that you can do in LINQ, but not in LINQ with Azure.  I’ll add the code here and then explain some of the choices I made:

 

        public UniqueClaimValueCollection SearchClaimValues(string ClaimType, string Criteria, int MaxResults)
        {
            UniqueClaimValueCollection results = new UniqueClaimValueCollection();
            UniqueClaimValueCollection returnResults = new UniqueClaimValueCollection();

            const int CACHE_TTL = 10;

            try
            {
                //look for the current set of claim values in cache
                if (HttpRuntime.Cache[ClaimType] != null)
                    results = (UniqueClaimValueCollection)HttpRuntime.Cache[ClaimType];
                else
                {
                    //not in cache so query Azure

                    //Azure doesn't support StartsWith, so pull all the data for the claim type
                    var values = from UniqueClaimValue cv in this.context.UniqueClaimValues
                                 where cv.PartitionKey == System.Web.HttpUtility.UrlEncode(ClaimType)
                                 select cv;

                    //you have to assign it first to actually execute the query and return the results
                    results.UniqueClaimValues = values.ToList();

                    //store it in cache
                    HttpRuntime.Cache.Add(ClaimType, results, null,
                        DateTime.Now.AddHours(CACHE_TTL), TimeSpan.Zero,
                        System.Web.Caching.CacheItemPriority.Normal,
                        null);
                }

                //now query based on criteria, for the max results
                returnResults.UniqueClaimValues = (from UniqueClaimValue cv in results.UniqueClaimValues
                                                   where cv.ClaimValue.StartsWith(Criteria)
                                                   select cv).Take(MaxResults).ToList();
            }
            catch (Exception ex)
            {
                Trace.WriteLine("Error searching claim values: " + ex.Message);
            }

            return returnResults;
        }

 

The first thing to note is that you cannot use StartsWith against Azure data.  So that means you need to retrieve all the data locally and then use your StartsWith expression.  Since retrieving all that data can be an expensive operation (it’s effectively a table scan to retrieve all rows), I do that once and then cache the data.  That way I only have to do a “real” query against Azure when the cached copy expires (the CACHE_TTL in the code above).  The downside is that if users are added during that time then we won’t be able to see them in the people picker until the cache expires and we retrieve all the data again.  Make sure you remember that when you are looking at the results.

Once I actually have my data set, I can do the StartsWith, and I can also limit the number of records I return.  By default SharePoint won’t display more than 200 records in the people picker, so that’s the maximum number I plan to ask for when this method is called.  But I’m including it as a parameter here so you can do whatever you want.

The Queue Access Class

Honestly there’s nothing super interesting here.  Just some basic methods to add, read and delete messages from the queue.
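
For completeness though, here’s a minimal sketch of what such a class can look like, using the same v1.x Microsoft.WindowsAzure.StorageClient library the table storage code uses; the class and queue names are illustrative:

using System.Collections.Generic;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class ClaimsQueueAccess
{
    private CloudQueue queue;

    public ClaimsQueueAccess(CloudStorageAccount account)
    {
        CloudQueueClient client = account.CreateCloudQueueClient();
        queue = client.GetQueueReference("claimsqueue");   //illustrative queue name
        queue.CreateIfNotExist();   //no-op if the queue already exists
    }

    //add a semi-colon delimited claim entry to the queue
    public void AddMessage(string content)
    {
        queue.AddMessage(new CloudQueueMessage(content));
    }

    //read up to 32 messages, the service maximum per request
    public IEnumerable<CloudQueueMessage> GetMessages()
    {
        return queue.GetMessages(32);
    }

    //delete a message once it has been processed
    public void DeleteMessage(CloudQueueMessage message)
    {
        queue.DeleteMessage(message);
    }
}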

Azure Worker Role

The worker role is also pretty non-descript.  It wakes up every 10 seconds and looks to see if there are any new messages in the queue.  It does this by calling the queue access class.  If it finds any items in there, it splits the content out (which is semi-colon delimited) into its constituent parts, creates a new instance of the UniqueClaimValue class, and then tries adding that instance to the unique claim values table.  Once it does that it deletes the message from the queue and moves to the next item, until it reaches the maximum number of messages that can be read at one time (32), or there are no messages remaining.
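
Here’s a minimal sketch of that loop, assuming the queue access class sketched above and a data source with an AddClaimValue method like the one sketched earlier (the field names are illustrative; the exact code is in the attached source):

public override void Run()
{
    while (true)
    {
        //pull up to 32 messages off the queue
        foreach (CloudQueueMessage msg in queueAccess.GetMessages())
        {
            //content is claim type, claim value and display name, semi-colon delimited
            string[] parts = msg.AsString.Split(';');

            if (parts.Length == 3)
            {
                UniqueClaimValue cv = new UniqueClaimValue(parts[0], parts[1], parts[2]);
                valuesDataSource.AddClaimValue(cv);
            }

            //remove the message whether or not the add succeeded
            queueAccess.DeleteMessage(msg);
        }

        //wake up every 10 seconds and check again
        System.Threading.Thread.Sleep(10000);
    }
}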

WCF Application

As described earlier, the WCF application is what the SharePoint code talks to in order to add items to the queue, get the list of claim types, and search for or resolve a claim value.  Like a good trusted application, it has a trust established between it and the SharePoint farm that is calling it.  This prevents any kind of token spoofing when asking for the data.  At this point there isn’t any finer grained security implemented in the WCF itself.  For completeness, the WCF was tested first in a local web server, and then moved up to Azure where it was tested again to confirm that everything works.
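
Pulling together the operations the SharePoint code calls (GetClaimTypes, ResolveClaim, SearchClaims and AddClaimsToQueue), the service contract looks roughly like this; the operation names come straight from the code earlier in this document, while the parameter names and the AddClaimsToQueue return type are assumptions:

using System.ServiceModel;

[ServiceContract]
public interface IAzureClaims
{
    //returns the claim types the provider supports
    [OperationContract]
    ClaimTypeCollection GetClaimTypes();

    //exact-match lookup of a single claim value
    [OperationContract]
    UniqueClaimValue ResolveClaim(string ClaimType, string ClaimValue);

    //BeginsWith search, capped at MaxResults matches
    [OperationContract]
    UniqueClaimValueCollection SearchClaims(string ClaimType, string Criteria, int MaxResults);

    //adds a new membership request to the claims queue
    [OperationContract]
    bool AddClaimsToQueue(string Content);
}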

So that’s the basics of the Azure components of this solution.  Hopefully this background explains what all the moving parts are and how they’re used.  In the next part I’ll discuss the SharePoint custom claims provider and how we hook all of these pieces together for our “turnkey” extranet solution.  The files attached to this posting contain all of the source code for the data access class, the test project, the Azure project, the worker role and WCF projects.  They also include a copy of this posting in a Word document, so you can actually make out my intent for this content before the rendering on this site butchered it.

You can download the attachment here: