Using Managed Metadata With Variations in SharePoint 2013

I was recently working with some folks who were having trouble getting metadata driven navigation working with variations, and it turned out there are a few important details to know to get this working, so I thought I would share them here.  The basic gist of the problem is this – you want to use variations for all of the well-known goodness it provides: automatically creating different locale subsites, automatically copying pages and lists from a source site to all of the target sites, etc.  In addition to that, though, you want to use the Managed Metadata Service to provide metadata driven navigation, so you can have friendly URLs in the site and a navigation scheme that is centrally managed.  You can definitely do this, but there are some important steps to remember.

The first and most important thing to understand is that variations will not copy a term set term to a target site unless there is a page associated with it!  So if you just go in and create a term set and then expect all of those terms to be copied to target sites, you’ll be disappointed.  Variations is really designed to copy sites, lists and pages, so you need to use those same sorts of constructs to get MMS terms copied along as well.  Once you have associated terms with pages in the source site collection, running the Variations Propagate Page Job Definition job for the web application in which your variations site collection exists will take care of creating the term reuse and copying the pages over.

The next important thing to remember is that you always need to work with the term set from within the site collection where you are using variations.  For example, if you open the term store management tool from the Manage Service Applications page in central admin and then use it to Translate a term, you will likely see an error that reads something like “this site has not been shared with you”.  However, if you open the tool from within the site collection where the variations are located, you should not see this issue.

Finally, this is “like” a one-time copy operation.  What that means is that if you change attributes of a term, move it to a different place in the term hierarchy, etc., those changes won’t get picked up by the variations timer jobs.  So if you make changes to the terms themselves after they’ve been pushed out to the target sites, you will likely need to do some manual updating, write scripts for the updates, etc.

This still allows you to use friendly URLs in your site; you just need to associate them with pages.  So as an example, suppose I have a term set that contains a term called Northwinds and I want that to show up as navigation in all my target sites.  Here’s the process I would go through (all done from the source variations site):

  1. Go to the Pages library and create a new page.  In this case I will call it Northwinds.  You can use whatever page template you want, just create the page and add content.
  2. Click on the page in the Pages library so you can go edit it.
  3. Click on the Page tab in the ribbon, then click on the Page URLs button: 

  4. Click the link to “Add a friendly URL to this page”, and then select the term you want this page to be associated with.

 

 

You can decide whether to associate it with just this term or this term and all child terms or all sibling terms.

  5. Click the Add button.
  6. Click the Finish button.
  7. Publish your page.
  8. Run the Variations Propagate Page Job Definition timer job (it runs every 15 minutes by default, but you can run it manually if you don’t want to wait; see the PowerShell sketch below).
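If you don’t want to wait for the schedule, here’s a minimal PowerShell sketch for kicking the job off manually.  The web application URL is a placeholder and the display name filter is an assumption, so verify the exact job name on your farm with Get-SPTimerJob first:

#hypothetical web application URL; adjust the job name filter for your farm
$wa = Get-SPWebApplication "http://variations.contoso.com"
$job = Get-SPTimerJob -WebApplication $wa | Where-Object { $_.DisplayName -like "*Variations Propagate Page*" }
$job | Start-SPTimerJob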

That’s all there is to it.  Here’s a before and after example:

BEFORE:

AFTER:

Resolving a Problem Creating a New Encryption Key for Secure Store Service in SharePoint 2013

I ran across this problem today, which was caused by something that’s easy to forget, so I thought I would share the issue and resolution.  I was in central admin trying to create a new Encryption Key for the Secure Store Service.  When I tried to generate a new key it failed, and the ULS logs contained an error message like this:  User [0#.w|contoso\fred] tried [ChangeMasterSecretKey] operation, user does not have admin privileges to perform the operation.  I found this puzzling, so after a few tries I tried logging in as the farm administrator account and creating the key.  Voila – it worked!  This however was not the end of the problems.  I then logged back in as myself, went to the manage Secure Store Service page, and got a message that said my access to the Secure Store Service was denied.  I’m a farm admin, so what’s the deal?

Well…as it turns out, for the Secure Store Service you also have to go into Manage Service Applications, select the Secure Store Service, and then click the Administrators button in the ribbon.  Even though I’m a farm admin, I still have to specifically add my account as an Administrator for the Secure Store Service.  It reminded me of all the times and places we had to do this in SharePoint 2010, so this little event was a good prompt to check for these little gotchas again in SharePoint 2013.  With my account added I can now generate or refresh a key, as well as generally just use the SSS.
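If you’d rather script it, here’s a minimal PowerShell sketch of the same fix; the account name is a placeholder and the “Full Control” rights string is an assumption, so double-check the available rights on your farm:

#grant an account admin rights on the Secure Store Service application
$sss = Get-SPServiceApplication | Where-Object { $_.TypeName -like "*Secure Store*" }
$security = Get-SPServiceApplicationSecurity $sss -Admin
$principal = New-SPClaimsPrincipal -Identity "contoso\fred" -IdentityType WindowsSamAccountName
Grant-SPObjectSecurity $security $principal "Full Control"
Set-SPServiceApplicationSecurity $sss $security -Admin

#with rights in place you can also generate a new master key from PowerShell
$proxy = Get-SPServiceApplicationProxy | Where-Object { $_.TypeName -like "*Secure Store*" }
Update-SPSecureStoreMasterKey -ServiceApplicationProxy $proxy -Passphrase "pass phrase goes here"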

Using Taxonomy AKA Managed Metadata AKA TermSets with CSOM in SharePoint 2013

I recently had occasion to figure out how to access the Managed Metadata Service using the new client object model libraries for SharePoint 2013.  In snooping around, I found virtually no documentation on this topic whatsoever, other than generic boilerplate class definitions that have clearly been emitted by some automated process rather than a human who’s actually done any development with it.  So, for those of you looking around the Microsoft.SharePoint.Client.Taxonomy namespace, hopefully this will provide a little kick start to get you headed in the right direction.

The first thing to understand is that, much like in the managed code object model, the TaxonomySession is the one that rules them all.  Unfortunately, if you look at the fairly arcane MSDN docs on this topic, they suggest that there is some usable constructor for this class.  Please look at the dancing monkey in my left hand and ignore that documentation – it is poo.  Instead, the way to get started with the TaxonomySession is to create your ClientContext instance first, and then use that to create a new TaxonomySession.  For purposes of this discussion I will skip showing how to create your ClientContext instance because there are a ton of examples out there about how to do that; in my case I’ve written a simple SharePoint App and am using the new app model to do the lifting for me there to get my ClientContext.  Once I have that I can get my TaxonomySession like so (CheckClientContext() is the method I use to get my ClientContext variable, ctx, initialized):

 

CheckClientContext();

TaxonomySession ts = TaxonomySession.GetTaxonomySession(ctx);

 

So at this point I haven’t really done anything…there’s no actual data in my TaxonomySession that I can use, but it’s at least set up for use.  Now I can start loading up the collections of data that make up most of the elements in the Managed Metadata Service.  The key thing, as with all collections you access via CSOM, is that you need to Load() them with the ClientContext and then call ExecuteQuery() before you try to access any of their members, otherwise you will error out.  So here’s an example of starting to drill down to get the collection of Term Stores (of which there is usually just one, but it illustrates how to use the collections):

ctx.Load(ts.TermStores);
TermStoreCollection tsc = ts.TermStores;
ctx.ExecuteQuery();

foreach (TermStore tStore in tsc)
{
     //do something here
}

 

From here you should be off and running, at least as far as the pattern for using this is concerned.  What I’ll do here is paste some code that I use to enumerate through all the goo in my Managed Metadata Service, down to all of the individual terms.  Not a production app of course, but hopefully a good illustration of how to work your way through the model.  I mean “illustration” figuratively, since the formatting on this site continues to suck eggs:

 

//requires references to Microsoft.SharePoint.Client, Microsoft.SharePoint.Client.Runtime
//and Microsoft.SharePoint.Client.Taxonomy; Debug is from System.Diagnostics
CheckClientContext();

TaxonomySession ts = TaxonomySession.GetTaxonomySession(ctx);
ctx.Load(ts.TermStores);
TermStoreCollection tsc = ts.TermStores;

System.Text.StringBuilder sb = new System.Text.StringBuilder(4096);

//before referring to any member, need to execute the query, i.e.
//TermStore tStore = tsc[0]; or foreach…
ctx.ExecuteQuery();

foreach (TermStore tStore in tsc)
{
    sb.Append("Term Store: " + tStore.Name + Environment.NewLine);
    ctx.Load(tStore.Groups);
    ctx.ExecuteQuery();

    foreach (TermGroup tg in tStore.Groups)
    {
        sb.Append("\t->" + "Term Group: " + tg.Name + Environment.NewLine);
        ctx.Load(tg.TermSets);
        ctx.ExecuteQuery();

        foreach (TermSet tSet in tg.TermSets)
        {
            sb.Append("\t\t->" + "Term Set: " + tSet.Name + Environment.NewLine);
            ctx.Load(tSet.Terms);
            ctx.ExecuteQuery();

            foreach (Term t in tSet.Terms)
            {
                sb.Append("\t\t\t->" + "Term: " + t.Name + Environment.NewLine);
            }
        }
    }
}

Debug.WriteLine(sb.ToString());

 

Here you can see an example of the output from my farm:

This should be enough to get you anywhere you need to in the Taxonomy client model.  Enjoy!

 

Mapping User Profiles for SAML Users with an AD Import in SharePoint 2013

Here’s a topic that becomes very important in SharePoint 2013: making sure you have a fully populated user profile application.  In SharePoint 2013 the user profile system plays a critical role in the OAuth infrastructure, which is what allows certain trusted application scenarios to succeed by allowing other applications to act on behalf of a user.  In order for an application to be able to “know” what a user can do, though, it needs to capture the list of attributes for that user so proper security trimming rules can be applied.  That’s the 100,000 foot view of this; I will write more about user profiles, synchronization and the impact of different authentication choices in a later post.

For now it’s enough to know that it’s very important and needs to be done.  Given this importance, and the fact that Bryan Porter’s seminal posting on this from SharePoint 2010 has disappeared since he moved his blog over, I decided it would be worth covering again.  The good news is that it isn’t super complicated if you are importing from Active Directory, which is what this post will cover. 
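The walkthrough below assumes the SPTrustedIdentityTokenIssuer already exists.  If you haven’t created one yet, a minimal PowerShell sketch that uses email address as the identifier claim might look something like this; the name, certificate path, realm and sign-in URL are all hypothetical placeholders:

#hypothetical values throughout; swap in your own token signing cert, realm and STS URL
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\certs\tokensigning.cer")
$map = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming
New-SPTrustedIdentityTokenIssuer -Name "Contoso SAML" -Description "SAML provider" -Realm "urn:sharepoint:contoso" -ImportTrustCertificate $cert -ClaimsMappings $map -SignInUrl "https://sts.contoso.com/adfs/ls" -IdentifierClaim $map.InputClaimType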

Again, the SPTrustedIdentityTokenIssuer has to be in place before you can configure the profile import aspect of things.  Once that’s done, open your browser and navigate to your UPA management page.  Click on the Configure Synchronization Connections link, and then click the link to Create A New Connection.  The only thing that’s different here compared to any other profile connection to AD is the Authentication Provider Type drop down.  In that drop down you want to select Trusted Claims Provider Authentication.  When you do, the Authentication Provider Instance drop down below it will populate with a list of all the SPTrustedIdentityTokenIssuers you have created.  Just select the one that should be used with this profile connection, fill out all of the other connection properties as you normally would for importing from AD, and save it.  Here’s a screenshot of what mine looks like:

 

Once that’s done, the next thing you need to do is update the property mappings so SharePoint knows which field you are importing contains the value that users will use as the identity claim.  To do that, go back to the UPA management page and click on the Manage User Properties link.  Scroll down, find the Claim User Identifier property, and Edit it.  If there is an existing Property Mapping for Synchronization value, delete it.  Add a new one that maps the property you are importing from AD as the identity claim value.  In my case, I’m using email address as the identity claim, and in AD the user’s email address is stored in the AD attribute called “mail”.  So I just select the profile connection I created above, type “mail” in the Attribute edit box and click the Add button.  It looks like this:

After I’m done this is what it looks like:

The Claim Provider Identifier and Claim Provider Type are supposed to be set automatically when you configure the profile import connection.

That’s it – now I can do profile imports and it will automatically map that identity claim value to the account name property in the profile system.  Here’s an example of what it looks like after I’ve run a profile import.  Note that instead of using domain\user for the account name, it is showing a SAML claims format with email address for the account name:

Setting Up the Subscription Settings Service Application in SharePoint 2013

Here’s a little tip that’s not particularly brilliant, but just useful, as you start working with the new application model in SharePoint 2013.  You always need a Subscription Settings service application, but it’s one of those things that you cannot create in the UI.  So just keep a bookmark to this post and come back as needed for a little copy and paste.  This brief PowerShell snippet will create the service application and proxy in one fell swoop:

New-SPSubscriptionSettingsServiceApplication -ApplicationPool "SharePoint Web Services Default" -Name "Subscription Settings Service Application" -DatabaseName "SubscriptionSettingsDB" | New-SPSubscriptionSettingsServiceApplicationProxy

Once you’ve created the Subscription Settings service application, you can go configure your application domain in Central Admin.
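If you’d rather stay in PowerShell for that step as well, the app URL settings can also be configured with a couple of cmdlets; the app domain and site subscription prefix below are just placeholder values:

Set-SPAppDomain "contosoapps.com"
Set-SPAppSiteSubscriptionName -Name "app" -Confirm:$false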

Configuring Secure Store Service to Use Accounts Across A One Way Trust in SharePoint 2010

I fought with this recently and didn’t find any info about it out in the ether anywhere so just thought I’d share in case someone else runs across it.  Assume you have SharePoint installed in a classic resource forest scenario.  So the SharePoint farm is in what we’ll call “Resources” forest; it has a one-way outgoing trust with the “Users” forest, where all of the user accounts live.  That means that Resources trusts the accounts from Users, but Users does not trust the accounts from Resources.  So what happens if you want to add accounts from the Users forest into a Secure Store Service target application?  Well you just need to do the same kind of people picker customization that you would be doing for your content web applications, only you need to do it for the central admin web application in this case. 

For example, in order to select and resolve accounts from the Users forest in your end user web applications you would run the command stsadm -o setproperty -propertyname peoplepicker-searchadforests -propertyvalue blah-blah-blah -url http://yourWebApp.  So to enable this scenario, you just run the same exact command, only the -url parameter should be http://urlToCentralAdmin.  After you make that change you should be good to go.
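To make that concrete, the commands might look something like the following.  The forest name, account, password and central admin URL are all hypothetical, and remember that stsadm needs an encryption key set with setapppassword (run on each server in the farm) before it will store credentials in this property:

stsadm -o setapppassword -password "SomeKeyValue"
stsadm -o setproperty -propertyname peoplepicker-searchadforests -propertyvalue "forest:users.contoso.com,USERS\svc_peoplepicker,P@ssw0rd" -url http://centraladmin:5555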

Using Secure Store Service in a Custom Claims Provider with SharePoint 2010

I noticed an unusual wrinkle recently when using Secure Store Service (SSS) in a custom claims provider I was working on.  This is actually an interesting scenario because I was doing what many folks want to do – custom claims augmentation.  I needed to connect to a remote data source so I could query for some additional information about each user and then use that to determine what claims to augment or not.

As a general guideline for using data sources in custom claims providers, it’s important to remember that your custom claim provider assembly is going to be kept alive in memory by the SharePoint STS process.  That makes it a lot easier to retrieve “information” – whether that’s a dataset, a set of credentials, etc. – because you can store it in a class level variable and it remains available for use until the next IISRESET.  The big limitation here is that not all SharePoint farm resources may be available to you at the time your custom claim provider class is instantiated, and that’s the moral of today’s story.

In this particular case I wanted to retrieve data from the SSS in the constructor for my custom claims provider, and then I was going to do “some other stuff” with it; in my case I was creating a WindowsIdentity from a domain across a one-way trust so I could use it to create an impersonation context that had permissions to query the remote Active Directory.  Where the issue occurred is that when I tried to do anything with my reference to the SSS in the constructor, it ALWAYS timed out.  It didn’t matter what method was called on the SSS, it just always failed after 60 seconds with a timeout error.

The fix was simply to move the code out of the constructor.  The same exact code worked perfectly when invoked from my override of the FillClaimsForEntity method.  It was really just luck and trial and error that I figured this out so it seemed like a good tip to share.

As long as we’re down this path of this particular problem (logging in to a remote domain and impersonating) it’s probably worth throwing out one other pattern that I got out of this, and one other gotcha.

As described above, because your assembly stays loaded in the STS process, you can “keep alive” your class level variables.  Since I obviously didn’t want to be repeatedly logging into the remote domain when I needed to query it, I created a class level variable for my WindowsIdentity.  The pattern went something like this:

  1. See if I’ve retrieved the SSS credentials yet
    1. If not, execute the code that:
      1. Retrieves the credentials from SSS
      2. Uses the LogonUser API to log on to the remote domain with the credentials I got from the SSS
      3. Instantiates my WindowsIdentity variable so it has the credentials of the remote user
  2. Check to see if my WindowsIdentity variable is null or not
    1. If not, execute the code that:
      1. Creates a new instance of a WindowsImpersonationContext from WindowsIdentity.Impersonate()
      2. Queries the remote domain
      3. Calls Undo on my WindowsImpersonationContext

That pattern seems to work well and is about as much performance as I can wring out of it so far.  Now here’s the gotcha – you do NOT want to call Impersonate() on your WindowsIdentity instance and then NOT call Undo on the resulting WindowsImpersonationContext afterwards.  If you do not undo the impersonation then in my experience the site will no longer render.  Add your Undo call back and everything starts working again.