SAML Support for SharePoint-Hosted Apps with ADFS 3.0

This is another case where I’m just passing information along, based on the great work of others. As you probably know, we did not have a good story for SharePoint-hosted apps in web applications that use SAML authentication with ADFS 2.0. However, I have now had reports from a couple of different teams that they ARE working with ADFS 3.0. The main changes needed to make this work are:

  • In ADFS you need to define a wildcard WS-Fed endpoint. For example, normally for a SharePoint web application, in ADFS you create a relying party and set the WS-Fed endpoint to something like https://www.foo.com/_trust/. To do the same thing with apps, you take your app namespace – assume it’s “contosoapps.com” – and add a WS-Fed endpoint like this: https://*.contosoapps.com/_trust/.
  • Configure the SharePoint STS to send the wreply parameter.  You can do that with PowerShell that looks like this:

$sts = Get-SPTrustedIdentityTokenIssuer
$sts.UseWReplyParameter = $true
$sts.Update() 

One other thing to note – the behavior to use the wreply parameter is supposed to be turned on by default in an upcoming CU. I heard it was actually the April 2014 CU, but I have not had a chance to confirm whether it really made it in. It won’t hurt to run the PowerShell above though.

This is good news – thanks to those of you who shared your experiences!

More Info on an Old Friend – the Custom Claims Provider, Uri Parameters and EntityTypes in SharePoint 2013

Back to an oldie but a goodie – the custom claims provider for SharePoint. I believe this applies to SharePoint 2010 as well, but honestly I have only tested what I’m about to describe on SharePoint 2013 and don’t have the bandwidth to go back and do a 2010 test. What I want to describe today is the values you may expect to get, and the values you actually get, in the Uri parameter of the custom claims provider methods FillClaimsForEntity, FillResolve, and FillSearch. Chances are they may not be what you expect.

  • FillClaimsForEntity – this method is invoked at login time and is used for claims augmentation.  The Uri parameter that you get (“context”) will NOT tell you what site a person is authenticating into.  The value you will get back is the root of the web application zone.  So if a user is trying to get into https://www.contoso.com/sites/accounting, the value for the “context” parameter will be https://www.contoso.com.
  • FillResolve – this method has two different overloads, but both include a Uri parameter (“context”). The value you get for this parameter varies. When you’re in a site collection, the Uri will be for the site collection you are in. When you are in central admin and your provider is invoked while picking a site collection administrator, “it varies”. When you are first selecting a site collection administrator, the Uri value will be the root site collection again – NOT the site collection you are trying to set the admin for. Unfortunately, it gets a bit more convoluted: when you click the OK button to save your changes, the FillResolve method gets invoked again. This time, however, the Uri value WILL BE the actual site collection Url for which you just changed the site collection admin.
  • FillSearch – this method is invoked when someone is doing a search in people picker.  It exhibits the same behavior as FillResolve above – when you’re in a site collection you get the correct site collection Uri.  When you’re in central admin and you are trying to search for an entity to add as a site collection admin, the value for the Uri parameter is the root site collection – NOT the site collection for which you are setting the admin.
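If you want to verify this behavior in your own farm, a trimmed sketch like the one below can help – it just traces the context Uri that each method receives. The class name and logging approach are illustrative, and the other required SPClaimProvider overrides are omitted for brevity:

using System;
using System.Collections.Generic;
using Microsoft.SharePoint.Administration.Claims;
using Microsoft.SharePoint.WebControls;

public class LoggingClaimsProvider : SPClaimProvider
{
    public LoggingClaimsProvider(string displayName) : base(displayName) { }

    protected override void FillClaimsForEntity(Uri context, SPClaim entity, List<SPClaim> claims)
    {
        //at login time this is the root of the web application zone
        LogContext("FillClaimsForEntity", context);
    }

    protected override void FillResolve(Uri context, string[] entityTypes, string resolveInput, List<PickerEntity> resolved)
    {
        //varies as described above; the second overload behaves the same way
        LogContext("FillResolve", context);
    }

    protected override void FillSearch(Uri context, string[] entityTypes, string searchPattern, string hierarchyNodeID, int maxCount, SPProviderHierarchyTree searchTree)
    {
        //same behavior as FillResolve
        LogContext("FillSearch", context);
    }

    private static void LogContext(string method, Uri context)
    {
        //swap in ULS logging if you prefer
        System.Diagnostics.Trace.WriteLine(method + " context: " + context);
    }
}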

All of the details above apply to the Uri parameter in those methods. There’s one other thing to be aware of as well, however, and that’s the array of entity types provided in these methods. The string[] entityTypes parameter is important to understand because it lets you know what type of result (i.e. PickerEntity) you should return when FillResolve or FillSearch is invoked. The main scenario where it matters is when you are setting the site collection administrator. To this day, SharePoint has a limitation that only an individual user (or better stated, only an identity claim) can be added as a site collection administrator. What I’ve found is that when you are trying to set the site collection administrator and your custom claims provider is invoked from within central admin, the entityTypes array correctly contains only one value – SPClaimEntityTypes.User. HOWEVER…if you are in a site collection and you try to change the site collection administrators from within it, the entityTypes array contains five different entity types, instead of just User. At this point I don’t know of a good way to distinguish that from a legitimate request within the site collection where all of those entity types would be appropriate – such as when you’re adding a user / claim to a SharePoint group or permission level.
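Here’s a sketch of what checking for that might look like inside FillSearch (the same applies to FillResolve); SPClaimEntityTypes comes from the Microsoft.SharePoint.Administration.Claims namespace:

protected override void FillSearch(Uri context, string[] entityTypes, string searchPattern, string hierarchyNodeID, int maxCount, SPProviderHierarchyTree searchTree)
{
    //from the central admin site collection administrators page this is true;
    //from pickers inside a site collection you can get several entity types,
    //including, unfortunately, the in-site site collection administrators page
    bool identityOnly = (entityTypes.Length == 1) &&
        (entityTypes[0] == SPClaimEntityTypes.User);

    if (identityOnly)
    {
        //safe to return only identity (user) claims
    }
    else
    {
        //could be a group or permission level picker, where all of the entity
        //types are appropriate -- there is no reliable way to tell the in-site
        //admin page apart from those
    }
}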

I don’t know if or how much any of this may change in the future (i.e. whether these are considered bugs or not), so for now I just wanted to document the behavior. That way, as you’re creating designs for your farm implementations, you know exactly what info you’ll have within your custom claims provider to help make fine-grained security decisions.

Claim Type Exceptions with Custom Claims Providers in SharePoint 2013

This issue applies to SharePoint 2010 as well but…suppose you have created a custom claims provider and one of the things you want to do is use some custom claim types. By custom claim types I just mean types that are not one of the standard out of the box types like email, upn, role, etc. Now, in this posting from way back in 2011 – http://blogs.technet.com/b/speschka/archive/2011/04/02/how-to-add-additional-claims-in-adfs-2-0-that-can-be-consumed-in-sharepoint-2010.aspx – I described the format that you need to use for your claim type in order for it to be accepted by SharePoint.

Suppose that you have built and deployed your custom claims provider, though, and you decide you really need another claim type – we’ll call it http://www.foo.com/bar. You start using the claim by creating a new SPClaim and that works just fine. But then you want to use the claim somewhere that requires an encoded value – like adding it to a web app policy or a SharePoint group or whatever. So you get an instance of the SPClaimProviderManager and you try the EncodeClaim method on your SPClaim. Instead of happiness, what you get is an ArgumentException whose param name is claimType, and in the stack trace you see it happened in the EncodeClaimIntoFormsSuffix method.
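For the search engines, here’s a minimal sketch of the failing call; the claim value and issuer name are illustrative:

using Microsoft.SharePoint.Administration.Claims;

SPClaim claim = new SPClaim(
    "http://www.foo.com/bar",                  //the new custom claim type
    "someValue",
    "http://www.w3.org/2001/XMLSchema#string", //standard string value type
    SPOriginalIssuers.Format(SPOriginalIssuerType.TrustedProvider, "MyTokenIssuer"));

//throws ArgumentException (param name "claimType") until SharePoint knows about
//the claim type -- EncodeClaimIntoFormsSuffix shows up in the stack trace
string encoded = SPClaimProviderManager.Local.EncodeClaim(claim);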

You may think, hey – I just need to add this claim type to the list of claims my provider supports, which I do by modifying the FillClaimTypes override of my custom claims provider. That actually is correct – you DO need to do that. So you do that, recompile and redeploy the assembly…and continue to get the same error. The fact of the matter is that you’ve done everything correctly from a coding standpoint. The problem is that this is a different flavor of the same problem that SharePoint SPTrustedIdentityTokenIssuers have: the claims collection is immutable. That just means that after you create the claims mappings for an SPTrustedIdentityTokenIssuer, you cannot go back and change them afterwards. Yes, I know people have posted code on how to do that; it is also unsupported to do so.
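For reference, the FillClaimTypes change itself is simple – something along these lines, where the claim types are illustrative. Just remember that FillClaimValueTypes has to return a matching number of entries, in the same order:

protected override void FillClaimTypes(List<string> claimTypes)
{
    //claim types the provider supported originally (illustrative)
    claimTypes.Add("http://www.foo.com/role");

    //the new custom claim type
    claimTypes.Add("http://www.foo.com/bar");
}

protected override void FillClaimValueTypes(List<string> claimValueTypes)
{
    //one entry per claim type above, in the same order
    claimValueTypes.Add("http://www.w3.org/2001/XMLSchema#string");
    claimValueTypes.Add("http://www.w3.org/2001/XMLSchema#string");
}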

So, how do you fix this? If yours is NOT the default claims provider, you should be able to just uninstall and remove the custom claims provider feature, remove all instances of it from the GAC, and deploy all over again. If yours IS the default claims provider, then unfortunately the fix is the same as if you needed to change the claims mappings for your SPTrustedIdentityTokenIssuer. You need to change the configuration of any web apps that are using your SPTrustedIdentityTokenIssuer so that they no longer use it, and then remove your custom claims provider. Then you need to remove the SPTrustedIdentityTokenIssuer, redeploy your custom claims provider, recreate your SPTrustedIdentityTokenIssuer, make your custom claims provider the default provider, then go back and add the SPTrustedIdentityTokenIssuer to the web apps that need it. It wears me out just typing that sentence, let alone doing the work.

Once you’ve done your redeployment though, SharePoint should pick up on your new collection of claims that you will be using.  Until you need to change them again.

Integrating Yammer and SharePoint 2013 Search Results

This post is in response to a question I get pretty frequently when talking about Yammer: “how can I crawl my Yammer content from SharePoint?” Well, unfortunately you can’t, and I have no trick up my sleeve today to show you otherwise. However, that doesn’t mean you can’t have a nice consolidated set of search results that includes content from both Yammer and SharePoint, and that’s the point of today’s post. By the way, this code sample is based on a demo that I showed at the 2014 SharePoint Conference, so for those of you who were looking for some of that code, here you go.

As I’ve mentioned many times when talking to folks about Yammer development, there are a number of REST endpoints for your programming pleasure. After mentioning this in about three different venues I decided it would just be easier to write something up to demonstrate it, and that’s exactly what I did. What we’ll walk through today is the specific implementation that I used for this case. To begin with, let’s look at my out of the box search results page when I do a search for products. Because of the incredibly bad way in which this blog site deals with pictures, I’m just going to give you a link to the picture here; CTRL + Click to open it in a new browser tab: https://onedrive.live.com/?cid=96D1F7C6A8655C41&id=96D1F7C6A8655C41%217884&v=3&mkt=en-US#cid=96D1F7C6A8655C41&id=96D1F7C6A8655C41%217885&v=3.

Okay, so I’ve previously posted about using the Yammer REST endpoints from .NET. For this example I simply built out from that sample code, as I’ve been suggesting all of you do when you find something you need to code for that the original sample does not cover. You can find that original post and sample code at http://blogs.technet.com/b/speschka/archive/2013/10/05/using-the-yammer-api-in-a-net-client-application.aspx. With that in hand, I decided to write my own REST endpoint for this example. So why did I decide to do that?

  • I wanted to demonstrate the use of the new CORS support in WebAPI 2.1.  This allows me to define what hosts can make cross-domain calls into my REST endpoint to query for Yammer data.  That gives me an additional layer of security, plus it now lets me make those client-side cross domain calls to my endpoint.
  • I wanted to have something that delivered HTML as the result.  That allows me to ensure that anyone querying Yammer gets the same exact user experience no matter what application they are using.
  • I wanted to have something that could be reused in many different applications – could be other web apps, could be Windows 8 apps, could be mobile apps, whatever – with a simple REST endpoint there’s no end to the possible reuse.
  • In my scenario, I wanted to be able to issue queries against Yammer even for people that didn’t have Yammer accounts. So in this scenario what I did was use a service account to make all the query requests. By “service account”, I just mean I created an account in Yammer whose only purpose in life is to provide me with an access token that I can use to programmatically read or write content with Yammer. For more details about access tokens in Yammer, please see my initial post on this topic that I linked to above.

So with that being said, I started out by adding all the custom classes for serializing Yammer JSON data to my project, along with my code for making GET and POST requests to Yammer (as described in the original blog post).

The next thing I did was build a little console app to test sending a query to Yammer and getting back a set of search results. I did that to a) nail down the process for issuing the query and b) get the JSON back so I could build a new class into which I can serialize the set of search results. I’ve attached some source code to this posting, so you can find the YammerSearchResults class and see how I incorporated it into the project. I’ll also look at it in a little more detail later in this post.
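If you haven’t seen the pattern from the original post, the serialization classes look roughly like this. This is a trimmed, hypothetical sketch – the real YammerSearchResults class, with all of its fields, is in the attached source, and YammerMessages comes from the original sample code:

using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;
using System.Text;

[DataContract]
public class YammerSearchResults
{
    [DataMember(Name = "messages")]
    public YammerMessages Messages { get; set; }

    //deserialize an instance from the raw JSON the search endpoint returns
    public static YammerSearchResults GetInstanceFromJson(string json)
    {
        DataContractJsonSerializer dcs = new DataContractJsonSerializer(typeof(YammerSearchResults));
        using (MemoryStream ms = new MemoryStream(Encoding.UTF8.GetBytes(json)))
        {
            return (YammerSearchResults)dcs.ReadObject(ms);
        }
    }
}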

Once I had all of those foundational pieces in place, I created my new WebAPI endpoint. There are many places out on the web where you can get your WebAPI info; I used this example from Mike Wasson as my starting point for learning everything I needed for this project: http://www.asp.net/web-api/overview/security/enabling-cross-origin-requests-in-web-api. With my new WebAPI controller class created, I next enabled CORS support in my project by adding this to the WebApiConfig.cs file (in the App_Start folder):

//enable cross site requests
config.EnableCors();
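For context, that call sits in the standard WebApiConfig.Register method; assuming the Microsoft.AspNet.WebApi.Cors NuGet package is installed, the whole file looks roughly like this:

using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        //enable cross site requests
        config.EnableCors();

        config.MapHttpAttributeRoutes();

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}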

Now that my project supports CORS, I need to flesh out the where and how of supporting it. To do that I added this attribute to the WebAPI controller class:

[EnableCors(origins: "http://localhost:1629,https://saml.vbtoys.com,https://sps2", headers: "*", methods: "*")]

So the main things worth pointing out here are that I added a list of hosts that I trust in the origins attribute. The “localhost:1629” reference is in there because that’s the web dev server Visual Studio created for a test web application project; I used that to test the functionality of the REST endpoint in a standard HTML page before I ever tried to incorporate it into SharePoint. The saml.vbtoys.com and sps2 hosts are ones that I use for SharePoint provider-hosted apps and SharePoint respectively, so that allows me to use the endpoint both in my provider-hosted apps as well as SharePoint itself. Finally, the headers and methods attributes are configured to allow all headers and methods through.

The next thing I did was create the method in my WebAPI controller that I use to execute the query against Yammer and return results. The method signature for it looks like this:

// GET api/<controller>?search=steve%20peschka
public HttpResponseMessage Get(string search)

So as you can see, I’m going to take a string as input – my search string – and return an HttpResponseMessage. Inside the method, the code should look quite familiar if you have seen my original post on Yammer dev with .NET – I’m just using my MakeGetRequest and GetInstanceFromJson methods:

List<SearchResult> finds = new List<SearchResult>();
string response = YammerREST.MakeGetRequest(searchUrl + "?search=" + search, accessToken);
YammerSearchResults ysr = YammerSearchResults.GetInstanceFromJson(response);

The “searchUrl” is just the standard Yammer REST endpoint for search, and the “accessToken” is the access token I got for my Yammer service account. Okay, so I got my set of search results back, but one of the first things I noticed is that the results include very little information about the user – basically just a Yammer user ID. Of course, I wanted to show not only the author information but also the picture of the Yammer user. That unfortunately requires making another REST call to get that information. In my particular scenario, I only wanted to show the top 3 results from Yammer, so I simply enumerate through the results and get the user information for each one.

One of the nice things about the YammerSearchResults class I created for this is that I was able to reuse the YammerMessages class from my original posting. My search results include a collection of messages (where each message matches the search criteria), so I can simply use the code from before to enumerate through the results. This is what that looks like:

foreach (YammerMessage ym in ysr.Messages.Messages)
{
    //get the Yammer user that posted each message so we can pull in their picture url
    string userUrl = oneUserUrl.Replace("[:id]", ym.SenderID);
    response = YammerREST.MakeGetRequest(userUrl, accessToken);
    YammerUser yu = YammerUser.GetInstanceFromJson(response);

    //some other stuff here I'll describe next
}

While I’m enumerating through the messages in the search results, I make another call out to get the information I want for each user. Again, I’m able to use the MakeGetRequest and GetInstanceFromJson methods I described in my original post. With that in hand, I can create the dataset I’m going to use to generate the HTML of search results. To do that, I created a local class definition in my controller and a List<> of that class type. With those pieces in place I can create one record for each search result that includes both the search result information and the user information. My List<> is called “finds”, and the code for pulling this all together looks like this (it goes in the enumeration loop above, where it says “some other stuff here I’ll describe next”):

                   

//add a new search result
finds.Add(new SearchResult(yu.FullName, yu.WebUrl, yu.FirstName,
    ym.MessageContent.RichText, yu.PhotoUrl, DateTime.Parse(ym.CreatedAt),
    ym.WebUrl));

iCount += 1;

if (iCount == 3)
    break;

As you can see, I’m plugging in the message from the search result with “ym.MessageContent.RichText”, and the rest of the fields are information about the user. Now that I have my list of search results, the rest of the WebAPI controller method is kind of boring. I just build one big long string in a StringBuilder instance – some style info, then HTML for each search result. I then take that big long string and stick it in an HttpResponseMessage to return, like this:

return new HttpResponseMessage()
{
    Content = new StringContent(results),
    StatusCode = System.Net.HttpStatusCode.OK
};
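For reference, the local SearchResult class is just a simple container whose constructor matches the call shown in the loop above. Here’s a hypothetical sketch – the actual definition is in the attached source:

using System;

public class SearchResult
{
    public string FullName { get; set; }
    public string UserWebUrl { get; set; }
    public string FirstName { get; set; }
    public string MessageText { get; set; }
    public string PhotoUrl { get; set; }
    public DateTime CreatedAt { get; set; }
    public string MessageWebUrl { get; set; }

    public SearchResult(string fullName, string userWebUrl, string firstName,
        string messageText, string photoUrl, DateTime createdAt, string messageWebUrl)
    {
        FullName = fullName;
        UserWebUrl = userWebUrl;
        FirstName = firstName;
        MessageText = messageText;
        PhotoUrl = photoUrl;
        CreatedAt = createdAt;
        MessageWebUrl = messageWebUrl;
    }
}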

Shaboom, and there you go. Now, perhaps the best part of all of this is on the SharePoint side. What I decided to do there was create a new search vertical. All that really means is that I added a new search results page to the Pages library in an Enterprise Search site collection. You then add a new search navigation item in the search settings for the site, and point that navigation item at the page you added. Then you customize the page to return whatever search results you want. I’m not covering this in detail here because it’s not really on point for this topic, but it should be relatively easy to find on the web. If not, say so in the comments and we can always cover it in another blog post.

But…I created my new search vertical and then the changes I made to it were pretty simple. First, I plugged the out of the box search results web parts into it. Then I added an out of the box script editor web part above those. This is really the beauty of this solution to me – I didn’t have to create anything custom in SharePoint at all. Since it’s all just client-side script and code, I didn’t write a custom web part for SharePoint – I just created my script and pasted it into the out of the box web part. To complete this story, I would LOVE to paste in here the HTML and javascript that I use in the script editor web part to make this all work. However, it is completely unusable when pasted into the awesome blog editing tools on this site <sarcasm>. So instead you’ll have to get the attachment to see it – just look for the text file called SharePoint Script Part Snippet.txt.

Now, with all that done, you can click here to see what the final results look like:

https://onedrive.live.com/?gologin=1&mkt=en-US#cid=96D1F7C6A8655C41&id=96D1F7C6A8655C41%217884&v=3

Note that in my particular case I chose to only show search results that came from messages. You could just as easily show search results for people, files, etc. Also, I configured links on the search results so that you can click on any one of them to view the conversation in Yammer. I also included a link with “more search results” functionality: when you click on it, a new browser tab opens with the search results page in Yammer, using the query terms that were entered on the page. So it lets you slide over very easily into Yammer to get the full search experience it provides too.

Also – as a side note – someone else at SPC mentioned that they took a similar approach but chose to return the results in an OpenSearch-compliant Xml format so that they could be added as a federated search result. That didn’t fit my particular scenario, but it’s an interesting take on things so I’m including it here for your consideration. Nice work, person that made this comment. 🙂

You can download the attachment here:

Update to Guidance for Patching AppFabric on SharePoint 2013 Distributed Cache Servers

For those of you who had the “pleasure” (okay, slight sarcasm there) of watching me talk about distributed cache in the SharePoint Ignite materials, you may recall that our guidance at the time SharePoint 2013 shipped was that you should not patch AppFabric on SharePoint servers independently – meaning that if it needed patching, the expectation was that the patch would be delivered with a SharePoint update, and standalone AppFabric patches should not be applied. Since that time, we’ve reviewed the plans and decided that patching AppFabric directly is in fact how it should be managed going forward. So you will see a KB article here that describes doing just that: http://support.microsoft.com/kb/2843251.

This has been causing a bit of confusion recently, so we were lucky enough to get final clarification so folks can move forward. That being said, you can (and should) go look for CU4 for AppFabric, because it has some bug fixes and performance improvements that will help you out in SharePoint land. You can find it at http://support.microsoft.com/kb/2800726.

 

Programmatically Adding A Trusted Identity Token Issuer to a Web Application Zone in SharePoint 2010 and 2013

Seems like I haven’t had a chance to write a good SharePoint / SAML claims kind of article in a while.  I was happily plugging away on some code this weekend for a scenario I haven’t done before so I thought I would go ahead and post it here for the search engines.  The whole topic in general has kind of reached a point where I’m not really sure if anyone even needs these kinds of postings anymore, but ultimately I decided it’s worth jotting down because there will probably be someone, somewhere that needs to do this in a hurry so perhaps this post will help.

So the topic today is changing the configuration of a web application zone so that it uses an SPTrustedIdentityTokenIssuer – i.e. I want to configure my web app to use SAML claims.

The first challenge I typically face is getting folks to restate the problem more precisely – in this case, we’re not configuring a web app to use SAML, we’re configuring a zone in the web app to use SAML. This is key to remember as you start plugging through the object model to find and change these settings. So let’s start working through some code.

The first step of course is going to be to get a reference to the web application you want to work with.  You can use the ContentService to do that, or I frequently just use a site.  Here’s an example:

using (SPSite s = new SPSite("https://foo"))
{
    SPWebApplication wa = s.WebApplication;

    //the rest of the code in this post goes inside this using block
}

Okay, so now that I have my web application, I need to get the IIS settings for the zone.  In this case I’m just going to hard-code it to look in the default zone:

Dictionary<SPUrlZone, SPIisSettings> allSettings = wa.IisSettings;
SPUrlZone z = SPUrlZone.Default;
SPIisSettings settings = allSettings[z];

As you can probably tell from the code, you can get the settings for any zone, but we’re grabbing them for the default zone.  Once I have that, I can look at what authentication providers the zone is using with this:

List<SPAuthenticationProvider> providers = settings.ClaimsAuthenticationProviders.ToList<SPAuthenticationProvider>();
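For example, a quick existence check against that list might look like this (a sketch – the issuer name is illustrative, and you’ll need a using for System.Linq):

//is our trusted provider already configured on this zone?
bool alreadyConfigured = providers.Any(p => p.DisplayName == "MyTokenIssuer");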

That providers list is where you would look to see if your provider is already being used before trying to add it. In our case we’ll just assume it’s not being used by this zone yet, so next we need to get the SPTrustedClaimProvider that represents our SPTrustedIdentityTokenIssuer. We’ll use this code to do that:

//get the local claim provider manager, which tracks all providers, and get all trusted claim providers into a list
SPClaimProviderManager cpm = SPClaimProviderManager.Local;
List<SPTrustedClaimProvider> claimProviders = cpm.TrustedClaimProviders.ToList<SPTrustedClaimProvider>();

//look to see if ours exists
const string ISSUER_NAME = "MyTokenIssuer";

var cp = from SPTrustedClaimProvider pd in claimProviders
         where pd.DisplayName == ISSUER_NAME
         select pd;

SPTrustedClaimProvider ceProvider = null;

try
{
    ceProvider = cp.First<SPTrustedClaimProvider>();
}
catch
{
    //First() throws an exception if no matching provider exists
}

Okay, so at this point our provider should be in the ceProvider variable. Once we have that, we have a couple of different options to configure the zone to use it. If you just want to add it to the collection of authentication providers the zone is using, use the AddClaimsAuthenticationProvider method of the SPIisSettings instance. If you want to replace the list of authentication providers so the zone only uses your SAML provider (which also means it can’t be crawled, since crawling requires NTLM), use the ReplaceClaimsAuthenticationProviders method. Here’s some code that demonstrates both methods; you would obviously only use one of them:

 

if (ceProvider != null)
{
    SPAuthenticationProvider newProvider = new SPTrustedAuthenticationProvider(ceProvider.Name);

    //this works to add it to the collection of claims providers
    settings.AddClaimsAuthenticationProvider(newProvider);

    //this can be used to replace it so you only have the trusted provider
    providers = new List<SPAuthenticationProvider>();
    providers.Add(newProvider);
    settings.ReplaceClaimsAuthenticationProviders(providers);

    //update the web application to save the authentication provider settings
    wa.Update();
}

So there you have it – hopefully if you ever need to do this, you can now grab this code and run with it.

Setting Up BCS Hybrid Features End to End In SharePoint 2013

For those of you who will be attending SharePoint Conference 2014 (www.sharepointconference.com), I’m going to be teaching a half-day course after the three main days with Bill Baer and some other folks on SharePoint Hybrid features.  This is firming up to be a pretty good session from a presenter standpoint, because we’ll have folks from SharePoint Marketing (Bill), other members of the SharePoint Product Group (Luca, Steve W. and Neil), the Microsoft Technology Center in Irvine (Ali), and our lead / best escalation engineer for SharePoint hybrid (Manas).  That’s a whole lot of hybrid expertise in one location so if you have a chance to attend check out the “SharePoint Server 2013 and Office 365 Hybrid” session in the listings at http://www.sharepointconference.com/content/pre-post-days.

One of the things that I’ll be covering in that session is getting the BCS hybrid features to work. At a high level, the goal of the BCS hybrid features is to allow you to access your on-premises line of business data from within your Office 365 tenant. As I’ve started scraping together some content for this topic, I realized that I did not see a really comprehensive “end-to-end” walk through of how to make this work; it’s really a series of documents and blog posts. So…while getting a sample up and running in the lab I made a list of all the different resources I pulled info from to get things going. In this post I’m just going to do an outline of what those steps and resources are; if there is still need / demand, then once we’ve delivered the SharePoint Conference session I may try to flesh this out with pictures, examples and more detailed instructions. For now I’m hoping you can just follow the steps in order, track down the links I provide to pull together the data you need, and use that to build your BCS hybrid applications.

1.  Find / Create an OData data source:  See 2nd half of http://blogs.technet.com/b/speschka/archive/2012/12/06/using-odata-and-ects-in-sharepoint-2013.aspx
2.  Publish your OData endpoint (https):  you need to have an HTTP/HTTPS endpoint from which your OData content is accessible
3.  Create an External Content Type based on your OData data source:  http://msdn.microsoft.com/library/office/jj163967.aspx
4.  Make your ECT file “tenant ready”:  See 1st half of http://blogs.technet.com/b/speschka/archive/2012/12/06/using-odata-and-ects-in-sharepoint-2013.aspx
5.  Create a connection to your on premises services:  Follow the tips described here:  http://blogs.technet.com/b/speschka/archive/2013/04/01/troubleshooting-tips-for-hybrid-bcs-connections-between-office-365-and-sharepoint-2013-on-premises.aspx
6.  Upload your model (.ECT) file to o365 BCS:  Follow the rest of the tips here:  http://blogs.technet.com/b/speschka/archive/2013/04/01/troubleshooting-tips-for-hybrid-bcs-connections-between-office-365-and-sharepoint-2013-on-premises.aspx
7.  Create a new External List and use the BCS model you uploaded:  once your model is uploaded successfully you can create a new External List in o365 and use that to work with your on-premises LOB data.

Let me know how far this gets you, or the trouble spots you might run up against and we’ll see what we can do to address them as time permits.  If you’re coming to the SharePoint Conference hopefully I’ll get a chance to meet up and talk with you there.

Oh No – Your BCS Model Doesn’t Show Up When You Create a Content Source For BDC

First let me point out that I gratuitously co-mingled the terms “BCS” and “BDC” in the title because I honestly never know what to call this anymore (I’ve heard the marketing language before but it just ain’t stickin’). So…today’s post is something that I thought I had added to the blog before, but searching the site renders no results, so here we are with a new post – hopefully the first on this topic. Events today have led me to believe that my gray matter may be a bit off. So here’s the scenario:

You create a new BCS model; for example, you create a SharePoint App and then create an ECT on top of it that you can use with your App or as an External Content Type for an External List (kind of along the lines I described here: http://blogs.technet.com/b/speschka/archive/2014/01/06/setting-up-bcs-hybrid-features-end-to-end-in-sharepoint-2013.aspx). Now you go into the Search service application and create a new content source so you can crawl it (because we won’t index an external list, since the content is rendered asynchronously). You select the “Line of Business Data” content source type, and then you get a message that says “There are no external data sources to choose from.” (please tell me some search engine will index that phrase)  Confusing as heck, because you can create an External List from your ECT and get data back, so it should work just great, right?

In the interest of avoiding sarcasm in response to my rhetorical question, let me just say “no”.  To be able to add it to a search content source, you need to add the following properties:

1. <Property Name="ShowInSearchUI" Type="System.String"></Property> – add this to the <LobSystemInstances><LobSystemInstance><Properties> section. This is near the top of the Xml.
2. <Property Name="RootFinder" Type="System.String"></Property> – add this to the properties for the Method that is used to read all items for the list. By default it will be called “ReadAll{entityName}”, where “{entityName}” is the name of the table, like “Customers”.
3. <Property Name="RootFinder" Type="System.String"></Property> – add this to the properties for the MethodInstance that implements the method in #2; by default it will have the same name as #2. You can also find this by searching for Type="Finder" Default="true" in your Xml.

Now, you DO NOT need to add an actual value to these properties when you plug them into your ECT; they are like flags, and their mere presence implies the value. Once you’ve made these changes to your model you’ll need to delete the existing model and external content type (both done in the BDC / BCS admin page), then import your ECT again. After you do that, your external list(s) that used it before should still work, and when you go to create a new content source in Search it should show up there as well. Once you get that set up, remember to check the advice in this blog post to finish setting up the content source: http://blogs.technet.com/b/speschka/archive/2013/02/04/resolving-the-directory-links-across-partitions-are-not-allowed-error-when-crawling-odata-bdc-sources.aspx.

Another BCS Hybrid Tip – Fixing “The Type name for the Secure Store provider is not valid” Error

Here’s another error that you are actually pretty likely to see if you are setting up BCS hybrid. When you configure your BCS connection in o365 to connect to your on premises data source, one of the things you need to configure is the security that’s going to be used to make the call into your data source. In many cases you will want to use a Secure Store Service application and keep a set of credentials in there. You can configure your connection to do that just fine, but when you try to import your data model in o365 you will get an error message that says “The type name for the secure store provider is not valid”. If you dig into this error in your on prem ULS logs, you will see something to the effect that it’s asking for version 16.0.0.0 of the secure store assembly. That’s the version running in o365 today, but on premises you have version 15.0.0.0.

After looking at some different options, we ultimately decided that for now the best way to work around this problem is to add an assembly binding redirect to the web application in the on premises farm. By “the web application”, I mean the web application your BCS connection in o365 points at. In the connection itself you will need an endpoint for client.svc, and for that you use a web application that you have published through your reverse proxy. So if your web application is at https://www.foo.com, then in BCS you would configure the endpoint as https://www.foo.com/_vti_bin/client.svc.

So…in the web.config for that web application you would add an assembly redirect that looks like this:

<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="Microsoft.Office.SecureStoreService" publicKeyToken="71e9bce111e9429c" culture="neutral" />
      <bindingRedirect oldVersion="16.0.0.0" newVersion="15.0.0.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>

I’ll have more details on this and all the rest at the post-SPC training that Bill Baer and I are doing.

Configuring Windows Server 2012 R2 Web Application Proxy for SharePoint 2013 Hybrid Features

This is a topic that’s generated a lot of interest over the last couple of months and I’m happy to report that I was recently able to utilize the new Web Application Proxy (WAP) features of Windows Server 2012 R2 to act as a reverse proxy for the SharePoint 2013 hybrid features.  Specifically I configured it for use with the 2-way hybrid search features between SharePoint on premises and Office 365.  In all likelihood this will be the first level of support Microsoft will offer for using WAP with SharePoint hybrid, and then other hybrid features will likely follow suit afterwards.  For now let’s walk through getting this working in your environment.

To begin with, I’m going to assume that you have all of the other hybrid components in place, and really all we are doing at this point is getting the reverse proxy part of the solution configured.  That means that you have an on premises SharePoint farm, an Office 365 tenant, and dirsync and SSO configured already.  I’m also assuming that you’ve configured a result source with a Remote SharePoint Index and a query rule that uses that result source so you can retrieve search results from either farm.  What we’re going to do here is just add the reverse proxy piece to it so you can do it securely (versus having an open endpoint to your on premises farm laying wide open out on the Internet).

Getting WAP set up is really a multi-part process:

  • Get Server 2012 R2 and ADFS installed and configured
  • Get Server 2012 R2 and WAP installed and configured
  • Create a WAP application

The first thing to note is that you cannot install both ADFS and WAP on the same server, so you will need to add at least two servers to your farm, and of course more if you are adding fault tolerance. Ironically, I found the most difficult thing in this whole process to be configuring ADFS, which was a surprise given how smoothly it went in previous versions. But new things bring new opportunities, so it’s probably just a matter of getting used to how they work. So let’s start with ADFS, and I’ll try to call out the things that would have helped me move along more smoothly.

To begin with, you will go to the Add Roles and Features option in Server Manager to get started.  Once you select your server, check the box for Active Directory Federation Services.  You can complete the wizard to get the service installed, and that part should be completely painless and worry free.  Once the installation is complete you should notice a yellow warning icon in Server Manager, and that’s your indication that you have something to do; in this case, that “something” is configuring ADFS.   Click on the warning icon to see the status information, and then click on the link to configure the federation service on this server.

Since this is our first server we’ll accept the default option to create the first federation server and click the Next button to continue.  On the next page you select an account with AD domain admin permissions and then click Next.  The next page of the wizard is where I ended up messing myself up the first time, so hopefully a little explanation here will help you. 

The first thing you need to do is select the SSL certificate that you will use for connections to ADFS. One thing worth pointing out right away is that ADFS does not use IIS in Server 2012 R2, so don’t bother looking around for it in the IIS Manager. This also leads to a potentially irritating point of confusion that I’ll explain a bit later. So select a certificate (typically a wildcard or SAN certificate) that you will use for your SSL connection to ADFS. If you’re like me, you have a wildcard certificate that you use for these purposes. If you Import it (or choose it from the drop down), it automatically places the CN (i.e. subject name) of the certificate in the Federation Service Name – this is where you need to be careful.

Basically what you should put in the Federation Service Name is the Url you want to use for ADFS requests.  So in mine it put “*.vbtoys.com” because that was my certificate’s subject name, and instead I had to put something like “adfs.vbtoys.com”.  Don’t forget to add an entry in DNS for the Federation Service Name.  Finally, the Federation Service Display Name can be whatever you want it to be, then click Next to continue the wizard.

On the next page you select the service account you are going to use, and then click Next.  On the next page of the wizard you’ll have the option of either creating a new Windows database for the service or using an existing SQL Server database.  Make your selection and click the Next button.  The next page is where you can review your selections or view the PowerShell script it will execute to create the service.  Once you’ve taken a look go ahead and click Next to validate all of your selections.  Assuming all the pre-requisite checks pass, you can now click the Configure button to begin configuring the service. 

One final point on this – EVERY single time I’ve run this wizard, it has ALWAYS given me a message about an error setting the SPN for the specified service account. It should be setting a “HOST” SPN for the ADFS service account for the endpoint you defined for the ADFS service. I believe the net of this is that the problem shows up if you are setting up a single server for your ADFS farm and you make the service name the same as your server name. For example, if your server name is “adfs.foo.com”, it appears that at some point in the installation of features Windows creates an SPN for “host/adfs.foo.com” and assigns it to the computer “adfs.foo.com”. When you configure ADFS to use the service name “adfs.foo.com”, it wants to assign that same SPN of “host/adfs.foo.com” to the ADFS service account. Now, if you are using multiple servers (so you have a load balanced name for your ADFS endpoint), or if you just use some other name besides the name of the computer for the ADFS endpoint, you should not have this problem. If for some reason you get turned sideways, you can always open adsiedit.msc and use the servicePrincipalName attribute on a user or computer to edit the SPNs directly. In my experience, if you are just using a single server then there really isn’t anything to worry about.

So with that long-winded bit of Kerberos trivia out of the way, you should have completed the configuration of your ADFS server. Now the logical thing to do is test that it’s working, and this is the potentially irritating point of confusion I mentioned earlier. If you read http://technet.microsoft.com/en-us/library/dn528854.aspx, it provides a couple of Urls that you can use to validate your ADFS installation. What it doesn’t tell you is that if you try to run any of these tests on the ADFS server itself, they will fail. So your big takeaway here is to find another server to test from, and then I recommend just trying to download the federation metadata Xml file, for example https://fs.contoso.com/federationmetadata/2007-06/federationmetadata.xml. The article above explains this in more detail.

Okay, well the good news is that we got through the hard part; the rest of this is all downhill from here. For step 2 you need to install WAP on another Windows Server 2012 R2 machine. One quick distinction worth making here is that the ADFS server must be joined to your domain; your WAP server, on the other hand, should not be. The process is much simpler here. Before you get started though, copy the SSL certificate you used on your ADFS servers over to the WAP server; you will use it later when running the WAP wizard. Now open Server Manager and go into Add Roles and Features again. Once you’ve selected your server, check the Remote Access role and continue with the wizard. A few steps later it will ask which Remote Access services you want to install; check the Web Application Proxy service. You’ll see a pop up with a few other features that it requires, and you can just click the Add Features button to continue. After you finish the wizard, like before, you’ll see a warning icon in Server Manager. Just like before, click on it, then select the option to open the Web Application Proxy wizard.

In the wizard you’ll finally get to enter your Federation Service Name, which is the same as you entered above when you ran the ADFS configuration wizard.  You also provide a set of credentials for a local admin on the ADFS server(s) and then click Next in the wizard.  On the next page you need to select the certificate that you use for SSL to your ADFS server(s).  Unfortunately this step in the wizard does not include a button to import a certificate, so you need to open the Certificates snap-in in MMC and add it to the Local Computer’s Personal certificate store.  If you didn’t do this previously, you’ll need to click the Previous button after you add the certificate, then click the Next button again and your certificate should show up in the drop down list.  Click the Next button in the wizard one last time, then you can review the PowerShell it is going to run.  Click the Configure button to complete the process.

There’s one last gotcha that you might see at this point. You may get an error that says “AD FS proxy could not be configured” and then some further details about failing to establish a trust relationship for the SSL/TLS secure channel. Remember that if you are using domain issued certificates, and the WAP server is not joined to the domain, then it does not automatically trust the root certificate authority (i.e. the “root CA”) for your SSL certificate. If that is the case, you need to get a copy of the root CA certificate and add it to the Trusted Root Certification Authorities store for the local computer on the WAP server. You can get the root CA certificate in a variety of ways; I just copied it out of the SSL certificate that I used for the ADFS server. Once you add that, you can click the Previous button in the wizard, then click the Configure button again and have it do its thing. At this point you should be good to go, the wizard completes, and the Remote Access Management Console pops up, with pretty much nothing in it.

As I mentioned before, things get progressively easier as we go.  The final step now is to create a new WAP application.  This just means we’re going to publish an endpoint for our on premises SharePoint farm, and Office 365 is going to send the hybrid search queries to that endpoint, where they will get passed back to our on premises farm.  The genius part of what the WAP folks did is boil all of this down to a single line of PowerShell and then – Kaboom! – you got hybrid. 

I’m going to borrow here from some documentation around the WAP feature that will be published soon to describe the PowerShell that you execute (sorry for stealing here UA team):

Add-WebApplicationProxyApplication -ExternalPreauthentication ClientCertificate -ExternalUrl <external URL> -BackendServerUrl <bridging URL> -name <friendly endpoint name> -ExternalCertificateThumbprint <certificate thumbprint> -ClientCertificatePreauthenticationThumbprint <certificate thumbprint> -DisableTranslateUrlInRequestHeaders:$False -DisableTranslateUrlInResponseHeaders:$False

  • Where <external Url> is the external address for the web application.
  • Where <bridging URL> is the internal or bridging URL you defined for the web application in your on-premises SharePoint Server 2013 farm.
  • Where <friendly endpoint name> is a name you choose to identify the endpoint in Web Application Proxy.
  • Where <certificate thumbprint> is the certificate thumbprint, as a string, of the certificate to use for the address specified by the ExternalUrl parameter. This value should be entered twice, once for the ExternalCertificateThumbprint parameter and again for the ClientCertificatePreauthenticationThumbprint parameter.

I will say that I found a couple of these parameters a little confusing myself when I first looked at them so let me provide an example:

  • I have a web application with a default zone of https://www.sphybridlab.com
  • I use a wildcard certificate for that web application, and that certificate has a thumbprint of 6898d6e24a441e7b73f18ecc9b6a72b742cf4ee0
  • I uploaded that same wildcard certificate to a Secure Store Application in Office 365 to use for client authentication on my hybrid queries

So the PowerShell I use to create my WAP application is this:

Add-WebApplicationProxyApplication -ExternalPreauthentication ClientCertificate -ExternalUrl "https://www.sphybridlab.com" -BackendServerUrl "https://www.sphybridlab.com" -name "SphybridLab.Com Hybrid Search" -ExternalCertificateThumbprint "6898d6e24a441e7b73f18ecc9b6a72b742cf4ee0" -ClientCertificatePreauthenticationThumbprint "6898d6e24a441e7b73f18ecc9b6a72b742cf4ee0" -DisableTranslateUrlInRequestHeaders:$False -DisableTranslateUrlInResponseHeaders:$False

 

That’s it – we’re golden after this.  Here are a few more details about the implementation to clarify things:

  • I have added that same exact wildcard certificate to a Secure Store Service application in my Office 365 tenant; I call that application “HybridCert”.

  • In Office 365 I have a result source that points to my on premises SharePoint farm at https://www.sphybridlab.com.  It is also configured to use the SSO Id of “HybridCert”.

 

When Office 365 attempts to issue a hybrid query to https://www.sphybridlab.com, it resolves the Url in DNS (hosted in GoDaddy in my case) to 201.80.100.80. It submits the query and includes the certificate from the “HybridCert” SSS application as certificate authentication for the request. WAP is listening on that IP address and requires certificate authentication. It gets the request, finds a certificate presented for authentication, and that certificate has the thumbprint we configured our WAP application to look for. It’s all goodness at that point, so WAP passes the request back to https://www.sphybridlab.com, and away we go. The final results then look like this:

 (NOTE:  This picture keeps disappearing from this post and I have no idea why; you can always try this link directly: https://skydrive.live.com/#cid=96D1F7C6A8655C41&id=96D1F7C6A8655C41%216922&v=3.  It’s just an image of an integrated set of search results from on prem and o365 that you get with hybrid search)

As I mentioned above, look for expanding coverage and support on WAP and SharePoint hybrid features in the coming months.