Integrating Yammer and SharePoint 2013 Search Results

This post is in response to a question I get pretty frequently when talking about Yammer.  The question that many folks want to know is “how can I crawl my Yammer content from SharePoint?”  Well, unfortunately you can’t, and I have no trick up my sleeve to say otherwise today.  However, that doesn’t mean that you can’t have a nice consolidated set of search results that includes content from both Yammer and SharePoint, and that’s the point of today’s post.  By the way, this code sample is based on a demo that I showed at the 2014 SharePoint Conference, so for those of you that were looking for some of that code, here you go.

As I’ve mentioned many times when talking to folks about Yammer development, there are a number of REST endpoints for your programming pleasure.  After mentioning this in about three different venues I decided it would just be easier to write something up to demonstrate it, and that’s exactly what I did.  What we’ll walk through today is the specific implementation that I used for this case.  So to begin with, let’s look at my out of the box search results page when I do a search for products.  Because of the incredibly bad way in which this blog site deals with pictures, I’m just going to give you a link to the picture here; CTRL + Click to open it in a new browser tab:

Okay, so I’ve previously posted about using the Yammer REST endpoints from .NET.  For this example I simply built on that sample code, as I’ve been suggesting all of you do when you find something you need to code for that it doesn’t cover.  You can find that original post and sample code at  With that in hand, I decided to write my own REST endpoint for this example.  So why did I decide to do that?

  • I wanted to demonstrate the use of the new CORS support in WebAPI 2.1.  This allows me to define what hosts can make cross-domain calls into my REST endpoint to query for Yammer data.  That gives me an additional layer of security, plus it now lets me make those client-side cross domain calls to my endpoint.
  • I wanted to have something that delivered HTML as the result.  That allows me to ensure that anyone querying Yammer gets the same exact user experience no matter what application they are using.
  • I wanted to have something that could be reused in many different applications – could be other web apps, could be Windows 8 apps, could be mobile apps, whatever – with a simple REST endpoint there’s no end to the possible reuse.
  • In my scenario, I wanted to be able to issue queries against Yammer even for people that didn’t have Yammer accounts.  So in this scenario what I did was to use a service account to make all the query requests.  By “service account”, I just mean I created an account in Yammer whose only purpose in life is to provide me with an access token that I can use to programmatically read or write content with Yammer.  For more details about access tokens in Yammer please see my initial post on this topic that I linked to above.
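To make that service-account pattern concrete, here’s a rough sketch of the query flow in Python (the C# attached to this post is the real implementation; the token value and helper names here are placeholders, and the URL is Yammer’s documented search endpoint):

```python
import urllib.parse
import urllib.request

# Placeholder values: the token comes from the Yammer service account's
# OAuth flow; the URL is Yammer's search REST endpoint.
ACCESS_TOKEN = "service-account-token"
SEARCH_URL = "https://www.yammer.com/api/v1/search.json"

def build_search_request(query, access_token=ACCESS_TOKEN):
    """Build a GET request for Yammer search with the service
    account's bearer token attached."""
    url = SEARCH_URL + "?search=" + urllib.parse.quote(query)
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Bearer " + access_token)
    return req

def search_yammer(query):
    """Send the request and return the raw JSON body as text."""
    with urllib.request.urlopen(build_search_request(query)) as resp:
        return resp.read().decode("utf-8")
```

The point is simply that every request carries the service account’s bearer token, so the person running the query never needs a Yammer login of their own.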

So with that being said, I started out by adding all the custom classes for serializing Yammer JSON data to my project, along with my code for making GET and POST requests to Yammer (as described in the original blog post).

The next thing I did was build a little console app to test out sending in a query to Yammer and getting back a set of search results.  I do that to a) get down the process for issuing the query and b) get the JSON back so I can build out a new class into which I can serialize the set of search results.  I’ve attached some source code to this posting, so you can find the YammerSearchResults class and see how I incorporated that into the project.  I’ll also look at it in a little more detail later in this post.
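If it helps to visualize that serialization step, here’s a minimal sketch (in Python rather than the C# of the attached sample) of a class mirroring the nested messages collection you’ll find in YammerSearchResults; the field names follow Yammer’s search JSON, but treat the exact shape as an assumption and check it against a real payload:

```python
import json
from dataclasses import dataclass

@dataclass
class YammerMessage:
    sender_id: str
    created_at: str
    body: str

@dataclass
class YammerSearchResults:
    """Minimal stand-in for the sample's YammerSearchResults class."""
    messages: list

    @classmethod
    def from_json(cls, raw):
        # Yammer search results nest the matching messages under
        # messages -> messages, which is why the sample ends up with
        # the ysr.Messages.Messages shape you'll see later.
        data = json.loads(raw)
        msgs = [
            YammerMessage(
                sender_id=str(m.get("sender_id", "")),
                created_at=m.get("created_at", ""),
                body=m.get("body", {}).get("rich", ""),
            )
            for m in data.get("messages", {}).get("messages", [])
        ]
        return cls(messages=msgs)
```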

Once I had all of those foundational pieces in place, I created my new WebAPI endpoint.  There are many places out on the web where you can get your WebAPI info; for me I used the example from Mike Wasson here as my starting point for learning everything I needed for this project:  With my new WebAPI controller class created, I next enabled CORS support in my project by adding this to the WebApiConfig.cs file (in the App_Start folder):

//enable cross site requests
config.EnableCors();

Now that my project supports CORS, I need to flesh out the where and how for supporting it.  To do that I added this attribute to the WebAPI controller class:


[EnableCors(origins: "http://localhost:1629,https://sps2", headers: "*", methods: "*")]


So the main things worth pointing out here are that I added a list of hosts that I trust in the origins attribute.  The “localhost:1629” reference is in there because that’s the web dev server that Visual Studio created for a test web application project I created.  I used that to test the functionality of the REST endpoint in just a standard HTML page before I ever tried to incorporate it into SharePoint.  The and sps2 hosts are ones that I use for SharePoint provider-hosted apps and SharePoint respectively.  So that allows me to use it both in my provider-hosted apps as well as SharePoint itself.  Finally the headers and methods attributes are configured to allow all headers and methods through.
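Under the covers, CORS support like this amounts to checking the request’s Origin header against an allow-list and echoing it back in the response headers.  A small illustrative sketch (Python, using the same placeholder host names as the attribute above):

```python
# The allow-list mirrors the origins attribute on the controller;
# the host names here are placeholders.
ALLOWED_ORIGINS = {"http://localhost:1629", "https://sps2"}

def cors_headers(request_origin):
    """Return the CORS response headers for a cross-domain request,
    or an empty dict if the origin is not trusted."""
    if request_origin not in ALLOWED_ORIGINS:
        return {}
    return {
        "Access-Control-Allow-Origin": request_origin,
        # headers: "*" and methods: "*" in the attribute translate to
        # letting any requested header and method through preflight.
        "Access-Control-Allow-Headers": "*",
        "Access-Control-Allow-Methods": "*",
    }
```

An untrusted host gets no CORS headers back, so the browser blocks the cross-domain call; that’s the extra layer of security mentioned above.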

The next thing I did was create the method in my WebAPI controller that I use to execute the query against Yammer and return results.  The method signature for it looks like this:


// GET api/<controller>?search=steve%20peschka

public HttpResponseMessage Get(string search)


So as you can see, I’m going to take a string as input – my search string – and I’m going to return an HttpResponseMessage.  Inside my method, the code should look quite familiar if you have seen my original post on Yammer dev with .NET – I’m just using my MakeGetRequest and GetInstanceFromJson methods:


List<SearchResult> finds = new List<SearchResult>();

string response = YammerREST.MakeGetRequest(searchUrl + "?search=" + search, accessToken);

YammerSearchResults ysr = YammerSearchResults.GetInstanceFromJson(response);

The “searchUrl” is just the standard Yammer REST endpoint for search, and the “accessToken” is the access token I got for my Yammer service account.  Okay, so I got my set of search results back, but one of the first things I noticed is that the search results that are returned include very little information about the user – basically just a Yammer User ID.  Of course, I not only wanted to show the author information, but I also wanted to show the picture of the Yammer user.  This unfortunately does require making another call out to REST to get this information.  In my particular scenario, I only wanted to show the top 3 results from Yammer, so what I did was simply enumerate through the results and get that user information for each one. 

One of the nice things about the YammerSearchResults class I created for this is that I was able to reuse the YammerMessages class from my original posting.  My search results include a collection of messages (where each message is one that matches the search criteria), so I can simply use that code from before to enumerate through the results.  This is what that looks like:

foreach (YammerMessage ym in ysr.Messages.Messages)
{
    //get the Yammer User that posted each message so we can pull in their picture url
    string userUrl = oneUserUrl.Replace("[:id]", ym.SenderID);
    response = YammerREST.MakeGetRequest(userUrl, accessToken);
    YammerUser yu = YammerUser.GetInstanceFromJson(response);

    //some other stuff here I'll describe next
}


While I’m enumerating through the messages in the search results, I go ahead and make another call out to get the information I want for each user.  Again, I’m able to use the MakeGetRequest and GetInstanceFromJson methods I described in my original post.  With that in hand, I can go ahead and create the dataset I’m going to use to generate the HTML of search results.  In order to do that, I created a local class definition in my controller and a List<> of that class type.  With those pieces in place I can create one record for each search result that includes both the search result information as well as the user information.  My List<> is called “finds” and the code for pulling this all together looks like this (and goes in the enumeration loop above, where it says “some other stuff I’ll describe next”):


//add a new search result
finds.Add(new SearchResult(yu.FullName, yu.WebUrl, yu.FirstName,
    ym.MessageContent.RichText, yu.PhotoUrl, DateTime.Parse(ym.CreatedAt)));

iCount += 1;

//only take the top 3 results
if (iCount == 3)
    break;

As you can see, I’m plugging in the message from the search result with “ym.MessageContent.RichText”, and all of the rest of the fields are information about the user.  Now that I have my list of search results, the rest of the WebAPI controller method is kind of boring.  I just create a big long string in a StringBuilder instance, I add some style info and then I add HTML for each search result.  I then take the results of the big long string and stick it in an HttpResponseMessage to return, like this:
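As a sketch of that “big long string” step (Python here, with assumed field names rather than the sample’s actual SearchResult properties, and purely illustrative markup):

```python
import html

def render_results(finds):
    """Build one HTML fragment per search result, in the spirit of the
    StringBuilder loop in the WebAPI controller."""
    # Some style info up front, then one block per result.
    parts = ["<style>.yam-result { margin-bottom: 8px; }</style>"]
    for r in finds:
        parts.append(
            '<div class="yam-result">'
            '<img src="{photo}"/>'
            '<a href="{url}">{name}</a>'
            '<p>{message}</p>'
            "</div>".format(
                photo=html.escape(r["photo_url"], quote=True),
                url=html.escape(r["web_url"], quote=True),
                name=html.escape(r["full_name"]),
                message=r["message"],  # already rich text from Yammer
            )
        )
    return "".join(parts)
```

Returning prebuilt HTML like this is what guarantees every consumer of the endpoint gets the identical user experience.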

return new HttpResponseMessage()
{
    Content = new StringContent(results),
    StatusCode = System.Net.HttpStatusCode.OK
};


Shaboom, and there you go.  Now, perhaps the best part of all of this is on the SharePoint side.  What I decided to do there was to create a new Search vertical.  All that really means is that I added a new search results page to the Pages library in an Enterprise Search site collection.  You then add a new search navigation item in the search settings for the site, and you point that navigation item at the page you added.  Then you go and customize the page to return whatever search results you want.  I’m not covering this in super detail here obviously because it’s not really on point with this topic, but it should be relatively easy to find on the web.  If not, say so in the comments and we can always cover that in another blog post.

But…I created my new search vertical and then the changes I made to it were pretty simple.  First, I just plugged in the out of the box search results web parts onto it.  Then, I added an out of the box script editor web part above those.  This is really the beauty of this solution to me – I didn’t have to create anything custom in SharePoint at all.  Since it’s all just client side script and code, I didn’t write a custom web part for SharePoint – I just created my script and pasted it into the out of the box web part.  To complete this story, I would LOVE to paste in here the HTML and javascript that I use in the script editor web part to make this all work.  However, it is completely unusable when pasted into the awesome blog editing tools on this site <sarcasm>.  So instead you’ll have to get the attachment to see it – just look for the text file called SharePoint Script Part Snippet.txt.

Now, with all that done, you can click here to see what the final results look like:

Note that in my particular case I chose to only show search results that were from messages.  You could have just as easily shown search results for people, files, etc.  Also I configured links on the search results so that you can click on any one of them to view the conversation in Yammer.  I also included a link with a “more search results” kind of functionality, and when you click on it a new browser tab opens with the search results page in Yammer, using the query terms that were entered on the page.  So it lets you slide over very easily into Yammer to get the full search experience that it provides too.

Also – as a side note – someone else at SPC mentioned that they took a similar approach but chose to return the results in an OpenSearch compliant Xml format so that it could be added as a federated search result.  That didn’t fit my particular scenario, but it’s an interesting take on things so I’m including it here for your consideration.  Nice work person that made this comment.  🙂  
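For anyone curious what that OpenSearch-flavored variation might look like, here’s a rough sketch that serializes results as a minimal RSS feed instead of HTML (Python; the element set is pared down from the OpenSearch 1.1 convention and the feed metadata is invented):

```python
import xml.etree.ElementTree as ET

def to_opensearch_rss(query, results):
    """Serialize results as a minimal OpenSearch-style RSS feed."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Yammer results for " + query
    for r in results:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = r["title"]
        ET.SubElement(item, "link").text = r["link"]
    return ET.tostring(rss, encoding="unicode")
```

A real federated-search feed would also carry the opensearch namespace and elements like totalResults, but the shape of the idea is the same: same query, XML out instead of HTML.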

You can download the attachment here:

You Get More Results in the Query Builder for a Web Part Than When the Web Part is Rendered on the Page in SharePoint 2013

I’ve had this issue happen to me a while back and then I saw two other people have the same problem in the last couple of weeks, so I decided to do a quick post on this.  The scenario usually manifests itself something like this:  you are working on some kind of custom search application – like doing a query based on a custom managed property or something like that.  You edit the search web part properties and open up the query builder.  When you run your query in there you see all of your search results being returned.  However, when you save the web part changes and load the page, you don’t see all of the search results; you get a smaller number of results returned.

One of my initial efforts at fixing this was to change the trim duplicates property in the web part to false, even though I knew I did not have any real duplicates in my search results (they were all site collections, something I would not have thought we would even do a duplication check on).  That did not fix the problem anyway, but one of the folks here suggested that setting that property in the web part settings may not be sufficient to actually resolve the problem.

His suggestion was to export the web part and save it locally.  If you open it up in an editor like Notepad and look for a property called DataProviderJSON, you will see a sub-property in there called TrimDuplicates.  By default it is true, so change it to false and then import the .webpart back into a site collection and add it to the page.  I was happily surprised to find that this did in fact fix the problem for me, and I started getting back a full set of results.  I burned a lot of hours trying to get this working “as expected” out of the box though, so hopefully this post will save you some time.
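If you find yourself doing this on more than one web part, the Notepad edit can be scripted.  Here’s a sketch (Python; the XML shape is assumed from a typical exported .webpart file, so verify against your own export):

```python
import json
import xml.etree.ElementTree as ET

def set_trim_duplicates(webpart_xml, value=False):
    """Flip TrimDuplicates inside the DataProviderJSON property of an
    exported .webpart file and return the updated XML text."""
    root = ET.fromstring(webpart_xml)
    # .webpart files carry their settings as <property name="..."> elements;
    # DataProviderJSON holds a JSON blob of query settings as its text.
    for prop in root.iter():
        if prop.tag.endswith("property") and prop.get("name") == "DataProviderJSON":
            settings = json.loads(prop.text)
            settings["TrimDuplicates"] = value
            prop.text = json.dumps(settings)
            return ET.tostring(root, encoding="unicode")
    raise ValueError("DataProviderJSON property not found")
```

You would then import the rewritten .webpart back into the site collection just as described above.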

Architecture Design Recommendation for SharePoint 2013 Hybrid Search Features

The SharePoint 2013 hybrid capabilities are intended to let users in Office 365 access and search across certain content from an on premises SharePoint farm.  By design, current hybrid features cannot be configured to simultaneously allow users outside a corporate network to access the on premises farm, and to also allow on premises content to be used in Office 365.  To support both scenarios, users will need to connect directly to the on premises SharePoint farm through a solution such as DirectAccess or VPN.  That will enable users to access an on premises farm both when they are on a corporate network and when they are outside it, as well as use the hybrid capabilities to work with data from both Office 365 and SharePoint on premises.

Using the search hybrid feature of SharePoint 2013 can best be accommodated with a single zone in SharePoint and a split DNS.  The reason I suggest a single zone is so that search results in Office 365 will be rendered using the same Url that users use to access content.  The reason I suggest split DNS is so that users can be redirected to an endpoint that uses standard SharePoint security mechanisms for authentication, but queries from Office 365 can be directed through a reverse proxy configured to use certificate authentication.  The hybrid search feature in SharePoint 2013 supports sending a certificate for authentication as part of the query request.  Here’s a diagram to illustrate:


Using the example diagram above, when the user requests, they are on corpnet so the internal DNS routes them to the internal address of  That is a load balancing device that sends their request onto one of the SharePoint web front ends where they will be authenticated and can access their content.

They are still on corpnet – either physically or via DirectAccess or VPN as described above – and they browse their Office 365 tenant on  Their Office 365 tenant is configured to use the search hybrid features so when a user executes a query, that query will also be sent to the on premises SharePoint farm.  The request goes from Office 365 to  Since Office 365 is using the external DNS though, it resolves that to the address  A reverse proxy device is listening on that address and it requires certificate authentication.  The search hybrid features are designed to respond to requests for certificate authentication, so Office 365 sends the certificate to authenticate the request.  Once the reverse proxy device completes the certificate authentication it forwards the request onto the internal load balancer at, where it gets routed to one of the SharePoint web front ends.  When the search results are rendered they use as the host name.  When a user clicks on that search result, they are still on corpnet so they again use the internal DNS to resolve that, which will be   When they request the content then, it will be routed to the load balancer and onto SharePoint and the user will be able to retrieve the content.

Creating Refinable and Sortable Managed Properties in Sites and Site Collections in SharePoint 2013

Someone was asking today about how to create a managed property at the site or site collection level that is refinable and sortable.  When you go into Schema Management at the site collection level you may notice that you can create new managed properties, but only simple ones that are based on string, and they have several attributes that you cannot change, such as whether they are sortable or refinable.  That is by design in SharePoint 2013; if you need to create new managed properties with those attributes then it can only be done at the Search Service Application level.  However, we do ship some “pre-baked” managed properties out of the box to enable this scenario, and in this post I’ll try and explain how you use them.


If you go into the Site Settings and look at the Managed Properties in the Schema, do a search for Refinable in the list of managed properties.  What you’ll find is that there are a BUNCH of them, of all different data types.  For example RefinableDouble, RefinableString, RefinableDate, RefinableDecimal, and so on.  For each one of these we actually include several properties – for example, RefinableDouble00 through RefinableDouble09, RefinableDate00 through RefinableDate19, etc.  The reason we include these managed properties, and multiple instances of them, is because they are all defined as refinable and sortable, and they map to different data types. 


So let’s go back to the scenario we started with – suppose you create a new Choice field in your site collection called FavoriteColor.  You want to create a managed property for this field that is both refinable and sortable.  The first thing you want to do, as I’ve mentioned many times, is create a Site Column for the field; this will automatically create a crawled property for you.  Now in order to use it as a refinable and sortable field, you need to “connect” it with one of the Refinable* properties I mentioned above.  In this case, we will use RefinableString00.  So to do this connection, I’ll go into Site Settings and click on Schema.  Then I’ll type refinable into the managed property list so I find RefinableString00.  If you click on that, you’ll see that it is already mostly configured exactly like I want – refinable and sortable are both true.  There are really only two things I need to do – map my crawled property and give it an alias.


I’m going to give it an alias so I can refer to it using a name that makes sense, rather than RefinableString00.  So, in the alias field I type the name I want to refer to this managed property by, which is “FavoriteColor”.  Next, to connect this with my crawled property I simply click on the Add Mapping button.  It brings up a list of all my crawled properties, and from there I can select ows_FavoriteColor, which I got when I added my Site Column (Note:  your crawled property names may vary, but you can always find it based on the Title you gave the field).  All I need to do now is save my changes and I’m done – I have a sortable and refinable property for my custom schema in my site or site collection, with a friendly name that I can use to refer to it by.  Here’s a quick screen shot with all of these values filled in so you can see how it looks.


Another Port Opening for Search in SharePoint 2013

My friend JP is hard at work finding all the port openings we need that haven’t gotten completely documented yet, and here’s a new one.  (By the way, these all ARE working their way into TechNet, it will just take a month or two probably for them to find their way there).  If you have a firewall turned on between your SharePoint servers, then you should know that you need to open TCP port 808 on your servers that are running Search components.  Thanks again JP for finding another golden nugget here.

Resolving the Directory Links Across Partitions are not Allowed Error When Crawling OData BDC Sources

I’ve been seeing this error more and more recently, both inside and outside the walls of my employer.  The “Directory Links across partitions are not allowed” error seems to routinely crop up when you have created an OData source with Visual Studio and have uploaded the model to the BDC in SharePoint.  What seems to happen then is you create a new content source in Search for the BDC and configure it to crawl all BDC applications.  Then whenever you do a crawl on that content source you get the error message described above.

I was fortunate enough to have Venkatesh work with me on this a bit and what we found is that if you change the content source to crawl the individual BDC applications instead of the entire catalog, the error went away.  For illustration purposes, here’s what this configuration looks like in the content source UI:

This should get you to the point where your crawls are able to complete successfully and you can find content when executing a query.  After that, things get a little more dicey…in all honesty right now the links for BDC search results don’t really appear to do anything useful, but that will be the next lump of coal I dig into.  Meanwhile, I know lots of people have been hitting this error so I wanted to share this while I have it, and then continue to flesh it out as we make progress.

Getting a Full Result Set from a Remote SharePoint Index in SharePoint 2013

This post is a follow-up to a previous post about setting up an OAuth trust between two farms:  The primary reason for writing that post was to describe how to set up an OAuth trust between two farms, which can be used for a number of reasons.  A secondary part of that post was to describe the process of setting up a Remote SharePoint Index, which is one of the top reasons why you would create that trust.  What I have since discovered is that this type of trust by default will only return search results from the web application with which the trust is created.  For example, as the post indicates you run some PowerShell that looks like this:

$i = New-SPTrustedSecurityTokenIssuer -Name FARMB -Description "Farm B" -IsTrustBroker:$false -MetadataEndPoint "
New-SPTrustedRootAuthority -Name FARMB -MetadataEndPoint
$p = Get-SPAppPrincipal -Site -NameIdentifier $i.NameId
Set-SPAppPrincipalPermission -Site -AppPrincipal $p -Scope SiteSubscription -Right FullControl

In this case, the results you would get back would only be those items that are contained in the web application.  If you have multiple web applications, or other non-SharePoint content sources, they won’t be returned in the search results when queried remotely.  Side Note:  this limitation does not exist when you are using Remote SharePoint Index in a hybrid situation between Office 365 and an on-premises SharePoint farm.  So, how do we get results from all our web applications and content sources?  Well there are two things we need to do:  1) create additional realms and grant the SPAppPrincipal permissions to them, and 2) when you grant permissions, set the scope to SiteCollection instead of SiteSubscription. 

Let’s look at a concrete example:  suppose you are in Farm A, and you have 3 web applications:, and  (Another side note:  please don’t interpret this to mean that you should have multiple web applications in SharePoint 2013 – you should try and use one web app and host-named site collections, and add web apps only if business requirements dictate).  Farm A is going to trust Farm B, and Farm B is going to send queries to Farm A.  In Farm A then, we need to set up a realm for each of the three web applications and grant permissions to the SPAppPrincipal that Farm B will use when issuing the queries.  We’ll start out with the first two lines of PowerShell, which are the same as our original post:

$i = New-SPTrustedSecurityTokenIssuer -Name FARMB -Description "Farm B" -IsTrustBroker:$false -MetadataEndPoint "
New-SPTrustedRootAuthority -Name FARMB -MetadataEndPoint

Now that we have our reference to the SPTrustedSecurityTokenIssuer for Farm B, which is in our variable $i, we can use that when granting rights to each of the realms we create.  So to create the realms, we do this for each of the web applications:

#this first line only needs to be done once 

$realm = $i.NameId.Split("@")


#then do this for each web application

$s1 = Get-SPSite -Identity
$sc1 = Get-SPServiceContext -Site $s1
Set-SPAuthenticationRealm -ServiceContext $sc1 -Realm $realm[1]
$p = Get-SPAppPrincipal -Site -NameIdentifier $i.NameId
Set-SPAppPrincipalPermission -Site -AppPrincipal $p -Scope SiteCollection -Right FullControl

Once you’ve completed this for and, you will be able to issue queries from Farm B and get results from all of your content sources in Farm A.  That includes your SharePoint content sources as well as non-SharePoint sources.





Adding a New Search Partition and Replica in SharePoint 2013

I think there are probably resources out there somewhere for this by now, but I had a hard time finding them when I went looking for them previously so I thought I’d just post it here.  Fortunately my friend Knut B. was good enough to shoot me some PowerShell a while back to help you manage your index partitions.  In short – what you want to do is get a reference to the search service instance on the host where you want to create a partition or partition replica, then you’re going to clone the existing search topology and add your partition or replica to it.  Once you’ve done that, you can tell SharePoint to start using the clone of the topology that you created.  Assuming you are starting with a farm that was created with the farm wizard, you will have one index partition, and that partition has no replicas.  So let’s look first at adding a new search partition:

# Specify the new server you want to add, and start the Search Service Instance:
$newssi = Get-SPEnterpriseSearchServiceInstance -Identity "nameOfServerThatYouWantTheNewPartitionOn"
Start-SPEnterpriseSearchServiceInstance -Identity $newssi
# Wait until the SSI is running. Run the following command until the SSI state indicates "Online":
Get-SPEnterpriseSearchServiceInstance -Identity $newssi

Now that you’ve picked the server you want to work with and you know the search service instance is running on it, you can clone the existing search topology:

# Clone the existing topology:
$ssa = Get-SPEnterpriseSearchServiceApplication
$activeTopology = Get-SPEnterpriseSearchTopology -Active -SearchApplication $ssa
$newTopology = New-SPEnterpriseSearchTopology -SearchTopology $activeTopology -SearchApplication $ssa -Clone

Once you have your clone in hand, you can create a new partition.  Partitions are just numbered 0 through whatever, so again, assuming you have used the wizard and just have one partition so far, then that partition number is 0.  To add a second partition to our cloned topology we’ll call it partition 1, then we’ll set our cloned topology to be the new search topology.

# Add a new index component and specify it is associated with the new index partition 1:
New-SPEnterpriseSearchIndexComponent -SearchTopology $newTopology -SearchServiceInstance $newssi -IndexPartition 1
Set-SPEnterpriseSearchTopology -Identity $newTopology

As you can see in the New-SPEnterpriseSearchIndexComponent, we pass along the $newssi variable, which is where we assigned the server on which we want to create the partition.  Now once that partition is created, we run virtually the same exact PowerShell to create a replica of that partition on another server.  Since I explained above what’s going on, I’ll just paste the entire PowerShell here and then comment on it:

# Specify the new server you want to add, and start the Search Service Instance:
$newssi = Get-SPEnterpriseSearchServiceInstance -Identity "nameOfServerThatYouWantTheReplicaOn"
Start-SPEnterpriseSearchServiceInstance -Identity $newssi
# Wait until the SSI is running. Run the following command until the SSI state indicates "Online":
Get-SPEnterpriseSearchServiceInstance -Identity $newssi
# Clone the existing topology:
$ssa = Get-SPEnterpriseSearchServiceApplication
$activeTopology = Get-SPEnterpriseSearchTopology -Active -SearchApplication $ssa
$newTopology = New-SPEnterpriseSearchTopology -SearchTopology $activeTopology -SearchApplication $ssa -Clone
# Add a new index component and specify it is associated with the new index partition 1:
New-SPEnterpriseSearchIndexComponent -SearchTopology $newTopology -SearchServiceInstance $newssi -IndexPartition 1
Set-SPEnterpriseSearchTopology -Identity $newTopology

The two things to note here are:

  1. In the Get-SPEnterpriseSearchServiceInstance cmdlet I indicate which server I want to host the partition replica
  2. In the New-SPEnterpriseSearchIndexComponent cmdlet I indicated the partition with the -IndexPartition flag.  Since I already have an index partition 1, SharePoint will create a replica of that partition for me.

That’s it – hope that gets you on your way to managing your search partitions in SharePoint 2013, and thanks again to Knut for sharing his PowerShell.

Using User Context (AKA Segmentation) in Search with SharePoint 2013

Using a context to represent the current user in search was something that we first introduced in FAST Search for SharePoint 2010.  If you’re interested in how it worked back then you can take a look at this post:  In SharePoint 2013 we don’t have the same exact feature; what we have to replace it is called User Segmentation.  Now rather than me trying to explain user segments, what they are, and how they work, someone on the search team has already done a bang up job of that here:  I really encourage you to go read that blog entry; he does a great job of explaining what it is, with an example of how to use it.

I’m not here to steal his thunder or take credit for it, but here’s what I have done.  When you read that post you’ll see that it mentions having to write a custom web part to figure out any user segmentations that should be applied to the current query and then it adds them to it.  In the blog it describes adding a user segmentation based on a property of the browser.  What I decided to do is to write a web part that adds a user segmentation based on the current user’s department.  For those of you who have not heard, when you do a profile import from Active Directory in SharePoint 2013, we automatically import all of the unique Department values into a special term store.  This makes it a great choice to do some customization based on the current user’s department.

One question that may come up is “if I am going to present content based on a user’s department, why not just use audiences?”  That’s a fair question, and here’s the difference.  With audience targeting, it’s a simple on / off switch – you either show a web part or not.  With user segmentation I can take that info out of the profile or anywhere else, and I can customize the content that’s shown.  Since I’m using a query rule, I can execute one or more additional queries for a user, I can add a promoted result, or I can even change the ranking of a query – for example if I want to target certain content to be shown higher in the search results based on the department you work in.  These are just some fantastically cool features that you get with search in SharePoint 2013.
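Conceptually, the web part I’m attaching below just maps a user attribute (department, in my case) to a segment label and attaches it to the outgoing query so a query rule can match on it.  Here’s that mapping reduced to a sketch (Python; the department table and property name are invented for illustration, since the real web part works with term IDs from the Departments term set):

```python
# Hypothetical mapping from imported Department values to segment labels;
# in SharePoint these would be term IDs from the special Departments term set.
DEPARTMENT_SEGMENTS = {
    "Executive": "segment-executive",
    "Sales": "segment-sales",
}

def apply_user_segment(query_properties, department):
    """Attach the current user's department segment to the query
    properties so a query rule can match on it.  Unknown departments
    add nothing, and the query runs as a plain, unsegmented query."""
    segment = DEPARTMENT_SEGMENTS.get(department)
    if segment is not None:
        query_properties.setdefault("UserSegments", []).append(segment)
    return query_properties
```

Everything after that point – the extra queries, promoted results, or ranking changes – is handled by the query rule, not the web part.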

So, to help you along in using this feature, I’m just attaching my entire Visual Studio project – the compiled web part assembly, solution, and source code – for the web part that adds the current user’s department to the user segmentation.  After that you can do with it what you will – use it as is, or use it as the basis to refine or write your own web part for managing user segmentation.  There are a couple of points I wanted to highlight about the use of user segmentation and this web part when you read the post I linked to above:

  • When you create the query rule, by default it’s configured to query the catalog for the publishing site.  If you go with this option then you won’t get any search results.  Instead, choose the option to query “All Sources”.  The picture of the query rule configuration in the post shows this but does not call it out.  Since you need to change the default behavior for this to work, I am just calling it out.
  • In the post it talks about using a different web part to display the results of whatever content you are highlighting when using the user segmentation.  In this case (as the post alludes to), my web part inherits from the ContentBySearchWebPart, so you can use the control both to set the user segmentation as well as to display the highlighted content.  The only thing that’s a little different from what’s described in the post, is that when you add the web part to the page you will have one different value in the Settings property for the web part.  In the Settings, just set the “Query results provided by” property to “This web part”.

That’s it – you are ready to go now.  Hopefully you will all find some interesting scenarios for user segmentation in your searches.  For my particular scenario when I was writing this, I was targeting some special training for employees that belonged to an “Executive” department.  As a result, when they hit the page where my web part is being used, they see this banner and link for special insider trading training that they will all have to take:

You can download the attachment here:

Running Client Script in a Display Template After Rendering is Complete in SharePoint 2013

My friend Bryan asked me a good question the other day about how to create a display template that would do its rendering thing, but then run his own javascript function afterwards.  After doing some snooping around, one of my other buddies Jesus was good enough to share the magic mystery meat to this problem.  To invoke your own function after the template renders, you need to add this to the javascript section of your display template – by that I mean the javascript you add below the first <div> tag in the page, with any other js you will use for rendering your template.  In that javascript you need to add a callout like this:

AddPostRenderCallback(ctx, function()
{
     //code to execute
});


Now, this alone was not enough to solve Bryan’s problem.  He needed to obtain a ClientContext for some additional CSOM calls he wanted to make.  As it turns out, just trying to create it in this delegate doesn’t work because we don’t load all of the scripts in the search results page.  So, in order to get the current context you can call Srch.ScriptApplicationManager.get_clientRuntimeContext().  If there are other scripts that you need to load in your display templates you can use the EnsureScriptFunc method, like this:  EnsureScriptFunc("sp.js", "SP.ClientContext", function () { //callback });