OAuth and the Rehydrated User in SharePoint 2013 – How'd They Do That and What Do I Need to Know?

First let me say that one of the things I like about blogging about random SharePoint mystery meat is the fact that I can use completely colloquial language like my blog title.  Something that’s nearly impossible to get away with, unless, say, you are creating a new version of SharePoint.  Anyone see the amusingly social messages in SharePoint 2013?  “Sorry about the wait, I’m almost done”, and similar such things.  I find it all the more amusing because it’s interspersed with things like the HRESULT my friend Tom got the other day.  Hmm…the more things change, the more they stay the same. 

But onto the topic at hand.  I've mentioned oAuth a time or two in this blog already when discussing some of the new SharePoint 2013 features.  And while I'm still not going to go whole hog into "what is oAuth", because we have a whole team of writers working on that, I do want to expand again ever so slightly on some of the ways in which it is used.  The best example of an oAuth trust is probably the Remote SharePoint Index for search – that's what allows a person in one farm to issue a query that gets routed to another SharePoint farm, where the remote farm is able to reconstruct the user's identity so that search results can be properly security trimmed.  It's also used in other scenarios like the new app model (i.e. does this user have rights to access the content the app is requesting), between server applications like SharePoint and Exchange (does this user have rights to the mailbox content), and many others.  The Remote SharePoint Index is my favorite example though, because I think it's the best scenario for envisioning why we need to do the things we do in order to make this all work as expected. 

So let's start at the beginning – how can FarmA "make a Steve" that looks like "Steve" on FarmB?  Well it all starts with the User Profile Application.  So let's say that Steve is on FarmB and issues a query.  That query gets sent over to FarmA, and along with that query are some attributes about Steve.  By default, those attributes are going to be Steve's SMTP address, SIP address, account name, and name identifier.  When FarmA gets that request, the first thing it's going to do is a lookup in its local User Profile Application; it's going to look for a profile that matches the values for Steve that were sent over.  This is why it's so important to make sure your UPA is healthy and populated in SharePoint 2013, and why I wrote this blog article about doing that:  http://blogs.technet.com/b/speschka/archive/2012/08/08/mapping-user-profiles-for-saml-users-with-an-ad-import-in-sharepoint-2013.aspx.
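By the way, if you want to sanity check that a given user actually has a profile in the UPA before you count on rehydration working, you can do it with a little server-side PowerShell from the SharePoint 2013 Management Shell.  This is just a minimal sketch – the site URL and the claims-encoded account name are hypothetical, so adjust them for your environment:

# get a service context from a site covered by the User Profile Application
$site = Get-SPSite "http://farmA"
$ctx = Get-SPServiceContext $site

# look for a profile matching the claims-encoded account name
$upm = New-Object Microsoft.Office.Server.UserProfiles.UserProfileManager($ctx)
$account = "i:0#.w|contoso\steve"   # hypothetical Windows claims account
if ($upm.UserExists($account))
{
    $up = $upm.GetUserProfile($account)
    $up["AccountName"].Value
}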

Okay, so now FarmA has found the Steve user profile, what can it do with that?  Well the answer from here is "it depends", which is why it's so important to plan for this aspect of your deployment.  The thing it depends on is the authentication type you are using – here's how:

  • Windows claims – if you are using Windows claims, then for the most part everything you need is in the user profile.  It has your account name and it has your AD group memberships.  What I mean by “for the most part”, I’ll explain in a bit.  But the short version is if you’re using Windows claims then you’re pretty much good.
  • Forms Based Auth claims – if you are using FBA then there are a couple of things you need to know.  The first is that you need a way to populate your UPA and keep it up to date.  If you just happen to be using FBA with the LDAP provider and your directory is actually Windows Active Directory, then you’re in a pretty good state.  You can create a profile connection to Active Directory and just associate it with the FBA provider in a manner similar to what I described in the post I linked to above.  In most cases though, AD is not going to be your provider, which means you will have to write some custom code to populate the UPA.  That should be enough to get you the only attribute we should really care about for FBA users, which is account name.  As you all know though, “for the most part” (again, to be explained in a bit), the rest of your data comes from your role provider.  Well the really cool thing we do here is that when we rehydrate an FBA user, we also invoke the associated role provider, so it’s just like the FBA user logged on locally.  That allows us to grab all of the role claims for an FBA user.
  • SAML claims – this story is similar to FBA, in that the first thing you need to do is to populate your UPA.  If you're lucky, your users are in AD and you can just import them directly following the guidance in the linked blog post above.  If you're not lucky, then you'll need to find a way to connect to a source directory and import from there.  This of course is probably most complicated with SAML claims because you could have anywhere from one to many directories, and you may not even own all of them (i.e. maybe you have partners, are federating with ACS and using Facebook or another provider, etc.).  Be that as it may, if you want all this "stuff" to work, then you will need to find a way to populate your UPA.  There's a second, more important point here though – when you log in as a SAML user, you get a set of claims from your Identity Provider (IdP).  The user rehydration process has no way to simulate that login.  That's just the nature of SAML, right – you could get redirected any number of times and be given any number of authentication prompts and authentication types (like two-factor auth) that we could never account for.  So what does that mean to you?  You really need to add your claims via claims augmentation if you want to use them to secure your content and authorize access to that content via this user rehydration process.  You are not going to get claims from the IdP during rehydration, so if you want them, you grant them locally.  This is the "for the most part" that I was mentioning above, which I will explain now.
  • “For the most part” – what does that mean?  Well by now hopefully it’s become clear – whether you’re a Windows user, FBA user or SAML user, in addition to the claims you get from your authentication provider, you can also have additional claims added via augmentation:  http://blogs.technet.com/b/speschka/archive/2010/03/13/writing-a-custom-claims-provider-for-sharepoint-2010-part-1.aspx.  The one other thing that we do during the rehydration process is to invoke all of the registered custom claims providers, so that we can also grab any additional claims for the rehydrated user that they would have received if they had logged on locally and had those providers invoked.
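Since rehydration invokes all of the registered custom claims providers, it can be handy to see exactly what is registered on a farm.  This one-liner lists them all, along with whether they are enabled and used by default:

Get-SPClaimProvider | Select-Object DisplayName, IsEnabled, IsUsedByDefault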

This is why I like the Remote SharePoint Index scenario so much for explaining the planning required here.  As you can imagine, within a farm you may grant rights to content based on a Windows group, an FBA role, a SAML claim, or any claim added via augmentation.  If you don’t possess those claims when you query for content, then it will be security trimmed out and you will not see it.  So you can see how important it is that every claim you are granted when you log in locally, you also get when we rehydrate a version of you.

There’s a lot of potential planning that goes into making all of this work, so hopefully this will help you identify the major moving parts so you know what to focus on.

Setting Up an oAuth Trust Between Farms in SharePoint 2013

One of the things you’re likely to hear a lot about in SharePoint 2013, and I may end up writing a lot about, is oAuth.  In SharePoint 2013 oAuth is used to establish a trust between two applications for purposes of establishing the identity of a principal (user or application).  In SharePoint you will use oAuth trusts between SharePoint and things like Exchange and Lync, with ACS or individual application developers who are using the new cloud app model, or even between two different SharePoint farms for things like the remote SharePoint index feature in Search.


What oAuth does NOT do is become an authentication provider for people; you will still use New-SPTrustedIdentityTokenIssuer to create those trusts to your identity providers.  For oAuth trusts we have a new cmdlet that is very similar in name, called New-SPTrustedSecurityTokenIssuer.  When we establish this kind of trust with a security token issuer we call it an S2S trust, which stands for "server to server".  Remember this acronym because you will start seeing it a lot in SharePoint 2013.  In this post I'm going to talk through some of the particulars required to create this trust.


First it’s worth pointing out that many features that require an S2S trust will establish this trust themselves.  They may do that via feature activation, or feature teams may provide you a PowerShell script or cmdlet to run that creates the trust as part of turning on their feature.  There will be times when you need to do it yourself though, and that’s what this post is about.


One of the things you'll need to resolve first is whether you are going to be using SSL or not.  The reality is that in most cases in SharePoint 2013, you should probably use SSL.  The reason I say that is because there are so many scenarios that use oAuth in SharePoint 2013, and when you do you are passing around a cookie with an access token.  That access token is like a key that unlocks the door to data.  The access token is signed by a certificate so it can't be spoofed by someone who creates their own access token, but you don't want it flying around in clear text because in theory someone could grab that cookie and replay it for the duration of the cookie lifetime.  SSL protects you from that cookie replay attack, the same way that you would use SSL with a forms based auth site for the same reason.  That being said, there are still reasons why you may want to run your sites over HTTP – you're in a test environment, you're building out a dev environment, you're running entirely on an internal network and don't feel it's a risk, etc.  I'm not here to judge – I'm just here to explain.  🙂


STEP 1:  Configure the STS

There are a couple of settings in the configuration of SharePoint’s security token service (STS) that you may want to tweak if you are not using SSL.  You can get all of the STS configuration settings with this cmdlet:  Get-SPSecurityTokenServiceConfig.   There are two ways to establish a trust – one is with a certificate and one is using a new oAuth metadata endpoint that all SharePoint farms have.  Using the metadata endpoint is the easiest way to go, but if that endpoint is not SSL secured then you need to set the AllowMetadataOverHttp property of the SharePoint STS to true.  If you are not going to be running your web apps over SSL, then you will need to set the AllowOAuthOverHttp property to true as well.  Here’s a little PowerShell that demonstrates how to set these properties:


$c = Get-SPSecurityTokenServiceConfig
$c.AllowMetadataOverHttp = $true
$c.AllowOAuthOverHttp = $true
$c.Update()
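If you want to confirm the changes took effect, you can just read the properties back:

Get-SPSecurityTokenServiceConfig | Select-Object AllowMetadataOverHttp, AllowOAuthOverHttp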


STEP 2:  Create the Trust

Once the STS is configured as required, we can look at how to establish the trust from one farm to another.  As I mentioned above, all SharePoint farms now have a metadata endpoint that is used to supply trust information, including the access token signing certificate.  That metadata endpoint is at /_layouts/15/metadata/json/1.  If you actually try to navigate to that in a browser you will be prompted to save it, which you can do to examine it.  What you will find if you open it up in notepad is that it is just a JSON payload.  It includes the name identifier for the STS (which it calls "issuer"), along with a serialized version of the token signing certificate (which it describes as the "value" for the key "x509certificate").  If you look a little further at the data, you'll see that the issuer is actually the servicename + "@" + realm values.  It also matches the NameIdentifier property on the STS; this information is important for reasons I will explain in a bit.
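If you'd rather not mess with saving the file and opening it in notepad, here's a little sketch that fetches the metadata endpoint and pulls out the issuer.  It assumes PowerShell 3.0 for ConvertFrom-Json; the "issuer" key is the one I described above, but don't treat the rest of the payload's shape as gospel:

# fetch the oAuth metadata endpoint and parse the JSON payload
$response = Invoke-WebRequest "http://FARM_A/_layouts/15/metadata/json/1" -UseBasicParsing
$metadata = $response.Content | ConvertFrom-Json

# the issuer is servicename + "@" + realm, and matches the NameIdentifier property on the STS
$metadata.issuer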


In this example let’s say that FARM_B needs to trust calls from FARM_A, because FARM_A is going to use FARM_B as a remote SharePoint index.  Also, let’s say that FARM_A has a web application at http://FARM_A.   To create the trust we would run the New-SPTrustedSecurityTokenIssuer cmdlet on a server in FARM_B like this (I’ll explain why I’m using the “$i = “ stuff later in the post):

$i = New-SPTrustedSecurityTokenIssuer -Name FARM_A -Description "FARM_A description" -IsTrustBroker:$false -MetadataEndPoint "http://FARM_A/_layouts/15/metadata/json/1"

Now, let's say that you are setting up a trust with a services-only farm.  You don't want to create a web application, site collection and SSL certificate just so you can create a trust from it.  So, we have a second method that you can use to establish the trust using the New-SPTrustedSecurityTokenIssuer cmdlet.  In the second form you just provide the token signing certificate and the name identifier.  You get the token signing certificate just like you did in SharePoint 2010 – go to a server in the farm, run the MMC, add the Certificates snap-in for the Local Computer, look in the SharePoint…Certificates node, and the first certificate in the list is the one you want – just save it to the local drive without the private key as a .cer file.  You need the certificate and the NameIdentifier attribute of the STS that I was describing above in order to establish the trust.  The cmdlet in that case looks like this (it assumes you have copied the STS certificate to a file called C:\sts.cer on a server in FARM_B):


$i = New-SPTrustedSecurityTokenIssuer -Name FARM_A -Certificate "C:\sts.cer" -RegisteredIssuerName "00000003-0000-0ff1-ce00-000000000000@72da1552-085a-49de-9ecb-73ba7eca8fef" -Description "FARM_A description" -IsTrustBroker:$false
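Incidentally, if you'd rather not fool around with the MMC, you should also be able to export the signing certificate and build the RegisteredIssuerName value with PowerShell on a server in FARM_A.  Treat this as a sketch rather than gospel – it assumes the SigningCertificate property hangs off the LocalLoginProvider on the STS configuration:

# export the STS token signing certificate (public key only) to a .cer file
$sts = Get-SPSecurityTokenServiceConfig
$certBytes = $sts.LocalLoginProvider.SigningCertificate.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Cert)
[System.IO.File]::WriteAllBytes("C:\sts.cer", $certBytes)

# build the RegisteredIssuerName value: servicename + "@" + realm
$realm = Get-SPAuthenticationRealm
"00000003-0000-0ff1-ce00-000000000000@" + $realm

Copy the resulting C:\sts.cer over to a server in FARM_B, and use the string the last line emits as the -RegisteredIssuerName parameter value.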


STEP 3:  Trust the Token Signing Certificate

Just like you do with an SPTrustedIdentityTokenIssuer, you need to add the certificate used to sign oAuth tokens to the list of trusted root authorities in SharePoint.  Again, you have two options for doing this:  if you created your trust via the metadata endpoint, you can establish the trust like this:

New-SPTrustedRootAuthority -Name FARM_A -MetadataEndPoint http://FARM_A/_layouts/15/metadata/json/1/rootcertificate

Otherwise, you can add it to the trusted root authority list just like you did in SharePoint 2010:


$root = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\sts.cer")
New-SPTrustedRootAuthority -Name "Token Signing Root CA Certificate" -Certificate $root
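Either way, you can confirm the certificate made it into the list like this:

Get-SPTrustedRootAuthority | Select-Object Name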


From a trust perspective, you're done at this point – your trust is established and you can now create new application principals based on it.  How you'll use it depends on the application itself; in the case of a remote SharePoint index I'll go ahead and finish out the scenario now for completeness.


STEP 4:  Create an App Principal (example only for remote SharePoint index)

There are two steps in this process – getting an app principal and granting it rights.  Remember from our scenario that FARM_B needs to trust calls from FARM_A because it is going to get queries for the remote SharePoint index.  So for my app principal I need to get a reference to the web app in FARM_B that FARM_A is going to use.  Once I have that reference then I can grant rights for FARM_A to use it.

To get a reference to the app principal you use the cmdlet like this:

$p = Get-SPAppPrincipal -Site http://FARM_B -NameIdentifier $i.NameId

IMPORTANT:  There is one important thing to note here, which I think will be especially common during the SharePoint 2013 beta.  You may get strange errors in PowerShell when you try to get the SPAppPrincipal.  What I've found is that if the available memory on your server drops below 5% then all WCF calls will fail.  Since this PowerShell cmdlet calls into a service application endpoint, which is hosted as a WCF service, the Get-SPAppPrincipal cmdlet fails when you are low on memory.  You can check the Application log in the Windows Event Viewer to see whether this is the cause of your problem.  This has happened to me multiple times so far, so chances are others will see it as well.

Note that as I described earlier in the post, I finally get to use my $i variable to grab the NameIdentifier of the STS in FARM_A.  Now that I have a reference to the app principal for the FARM_B web app, I can grant it rights like so:

Set-SPAppPrincipalPermission -Site http://FARM_B -AppPrincipal $p -Scope SiteSubscription -Right FullControl

There you have it – those are your options and methodologies for creating an oAuth trust between two SharePoint farms.  I’ll continue to dig into oAuth and the various uses and issues to be aware of over time on this blog.

UPDATE:  There are other steps you need to take to get a complete set of results back from all content sources when using the Remote SharePoint Index; for more details see https://samlman.wordpress.com/2015/03/02/getting-a-full-result-set-from-a-remote-sharepoint-index-in-sharepoint-2013/.

An Important Tip About Client ID Values for S2S Apps in SharePoint 2013

Here's something that might cost you a TON of time if you aren't careful, so please take a few minutes to read this.  You should be seeing some documentation pretty shortly that describes how to create what we call an S2S application, meaning a server-to-server trust application.  You will also see this called a "high trust application".  I won't go into the specifics of what exactly it is because there's a whole team of folks creating that content for you; however, one important thing you need to know is that in the process of creating these applications, you need to generate a GUID that's used to identify your application.  That GUID is used when you set up the trust between SharePoint and your application (using the New-SPTrustedSecurityTokenIssuer cmdlet), and it's also used in the AppManifest.xml for your SharePoint app as well as in the web.config for your hosted service that will be making calls into SharePoint.

The really important thing to know here is that you MUST MAKE ALL LETTERS IN THE GUID LOWERCASE – basically, the exact opposite of what I just did there.  🙂  I don't know if that will be the "case" when SharePoint 2013 has its final release, but that is the situation in beta 2.  This is important to know too because most people generate GUIDs when they're in Visual Studio, using the GUID generator tool that comes with it.  That tool generates GUIDs with uppercase letters, so you need to remember to convert them back down.  For example, if you get a GUID that looks like 759600FB-8517-4A23-8576-C17D2351894C, then you need to change it to 759600fb-8517-4a23-8576-c17d2351894c before you start using it in all the locations I described above.
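One easy way to sidestep the problem entirely is to generate your GUIDs in PowerShell instead of with the Visual Studio tool, since Guid.ToString() in .NET emits lowercase letters by default:

# new GUIDs from .NET are lowercase out of the box
[System.Guid]::NewGuid().ToString()

# or lowercase one you already generated with the Visual Studio tool
"759600FB-8517-4A23-8576-C17D2351894C".ToLower()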

If you don’t do this, you will get a 401 Unauthorized error when your application attempts to retrieve data from SharePoint.  If you do a Fiddler trace on the request and look at the Raw output from the SharePoint server response, you will see an error message that says “The issuer of the token is not a trusted issuer.”

OAuth, o365 APIs and Azure Service Management APIs – Using Them All Together

I've been spending some time lately fooling around with the o365 APIs.  Frankly, it has been a hugely frustrating experience; I haven't seen so many documentation gaps, examples that didn't work, and sample code that flat out wouldn't compile in a long time.  So, once I finally stumbled upon the "right" collection of code that worked to get me the all-important access token for the o365 APIs, I decided to take a quick detour to see if I could use the same approach to manage my Azure subscription with the Service Management APIs.  Turns out you can, and you can actually do so in a way that leverages both of the APIs together in kind of a weird way.  Here's the story of how it works (with a special twist at the end – you'll have to read all the way down to see it).

The first step in getting any of this working is to create an application in Azure.  In that app, you can grant the rights that it is going to require – to things like Office 365 and the Azure Service APIs.  The easiest way to do this scenario is to create the application using the Microsoft Office 365 API Tools for Visual Studio, which as of this writing you can find here:  https://visualstudiogallery.msdn.microsoft.com/a15b85e6-69a7-4fdf-adda-a38066bb5155. 

Once you have those installed, the next step is to create a new project – whatever project type is appropriate for what you are trying to accomplish.  Once you've done that you want to right-click on your project name and select "Add Connected Service".


When you add your Connected Service the first thing you'll do is click on the Register your app link.  It will first ask you to log in using your Azure organizational account.  This account needs to be an admin account – one that has rights to create a new application in an Azure Active Directory.


After entering your credentials Visual Studio will take care of registering the app with Azure.  It then presents you with a list of permission scopes, and for each one you can click on the scope to select which permissions in that scope you want your app to have.  Remember that when a user installs your application it will ask if it’s okay for the app to have access to these things you’re defining in the permissions list. 

In this case I'm just going to select the Users and Groups scope and select the permissions to Read directory data and Access your organization's directory.


That should be enough permissions to use the o365 Discovery Service API; I’ll explain why we want to do that in a bit.  For now just click the Apply button, then click the OK button to save your changes.  When you do that Visual Studio will begin modifying the application with the permissions you requested.  In the Output window in fact you should see it add the Discovery Service NuGet package to your project (“ServiceApi” is the name of my project in Visual Studio): 

Adding 'Microsoft.Office365.Discovery' to ServiceApi.


When it's done with that it should open up a page in the Visual Studio browser window that includes some interesting information about what you need to do next, and of course, no details on how to actually do it.  That's why you're reading this blog post now, right??  🙂

The next thing we’re going to do is to go into the Azure Management Portal and add the other rights we want for our application, which is to use the Service Management API.  So open up your browser and log into the Azure Management Portal at https://manage.windowsazure.com.  Scroll down and click on the Active Directory icon in the left navigation, then click on your Azure Active Directory domain in the right pane.  When you go into the details for your directory, click on the Applications tab, then find the application you just created.  Click on it and then click on the Configure link.   

One of the things you'll see at the top of the page is the CLIENT ID field.  If you look at your project in Visual Studio, you should have had something like an app.config file added to it when you set up the application (app.config is added to a winforms project; the config file that is added will vary based on your project type).  Open up the app.config file and you will see an appSettings section and an entry for ida:ClientId.  The value for it should be the same as the CLIENT ID value you see for your application in the Azure portal.

Scroll down to the bottom of the page for the application to the “permissions to other applications” section.  You should see one entry in there already, and that’s for the permission we requested for working with the directory data.  Now click on the drop down in that section that says “Select application”, and select Windows Azure Service Management API.  On the drop down next to it that says “Delegated Permissions: 0”, click on it and check the box next to the item that says “Access Azure Service Management”. 


Once you’ve done that click the SAVE button at the bottom of the page.  You’re done now with all of the configuration you need to do in Azure, so let’s shift gears back into our project. 

The next thing we need to do is plug the application key values we need into our application.  That includes the client ID, redirect URI, the Discovery Service resource ID and the Azure Service Management resource ID.  You can get the client ID and redirect URI values from the app.config file that was added to your project.  If for some reason you can't find it in your project, you can also find it in the Configure tab for the application in the Azure management portal.  The client ID value is in the CLIENT ID field, and the redirect URI value is in the REDIRECT URIS list.  Finally, we are also going to use what's called the "Common Authority" for getting an authentication context – a class in ADAL (the Active Directory Authentication Library, which is used under the covers to get an access token to the o365 and Azure resources).

The resource IDs are fixed – meaning they are always the same, no matter what application you are using them from.  So with that in mind, I added those values to the code behind for my winforms application.


Okay, we're getting awfully close to actually writing some code…the last thing we need to do before we start is to add the ADAL NuGet package to our application.  So in Visual Studio open up the NuGet package manager, search for the ADAL package and add it.  When you're done you should see it AND the Microsoft Office 365 Discovery Library for .NET, which was added when we configured our application previously.


Now we’ll add the last couple of helper variables we need – one which is the base Url that can be used to talk to any specific Azure tenant, and the other is a property to track our AuthenticationContext: 

private const string BASE_TENANT_URL = "https://login.windows.net/{0}";
public static AuthenticationContext AuthContext { get; set; }


Okay, now let’s add the code that will get the access token for us, and then we’ll walk through it a little bit: 

private async Task<AuthenticationResult> AcquireTokenAsync(string authContextUrl, string resourceId)
{
   AuthenticationResult ar = null;

   try
   {
       //create a new authentication context for our app
       AuthContext = new AuthenticationContext(authContextUrl);

       //look to see if we have an authentication context in cache already
       //we would have gotten this when we authenticated previously
       if (AuthContext.TokenCache.ReadItems().Count() > 0)
       {
          //re-bind AuthenticationContext to the authority source of the cached token.
          //this is needed for the cache to work when asking for a
          //token from that authority.
          string cachedAuthority =
              AuthContext.TokenCache.ReadItems().First().Authority;

          AuthContext = new AuthenticationContext(cachedAuthority);
       }

       //try to get the AccessToken silently using the resourceId that was passed in
       //and the client ID of the application.
       ar = (await AuthContext.AcquireTokenSilentAsync(resourceId, ClientID));
   }
   catch (Exception)
   {
       //not in cache; we'll get it with the full oAuth flow
   }

   if (ar == null)
   {
       try
       {
          //request the token using the standard oAuth flow
          ar = AuthContext.AcquireToken(resourceId, ClientID, ReturnUri);
       }
       catch (Exception acquireEx)
       {
          //let the user know we just can't do it
          MessageBox.Show("Error trying to acquire authentication result: " +
              acquireEx.Message);
       }
   }

   return ar;
}
The first thing we’re doing is creating a new AuthenticationContext for a particular authority.  We use an authority for working with an AuthenticationContext so that we can manage the cache of AuthenticationResults for an authority.  ADAL provides a cache out of the box, so once we get an AuthenticationResult from an AuthenticationContext, we can just pull it from that cache without having to go through the whole oAuth flow all over again.  The AuthenticationResult by the way is where we get the things we need to access a resource – an access token and a refresh token. 

Once we’ve created our AuthenticationContext again then we can try and acquire the AuthenticationResult out of the cache, which is done with the call to AcquireTokenSilentAsync.  If there’s an AuthenticationResult in cache then it will be returned to you.  If not then it throws an exception, which we catch directly below that call. 

Once we get through that part of the code we look to see if we were able to get an AuthenticationResult.  If not, we’ll go ahead and use the oAuth flow to obtain one.  That means a dialog will pop up and the user will have to enter their credentials and approve our application to access the content we said we needed when we configured the permissions for our application. 

Now that we've got the code out of the way to get an AuthenticationResult, we can write the code to actually go work with our data.  This is where you can see the kind of interesting intersection between the o365 APIs and the Azure Service Management API that I was describing at the start of this post.  As I alluded to earlier, I really put this code together and wrote this post for two reasons:  1) there are SO MANY o365 API examples that either don't include clear instructions on how to obtain an access token, or include code that either doesn't compile or doesn't work.  This is proof I guess that APIs change, right?  Now you have an example that works (at least for today).  2) the Service Management API requires the tenant ID to work with it.  Well, as it turns out, when you get your AuthenticationResult from a call to the DiscoveryService through the o365 APIs, you will get the tenant ID back.

So you can use this pattern – call the DiscoveryService, use the CommonAuthority for the authContextUrl and authenticate, then you will have the tenant ID.  Then take the tenant ID, use the login Url for the specific tenant as the authContextUrl and get an AuthenticationResult to use with the Service Management APIs (we’ll end up getting the access token silently).  Once you have that you can take the access token from it to do whatever you’re going to do with Service Management APIs. 

Now, let me explain one other behavior that you may not be aware of, and then explain how that impacts the code I described above.  If you read my description of the pattern above, you may notice this:  in the first call to create an AuthenticationContext I use the CommonAuthority for the authContextUrl, which is https://login.windows.net/Common.  In the next call I use an authContextUrl of https://login.windows.net/myTenantIdGuid.  So if I’m using two different Urls to create my AuthenticationContext, then how could there be something in the cache for me to use when as I say above, on the second call I’m going to get the AuthenticationResult out of the cache?  Well, it turns out that when you create the AuthenticationContext with the CommonAuthority, after the user actually authenticates ADAL changes the Authority property of the AuthenticationContext from CommonAuthority to https://login.windows.net/myTenantIdGuid.  This is actually pretty cool.  The net result of all this is – I actually don’t need to authenticate into the DiscoveryService at all to get the tenant ID for the tenant being used.  So what’s the net of everything I’ve given you so far?  Well as I explained at the beginning, you now have some nice code that actually does work with the o365 APIs.  However now that you understand how it works you can see you really don’t need to call into the DiscoveryService to get the tenant ID, so you learned a little something about how the AuthenticationContext works in the process. 

Okay, so now that we know that, let's take a look at the code to get our subscription data from Azure.  You'll see that it makes just a single call to get an AuthenticationResult for working with the Service Management APIs.  Note that this code uses the new HttpClient library, which is installed as a NuGet package.


The code looks like this: 

try
{
   //base Url to get subscription info from
   string subscriptionUrl = "https://management.core.windows.net/subscriptions";

   AuthenticationResult ar =
       await AcquireTokenAsync(CommonAuthority, AzureManagementResourceId);

   //yeah this is me just being lazy; should really do something here
   if (ar == null)
       return;

   string accessToken = ar.AccessToken;

   if (!string.IsNullOrEmpty(accessToken))
   {
       //create an HTTP request for the subscription info
       HttpClient hc = new HttpClient();

       //add the header with the access token
       hc.DefaultRequestHeaders.Authorization = new
          System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", accessToken);

       //add the header with the version (REQ'D)
       hc.DefaultRequestHeaders.Add("x-ms-version", "2014-06-01");

       HttpResponseMessage hrm = await hc.GetAsync(new Uri(subscriptionUrl));

       if (hrm.IsSuccessStatusCode)
          SubscriptionsTxt.Text = await hrm.Content.ReadAsStringAsync();
       else
          MessageBox.Show("Unable to get subscription information.");
   }
}
catch (Exception ex)
{
   MessageBox.Show("Error trying to acquire access token: " + ex.Message);
}

So just to recap what we've been talking about…the first thing I do is get an AuthenticationResult using the CommonAuthority.  If the person that signs in does not have rights to use the Service Management APIs for the tenant they sign into, then they won't get the chance to grant the app permissions to do what it wants.  Otherwise the person will ostensibly grant rights to the app, and you'll get your AuthenticationResult.  From that we take our access token and we add two headers to our request to the subscriptions REST endpoint:  the authorization header with our access token, and a special Microsoft version header, which is configured per the Microsoft SDK.  Then we just make our request, hopefully get some data back, and if we do we stick it in the text box in our application.  Voila – mission accomplished!  What you get back is just XML describing each subscription in the tenant.
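Incidentally, if you just want to poke at the subscriptions endpoint without writing any C#, the same call works from PowerShell 3.0 or later.  This sketch assumes you already have an access token sitting in $accessToken (obtained from ADAL or anywhere else):

# same two headers as the C# version: the bearer token and the required version header
$headers = @{
    "Authorization" = "Bearer $accessToken";
    "x-ms-version" = "2014-06-01"
}
Invoke-RestMethod -Uri "https://management.core.windows.net/subscriptions" -Headers $headers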


My biggest regret here is that even though in some ways this is “simplified”, it still takes me nine pages of a Word document to explain it.  However I do think you have some good code that you can go out and run with today to start working with both the o365 APIs as well as the Azure Service Management APIs, so I think it’s worth it.  Just bookmark this post for a rainy day, and then when you’re ready to start developing on either SDK you know where to find the info to get you started. I’ve also attached to this post the complete source code of the application I wrote here. You’ll just need to update the client ID and return URI in order to use it. I’ve also included the original Word document from which this somewhat ugly post was pasted.

You can download the attachment here: