Analyzing and Fixing Azure Web Sites with the SCM Virtual Directory

There are so many things you do every day as part of operating and maintaining your Azure web sites.  They’re a common choice for developers because you get 10 free sites with your Azure subscription, and if you know what you’re doing you can spin that up into even more applications by using custom virtual directories as I’ve previously explained here:  https://samlman.wordpress.com/2015/02/28/developing-and-deploying-multiple-sharepoint-2013-apps-to-a-single-azure-web-site/.  That example is specific to using them for SharePoint Apps, but you can follow the same process to use them for standard web apps as well.

Typically, you go through your publishing and management process using two out of the box tools – Visual Studio and the Azure browser management pages.  What happens though when you need to go beyond the simple deploy and configure features of these tools?  Yes, there are third-party tools out there that can help with these scenarios, but many folks don’t realize that there’s also a LOT you can do with something that ships out of the box in Azure – the Kudu Services, or as I’ve called it above, the SCM virtual directory.

The SCM virtual directory is present in every Azure web site.  To access it, you merely insert “scm” between your web name and the host name.  For example, if you have an Azure web site at “contoso.azurewebsites.net”, then you would navigate to “contoso.scm.azurewebsites.net”.  Once you authenticate and get in, you’ll arrive at the home page for what they call the Kudu Services.  In this post I really just wanted to give you an overview of some of the features of the Kudu Services and how to find them, which I kind of just did.  🙂  At the end though I’ll include a link to more comprehensive documentation for Kudu.

Going back to my example, I found out about all of the tools and analysis available with the Kudu Services a few months ago when I was trying to publish an update to an Azure web site.  Try as I might, the deployment kept failing because it said a file in the deployment was being used by another process on the server.  Now of course, I don’t own the “server” in this case, because it’s an Azure server running the IIS service.  So that’s how I started down this path of “how am I gonna fix that” in Azure.  SCM came to the rescue.

To begin with, here’s a screenshot of the Kudu home page:

Kudu1

As you can see right off the bat, you get some basic information about the server and version on the home page.  The power of these features comes as you explore some of the other menu options available.  When you hop over to the Environment link, you get a list of the System Info, App Settings, Connection Strings, Environment variables, PATH info, HTTP headers, and the ever-popular Server variables.  As a long-time ASP.NET developer I will happily admit that there have been many times when I’ve done a silly little enumeration of all of the Server variables when trying to debug some issue.  Now you can find them all ready to go for you, as shown in this screenshot:

Kudu2

Now back to that pesky “file in use” problem I was describing above.  After trying every imaginable hack I could think of back then, I eventually used the “Debug console” in the Kudu Services.  These guys really did a nice job on this and offer both a Command prompt shell and a PowerShell prompt.  In my case, I popped open the Command prompt and quickly solved my issue.  Here’s an example:

Kudu3

One of the things that’s cool about this as well is that as I motored around the directory structure with my old-school DOS skills, e.g. “cd wwwroot”, the graphical display of the directory structure was kept in sync above the command prompt.  This really worked out magnificently; I had no idea how else I was going to get that issue fixed.

Beyond the tools I’ve shown already, there are several additional tools you will find, oddly enough, under the Tools menu.  Want to get the IIS logs?  No problem, grab the Diagnostic Dump.  You can also get a log stream, a dashboard of web jobs, a set of web hooks, and the deployment script for your web site, or open a Support case.

Finally, you can also add Site Extensions to your web site.  There are actually a BUNCH of them that you can choose from.  Here’s the gallery from the Site Extensions menu:

Kudu4

Of course, there are many more than fit in this single screenshot.  All of the additional functionality and the ease with which you can access it is pretty cool though.  Here’s an example of the Azure Websites Event Viewer.  You can launch it from the Installed items in your gallery and it pops open right in the browser:

Kudu5

So that’s a quick overview of the tools.  I used them some time ago, and then when I needed them a couple of months ago I couldn’t remember the virtual directory name.  I Bing’d my brains out unsuccessfully trying to find it, until it hit me when I looked at one of my site deployment scripts – they go to the SCM vdir as well.  Since I had such a hard time finding it I thought I would capture it here, and hopefully your favorite search engine will find enough keywords in this post to help you track it down when you need it.

Finally, for a full set of details around what’s in the Kudu Services, check out their GitHub wiki page at https://github.com/projectkudu/kudu/wiki.

Debugging a Windows Service Startup Method

It’s been a long time since I wrote a Windows service but…I was doing that today and realized that the more things change, the more they stay the same. It would seem that it’s still a painful and problematic process if you want to debug the startup of a custom Windows service you write in Visual Studio.  If you follow the published guidance around this (https://msdn.microsoft.com/en-us/library/7a50syb3(v=vs.110).aspx), it suggests a variety of work-arounds for doing this, but nothing that I would call an out of the box solution.

Well…there IS an out of the box solution.  So even though it may not be rocket science I guess it’s worth sharing.  If you want to break in on the OnStart event of your custom Windows service, just add this line in your code where you want to start debugging:  Debugger.Launch();
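Here’s a minimal sketch of what that might look like in an OnStart override – the class name MyService is just a placeholder for your own service class:

using System.Diagnostics;
using System.ServiceProcess;

public class MyService : ServiceBase
{
    protected override void OnStart(string[] args)
    {
        // Pops the "do you want to debug this process?" dialog as soon as the service starts
        Debugger.Launch();

        // ...the rest of your startup logic goes here
    }
}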

Compile your code, then install the Windows service.  When you start the service you should see a dialog that asks you if you want to debug the service – like this:


Just say yes and another dialog will pop up asking whether you want to start a new instance of Visual Studio or use an existing instance.  Of course you will have Visual Studio already up and going with the source code for your service loaded and breakpoints set.  Just pick that instance of Visual Studio and – voilà – you are debugging OnStart.  Old school sometimes works best.  🙂

Debugging Display Templates

Right after I published my last post on using custom display templates, of course one of the first questions I got was:  great – how do I debug them?  Well, there are a couple of ways that I’ve found to debug these:

  1. In your display template, add your own JavaScript after the first div tag and put in a debugger; statement.  Note that you MUST uncheck the option in IE to disable script debugging, and then restart the browser.  This is really cool because you can break in Visual Studio and get all your variables and query values:

 

 

 

  2. The second way is a little more “hard-coded”, so I don’t like it as much but so far it works well.  You need to:
    1. Press F12 to bring up the IE Developer window
    2. Click on the Script tab
    3. In the drop down with the list of script files select clientrenderer.js
    4. Look for the CoreRenderWorker function; I normally find it by going down to the second to last line of script and pressing the END key.
    5. Click and highlight the first line of code inside that function; it should be something like “var a;”
    6. Right-click on it and select Insert Breakpoint from the menu.
    7. Click the Start Debugging button.
    8. Go back to the browser and execute your query.
    9. When the debugger breaks in, click on the Locals tab on the right side of the window, and then click the plus sign next to the “c” variable to expand it.
    10. You can look at all of the variables in there, but generally you will want to click the Play button in the debugger and continue.  Each time a new set of code is loaded the “c” variable collapses, and that’s your clue that you should go back and expand it again to see what data it contains.  I generally find that you need to click the Play button 3 to 5 times, until an object called “CurrentItem” appears under the “c” variable.  That represents a single search result and allows you to peruse all of the values for the managed properties that you requested.  Super useful and does not require Visual Studio.

 

Troubleshooting Tip for Debugging Crawl Issues in SharePoint 2010

I recently came across a very nice troubleshooting methodology when I was trying to debug some authentication issues that were occurring during a SharePoint 2010 crawl.  I was getting some errors, and also having difficulty getting the information I needed out of the crawl log related to some other issues that were occurring.  Strangely enough, enter Fiddler to the rescue (www.fiddler2.com).

 

I’m sure most everyone is familiar with Fiddler so I won’t bother explaining it here.  The trick though was to get it to capture what was happening during the crawl.  I found that a very slick way to do this is to configure Fiddler as a reverse proxy for the crawl account.  The instructions for configuring Fiddler as a reverse proxy can be found here:  http://www.fiddler2.com/Fiddler/help/reverseproxy.asp.  The way I used it was as follows:

 

  1. Log into my crawl server as the crawl account.
  2. Configure Fiddler to be a reverse proxy as described above.
  3. Start Fiddler.
  4. Start a new crawl.

 

I had isolated my trouble sites into a separate content source.  So once I followed these steps I was able to see each request from the crawler to that content source, how it was authenticating, and what was happening.  Overall it proved very useful in understanding much more clearly what was going on during the crawl of those sites.

Using WireShark to Trace Your SharePoint 2010 to Azure WCF Calls over SSL

One of the interesting challenges when trying to troubleshoot remotely connected systems is figuring out what they’re saying to each other.  The CASI Kit that I’ve posted about other times on this blog (http://blogs.technet.com/b/speschka/archive/2010/11/06/the-claims-azure-and-sharepoint-integration-toolkit-part-1.aspx) is a good example, whose main purpose in life is providing plumbing to connect data center clouds together.  One of the difficulties in troubleshooting it in that case is that the traffic travels over SSL, which makes it fairly difficult to inspect.  I looked at using both NetMon 3.4, which now has an Expert add-in for SSL that you can get from http://nmdecrypt.codeplex.com/, and WireShark.  I’ve personally always used NetMon but had some difficulties getting the SSL expert to work, so I decided to give WireShark a try.

WireShark appears to have had support for SSL for a couple of years now; it just requires that you provide the private key used with the SSL certificate that is encrypting your conversation.  Since the WCF service is one that I wrote, it’s easy enough to get that.  A lot of the documentation around WireShark suggests that you need to convert the PFX of your SSL certificate (the format that you get when you export your certificate and include the private key) into a PEM format.  If you read the latest WireShark SSL wiki (http://wiki.wireshark.org/SSL) though, it turns out that’s not actually true.  Citrix actually has a pretty good article on how to configure WireShark to use SSL (http://support.citrix.com/article/CTX116557), but the instructions are way too cryptic when it comes to what values you should be using for the “RSA keys list” property in the SSL protocol settings (if you’re not sure what that is, just follow the Citrix support article above).  So to combine that Citrix article and the info on the WireShark wiki, here is a quick rundown on those values:

  • IP address – this is the IP address of the server that is sending you SSL encrypted content that you want to decrypt
  • Port – this is the port the encrypted traffic is coming across on.  For a WCF endpoint this is probably always going to be 443.
  • Protocol – for a WCF endpoint this should always be http
  • Key file name – this is the location on disk where you have the key file
  • Password – if you are using a PFX certificate, this is a fifth parameter that is the password to unlock the PFX file.  This is not covered in the Citrix article but is in the WireShark wiki.

So, suppose your Azure WCF endpoint is at address 10.20.30.40, and you have a PFX certificate at C:\certs\myssl.pfx with password of “FooBar”.  Then the value you would put in the RSA keys list property in WireShark would be:

10.20.30.40,443,http,C:\certs\myssl.pfx,FooBar

Now, alternatively you can download OpenSSL for Windows and create a PEM file from a PFX certificate.  I just happened to find this download at http://code.google.com/p/openssl-for-windows/downloads/list, but there appear to be many download locations on the web.  Once you’ve downloaded the bits that are appropriate for your hardware, you can create a PEM file from your PFX certificate with this command line in the OpenSSL bin directory:

openssl.exe pkcs12 -in <drive:\path\to\cert>.pfx -nodes -out <drive:\path\to\new\cert>.pem

So, suppose you did this and created a PEM file at C:\certs\myssl.pem; then your RSA keys list property in WireShark would be:

10.20.30.40,443,http,C:\certs\myssl.pem

One other thing to note here – you can add multiple entries separated by semi-colons.  So for example, as I described in the CASI Kit series, I start out with a WCF service that’s hosted in my local farm, maybe running in the Azure dev fabric.  And then I publish it into Windows Azure.  But when I’m troubleshooting stuff, I may want to hit the local service or the Windows Azure service.  One of the nice side effects of taking the approach I described in the CASI Kit of using a wildcard cert is that it allows me to use the same SSL cert for both my local instance and my Windows Azure instance.  So in WireShark, I can also use the same cert for decrypting traffic by just specifying two entries like this (assume my local WCF service is running at IP address 192.168.20.100):

10.20.30.40,443,http,C:\certs\myssl.pem;192.168.20.100,443,http,C:\certs\myssl.pem

That’s the basics of setting up WireShark, which I really could have used late last night.  🙂   Now, the other really tricky thing is getting the SSL decrypted.  The main problem it seems from the work I’ve done with it is that you need to make sure you are capturing during the negotiation with the SSL endpoint.  Unfortunately, I’ve found with all the various caching behaviors of IE and Windows that it became very difficult to really make that happen when I was trying to trace my WCF calls that were coming out of the browser via the CASI Kit.  In roughly 2 plus hours of trying it on the browser I only ended up getting one trace back to my Azure endpoint that I could actually decrypt in WireShark, so I was pretty much going crazy.  To the rescue once again though comes the WCF Test Client. 

The way that I’ve found now to get this to work consistently is to:

  1. Start up WireShark and begin a capture.
  2. Start the WCF Test Client.
  3. Add a service reference to your WCF (whether that’s your local WCF or your Windows Azure WCF).
  4. Invoke one or more methods on your WCF from the WCF Test Client.
  5. Go back to WireShark and stop the capture.
  6. Find any frame where the protocol says TLSV1.
  7. Right-click on it and select Follow SSL Stream from the menu.

A dialog will pop up that should show you the unencrypted contents of the conversation.  If the conversation is empty then it probably means either the private key was not loaded correctly, or the capture did not include the negotiated session.  If it works it’s pretty sweet because you can see the whole conversation, or only stuff from the sender or just receiver.  Here’s a quick snip of what I got from a trace to my WCF method over SSL to Windows Azure:

  • POST /Customers.svc HTTP/1.1
  • Content-Type: application/soap+xml; charset=utf-8
  • Host: azurewcf.vbtoys.com
  • Content-Length: 10256
  • Expect: 100-continue
  • Accept-Encoding: gzip, deflate
  • Connection: Keep-Alive
  • HTTP/1.1 100 Continue
  • HTTP/1.1 200 OK
  • Content-Type: application/soap+xml; charset=utf-8
  • Server: Microsoft-IIS/7.0
  • X-Powered-By: ASP.NET
  • Date: Sat, 19 Mar 2011 18:18:34 GMT
  • Content-Length: 2533
  • <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" blah blah blah

So there you have it; it took me a while to get this all working so hopefully this will help you get into troubleshooting mode a little quicker than I did.

Azure Development Tips for Debugging and Connection Strings

I just wanted to pass on a couple of tips for Azure development that I’ve recently found to be helpful.

The first is around connection strings.  When you use a connection string, you typically store it in the project properties.  The labs usually demonstrate reading the connection strings with something like this:

// read storage account configuration settings
CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
{
 configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
});

They then suggest that when you are testing and running in the local dev fabric you configure the setting to be “UseDevelopmentStorage=true”, and then when you’re ready to move to production you go back into your project settings and change it to point to your Azure storage.  In practice, I found it much simpler to just store your Azure storage details in the connection string all the time, but in your code use this rather than the sample above:

CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
{
    string connectionString;

    // Running under the Azure fabric (cloud or local dev fabric) – use the configured value
    if (RoleEnvironment.IsAvailable)
        connectionString = RoleEnvironment.GetConfigurationSettingValue(configName);
    else
        // Not running in the fabric (e.g. a unit test) – fall back to local development storage
        connectionString = "UseDevelopmentStorage=true";

    configSetter(connectionString);
});
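Once that publisher is registered (typically early in your role’s OnStart), you grab the account by setting name.  Just as a sketch – the setting name DataConnectionString below is only an example, use whatever name you defined in your service configuration:

// Resolves the named setting through the publisher registered above
CloudStorageAccount account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");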

The second tip is related to debugging.  It’s sort of second nature to just hit F5 and start running.  Because you have multiple processes going on with the local dev fabric I’ve found it’s easier to start up your solution by going to the Debug menu and choosing Start Without Debugging.  That will compile your app, deploy it to the local dev fabric and start it up.  Then you can attach to the w3wp.exe process for your ASPX or WCF application, set your breakpoints, and hit it using something like the WCF Test Client that comes with Visual Studio 2010. 

 

Debugging Event Receivers in SharePoint 2010

Well folks, it’s been a while since I’ve added to the blog, and ironically I find myself on Christmas morning finally chasing down a little “bugger” that was causing me considerable grief yesterday afternoon.  So Merry Christmas, and here are a couple of tips that may save you some time when you are trying to debug event receivers in SharePoint 2010.

The first issue that remains troublesome, even with Visual Studio 2010, is debugging the activation of your feature.  For example, in my case I have a feature and it contains an event receiver.  For my particular scenario I wanted to bind my event receiver to a specific list, so I had a feature receiver, and in the activation callout I did the binding.  Of course, when there are problems in there, how do we debug that?  VS 2010 just tries to push out the solution, deploy it, and activate the features.  Well, I found the simplest way to do that is to use our new friend PowerShell.  I let VS 2010 do its thing, and then I open up a PowerShell window.  In PowerShell, I use the Enable-SPFeature cmdlet to go ahead and activate my feature.  Before I execute that cmdlet, in VS 2010 I use the Tools…Attach to Process menu to select the PowerShell window I have open.  Then when I execute the PowerShell command, VS will hit the breakpoint in my FeatureActivated event and I can step through my code.  Not completely straightforward, but once you get the pattern down it proves to be quite useful.
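For reference, here’s a rough sketch of the kind of FeatureActivated binding code I’m describing – the list name, namespace, and receiver class name are just placeholders:

using Microsoft.SharePoint;

public class MyFeatureReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        // This sketch assumes a Web-scoped feature
        SPWeb web = (SPWeb)properties.Feature.Parent;
        SPList list = web.Lists["Announcements"];

        // Bind an ItemAdded event receiver from this assembly to the list
        list.EventReceivers.Add(
            SPEventReceiverType.ItemAdded,
            System.Reflection.Assembly.GetExecutingAssembly().FullName,
            "MyNamespace.MyListItemReceiver");
    }
}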

The second issue that was really driving me crazy was debugging the event receiver itself.  In SharePoint 2007 I would just attach to the w3wp.exe process for my web application and I was good to go – I’d hit my breakpoint and debug away.  In SharePoint 2010 I was having no such luck.  I was trying all sorts of things but could not get my debugger to step through the code.  What was also strange is that the data my app would write to the event log was completely out of whack with my latest compiled version of the event receiver – it was from a build I had already changed some time ago.  Two age-old tricks finally gave me the clues I needed to solve this puzzle:  1) adding a System.Diagnostics.Debugger.Break(); as the first line in my code and 2) rebooting the machine.  The next time my receiver fired, the Debugger.Break() line forced one of those dialogs we’ve all seen before to appear – the one that says you’ve hit an unhandled exception, what do you want to use to debug it?  When that dialog comes up, it also says in which process the problem occurred.  Well, it turns out that code is now running in the OWSTIMER.EXE process, no longer in w3wp.exe.  Aha!  I hate it when that happens.  That not only explains why I couldn’t hit my breakpoints, it also explains why the event log data my code was writing seemed dated; it was – there was an old version of my assembly still loaded in OWSTIMER.
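Here’s a minimal sketch of that trick in an item event receiver – the class and method are just illustrative, and the only line that matters is the Debugger.Break() call:

using System.Diagnostics;
using Microsoft.SharePoint;

public class MyListItemReceiver : SPItemEventReceiver
{
    public override void ItemAdded(SPItemEventProperties properties)
    {
        // Forces the "unhandled exception / choose a debugger" dialog, which also reveals the hosting process
        Debugger.Break();

        // ...the rest of the event receiver logic
    }
}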

So, I removed the Debugger.Break() statement from my code, and now my process works like this:

  1. Compile
  2. Deploy
  3. net stop sptimerv4
  4. net start sptimerv4
  5. Run my code and hit my breakpoint

I’m actually adding the net stop and net start commands as post-build steps to my solution, so it will all work seamlessly moving forward.  Hope this helps you if you get stranded in this same scenario.

Happy Holidays!