Stay Informed with New Malware Monitoring from Office365Mon.Com

It seems like organizations of all types and sizes are under digital attack these days.  Using email to transmit malware and then compromise an organization is a common way in which these kinds of attacks strike.  Today Office365Mon is launching a new service to help keep you in the know about when and where these attacks are directed at your organization.  In conjunction with the Threat Intelligence features of Office 365, we have a new feature we call Threat Intelligence Monitoring.

The Threat Intelligence features in Office 365 are included for users that have an E5 license.  The Office 365 E5 license includes numerous additional features beyond basic email and SharePoint, and Threat Intelligence is one of them.  The Threat Intelligence feature in Office 365 is a collection of insights used in analyzing your tenant to help you proactively find and eliminate threats.  The Threat Intelligence Monitoring feature in Office365Mon builds on that in some important ways.  For example, you can:

  • Get notified the first time a new piece of malware is sent to your organization. Know when a new type of malware has been targeted at your company so you can make sure you have the tools and plans in place to defend yourself.
  • Get notified when you get more than a certain number of malware messages within a given time period. Set thresholds for malware volume so you know if you are being targeted for broader malware attacks.
  • Get notified when any user gets more than a certain number of malware messages in any given day. Be in the know and in control if any of your users are being singled out and specifically targeted with malware attacks, so you can quarantine and limit the potential damage.

Configuring these options, like all features in Office365Mon, is super simple.  A few mouse clicks and you are ready to go:

[Screenshot: configuring Threat Intelligence Monitoring]

Once configured, you’ll have all of the standard Office365Mon notification options to keep you in the know when there’s a problem:  email messages, text messages, and our webhook feature.  In addition to the notifications, there are a number of interesting reports that we provide with Threat Intelligence Monitoring to help you analyze the nature of these attacks against your organization.
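Before getting to the reports, here's a quick illustration of the webhook option.  Your endpoint just needs to accept an HTTP POST from our service; the sketch below is purely illustrative, with a hypothetical endpoint URL, and it leaves the payload handling generic since your app decides what to do with the notification:

    using System;
    using System.IO;
    using System.Net;

    class WebhookReceiver
    {
        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://+:8080/office365mon/");   // hypothetical endpoint
            listener.Start();

            while (true)
            {
                HttpListenerContext context = listener.GetContext();
                using (var reader = new StreamReader(context.Request.InputStream))
                {
                    string payload = reader.ReadToEnd();   // the notification body
                    Console.WriteLine("Notification received: " + payload);
                    // parse the payload and raise an alert in your own systems here
                }
                context.Response.StatusCode = 200;
                context.Response.Close();
            }
        }
    }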

The first report shows the trend of malware entering your organization during the current month:

[Screenshot: current month malware trend report]

In addition to the trend for the current month, there’s a similar chart that shows you a rolling two-month period so you can see what’s being targeted at you over a longer period of time.

You can also get an overview of the top 10 targeted users within your organization, so you can ensure that they are following security best practices:

[Screenshot: top 10 targeted users report]

There are other reports that show you, both for the current month as well as historically, data on the different ways in which malware has been targeted at your organization.  For example, here's one that shows the different malware file names that were sent into your organization:

[Screenshot: malware file name summary report]

In addition to this, you can view this kind of summary data based on who sent malware-infected messages, summaries of the senders' IP addresses, summaries based on the email subject so you can look for patterns there, summaries on file type and file name as shown above, and also information on when the malware was detected.

We're also taking this information and have added it into our Microsoft Cloud Command Center.  For those of you who aren't familiar with it, the Cloud Command Center brings together information that previously existed as islands of data and loads up all of the key metrics you need about everything that's going on with your Microsoft cloud services.  We've plugged the malware trend report and user targeting report into the Cloud Command Center for a really great overview of the health of your organization and its cloud services:

[Screenshot: Cloud Command Center with malware trend and user targeting reports]

We think features like Threat Intelligence Monitoring really expand and strengthen the base of important information you need to be in the know and in control of your organization and its cloud software services.  It all starts in Office 365, so you can help yourself get connected with this information by incorporating the E5 license in your organization.

The Threat Intelligence Monitoring service in Office365Mon is available in Preview today for everyone.  As with all new Office365Mon features, all existing customers have this feature turned on for the next 90 days to try it out.  All new Office365Mon customers will also have this feature enabled for 90 days so they can see it working in their environment.  As always, we would love to get feedback on how we can improve it and make it more useful to you, so please feel free to send it our way.  Licensing and pricing are not yet available for the Threat Intelligence Monitoring service; they will be set in Q1 of 2018.

We really have a wide-ranging set of tools to help you with your Microsoft cloud services now.  For monitoring Office 365 performance and availability, go to https://office365mon.com.  For monitoring Azure performance and availability, go to https://azureservicemon.com.  To monitor malware attacks using Threat Intelligence, go to Office365Mon.Com and create your Office365Mon subscription; then you can configure Threat Intelligence monitoring at https://www.office365mon.com/Configure/Threats.

Thanks, and I hope everyone has a great holiday season!

From Sunny Phoenix,

Steve


Monitoring Large Lists in SharePoint Online with Office365Mon

One of the enduring realities I saw over and over in my many years working at Microsoft with SharePoint is that customers LOVE the lists they can create in SharePoint.  They're super easy to set up, you can quickly do all kinds of sorting, filtering and searching on them, and it requires no technical expertise to get up and running with them.  This led to another enduring reality, which is that many customers started loving lists TOO much.  I saw many, many customers over the years that had lists that had just exploded in size.  As these lists grew larger, the performance of using them got worse and worse.  This problem was also compounded by the fact that many developers saw SharePoint lists as a quick and easy place to store data for their applications.  That meant even bigger list sizes, and more people using them more often.

Over the years, we developed lots of documentation and options for improving the performance of your lists.  As customers have moved to SharePoint Online in Office 365 though, we would occasionally hear people ask if it had the same large list limitations as SharePoint on premises does…and the answer is yes, it does.  Now as more customers are moving their SharePoint on premises data to SharePoint Online, we see increasing concern about how the lists they do have are going to perform once it’s all up in Office 365.  Fortunately, at Office365Mon, we’ve just released a new feature designed specifically to help you stay on top of this issue.

List Monitoring is a feature that lets you select one or more lists in SharePoint Online for us to monitor.  For the lists that we’re monitoring, we will do a couple of things:  first, we’ll issue health probes for each list that we’re monitoring and render the default view for it to see what the performance is like.  That’s typically one of the first places where you’ll see performance issues with a large list.  You can configure List Monitoring so that it will send you a notification if it takes longer than “x” seconds to render the default view, where “x” is a number that you decide.
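Under the covers, this kind of probe is conceptually simple.  Here's a rough sketch, using the SharePoint client object model, of how you might time a list's default view yourself.  To be clear, this is just an illustration and not Office365Mon's actual implementation; the site URL, list name, credentials and threshold are all hypothetical placeholders:

    using System;
    using System.Diagnostics;
    using System.Net;
    using System.Security;
    using Microsoft.SharePoint.Client;

    class ListViewProbe
    {
        static void Main()
        {
            var pwd = new SecureString();
            foreach (char c in "password") pwd.AppendChar(c);   // hypothetical credential

            var creds = new SharePointOnlineCredentials("admin@contoso.com", pwd);
            var ctx = new ClientContext("https://contoso.sharepoint.com/sites/team") { Credentials = creds };

            List list = ctx.Web.Lists.GetByTitle("Projects");   // hypothetical list name
            ctx.Load(list, l => l.DefaultViewUrl);
            ctx.ExecuteQuery();

            // request the default view page and time how long it takes to come back
            var viewUrl = new Uri(new Uri("https://contoso.sharepoint.com"), list.DefaultViewUrl);
            var request = (HttpWebRequest)WebRequest.Create(viewUrl);
            request.CookieContainer = new CookieContainer();
            request.CookieContainer.SetCookies(viewUrl, creds.GetAuthenticationCookie(viewUrl));

            var sw = Stopwatch.StartNew();
            using (request.GetResponse()) { }
            sw.Stop();

            if (sw.Elapsed.TotalSeconds > 3)   // the "x seconds" threshold you decide on
                Console.WriteLine("Default view took " + sw.Elapsed.TotalSeconds + " seconds");
        }
    }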

The second thing we’ll do is keep tabs on how large the list is, i.e. how many items it contains.  Again, you can set a threshold for us to look for, and when a monitored list gets bigger than that threshold, we’ll send you a notification to alert you to it.  So, for example, if you’re worried about a large list approaching that magic 5,000 item limit, you can have us notify you when it’s getting close.  Here’s a screen shot of where you configure the monitoring thresholds:

[Screenshot: configuring List Monitoring thresholds]

Selecting the lists to be monitored is also quite simple – we provide you with a collection of all of the lists in the SharePoint Online site that we’re monitoring, and you can just check boxes next to the lists you want us to monitor for you.  It can be any of the lists that come out of the box with SharePoint, or any custom list that you’ve created:

[Screenshot: selecting the lists to be monitored]

Once we’ve started monitoring lists for you, not only will we notify you according to the thresholds you’ve configured, but as you’ve come to expect from Office365Mon, we also have a nice set of reports you can use to see where you stand.  To begin with, you can see the performance of the recent health probes we’ve issued against monitored lists in our Average Response Time report.  It shows the performance of all of the resources that we’re monitoring for you, including monitored lists.  We also have a new report that shows you the average performance each month just for your monitored lists:

[Screenshot: monthly average response time report for monitored lists]

In addition to that, we have a report that shows you the size of your monitored lists each day, so you can visualize any growth trends that might be happening that you need to get in front of:

[Screenshot: daily list size report]

We also provide a monthly view of the average size of each monitored list, so you have a longer-term view of how rapidly your lists are growing:

[Screenshot: monthly average list size report]

Being aware of large lists and their impact on performance is one of the best ways to ensure a good experience for your users.  I've heard many, many times from customers who say "our site is slow".  There are lots of reasons why that might be, but a couple of the most common are slow query times and large lists.  At Office365Mon we've provided monitoring for your query execution time for nearly a year now.  With the new List Monitoring feature, now you can also know when you have large list performance problems.  Once you know that, you can start working on a mitigation strategy – splitting the data out into multiple lists, creating customized views of the data, and so on.  There are a lot of different things you can do to improve performance, but if you don't know you have a problem then you'll forever be stuck wondering why your users keep telling you that your "site is slow".  Take advantage of features like these to stay in the know and stay in control of your Office 365 tenant, and keep everyone happy.  Start by visiting us at https://www.office365mon.com and clicking the Configure…List Monitoring menu.

This is yet another feature at Office365Mon that was driven from feedback by our customers.  I hope you’ll take a look at this feature and as always, let us know how we can make it better as well as ways in which we might be able to help you to do Office 365 monitoring even better.

From Sunny Phoenix,

Steve

 

Tips for ULS Logging Part 2

In part 1 of ULS logging tips (http://blogs.technet.com/b/speschka/archive/2011/01/23/tips-for-uls-logging.aspx) I included some code to add a custom ULS logging Area and demonstrated how to use it within your project.  After working with it a bit I noticed a few things:

1. There was some code in there that didn't jibe with the latest SDK – basically some pieces I had implemented that the SDK implied I should not need, and some things I was doing slightly differently than the SDK example (which, by the way, needs to be updated, and I will work on that separately).

2. Logging would only occur when I set the TraceSeverity to Medium; there was not an effective way to layer in different trace levels.

3. The most super annoying part – when I tried to invoke the custom class that is required for my custom Area, it would fail when I tried doing any logging during a POST.  It triggered the all too common error about not being able to update an SPWeb during a POST unless you set the AllowUnsafeUpdates property to true.  Setting that on an SPWeb during a log event, just to make my log event work, seemed borderline insane (my apologies to all the insane people out there, by the way).  So I decided there must be a better way.

In this posting then, I’m going to improve upon the previous example and add to it along the way.  Here’s what we’re going to show:

1. An improved class for the custom Area – it actually slims down some in this version.

2. Integration with Configure diagnostic logging in the Monitoring section of Central Admin.  This integration further allows configuring trace levels on the custom Area and Category.

3. Registration information – a very simple way to both register and unregister the custom ULS logging class so it integrates with Central Admin, and can be removed from Central Admin.

To begin with, let’s look at the new streamlined version of the class.  There are many similarities between this and the version I showed in the first posting, but this is a little slimmer and simpler.  Here’s what it looks like now.
    [Guid("833B687D-0DD1-4F17-BF6A-B64FBC1AC6A8")]
    public class SteveDiagnosticService : SPDiagnosticsServiceBase
    {
        private const string LOG_AREA = "Steve Area";

        public enum LogCategories
        {
            SteveCategory
        }

        public SteveDiagnosticService()
        {
        }

        public SteveDiagnosticService(string name, SPFarm parent)
            : base(name, parent)
        {
        }

        public static SteveDiagnosticService Local
        {
            get
            {
                return SPDiagnosticsServiceBase.GetLocal<SteveDiagnosticService>();
            }
        }

        public void LogMessage(ushort id, LogCategories LogCategory,
            TraceSeverity traceSeverity, string message,
            params object[] data)
        {
            if (traceSeverity != TraceSeverity.None)
            {
                SPDiagnosticsCategory category
                 = Local.Areas[LOG_AREA].Categories[LogCategory.ToString()];
                Local.WriteTrace(id, category, traceSeverity, message, data);
            }
        }

        protected override IEnumerable<SPDiagnosticsArea> ProvideAreas()
        {
            // note that LOG_ID must be visible outside the Log class for this
            // reference to compile (e.g. declare it public there)
            yield return new SPDiagnosticsArea(LOG_AREA, 0, 0, false,
                new List<SPDiagnosticsCategory>()
                {
                    new SPDiagnosticsCategory(LogCategories.SteveCategory.ToString(),
                        TraceSeverity.Medium, EventSeverity.Information, 1,
                        Log.LOG_ID)
                });
        }
    }
 
Here are the important changes from the first version:

1. Added the Guid attribute to the class:

[Guid("833B687D-0DD1-4F17-BF6A-B64FBC1AC6A8")]
 
I added a Guid attribute to the class because SharePoint requires it in order to uniquely identify it in the configuration database.

2. Changed the default constructor:

        public SteveDiagnosticService()
        {
        } 
 
Now it's just a standard empty constructor.  Previously I called the other constructor overload, which takes a name for the service and an SPFarm.  It's just less code, which is a good thing when you can get away with it.

3. Deleted the HasAdditionalUpdateAccess override.  It turned out I wasn't really using it, so continuing with the "less is more" theme I removed it.

 

4. Shortened the ProvideAreas method significantly; now it matches the pattern used in the SDK:

 

yield return new SPDiagnosticsArea(LOG_AREA, 0, 0, false,
    new List<SPDiagnosticsCategory>()
    {
        new SPDiagnosticsCategory(LogCategories.SteveCategory.ToString(),
            TraceSeverity.Medium, EventSeverity.Information, 1,
            Log.LOG_ID)
    });
So that addressed problem #1 above – my code is now a little cleaner and meaner.  The other problems – lack of tracing levels, throwing an exception when logging during a POST, and integration with Central Admin – were all essentially fixed by taking the code a bit further and registering it with SharePoint.  The example in the SDK is currently kind of weak in this area, but I was able to get it to do what we need.  To simplify matters, I created a new feature for the assembly that contains the custom ULS class I described above.  I added a feature receiver, and in it I register the custom ULS class during the FeatureInstalled event and unregister it during the FeatureUninstalling event.  This was a good solution in my case, because I made it a farm-scoped feature, so it activates automatically when the solution is added and deployed.  As it turns out, the code to do this register and unregister is almost criminally simple; here it is:
public override void FeatureInstalled(SPFeatureReceiverProperties properties)
{
    try
    {
        SteveDiagnosticService.Local.Update();
    }
    catch (Exception ex)
    {
        throw new Exception("Error installing feature: " + ex.Message);
    }
}


public override void FeatureUninstalling(SPFeatureReceiverProperties properties)
{
    try
    {
        SteveDiagnosticService.Local.Delete();
    }
    catch (Exception ex)
    {
        throw new Exception("Error uninstalling feature: " + ex.Message);
    }
}
 
The other thing to point out here is that I was able to refer to "SteveDiagnosticService" because a) I added a reference to the assembly with my custom logging class in the project where I packaged up the feature, and b) I added a using statement for that assembly's namespace to my feature receiver class.
By registering my custom ULS logging class I get a bunch of benefits:

  • I no longer get any errors about updating an SPWeb when I write to the ULS log during a POST.
  • My custom logging Area and Category show up in Central Admin, so I can go into Configure diagnostic logging and change the trace and event level.  For example, when it's at the default level of Medium tracing, all of my ULS writes that are of TraceSeverity.Medium are written to the log, but those that are TraceSeverity.Verbose are not.  If I want my TraceSeverity.Verbose entries to start showing up in the log, I simply go into Configure diagnostic logging and change the Trace level in there to Verbose.  (There's a short sketch of this right after this list.)
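To make that concrete, here's a short sketch using the Log wrapper class from the first posting in this series; the messages are just placeholders:

    // with the trace level at the default of Medium, only the first call lands in the log;
    // change the level to Verbose in Configure diagnostic logging and both are written
    Log.WriteLog("Routine progress message", TraceSeverity.Medium);
    Log.WriteLog("Chatty diagnostic detail", TraceSeverity.Verbose);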

Overall the solution is simpler and more functional.  I think this is one of those win-win things I keep hearing about.
P.S.  I want to record my protest vote here for LaMarcus Aldridge of the Portland Trail Blazers.  His failure to get added to the Western Conference NBA All-Star team is tragically stupid.  Okay, enough personal commentary, I know y'all are looking for truly Share-n-Dipity-icious info when you come here.  Hope this is useful.

Tips for ULS Logging

UPDATE 2-4-2011:  I recommend taking a look at the updated example for this at http://blogs.technet.com/b/speschka/archive/2011/02/04/tips-for-uls-logging-part-2.aspx.  The new example is better and more functional.

When I added some ULS logging to a recent project I noticed one annoying little side effect.  In the ULS logs the Area was showing up as “Unknown”.  I realize that some aspects of this have been covered other places but I just wanted to pull together a quick post that describes what I found to be the most expedient way to deal with it (certainly much less painful than some of the other solutions I’ve seen described).  One thing worth pointing out is that if you don’t want to do this at all, I believe the Logging Framework that the best practices team puts up on CodePlex may do this itself.  But I usually write my own code as much as possible so I’ll briefly describe the solution here.

The first thing to note is that most of the SDK documentation operates on the notion of creating a new SPDiagnosticsCategory.  The constructor for a new instance of the class allows you to provide the Category name, and when you do that you will definitely see any custom Category name you want to use show up in the ULS log in the Category column.  In most cases if you’re doing your own custom logging you also want to have a custom Area to go hand in hand with your custom Category(s).  Unfortunately, the SDK puts you through a considerably more complicated exercise to make that happen, because you cannot use a simple constructor to create a new Area and use it – you have to write your own class that derives from SPDiagnosticsServiceBase.

The way I’ve chosen to implement the whole thing is to create one CS file that contains both my logging class and my diagnostic service base class.  I’ll start first with the diagnostic base class – below I’ve pasted in the entire class, and then I’ll walk through the things that are worth noting:

    public class SteveDiagnosticService : SPDiagnosticsServiceBase
    {
        private const string LOG_AREA = "Steve Area";

        public enum LogCategories
        {
            SteveCategory
        }

        public SteveDiagnosticService()
            : base("Steve Diagnostics Service", SPFarm.Local)
        {
        }

        public SteveDiagnosticService(string name, SPFarm parent)
            : base(name, parent)
        {
        }

        protected override bool HasAdditionalUpdateAccess()
        {
            return true;
        }

        public static SteveDiagnosticService Local
        {
            get
            {
                return SPDiagnosticsServiceBase.GetLocal<SteveDiagnosticService>();
            }
        }

        public void LogMessage(ushort id, LogCategories LogCategory, TraceSeverity traceSeverity,
            string message, params object[] data)
        {
            if (traceSeverity != TraceSeverity.None)
            {
                SPDiagnosticsCategory category
                 = Local.Areas[LOG_AREA].Categories[LogCategory.ToString()];
                Local.WriteTrace(id, category, traceSeverity, message, data);
            }
        }

        protected override IEnumerable<SPDiagnosticsArea> ProvideAreas()
        {
            List<SPDiagnosticsCategory> categories = new List<SPDiagnosticsCategory>();

            categories.Add(new SPDiagnosticsCategory(
             LogCategories.SteveCategory.ToString(),
             TraceSeverity.Medium, EventSeverity.Information));

            SPDiagnosticsArea area = new SPDiagnosticsArea(
             LOG_AREA, 0, 0, false, categories);

            List<SPDiagnosticsArea> areas = new List<SPDiagnosticsArea>();

            areas.Add(area);

            return areas;
        }
    }

 

Let’s look at the interesting parts now:

private const string LOG_AREA = "Steve Area";

Here’s where I define what the name of the Area is that I’m going to write to the ULS log.

public enum LogCategories
{
    SteveCategory
}

This is the list of Categories that I'm going to add to my custom Area.  In this case I only have one Category I'm going to use with this Area, but if you wanted to use several of them you would just expand the contents of this enum.

public void LogMessage(ushort id, LogCategories LogCategory, TraceSeverity traceSeverity,
    string message, params object[] data)
{
    if (traceSeverity != TraceSeverity.None)
    {
        SPDiagnosticsCategory category
         = Local.Areas[LOG_AREA].Categories[LogCategory.ToString()];

        Local.WriteTrace(id, category, traceSeverity, message, data);
    }
}

 

This is one of the two important methods we implement – it's where we actually write to the ULS log.  The first line is where I get the SPDiagnosticsCategory, and I make reference to the Area it belongs to when I do that.  In the second line I'm just calling the base class method on the local SPDiagnosticsServiceBase class to write to the ULS log, and as part of that I pass in my Category, which is associated with my Area.

protected override IEnumerable<SPDiagnosticsArea> ProvideAreas()
{
    List<SPDiagnosticsCategory> theCategories = new List<SPDiagnosticsCategory>();

    theCategories.Add(new SPDiagnosticsCategory(
     LogCategories.SteveCategory.ToString(),
     TraceSeverity.Medium, EventSeverity.Information));

    SPDiagnosticsArea theArea = new SPDiagnosticsArea(
     LOG_AREA, 0, 0, false, theCategories);

    List<SPDiagnosticsArea> theAreas = new List<SPDiagnosticsArea>();

    theAreas.Add(theArea);

    return theAreas;
}

 

In this override I return to SharePoint all of my custom Areas.  In this case I’m only going to use my one Area, so that’s all I send back.  Also, as I mentioned above, I’m only using one custom Category with my Area.  If I wanted to use multiple custom Categories then I would 1) add them to the enum I described earlier and 2) add each one to my theCategories List instance I’ve defined in this method.

That's really the main magic behind adding a custom Area and getting it to show up in the appropriate column in the ULS logs.  The logging class implementation is pretty straightforward too; I'll paste in the main part of it here and then explain a little further:

    public class Log
    {
        private const int LOG_ID = 11100;

        public static void WriteLog(string Message, TraceSeverity TraceLogSeverity)
        {
            try
            {
                //in this simple example, I'm always using the same category
                //you could of course pass that in as a method parameter too
                //and use it when you call SteveDiagnosticService
                SteveDiagnosticService.Local.LogMessage(LOG_ID,
                    SteveDiagnosticService.LogCategories.SteveCategory,
                    TraceLogSeverity, Message, null);
            }
            catch (Exception writeEx)
            {
                //ignore
                Debug.WriteLine(writeEx.Message);
            }
        }
    }

 

This code is then pretty easy to implement from my methods that reference it.  Since WriteLog is a static method, my code is just Log.WriteLog("This is my error message", TraceSeverity.Medium); for example.  In this example then, in the ULS log it creates an entry in the Area "Steve Area" and the Category "SteveCategory" with the message "This is my error message".

How Your Client Application can know when your SharePoint 2010 Farm is Busy

SharePoint 2010 has a number of new features built in to keep your farm up and running as happy and healthy as possible.  One of the features it uses to do that is called Http Throttling.  It allows you to set thresholds for different performance counters.  You decide which counters to use and what thresholds are "healthy" for your environment.  I covered this concept in more detail in this post:  http://blogs.technet.com/speschka/archive/2009/10/27/adding-throttling-counters-in-sharepoint-2010.aspx.

One of the things you may not know is that client applications get information about the health of the farm based on those throttling rules every time they make a request to the farm.  When SharePoint responds, it adds a special response header called X-SharePointHealthScore.  If you’re writing a client application that works with data in a SharePoint server, you should consider regularly reading the value of this header to determine whether it’s okay to continue at your current pace, or as the score gets higher, consider dialing back the number of requests you are making.
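To make this concrete, here's a minimal sketch of reading the header from a client application.  The site URL is a hypothetical placeholder, and the back-off threshold is something you'd tune for your own app:

    using System;
    using System.Net;

    class HealthScoreCheck
    {
        static void Main()
        {
            // any request to the farm will carry the header on its response
            var request = (HttpWebRequest)WebRequest.Create("http://sharepoint/sites/team");   // hypothetical URL
            request.Credentials = CredentialCache.DefaultCredentials;

            using (var response = (HttpWebResponse)request.GetResponse())
            {
                string score = response.Headers["X-SharePointHealthScore"];
                int healthScore;
                if (int.TryParse(score, out healthScore) && healthScore > 7)   // hypothetical back-off point
                {
                    // the farm is getting busy – dial back the rate of requests
                    Console.WriteLine("Farm health score is " + healthScore + "; slowing down.");
                }
            }
        }
    }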

To demonstrate this concept I've created a web part for admins that I'll be releasing later this year (I haven't decided exactly how I'm going to distribute it).  It's designed to let you monitor one to many farms based on the response header I've described above.  It's a nice quick view of the health of your farm.  Here's an example of what the web part looks like – in this case it's retrieving health data for two different farms I have:

The web part is built using a combination of a web part, Silverlight, jQuery, and an ASPX page.  When it receives an error, like the "server is busy" error you can get with Http Throttling, you can click the plus sign at the bottom of the web part to see the error details.  Here's an example of what that looks like:

Take a look at this when you’re writing your client applications, and understand how best to respond when your farm is getting busy.

Creating Health Monitor Rules for SharePoint 2010

Hey all, I saw an interesting discussion today about creating new health rules in SharePoint 2010 and thought I would share.  All the credit for this one goes to Chris C., who was good enough to share this information.  The discussion started because of some confusion about how to add a new health rule.  In the public beta you can go into Central Admin, hit up the rule definitions and add a new one.  Unfortunately, you will find that doesn't really work.  By RTM the new rule definition button should be disabled, because that isn't the right way to get a rule into the system.  Instead, here in Chris' words is how you should approach this.

A health rule is basically a class (derived from either SPHealthAnalysisRule or SPRepairableHealthAnalysisRule) which implements a Check() method, some string properties that describe the problem to the administrator, and optionally a Repair() method (if an SPRepairableHealthAnalysisRule).
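Here's a minimal sketch of what such a rule can look like.  Everything specific about this class – its name, the strings, and the condition it checks – is a hypothetical placeholder; the overridden members are the ones the base class requires:

    using System;
    using Microsoft.SharePoint.Administration;
    using Microsoft.SharePoint.Administration.Health;

    public class ExampleHealthRule : SPHealthAnalysisRule
    {
        public override string Summary
        {
            get { return "Example rule: checks a hypothetical farm condition."; }
        }

        public override string Explanation
        {
            get { return "Shown to the administrator when the rule fails."; }
        }

        public override string Remedy
        {
            get { return "What the administrator should do to fix the problem."; }
        }

        public override SPHealthCategory Category
        {
            get { return SPHealthCategory.Performance; }
        }

        public override SPHealthCheckErrorLevel ErrorLevel
        {
            get { return SPHealthCheckErrorLevel.Warning; }
        }

        // schedule and scope for automatic execution by the health timer job
        public override SPHealthAnalysisRuleAutomaticExecutionParameters AutomaticExecutionParameters
        {
            get
            {
                var p = new SPHealthAnalysisRuleAutomaticExecutionParameters();
                p.Schedule = SPHealthCheckSchedule.Daily;
                p.Scope = SPHealthCheckScope.All;
                p.ServiceType = typeof(SPTimerService);
                return p;
            }
        }

        public override SPHealthCheckStatus Check()
        {
            bool healthy = true;   // replace with a real check of your farm
            return healthy ? SPHealthCheckStatus.Passed : SPHealthCheckStatus.Failed;
        }
    }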

For adding health rules to your farm:

  • Create your class and compile it into a dll with a strong name
  • GAC the dll on each of the machines in your farm
  • Run the following in PowerShell:
    $loc = "C:\mydlllocation.dll"
    $asm = [System.Reflection.Assembly]::LoadFrom($loc)   # the assembly should already be GAC'd
    [Microsoft.SharePoint.Administration.Health.SPHealthAnalyzer]::RegisterRules($asm)

Once these steps are complete, you should see your rule(s) in the Health Rules List (i.e. http://centraladmin:8080/Lists/HealthRules/AllItems.aspx).  This is the full list of rules.  Each of these runs on a different schedule (defined in the class) inside of a timer job.  Once a rule is run, a result list item is stored in the health reports (or “Problems and Solutions”) list.  The reason you are not seeing any generated reports is most likely because your OOB rules have not been run yet.

Thanks again Chris for sharing this valuable information.

Playing with Large List Throttling

If you've followed my blog entries, then by now you've seen quite a bit of information on throttling for large lists in SharePoint 2010.  One of the things you may find is that you will have scenarios where you want to toggle the enforcement of throttling on a list-by-list basis.  As I explained in a previous post, an SPList object does have an EnableThrottling property.  With that useful bit of information in hand, I wrote a little web part that allows you to manage list throttling in different site collections pretty easily.  It's implemented as a web part, as shown here:

[Screenshot: list throttling web part]

As you can see, it's a pretty simple web part, with a fairly easy to use interface.  You can toggle list throttling on or off for any individual list, or turn it off or on for all lists in a site collection.  I've attached the WSP for this web part to this posting.  Feel free to use it and abuse it; there are really only a couple of important things to remember at this point.  1) This web part MUST BE RUN IN THE CENTRAL ADMIN SITE!  This restriction *might* be lifted by the time we RTM, but we'll just have to wait and see.  2) This web part will only work with Windows (classic) auth sites for now.  Hope you find it useful.
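By the way, if you'd rather script this than use the web part, the toggle itself is just the EnableThrottling property mentioned above.  Here's a minimal sketch with the server object model; the site URL and list name are hypothetical placeholders:

    using System;
    using Microsoft.SharePoint;

    class ToggleThrottling
    {
        static void Main()
        {
            using (SPSite site = new SPSite("http://sharepoint/sites/team"))   // hypothetical URL
            using (SPWeb web = site.OpenWeb())
            {
                SPList list = web.Lists["Big List"];   // hypothetical list name
                list.EnableThrottling = false;         // exempt this list from large list throttling
                list.Update();
            }
        }
    }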

You can download the attachment here: