By David Falkenstein, Solutions Architect, Quest Software
Over the last few months, one of my more challenging projects has been making the gathering of Windows performance counters for Foglight more efficient; in my case, by creating a new version of an IIS dashboard. Future blog entries will chronicle this effort in detail.
Some of this will be fairly technical and focused on C# development initially, so hang on; it’ll be fun (for me anyway).
We're going to create, from scratch, a new type-two Foglight agent. For a really in-depth and well-written discussion of script agents in Foglight, please see Geoff Vona's article. For more information on developing C# applications, you have only to Google the term, but I recommend you look here for the most in-depth and complete discussions of Microsoft development.
Foglight has amazing reach. Among its many features, it is capable of consuming and integrating new data. The problem most often becomes that of “which data?” vs. “how can I do this?”
This means that I have the potential to collect virtually any kind of data that my OS can see. With this power, of course, comes responsibility. Is the load I place on my target for the purposes of monitoring it worth more than the value of the data I get for the effort? It's the classic question of what to collect, and at what point I'm collecting too much. Too much data can be a bad thing.
Another problem I ran into was the efficiency of the solution. For example, sampling counters is a highly recursive operation, and one subject to change at any moment (a new application can be added without warning). There is also the issue of security: how can I construct a monitor that does not violate security constraints?
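To see why counter discovery is so recursive, here is a minimal sketch using the standard .NET `System.Diagnostics` API (Windows-only; this is my own illustration, not the agent's actual code). Every category must be walked, and multi-instance categories must be expanded once per live instance — and that instance list can change the moment a new application appears:

```csharp
using System;
using System.Diagnostics;

class CounterWalk
{
    static void Main()
    {
        foreach (PerformanceCounterCategory category in PerformanceCounterCategory.GetCategories())
        {
            string[] instances = category.GetInstanceNames();
            if (instances.Length == 0)
            {
                // Single-instance category: one GetCounters() call.
                foreach (PerformanceCounter counter in category.GetCounters())
                    Console.WriteLine("{0}\\{1}", category.CategoryName, counter.CounterName);
            }
            else
            {
                // Multi-instance category: one call per live instance.
                // Instances can appear or vanish at any moment, which is
                // what forces a fresh enumeration pass.
                foreach (string instance in instances)
                    foreach (PerformanceCounter counter in category.GetCounters(instance))
                        Console.WriteLine("{0}({1})\\{2}", category.CategoryName, instance, counter.CounterName);
            }
        }
    }
}
```

On a typical server this walk touches thousands of counters, which is exactly the load you don't want to repeat on every sampling interval.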
So how did I do it?
With these issues in mind, I set out to make the most efficient collector of system counters that I could manage, and format that information for Foglight to consume.
As you read through the technical details, you'll see that I used a dataset approach to gather my counters and LINQ to query the dataset following a sampling period.
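To make that concrete, here is a minimal sketch of the dataset-plus-LINQ pattern. The table layout (Category/Counter/Value columns) and the sample rows are my own illustration, not the agent's actual schema:

```csharp
using System;
using System.Data;
using System.Linq;

public static class SampleQuery
{
    // Build a small in-memory table standing in for one sampling period.
    static DataTable BuildSamples()
    {
        var samples = new DataTable("Samples");
        samples.Columns.Add("Category", typeof(string));
        samples.Columns.Add("Counter", typeof(string));
        samples.Columns.Add("Value", typeof(double));

        // Pretend these rows were filled in while sampling.
        samples.Rows.Add("Process", "% Processor Time", 12.5);
        samples.Rows.Add("Process", "% Processor Time", 7.5);
        samples.Rows.Add("Memory", "Available MBytes", 2048.0);
        return samples;
    }

    // After the sampling period, LINQ queries the in-memory table
    // instead of hitting the OS counter APIs again.
    public static double AverageCpu()
    {
        return BuildSamples().AsEnumerable()
            .Where(r => r.Field<string>("Counter") == "% Processor Time")
            .Average(r => r.Field<double>("Value"));
    }

    static void Main()
    {
        Console.WriteLine(AverageCpu()); // prints 10
    }
}
```

The key point is that the expensive part (sampling) happens once per interval, and all subsequent slicing and aggregation is cheap in-memory work.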
When I get into it, you'll see that I effectively reduced my system load to one cycle of recursion, used to build a dataset of all of the active counters I wanted to target on the platform at that moment. One caveat: my approach does require a restart when significant system changes occur, but given the ad hoc nature of adding new applications and new counters, I felt this approach was the fastest and most effective.
As we get into this, you'll see that I reduced the complexity of my application by centralizing the definitions of all the counters I want to target in one location. In my case, I chose to hard-code these definitions intentionally, but this is not a constraint. It would take little effort for anyone interested to rewrite these blocks of code so that the definitions live in an external location, perhaps a configuration file or a database.
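A sketch of what that "one location" might look like. The `CounterDefinition` type and the counter names below are hypothetical illustrations, not the agent's actual code:

```csharp
using System;
using System.Collections.Generic;

public sealed class CounterDefinition
{
    public string Category { get; }
    public string Counter { get; }
    public string Instance { get; }  // null means a single-instance counter

    public CounterDefinition(string category, string counter, string instance = null)
    {
        Category = category;
        Counter = counter;
        Instance = instance;
    }
}

public static class TargetCounters
{
    // Hard-coded today; replacing this initializer with a read from a
    // configuration file or a database would be the only change needed
    // to externalize the definitions.
    public static readonly IReadOnlyList<CounterDefinition> All = new[]
    {
        new CounterDefinition("Web Service", "Current Connections", "_Total"),
        new CounterDefinition("Process", "% Processor Time", "w3wp"),
        new CounterDefinition("Memory", "Available MBytes"),
    };

    static void Main()
    {
        Console.WriteLine(TargetCounters.All.Count);
    }
}
```

Because every other part of the agent iterates over this one list, the rest of the code never needs to know where the definitions came from.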
Along the way, I learned some new and interesting techniques that I want to share with you.
My next blog entry will discuss the approach that I took and we will take an in-depth look at the code: Adventures in Agent Creation - Part 2