Is a “Good” Monitoring Solution Good Enough?

With data now the lifeblood of organizations, it stands to reason that keeping the right data available to the right people is a key to success in business today. And lowering costs while still positioning the organization for desired growth and a positive customer experience are likely some of your business’s largest challenges.

Organizations are taking a hard look right now to ensure that performance and resource optimization are happening in all areas of their data delivery systems. It’s more than just the database, and it’s more than just the technical infrastructure and expanding uses of cloud. It’s all those things, but also the complexity and pace of growth and change of the technical infrastructure, not to mention changes in the demands of data consumers both inside and outside the organization. Analytics. Instantaneous eCommerce. AI initiatives (a significant data consumer). How do you keep the data pipelines for all this healthy and flowing?

Performance must be maintained, and in “performance” I’m including availability. Reducing unplanned downtime is a requirement. It all really comes down to meeting customer (data consumer) expectations. In today’s environments, to do all this well, the best monitoring/observability tool you can find is a must.

Trends to consider when measuring “good enough”

The following forces are changing the way organizations need to shape tactics to succeed with their strategies. And these forces often change perceptions about what sort of monitoring solution the organization should use to ensure a healthy data ecosystem.

  • Cost savings through the use of open source products.
  • NoSQL databases continue to flourish – partly because of the explosion of unstructured data that can and should be used for analysis and for internal and external services by your business.
  • Cloud data platforms are on the rise – they fulfill specific requirements of data management and data availability very well.
  • Around three-quarters of organizations are actively migrating workloads to the cloud.
  • Microservice architectures often mean more than one database platform per application, because each microservice gets the best-fit choice for its function.
  • Business areas are selecting their own database platforms and developing their own applications.

Good at something: is that good enough?

There are lots of monitoring software vendors out there, and there are lots of cloud cost management and cloud data migration tools out there. Most of them are very good at something. But even a quick look at each of their websites makes it clear that most of them aren’t even players in all areas where an organization requires excellence today.

So, is a “good” solution that meets only some of these challenges “good enough”? Ask three questions:

  1. How does the monitoring solution accommodate the specific needs and scale of our enterprise environment?
  2. Does the solution have observability built in, making meaningful use of the right collected information to reduce mean time to resolution (MTTR) consistently?
  3. Is value found quickly? First insights should not be delayed or hidden.

Let’s examine these important areas more closely and come up with an answer: is “good” good enough?

Item #1: How does a monitoring tool accommodate the scale and specific needs of your enterprise environment?

While your organization is unique, it probably shares some very common challenges around information technology expansion – especially related to data. Here are some of the challenges you need to consider with regard to a monitoring solution you hope is good enough.

  • Do we have the skills in-house to manage performance of new databases in our environment?
  • Do we have database DevOps performance risks under control?
  • Can we succeed in keeping our customer experience as high as possible, even as our technical infrastructure changes so quickly?
  • Can we monitor the health of databases in the cloud?
  • What about virtual infrastructure on which the databases run?
  • Are we empowering the data consumers who build our business success?

New database offerings are surely entering your organization, even as legacy databases stick around. As those new databases arrive in your environment, and as in-house skills on some of your legacy databases diminish over time, it’s essential that your monitoring tool lets everyone find performance issues, diagnose their causes, and then decide what action to take.

Surveys show that at least 90% of database managers (DBAs, etc.) support more than one database platform now, and a large majority of those support more than two. If your monitoring solution only monitors SQL Server, Oracle and MySQL, for example, either another solution will be needed, or your operations teams will have blind spots to performance on some (perhaps many) of your most important databases today – or in the near future.

The list of supported platforms for Foglight® by Quest increased by three in the last few months. Those additions bolstered an already impressive list of popular database offerings. 

And maybe most importantly of all, Foglight brings best-of-breed analysis and diagnostic tools for performance optimization for your databases and the infrastructures within which they operate. Change tracking, workload comparisons, performance baselines, multi-dimensional performance investigations and Query Insights are all features that enable a multi-platform environment’s team to do what they need to do without expertise on each platform.

DevOps is a special challenge for DBAs and similar pros. For some, it means a loss of control over database changes. Still, the goal remains: make database changes safely, without adversely affecting performance or availability. By comparing workloads before and after a change, ideally in a non-production but production-like environment, Foglight puts avoiding troublesome (or catastrophic) changes in production just a few mouse clicks away.

Databases running in cloud environments need attention, too. Foglight by Quest will help with monitoring a variety of IaaS and PaaS databases.

And a sometimes forgotten culprit of performance issues, the infrastructure (servers, hosts, virtual machines) that data relies on to keep flowing, should be monitored as well. Foglight combines robust virtualization monitoring and cloud cost management in one place, along with advice on cloud migrations and server consolidation. Foglight’s virtual machine waste management and capacity management can save you considerable time and money, just as they have for so many companies globally.

Item #2: Observability?

Observability means that the data collected by the monitoring aspects of your solution – logs, metrics, traces – is used wisely to help you diagnose problems. And that should include pointing you to the causes of, and the fixes available for, those problems.

According to recent reports, having a tool that is strong in observability commonly reduces MTTR by 25% or more. Quest Foglight observes at both the database and infrastructure levels, providing context and valuable insights across a spectrum of platforms.

Item #3: Quick time to value

Time to value is an important measurement for both the software vendor and each of their customers. As a software customer, you likely measure the wisdom of your investment on a mix of factors – some are hard dollar savings, and others are calculated savings such as “saved downtime” or “time necessary to solve a slowdown in a customer-facing application.”

No operations leader wants their team spending more time than necessary on firefighting. Businesses today will push back on that – they need innovation and improvements happening. More time for those things is of great value in today’s business climate.

Value of software can also be measured as the capability of finding an insight more quickly than would have been possible without the tool.

Foglight is up and running fast and can quickly be customized for your own environment – essential for a proactive and accurate response to performance issues. And when you opt for the cloud-hosted version of Foglight, onboarding is even faster.

Expect a quicker time to value with Foglight. Quest constantly focuses on improvements here, knowing how important it is to everyone.

Conclusion

Foglight, now with a cloud managed option, is not just good in quality or features or security. It’s not just good in the breadth of what it can show you across your data ecosystem, or good in the depth of diagnostic capabilities it provides you. Not just good at reducing TCO. It’s simply best of breed because it excels at all of those.

The value Foglight brings to organizations is clear. And value is more “up front” than ever. You won’t have to dig for valuable information like:

  • What are our most impactful database queries across all our database instances?
  • Which database instances have the worst, and most, problems right now – across all our platform types?
  • Which resource consumption figures and KPIs are outside normal performance?
  • What are the long- and short-term trends for our resources and data behavior?
  • How do changes in the environment correlate with changes in performance?
  • How does performance compare between environments and timeframes?

The product team at Quest is constantly analyzing the market and deciding on the next platforms to add, then getting to work on them. That means as your organization depends on new databases in the future, Foglight will keep up.

In a word, the evidence from Quest’s 25+ years in the database performance management business shows that Foglight is dynamic. It has always been designed not to passively present a lot of graphs and charts to you. It compellingly and proactively identifies problems and helps show you how to get them fixed once and for all.

Find out more about how Foglight will drive data availability and empowerment in your organization through potent features and design at https://www.quest.com/foglight.
