[AUDIO LOGO] Hey, Amit, thank you so much for taking my call. I'm really sorry for doorstepping you, and I've got one question. I'm doing a bit of an assignment. You know about the data migration project that we're working towards. I wanted to ask your advice about best practices on performance-monitoring tools and diagnostics: what should people be looking out for when they're in database migration mode and they need a new monitoring solution for the new target? Do you have any advice you can offer people and me?
You definitely want a solution that can help you capture baselines. If you can establish a baseline of your system's performance pre- and post-migration, you can confirm that you're meeting your expected levels. So basically, you have a previous standard that you want to meet or exceed, and baselines help you establish that very quickly.
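A minimal sketch of that baseline idea: capture a metric before migration, persist it, then compare the post-migration numbers against the saved standard. Here `collect_latency_ms()` is a hypothetical stand-in for however you actually measure, for example timing a canary query against the database.

```python
# Baseline sketch: capture pre-migration, persist, compare post-migration.
# collect_latency_ms() is a hypothetical stand-in for a real probe.
import json
import statistics
import time

def collect_latency_ms(samples: int = 30) -> list[float]:
    readings = []
    for _ in range(samples):
        start = time.perf_counter()
        # ... run a representative canary query here ...
        readings.append((time.perf_counter() - start) * 1000.0)
        time.sleep(1)
    return readings

def summarize(readings: list[float]) -> dict:
    return {
        "mean_ms": statistics.mean(readings),
        "p95_ms": statistics.quantiles(readings, n=20)[18],  # ~95th percentile
    }

# Pre-migration: save the standard you want to meet or exceed.
with open("baseline.json", "w") as f:
    json.dump(summarize(collect_latency_ms()), f)

# Post-migration: compare against it, allowing a small tolerance.
current = summarize(collect_latency_ms())
with open("baseline.json") as f:
    baseline = json.load(f)
for metric, before in baseline.items():
    after = current[metric]
    verdict = "meets baseline" if after <= before * 1.05 else "REGRESSION"
    print(f"{metric}: {before:.2f} -> {after:.2f} ({verdict})")
```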
Identifying bottlenecks-- if you're looking for a monitoring solution, make sure it has the ability to spot those bottlenecks, because you need to make timely adjustments. Your solution should be able to pick up where those hotspots are and give you a path toward optimization.
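As one concrete way to surface hotspots on a SQL Server target, ranking the top wait types often points straight at the bottleneck. The `sys.dm_os_wait_stats` DMV is real SQL Server; the connection string below is a placeholder you would replace with your own.

```python
# Hotspot sketch: rank SQL Server wait types by accumulated wait time.
import pyodbc

TOP_WAITS_SQL = """
SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE waiting_tasks_count > 0
ORDER BY wait_time_ms DESC;
"""

conn = pyodbc.connect("DSN=target_db;UID=monitor;PWD=...")  # placeholder
for wait_type, wait_ms, tasks in conn.execute(TOP_WAITS_SQL):
    print(f"{wait_type:<40} {wait_ms:>12} ms over {tasks} tasks")
```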
And then a couple of other points around the data integrity I mentioned earlier-- most of the time, when you have data arriving in a new environment, the quality of the data needs to be checked-- are there any discrepancies, things of that nature. Usually, those arise when the workload starts to run, because you'll start to see-- Yeah.
Yeah.
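A simple sketch of that kind of discrepancy check: compare row counts per table between source and target after the load. The table names and the two connections are assumptions; any DB-API driver would work.

```python
# Integrity spot check: per-table row counts, source vs. target.
TABLES = ["orders", "customers", "line_items"]  # hypothetical names

def row_count(conn, table: str) -> int:
    cur = conn.cursor()
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    return cur.fetchone()[0]

def check_counts(source_conn, target_conn) -> list[str]:
    discrepancies = []
    for table in TABLES:
        src = row_count(source_conn, table)
        tgt = row_count(target_conn, table)
        if src != tgt:
            discrepancies.append(f"{table}: source={src}, target={tgt}")
    return discrepancies
```

Row counts only catch missing or duplicated rows; a fuller check would also compare checksums or sample actual values.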
And then data-- I'm sorry-- resource utilization-- that's going to pique the interest of not only the development team but also the administrators and the folks allocating the budget for this new environment. Everybody's going to be concerned with how big it is and how much it's going to cost. Resource utilization needs to be tracked during migration.
Actually, I wouldn't even stop at "tracked"-- I've just thought about it there. Beyond tracking it, you want to have a report, because those folks are going to be looking for one-- they want to see a report.
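A rough sketch of that utilization report, using the real `psutil` library and assuming the collector runs on the database host-- peak and average CPU/memory over a window, which is the kind of summary budget owners ask for:

```python
# Utilization report sketch: sample CPU and memory, summarize for a report.
import psutil

def sample_utilization(samples: int = 60) -> dict:
    cpu, mem = [], []
    for _ in range(samples):
        cpu.append(psutil.cpu_percent(interval=1))  # 1-second samples
        mem.append(psutil.virtual_memory().percent)
    return {
        "cpu_avg": sum(cpu) / len(cpu), "cpu_peak": max(cpu),
        "mem_avg": sum(mem) / len(mem), "mem_peak": max(mem),
    }

report = sample_utilization()
print("Resource utilization during migration window:")
for name, value in report.items():
    print(f"  {name}: {value:.1f}%")
```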
But also predictability: if we're doing any sort of testing, we should be able to apply time-series principles to predict, in a very reliable manner, what the allocation should be. That's the whole goal of monitoring. It's not to know what happened; it's to know what can happen down the road-- to be able to do some level of prediction.
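To show the time-series principle in miniature: fit a trend to utilization samples from a test run and extrapolate. The numbers are made up, and a deliberately simple linear fit stands in for the proper forecasting models real capacity planning would use.

```python
# Prediction sketch: fit a linear trend to storage growth and extrapolate.
import numpy as np

# Hourly storage-used samples from a test run (hypothetical numbers).
hours = np.arange(24)
gb_used = 100 + 0.8 * hours + np.random.normal(0, 0.5, size=24)

slope, intercept = np.polyfit(hours, gb_used, deg=1)  # linear trend
forecast_30d = intercept + slope * (30 * 24)
print(f"Growth: {slope:.2f} GB/hour; projected in 30 days: {forecast_30d:.0f} GB")
```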
Final couple of points-- error detection: real-time monitoring needs to happen as you go, so you can see those errors. And then you have post-migration validation-- confirming that everything operates and performs smoothly. So there's a wrap-up, or there's some sort of-- I won't call it generative or artificial intelligence, but there needs to be something your monitoring solution gives you that puts a bow around the data, because it needs to bridge that gap and help you explain why environment A is better than environment B. So it needs to be very objectively put together, professional in nature, and able to be presented across the organization.
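For the real-time error detection piece, a minimal sketch is to follow the migration tool's log and flag error lines as they appear. The log path and the patterns are assumptions; you'd match whatever your tooling actually emits.

```python
# Error-detection sketch: follow a migration log and flag errors live.
import re
import time

ERROR_PATTERN = re.compile(r"ERROR|constraint violation|deadlock", re.IGNORECASE)

def follow(path: str):
    with open(path) as f:
        f.seek(0, 2)  # start at end of file, like `tail -f`
        while True:  # runs until interrupted
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            if ERROR_PATTERN.search(line):
                print(f"[ALERT] {line.rstrip()}")

follow("/var/log/migration.log")  # hypothetical path
```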
So let me just say that another way: on the street, it could be presented to the developers, it could be presented to IT administrators-- like you said, a report or some sort of dynamic representation of the post-migration validation. It should make it very easy to explain why this environment is or isn't suitable.
Again, I think it could be a case of multiple tools for post-migration validation.
[INTERPOSING VOICES]
Yeah.
So there are already some tools built for that. For example, in Foglight, you have a compare feature, so you can compare two disparate environments-- say, two SQL Server environments or two Oracle environments. And what's happening is that people are staying on those platforms. They trust those platforms; there's no reason for them to move off, but there's a better way for them to support those workloads.
So as opposed to staying on antiquated hardware, for instance, and taking on the maintenance burden on-premises, they've now shifted that to the cloud. But it's still very much the same product. The whole point is that if you have the ability to do a comparison between the two, right there, it will help you spot whether or not a configuration change had merit.
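This is not Foglight's actual API-- just a generic sketch of the comparison idea, diffing the same metric summaries from two environments with made-up names and values:

```python
# Comparison sketch: diff metric summaries from two environments.
env_a = {"avg_query_ms": 42.0, "cpu_avg_pct": 55.0, "buffer_cache_hit": 0.97}
env_b = {"avg_query_ms": 38.5, "cpu_avg_pct": 61.0, "buffer_cache_hit": 0.98}

# Lower is better for these metrics; higher is better for cache hit ratio.
LOWER_IS_BETTER = {"avg_query_ms", "cpu_avg_pct"}

for metric in env_a:
    a, b = env_a[metric], env_b[metric]
    improved = (b < a) if metric in LOWER_IS_BETTER else (b > a)
    print(f"{metric}: A={a} B={b} -> {'B better' if improved else 'A better'}")
```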
One example would be if you deallocated or allocated a specific resource like CPU or memory-- you'd very quickly be able to validate whether or not that change was good. And then, if you go into the database itself, DBAs are going to adjust a number of tuning parameters. Most database instances-- take SQL Server, for example-- come with out-of-the-box settings for parallelism: how the database will utilize and multi-thread the CPU for its processing.
Well, that sometimes needs special optimization based on the workload or the queries introduced to it. So when you make that change, you need to be able to easily spot the parameter and see, during that workload or that period of time, when that parameter was not yielding the expected performance. What--
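For the parallelism example above: on SQL Server, the setting in question is "max degree of parallelism" (MAXDOP), and one way to make a parameter change spottable is to log the value in use alongside your performance samples. The `sys.configurations` view and setting name are real SQL Server; the connection string is a placeholder.

```python
# Parameter-tracking sketch: read SQL Server's MAXDOP so a monitoring
# report can correlate it with the workload window.
import pyodbc

conn = pyodbc.connect("DSN=target_db;UID=monitor;PWD=...")  # placeholder
row = conn.execute(
    "SELECT value_in_use FROM sys.configurations "
    "WHERE name = 'max degree of parallelism';"
).fetchone()
print(f"MAXDOP currently in use: {row[0]}")
# Log this value with each performance sample; when throughput dips,
# you can then see whether a parameter change lines up with the slowdown.
```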