Quite often when I discuss Benchmark Factory (BMF) with customers, they want to understand how to generate a workload for their existing stored procedures. As part of their development life cycle, if a change is made to a stored procedure they want to re-run a test and make sure response time and scalability haven't degraded.
While BMF has capture/replay capabilities, this particular regression-testing scenario can be tricky to work out.
Here are 6 simple steps:
As an example, let's say we want to make a change to the procedure C_SP_Order_Status (a BMF object created by the TPC-C test).
To test this procedure we need to pass four parameters. In BMF, create a new job using the wizard.
New Job Wizard
Add a transaction
Click on the highlighted area to enter a new transaction.
Select the stored procedure.
Click on Bind Params.
The easy route is to let BMF pass random parameter values, but as you might guess these won't be valid: the test will simply fail because there won't be any orders matching those parameters. What we need is some test data that BMF can use.
Generate Test Data
So we prepare some test data in SSMS and export it to a CSV file.
Save as CSV.
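If you want to see the shape of the file without a database to hand, here is a stand-in in the same four-columns-per-record layout. The column names and values are made up for illustration (they are not BMF's names, and I'm assuming a header row is harmless, so check what your BMF version expects):

```python
import csv

# Hypothetical parameter file for C_SP_Order_Status: one record per row,
# four columns per record. Names and values are illustrative only.
rows = [
    ["w_id", "d_id", "c_id", "c_last"],   # header row (assumption)
    [1, 1, 101, "BARBARBAR"],
    [1, 2, 202, "OUGHTPRES"],
    [2, 5, 333, "ESEANTI"],
]

with open("order_status_params.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

The point is simply that all four values on a row belong to the same order, so any row can be fed to the procedure as a valid parameter set.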
Back to BMF.
Remove the parameter values and right-click to launch the BFScript Wizard.
Enter the location of the file we created and use retrieval mode Random; this picks values at random from the file, which makes for a more realistic scenario.
For the next three parameters use mode Current, so we get the 2nd, 3rd and 4th columns from the file, all taken from the same record as the first value so the four parameters stay consistent.
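To make the Random/Current combination concrete, here is a toy sketch of my reading of the semantics (not BMF internals): Random jumps to an arbitrary record, and Current keeps reading columns from that same record, so the four values always belong together.

```python
import csv
import random

# A tiny stand-in for the exported parameter file (values are made up).
with open("params.csv", "w", newline="") as f:
    csv.writer(f).writerows([
        [1, 1, 101, "BARBARBAR"],
        [2, 4, 205, "OUGHTPRES"],
        [3, 9, 150, "ESEANTI"],
    ])

def next_param_set(path="params.csv"):
    """Random mode picks an arbitrary record; Current mode then reads
    the remaining columns from that same record."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    row = random.choice(rows)   # Random: any record in the file
    return tuple(row)           # Current: columns 2-4 of the same record

print(next_param_set())
```

Every call returns a complete, internally consistent parameter set, which is exactly why random-per-column values would have failed.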
There is a green test button to check it's all working ok.
The next step is to work out how many concurrent users you want and how many executions of the procedure each should make. You can simulate real-life behaviour with keying time and latency, but I just want a benchmark I can keep reusing over time, so I entered 50 users with 100 executions each (5,000 calls in total).
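The workload volume is just users multiplied by executions. As a toy illustration of that shape (this is in no way how BMF schedules its virtual users, just a sketch with a stub in place of the real procedure call):

```python
import threading

USERS, EXECUTIONS = 50, 100

calls = 0
lock = threading.Lock()

def run_order_status_stub():
    # Stand-in for one execution of C_SP_Order_Status.
    global calls
    with lock:
        calls += 1

def virtual_user():
    # Each simulated user fires the procedure EXECUTIONS times.
    for _ in range(EXECUTIONS):
        run_order_status_stub()

threads = [threading.Thread(target=virtual_user) for _ in range(USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(calls)  # 50 users x 100 executions = 5000 calls
```

Keeping these two numbers fixed is what makes the benchmark repeatable from run to run.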
Click run job.
After completion, run the report to view the benchmark result.
Now imagine re-running the test after a minor code or functionality change and finding that the results show some degradation.
It's also worth mentioning that wait stats for these procedures can be captured and compared using Foglight for SQL Server and Performance Investigator.
We will look at this in a future blog post, work out where the problem is, and see how to optimise the code.