Am I correct in assuming that the chart scale in SQL PI (seconds/s) is seconds per session?

  • Hi, 

The seconds/s metric is seconds per second. It represents the number of seconds of DB workload (CPU usage + wait time) per second of clock time.

If you had a single-CPU instance, you would have 3600 seconds of compute available each hour. If queries were using all of the CPU, the chart would average 1 second/s.

If additional SQL starts waiting on CPU or another resource, the graph starts to climb.

On multi-core servers, you will have "core count" x 3600 seconds of CPU available each hour.
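The arithmetic above can be sketched in a few lines. The core count and workload figures here are hypothetical, chosen just to illustrate how the seconds/s value relates to CPU capacity:

```python
# Illustrative sketch of the seconds/s arithmetic (numbers are assumptions).
core_count = 4
window_seconds = 3600                    # one hour of wall-clock time

# Total CPU-seconds available in the window: core count x 3600 per hour
cpu_capacity = core_count * window_seconds   # 14400 seconds of compute

# Hypothetical measured DB workload (CPU usage + wait time) in that hour
db_workload = 5400

# The seconds/s metric: workload seconds per second of clock time
seconds_per_second = db_workload / window_seconds
print(seconds_per_second)                # 1.5 seconds/s

# Workload above core_count seconds/s means sessions are queuing/waiting
print(seconds_per_second > core_count)   # False: still within CPU capacity
```

So on this hypothetical 4-core box, a sustained reading above 4 seconds/s would indicate work waiting on resources rather than executing.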

    There's also a support KB available here with additional explanations.

• I can't seem to add a new topic, so I'll ask my question here. Does the time range on the SQL PI screen show items based on the time (and time zone) they occurred on the monitored SQL Server, or the time they were added to the Foglight SQL PI server? Say a crash happened on a server in CET, but my Foglight servers are in the US EDT time zone, and the crash happened at 1:02 PM CET. Would I search Foglight PI using the 1:02 PM time, or do I need to adjust my time frame to the time zone my Foglight servers are in?

• Don, you have to convert. Some of our servers are in UTC; when they fail and I search the server logs, I have to add 5 hours to the Foglight error time to get to the right server log time. On my local terminal, I set my clock (on the task bar) to show both local (Foglight server time) and UTC (SQL Server time) so I don't have to think about standard time, daylight saving time, etc.

• Thanks!! Showing both is a good idea. I have SQL Servers in 3 different AWS regions, so this is an important distinction and I appreciate the info!