In a recent debate on the interwebs, Jim Hirschauer took the stance that “APDEX is Fatally Flawed” and Jonah Kowall from Gartner fired back on Twitter, disagreeing with the analysis.
So who has it right?
I would agree with the “APDEX is Fatally Flawed” blog post that APDEX is flawed, not because it’s a useless index, but because it has failed to gain widespread adoption. As product manager for the Foglight user experience management solution since 2005, I have had fewer than five requests for APDEX metrics from prospects and customers. The few users who have requested it seem to be rather academic sorts, not your typical user. Only one of those who asked was looking to use APDEX to promote the score to the business, which I find telling, since that’s exactly where I would expect such an index to be useful.
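For readers who haven’t met the index itself: the published Apdex formula scores a set of response-time samples against a single target threshold T. Samples at or under T count as satisfied, samples between T and 4T count as tolerating (at half weight), and anything slower counts as frustrated. A minimal sketch in Python, with invented sample values for illustration:

```python
def apdex(response_times, t):
    """Standard Apdex score: (satisfied + tolerating / 2) / total.

    satisfied:  time <= t
    tolerating: t < time <= 4 * t  (counted at half weight)
    frustrated: time > 4 * t       (counted as zero)
    """
    satisfied = sum(1 for rt in response_times if rt <= t)
    tolerating = sum(1 for rt in response_times if t < rt <= 4 * t)
    return (satisfied + tolerating / 2) / len(response_times)

# Hypothetical sample: four page loads against a 1-second target.
# 1 satisfied (0.5s), 2 tolerating (1.2s, 3.5s), 1 frustrated (9.0s).
print(apdex([0.5, 1.2, 3.5, 9.0], t=1.0))  # -> 0.5
```

The score collapses an entire distribution into a single 0-to-1 number, which is precisely both its appeal and, as discussed below, its limitation.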
Let’s take a look at our other options: what else is there that’s better able to broadly communicate a website performance trend? The best is still surveys. You just can’t beat qualitative data straight from the users. The problem is the difficulty of gathering enough of that information to call it an indicator.
The most widely used standard for communicating website performance and availability is synthetic transaction monitoring. The reason for this is that most organizations already own some kind of synthetic monitoring solution, and the reports are easy to understand. They measure response time and availability of a sequence of pages that are familiar to the business, and they have a clean perspective. What I mean by clean perspective is that they run the same user, from the same browser, on the same machine, with the same network, and the same resources. The only variables that aren’t controlled are performance and availability. The problem we are all too familiar with, with regard to synthetics, is that there usually aren’t enough of them running against the application to be statistically significant (hence the requirement for real end user monitoring).
The unsolved problem in the web application monitoring market today is: how do you paint a picture of the real user experience that’s as simple to digest as synthetics? Averages are out of the question because of outliers, and response time distributions are difficult to understand. Modes and bell curves are great for the analytics gurus, but what do we give the regular Joe so he can tell, at a glance, whether his web performance is good or bad, and see a trend?
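To see why averages are out of the question, consider a small illustration (the numbers are fabricated): a single pathological request can drag the mean far away from what nearly every real user experienced.

```python
# 99 pages load in 1 second; one hangs for 300 seconds before timing out.
page_times = [1.0] * 99 + [300.0]

mean = sum(page_times) / len(page_times)
median = sorted(page_times)[len(page_times) // 2]

print(f"mean:   {mean:.2f}s")    # 3.99s -- looks like a 4-second site
print(f"median: {median:.2f}s")  # 1.00s -- what 99% of users actually saw
```

The average reports a site four times slower than what 99% of users experienced, which is why any index built for real user data has to be robust to outliers.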
In Foglight we use an SLA. We set one threshold for page time end-to-end and another for page time backend. Let’s say the end-to-end SLA is 3 seconds and the backend SLA is 1 second. If you had 1000 pages and 800 of them were under 3 seconds, you would have an SLA attainment of 80%. We feel it’s a bit easier to explain, and it gives our users a key performance indicator (KPI) not unlike APDEX. This, in a sense, is our index.
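The attainment arithmetic above is simple enough to sketch. Assuming per-page time measurements (the data below is fabricated to match the numbers in the text), attainment is just the fraction of pages at or under the threshold:

```python
def sla_attainment(times, threshold):
    """Fraction of pages whose time is at or under the SLA threshold."""
    return sum(1 for t in times if t <= threshold) / len(times)

# Fabricated example matching the text: 800 of 1000 pages
# come in under the 3-second end-to-end SLA.
end_to_end = [2.0] * 800 + [4.5] * 200
print(f"{sla_attainment(end_to_end, threshold=3.0):.0%}")  # 80%
```

The same function applies unchanged to the backend measurement with its own 1-second threshold, giving two trendable percentages rather than one blended score.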
While I too have doubts about the practicality and usefulness of the APDEX approach, communicating a business transaction status alongside an IT transaction status, and analyzing how one impacts the other, is really key to establishing a solid APM practice.