A Simple Perspective on Innovation in Application Performance Management

Most people agree that end-user response time is the most important metric for judging application performance (or "app performance," as it is more popularly called these days). We will simply refer to it as response time in this article. The conventional definition of response time is the time elapsed between a user clicking a button or hitting the return key and receiving the results she expects. Another way of defining it is the wait time the user experiences during that same operation.

The most important direct factors that contribute to response time are the size of the result in bytes, the bandwidth of the network path, network conditions (such as packet loss), network latency, application chattiness, and server load. One can measure response time by instrumenting the user's machine or simply by clocking the operation while sitting next to the user, as in the sketch below. When the app runs slowly and the response time is long, it generally means one or more of these direct factors is causing the slowness. One can troubleshoot by focusing on them and, hopefully, fix the problem.
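As a rough illustration of the instrumentation approach, here is a minimal Python sketch that clocks a single request from the user's side. The URL is a placeholder and the timing is deliberately coarse; real APM agents capture far more detail, but the basic measurement looks like this:

```python
import time
import urllib.request

def measure_response_time(url: str) -> float:
    """Clock the elapsed time between issuing a request and
    receiving the complete response body, in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        body = response.read()          # wait for the full result to arrive
    elapsed = time.perf_counter() - start
    print(f"{url}: {len(body)} bytes in {elapsed:.3f} s")
    return elapsed

if __name__ == "__main__":
    # Hypothetical endpoint; substitute the page your users actually load.
    measure_response_time("https://example.com/")
```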

But things are not so simple anymore. There is good news, and some bad news. The good news is that there is a lot of innovation in speeding up, or accelerating, applications. Remember when a friend posed a complex puzzle and you gave an instant answer that amazed him, making you appear far more brilliant than you really are? (The truth was that somebody had posed the same puzzle a few months earlier and you could not solve it for days; you finally gave up and got the answer from that person.)

In much the same way, an app such as a web browser might have already cached the image you were trying to download and surprise you with an instant result; this is caching at work (a small sketch of the pattern follows). Many more techniques are emerging in the Application Performance Management (APM) space to speed up or accelerate applications: compression, TCP optimization, proxying and chattiness reduction, pre-fetching, Content Delivery Networks (CDNs), and so on.
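To make the caching idea concrete, here is a minimal sketch of the pattern a browser cache follows: answer from local storage when possible, and go to the network only on a miss. The in-memory dictionary and URLs are illustrative placeholders, not any particular browser's implementation:

```python
import urllib.request

_cache: dict[str, bytes] = {}  # simple in-memory cache keyed by URL

def fetch_with_cache(url: str) -> bytes:
    """Return the cached copy instantly if we have one;
    otherwise fetch it from the network and remember the result."""
    if url in _cache:
        return _cache[url]            # cache hit: no network wait at all
    with urllib.request.urlopen(url) as response:
        body = response.read()        # cache miss: pay the full response time once
    _cache[url] = body
    return body

# The second call returns immediately from the cache.
fetch_with_cache("https://example.com/logo.png")
fetch_with_cache("https://example.com/logo.png")
```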

Big web portals are constantly innovating to enhance the end-user experience. Google, for example, is always at the cutting edge of providing almost instantaneous response times. When you search for keywords, Google expects that you are most likely to click the first of the ten results, so it can pre-fetch that link while you are still thinking and moving your mouse. When you actually click the first link, the results are already on your computer and the gratification is instant. Notice that this would not work if you happened to click the third or fourth link instead; a sketch of the idea follows.
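Here is a minimal sketch of that pre-fetching idea, assuming the page can guess the user's most likely next click: start downloading it in the background while the user is still deciding, and fall back to a normal fetch only if the guess was wrong. The threading approach and URLs are purely illustrative, not how Google actually implements it:

```python
import threading
import urllib.request

def prefetch(url: str, results: dict) -> threading.Thread:
    """Start downloading url in the background and return the worker thread."""
    def worker():
        with urllib.request.urlopen(url) as response:
            results[url] = response.read()
    thread = threading.Thread(target=worker, daemon=True)
    thread.start()
    return thread

prefetched: dict[str, bytes] = {}
top_result = "https://example.com/first-result"   # the guessed next click

# Kick off the download while the user is still moving the mouse.
worker_thread = prefetch(top_result, prefetched)

def on_click(url: str) -> bytes:
    """Serve the prefetched copy if the guess was right, else fetch now."""
    if url == top_result:
        worker_thread.join()          # usually already finished by click time
        return prefetched[url]
    with urllib.request.urlopen(url) as response:
        return response.read()        # wrong guess: pay the normal response time
```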

This is all good news, and we can get used to great response times at home, at work, and on the road. The only bad news is that if something goes wrong in this fast lane (like a traffic jam), how do you troubleshoot the problem? It will surely be a lot more complex than in the simpler world we had before.

It is imperative that the tools that help troubleshoot app performance issues, and the experts who do the troubleshooting, keep up with these rapid innovations. It is also important to get direct feedback from end-users (through trouble-ticketing systems, automated surveys, etc.). Together, they should be able to fix response time issues fairly quickly. Otherwise, end-users will be left in alternating states of agony and ecstasy.

Apsera Tech, a premium APM consulting company, has years of experience in WAN optimization, networking, and application performance management. It has helped Fortune 1000 companies in industries as diverse as financial services, healthcare, manufacturing, and publishing plan for and resolve critical business application performance issues such as slow application response time, WAN optimization, and WAN acceleration.
