In the first article in this series, we reviewed how NPM statistics provide insight into the operation and performance of internal network super highways. The second article introduced the role of service enablers and how their poor performance can drastically impact application performance. This article will focus on flow-based APM metrics, which are available in traffic flow data.
There are two primary methods for this type of data collection: synthetic agents and traffic flow data. Synthetic agents have their place in the enterprise, and I have recommended them to clients to address certain use cases. However, if the only source of data is a synthetic agent, and the agent has alerted on a condition, then what? What do you do now? Is this alert showing me that a single transaction has gone “off the rails,” or do I have an enterprise-wide issue? Quite simply, this is very difficult to determine. I believe that if you combine the two methods (agents and traffic flow), then you have a real-world solution. In my opinion, synthetic transactions alone typically represent a product, not a total solution.
Before continuing, I would like to clarify that my use of the term “agent” in this article refers to synthetic agents, not to a component management agent that tracks hardware such as disk drives, CPU, and memory usage. I will touch on component management and its role in article 5.
Looking at Traffic Flow Data
So why look at flow data (packets) traversing your internal network? Isn’t that data-set just for the packet jockey/Packet Ninja/Packet Head on staff? NO! From this extremely rich data-set you can determine the individual performance of PCs and servers, and detect and alert on errors. If properly deployed, key information is available in the metadata to present to an untapped audience.
Since we are looking at packet flow data, we are reviewing live client/customer data, which is exactly the rich data-set you should review! This approach is the direct opposite of synthetic transactions. The common challenge with so much data is how to best locate the “diamond in the rough.” As speeds, network convergence, and adoption of the Internet of Things (IoT) increase, a true strategy is required. A strategy will help you address security and other considerations so you’re able to keep up with this raging data flow. But more on that strategy in article 5!
What can you expect by adding network-based application performance management to your tool-set? First, you gain visibility into what clients are experiencing. As users access resources, encounter errors, or attempt a malicious act, we can observe it at the layer 7 protocol level.
For example, if you were to successfully access www.epic.com, the network-based APM tool-set would show a 200 HTTP status code. Failed attempts may show a range of HTTP protocol errors, from 404s to 500s. By watching these flows, we can keep track of the resources being accessed (URLs, in the case of HTTP), which can be monitored individually or in aggregate. In addition, we can monitor the response time of individual URLs and their corresponding network metrics/response times. This functionality is a key building block of a solution, as it accounts for multiple disciplines and perspectives rather than a singular targeted viewpoint.
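To make this concrete, here is a minimal sketch in Python of what that aggregation might look like. The record format (URL, HTTP status code, response time in milliseconds) is hypothetical, a stand-in for what a flow-based APM tool might export after decoding layer 7 traffic; it is not any particular vendor's format.

```python
from collections import defaultdict

def summarize_http_flows(records):
    """Aggregate decoded HTTP flow records into per-URL statistics.

    Each record is a (url, status_code, response_time_ms) tuple, a
    hypothetical shape for data decoded from layer 7 traffic flows.
    """
    stats = defaultdict(lambda: {"hits": 0, "errors": 0, "total_ms": 0.0})
    for url, status, rt_ms in records:
        entry = stats[url]
        entry["hits"] += 1
        entry["total_ms"] += rt_ms
        if status >= 400:  # 404s, 500s, etc. count as errors
            entry["errors"] += 1
    # Report hit count, error count, and average response time per URL
    return {
        url: {
            "hits": s["hits"],
            "errors": s["errors"],
            "avg_ms": s["total_ms"] / s["hits"],
        }
        for url, s in stats.items()
    }

# Example: three observed responses, using www.epic.com as in the text
flows = [
    ("www.epic.com/", 200, 42.0),
    ("www.epic.com/", 200, 58.0),
    ("www.epic.com/missing", 404, 12.0),
]
print(summarize_http_flows(flows))
```

Even this toy version shows the payoff: one pass over live flow data yields both the error picture (the 404) and the per-URL response-time baseline, without a single synthetic transaction being generated.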
Continue on to the next article in the series:
“Applying Advanced APM to Healthcare”
Link to the article ===> http://problemsolverblog.czekaj.org/troubleshooting/applying-advanced-apm-healthcare-part-4/