If You’re “API” and You Know It, Clap Your Hands!!

Problem to Solve — We have many separate APM/NPM (application and network performance management) tool sets and want to create integration points to incorporate all of these different data sets. How can we accomplish that?

 

Come on, you all know the song from your kindergarten days, so don’t be shy and sing along:

If you’re API and you know it clap your hands!!

If you’re API and you know it clap your hands!!

Oh wait a second, weren’t the words “happy and you know it”?….. darn nerd brain taking over again!! I have a feeling that people are going to start complaining about these goofy attempts at IT comedy. Obviously, this blog is directed at leveraging the Application Programming Interface (API) for those of us in the IT Service Delivery world. Here is the wiki link in case you have never heard the term: http://en.wikipedia.org/wiki/Application_programming_interface

API — Who Cares ???

To be honest, unless you have some basic programming interest or a job that demands programming skills, most of us are not going to leverage an API in our daily lives. For those of you who do have adequate programming skills, I applaud you, as the last programming I did involved the BASIC language. You might need to use an API if you have to integrate different systems, and a perfect use case for that is IT management tool sets. It really does not matter whether your tool sets are Open Source or commercial; they will typically have some type of API to get data “in or out” of the solution. Go ahead and search the product documentation or check with your solution vendor, but I will all but guarantee that you will find documented (or undocumented) APIs into the solution.
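To make that concrete, here is a minimal Python sketch of what getting data “out” might look like against a REST-style API. The endpoint, token, and field names are purely hypothetical, so treat this as a pattern, not a recipe for any specific tool.

```python
# A minimal sketch of pulling data "out" of a tool via its REST API.
# The endpoint, token, and JSON shape are hypothetical -- check your
# vendor's API documentation for the real details.
import json
import urllib.request

API_URL = "https://npm-tool.example.com/api/v1/interfaces"  # hypothetical
API_TOKEN = "replace-with-your-token"

request = urllib.request.Request(
    API_URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)

# Fetch and decode the tool's response, then walk the records.
with urllib.request.urlopen(request) as response:
    interfaces = json.loads(response.read())

for iface in interfaces:
    print(iface["name"], iface["utilization_pct"])
```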

API — Gimme Some Real World Use Cases ???

That I will. In most cases, the challenge is to gather and compare data from a combination of network tools, configuration tools, server tools, security tools, SYSLOG tools, application tools, and packet capture tools. Some examples that customers have requested from me in the past are on this short list.

  • Create a “Business Level Report” from combined network, server, and application tool data (see the sketch after this list)
  • Create an “Executive” style Services Dashboard
  • Share tool configuration data (i.e., interface names) with a Configuration Management Database (CMDB) tool. The wiki link: http://en.wikipedia.org/wiki/CMDB
  • Extract capacity planning information from network links, server statistics, and application performance
  • Model current network utilization to plan for a new XYZ application rollout
  • Compare traffic flows from a network tool against a security tool – firewall or policy
  • Feed a whole bunch of unstructured data into a Big Data type of deployment (a future blog topic unto itself)
  • Find invalid certificates in a PKI environment. See guest blogger Robert Wright’s article for a detailed use case: http://problemsolverblog.czekaj.org/troubleshooting/detection-invalid-certificates-packet-analysis/
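To make the first use case a bit more tangible, here is a rough Python sketch of merging network and server data into one “business level” view. The data shapes are invented for illustration; real tool APIs will return something messier.

```python
# A rough sketch of the first use case: merge network and server data
# into one "business level" view. The two dicts stand in for whatever
# each tool's API actually returns -- names and fields are made up.
network_data = {"web-app": {"avg_latency_ms": 42, "volume_gb": 120}}
server_data = {"web-app": {"cpu_pct": 71, "mem_pct": 64}}

# Combine metrics for every service that appears in both tools.
report = {}
for service in network_data.keys() & server_data.keys():
    report[service] = {**network_data[service], **server_data[service]}

for service, metrics in report.items():
    print(f"{service}: {metrics}")
```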

 

The Approach

When trying to solve these types of use cases, my approach is to simply get to the whiteboard (and I love whiteboards).   I find this gives me the best view of the overall project scope.

  1. Write down the problem that I need to solve.   (What is the overall Goal?)
  2. Write down all of the available tool solutions that are available to me.
  3. Write down each individual tool’s available data sets (e.g., server tools: CPU / memory / disk; network tools: bit rate / volume / errors)
  4. Target the data sets that best help me achieve the goal in step #1
  5. Analyze the APIs available in each solution and how to extract that data (see the inventory sketch after this list)
  6. Find someone that really knows how to program to execute the API process.     🙂
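For steps 2 through 5, I find it helps to capture the whiteboard output in a simple structure. Here is a hedged Python sketch of one way to do that; the tool names, data sets, and URLs are all placeholders.

```python
# One way to capture the whiteboard output from steps 2-5: a simple
# inventory mapping each tool to its data sets and API entry point.
# Tool names, data sets, and URLs here are placeholders.
TOOL_INVENTORY = {
    "server-tool": {
        "data_sets": ["cpu", "memory", "disk"],
        "api": "https://server-tool.example.com/api/metrics",
    },
    "network-tool": {
        "data_sets": ["bit_rate", "volume", "errors"],
        "api": "https://network-tool.example.com/rest/stats",
    },
}

# Step 4: target only the data sets that serve the goal from step 1.
goal_data_sets = {"cpu", "bit_rate"}
for tool, info in TOOL_INVENTORY.items():
    relevant = goal_data_sets & set(info["data_sets"])
    if relevant:
        print(f"{tool}: pull {sorted(relevant)} from {info['api']}")
```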

 

The Challenges

The challenge with any API is really finding a “happy place” between flexibility, data extraction and reconciliation, and overall ease of use. 🙂

1) SQL Interface – Let’s face it, each vendor’s solution will likely have a very different method for implementing its API. Some tools offer an industry-standard Structured Query Language (SQL) interface, and some have their own proprietary methods.

Why is this Valuable? – It will take a bit of time to review each tool’s API implementation so you can figure out how to best leverage the information. Because SQL is an industry standard (and there are lots of skilled resources available), this will likely be a more efficient process. If the API is not SQL-like, it is not a deal breaker by any means, but it does mean you will have to invest more time to figure out how it works.
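For illustration, here is a minimal Python sketch of the kind of query a SQL-style API makes possible. It uses an in-memory SQLite database to stand in for a vendor’s reporting database; the table and column names are assumptions.

```python
# If the tool exposes a SQL interface, extraction can be as simple as a
# query. An in-memory SQLite database stands in for the vendor's
# reporting database here; the table and columns are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE link_stats (link TEXT, ts TEXT, utilization_pct REAL)"
)
conn.execute(
    "INSERT INTO link_stats VALUES ('wan-1', '2014-01-01 09:00', 83.5)"
)

# The kind of query you might run against a tool's reporting database.
rows = conn.execute(
    "SELECT link, MAX(utilization_pct) FROM link_stats GROUP BY link"
).fetchall()
print(rows)
```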

2) Data Granularity – I cannot stress this concept enough, as referenced in one of my previous blog articles: http://problemsolverblog.czekaj.org/capacity-planning/watered-bandwidth-report-alvinnnnn/  You have to consider that you are trying to interface different tools from different vendors that collect and log their data with different methodologies. This is a key point relative to the overall goal of the project. For example, let’s assume that the goal for the API is to collect and report on data from server, network, and application tools. If the server tool collects data in 5-minute intervals, the network tool at 1-minute granularity, and the application tool at 15-minute granularity, then you have a problem to address because the data will not match up.

Why is this Valuable? The reason that I stress this is because you want to compare time series information in an “apples to apples” format. If not, the audience for the report is likely to misunderstand the data or make a poor decision based on it. To explain further, data collected at one-minute granularity will have five distinct data points in the same window where data collected at five-minute granularity has only one. The net effect is that one data set will show more data points than the other. The longer the collection interval, the more “averaged” and watered down the data becomes, and averaged data will effectively hide spikes.
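Here is a tiny Python sketch of the effect, using made-up utilization numbers. Rolling five 1-minute samples into one 5-minute average makes the spike disappear, which is exactly why granularity matters.

```python
# A small sketch of the granularity problem: rolling five 1-minute
# samples up into one 5-minute bucket. Note how the average hides the
# spike that the per-minute data (and the max) still shows.
one_minute_samples = [10, 12, 95, 11, 9]  # Mbps, one spike at minute 3

five_minute_avg = sum(one_minute_samples) / len(one_minute_samples)
five_minute_max = max(one_minute_samples)

print(f"average: {five_minute_avg:.1f} Mbps")  # 27.4 -- spike is gone
print(f"max:     {five_minute_max} Mbps")      # 95   -- spike preserved
```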

3) Wrapper or GUI – APIs can be an extremely effective means of data extraction, but if the end user cannot get at the data effectively, then you have a problem. I always recommend “hiding” the complexities of the API interface (which is usually a command line interface) behind a wrapper or graphical (GUI) front end.

Why is this Valuable? Let’s use an object lesson. Would you write a program to give to your CIO or a non-technical business user, and then teach them how to edit programming code and variables so that they could get their data? Or would it be a better idea to create a pretty, easy-to-use interface where they input variables and press the GO button to get their data? Enough said ….. 🙂
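As a hedged illustration, here is about the smallest possible Python/tkinter wrapper: one input field and a GO button in front of a command line tool. The wrapped command (“report-tool”) is hypothetical; swap in whatever script actually drives your tool’s API.

```python
# A bare-bones example of "hiding" a CLI behind a GUI: one input field,
# one GO button. The underlying command ("report-tool") is hypothetical.
import subprocess
import tkinter as tk

def run_report():
    service = service_entry.get()
    # The wrapped CLI call -- the end user never sees this.
    result = subprocess.run(
        ["report-tool", "--service", service],
        capture_output=True, text=True,
    )
    output_label.config(text=result.stdout or result.stderr)

root = tk.Tk()
root.title("Service Report")
tk.Label(root, text="Service name:").pack()
service_entry = tk.Entry(root)
service_entry.pack()
tk.Button(root, text="GO", command=run_report).pack()
output_label = tk.Label(root, text="")
output_label.pack()
root.mainloop()
```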

4) API can be an Input Process too – I think that by default, when we think of an API, it is usually in the context of data extraction. However, it is also important to consider the use of the API as a data input process. For example, pushing interface name data extracted from a CMDB tool into your other tools can be extremely effective.

Why is this Valuable? In the context of the CMDB example, it means that you could do your administration centrally and all at one time. This could additionally apply to other use cases such as adding/deleting users, device naming, application definitions, packet filters, etc. The bottom line is that the API, when used as an input, can save time administering multiple tools with the same data, as well as improve the accuracy of data input.
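Here is a hedged Python sketch of the input direction: pushing CMDB interface data into a monitoring tool so both stay in sync. The URL, payload shape, and lack of auth are all assumptions; consult your tool’s API documentation for the real contract.

```python
# Sketch of the API as an *input*: push interface names pulled from a
# CMDB into a monitoring tool so both stay in sync. The URL and payload
# shape are assumptions -- consult your tool's API docs.
import json
import urllib.request

cmdb_interfaces = [
    {"device": "core-rtr-1", "ifname": "Gi0/1", "alias": "WAN to DC-2"},
]

request = urllib.request.Request(
    "https://npm-tool.example.com/api/v1/interfaces",  # hypothetical
    data=json.dumps(cmdb_interfaces).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    print("update status:", response.status)
```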

Points to Ponder

  • What unique use cases for APIs have you leveraged in the past?
  • Any lessons or words of wisdom from your experiences?
  • How do you think the “Big Data” concept will increase or decrease the need for APIs?

Until next time …. 

 

🙄 🙄  🙄