Problem to Solve: A Model for a Successful Tools Rollout
Here is a link to an article I wrote for GRCoutlook.com in March 2022. It covers some of the best practices and guidelines for a successful network or cybersecurity tools deployment.
Problem to Solve: Hospitals combat rising denial-of-service attacks with network triage. I had the privilege of being interviewed by SiliconANGLE's "theCUBE" regarding the many challenges of IT security in healthcare. In a nutshell, the session highlights the challenges and safety …
Problem to Solve: My company has invested in an IoT platform. For such a vast network of data collection, how can we assure service delivery and performance for the whole platform?
So what does an old-school nursery rhyme have to do with the Internet of Things (IoT)? Not much, other than that if this nursery rhyme were written today, Old MacDonald would probably have some flavor of IoT running at the farm.
The new Internet of Things version of Old MacDonald would be more like:
a “device device” here ..
and “cloud cloud” there ..
here an “app”, there a “protocol”
everywhere an “Analytics Analytics”
Old MacDonald had a farm, E-I-E-IoT
So now we are all clear why I did not become a professional nursery rhyme writer. However, when it comes to IoT and the backend service delivery chain involved, I do have some thoughts.
Think about what an IoT service delivery chain actually looks like, from collection at the end device all the way through to the back-end analytics systems. It includes things like the device, transport of data, data collection, Big Data Analytics, Cloud, and integration with corporate systems.
Information is king, and that is why companies are investing in IoT platforms. The end device "widget" has all kinds of interesting data and metrics about itself. End devices come in all shapes and sizes, things like ATMs, Smart Meters, Google Glass, Medical Devices, Light Bulbs, Batteries, etc. The metrics for these devices will vary based on what the "widget" is, but examples include temperature readings, location/GPS, voltage, errors, etc. The data collected from the widget gets turned into business information at the end of the service delivery chain. Some customers are even looking at embedding "applets" (small application footprints) on the end device itself.
So once you have a device and data, you have to provide some method for getting the data "off of the device". Transport methods vary from RF, cellular, and wireless to long-haul Ethernet to accomplish the harvesting and transport of the collected data. Transport protocols range from custom-developed protocols to HTTP, MDM, Bluetooth, Zigbee, MQTT, etc. Many of them are based on IPv6 due to the extreme number of IP addresses involved.
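To make the transport step concrete, here is a minimal sketch of a device pushing one set of readings upstream over HTTP, one of the protocols listed above. The collector URL, device ID, and field names are all hypothetical placeholders, not any vendor's API; a production device would more likely use MQTT or a platform SDK with proper authentication.

```python
import json
import urllib.request

# Hypothetical collection endpoint and device identity (placeholders, not a real service)
COLLECTOR_URL = "https://iot-collector.example.com/v1/telemetry"
DEVICE_ID = "smart-meter-0042"

def send_reading(temperature_c, voltage, latitude, longitude, error_count=0):
    """Package one set of widget metrics and POST it to the collection hub."""
    payload = {
        "device_id": DEVICE_ID,
        "temperature_c": temperature_c,
        "voltage": voltage,
        "gps": {"lat": latitude, "lon": longitude},
        "error_count": error_count,
    }
    request = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.status

if __name__ == "__main__":
    print(send_reading(temperature_c=21.4, voltage=3.3, latitude=41.49, longitude=-81.69))
```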
The data from the end device gets transmitted back to a data collection hub. These hubs could be deployed as Cloud options (e.g. Amazon Web Services or Azure IoT Hub), co-location facilities, or distributed data centers, or the data can simply be backhauled all the way to the corporate data center. At this layer, the raw data is usually aggregated and processed into the format that will be pushed into a Big Data Analytics solution. Many times the information coming out of the Analytics solution is fed back into a corporate ERP or CRM system.
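As a rough illustration of the aggregation that happens at the collection hub, the sketch below rolls raw per-device readings up into one summary record per device, the kind of normalized shape an analytics platform could ingest. The field names mirror the hypothetical payload above and are assumptions, not any vendor's schema.

```python
from collections import defaultdict

def aggregate(readings):
    """Collapse raw telemetry readings into one summary record per device."""
    grouped = defaultdict(list)
    for reading in readings:
        grouped[reading["device_id"]].append(reading)

    summaries = []
    for device_id, items in grouped.items():
        temps = [r["temperature_c"] for r in items]
        summaries.append({
            "device_id": device_id,
            "sample_count": len(items),
            "avg_temperature_c": sum(temps) / len(temps),
            "max_temperature_c": max(temps),
            "error_count": sum(r["error_count"] for r in items),
        })
    return summaries
```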
There are many types of analytics platforms to help create information from the data. The goal is to look for trends inside the data (usage, failures, metrics, maintenance, population) and glean information that leads to better business decisions. Creating a competitive advantage in new business markets (e.g. a Blue Ocean Strategy, https://en.wikipedia.org/wiki/Blue_Ocean_Strategy) from your own collected data is a tremendous opportunity.
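Continuing the toy example, a trivial piece of trend analysis might flag devices whose error rate has drifted past a threshold. Real Big Data Analytics platforms do far more, but the underlying question ("which widgets need maintenance?") can look as simple as this; the threshold and field names are again hypothetical and assume the summary records shown above.

```python
def flag_for_maintenance(summaries, error_rate_threshold=0.05):
    """Return device IDs whose observed error rate exceeds the threshold."""
    flagged = []
    for record in summaries:
        error_rate = record["error_count"] / max(record["sample_count"], 1)
        if error_rate > error_rate_threshold:
            flagged.append(record["device_id"])
    return flagged
```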
Are you deploying a private or public cloud solution, or maybe still in the planning stages? Of course you are. Or if you're not, someone within your organization may already have started! In this particular blog post, I will focus on cloud deployed as Infrastructure as a Service (IaaS), which in the public setting would be Amazon EC2, Google GCE, or Microsoft Azure Virtual Machines. The equivalent private cloud vendors would be VMware and Microsoft Hyper-V. Disclaimer: there are MANY MANY more vendors in both the private and public cloud spaces than I have listed. Please do not take the omission as a disregard for the excellent products and services other solutions offer.
Additionally, the fundamental technology discussed in this blog post (802.1AE/MACsec) can be leveraged outside the cloud as well.
Balance
Growing up in Northeast Ohio, I was fascinated with all things "security". In my youth, SNORT was on version 1.8 and I had just installed it on a 133 MHz PC to justify the creation of a security budget at my employer at the time. The concerns over security quickly started to pile into our cubicles as we evangelized our security messaging.
As we started to express the need to inspect traffic inbound and outbound to the internet, folks were concerned about their privacy. The now-defunct SSL 3.0 was just starting to gain traction, and we noticed how useless our signatures had become on this subset of traffic.
These concerns directly coincide with the CIA triad of Confidentiality, Integrity and Availability. This triad existed then, and I would argue that it has even more usefulness and purpose today.
What I quickly learned back then was that there is a need for balance. You can lock everything down with encryption, but you immediately hamper (note I did not say limit) your visibility into both integrity and availability. In the world of service triage, we must be able to understand historical trends in order to define availability. The same holds for integrity, as we become unable to monitor and perform analytics on the user's traffic.
MACsec aka 802.1AE
When MACsec was first presented to me, I immediately and foolishly thought it was some new Apple security applet. It is actually data-link layer (Layer 2) encryption, which is increasingly being deployed in both private and public clouds. This functionality permits point-to-point encryption between pieces of network gear OR between hosts on the same network. MACsec is an ideal deployment option within the VXLAN space.
Layer 2 encryption provides a unique capability: depending on the deployment methodology, it can mean zero impact on the individual hosts. Server and security teams rejoice, as they are now able to "break bread" together. All the while, the NOC, monitoring teams, and problem solvers are scratching their heads over how this implementation impacts their monitoring tools.
Some, even within the very security teams recommending MACsec, start to question how the tools (which they likely bought after the last breach) will interface with this new world of confidentiality. The reality is that the common methodology of tapping network links and deploying aggregation switches is useless in this scenario.
So, if you use this method you have just implemented a security strategy which ensures complete confidentiality against the risk of your cloud vendor snooping on your data. However, you have effectively locked out every other team and workflow used to ensure availability and integrity.
How do we re-balance this scenario?
Service Triage of Yesterday, and Today
If you haven't been keeping tabs on the service triage marketplace, you may have missed the significant developments this space is churning out. It was once a singular solution, with only taps/SPANs feeding hardware-based probes. It then evolved into a two-pronged approach, with taps/SPANs feeding aggregation switches, which in turn fed the monitoring solutions. For the past 5 years that approach has been very successful for numerous vendors and customers.
The introduction of cloud forced the marketplace to rethink service triage. How do we implement a hardware-based solution of probes and aggregation switches when our applications reside virtually inside the cloud? You simply cannot start deploying hardware inside Amazon, Google, or Microsoft. As we begin to look at SDN and NFV deployments, the questions start outpacing the answers, and I haven't even mentioned containers! The simple answer is that we must take a hybrid approach across the board to address these types of environments.
Once we define the problem we are attempting to solve, we can select the tools that can solve said problem.
Hybrid Service Triage
As stated in the previous section, we need to "call an audible" during the kickoff of these projects. Covering the various network edges such as the Internet, cloud connections (Direct Connect, ExpressRoute, etc.), WAN, and MAN is a good start.
But to the point of this article: if we were to implement taps across an infrastructure which leverages MACsec, we would simply see large volumes of encrypted traffic we cannot interpret. So while we have the edge devices to understand network utilization of our finite network resources (Internet/WAN bandwidth), we need to deploy micro-services inside our cloud environments to better define availability.
By deploying these micro-services dedicated to service triage, we gain visibility before the encryption occurs. BINGO! But what about those security tools ensuring integrity?
Micro-Services Answer the Call of Enabling Integrity
Micro-services, having direct access to raw packets, can either create smart data for security tool consumption or even export traffic to the previously mentioned aggregation switches. That's right, I said it! The micro-service can tunnel traffic back to your aggregation switches and on to your "ground-locked" security tools.
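To make the idea of a triage micro-service tunneling traffic back to ground-locked tools more tangible, here is a deliberately naive sketch: capture frames on the local interface (i.e. before any link-layer encryption would occur on the wire) and forward them over plain UDP to an on-prem aggregation point. The interface name and destination address are placeholders, it requires Linux and root privileges, and a real agent would filter, slice, and encapsulate with a proper tunneling protocol (GRE/ERSPAN/VXLAN) rather than raw UDP.

```python
import socket

CAPTURE_IFACE = "eth0"                 # hypothetical interface inside the cloud workload
TUNNEL_DEST = ("203.0.113.10", 40000)  # hypothetical on-prem aggregation endpoint

# Raw capture socket: sees this host's frames before link-layer encryption happens on the wire
ETH_P_ALL = 0x0003
cap = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
cap.bind((CAPTURE_IFACE, 0))

# Plain UDP socket acting as a naive "tunnel" back to the ground-locked tools
tun = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

try:
    while True:
        frame, _ = cap.recvfrom(65535)
        # Forward a truncated copy of the raw frame; a real agent would be far smarter here
        tun.sendto(frame[:1400], TUNNEL_DEST)
except KeyboardInterrupt:
    pass
finally:
    cap.close()
    tun.close()
```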
So let's take a step back for a moment. Your service triage tool set can take your traditional hardware-based security tools and "cloud enable" them!! This can be a HUGE budget-slashing opportunity and cost savings. By deploying service triage solutions into the cloud, you gain visibility into cloud application traffic that can be backhauled to your existing security solutions (on premise). The net win of this deployment is that you now have the necessary visibility into the cloud traffic AND effectively make your "existing security tools" capable of seeing this traffic as well. Leveraging existing investments always yields cost avoidance, budget-wise.
Problem to Solve: With the popularity of video streaming of the NCAA Basketball tournament, how can I be sure that our network does not collapse under the load? Under the definition of irony would be a picture of me …
In my last article, I spoke of seeking advice and almost portrayed a level of caution when approaching cloud. The reality of the matter is, we all want what the cloud promises: an easier and more profitable platform for delivering services at …
I absolutely love my job, and obviously have no qualms about telling people about it! On a daily basis, I help people solve complex business problems. Sometimes those business problems are rooted in human nature, not technology. When human nature …
Problem to Solve – My company transitioned to a Cloud-based Office365 deployment. How can we assure the application service is working properly if it is not in our Data Center? In the past year, we have had many customers ask us …
Problem to Solve – My internal customers are complaining about the performance of the new Cloud-based application that our company just rolled out. How can my APM/NPM tools help me with Cloud Apps? Ah, the "Cloud"… One of …