I absolutely love my job, and obviously have no qualms about telling people about it! On a daily basis, I help people solve complex business problems. Sometimes those problems are rooted in human nature, not technology. When human nature, rather than the human condition, drives technology adoption, we open ourselves and our corporations up to harm.
Over the past months I’ve seen a sudden surge in cloud adoption. The vast majority of these projects are the result of executive initiatives aimed at saving money, increasing productivity, and simplifying IT. What’s not to love? All of these are laudable goals! The problem arises when these solutions are “slammed” in without guidance from industry, vendor, or internal company experts. In my opinion, at this point we enter a “modern-day gold rush”.
Without consideration of existing workflows, infrastructure, and policies, corporations are rushing to adopt the cloud, thinking only of gains and not of how to handle problems, just like a pretty white cloud that turns into a thunderhead. Or, perhaps they are failing to ask the question, “Can we support such an offering internally?” The result? A fantastic business and technology goal degrades productivity and increases soft costs.
Much like the gold miners of the 1800s rushing to California without first thinking of winter and food, modern-day pioneers forget to plan how they will measure success, or how they will triage service failures. Your industry/vendor experts should be on site providing guidance to you, the consumer, in the role of third-party advisors, so you can avoid these common speed bumps in cloud as well as your many other IT initiatives. When both consumer and vendor enter into a true partnership, the consumer realizes value and the vendor profits.
Common Mistakes of Cloud Deployments
How will your cloud deployment connect to the cloud offering? “The internet, my good man!” is the most common response I hear, while also watching the IT Director/Architect/Project Manager grimace. Why are these individuals perceived as Debbie Downers?
Your internal experts know your infrastructure better than most. Listen to them, and ask questions to ensure your project has the highest chance of success. Service levels are not guaranteed by the internet. Right from the beginning of your cloud deployment you may experience long response times, disconnects, and general errors resulting from the instability of the internet.
Additionally, your own employees or customers could inadvertently cause your cloud initiative to experience problems. How? Internet bandwidth consumption. If your corporation offers public Wi-Fi or a liberal internet use policy, you may find that your existing connectivity cannot support the additional load of the cloud.
During the discovery phase of your cloud project, review existing connectivity. Your existing internet connection may very well suffice; however, have you performed the analysis? This is most easily done with your network performance management (NPM) toolset, in under 30 minutes.
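To make the analysis concrete, here is a minimal sketch of the arithmetic an NPM toolset applies when it reports link utilization. The function name and sample numbers are hypothetical; real tools sample interface octet counters (for example, via SNMP) and apply the same math.

```python
# Hypothetical sketch: estimating link utilization from two interface
# octet-counter samples, the same math an NPM toolset applies.

def utilization_pct(octets_t0, octets_t1, interval_s, link_bps):
    """Percent utilization of a link between two counter samples."""
    bits = (octets_t1 - octets_t0) * 8
    return 100.0 * bits / (interval_s * link_bps)

# e.g. a 100 Mbps internet link sampled 60 seconds apart
busy = utilization_pct(0, 450_000_000, 60, 100_000_000)
print(f"{busy:.0f}% utilized")  # 450 MB in 60 s is about 60 Mbps, so 60%
```

If that number is already high before the cloud pilot begins, you have your answer about whether existing connectivity will absorb the additional load.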
Additionally, by utilizing a pilot program, you can perform capacity planning to estimate whether further connectivity is required. Even if projections show that existing connectivity will suffice, you should invite your ISP to discuss the lead times required to bring new connections online. Take this time to also discuss their past experiences with customer deployments of your cloud of choice.
Finally, you may find that dedicated connectivity is required to the cloud provider. If this is the case, the question must be asked, “How much bandwidth do we really need?” Again, NPM helps answer this question to appropriately size the dedicated connectivity.
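One way NPM pilot data can answer the “how much bandwidth” question is by extrapolating from the pilot population to the full user base. This is a hypothetical back-of-the-envelope sketch; the function, figures, and 30% headroom factor are illustrative assumptions, not vendor guidance.

```python
# Hypothetical capacity-planning sketch: extrapolating pilot measurements
# to the full user population, with headroom for growth and bursts.

def required_bandwidth_mbps(pilot_peak_mbps, pilot_users, total_users,
                            headroom=1.3):
    """Scale a pilot's peak usage to all users, plus 30% headroom."""
    per_user = pilot_peak_mbps / pilot_users
    return per_user * total_users * headroom

# e.g. 25 pilot users peaked at 12 Mbps; 500 users planned company-wide
print(round(required_bandwidth_mbps(12, 25, 500)))  # 312 (Mbps)
```

Peak values, not averages, drive the sizing here: a dedicated link that only fits the average load will disappoint at month-end or during busy hours.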
Problem: Existing Infrastructure
Does your corporation provide content filtering of illicit internet content? If so, this function in itself can cause a problematic condition. Certain cloud providers (see Microsoft’s blog) recommend against the use of proxies with their solutions. Talk with your internal experts BEFORE committing to deployment or implementation timetables.
Solution: Existing Infrastructure
By deploying an NPM and APM solution around your existing infrastructure (proxies, internet edge, WAN), you can honor the cloud vendor’s advice while still guaranteeing service levels and enforcing internet use policies. A follow-up article will address how you can leverage your solution for this very topic.
Problem: Legacy Solution
You’re deploying your brand-new cloud offering; how do you and your team know the existing performance of the solution it replaces? What… you don’t care? Perhaps you should reconsider. After completion of your project, you can provide ROI statistics such as, “By moving to the cloud we reduced OPEX by 20% and increased productivity by 50% by reducing response time.”
When the field reports that your new offering is slower than the original in-house solution, what better way to triage than to have baseline statistics for both the existing application and your cloud offering? Now you have third-party numbers to inform next steps.
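A baseline is simply a summary of response-time samples captured before and after the migration. This minimal sketch shows the kind of figures an APM tool produces; the sample values and the simple percentile picker are hypothetical stand-ins for real captured data.

```python
# Hypothetical baselining sketch: summarizing response-time samples from
# the legacy application and the cloud offering, as an APM tool would.

import statistics

def baseline(samples_ms):
    """Summarize a set of response-time samples (milliseconds)."""
    s = sorted(samples_ms)
    return {
        "median": statistics.median(s),
        "p95": s[int(0.95 * (len(s) - 1))],  # nearest-rank percentile
    }

legacy = baseline([120, 135, 128, 140, 510, 122, 131])
cloud  = baseline([95, 101, 99, 104, 350, 97, 102])
print("legacy:", legacy, "cloud:", cloud)
```

Note the use of median and 95th percentile rather than a plain average: a single 500 ms outlier would otherwise hide the fact that most users are well served.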
Solution: Legacy Solution
Again, your APM/NPM solution is here to save the day. By utilizing the very same equipment I mentioned previously, you can provide response-time data and adoption numbers for your cloud offering.
To provide coverage around legacy applications, review the packet broker market. These solutions provide the flexibility to deploy APM/NPM toolsets and then move them to where they are needed most.
Many cloud offerings realize their fullest value by moving existing applications to the cloud. Simple enough, right? “Why is it taking my IT team so long to move this one little application?”
The reality of today’s IT is that tribal knowledge of how an application truly works, and of its dependencies, is exceptionally scarce. You may go in thinking you need to move one web server, only to discover it’s supported by three application servers, an independent database, and links into the mainframe. This is the equivalent of a gold miner digging under a lake and puncturing the lake bed: your miners/IT folks will be underwater, every man for himself.
Throughout the previous examples, we deployed instrumentation around our legacy application and measured response times. This will also reveal conversations and dependencies that we were previously unaware of. The first step, however, is acknowledging that we don’t have all the answers, nor do we fully understand the application. Only then can we step out of our own way and learn!
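The dependency discovery described above can be sketched as a walk over the conversation records an NPM tool exports. The hostnames, ports, and flow records below are hypothetical illustrations of the one-web-server-turns-into-five scenario, not output from any particular product.

```python
# Hypothetical sketch: mining NPM conversation records (source, destination,
# port) to reveal an application's hidden dependencies before a migration.

from collections import defaultdict

# assumed flow records exported by an NPM toolset
flows = [
    ("web01", "app01", 8080), ("web01", "app02", 8080),
    ("web01", "app03", 8080), ("app01", "db01", 1521),
    ("app01", "mainframe01", 23),
]

def dependencies(flows, root):
    """Walk flow records outward from one host to map its dependency tree."""
    by_src = defaultdict(set)
    for src, dst, port in flows:
        by_src[src].add((dst, port))
    seen, stack = set(), [root]
    while stack:
        for dst, port in by_src[stack.pop()]:
            if dst not in seen:
                seen.add(dst)
                stack.append(dst)
    return sorted(seen)

print(dependencies(flows, "web01"))
# the "one little web server" actually drags five systems along with it
```

The moment the walk surfaces a mainframe link, the migration plan changes: that is exactly the lake bed you want to find before you dig.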
Even more misunderstood and forgotten are service enablers. Enablers are basic services that, if taken down or degraded, will waste your team’s time and cost your corporation money. Enablers provide the lookup and location data your end users need to find cloud services.
By breaking out your enablers (LDAP/RADIUS/DNS/etc.) in your APM/NPM toolset, you will be able to track and understand the requirements of your cloud offering. Remember, just because you’re moving to the cloud doesn’t mean your data center will shut down! Many internal resources are still required, and you will continue to host them locally!
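“Breaking out” enablers often amounts to classifying observed traffic by well-known service port so each enabler gets its own view. This is a minimal, hypothetical sketch of that classification; the port table uses standard IANA assignments, while the traffic records are invented for illustration.

```python
# Hypothetical sketch: tagging observed traffic by service enabler so DNS,
# LDAP, and RADIUS load can be tracked separately in APM/NPM views.

ENABLER_PORTS = {53: "DNS", 389: "LDAP", 636: "LDAPS",
                 1812: "RADIUS", 88: "Kerberos"}

def classify(records):
    """Count packets per enabler; everything else is application traffic."""
    counts = {}
    for dst_port, packets in records:
        name = ENABLER_PORTS.get(dst_port, "application")
        counts[name] = counts.get(name, 0) + packets
    return counts

# assumed (destination port, packet count) pairs from a capture summary
print(classify([(53, 1200), (443, 50000), (389, 300), (53, 800)]))
```

If the DNS or LDAP bucket suddenly spikes or goes quiet after the cloud cutover, you have found your degraded enabler before the help-desk tickets arrive.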
As you venture into procuring cloud offerings, remember to stay buoyant: never become too gloomy or overly excited, for that is where mistakes most often occur. Better yet, review your deployment with your internal and external experts. By utilizing your existing or new APM/NPM tool set, you can set up both the project and the subsequent troubleshooting (which will be required) for success. By placing probability on your side, you can better your chances of striking it rich with the cloud!