Post number 3 in a series of 12 from one of our provider partners, NTT.
Perhaps the primary reason the cloud was invented was for developers. Development work has always seemed to be relegated to cast-offs from the rest of the technology stack. One reason is that code wasn’t always written to be hardware independent. Back in the ’80s and ’90s, if you were developing on a Unix platform (Solaris, HP-UX, AIX), programs could make hardware-specific calls to optimize for that platform. To some extent this is still true, and it makes supporting multiple platforms very difficult for software providers because they have to maintain code for many different platforms. Predicting needs for development work is also tricky because you have to make a few key decisions on the fly:
1. How do you know what you need until you figure out what you are making?
2. Once you make something, what scenarios do you need to be ready to test?
3. What platforms are you going to support? Typically, vendors chose the ones they had the best chance of selling licenses for.
4. How do you cost-justify equipment based on an idea?
5. Do the incremental costs of supporting different hardware/software platforms outweigh the cost of acquiring hardware?
6. What do you do about troubleshooting customer issues? It may be almost impossible to keep all the hardware around for every scenario a customer may run into.
For years, people have struggled with these dilemmas. Most development and test organizations use older or repurposed assets for their development. While this helps “sweat” assets and extend their useful life, it may not prepare the development staff for the real-world scenarios customers will want to see. One of the biggest problems is load testing of the software. To test the limits of what a product can do, you may need hardware that exceeds the budget and has no useful life after the project. In many cases, software vendors make a best-effort attempt at load testing and then try to extrapolate their results. This is done with good reason: investing in a lot of infrastructure that may be used only once during load testing and then sit relatively idle the rest of the time doesn’t make much financial sense. Not only that, an idle asset can lose value quickly. For example, 3-5 years ago the highest-end servers were dual- or quad-core machines with very little virtualization integration. Now there are 10-core servers that talk directly to the hypervisor. Those dual-core servers still have some value, but it diminishes rapidly over time.
Now that we have defined some of the issues developers face with hardware, how can the cloud help eliminate them? Whether it is a small project someone needs to pilot or a multi-server web environment, public clouds provide an ideal way to allocate development resources.
- Pay-per-use – It is not uncommon for developers to have multiple code streams active at the same time: some in development, some in testing, some tied to support calls. Many organizations have development going on in different parts of the world, in different time zones. Some have experimented with follow-the-sun development, where the environment is taken down at the end of the workday and a different environment is loaded in another region of the world. In the past this was problematic because of the automation required, and there are latency issues involved as well. In a cloud environment, users can stop the environment they are using, and billing essentially stops while the machine is down (except for the cost of storing the image). This saves the cost of maintaining servers during off hours. For international operations, images can be stood up in a local region rather than traversing a trans-oceanic WAN connection.
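To make the pay-per-use savings concrete, here is a minimal back-of-the-envelope sketch. All rates and hours are made-up assumptions for illustration, not NTT or any provider's actual pricing; the model simply bills compute only while running and a small storage fee while stopped:

```python
# Hypothetical figures for illustration only -- not real provider pricing.
HOURLY_RATE = 0.50        # $/hour while the instance is running
STORAGE_RATE = 0.01       # $/hour to keep the stopped image on disk
DAYS_PER_MONTH = 30

def monthly_cost(running_hours_per_day: float) -> float:
    """Cost of an instance billed only while running, plus image storage
    for the hours it sits stopped."""
    running = running_hours_per_day * DAYS_PER_MONTH * HOURLY_RATE
    stopped_hours = (24 - running_hours_per_day) * DAYS_PER_MONTH
    return running + stopped_hours * STORAGE_RATE

always_on = monthly_cost(24)        # never stopped
workday_only = monthly_cost(10)     # stopped outside a 10-hour workday

print(f"Always on:         ${always_on:.2f}/month")
print(f"Stopped off-hours: ${workday_only:.2f}/month")
print(f"Savings:           ${always_on - workday_only:.2f}/month")
```

Under these assumed rates, stopping the environment outside a ten-hour workday cuts the monthly bill by more than half, which is the core of the pay-per-use argument above.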
- Speed of provisioning – Servers for specific tests can be provisioned faster, which helps with tight development timeframes. A typical development shop has several stages of development running at the same time, with several developers working on individual features that need to come together in the final product. With a cloud environment, multiple codebases can be loaded simultaneously, giving each developer an individualized instance of the program. Some environments also support rolling back an instance: instead of reloading everything to run a test, a developer can simply roll back to a snapshot taken before testing started. Load tests can likewise be spun up quickly from cloned images, shortening test cycles.
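The snapshot-and-rollback workflow can be sketched in miniature. This toy class is purely illustrative (the names and state model are hypothetical; real clouds expose this through their image and snapshot APIs), but it captures the idea of restoring a pre-test state rather than rebuilding the environment:

```python
import copy

class DevEnvironment:
    """Toy model of a dev instance with snapshot/rollback, mimicking the
    workflow cloud snapshot APIs provide (names are illustrative)."""

    def __init__(self):
        self.state = {"packages": [], "config": {}}
        self._snapshots = {}

    def snapshot(self, name: str) -> None:
        # Capture a deep copy of the current state, like a disk snapshot.
        self._snapshots[name] = copy.deepcopy(self.state)

    def rollback(self, name: str) -> None:
        # Restore the saved state instead of rebuilding from scratch.
        self.state = copy.deepcopy(self._snapshots[name])

env = DevEnvironment()
env.snapshot("before-test")
env.state["packages"].append("experimental-build")  # a test run mutates the env
env.rollback("before-test")                         # back to a clean slate
```

The deep copies matter: a shallow snapshot would share the mutable package list with the live state, so the "rollback" would silently keep the test's changes.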
- Load and integration testing – This is perhaps the most difficult scenario to plan for when developing software, because you are exploring the outer limits of what your software can do. Unfortunately (or fortunately, depending on whom you ask), your customers want to know how far they can push the software and what degradation to expect as they approach those limits. The reasoning is obvious: they want to know how much the product scales. Cloud bursting for these tests gives a more realistic view of scenarios that do not happen very often. We have talked about costs a lot already, but this is definitely an area where much of the cost of acquiring and maintaining systems can be eliminated.
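As a small illustration of the load-testing idea, this sketch ramps up concurrency against a stand-in request handler. The handler is a placeholder (in a real cloud-burst test it would be requests against cloned service instances), but the ramp-up structure is the same:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(i: int) -> bool:
    """Stand-in for a real request to the system under test; in a cloud
    burst scenario this would call out to cloned service instances."""
    return (i * i) % 7 >= 0  # trivial work; always succeeds here

def load_test(total_requests: int, workers: int) -> int:
    """Fire total_requests using `workers` concurrent clients and
    return the number of successful responses."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(handle_request, range(total_requests))
    return sum(results)

# Ramp up concurrency the way a burst test would, watching for the
# point where the success rate or latency starts to degrade.
for workers in (1, 10, 50):
    ok = load_test(1000, workers)
    print(f"{workers:>3} workers: {ok}/1000 succeeded")
```

In practice the interesting output is where the numbers stop being flat: the concurrency level at which responses fail or slow down is the scale limit customers are asking about.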
- Support – Many support organizations have to support multiple versions of a product running on different operating systems, and the support matrix can be cumbersome. Cloud environments provide an excellent way to troubleshoot problems: if the customer is running their software in your cloud environment, that environment can easily be cloned for troubleshooting. This enables faster isolation of problems, speeds up resolution of trouble tickets, and leads to greater end-user satisfaction.
- Allocation of costs – Many promising development ideas and projects never get off the ground because no product sponsors are willing to share the cost of the infrastructure; this is one reason many development groups inherited cast-off hardware from production. Cloud environments do not require the upfront capital costs of traditional hardware purchases and behave more like an operating expense, one that can easily be terminated if a project does not come to fruition. If a project involved capital costs that still need to be depreciated, killing an unsuccessful idea becomes much harder.
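To illustrate the depreciation point, here is a hedged back-of-the-envelope comparison (all dollar figures and the straight-line schedule are made-up assumptions) between hardware that must be depreciated and cloud capacity that can simply be cancelled when a project dies:

```python
# Hypothetical figures for illustration only.
HARDWARE_COST = 60_000       # upfront capital purchase
DEPRECIATION_YEARS = 5       # straight-line depreciation schedule
CLOUD_COST_PER_MONTH = 1_500

def capex_on_books_if_killed(months_in: int) -> float:
    """Undepreciated book value remaining when a project is cancelled
    `months_in` months after the hardware purchase."""
    monthly_dep = HARDWARE_COST / (DEPRECIATION_YEARS * 12)
    return max(HARDWARE_COST - monthly_dep * months_in, 0.0)

def opex_spent_if_killed(months_in: int) -> float:
    """Total cloud spend to the same point; nothing is left to write off."""
    return float(CLOUD_COST_PER_MONTH * months_in)

months = 6  # project cancelled after half a year
print(f"Cloud spend to date:     ${opex_spent_if_killed(months):,.0f}")
print(f"Hardware still on books: ${capex_on_books_if_killed(months):,.0f}")
```

With these assumed numbers, cancelling the cloud project after six months costs $9,000 total, while the hardware purchase leaves $54,000 of undepreciated value on the books, which is exactly why killing a capitalized project is the harder conversation.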
Public cloud environments have a lot to offer enterprise development, and in the long run they can prove the better option for building development environments.
Next Post – Moving Enterprises to a Public or Hybrid Cloud (Part 4 of 12) – Allocating Costs
Contact StrataCore to learn more about NTT America Cloud services
(206) 686-3211 or stratacore.com
About the author: Ladd Wimmer
Ladd Wimmer is a valuable member of the NTT Communications team. He has over 15 years of experience implementing, architecting, and supporting enterprise server, storage, and virtualization solutions in a variety of IT computing and service provider environments. He worked as a Systems Engineer/Solution Architect in the Data Center Solutions practice of Dimension Data, most recently serving as technical lead for the rollout of Cisco’s UCS and VCE vBlock platforms across the Dimension Data customer base. Ladd has also run two IBM partner lab environments and worked for an early SaaS provider that created lab environments for Sales, QA testing, and Training.