Moving Enterprises to the Public or Hybrid Cloud (Part 1 of 12)


This series of blog posts from one of our provider partners, NTT, will look at some of the reasons large enterprises should be thinking about the cloud. For small companies, using cloud service providers may be the only way to preserve cash while getting an idea to market. It is more difficult for large organizations to go this route, for several reasons:

1. Large organizations typically have already invested in infrastructure so there is a sunk cost that they must absorb
2. Larger organizations may have stricter governance policies that they have to adhere to
3. There may be larger groups of workers and expertise in the organization, yet the technical resources to drive a move may not be readily available
4. Politics (need I say more?)
5. Layers of approval to get something done
6. Established policies

I could go on about the many reasons why it is more difficult for large organizations to use cloud computing, but that is not my intent. My intent is to explore the places where investing in cloud computing may help large organizations. Rather than cover everything in one post, I intend to split the series up, so if one of the topics does not apply to what you are looking to do, you can skip it. The main topics I will cover in the posts are as follows:

- A brief history and cloud options
- Global Expansion
- Developmental Work
- Allocating Costs of Infrastructure across Multiple Project Sponsors
- Projects with Tight Time Frames
- Moving, Upgrading, or Consolidating Datacenters
- Mergers/Acquisitions/Spinoffs
- Space Constraints
- Big Data Challenges
- Reducing Software Costs
- Disaster Recovery

A Brief History
Cloud is not a new technology. An article on Wikipedia states, “The underlying concept of cloud computing dates back to the 1960s, when John McCarthy opined, ‘computation may someday be organized as a public utility.’” Almost all the modern-day characteristics of cloud computing (elastic provision, provided as a utility, online, illusion of infinite supply), the comparison to the electricity industry, and the use of public, private, government, and community forms were thoroughly explored in Douglas Parkhill’s 1966 book, The Challenge of the Computer Utility. Other scholars have shown that cloud computing’s roots go all the way back to the 1950s, when scientist Herb Grosch (the author of Grosch’s law) postulated that the entire world would operate on dumb terminals powered by about 15 large data centers.

Back when computing first started and mainframes were the primary computing platform – and were very expensive – organizations paid based on immediate need. Time-sharing was established on the mainframe, and people paid for what they used. The advent of microcomputers moved people toward a distributed environment, but as computing and networking power increased, many of these distributed servers became underutilized or obsolete. In the late 90s, VMware took a page out of the mainframe handbook and carved up computer resources so servers could be better utilized. The next logical step in the evolution was for people to pool their resources to drive down the cost of infrastructure. This is what is driving cloud infrastructure today.

Cloud Options
Most people who have been paying attention to the IT world have heard of the aaS (as-a-Service) offerings. The main three are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Whether an enterprise recognizes it or not, it has probably already bought into one or more of these options. For specific definitions of each one, you can check the Wikipedia article on cloud computing.

Most organizations have purchased, and are using, some form of software that they access remotely to run their business. Salesforce.com is probably the best known, but there are hundreds of companies that provide software this way. For example, today alone I have used SaaS offerings such as Gmail, TurboTax, Facebook, and online banking. Customers are very comfortable with these services, and in the future many of the software packages we now load on our computers may exist in someone else’s data center.

When people talk about clouds, they are generally referring to where they run the company’s assets. That brings in the discussion around private, public, and hybrid clouds. Many large organizations are looking at their existing infrastructure and concluding that it wouldn’t be too much of a stretch to create a private cloud. They may already have the infrastructure and space in place, and typically lack only the cloud automation and chargeback software needed to provide self-service portals to their end users.

Public cloud is somewhat of an extension of traditional hosting environments. For years, organizations large and small have been willing to take their own assets and move them to hosting facilities. This makes a lot of sense, because the cost of creating and maintaining the facility gets spread across multiple companies. Instead of each organization having to buy its own UPS and backup generators, and provide power and cooling to the room, customers have been able to pass that management on to a specialist. Many companies use DR facilities from SunGard or IBM in this way. Public cloud takes that concept and extends it, providing not just space and power but also networking, servers, storage, security, and a host of other features.

Potentially the most beneficial of the cloud offerings for large businesses is the hybrid cloud. Large organizations already have some amount of sunk cost built into existing infrastructure that still has value. Hybrid cloud gives them the ability to augment that capacity without some of the management headaches, and additional speed and flexibility can be gained by moving workloads to the cloud.

Next Post: Moving Enterprises to the Public or Hybrid Cloud (Part 2 of 12) – Global Expansion


Please contact StrataCore to learn more about NTT America Cloud services:
(206) 686-3211 or stratacore.com


About the author: Ladd Wimmer

Ladd Wimmer is a valuable member of the NTT Communications team. He has over 15 years of experience implementing, architecting, and supporting enterprise server, storage, and virtualization solutions in a variety of IT computing and service provider environments. He worked as a Systems Engineer/Solution Architect in the Data Center Solutions practice of Dimension Data, most recently serving as technical lead for the rollout of Cisco’s UCS and VCE Vblock platforms across the Dimension Data customer base. Ladd has also run two IBM partner lab environments and worked for an early SaaS provider that created lab environments for sales, QA testing, and training.

