Quadra

Connecting Technology and Business.

Get The Most Out Of Your Virtualization Implementations

Build A Business Case

Virtualization is no different from any other technology initiative: IT needs executive support—and funding—before the first piece of hardware is touched. This is especially critical given the intangible nature of virtualization initiatives. "A business case will help gain support for the initiative," says a virtualization consultant. "It will force the IT department to consider all of its elements to justify its time and expense. The department should be able to outline the initiative's key benefits, its efficiency improvements, and its hard costs."

Focus On Tertiary Savings

While a simplified infrastructure can cut help and support costs and even reduce implementation timelines from months or weeks to hours or minutes, the less obvious opportunities are often the strongest selling points for non-technical C-level sponsors. "Find out what you're spending on energy, not only for server power but for server cooling," says an expert. Virtualization can deliver tremendous savings in this area. Virtualizing storage infrastructure can also drive potentially huge future savings by replacing conventional "islands of storage" with SANs or other shared architectures. The advice: calculate your current spending on hardware attrition and run the numbers through an established ROI calculator.
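
To make the business case concrete, the kind of arithmetic an ROI calculator performs can be sketched in a few lines. The figures below (server counts, power draw, energy price, attrition rate, project cost) are illustrative assumptions, not benchmarks; substitute your own measurements.

```python
# Back-of-the-envelope virtualization savings estimate.
# All figures are illustrative assumptions, not benchmarks.

servers_before = 100                # physical servers today
consolidation_ratio = 10            # VMs per host after virtualization
watts_per_server = 500              # average electrical draw per server
cooling_overhead = 0.8              # cooling watts per IT watt (site-specific)
kwh_cost = 0.10                     # USD per kWh
attrition_cost_per_server = 4000    # average hardware refresh cost
attrition_rate = 0.25               # fraction of the fleet replaced each year

hosts_after = -(-servers_before // consolidation_ratio)   # ceiling division
servers_removed = servers_before - hosts_after

annual_energy_savings = (
    servers_removed * watts_per_server * (1 + cooling_overhead)
    * 24 * 365 / 1000 * kwh_cost
)
annual_attrition_savings = servers_removed * attrition_cost_per_server * attrition_rate

project_cost = 250_000              # licenses, shared storage, services (assumed)
payback_years = project_cost / (annual_energy_savings + annual_attrition_savings)

print(f"Annual energy + cooling savings:   ${annual_energy_savings:,.0f}")
print(f"Annual hardware attrition savings: ${annual_attrition_savings:,.0f}")
print(f"Simple payback: {payback_years:.1f} years")
```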

Study Your Software Vendors

When you shift from physical to virtual infrastructure, the licensing rules shift as well. Costs can fluctuate uncontrollably if IT doesn't invest enough time in due diligence. "Before making any decisions about which servers to virtualize, it's important to understand each software vendor's support policies," says a solutions manager for virtualization. Is the software supported to run in a virtual machine? Is it supported to run on specific platforms? It's also critical to determine each software vendor's licensing policies. Only after these questions have been answered can IT begin to narrow down specific choices for virtualization tools and implementation strategies.

IT can't afford to ignore virtualization's impact on hardware, software licensing, and maintenance and support costs. IT leaders must ask a lot of questions, including what's required, how many licenses are needed, and how much it will cost, and then "outline the real costs to the organization". Are upgrades needed? Which products and services will lower costs and improve productivity? How much is needed to run them? Can those costs be cut further? What benefits will be realized? The top benefits could include greater employee productivity, improved access to information, and more robust data security.
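
Because licensing models differ so much between vendors, a quick side-by-side estimate is often the fastest way to see the cost impact. The sketch below compares a hypothetical per-VM price against a hypothetical per-socket price; both numbers and the fleet sizes are invented for illustration, not actual vendor terms.

```python
# Compare two common licensing models when consolidating onto virtual hosts.
# Prices and counts are hypothetical; real vendor terms vary widely.

vms = 120                           # virtual machines running the product
hosts = 12                          # physical hosts in the cluster
sockets_per_host = 2

price_per_vm_license = 1000         # assumed per-instance / per-VM price
price_per_socket_license = 4500     # assumed per-socket price (unlimited VMs on the host)

per_vm_total = vms * price_per_vm_license
per_socket_total = hosts * sockets_per_host * price_per_socket_license

print(f"Per-VM licensing:     ${per_vm_total:,}")
print(f"Per-socket licensing: ${per_socket_total:,}")
print("Cheaper model:", "per-socket" if per_socket_total < per_vm_total else "per-VM")
```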

Identify Potential Bottlenecks

Virtualization's promise of increased performance doesn't come without certain costs. Additional abstraction layers can complicate storage performance, and ensuring that your virtual platform is operating at peak performance requires new monitoring practices that IT needs to learn. Resource bottlenecks emerge quickly in virtualized environments where proper measures aren't taken. Beyond the initial implementation, longer-term planning should focus on optimizing what's already in place.
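
One simple early-warning check is the ratio of virtual CPUs allocated to physical cores available, a common proxy for CPU contention. The host names, core counts, and 4:1 threshold in the sketch below are illustrative assumptions, not vendor guidance.

```python
# Flag simple CPU oversubscription as one bottleneck indicator.
# The 4:1 threshold is an illustrative rule of thumb, not a vendor recommendation.

hosts = [
    # (name, physical cores, vCPUs allocated to VMs on that host)
    ("esx-01", 32, 96),
    ("esx-02", 32, 160),
    ("esx-03", 24, 72),
]

MAX_VCPU_PER_CORE = 4.0

for name, cores, vcpus in hosts:
    ratio = vcpus / cores
    status = "OK" if ratio <= MAX_VCPU_PER_CORE else "REVIEW: likely CPU contention"
    print(f"{name}: {ratio:.1f} vCPU per core -> {status}")
```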

Go Beyond The Hardware

Virtualization involves far more than server and storage infrastructure; the hardware is only the beginning. Establishing a shared, virtualized storage infrastructure creates the opportunity to achieve previously unheard-of service levels. A major benefit of virtualizing is the ability to move away from recurring hardware spending and instead invest in your existing architecture. Virtualization is limited only by the use cases you can imagine: once your servers and workstations are virtualized, consider application virtualization.

Back Up Your Backup Environment

It's all too easy to leave backup and disaster recovery plans behind when planning a virtualization initiative. Revisiting backup/restore architecture while virtualizing can be a cost-effective—and risk-reducing—step. Many vendors offer enhanced backup products for virtual infrastructure, and some price by socket or host, reducing costs considerably. If your business has a long-term maintenance contract, consider reviewing it before changing your backup architecture.

The Economics of Data Center Operations

The economics of operating a data center encompass many items that factor into the total cost of ownership.

1. Resiliency: Whether building a data center or evaluating provider facilities, cost is derived from the level of redundant infrastructure built into it. The Uptime Institute data center tiers describe criteria to differentiate four classifications of site infrastructure topology based on increasing levels of redundant capacity components and distribution paths.

2. Down Time: The historical cost model for operating IT effectively has been the cost of downtime. The reliability a business requires is a key ingredient in how the data center should be designed and what its infrastructure requirements are. The cost of downtime differs drastically among different types of businesses, and facility design should reflect as much. The amount of risk a business is willing to assume in maintaining the uptime of its IT has a large impact on the cost of a data center.

3. Staffing: Staffing is an often overlooked or underestimated factor in determining the cost of data center operations. In addition to IT staff, facilities staff provision and maintain the electrical and mechanical systems that keep the data center reliable. Security staffing requirements vary with the size of the data center and the needs of the business, but often call for on-site personnel 24 hours a day, 365 days a year. If you are building a company data center, does the business have the experience to design, build, and operate it?

4. Financial considerations:

a. Site Selection: If you have the luxury of selecting a data center site anywhere in the U.S., incorporate local utility rates and tax incentives into the overall cost.

b. Cost Segregation: Research the use of audit estimating techniques to segregate or allocate costs to individual components of property (e.g., land, land improvements, buildings, equipment, furniture and fixtures, etc.).

c. Capital Recovery Factor: When evaluating the true capital cost of a data center, look at the capital recovery factor, the ratio of a constant annuity to the present value of receiving that annuity for a given length of time (a worked sketch follows this list).

d. Internal Rate of Return (IRR): What is the estimated IRR for the data center build project? IRR is an indicator that is commonly used to evaluate the desirability of investments or projects.

5. Timing: Consider the economics of technological obsolescence if building a data center. Weigh the costs of alignment with business and IT strategies against the risk of obtaining additional funding to increase power and cooling capacity to accommodate higher IT densities down the road.

6. Vertical Scalability: Scale is top of mind throughout most aspects of IT, but scalability in the data center carries a different connotation and a higher price tag if not considered properly. Vertical scalability means cloud-like elasticity built into the data center's infrastructure and available floor space: the ability to turn up the dial on power and cooling densities without disrupting the business. The gains of turning up that dial equate to agility in operations, adaptability to changing business needs, and future cost avoidance in provisioning additional power and cooling to match the increasing requirements of IT.
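
As a rough illustration of item 4c, the capital recovery factor can be computed directly from the interest rate and the recovery period, and then used to annualize a build cost. The build cost, rate, and lifetime below are assumptions chosen only to show the mechanics.

```python
# Capital recovery factor: the constant annual payment that repays one unit of
# capital over n years at interest rate i. Figures below are illustrative.

def capital_recovery_factor(i: float, n: int) -> float:
    """CRF = i * (1 + i)**n / ((1 + i)**n - 1)"""
    growth = (1 + i) ** n
    return i * growth / (growth - 1)

capital_cost = 20_000_000     # assumed data center build cost (USD)
rate = 0.08                   # assumed cost of capital
years = 15                    # assumed recovery period

crf = capital_recovery_factor(rate, years)
annualized = capital_cost * crf
print(f"CRF = {crf:.4f}; annualized capital cost = ${annualized:,.0f} per year")
```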

 

An implicit value is derived from a data center strategy that brings together operational and financial perspectives. The misconception that going high-density in the rack equates to higher cost is usually not borne out by what efficient infrastructure is capable of: a modern data center is equipped to handle and scale with high-density servers and will ultimately save money through the power and cooling efficiencies gained. Examine all aspects of the economics of data center operations in order to understand the implications of risk assumption and the true costs involved.

Data Protection in the Cloud

Primary Data Protection

Primary data is data that supports online processing. Primary data can be protected using a single technology, or by combining multiple technologies. Some common methods include the levels of RAID, multiple copies, replication, snap copies, and continuous data protection (CDP).

Primary data protection within the mass market cloud is usually left up to the user. It is rare to find the methods listed above in mass market clouds today because of the complexity and cost of these technologies. A few cloud storage solutions protect primary data by maintaining multiple copies of the data within the cloud on non-RAID-protected storage in order to keep costs down.

Primary data protection in the enterprise cloud should resemble an in-house enterprise solution. Robust technologies like snap copies and replication should be available when a business impact analysis (BIA) of the solution requires it. APIs for manipulating the environment are critical in this area so that the data protection method can be tightly coupled with the application.
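
As a rough illustration of how tightly an application can be coupled to the provider's data protection, the sketch below quiesces an application, requests a snapshot through a provider API, and resumes. The endpoint, payload fields, response shape, and token are hypothetical; real provider APIs will differ.

```python
# Sketch: couple application state to a point-in-time snapshot request.
# The API endpoint, payload, and token below are hypothetical placeholders.

import requests

API = "https://cloud.example.com/v1"     # hypothetical provider endpoint
TOKEN = "example-api-token"              # provider-issued credential (placeholder)

def snapshot_volume(volume_id: str, label: str) -> str:
    """Request a point-in-time snapshot and return its ID (assumed response shape)."""
    resp = requests.post(
        f"{API}/volumes/{volume_id}/snapshots",
        json={"label": label},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["snapshot_id"]

# Typical coupling with the application: quiesce writes, snapshot, resume.
# db.quiesce()                                        # placeholder application hook
# snap_id = snapshot_volume("vol-001", "pre-upgrade")
# db.resume()
```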

The main difference between in-house enterprise solutions and storage in an enterprise cloud is how the solution is bundled. To maintain the cloud experience of deployment on demand, options must be packaged together so the service can be provisioned automatically. The result is a pick list of bundled options that typically meets a wide variety of requirements. There may not be an exact match for a customer's requirements in the frequency of snapshots, replication, and the like; nonetheless, most users will sacrifice some flexibility to realize the other benefits of operating within an enterprise cloud.


Secondary Data Protection

Secondary data consists of historical copies of primary data in the form of backups. This type of data protection is meant to mitigate data corruption, recover deleted or overwritten data, and retain data over the long term for business or regulatory requirements. Typical solutions usually include backup software and several types of storage media. Data deduplication might also be used, but this can raise issues in a multi-tenant environment regarding the segregation of data.

There are solutions (commercial and public-domain) that can be added to mass market cloud storage offerings to accomplish secondary data protection, but it is rare for the mass market cloud providers to package this together with the online storage. Although the reasons vary, in some instances SLAs related to restore times and retention periods can be difficult to manage.

Whether the solution is a private or a multi-tenant cloud platform, control, visibility, and restore SLAs are critical for secondary data protection. Initiating a restore should be straightforward and should happen automatically once the request is submitted. Users should be able to count on some predictable level of restore performance (GBs restored / amount of time) and should be able to select the length of retention from a short pick list of options. Finally, users should also be able to check on the status of their backups online. Since frequency and retention determine the resources required for storing backups — and thus the cost — online status of usage and billing should be viewable by the consumer to avoid surprises at the end of the billing period.
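
A quick sanity check against a restore SLA expressed as throughput might look like the sketch below; the dataset size and the GB-per-hour rate are assumptions to be replaced with the figures in your own agreement.

```python
# Rough restore-time estimate from an SLA expressed as GB restored per hour.
# The throughput figure is an assumption; use the value in your provider's SLA.

dataset_gb = 500
restore_rate_gb_per_hour = 200     # assumed SLA throughput

hours = dataset_gb / restore_rate_gb_per_hour
print(f"Estimated restore window: {hours:.1f} hours for {dataset_gb} GB")
```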

Technical Drivers for Cloud Computing

An impressive group of technical advancements is boosting adoption of cloud computing. Each of these technologies brings advantages to IT on its own, but collectively they create a whole that is greater than the sum of its parts:

Blade servers: Blade servers have been around for several years, but as companies rely increasingly on data centers, blades have grown in importance. The ability to stack multiple servers in a small space makes cabling, management, and maintenance less cumbersome.

Virtualization: Adding virtualization to blade servers is also rewriting the rules of capacity utilization and redundancy. Because virtualization technology lets IT use server capacity more fully, IT has fewer servers to manage (and pay for). Because virtualization lets IT offload capacity in an on-demand, at-will fashion, IT can more easily set up virtualized off-site systems that support business continuity. This capability also supports hybrid cloud scenarios.

Networking technology: This, too, is evolving, with the move to Ethernet fabrics such as Cisco's Unified Computing System. With a single data-center network transport—one that can simultaneously carry IP and Fibre Channel traffic over a single connection—IT has more options. Along with sophisticated new management capabilities, it gets high performance, low latency, robust security, and lossless transmission. At a time when IT is dealing with virtualization on thousands of devices, the transition to 10Gb Ethernet (while still supporting legacy servers and applications), and the reduction of power and cooling overhead, it needs a more efficient network to simplify its workload.

Automation: One of the most complex facets of new technology is its management, and virtualization wouldn't be such a plus if IT had to shift workloads manually. Automating the daily provisioning activities associated with customer registration, multitenant segmentation, and virtual machine placement minimizes recurring costs. Also, service providers such as PAETEC can provide a multitenant infrastructure that enables them to offer service on a per-customer, per-virtual machine basis.
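
As a toy illustration of the kind of decision a provisioning engine automates, the sketch below places a new virtual machine on the host with the most free memory. Host names and capacities are invented; production placement logic weighs many more factors (CPU, storage, affinity rules, licensing).

```python
# Toy example of automated VM placement: choose the host with the most free
# memory and record the reservation. Capacities below are invented.

hosts = {"host-a": 96, "host-b": 48, "host-c": 160}   # free memory in GB

def place_vm(required_gb: int) -> str:
    """Return the host chosen for the VM and reserve its memory."""
    candidates = {h: free for h, free in hosts.items() if free >= required_gb}
    if not candidates:
        raise RuntimeError("no host has enough free capacity")
    chosen = max(candidates, key=candidates.get)
    hosts[chosen] -= required_gb
    return chosen

print(place_vm(32))   # host-c (160 GB free)
print(place_vm(64))   # host-c again (128 GB still free after the first placement)
```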

The Benefits Of Cloud Storage

The Economic Angle

A credit crunch can make it more difficult for businesses to finance the capitalized costs associated with adding more data center storage on site. Economic uncertainty can push businesses to keep their costs variable and non-capitalized – using on-demand storage solutions – and may encourage them to consider outsourced storage in the cloud. In addition, many businesses with highly variable storage needs do not want to pay for storage that often sits unused. The latest online backup and storage services are cost-effective compared with most internal solutions, and they provide the added benefit of offsite disaster recovery. From a business point of view, the ability to access your files from anywhere, from any computer, and to ensure business continuity cost-effectively has clear advantages. Cheaper cost per GB (for the same functionality) and true site disaster recovery are key business drivers.

The flexibility of cloud storage is also very appealing. Cloud storage products should provide elasticity, with capacity that grows as a business requires it and scales back as soon as the excess capacity is no longer needed – you should pay only for what you use. A cloud storage service provider should base its pricing on how much storage capacity a business has used, how much bandwidth was used to access its data, and the value-added services performed in the cloud, such as security and deduplication. Unfortunately, many service providers offer a "low price" but fail to include basic services, so hidden fees can add up very quickly. Common hidden fees to watch for include connection fees, account maintenance charges, and data access charges. To guard against them, cloud platforms should offer clear, predictable monthly bills that allow customers to manage costs accurately.
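
A transparent bill should be reproducible from published unit rates. The sketch below estimates a monthly charge from storage, egress, and value-added services; all rates and usage figures are illustrative, not any provider's actual pricing.

```python
# Estimate a monthly cloud storage bill from published unit rates.
# Rates and usage below are illustrative, not any provider's actual pricing.

storage_gb = 2_000
egress_gb = 300
rate_storage = 0.10      # USD per GB-month stored
rate_egress = 0.08       # USD per GB transferred out
value_added = 50.00      # e.g. deduplication / security features, flat fee (assumed)

bill = storage_gb * rate_storage + egress_gb * rate_egress + value_added
print(f"Estimated monthly bill: ${bill:,.2f}")
# Compare this estimate against the invoice; large gaps often point to hidden
# fees such as connection charges, account maintenance, or data access charges.
```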

Demand Security

In terms of security, cloud-based services must be managed and operated at levels equivalent to enterprise systems. The data must be properly encrypted both in motion and at rest, the physical locations of the cloud must be secure, and the business processes must be appropriate to the data and its usage. Once those constraints are satisfied, cloud storage is no more or less secure than physical storage, and the chance of data leakage is no higher than with physical on-premises storage. Although cloud computing standards are still being developed, existing standards, such as SAS 70 compliance and data center tier levels, are key indicators.

Another major issue facing cloud storage is where the customer's data is actually kept. Many cloud products do not specify where customers' data will reside, or actually market "location-less" clouds as a benefit. The physical location of a customer's data can be very important (for EU Data Protection Directive compliance, for example), and if you are using cloud storage for your disaster recovery plan or attempting to pass strict security audits, then the location of the data and the mechanisms defined to make that data accessible can be critical. If you live in a hurricane zone, for example, you wouldn't want to risk having your cloud in the same area.

Easier All Around

Cloud storage can address many challenges that physical storage doesn't:

• Customers are not dependent on a single server.

• There is no direct hardware dependency.

• Customers don't have to buy more disk space than they initially need to accommodate future data growth.

• Business continuity is provided in the event of a site disaster.

• A virtual storage container can be provisioned that is larger than the physical space available.

• Customers can drastically reduce over-provisioning in a pay-as-you-go model.

• Cloud storage allows customers to access the entire storage pool from a single point.

 

All of these benefits make the administrator's job easier with a single administrative interface and a unified view of the data storage.

Virtual Desktop Infrastructure

The virtual desktop infrastructure (VDI) brings virtualization to the end-user's computer, and is one of the fastest-growing cloud services. It is a new form of server-side virtualization in which a virtual machine on a server hosts a single virtualized desktop. VDI is a popular means of desktop virtualization as it provides a fully customized user desktop, while still maintaining the security and simplicity of centralized management. Leading vendors in this area are Microsoft, VMware, and Citrix, employing technologies such as:

PCoIP (PC-over-IP) – allows all enterprise desktops to be centrally located and managed in the data center, while providing a robust user experience for remote users.

Remote desktop protocol (RDP) – a Microsoft proprietary protocol which is an extension of the ITU-T T.128 application sharing protocol allowing a user to graphically interface with another computer.

Citrix independent computing architecture (ICA) – a Citrix proprietary protocol that is a platform-independent means of exchanging data between servers and clients.

Network performance is a key factor here. As described in a posting on the Citrix Blog, the table below estimates the amount of bandwidth that might be used by XenDesktop users. It doesn't require many users to hit tens of Mbps.

 

 

Activity                  XenDesktop Bandwidth
Office                    43 Kbps
Internet                  85 Kbps
Printing                  573 Kbps
Flash Video               174 Kbps
Standard WMV video        464 Kbps
High Definition WMV       1812 Kbps
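
To see how quickly those per-user figures add up, the sketch below multiplies the table's per-activity bandwidth by an assumed mix of concurrent users; the mix itself is invented for illustration.

```python
# Rough aggregate bandwidth for a mixed XenDesktop user population, using the
# per-activity figures from the table above. The user mix is an assumption.

per_activity_kbps = {
    "Office": 43,
    "Internet": 85,
    "Printing": 573,
    "Flash Video": 174,
    "Standard WMV video": 464,
    "High Definition WMV": 1812,
}

concurrent_users = {          # assumed mix of what users are doing right now
    "Office": 60,
    "Internet": 25,
    "Flash Video": 10,
    "High Definition WMV": 5,
}

total_kbps = sum(per_activity_kbps[a] * n for a, n in concurrent_users.items())
print(f"Estimated aggregate load: {total_kbps / 1000:.1f} Mbps for "
      f"{sum(concurrent_users.values())} users")
```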

Security in Cloud Storage

Security and virtualization are often viewed as opposing forces. After all, virtualization frees applications from physical hardware and network boundaries. Security, on the other hand, is all about establishing boundaries. Enterprises need to consider security during the initial architecture design of a virtualized environment.

Data security in the mass market cloud, whether multi-tenant or private, is often based on trust. That trust is usually in the hypervisor. As multiple virtual machines share physical logical unit numbers (LUNs), CPUs, and memory, it is up to the hypervisor to ensure data is not corrupted or accessed by the wrong virtual machine. This is the same fundamental challenge that clustered server environments have faced for years. Any physical server that might need to take over processing needs to have access to the data/application/operating system. This type of configuration can be further complicated because of recent advances in backup technologies and processes. For example, LUNs might also need to be presented to a common backup server for off-host backups.

Businesses need to secure data in the enterprise cloud in three ways.

  • The first involves securing the hypervisor. The primary goal is to minimize the possibility of the hypervisor being exploited and to prevent any one virtual machine from negatively impacting another. Enterprises also need to secure any other server that may have access to LUNs, such as an off-host backup server.
  • The second involves the data path. Enterprises need to grant access paths only to the physical servers that must have access to maintain the desired functionality. This can be accomplished through SAN zoning (aided by N-port ID virtualization, NPIV), LUN masking, access lists, and permission configurations.
  • The third is encryption of data in flight and at rest. The options available may depend on the data access methods used. For data under the strictest compliance requirements, the consumer must be the sole owner of the encryption keys, which usually means the data is encrypted before it leaves the operating system (see the sketch below).
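
A minimal sketch of that last point, client-side encryption before upload so the consumer remains the sole key owner, follows. It uses the third-party Python "cryptography" package's Fernet recipe; the sample data is illustrative, and key storage is deliberately left out of scope.

```python
# Encrypt data before it leaves the operating system so the consumer keeps
# sole ownership of the key. Requires the "cryptography" package.

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this key outside the provider's reach
cipher = Fernet(key)

plaintext = b"employee,salary\nalice,100000\n"   # data produced by the application
ciphertext = cipher.encrypt(plaintext)

# Only the ciphertext ever leaves the operating system / gets uploaded.
# upload_to_cloud("payroll.enc", ciphertext)     # hypothetical upload call

# Restores decrypt locally with the same key:
assert cipher.decrypt(ciphertext) == plaintext
```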

One other area that enterprise clouds should address is how data is handled on break/fix drives and reused infrastructure. There should be well-defined break/fix procedures so that data is not accidentally compromised. When a customer vacates a service, a data erase certificate should be an option, showing that the data has been deleted using an industry-standard data erase algorithm.

Where is your Data Center?

Despite the term "cloud," which tends to downplay factors related to physical location, and the rise in concern about cyber-attack, which can be perpetrated from anywhere in the world, the place where data is housed remains of paramount concern for IT professionals.

Proximity to water and the electrical grid is critical. Political and economic stability is also key to sustaining site and data security. In addition, there is the long-term need for highly skilled personnel to tend to the site, providing maintenance and upkeep of the facility and computing equipment as well as of the data itself.

DATA CENTER RISK INDEX

Rank   Index Score   Country
1      100           United States
2      91            Canada
3      86            Germany
4      85            Hong Kong
5      82            United Kingdom
6      81            Sweden
7      80            Qatar
8      78            South Africa
9      76            France
10     73            Australia
11     71            Singapore
12     70            Brazil
13     67            Netherlands
14     64            Spain
15     62            Russia
16     61            Poland
17     60            Ireland
18     56            China
19     54            Japan
20     51            India

 

Ranked from most attractive (highest score) to least attractive (lowest score), countries received a score based on 11 factors: energy cost, international bandwidth, ease of doing business, corporation tax, cost of labor, political stability, sustainability, natural disasters, GDP per capita, inflation, and water availability. Source: hurleypalmerflatt and Cushman & Wakefield.

 

According to this Data Center Risk Index, the US is the most attractive location among 20 countries ranked according to the risks likely to affect successful data center operations. Energy is relatively inexpensive in the US and the country has an excellent reputation as a place to do business. Despite high corporation tax, the US is expected to remain the top choice for companies seeking a low risk location. Canada, which ranked second, scored top in political stability and water availability, fourth in sustainability and corporation tax, and fifth in ease of doing business, making it a highly desirable country in which to locate data centers. Advances in distributed computing and network technologies have made it possible for companies to venture farther afield into rural areas that optimize site selection based on cooling, power, bandwidth, and lower risk profiles. Such centers pump and recirculate cool groundwater in lieu of using chillers and use free outside air to cool tens of megawatts of server heat.

How Much Power Is Consumed At A Data Center?

Central to the attraction of data centres is their ability to deliver the highest possible power efficiency. This is measured as a ratio, known as power usage effectiveness (PUE), of the total electricity delivered to the facility (including air conditioning, lighting, and other overhead) to the electricity that actually powers the IT equipment. A PUE of 1.0 is ideal.
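
A quick worked example of the PUE calculation, using illustrative meter readings rather than real facility data:

```python
# PUE = total facility power / IT equipment power. Readings are illustrative.

it_load_kw = 800          # servers, storage, network gear
cooling_kw = 280
lighting_and_other_kw = 40

total_kw = it_load_kw + cooling_kw + lighting_and_other_kw
pue = total_kw / it_load_kw
print(f"PUE = {pue:.2f}")    # 1.40 here; 1.0 would mean zero facility overhead
```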

The top consumer of power within a data center is cooling. A computer server's CPU is the site of a huge amount of electrical activity, which means it generates a lot of heat and requires constant cooling. Too much heat is deadly for computer circuits, whose pathways, measured in nanometres, can degrade due to melting. Data can also be corrupted, since electronic files ultimately consist of physical charge states that are as subject to the effects of heat and cold as any other physical object. Temperatures around a CPU can reach 120 degrees. One solution is to handle hot air and cold air separately, so the hot air never gets a chance to raise the temperature of the cold air. Through a specially designed venting and cooling system, the hot air can be channelled out and continuously replaced with chilled air.

While individual data centres are achieving PUEs of 1.07 through efforts of this nature, the overall number of data centres continues to climb, tending to mitigate the overall beneficial effect of such efficiencies. According to a study on data center electricity use, by Jonathan G. Koomey, consulting professor at Stanford University, electricity used by data centres worldwide increased by about 56% from 2005 to 2010 (while it doubled from 2000 to 2005). In the US it increased by about 36%. The US hosts approximately 40% of the world's data center servers. It is estimated that US server farms, as data centres are also called, consume between 1.7% and 2.2% of the national power supply.

For example, Apple's new USD 1 billion "iDataCenter" in North Carolina is estimated to require as much as 100 MW of power, equivalent to that required to power 80,000 US homes or 250,000 European Union homes. Greenpeace's 2010 Make It Green report estimates that the global demand for electricity from data centres was on the order of 330 billion kWh in 2007, close to the equivalent of the entire electricity demand of the UK (345 billion kWh). However, this demand is projected to triple or quadruple by 2020.
