Connecting Technology and Business.

Gartner's prediction on Cloud email and Collaboration Services

By the end of 2014, penetration of cloud email and collaboration services (CECS) will stand at 10 percent and will have passed the "tipping point" with broad-scale adoption under way, according to Gartner, Inc.

Although Gartner believes that the time is right for some enterprises — particularly smaller ones and those in industries with long-underserved populations such as retail, hospitality and manufacturing — to move at least some users to CECS during the next two years, analysts warned that readiness varies by service provider and urged caution.


"Ultimately, we expect CECS to become the dominant provisioning model for the next generation of communication and collaboration technologies used in enterprises," said Tom Austin, vice president and Gartner fellow. "However, it is not dominant today, it will not be the only model, and it will take a decade or more for the transition to play out. Right now, the list of reasons to move to CECS is long, as is the list of reasons to avoid it."
Consequently, Gartner is lowering its short-term projected adoption rate for CECS. Analysts predict that most enterprises will not begin the move to CECS until 2014, when growth in the market will take off before leveling off in 2020 as penetration exceeds 55 percent.
Gartner has pushed out the point at which it believes that 10 percent of the enterprise market will use cloud-based or software-as-a-service (SaaS) email from year-end 2012 to year-end 2014. Analysts said organizations are moving more slowly than anticipated for three primary reasons.
"The first is asset inertia. Organizations seek to extract maximum value from their investments in email and switching early can be like trading in a 2-year-old, low-mileage automobile. Secondly, senior IT managers are much more focused on strategic initiatives that help them to grow or transform their enterprise's business and moving to cloud-based or SaaS email services is generally viewed as a cost-saving move rather than a strategic initiative. Finally, the practical realities of the vendors' CECS offerings, when examined up close, are sometimes less compelling than the glossy stories they tell," Mr. Austin said.
While most enterprises that have adopted CECS appear to have moved everyone to CECS, closer investigation reveals that they often retain small, dedicated, on-premises systems to maintain greater control over the content created and consumed by C-level executives — whose communications are almost always subject to legal and regulatory scrutiny at semi-regular intervals.
"There are several reasons why enterprises might not want to be ahead of the curve on CECS, not least the perception that early adopters pay a premium in terms of acquisition cost, and that by waiting the organization can avoid paying an 'early adopter premium,'" said Mr. Austin. "However, cloud-based collaboration services appear to be forward priced and, while we do expect the cost of CECS to follow a cost-learning curve, the motive for much of the investment in CECS appears to be cost reduction. Thus, if CECS otherwise makes sense for an enterprise, it would be far better off proceeding now, while requiring that the CECS supplier guarantees to continue to reduce prices as prices in general fall in the market."

Cloud Computing - the tax angle

Tax issues up in the air

Most domestic tax laws and treaty provisions that apply to cross-border transactions were designed for bricks-and-mortar companies. Many rules apply based on physical location to determine whether a payment is from a domestic or foreign source. The source of income can affect whether foreign tax credits are available to offset foreign taxes paid. It can also affect whether withholding taxes are imposed on payments received from cloud computing transactions and whether tax treaty relief is available on that income.


But device and location independence are two of the cloud’s key features. In any given cloud network, the cloud service provider may own or manage many servers, routers and other technical data storage devices off-site, possibly located across multiple systems and countries around the world. Some of the specific tax complexities that can result are as follows.
Nature of payment
The tax treatment of a cloud user’s payments for cloud computing services depends on the extent of the user’s rights to use the cloud computing software. Whether the payment is characterised as sales income, service income or royalty can dramatically affect how and where the income is taxed, and whether withholding taxes or tax treaty relief will apply.
Sales income – A computer program is sold when the transfer includes all substantial rights and burdens of ownership. Sales income is generally sourced to where the property is produced and/or to where the sale takes place (e.g., where title and benefits and burdens of ownership pass to the buyer). In the cloud computing context, all production activities are rarely conducted in one location, and so pinpointing the payment’s source may not be straightforward.
Service income – Generally, income derived from the provision of services is sourced to where the services are performed. Like sales transactions, all of the inputs that make up a cloud service offering are rarely conducted in a single jurisdiction.
Royalties – If the cloud user gains the right to exploit intellectual property, the payment is a royalty. These payments are sourced to where the intangible property is used (exploited) and, unlike payments for services, they may attract withholding taxes in the payer’s jurisdiction.
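The three characterizations above amount to a decision procedure. The sketch below is an illustrative simplification only — the function name and the two yes/no tests are invented for this example, and real characterization turns on the detailed facts and the jurisdictions involved:

```python
# Illustrative first-pass characterization of a cloud payment.
# The rules here are a simplification of the text above, not tax law.

def characterize_payment(transfers_ownership: bool,
                         grants_ip_exploitation_rights: bool) -> str:
    if transfers_ownership:
        return "sales income"    # all substantial rights and burdens pass
    if grants_ip_exploitation_rights:
        return "royalty"         # right to exploit the intellectual property
    return "service income"     # mere use of the provider's service

# A typical SaaS subscription transfers no ownership and no IP rights:
print(characterize_payment(False, False))  # service income
```

Each branch then carries the sourcing (and withholding) consequences described above, which is why the classification matters before the first invoice is paid.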
Value-added taxes
Providing cloud computing services can create VAT registration, collection and reporting obligations in the jurisdictions where the services are consumed. A number of countries offer relief for electronically delivered services or the possibility of exempting certain forms of software services in their entirety.
Permanent establishment
Cloud service users and providers need to determine whether their activities or operations are substantial enough to create a taxable presence in a foreign jurisdiction. If so, they may become liable for tax in the foreign country. Relief may be available under an applicable income tax treaty.
Transfer pricing
Cloud computing businesses face transfer pricing issues raised by how the value of the business is distributed among the intellectual property, cloud computing infrastructure, and supporting personnel. Where multiple entities combine efforts to provide a cloud computing offering to customers, the business will need to evaluate each entity's economic contribution to the effort and compensate each entity according to arm's-length principles.
Developing your tax strategy for cloud computing
Clearly, with the high degree of tax uncertainties involved and the current lack of administrative guidelines, it is vital for organisations engaged in cloud computing – whether as providers or customers – to put in place comprehensive tax strategies for undertaking cloud computing projects. 
In developing this strategy, organisations should begin by mapping out the project's system of international payments and services, including their location, direction, risk and beneficiaries. Project managers should then work with their tax teams or independent tax advisers to take a position on the nature of payments and the permanent establishment and withholding tax risks inherent in their system. From there, project managers can adjust their systems and take other mitigating steps, such as entering discussions with tax authorities or obtaining advance rulings, to put the organisation in the best tax position possible.
Organisations that do this from the outset will gain a significant long-term advantage over any competitors who fail to comprehensively address the tax risks and opportunities arising from the burgeoning new cloud computing industry.
- A KPMG whitepaper

Cloud Computing Taxonomy

The slide below gives a clearer view of what cloud technology offers today. Little explanation is needed, as the taxonomy is largely self-explanatory.

[Figure: Cloud Computing Taxonomy]

IaaS – Infrastructure as a Service
PaaS – Platform as a Service
SaaS – Software as a Service
Note: In IaaS, the Operating System is partially managed by the vendor and the rest by the customer.
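The split of responsibility between vendor and customer across the three service models can be sketched as a simple matrix. The layer names and the exact division below are assumptions for illustration — as the note above says, the OS layer under IaaS is often shared, and the split varies by vendor:

```python
# Simplified responsibility matrix for the three cloud service models.
# Layer names and the vendor/customer split are illustrative assumptions.

LAYERS = ["network", "storage", "servers", "virtualization",
          "operating_system", "middleware", "application", "data"]

VENDOR_MANAGED = {
    "IaaS": {"network", "storage", "servers", "virtualization"},
    "PaaS": {"network", "storage", "servers", "virtualization",
             "operating_system", "middleware"},
    "SaaS": set(LAYERS) - {"data"},   # customer data stays the customer's job
}

def managed_by(model: str, layer: str) -> str:
    return "vendor" if layer in VENDOR_MANAGED[model] else "customer"

# Under IaaS the OS falls to the customer in this simplified matrix,
# though in practice (per the note above) the vendor shares part of it:
print(managed_by("IaaS", "operating_system"))  # customer
print(managed_by("SaaS", "application"))       # vendor
```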

Technical Drivers for Cloud Computing

An impressive group of technical advancements is boosting adoption of cloud computing. Each of these technologies brings advantages to IT on its own, but collectively they create a whole that is greater than the sum of its parts:

Blade servers: Blade servers have been around for several years, but as companies have been relying increasingly on data centres, they have increased in importance. The ability to stack multiple servers in a small space makes cabling, management and maintenance less cumbersome.

Virtualization: Adding virtualization to blade servers is also rewriting the rules of capacity utilization and redundancy. Because virtualization technology lets IT use server capacity more fully, IT has fewer servers to manage (and pay for). Because virtualization lets IT offload capacity in an on-demand, at-will fashion, IT can more easily set up virtualized off-site systems that support business continuity. This capability also supports hybrid cloud scenarios.

Networking technology: This, too, is evolving, with the move to Ethernet fabrics such as Cisco's Unified Computing System. With a single data-center network transport — one that can simultaneously transmit IP and Fibre Channel traffic over a single connection — IT has more options. Along with sophisticated new management capabilities, it gets high performance, low latency, robust security and lossless transmission. At a time when IT is dealing with the move to virtualization on thousands of devices, the transition to 10Gb Ethernet (while still supporting legacy servers and applications), and the reduction of power and cooling overhead, it needs a more efficient network to simplify its workload.

Automation: One of the most complex facets of new technology is its management, and virtualization wouldn't be such a plus if IT had to shift workloads manually. Automating the daily provisioning activities associated with customer registration, multitenant segmentation, and virtual machine placement minimizes recurring costs. Also, service providers such as PAETEC can provide a multitenant infrastructure that enables them to offer service on a per-customer, per-virtual machine basis.

Data Protection in the Cloud

Primary Data Protection

Primary data is data that supports online processing. Primary data can be protected using a single technology, or by combining multiple technologies. Some common methods include the levels of RAID, multiple copies, replication, snap copies, and continuous data protection (CDP).

Primary data protection within the mass market cloud is usually left up to the user. It is rare to find the methods listed above in mass market clouds today because of the complexity and cost of these technologies. A few cloud storage solutions protect primary data by maintaining multiple copies of the data within the cloud on non-RAID-protected storage in order to keep costs down.

Primary data protection in the enterprise cloud should resemble an in-house enterprise solution. Robust technologies like snap copies and replication should be available when a business impact analysis (BIA) of the solution requires it. APIs for manipulating the environment are critical in this area so that the data protection method can be tightly coupled with the application.
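The point about APIs can be made concrete with a sketch. The endpoint path, field names, and client function below are entirely hypothetical — an actual enterprise cloud publishes its own snapshot API — but they show how an application could trigger protection itself, for example taking an application-consistent snapshot just before a critical batch job:

```python
# Hypothetical sketch: endpoint path and fields are invented for
# illustration. It shows data protection driven from the workload,
# which is what tight coupling via an API makes possible.

def build_snapshot_request(base_url: str, volume_id: str, token: str) -> dict:
    """Describe the HTTP call an application would issue to snapshot
    its volume before a critical job (nothing is sent here)."""
    return {
        "method": "POST",
        "url": f"{base_url}/volumes/{volume_id}/snapshots",
        "headers": {"Authorization": f"Bearer {token}",
                    "Content-Type": "application/json"},
        "body": {"consistency": "application", "retain_days": 7},
    }

req = build_snapshot_request("https://cloud.example.com/api", "vol-42", "TOKEN")
print(req["url"])  # https://cloud.example.com/api/volumes/vol-42/snapshots
```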

The main difference between in-house enterprise solutions and storage in an enterprise cloud is how the solution is bundled. To maintain the cloud experience of deployment on demand, options must be packaged together so the service can be provisioned automatically. The result is a pick list of bundled options that typically meet a wide variety of requirements. There may not be an exact match in the frequency of snap shots, replication, and the like, for a customer's requirements. Nonetheless, most users will usually sacrifice some flexibility to realize the other benefits of operating within an enterprise cloud.


Secondary Data Protection

Secondary data consists of historical copies of primary data in the form of backups. This type of data protection is meant to mitigate data corruption, recover deleted or overwritten data, and retain data over the long-term for business or regulation requirements. Typical solutions usually include backup software and several types of storage media. Data de-duplication might also be used, but this can raise issues in a multi-tenant environment regarding the segregation of data.

There are solutions (commercial and public-domain) that can be added to mass market cloud storage offerings to accomplish secondary data protection, but it is rare for the mass market cloud providers to package this together with the online storage. Although the reasons vary, in some instances SLAs related to restore times and retention periods can be difficult to manage.

Whether the solution is a private or a multi-tenant cloud platform, control, visibility, and restore SLAs are critical for secondary data protection. Initiating a restore should be straightforward and should happen automatically once the request is submitted. Users should be able to count on some predictable level of restore performance (GBs restored / amount of time) and should be able to select the length of retention from a short pick list of options. Finally, users should also be able to check on the status of their backups online. Since frequency and retention determine the resources required for storing backups — and thus the cost — online status of usage and billing should be viewable by the consumer to avoid surprises at the end of the billing period.
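A predictable restore rate makes restore-time planning a matter of simple arithmetic. The figures below are invented for illustration, but the calculation is exactly the "GBs restored / amount of time" metric described above:

```python
# Back-of-envelope restore-time estimate from a provider's stated,
# guaranteed restore rate. All figures are illustrative.

def restore_hours(dataset_gb: float, rate_gb_per_hour: float) -> float:
    return dataset_gb / rate_gb_per_hour

# Restoring a 2 TB dataset at a guaranteed 500 GB/hour:
print(restore_hours(2048, 500))  # 4.096
```

An estimate like this is what lets a user check whether a bundled restore SLA actually satisfies the recovery-time objective from their business impact analysis.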

The Benefits Of Cloud Storage

The Economic Angle

A credit crunch can make it more difficult for businesses to finance the capitalized costs associated with adding more data center storage on site. Economic uncertainty can mean businesses will have to keep their costs variable and non-capitalized – using on-demand storage solutions – and they may be encouraged to consider outsourced storage solutions in the cloud. In addition, many businesses with highly variable storage needs do not want to have to pay for storage which is often unused. The latest online backup and storage services are cost-effective compared to most internal solutions, and provide the added benefit of offsite disaster recovery.

From a business point of view, the ability to access your files from anywhere, from any computer, and to ensure business continuity cost-effectively has clear advantages. Cheaper costs per GB (for the same functionality) and true site disaster recovery are key business drivers. The flexibility of cloud storage is also very appealing to customers: cloud storage products should provide elasticity, with capacity that grows as a business requires and scales back as soon as this excess capacity is no longer needed – you should only pay for what you use.

A cloud storage service provider should base its pricing on how much storage capacity a business has used, how much bandwidth was used to access its data, and the value-added services performed in the cloud, such as security and deduplication. Unfortunately, many service providers offer a "low price" but fail to include basic services, so hidden fees can add up very quickly. Some common hidden fees to watch out for are connection fees, account maintenance charges, and data access charges. To keep providers honest about additional fees, cloud platforms should offer clear and predictable monthly bills, allowing customers to manage costs more accurately.
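The pricing dimensions just described — capacity used, bandwidth, and value-added services — can be modelled in a few lines. All the rates and service fees below are invented for illustration; a real provider publishes its own price list:

```python
# Illustrative monthly-bill model matching the pricing dimensions in
# the text: capacity, egress bandwidth, and value-added services.
# Every rate here is an assumption, not a real provider's price.

def monthly_bill(capacity_gb, egress_gb,
                 rate_per_gb=0.10, rate_per_egress_gb=0.05,
                 value_added_services=()):
    service_fees = {"encryption": 5.00, "deduplication": 8.00}
    base = capacity_gb * rate_per_gb + egress_gb * rate_per_egress_gb
    extras = sum(service_fees[s] for s in value_added_services)
    return round(base + extras, 2)

# 500 GB stored, 100 GB downloaded, encryption enabled:
print(monthly_bill(500, 100, value_added_services=("encryption",)))  # 60.0
```

A model like this is also a practical defence against the hidden fees mentioned above: if the provider's invoice cannot be reproduced from published rates, something undisclosed is being charged.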

Demand Security

In terms of security, cloud-based services must be managed and operated at levels equivalent to enterprise systems. The data must be properly encrypted both in motion and at rest, the physical locations of the cloud must be secure, and the business processes must be appropriate to the data and usage. Once those constraints are satisfied, cloud storage is no more or less secure than physical storage, and the chance of data leakage from cloud computing is no higher than that of physical on-premises storage. Although cloud computing standards are still being developed, existing standards, such as SAS 70 compliance and tier levels, are key indicators.

Another major issue facing cloud storage is where the customer's data is actually kept. Many cloud products do not specify where a customer's data will reside, or even market "location-less" clouds as a benefit. The actual physical location of a customer's data can be very important (for EU Data Protection Directive compliance, for example) and, if you are utilizing cloud storage for your disaster recovery plan or attempting to pass strict security audits, the location of the data and the mechanisms defined to make that data accessible can be critical. If you live in a hurricane zone, for example, you wouldn't want to risk that your cloud is in the same area.

Easier All Around

Cloud storage can address many challenges that physical storage doesn't:

• Customers are not dependent on a single server.

• There is no direct hardware dependency.

• Customers don't have to buy more disk space than they initially need to accommodate future data growth.

• Business continuity is provided in the event of a site disaster.

• A virtual storage container can be provisioned that is larger than the physical space available.

• Customers can drastically reduce over-provisioning in a pay-as-you-go model.

• Cloud storage allows customers to access the entire storage pool from a single point.


All of these benefits make the administrator's job easier with a single administrative interface and a unified view of the data storage.
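The bullet about provisioning a virtual container larger than the physical space available describes thin provisioning, and the mechanism is worth a minimal sketch. The class and method names below are invented for illustration: capacity is promised up front, but physical blocks are consumed only as data is actually written.

```python
# Minimal thin-provisioning sketch (illustrative names, not a real API):
# volumes are promised beyond physical capacity; allocation happens
# only on write, which is what curbs over-provisioning.

class ThinPool:
    def __init__(self, physical_gb: int):
        self.physical_gb = physical_gb
        self.allocated_gb = 0       # blocks actually consumed by writes
        self.provisioned_gb = 0     # sum of promised volume sizes

    def provision(self, size_gb: int) -> None:
        self.provisioned_gb += size_gb   # may exceed physical_gb

    def write(self, gb: int) -> None:
        if self.allocated_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted: add physical capacity")
        self.allocated_gb += gb

pool = ThinPool(physical_gb=100)
pool.provision(80)
pool.provision(80)   # 160 GB promised against 100 GB of disk
pool.write(30)
print(pool.provisioned_gb, pool.allocated_gb)  # 160 30
```

The catch, visible in the `write` guard, is that the pool can genuinely run out; thin provisioning trades up-front purchase for the operational duty of watching actual allocation.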

Security in Cloud Storage

Security and virtualization are often viewed as opposing forces. After all, virtualization frees applications from physical hardware and network boundaries. Security, on the other hand, is all about establishing boundaries. Enterprises need to consider security during the initial architecture design of a virtualized environment.

Data security in the mass market cloud, whether multi-tenant or private, is often based on trust. That trust is usually in the hypervisor. As multiple virtual machines share physical logical unit numbers (LUNs), CPUs, and memory, it is up to the hypervisor to ensure data is not corrupted or accessed by the wrong virtual machine. This is the same fundamental challenge that clustered server environments have faced for years. Any physical server that might need to take over processing needs to have access to the data/application/operating system. This type of configuration can be further complicated because of recent advances in backup technologies and processes. For example, LUNs might also need to be presented to a common backup server for off-host backups.

Businesses need to secure data in the enterprise cloud in three ways.

  • The first involves securing the hypervisor. The primary goal: To minimize the possibility of the hypervisor being exploited, and to prevent any one virtual machine from negatively impacting any other virtual machine. Enterprises also need to secure any other server that may have access to LUNs, like an off-host backup server.
  • The second area that needs to be addressed is the data path. Enterprises need to pay attention to providing access paths to only the physical servers that must have access to maintain the desired functionality. This can be accomplished through the use of zoning via SAN N-port ID virtualization (NPIV), LUN masking, access lists, and permission configurations.
  • Last, there should be options for data encryption in-flight and at-rest. These options might be dependent on the data access methods utilized. For data under the strictest compliance requirements, the consumer must be the sole owner of the encryption keys which usually means the data is encrypted before it leaves the operating system.
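The strictest case in the last point — the consumer as sole owner of the keys, with data encrypted before it leaves the operating system — can be illustrated with a toy. The construction below (a keyed stream derived via HMAC-SHA256) is NOT real security and is used only because it is self-contained; production code would use a vetted cipher such as AES-GCM. The property being demonstrated is that only opaque ciphertext ever leaves the customer's side:

```python
# TOY illustration of customer-held keys. The keyed-XOR "cipher" here
# exists only to keep the example dependency-free; use a vetted
# library cipher (e.g. AES-GCM) in any real system.

import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

key = os.urandom(32)                  # never leaves the customer
blob = encrypt(key, b"ledger row 42") # only this blob goes to the cloud
assert decrypt(key, blob) == b"ledger row 42"
```

Because the provider never holds the key, it can neither read the data nor be compelled to disclose it in usable form — which is precisely why the strictest compliance regimes demand this arrangement.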

One other area that enterprise clouds should address is how data is handled on break/fix drives and reused infrastructure. There should be well-defined break/fix procedures so that data is not accidentally compromised. When a customer vacates a service, a data erase certificate should be an option to show that the data has been deleted using an industry-standard data erase algorithm.

The Economics of Data Center Operations

The economics of operating a data center comprise many items that factor into the total cost of ownership.

1. Resiliency: Whether building a data center or evaluating provider facilities, cost is derived from the level of redundant infrastructure built into it. The Uptime Institute data center tiers describe criteria to differentiate four classifications of site infrastructure topology based on increasing levels of redundant capacity components and distribution paths.

2. Down Time: The historical cost model for operating IT effectively has been the cost of down time. The data center reliability attribute is a key ingredient to how the data center should be designed and what the requirements of infrastructure are. The cost of down time is drastically different among the different types of businesses and the facility design considerations should reflect as much. The amount of risk a business is willing to assume in maintaining the uptime of their IT has a large impact on the cost of a data center.

3. Staffing is an often overlooked or underestimated factor in determining the cost of data center operations. In addition to IT staff, facilities staffs ensure data center reliability and provision and maintain electrical and mechanical systems. Security staff requirements will vary depending on the size of the data center and individual needs of the business, but often require on-site personnel 24 hours a day and 365 days a year. If you are building a company data center, does the business have the experience in designing, building and operating it?

4. Financial considerations:

a. Site Selection: If you have the luxury of selecting a location throughout the U.S. for a data center site, incorporate local utility rates and tax incentives into the overall cost.

b. Cost Segregation: Research the use of audit estimating techniques to segregate or allocate costs to individual components of property (e.g., land, land improvements, buildings, equipment, furniture and fixtures, etc.).

c. Capital Recovery Factor: When evaluating the true capital cost of a data center, look at capital recovery factor, which is the ratio of a constant annuity to the present value of receiving that annuity for a given length of time.

d. Internal Rate of Return (IRR): What is the estimated IRR for the data center build project? IRR is an indicator that is commonly used to evaluate the desirability of investments or projects.

5. Timing: Consider the economics of technological obsolescence if building a data center. Weigh the costs of alignment with business and IT strategies against the risk of obtaining additional funding to increase power and cooling capacity to accommodate higher IT densities down the road.

6. Vertical Scalability: Scale is top of mind throughout most aspects of IT, but scalability in the data center carries a different connotation and a higher price tag if not considered properly. Vertical scalability means cloud computing-like elasticity incorporated into data center infrastructure and available floor space — turning up the dial on power and cooling densities without disrupting the business. The gains of turning up that dial equate to agility in operations, adaptability to changing business needs, and future cost avoidance in provisioning additional power and cooling to match the increased requirements of IT.
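The capital recovery factor in point 4c has a standard closed form: CRF = i(1+i)^n / ((1+i)^n − 1), where i is the periodic interest rate and n the number of periods. Multiplying the build cost by the CRF gives the equivalent constant annual cost of the capital. The build cost, rate, and horizon below are illustrative figures only:

```python
# Capital recovery factor (point 4c): the ratio of a constant annuity
# to the present value of receiving that annuity for n periods.
# CRF = i(1+i)^n / ((1+i)^n - 1)

def capital_recovery_factor(i: float, n: int) -> float:
    growth = (1 + i) ** n
    return i * growth / (growth - 1)

# Example figures (assumed): a 15-year horizon at an 8% cost of capital.
crf = capital_recovery_factor(0.08, 15)
print(round(crf, 4))  # 0.1168

# Applied to a hypothetical $10M build, the equivalent annual capital
# cost is crf * 10_000_000 — roughly $1.17M per year.
```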


An implicit value is derived from a data center strategy that is the coalescence of internal and financial perspectives. The misconception that going high-density in the rack equates to higher cost is usually not borne out by the realities of what efficient infrastructure is capable of. The new data center is equipped to handle and scale with high-density servers and will ultimately save money through the power and cooling efficiencies gained. Examine all aspects of the economics of data center operations in order to understand the implications of risk assumption and the true costs involved.

How Much Power Is Consumed At A Data Center?

Central to the attraction of data centres is their ability to deliver the highest possible power consumption efficiencies. This is measured as a ratio, known as power usage effectiveness (PUE), of the total electricity delivered to the facility to the electricity that actually reaches the IT equipment; the remainder powers facility features such as air conditioning and lighting. A PUE of 1.0 is ideal.
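The metric itself is a one-line calculation — PUE = total facility power ÷ IT equipment power — with the example wattages below chosen purely for illustration:

```python
# PUE = total facility power / IT equipment power.
# 1.0 would mean every watt drawn reaches the IT gear.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

print(pue(1070, 1000))  # 1.07 - the best-in-class figure cited below
print(pue(2000, 1000))  # 2.0  - as much power to overhead as to IT
```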

The top consumer of power within a data center is cooling. A computer server's CPU is the site of a huge amount of electrical activity, which means it generates a lot of heat and requires constant cooling. Too much heat is deadly for computer circuits, whose pathways, measured in nanometres, can degrade due to melting. Stored data can also be corrupted, since the electronic states that encode it are as subject to the effects of heat and cold as any other physical object. Temperatures around a CPU can reach 120 degrees. One solution is to handle hot air and cold air separately, so the hot air never gets a chance to raise the temperature of the cold air. Through a specially designed venting and cooling system, the hot air can be channelled out and continuously replaced with chilled air.

While individual data centres are achieving PUEs of 1.07 through efforts of this nature, the overall number of data centres continues to climb, tending to mitigate the overall beneficial effect of such efficiencies. According to a study on data center electricity use, by Jonathan G. Koomey, consulting professor at Stanford University, electricity used by data centres worldwide increased by about 56% from 2005 to 2010 (while it doubled from 2000 to 2005). In the US it increased by about 36%. The US hosts approximately 40% of the world's data center servers. It is estimated that US server farms, as data centres are also called, consume between 1.7% and 2.2% of the national power supply.

For example, Apple's new USD 1 billion "iDataCenter" in North Carolina is estimated to require as much as 100 MW of power, equivalent to that required to power 80,000 US homes or 250,000 European Union homes. Greenpeace's 2010 Make It Green report estimates that the global demand for electricity from data centres was on the order of 330 billion kWh in 2007, close to the equivalent of the entire electricity demand of the UK (345 billion kWh). However, this demand is projected to triple or quadruple by 2020.

Where is your Data Center?

Despite the term "cloud," which tends to downplay factors related to physical location, and the rise in concern about cyber-attack, which can be perpetrated from anywhere in the world, the place where data is housed remains of paramount concern for IT professionals.

Proximity to water and the electrical grid is critical. Political and economic stability is also key to sustaining site and data security. In addition, there is the long-term need for highly skilled personnel to tend to the site, providing maintenance and upkeep of the facility and computing equipment as well as of the data itself.



[Chart: Data Center Risk Index scores by country; labels visible in the original include the United States, Hong Kong, the United Kingdom, and South Africa]
Ranked from most attractive (highest score) to least attractive (lowest score), countries received a score based on 11 factors, including energy cost, international bandwidth, ease of doing business, corporation tax, cost of labor, political stability, sustainability, natural disasters, GDP per capita, inflation, and water availability. Source: hurleypalmerflatt and Cushman & Wakefield.


According to this Data Center Risk Index, the US is the most attractive location among 20 countries ranked according to the risks likely to affect successful data center operations. Energy is relatively inexpensive in the US and the country has an excellent reputation as a place to do business. Despite high corporation tax, the US is expected to remain the top choice for companies seeking a low risk location. Canada, which ranked second, scored top in political stability and water availability, fourth in sustainability and corporation tax, and fifth in ease of doing business, making it a highly desirable country in which to locate data centers. Advances in distributed computing and network technologies have made it possible for companies to venture farther afield into rural areas that optimize site selection based on cooling, power, bandwidth, and lower risk profiles. Such centers pump and recirculate cool groundwater in lieu of using chillers and use free outside air to cool tens of megawatts of server heat.