Quadra

Connecting Technology and Business.

Deploy your application in minutes instead of weeks

Choose your language, workload, operating system 

With support for Linux, Windows Server, SQL Server, Oracle, IBM, and SAP, Azure Virtual Machines gives you the flexibility of virtualization for a wide range of computing solutions—development and testing, running applications, and extending your datacenter. It’s the freedom of open-source software configured the way you need it. It’s as if it was another rack in your datacenter, giving you the power to deploy an application in minutes instead of weeks.


Get more choice

It’s all about choice for your virtual machines. Choose Linux or Windows. Choose to be on-premises, in the cloud, or both. Choose your own virtual machine image or download a certified pre-configured image in our marketplace. With Virtual Machines, you’re in control.

Scale to what you need

Combine the performance of a world-class supercomputer with the scalability of the cloud. Scale from one to thousands of virtual machine instances. Plus, with the growing number of regional Azure datacenters, easily scale globally so you’re closer to where your customers are.

Pay only for what you use

Keep your budget in check with low-cost, per-minute billing. You only pay for the compute time you use.
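As a rough, hypothetical illustration of what per-minute billing means in practice (the hourly rate below is a placeholder, not an actual Azure price), here is a small Python sketch:

    # Per-minute billing: you pay for the exact minutes a VM runs,
    # not for whole hours or for idle reserved capacity.
    HOURLY_RATE_USD = 0.10  # hypothetical rate, not a real Azure price

    def vm_cost(minutes_used: int, hourly_rate: float = HOURLY_RATE_USD) -> float:
        """Cost of a VM billed per minute of actual compute time."""
        return round(minutes_used * hourly_rate / 60, 4)

    # A test VM run for 2 hours 37 minutes is billed for exactly 157 minutes.
    print(vm_cost(157))  # 0.2617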

Enhance security and compliance

We’ll help you encrypt sensitive data, protect virtual machines from viruses and malware, secure network traffic, and meet regulatory and compliance requirements.

-Azure web pages

 

Windows Azure - Primary Models

Azure is an open and flexible cloud platform that enables you to quickly build, deploy and manage applications across a global network of Microsoft managed data centers. You can build applications using any language, tool or framework. And you can integrate your public cloud applications with your existing environment.


Virtual machines

VMs are basic cloud building blocks. Get full control over a virtual machine with virtual hard disks. Install and run software yourself. Configure multiple machines with different roles to create complex solutions. VMs are nearly identical to conventional (real) servers, and are the easiest way to move existing workloads to the cloud.

Cloud Services

Easily access and manage these general-purpose VMs. Microsoft maintains each VM, applying system updates as needed. You configure the VMs as needed and scale out as many copies as you need. There are two types of role VMs: worker roles and web roles. Worker roles are made for computing and running services. A web role is simply a worker role with IIS already installed and configured.
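Conceptually, a worker role is just a VM that runs your processing loop and nothing else. The sketch below is an illustrative Python loop, not the actual Cloud Services API; the in-memory queue, sentinel and job names are invented for the example, and a real worker role would poll a durable queue service instead.

    import queue

    # Hypothetical in-process queue standing in for a durable message queue.
    work_items: "queue.Queue[str]" = queue.Queue()
    STOP = "<stop>"  # sentinel so the sketch terminates

    def process(item: str) -> None:
        print(f"processing {item}")

    def run_worker_role() -> None:
        """Minimal worker-role-style loop: poll for work, process it, repeat."""
        while True:
            item = work_items.get()
            if item == STOP:
                break
            process(item)

    for job in ["resize-image-42", "send-invoice-7", STOP]:
        work_items.put(job)
    run_worker_role()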

Web sites

Use these for pure web apps. The underlying system software is hidden from you and managed for you, so you can focus only on your web code. First choose a web technology from the gallery, then develop with your framework, and finally deploy from your source control. Use the data platform of your choice. Develop one site or 500. Scaling can be automated by schedule, by usage, or by a quota trigger.
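The three scaling triggers (schedule, usage, quota) can be pictured with a small decision function. This is only an illustrative Python sketch with made-up thresholds; real Web Sites scaling rules are configured in the service, not written as code like this.

    from datetime import datetime

    def desired_instances(now: datetime, cpu_percent: float,
                          quota_fraction_used: float, current: int) -> int:
        """Pick an instance count from schedule, usage and quota triggers."""
        minimum = 3 if 9 <= now.hour < 18 else 1   # schedule: business-hours baseline
        if cpu_percent > 75:                        # usage: scale out under load
            current += 1
        elif cpu_percent < 25 and current > minimum:
            current -= 1                            # usage: scale back in when idle
        if quota_fraction_used >= 1.0:              # quota: stop growing at the cap
            return min(current, minimum)
        return max(current, minimum)

    print(desired_instances(datetime(2016, 8, 1, 14, 0), cpu_percent=82,
                            quota_fraction_used=0.4, current=3))  # 4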

Mobile services

Mobile Services lets you quickly implement data and authentication capabilities. With Mobile Services, you can easily create a web API to store data and execute business logic. Apps on any device call the API, and users are authenticated by third-party providers. Apps can also receive notifications when events occur. Worry less about data and authentication and focus more on your service.
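To make the "web API that stores data" idea concrete, here is a toy, standard-library-only Python sketch. It is not the Mobile Services SDK: authentication and push notifications are omitted, and the port and record shape are arbitrary choices for the illustration.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    RECORDS = []  # in-memory store standing in for the hosted data service

    class TodoApi(BaseHTTPRequestHandler):
        def do_GET(self):                      # apps read all stored records
            body = json.dumps(RECORDS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def do_POST(self):                     # apps store a new JSON record
            length = int(self.headers.get("Content-Length", 0))
            RECORDS.append(json.loads(self.rfile.read(length)))
            self.send_response(201)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), TodoApi).serve_forever()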

Three Danger Zones of Virtualization

In the area of virtualization, three general risk areas have been identified.

The first revolves around traditional security risk areas. These risks affect both virtual and physical machines. Virtual software layers expand the potential attack surface for targeted malware and breach attempts. In some cases, a malware-infested virtual machine can be introduced to attack a network from within. The risks of data loss also increase with virtualization: with the creation of virtual networks, more confidential data is located in more places, both inside and outside the organization. Virtual machines can also suffer from gaps in the security update and patching process. Furthermore, traditional protection models can fail to track the fluidity of virtual instances, leaving gaps open for intrusions.

The second consists of risks exclusive to virtual environments. Accelerated provisioning lets organizations provision and run new services much more rapidly, but it leaves little time to identify and address security risks. Moreover, sensitive data previously restricted to certain trust domains can now reside beside other data on host systems, increasing the risk of data loss. Virtual networks also add new layers of complexity due to the dynamic movement of virtual machines, more workload interactions, and more administrative and user access points, all of which decreases virtual machine visibility.
 
The third area concerns hybrid environments. With quick provisioning and dynamically mobile workloads, these environments are particularly susceptible to threats. Advanced security threats can use techniques such as drive-by downloads, zero-day vulnerability exploits and rootkits to attack virtual machines. Applications are also distributed across physical and virtual environments, resulting in many pieces of code spread across multiple platforms. Visibility is also lost in the complexity of adopting IT managed services and Infrastructure-as-a-Service outsourcing.
 
- Securing the promise of Virtualization
A Symantec/VMware Position Paper

Virtualization - current trends

A survey of more than 350 IT professionals from the US, the United Kingdom, Japan and Germany was recently concluded to determine current perceptions and utilization of virtualization practices. Here are the findings:

Server virtualization is a growing trend in enterprise IT. Within the next year, IT professionals anticipate that virtualized machines will outnumber physical, nonvirtualized machines. And while virtualization technology offers many benefits, there are also a number of challenges.

Understanding IT Virtualization

 
Virtualization is part of an overall trend in enterprise IT. Qualified survey participants were IT professionals whose primary job responsibility included server virtualization, in companies where virtualized servers are protected by antimalware security software and have been in production for at least three months.
 
As expected, server virtualization is widely embraced in today’s marketplace. In addition to virtualized servers, some 70% of the survey respondents also indicated their organization has already deployed or is currently piloting Virtual Desktop Infrastructure (VDI) technology. 

High Level of Virtualization across Server Workloads

 
When respondents were asked to identify the specific server workloads virtualized within their organization, it became apparent that organizations are working hard to virtualize many different server workloads. Currently, databases, apps, and virtual desktops are the most commonly virtualized server workloads.
 
In general, US enterprises tended to be further along in their adoption of virtualization technologies than their counterparts in Germany, the UK, or Japan. In fact, more than 50% of the US respondents indicated their organizations had successfully virtualized one or more server workloads; almost 40% had virtualized the following:
 
1. Database
2. Apps
3. Virtual Desktops
4. Email
5. File
6. Collaboration
7. Web
8. Print
9. FTP
10. DNS
11. Proxy
 
But the other countries weren't far behind. More than 50% of the international respondents indicated their company had five or more server workloads virtualized. And of the respondents who indicated their organization currently has more physical, unvirtualized host servers than virtualized machines, some 60% said they expect that balance to flip within the next year, with virtualized machines soon outnumbering unvirtualized servers.
 
Server virtualization managers were in agreement that the most common critical resource within their virtual environment was the CPU. Memory and I/O were also highlighted as critical resources for specific types of workloads.
  
The average peak utilization level of their virtualized servers was fairly consistent across all of the server workloads: anywhere from 60 to 80%.

Storage Virtualization

Virtual storage defined 

There is a fundamental difference between a resource that uses virtualization internally, versus something that provides a set of virtual interfaces. This point of confusion is often exploited by vendors obscuring whether or not a resource delivers virtualization. As an example, all modern operating systems use virtualization, but only hypervisors such as VMware, Hyper-V and Xen deliver virtualized computing. Similarly, nearly all enterprise storage systems utilize virtualization, but only a few products provide virtualized storage.
 
Storage virtualization by itself does not imply “virtualized storage” unless it allows the use of any storage system and any network connectivity. To deliver the type of virtualization required for highly flexible cloud services and ITaaS, virtual storage must provide standard, virtual interfaces and support multiple storage vendors’ products.
 
Benefits of virtual storage
 
Some of the high-level benefits of using virtual storage (rather than storage with virtualization) include:
--Improved efficiency through greater storage utilization
--Standardized management of storage, providing decreased operational expenses
--Storage product inter-changeability, providing lower capital expenses
 
How virtual storage works
 
Virtualization is an abstraction that provides a simple, consistent interface to a potentially complicated system. By providing a consistent interface, it frees both the engineers designing systems and the users from being tied to any one specific implementation.
 
Most commonly, virtualization is implemented through a mapping table that provides access to resources. The use of mapping tables is the reason why 64-bit or even larger addresses are required: keeping track of billions or trillions of resources demands a large address space. To overcome limitations of grouping and of the size or granularity of resource access, it is also common to use multiple levels of mapping, or indirection.
 
With the advent of thin provisioning, multiple point-in-time copies of volumes, and multi-terabyte disk drives, many vendors have found it necessary to employ three levels of mapping.
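A minimal Python sketch of the mapping-table idea, assuming a thinly provisioned virtual volume that maps logical block addresses onto two invented backing devices and allocates physical blocks only on first write:

    BLOCK_SIZE = 4096

    class ThinVolume:
        """Logical blocks are translated to (device, physical block) via a map."""

        def __init__(self, backing_devices):
            self.backing = backing_devices               # name -> bytearray
            self.map = {}                                # logical block -> (device, physical block)
            self.next_free = {dev: 0 for dev in backing_devices}

        def write(self, logical_block, data):
            if logical_block not in self.map:            # thin provisioning: allocate on first write
                dev = min(self.next_free, key=self.next_free.get)
                self.map[logical_block] = (dev, self.next_free[dev])
                self.next_free[dev] += 1
            dev, phys = self.map[logical_block]
            start = phys * BLOCK_SIZE
            self.backing[dev][start:start + BLOCK_SIZE] = data[:BLOCK_SIZE].ljust(BLOCK_SIZE, b"\0")

        def read(self, logical_block):
            if logical_block not in self.map:            # never written: a thin volume returns zeroes
                return b"\0" * BLOCK_SIZE
            dev, phys = self.map[logical_block]
            start = phys * BLOCK_SIZE
            return bytes(self.backing[dev][start:start + BLOCK_SIZE])

    vol = ThinVolume({"array-A": bytearray(8 * BLOCK_SIZE), "array-B": bytearray(8 * BLOCK_SIZE)})
    vol.write(1000, b"hello")
    print(vol.read(1000)[:5])  # b'hello'

A second or third level of indirection would simply make each map entry point at another, smaller map rather than directly at a physical block.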
 
Implementation approaches:
 
Symmetric: This method is commonly known as “in-band.” With this approach, all I/O moves through the virtualization layer. The mapping table is also managed and maintained on the devices providing the in-band virtualization.
 
Asymmetric: This method is also commonly referred to as “out-of-band.” In this implementation, the data and meta-data are handled differently. The mapping table, or meta-data about where data actually resides, is loaded into each host accessing the storage.
 
Hybrid: There is still another hybrid approach, known as “split-path.” This method is still an asymmetric approach, although the I/O mapping does occur somewhat “in-band.” The split-path method is typically used with a storage network switch, which contains the virtualization layer or mapping table. Although the I/Os do flow directly in the path and through the storage networking switch, the management and administration occur out-of-band. For this reason, the approach is known as “split-path.” The most common example of this approach was EMC’s Invista product. LSI’s SVM (also sold as HP’s SVSP) also uses this method.
 
Where virtual storage occurs
Host based: This was one of the first methods of providing virtual storage. This method delivers more than just virtualized storage, because it uses the hosts’ ability to connect to multiple storage systems from different vendors to provide a common way of managing and allocating resources.
The term most often used for this class of products is a “volume manager,” which manages volumes or LUNs on a host system. Several operating systems, including HP-UX, AIX, z/OS, Solaris, Linux and Windows, ship with basic volume managers. Third-party volume manager products are also available, such as Symantec’s Veritas Volume Manager.
 
Network based: As implied by the name, this approach places the virtualization within the data path between the host and the storage system. With the advent of storage networks, the network-based approach to delivering virtualized storage has become popular. One issue that plagued early versions of these products was the lack of advanced software protection capabilities.
There have been several popular and successful products in this category, most notably IBM’s SAN Volume Controller (SVC) and NetApp’s V-Series, as well as products from vendors such as DataCore, FalconStor, LSI StoreAge and others. Another recent offering, though only for virtualized server environments, is EMC’s VPLEX.
 
Storage system: This approach is somewhat similar to network-based virtualization. Storage networking connections such as Fibre Channel and IP are used to connect third-party storage to the primary storage system that is providing the virtualization. This method has the advantage that existing data protection and storage management tools may be extended to support the external, virtualized storage. The Hitachi USP storage platform is the most complete and successful example of this model to date.
 
The new data center needed to deliver ITaaS and cloud computing requires virtualized components at its foundation. Without virtualized computing, networking and storage, administrators will be unable to meet the dynamic demands of their customers without over-provisioning, over-charging, or both.
 
The success of virtualized computing is now seen by nearly everyone as a transformational event. However, without virtualized networking and storage, data centers will continue to operate inefficiently. The next wave of transformation begins with the use of virtualized storage.
 
- Russ Fellows, senior analyst with the Evaluator Group

Virtualization towards a Private Cloud

Maintain Control and Governance with Private Cloud Computing

Faster provisioning is great, but not if it means compromising security or allowing unauthorized access to resources. Many surveys on cloud computing cite security as the most significant concern hindering the adoption of cloud computing. A host-based, distributed architecture enables virtual networks to be protected at the host level, leading to lower overall costs and higher quality of service.

Automated provisioning also requires control to ensure compliance with appropriate access policies. One approach is to control the consumption of resources before they are provisioned, either through a request/approval workflow or through quotas and limits on what can be provisioned. An alternative approach is to let end users freely consume resources from the infrastructure, but observe that consumption to generate usage reports. Cost models can then be applied to those usage reports to produce cost reports showing what the consumption actually costs.
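A tiny Python sketch of the second, "observe and report" approach: raw usage records are rolled up per team and a cost model is applied to produce a showback report. The rates, teams and metric names are invented for illustration, not drawn from any particular cloud management product.

    from collections import defaultdict

    # Hypothetical cost model: unit rates per consumption metric.
    COST_MODEL = {"vcpu_hours": 0.04, "gb_ram_hours": 0.01, "gb_storage_days": 0.002}

    usage_records = [
        {"team": "payments",  "vcpu_hours": 1200, "gb_ram_hours": 4800, "gb_storage_days": 900},
        {"team": "analytics", "vcpu_hours": 3000, "gb_ram_hours": 9000, "gb_storage_days": 5000},
    ]

    def cost_report(records):
        """Apply the cost model to observed usage and total it per team."""
        totals = defaultdict(float)
        for rec in records:
            for metric, rate in COST_MODEL.items():
                totals[rec["team"]] += rec.get(metric, 0) * rate
        return dict(totals)

    print(cost_report(usage_records))
    # approximately {'payments': 97.8, 'analytics': 220.0}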
 
Obtain Agility with Policy-Driven Automation
 
Resource management policies can improve utilization of shared infrastructure by allowing resources to be overcommitted beyond baseline reservations. Multi-cluster linked clone technology will enable identical virtual machine configurations to be provisioned rapidly and without a full duplication of the original template. Resource pooling and virtual distributed network configuration will reduce the amount of hardware needed to deliver services, and will enable intelligent policy management mechanisms like distributed resource scheduling. Software controls can enforce isolation that minimizes the risk of a user-driven or system-driven fault.
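As a rough illustration of overcommitment beyond baseline reservations, the admission check below allows allocated vCPUs to exceed physical cores up to a policy ratio. The 4:1 ratio and core count are arbitrary examples for this sketch, not recommendations.

    PHYSICAL_CORES = 32
    OVERCOMMIT_RATIO = 4.0   # policy: allow up to 4 allocated vCPUs per physical core

    def can_place_vm(allocated_vcpus: int, requested_vcpus: int) -> bool:
        """Admit the VM only while total allocation stays under the overcommit cap."""
        return allocated_vcpus + requested_vcpus <= PHYSICAL_CORES * OVERCOMMIT_RATIO

    print(can_place_vm(allocated_vcpus=100, requested_vcpus=8))   # True: 108 <= 128
    print(can_place_vm(allocated_vcpus=124, requested_vcpus=8))   # False: 132 > 128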
 
With this policy-driven automation in place, you can deliver essentially zero-touch infrastructure, where IT is transformed into a service provider that defines and delivers services rather than operationally responding to custom requests.

Downsides to Virtualization

First, increased network complexity affects performance. Aside from merely increasing the number of network devices, virtualization adds tiers to the switching fabric, increasing latency, power consumption and management complexity.

Second, the consolidation of virtual machines on physical servers affects switching scalability and performance. As dual-processor servers with six-, eight- and even 10-core CPUs become common, consolidation ratios will climb. Currently, a hypervisor virtual switch with a workload of 10 to 15 VMs per system exacts a modest overhead of about 10% to 15%, but that figure will undoubtedly increase when handling scores of VMs.

The third challenge is that software switching complicates management and security. Network monitoring, management, traffic reporting and security tools use standard protocols operating on physical ports, but as more traffic is switched within the hypervisor, these tools lose visibility into a significant amount of network activity. Some vendors make their monitoring and analysis software available on VMs to regain visibility, but these are proprietary solutions that typically support only one or two hypervisor vendors, and usually come with additional license costs.

Fourth, the ability to seamlessly and transparently move VMs from one physical server to another complicates management and security. Such dynamic movement of application workloads makes it a headache to keep network policies aligned with applications.

Fifth, virtualization exacerbates demands for shared storage, due to the inherent need to decouple OS images, applications and data from the underlying server hardware.

- Analytics.InformationWeek.com

Cost-effective Server Virtualization

Step 1 — Assess your server environment.

A number of valuable tools on the market today enable the collection of performance metrics from your server environment. These tools can help you identify which servers would make good virtualization candidates. A partial list of available tools includes:
• VMware
o Capacity Planner
o vCenter Server Guided Consolidation
• Microsoft
o Microsoft Assessment and Planning Toolkit (MAP)
o System Center Virtual Machine Manager
o System Center Operations Manager
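For a taste of what these assessment tools automate, the sketch below samples CPU and memory utilization to flag lightly loaded machines as possible virtualization candidates. It uses the third-party psutil library purely as an example, and the 20%/40% thresholds are arbitrary; real assessments sample over weeks, not seconds.

    import psutil  # third-party library: pip install psutil

    def sample_utilization(interval_s: float = 1.0) -> dict:
        """One instantaneous sample of host CPU and memory utilization."""
        return {
            "cpu_percent": psutil.cpu_percent(interval=interval_s),
            "memory_percent": psutil.virtual_memory().percent,
        }

    def looks_like_candidate(sample: dict) -> bool:
        return sample["cpu_percent"] < 20 and sample["memory_percent"] < 40

    s = sample_utilization()
    print(s, "-> virtualization candidate" if looks_like_candidate(s) else "-> keep physical")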

Step 2 — Finalize candidates for virtualization.

Before making any final decisions about which servers to virtualize, it’s important to understand each software vendor’s support policies. Is the software supported to run in a virtual machine? Is it supported to run on specific platforms (e.g., Oracle, Microsoft, etc.)? It’s also critical to determine each software vendor’s licensing policies.
• Does the policy allow for virtualization of the software?
• Does it permit dynamic movement of the virtual machine via vMotion, XenMotion or other means?
• If the host is licensed, does the virtualization policy allow for additional software instances to run at no additional cost?
Once you’ve answered these questions, you will be able to build a final list of virtualization candidates.

Step 3 — Determine the virtualization platform.

Citrix, Microsoft and VMware produce some of the well-known virtualization platforms, or hypervisors. Each platform is different. Once you’ve identified the features, functionality and caveats of each hypervisor, you can make an informed decision about which one will best meet your specific business needs.

Step 4 — Determine the server hardware platform.

To realize the greatest cost savings, determine if server reuse is an option for your business. Note, however, that any system more than three to four years old will probably be removed from newer virtualization platform hardware compatibility lists (HCLs). Warranty expirations and renewal costs also represent a key consideration in deciding on the merit and cost-effectiveness of server reuse. Finally, some hypervisors require specific processors with specific features (AMD-V or Intel VT) to operate properly. The hypervisor you select may, therefore, determine the feasibility of server reuse.
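On a Linux host, a quick way to check for those processor features is to look for the "vmx" (Intel VT-x) or "svm" (AMD-V) CPU flags. The small sketch below only shows whether the CPU reports the feature; it can still be disabled in the BIOS/firmware.

    def hw_virt_support(cpuinfo_path: str = "/proc/cpuinfo") -> str:
        """Report whether the CPU advertises hardware virtualization extensions."""
        with open(cpuinfo_path) as f:
            flags = f.read()
        if " vmx" in flags:
            return "Intel VT-x (vmx) present"
        if " svm" in flags:
            return "AMD-V (svm) present"
        return "no hardware virtualization flags found"

    if __name__ == "__main__":
        print(hw_virt_support())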
If new server hardware is the right option, consider rack servers, blade servers, AMD and Intel options and embedded hypervisors. HP, IBM and Sun all manufacture server hardware that leverages virtualization technology.

Step 5 — Determine storage hardware platform.

EMC, HP, IBM, NetApp and Sun Microsystems all produce leading storage systems. Again, it is important to identify the features, functionality and caveats of each platform in order to determine which one will best match your particular business needs.

Step 6 — Revisit backup/restore architecture.

Many vendors offer enhanced backup products for virtual infrastructure. Some of these vendors price by socket or host, thereby reducing costs considerably. If your business is tied to a long-term maintenance contract, consider finishing it out before changing your backup architecture. For those ready to investigate backup/restore architecture, the following companies offer some of the best options for virtualized infrastructure:
• CA • Commvault • EMC • IBM • Microsoft • Symantec • Vizioncore
 

Step 7 — Understand server operating system licensing.

If you plan on running many Windows Server instances, consider licensing each host processor with Windows Server 2008 Data Center Edition. Doing so enables unlimited Windows Server instances with no additional charge as well as dynamic movement of instances from host to host. These benefits prove extremely useful if you plan to clone and test Windows Server instances. Linux distributions have similar offerings but will vary by vendor.

Step 8 — Plan the project.

Assigning a project manager remains key to ensuring a successful virtualization implementation. Working with the project manager, you should build a project plan.
Determining the vendors through whom to acquire the necessary hardware, software and services represents another critical aspect of planning. Procuring everything from a single vendor will often yield the deepest discounts. But be sure to research all possibilities.
Finally, make sure to build a conversion time line — and stick to it. This step will prove one of the most effective means of controlling cost.

Step 9 — Educate and Implement.

For the virtualization process to remain cost effective, you’ll need to educate your implementation team. This education could take place in various ways:
• Online, classroom or onsite training
• Onsite jumpstart engagements
• Planning and design workshops
The virtualization platform should be rolled out simultaneously with the server and storage hardware. Once everything is in place, you’re ready to optimize the environment, which should include documentation and testing of backup and, if applicable, disaster recovery procedures.

Step 10 — Leverage physical-to-virtual (P2V) conversion to the greatest degree possible.

To realize the greatest cost savings and ROI, you’ll want to eliminate as many physical servers as possible through virtualization. You’ll also want to achieve this conversion quickly and efficiently. After accomplishing these conversions, don’t forget to turn off old systems. These systems can be either recycled, repurposed or sold.
With fewer physical servers, don’t forget that you’ll have less heat to dissipate. Maximizing your savings will thus also involve a reassessment of your data center cooling distribution.
As a final consideration, it may be most cost-effective to outsource P2V projects. Utilizing a team of experienced P2V engineers can save a substantial amount of time and money.
-CDW

Virtualizing disaster recovery using cloud computing

Cloud-based business resilience—a welcome, new approach 

Cloud computing offers an attractive alternative to traditional disaster recovery. "The Cloud" is inherently a shared infrastructure: a pooled set of resources with the infrastructure cost distributed across everyone who contracts for the cloud service. This shared nature makes cloud an ideal model for disaster recovery. Even when we broaden the definition of disaster recovery to include more mundane service interruptions, the need for disaster recovery resources is sporadic. Since all of the organizations relying on the cloud for backup and recovery are very unlikely to need the infrastructure at the same time, costs can be reduced and the cloud can speed recovery time.


Cloud-based business resilience managed services like IBM SmartCloud™ Virtualized Server Recovery are designed to balance economical shared physical recovery with the speed of dedicated infrastructure. Because server images and data are continuously replicated, recovery time can be reduced dramatically to less than an hour, and in many cases to minutes or even seconds, while costs remain closer to those of a shared recovery model.
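Continuous replication also keeps the recovery point, that is, how much recent data could be lost, close to the moment of failure. A purely hypothetical Python comparison with a nightly backup (all timestamps are invented for the illustration):

    from datetime import datetime

    def rpo(last_replicated: datetime, disaster_at: datetime):
        """Recovery point: how much recent data could be lost."""
        return disaster_at - last_replicated

    disaster_at     = datetime(2012, 5, 1, 14, 30, 0)   # hypothetical outage
    nightly_backup  = datetime(2012, 5, 1, 2, 0, 0)     # last backup job
    continuous_sync = datetime(2012, 5, 1, 14, 29, 55)  # last replicated change

    print("backup-based recovery point:", rpo(nightly_backup, disaster_at))    # 12:30:00
    print("continuous replication:     ", rpo(continuous_sync, disaster_at))   # 0:00:05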

- Thought Leadership white paper from IBM