Quadra

Connecting Technology and Business.

The Birth of Cloud Computing

In the last 10 years we have been moving towards the consumption of technology in the same way we use gas or electricity. A few names were put forward for this model: Utility Computing, Grid Computing, Cloud Computing and, to a certain extent, even outsourcing or hosting. All of these mean more or less the same thing. However:

• Utility and grid aren't new terms and have been in use for many years.

• Outsourcing became a dirty word in the late 1990s when many businesses failed to realise the perceived benefits and went through a period of insourcing.

• Hosting has been delivered successfully for many years, from renting rack space in someone else's Data Centre (co-location) to having a vendor provide a server for a business's use (e.g. virtual or physical web servers), with the business paying a monthly or annual fee for the service.

So is Cloud simply a rebranding of what we already have? To a certain extent it is exactly that: a reinvention of what we already have and have used for many years. However, there is more than just one element to a Cloud solution.

In the beginning…

To many, the first real use of computers in business was the huge Mainframes that powered the world's largest businesses in the 60s and 70s. Mainframe technology can actually trace its roots back to the code-breaking machines of WWII, built to crack ciphers such as ENIGMA and widely regarded as forerunners of the modern computer. So why are Mainframes so important to the understanding of Cloud? The answer becomes quite apparent: nearly every component that a Cloud service requires existed in the computing environments of the late 60s and early 70s!

What is a Mainframe?

Mainframes can be described very simply as a core group of required components:
• CPU (the processor that is the thinking part of a computer)
• Memory (where calculations and results are kept when in use)
• Storage (long-term storage of files and documents, originally to large tape reels, now primarily disks)
• Network (allowing a computer to talk to another one)
• Operating System (the chief program that tells all the components how to work together)
• Programs (individual pieces of software designed to perform a certain task)

So what has changed over the years? Arguably very little. All those components used then are still requirements now. They are just faster, cheaper, and far more widely available than ever before.

The problem with Mainframes...

Size - Many took up entire floors just to be able to run the monthly accounts.
Cost - If your business had anything much more sophisticated than an electronic typewriter, you would have had to invest a massive amount of money, even for basic functionality.
Complexity - Not by modern-day standards, but you needed experienced and costly engineers to run, maintain and, even worse, repair them.

How were they used at the time?

Now to the interesting part! Due to the size, cost, complexity and overhead, many office buildings or campuses had a single Mainframe and shared out the computing power to the tenant businesses. Let's put that another way: you shared computing resources with other businesses upon a single platform, paid a percentage towards the costs and had IT services delivered to your office and users with no upfront investment or associated cost of building your own infrastructure. Sound familiar?

Virtualisation was born

By the late 60s it just seemed crazy to have all this computing power dedicated to running a solitary program, especially when loading the program could take many hours. A new idea was developed whereby a Kernel program ran directly on the Mainframe hardware. The Kernel's job was to talk to each of the programs and book a time slot for each one to take turns using the computing resources, thus ensuring the best possible use of the hardware. This was imaginatively called time sharing. That Kernel is what we would now call a Hypervisor, which is the basic component of a virtual environment. In the early 1970s IBM coined the phrases 'Virtual Machine' and 'Virtualisation'.

Why did we move away from Mainframes?

All those massive Mainframes running all those programs could never be sustainable, so it was deemed a better idea to have a smaller computer that could happily run a program or two for a business and sit in a cupboard or under a desk; thus the modern-day server was born.

The problem with modern day servers...

Roll forward through the years and we have new problems: those single-box servers have multiplied. All of a sudden we need big, costly rooms that fill up with more and more servers, each running a new program, each requiring costly expertise to run, repair and deliver to the business the key applications it needs to function in modern times.
A solution to this problem becomes clear: wouldn't it be a fantastic idea to build one big computer that replaces all the smaller servers, virtualise it and just worry about that one physical piece of hardware? In essence, virtualisation returns us to the days of the Mainframe and we have gone full circle.

Is Cloud virtualisation?

There is much talk, and indeed there are incorrect statements, that Cloud is virtualisation and vice versa. So is virtualisation an exciting and new technology? Is it a ground-breaking solution to business problems? Or is it in fact a 40-year-old solution that has been in constant use by many globally and has enjoyed new prominence over the last ten years? Modern virtualisation technologies are far more advanced, with many more features than those of decades past, and at best form a part of a cloud solution. Something that needs to be made clear, though, is that you do not have to have any form of virtualisation to build a Cloud solution; it is simply one component. What is true, however, is that virtual technologies are just cheaper, faster and far more widely available now than ever before.
 
- Stuart James, "Cloud - It is not a nebulous concept", a Whitepaper

Multitenancy explained

​Traditionally, different instances of the same software are run on different servers for different clients. The hardware is different, the software instance is different and so are the database schemas. The infrastructure could be in-house or outsourced.

In software architecture, when a single instance of software runs on a server and is used by multiple clients, it is termed multi-tenancy. The hardware, operating system, data storage and software application are the same for all clients, but clients cannot share or access each other's data.

This differs slightly from virtualization, where each customer's application appears to run on a separate physical machine. Isolation between customers is generally stronger in virtualization than in multi-tenancy.
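
As an illustration only (not drawn from the excerpt above), the sketch below shows how a single database schema can serve several tenants while keeping their rows separated. It uses Python's built-in sqlite3 module and a hypothetical tenant_id column; a real SaaS platform would add authentication, row-level security and auditing on top.

# Minimal multi-tenancy sketch: one schema, many tenants, rows isolated
# by a hypothetical tenant_id column. Illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE orders (
    tenant_id TEXT NOT NULL,   -- identifies the client that owns the row
    order_no  INTEGER,
    amount    REAL)""")

# Two clients share the same table and application code...
db.execute("INSERT INTO orders VALUES ('acme', 1, 120.0)")
db.execute("INSERT INTO orders VALUES ('globex', 1, 75.5)")

def orders_for(tenant):
    # ...but every query is filtered by tenant, so no client can
    # see another client's data.
    return db.execute(
        "SELECT order_no, amount FROM orders WHERE tenant_id = ?",
        (tenant,)).fetchall()

print(orders_for("acme"))    # [(1, 120.0)]
print(orders_for("globex"))  # [(1, 75.5)]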

The key advantages of multi-tenancy are:
Resource utilization
IT resources, both hardware and software, are utilized fully with multiple customers accessing the same resources.

Data aggregation
Since there is a single database schema, data is collected from a single source. This simplifies data mining and analysis. Performance tuning of the database also becomes easier for the administrator.

Application management
Release management becomes simpler since there is only a single instance of the application. Upgrades to the software or hardware are common to all customers. Administration and maintenance of the infrastructure become easier as there is only a single platform and hardware stack to maintain.

Scalability
When new hardware is added to the platform, the capacity and processing power of the whole environment increase. This makes scaling up easier for all customers.

Cost savings
Economies of scale lead to cost savings for customers, since the licensing costs of the underlying software, the maintenance costs of the hardware, etc. are all shared. This is beneficial to small and medium enterprises that need IT resources but cannot afford to invest in the infrastructure.

The multi-tenancy model also has some pitfalls compared to the traditional single-tenancy model.
- Since per-tenant metadata needs to be stored and maintained, multi-tenant applications require more effort to maintain, and the environment becomes more complex.
- In a single-tenant environment, if there is any problem, only one client is impacted, whereas in a multi-tenant environment all clients are impacted simultaneously in case of a problem.
- Downtime during new releases or updates impacts more than one client. Even if an update is for one customer, the downtime during release impacts all.
- There are also security concerns, since competing enterprises might be sharing the same infrastructure.

Service providers must put in extra effort to ensure business continuity and performance, and to address the security concerns of their customers.

Extract from forum posting by Uma Avantsa, Contributing Editor, TradeBriefs

Cloud is a business decision, not an IT decision

Some cloud vendors are currently selling cloud and offering savings as high as 70%. It is suggested that focusing on a headline operational cost-saving figure underestimates the business transformation required, as well as potentially significant setup costs which, when considered, rarely make the business case for cloud compelling.

  

Cloud offers a new economic and consumption model for IT and application purchases; however, what many are not highlighting is the business transformation and IT migration which has to accompany the purchase of cloud services to derive real business benefits. These can be time consuming, involve write downs of existing investments, process re-engineering, systems migration and structural changes.

  

Moving your IT systems to the cloud is not a simple ‘lift and shift’. It requires transformation (in technical parlance, ‘virtualisation’). This key step is often overlooked in the business case. The cost of a migration to a virtualised environment may outweigh the savings the cloud offers. Clients should look to plan and build for the cloud - not migrate large legacy systems to the cloud. The business case for the latter is seldom positive. 
 
Deloitte recommends that businesses thoroughly assess the relevance of the cloud to their organization and where its impact can deliver the most value. This requires building a strong business case with clear milestones and tracking its benefits.
 
Here’s a list of things to remember about using the cloud:
1. This is not just an IT cost reduction program; it is a business transformation
2. A clear business case is needed, with clear, measurable deliverables
3. Security, privacy and business governance are important, so investigate what can and cannot be accomplished within the regulations – don’t believe everything you hear
4. Using the cloud is a journey and as such needs clear planning to choose the right partners. Not all cloud providers’ offers are the same
5. The cloud opens many new and exciting strategic options. Think about how to capture these options in your analysis
6. Although this is a business transformation, both IT and the CIO are crucial to success and need to be a core part of the team
7. Be clear in communications across all aspects of the business, both internally and externally, about the business benefits and expected outcomes.
Remember that the cloud exists for a good reason: it can be a pragmatic tool that can help your business better manage its IT requirements. But to make the most of the cloud, you need to do your homework and due diligence. Following these steps can help your organisation to take advantage of the best aspects of the cloud while helping to minimise your risk.
 
- Pragmatic Cloud Computing, Deloitte

Virtualization towards a Private Cloud

​Maintain Control and Governance with Private Cloud Computing

Faster provisioning is great, but not if it means compromising security or allowing unauthorized access to resources. Many surveys on cloud computing cite security as the most significant concern hindering the adoption of cloud computing. A host-based distributed architecture enables virtual networks to be protected at the host level, leading to lower overall costs and higher quality of service.

Automated provisioning also requires control to ensure compliance with appropriate access policies. One approach is to control the consumption of resources by users before the resources are provisioned, either through a request/approval workflow or through quotas/limits on what can be provisioned. An alternative approach is to freely allow end users to consume resources from the infrastructure but to observe this consumption and generate usage reports. Cost models can then be applied to those usage reports to produce cost reports that display the cost of the consumption.
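
A minimal sketch, not taken from the paper, of the two approaches described above: a quota check applied before provisioning, and a simple cost model applied to observed usage afterwards. All group names, limits and rates are invented for illustration.

# Illustrative only: per-group quotas checked before provisioning,
# and a simple cost model applied to observed usage afterwards.
QUOTAS = {"finance": {"vms": 10, "vcpus": 40}}       # assumed limits
RATES  = {"vcpu_hour": 0.04, "gb_ram_hour": 0.01}    # assumed prices

def can_provision(group, current, request):
    """Reject a request that would push the group past its quota."""
    limits = QUOTAS[group]
    return all(current[k] + request[k] <= limits[k] for k in limits)

def cost_report(usage):
    """Turn observed consumption into a showback/chargeback figure."""
    return sum(usage[k] * RATES[k] for k in usage)

print(can_provision("finance", {"vms": 9, "vcpus": 36}, {"vms": 1, "vcpus": 8}))  # False
print(cost_report({"vcpu_hour": 2000, "gb_ram_hour": 8000}))                      # 160.0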
 
Obtain Agility with Policy-Driven Automation
 
Resource management policies can improve utilization of shared infrastructure by allowing resources to be overcommitted beyond baseline reservations. Multi-cluster linked clone technology will enable identical virtual machine configurations to be provisioned rapidly and without a full duplication of the original template. Resource pooling and virtual distributed network configuration will reduce the amount of hardware needed to deliver services, and will enable intelligent policy management mechanisms like distributed resource scheduling. Software controls can enforce isolation that minimizes the risk of a user-driven or system-driven fault.
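
As a rough sketch of what overcommitting beyond baseline reservations can mean in practice (the ratios and the admit() helper below are assumptions, not anything from the paper): virtual CPUs may exceed physical cores up to an assumed overcommit ratio, while the reserved baseline is never overcommitted.

# Illustrative sketch of CPU overcommitment in a resource pool.
# Real schedulers also weigh memory, shares and limits.
PHYSICAL_CORES   = 32
OVERCOMMIT_RATIO = 4    # allow up to 4 vCPUs per physical core (assumed)
RESERVED_CORES   = 8    # baseline reservations are never overcommitted

def admit(vcpus_allocated, reserved_in_use, request_vcpus, request_reserved=0):
    ok_burst    = vcpus_allocated + request_vcpus <= PHYSICAL_CORES * OVERCOMMIT_RATIO
    ok_reserved = reserved_in_use + request_reserved <= RESERVED_CORES
    return ok_burst and ok_reserved

print(admit(vcpus_allocated=100, reserved_in_use=6, request_vcpus=16))                       # True
print(admit(vcpus_allocated=120, reserved_in_use=6, request_vcpus=16, request_reserved=4))   # False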
 
With this unique policy-driven automation in place, one will be able to essentially deliver zero-touch infrastructure, where IT is transformed into a service provider defining and delivering services rather than operationally responding to custom requests.

Downsides to Virtualization

First, increased network complexity affects performance. Aside from merely increasing the number of network devices, virtualization adds tiers to the switching fabric, increasing latency, power consumption and management complexity.

Second, the consolidation of virtual machines on physical servers affects switching scalability and performance. As dual-processor servers with six-, eight- and even 10-core CPUs become common, consolidation ratios will climb. Currently, a hypervisor virtual switch with a workload of 10 to 15 VMs per system extracts a modest overhead of about 10% to 15%, but that figure will undoubtedly increase when handling scores of VMs.

The third challenge is that software switching complicates management and security. Network monitoring, management, traffic reporting and security tools use standard protocols operating on physical ports, but as more traffic is switched within the hypervisor, these tools lose visibility into a significant amount of network activity. Some vendors make their monitoring and analysis software available on VMs to regain visibility, but these are proprietary solutions that typically support only one or two hypervisor vendors, and usually come with additional license costs.

Fourth, the ability to seamlessly and transparently move VMs from one physical server to another complicates management and security. Such dynamic movement of application workloads becomes a headache when keeping network policies aligned with applications.

Fifth, virtualization exacerbates demands for shared storage, due to the inherent need to decouple OS images, applications and data from the underlying server hardware.

- Analytics.InformationWeek.com

Cost-effective Server Virtualization

Step 1 — Assess your server environment.

A number of valuable tools on the market today enable the collection of performance metrics from your server environment. These tools can help you identify which servers would make good virtualization candidates. A partial list of available tools includes:
• VMware
o Capacity Planner
o vCenter Server Guided Consolidation
• Microsoft
o Microsoft Assessment and Planning Toolkit (MAP)
o System Center Virtual Machine Manager
o System Center Operations Manager

Step 2 — Finalize candidates for virtualization.

Before making any final decisions about which servers to virtualize, it’s important to understand each software vendor’s support policies. Is the software supported to run in a virtual machine? Is it supported to run on specific platforms (e.g., Oracle, Microsoft, etc.)? It’s also critical to determine each software vendor’s licensing policies.
• Does the policy allow for virtualization of the software?
• Does it permit dynamic movement of the virtual machine via vMotion, XenMotion or other means?
• If the host is licensed, does the virtualization policy allow for additional software instances to run at no additional cost?
Once you’ve answered these questions, you will be able to build a final list of virtualization candidates.

Step 3 — Determine the virtualization platform.

Citrix, Microsoft and VMware produce some of the well-known virtualization platforms, or hypervisors. Each platform is different. Once you’ve identified the features, functionality and caveats of each hypervisor, you can make an informed decision about which one will best meet your specific business needs.

Step 4 — Determine the server hardware platform.

To realize the greatest cost savings, determine if server reuse is an option for your business. Note, however, that any system more than three to four years old will probably be removed from newer virtualization platform hardware compatibility lists (HCLs). Warranty expirations and renewal costs also represent a key consideration in deciding on the merit and cost-effectiveness of server reuse. Finally, some hypervisors require specific processors with specific features (AMD-V or Intel VT) to operate properly. The hypervisor you select may, therefore, determine the feasibility of server reuse.
If new server hardware is the right option, consider rack servers, blade servers, AMD and Intel options and embedded hypervisors. HP, IBM and Sun all manufacture server hardware that leverages virtualization technology.
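
Step 4 notes that some hypervisors require hardware virtualization extensions (AMD-V or Intel VT). If you are assessing an existing Linux server for reuse, a quick check of the CPU flags in /proc/cpuinfo is sketched below; vmx indicates Intel VT-x and svm indicates AMD-V. This is a Linux-only illustration, not part of the original checklist.

# Quick Linux-only check for hardware virtualization support.
# "vmx" marks Intel VT-x, "svm" marks AMD-V in /proc/cpuinfo.
def virtualization_support(path="/proc/cpuinfo"):
    flags = set()
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return "none detected (check BIOS/UEFI settings or CPU model)"

print(virtualization_support())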

Step 5 — Determine storage hardware platform.

EMC, HP, IBM, NetApp and Sun Microsystems all produce leading storage systems. Again, it is important to identify the features, functionality and caveats of each platform in order to determine which one will best match your particular business needs.

Step 6 — Revisit backup/restore architecture.

Many vendors offer enhanced backup products for virtual infrastructure. Some of these vendors price by socket or host, thereby reducing costs considerably. If your business is tied to a long-term maintenance contract, consider finishing it out before changing your backup architecture. For those ready to investigate backup/restore architecture, the following companies offer some of the best options for virtualized infrastructure:
• CA • Commvault • EMC • IBM • Microsoft • Symantec • Vizioncore
 

Step 7 — Understand server operating system licensing.

If you plan on running many Windows Server instances, consider licensing each host processor with Windows Server 2008 Datacenter Edition. Doing so enables unlimited Windows Server instances at no additional charge, as well as dynamic movement of instances from host to host. These benefits prove extremely useful if you plan to clone and test Windows Server instances. Linux distributions have similar offerings, but these will vary by vendor.

Step 8 — Plan the project.

Assigning a project manager remains key to ensuring a successful virtualization implementation. Working with the project manager, you should build a project plan.
Determining the vendors through whom to acquire the necessary hardware, software and services represents another critical aspect of planning. Procuring everything from a single vendor will often yield the deepest discounts. But be sure to research all possibilities.
Finally, make sure to build a conversion time line — and stick to it. This step will prove one of the most effective means of controlling cost.

Step 9 — Educate and Implement.

For the virtualization process to remain cost effective, you’ll need to educate your implementation team. This education could take place in various ways:
• Online, classroom or onsite training
• Onsite jumpstart engagements
• Planning and design workshops
The virtualization platform should be rolled out simultaneously with the server and storage hardware. Once everything is in place, you’re ready to optimize the environment, which should include documentation and testing of backup and, if applicable, disaster recovery procedures.

Step 10 — Leverage physical-to-virtual (P2V) conversion to the greatest degree possible.

To realize the greatest cost savings and ROI, you'll want to eliminate as many physical servers as possible through virtualization. You'll also want to achieve this conversion quickly and efficiently. After accomplishing these conversions, don't forget to turn off old systems. These systems can be recycled, repurposed or sold.
With fewer physical servers, remember that you'll have less heat to dissipate. Maximizing your savings will thus also involve a reassessment of your data center cooling distribution.
As a final consideration, it may be most cost-effective to outsource P2V projects. Utilizing a team of experienced P2V engineers can save a substantial amount of time and money.
-CDW

Virtualizing disaster recovery using cloud computing

Cloud-based business resilience—a welcome, new approach 

Cloud computing offers an attractive alternative to traditional disaster recovery. "The Cloud" is inherently a shared infrastructure: a pooled set of resources with the infrastructure cost distributed across everyone who contracts for the cloud service. This shared nature makes cloud an ideal model for disaster recovery. Even when we broaden the definition of disaster recovery to include more mundane service interruptions, the need for disaster recovery resources is sporadic. Since all of the organizations relying on the cloud for backup and recovery are very unlikely to need the infrastructure at the same time, costs can be reduced and the cloud can speed recovery time.


Cloud-based business resilience managed services like IBM SmartCloud™ Virtualized Server Recovery are designed to provide a balance of economical shared physical recovery with the speed of dedicated infrastructure. Because the server images and data are continuously replicated, recovery time can be reduced dramatically to less than an hour, and, in many cases, to minutes—or even seconds. However, the costs are more consistent with shared recovery.

- Thought Leadership white paper from IBM

Mastering Cloud Operations Requirements

Operations management needs to master six new capabilities to deliver on the promise of cloud.

 

1. Operate on the “Pools” of Compute, Storage, and Memory

Traditionally, operations management solutions have provided coverage for individual servers, storage arrays, or network devices. With the cloud, it becomes imperative to operate at the “pool” level. You have to look beyond what can be monitored at the individual device level. Operations organizations must ensure that they have immediate access to the operational status of the pool. That status could be aggregated by workload (current usage) and capacity (past usage and future projections). Perhaps more importantly, the status needs to accurately reflect the underlying health of the pool, even though individual component availability is not the same as pool availability. The operations management solution you use should understand the behavior of the pool and report the health status based on it. 
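
To make the idea concrete, here is a minimal sketch (illustrative only, not from the paper) of rolling per-host metrics up into a single pool-level workload and capacity view, where pool health reflects remaining headroom rather than the availability of any individual host.

# Illustrative pool-level roll-up of per-host metrics. The threshold is
# an assumption; a real operations tool would also track trends and projections.
hosts = [
    {"name": "esx01", "up": True,  "cpu_used": 22, "cpu_total": 32},
    {"name": "esx02", "up": True,  "cpu_used": 30, "cpu_total": 32},
    {"name": "esx03", "up": False, "cpu_used": 0,  "cpu_total": 32},  # down, yet the pool may still be healthy
]

def pool_status(hosts):
    capacity = sum(h["cpu_total"] for h in hosts if h["up"])
    workload = sum(h["cpu_used"]  for h in hosts if h["up"])
    utilization = workload / capacity if capacity else 1.0
    health = "healthy" if utilization < 0.85 else "at risk"
    return {"capacity": capacity, "workload": workload,
            "utilization": round(utilization, 2), "health": health}

print(pool_status(hosts))
# {'capacity': 64, 'workload': 52, 'utilization': 0.81, 'health': 'healthy'}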

2. Monitor Elastic Service

Elasticity is central to cloud architectures, which means that services can dynamically expand and contract based on demand. Your operations management solution must adapt to this dynamic nature. For example, when monitoring the performance of a service, monitoring coverage should expand or contract with the service — automatically. This means that a manual process cannot be used to figure out and deploy monitoring capabilities to the target. Your operations management solution needs to know the configuration of that service and automatically deploy or remove the necessary agents. Another important consideration is coverage for both cloud and non-cloud resources. This is most critical for enterprises building a private cloud. Why? Chances are that not every tier of a multitier application can be moved to the cloud. There may be static, legacy pieces, such as a database or persistence layer, which are still deployed on physical boxes. Services must be monitored no matter where resources are located, in the cloud or on premises. In addition, a management solution should natively understand the different behavior of each environment. When resources are located in both private and public clouds, your operations solution should monitor services in each seamlessly. It should also support inter-cloud service migration. At the end of the day, services must be monitored no matter where their resources are located, and your operations management solution must know their location and understand the behavior of services accordingly.
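
The reconciliation idea behind elastic monitoring coverage can be sketched as follows (the instance names and the deploy/remove actions are placeholders, not a real agent API): compare the instances a service currently has against the instances that carry a monitoring agent, then deploy or remove agents so coverage tracks the service automatically.

# Illustrative reconciliation loop: monitoring coverage follows the
# elastic service instead of being deployed by hand.
def reconcile(service_instances, monitored_instances):
    to_deploy = service_instances - monitored_instances
    to_remove = monitored_instances - service_instances
    for node in to_deploy:
        print(f"deploying monitoring agent to {node}")   # placeholder action
    for node in to_remove:
        print(f"removing monitoring agent from {node}")  # placeholder action
    return service_instances

monitored = {"web-1", "web-2"}
# the service scaled out to four instances; web-2 was retired
monitored = reconcile({"web-1", "web-3", "web-4", "web-5"}, monitored)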
 

3. Detect Issues Before They Happen

Compared to workloads in the traditional data center, workloads in the cloud exhibit a wider variety of behavioral issues due to their elastic nature. When service agility is important, relying on reactive alerts or events to support stringent SLAs is not an option — particularly for service providers. You need to detect and resolve issues before they happen. Yet, how do you do that? First and foremost, you should implement a monitoring solution that knows how to learn the behavior of your cloud infrastructure and cloud services.
While this technology exists in the traditional data center, device-level behavior evolves more rapidly and with less conformity in the cloud. That's why your solution should have the ability to learn the behavior of abstracted resources, such as pools, as well as service levels that are based on business key performance indicators (KPIs). Based on those metrics, the solution should give predictive warnings to isolate problems before they affect your customer. To further pinpoint problems, operations should conduct a proper root cause analysis. This becomes even more critical in the cloud, where large numbers of scattered resources are involved. A problem might manifest itself as a sea of red alerts suddenly appearing in a monitoring dashboard. Even though one of them may be a critical network alert, chances are you are not going to notice it. Your operations management solution should intelligently detect the root cause of an issue with the cloud infrastructure and highlight that network event in your dashboard, while also invoking your remediation process.
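
The behavior-learning idea can be illustrated very roughly (this is not any vendor's actual algorithm) with a rolling baseline: warn when a KPI sample drifts well outside its recently observed mean, before a hard failure threshold or SLA breach is reached.

# Very rough sketch of behavioral baselining: flag a KPI sample that
# deviates strongly from its recent mean. Real products use far richer
# models (seasonality, correlation across metrics, root-cause analysis).
from statistics import mean, stdev

def predictive_warning(history, sample, sigmas=3.0):
    mu, sd = mean(history), stdev(history)
    return abs(sample - mu) > sigmas * max(sd, 1e-9)

latency_ms = [102, 98, 105, 99, 101, 97, 103, 100]   # learned "normal" behavior
print(predictive_warning(latency_ms, 104))   # False: within normal variation
print(predictive_warning(latency_ms, 160))   # True: warn before the SLA is breached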
 

4. Make Holistic Operations Decisions

In the cloud, you have to manage more types of constructs in your environment than in the traditional IT environment. In addition to servers, operating systems, and applications, you will have compute pools, storage pools, network containers, services, and tenants (for service providers). These new constructs are tightly coupled. You cannot view their performance and capacity data in silos; they have to be managed holistically. It is important to know who your most crucial customers are — and to identify their services so you can focus on recovering them in order of priority. In addition, you may want to send out alerts to affected customers to proactively let them know there is an issue. Your operations management solution should give you a panoramic view of all these aspects and their relationships. Not only will it let you quickly isolate the problem, but it will also save you money if you know which SLAs cost more to breach and therefore should be addressed first. 
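
The prioritisation rule described above can be reduced to a few lines (the services and breach costs below are invented for illustration): recover affected services in descending order of the cost of breaching their SLAs.

# Illustrative only: order incident recovery by SLA breach cost per hour.
affected = [
    {"service": "internal wiki",  "breach_cost_per_hour": 50},
    {"service": "payments API",   "breach_cost_per_hour": 12000},
    {"service": "partner portal", "breach_cost_per_hour": 1800},
]
for item in sorted(affected, key=lambda s: s["breach_cost_per_hour"], reverse=True):
    print(item["service"])
# payments API, then partner portal, then internal wiki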

5. Enable Self-Service for Operations

To give your cloud users their desired experience while also saving on support costs, it’s important to provide constant feedback. Traditionally, performance data has not been available to the end user. In the cloud, however, there is a larger number of users or service requests with a relatively lower ratio of administrators. For that reason, it’s important to minimize the “false alarms” or manual routine requests. The best way is to let your end users see the performance and capacity data surrounding their services. You can also let your users define key performance indicators (KPIs) to monitor, the threshold levels they want to set, and some routine remediation processes they want to trigger (such as auto-scaling). The operations management solution should allow you to easily plug this data into your end-user portal. 
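
A minimal sketch of that self-service model, with hypothetical KPI names, thresholds and actions: the tenant defines the policy, and the platform simply evaluates it and hands the chosen remediation to a provisioning or auto-scaling workflow.

# Illustrative self-service policy: the tenant defines the KPI, threshold
# and remediation; the platform just evaluates it.
user_policy = {
    "kpi": "cpu_utilization",   # tenant-chosen KPI
    "threshold": 0.80,          # tenant-chosen threshold
    "action": "scale_out",      # routine remediation to trigger
}

def evaluate(policy, metrics):
    value = metrics[policy["kpi"]]
    if value > policy["threshold"]:
        return policy["action"]   # e.g. hand off to a provisioning workflow
    return "no_action"

print(evaluate(user_policy, {"cpu_utilization": 0.91}))  # scale_out
print(evaluate(user_policy, {"cpu_utilization": 0.42}))  # no_action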

 

6. Make Cloud Services Resilient 

Resiliency is the ultimate goal of proper cloud operations management. If a solution is able to understand the behavior of cloud services and proactively pinpoint potential issues, it is natural for that solution to automatically isolate and eliminate problems. First, the solution must have accurate behavior-learning and analytics capabilities. Second, well-defined policies must be created by a person and then executed either by an automated policy engine or through a human interactive process. Lastly, the solution must plug seamlessly into other lifecycle management solutions, such as provisioning, change management, and service request management. Operations management in a silo cannot make your cloud resilient. You should plan the right architectural design as a foundation and implement a good management process that reflects the paradigm shift to ensure your success.
 
Thought leadership Whitepaper by Brian Singer, BMC Software

Virtualization Implications

Implications from Server Virtualization
1. Due to its evident benefits, "classical" server virtualization will remain a key technology in the coming years
2. But server virtualization is also taking the next step towards becoming a technology that allows IT units to act as a real IT service provider — this will be accelerated by increased customer demand
3. IT units have to decide proactively whether they will provide these services themselves, whether they will use an appliance-based approach and where these services will be located
4. They also have to decide whether they will buy in some of the services from external service providers

 

5. "Wait and see" isn't a real option, since server virtualization and cloud computing are strong instruments for external providers to further improve their competitiveness

 

6. It is expected that the majority of services provided by a virtualized environment for end users will have to be accessible through a web frontend — this will finally allow more flexibility regarding the end-user devices that can be used
 
Implications from Client Virtualization
1. Client virtualization — mainly on servers — will achieve a breakthrough within the upcoming years, since the required software is proven and its attractiveness is high
2. Usage will be tremendously accelerated by the increased use of tablets and smartphones in corporations
3. IT departments must be able to provide a controlled and cost-efficient virtualized environment, fulfilling company security standards, on almost any end-user device
4. This speeds up the transition from the traditional "enterprise owned and managed" clients, where installation images need to be maintained and applications need to be provided, to small-footprint, web-based end-user devices...
5. ...which can be provided based on the BYOD approach if CapEx/cost reduction and a high degree of user flexibility are the focus
 

 

Source JSC

Making Sense of Multi-Tenancy

Whether it's an emerging biotech, a stable managed markets organization, or the world's largest pharmaceutical company, every client needs an adaptable customer relationship management (CRM) system. Life sciences companies must be able to make changes to the system as often as necessary to keep up with market fluctuations, regulatory changes, territory realignments, and technology innovation. A simple field change that takes up to six months in a client/server environment takes just a few minutes with an application "in the cloud."
Cloud computing is a catchy term used to describe the process of taking traditional software off the desktop and moving it to a server-based system that's hosted centrally by a service provider. This approach allows companies to make updates, fix glitches, and manage software from one computer in any location, rather than running around to every system and making changes locally.
While cloud computing might be the catchphrase of the moment, not all systems are created equal. A feature that should be considered when looking for a new software-as-a-service (SaaS) system is multi-tenancy, which is a chief characteristic of mature cloud computing applications.
 
Making Sense of Multi-tenancy
 
Multi-tenancy is the architectural model that allows pharmaceutical SaaS CRM vendors—vendors with products “in the cloud”— to serve multiple customers from a single, shared instance of the application. In other words, only one version of an application is deployed to all customers who share a single, common infrastructure and code base that is centrally maintained. No one customer has access to another’s data, and each can configure their own version of the application to meet their specific needs.

 

Multi-tenant architectures provide a boundary between the platform and the applications that run on it, making it possible to create applications with logic that's independent of the data they control. Instead of hard-coding data tables and page layouts, administrators define attributes and behaviors as metadata that functions as the application's logical blueprint. Individual deployments of those applications occupy virtual partitions rather than separate physical stacks of hardware and software.
These partitions store the metadata that defines each life sciences company’s business rules, fields used, custom objects, and interfaces to other systems. In addition to an application’s metadata, these virtual partitions also store custom code, ensuring that any potential problems with that code will not affect other customers, and preventing bad code associated with one object from affecting any other aspects of an individual customer’s application.
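
A simplified sketch of the metadata idea (not the vendor's actual design, and the field names are invented): each tenant's custom fields live as metadata in its own partition, and one shared code path interprets that metadata when validating records.

# Illustrative metadata-driven configuration: one code base, per-tenant
# metadata describing custom fields. Field names here are invented.
TENANT_METADATA = {
    "pharma_a":  {"fields": {"account": str, "territory": str, "samples_left": int}},
    "biotech_b": {"fields": {"account": str, "trial_phase": str}},
}

def validate(tenant, record):
    """Check a record against the tenant's own field definitions."""
    schema = TENANT_METADATA[tenant]["fields"]
    unknown = set(record) - set(schema)
    if unknown:
        raise ValueError(f"unknown fields for {tenant}: {unknown}")
    return all(isinstance(record[f], t) for f, t in schema.items() if f in record)

print(validate("pharma_a", {"account": "City Clinic", "samples_left": 12}))  # True
print(validate("biotech_b", {"account": "Lab One", "trial_phase": "II"}))    # True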

 

In addition, the model must be totally scalable—both up and down—as a result of employee changes, transaction growth, new product launches, mergers and acquisitions, or any number of business events that can dramatically alter business needs. CRM solutions from traditional on-premise vendors are expensive to scale because of the complexity and cost of scaling each layer of hardware and software stacks, which often require messy system replacements and data migrations.
 
Centralized Upkeep
 
Life sciences organizations benefit from both hardware and software performance improvements with a true multitenant cloud computing solution. When it comes to hardware, the provider sets up server and network infrastructure using the pooled revenues of all of its customers, an investment that would not be financially feasible for any one individual customer to make on its own. It's simply an economy of scale.
Investing in first class hardware results in more scalable, reliable, and secure performance than any other alternative. This is true no matter how large or small the client is—from 10 to 10,000 users, each customer still uses the same hardware.

 

The same is true with software. With multi-tenant SaaS, all customers are running on the same version or same set of code, which means that all of the users are working on the very latest release of the software 100 percent of the time—as opposed to locally installed programs, where there may be 20 different versions of an application in use and 20 different sets of code to maintain, without a single customer on the latest release. For each version of the software, the vendor provides the team to maintain it, investigate bugs, make and deploy patches, and more.
 
No Hardware, No Problem
 
Gartner estimates that two-thirds of IT time and budget is spent on maintaining infrastructure and dealing with updates. Multi-tenant SaaS lowers these costs because there is no hardware to buy or install, and there is no on-site software to maintain or update.

 

In addition to hardware, software, and maintenance savings, cloud computing CRM systems are much faster and therefore less expensive to implement. With multi-tenant SaaS, product design and configuration happens in parallel. That means project team members can log in and start working on day one.
 
The Maturation of a Technology
 
In his book, The Big Switch, Nicholas Carr describes how one hundred years ago, companies stopped generating their own power with “dynamos” and instead plugged into a growing national power grid of electricity. Looking back today, the benefits are obvious: dramatically lower cost, greatly reduced maintenance, and ubiquitous distribution. It also made the process of upgrading much easier as changes made to the common grid were immediately available to the benefit of all users. But most importantly it unleashed the full potential of the industrial revolution to companies of all shapes and sizes.
The life sciences industry is in the midst of a similar revolution today. Cloud computing has become the modern-day version of electrical power— the grid, replaced by the cloud. But only with true, multi-tenant SaaS can companies feel the full effects of this innovation.

 

Pharmaceutical Executive, Online – An Advanstar publication