Connecting Technology and Business.


We know that a formula in a cell is also displayed in the formula bar of the Excel interface. But what about displaying all the formulae in a worksheet at once? Doing so often makes it much easier to review and edit formulae across several cells.

The secret lies in a key usually placed just below the Esc key on your keyboard. The "~" symbol on it is called the tilde. Press it together with the Ctrl key and Excel immediately displays all the formulae in the worksheet. Press the same combination again and you will see the results of those formulae in the cells. Ctrl+~ toggles between formulae and results in the cells of a worksheet. Make sure to return to the results display mode, or some features may not function as expected.


Cost-effective Server Virtualization

Step 1 — Assess your server environment.

A number of valuable tools on the market today enable the collection of performance metrics from your server environment. These tools can help you identify which servers would make good virtualization candidates. A partial list of available tools includes:
• VMware
o Capacity Planner
o vCenter Server Guided Consolidation
• Microsoft
o Microsoft Assessment and Planning Toolkit (MAP)
o System Center Virtual Machine Manager
o System Center Operations Manager

Step 2 — Finalize candidates for virtualization.

Before making any final decisions about which servers to virtualize, it’s important to understand each software vendor’s support policies. Is the software supported to run in a virtual machine? Is it supported to run on specific platforms (e.g., Oracle, Microsoft, etc.)? It’s also critical to determine each software vendor’s licensing policies.
• Does the policy allow for virtualization of the software?
• Does it permit dynamic movement of the virtual machine via vMotion, XenMotion or other means?
• If the host is licensed, does the virtualization policy allow for additional software instances to run at no additional cost?
Once you’ve answered these questions, you will be able to build a final list of virtualization candidates.
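The vetting described above can be reduced to a simple checklist filter. The sketch below is illustrative only; the server names and field names are hypothetical, standing in for the answers you collect from each software vendor:

```python
# Hypothetical checklist filter for building the final candidate list.
# Each record carries the support/licensing answers gathered in Step 2.
servers = [
    {"name": "erp01", "vm_supported": True,  "license_allows_vm": True},
    {"name": "db02",  "vm_supported": False, "license_allows_vm": True},
    {"name": "web03", "vm_supported": True,  "license_allows_vm": False},
]

def is_candidate(server):
    """A server stays on the list only if vendor support and licensing both permit virtualization."""
    return server["vm_supported"] and server["license_allows_vm"]

candidates = [s["name"] for s in servers if is_candidate(s)]
print(candidates)  # -> ['erp01']
```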

Step 3 — Determine the virtualization platform.

Citrix, Microsoft and VMware produce some of the well-known virtualization platforms, or hypervisors. Each platform is different. Once you’ve identified the features, functionality and caveats of each hypervisor, you can make an informed decision about which one will best meet your specific business needs.

Step 4 — Determine the server hardware platform.

To realize the greatest cost savings, determine if server reuse is an option for your business. Note, however, that any system more than three to four years old will probably have been dropped from newer virtualization platforms' hardware compatibility lists (HCLs). Warranty expirations and renewal costs also represent a key consideration in deciding on the merit and cost-effectiveness of server reuse. Finally, some hypervisors require specific processors with specific features (AMD-V or Intel VT) to operate properly. The hypervisor you select may, therefore, determine the feasibility of server reuse.
If new server hardware is the right option, consider rack servers, blade servers, AMD and Intel options and embedded hypervisors. HP, IBM and Sun all manufacture server hardware that leverages virtualization technology.

Step 5 — Determine storage hardware platform.

EMC, HP, IBM, NetApp and Sun Microsystems all produce leading storage systems. Again, it is important to identify the features, functionality and caveats of each platform in order to determine which one will best match your particular business needs.

Step 6 — Revisit backup/restore architecture.

Many vendors offer enhanced backup products for virtual infrastructure. Some of these vendors price by socket or host, thereby reducing costs considerably. If your business is tied to a long-term maintenance contract, consider finishing it out before changing your backup architecture. For those ready to investigate backup/restore architecture, the following companies offer some of the best options for virtualized infrastructure:
• CA • Commvault • EMC • IBM • Microsoft • Symantec • Vizioncore
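The savings from per-socket or per-host pricing are easy to quantify. The sketch below uses invented figures, not any vendor's actual list prices, to show the arithmetic:

```python
# Illustrative comparison of per-VM-agent vs. per-socket backup licensing.
# All prices are hypothetical.
hosts = 4                    # physical hosts in the cluster
sockets_per_host = 2
vms_per_host = 15

price_per_vm_agent = 300     # traditional per-guest backup agent licence
price_per_socket = 1500      # virtualization-aware per-socket licence

per_vm_cost = hosts * vms_per_host * price_per_vm_agent      # 60 agents
per_socket_cost = hosts * sockets_per_host * price_per_socket  # 8 sockets

print(per_vm_cost)      # 18000
print(per_socket_cost)  # 12000
```

With consolidation ratios like this, the per-socket model wins, and the gap widens as more VMs land on each host.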

Step 7 — Understand server operating system licensing.

If you plan on running many Windows Server instances, consider licensing each host processor with Windows Server 2008 Data Center Edition. Doing so enables unlimited Windows Server instances with no additional charge as well as dynamic movement of instances from host to host. These benefits prove extremely useful if you plan to clone and test Windows Server instances. Linux distributions have similar offerings but will vary by vendor.
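The break-even point is simple arithmetic. The sketch below assumes hypothetical prices for a per-processor Datacenter licence and a per-instance licence; actual costs vary by agreement:

```python
import math

# Hypothetical break-even: per-instance licensing vs. per-processor
# Datacenter licensing with unlimited Windows Server instances per host.
datacenter_per_proc = 3000    # assumed price per physical processor
standard_per_instance = 900   # assumed price per Windows Server instance
processors_per_host = 2

# One host fully licensed for unlimited instances:
host_datacenter_cost = processors_per_host * datacenter_per_proc  # 6000

# Number of instances at which Datacenter becomes the cheaper option:
break_even = math.ceil(host_datacenter_cost / standard_per_instance)
print(break_even)  # 7
```

Under these assumed prices, the seventh instance on a host is the point where per-processor licensing pulls ahead, and every cloned or test instance after that is effectively free.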

Step 8 — Plan the project.

Assigning a project manager remains key to ensuring a successful virtualization implementation. Working with the project manager, you should build a project plan.
Determining the vendors through whom to acquire the necessary hardware, software and services represents another critical aspect of planning. Procuring everything from a single vendor will often yield the deepest discounts. But be sure to research all possibilities.
Finally, make sure to build a conversion time line — and stick to it. This step will prove one of the most effective means of controlling cost.

Step 9 — Educate and Implement.

For the virtualization process to remain cost effective, you’ll need to educate your implementation team. This education could take place in various ways:
• Online, classroom or onsite training
• Onsite jumpstart engagements
• Planning and design workshops
The virtualization platform should be rolled out simultaneously with the server and storage hardware. Once everything is in place, you’re ready to optimize the environment, which should include documentation and testing of backup and, if applicable, disaster recovery procedures.

Step 10 — Leverage physical-to-virtual (P2V) conversion to the greatest degree possible.

To realize the greatest cost savings and ROI, you’ll want to eliminate as many physical servers as possible through virtualization. You’ll also want to achieve this conversion quickly and efficiently. After accomplishing these conversions, don’t forget to turn off old systems. These systems can be either recycled, repurposed or sold.
With fewer physical servers, don’t forget that you’ll have less heat dissipation. Maximizing your savings will thus also involve a reassessment of your data center cooling distribution.
As a final consideration, it may be most cost-effective to outsource P2V projects. Utilizing a team of experienced P2V engineers can save a substantial amount of time and money.

Virtualizing disaster recovery using cloud computing

Cloud-based business resilience—a welcome, new approach 

Cloud computing offers an attractive alternative to traditional disaster recovery. "The Cloud" is inherently a shared infrastructure: a pooled set of resources with the infrastructure cost distributed across everyone who contracts for the cloud service. This shared nature makes cloud an ideal model for disaster recovery. Even when we broaden the definition of disaster recovery to include more mundane service interruptions, the need for disaster recovery resources is sporadic. Since all of the organizations relying on the cloud for backup and recovery are very unlikely to need the infrastructure at the same time, costs can be reduced and the cloud can speed recovery time.
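The cost argument can be made concrete with simple arithmetic. The figures below are invented purely for illustration:

```python
# Illustrative cost sharing: N subscribers share one recovery infrastructure
# instead of each building a dedicated one. All numbers are hypothetical.
infrastructure_cost = 1_000_000   # dedicated DR infrastructure, per year
subscribers = 40                  # organizations sharing the cloud service
provider_margin = 1.5             # assumed provider overhead and profit factor

dedicated_cost = infrastructure_cost                       # each org pays in full
shared_cost = infrastructure_cost * provider_margin / subscribers

print(shared_cost)  # 37500.0 per subscriber, vs. 1000000 dedicated
```

Because DR demand is sporadic and uncorrelated across subscribers, the provider can safely oversubscribe the pool, which is what makes the per-subscriber price a small fraction of the dedicated cost.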


Cloud-based business resilience managed services like IBM SmartCloud™ Virtualized Server Recovery are designed to provide a balance of economical shared physical recovery with the speed of dedicated infrastructure. Because the server images and data are continuously replicated, recovery time can be reduced dramatically to less than an hour, and, in many cases, to minutes—or even seconds. However, the costs are more consistent with shared recovery.

Thought leadership white paper from IBM

Ensure your email gets read

1. Make the purpose of the message clear

An effective Subject line includes:
  • A standard subject heading such as "Action Requested," "Response Requested," "FYI," or "Read Only," depending on the action indicated in the body of the message.
  • The meaningful objective or supporting project that the message relates to, for example, "FY '05 budget forecasting."
  • The required action if applicable, for example, "Consolidate departmental budget spreadsheets."
  • The due date if applicable, for example, "Due by July 7."
An example of an effective Subject line is "Action Requested—Consolidate all department spreadsheets for FY '06 budget and return to me by June 15th."

 2. Tell recipients what action you want them to take

  • Action: The recipient needs to perform an action. For example, "Provide a proposal for a 5% reduction in Travel & Entertainment expense."
  • Respond: The recipient needs to respond to your message with specific information. For example, "Let me know if you can attend the staff meeting at 9:00 A.M. on Friday."
  • Read only: The recipient needs to read your message to make sure they understand something. No response is necessary. For example, "Please read the attached sales plan before our next staff meeting on August 12th."
  • FYI only: The recipient should file your message for future reference. No response is necessary. In fact, even reading the message is optional. For example, "Enclosed for your records are your completed expense reports."

 3. Provide the proper data and documents

Make sure you give recipients all of the information they need to complete an action or respond successfully to your request. Your co-workers shouldn't have to come back to you asking for information, whether it is a supporting document or a link to a file on a shared website. You can include supporting information in the body of the message, in an attached file, or in an attached email. In addition, if you want recipients to fill out a form, it's a good idea to attach a sample copy of the form that shows how it should be filled out.

4. Send the message only to relevant recipients
Target your message to the appropriate audience. Only people who have to complete an action on the Subject line should receive your message. Be thoughtful and respectful when you enter names on the To line. People observe your thoughtfulness and the results are more effective. Here are two simple questions to help you filter the To line recipients:
  • Does this email relate to the recipient's objectives?
  • Is the recipient responsible for the action in the Subject line?

5. Use the CC line wisely
It's tempting to put loads of people on the CC line to cover your bases, but doing so is one of the fastest ways to create an unproductive environment. Here are some things to consider when using the CC line:
  • No action or response should be expected of individuals on the CC line. The recipient needs only to read or file the message.
  • Only those individuals whose meaningful objectives are affected by the email should be included on the message. If you are not sure that the information is related to a co-worker's objectives, check with that person to see if they want to receive your email on that topic.

6. Ask "final questions" before you click Send
The final thing you want to do is check your work to be sure you are supporting meaningful actions. Sending clear, well-defined messages can reduce the volume of email you send and receive, encouraging correct action, saving time, and limiting email trails. Make sure you ask the following questions before you send the message:
  • Have I clarified purpose and actions?
  • Have I included supporting documents and written a clear Subject line?
  • Did I write the message clearly enough that it does not come back to me with questions?
  • Am I sending the message to the correct recipients?
  • Have I run the spelling checker and edited the message for grammar and jargon?

Sally McGhee

Mastering Cloud Operations Requirements

Operations management needs to master six new capabilities to deliver on the promise of cloud.


1. Operate on the “Pools” of Compute, Storage, and Memory

Traditionally, operations management solutions have provided coverage for individual servers, storage arrays, or network devices. With the cloud, it becomes imperative to operate at the “pool” level. You have to look beyond what can be monitored at the individual device level. Operations organizations must ensure that they have immediate access to the operational status of the pool. That status could be aggregated by workload (current usage) and capacity (past usage and future projections). Perhaps more importantly, the status needs to accurately reflect the underlying health of the pool, even though individual component availability is not the same as pool availability. The operations management solution you use should understand the behavior of the pool and report the health status based on it. 
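Pool-level status can be derived by aggregating component metrics instead of alarming on every device. A minimal sketch, with hypothetical host names, metrics, and thresholds:

```python
# Aggregate individual host metrics into pool-level health.
# Pool availability is not the same as component availability: the pool is
# healthy as long as the remaining capacity covers the current workload.
hosts = [
    {"name": "esx1", "up": True,  "cpu_ghz": 32},
    {"name": "esx2", "up": True,  "cpu_ghz": 32},
    {"name": "esx3", "up": False, "cpu_ghz": 32},   # one host is down
]
current_workload_ghz = 48   # aggregate demand on the pool

pool_capacity = sum(h["cpu_ghz"] for h in hosts if h["up"])
pool_healthy = pool_capacity >= current_workload_ghz

print(pool_capacity, pool_healthy)  # 64 True
```

Note how a device-level view would raise a critical alert for esx3, while the pool-level view correctly reports a healthy, if degraded, pool.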

2. Monitor Elastic Service

Elasticity is central to cloud architectures, which means that services can dynamically expand and contract based on demand. Your operations management solution must adapt to this dynamic nature. For example, when monitoring the performance of a service, monitoring coverage should expand or contract with the service, automatically. This means that a manual process cannot be used to figure out and deploy monitoring capabilities to the target. Your operations management solution needs to know the configuration of that service and automatically deploy or remove the necessary agents. Another important consideration is coverage for both cloud and non-cloud resources. This is most critical for enterprises building a private cloud. Why? Chances are that not every tier of a multitier application can be moved to the cloud. There may be static, legacy pieces, such as a database or persistence layer, that are still deployed on physical boxes. Services must be monitored no matter where their resources are located, in the cloud or on premises. In addition, a management solution should natively understand the different behavior in each environment. When resources are located in both private and public clouds, your operations solution should monitor services in each seamlessly. It should also support inter-cloud service migration. At the end of the day, services must be monitored no matter where their resources are located, and your operations management solution must know their location and understand the behavior of services accordingly.
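Automatic coverage amounts to continuously reconciling the set of monitored targets against the service's current instances. A minimal sketch; the instance names are invented, and deploying or removing agents would be hooks into your actual monitoring tool:

```python
# Reconcile monitoring coverage with an elastic service's current instances.
def reconcile(current_instances, monitored):
    """Return (to_add, to_remove) so coverage tracks the service automatically."""
    current = set(current_instances)
    covered = set(monitored)
    return sorted(current - covered), sorted(covered - current)

# The service scaled out to vm4/vm5 and retired vm3 since the last pass:
to_add, to_remove = reconcile(
    ["vm1", "vm2", "vm4", "vm5"],   # instances the service runs on now
    ["vm1", "vm2", "vm3"],          # instances we currently monitor
)
print(to_add, to_remove)  # ['vm4', 'vm5'] ['vm3']
```

Run on every configuration change (or on a short interval) and coverage never lags behind the service.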

3. Detect Issues Before They Happen

Compared to workloads in the traditional data center, workloads in the cloud exhibit a wider variety of behavioral issues due to their elastic nature. When service agility is important, relying on reactive alerts or events to support stringent SLAs is not an option — particularly for service providers. You need to detect and resolve issues before they happen. Yet, how do you do that? First and foremost, you should implement a monitoring solution that knows how to learn the behavior of your cloud infrastructure and cloud services.
While this technology exists in the traditional data center, device-level behavior evolves more rapidly and with less conformity in the cloud. That's why your solution should have the ability to learn the behavior of abstracted resources, such as pools, as well as service levels that are based on key business performance indicators (KPIs). Based on those metrics, the solution should give predictive warnings to isolate problems before they affect your customers. To further pinpoint problems, operations should conduct a proper root cause analysis. This becomes even more critical in the cloud, where large numbers of scattered resources are involved. A problem might manifest itself as a sea of red alerts suddenly appearing in a monitoring dashboard. Even if one of them is a critical network alert, chances are you are not going to notice it. Your operations management solution should intelligently detect the root cause of an issue with the cloud infrastructure and highlight that network event in your dashboard, while also invoking your remediation process.
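In its simplest form, behavior learning is a statistical baseline that flags deviations before a hard failure threshold is ever crossed. A toy sketch using a learned mean plus/minus three standard deviations (the metric and samples are invented):

```python
import statistics

def predictive_warning(history, latest, k=3.0):
    """Warn when the latest sample leaves the learned mean +/- k*stdev band."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    return abs(latest - mean) > k * stdev

# Learned response-time baseline (ms) for a hypothetical cloud service:
baseline = [100, 102, 98, 101, 99, 100, 103, 97]

print(predictive_warning(baseline, 104))  # False: within the normal band
print(predictive_warning(baseline, 130))  # True: early warning, well before a timeout
```

Real products use far more sophisticated models, but the principle is the same: the alert fires on deviation from learned behavior, not on a static threshold.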

4. Make Holistic Operations Decisions

In the cloud, you have to manage more types of constructs in your environment than in the traditional IT environment. In addition to servers, operating systems, and applications, you will have compute pools, storage pools, network containers, services, and tenants (for service providers). These new constructs are tightly coupled. You cannot view their performance and capacity data in silos; they have to be managed holistically. It is important to know who your most crucial customers are — and to identify their services so you can focus on recovering them in order of priority. In addition, you may want to send out alerts to affected customers to proactively let them know there is an issue. Your operations management solution should give you a panoramic view of all these aspects and their relationships. Not only will it let you quickly isolate the problem, but it will also save you money if you know which SLAs cost more to breach and therefore should be addressed first. 
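Recovering services in order of breach cost is, at bottom, a sort. A sketch with invented tenants and penalty figures:

```python
# Rank affected services so the costliest SLA breaches are recovered first.
# Tenant names and penalty amounts are invented for illustration.
incidents = [
    {"tenant": "acme",    "service": "billing",  "breach_cost_per_hour": 5000},
    {"tenant": "initech", "service": "reports",  "breach_cost_per_hour": 200},
    {"tenant": "globex",  "service": "web-shop", "breach_cost_per_hour": 12000},
]

recovery_order = sorted(incidents,
                        key=lambda i: i["breach_cost_per_hour"],
                        reverse=True)
print([i["tenant"] for i in recovery_order])  # ['globex', 'acme', 'initech']
```

The hard part in practice is not the sort but the relationship data behind it: knowing which tenants and SLAs sit on top of which pools and hosts, which is exactly the panoramic view described above.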

5. Enable Self-Service for Operations

To give your cloud users their desired experience while also saving on support costs, it's important to provide constant feedback. Traditionally, performance data has not been available to the end user. In the cloud, however, there are far more users and service requests relative to the number of administrators. For that reason, it's important to minimize false alarms and routine manual requests. The best way to do so is to let your end users see the performance and capacity data surrounding their services. You can also let your users define key performance indicators (KPIs) to monitor, the threshold levels they want to set, and routine remediation processes they want to trigger (such as auto-scaling). The operations management solution should allow you to easily plug this data into your end-user portal.
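User-defined KPIs, thresholds, and routine remediations can be modelled as simple rules evaluated against live metrics. A minimal sketch; the rule fields and the auto-scale action name are hypothetical:

```python
# End-user-defined KPI rules: a threshold breach triggers a routine remediation.
def evaluate(rules, metrics):
    """Return the remediation actions triggered by the current metrics."""
    actions = []
    for rule in rules:
        if metrics.get(rule["kpi"], 0) > rule["threshold"]:
            actions.append(rule["action"])
    return actions

# Rules a tenant configured through the self-service portal (hypothetical):
rules = [
    {"kpi": "cpu_percent",      "threshold": 80,  "action": "auto_scale_out"},
    {"kpi": "response_time_ms", "threshold": 500, "action": "notify_tenant"},
]

triggered = evaluate(rules, {"cpu_percent": 91, "response_time_ms": 120})
print(triggered)  # ['auto_scale_out']
```

Exposing both the rules and their evaluation results in the portal is what turns "why is my service slow?" tickets into self-service answers.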


6. Make Cloud Services Resilient 

Resiliency is the ultimate goal of proper cloud operations management. If a solution is able to understand the behavior of cloud services and proactively pinpoint potential issues, it's natural for that solution to automatically isolate and eliminate problems. First, the solution must have accurate behavior-learning and analytics capabilities. Second, well-defined policies must be in place, executed either by an automated policy engine or through a human interactive process. Lastly, the solution must plug seamlessly into other lifecycle management solutions, such as provisioning, change management, and service request management. Operations management in a silo cannot make your cloud resilient. You should plan the right architectural design as a foundation and implement a good management process that reflects the paradigm shift to ensure your success.
Thought leadership white paper by Brian Singer, BMC Software

Virtualization Implications

Implications from Server Virtualization
1.     Due to its evident benefits, “classical” server virtualization will remain a key technology in the coming years
2.     But server virtualization is also taking the next step toward becoming a technology that allows IT units to act as real IT service providers; this will be accelerated by increasing customer demand
3.     IT units have to decide proactively whether they will provide these services themselves, whether they will use an appliance-based approach, and where these services will be located
4.     They also have to decide whether to buy in some of the services from external service providers


5.     “Wait and see” isn’t a real option, since server virtualization and cloud computing are strong instruments that external providers can use to further improve their competitiveness


6.     It is expected that the majority of services provided by a virtualized environment for end users will have to be accessible through a web frontend; this will finally allow more flexibility regarding the end user devices to be used
Implications from Client Virtualization
1.     Client virtualization — mainly on servers — will achieve a breakthrough in the coming years, since the required software is proven and its attractiveness is high
2.     Usage will be tremendously accelerated by the increasing use of tablets and smartphones in corporations
3.     IT departments must be able to provide a controlled, cost-efficient virtualized environment that fulfills company security standards on almost any end user device
4.     This speeds up the transition from the traditional “enterprise owned and managed” clients, where installation images need to be maintained and applications need to be provided, to small-footprint web-based end user devices ...
5.     ... which can be provided based on the BYOD approach if CapEx/cost reduction and a high degree of user flexibility are the focus


Source: JSC

Trim videos in PowerPoint to fit the situation

After you watch your video clips, you might notice that the camera was shaking at the beginning and end of each clip, or that you want to remove a part that is not pertinent to the message of your video.

Fortunately, you can fix these problems with the Trim Video feature by trimming the beginning and end of your video clip.
  1. In Normal view, on the video frame, press Play.
  2. Select the video on the slide.
  3. Under Video Tools, on the Playback tab, in the Editing group, click Trim Video.
  4. In the Trim Video dialog box, do one or more of the following:
    • To trim the beginning of the clip, click the start point (shown in the image below as a green marker, on the far left). When you see the two-headed arrow, drag the arrow to the desired starting position for the video.
    • To trim the end of the clip, click the end point (shown in the image below as a red marker, on the right). When you see the two-headed arrow, drag the arrow to the desired ending position for the video.

Turn on AutoRecover and AutoSave to protect your files in case of a crash

Crashes happen. The power goes out. And sometimes, people accidentally close a file without saving. To avoid losing all your work when stuff like that happens, make sure AutoRecover and AutoSave are turned on:


  1. Click the File tab. 
  2. Under Help, click Options.
  3. Click Save.
  4. Make sure the Save AutoRecover information every x minutes check box is selected.
  5. In Word 2010, Excel 2010 and PowerPoint 2010, make sure the Keep the last autosaved version if I close without saving check box is selected.
Important: The Save button is still your best friend. To be sure you don’t lose your latest work, click Save (or press CTRL+S) often.
To be extra safe, enter a small number in the minutes box, like 10. That way, you’ll never lose more than 10 minutes of work.
On the other hand, if you want to make Office faster, try entering a larger number in the minutes box, like 20.
AutoRecover saves more than your files. It also saves your workspace (if it can). Suppose you open several spreadsheets in Excel and the power goes out. When you restart Excel, AutoRecover tries to open your spreadsheets again, laid out the way they were before, with the same cells selected.
In Word 2010, Excel 2010, and PowerPoint 2010, AutoRecover has another benefit. It can restore earlier versions of your file.