Quadra

Connecting Technology and Business.

Cost-effective Server Virtualization

Step 1 — Assess your server environment.

A number of valuable tools on the market today enable the collection of performance metrics from your server environment. These tools can help you identify which servers would make good virtualization candidates. A partial list of available tools includes:
• VMware
o Capacity Planner
o vCenter Server Guided Consolidation
• Microsoft
o Microsoft Assessment and Planning Toolkit (MAP)
o System Center Virtual Machine Manager
o System Center Operations Manager
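The tools listed above do this at scale, but the underlying idea is simply to sample utilization over time and flag lightly used servers as consolidation candidates. As a rough illustration only, here is a minimal Python sketch using the open-source psutil library; the 20 percent thresholds are an assumption for the example, not vendor guidance:

    import psutil

    SAMPLES, INTERVAL = 12, 5   # roughly one minute of sampling; a real assessment runs for weeks

    cpu, mem = [], []
    for _ in range(SAMPLES):
        cpu.append(psutil.cpu_percent(interval=INTERVAL))   # blocks for INTERVAL seconds per sample
        mem.append(psutil.virtual_memory().percent)

    cpu_avg, mem_avg = sum(cpu) / len(cpu), sum(mem) / len(mem)
    # Illustrative rule of thumb: a server idling below 20% CPU and memory is worth a closer look.
    verdict = "likely candidate" if cpu_avg < 20 and mem_avg < 20 else "review further"
    print(f"CPU {cpu_avg:.0f}%, RAM {mem_avg:.0f}% -> {verdict}")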

Step 2 — Finalize candidates for virtualization.

Before making any final decisions about which servers to virtualize, it’s important to understand each software vendor’s support policies. Is the software supported to run in a virtual machine? Is it supported to run on specific platforms (e.g., Oracle, Microsoft, etc.)? It’s also critical to determine each software vendor’s licensing policies.
• Does the policy allow for virtualization of the software?
• Does it permit dynamic movement of the virtual machine via vMotion, XenMotion or other means?
• If the host is licensed, does the virtualization policy allow for additional software instances to run at no additional cost?
Once you’ve answered these questions, you will be able to build a final list of virtualization candidates.

Step 3 — Determine the virtualization platform.

Citrix, Microsoft and VMware produce some of the well-known virtualization platforms, or hypervisors. Each platform is different. Once you’ve identified the features, functionality and caveats of each hypervisor, you can make an informed decision about which one will best meet your specific business needs.

Step 4 — Determine the server hardware platform.

To realize the greatest cost savings, determine if server reuse is an option for your business. Note, however, that any system more than three to four years old will probably have dropped off the hardware compatibility lists (HCLs) of newer virtualization platforms. Warranty expirations and renewal costs are also a key consideration when weighing the merit and cost-effectiveness of server reuse. Finally, some hypervisors require processors with specific features (AMD-V or Intel VT) to operate properly. The hypervisor you select may, therefore, determine the feasibility of server reuse.
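On a Linux host you can get a quick read on this by looking for the vmx (Intel VT) or svm (AMD-V) flags in /proc/cpuinfo. A small Python sketch of that check follows; note that the flag can be present even when the feature is disabled in the BIOS/UEFI, so treat this as a first pass rather than a substitute for the hypervisor vendor's own compatibility checker.

    # Rough check for hardware virtualization extensions on a Linux host.
    def virtualization_flags(path="/proc/cpuinfo"):
        with open(path) as f:
            flags = set(f.read().split())
        if "vmx" in flags:
            return "Intel VT-x (vmx) reported by CPU"
        if "svm" in flags:
            return "AMD-V (svm) reported by CPU"
        return "No hardware virtualization flag found"

    print(virtualization_flags())   # flag presence still needs to be enabled in BIOS/UEFI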
If new server hardware is the right option, consider rack servers, blade servers, AMD and Intel options and embedded hypervisors. HP, IBM and Sun all manufacture server hardware that leverages virtualization technology.

Step 5 — Determine storage hardware platform.

EMC, HP, IBM, NetApp and Sun Microsystems all produce leading storage systems. Again, it is important to identify the features, functionality and caveats of each platform in order to determine which one will best match your particular business needs.

Step 6 — Revisit backup/restore architecture.

Many vendors offer enhanced backup products for virtual infrastructure. Some of these vendors price by socket or host, thereby reducing costs considerably. If your business is tied to a long-term maintenance contract, consider finishing it out before changing your backup architecture. For those ready to investigate backup/restore architecture, the following companies offer some of the best options for virtualized infrastructure:
• CA • Commvault • EMC • IBM • Microsoft • Symantec • Vizioncore
 

Step 7 — Understand server operating system licensing.

If you plan on running many Windows Server instances, consider licensing each host processor with Windows Server 2008 Datacenter Edition. Doing so allows unlimited Windows Server instances at no additional charge, as well as dynamic movement of instances from host to host. These benefits prove extremely useful if you plan to clone and test Windows Server instances. Linux distributions have similar offerings, but the details vary by vendor.

Step 8 — Plan the project.

Assigning a project manager remains key to ensuring a successful virtualization implementation. Working with the project manager, you should build a project plan.
Determining the vendors through whom to acquire the necessary hardware, software and services represents another critical aspect of planning. Procuring everything from a single vendor will often yield the deepest discounts. But be sure to research all possibilities.
Finally, make sure to build a conversion time line — and stick to it. This step will prove one of the most effective means of controlling cost.

Step 9 — Educate and Implement.

For the virtualization process to remain cost effective, you’ll need to educate your implementation team. This education could take place in various ways:
• Online, classroom or onsite training
• Onsite jumpstart engagements
• Planning and design workshops
The virtualization platform should be rolled out simultaneously with the server and storage hardware. Once everything is in place, you’re ready to optimize the environment, which should include documentation and testing of backup and, if applicable, disaster recovery procedures.

Step 10 — Leverage physical-to-virtual (P2V) conversion to the greatest degree possible.

To realize the greatest cost savings and ROI, you'll want to eliminate as many physical servers as possible through virtualization, and you'll want to achieve this conversion quickly and efficiently. After completing the conversions, don't forget to turn off the old systems; they can be recycled, repurposed or sold.
With fewer physical servers, you'll also have less heat to dissipate. Maximizing your savings will thus also involve reassessing your data center cooling distribution.
As a final consideration, it may be most cost-effective to outsource P2V projects. Utilizing a team of experienced P2V engineers can save a substantial amount of time and money.
-CDW

Virtualizing disaster recovery using cloud computing

Cloud-based business resilience—a welcome, new approach 

Cloud computing offers an attractive alternative to traditional disaster recovery. "The Cloud" is inherently a shared infrastructure: a pooled set of resources with the infrastructure cost distributed across everyone who contracts for the cloud service. This shared nature makes cloud an ideal model for disaster recovery. Even when we broaden the definition of disaster recovery to include more mundane service interruptions, the need for disaster recovery resources is sporadic. Since all of the organizations relying on the cloud for backup and recovery are very unlikely to need the infrastructure at the same time, costs can be reduced and the cloud can speed recovery time.


Cloud-based business resilience managed services like IBM SmartCloud™ Virtualized Server Recovery are designed to balance the economy of shared physical recovery with the speed of dedicated infrastructure. Because the server images and data are continuously replicated, recovery time can be reduced dramatically to less than an hour and, in many cases, to minutes—or even seconds. Yet the costs remain more consistent with those of shared recovery.

Thought leadership white paper from IBM

Ensure your email gets read

1. Make the purpose of the message clear

An effective Subject line states the purpose of the message up front and includes:
  • A standard subject heading such as "Action Requested," "Response Requested," "FYI," or "Read Only," depending on the action indicated in the body of the message.
  • The meaningful objective or supporting project that the message relates to, for example, "FY '05 budget forecasting."
  • The required action if applicable, for example, "Consolidate departmental budget spreadsheets."
  • The due date if applicable, for example, "Due by July 7."
An example of an effective Subject line is "Action Requested—Consolidate all department spreadsheets for FY '06 budget and return to me by June 15th."

 2. Tell recipients what action you want them to take

  • Action: The recipient needs to perform an action. For example, "Provide a proposal for a 5% reduction in Travel & Entertainment expense."
  • Respond: The recipient needs to respond to your message with specific information. For example, "Let me know if you can attend the staff meeting at 9:00 A.M. on Friday."
  • Read only: The recipient needs to read your message to make sure they understand something. No response is necessary. For example, "Please read the attached sales plan before our next staff meeting on August 12th."
  • FYI only: The recipient should file your message for future reference. No response is necessary. In fact, even reading the message is optional. For example, "Enclosed for your records are your completed expense reports."

 3. Provide the proper data and documents

Make sure you give recipients all of the information they need to complete an action or respond successfully to your request. Your co-workers shouldn't have to come back to you asking for information, whether it is a supporting document or a link to a file on a shared website. You can include supporting information in the body of the message, in an attached file, or in an attached email. In addition, if you want recipients to fill out a form, it's a good idea to attach a sample copy of the form that shows how it should be filled out.


4. Send the message only to relevant recipients
Target your message to the appropriate audience. Only people who have to complete the action on the Subject line should receive your message. Be thoughtful and respectful when you enter names on the To line. Recipients notice this thoughtfulness, and your messages become more effective as a result. Here are two simple questions to help you filter the To line recipients:
  • Does this email relate to the recipient's objectives?
  • Is the recipient responsible for the action in the Subject line?

5. Use the CC line wisely
It's tempting to put loads of people on the CC line to cover your bases, but doing so is one of the fastest ways to create an unproductive environment. Here are some things to consider when using the CC line:
  • No action or response should be expected of individuals on the CC line. The recipient needs only to read or file the message.
  • Only those individuals whose meaningful objectives are affected by the email should be included on the message. If you are not sure that the information is related to a co-worker's objectives, check with that person to see if they want to receive your email on that topic.

6. Ask "final questions" before you click Send
Before you click Send, check your work to make sure you are supporting meaningful actions. Sending clear, well-defined messages can reduce the volume of email you send and receive, encouraging correct action, saving time, and limiting email trails. Make sure you ask the following questions before you send the message:
  • Have I clarified purpose and actions?
  • Have I included supporting documents and written a clear Subject line?
  • Did I write the message clearly enough that it does not come back to me with questions?
  • Am I sending the message to the correct recipients?
  • Have I run the spelling checker and edited the message for grammar and jargon?

Sally McGhee

Mastering Cloud Operations Requirements

Operations management needs to master six new capabilities to deliver on the promise of cloud.

 

1. Operate on the “Pools” of Compute, Storage, and Memory

Traditionally, operations management solutions have provided coverage for individual servers, storage arrays, or network devices. With the cloud, it becomes imperative to operate at the “pool” level. You have to look beyond what can be monitored at the individual device level. Operations organizations must ensure that they have immediate access to the operational status of the pool. That status could be aggregated by workload (current usage) and capacity (past usage and future projections). Perhaps more importantly, the status needs to accurately reflect the underlying health of the pool, even though individual component availability is not the same as pool availability. The operations management solution you use should understand the behavior of the pool and report the health status based on it. 
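To make the component-versus-pool distinction concrete, here is a toy aggregation in Python; the host records and the 70 percent "degraded" threshold are invented for the example and not drawn from any particular product.

    # hypothetical host records: (name, healthy, capacity in GHz, current use in GHz)
    hosts = [
        ("host01", True,  48.0, 31.0),
        ("host02", True,  48.0, 12.5),
        ("host03", False, 48.0,  0.0),   # a failed host reduces usable pool capacity
    ]

    raw    = sum(cap for _, _, cap, _ in hosts)
    usable = sum(cap for _, ok, cap, _ in hosts if ok)
    used   = sum(use for _, ok, _, use in hosts if ok)

    # One host down does not mean the pool is down; health depends on remaining headroom.
    status = "healthy" if usable / raw >= 0.70 else "degraded"
    print(f"Pool status: {status}; {used:.1f} of {usable:.1f} GHz in use "
          f"({usable / raw:.0%} of raw capacity still usable)")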

2. Monitor Elastic Service

Elasticity is central to cloud architectures, which means that services can dynamically expand and contract based on demand. Your operations management solution must adapt to this dynamic nature. For example, when monitoring the performance of a service, monitoring coverage should expand or contract with the service — automatically. This means that a manual process cannot be used to figure out and deploy monitoring capabilities to the target. Your operations management solution needs to know the configuration of that service and automatically deploy or remove the necessary agents.
Another important consideration is coverage for both cloud and non-cloud resources. This is most critical for enterprises building a private cloud. Why? Chances are that not every tier of a multitier application can be moved to the cloud. There may be static, legacy pieces, such as a database or persistence layer, that are still deployed on physical boxes. Services must be monitored no matter where resources are located, in the cloud or on premises. In addition, a management solution should natively understand the different behavior in each environment. When resources are located in both private and public clouds, your operations solution should monitor services in each seamlessly. It should also support inter-cloud service migration. At the end of the day, services must be monitored no matter where their resources are located. Your operations management solution must know their location and understand the behavior of services accordingly.
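The "expand and contract automatically" requirement boils down to a reconciliation loop: compare what the service says it is running now with what is already being monitored, then add or retire agents. A minimal Python sketch, where the inventory and the deploy/retire functions are placeholders rather than a real product API:

    already_monitored = {"web-01", "web-02"}
    currently_running = {"web-01", "web-02", "web-03"}   # the service just scaled out

    def deploy_agent(target):
        print(f"deploy monitoring agent to {target}")

    def retire_agent(target):
        print(f"retire monitoring agent from {target}")

    for instance in currently_running - already_monitored:
        deploy_agent(instance)        # new instances picked up automatically
    for instance in already_monitored - currently_running:
        retire_agent(instance)        # scaled-in instances dropped automatically

    already_monitored = set(currently_running)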
 

3. Detect Issues Before They Happen

Compared to workloads in the traditional data center, workloads in the cloud exhibit a wider variety of behavioral issues due to their elastic nature. When service agility is important, relying on reactive alerts or events to support stringent SLAs is not an option — particularly for service providers. You need to detect and resolve issues before they happen. How do you do that? First and foremost, you should implement a monitoring solution that knows how to learn the behavior of your cloud infrastructure and cloud services.
While this technology exists in the traditional data center, device-level behavior evolves more rapidly and with less conformity in the cloud. That's why your solution should be able to learn the behavior of abstracted resources, such as pools, as well as service levels that are based on business key performance indicators (KPIs). Based on those metrics, the solution should give predictive warnings to isolate problems before they affect your customers.
To further pinpoint problems, operations should conduct proper root cause analysis. This becomes even more critical in the cloud, where large numbers of scattered resources are involved. A problem might manifest itself as a sea of red alerts suddenly appearing in a monitoring dashboard; even though one of them may be a critical network alert, chances are you are not going to notice it. Your operations management solution should intelligently detect the root cause of an issue with the cloud infrastructure and highlight that network event in your dashboard, while also invoking your remediation process.
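One simple way to "learn" behavior is to baseline each metric and warn when a reading drifts well outside that baseline, before any hard threshold is breached. A minimal Python sketch of the idea, with made-up latency samples and a three-standard-deviation rule chosen purely for illustration:

    from statistics import mean, stdev

    history = [41, 43, 40, 44, 42, 45, 43, 41, 44, 42]   # recent response-time samples (ms)
    latest  = 58

    baseline, spread = mean(history), stdev(history)
    if spread and abs(latest - baseline) > 3 * spread:
        print(f"Predictive warning: {latest} ms is far from the learned baseline "
              f"of {baseline:.1f} ms (spread {spread:.1f} ms)")
    else:
        print("Within learned behavior")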
 

4. Make Holistic Operations Decisions

In the cloud, you have to manage more types of constructs in your environment than in the traditional IT environment. In addition to servers, operating systems, and applications, you will have compute pools, storage pools, network containers, services, and tenants (for service providers). These new constructs are tightly coupled. You cannot view their performance and capacity data in silos; they have to be managed holistically. It is important to know who your most crucial customers are — and to identify their services so you can focus on recovering them in order of priority. In addition, you may want to send out alerts to affected customers to proactively let them know there is an issue. Your operations management solution should give you a panoramic view of all these aspects and their relationships. Not only will it let you quickly isolate the problem, but it will also save you money if you know which SLAs cost more to breach and therefore should be addressed first. 
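The point about SLA breach cost can be reduced to a simple ordering of affected services; the Python snippet below uses penalty figures invented only to show the idea.

    affected = [
        {"service": "order-entry",   "tenant": "Customer A", "penalty_per_hour": 5000},
        {"service": "internal-wiki", "tenant": "IT",         "penalty_per_hour": 0},
        {"service": "billing-api",   "tenant": "Customer B", "penalty_per_hour": 12000},
    ]

    # Recover the costliest SLA breaches first, and notify those tenants proactively.
    for svc in sorted(affected, key=lambda s: s["penalty_per_hour"], reverse=True):
        print(f"Recover {svc['service']} ({svc['tenant']}), "
              f"penalty {svc['penalty_per_hour']} per hour")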

5. Enable Self-Service for Operations

To give your cloud users their desired experience while also saving on support costs, it’s important to provide constant feedback. Traditionally, performance data has not been available to the end user. In the cloud, however, there are far more users and service requests for a relatively smaller number of administrators. For that reason, it’s important to minimize false alarms and routine manual requests. The best way is to let your end users see the performance and capacity data surrounding their services. You can also let your users define the key performance indicators (KPIs) to monitor, the threshold levels they want to set, and the routine remediation processes they want to trigger (such as auto-scaling). The operations management solution should allow you to easily plug this data into your end-user portal.
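In practice this usually means letting each user or tenant hand the operations layer a small KPI definition that both the portal and the monitoring engine read. The Python schema below is purely illustrative:

    user_defined_kpis = {
        "checkout-service": {
            "response_time_ms": {"warn": 300, "critical": 800},
            "error_rate_pct":   {"warn": 1.0, "critical": 5.0},
            "on_critical": "auto_scale_out",   # routine remediation chosen by the user
        }
    }

    def evaluate(service, metric, value):
        limits = user_defined_kpis[service][metric]
        if value >= limits["critical"]:
            return "critical", user_defined_kpis[service]["on_critical"]
        return ("warn", None) if value >= limits["warn"] else ("ok", None)

    print(evaluate("checkout-service", "response_time_ms", 950))   # ('critical', 'auto_scale_out')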

 

6. Make Cloud Services Resilient 

Resiliency is the ultimate goal of proper cloud operations management. If a solution is able to understand the behavior of cloud services and proactively pinpoint potential issues, it is natural for that solution to automatically isolate and eliminate problems. First, the solution must have accurate behavior-learning and analytics capabilities. Second, well-defined policies must be in place, executed either by an automated policy engine or through a human interactive process. Lastly, the solution must plug seamlessly into other lifecycle management solutions, such as provisioning, change management, and service request management. Operations management in a silo cannot make your cloud resilient. You should plan the right architectural design as a foundation and implement a sound management process that reflects this paradigm shift to ensure your success.
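The "well-defined policies" requirement can be pictured as a table that maps a detected condition to a remediation action and to whether a human must approve it first. Everything in the Python sketch below is a made-up example of that idea, not a description of any specific policy engine.

    policies = {
        "host_failure":          {"action": "restart VMs on a spare host",  "needs_approval": False},
        "datastore_nearly_full": {"action": "expand the datastore",         "needs_approval": True},
        "service_unresponsive":  {"action": "recycle the service instance", "needs_approval": False},
    }

    def remediate(condition):
        policy = policies.get(condition)
        if policy is None:
            return f"{condition}: no policy defined, escalate to an operator"
        if policy["needs_approval"]:
            return f"{condition}: queue '{policy['action']}' for operator approval"
        return f"{condition}: execute '{policy['action']}' automatically"

    print(remediate("datastore_nearly_full"))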
 
Thought leadership Whitepaper by Brian Singer, BMC Software

Trim videos in PowerPoint to fit the situation

 After you watch your video clips, you might notice that you were shaking the camera at the beginning and end of each clip, or that you want to remove a part that is not pertinent to the message of your video.

Fortunately, you can fix these problems with the Trim Video feature by trimming the beginning and end of your video clip.
  1. In Normal view, on the video frame, press Play.
  2. Select the video on the slide.
  3. Under Video Tools, on the Playback tab, in the Editing group, click Trim Video.
  4. In the Trim Video dialog box, do one or more of the following:
    • To trim the beginning of the clip, click the start point (shown in the image below as a green marker, on the far left). When you see the two-headed arrow, drag the arrow to the desired starting position for the video.
    • To trim the end of the clip, click the end point (shown in the image below as a red marker, on the right). When you see the two-headed arrow, drag the arrow to the desired ending position for the video.

Virtualization Implications

Implications from Server Virtualization
1.     Due to its evident benefits, "classical" server virtualization will remain a key technology in the coming years
2.     But server virtualization also takes the next step toward a technology that allows IT units to act as a real IT service provider — this will be accelerated by increased customer demand
3.     IT units have to decide proactively whether they will provide these services themselves, whether they will use an appliance-based approach, and where these services will be located
4.     They also have to decide whether to buy in some of the services from external service providers

 

5.     “Wait and see” isn’t a real option, since server virtualization and cloud computing are strong instruments that external providers can use to further improve their competitiveness

 

6.     It is expected that the majority of services provided to end users by a virtualized environment will have to be accessible through a web front end — this will finally allow more flexibility in the end-user devices that can be used
 
Implications from Client Virtualization
1.     Client virtualization — hosted mainly on servers — will achieve a breakthrough within the coming years, since the required software is proven and its attractiveness is high
2.     Usage will be tremendously accelerated by the increased use of tablets and smartphones in corporations
3.     IT departments must be able to provide a controlled, cost-efficient virtualized environment that fulfills company security standards on almost any end-user device
4.     This speeds up the transition from the traditional “enterprise owned and managed” clients, where installation images need to be maintained and applications need to be provided, to small-footprint, web-based end-user devices ...
5.     ... which can be provided under a BYOD approach if CapEx/cost reduction and a high degree of user flexibility are the focus
 

 

Source JSC

Turn on AutoRecover and AutoSave to protect your files in case of a crash

Crashes happen. The power goes out. And sometimes, people accidentally close a file without saving. To avoid losing all your work when stuff like that happens, make sure AutoRecover and AutoSave are turned on:

 

  1. Click the File tab. 
  2. Under Help, click Options.
  3. Click Save.
  4. Make sure the Save AutoRecover information every x minutes check box is selected.
  5. In Word 2010, Excel 2010 and PowerPoint 2010, make sure the Keep the last autosaved version if I close without saving check box is selected.
 Important    The Save button is still your best friend. To be sure you don’t lose your latest work, click Save  (or press CTRL+S) often.
 
To be extra safe, enter a small number in the minutes box, like 10. That way, you’ll never lose more than 10 minutes of work.
On the other hand, if you want to make Office faster, try entering a larger number in the minutes box, like 20.
 
AutoRecover saves more than your files. It also saves your workspace (if it can). Suppose you open several spreadsheets in Excel and the power goes out. When you restart Excel, AutoRecover tries to open your spreadsheets again, laid out the way they were before, with the same cells selected.
In Word 2010, Excel 2010, and PowerPoint 2010, AutoRecover has another benefit. It can restore earlier versions of your file.

VM Security Vulnerabilities

While hypervisors are considered better secured than general-purpose operating systems, virtualization does introduce a new and potentially devastating threat matrix to an enterprise environment. Here are some of the virtualization-specific threats and vulnerabilities that IT and security administrators should be aware of before deploying virtualization environments.

 

 
• VM Sprawl:
VM sprawl refers to the uncontrolled deployment of VMs in an enterprise environment. Deploying a new VM on an existing VM server is a simple, short and quick process, so if an enterprise doesn’t have authorization policies around
a) VM change management,
b) a formal review process for VM security before deployment, and/or
c) an authorized set of VM templates,
then VM deployments can get out of control — a condition commonly known as “VM sprawl”. VM sprawl is one of the biggest problems in enterprise deployments of virtualization.
 
• Hyperjacking:
Hyperjacking is a term used for an attack that takes control of the hypervisor that creates the virtual environment within a VM host. Because a hypervisor runs beneath the host OS, a rogue hypervisor, once installed, can take complete control of the virtualization server, all the guest VMs within the virtualized environment and possibly the host OS as well. So far, hyperjacking vulnerabilities have been mostly specific to Type-2 hypervisors. However, hyperjacking of the service console or Dom0 on Type-1 hypervisors is possible, which in essence would allow the attacker unlimited access to the entire virtualization server. Regular security measures such as endpoint firewalls, IDS/IPS and anti-virus are ineffective and defenseless against hyperjacking, since security solutions running in a VM or on the server are not even aware that the host machine has been compromised. Though largely theoretical at this point, it is a critical threat to the security of every virtualized environment.
 
• VM Escape:
Normally, virtual machines are encapsulated, isolated environments. The operating system running inside a virtual machine shouldn’t know that it is virtualized, and there should be no way to break out of the virtual machine and alter the parent hypervisor. The process of breaking out and interacting with the hypervisor or VM host is called a “VM escape”.
 
• Incorrect VM Isolation:
VM isolation is a critical aspect of keeping a virtualized environment safe. Just as physical machines are segregated by physical firewalls, virtual machines should be restricted in how they communicate with one another. Incorrect VM isolation can result in problems ranging from reduced virtualization performance (one VM constantly communicating with another consumes local resources needed for more important tasks) to denial of service and VM takeover.
 

 

 • Denial of Service:
Several types of denial-of-service exploits and vulnerabilities have been discovered in various hypervisors from different vendors. These potential DoS vulnerabilities range from traditional network-based or remote attacks that bring the host or a specific guest OS down, all the way to more exotic types of denial of service, such as those that exploit hypervisor or virtualization-tool and backdoor communications.

 

 
• VM Poaching (or Resource Hogging):
VM poaching occurs when one guest OS takes up more CPU or other resources than were allocated to it, at the expense of the other guests running in the same virtualized environment. A runaway VM can completely consume the hypervisor, starving the rest of the VMs running within it. VM poaching can occur with any of the hypervisor resources, including memory, CPU, network and/or disk.
 
• Unsecured VM Migration (vMotion):
When a VM is moved from one VM host to another, the security policies and tools set up on the new VM host need to be updated for the moved VM, so that the same security policies can be enforced on the new host as well. The dynamic nature of VM migration could potentially open up security risks and exposure, not only for the migrated VM but also for the new VM host and the other guests running on that host.
-      From a Whitepaper from RedCannon Security Inc.

Making Sense of Multi-Tenancy

Whether it’s an emerging biotech, a stable managed markets organization, or the world’s largest pharmaceutical company, every client needs an adaptable customer relationship management (CRM) system. Life sciences companies must be able to make changes to the system as often as necessary to keep up with market fluctuations, regulatory changes, territory realignments, and technology innovation. A simple field change that takes up to six months in a client/server environment takes just a few minutes with an application “in the cloud.”
Cloud computing is a witty term used to describe the process of taking traditional software off the desktop and moving it to a server-based system that’s hosted centrally by a service provider. This service allows companies to make updates, fix glitches, and manage software from one computer in any location, rather than running around to every system and making changes locally.
While cloud computing might be the catchphrase of the moment, not all systems are created equal. One feature that should be considered when looking for a new software-as-a-service (SaaS) system is multi-tenancy, a chief characteristic of a mature cloud computing application.
 
Making Sense of Multi-tenancy
 
Multi-tenancy is the architectural model that allows pharmaceutical SaaS CRM vendors—vendors with products “in the cloud”— to serve multiple customers from a single, shared instance of the application. In other words, only one version of an application is deployed to all customers who share a single, common infrastructure and code base that is centrally maintained. No one customer has access to another’s data, and each can configure their own version of the application to meet their specific needs.

 

Multi-tenant architectures provide a boundary between the platform and the applications that run on it, making it possible to create applications with logic that’s independent of the data they control. Instead of hard-coding data tables and page layouts, administrators define attributes and behaviors as metadata that functions as the application’s logical blueprint. Individual deployments of those applications occupy virtual partitions rather than separate physical stacks of hardware and software.
These partitions store the metadata that defines each life sciences company’s business rules, fields used, custom objects, and interfaces to other systems. In addition to an application’s metadata, these virtual partitions also store custom code, ensuring that any potential problems with that code will not affect other customers, and preventing bad code associated with one object from affecting any other aspects of an individual customer’s application.
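A stripped-down illustration of the metadata-driven model in Python: each tenant's virtual partition holds only metadata describing its configuration, and one shared code path interprets that metadata at run time. The tenants and field names below are invented for the example.

    # One shared code base; per-tenant metadata instead of per-tenant code.
    tenant_metadata = {
        "emerging-biotech": {"account_fields": ["name", "territory", "trial_phase"]},
        "global-pharma":    {"account_fields": ["name", "territory", "formulary_status", "key_account_tier"]},
    }

    def account_form(tenant):
        # The form layout comes from the tenant's logical blueprint, not hard-coded tables or pages.
        return [f"<input name='{field}'/>" for field in tenant_metadata[tenant]["account_fields"]]

    print(account_form("emerging-biotech"))
    print(account_form("global-pharma"))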

 

In addition, the model must be totally scalable—both up and down—as a result of employee changes, transaction growth, new product launches, mergers and acquisitions, or any number of business events that can dramatically alter business needs. CRM solutions from traditional on-premise vendors are expensive to scale because of the complexity and cost of scaling each layer of hardware and software stacks, which often require messy system replacements and data migrations.
 
Centralized Upkeep
 
Life sciences organizations benefit from both hardware and software performance improvements with a true multitenant cloud computing solution. On the hardware side, the provider sets up server and network infrastructure, funded by the pooled revenue from all of its customers, that would not be financially feasible for any one customer to purchase on its own. It’s simply an economy of scale.
Investing in first class hardware results in more scalable, reliable, and secure performance than any other alternative. This is true no matter how large or small the client is—from 10 to 10,000 users, each customer still uses the same hardware.

 

The same is true with software. With multi-tenant SaaS, all customers are running on the same version or same set of code, which means that all of the users are working on the very latest release of the software 100 percent of the time—as opposed to locally installed programs where there may be 20 different versions of an application in use and 20 different sets of code to maintain without a single customer on the latest release. For each version of the software, the vendor provides the team to maintain it, investigate bugs, make and deploy patches, and more.
 
No Hardware, No Problem
 
Gartner estimates that two-thirds of IT time and budgets are spent on maintaining infrastructure and dealing with updates. Multi-tenant SaaS lowers these costs because there is no hardware to buy or install, and there is no on-site software to maintain or update.

 

In addition to hardware, software, and maintenance savings, cloud computing CRM systems are much faster and therefore less expensive to implement. With multi-tenant SaaS, product design and configuration happens in parallel. That means project team members can log in and start working on day one.
 
The Maturation of a Technology
 
In his book, The Big Switch, Nicholas Carr describes how one hundred years ago, companies stopped generating their own power with “dynamos” and instead plugged into a growing national power grid of electricity. Looking back today, the benefits are obvious: dramatically lower cost, greatly reduced maintenance, and ubiquitous distribution. It also made the process of upgrading much easier as changes made to the common grid were immediately available to the benefit of all users. But most importantly it unleashed the full potential of the industrial revolution to companies of all shapes and sizes.
The life sciences industry is in the midst of a similar revolution today. Cloud computing has become the modern-day version of electrical power— the grid, replaced by the cloud. But only with true, multi-tenant SaaS can companies feel the full effects of this innovation.

 

Pharmaceutical Executive, Online – An Advanstar publication

Sumifs and Countifs in Excel

Let us assume that you have a range of numbers in an Excel worksheet. You want to sum all the numbers in the range that meet a certain condition, and you also want to know how many numbers in that range satisfy it. Here is how you can do this:

Go to the cell where you want the sum of the numbers that fall under the condition

Enter the formula =sumif(range, condition). For example, if you want the sum of all numbers that are less than 500 in the range B2 to H17, enter =sumif(B2:H17, "<500").

Go to the cell where you want to display the number of cells that contain numbers that fall under your condition

Enter the formula =countif(range, condition). For example, if you want a count of all numbers that are less than 500 in the range B2 to H17, enter =countif(B2:H17, "<500").

This is one of the ways of using the SUMIF and COUNTIF functions.
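If you need more than one condition, Excel also provides SUMIFS and COUNTIFS, which accept multiple range/criteria pairs. For example, assuming amounts in B2:B17 and a region name in C2:C17 (an illustrative layout, not the range used above), =SUMIFS(B2:B17, B2:B17, "<500", C2:C17, "East") sums the amounts below 500 for the East region, and =COUNTIFS(B2:B17, "<500", C2:C17, "East") counts them.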