Quadra

Connecting Technology and Business.

Windows Hello for Business

Current methods of authentication that rely on passwords alone are not sufficient to keep users safe. Users reuse and forget passwords. Passwords are breachable, phishable, crackable, and guessable, and they are also vulnerable to attacks like “pass the hash”.

What is Windows Hello for Business?

Windows Hello for Business is a private/public key or certificate-based authentication approach for organizations and consumers that goes beyond passwords. This form of authentication relies on key pair credentials that can replace passwords and are resistant to breaches, thefts, and phishing.

Windows Hello for Business lets a user authenticate to a Microsoft account, a Windows Server Active Directory account, a Microsoft Azure Active Directory (Azure AD) account, or a non-Microsoft service that supports Fast IDentity Online (FIDO) authentication. After an initial two-step verification during Windows Hello for Business enrollment, Windows Hello for Business is set up on the user's device, and the user sets a gesture, which can be Windows Hello or a PIN. The user provides the gesture to verify their identity. Windows then uses Windows Hello for Business to authenticate the user and help them to access protected resources and services.

The private key is made available solely through a “user gesture” like a PIN, biometrics, or a remote device like a smart card that the user uses to sign in to the device. This information is linked to a certificate or an asymmetrical key pair. The private key is hardware attested if the device has a Trusted Platform Module (TPM) chip. The private key never leaves the device.

The public key is registered with Azure Active Directory and Windows Server Active Directory (for on-premises). Identity Providers (IDPs) validate the user by mapping the public key of the user to the private key, and provide sign-in information through One Time Password (OTP), PhoneFactor, or a different notification mechanism.

Why should enterprises adopt Windows Hello for Business?

By enabling Windows Hello for Business, enterprises can make their resources even more secure by:

  • Setting up Windows Hello for Business with a hardware-preferred option. This means that keys will be generated on TPM 1.2 or TPM 2.0 when available. When TPM is not available, software will generate the key.
  • Defining the complexity and length of the PIN, and whether Hello usage is enabled in your organization.
  • Configuring Windows Hello for Business to support smart card-like scenarios by using certificate-based trust.

How does Windows Hello for Business work?

  1. Keys are generated on the hardware by the TPM or in software. Many devices have a built-in TPM chip that secures the hardware by integrating cryptographic keys into the device. TPM 1.2 or TPM 2.0 generates the keys, or certificates are created from the generated keys.
  2. The TPM attests these hardware-bound keys.
  3. A single unlock gesture unlocks the device. This gesture allows access to multiple resources if the device is domain-joined or Azure AD-joined.

How does the Windows Hello for Business lifecycle work?

  • The user proves their identity through multiple built-in proofing methods (gestures, physical smart cards, multi-factor authentication) and sends this information to an Identity Provider (IDP) like Azure Active Directory or on-premises Active Directory.
  • The device then creates the key, attests the key, takes the public portion of this key, attaches it to attestation statements, signs in, and sends it to the IDP to register the key.
  • As soon as the IDP registers the public portion of the key, the IDP challenges the device to sign with the private portion of the key.
  • The IDP then validates and issues the authentication token that lets the user and the device access the protected resources. IDPs can write cross-platform apps or use browser support (via JavaScript/Webcrypto APIs) to create and use Windows Hello for Business credentials for their users.
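
The core of this exchange can be illustrated with a short, self-contained sketch. This is a minimal illustration of the general register-then-challenge pattern using the Python cryptography package, not the actual Windows Hello for Business protocol; the key type, key size, and padding are assumptions made for the example.

    # Minimal sketch: the device generates a key pair, the IDP keeps only
    # the public key, then verifies a signed challenge. Illustrative only -
    # not the real Windows Hello for Business protocol.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # 1. Enrollment: the device creates the key pair (in practice, inside the TPM).
    device_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    registered_public_key = device_private_key.public_key()  # sent to the IDP

    # 2. The IDP challenges the device with a random nonce.
    nonce = os.urandom(32)

    # 3. The device signs the nonce with the private key, which never leaves it.
    signature = device_private_key.sign(nonce, padding.PKCS1v15(), hashes.SHA256())

    # 4. The IDP verifies the signature; verify() raises InvalidSignature on failure.
    registered_public_key.verify(signature, nonce, padding.PKCS1v15(), hashes.SHA256())
    print("Challenge verified - authentication token can be issued")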

What are the deployment requirements for Windows Hello for Business?

At the enterprise level

The enterprise has an Azure subscription.

At the user level

The user's computer runs Windows 10 Professional or Enterprise.

-from a document in Microsoft Docs

Password spray attacks and four sure steps to disrupt them

As long as we’ve had passwords, people have tried to guess them. Let us get up to speed on a common attack called password spray, which has become much more frequent recently, and on some best practices we can adopt to defend against it.

In a password spray attack, the bad guys try the most common passwords across many different accounts and services to gain access to any password protected assets they can find. Usually these span many different organizations and identity providers.

Four easy steps to disrupt password spray attacks:

Step 1: Use cloud authentication

In the cloud, we see billions of sign-ins to Microsoft systems every day. Microsoft’s security detection algorithms detect and block attacks as they are happening. Because these are real-time detection and protection systems driven from the cloud, they are available only when doing Azure AD authentication in the cloud (including Pass-Through Authentication).

Smart Lockout

In the cloud, Microsoft uses Smart Lockout to differentiate between sign-in attempts that look like they’re from the valid user and sign-ins from what may be an attacker. They can lock out the attacker while letting the valid user continue using the account. This prevents denial-of-service on the user and stops overzealous password spray attacks. This applies to all Azure AD sign-ins regardless of license level and to all Microsoft account sign-ins.

Tenants using Active Directory Federation Services (ADFS) will be able to use Smart Lockout natively in ADFS in Windows Server 2016 beginning in March 2018; look for this capability to arrive via Windows Update.

IP Lockout

IP lockout works by analyzing those billions of sign-ins to assess the quality of traffic from each IP address hitting Microsoft’s systems. With that analysis, IP lockout finds IP addresses acting maliciously and blocks those sign-ins in real-time.

Attack Simulations

Now available in public preview, Attack Simulator, part of Office 365 Threat Intelligence, enables customers to launch simulated attacks on their own end users, determine how those users behave in the event of an attack, update policies, and ensure that appropriate security tools are in place to protect the organization from threats like password spray attacks.

Step 2: Use multi-factor authentication

A password is the key to accessing an account, but in a successful password spray attack, the attacker has guessed the correct password. To stop them, we need to use something more than just a password to distinguish between the account owner and the attacker. The three ways to do this are below.

Risk-based multi-factor authentication

Azure AD Identity Protection uses the sign-in data mentioned above and adds on advanced machine learning and algorithmic detection to risk score every sign-in that comes in to the system. This enables enterprise customers to create policies in Identity Protection that prompt a user to authenticate with a second factor if and only if there’s risk detected for the user or for the session. This lessens the burden on your users and puts blocks in the way of the bad guys.

Always-on multi-factor authentication

For even more security, enterprises can use Azure MFA to require multi-factor authentication for their users all the time, both in cloud authentication and in ADFS. While this requires end users to always have their devices and to perform multi-factor authentication more frequently, it provides the most security for the enterprise. It should be enabled for every admin in an organization.

Azure MFA as primary authentication

In ADFS 2016, Microsoft offers the ability to use Azure MFA as primary authentication for passwordless authentication. This is a great tool to guard against password spray and password theft attacks: if there’s no password, it can’t be guessed. This works well for all types of devices and form factors. Additionally, enterprises can now use the password as the second factor only after an OTP has been validated with Azure MFA.

Step 3: Better passwords for everyone

Even with all the above, a key component of password spray defence is for all users to have passwords that are hard to guess. It’s often difficult for users to know how to create hard-to-guess passwords. Microsoft helps you make this happen with these tools.

Banned passwords

In Azure AD, every password change and reset runs through a banned password checker. When a new password is submitted, it’s fuzzy-matched against a list of words that no one, ever, should have in their password (and l33t-sp3@k spelling doesn’t help). If it matches, it’s rejected, and the user is asked to choose a password that’s harder to guess. Microsoft builds the list of the most commonly attacked passwords and updates it frequently.
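
As an illustration, a banned-password check with l33t-speak normalization can be sketched in a few lines of Python. The substitution table and word list below are invented for the example; Azure AD’s actual matcher and banned list are more sophisticated.

    # Hypothetical sketch of a banned-password check with l33t-speak
    # normalization; Azure AD's real matcher is more sophisticated.
    LEET_MAP = str.maketrans("@01!3$5", "aoliess")

    BANNED_WORDS = {"password", "letmein", "qwerty", "iloveyou"}

    def is_banned(candidate: str) -> bool:
        normalized = candidate.lower().translate(LEET_MAP)
        # Reject if any banned word appears anywhere in the password.
        return any(word in normalized for word in BANNED_WORDS)

    print(is_banned("P@ssw0rd123"))           # True: normalizes to "password123"
    print(is_banned("correct-horse-battery")) # False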

Custom banned passwords

To make banned passwords even better, Microsoft is going to allow tenants to customize their banned password lists. Admins can choose words common to their organization—famous employees and founders, products, locations, regional icons, etc.—and prevent them from being used in their users’ passwords. This list is enforced in addition to the global list, so enterprises don’t have to choose one or the other. It’s in limited preview now and will roll out during 2018.

Banned passwords for on-premises changes

This spring, Microsoft is launching a tool to let enterprise admins ban passwords in hybrid Azure AD–Active Directory environments. Banned password lists will be synchronized from the cloud to the on-premises environment and enforced on every domain controller with the agent. This helps admins ensure users’ passwords are harder to guess no matter where—cloud or on-premises—the user changes their password. This launched to limited private preview in February 2018 and will reach general availability this year.

Change how you think about passwords

A lot of common conceptions about what makes a good password are wrong. Usually something that should help mathematically actually results in predictable user behaviour: for example, requiring certain character types and periodic password changes both result in specific password patterns. If an enterprise is using Active Directory with PTA or ADFS, it should update its password policies. If it is using cloud-managed accounts, it should consider setting passwords to never expire.

Step 4: More awesome features in ADFS and Active Directory

If an enterprise is using hybrid authentication with ADFS and Active Directory, there are more steps they can take to secure their environment against password spray attacks.

The first step: for organizations running ADFS 2.0 or Windows Server 2012, plan to move to ADFS in Windows Server 2016 as soon as possible. The latest version is updated more quickly and has a richer set of capabilities, such as extranet lockout. Microsoft has made it easy to upgrade from Windows Server 2012 R2 to 2016.

Block legacy authentication from the Extranet

Legacy authentication protocols don’t have the ability to enforce MFA, so the best approach is to block them from the extranet. This will prevent password spray attackers from exploiting the lack of MFA on those protocols.

Enable ADFS Web Application Proxy Extranet Lockout

If enterprises do not have extranet lockout in place at the ADFS Web Application proxy, they should enable it as soon as possible to protect their users from potential password brute force compromise.

Deploy Azure AD Connect Health for ADFS

Azure AD Connect Health captures IP addresses recorded in the ADFS logs for bad username/password requests, gives admins additional reporting on an array of scenarios, and provides additional insight to support engineers when opening assisted support cases.

(To deploy, admins must install the latest version (2.6.491.0) of the Azure AD Connect Health Agent for ADFS on all ADFS servers. ADFS servers must run Windows Server 2012 R2 with KB 3134222 installed, or Windows Server 2016.)

Use non-password-based access methods

Without a password, a password can’t be guessed. These non-password-based authentication methods are available for ADFS and the Web Application Proxy:

  • Certificate-based authentication allows username/password endpoints to be blocked completely at the firewall.
  • Azure MFA, as mentioned above, can be used as a second factor in cloud authentication and in ADFS 2012 R2 and 2016. It can also be used as a primary factor in ADFS 2016 to completely remove the possibility of password spray.
  • Windows Hello for Business, available in Windows 10 and supported by ADFS in Windows Server 2016, enables completely password-free access, including from the extranet, based on strong cryptographic keys tied to both the user and the device. This is available for corporate-managed devices that are Azure AD joined or Hybrid Azure AD joined as well as personal devices via “Add Work or School Account” from the Settings app.
- Based on a blog from Microsoft Security

A free ticket to kickstart your Digital Transformation journey with Amazon

If your enterprise is preparing for a digital transformation journey and is looking for a simple way to test the waters (or road-test, if you prefer), here is an offer that is hard to refuse – a free ticket to kick-start your journey, and with the pioneer of infrastructure as a service – Amazon.

Let us first look at the services offered free for 12 months by AWS in its Free Tier.

(These offers are available only to new AWS customers, for 12 months following the AWS sign-up date.)

Elastic Compute Cloud (EC2)

Use this to create Virtual machines for your workloads.

  • 750 hours of Amazon EC2 Linux t2.micro instance usage (1 GiB of memory and 32-bit and 64-bit platform support) – enough hours to run continuously each month
  • 750 hours of Amazon EC2 Microsoft Windows Server† t2.micro instance usage (1 GiB of memory and 32-bit and 64-bit platform support) – enough hours to run continuously each month
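
To see what consuming this free allowance looks like in practice, here is a minimal sketch using the boto3 SDK for Python (it assumes AWS credentials are already configured, and the AMI ID is a placeholder to replace with a current free-tier-eligible image for your region).

    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    # Launch one free-tier-eligible t2.micro instance (750 hours/month free).
    instances = ec2.create_instances(
        ImageId="ami-xxxxxxxx",  # placeholder: pick a free-tier-eligible AMI
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    print("Launched instance:", instances[0].id)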

Elastic Load Balancer

Automatically distributes incoming application traffic across multiple targets – Available as Application load balancer, Network load balancer and Classic load balancer

  • 750 hours of an Elastic Load Balancer shared between Classic and Application load balancers, 15 GB data processing for Classic load balancers, and 15 LCUs for Application load balancers

Elastic Block Storage

Persistent block storage volumes for EC2 instances / Virtual machines

  • 30 GB of Amazon Elastic Block Storage in any combination of General Purpose (SSD) or Magnetic, plus 2 million I/Os (with EBS Magnetic) and 1 GB of snapshot storage

Elastic Container Registry

A fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images.

  • 500 MB-month of Amazon Elastic Container Registry storage for new customers

Amazon Simple Storage Service (S3)

Object storage built to store and retrieve any amount of data from anywhere

  • 5 GB of Amazon S3 standard storage, 20,000 Get Requests, and 2,000 Put Requests
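
A quick boto3 sketch of the S3 allowance in use (the bucket name is a placeholder and must be globally unique; creating buckets outside us-east-1 also needs a location constraint, omitted here):

    import boto3

    s3 = boto3.client("s3")

    bucket = "my-free-tier-demo-bucket"  # placeholder: must be globally unique
    s3.create_bucket(Bucket=bucket)

    # One Put and one Get request against the 2,000 Put / 20,000 Get allowance.
    s3.put_object(Bucket=bucket, Key="hello.txt", Body=b"Hello, S3")
    body = s3.get_object(Bucket=bucket, Key="hello.txt")["Body"].read()
    print(body)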

Amazon Elastic File System (EFS)

A simple, scalable file storage for use with Amazon EC2 instances

  • 5 GB per month of Amazon EFS capacity free

Amazon Relational Database Service (RDS)

Set up, operate, and scale a relational database in the cloud.

  • 750 hours of Amazon RDS Single-AZ db.t2.micro Instances, for running MySQL, PostgreSQL, MariaDB, Oracle BYOL or SQL Server (running SQL Server Express Edition) – enough hours to run a DB Instance continuously each month
  • 20 GB of database storage, in any combination of RDS General Purpose (SSD) or Magnetic storage
  • 10 million I/Os (for use with RDS Magnetic storage; I/Os on RDS General Purpose (SSD) storage are not separately billed)
  • 20 GB of backup storage for your automated database backups and any user-initiated DB Snapshots

Amazon Cloud Directory

Enables you to build flexible cloud-native directories for organizing hierarchies of data along multiple dimensions. With Cloud Directory, you can create directories for a variety of use cases, such as organizational charts, course catalogs, and device registries including AD LDS

  • 1 GB of storage per month; 10,000 write requests per month; 100,000 read requests per month

Amazon Connect

A self-service cloud-based contact center service to deliver better customer service

  • 90 minutes per month of Amazon Connect usage; A local direct inward dial (DID) number for the AWS region; 30 minutes per month of local (to the AWS region) inbound DID calls; 30 minutes per month of local (to the AWS region) outbound calls; For US regions, a US toll-free number for use per month and 30 minutes per month of US inbound toll-free calls

Amazon GameLift

A managed service for deploying, operating, and scaling dedicated game servers for session-based multiplayer games

  • 125 hours per month of Amazon GameLift c4.large.gamelift On-Demand instance usage; 50 GB EBS General Purpose (SSD) storage

Data Transfer

  • 15 GB of data transfer out and 1GB of regional data transfer aggregated across all AWS services

Amazon Data Pipeline

A web service to reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals

  • 3 low frequency preconditions running on AWS per month; 5 low frequency activities running on AWS per month

Amazon ElastiCache

Fully managed Redis and Memcached to seamlessly deploy, operate, and scale popular open source compatible in-memory data stores

  • 750 hours of Amazon ElastiCache cache.t2.micro Node usage - enough hours to run continuously each month.

Amazon CloudFront

A global content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to viewers with low latency and high transfer speeds.

  • 50 GB Data Transfer Out, 2,000,000 HTTP and HTTPS Requests of Amazon CloudFront

Amazon API Gateway

A fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale

  • 1 Million API Calls per month

Amazon Cognito

Add user sign-up, sign-in and access control of web and mobile application users

  • The Your User Pool feature has a free tier of 50,000 MAUs each month; 10 GB of cloud sync storage; 1,000,000 sync operations per month.

Amazon Sumerian

Create and run virtual reality (VR), augmented reality (AR), and 3D applications quickly and easily without requiring any specialized programming or 3D graphics expertise.

  • 50MB published scene that receives 100 views per month for free in the first year

Amazon Elasticsearch Service

A fully managed service to deploy, secure, operate, and scale Elasticsearch for log analytics, full text search, application monitoring etc.

  • 750 hours per month of a single-AZ t2.micro.elasticsearch instance or t2.small.elasticsearch instance; 10GB per month of optional EBS storage (Magnetic or General Purpose)

Amazon Pinpoint

Engage your customers by tracking the ways in which they interact with your applications

  • 5,000 free targeted users per month; 1,000,000 free push notifications per month; 100,000,000 events per month

AWS OpsWorks for Chef Automate

A fully-managed configuration management service that hosts Chef Automate, a suite of automation tools from Chef for configuration management, compliance and security, and continuous deployment.

  • 7500 node hours (which equals 10 nodes) per month

AWS OpsWorks for Puppet Enterprise

A fully-managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet for infrastructure and application management.

  • 7500 node hours (which equals 10 nodes) per month

Amazon Polly

A Text-to-speech service that turns text into lifelike speech, allowing to create applications that talk, and build entirely new categories of speech-enabled products

  • 5 million characters per month

AWS IoT

A managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices.

  • 250,000 messages (published or delivered) per month

Amazon Lex

An automatic speech recognition / speech-to-text service for building conversational interfaces into any application using voice and text

  • 10,000 text requests per month; 5,000 speech requests per month

Below is the list of services that are always free (non-expiring)

These free tier offers do not automatically expire at the end of your 12 month AWS Free Tier term and are available to all AWS customers. 

Amazon DynamoDB


A fully managed, fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale


  • 25 GB of Storage, 25 Units of Read Capacity and 25 Units of Write Capacity – enough to handle up to 200M requests per month with Amazon DynamoDB.
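
A minimal boto3 sketch of provisioning a table within this always-free allowance (the table and attribute names are illustrative):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Stay within the always-free 25 read / 25 write capacity units.
    dynamodb.create_table(
        TableName="FreeTierDemo",  # illustrative name
        KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
        AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
        ProvisionedThroughput={"ReadCapacityUnits": 25, "WriteCapacityUnits": 25},
    )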

Amazon Cognito

Add user sign-up, sign-in and access control of web and mobile application users

  • The Your User Pool feature has a free tier of 50,000 MAUs each month; The Federated Identities feature for authenticating users and generating unique identifiers is always free with Amazon Cognito.

(The Your User Pool feature is currently in Beta and you will not be charged for sending SMS messages for Multi-Factor Authentication (MFA) and phone verification. However, separate pricing for sending SMS messages will apply after the conclusion of Beta period.)

AWS CodeCommit

A fully-managed source control service that makes it easy for companies to host secure and highly scalable private Git repositories

  • 5 active users per month; 50 GB-month of storage per month; 10,000 Git requests per month

Amazon CloudWatch

A monitoring service for AWS cloud resources and the applications you run on AWS

  • 10 Amazon CloudWatch custom metrics, 10 alarms, and 1,000,000 API requests; 5 GB of Log Data Ingestion; 5 GB of Log Data Archive; 3 Dashboards with up to 50 metrics each per month
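
For example, publishing a single custom metric against this allowance could look like the following boto3 sketch (the namespace and metric name are invented for illustration):

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # One data point for one custom metric (10 custom metrics are free).
    cloudwatch.put_metric_data(
        Namespace="MyApp",  # illustrative namespace
        MetricData=[{"MetricName": "QueueDepth", "Value": 42.0, "Unit": "Count"}],
    )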

AWS X-Ray

Analyze and debug production, distributed applications, such as those built using a microservices architecture

  • 100,000 traces recorded per month; 1,000,000 traces scanned or retrieved per month

Amazon Mobile Analytics – Now called Amazon Pinpoint

Engage customers by tracking the ways in which they interact with your applications.

  • 100 million free events per month

AWS Database Migration Service

Migrate databases to AWS quickly and securely

  • 750 Hours of Amazon DMS Single-AZ dms.t2.micro instance usage; 50 GB of included General Purpose (SSD) storage

AWS Storage Gateway

A hybrid storage service that enables your on-premises applications to seamlessly use AWS cloud storage for backup and archiving, disaster recovery, cloud bursting, storage tiering, and migration

  • Up to 100 GB a month free; maximum charges of $125 a month

Amazon Chime

A communications service for online meetings, video conferencing, calls, chat, and to share content, both inside and outside your organization.

  • Unlimited usage of Amazon Chime Basic

Amazon Simple Workflow Service (SWF)

A task-based API that makes it easy to coordinate work across distributed application components by providing a programming model and infrastructure for coordinating distributed components and maintaining their execution state in a reliable way.

  • 1,000 Amazon SWF workflow executions and a total of 10,000 activity tasks, signals, timers and markers, and 30,000 workflow-days.

Amazon Simple Queue Service (SQS) and Amazon Simple Notification Service (SNS)

SQS is a fully managed message queuing service to decouple and scale microservices, distributed systems, and serverless applications. SNS is a flexible, fully managed pub/sub messaging and mobile notifications service for coordinating the delivery of messages to subscribing endpoints and clients.

  • 1,000,000 Requests of Amazon Simple Queue Service; 1,000,000 Requests, 100,000 HTTP notifications and 1,000 email notifications for Amazon Simple Notification Service

Amazon Elastic Transcoder

A media transcoding service for developers and businesses to convert (or “transcode”) media files from their source format into versions that will play back on devices like smartphones, tablets and PCs.

  • 20 minutes of SD transcoding or 10 minutes of HD transcoding

AWS Key Management Service

A managed service to create and control the encryption keys used to encrypt your data, and uses Hardware Security Modules (HSMs) to protect the security of your keys

  • 20,000 free requests per month

AWS Lambda

A platform service to run code without provisioning or managing servers

  • 1,000,000 free requests per month; Up to 3.2 million seconds of compute time per month
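
A Lambda function itself is just a handler; the sketch below is a minimal Python handler whose invocations would count against the 1 million free requests (the shape of the incoming event is assumed for illustration):

    # Minimal AWS Lambda handler in Python; deploy via the console or CLI.
    def lambda_handler(event, context):
        # "name" is an assumed field in the incoming event, for illustration.
        name = event.get("name", "world")
        return {"statusCode": 200, "body": f"Hello, {name}!"}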

AWS CodePipeline

A continuous integration and continuous delivery service for application and infrastructure updates. 

  • 1 active pipeline per month

AWS Device Farm

An app testing service that lets you test and interact with your Android, iOS, and web apps on many devices at once, or reproduce issues on a device in real time.

  • Free one-time trial of 1,000 device minutes

AWS Step Functions

A serverless platform service to orchestrate AWS Lambda functions for serverless applications.

  • 4,000 state transitions per month

Amazon SES

A cloud-based email sending service designed to help digital marketers and application developers send marketing, notification, and transactional emails to customers.

  • 62,000 Outbound Messages per month to any recipient when you call Amazon SES from an Amazon EC2 instance directly or through AWS Elastic Beanstalk; 1,000 Inbound Messages per month.

Amazon QuickSight

A business analytics service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from your data

  • 1 user, 1 GB of SPICE (Super-fast, Parallel, In-memory, Calculation Engine)

Amazon Glacier

A secure, durable cloud storage service for data archiving and long-term backup

  • 10 GB of Amazon Glacier data retrievals per month for free. The free tier allowance can be used at any time during the month and applies to Standard retrievals.

Amazon Macie

A security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS.

  • 1 GB processed by the content classification engine; 100,000 events

AWS Glue

A fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics

  • 1 Million objects stored in the AWS Glue Data Catalog; 1 Million requests made per month to the AWS Glue Data Catalog

AWS CodeBuild

A fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy

  • 100 build minutes per month of build.general1.small compute type usage

 † The following Windows variants are not eligible for the free tier: Microsoft Windows Server 2008 R2 with SQL Server Web, Microsoft Windows Server 2008 R2 with SQL Server Standard, Microsoft Windows 2008 R2 64-bit for Cluster Instances and Microsoft Windows 2008 R2 SQL Server 64-bit for Cluster Instances.

AWS Marketplace offers free and paid software products that run on the AWS Free Tier. If you qualify for the AWS Free Tier, you can use these products on an Amazon EC2 t2.micro instance for up to 750 hours per month and pay no additional charges for the Amazon EC2 instance (during the 12 months).

Refer to this page for more details.

Digital Transformation helps Microsoft weed out fake marketing leads

Microsoft has showcased how it solved the fake-leads problem as a leader in digital transformation.

“Fake leads” is the problem to tackle

When people sign up via online forms, they sometimes give a fake name, company name, email, or phone number. They may submit randomly typed characters (keyboard gibberish) or use profanity. Or they may accidentally make a small typographical error while the name is otherwise real—in which case the lead should not be classified as junk.

The abundance of fake lead names across Microsoft subsidiaries results in:

·         Lost productivity for Microsoft’s global marketers and sellers. Fake names waste an enormous amount of time, since sellers rely on accurate information to follow up with leads.

·         Lost revenue opportunities. Among thousands of fake lead names, there could be one legitimate opportunity.

Each day, thousands of people sign up using thousands of web forms. But, in any month, many of the lead names—whether a company or a person—are fake.

The solution to tackle “Fake leads”

Improving data quality is critical. To do that, and to determine if names are real or fake, Microsoft built a machine learning solution that uses:

·         Microsoft Machine Learning Server (previously Microsoft R Server).

·         A data quality service that integrates machine learning models. When a company name enters the marketing system, the system calls their data quality service, which immediately checks if it’s a fake name.

So far, machine learning has reduced the number of fake company names that enter Microsoft’s marketing system, at scale. Their solution has prevented thousands of names from being routed to marketers and sellers. Filtering out junk leads has made their marketing and sales teams more efficient, allowing them to focus on real leads and help customers better.

Microsoft Machine Learning Server

Microsoft needed a scalable way to eliminate fake names across millions of records and to build and operationalize their machine learning model—in other words, they wanted a systematic, automated approach with measurable gains. They chose Machine Learning Server, in part, because:

·         It can handle their large datasets—which enables them to train and score their model.

·         It has the computing power that they need.

·         They can control how they scale their model and operationalize for high-volume business requests.

·         Access is based on user name and password, which are securely stored in Azure Key Vault.

·         It helps expose the model as a secure API that can be integrated with other systems and improved separately.

The difference between a rule-based model and machine learning

Rules

Experts create static rules to cover common scenarios; as new scenarios occur, new rules are written. A static, rules-based model makes it hard to capture the many varieties of keyboard gibberish (like akljfalkdjg). With static rules, Microsoft’s marketers must waste time sorting through fake leads and deciphering misleading or confusing information.

Machine Learning

Algorithms are used to train the model and make intelligent predictions. Algorithms help build and train the model by labeling and classifying data at the beginning of the process. Then, as data enters the model, the algorithm categorizes the data correctly—saving valuable time. Microsoft used the Naive Bayes classifier algorithm to categorize names as real or fake, an approach influenced by how LinkedIn detects spam names in its social network.

Scenarios where the model is used

Microsoft’s business team identified their subsidiaries worldwide that are most affected by fake names. Now, they are weeding out fake names so that marketers and sellers don’t have to. Going forward, they plan to:

·         Create a lead data quality metric with more lead-related signals and other machine learning models that allow them to stack-rank their leads. The goal is to give a list to their sellers and marketers that suggests which leads to call first and which to call next.

·         Make contact information visible to their sellers and marketers when they’re talking on the phone with leads. For example, if the phone number that someone gave in an online form is real, but the company name isn’t, their seller can ask the lead to confirm the company name.

Choosing the technology

Microsoft incorporated the following technologies into their solution:

·         The programming language R and the Naive Bayes classifier algorithm for training and building the model are based, in part, on the approach that LinkedIn uses.

·         Machine Learning Server, with its machine learning, R, and artificial intelligence (AI) capabilities, helps them build and operationalize their model.

·         Their data quality service, which integrates with the machine learning models to determine whether a name – person or company – is fake.

Designing the approach

Microsoft designed their overall architecture and process to work as follows:

1.       Marketing leads enter their data quality and enrichment service, where their team does fake-name detection, data matching, validation, and enrichment. They combine these data activities using a 590-megabyte model. Their training data consists of about 1.5 million real company names and 208,312 fake (profanity and gibberish) company names. Before they train the model, they remove commonly used company suffixes such as Private, Ltd., and Inc.

2.       They generate n-grams—combinations of contiguous letters—of three to seven characters and calculate the probability that each n-gram belongs to the real or fake name dataset in the model. For example, the three-letter n-grams of the name “Microsoft” would look like “Mic,” “icr,” “cro,” and so on. The training process computes how often the n-grams occur in real and fake company names and stores the computation in the model.

3.       They have four virtual machines that run Machine Learning Server: one serves as a web node and three serve as compute nodes. The multiple compute nodes let them scale to handle their volume of requests, and the architecture gives them the ability to scale up or down by adding or removing compute nodes as needed. The provider calls a web API hosted on the web node, with the company name as input.

4.       The web API calls the scoring function on the compute node. This scoring function generates n-grams from the input company name and calculates the frequencies of these n-grams in the real/fake training dataset.

5.       To determine whether the input company name is real or fake, the predict function in R uses these calculated n-gram frequencies stored in the model, along with the Naive Bayes rule.

To summarize, the scoring function that’s used during prediction generates the n-grams. It uses the frequencies of each n-gram in the real/fake name dataset that’s stored in the model to compute the probability of the company name belonging to the real/fake name dataset. Then, it uses these computed probabilities to determine if the company name is fake.
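
The shape of this approach can be sketched in a few lines. The toy below mirrors the described pipeline in Python rather than R, with tiny invented training lists, equal priors, and simple add-one smoothing standing in for Microsoft’s 1.5-million-name training set:

    import math
    from collections import Counter

    def ngrams(name, lo=3, hi=7):
        """All contiguous letter runs of length 3-7, as described above."""
        s = name.lower()
        return [s[i:i + n] for n in range(lo, hi + 1) for i in range(len(s) - n + 1)]

    # Toy training data standing in for ~1.5M real and ~208K fake names.
    real_names = ["microsoft", "contoso", "fabrikam"]
    fake_names = ["akljfalkdjg", "asdfgh", "qqqqqq"]

    real_counts = Counter(g for n in real_names for g in ngrams(n))
    fake_counts = Counter(g for n in fake_names for g in ngrams(n))
    real_total = sum(real_counts.values())
    fake_total = sum(fake_counts.values())

    def is_fake(name):
        # Naive Bayes: sum the log-likelihood ratio of each n-gram,
        # with add-one smoothing and equal priors assumed.
        score = 0.0
        for g in ngrams(name):
            p_real = (real_counts[g] + 1) / (real_total + 1)
            p_fake = (fake_counts[g] + 1) / (fake_total + 1)
            score += math.log(p_fake / p_real)
        return score > 0

    print(is_fake("contoso"))       # False: its n-grams look "real"
    print(is_fake("ajdflkqjzzkj"))  # True: gibberish n-grams dominate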

What Microsoft learned about business, technical, and design considerations

·         Ideally, the business problem should be solved within your organization itself rather than outsourcing it. Your organization will have deeper historical knowledge of the business domain, which helps to design the most relevant solution.

·         Having good training and test data is crucial. Most of the work Microsoft did was labeling their test data, analyzing how Naive Bayes performed compared to rxLogisticRegression and rxFastTrees algorithms, determining how accurate their model was, and updating their model where needed.

·         When you design a machine learning model, it’s important to identify how to effectively label the raw data. Unlabeled data has no information to explain or categorize it. Microsoft labels the names as fake or real and applies the machine learning model, which then takes new, unlabeled data and predicts a likely label for it.

·         Even in machine learning, you risk having false positives and negatives, so you need to keep analyzing predictions and retraining the model. Crowdsourcing is an effective way to analyze whether the predictions from the model are correct; otherwise, these can be time-consuming tasks. In Microsoft’s case, due to certain constraints they faced, they didn’t use crowdsourcing, but they plan to do so in the future.

Operationalizing with Machine Learning Server vs. other Microsoft technologies

Some other technical and design considerations included deciding which Microsoft technologies to use for creating machine learning models. Microsoft offers great options such as Machine Learning Server, SQL Server 2017 Machine Learning Services (previously SQL Server 2016 R Services), and Azure Machine Learning Studio. Here are some tips to help you decide which to use for creating and operationalizing your model:

·         If you don’t depend on SQL Server for your model, Machine Learning Server is a great option. You can use the libraries in R and Python to build the model, and you can easily operationalize R and Python models. This option allows you to scale out as needed and lets you control the version of R packages that you want to use for modeling.

·         If you have training data in SQL Server and want to build a model that’s close to your training data, SQL Server 2017 Machine Learning Services works well—but there are dependencies on SQL Server and limits on model size.

·         If your model is simple, you could build it in SQL Server as a stored procedure without using libraries. This option works well for simpler models that aren’t hard to code. You can get good accuracy and use fewer resources, which saves money.

·         If you’re doing experiments and want quick learning, Azure Machine Learning Studio is a great choice. As your training dataset grows and you want to scale your models for high-volume requests, consider Machine Learning Server and SQL Server 2017 Machine Learning Services.

Challenges and roadblocks Microsoft faced

·         Having good training data. High-quality training data begins with a collection of company names that are clearly classified as real or fake—ideally, from companies around the world. Microsoft feeds that information into their model for it to start learning the patterns of real or fake company names. It takes a while to build and refine this data, and it’s an iterative process.

·         Identifying and manually labeling the training and test dataset. Microsoft manually labeled thousands of records as real or fake, which takes a lot of time and effort. Instead, one can take advantage of crowdsourcing services, if possible, to avoid manual labeling. With these services, one can submit company names through a secure API and a human indicates whether each company name is real or fake.

·         Deciding which product to use for operationalizing the model. Microsoft tried different technologies, but found computing limitations and versioning dependencies between the R Naive Bayes package they used and what was available in Azure Machine Learning Studio at the time. Microsoft chose Machine Learning Server because it addressed those issues, had the computing power they needed, and helped them easily scale out their model.

·         Configuring load balancing. If Microsoft’s Machine Learning Server web node gets lots of requests, it randomly chooses which of the three compute nodes to send each request to. This can result in one node being overutilized while another is underutilized. They would like to use a round-robin approach, where all nodes are used equally to better distribute the load. This can be achieved by placing an Azure load balancer between the web and compute nodes.

Measurable benefits Microsoft has seen so far

The gains Microsoft has made thus far are just the beginning. So far, Machine Learning Server has helped them in the following ways:

·         With the machine learning model, their system tags about 5 to 9 percent more fake records than the static model. This means the system prevented 5 to 9 percent more fake names from going to marketers and sellers. Over time, this represents a vast number of fake names that their sellers do not have to sort through. As a result, marketer and seller productivity is enhanced.

·         They have captured more gibberish data and most profanities, with fewer false positives and false negatives. They have a high degree of accuracy, with an error rate of +/– 0.2 percent.

·         Their time to respond to requests has improved. With 10,000 data classifications of real/fake in 16 minutes and 200,000 classifications in 3 hours 13 minutes, they have ensured that their data quality service meets service level agreements for performance and response time. They plan to improve response time by slightly modifying the algorithm in Python.

Next steps

Microsoft is excited about how their digital transformation journey has already enabled them to innovate and be more efficient. They will build on this momentum by learning more about business needs and delivering other machine learning solutions. Their roadmap includes:

·         Ensuring that their machine learning model delivers value end-to-end. Machine learning is just one link in the chain that reaches all the way to sellers and marketers around the world. The whole chain needs to work well.

·         Expanding their set of models and making business processes and lead quality more AI-driven vs. rule-driven.

·         Operationalizing other machine learning models, so that they get a holistic view of a lead.

·         Addressing issues created from sites that create fake registrations.

By improving data quality at scale, Microsoft is enabling marketers and sellers to focus on customers and to sell their products, services, and subscriptions more efficiently.

A free ticket to kickstart your Digital Transformation journey with Microsoft

Microsoft Azure

You can start your digital transformation journey today - your first mile is free.

Access a number of services available in Microsoft Azure without paying a penny (or rupee). Some are free for the first 12 months, while many are always free. On top of this, you also get pocket money of ₹13,300 to spend in the first month of your journey.

Let us first look at what services are always offered for free by Microsoft

1.       Do you want to quickly create powerful cloud apps using a fully-managed platform? Get 10 web, mobile or API apps with Azure App Service with 1 GB storage

2.       Wish to build apps faster with a serverless architecture? You can now send 1 million requests and get 4,00,000 GB-s of resource consumption with the Azure Functions service – a minimal handler sketch appears after this list.

3.       Are you looking to simplify the deployment, management and operations of Kubernetes – an open-source system for automating the deployment, scaling, and management of containerized applications, which groups the containers that make up an application into logical units for easy management and discovery? Use Azure Container Service to cluster virtual machines.

4.       Are you planning for Identity and Access Management on the Cloud for your organization? Store 50,000 objects with Azure Active Directory with Single Sign-On (SSO) for 10 apps per user.

5.       Do you want to try managing the identity and access of your customers? Get 50,000 monthly stored users and 50,000 authentications per month with Azure Active Directory B2C.

6.       You can build and operate always-on, scalable and distributed microservice apps using Azure Service Fabric

7.       Do you want to complement your IDE to share code, track work and ship software for any language – all in a single pack? Your first 5 users are free with Visual Studio Team Services.

8.       Get actionable insights through application performance management and instant analytics - Unlimited nodes (server or platform-as-a-service instance) with Application Insights and 1 GB of telemetry data included per month

9.       You can quickly provision software product development and test environment for Linux and Windows applications at the Azure DevTest Labs and use it without limit

10.   Enterprises can use machine learning with 100 modules and 1 hour per experiment, with 10 GB of included storage, in Azure Machine Learning Studio – just drag and drop to deploy a solution, no coding.

11.   Capitalize on the free policy assessment and recommendations with Azure Security Center where you get unified security management and advanced threat protection across hybrid cloud workloads.

12.   Get unlimited personalized recommendations and Azure best practices with Azure Advisor

13.   Start connecting your IoT assets, then monitor and manage them with Azure IoT Hub. The free edition includes 8,000 messages per day with a 0.5 KB message meter size.

14.   Start delivering high availability and network performance to your applications using the public load-balanced IP with Azure Load Balancer

15.   Integrate your data in a hybrid environment. You can now experiment with 5 low frequency activities with Azure Data Factory

16.   If you develop mobile and/or web apps, use this cloud search service: 50 MB storage for 10,000 hosted documents with Azure Search, including 3 indexes per service.

17.   Get a free namespace and push 1 million notifications to any platform from any back end with Azure Notification Hubs

18.   Manage compute power without limit using Azure Batch for cloud-scale job scheduling and cluster management

19.   Automate your processes and manage the cloud with 500 free minutes of job run time with Azure Automation.

20.   Get more value from your data assets – include unlimited users and 5,000 catalog objects with Azure Data Catalog

21.   Detect human faces, compare similar ones and organize images – 30,000 transactions per month processing at 20 transactions per minute with Face API

22.   Convert audio to text and vice versa with 5,000 transactions per month using the Bing Speech API.

23.   Easily conduct real-time text translation with a simple REST API call – free 2 million characters included for Translator Text API

24.   Transform your log data into actionable insights using this free 500 MB-per-day analysis plus 7-day retention period with Log Analytics

25.   Run 1 job, 5 jobs per collection and 3,600 job executions on simple or complex recurring schedules for free with Scheduler

26.   Get your first 50 private virtual networks free with Azure Virtual Network

27.   Unlimited inbound Inter-VNet data transfer
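
As item 2 above promises, here is a minimal sketch of an Azure Functions HTTP-triggered handler in Python (it assumes the azure-functions package and an accompanying function.json HTTP binding, which are omitted here):

    import azure.functions as func

    def main(req: func.HttpRequest) -> func.HttpResponse:
        # Each execution draws on the free grant of 1 million requests
        # and 4,00,000 GB-s of resource consumption.
        name = req.params.get("name", "world")
        return func.HttpResponse(f"Hello, {name}!")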

These services listed below are free for the first 12 months

1.       Deploy one or more Azure B1S General Purpose Virtual Machines for Microsoft Windows Server (1 core, 1 GB RAM, 2 GB SSD disk space) and run them for 750 hours (aggregate)

2.       Deploy one or more Azure B1S General Purpose Virtual Machines for Linux (1 core, 1 GB RAM, 2 GB SSD disk space) and run them for 750 hours (aggregate)

3.       Get 128 GB of Managed Disks (as a combination of two 64 GB (P6) SSDs, plus 1 GB of snapshot storage and 2 million I/O operations) for persistent, secure disk storage for your VMs in Azure

4.       Get 5 GB of LRS-Hot Blob Storage – a massively scalable object storage for unstructured data - with 2 million read, 2 million write and 2 million write/list operations

5.       Get 5 GB of LRS File Storage – a simple, secure and fully managed file share – with 2 million read, 2 million list and 2 million other file operations

6.       Deploy a SQL Database Standard S0 instance with 250 GB of storage and 10 database transaction units (DTUs)

7.       Deploy a globally distributed multi-model database service with Azure Cosmos DB to store 5 GB of data with 400 reserved request units

8.       15 GB of bandwidth for outbound data transfer with free unlimited inbound transfer.

There is one offer that remains free after the first 12 months

1.       5 GB of bandwidth for outbound data transfer, with free unlimited inbound transfer, always free after the first 12 months.

The Azure free account is available to all new customers of Azure. If you have never had an Azure free trial or have never been a paying Azure customer, you are eligible. You don’t have to pay anything at all at the start.

Please access the FAQ here for further details.

Reduce the noise in your data to improve forecasts

The cloud and big data


When the cloud came into being, it brought with it immense storage power at cheaper rates and ushered in the era of big data. It also raised expectations in the minds of statisticians - and the decision makers who depended on them - that this would do wonders for their decision-making processes.


Boon or bane?


The sample space has drastically increased due to social media and IoT, making far more data available. Applying statistical models to this huge data should improve the probability estimates of a predicted event occurring (or not occurring) and improve the reliability of forecasts by pushing the R-squared value toward unity. Right? Wrong. The data deluge only added more noise than dependable signal.

 

Illusion or disillusion?


As time went by, people became disillusioned by the failure of the system to provide reliable information for decision-making. When their hyper-expectations were not met, they quickly dropped off without pursuing the journey further.


The signal and the noise


It was then the turn of the experts to come up with reasons why such huge data could not help people decide better. One significant reason: while there is enough data - and more - for the model, it requires a great deal of cleaning. The noise that could distort results and predictions must be removed before the data can be put to any use at all.

 

Persistence pays!


Early adopters of technology gained over the long run. Microsoft and Amazon are examples of winners who persisted in their vision of making big data the fuel for their decision-making engines. They pulled themselves up from the trough of disillusionment to the slope of enlightenment by applying scientific methods to the data they gathered and adopting newer methods to remove noise and false signals. This way, they could arrive at real signals that aided in building reliable data models. They have now climbed to the plateau of productivity, with their data models supporting better, information-driven decision making.

 

Here are a few points to ponder:


  • People expect a lot from technology today, but while we have plenty of data, there are not enough people with the skills to make this big data useful, and not enough training and skill-building effort is being invested to turn this huge population of technology experts into data scientists.
  • Cleaning up data is the first big problem in predictive analysis – many external factors can distort the data that has been collected.
  • If we are considering a correlation between two variables and don’t know what causes the correlation, it is better not to consider it at all (a starfish predicting the FIFA World Cup winner, or a baseball team’s win or loss determining the movement of the share market).
  • Desperately seeking signals, people end up with more noise than signal – so they make decisions with instinct, gut feeling and experience playing an 80% part and statistics the remaining 20%. Instead, we should be guided by statistics 80% of the way and leave the rest to our instincts, and even then only if there is a drastically negative indicator in the statistical model.

Here are some suggestions to reduce the noise and arrive at signals:

 

Start with a hypothesis / instinct and then keep refining it as you go ahead with the analysis – this might sometimes lead you to reverse your hypothesis.


Think probabilistically


When predicting, consider the margin of error (uncertainty) in the historic data and include it in the prediction used to make a decision. The person who discloses the greatest uncertainty is doing a better job than the one who conceals the uncertainty in his prediction. Three things to carry with you while predicting: data models, scientific theories that influence the situation, and experience (learn from the number of forecasts made and the feedback about those forecasts).
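
As a toy illustration of disclosing uncertainty, the sketch below (with invented numbers) derives a rough 95% interval for a point forecast from the spread of past forecast errors:

    import statistics

    # Invented historical forecast errors (actual minus predicted).
    errors = [3.1, -2.4, 0.8, -1.2, 2.6, -0.5, 1.9, -3.3]
    point_forecast = 100.0

    spread = statistics.stdev(errors)
    # Rough 95% interval: disclose the uncertainty alongside the forecast.
    low = point_forecast - 2 * spread
    high = point_forecast + 2 * spread
    print(f"Forecast: {point_forecast:.1f} (95% interval: {low:.1f} to {high:.1f})")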


Know where you come from


Consider the background and the existing biases of the prospective forecaster / decision maker, and the situation in which the data is being collected and considered.


Try and err


Companies need to commit the 80% of effort that yields the last 20% of results to retain their competitive advantage – real statistics from a few customers are better than hypothetical data about a huge number of customers.

 

Notes:

 

  • Large and smart companies, especially technology firms, should dare to take risks in areas of competitive advantage. Most of the risk-taking will pay off. Because they are big, they can bear failures, unlike small firms and individuals, for whom such risk-taking might be termed gambling.
  • People make better inferences from visuals than from raw data. Charts must show simple, essential information. Unless it is required for greater clarity, we must avoid crowding charts with extra information that creates more noise.
  • People must become bias detectors – raise business questions and be apprehensive about magic-bullet solutions.
  • Analysts should disclose the limitations of their analyses.


- Insights from a session by Nate Silver

Innovations in Excel that users love

Real-time collaboration—As with other Office 365 apps, you and your co-workers can securely work simultaneously within an Excel file from any device (mobile, desktop, and web). This allows you to know who else is working with you in a spreadsheet, see where they’re working, and view changes automatically within seconds, reducing the time it takes to collect feedback and eliminating the need to maintain multiple versions of a file. Live, in-app presence indicators through Skype for Business make it easy to connect with available co-workers in the moment.

Powerful data modeling—Get & Transform is one of Excel’s most powerful features, enabling you to search for data sources, make connections, and shape your data to meet specific analysis needs. Excel can connect to data sitting in the cloud, in a service, or stored locally. You can then combine different data sets from these sources into a single Data Model for a unique, unified view. Plus, you can create a Data Model to import millions of rows of data into Excel—keeping your analysis in one place.

Insightful visualizations—Excel is an inherently visual tool, giving you new perspectives through a variety of charts and graphs. We continue to enhance visualization in Excel—with geographical maps and waterfall charts—to provide easier analysis and a better, more impactful way to share insights across your company.

Dashboard creation and sharing—Power BI is the cloud-based data visualization tool that allows you to create and publish dashboards. We intentionally designed Power BI and Excel to work together, so you can surface the most relevant insights for the task at hand. Excel data can be imported into Power BI, while Power BI reports can be analysed in Excel for new perspectives. You can then easily share these dashboards and insights with others in your company.

Built-in extensibility—Like other Office 365 applications, Excel can be customized to meet the specific needs of your company. Excel’s rich ecosystem of add-ins and other tools can help you work with data in more relevant ways. Plus, the Excel platform is flexible enough for IT admins or Microsoft partners to develop custom solutions.

-Office Blogs, Dec 2017

The Microsoft Security Intelligence Report

Microsoft regularly aggregates the latest worldwide security data into the Security Intelligence Report (SIR), unpacking the most pressing issues in cybersecurity.

Here are some highlights:

Cloud Threat Intelligence

The cloud has become the central data hub for any organization, which means it’s also a growing target for attackers.

Compromised Accounts

Definition - Attackers break into the cloud-based account simply by using the stolen sign-in credentials of a user
Analysis - A large majority of these compromises are the result of weak, guessable passwords and poor password management, followed by targeted phishing attacks and breaches of third-party services.

Cloud-based user account attacks have increased 300% from last year, showing that attackers have found a new favorite target.

Drive-by download sites

Definition - A website that hosts malware in its code and can infect a vulnerable computer simply by a web visit
Analysis - Attackers sneak malicious code into legitimate but poorly secured websites. Machines with vulnerable browsers can become infected by malware simply by visiting the site. Bing search constantly monitors sites for malicious elements or behavior, and displays prominent warnings before redirecting to any suspicious site.

Taiwan and Iran have the highest concentration of drive-by download pages.

Endpoint threat intelligence

An endpoint is any device connected remotely to a network, such as a laptop or mobile device, that can provide an entry point for attackers. Since users interact with endpoints, they remain a key opportunity for attackers and a security priority for organizations.

Ransomware

Definition - Malware that disables a computer or its files until a ransom is paid to the attackers
Analysis - Ransomware attacks have been on the rise, disrupting major organizations and grabbing global headlines. Attacks like WannaCry and Petya disabled thousands of machines worldwide in the first half of 2017. Windows 10 includes mitigations that prevent common exploitation techniques by these and other ransomware threats.

Ransomware disproportionately targeted Europe, with the Czech Republic, Italy, Hungary, Spain, Romania, and Croatia being the six countries with the highest encounter rates.

Exploit Kits

Definition - A bundle of malicious software that discovers and abuses a computer's vulnerabilities
Analysis - Once installed on a compromised web server, an exploit kit can easily infect any visiting computer that lacks the proper security updates.

Many of the more dangerous exploits are used in targeted attacks before appearing in the wild in larger volumes.

Takeaways and Checklist:

  • The threats and risks of cyberattacks are constantly changing and growing. However, there are some practical steps you can take to minimize your exposure.
  • Reduce the risk of credential compromise by educating users on why they should avoid simple passwords, enforcing multi-factor authentication, and applying alternative authentication methods (e.g., gesture or PIN); a sketch of such a password check follows this list.
  • Enforce security policies that control access to sensitive data and limit corporate network access to appropriate users, locations, devices, and operating systems (OS).
  • Do not work in public Wi-Fi hotspots where attackers could eavesdrop on your communications, capture logins and passwords, and access your personal data.
  • Regularly update your OS and other software to ensure the latest patches are installed.
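
To make the "avoid simple passwords" item concrete, here is a minimal Python sketch of a password-policy check. The length threshold, character-class rules, and the small blocklist are illustrative assumptions rather than any vendor's actual policy, and checks like this complement rather than replace multi-factor authentication.

    import re

    # Hypothetical blocklist of common, easily guessed passwords.
    COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "iloveyou"}

    def password_is_acceptable(password: str) -> bool:
        """Return True if the password meets this sketch's example policy."""
        if len(password) < 12:                    # enforce a minimum length
            return False
        if password.lower() in COMMON_PASSWORDS:  # reject well-known choices
            return False
        # Require lowercase, uppercase, digit, and symbol character classes.
        classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
        return all(re.search(c, password) for c in classes)

    print(password_is_acceptable("Summer2017"))        # False: short and guessable
    print(password_is_acceptable("t0wer-Bridge-54!"))  # True: long, mixed classes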

India specific report

The statistics presented here were generated by Microsoft security programs and services running on computers in India in March 2017 and previous quarters. The data comes from administrators and users who choose to opt in to share data with Microsoft, with IP address geolocation used to determine the country or region.

Encounter rate trends

15.5 percent of computers in India encountered malware, compared to the worldwide encounter rate of 7.8 percent. The most common malicious software category in India was Trojans, followed by Worms and then Downloaders & Droppers.

The most common unwanted software category was Browser Modifiers, followed by Software Bundlers and then Adware.

The most common malicious software families encountered were:

  • Win32/Fuery – a cloud-based detection for files that have been automatically identified as malicious by the cloud-based protection feature of Windows Defender.
  • Win32/Vigorf – a generic detection for a variety of threats.
  • Win32/Skeeyah – a generic detection for various threats that display Trojan characteristics.
  • Win32/Dynamer – a generic detection for a variety of threats.

The most common unwanted software families encountered were:

  • Win32/Foxiebro – a browser modifier that can inject ads into search results pages, modify web pages to insert ads, and open ads in new tabs.
  • Win32/ICLoader – a software bundler distributed from software crack sites, which installs unwanted software alongside the desired program and sometimes installs other unwanted software, such as Win32/Neobar.
  • MSIL/Wizrem – a software bundler that downloads other unwanted software, including Win32/EoRezo and Win32/Sasquor, and might also try to install malicious software such as Win32/Xadupi.

Security software use

Nearly 18% of computers in India were not running up-to-date real-time security software, compared to the worldwide figure of about 12%.

Malicious Websites

Attackers often use websites to conduct phishing attacks or distribute malware. Malicious websites typically appear completely legitimate and often provide no outward indicators of their malicious nature, even to experienced computer users. In many cases, these sites are legitimate websites that have been compromised by malware, SQL injection, or other techniques, in an effort by attackers to take advantage of the trust users have invested in them. To help protect users from malicious webpages, Microsoft and other browser vendors have developed filters that keep track of sites that host malware and phishing attacks and display prominent warnings when users try to navigate to them.
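
The core idea behind such filters can be shown with a deliberately simplified Python sketch that checks a URL's host against a blocklist before navigation proceeds. Real services such as Windows Defender SmartScreen query continuously updated cloud reputation data rather than a static local set, and the host names below are made up.

    from urllib.parse import urlparse

    # Hypothetical local blocklist standing in for a cloud reputation service.
    KNOWN_MALICIOUS_HOSTS = {"malware-example.test", "phish-example.test"}

    def check_navigation(url: str) -> str:
        """Return a verdict for a URL before the browser loads it."""
        host = urlparse(url).hostname or ""
        if host in KNOWN_MALICIOUS_HOSTS:
            # A real browser would display a prominent warning page here.
            return f"WARNING: {host} is a known malicious site"
        return "OK: no known threat recorded for this site"

    print(check_navigation("http://malware-example.test/payload.exe"))
    print(check_navigation("https://example.org/"))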

The information presented here has been generated from telemetry data produced by Windows Defender SmartScreen in Microsoft Edge and Internet Explorer.
  • Eight of every hundred thousand URLs are malicious drive-by download pages.
  • 420 of every hundred thousand internet hosts are phishing sites.
  • 890 of every hundred thousand internet hosts are malware hosting sites.
- Microsoft Security Intelligence Report, Volume 22

Digital Transformation - Sustaining the digital transformation

The challenge. Digital transformation is a journey with many predetermined milestones that help organizations stay on the intended path, but the destination is not a well-defined spot. As technology changes are dynamic, unpredictable, and quick, so is the digital destination: it may have to be redefined, moved further out, and prepared for additional transformations dictated by newer disruptions that will arrive in the future. It is essential that at least the foundational digital skills are laid strongly enough to support the expansion and changes that will be required later in the transformation path.

The approach. Enterprises must orchestrate their skills build-up around this transformation. The organization needs enough people who grasp the idea, can contribute to the cause, and involve themselves actively in the concentrated efforts towards the desired result. While it is desirable that the existing management and workforce come on board in their entirety, many organizations might not have enough people who share the vision and will willingly stay on the well-defined transformation path. In such cases, businesses must look outside for resources already skilled in the technology and operations that align with the transformation vision. Hiring might have to start at the top, which in turn can help in identifying the right talent at the middle and lower levels. Some innovation might be required in the recruitment strategy, and enterprises might have to cast their net wider for rare skills.

Training must be an integral part of the agenda to increase digital awareness organization-wide and bring employees up to speed on specific digital technologies. Organizing employee exchange programs across functions and locations and introducing reverse-mentoring initiatives might yield quicker results. Building an enterprise-wide knowledge base with documents, videos, and do-it-yourself kits for existing employees and new hires would simplify the learning path and shorten the time it takes for staff to contribute to the efforts and results. A centralized digital platform that employees can easily access for any kind of corporate information, together with a seamless communication system that brings people and information closer, would make a far bigger difference than the traditional approach.

A well-defined reward system must also be in place to sustain the transformation, and its structure might have to extend beyond corporate boundaries. Enterprises must also make sure rewards are more than financial: social recognition and executive-level appreciation are useful alternatives.

Partnering with organizations that might yield a synergistic effect on the digital vision is another option to be seriously considered by organizations that lack the required skillsets and ready resources. Acquiring businesses that already have skilled resources who can contribute to the organizational vision is a further strategy.

It is also essential to build a close relationship between internal IT and the business so that they work in sync towards the digital goals. Results need to be measured, monitored, reviewed, course-corrected, and iterated periodically to retain the pace and steer the efforts in the right direction, and IT solutions need to be designed and implemented for such activities. Managing the enterprise strategic scorecard and driving the initiative-level business case and related KPIs are essential for sustaining the transformation.

Digital Transformation - Mobilizing the organization

The challenge. Motivating senior management and driving the digital transformation is one thing; mobilizing the whole organization and bringing everyone on board for the journey is another, and the tougher, challenge. The enterprise needs to send clear signals through as many channels as possible. The objective is to motivate lower-level employees to enroll themselves in the endeavor with little or no coercion. Redefined policies and modified work practices must be clearly defined and enforced, and participation must be encouraged and rewarded. The goals and results of the transformation need to be transparently defined, and the benefits clearly conveyed to the entire organization, so that every team and every individual contributes to the cause.

The approach. The appointment of a CDO, a digital challenge thrown down by the CEO with a measurable result due by a certain cutoff date, or visible, large-scale branding of the transformation across the organization, such as declaring a digital year, are some of the activities that will send a clear signal to the entire organization that the business is serious about the transformation effort. Leaders must lead from the front, engaging in digital transformation activities themselves and encouraging the teams around them to adopt the new policies and newly set procedures. The transformation should be co-created, with the teams shouldering the responsibilities together with the management.

New behaviors need to be standardized, but enterprises must also allow the digital culture to evolve organically across the organization. Digital champions who can liaise between the management and the end users must be identified in every department and team, then trained and encouraged to help the people around them adopt the digital culture. Quick digital wins must be rightly identified, publicized, and rewarded so that the whole organization stays motivated and mobilized around the transformation efforts. Enterprises must make visible changes to work practices and institutionalize them, encouraging the adoption of transformation solutions rather than mere deployment.