Posts Tagged ‘model’

Building an Effective & Extensible Data & Analytics Operating Model

To keep pace with ever-present business and technology change and challenges, organizations need operating models built with a strong data and analytics foundation. Here’s how your organization can build one incorporating a range of key components and best practices to quickly realize your business objectives.

Executive Summary

To succeed in today's hypercompetitive global economy, organizations must embrace insight-driven decision-making. This enables them to quickly anticipate and drive business change through constant and effective innovation that swiftly incorporates technological advances where appropriate. The pivot to digital, consumer-minded new regulations around data privacy and the compelling need for greater levels of data quality are together forcing organizations to enact better controls over how data is created, transformed, stored and consumed across the extended enterprise.

Chief data/analytics officers, who are directly responsible for the sanctity and security of enterprise data, are struggling to bridge the gap between their data strategies, day-to-day operations and core processes. This is where an operating model can help: it provides a common view/definition of how an organization should operate to convert its business strategy into operational design. While some mature organizations in heavily regulated sectors (e.g., financial services) and fast-paced sectors (e.g., retail) are tweaking their existing operating models, younger organizations are creating operating models with data and analytics as the backbone to meet their business objectives. This white paper provides a framework, along with a set of must-have components, for building a data and analytics operating model (or customizing an existing one).

The starting point: Methodology

Each organization is unique, with its own specific data and analytics needs, and different sets of capabilities are often required to fill those needs. For this reason, creating an operating model blueprint is an art, and is no trivial matter. The following systematic approach to building it will ensure the final product works optimally for your organization.

Building the operating model is a three-step process: it starts with the business model (with a focus on data), followed by operating model design and then architecture. There is, however, a precursory step, called "the pivots," to capture the current state and extract data points from the business model prior to designing the data and analytics operating model. Understanding the key elements that can influence the overall operating model is therefore an important consideration from the get-go (as Figure 1 illustrates).

The operating model design focuses on integration and standardization, while the operating model architecture provides a detailed but still abstract view of the organizing logic for business, data and technology. In simple terms, this pertains to the crystallization of the design approach for the various components, including the interaction model and process optimization.

Preliminary step: The pivots

No two organizations are identical, and the operating model can differ based on a number of parameters — or pivots — that influence the operating model design. These parameters fall into three broad buckets:

Design principles: These set the foundation for target state definition, operation and implementation. Creating a data vision statement, therefore, will have a direct impact on the model's design principles. Keep in mind that effective design principles leverage existing organizational capabilities and resources to the extent possible, and remain reusable despite disruptive technologies and industry advancements. These principles should not contain generic statements, like "enable better visualization," that are difficult to measure, nor be so particular to your organization that operating-model evaluation is contingent upon them. The principles can address areas such as efficiency, cost, satisfaction, governance, technology, performance metrics, etc.

Sequence of operating model development

Current state: Gauging the maturity of data and related components (which is vital to designing the right model) demands a two-pronged approach: top-down and bottom-up. The reason? The findings will reveal the key levers that require attention and a round of prioritization, which in turn helps decision-makers determine whether intermediate operating models (IOMs) are required.

Influencers: Influencers fall into three broad categories: internal, external and support. The current-state assessment captures these details, requiring team leaders to be cognizant of these parameters prior to the operating-model design (see Figure 2). The "internal" category captures detail at the organization level. "External" highlights the organization's focus and the factors that can affect the organization. And the "support" category provides insight into how much complexity and effort the transformation exercise will involve.

Operating model influencers

First step: Business model

A business model describes how an enterprise leverages its products/services to deliver value, as well as generate revenue and profit. Unlike a corporate business model, however, the objective here is to identify all core processes that generate data. In addition, the business model needs to capture all details from a data lens — anything that generates or touches data across the entire data value chain (see Figure 3). We recommend that organizations leverage one or more of the popular strategy frameworks, such as the Business Model Canvas [1] or the Operating Model Canvas [2], to convert the information gathered as part of the pivots into a business model. Other frameworks that add value are Porter's Value Chain [3] and McKinsey's 7S framework [4]. The output of this step is not a literal model but a collection of data points from the corporate business model and current state required to build the operating model.

Second step: Operating model

The operating model is an extension of the business model. It addresses how people, process and technology elements are integrated and standardized.

Integration: This is the most difficult part, as it connects various business units, including third parties. The integration of data happens primarily at the process level (both between and across processes) to enable end-to-end transaction processing and a 360-degree view of the customer. The objective is to identify the core processes and determine the level/type of integration required for end-to-end functioning, to enable increased efficiency, coordination, transparency and agility (see Figure 4). A good starting point is to create a cross-functional process map, enterprise bus matrix, activity-based map or competency map to understand the complexity of core processes and data. In our experience, tight integration between processes and functions can enable capabilities like self-service, process automation, data consolidation, etc.
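To make the bus-matrix idea concrete, here is a minimal, hypothetical sketch in Python that maps a few illustrative core processes to the shared data entities they touch; the process and entity names are assumptions for illustration, not a prescribed model.

```python
# Minimal, illustrative enterprise bus matrix: core processes (rows) vs. the
# shared data entities they touch (columns). All names are hypothetical.
PROCESSES = {
    "Order to Cash":       {"Customer", "Product", "Order", "Invoice"},
    "Procure to Pay":      {"Supplier", "Product", "Purchase Order", "Invoice"},
    "Customer Onboarding": {"Customer", "Contract"},
}

ENTITIES = sorted({entity for touched in PROCESSES.values() for entity in touched})

def print_bus_matrix() -> None:
    """Print an X where a process creates or consumes a shared data entity."""
    print(f"{'Process':<22}" + "".join(f"{e:<16}" for e in ENTITIES))
    for process, touched in PROCESSES.items():
        cells = "".join(f"{'X' if e in touched else '-':<16}" for e in ENTITIES)
        print(f"{process:<22}{cells}")

def integration_hotspots() -> set:
    """Entities touched by more than one process are integration candidates."""
    return {e for e in ENTITIES
            if sum(e in touched for touched in PROCESSES.values()) > 1}

if __name__ == "__main__":
    print_bus_matrix()
    print("Integration hotspots:", integration_hotspots())
```

Entities touched by more than one process (here, Customer, Product and Invoice) are the natural candidates for tight, process-level integration.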

The data value chain

Standardization: Data is generated during process execution. Standardization ensures the data is consistent (e.g., in format), no matter where (the system), who (the trigger), what (the process) or how (the data generation process) within the enterprise. Determine which elements in each process need standardization and the extent required. Higher levels of standardization can lead to higher costs and lower flexibility, so striking a balance is key.
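As a simple illustration of what standardization means in practice, the sketch below normalizes hypothetical customer records produced by different systems into one enterprise format; the field names, date formats and country-code mappings are illustrative assumptions only.

```python
from datetime import datetime

# Hypothetical records produced by different systems in different formats.
RAW_RECORDS = [
    {"source": "CRM", "customer_id": " 00123", "signup_date": "03/21/2019", "country": "usa"},
    {"source": "ERP", "customer_id": "123",    "signup_date": "2019-03-21", "country": "US"},
]

DATE_FORMATS = ("%m/%d/%Y", "%Y-%m-%d")            # accepted source formats
COUNTRY_CODES = {"usa": "US", "us": "US", "united states": "US"}

def standardize(record: dict) -> dict:
    """Apply enterprise-wide standards regardless of the producing system."""
    for fmt in DATE_FORMATS:
        try:
            signup = datetime.strptime(record["signup_date"], fmt).date().isoformat()
            break
        except ValueError:
            continue
    else:
        raise ValueError(f"Unrecognized date: {record['signup_date']}")
    return {
        "customer_id": record["customer_id"].strip().lstrip("0"),
        "signup_date": signup,                      # always ISO 8601
        "country": COUNTRY_CODES.get(record["country"].lower(), record["country"].upper()),
    }

print([standardize(r) for r in RAW_RECORDS])
```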

Integration & standardization

Creating a reference data & analytics operating model

The reference operating model (see Figure 5) is customizable, but will remain largely intact at this level. As the nine components are detailed, the model will change substantially. It is common to see three to four iterations before the model is elaborate enough for execution.

For anyone looking to design a data and analytics operating model, Figure 5 is an excellent starting point as it has all the key components and areas.

Final step: Operating model architecture

Diverse stakeholders often require different views of the operating model for different reasons. As there is no one "correct" view of the operating model, organizations may need to create variants to fulfill everyone's needs. A good example is comparing what a CEO will look for (e.g., strategic insights) versus what a CIO or COO would look for (e.g., an operating model architecture). To accommodate these variations, modeling tools like ArchiMate [5] help create those different views quickly. Since the architecture can include many objects and relations over time, such tools also help greatly in maintaining the operating model.

The objective is to blend process and technology to achieve the end objective. This means documenting operational processes aligned to industry best practices like Six Sigma, ITIL, CMM, etc. for the functional areas. At this stage it is also necessary to define the optimal staffing model with the right skill sets. In addition, we take a closer look at what the organization has and what it needs, always keeping value and efficiency as the primary goal; striking the right balance is key, as it can become expensive to attain even a small return on investment. Each of the core components in Figure 5 needs to be detailed at this point, in the form of a checklist, template, process, RACIF, performance metrics, etc. as applicable (Figure 6 illustrates the detailing of three subcomponents one level down). Subsequent levels involve detailing each block in Figure 6 until task/activity-level granularity is reached.

Reference data & analytics operating model (Level 1)

The operating model components

The nine components shown in Figure 5 will be present in one form or another, regardless of the industry or the organization of business units. Like any other operating model, the data and analytics model also involves people, process and technology, but from a data lens.

Component 1: Manage process: If an enterprise-level business operating model exists, this component would act as the connector/bridge between the data world and the business world. Every business unit has a set of core processes that generate data through various channels. Operational efficiency and the enablement of capabilities depend on the end-to-end management and control of these processes. For example, the quality of data and reporting capability depends on the extent of coupling between the processes.

Component 2: Manage demand/requirements & manage channel: Business units are normally thirsty for insights and require different types of data from time to time. Effectively managing these demands through a formal prioritization process is mandatory to avoid duplication of effort, enable faster turnaround and direct dollars to the right initiative.

Sampling of subcomponents: An illustrative view

Component 3: Manage data: This component manages and controls the data generated by the processes from cradle to grave. In other words, it covers the processes, procedures, controls and standards around data required to source, store, synthesize, integrate, secure, model and report it. The complexity of this component depends on the existing technology landscape and the three Vs of data: volume, velocity and variety. For a fairly centralized or single-stack setup with limited tool and technology proliferation, this is straightforward. For many organizations, however, the people and process elements can become costly and time-consuming to build.

To enable certain advanced capabilities, architectural design and detailing are major parts of this component. Each of the five subcomponents requires a good deal of due diligence in subsequent levels, especially to enable "as-a-service" and "self-service" capabilities.

Component 4a: Data management services: Data management is a broad area, and each subcomponent is unique. Given exponential data growth and the expanding use cases around data, the ability to independently trigger and manage each of the subcomponents is vital. Hence, enabling each subcomponent as a service adds value. While detailing the subcomponents, architects get involved to ensure the process can handle all types of data and scenarios. Each of the subcomponents will have its own set of policies, processes, controls, frameworks, service catalog and technology components.

Enabling some of these capabilities as a service, and the extent to which they can operate, depends on the design of Component 3. It is common to see a few IOMs in place before the subcomponents mature.

Component 4b: Data analytics services: Deriving trustworthy insights from data captured across the organization is not easy. Every organization and business unit has its own requirements and priorities; hence, there is no one-size-fits-all method. In addition, with advanced analytics such as those built around machine-learning (ML) algorithms, natural language processing (NLP) and other forms of artificial intelligence (AI), a standard model is not possible. Prior to detailing this component, it is mandatory to understand clearly what the business wants and how your team intends to deliver it. Broadly, the technology stack and data foundation determine the delivery method and the extent of as-a-service capabilities.

Similar to Component 4a, IOMs help achieve the end goal in a controlled manner. The interaction model will focus more on how the analytics team will work with the business to find, analyze and capture use cases/requirements from the industry and business units. The decision on the setup — centralized vs. federated — will influence the design of subcomponents.

Component 5: Manage project lifecycle: The project lifecycle component accommodates projects of Waterfall, Agile and/or hybrid nature. Figure 5 depicts a standard project lifecycle process. However, this is customizable or replaceable with your organization’s existing model. In all scenarios, the components require detailing from a data standpoint. Organizations that have an existing program management office (PMO) can leverage what they already have (e.g., prioritization, checklist, etc.) and supplement the remaining requirements.

The interaction model design will help support servicing of as-a-service and on-demand data requests from the data and analytics side during the regular program/project lifecycle.

Component 6: Manage technology/platform: This component, which addresses the technology elements, includes IT services such as shared services, security, privacy and risk, architecture, infrastructure, data center and applications (web, mobile, on-premises).

As in the previous component, it is crucial to detail the interaction model with respect to how IT should operate in order to support the as-a-service and/or self-service models. For example, this should include the cadence for communication between various teams within IT, the handling of live projects, issue handling, etc.

Component 7: Manage support: No matter how well the operating model is designed, the human dimension plays a crucial role, too. Be it business, IT or corporate function, individuals’ buy-in and involvement can make or break the operating model.

The typical support groups involved in the operating-model effort include the BA team (business technology), PMO, architecture board/group, change management/advisory, training and release management teams, the infrastructure support group, the IT applications team and corporate support groups (HR, finance, etc.). Organizational change management (OCM) is a critical but often overlooked component; without it, the entire transformation exercise can fail.

Component 8: Manage change: This component complements the support component by providing the processes, controls and procedures required to manage and sustain the setup from a data perspective. This component manages both data change management and OCM. Tight integration between this and all the other components is key. Failure to define these interaction models will result in limited scalability, flexibility and robustness to accommodate change.

The detailing of this component will determine the ease of transitioning from an existing operating model to a new operating model (transformation) or of bringing additions to the existing operating model (enhancement).

Component 9: Manage governance: Governance ties all the components together, and thus is responsible for achieving the synergies needed for operational excellence. Think of it as the carriage driver that steers the horses. Although each component is capable of functioning without governance, over time they can become unmanageable and fail. Hence, planning and building governance into the DNA of the operating model adds value.

The typical governance areas to be detailed include the data/information governance framework, charter, policies, processes, controls and standards, and the architecture to support enterprise data governance.

Intermediate operating models (IOMs)

As mentioned above, an organization can create as many IOMs as it needs to achieve its end objectives. Though there is no one right answer to the question of the optimal number of IOMs, it is better to have no more than two IOMs in the span of a year, to give sufficient time for model stabilization and adoption. The key factors that influence IOMs are budget, regulatory pressure, industry and technology disruptions, and the organization's risk appetite. The biggest benefit of IOMs lies in their phased approach, which helps balance short-term priorities, manage the risks associated with large transformations and satisfy top management's expectation of seeing tangible benefits at regular intervals for every dollar spent.

DAOM (Level 2)

To succeed with IOMs, organizations need a tested approach that includes the following critical success factors:

  • Clear vision around data and analytics.
  • Understanding of the problems faced by customers, vendors/suppliers and employees.
  • Careful attention paid to influencers.
  • Trusted facts and numbers for insights and interpretation.
  • Understanding that the organization cannot cover all aspects (in breadth) on the first attempt.
  • Avoidance of emotional attachment to the process, or of being too detail-oriented.
  • Avoidance of trying to design an operating model optimized for everything.
  • Avoidance of passive governance — as achieving active governance is the goal.
Methodology: The big picture view

Moving forward

Two factors deserve highlighting. First, as organizations establish new business ventures and models to support their go-to-market strategies, their operating models may also require changes. However, a well-designed operating model will be adaptive enough to new developments that it should not change frequently. Second, the data-to-insight lifecycle is a very complex and sophisticated process, given the constantly changing ways of collecting and processing data. Furthermore, at a time when complex data ecosystems are rapidly evolving and organizations are hungry to use all available data for competitive advantage, enabling things such as data monetization and insight-driven decision-making becomes a daunting task. This is where a robust data and analytics operating model shines.

According to a McKinsey Global Institute report, "The biggest barriers companies face in extracting value from data and analytics are organizational." [6] Hence, organizations must prioritize and focus on people and processes as much as on technological aspects. Just spending heavily on the latest technologies to build data and analytics capabilities will not help, as it will lead to chaos, inefficiencies and poor adoption. Though there is no one-size-fits-all approach, the material above provides key principles that, when adopted, can deliver optimal outcomes: increased agility, better operational efficiency and smoother transitions.

Endnotes

1. A tool that allows one to describe, design, challenge and pivot the business model in a straightforward, structured way. Created by Alexander Osterwalder, of Strategyzer.
2. The Operating Model Canvas helps to capture thoughts about how to design operations and organizations that will deliver a value proposition to a target customer or beneficiary. It helps translate strategy into choices about operations and organizations. Created by Andrew Campbell, Mikel Gutierrez and Mark Lancelott.
3. First described by Michael E. Porter in his 1985 best-seller, Competitive Advantage: Creating and Sustaining Superior Performance. This is a general-purpose value chain to help organizations understand their own sources of value, i.e., the set of activities that helps an organization generate value for its customers.
4. The 7S framework is based on the theory that, for an organization to perform well, seven elements (structure, strategy, systems, skills, style, staff and shared values) need to be aligned and mutually reinforcing. The model helps identify what needs to be realigned to improve performance and/or maintain alignment.
5. ArchiMate is a technical standard from The Open Group, based on the concepts of the IEEE 1471 standard. It is an open and independent enterprise architecture modeling language. For more information: www.opengroup.org/subjectareas/enterprise/archimate-overview.
6. The Age of Analytics: Competing in a Data-Driven World, McKinsey Global Institute. Retrieved from www.mckinsey.com/~/media/McKinsey/Business%20Functions/McKinsey%20Analytics/Our%20Insights/The%20age%20of%20analytics%20Competing%20in%20a%20data%20driven%20world/MGI-The-Age-of-Analytics-Full-report.ashx

References

https://strategyzer.com/canvas/business-model-canvas

https://operatingmodelcanvas.com/

Enduring Ideas: The 7-S Framework, McKinsey Quarterly, www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/enduring-ideas-the-7-s-framework.

www.opengroup.org/subjectareas/enterprise/archimate-overview

Importance of Data Readiness Check

How many times have we gone through the routine of seeing data issue after data issue when something goes live in production, no matter how much due diligence was put in place? Despite industry-leading frameworks, operating models, air-tight processes and best-in-class templates, data issues creep in at multiple touchpoints. Based on my assessment of UAT and post-go-live scenarios at various clients, more than 60% of the issues faced are related to data. Some of the organizations assessed had a piecemeal approach (or a quick fix, in their parlance) to reduce data issues, but it was reactive rather than sufficient, foolproof, scalable or repeatable.

The project types assessed include data migration, data integration, data transformation and creating/enhancing a set of analytical reports. In all these scenarios, data issues topped the charts as the number one pain area to be addressed. This is because the data aspect was either overlooked or not given the required level of importance. None of the processes were designed with data as the core/foundation. Everyone understands that data is an asset, yet no one has designed the frameworks, models and processes with data as the key focus; it is just one module or component in the entire spectrum of things. Addressing all aspects around data in a systematic manner, in addition to the existing parameters, is key to reducing data issues.

Figure 1: Key areas around data

Figure 1 shows some of the key areas that need to be addressed around data as a bare minimum to reduce data related issues.

A conceptual "Data Readiness" framework that can be customized and scaled up as required to suit various project types is shown here. Conducting an end-to-end data readiness check using such a well-defined framework, covering all major touchpoints for data, will help address data-related issues early. While this framework is predominantly helpful during the approval/kick-off phase of projects, it extends all the way until the project goes live and is declared stable.

Figure 2: Base framework for data readiness

This highly scalable and customizable framework comes with supporting artifact(s) for each area as applicable; refer to Figure 3 for details. These artifacts are spread across the people, process and technology areas. Because the framework touches all aspects related to data, it automatically addresses issues like schedule overruns, cost overruns, rework and low user confidence in IT.

Figure 3: Supporting artifacts & checklist items

The biggest advantage of this framework is that it can be easily blended with any existing governance model and PMO model an organization might be following.

Divestiture Framework – Data Perspective

Introduction

The selling of assets, divisions or subsidiaries to another corporation or individual(s) is termed divestiture. According to a divestiture survey conducted by Deloitte, "the top reason for divesting a business unit or segment is that it is not considered core to the company's business strategy," with respondents citing "the need to get rid of non-core assets or financing needs" as their top reason for divesting an asset. In some cases, divestiture is done to de-risk the parent company from a high-potential but risky business line or product line. Economic turnaround and a wall of capital also drive the demand for divestitures.

Divestitures have some unique characteristics that distinguish them from other M&A transactions and spin-offs; for example, the need to separate (i.e., disentangle) the business and technology assets of the unit being sold from those of the seller before the sale is executed. Performing the disentanglement under tighter time constraints (i.e., before the close of the transaction, unlike in an acquisition scenario) adds to the complexity.

The critical aspect of the entire process is data disposition. Though similar technologies may be deployed on the buyer and seller sides, the handover can end up being painful if a formal process is not adopted right from the due-diligence phase. This is because, in a divestiture, the process is not as simple as a "lift, shift and operate" exercise. There is a handful of frameworks available in the market detailing the overall divestiture process; nevertheless, the core component, data, is touched on only at the surface level and not expanded enough to shed light on the true complexities involved.

What does the trend indicate?

Divestitures and carve-outs are very common in the life sciences, retail and manufacturing industries.

If we observe the economic movements and divestiture trends over the past decade, it is clear that economic conditions have a direct correlation with, and a significant impact on, divestiture activity. Organizations therefore have to proactively assess their assets at least annually to understand which are potential candidates for divestiture and prepare accordingly. This way, when the time is right, the organization will be well prepared for the transition services agreement (TSA) phase.

The bottom-line

Overall planning is a critical success factor; however, a process without sufficient planning around the data component can result in surprises at various points during the divestiture and can even break the deal. At the end of the day, shareholders and top management will look at the data to judge whether the deal was successful.

Faster due diligence, quicker integration and visible tracking of key metrics/milestones from the start are what one looks for. According to industry experts, having a proactive approach in place has helped sellers increase deal valuations.

The Divestiture Model – Buyer and Seller Perspective

Broadly, the divestiture model has three components, Core, Seller and Buyer, wrapped in an overarching governance layer:

  1. Core component: Handles all activities related to overall due diligence on data, such as identifying data owners and stewards, the data disposition strategy, value creation opportunities (VCO), and enterprise-level data integration with core applications at the buyer and seller ends.
  2. Seller component: Focuses on seller side activities related to data like data inventory, business metadata documentation, data lineage/dependency, business process, data flow/process flow diagrams and level of integration with enterprise apps, business impact on existing processes, and resource movement (technology and people).
  3. Buyer component: Focuses on buyer-side activities related to data like data mapping, data integration, data quality, technology, capacity planning, and business process alignment.
  4. Governance: The entire process is governed by a 360-degree data/information governance framework to maintain the privacy, security, regulatory, and integrity aspects of data between the two organizations.

Divestiture model – The Core

Addressing the “data” component:

Selling Organization

Only a few sellers understand that just getting a deal signed and closed isn't always the end. From a pre-divestiture perspective, the organization should have a well-defined process for possible carve-outs, a good data inventory with documented business metadata, documented business processes around the non-performing assets, and a clear data lineage and impact document. Armed with this information, the selling organization can get into any kind of TSA comfortably and answer most of the questions the buyer will raise during its due diligence.

From a post-divestiture perspective, the selling organization needs to assess which technologies and processes must be tweaked or decoupled to achieve the company's post-divestiture strategy. A plan is needed to minimize the impact on operational dependencies between existing systems and processes and enterprise applications like ERP once the data stops coming in. If this is not analyzed thoroughly and well in advance, it can have a crippling effect on the entire organization. A typical mistake the selling organization commits is looking only at the cost savings from alignment/rationalization of infrastructure and missing the intricate coupling the data has at the enterprise level.

Having a divestiture strategy with data as the core of the framework can address a host of issues for the selling organization and speed up the pace of transactions.

Buying Organization

There are two potential scenarios for the buying organization: either it already has the product line or business unit and is looking to enhance its position in the market, or it is extending itself into a new line of business with no past hands-on experience. In the former case, the complexities can be primarily attributed to the migration and merging of data between the two organizations. Questions arise such as: what data to keep/pull, what technology to use, what data requires cleansing, how similar the processes are, what capacity planning is needed to house the new data, what tweaks will be required to existing reports, and what new reports need to be created to show shareholders the benefit of the buy.

The pre-divestiture stage will address most of the questions raised above and, based on these parameters, a data disposition strategy is drawn up. During the divestiture stage, when the data disposition is actually happening, new reports, scorecards and dashboards are built to ensure complete visibility across the organization at every stage of the divestiture process.

In the latter case, where the organization is extending itself into a new line of business, questions arise such as: should a lift-and-shift strategy be adopted, should just the key data be brought in, or should it be a start from a clean slate? There is no one correct answer, as it depends on the quality of the processes, technology and data coming from the selling organization.

Divestiture Data Framework

The Divestiture Data Framework was designed to highlight the importance of the core component: data.

Divestiture Data Framework

One of the key outputs of this framework is a customized technology and data roadmap. The roadmap will contain recommendations and details around both data and technology complexities that need to be addressed prior to, during, and post the divestiture to ensure a higher success rate for both the selling and buying organization.

Social Media Influence Scoring – Healthcare Providers

Pricing strategy for the budget constrained BI client

Abstract

The financial services and public services industries currently face increasing regulatory norms, cuts in IT budgets and the need for a strong IT foundation to stay ahead of the competition. Though CIOs seem to be pushing for IT transformation projects, a Gartner survey indicates that "For 2013, CIO IT budgets are projected to be slightly down, with a weighted global average decline of 0.5 percent."

As a seasoned consultant, you might be able to see the true potential of your client. But for various reasons, like budget constraints or other IT priorities, your request to implement certain key projects might not see the light of day. This article explains what you can do to achieve a win-win situation.

The D2RC model

In this model, the consultant is required to do two things: one, secure a go-ahead from the client to start the project by explaining the big picture; two, get the required approvals to reinvest the savings obtained in the first three months (after the due-diligence phase) on key projects identified by the consultant. The four stages of the D2RC (Due-diligence – Deliver – Reinvest – Charge) model are detailed below:

Pricing

Due-Diligence – This phase charts out clearly defined and agreed-upon benefits, with an up-front process to measure the resultant value. The consultant, based on his or her knowledge of the client, picks the least-effort, maximum-impact project that could bring about substantial savings to kick-start the "Deliver" phase. The consultant also identifies potential projects for implementation, along with a roadmap. This is typically a consulting phase.

Deliver – In a short duration, say two to three months, the identified project must be executed and the savings required for reinvestment secured. If the consultant is going to take longer than three months, it is better that the project is reconsidered or not attempted, unless there is a strong rationale behind it.

Re-invest – The estimated savings over a one-year period from the "Deliver" phase fund the potential projects identified in the due-diligence phase. Projects have to be hand-picked such that the funds are sufficient to implement at least 70-75% of each project; this is to show tangible benefits to the business before requesting the remaining funds.
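As a rough, hypothetical illustration of the funding check behind the Re-invest phase (all figures are invented), the snippet below verifies that the projected one-year savings from the Deliver phase cover at least 70% of the cost of the shortlisted projects:

```python
# Hypothetical figures to illustrate the Re-invest funding check.
annual_savings = 400_000               # estimated one-year savings from the Deliver phase
candidate_projects = {                 # shortlisted project: estimated cost
    "Self-service reporting": 250_000,
    "Data quality firewall": 300_000,
}
FUNDING_THRESHOLD = 0.70               # fund at least 70-75% before asking for more

total_cost = sum(candidate_projects.values())
coverage = annual_savings / total_cost
verdict = "sufficient to proceed" if coverage >= FUNDING_THRESHOLD else "trim the project list"
print(f"Savings cover {coverage:.0%} of the shortlisted project cost: {verdict}")
```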

Charge – Midway through the Re-invest phase, the consultant can show tangible benefits to the client and either charge for the effort spent until then or ask for a percentage of the savings as the fee. This is more like a risk-and-reward model; at the end of the day, it all depends on the progress made and the tangible benefits the client can see.

Making it work:

The first step for the consultant is to convince his or her own company that there is a lot of potential with the client. The second is to secure client buy-in before the engagement starts, which is not only tough but also risky. To make this model work, the consultant must:

  • Have sufficient knowledge of the client, the environment and the culture (preferably having worked with the client)
  • Have a good rapport with key stakeholders (business, IT and top management)
  • Secure long-term commitment from the client, and support when it comes to working with other vendors
  • Clearly define the scope of activities, the quantum of work and the projects to be carried out
  • Exercise caution in explaining the model, billing and charge-backs when tangible benefits are realized

The Risks:

If the "due-diligence" and "deliver" phases combined take too long to show tangible results, the planned projects might not even kick off. If the due diligence was based on sub-standard analysis or incorrect assumptions, the savings obtained in the "deliver" phase will not be sufficient to meet the funds required for the re-invest phase. If the consultant cannot show visible progress or results at regular intervals, the project is destined for failure. Above all, the consultant must have a clear view on when to pull the plug should anything go wrong, to ensure damage control.

Presence of other incumbent vendors can also pose a risk if there is no client buy-in at the executive level to enable smooth implementation.

Possible due-diligence projects:

Here is a sample list of least-effort, maximum-impact projects the consultant could consider.

  • Technology stack rationalization – If the client has a host of tools in their environment, this is a good place to start. The annual savings on licenses can be used for reinvestment
  • Lean BI projects – Where the client has a lot of age-old processes in place, studying a few key processes and trimming them could help achieve the required savings
  • Centralization – If the client operates on a fragmented model, analyzing the impact of centralizing a few key processes could bring in efficiency and savings; for example, a centralized report creation team

Benefits:

  • Guaranteed savings for the client
  • The consultant can build the client's confidence/trust and a possible long-term relationship
  • The client gets to execute projects he or she has been wanting to run, without having to go to top management for funds
  • Prevents the client from floating an RFP (assuming the client does not have the funding to float one)

Conclusion

There are several pricing models available in the market. What differentiates this model from the others is that it combines a risk-and-reward model with a results-oriented pricing model. Given today's market conditions, this model is a win-win for both the consultant and the client, provided the risks are understood and cautiously handled.

Sentiment Analyzer Evaluation Parameters

Product and service organizations are increasingly showing keen interest in sentiment analysis, as social media chatter can make or break their reputation overnight. Understanding the polarity/sentiment of every user's post and addressing it on time is becoming critical. Considering that thousands of posts are generated every day on a single topic, it is nearly impossible to analyze them manually.

Sentiment analysis algorithms are not, and cannot be, 100% accurate. But they can certainly provide the warning signals needed to change the (product) strategy, if required, at the right time. They can also help the organization save time by narrowing thousands of tweets down to the few hundred that need to be analyzed manually.
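As a minimal sketch of how such narrowing can work (this is a toy keyword approach, not any particular vendor's algorithm), the snippet below assigns a crude polarity score and flags only the clearly negative posts for manual review; the lexicon and threshold are assumed for illustration.

```python
# Tiny keyword-based polarity scorer; the lexicon and threshold are illustrative.
POSITIVE = {"love", "great", "excellent", "fast", "reliable"}
NEGATIVE = {"hate", "broken", "slow", "crash", "refund", "worst"}

def polarity(post: str) -> int:
    """Positive score = favourable post, negative = unfavourable, 0 = neutral/unknown."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def needs_manual_review(posts, threshold=-1):
    """Narrow thousands of posts down to the clearly negative ones for analysts."""
    return [p for p in posts if polarity(p) <= threshold]

posts = [
    "Love the new release, great job",
    "App keeps freezing, worst update ever, want a refund",
    "Installed the update today",
]
print(needs_manual_review(posts))  # flags only the second post
```

In practice the lexicon, tokenization and threshold would be far richer, but the principle of filtering down to a manageable review queue is the same.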

Once the need for sentiment analysis has been identified, the organization starts to scout for COTS products and finds there are far too many. This article lists 40+ parameters under six key buckets that the organization can use to shortlist the right tool.

The table below lists the six buckets along with the parameters and a short description of each; a simple scoring sketch follows the table.

I. Performance
1. Efficiency – Scan speed (e.g., number of tweets/sentences scanned and analyzed per second)
2. Robustness – The tool runs consistently without crashing (e.g., large loads of data can cause the application to hang after a certain point)
3. Data size – Ability to scale to large data sets; partly a capability of the tool (architecture) and partly of the underlying database

II. Algorithms
4. Natural language processing (NLP) – NLP or machine-learning algorithms, mostly statistics-based
5. Computational linguistics – Statistical and/or rule-based modeling of natural language; focuses on the practical outcome of modeling human language use
6. Text analytics – Set of linguistic, statistical and machine-learning techniques that model and structure information for BI, exploratory data analysis, research or investigation
7. Proprietary vs. open algorithms – Use of free and open source algorithms vs. proprietary algorithms
8. Mostly human interpretation – After text extraction, most of the sentiment analysis is performed manually by people
9. Bayesian inference – Statistical inference in which evidence or observations are used to calculate the probability of the sentiment
10. Keyword based – Keyword-based search
11. Combination of the above algorithms – Ability to pass the text through multiple algorithms to get the right sentiment; number of techniques employed
12. Ability to override sentiment – Automated sentiments are not always right and might need correction before reporting

III. Functionality
13. Ability to fine-tune the modeling algorithms – Ability to easily modify an existing algorithm to enhance its capability (e.g., add an additional layer of filtering)
14. Plug-in/API/widget support – Ability to add third-party plug-ins to perform specialized tasks (e.g., 80-20 suppression, additional graphs)
15. Data filtering/cleansing capability – Useful if there are two similar products in the market (e.g., Norton Internet Security vs. Norton 360)
16. Value substitution capability – Useful for tweets where users use different abbreviations (e.g., MS vs. Microsoft) or make spelling errors
17. Supported platforms – Ability to work on Linux, Windows, Mac OS, mobile platforms, etc.
18. Alert/trigger functionality – Ability to set triggers on key metrics where real-time monitoring is available
19. Auditing/log feature – An audit feature helps capture the amount of data grabbed and processed
20. Geo identification – Ability to identify the source of the conversation (e.g., Asia, US, UK)
21. Multi-lingual support – Support for more than one language (e.g., French and English)

IV. Reporting
22. Export options (output) – Excel, PDF, publish to portal
23. Visualization options – Variety of graphs: bar, pie, line, radar, etc.
24. Dashboard capability – Refreshable and drillable dashboards
25. Customizable reports – Ability to have calculated columns; generate different visualizations of the same data easily
26. Pre-defined reports – Out-of-the-box reports to get social media monitoring up and running instantly
27. Drill-down/drill-up facility on reports – Ability to see detailed/summarized information by drilling on an item in the report
28. Web interface – View and analyze reports online

V. User interface & integration
29. Training/learning curve – Tools like Radian6 require experts to handle the tool
30. Targeted user group – Analysts, business users, a combination, etc.
31. Error reporting – A mechanism to report errors when any of the configured feeds or reports fail
32. Web interface – The complete tool is available online and can be used to configure and build reports; no thick client
33. Complete GUI support – No command-line interface needed to perform any task
34. Bundled database – Does the tool come with a built-in database, or does a third-party database like MySQL or MS Access need to be procured?
35. Native connectivity to popular data sources – Built-in native connectivity to popular forums, groups, blog sites, micro-blogs, etc.
36. Integration with BI and CRM tools – Ability to integrate the processed data with BI tools and other CRM data; partnerships with leading BI vendors could be considered
37. Approved APIs – APIs approved by social media providers (e.g., Twitter) enable direct connectivity to their servers and enhance the rate of data pull

VI. Vendor credibility
38. Established client base – Referenceable clientele
39. Licensing options – Free/trial/paid; user-based, server-based, service-based licenses, etc.
40. References – Installations with more than one to two years in production; this might not always apply, as new companies come up with very innovative solutions
41. Support services – Tool support, training, etc.
42. Consulting services – Analysis of data for the client
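One practical way to use the six buckets above is a simple weighted scorecard. The sketch below is a hypothetical example: the weights and the 1-5 ratings are assumptions, not real product evaluations.

```python
# Illustrative weighted scorecard across the six buckets; the weights and the
# 1-5 ratings are assumptions, not real product evaluations.
WEIGHTS = {
    "Performance": 0.20, "Algorithms": 0.25, "Functionality": 0.20,
    "Reporting": 0.15, "UI & Integration": 0.10, "Vendor Credibility": 0.10,
}

ratings = {
    "Tool A": {"Performance": 4, "Algorithms": 3, "Functionality": 4,
               "Reporting": 5, "UI & Integration": 3, "Vendor Credibility": 4},
    "Tool B": {"Performance": 3, "Algorithms": 5, "Functionality": 3,
               "Reporting": 3, "UI & Integration": 4, "Vendor Credibility": 3},
}

def weighted_score(tool_ratings: dict) -> float:
    return round(sum(WEIGHTS[bucket] * rating for bucket, rating in tool_ratings.items()), 2)

# Rank the candidate tools from highest to lowest weighted score.
for tool in sorted(ratings, key=lambda t: weighted_score(ratings[t]), reverse=True):
    print(tool, weighted_score(ratings[tool]))
```

Adjust the weights to reflect what matters most to your organization, for example more weight on Algorithms for an analytics-heavy team.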

Feel free to comment on the above parameters. If you have any additional parameters that you feel are relevant, do let me know and I will be happy to include them (with due credit to you).

User grouping for high impact social media strategy definition

INTRODUCTION

Everyone understands the value of grouping users. There are various ways to classify users, but primarily they fall into one of the three levels defined below:

 

LEVEL 1: PRIMITIVE

This is the easiest and most common way of grouping users. This level is analogous to a company in the US trying to understand Japanese culture by reading books about Japan: the distance between the user and the organization is high. Beyond a certain point, this level fails to connect with the users.

Organizations can only make generic decisions based on the typical characteristics of this user group. For example, if an organization developing a portal finds that 75% of its users are teenage females, it might choose pink for the portal theme. But if this group happens to like rock music and partying, the organization's decision to go with pink could be a road to disaster.

LEVEL 2: PASSIVE

Level 2 is all about studying users passively but closely to understand their presence (where they are active), their behavior (what they do online) and their tastes (what they like). This is analogous to a company in the US sending someone to Japan to collect information. Including Level 1 grouping at this level would be an added advantage. A good starting point is to analyze data captured from social media networks like Facebook, Twitter, etc.

LEVEL 3: ACTIVE

At this level, handpicked influencers are analyzed in depth from both an internal and an external network perspective. The right content (information, samples) is then pushed to this small, handpicked group to create the impact. This level is termed "Active" because, on identification of influencers, some amount of handshake/contact/personal touch happens with the influencer (user). Tools like Radian6, Sysomos and Klout help identify these potential influencers. This level is hard, time-consuming and relatively expensive, but has a higher pay-off.

Forrester’s User life-cycle

Forrester's user life-cycle (see Figure 2) categorizes users into six groups: Creators, Critics, Collectors, Joiners, Spectators and Inactives. This life-cycle maps to Level 2 above, as it groups users primarily based on their age and characteristics alone. But it does not tell you how many creators or critics in a particular age group posted only once, nor is it possible to determine how many joiners became inactive after their first login.

Source: Forrester Research
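To make that gap concrete, the sketch below uses a hypothetical activity log to count creators who posted only once and joiners who never returned after their first login; the log format and group labels are assumptions for illustration.

```python
from collections import Counter

# Hypothetical activity log: (user_id, forrester_group, action)
activity = [
    ("u1", "Creator", "post"), ("u1", "Creator", "post"),
    ("u2", "Creator", "post"),
    ("u3", "Joiner", "login"),
    ("u4", "Joiner", "login"), ("u4", "Joiner", "login"),
]

posts_per_creator = Counter(u for u, group, action in activity
                            if group == "Creator" and action == "post")
one_time_creators = [u for u, n in posts_per_creator.items() if n == 1]

logins_per_joiner = Counter(u for u, group, action in activity
                            if group == "Joiner" and action == "login")
inactive_after_first_login = [u for u, n in logins_per_joiner.items() if n == 1]

print("Creators who posted only once:", one_time_creators)                # ['u2']
print("Joiners inactive after first login:", inactive_after_first_login)  # ['u3']
```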

 

User Profiler

The profiler designed here (see Figure 3) categorizes users into six major groups: Joiner, Information Seeker, Active/Dynamic, Responder, Creator and In-active. It is possible to map these groups to Forrester's user life-cycle groups, with the exception of the Collectors group.

 

What do these categories mean?

Joiner – The moment a user registers, they enter the Joiner level with a default maturity of 0.

Info Seeker – Users who seek answers to questions are the information seekers. The quality of the question decides the user's maturity, which typically ranges between 1 and 3.

Active/Dynamic – These users actively seek information and respond to others in the group regularly. The maturity for these users ranges between 2 and 3.

Responder – Responders are mostly users within the organization or industry experts. They either have the information or the resources required to secure the information needed by information seekers. Maturity lies between 2 and 4.

Listener – These users log in and search for answers but have not posted anything yet; they are mostly spectators. Beyond a certain point, the lack of any user activity except logins indicates either a bot or an information collector (RSS).

Transition – This is the phase where users tend to break the routine and shift to other sites or become inactive. One possibility is that the user has found what he/she was looking for.

In-active – If the user does not have a single login in, say, one year, the user is inactive and their maturity score drops to 0. This bucket helps identify and purge records easily for maintenance and performance purposes.

Creator – Top management, visionaries and industry experts who are capable of generating interest in the public are Creators. Default maturity is assumed to be 5.
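A minimal sketch of how the profiler's categories and maturity ranges could be encoded is shown below; the maturity ranges mirror the descriptions above, while the classification rules themselves are simplified, illustrative assumptions.

```python
from datetime import date, timedelta

# Maturity ranges per category, taken from the descriptions above; Listener and
# Transition have no explicit range in the text, so they are omitted here.
MATURITY = {
    "Joiner": (0, 0), "Info Seeker": (1, 3), "Active/Dynamic": (2, 3),
    "Responder": (2, 4), "Creator": (5, 5), "In-active": (0, 0),
}

def classify(last_login: date, posts: int, responses: int, is_expert: bool) -> str:
    """Simplified, illustrative classification rules; a real profiler would use richer signals."""
    if date.today() - last_login > timedelta(days=365):
        return "In-active"
    if is_expert:
        return "Responder" if responses > 0 else "Creator"
    if posts == 0:
        return "Listener"
    return "Active/Dynamic" if responses > 0 else "Info Seeker"

category = classify(date.today(), posts=3, responses=1, is_expert=False)
print(category, "maturity range:", MATURITY.get(category, "n/a"))  # Active/Dynamic (2, 3)
```

A real profiler would also track the Joiner and Transition states explicitly and use richer activity signals.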

Please feel free to get in touch with me if you have any queries or have suggestions around improving the above user profiler.

Data Warehouse Rationalization: The Next Level

Introduction

As an organization grows, organically or inorganically, there will be an infusion of multiple technologies, a loss of governance and a drift away from the centralized model. In organizations where the BI environment is not robust, there will be increased usage of MS Excel and MS Access to meet various business needs like cross-functional reporting, dashboards, etc.

When the organization is small, business requirements can be met by manually crunching numbers. But as it grows, factors like localization, compliance, and the availability and skill sets of resources make this unmanageable and tedious. Similarly, as the usage of Excel spreadsheets grows, the complexity grows along with it (macros, interlinked spreadsheets, etc.).

There are various models and frameworks for rationalization available in the industry today, each designed to address a specific problem and offer an excellent short-term solution. Interestingly, all these industry models lack two important factors: a governance component, and pillars/enablers to sustain the effort and add value to the client.

 

Rationalization – The piecemeal problem:

When the environment slowly starts to become unmanageable, organizations look towards rationalization as a solution. Some of the factors that lead to the need for rationalization are:

  • Compliance issues
  • License fees consuming a large portion of the revenue
  • Consolidation of operations is a bottleneck
  • Data inconsistencies start to creep in
  • Integration post merger or acquisition
  • Analysts spending more time validating the data rather than analyzing the information

Generally, organizations identify what needs rationalization based on what is impacting their revenue or productivity the most (e.g., tool license fees). Consultants are then called in to fix that particular problem and move on. There are two issues here:

  • First, the consultant concentrates only on the problem at hand and does not examine the environment for the root cause in order to fix it permanently
  • Second, the framework used by the consultant fixes the problem perfectly from a short-term perspective but does not guarantee a long-term solution

 

The Solution:

Rationalization models in the industry today need an update to include a governance component that helps assess, rectify and sustain the rationalization effort in the long run. In addition, enabling components need to be identified and included to add value to the overall exercise.

Figure 1

A typical rationalization model with a governance component and post-rationalization enablers is shown in Figure 2. This model is not comprehensive and shows only the most common rationalization scenarios.

 

Pre-Rationalization Assessment

A pre-assessment, when conducted, evaluates whether the requested rationalization is all that is required to fix the existing problems or whether something more is needed for a permanent fix. Typically, a root cause analysis helps identify the actual reason behind the current scenario. Along with this, the existing governance maturity and value creation opportunities are also identified to help enhance the user experience, adoption, sustainability and stability of the environment.

A simple Excel questionnaire should help initiate the pre-assessment phase (contact the author if you are interested in this questionnaire) without delving deep into the intricacies of the environment. Based on the assessment findings, the course of the rationalization exercise can be altered; for example, a formal data governance program can be kicked off in parallel.

 

Rationalize

This is the phase where the actual rationalization takes place, using customized models and frameworks. Complete inventory gathering, metadata analysis, business discussions to understand the needs, etc. are performed. At this point, the client's short-term needs are met, but the sustainability of the environment would still be questionable.

 

Governance Component

The depth of coverage for the governance component is to be mutually decided between the client and consultant based on environmental factors. Parameters like size of the IT team, existing processes and policies, maturity, etc. play a major role in deciding the depth required.

The governance component introduced within the rationalization framework typically deep-dives into the system to understand the policies and standards, roles and responsibilities, etc. from various perspectives, in order to fix the problem or find a workaround and to ensure the scenario does not repeat itself.

Enablers

Post-rationalization enablers have no dependency on either the rationalization effort or the governance component. This component is kicked off after the rationalization phase, either as a separate project or as an extension of it. The enabling components, though optional, play a vital role in adding value to the client by sustaining the setup from a long-term and user-adoption perspective. Where more than one enabler has been identified, it is sufficient to address the key enabler first.

For example, if the client requests report rationalization, there is a high probability that the data model was not flexible (hence users were creating multiple reports) or that there was a training issue. This would have been identified during the pre-assessment phase and can be addressed as part of this phase by setting up something like a global report shop. The governance component would have helped address part of this issue by ensuring policies are in place so that users do not create reports at will, and are instead directed to the right team or process to meet their requirements.

Taking another example, if the client goes for a KPI and metrics standardization exercise, there is a good chance that data model changes will be needed, dashboards will have to be recreated and analytic reports will have to be designed from scratch. This can be handled by setting up a core analytics team well in advance. If this is not identified and addressed as part of the KPI standardization project, users' day-to-day activities will be hampered, resulting in poor adoption.

Benefits

Primary advantages of this rationalization model are:

  • Maximum revenue realization in a short duration
  • Helps sustain the quality of the environment
  • Enhances user productivity and adoption

Conclusion

Rationalization must be seen from a broader, long-term perspective. Rationalization without a governance component will not be robust, and rationalization without supporting pillars like the enablers described above will not serve the long-term purpose.