Picture this: the familiar thud of my shoes on pavement, my breath syncing with each stride. For the last few years, running has been my sanctuary; a place where pacing, rhythm, and focus reign supreme. Half-marathons, 15Ks and 10Ks all followed a predictable dance (a test of linear endurance). You plan your splits, find your groove, and push through the mental fog. Then I signed up for my first Spartan 5K, an obstacle course race that laughed in the face of everything I’d learned. Mud, walls, rope climbs and sheer chaos flipped my running world upside down. Was it a one-off thrill, or the start of a new obsession? Let’s unpack this wild one a bit.
“I didn’t realize it then, but this shift from predictability to chaos would mirror the very transformation many organizations are facing today.”
The comfort of the known path: The Marathon Mindset
If you’ve followed my blog (Marathon Diary), you know my running journey has been a masterclass in Pacing, Rhythm, and Focus. Add proper nutrition, hydration, and training, and you’ve got a formula for crossing finish lines and setting PRs now and then. In a nutshell, every disciplined runner (and strategist) eventually learns this triad:
Pacing (resource management): This is the strategic budgeting of energy, ensuring no burnout before the final stretch. It’s the disciplined execution of a long-term business plan. Spread it wisely over kms/miles to avoid crashing.
Rhythm (Process Consistency): The steady cadence of breath and stride creating a meditative, predictable workflow. It’s the commitment to established, repeatable processes for consistent delivery.
Focus (Monotony Resilience): The mental battle against laziness and fatigue is about single-minded dedication to a long-term goal. It’s the drive to push past inertia when work feels routine. You wrestle with your mind, pushing past the voice begging you to stop.
In traditional running, success is largely determined by meticulous planning, hydration, nutrition, and the ability to maintain a steady state. It is predictable, controlled, and deeply satisfying at the end. It’s a test of endurance, where you master your body and mind over long, unbroken stretches. Or so I thought, until the Spartan 5K came calling.
Spartan 5K: Chaos as the New Constant
The Spartan 5K was less a race and more a rapid-fire series of high-stakes, high-impact challenges. From the starting line, the routine vanished, replaced by a need for immediate adaptation: a six-foot wall requiring upper-body power, followed by a low crawl under barbed wire demanding immediate shifts in locomotion. Obstacles like rope climbs and the bucket carry transformed a cardio challenge into an integrated test of strength, agility, and grit.
This wasn’t about finding a rhythm; it was about constant disruption and real-time problem-solving. It struck me how much this mirrored what I see in the Retail-CPG sector today where disruption, consumer shifts, and AI-driven competition have turned once-stable playbooks into obstacle courses.
The Agility Shift: Moving from Pacing to Power Bursts
The Spartan 5K forced a paradigm shift in how I viewed strategy and execution. From Endurance to Agility, the three shifts:
Pacing → Power Bursts: My marathon strategy of calculated splits was useless. Success was measured in sprints to an obstacle, maximum-effort bursts to clear it, and on-the-fly recovery before the next challenge. This mirrors the need for modern professionals to transition from slow, linear projects to rapid sprints and intense periods of deep work.
Rhythm → Interruption Resilience: The steady flow was deliberately broken by obstacles. My heart rate and muscle recruitment spiked and dropped repeatedly. The lesson: the capacity to perform, recover quickly, and adapt to the next interruption is more valuable than maintaining an unbroken groove.
Focus → Split-Second Problem-Solving: The focus shifted from enduring monotony to immediate risk assessment (like sizing up the rope climb). With zero practice and no tricks to fall back on, it demanded a different kind of mental resilience: the ability to watch how others tackled the rope climb and then execute under pressure, not just push through fatigue.
Crossing that finish line felt different from any road race victory I have experienced. It had nothing to do with time; it was a testament to raw agility and the collaborative spirit forged in the struggle. In today’s market, endurance still matters, but agility wins the race.
The Crossroads: Rhythm vs. AI-gility
“We’re now entering an age of AI-gility — where both human adaptability and AI-assisted speed define success.”
My journey is now at a fascinating crossroads. My personal blog is full of stories about pursuing PRs (Personal Records) and the meticulous planning of road running. But the Spartan experience suggests a deeper truth: strategy must be agile. It raises a question relevant to any career, industry or business:
Do we perfect the single, linear path we know, or do we seek out disruptive challenges that force us to develop new layers of strength and resilience?
Your Turn: Join the Conversation
I’m turning to my professional network for insights.
AI-gility in Action: Have you encountered obstacles where a project or market movement completely disrupted your plans/workflow and forced you to pivot? How did you adapt your strategy in the ‘Spartan moment’?
The Trifecta Dare: Should I commit to the Spartan Trifecta next year, blending my established endurance training with a year dedicated to high-agility, high-strength challenges?
Drop your stories and thoughts in the comments. Your input might just push me toward my next muddy adventure.
“McKinsey research finds that agile organizations outperform non-agile peers by 30% in operational performance.”
For years, Consent Management Platforms (CMPs) like OneTrust, TrustArc, and Osano served as the digital privacy gatekeepers of the web. They helped companies display those now-ubiquitous cookie popups and ensure that users gave (or didn’t give) permission for tracking. But while technically necessary for GDPR, CCPA, and similar regulations, CMPs have become more of a compliance checkbox than a meaningful privacy safeguard. We, as users, feel the frustrations of this broken process. Thanks to the evolution of AI and digital experiences, this model is changing.
The Problem: Consent Management Is Fragmented, Fatiguing, and Fading
With AI-first browsers like Comet (to be launched by Perplexity) explicitly designed to “track everything users do online” for hyper-personalized experiences, the locus of control is moving away from individual websites to the browser layer, where consent could be set once and respected everywhere.
In short, browsers — not websites — are becoming the central actors in user data collection. This shift renders traditional CMPs increasingly irrelevant — unless they evolve.
AI Browsers Don’t Just Observe — They Act!
The implication: CMPs must become smarter or “agent-aware”. They’ll need to integrate directly with browsers and their APIs to:
Interpret global consent settings issued by users.
Detect when AI agents are scraping or collecting data.
Ensure downstream systems (like adtech or analytics platforms) respect those browser-level preferences.
Figure 1: Consent management flow – Today
This isn’t hypothetical. OneTrust and BigID are already deploying AI-driven privacy agents and compliance automation tools, which could evolve to interface directly with browser AI.
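To make “agent-aware” concrete, here is a minimal sketch (in Python, purely illustrative and not any vendor’s API) of how a site or CMP endpoint might reconcile a browser-level signal such as Global Privacy Control with preferences already stored for that site. The purpose names and merge rules are assumptions:

```python
# A minimal sketch (not any vendor's API) of "agent-aware" consent resolution:
# a site or CMP endpoint reads a browser-level privacy signal and reconciles it
# with whatever the user previously chose on the site. The Sec-GPC header comes
# from the Global Privacy Control proposal; the purpose names and merge logic
# here are illustrative assumptions.

def resolve_consent(request_headers, stored_preferences=None):
    """Merge a browser-level opt-out signal with site-level preferences."""
    gpc_enabled = request_headers.get("Sec-GPC") == "1"

    # Start from the user's prior site-level choices, if any.
    consent = dict(stored_preferences or {"analytics": False, "advertising": False})

    # A browser-level opt-out overrides older site-level opt-ins for
    # sale/share-style processing, which is the scope GPC targets.
    if gpc_enabled:
        consent["advertising"] = False
        consent["data_sale_or_share"] = False

    return consent


# Example: a GPC-enabled browser visiting a site that has an old opt-in on file.
print(resolve_consent({"Sec-GPC": "1"}, {"analytics": True, "advertising": True}))
# {'analytics': True, 'advertising': False, 'data_sale_or_share': False}
```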
Programmable & Portable Consent
Imagine a future where users set privacy preferences once — during browser setup — and those settings follow across every site, platform, and digital touchpoint. That’s programmable consent.
In this model:
CMPs don’t just ask for consent; they interpret and enforce it.
Consent signals become machine-readable, portable, and actionable across systems/devices.
Privacy becomes not a moment in time, but a persistent layer of the digital experience.
Figure 2: Consent management flow – Tomorrow
This requires a fundamental re-architecture of CMPs — from UI overlays to backend orchestration engines.
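As a sketch of what “machine-readable and portable” could look like in practice, here is an illustrative consent record and the kind of check a downstream system might run against it. The field names are hypothetical assumptions, not an existing standard:

```python
# Illustrative only: a machine-readable, portable consent record that a browser
# could issue once and a CMP orchestration engine could enforce everywhere.
# Field names are hypothetical assumptions, not an existing standard.
import json
from datetime import datetime, timezone

consent_record = {
    "subject_id": "pseudonymous-user-123",        # browser-held identifier
    "issued_at": datetime.now(timezone.utc).isoformat(),
    "scope": "global",                            # applies across sites and devices
    "purposes": {
        "strictly_necessary": True,
        "analytics": True,
        "personalization": False,
        "advertising": False,
    },
    "signals": {"gpc": True},                     # e.g., Global Privacy Control
    "expires_in_days": 365,
}

def is_allowed(record, purpose):
    """Downstream systems consult the record instead of re-prompting the user."""
    return record["purposes"].get(purpose, False)

print(json.dumps(consent_record, indent=2))
print(is_allowed(consent_record, "analytics"))    # True
print(is_allowed(consent_record, "advertising"))  # False
```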
The existing setup is not going away anytime soon. The two models will co-exist for a while, but an additional layer to address the emergence of AI browsers is inevitable in the near term.
The initial rollout of consent management at the browser level might be rigid or offer limited options, but this could change with subsequent rollouts. For example, browsers could provide options to set consent at the website level, the website-category level, or the bookmarked/favorite-sites level, or simply allow websites to push their ubiquitous popups the first time a site is opened in the AI browser and store the user’s preference in the browser for future visits.
Blueprint for CMP 2.0: Consent Engineering in Action
CMPs face an urgent need to redefine their value. Instead of focusing solely on front-end banners, they must shift toward being Consent Orchestration Engines or Consent Engineering Platforms — interpreting, enforcing, and governing consent across platforms, applications, and back-end data systems.
A few key opportunities and imperatives for CMPs:
§ Agent-aware and API-first with AI-Browsers
Consent signals will originate from browsers and autonomous agents. CMPs must build real-time API hooks to sync with browser preferences and ensure websites respect those choices.
§ Orchestration Across Platforms
CMPs must manage (and synchronize) machine-readable consent across all digital touchpoints (e.g., website, mobile app, SaaS tools), not just the web layer. Encoding consent in standardized formats (e.g., Global Privacy Control (GPC)) that downstream systems can interpret and enforce automatically is critical.
§ Consent-as-a-Service
Offer “consent-as-a-service” embedded at the edge (e.g., browser extensions, SDKs) to enforce rules downstream—in data warehouses, CDPs, marketing clouds.
§ Downstream Data Governance
It’s not just about capture—it’s about ensuring consent follows the data: data flow control, compliance logging, and privacy auditing for server-side and AI-powered data operations. CMPs must enforce usage restrictions in analytics, personalization, and advertising systems.
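A hedged sketch of what “consent follows the data” could mean in code: a processing job declares the purpose it serves, and a gate consults a consent record (like the illustrative one earlier) before the data is used, logging the decision for audit. The names and structure are assumptions, not a product feature:

```python
# Hypothetical sketch of purpose-based enforcement downstream of capture:
# a data operation declares its purpose, and the gate consults the consent
# record before running, logging the decision for later audit.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("privacy-ops")

# A minimal consent record; in practice this would come from the CMP/browser.
consent_record = {"purposes": {"analytics": True, "advertising": False}}

def enforce_purpose(record, purpose):
    """Decorator that blocks a data operation unless the purpose is consented to."""
    def wrapper(func):
        def guarded(*args, **kwargs):
            allowed = record["purposes"].get(purpose, False)
            log.info("purpose=%s allowed=%s op=%s", purpose, allowed, func.__name__)
            if not allowed:
                return None  # or fall back to anonymized/aggregated processing
            return func(*args, **kwargs)
        return guarded
    return wrapper

@enforce_purpose(consent_record, "advertising")
def build_ad_audience(events):
    return [e for e in events if e.get("page") == "pricing"]

print(build_ad_audience([{"page": "pricing"}]))  # None: advertising not consented
```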
§ Consent Auditing & Logging (PrivacyOps)
Regulators want proof. CMPs can provide the audit layer for browser-generated preferences, creating reconciliations between user intent and system behavior. Deploy AI to detect tracking violations, scan for third-party risks, and auto-generate regulatory reports. Where applicable, collaborate with cloud providers or AI agents to enforce preferences.
Who’s Leading the Way?
Leading CMPs are taking steps to adapt to this new future. OneTrust, for example, is investing heavily in AI governance and automation, while BigID is applying AI/ML to consent management.
These companies aren’t just reacting—they’re re-architecting.
What This Means for Privacy Leaders and Digital Teams
We’re at the beginning of a major shift. AI browsers will rewrite the rules of data privacy, and businesses that rely on outdated CMPs risk being caught flat-footed. Hence, the implications of this browser-centric future are profound:
Chief Privacy Officers must start redefining what compliance looks like when consent is programmable and portable.
Marketing and data teams need to reconfigure how they ingest and process user data—browser signals might override what your CRM thinks it knows.
Engineering teams must build consent-aware architectures that support API-driven orchestration and server-side governance.
In short, the cookie banner era is ending. The age of dynamic, portable, agent-aware consent is here. It is time for you to:
Audit your current CMP for readiness in an AI-agent web environment.
Evaluate browser-level consent initiatives and their implications for your data strategy.
Explore integration paths between your privacy stack and AI/automation tools.
Are these questions on your mind?
How do you evaluate your consent architecture for the AI browser era?
Is your CMP strategy AI-agent ready?
Should your next privacy investment be in compliance… or consent engineering?
Don’t get left behind. Reach out, and let’s collaborate on building a forward-thinking approach to consent that aligns with the browser-level revolution.
Organizations frequently discuss the importance of data quality and its impact on business value. Even the most sophisticated analytical models falter with outdated and unreliable data, resulting in misleading recommendations, inaccurate forecasts, suboptimal business decisions, and wasted resources.
In today’s data-driven world, organizations face information overload, often storing vast amounts of data without considering its diminishing relevance. While some clients recognize this “information overload” and exercise caution regarding what they capture, others maintain the status quo, leading to increased costs, flawed insights, low customer satisfaction, and poor performance.
“What goes into production environment, stays there.”
“As per regulations, we need to store 7 years of data. But we only flag records. Do not delete them!”
Organizations must understand that the value of data is not static; it evolves and degrades over time. This understanding is crucial for accurate analysis and effective decision-making. In fact, one dimension of quality is timeliness, which translates to the lifetime value of data or data aging. This article explores the concept of ‘data aging’ and its implications for the success of data-driven initiatives.
The four dimensions of data
To calculate the lifetime validity of data, one must understand the four dimensions of data, commonly referred to as the 4V’s: Volume (Vo), Velocity (Ve), Variety (Va), and Veracity (Vr). The first three—Volume, Velocity, and Variety—are straightforward.
Volume (Vo) – The sheer amount/quantity of data from various sources (e.g., transactions, logs).
Velocity (Ve) – The speed at which data is generated and processed, also known as the rate of data flow (e.g., real-time, batch).
Variety (Va) – The diverse forms/types of data (e.g., structured, semi-structured and unstructured).
Veracity (Vr) – The reliability and trustworthiness of data (e.g., accuracy, consistency, conformity).
Let’s focus on the fourth V, Veracity (Vr) which encompasses the accuracy and truthfulness aspects of data. Veracity is a function of four components that directly influence the insights and Business Value (Bv) generated.
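In symbols, assuming the multiplicative form implied by the numerator-and-denominator discussion later in this article (a hedged reconstruction rather than an exact formula):

$$
V_r \;\approx\; \frac{D_q \cdot D_{va} \cdot D_d}{D_{vo} \cdot t}, \qquad B_v \propto V_r
$$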
This equation represents a more traditional view and emphasizes the fundamental aspects of data veracity: data quality, data value, data density, data volatility, and the impact of time. This equation is suitable for situations where the dataset is small and data volume, velocity, and variety are relatively stable or not significant factors. In short, the focus is on the intrinsic quality and reliability of the data.
The components explained:
Quality of Data (Dq): A normalized quantitative score, derived from a comprehensive data profiling process, serves as a measure of data quality (Dq). This score encapsulates the 4Cs: completeness, correctness, clarity, and consistency.
Data Volatility (Dvo): Refers to the duration for which the data or dataset remains relevant. It quantifies the spread and variability of data points, extending beyond mere temporal change. While some define volatility as the rate of data change, this definition emphasizes the overall fluctuation, i.e., rate at which data changes[1]. For example, customer preferences. A numerical scale, such as 1 to 10, can be used to represent the spectrum from low to high volatility.
Data Value (Dva): Represents the actionable insights, cost savings, or value of derived knowledge obtained through analytical modeling, such as correlation and regression. In essence, it answers the question, “What is the practical significance of this data analysis?” A numerical scale, such as 1 to 10, can be used to represent the range from low to high data value.
Quality of Data Density (Dd): Measures the concentration of valuable, complete, and relevant information within a dataset. It emphasizes the presence of meaningful data, rather than sheer volume. For example, a dataset with numerous entries but missing essential fields exhibits low data density quality. This assessment is determined through a combination of data profiling and subject matter expert (SME) evaluation.
Computing the lifetime value using Vr
All the above components are time-dependent, and any equation involving time will have an associated lifetime or value. Hence, the value of data either remains constant (for a period) or degrades over time, depending on the type of data. Now, let us integrate the 3Vs (Volume, Velocity and Variety) into this equation (Vr).
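One illustrative way to write the result, with weight coefficients $w_i$ reflecting the relative importance of each factor (again a reconstruction under the assumptions above, not a definitive form):

$$
V_r \;\approx\; \frac{(w_1 D_q)\,(w_2 D_{va})\,(w_3 D_d)}{(w_4 D_{vo})\,(w_5 V_o)\,(w_6 V_e)\,(w_7 V_a)\; t}
$$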
To briefly explain, data quality, value, and density are in the numerator because high values for these components improve data reliability. The other components negatively impact trustworthiness with higher values and are therefore in the denominator. To tailor the equation to specific use cases, weight coefficients can be incorporated to reflect the relative importance of each factor. These weights should be adjusted based on the unique context or requirements of the analysis. Generally, a lower overall score indicates that the data is aged, exhibits reduced stability, and/or possesses diminished reliability. This characteristic can be particularly valuable in scenarios where historical trends and patterns hold greater significance than contemporary data, such as retrospective studies or long-term trend analyses.
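A minimal sketch of this scoring in code, assuming the reconstructed multiplicative form above and factors normalized to a common 1-10 scale; the weights and sample values are illustrative only:

```python
# Minimal sketch of the weighted veracity/aging score described above, assuming
# the reconstructed multiplicative form and factors normalized to a 1-10 scale.
# Weights and sample values are illustrative only.

def veracity_score(dq, dva, dd, dvo, vo, ve, va, age_years, weights=None):
    """Higher score = fresher, more reliable data; a lower score signals aged data."""
    w = weights or {"dq": 1, "dva": 1, "dd": 1, "dvo": 1, "vo": 1, "ve": 1, "va": 1}
    numerator = (w["dq"] * dq) * (w["dva"] * dva) * (w["dd"] * dd)
    denominator = (
        (w["dvo"] * dvo) * (w["vo"] * vo) * (w["ve"] * ve) * (w["va"] * va)
        * max(age_years, 0.1)  # guard against division by zero for very fresh data
    )
    return numerator / denominator

# The same dataset scored when six months old vs. five years old: the score decays.
print(round(veracity_score(8, 7, 9, dvo=3, vo=5, ve=4, va=2, age_years=0.5), 3))
print(round(veracity_score(8, 7, 9, dvo=3, vo=5, ve=4, va=2, age_years=5.0), 3))
```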
Real-world examples
Consider customer purchasing behavior data. Companies utilize segmentation and personalization based on customer lifecycle stages for targeted marketing. As individuals transition through life stages, their purchasing patterns evolve. Consequently, relying on data from a specific historical point—such as during a period of job searching, financial dependence, or early adulthood—to predict purchasing behavior during a later stage of financial independence, high-income employment, family life, or mid-adulthood is likely to produce inaccurate results.
Similarly, credit rating information demonstrates the impact of data aging. Financial institutions typically prioritize a customer’s recent credit history for risk assessment. A credit rating from an individual’s early adulthood is irrelevant for risk calculations in their mid-40s. These examples underscore the principle of data aging and its implications for analytical accuracy.
Strategies for mitigating the effects of data aging
Data Governance: Establishing clear data retention and data quality standards.
Data Versioning (by customer stages): Tracking changes to data over time to understand its evolution.
AI Infusion: Utilizing AI at every stage of the data lifecycle to identify and address data anomalies, inconsistencies and data decay.
Conclusion
The truth is, data isn’t static. It’s a living, breathing entity that changes over time. Recognizing and adapting to these changes is what separates effective data strategies from those that quickly become obsolete. If you found this post insightful, please comment below! In a future post, I will explore the impact of other components like data gravity and data visualization on business value. Let me know if that’s something you’d like to see!
Reference:
“The Importance of Data Quality in a Data-Driven World” by Gartner (2023)
“Data Decay: Why Your Data Isn’t as Good as You Think It Is” by Forbes (2022)
McKinsey & Company, “The Age of Analytics: Competing in a Data-Driven World” (2023)
Deloitte Insights, “Data Valuation: Understanding the Value of Your Data Assets” (2022)
[1] “rate of change of data” is typically represented as a derivative in mathematics. It gives a precise value showing how one variable changes in relation to another (e.g., how temperature changes with time). “rate at which data changes” emphasizes the speed or pace at which the data is changing over time (pace of data variation).
To keep pace with ever-present business and technology change and challenges, organizations need operating models built with a strong data and analytics foundation. Here’s how your organization can build one incorporating a range of key components and best practices to quickly realize your business objectives.
Executive Summary
To succeed in today’s hypercompetitive global economy, organizations must embrace insight-driven decision-making. This enables them to quickly anticipate and drive business change through constant and effective innovation that swiftly incorporates technological advances where appropriate.
The pivot to digital, consumer-minded new regulations around data privacy and the compelling need for greater levels of data quality together are forcing organizations to enact better controls over how data is created, transformed, stored and consumed across the extended enterprise. Chief data/analytics officers who are directly responsible for the sanctity and security of enterprise data are struggling to bridge the gap between their data strategies, day-to-day operations and core processes.
This is where an operating model can help. It provides a common view/definition of how an organization should operate to convert its business strategy into operational design. While some mature organizations in heavily regulated sectors (e.g., financial services) and fast-paced sectors (e.g., retail) are tweaking their existing operating models, younger organizations are creating operating models with data and analytics as the backbone to meet their business objectives. This white paper provides a framework, along with a set of must-have components, for building a data and analytics operating model (or customizing an existing one).
The starting point: Methodology
Each organization is unique, with its own specific data and analytics needs. Different sets of capabilities are often required to fill these needs. For this reason, creating an operating model blueprint is an art, and is no trivial matter. The following systematic approach to building it will ensure the final product works optimally for your organization. Building the operating model is a three-step process starting with the business model (focus on data) followed by operating model design and then architecture. However, there is a precursory step, called “the pivots,” to capture the current state and extract data points from the business model prior to designing the data and analytics operating model. Understanding key elements that can influence the overall operating model is therefore an important consideration from the get-go (as Figure 1 illustrates). The operating model design focuses on integration and standardization, while the operating model architecture provides a detailed but still abstract view of organizing logic for business, data and technology. In simple terms, this pertains to the crystallization of the design approach for various components, including the interaction model and process optimization.
Preliminary step: The pivots
No two organizations are identical, and the operating model can differ based on a number of parameters — or pivots — that influence the operating model design. These parameters fall into three broad buckets:
Design principles: These set the foundation for target state definition, operation and implementation. Creating a data vision statement, therefore, will have a direct impact on the model’s design principles. Keep in mind, effective design principles will leverage all existing organizational capabilities and resources to the extent possible. In addition, they will be reusable despite disruptive technologies and industrial advancements. So these principles should not contain any generic statements, like “enable better visualization,” that are difficult to measure or so particular to your organization that operating-model evaluation is contingent upon them. The principles can address areas such as efficiency, cost, satisfaction, governance, technology, performance metrics, etc.
Sequence of operating model development
Current state: Gauging the maturity of data and related components — which is vital to designing the right model — demands a two-pronged approach: top down and bottom up. The reason? Findings will reveal key levers that require attention and a round of prioritization, which in turn can help decision-makers determine whether intermediate operating models (IOMs) are required.
Influencers: Influencers fall into three broad categories: internal, external and support. Current-state assessment captures these details, requiring team leaders to be cognizant of these parameters prior to the operating-model design (see Figure 2). The “internal” category captures detail at the organization level. “External” highlights the organization’s focus and factors that can affect the organization. And “support factor” provides insights into how much complexity and effort will be required by the transformation exercise.
Operating model influencers
First step: Business model
A business model describes how an enterprise leverages its products/services to deliver value, as well as generate revenue and profit. Unlike a corporate business model, however, the objective here is to identify all core processes that generate data. In addition, the business model needs to capture all details from a data lens — anything that generates or touches data across the entire data value chain (see Figure 3). We recommend that organizations leverage one or more of the popular strategy frameworks, such as the Business Model Canvas [1] or the Operating Model Canvas [2], to convert the information gathered as part of the pivots into a business model. Other frameworks that add value are Porter’s Value Chain [3] and McKinsey’s 7S framework [4]. The output of this step is not a literal model but a collection of data points from the corporate business model and current state required to build the operating model.
Second step: Operating model
The operating model is an extension of the business model. It addresses how people, process and technology elements are integrated and standardized.
Integration: This is the most difficult part, as it connects various business units including third parties. The integration of data is primarily at the process level (both between and across processes) to enable end-to-end transaction processing and a 360-degree view of the customer. The objective is to identify the core processes and determine the level/type of integration required for end-to-end functioning to enable increased efficiency, coordination, transparency and agility (see Figure 4). A good starting point is to create a cross-functional process map, enterprise bus matrix, activity based map or competency map to understand the complexity of core processes and data. In our experience, tight integration between processes and functions can enable various functionalities like self-service, process automation, data consolidation, etc.
The data value chain
Standardization: During process execution, data is being generated. Standardization ensures the data is consistent (e.g., format), no matter where (the system), who (the trigger), what (the process) or how (data generation process) within the enterprise. Determine what elements in each process need standardization and the extent required. Higher levels of standardization can lead to higher costs and lower flexibility, so striking a balance is key.
Integration & standardization
Creating a reference data & analytics operating model
The reference operating model (see Figure 5) is customizable, but will remain largely intact at this level. As the nine components are detailed, the model will change substantially. It is common to see three to four iterations before the model is elaborate enough for execution.
For anyone looking to design a data and analytics operating model, Figure 5 is an excellent starting point as it has all the key components and areas.
Final step: Operating model architecture
Diverse stakeholders often require different views of the operating model for different reasons. As there is no one “correct” view of the operating model, organizations may need to create variants to fulfill everyone’s needs. A good example is comparing what a CEO will look for (e.g., strategic insights) versus what a CIO or COO would look for (e.g., an operating model architecture). To accommodate these variations, modeling tools like ArchiMate [5] will help create those different views quickly. Since the architecture can include many objects and relations over time, such tools will help greatly in maintaining the operating model.
The objective is to blend process and technology to achieve the end objective. This means using documentation of operational processes aligned to industry best practices like Six Sigma, ITIL, CMM, etc. for functional areas. At this stage it is also necessary to define the optimal staffing model with the right skill sets. In addition, we take a closer look at what the organization has and what it needs, always keeping value and efficiency as the primary goal. Striking the right balance is key, as it can become expensive to attain even a small return on investment. Each of the core components in Figure 5 needs to be detailed at this point, in the form of a checklist, template, process, RACIF, performance metrics, etc., as applicable. Figure 6 illustrates the detailing of three subcomponents one level down; subsequent levels involve detailing each block in Figure 6 until task/activity-level granularity is reached.
Reference data & analytics operating model (Level 1)
The operating model components
The nine components shown in Figure 5 will be present in one form or another, regardless of the industry or the organization of business units. Like any other operating model, the data and analytics model also involves people, process and technology, but from a data lens.
Component 1: Manage process: If an enterprise-level business operating model exists, this component would act as the connector/bridge between the data world and the business world. Every business unit has a set of core processes that generate data through various channels. Operational efficiency and the enablement of capabilities depend on the end-to-end management and control of these processes. For example, the quality of data and reporting capability depends on the extent of coupling between the processes.
Component 2: Manage demand/requirements & manage channel: Business units are normally thirsty for insights and require different types of data from time to time. Effectively managing these demands through a formal prioritization process is mandatory to avoid duplication of effort, enable faster turnaround and direct dollars to the right initiative.
Sampling of subcomponents: An illustrative view
Component 3: Manage data: This component manages and controls the data generated by the processes from cradle to grave. In other words, the processes, procedures, controls and standards around data, required to source, store, synthesize, integrate, secure, model and report it. The complexity of this component depends on the existing technology landscape and the three v’s of data: volume, velocity and variety. For a fairly centralized or single stack setup with a limited number of complementary tools and technology proliferation, this is straightforward. For many organizations, the people and process elements can become costly and time-consuming to build.
To enable certain advanced capabilities, architectural design and detailing form a major part of this component. Each of the five subcomponents requires a good deal of due diligence in subsequent levels, especially to enable “as-a-service” and “self-service” capabilities.
Component 4a: Data management services: Data management is a broad area, and each subcomponent is unique. Given exponential data growth and use cases around data, the ability to independently trigger and manage each of the subcomponents is vital. Hence, enabling each subcomponent as a service adds value. While detailing the subcomponents, architects get involved to ensure the process can handle all types of data and scenarios. Each of the subcomponents will have its set of policy, process, controls, frameworks, service catalog and technology components.
The enablement of some capabilities as a service, and the extent to which they can operate, depends on the design of Component 3. It is common to see a few IOMs in place before the subcomponents mature.
Component 4b: Data analytics services: Deriving trustable insights from data captured across the organization is not easy. Every organization and business unit has its own requirements and priorities. Hence, there is no one-size-fits-all method. In addition, with advanced analytics such as those built around machine-learning (ML) algorithms, natural language processing (NLP) and other forms of artificial intelligence (AI), a standard model is not possible. Prior to detailing this component, it is mandatory to understand clearly what the business wants and how your team intends to deliver it. Broadly, the technology stack and data foundation determine the delivery method and extent of as-a-service capabilities.
Similar to Component 4a, IOMs help achieve the end goal in a controlled manner. The interaction model will focus more on how the analytics team will work with the business to find, analyze and capture use cases/requirements from the industry and business units. The decision on the setup — centralized vs. federated — will influence the design of subcomponents.
Component 5: Manage project lifecycle: The project lifecycle component accommodates projects of Waterfall, Agile and/or hybrid nature. Figure 5 depicts a standard project lifecycle process. However, this is customizable or replaceable with your organization’s existing model. In all scenarios, the components require detailing from a data standpoint. Organizations that have an existing program management office (PMO) can leverage what they already have (e.g., prioritization, checklist, etc.) and supplement the remaining requirements.
The interaction model design will help support servicing of as-a-service and on-demand data requests from the data and analytics side during the regular program/project lifecycle.
Component 6: Manage technology/ platform: This component, which addresses the technology elements, includes IT services such as shared services, security, privacy and risk, architecture, infrastructure, data center and applications (web, mobile, on-premises).
As in the previous component, it is crucial to detail the interaction model with respect to how IT should operate in order to support the as-a-service and/or self-service models. For example, this should include cadence for communication between various teams within IT, handling of live projects, issues handling, etc.
Component 7: Manage support: No matter how well the operating model is designed, the human dimension plays a crucial role, too. Be it business, IT or corporate function, individuals’ buy-in and involvement can make or break the operating model.
The typical support groups involved in the operating-model effort include BA team (business technology), PMO, architecture board/group, change management/advisory training and release management teams, the infrastructure support group, IT applications team and corporate support group (HR, finance, etc.). Organization change management (OCM) is a critical but often overlooked component. Without it, the entire transformation exercise can fail.
Component 8: Manage change: This component complements the support component by providing the processes, controls and procedures required to manage and sustain the setup from a data perspective. This component manages both data change management and OCM. Tight integration between this and all the other components is key. Failure to define these interaction models will result in limited scalability, flexibility and robustness to accommodate change.
The detailing of this component will determine the ease of transitioning from an existing operating model to a new operating model (transformation) or of bringing additions to the existing operating model (enhancement).
Component 9: Manage governance: Governance ties all the components together, and thus is responsible for achieving the synergies needed for operational excellence. Think of it as the carriage driver that steers the horses. Although each component is capable of functioning without governance, over time they can become unmanageable and fail. Hence, planning and building governance into the DNA of the operating model adds value.
The typical governance areas to be detailed include the data/information governance framework, charter, policy, process, controls, standards, and the architecture to support enterprise data governance.
Intermediate operating models (IOMs)
As mentioned above, an organization can create as many IOMs as it needs to achieve its end objectives. Though there is no one right answer to the question of optimal number of IOMs, it is better to have no more than two IOMs in a span of one year, to give sufficient time for model stabilization and adoption. The key factors that influence IOMs are budget, regulatory pressure, industrial and technology disruptions, and the organization’s risk appetite. The biggest benefit of IOMs lies in their phased approach, which helps balance short-term priorities, manage risks associated with large transformations and satisfy the expectation of top management to see tangible benefits at regular intervals for every dollar spent.
DAOM (Level 2)
To succeed with IOMs, organizations need a tested approach that includes the following critical success factors:
Clear vision around data and analytics.
Understanding of the problems faced by customers, vendors/suppliers and employees.
Careful attention paid to influencers.
Trusted facts and numbers for insights and interpretation.
Understanding that the organization cannot cover all aspects (in breadth) on the first attempt.
Avoidance of emotional attachment to the process, or of being too detail-oriented.
Avoidance of trying to design an operating model optimized for everything.
Avoidance of passive governance — as achieving active governance is the goal.
Methodology: The big picture view
Moving forward
Two factors deserve highlighting. First, as organizations establish new business ventures and models to support their go-to-market strategies, their operating models may also require changes. However, a well-designed operating model will be adaptive enough to new developments that it should not change frequently. Second, the data-to-insight lifecycle is a very complex and sophisticated process given the constantly changing ways of collecting and processing data.
Furthermore, at a time when complex data ecosystems are rapidly evolving and organizations are hungry to use all available data for competitive advantage, enabling things such as data monetization and insight-driven decision-making becomes a daunting task. This is where a robust data and analytics operating model shines. According to a McKinsey Global Institute report, “The biggest barriers companies face in extracting value from data and analytics are organizational.” [6] Hence, organizations must prioritize and focus on people and processes as much as on technological aspects. Just spending heavily on the latest technologies to build data and analytics capabilities will not help, as it will lead to chaos, inefficiencies and poor adoption. Though there is no one-size-fits-all approach, the material above provides key principles that, when adopted, can provide optimal outcomes for increased agility, better operational efficiency and smoother transitions.
Endnotes
1. A tool that allows one to describe, design, challenge and pivot the business model in a straightforward, structured way. Created by Alexander Osterwalder, of Strategyzer.
2. The Operating Model Canvas helps to capture thoughts about how to design operations and organizations that will deliver a value proposition to a target customer or beneficiary. It helps translate strategy into choices about operations and organizations. Created by Andrew Campbell, Mikel Gutierrez and Mark Lancelott.
3. First described by Michael E. Porter in his 1985 best-seller, Competitive Advantage: Creating and Sustaining Superior Performance. This is a general-purpose value chain to help organizations understand their own sources of value — i.e., the set of activities that helps an organization generate value for its customers.
4. The 7S framework is based on the theory that for an organization to perform well, seven elements (structure, strategy, systems, skills, style, staff and shared values) need to be aligned and mutually reinforcing. The model helps identify what needs to be realigned to improve performance and/or to maintain alignment.
5. ArchiMate is a technical standard from The Open Group and is based on the concepts of the IEEE 1471 standard. It is an open and independent enterprise architecture modeling language. For more information: www.opengroup.org/subjectareas/enterprise/archimate-overview.
6. The age of analytics: Competing in a data-driven world. Retrieved from www.mckinsey.com/~/media/McKinsey/Business%20Functions/McKinsey%20Analytics/Our%20Insights/The%20age%20of%20analytics%20Competing%20in%20a%20data%20driven%20world/MGI-The-Age-of-Analytics-Full-report.ashx
The WOW factor – Find that one thing that will create the wow factor for your client and consider half the engagement done. Unfortunately, this sometimes strikes only a day before the final presentation, but trust me, the one night you have left is more than sufficient to do wonders.
See through your clients’ eyes – The key to finding this wow factor is the ability to see your client’s vision. Feel their problem. Understand their priorities and decode what they are not able to explain in words.
Power of Analogy – Use a simple analogy to restate the clients’ true problem. Use the same or another simple analogy to explain the solution. I normally take the example of food (if I am hungry) or a completely different industry or topic like cars or gadgets depending on the circumstance.
Let the numbers speak – When you know you are with a tough client, it’s always safe to let the numbers speak. Quantify everything in the simplest and most logical way possible. Include as many stakeholders as required into the equation to remove anomalies and there you have a nice little shield to defend yourself.
The animated story – The term ‘animation’ here stands for the visualizations. We all know the attention span and time from a CxO is highly limited. Everyone is looking for innovation and simplicity. With prescriptive and cognitive analytics gaining popularity, it is necessary to let the charts and pictures form the core of your presentation, while having a nice story crafted around this core. Some consultants would call this gift wrapping. Also make sure your presentation can tell the story you intended all by itself, as that is the cherry on the cake.
Know not just thy client but also their industry – If your client is a consulting firm, the amount of content you stuff in a slide does not matter, attention to detail does. Get all your facts triple checked! It will only take 6 seconds for them to process the entire slide and rip you apart. The first question is most likely, “what do you mean by that statement” or “where did you get that number from?”
If it is a financial client, be careful with your use of the color red, and do have plenty of charts backed by numbers; they take only a few seconds to process this kind of content. On the other hand, if your client is in the Consumer Goods or Retail segment, use icons, cartoons and animations. It keeps them glued to your presentation and helps them digest the content faster and better.
I still have not figured out whether region has an impact, but culture definitely does. This is from my experience talking to clients in Thailand, Jakarta and Malaysia.
Sell the solution to your team first – The best way to test the solution you have built is to try selling it to your own team. No matter how much time and effort you put in, give an ear to your team members (especially the junior-most staff, as the most abstract and raw thinking comes from them). If you cannot build consensus with your own team, you are as good as defeated before the battle even starts.
Product and service organizations are increasingly showing keen interest in sentiment analysis, as it can make or break their reputation overnight. Understanding the polarity/sentiment of every user’s post and addressing it on time is becoming critical. Considering that thousands of posts are generated every day on a single topic, it is nearly impossible to analyze them manually.
Sentiment analysis algorithms are not, and cannot be, 100% accurate. But they can certainly provide the warning signals needed to change the (product) strategy, if required, at the right time. They can also save the organization time by narrowing down thousands of tweets to the few hundred that need to be analyzed manually.
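As a quick illustration of both points, here is a deliberately naive, keyword-based polarity scorer (one of the algorithm families listed further below); the word lists and posts are illustrative only:

```python
# A deliberately naive, keyword-based polarity score. It illustrates why
# automated sentiment is never 100% accurate, yet can still triage thousands
# of posts down to the few that deserve a manual look. Word lists are
# illustrative, not a production lexicon.

POSITIVE = {"love", "great", "excellent", "fast", "recommend"}
NEGATIVE = {"hate", "broken", "slow", "refund", "terrible"}

def polarity(post):
    """Count positive minus negative keywords; > 0 positive, < 0 negative."""
    words = set(post.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

posts = [
    "Love the new update, great battery life",
    "App is broken again, want a refund",
    "Delivery was slow but support was excellent",
]

# Flag only the posts that score negative for manual review.
needs_review = [p for p in posts if polarity(p) < 0]
print(needs_review)  # ['App is broken again, want a refund']
```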
Once the need for sentiment analysis has been identified, the organization starts to scout for COTS products, only to find there are far too many. This article lists 40+ parameters under six key buckets that an organization can use to shortlist the right tool.
The table below presents the six buckets along with their parameters and a short description of each.
I. Performance
1. Efficiency – Scan speed (e.g., number of tweets/sentences scanned and analyzed per second).
2. Robustness – The tool runs consistently without crashing (e.g., heavy data loads should not cause the application to hang after a certain point).
3. Data size – Ability to scale to large data sets; partly a capability of the tool (architecture) and partly of the underlying database.

II. Algorithms
4. Natural Language Processing (NLP) – NLP or machine learning algorithms, mostly statistics-based.
5. Computational linguistics – Statistical and/or rule-based modeling of natural language; focuses on the practical outcome of modeling human language use.
6. Text analytics – A set of linguistic, statistical and machine learning techniques that model and structure information for BI, exploratory data analysis, research or investigation.
7. Proprietary vs. open algorithms – Use of free and open-source algorithms vs. proprietary algorithms.
8. Mostly human interpretation – After the text is extracted, most of the sentiment analysis is performed manually by people.
9. Bayesian inference – Statistical inference in which evidence or observations are used to calculate the probability of the sentiment.
10. Keyword based – Keyword-based search.
11. Combination of the above algorithms – Ability to pass the text through multiple algorithms to get the right sentiment; number of techniques employed.
12. Ability to override sentiment – Automated sentiments are not always right and might need correction before reporting.

III. Functionality
13. Ability to fine-tune the modeling algorithms – Ability to easily modify an existing algorithm to enhance its capability (e.g., add an additional layer of filtering).
14. Plug-ins / API / widget support – Ability to add third-party plug-ins to perform specialized tasks (e.g., 80-20 suppression, additional graphs).
15. Data filtering/cleansing capability – Useful if there are two similar products in the market (e.g., Norton Internet Security vs. Norton 360).
16. Value substitution capability – Useful for tweets where users use different abbreviations or misspell names (e.g., "MS" vs. "Microsoft", or common misspellings of a brand).
17. Supported platforms – Ability to run on Linux, Windows, Mac OS, mobile platforms, etc.
18. Alert/trigger functionality – Ability to set triggers on key metrics where real-time monitoring is available.
19. Auditing/log feature – An audit trail that captures the amount of data grabbed and processed.
20. Geo identification – Ability to identify the source of the conversation (e.g., Asia, US, UK).
21. Multilingual support – Support for more than one language (e.g., French and English).

IV. Reporting
22. Export options (output) – Excel, PDF, publish to portal.
23. Visualization options – Variety of graphs: bar, pie, line, radar, etc.
24. Dashboard capability – Refreshable and drillable dashboards.
25. Customizable reports – Ability to add calculated columns and easily generate different visualizations for the same data.
26. Pre-defined reports – Out-of-the-box reports to get social media monitoring up and running instantly.
27. Drill-down/drill-up facility on reports – Ability to see detailed/summarized information by drilling on an item in the report.
28. Web interface – View and analyze reports online.

V. User interface & integration
29. Training / learning curve – Tools like Radian6 require experts to handle them.
30. Targeted user group – Analysts, business users, a combination, etc.
31. Error reporting – A mechanism to report failures of configured feeds or reports.
32. Web interface – The complete tool is available online and can be used to configure and build reports; no thick client.
33. Complete GUI support – No command-line interface required to perform any task.
34. Bundled database – Does the tool come with a built-in database, or does a third-party database (e.g., MySQL, MS Access) need to be procured?
35. Native connectivity to popular data sources – Built-in connectivity to popular forums, groups, blog sites, microblogs, etc.
36. Integration with BI and CRM tools – Ability to integrate the processed data with BI tools and other CRM data; partnerships with leading BI vendors could be a plus.
37. Approved APIs – APIs approved by social media providers (e.g., Twitter) enable direct connectivity to their servers and improve the rate of data pull.

VI. Vendor credibility
38. Established client base – Referenceable clientele.
39. Licensing options – Free/trial/paid; user-based, server-based, service-based licenses, etc.
40. References – Installations with more than one to two years in production; this might not always apply, as new companies come up with very innovative solutions.
41. Support services – Tool support, training, etc.
42. Consulting services – Analysis of data for the client.
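One way to put these buckets to work is a simple weighted scorecard. The sketch below is illustrative only: the weights, tool names and scores are made up, and the point is the shortlisting mechanics rather than any ranking of real products:

```python
# Illustrative scorecard built on the six buckets above. Weights, tools and
# scores are made up; the point is the mechanics of shortlisting, not a ranking.

BUCKET_WEIGHTS = {
    "performance": 0.20,
    "algorithms": 0.25,
    "functionality": 0.20,
    "reporting": 0.15,
    "ui_integration": 0.10,
    "vendor_credibility": 0.10,
}

# Each tool is scored 1-5 per bucket by the evaluation team (hypothetical values).
candidates = {
    "Tool A": {"performance": 4, "algorithms": 3, "functionality": 4,
               "reporting": 5, "ui_integration": 3, "vendor_credibility": 4},
    "Tool B": {"performance": 3, "algorithms": 5, "functionality": 3,
               "reporting": 3, "ui_integration": 4, "vendor_credibility": 3},
}

def weighted_score(scores):
    return sum(BUCKET_WEIGHTS[bucket] * score for bucket, score in scores.items())

shortlist = sorted(candidates, key=lambda t: weighted_score(candidates[t]), reverse=True)
for tool in shortlist:
    print(tool, round(weighted_score(candidates[tool]), 2))
```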
Feel free to comment on the above parameters. If you have any additional parameters that you feel are relevant, do let me know and I will be happy to include them (with due credit to you).
Everyone understands the value of grouping users. There are various ways to classify users but primarily they fall into one of the three levels defined below:
LEVEL 1: PRIMITIVE
This is the easiest and most common way of grouping users. This level is analogous to a company in the US trying to read books about Japan to understand its culture. The distance between the user and the organization is high, and beyond a certain point this level fails to connect with users.
Organizations can only take generic decisions based on the typical characteristics of this user group. For example, if an organization developing a portal finds that 75% of its users are teenage females, it might pick pink for the portal theme. But if this group happens to like rock music and partying, the organization’s decision to go with pink could be a road to disaster.
LEVEL 2: PASSIVE
Level 2 is all about studying users passively but closely to understand their presence (where they are active), their behavior (what they do online) and their tastes (what they like). This is analogous to a company in the US sending someone to Japan to collect information. Including Level 1 grouping at this level is an added advantage. A good starting point is to analyze data captured from social media networks like Facebook, Twitter, etc.
LEVEL 3: ACTIVE
In this level, handpicked influencers are analyzed in depth from both an internal and external network perspective. The right content (information, samples) is then pushed to this small, handpicked group to create the impact. This level is termed “Active” because, on identification of influencers, some amount of handshake/contact/personal touch happens with the influencer (user). Tools like Radian6, Sysomos and Klout help in identifying these potential influencers. This level is hard, time-consuming and relatively expensive, but has a higher payoff.
Forrester’s User life-cycle
Forrester’s user life-cycle (see Figure 2) categorizes users into six groups: Creators, Critics, Collectors, Joiners, Spectators and Inactives. This life-cycle maps to Level 2 above, as it groups users primarily based on their age and characteristics alone. But it does not tell you how many creators or critics in a particular age group posted only once. It is also not possible to determine how many joiners became inactive after their first login.
User Profiler
The profiler designed here (refer to Figure 3) categorizes users into six major groups: Joiner, Information seeker, Active/Dynamic, Responder, Creator and In-active. It is possible to map these groups to Forrester’s user life-cycle groups, with the exception of the Collectors group.
What do these categories mean?
Joiner: The moment a user registers, they enter the Joiner level and take a default maturity of 0.
Info Seeker: Users who seek answers to questions are the information seekers. The quality of the question decides the user’s maturity, which typically ranges between 1 and 3.
Active / Dynamic: These users actively seek information and respond to others in the group regularly. Their maturity ranges between 2 and 3.
Responder: Responders are mostly users within the organization or industry experts. They either have the information or the resources required to secure the information that seekers need. Maturity lies between 2 and 4.
Listener: These users log in and search for answers but have not posted anything yet. Mostly spectators. Beyond a certain point, a lack of any activity other than logins suggests either a bot or an information collector (RSS).
Transition: This is the phase where users tend to break the routine and shift to other sites or become inactive. One possibility is that the user has found what he/she was looking for.
In-active: If the user has not logged in even once in, say, one year, the user is inactive and their maturity score drops to 0. This bucket helps identify and purge records easily for maintenance and performance purposes.
Creator: Top management, visionaries and industry experts who are capable of generating interest among the public are Creators. Default maturity is assumed to be 5.
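For illustration, here is a minimal sketch of how such a profiler might assign a category and maturity score from simple activity counts. The thresholds, field names and scores are illustrative assumptions (and the Transition phase is deliberately left out for brevity):

```python
# Minimal sketch of the profiler above: assign a category and a maturity score
# from simple activity counts. Thresholds and scores are illustrative, not a
# formal specification of the profiler.
from datetime import datetime, timedelta

def profile_user(user, now):
    days_since_login = (now - user["last_login"]).days
    if days_since_login > 365:
        return "In-active", 0
    if user.get("is_expert"):                 # industry experts / visionaries
        return "Creator", 5
    if user["answers"] > 0 and user["answers"] >= user["questions"]:
        return "Responder", 3
    if user["questions"] > 0 and user["answers"] > 0:
        return "Active / Dynamic", 2
    if user["questions"] > 0:
        return "Info Seeker", 1
    if user["logins"] > 0:
        return "Listener", 0
    return "Joiner", 0

now = datetime(2024, 1, 1)
print(profile_user({"last_login": now - timedelta(days=10), "logins": 30,
                    "questions": 4, "answers": 9, "is_expert": False}, now))
# ('Responder', 3)
```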
Please feel free to get in touch with me if you have any queries or have suggestions around improving the above user profiler.
The aim of this blog is to develop a framework for assessing the impact/influence of social media on users (patients) for those healthcare providers who have invested in it. I am going to split this blog into pieces, and this first part builds the foundation.
The Common Definition for Social Media: Social media is the creation and sharing of user-generated content. The content is nothing but unstructured data like dialogues (text), images, audio & video, etc. as a result of interaction and participation between individuals and/or groups.
There are eight pillars one needs to understand clearly to build a strong foundation. They are:
The Definition – How does Healthcare define Social Media?
The source – Social media sources for Healthcare
The Influencers – Who or What can impact/influence the outcome?
Trends – Where is the Healthcare industry headed with respect to Social media?
The Competition – What are competitors/peers in the industry doing to capture users’ attention in the social media space?
Monitoring – How is Social Media monitored by the Healthcare companies?
What can hospitals do – with social media?
How to do it – How can hospitals accomplish this / get into social media?
Before expanding on the above eight pillars, a few facts on why this topic is relevant today and needs attention.
81% of all Cyberchondriacs have looked for health information online
17% have gone online to look for health information 10 or more times
86% satisfied with their ability to find the information they want online
85% believe the information they found is reliable
(SOURCE: The Harris Poll, Harris Interactive)
Moreover, social media has increased awareness among users by letting them look up a variety of things online: the facilities provided by a hospital, reviews of doctors, the experiences of other patients like themselves, comparisons between the treatment options different doctors offer for the same disease, and so on. This has put direct pressure on healthcare providers not only to excel in their services but also to involve themselves in the social space.
Starting with the definition, let’s expand on the eight pillars from a healthcare provider’s perspective:
The Definition:
This is how healthcare defines social media: “People trust ‘a person like me’ more than authority figures from business, government and media”, “Seeking ongoing dialogue, not one-way advertisement” and “Trust, transparency, openness, honesty”.
The Sources:
The social media sources for healthcare, in no particular order.
The sources mentioned above do not include channels like newspaper/journal advertisements, patient education, etc.; only online media is in scope for this blog.
The Influencers: The key influencers in this industry are doctors, patients, patients’ relatives and friends, hospital staff, insurance agencies, awards and caregivers.
Though these look fairly obvious, it is the word-of-mouth comments and reviews these people leave on the internet that impact a hospital the most. A simple example: an eye-care hospital in India (name cannot be disclosed) started off very well until a series of mistakes made by its doctors, due to negligence, turned the tables. Today, patients do not even bother to check whether the doctors have changed. It will take another round of influencers to change the public’s outlook on the hospital.
Trends: When we say trends, it’s more about understanding what leading and emerging healthcare providers are doing in the social media space to influence and gain visibility among the internet-savvy public. Some of the interesting trends observed are below:
Information sharing – Doctors come together to share research, exchange observations, and support one another
Engaging e-patients – Connecting patients with similar disease processes, ailments, or conditions
Rating – Of doctors, hospitals, service providers, etc. online
Targeted search – Connect with patients and potential patients
Managing a conversation – Q&A with surgeons
Convergence with personal health records – Reminders to buy prescription medicines, Online personal health diary
Location-based social media “check ins” – Inova Health System offers flu shot deals
The Competition:
The first-mover advantage… and a big hit! It’s all about thinking differently and having that idea click with the public. A few instances where such innovative/unique thinking resulted in positive visibility for the healthcare provider:
Broadcast Emergency Room (ER) wait times every few hours across their Twitter feeds
Seattle’s Swedish Medical Center, along with the Mayo Clinic, drew attention for hosting the first-ever “Sleep up.” The event, which consisted of an all-night live stream following a patient undergoing sleep-disorder testing and a Twitter Q&A session with physician sleep specialists, drew 10,000 visitors
Live-tweet surgeries – Personally this is my favorite
Interactive fitness program called “Fitfor50” that incorporates video, Facebook, Twitter and user stories
Disaster alerting and response
Social media chats
Daily health tips
Some of these have become common and are no longer differentiating factors; for example, health tips and social media chats. Users expect these by default, even though social media penetration by healthcare providers is still at a nascent stage.
Monitoring:
Currently there is a mix of in-house and outsourced social media monitoring and analysis. The major downside is setting up a platform for users to interact on and then abandoning it, where ‘abandoning’ means the lack of constant monitoring. When a patient posts a question on a Q&A platform set up by the provider, he/she expects a reply. When that does not happen, negative influence creeps in and the healthcare provider’s reputation goes for a toss.
Buzz vs. prevalence analysis on such datasets is one of the more commonly found monitoring mechanisms in the healthcare industry.
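To illustrate the distinction, here is a minimal sketch assuming ‘buzz’ means the total volume of mentions of a topic and ‘prevalence’ means the share of unique authors who mention it at least once. Both the definitions and the record fields (author, topic) are assumptions made for this example; actual tools may define these metrics differently.

```python
# Illustrative mention records; in practice these would come from a listening tool.
mentions = [
    {"author": "pat_01", "topic": "ER wait times"},
    {"author": "pat_02", "topic": "ER wait times"},
    {"author": "pat_01", "topic": "ER wait times"},
    {"author": "doc_07", "topic": "flu shots"},
]

def buzz(mentions, topic):
    """Buzz: total number of mentions of a topic (raw volume, repeats included)."""
    return sum(1 for m in mentions if m["topic"] == topic)

def prevalence(mentions, topic):
    """Prevalence: fraction of unique authors who mentioned the topic at least once."""
    all_authors = {m["author"] for m in mentions}
    topic_authors = {m["author"] for m in mentions if m["topic"] == topic}
    return len(topic_authors) / len(all_authors) if all_authors else 0.0

print(buzz(mentions, "ER wait times"))                  # 3 mentions
print(round(prevalence(mentions, "ER wait times"), 2))  # 2 of 3 unique authors = 0.67
```

A topic with high buzz but low prevalence is often a small, vocal group; high prevalence suggests a concern that is genuinely widespread.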
What can hospitals do with social media: A few of the benefits healthcare providers can reap by getting involved in social media are listed below:
Help advance medical research as physicians interact more often, with more social information at hand
Publish health news first (alerts, awareness, crisis communication, health hazard information, etc.)
Monitor hospital reputation
Introduce new products and service lines
Strengthen patient provider relationships
Bring patients with similar problems together
Physician opinion sharing
Enable better marketing and communications efforts
How can healthcare providers do this? In simpler terms, where can they start? Based on my analysis, some of the key pointers are below:
Dedicated team to monitor social activity
Integrate marketing activities with social network
Tools to monitor social activity
Encourage physicians to interact in social forums
Dedicated team to check and respond to users’ grievances online
Set up dashboards for the executive team with social media monitoring metrics (an illustrative sketch of such metrics follows this list)
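As an illustration of the kind of numbers such a dashboard could surface, here is a minimal sketch that computes a response rate and a median response time for patient questions, directly addressing the ‘abandoned platform’ risk raised in the Monitoring section. The data shape and the hour-based timings are assumptions made purely for this example.

```python
from statistics import median

# Illustrative Q&A log; "response_hours" is time to first provider reply (assumed schema).
questions = [
    {"id": 1, "answered": True,  "response_hours": 2.0},
    {"id": 2, "answered": True,  "response_hours": 30.0},
    {"id": 3, "answered": False, "response_hours": None},
]

answered = [q for q in questions if q["answered"]]
response_rate = len(answered) / len(questions)                         # share of questions answered
median_response_hours = median(q["response_hours"] for q in answered)  # typical turnaround

print(f"Response rate: {response_rate:.0%}")               # 67%
print(f"Median response time: {median_response_hours} h")  # 16.0 h
```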
Are you faced with questions like:
• ‘Consulting – Is it an expense or an investment?’
• ‘Strategy – Is it just jargon or a savior?’
• ‘Roadmap – Is it on track as planned? How do I do a health check?’
Over time, corporates have invested a lot of money in various tools and technologies to cater to the needs of different departments within their organizations. This has grown into a mammoth that eats into profits through licensing costs, maintenance costs, and so on. Sometimes one has lost count of the number of applications in the landscape, and has no clue what a particular application is for, who is using it, or why it was needed in the first place. The impact became even more evident during the recession, as IT budgets grew slim and pressure from top management to cut costs increased. The CIO now needs to strategize IT spend to keep costs within budget. This paper tries to address where the CIO can start and how to leverage the services provided by various IT service providers.
Read on to find out how recession-hit industries can leverage the consulting arm of IT service providers to cut costs and, at the same time, get a forward-looking view of their architecture and approach.
Recession-hit Industry:
During the recession, quite a few corporates employed IT consultants to increase their productivity and cut costs. They achieved this by getting a neutral third-party view of where they were and what they needed to do next to sustain themselves through the recession. These corporates were primarily looking at consolidation, rationalization and/or optimization of their existing setup, and even a re-org in some cases.
Situation in the IT Services industry:
IT service providers realized the need for value-added services in the market and forayed into the consulting services spectrum during the recession. This was not just to help clients come out of the recession but also to sustain themselves as IT budgets grew thin.
In how many cases has one seen the consulting team go back and check what went right or wrong with their proposed recommendations? Even if everything went right, what is the probability that the users have bought into the solution, that the problem is really fixed and users are reaping the benefits?
The real problem here is that consulting and delivery are two separate arms inside the same organization. There are pros and cons to this setup, and yes, they are different beasts; they need separate teams, heads, etc. But to achieve their full potential, and for the survival of both arms in the long run, they have to work hand in hand. Gone are the days when CIOs just wanted help in ‘doing it’; now they want ‘insights’ into what their competitors are doing. This has led to very strong and rapid growth of the consulting arm within IT services organizations.
Let’s ask the question: “Can consulting survive another recession, if it were to happen in the near term?” The answer is that consulting might not survive as well as it did in the last recession, because most companies have already, or have just about, gone through a fairly large consolidation, rationalization and/or optimization exercise. So, in order to survive the next recession, consulting will have to start looking for new avenues through which it can find inroads into its clients and keep the consulting arm from taking a hit.
The Solution:
Typically there is a strategy phase and then an implementation phase. The missing piece is a continual monitoring/improvement phase. While the consulting arm takes care of the strategic piece and the delivery arm takes care of the implementation, “Extended Consulting” takes care of this continual monitoring/improvement phase, as well as the client relationship.
To sustain the consulting industry in the long run, a strong link between the consulting and delivery arms needs to be created now. The result of this link would be a C-D-EC (Consulting – Delivery – Extended Consulting/Collaboration) model. The advantage is twofold. One, it helps consultants provide better recommendations and customize their existing frameworks. Two, after everything is set up, a minor tweak to the original recommendation based on user feedback can go a long way in terms of user buy-in, higher satisfaction levels, etc. One cannot deny that a link between the two arms exists today, but one has to understand that this link is weak and not sufficient for survival in the long run.
Pure-play consulting firms, where implementation is done by another vendor, or cases where the delivery (implementation) team belongs to a different vendor because the client’s organization has a policy of not using the same vendor for consulting and implementation, will have to customize the model to create an inroad back into the organization to re-assess and optimize. Now one might ask, “Is it good to have the same vendor for both consulting and delivery?” This is a debatable topic with pros and cons, but the C-D-EC model is fairly isolated from this problem. Who does it does not matter; whether it is done is what matters here.
If consulting and delivery are done by the same vendor, the initial homework can be done by giving the consulting arm access to the delivery team’s in-house repository. This gives the consultants sufficient time to study what was carried out at the client’s site based on their recommendations and to come up with tweaks and value-adds before they go back to the client to amend and strengthen their earlier recommendations. A by-product of this effort is that the consultants now have an opportunity to build better frameworks and processes around their existing ones, which can then be put forward to clients who are yet to undergo an optimization or consolidation exercise.
C-D-EC (Consulting – Delivery – Extended Consulting/Collaboration) model
The consulting models followed by various organizations across the globe, when mapped to the six-sigma methodology, indicate that they address only a portion of it. When the delivery team takes over and completes the implementation, we can say that 80% of the six-sigma methodology is addressed.
The C-D-EC model is based along the lines of six-sigma and addresses the complete DMAIC methodology end-to-end. The engagement starts with the consulting phase (Define-Measure-Analyze): understand and define the client’s problem, walk through the client’s existing landscape, measure (quantify) the bottlenecks, and finally analyze the findings by baselining each identified parameter in focus. The end of this consulting phase yields the typical recommendations, governance structure, best practices and implementation roadmap.
When the delivery team takes over and completes the implementation based on the proposed recommendations, we can say that the Improve phase of the six-sigma methodology is addressed.
What we have seen so far is what the industry follows today. The Control phase of the six-sigma methodology is either left out completely or addressed in parts through a maintenance and support program. The problem with addressing the Control phase through maintenance and support is that it takes time to realize the complete benefit of the investment, and consultants have no way to figure out how effective their recommendations were. For example, the support team can take anywhere between two and five months just to stabilize and familiarize themselves with the environment, because the development team normally does not also become the support team. Hence, on average it can take four to five months for the support team to figure out possible areas for enhancement and roll them into production. Let’s look at how bringing the consulting team back in post-implementation can reduce this four-to-five-month time lag and also create some positive impact.
Extended Consulting:
The consulting team is brought back to study gaps in the implementation, understand how users are interacting with the new setup, collect their feedback and propose an Optimize & Sustain strategy. This could be as simple as tweaking the processes/recommendations proposed initially, or bringing in a completely new perspective, such as a collaboration that takes the entire setup to a new level. Under normal circumstances, this extended consulting leads to the formulation of a continuous, repeatable and robust framework which the client can reuse on a day-to-day basis with its internal team. This fills the missing piece of the six-sigma methodology: the Control phase.
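To summarize the mapping described above, here is a tiny sketch (in Python, purely a restatement of the text) of how the C-D-EC phases line up with DMAIC; it is not a tool, just a compact way to see which arm owns which six-sigma phase.

```python
# Mapping of C-D-EC phases to the six-sigma DMAIC methodology, as described above.
CDEC_TO_DMAIC = {
    "Consulting":          ["Define", "Measure", "Analyze"],  # problem definition, baselining, findings
    "Delivery":            ["Improve"],                       # implementation of the recommendations
    "Extended Consulting": ["Control"],                       # post-implementation monitoring and tuning
}

for phase, steps in CDEC_TO_DMAIC.items():
    print(f"{phase:>20}: {', '.join(steps)}")
```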
This phase can be included as part of the existing consulting models of various organizations at little or no cost to the client, depending on factors like the size of the implementation, the relationship with the client, the knowledge leveraged, etc.
Benefits of C-D-EC over C-D
As explained earlier, the overall benefit of this model is the ability to re-assess and re-align the roadmap in large deals, especially roadmaps spanning two years or more. It also provides the ability to fine-tune the proposed solution with the current market situation in mind.
On the other hand, it gives the consultants an inroad back into the client’s organization to look for new business opportunities and sustain a long-term relationship.
Framework for Extended Consulting
Now that we have brought the consulting team back in, questions will arise: where should one start, how should this extended consulting phase be carried out, and can the same methodology used initially be followed? One cannot follow the same methodology, as it would overburden and overdo the whole exercise. Hence the ideal answer is a scaled-down version of the original consulting model that any vendor follows.
The primary factors this scaled-down version needs to address are captured by the ACUTE methodology below.
ACUTE (Analyze, Compare and gather User feedback to Trigger Enhancement)
Analyze
The post-implementation setup is studied at a high level to check the completeness and accuracy with which the implementation has been done. This phase is purely technical.
Compare
The details collected in the Analyze phase are now compared against the recommendations given. Any gaps are identified, along with possible reasons and the impact of each gap. It is an added advantage to interact with the development team(s) directly to get clarification on these possible mismatches.
User Feedback
Feedback is collected from key users, with usability, productivity and user-friendliness as the primary parameters. Only the actual users of the implementation are part of this phase.
Trigger Enhancement
This is the phase where findings from the Compare phase and feedback from users are studied, and alternative solutions and/or tweaks to the earlier recommendations/processes are drafted wherever required. In some cases, a re-usable framework customized to the client’s needs can be produced for their in-house team to leverage.
Thanks to KrishnaKumar, whose valuable feedback helped me come up with this model.