Posts Tagged ‘dg’

Building an Effective & Extensible Data & Analytics Operating Model

To keep pace with ever-present business and technology change and challenges, organizations need operating models built with a strong data and analytics foundation. Here’s how your organization can build one incorporating a range of key components and best practices to quickly realize your business objectives.

Executive Summary

To succeed in today’s hypercompetitive global economy, organizations must embrace insight-driven decision-making. This enables them to anticipate and effect business change quickly, innovating constantly and incorporating technological advances where appropriate. The pivot to digital, consumer-minded new regulations around data privacy and the compelling need for greater levels of data quality are together forcing organizations to enact better controls over how data is created, transformed, stored and consumed across the extended enterprise.

Chief data/analytics officers, who are directly responsible for the sanctity and security of enterprise data, are struggling to bridge the gap between their data strategies, day-to-day operations and core processes. This is where an operating model can help: it provides a common view/definition of how an organization should operate to convert its business strategy into operational design. While some mature organizations in heavily regulated sectors (e.g., financial services) and fast-paced sectors (e.g., retail) are tweaking their existing operating models, younger organizations are creating operating models with data and analytics as the backbone to meet their business objectives. This white paper provides a framework, along with a set of must-have components, for building a data and analytics operating model (or customizing an existing one).

The starting point: Methodology

Each organization is unique, with its own specific data and analytics needs, and different sets of capabilities are often required to fill those needs. For this reason, creating an operating model blueprint is an art, and no trivial matter. The following systematic approach will ensure the final product works optimally for your organization.

Building the operating model is a three-step process, starting with the business model (with a focus on data), followed by operating model design and then architecture. Before these, however, comes a precursory step, called “the pivots,” which captures the current state and extracts data points from the business model prior to designing the data and analytics operating model. Understanding the key elements that can influence the overall operating model is therefore an important consideration from the get-go (as Figure 1 illustrates). The operating model design focuses on integration and standardization, while the operating model architecture provides a detailed but still abstract view of the organizing logic for business, data and technology. In simple terms, this pertains to crystallizing the design approach for the various components, including the interaction model and process optimization.

Preliminary step: The pivots

No two organizations are identical, and the operating model can differ based on a number of parameters — or pivots — that influence the operating model design. These parameters fall into three broad buckets:

Design principles: These set the foundation for target-state definition, operation and implementation. Creating a data vision statement, therefore, will have a direct impact on the model’s design principles. Keep in mind that effective design principles leverage existing organizational capabilities and resources to the extent possible, and remain reusable despite disruptive technologies and industry advancements. These principles should not contain generic statements, like “enable better visualization,” that are difficult to measure, nor should they be so particular to your organization that operating-model evaluation is contingent upon them. The principles can address areas such as efficiency, cost, satisfaction, governance, technology and performance metrics.

Sequence of operating model development

Current state: Gauging the maturity of data and related components — which is vital to designing the right model — demands a two-pronged approach: top down and bottom up. The reason? The findings will reveal the key levers that require attention and a round of prioritization, which in turn helps decision-makers determine whether intermediate operating models (IOMs) are required.

Influencers: Influencers fall into three broad categories: internal, external and support. The current-state assessment captures these details, so team leaders must be cognizant of these parameters prior to the operating-model design (see Figure 2). The “internal” category captures detail at the organization level. “External” highlights the organization’s focus and the factors that can affect it. And the “support” category provides insight into how much complexity and effort the transformation exercise will require.

Operating model influencers

First step: Business model

A business model describes how an enterprise leverages its products/services to deliver value, as well as generate revenue and profit. Unlike a corporate business model, however, the objective here is to identify all core processes that generate data. In addition, the business model needs to capture all details from a data lens — anything that generates or touches data across the entire data value chain (see Figure 3). We recommend that organizations leverage one or more of the popular strategy frameworks, such as the Business Model Canvas1 or the Operating Model Canvas,2 to convert the information gathered as part of the pivots into a business model. Other frameworks that add value are Porter’s Value Chain3 and McKinsey’s 7S framework.4 The output of this step is not a literal model but a collection of data points from the corporate business model and current state required to build the operating model.

Second step: Operating model

The operating model is an extension of the business model. It addresses how people, process and technology elements are integrated and standardized.

Integration: This is the most difficult part, as it connects various business units including third parties. The integration of data is primarily at the process level (both between and across processes) to enable end-to-end transaction processing and a 360-degree view of the customer. The objective is to identify the core processes and determine the level/type of integration required for end-to-end functioning to enable increased efficiency, coordination, transparency and agility (see Figure 4). A good starting point is to create a cross-functional process map, enterprise bus matrix, activity based map or competency map to understand the complexity of core processes and data. In our experience, tight integration between processes and functions can enable various functionalities like self-service, process automation, data consolidation, etc.
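To make the idea of a cross-functional process map or enterprise bus matrix concrete, below is a rough sketch (the process and data-domain names are hypothetical) of a bus matrix expressed as a simple data structure, used to spot the data domains where tight integration matters most:

    # Hypothetical enterprise bus matrix: core processes mapped to the shared
    # data domains (conformed dimensions) they create or consume.
    bus_matrix = {
        "order_to_cash":   {"customer", "product", "date", "currency"},
        "procure_to_pay":  {"supplier", "product", "date", "currency"},
        "claims_handling": {"customer", "policy", "date"},
        "campaign_mgmt":   {"customer", "product", "channel", "date"},
    }

    def integration_points(matrix: dict[str, set[str]]) -> dict[str, list[str]]:
        """Return each data domain shared by two or more processes, i.e., the
        places where end-to-end integration and a 360-degree view matter most."""
        shared: dict[str, list[str]] = {}
        for process, domains in matrix.items():
            for domain in domains:
                shared.setdefault(domain, []).append(process)
        return {d: p for d, p in shared.items() if len(p) > 1}

    print(integration_points(bus_matrix))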

The data value chain

Standardization: During process execution, data is being generated. Standardization ensures the data is consistent (e.g., format), no matter where (the system), who (the trigger), what (the process) or how (data generation process) within the enterprise. Determine what elements in each process need standardization and the extent required. Higher levels of standardization can lead to higher costs and lower flexibility, so striking a balance is key.
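As a small illustration of what standardization means in practice (the source systems and formats below are assumptions), a single canonical representation can be enforced no matter which system produced the data:

    from datetime import datetime

    # Hypothetical source-specific date formats; each system emits dates differently.
    SOURCE_DATE_FORMATS = {
        "crm":     "%m/%d/%Y",
        "billing": "%d-%b-%Y",
        "web":     "%Y-%m-%dT%H:%M:%S",
    }

    def standardize_date(raw: str, source_system: str) -> str:
        """Convert a source-specific date string into one canonical format
        (ISO 8601), so downstream consumers see consistent data regardless
        of where, who, what or how it was generated."""
        fmt = SOURCE_DATE_FORMATS[source_system]
        return datetime.strptime(raw, fmt).strftime("%Y-%m-%d")

    print(standardize_date("03/31/2019", "crm"))       # 2019-03-31
    print(standardize_date("31-Mar-2019", "billing"))  # 2019-03-31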

Integration & standardization

Creating a reference data & analytics operating model

The reference operating model (see Figure 5) is customizable, but will remain largely intact at this level. As the nine components are detailed, the model will change substantially. It is common to see three to four iterations before the model is elaborate enough for execution.

For anyone looking to design a data and analytics operating model, Figure 5 is an excellent starting point as it has all the key components and areas.

Final step: Operating model architecture

Diverse stakeholders often require different views of the operating model for different reasons. As there is no one “correct” view of the operating model, organizations may need to create variants to fulfill everyone’s needs. A good example is comparing what a CEO will look for (e.g., strategic insights) versus what a CIO or COO would look for (e.g., an operating model architecture). To accommodate these variations, modeling tools like Archimate5 help create those different views quickly. Since the architecture can include many objects and relations over time, such tools also help greatly in maintaining the operating model.

The objective is to blend process and technology to achieve the end objective, using documentation of operational processes aligned to industry best practices like Six Sigma, ITIL, CMM, etc. for functional areas. At this stage it is also necessary to define the optimal staffing model with the right skill sets. In addition, we take a closer look at what the organization has and what it needs, always keeping value and efficiency as the primary goal. Striking the right balance is key, as it can become expensive to attain even a small return on investment. Each of the core components in Figure 5 needs to be detailed at this point, in the form of a checklist, template, process, RACIF, performance metrics, etc. as applicable; Figure 6 illustrates the detailing of three subcomponents one level down. Subsequent levels involve detailing each block in Figure 6 until task/activity-level granularity is reached.

Reference data & analytics operating model (Level 1)

The operating model components

The nine components shown in Figure 5 will be present in one form or another, regardless of the industry or the organization of business units. Like any other operating model, the data and analytics model also involves people, process and technology, but from a data lens.

Component 1: Manage process: If an enterprise-level business operating model exists, this component acts as the connector/bridge between the data world and the business world. Every business unit has a set of core processes that generate data through various channels. Operational efficiency and the enablement of capabilities depend on the end-to-end management and control of these processes. For example, the quality of data and reporting capability depends on the extent of coupling between the processes.

Component 2: Manage demand/requirements & manage channel: Business units are normally thirsty for insights and require different types of data from time to time. Effectively managing these demands through a formal prioritization process is mandatory to avoid duplication of effort, enable faster turnaround and direct dollars to the right initiative.
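One lightweight way to operationalize such a prioritization process is a weighted scoring of incoming requests; the criteria and weights below are purely illustrative and would normally be agreed with the business:

    # Illustrative criteria and weights for ranking data/analytics demands.
    WEIGHTS = {"business_value": 0.4, "regulatory": 0.3, "effort_inverse": 0.2, "reuse": 0.1}

    def priority_score(request: dict) -> float:
        """Score a demand; each criterion is rated 0-5 by the requesting unit."""
        return sum(WEIGHTS[k] * request.get(k, 0) for k in WEIGHTS)

    demands = [
        {"name": "churn dashboard",   "business_value": 4, "regulatory": 0, "effort_inverse": 3, "reuse": 4},
        {"name": "GDPR data lineage", "business_value": 3, "regulatory": 5, "effort_inverse": 2, "reuse": 3},
    ]
    for d in sorted(demands, key=priority_score, reverse=True):
        print(f"{d['name']}: {priority_score(d):.2f}")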

Sampling of subcomponents: An illustrative view

Component 3: Manage data: This component manages and controls the data generated by the processes from cradle to grave; in other words, the processes, procedures, controls and standards required to source, store, synthesize, integrate, secure, model and report data. The complexity of this component depends on the existing technology landscape and the three V’s of data: volume, velocity and variety. For a fairly centralized or single-stack setup with a limited number of complementary tools and little technology proliferation, this is straightforward. For many organizations, however, the people and process elements can become costly and time-consuming to build.

To enable certain advanced capabilities, the architect’s design and detail are major parts of this component. Each of the five subcomponents requires a good deal of due diligence in subsequent levels, especially to enable “as-a-service” and “self-service” capabilities.

Component 4a: Data management services: Data management is a broad area, and each subcomponent is unique. Given exponential data growth and use cases around data, the ability to independently trigger and manage each of the subcomponents is vital. Hence, enabling each subcomponent as a service adds value. While detailing the subcomponents, architects get involved to ensure the process can handle all types of data and scenarios. Each of the subcomponents will have its set of policy, process, controls, frameworks, service catalog and technology components.

The enablement of some of these capabilities as a service, and the extent to which they can operate, depends on the design of Component 3. It is common to see a few IOMs in place before the subcomponents mature.

Component 4b: Data analytics services: Deriving trustworthy insights from data captured across the organization is not easy. Every organization and business unit has its own requirements and priorities; hence, there is no one-size-fits-all method. In addition, with advanced analytics such as those built around machine-learning (ML) algorithms, natural language processing (NLP) and other forms of artificial intelligence (AI), a standard model is not possible. Prior to detailing this component, it is mandatory to understand clearly what the business wants and how your team intends to deliver it. Broadly, the technology stack and data foundation determine the delivery method and the extent of as-a-service capabilities.

Similar to Component 4a, IOMs help achieve the end goal in a controlled manner. The interaction model will focus more on how the analytics team will work with the business to find, analyze and capture use cases/requirements from the industry and business units. The decision on the setup — centralized vs. federated — will influence the design of subcomponents.


Component 5: Manage project lifecycle: The project lifecycle component accommodates projects of Waterfall, Agile and/or hybrid nature. Figure 5 depicts a standard project lifecycle process. However, this is customizable or replaceable with your organization’s existing model. In all scenarios, the components require detailing from a data standpoint. Organizations that have an existing program management office (PMO) can leverage what they already have (e.g., prioritization, checklist, etc.) and supplement the remaining requirements.

The interaction model design will help support servicing of as-a-service and on-demand data requests from the data and analytics side during the regular program/project lifecycle.

Component 6: Manage technology/platform: This component, which addresses the technology elements, includes IT services such as shared services, security, privacy and risk, architecture, infrastructure, data center and applications (web, mobile, on-premises).

As in the previous component, it is crucial to detail the interaction model with respect to how IT should operate in order to support the as-a-service and/or self-service models. For example, this should include cadence for communication between various teams within IT, handling of live projects, issues handling, etc.

Component 7: Manage support: No matter how well the operating model is designed, the human dimension plays a crucial role, too. Be it business, IT or corporate function, individuals’ buy-in and involvement can make or break the operating model.

The typical support groups involved in the operating-model effort include the BA team (business technology), the PMO, the architecture board/group, the change management/advisory, training and release management teams, the infrastructure support group, the IT applications team and corporate support groups (HR, finance, etc.). Organizational change management (OCM) is a critical but often overlooked component. Without it, the entire transformation exercise can fail.

Component 8: Manage change: This component complements the support component by providing the processes, controls and procedures required to manage and sustain the setup from a data perspective. This component manages both data change management and OCM. Tight integration between this and all the other components is key. Failure to define these interaction models will result in limited scalability, flexibility and robustness to accommodate change.

The detailing of this component will determine the ease of transitioning from an existing operating model to a new operating model (transformation) or of bringing additions to the existing operating model (enhancement).

Component 9: Manage governance: Governance ties all the components together, and thus is responsible for achieving the synergies needed for operational excellence. Think of it as the carriage driver that steers the horses. Although each component is capable of functioning without governance, over time they can become unmanageable and fail. Hence, planning and building governance into the DNA of the operating model adds value.

The typical governance areas to be detailed include the data/information governance framework, charter, policy, processes, controls, standards, and the architecture to support enterprise data governance.

Intermediate operating models (IOMs)

As mentioned above, an organization can create as many IOMs as it needs to achieve its end objectives. Though there is no one right answer to the question of optimal number of IOMs, it is better to have no more than two IOMs in a span of one year, to give sufficient time for model stabilization and adoption. The key factors that influence IOMs are budget, regulatory pressure, industrial and technology disruptions, and the organization’s risk appetite. The biggest benefit of IOMs lies in their phased approach, which helps balance short-term priorities, manage risks associated with large transformations and satisfy the expectation of top management to see tangible benefits at regular intervals for every dollar spent.


DAOM (Level 2)

To succeed with IOMs, organizations need a tested approach that includes the following critical success factors:

  • Clear vision around data and analytics.
  • Understanding of the problems faced by customers, vendors/suppliers and employees.
  • Careful attention paid to influencers.
  • Trusted facts and numbers for insights and interpretation.
  • Understanding that the organization cannot cover all aspects (in breadth) on the first attempt.
  • Avoidance of emotional attachment to the process, or of being too detail-oriented.
  • Avoidance of trying to design an operating model optimized for everything.
  • Avoidance of passive governance — as achieving active governance is the goal.
Methodology: The big picture view

Moving forward

Two factors deserve highlighting. First, as organizations establish new business ventures and models to support their go-to-market strategies, their operating models may also require changes. However, a well-designed operating model will be adaptive enough to new developments that it should not need to change frequently. Second, the data-to-insight lifecycle is a complex and sophisticated process, given the constantly changing ways of collecting and processing data. Furthermore, at a time when complex data ecosystems are rapidly evolving and organizations are hungry to use all available data for competitive advantage, enabling capabilities such as data monetization and insight-driven decision-making becomes a daunting task. This is where a robust data and analytics operating model shines.

According to a McKinsey Global Institute report, “The biggest barriers companies face in extracting value from data and analytics are organizational.”6 Hence, organizations must prioritize and focus on people and processes as much as on technology. Spending heavily on the latest technologies to build data and analytics capabilities will not help on its own; it will lead to chaos, inefficiencies and poor adoption. Though there is no one-size-fits-all approach, the material above provides key principles that, when adopted, can deliver optimal outcomes: increased agility, better operational efficiency and smoother transitions.

Endnotes

1 A tool that allows one to describe, design, challenge and pivot the business model in a straightforward, structured way. Created by Alexander Osterwalder, of Strategyzer.
2 Operating model canvas helps to capture thoughts about how to design operations and organizations that will deliver a value proposition to a target customer or beneficiary. It helps translate strategy into choices about operations and organizations. Created by Andrew Campbell, Mikel Gutierrez and Mark Lancelott.
3 First described by Michael E. Porter in his 1985 best-seller, Competitive Advantage: Creating and Sustaining Superior Performance. This is a general-purpose value chain to help organizations understand their own sources of value — i.e., the set of activities that helps an organization to generate value for its customers.
4 The 7S framework is based on the theory that for an organization to perform well, the seven elements (structure, strategy, systems, skills, style, staff and shared values) need to be aligned and mutually reinforcing. The model helps identify what needs to be realigned to improve performance and/or to maintain alignment.
5 ArchiMate is a technical standard from The Open Group and is based on the concepts of the IEEE 1471 standard. This is an open and independent enterprise architecture modeling language. For more information: www.opengroup.org/subjectareas/enterprise/archimate-overview.
6 The age of analytics: Competing in a data-driven world. Retrieved from www.mckinsey.com/~/media/McKinsey/Business%20Functions/McKinsey%20Analytics/Our%20Insights/The%20age%20of%20analytics%20Competing%20in%20a%20data%20driven%20world/MGI-The-Age-of-Analytics-Full-report.ashx

References

https://strategyzer.com/canvas/business-model-canvas

https://operatingmodelcanvas.com/

Enduring Ideas: The 7-S Framework, McKinsey Quarterly, www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/enduring-ideas-the-7-s-framework.

www.opengroup.org/subjectareas/enterprise/archimate-overview

BICC Vs BI CoE Vs DG

Focus on the concept. Not the name

Let me start with BICC and BI CoE. From what I have read so far in articles and books like “Business Intelligence for Dummies,” there is hardly any difference between the two. However, given a choice, authors prefer “Center of Excellence” over “Competency Center.” The reason quoted in “Business Intelligence for Dummies” by Swain Scheps is that the word “competence” has the connotation of bare-minimum proficiency, giving the feeling of being damned by faint praise. Even though a BICC goes well beyond mere proficiency, the name sounds merely average.

Based on what I have been seeing, though, I would say there is more than a fine line of difference between a BICC and a BI CoE. In my view, a BI CoE can be considered a subset of a BICC. An organization can have a number of CoEs, each concentrating on a particular area such as data, tools & technology, or process; the BICC houses all these BI CoEs under a single umbrella and manages them.

To bring in more clarity: each CoE manages the resources and defines the standards, processes and best practices within its own scope. Any framework, approach or methodology set up for monitoring the overall progress of these CoEs, including shared assets, processes and technology, is owned by the BICC.

Now, to create some controversy and initiate discussion, let’s bring in data governance (DG). Organizations typically implement either a BICC or DG, but not both. On closer inspection, you will see that the organizations that claim to implement DG and not a BICC have, over time, actually expanded into what would otherwise have been done as part of a BICC initiative. As shown in the picture below, they started off with “data” governance as the primary goal (represented as point 1). Once the base was set, they realized that changes to other areas like infrastructure, tools, etc. were needed, and started to enter that space and control it (represented as point 2).

As you can see, point 3 is represented in dotted lines because it is not mandatory for the DG team to set up separate CoEs in this model. A single, strong core team (dedicated in some cases) could have performed the job of the various CoEs during the setup without actually creating them. In other cases, small teams would have been set up, for example one for technology as a whole, to speed up implementation.

So to summarize, for me it’s something like this: BICC > BI CoE > Data Governance.
I am open to all kinds of feedback, as I know I could be wrong, so please share your views on this topic.

Reference:
“Business Intelligence for Dummies” By Swain Scheps
http://www.executionmih.com/business-intelligence/competency-centre-competencies.php

Data Governance: A beginners approach to Level-2 implementation

The 3-Level approach
The 3-level approach (Figure 1) works well when the organization is serious, has the required funding and resources to execute, and has a mature Enterprise Information Management (EIM) setup.

In organizations where the IT team is pressured to show tangible output for every penny invested, a 4-level approach as shown in Figure 2 can be adopted. The question “Is it possible to divide this further into multiple levels?” might arise. The rule of thumb is that the more levels there are, the less substantial the tangible benefit the business can see at each stage. So a faster turnaround with visible progress should be the mantra for a successful implementation.

The four-level data governance implementation is considered for this article, as it is more common in the industry today.

What do we need for Level-2?
To set up a ‘very basic’ DG practice leveraging the existing setup, a database with a few tables, some scripts to act as triggers and an intranet portal should suffice. Note that this is just a starting point, and hence the focus would be on data quality rather than data governance to begin with. As funds flow in, one can move from data quality monitoring to data quality control, and from there to data governance.
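A minimal sketch of this starting point is shown below; the table and column names are hypothetical, and any relational database (MS Access, Oracle, MS SQL, or SQLite as used here) would do:

    import sqlite3

    # Minimal starting point for a 'very basic' DG / data quality setup.
    conn = sqlite3.connect("dg_level2.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS change_request (
        ticket_id      TEXT PRIMARY KEY,
        priority       TEXT,               -- e.g., High / Medium / Low
        source_system  TEXT,
        target_system  TEXT,
        description    TEXT,
        qa_result      TEXT,               -- populated once QA signs off
        status         TEXT DEFAULT 'OPEN',
        created_at     TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );
    CREATE TABLE IF NOT EXISTS dq_check_result (
        check_name     TEXT,
        table_checked  TEXT,
        rejected_rows  INTEGER,
        run_at         TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );
    """)
    conn.commit()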

Going online
The organization’s existing intranet portal would be leveraged, and the objective now is to take processes online one by one, starting with the change management process. As part of Level-1 activities, the DG team would have already defined the process for change management, and this is used now. The steps to convert this process into something that can be monitored are as follows:

  a. Capture the key fields to be used in the form, e.g., ticket ID, priority of the change request, source and target systems affected, change description, etc.
  b. Design the form front end with the identified fields and link it to a database (MS Access, Oracle, MS SQL) at the back end. Mature organizations can leverage their SOA setup.
  c. Define the process flow, e.g., who raises the request, what happens next, who approves it, how exceptions are handled, etc. This detail again comes from what was captured in the previous phase.
  d. Bring in the governance component to ensure that no change is moved to production without going through this process.

The more detail captured in step a, the tighter the governance in step d. For example, if there is a field to capture QA results, it automatically implies that a change moving to production has to pass through QA (Dev – QA – Production), assuming a process has been defined to require QA compliance for approval of the change request. Step c, which defines the process flow, can be accomplished using database triggers and/or procedures; if the organization already has a rules engine in place, it can be leveraged here.
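As a rough illustration of steps c and d (reusing the hypothetical change_request table sketched earlier, with an application-level check standing in for a database trigger or rules engine), the governance gate might look like this:

    import sqlite3

    def approve_move_to_production(conn: sqlite3.Connection, ticket_id: str) -> bool:
        """Governance gate (step d): block any change that has not passed QA
        and been approved. A real setup might implement this as a database
        trigger, stored procedure or rules-engine rule; this is only a sketch."""
        row = conn.execute(
            "SELECT qa_result, status FROM change_request WHERE ticket_id = ?",
            (ticket_id,),
        ).fetchone()
        if row is None:
            raise ValueError(f"Unknown change request: {ticket_id}")

        qa_result, status = row
        if status != "APPROVED" or qa_result not in ("PASS", "PASS_WITH_EXCEPTION"):
            return False  # the change stays out of production

        conn.execute(
            "UPDATE change_request SET status = 'IN_PRODUCTION' WHERE ticket_id = ?",
            (ticket_id,),
        )
        conn.commit()
        return True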

As part of enhancements, this setup can be improved with small additions such as SLA tracking, report generation capability, etc. Report generation can help governance in a major way by letting the DG team track critical parameters such as open requests, SLA breaches and requests moved to production through the exception process. This in turn helps the DG team fine-tune processes and improve their maturity.
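A report along these lines can be as simple as the query below, again against the hypothetical change_request table and with an assumed SLA of five days:

    import sqlite3

    SLA_DAYS = 5  # assumed SLA; tune to the organization's agreed service levels

    def sla_breach_report(conn: sqlite3.Connection):
        """List open change requests that have exceeded the assumed SLA."""
        return conn.execute(
            """
            SELECT ticket_id, priority, created_at,
                   julianday('now') - julianday(created_at) AS age_days
            FROM change_request
            WHERE status = 'OPEN'
              AND julianday('now') - julianday(created_at) > ?
            ORDER BY age_days DESC
            """,
            (SLA_DAYS,),
        ).fetchall()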

Moving ahead, another process can be taken online, say BI tools & technology control. The steps shown in Figure 3 apply here as well; the difference lies in steps c and d, where the process is defined and governed.

Some of the other processes that can be brought under governance control without too much cost and effort are:

  • Database-related activities (e.g., new DB creation, index creation)
  • Data quality dashboard (e.g., rejected-records list pulled from ETL, data certification status)
  • Data security (e.g., user access to systems)
  • Training tracker

In addition, various database triggers can be set up to improve response levels. The primary advantage of reaching this stage is that users will now be conversant with the various processes in the organization and accustomed to them. Some amount of governance will also have been established, enough to move to the next level.

What next?
The organization can now consider moving to the next level by creating self-healing mechanisms wherever applicable. More advanced tools, such as Kalido, can be considered if required to improve the DG maturity level.

I’m really bored today, hence posting this blog without much thought. I will definitely refine it later. Please share your comments to help improve this topic.

Data Governance: A reality check

Abstract

Giving data governance recommendations is easy, but implementing them is an altogether different ball game. Organizations do realize the importance and value of data governance initiatives, but when it comes to funding and prioritization, such initiatives always take a lower priority. Over time, governance becomes the root cause of a handful of organizational issues, and it is only then that organizations start to prioritize DG and bring in experts to set things right.

“Are these experts providing what the organization needs?”
“Is the problem of data governance solved when the expert leaves?”
“If not, what is it that is lurking and needs to be addressed?”

This article tries to throw light on what these organizations expect, what they get from the experts, and how much of the recommendations can be, and actually is, converted to reality.

The Genesis

Problems related to data governance (DG) usually begin in an organization when investments in technology infrastructure and human resources are treated as assets but the data itself is not. Businesses must understand that the backbone of their organization is data, and that their competitiveness is determined by how data is managed internally across functions.

The problem of DG becomes critical as an organization grows in the absence of any formal oversight (i.e., a governance committee). Silos of technology seep in and disrupt the enterprise architecture, and then the processes. In due course, data ends up being processed directly by its consumers. The impact of this is huge, as users can now manipulate and publish data. The ripple effect is felt in processes like change management, which now happen locally on users’ machines rather than going through a formally governed process.

The ‘In-house’ DG Team

The DG team is either dormant or virtual in most cases, and often sits at Level-1 (the lowest) or Level-2 in terms of maturity (on any maturity model). The primary reason for this lack of maturity is the team’s unfamiliarity with DG processes beyond establishing naming standards, documenting processes, identifying roles & responsibilities and a few other basic steps.

Even if the team is capable of setting up a DG practice, the business teams are not flexible enough to empower it sufficiently to establish a successful governance practice. In some cases, the people identified to participate in the DG committee are already engaged in other business priorities, such as a program manager who is already tasked with multiple high-priority corporate development initiatives. This masks the true purpose of a DG team and makes it virtual.

The Problem

First, most in-house teams do not have a well-defined approach to setting up DG processes; those with an approach are unable to define success metrics; and those who manage to define the metrics do not know how to put them into action.

According to the Quality Axiom,

“What cannot be defined cannot be measured;
What cannot be measured cannot be improved, and
What cannot be improved will eventually deteriorate.”

This is very much true of DG as well. It is not enough to define metrics; organizations need to be able to control and monitor them, too. The problem lies in setting up the infrastructure and control mechanisms capable of continuous monitoring. To solve this, a specialist is usually called in.
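As a small, hypothetical illustration of the gap between defining a metric and continuously monitoring it, a data quality metric could be expressed and checked along these lines (names and thresholds are assumptions):

    from dataclasses import dataclass

    @dataclass
    class DQMetric:
        """A defined, measurable data quality metric with a monitored threshold."""
        name: str
        description: str
        threshold_pct: float  # minimum acceptable value, in percent

    def completeness_pct(values: list) -> float:
        """Percentage of non-null, non-empty values in a column extract."""
        if not values:
            return 0.0
        populated = sum(1 for v in values if v not in (None, ""))
        return 100.0 * populated / len(values)

    # Defining the metric is the easy part (the Quality Axiom's first step)...
    email_completeness = DQMetric(
        name="customer_email_completeness",
        description="Share of customer records with a populated email address",
        threshold_pct=95.0,  # assumed target agreed with the business
    )

    # ...continuously measuring and monitoring it is what needs infrastructure.
    sample_column = ["a@x.com", None, "b@y.com", "", "c@z.com"]
    measured = completeness_pct(sample_column)
    if measured < email_completeness.threshold_pct:
        print(f"ALERT: {email_completeness.name} at {measured:.1f}% "
              f"(threshold {email_completeness.threshold_pct}%)")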

The Catch

The recommendations provided by the specialist often look very promising and implementable. The catch lies in the gap between what the organization expects and what it gets, and in how much of the recommendations can actually be implemented.

The first part, “the expectations of the organization,” relates to assessing the existing state of DG and getting recommendations to solve the DG problem. The second part, “what the organization gets,” is where the mismatch happens. Most of the maturity models used by specialists measure DG along the following five dimensions:

  • Enterprise architecture
  • Data lifecycle
  • Data quality and controls
  • Data security
  • Oversight (funding, ownership, etc.)

The recommendations provided by these specialists traditionally pivot around these categories, but on closer inspection, each category has two components:

  1. Soft components (Level 1) – the easily implementable ones.
  2. Hard components (Level 2 & 3) – those that require funding, effort and time to build.

No matter how well the recommendations are categorized and sub-categorized, these two components always exist. The soft components, involving policy/process changes, reorganization, etc., are relatively easy to set up. However, the hard components, involving infrastructure, databases, dashboards, etc., either get ignored or are only partially implemented. It is this hard component that determines how many of the expert recommendations can actually be implemented.

 

The 3-level approach (Figure 1) works well when the organization is serious, has the required funding and resources, and wants to set up DG in a short time frame.

The Reality Check

Implementing DG is not as simple as buying a product off the shelf and installing it. Even specialists sometimes turn down requests to implement these hard components because of inadequate Enterprise Information Management (EIM) maturity. EIM architecture involves data sourcing, integration, storage and dissemination; setting up DG processes will be difficult if even one of these components is weak in terms of governance.

There are several pitfalls to look out for. First, DG initiatives can result in power shifts, which can be very difficult for senior managers to accept since they have hitherto enjoyed ungoverned authority over processes. This aversion to change, or to giving up power, paves the way for what can be termed a failure at the Level-1 stage. This is an organizational culture issue that needs to be addressed before moving further.

Second, specialists normally quote a price that includes setting up foundational elements such as changing the ETL infrastructure, setting up metadata and fixing master data. The business, instead of considering the cost of getting these foundational elements up and running as individual investments, sees this expenditure as part of the DG investment and fails to secure the required funding. When a multi-year roadmap is proposed and funding is sought in the name of implementing DG each time, sponsors will hardly be enthusiastic about the investment. Also, during the initial phases of a DG engagement, tangible results are difficult to establish and showcase, so the business might lose trust in the initiative. This can be termed a failure at the Level-2 stage.

Conclusion

To enable a mature DG setup, organizations need to step back, take a look at their existing architecture, get the foundational elements in place, and then push hard to implement the necessary hard and soft DG components. The cost of implementing these foundational elements should not be considered part of the DG initiative, because doing so will only undercut the priority of, and interest in, this critical undertaking.

Data governance has to be an evolving process in an organization. A big-bang, overnight implementation of DG might not always be advisable unless the foundational components are in place.