The term "Information Resources" incorporates three broad categories of "information stuff" essential to the modern business enterprise: its large mass of stored data (the DATA RESOURCE); the huge volume of application system program code (the APPLICATIONS RESOURCE); and the numerous networked hardware components, along with the operating software that makes them work (the TECHNOLOGY RESOURCE). These three components, working intricately together, allow the enterprise to produce and use the information required to operate and compete effectively on a daily basis. Interestingly, the most critical, yet the most neglected (and often the most costly) information resource category is the stored DATA RESOURCE. Invariably, most of the IT/IS budget and effort is spent managing applications and technology - in many enterprises, there is virtually no effective DATA RESOURCE MANAGEMENT (DRM). Yet, if the data resource is unavailable, inaccessible, incorrect, untimely, insecure, or inconsistent, the information necessary to operate and manage the enterprise simply cannot be produced - at any cost - irrespective of how much application program code and technology the enterprise has. Ultimately, the ability of any enterprise to use and leverage information always boils down to the state of the DATA RESOURCE. The IRM approach (as opposed to the typical and traditional IS/IT approach) elevates and emphasizes the management of the DATA RESOURCE as a co-equal consideration and investment with the management of the APPLICATIONS and TECHNOLOGY RESOURCES.

For any enterprise, there is a finite set of data subjects ("entities" in the parlance of data modeling) - the persons, places, things, concepts, and events - about which it must collect and store data in order to function and compete effectively. Generally speaking, a large and complex enterprise must keep data about more entities than a small, simple one. A very typical example entity is CUSTOMER - any commercial enterprise must keep data about its customers, and CUSTOMER data must be shared by almost every business process. Today, that data is typically stored redundantly in many files/databases controlled by different "application systems", then shared by means of very costly and time-lagged data interfaces which move data from the files/databases of one application system to the files/databases of many others. A very large bank was having problems retaining customers because it had managed to build, and was trying to maintain, over 196 redundant CUSTOMER databases for different "applications" in the bank! When a customer notified the bank of a change of address, an excruciatingly slow, costly, and ineffective process began, attempting to find and correctly update the customer's address in every one of those CUSTOMER databases; some customers were still receiving statements at their "old" address more than five years after they had notified the bank of the change. Needless to say, the bank lost many unhappy customers due to this highly visible data mis-management; a customer who perceives that the bank cannot keep their address current will have little confidence that the bank correctly knows their account balance. Sooner or later, these types of data messes become very visible to, and create problems for, the customers (and suppliers, regulators, managers, etc.) of any business, and those customers will readily leave for a competitor who operates faster, cheaper, better, and more flexibly because it has a much cleaner and simpler data resource. The consequences of large data messes are invariably significant: disgruntled customers, orders which are not filled, manufacturing disasters, accidents and injuries, undetected financial fraud or theft, patients who die, legal negligence, shipments that are lost, inventory which is wasted, contractual obligations which are unmet, non-compliance with government regulations...
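The bank's failure mode above can be sketched in a few lines of code. This is purely illustrative - the application names, customer IDs, and addresses are hypothetical - but it shows how redundant copies drift out of sync when an update interface misses a database, versus a single shared record that is updated once:

```python
# Hypothetical sketch: the same CUSTOMER address stored redundantly in
# per-application databases, updated by an interface process that can
# miss copies -- contrasted with a single shared data store.

# Redundant approach: each "application" keeps its own copy of the address.
app_databases = {
    "loans":      {"cust_1001": {"address": "12 Old Rd"}},
    "cards":      {"cust_1001": {"address": "12 Old Rd"}},
    "statements": {"cust_1001": {"address": "12 Old Rd"}},
}

def propagate_address_change(cust_id, new_address, reached_apps):
    """Update only the databases the interface process actually reaches."""
    for app in reached_apps:
        app_databases[app][cust_id]["address"] = new_address

# The change-of-address interface misses the "statements" database ...
propagate_address_change("cust_1001", "7 New Ave", ["loans", "cards"])

# ... so the enterprise now "knows" two different addresses for one customer.
addresses = {app: db["cust_1001"]["address"] for app, db in app_databases.items()}
# addresses["statements"] is still "12 Old Rd": statements mail the old address.

# Single-source approach: one shared record, updated exactly once.
shared_customer_db = {"cust_1001": {"address": "12 Old Rd"}}
shared_customer_db["cust_1001"]["address"] = "7 New Ave"
# Every application reads the same record, so all see "7 New Ave".
```

With 196 copies instead of 3, and interfaces that run on different schedules, the drift shown here becomes the multi-year inconsistency the bank's customers experienced.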

The simple and obvious question asked by bank management was, "Wouldn't it be cheaper and better for the bank to build and carefully maintain a single CUSTOMER database, and share that database across all business processes in the bank that need CUSTOMER data? How in the world did we end up with 196 inconsistent CUSTOMER databases, which all have to be updated in some synchronized way to keep our CUSTOMER data relatively correct and consistent, when a single CUSTOMER database would have worked so much better, and been so much cheaper to build and maintain?" The simple answer is that the IT/IS organization in this bank (and, in fact, in most enterprises today) was chartered and measured by how fast it could develop (or buy and install) new "applications" demanded by any part of the bank. But every new application (especially purchased vendor "packages") tends to bring with it a Trojan horse of more data to be stored and managed, and that data usually overlaps data already being stored and managed within one or more other application data stores. As more and more application systems are developed or purchased, the redundant (and laboriously interfaced) enterprise data mess grows exponentially. By responding to demands from various parts of the business to build or buy and install "applications" quickly, the IT/IS organization was inadvertently also building a massive data mess, which ultimately began to cost the bank its customers' business. In the traditional IS/IT approach, there is no effective DRM function, so it is no one's job to make sure that a large data mess is not created, whether intentionally or inadvertently. The current state of data redundancy in the average enterprise is extremely costly: the average enterprise could function much faster, cheaper, and better with less than 10% of the data it currently stores and updates!
The main objective of the IRM approach (which incorporates effective DRM) is to build subject-organized, single-source databases (e.g., a CUSTOMER database, a PRODUCT database, an ORDER database, an EMPLOYEE database, a SHIPMENT database, etc.), which are properly related (e.g., CUSTOMERS are related to their ORDERS, the ORDERS are related to the PRODUCTS being ordered, the ORDERS are related to the SHIPMENTS which deliver the ordered products, etc.) so the enterprise database is a sensible mirror image of the real-world persons, places, things, concepts, and events with which the business must operate, and about which it must collect, store, and utilize data, independent of individual applications. Done correctly, each data fact is stored (and updated) in only one physical data store, which is shared in real time by all applications, and by all persons in the business who legitimately need access to that data - so that every part of the business can know the same thing at the same time. This shared database principle is analogous to the enterprise having a single mind, and it profoundly collapses the time lags in the business ("information float") so the business can perform much faster, much cheaper, and much better. Interestingly, the technology to achieve this grand state has existed for well over forty years, but the number of businesses which have actually achieved it is extremely small. The reason is that enterprises have continually required IS/IT management to build or buy lots of "applications", with no awareness of the massive data mess which inevitably results - until it becomes so bad that the business (and especially its customers) begin to experience the negative consequences.
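The subject-organized, single-source principle described above can be sketched as a small relational schema. The sketch below uses SQLite (via Python's standard sqlite3 module) purely for illustration; the table and column names are hypothetical. The key property it demonstrates is that a fact such as a customer's address lives in exactly one place, so one update is immediately visible to every process that joins to it:

```python
# Minimal sketch of subject-organized, single-source tables (hypothetical
# names). Each fact -- e.g., a customer's address -- is stored exactly
# once; ORDER rows reference CUSTOMER and PRODUCT rather than copying them.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        address     TEXT NOT NULL            -- stored here, and only here
    );
    CREATE TABLE product (
        product_id  INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE "order" (                   -- ORDER is a reserved word, so quoted
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        product_id  INTEGER NOT NULL REFERENCES product(product_id)
    );
""")
conn.execute("INSERT INTO customer VALUES (1, 'A. Jones', '12 Old Rd')")
conn.execute("INSERT INTO product  VALUES (10, 'Savings Account')")
conn.execute('INSERT INTO "order"  VALUES (100, 1, 10)')

# A change of address is a single UPDATE against the single source ...
conn.execute("UPDATE customer SET address = '7 New Ave' WHERE customer_id = 1")

# ... and every business process that joins to CUSTOMER sees it immediately.
row = conn.execute("""
    SELECT c.address
    FROM "order" o
    JOIN customer c ON c.customer_id = o.customer_id
    WHERE o.order_id = 100
""").fetchone()
```

Because the ORDER table holds only a reference to the CUSTOMER row rather than a copy of the address, there is nothing to synchronize and nothing to drift out of date.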
In manufacturing, engineering, food services, pharmaceuticals, health care, research, laboratories, airlines, railways, traffic control, insurance, power generation, the military, government, and even banking, poor-quality data can literally result in enormous destruction and loss of life.

DRM in the traditional IS/IT "disintegrated application system" environment is usually limited to a somewhat janitorial role: laboriously keeping track of the ever-increasing amount of redundantly-stored data, and attempting (by working with unwilling application system developers, project managers, and business middle managers) to ensure that update synchronization between redundant data stored in various application systems is as timely as possible, that inconsistency is minimized, and that the meaning of data is reasonably well documented so it can be used intelligently. DRM in such an environment (a very typical IS/IT environment) will seldom make any measurable progress in reducing the data mess; it must simply try to mitigate the harmful business consequences of the mess. However, DRM as a functional part of an IRM environment can dramatically improve enterprise performance by ensuring the sharing of data so business persons/processes can literally know the same thing at the same time, increase the speed of business processes, eliminate redundant/unnecessary work, reduce errors/rework, and improve customer satisfaction. DRM can simply document the mess, or begin abolishing the mess; it is largely a question of how the DRM organization is designed, and of the authority and resources which are invested in it.

The good news is that there are some very helpful trends and forces in the IT/IS arena today which militate toward designing, building, and managing a well-designed, non-redundant, and universally-shared data resource. Today, enterprises are devoting huge amounts of resources and effort to designing and building enterprise websites to inform, educate, motivate, and conduct commerce with their customers and suppliers. An intelligent enterprise doesn't want to create a schizophrenic image by building 196 different websites - a single, well-conceived, integrated, intelligently organized, and well-designed website is the goal. And the data which is available through that website must always be consistent, correct, easy to navigate and use, timely, and secure. These objectives are all the very natural goals, and achievable outcomes, of intelligent INFORMATION and DATA RESOURCE MANAGEMENT. The emergence of data storage in the proverbial "cloud" likewise militates toward building a single-source enterprise database (resident in the "cloud") which can be shared by all parties who have a legitimate need to know the same thing at the same time. (Ironically, this could have been accomplished over 50 years ago, with the advent of database technology, if we had only understood and exploited the principle of sharing the same data at the same time, rather than creating massive messes of redundant and inconsistent data!)


WGS&A offers consulting services and training to assist our clients in designing and implementing very effective INFORMATION and DATA RESOURCE MANAGEMENT, within the context of managing all the enterprise information resources, so the enterprise can operate measurably faster, cheaper, better, and more flexibly. In order to succeed at improving the quality, organization, accuracy, sharing, timeliness, accessibility, security, and usefulness of the enterprise applications and data resource, while dramatically reducing the redundancy and inconsistency of both, DRM must become a co-equal partner with the organizations that are developing and managing the applications, and those that are managing the computer hardware/networking facilities of the enterprise. In fact, by aggressively attacking and reducing the prevailing data mess, the management of application program code and networked hardware actually becomes much cheaper, simpler, and easier. WGS&A has extensive experience in the design of very effective organizational structures, with specialization in the part of the enterprise which manages its information resources.

Our IRM/DRM consulting services include:

Please CONTACT us if you have questions, or are prepared to discuss a prospective IRM/DRM or data modeling consultation.