
Chapter 2: The Issues of Management

Intranet Organization: Steven L. Telleen, Ph.D.

Introduction

As we discussed in the previous chapter, Intranet technology makes the creation and publishing of information easy. It also makes the retrieval and viewing of the information easy. What is not easy is finding the relevant information that is created in this independent environment. What Intranet technology cannot provide is the organizational and process infrastructure to support the creation and management of information. Without this infrastructure there is no efficient way for information to be found. Our paper systems have information infrastructures that have been refined over several hundred years. In most organizations these infrastructures are so integrated with the way we operate that they have become the processes and procedures we use to do business. They are inseparable from our concepts of management, because information is the driver of the processes being managed. 

The infrastructures that have developed around the management of information on paper have been largely centralized. The characteristics of paper publishing encourage centralization. Central decisions can be duplicated and sent out through the hierarchy, but feedback is relatively slow. It is difficult, if not impossible, to find and access information not managed by the central structure. Once copies have been distributed to multiple sites, managing updates and local changes becomes expensive and in many cases nearly impossible. This is not to say that decentralized infrastructures do not exist in the paper world. However, they are rare in our managed enterprises.

Managing distributed systems provides interesting challenges that are not found in centralized environments. The biggest challenge is moving from an attitude of control to an attitude of enabling independent decisions and actions. Without some standards, organizations lose their ability to communicate effectively and coordinate their activities. Without some level of support, domain experts become too involved in low-level maintenance activities at the expense of the high-leverage functions that most benefit the enterprise. The challenge is meeting the needs for coordination and efficiency without destroying the independence of decision making and action that makes enterprises strong and flexible.

Control & communication - the key to purpose-driven activities

At the heart of the issue lie not only our notions about organizational structure, but also our operational paradigm of what constitutes a functional, effective organization. Our traditional organizational structures have focused on a central command and control model. The organization was designed to bring information to the central command site and distribute and enforce the decisions back to the edges of the enterprise. As enterprises became larger and more complex, the number of intermediate steps increased, as did the amount of information needing to be processed. While electronic media have sped the passage of information through these steps, and even allowed us to eliminate steps, they have not decreased the amount of information that must be processed or the number of decisions that must be made. Even with the new technology, the central command and control paradigm appears stretched to its limits.

But how does an organization support effective, goal-directed activity without someone in charge? Here is where the paradigm shift occurs. The distributed command and control paradigm has a body of theory and experience that supports its approach, although many of today’s proponents of the shift seem unaware of its existence. The philosophical underpinnings come from a focus of study known as General Systems Theory. Ludwig von Bertalanffy, a biologist, is considered the father of General Systems Theory, and almost every field of science (physical, social, and mathematical) has contributed to its development. 

The basic tenet of General Systems Theory is that all systems share certain characteristics that allow them to function as systems, regardless of their type or level of organization. General Systems Theory attempted to identify and document the characteristics common to all systems. What is important to our discussion is a set of calculations done by the economist and Nobel laureate Herbert Simon (see "The Architecture of Complexity," Proceedings of the American Philosophical Society, 106, 1962). Simon was able to show that a system composed of independently stable subsystems can withstand significantly greater perturbation than a system built directly from its components.
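
To make the force of Simon's result concrete, here is a simplified version of the arithmetic, with illustrative numbers chosen here for exposition rather than taken from the paper. Suppose a finished system requires $N = 1000$ assembly steps, and each step carries an independent probability $p = 0.01$ of an interruption that destroys the work in progress. Built directly from its components, the system is completed in a single uninterrupted run with probability $(1 - p)^{1000} \approx 4 \times 10^{-5}$, so tens of thousands of attempts are needed on average. Built instead from stable subassemblies of ten parts, no run ever requires more than ten uninterrupted steps, each succeeding with probability $(1 - p)^{10} \approx 0.90$; the roughly 111 subassemblies involved therefore require only a little over a hundred short attempts in total. The same rate of perturbation that makes the flat design impractical barely slows the hierarchical one.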

Numerous examples of this principle have been documented in practice. In one dramatic example, a team at Xerox PARC headed by Bernardo Huberman demonstrated an application of this general systems principle in 1989 with a computer program called SPAWN. The problem was to develop a program that could efficiently allocate free cycles on networked desktop computers. What Huberman's team was unable to accomplish using a central command and control model was accomplished fairly quickly using a distributed decision-making model.

For a system to function as a system, rather than a collection of parts, it must have ways of self-organizing and even directing behavior. If command and control is distributed to the subsystems, then we must look elsewhere for the self-organizing capabilities of the complex system. What the complex system provides is coordination and communication for the self-stabilizing subsystems. The paradigm shift, then, is one of moving from a central command and control model to a distributed command and control model with central communication and coordination. 

Before progressing, a quick look at the contributions of two current-day authors is in order. Both follow in the tradition of General Systems Theory, even if they do not explicitly recognize the heritage. The earlier of the two works is Michael Rothschild's Bionomics, published in 1990 and the source of the "Grand View" table in chapter one. The second is James Moore's The Death of Competition, published in 1996. Both authors use principles from biology and evolutionary ecology to explain the apparent discontinuities between today's economic and social realities and traditional economic and social models. Both have a firm grasp of the biological principles they cover and do a good job of presenting those principles and their relevance to a general audience.

The major difference between these two authors is that Rothschild focuses on competition as the primary dynamic that drives systems to efficiency and stability. Moore, on the other hand, acknowledges the importance of competition, then goes on to focus on several other key forces that determine the diversity and stability of complex systems. He invokes the general principle of coevolution to explain the evolving business ecosystem and explains the development of new "industries" in terms of ecological succession. 

Both books are worth reading: Rothschild for his insights into the interplay among information, biology and economics; Moore for his insights into how the broader forces at play in an ecosystem relate to the forces at play in today's business environment. I bring attention to these two books to make another point. At the highest level, these two authors are discussing two distinctly different kinds of systems. Rothschild implicitly views the economy as an undirected or chaotic (in the sense of chaos theory) system. Moore, on the other hand, explicitly recognizes human systems as purpose-driven systems.

These are important distinctions, because both kinds of systems exist in nature. A wild ecosystem is chaos driven. An organism or organization is purpose driven. Agriculture is one example of our attempt to turn a chaotic ecosystem into a purpose-driven ecosystem. The distinction between chaotic and purpose-driven systems is important because it relates to the decision / communication models discussed above. Chaotic systems are the result of distributed communication systems rather than distributed decision-making systems. 


[Diagram: a two-by-two matrix mapping control (decision making) against communication. Rows: central control (top) and distributed control (bottom); columns: central communication (left) and distributed communication (right).]

Our twentieth-century pyramidal organization structure is a good example of a system designed around central decision making and central communication. This has been a dominant and successful structure for most large Western organizations, be they business, non-profit or governmental. These organizations are guided by a person in charge. In large organizations this person is surrounded by trusted confidants who expand the reach of the decision maker, but this reach can be extended only so far before the structure begins to ossify or becomes unwieldy.

An arguably successful example of the central-control, distributed-communication model is the client-server computing environment. The requirement for central management (control) has been a major contributor to making the distributed communication model of client-server computing so unwieldy. We also see numerous examples of unplanned central-control, distributed-communication effects in large organizations today. When pyramidal organizations get too large, the central-control, distributed-communication model tends to emerge as part of the deformation and communication breakdown process. One aspect is what employees often call the mushroom school of management (they keep us in the dark and feed us lots of fertilizer). Another is the phenomenon known as "skunkworks." Large organizations display these chaotic characteristics because even if the central decisions are intended to provide a purpose, the distributed communication provides too much opportunity for error, embellishment or disregard.

The bottom two cells in the diagram depict the two versions of distributed control and decision making. The first, with a central communication and coordination structure, is a model for complex systems working toward a common purpose. The Intranet/web technologies were developed in this type of structure, and this is the model I suggest will become more common with the widespread use of Intranets in organizations. Higher order organisms are another example of this kind of structure. Our nervous systems provide a common communication and coordination pathway, but most of our body activities are managed locally, reacting and adjusting to both the immediate environment and the information from the nervous system. 

The bottom right cell is similar in that each subsystem reacts and adjusts to those around it. The difference is that there is no purposeful mission being coordinated; the system evolves chaotically. This is typical of natural ecosystems and the species of which they are composed. Evolution is driven by the demands of the moment, not by a conscious purpose.

I call the process that drives these distributed decision-making and control systems adaptive innovation. It refers to the ability of each subsystem to react to its local conditions. The result is not only a larger array of responses than can be carried out centrally, but also the ability of each subsystem to tailor itself to its particular environment. In purpose-driven systems, the mission becomes the key information component against which adjustments are made. This is why vision and goals are becoming so important in organizations.

The issue of purpose-driven versus chaotic systems may seem academic, but this is at the heart of the current economic debates in the United States government. The issues around where and how much the government should do in the areas of regulation and stimulation of the economic subsystems are directly related to conflicting beliefs about whether the economy is (or should be) a chaos-driven or purpose-driven system. We will examine this issue more in the final chapter on Intranet futures. 

I have come to believe that this distinction also is an important issue for some people in accepting Intranet development. There are individuals who try to control the publishing infrastructure on the Intranet because the only options they see are the completely distributed (chaos) approach and the completely centralized (pyramid) approach. They do not see how a distributed management approach can be reconciled with purpose-driven results.

Information life-cycle management

Organizational information generally carries content that enables action leading to a gain or loss of resources. An organization amplifies its ability to control those resources by dividing among multiple individuals the work required to reach a goal. For the organization to be effective, activities and progress must be coordinated. An important reason for sharing information within organizations is the agreement on and coordination of these goals and tasks. 

A requirement for successful coordination is consistency of information. It is not very efficient if the existence or location of important information remains unknown to an individual who needs it. It also is not very efficient if a team tries to reach a consensus when each member is operating from a different information base that may be incompatible or inconsistent with the others on the team. Some information gets stale and requires attention to keep it current. Most of today's organizational structures and processes have been refined over centuries to solve these problems for paper-based information. 

Information currency and integrity are much simpler problems when the content does not change often, the activities being coordinated are not large or complex, and the information is centrally collected and distributed. However, these are not common characteristics of most enterprises today. The distributed environments more commonly found today need to be able to coordinate information in a different way, and this requires a different set of management structures and processes than most organizations have inherited.

Access, power and innovation

The key characteristic of this technology is its ability to shift control of electronic information management from the technology specialists back to the information creators, and control of information flow from the information creators to the information users. If the user has the ability to easily retrieve and view the information when they need it, the information no longer needs to be sent to them just-in-case. Publishing can be separated from automatic distribution. This applies to forms, reports, standards, meeting minutes, sales support tools, training materials, schedules, and a host of other documents that flood our in-baskets on a regular basis. 

Making this work requires not only a new information infrastructure, as discussed above, but a shift in attitude and culture. As technology supporters we must retrain ourselves to think in terms of solving problems by providing the tools and infrastructure that allow information creators to do it themselves. As creators of information we must retrain ourselves to publish without distributing. As users we must retrain ourselves to take more responsibility for determining and tracking our changing information needs, and actively and efficiently acquiring the information when we need it. 

From an organizational perspective it is useful to look at how the paradigm shift affects three organizational characteristics: management, communication and leadership. 

We already have discussed the pressures for a management shift from central decision making to decentralized adaptive innovation. It was noted that this shift is the real driver behind the explosive growth of Intranets, and is taking place for organizational reasons resulting from complexity and scale. The Intranet implementation strategy in a large organization provides an excellent example of the competing management principles involved in the shift. 

A central decision-making paradigm approaches the Intranet implementation by determining which departments will participate, and which functions will be developed in each. It then provides the resources to implement each project in the order determined. This is a "we will do it for you" model. The distributed decision-making approach views the Intranet as a utility and concentrates on identifying and meeting the infrastructure requirements and on quickly imparting the knowledge, skills and tools to all the departments so they can implement whatever projects they determine make sense. This is an "enable you to do it for yourself" or adaptive innovation model. 

In practice, reaching agreement on the central project plan often takes longer than a well-executed knowledge and skills roll-out. Once agreement is reached, a centralized development effort quickly becomes overloaded, another example of a surface-to-volume problem. Our experience has been that in the time it takes to do one project centrally, the decentralized approach generates a project for each department. After the first project, the difference in quantity, quality and responsiveness of content between the two approaches becomes increasingly pronounced. And since both approaches require implementation of the technical infrastructure, departments in the central approach often get frustrated waiting for their turn and begin to implement their own projects anyway. This is one route to the content chaos mentioned above.

The shift in communications is one from publisher push to user pull. In an earlier paper I discussed this concept in more detail. Since that paper was written, the capabilities of the technology have expanded to include not just information, but also logic. Ultimately the shift in communications may have a more profound effect on our personal attitudes than the shifts in either management or leadership.

PUBLISHERS

Push mentality:
  • I know what you need - and I'll send it!
  • I don't know what you need - so I'll send it all!
  • I don't care if you need it - I'll send it anyway!

Pull mentality:
  • I know my mission and audience
  • I make information available on demand
  • I measure and improve information usefulness

Today most of us rely on information push in both our professional and personal lives. It is someone else's responsibility to get the information to us, be it another worker, another department, or the marketing department of the company whose products we buy. The problem is, there is too much information, so our decisions become capricious from an inability to process it all. This causes us to become stressed, always fearing that we have not heard about the latest development that might make our choices obsolete, our career paths unsuccessful or our lives unfulfilled. 

DOMAIN SPECIALISTS

Push mentality:
  • Someone needs to tickle me
  • Someone needs to tell me what information is available
  • Someone needs to tell me what information I need

Pull mentality:
  • I set up my own ticklers
  • I know how to find information when I need it
  • My job is to determine what information I need

In an Intranet workshop I was helping to facilitate, we encouraged a discussion of when pushing information was appropriate. One of the participants held the view that since she used many products from a certain vendor, it was not only appropriate but desirable for that vendor to actively push information about its new products to her. Her rationale was that a pull model required her to go to the information when it might not have changed. However, look at the cost. Her approach not only took away control of her time and priorities (creating information overload), but abdicated to the vendor her responsibility for determining what information was important. This clearly was someone who had not made the paradigm shift on a personal level.

This is not to say that her rationale was faulty. Time is wasted by going to sites just to see what has changed. However, there are a number of methods and tools that solve this problem while leaving control of the information flow with the user rather than the publisher. For example, agents now exist that allow the user to identify specific pages for tracking (e.g. Katipo, WebSeeker). The agent then checks regularly and reports back whenever a change is detected. The user has control, adding and deleting what they want to monitor, without the overload caused by accepting all push materials. 
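
The specific products named above have long since disappeared, but the general polling idea behind such agents is simple to sketch. The following is a minimal, hypothetical change-monitoring agent in Python, not a description of how either product actually worked; the page URLs and the polling interval are placeholder assumptions. It fingerprints each tracked page and reports whenever the fingerprint changes.

```python
import hashlib
import time
import urllib.request

# Hypothetical list of pages the user has chosen to monitor.
WATCHED_PAGES = [
    "http://intranet.example.com/standards/index.html",
    "http://intranet.example.com/hr/policies.html",
]

def fingerprint(url):
    """Fetch a page and return a digest of its contents."""
    with urllib.request.urlopen(url) as response:
        return hashlib.sha256(response.read()).hexdigest()

def watch(pages, interval_seconds=3600):
    """Poll each page on a schedule and report any that have changed."""
    last_seen = {}
    while True:
        for url in pages:
            try:
                digest = fingerprint(url)
            except OSError as error:
                print(f"could not reach {url}: {error}")
                continue
            if url in last_seen and last_seen[url] != digest:
                print(f"changed: {url}")
            last_seen[url] = digest
        time.sleep(interval_seconds)

if __name__ == "__main__":
    watch(WATCHED_PAGES)
```

The point of the sketch is that control stays on the user's side: adding or removing a page from the watch list is the user's decision, not the publisher's.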

One could argue that some vendors will make insignificant changes on their pages just to get additional "mind share." However, because the user is in control, this type of trickery is a high-risk, low-reward activity for the vendor. A user who is frequently tricked into viewing something of no interest is not only unlikely to buy that item, but likely to become disenchanted with the vendor and remove them from the monitoring agent. Instead, the reverse already is happening. Smart vendors are providing tools on their sites that allow users to customize their experience and to sign up with (or remove themselves from) agents that monitor very specific information domains at the site. This is the basis of one-to-one marketing.

The shift to a "user pull" paradigm involves not only a shift in responsibility for finding and retrieving information; it also requires a shift in the way we relate to information personally. Our only salvation may be to become comfortable making decisions based on patterns and trends, determining when and where specific detailed information is required, and being able to find it quickly. Conversely, we must wean ourselves from the belief that we somehow need to know every bit of information out there, regardless of its impact on our current decisions or choices. In a fast-changing world, filled with more information than we can assimilate, making a reasonable decision and moving forward is more effective than agonizing over the best decision of the moment.

The paradigm shift in leadership is important because it plays a large role in determining how individuals will react to an Intranet implementation. There seem to be three basic types of resistance to Web adoption. The first is from those who do not understand the organizational and paradigm shift underway. The second is from those who fear losing power (either personal or market) in the shift to the new paradigm. The third is from those who recognize the shift as inevitable, but are trying to slow the progress to gain more time to reposition their products or power base. 

In the end, the resistance is likely to be unsuccessful and may in fact be detrimental to those resisting. The underlying organizational requirements are fueling the move to the Web, not the technology per se. General Systems Theory predicts that those organizations that successfully decentralize decision making into self-regulating subsystems will become more stable and capable of managing today's increasingly complex environments than those that struggle to maintain a central decision making model. All three forms of resistance to Intranet implementations are more reactions to the organizational shift than the technology. Leaders in the information age need to provide the vision, stimulate diversity and the mixing of ideas, and prune or transplant inappropriate growth rather than gatekeep the information. Intranets provide a tool to assist in these activities. 

Core Paradigm Differences in Tools

The paradigm conflict in tools matches that of the central versus distributed approach to decision making and the level of control deemed healthy. We can see this conflict in two areas of Intranet tools: Content Creation and Management Tools and Workgroup Tools. 

The Web provides a major advance in electronic communication by creating a standard for content. By content I mean the text, images, audio, video and logic: all the objects available to members of the Intranet. The standards not only allow the content to be used, unaltered, across diverse platforms; they also allow the content to be modified by any standard tool that edits that type of content. As an author, I do not need to be concerned with what specific brand of tool was used to create the content originally. I can and have switched tool brands while creating and editing both the text and images for the web version of this document. Whatever brand of tool I am using at the moment will allow me to view and edit the content.

From a practical standpoint this leads to two conclusions. First, enforcement of a single brand of tool on an entire organization for compatibility reasons is no longer an issue. Second, if the organization is standardizing on a brand of authoring tool for contractual or support reasons, the decision is less monumental than in the past. It is easy to switch to some other brand later. In fact, a long period of transition with a mixed tool set does not cause problems with content sharing or updates, as long as the standards are focused on the output, not the creation tools.

The following model shows the major components of a distributed management Intranet. By adhering to web standards, the output in each functional box should be independent of the vendor-specific product used to perform the functions in the other boxes. In other words, content created with one product should be editable using another vendor's product for the same class of output (html, java, jpeg, etc.). Likewise, discovery agents should be able to find and catalog content regardless of what vendor-specific product created it or what mix of vendors' products are used to manage and serve the content on the Intranet. Environment managers should be able to take input from any standard discovery agent, and should allow the user to specify the vendor-specific products to be used for editing or creating each class of content. 

From a business standpoint it is important to recognize that many vendors with established products are not thrilled with this transportability of content. From a short term perspective, they need time to transition their products. From a longer term perspective they would like some kind of barrier to insulate their established business from constant competition. In other words, they would rather have you invest in their incremental changes than in some upstart's monumental change. The most common tactic is to combine two or more of the functional areas in the model to create a proprietary lock. For example, the environment manager may only work with a specific discovery agent or a specific creation tool set. The most common incarnation of this is to have the environment manager work only with content stored and cataloged using a specific content manager. (Note: single source content managers serve the same functional purpose in the old paradigm that discovery agents, and the tools that analyze their results, serve in the Intranet paradigm.) 

While most content creation and management vendors have added web-standard products, two short-term phenomena are helping them maintain their proprietary versions: the need many customers have for parallel paper-formatted versions for individuals not yet Web-enabled, and the vendors' ability to easily incorporate viewers for their proprietary content formats into web browsers as helper applications or plug-ins.

Using the need for non-Web versions of content as a hook, some vendors of authoring tools provide the ability to generate Web content as one form of output from their proprietary tool. Once generated, the Web version can be modified with any standard Web authoring tool, but the changes do not flow back the other way, so edits made to the Web version never reach the original. Thus the "master" copy can be maintained only in the proprietary format. Until web-standard content can be attractively printed (proper pagination, etc.), the proprietary solutions will have an edge in the mixed output environment. ForeFront's WebPrinter offers a solution for Windows clients. Many other vendors are just providing plug-in viewers for their proprietary formats. But the plug-in approach reinforces the old paradigm of content dependency on proprietary authoring tools and creates clutter for the user.

Web-publishing tools tend to fall into one or both of the "Discovery Agent" and "Environment Manager" areas. Historically, these products come from a wider variety of starting points than authoring tools. In addition to basic serving of files, these tools provide two major functions. One is the ability to efficiently find content (structuring, indexing, searching); the other is management of the content (availability, update, integrity). In both functions we see the conflict between the central and distributed models.

Any product that requires Intranet content to go through a single point to be published, be it a single server or a single application, is forcing a central decision-making model and the potential for a central bottleneck on the organization. This is not to say that organizations should not have a comprehensive index of their Intranet content. The issue is the way in which such an index is created and maintained. 

In the distributed model, an Intranet-wide index resides at some location. It really doesn't matter where. The index is searchable by the attributes and behaviors of the objects that have been indexed. The information in the index is maintained by an automated discovery agent that searches the Intranet links on a regular basis and creates a current map of objects and links to their occurrence. In this way individuals are not constrained from publishing by central bottlenecks, but a reasonably current consolidated view of all the content is available. 

This model follows the rules of self-regulating subsystems. The brand of server (hardware or Webware) for individual Web servers in the Intranet is not important to a discovery agent. Individuals and groups can self publish without running into procedural or resource bottlenecks. The index and agent applications are independent of the Intranet content. If a different brand of discovery agent or indexer is desired, it can easily be substituted. 
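
The agent and index products of that era are mostly gone, but the mechanism is straightforward to sketch. Below is a minimal, hypothetical discovery agent in Python; the seed URL and host name are placeholder assumptions, and a production agent would add scheduling, politeness rules and richer metadata. It walks the links it finds and records, for every object it encounters, the pages that refer to it, which is the kind of consolidated map the Intranet-wide index needs.

```python
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect the href/src targets found in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

def discover(seed_urls, allowed_host, max_pages=500):
    """Crawl an intranet host and return {object url: [pages that link to it]}."""
    index = {}                     # object -> list of referring pages
    queue = list(seed_urls)
    visited = set()
    while queue and len(visited) < max_pages:
        url = queue.pop(0)
        if url in visited:
            continue
        visited.add(url)
        try:
            with urllib.request.urlopen(url) as response:
                page = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue                # unreachable pages are simply skipped
        parser = LinkExtractor()
        parser.feed(page)
        for raw in parser.links:
            target = urljoin(url, raw)
            index.setdefault(target, []).append(url)
            if urlparse(target).netloc == allowed_host and target not in visited:
                queue.append(target)
    return index

if __name__ == "__main__":
    # Hypothetical seed and host; any consolidated index could be built from the result.
    catalog = discover(["http://intranet.example.com/"], "intranet.example.com")
    for obj, referrers in sorted(catalog.items()):
        print(obj, "<-", len(referrers), "links")
```

Because the agent only reads standard content over standard protocols, it is indifferent to which servers or authoring tools produced the pages, which is exactly the independence the distributed model depends on.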

The publishing tools for content management are less generic than most of the other web alternatives. There are several good tools available today to help authors or publishers manage a complex of related pages, but they tend to be tightly tied to specific authoring tools and web servers. This is mainly due to the various wizards and "bots" that allow non-technical authors to create their own complex functionality. In a distributed decision-making organization, these tools are viewed as distributed aids under the control of the authors or publishers. There is no central command and control manager, nor is there a need for one. Each publisher can use a server with the package she prefers, and the output is standard regardless of the publisher package managing it. The communication and coordination function is handled by the discovery agent/index method described above. 

Products using the agent-discovery model are starting to emerge for managing web objects in an application development environment. Examples are products from Wallop Software and NetCarta, which use agents to discover and map the objects and relationships available on an Intranet, including applets, graphics and HTML pages. As these tools evolve they open up possibilities for increasingly flexible and powerful publishing-management and application-development capabilities that are based on communication and coordination rather than central control. 

Coordination, or workflow, tools are the newest of the Intranet tools, although the Internet versions of the most commonly used functions in the proprietary workflow packages are actually older. For example, email, threaded conferences, searchable bulletin-boards, news groups and self-service subscription servers are all old Internet functions that actually contributed to the Web standards. In many cases the newer proprietary clones are less flexible than the best of their Internet counterparts. 

It was the ability to track and manage processes that distinguished the workflow packages from the traditional Internet tools, until recently. This will be one of the most interesting areas to watch develop in the future. The reason is that Internet and Web implementations tend to coordinate activities via messaging approaches. The traditional workflow packages are primarily database applications that use common variables in a database to coordinate activities. Both of these approaches have their own set of strengths and weaknesses, and applications can be built that mix the two approaches. 

Since sharing common data is the essence of traditional workflow packages, a major issue has been the sharing of databases by geographically distributed groups and by mobile computers. The standard way of handling this has been replication of the databases, making multiple copies, then comparing and copying changed files when the opportunity arises. Initially this was accomplished through the proprietary database of the workflow vendor. More recently, application vendors in this market have begun to offer "synchronization" of client and server databases outside of the workflow vendors' packages. These tend to work with many common SQL databases, and the brand does not have to be the same on the client and the server. These same vendors are moving their workflow packages to Intranet technology. 
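
The mechanics differ from vendor to vendor, but the core idea of comparing copies and keeping the newer change when the opportunity arises can be sketched in a few lines. The following toy Python merge is a hypothetical illustration of a last-writer-wins synchronization pass, not any particular vendor's engine; each store maps a record key to a (timestamp, value) pair.

```python
def synchronize(local, remote):
    """Toy last-writer-wins merge of two {key: (timestamp, value)} stores.

    After the call, both stores hold, for every key, whichever record carried
    the most recent timestamp. Real synchronization engines add conflict
    logging, deletions and transactional guarantees.
    """
    for key in set(local) | set(remote):
        candidates = [rec for rec in (local.get(key), remote.get(key)) if rec is not None]
        newest = max(candidates, key=lambda rec: rec[0])
        local[key] = newest
        remote[key] = newest
    return local, remote

# Example: a mobile client and a server copy that have drifted apart.
laptop = {"expense-42": (100, "draft"), "contact-7": (90, "old number")}
server = {"expense-42": (120, "approved"), "memo-3": (95, "budget memo")}
synchronize(laptop, server)
# Both copies now contain the approved expense, the memo, and the contact.
```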

What is most intriguing is the question of how message-based technology, which makes up much of the Intranet tool set, might apply to the world of workflow. Are there opportunities to rethink the problems of workflow in the distributed paradigm, or is this aspect of coordinated behavior inherently dependent on centralized control and therefore best handled with centralized technologies? The interest in applying Intranets to workflow management is recent, and has not yet attracted the number of entrepreneurs that fueled earlier innovation in other areas of the Intranet. The early entrants are primarily building interfaces between existing data-sharing models rather than exploring the extension of the distributed-messaging paradigm to the fundamental problems that workflow packages must solve. However, a few companies, like WebFlow, have begun to develop approaches based on the new paradigm. This is perhaps the most promising area for the next wave of leapfrog applications.

One problem that must be solved is that of asynchronous clients. It is somewhat surprising that Intranet software has not addressed the issue of mobile users, since basic Internet email technology has long supported mobile users through the caching and queuing of messages. Tools like WebWhacker are beginning to bring these capabilities to Web files, but remain in view-only mode. When Web forms can be saved locally, filled out and queued off-line, then submitted when the user reconnects, a whole new set of message-based workflow tools will become available. The advent of Java and portable objects will encourage this process.
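
As a concrete illustration of the queue-and-forward behavior just described, here is a minimal, hypothetical sketch in Python. The spool file name, form URL and field names are placeholder assumptions; a real client would also need authentication, conflict handling and a way to show the user what is still pending.

```python
import json
import os
import urllib.parse
import urllib.request

QUEUE_FILE = "pending_forms.json"   # hypothetical local spool file

def queue_submission(action_url, fields):
    """Save a filled-out form locally instead of submitting it immediately."""
    pending = []
    if os.path.exists(QUEUE_FILE):
        with open(QUEUE_FILE) as f:
            pending = json.load(f)
    pending.append({"url": action_url, "fields": fields})
    with open(QUEUE_FILE, "w") as f:
        json.dump(pending, f)

def flush_queue():
    """Try to submit every queued form; keep the ones that still fail."""
    if not os.path.exists(QUEUE_FILE):
        return
    with open(QUEUE_FILE) as f:
        pending = json.load(f)
    still_pending = []
    for form in pending:
        data = urllib.parse.urlencode(form["fields"]).encode()
        try:
            urllib.request.urlopen(form["url"], data=data)   # POST the form
        except OSError:
            still_pending.append(form)   # still offline or server unreachable
    with open(QUEUE_FILE, "w") as f:
        json.dump(still_pending, f)

if __name__ == "__main__":
    queue_submission("http://intranet.example.com/expense-report",
                     {"employee": "jdoe", "amount": "42.50"})
    flush_queue()   # call again whenever the user reconnects
```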

The proprietary workflow infrastructure vendors continue to try to sell central control as their value-add in the Intranet market space. Meanwhile, tools that support the distributed-control, central-coordination model are beginning to emerge. A candid question anyone implementing Intranet workflow tools should ask potential vendors concerns their commitment, plans and timetable for evolving to a distributed-control model. Vendors who believe they can hold back the tide of distributed decision making (distributed publishing) and of pull versus push information indefinitely will likely have a short life.

When implementing Intranet policies, the organization needs to address the issues around standards and proprietary tools for both the short and long term. In general, moving toward Web-standard content and approaches provides the most long-term flexibility for incorporating new functionality and integrating diverse content in the event of changing requirements, mergers or partnerships with other organizations. 

Adaptive Innovation

In his book The Death of Competition, Moore advances the concept of coevolution as the new business model. All players in a business ecosystem must coevolve for the system to grow and remain healthy. This same concept is central to the notion of the distributed management model advanced above. Each business element (self-regulating component) finds itself in a continually changing environment. It survives and adds value to the overall mission by adapting to the changing conditions of the organization. 

The strength of this new organizational model is its resilience and flexibility. Every part does not have to respond to an attack or opportunity, only the parts directly affected. Likewise, if one strategy fails, the effect on the whole organization is diluted not just by the limited area affected, but by the strength of resources and relationships of the parts not being challenged. There are more responses and more creative minds trying more things than any central organization could ever manage. And, those responding are the most sensitive to and knowledgeable about the problems they are trying to solve. This is the strength of what I call Adaptive Innovation. 

Adaptive Innovation is why an implementation approach that focuses on creating the infrastructure and imparting the knowledge and skills to all the departments has the best chance of success:

  1. A small increment of effort spread across each unit in an enabled organization will produce more output than a large centralized effort.
  2. If the tools and approaches are useful, the time and effort expended on them will grow by displacing existing approaches and activities that are less useful.
  3. The uses and time displaced will be different in each organizational unit based on that unit's determination of what works or what makes sense.
  4. Uses and quality of information will improve over time if regular communication among the units is encouraged, because of idea sharing, competition and peer pressure.

Business Implications

Companies are moving quickly to implement Intranets even though the business ramifications are not fully understood. As the MIS director of one company put it: "The potential benefits to the company are as yet unclear, but it appears obvious that we cannot ignore the energy building around the Web." 

It is difficult to predict many of the outcomes of Intranet technology because most enterprises adopt the technology to solve a proximate problem and justify it on that basis. Since the technology, even when employed to perpetuate the existing management model, also enables an effective alternative management model, the original justifying benefits often are accompanied by changes that show up in totally unexpected places.

Desktop computing increased computing costs, but decreased secretarial staff and virtually eliminated typing pools. Implementation of intra-enterprise TCP/IP networks increased networking costs, but generated offsetting savings in telephone and express mail costs (Schlumberger, reported in InformationWeek 1995). In other cases, Intranet implementations have increased some networking costs, but generated savings in photocopy, computer storage, printing, and travel costs. As roles begin to shift, Intranet implementations also may reduce the number of personnel or even eliminate some of the functions required to support today’s paper-based communication. 

While many initial Intranet justifications are based on reducing costs, most quantum business leaps come not from cost savings, but from increased opportunities and revenue. As mentioned in chapter one, this type of fundamental change is more appealing to most executives than incremental cost savings. However, these types of benefits are much more difficult to quantify in traditional terms. They tend to be enterprise-specific and more story-based (sensibility-based) than numbers-based. While numbers usually are presented, they make sense only in the context of the assumptions that the story makes "reasonable." Chapter 8 on implementation planning presents more information on developing cost justifications.

We are now ready to move to the next chapter where we will examine in more detail the basic roles that support management of Intranet content. 

Original Version: October, 1996
Last Updated: October, 1997
Copyright 1996/1997 - Steven L. Telleen, Ph.D.
info@iorg.com

This material is based in part on work that the author wrote while an employee of the Amdahl Corporation. Those portions covered by the Amdahl Corporation Copyright are reprinted with the permission of the Amdahl Corporation.