|Steven L. Telleen, Ph.D.
Director, IntraNet Solutions
Copyright (c) 1996 All Rights Reserved
If the only driver for the Web was the external Internet, the growth would have been much more modest, and certainly not capable of sustaining the level of hype we saw in 1995. The time it takes to create the required infrastructure and entice the public at large to adopt the technology is enough to slow the growth. And, with the level of media attention focused on the Web, the seeds for a financial and media backlash have already been sown.
The Intranet is a different matter. There are immediate and compelling reasons for large organizations, public and private, to adopt Intranets. They have been struggling with a series of complex organizational scaling issues for decades, and not entirely by chance, the Internet technology applied to internal networks simplifies many of these issues. In addition, many organizations already have in place the infrastructures and attitudes required to adopt an Intranet. They have the need; they have the hardware; and the Internet technology is providing the software to make it all work.
So what are these drivers? From an abstract view, the major issue is one of scaling-up. When anything in the natural world grows larger, there are certain principles that come into play. Perhaps the most basic is the effect of what we call "surface to volume ratio." While the principle here is charted easily with basic mathematics, a more everyday example might help in understanding its application.
If you ever have needed to make mashed potatoes fast, from raw potatoes, you may have learned that potatoes cook faster when they are cut into small chunks than when boiled whole. The smaller the chunks, the faster they will cook. Intuitively, this is because the centers of each of the smaller chunks are closer to the surface than the center of the larger chunks or the whole potato. They have a better surface to volume ratio.
This principle also applies to organizations. If we imagine an organization as a sphere, we can look at the surface area as that part of the organization that reacts to customers and markets, and the volume as the organizational size and complexity. As an organization's surface area increases, due to either increased market size or increased environmental complexity, the volume must be adjusted to support the increased surface. The bottom line is that, in the absence of structural changes, the volume must increase twice as fast as the surface. This creates a natural limit to the size an organization can attain and still remain effective.
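The geometry behind this claim can be sketched in a few lines of Python (an illustration added here, not part of the original argument): for a sphere, surface area grows with the square of the radius while volume grows with the cube, so each doubling of the radius quadruples the surface but multiplies the volume by eight.

```python
import math

def sphere_metrics(r):
    """Return (surface area, volume) for a sphere of radius r."""
    surface = 4 * math.pi * r ** 2
    volume = (4 / 3) * math.pi * r ** 3
    return surface, volume

s1, v1 = sphere_metrics(1.0)
s2, v2 = sphere_metrics(2.0)   # double the radius

print(round(s2 / s1))   # surface grows 4x
print(round(v2 / v1))   # volume grows 8x
# The surface-to-volume ratio is 3/r, so it halves each time the
# radius doubles: the interior gets ever farther from the surface.
print(s1 / v1, s2 / v2)
```

The same arithmetic drives both the potato example and the organizational one: as the "radius" grows, proportionally less of the interior sits near the surface.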
Many enterprises today are based on the premise of continued growth, but as this model shows, organizational complexity will continue to grow in this 1:4:8 pattern as long as the organization retains a central, pyramidal decision-making structure.
This results in a second characteristic of scaling-up: increased inertia. As an organization grows, it takes more effort to react because there are not only more points to coordinate, but more layers that information must pass through, in both directions, to input the data and return a plan of action. Add to this the time it takes the central decision-making organization to become aware of the increasing number of competing inputs and process a response, and many of the challenges found in today's large organizations begin to be understandable, if not excusable.
The pragmatics of this have not gone unnoticed in modern organizational practice. These challenges provide the rationale for downsizing. However, just eliminating many of the management and staff in the middle cannot solve the problem. A given surface size must have the volume to support it. Without the volume, the surface begins to collapse (because the organization cannot collect and respond to the changes over the surface size it is trying to support). Think about this as a basketball. If you take air (supporting volume) out of the basketball, it will become deformed and lose its bounce. To get a functioning basketball, the volume must be replaced or the size of the skin (surface) must be shrunk to make a smaller ball.
Even with an infrastructure of computers and networks to help support the volume, an organization structured around central decision making quickly becomes overwhelmed with raw input, and decisions are either slowed or impaired. This is why many downsizing activities have resulted in either a scaling back of the business scope of the organization or a restaffing to previous levels over the succeeding few years.
A third characteristic of systems as they scale up is deformation. If the volume increases twice as fast as the surface, gravity, acting on the weight of the object, begins to deform the supporting surface. This is one reason you don't see animals the size of whales on land. On land, supporting the weight of a mammal with the volume of a whale would require legs larger in diameter than the whale's body.
As organizations grow large, they tend to display symptoms of organizational deformation, caused by breakdowns in effective communication. These can take the form of internal stress (inter-group conflict), organizational rigidity (unhealthy bureaucratic activity) and loss of central control through unsanctioned adaptive innovation (the development of skunkworks whose results cannot be re-assimilated). The overall result is a natural limit to the size that organizations with a centralized decision-making structure can reach.
Restructuring to distribute decision making is an alternative. However, unlike cooking potatoes, complex organizations cannot just split up and still remain organizations. After all, organizations are successful because they accrue the benefits of coordinated activities. So how can organizations meet this challenge? Once again, there are successful patterns that seem to occur naturally in complex systems to deal with these problems. Over several decades in the middle of this century, a number of prominent scholars investigated a subject known as General Systems Theory (see Ervin Laszlo, The Relevance of General Systems Theory, George Braziller, New York, 1972). There were several patterns of general systems that were noted, but two have particular relevance to the current discussion.
The first was the observation that complex systems are made up of lower-level, self-regulating subsystems (see Herbert Simon, "The Architecture of Complexity," Proceedings of the American Philosophical Society, 106, 1962, for a more detailed discussion). As components at each level become self-regulating and standardized, the functionality becomes simplified and it becomes feasible to understand and build the next level of complexity. This type of hierarchization, in fact, is the principle behind the microprocessor revolution. Lower-level systems ("off-the-shelf" components) are organized into higher-order systems of a complexity that would be impractical without the replicated subsystems from which they are made.
The second was the observation that complex systems, made up of self-regulating subsystems, were more stable than systems of similar size where all parts react to all stimuli. The stressed organizations of today are already moving in this direction, reorganizing to move decision making from central, pyramidal, structures to distributed authority and decision making.
What defines the higher level system, and determines its functionality and ultimate complexity, is the process it uses for communication and coordination among the self-regulating sub-systems. The basic infrastructure for communication and coordination in organizations today remains anchored in paper-based approaches, even when the information has been transferred to electrons. Our organizations now appear to be reaching the limits of size and complexity that are supportable with a paper-based communication architecture. For at least three decades business consultants have been talking about the need for organizations to decentralize decision making and restructure out of the traditional central decision-making model. Business Process Re-engineering is the latest variant of that movement. What appears to have been missing was an appropriate infrastructure to support the communication and coordination requirements of these new decentralized organizations.
It is here that technology has become important. Electronic communication provided the ability to overcome many of the distribution and update constraints of paper-based systems. The speed and breadth at which client-server technology was adopted is a testament to the underlying pressure to decentralize decision making. Client-server has never been cost effective for centralized organization models, even though this was often the type of organization that adopted client-server. And, because client-server was primarily focused on the parts rather than the process of communication and coordination, it did not solve the fundamental problem of distributing decision making in organizations.
There was also the issue of standards. The early standards for client-server systems were complex and unable to achieve the interoperability required at almost every component level. Groupware companies attempted to provide a communication and coordination infrastructure, but rather than being standards based, each company developed proprietary interfaces for multiple levels in the hierarchy of components. This resulted in systems that were expensive to build, expensive to maintain, and required all participants to purchase the same proprietary infrastructure. Additionally, the philosophy behind many of these systems remained inherently paper-based and oriented toward centralized decision making and management control.
The Web is ideally suited to support a communication and coordination infrastructure in a distributed decision-making environment. Even at the technology level the Internet is based on a distributed decision-making model. Decisions that allow the Internet to complete its mission are made locally by the "self-regulating subsystems" rather than centrally at the point of message origin. The standards that support this model are independent of the transport media and convey and display any type of digital content. The Web standards are simple, straightforward, and actually work. And, they are already in widespread use worldwide. The collection of standards that makes up the Web of today was not developed in a centralized decision-making environment. It came about through an adaptive innovation model enabled by the central communication and coordination structure of the Internet itself.
The technology that enables organizations to support decentralized decision making and still coordinate activities has finally arrived. Its adoption is easy and cheap for those organizations that made the earlier investments in client-server, since they already have the technical infrastructure in place. The Web merely provides the software to make a trend that was already in motion work.
Before progressing, a quick look at the contributions of two present-day authors is in order. Both follow in the tradition of General Systems Theory, even if they do not explicitly acknowledge the heritage. The earlier of the two works is Michael Rothschild's Bionomics, published in 1990. The second is James Moore's The Death of Competition, published in 1996. Both authors use principles from biology and evolutionary ecology to explain the apparent discontinuities between today's economic and social realities and traditional economic and social models. Both have a firm grasp of the biological principles they cover and do a good job of presenting those principles and their relevance to a general audience.
The major difference between these two authors is that Rothschild focuses on competition as the primary dynamic that drives systems to efficiency and stability. Moore, on the other hand, acknowledges the importance of competition, then goes on to focus on several other forces that also are key in determining the diversity and stability of complex systems. He invokes the general principle of coevolution to explain the evolving business ecosystem and explains the development of new "industries" in terms of ecological succession.
Both books are worth reading: Rothschild for his insights into the interplay among information, biology and economics; Moore for his insights into how the broader forces at play in an ecosystem relate to the forces at play in today's business environment. I bring these two books up to make another point. At the highest level, these two authors are discussing two distinctly different kinds of systems. Rothschild implicitly views the economy as an undirected or chaotic (in the sense of chaos theory) system. Moore, on the other hand, explicitly recognizes human systems as purpose-driven systems.
These are important distinctions, because both kinds of systems exist in nature. A wild ecosystem is chaos driven. An organism or organization is purpose driven. Agriculture is the attempt to turn a chaotic ecosystem into a purpose-driven ecosystem. The distinction between chaotic and purpose-driven systems is important because it relates to the decision / communication models discussed above. Chaotic systems are related to distributed communication systems rather than distributed decision-making systems.
The four combinations of centralized and distributed control and communication can be mapped in a simple grid:

                          Central                    Distributed
                          Communication              Communication

  Central Control         pyramidal organization     "mushroom" management

  Distributed Control     organism / Intranet        wild ecosystem
Our twentieth century pyramidal organization structure is a good example of a system designed around central decision making and central communications. This has been a dominant and successful structure for most large western organizations, be they business, non-profit or governmental. These organizations are guided by a vision or purpose.
I do not have an example of a successful central-control and distributed-communication organization. Perhaps the anti-Nazi groups that started in Denmark and spread to other parts of Europe during World War II could be considered here. However, I suspect they actually had a fairly high degree of distributed control and decision making in addition to their cell method of communicating and coordinating activities. When pyramidal organizations grow too large, the central-control / distributed-communication model tends to emerge as part of the deformation and communication breakdown process. It is what employees often call the mushroom school of management (they keep us in the dark and feed us lots of fertilizer). These systems are chaotic because even if the central decisions are intended to provide a purpose, the distributed communication provides too much opportunity for error, embellishment or disregard.
The bottom two cells in the diagram depict the two versions of distributed control and decision making. The first, with a central communication and coordination structure, is a model for complex systems working toward a common purpose. The Internet technologies were developed in this type of structure, and this is the model I suggest will become more common with the widespread use of Intranets in organizations. Higher order organisms are another example of this kind of structure. Our nervous systems provide a common communication and coordination pathway, but most of our body activities are managed locally, reacting and adjusting to both the immediate environment and the information from the nervous system.
The bottom right cell is similar, in that each subsystem reacts and adjusts to those around it. The difference is, there is no purposeful mission being coordinated. The system evolves chaotically. This is typical of natural ecosystems and the species from which they are composed. Evolution is driven by the demands of the moment, not a conscious purpose.
I call the process that drives the distributed, decision-making and control systems adaptive innovation. This refers to the ability of each subsystem to react to its local conditions. The result is not only a larger array of responses than can be carried out centrally, but the ability of each subsystem to tailor itself to its particular environment. In purpose-driven systems, the mission becomes the key information component against which adjustments are made.
The issue of purpose-driven versus chaotic systems may seem academic, but this is at the heart of the current economic debates in the United States government. The issues around where and how much the government should do in the areas of regulation and stimulation of the economic subsystems are directly related to conflicting beliefs about whether the economy is (or should be) a chaos-driven or purpose-driven system.
I have come to believe that this also is an important obstacle to some people's acceptance of Intranet development. There are individuals who try to control the publishing infrastructure on the Intranet because the only options they see are the completely distributed (chaos) approach and the completely centralized (pyramid) approach. They do not see how a distributed management approach can be reconciled with purpose-driven results.
The pilot project generates enthusiasm in the participants and provides the experience to help others in the organization understand the power and potential enabled by this technology. As the interest becomes more widespread, the organization enters the second phase. Although not always explicit, the organization embarks on a strategy for rolling out the technology. It may be haphazard, incorporating projects as interest and resources come about. It may begin as an explicit choice of projects. Or, it may focus on an orderly introduction of the technology, policies and skills across the organization leaving the choice and development of specific projects up to each department or organizational sub-group as they are enabled.
It is important to recognize that in the second phase the challenge shifts from testing the technology to developing organizations, roles and skills. Many organizations do not explicitly recognize or confront this shift. As a result, implementations have ranged between two extremes: pervasive self-publishing, with its content chaos, and limited publishing access with documents managed via central control, the electronic version of central decision making. The approach chosen reflects the conflicts between the old paper-based and new Web-based paradigms that are beginning to emerge. This conflict can be seen not only in the organizational response, but in the tools being marketed to support Intranet implementations.
We already have discussed the pressures for a management shift from central decision making to decentralized adaptive innovation. It was noted that this shift is the real driver behind the explosive growth of Intranets, and is taking place for organizational reasons resulting from complexity and scale. The Intranet implementation strategy in a large organization provides an excellent example of the competing management principles involved in the shift.
A central decision-making paradigm approaches the Intranet implementation by determining which departments will participate, and which functions will be developed in each. It then provides the resources to implement each project in the order determined. This is a "we will do it for you" model. The distributed decision-making approach views the Intranet as a utility and concentrates on identifying and meeting the infrastructure requirements and on quickly imparting the knowledge and skills to all the departments so they can implement whatever projects they determine make sense. This is an "enable you to do it for yourself" or adaptive innovation model.
In practice, reaching agreement on the central project plan often takes longer than a well-executed knowledge and skills rollout. Once agreement is reached, a centralized development effort quickly becomes overloaded: another example of the surface-to-volume problem. Our experience has been that in the time it takes to do one project centrally, the decentralized approach generates a project for each department. After the first project, the difference in quantity, quality and responsiveness of content between the two approaches becomes increasingly pronounced. And, since both approaches require implementation of the technical infrastructure, in the central approach departments often get frustrated waiting for their turn, and begin to implement their own projects anyway. This is one route to the content chaos mentioned above.
The shift in communications is one from publisher push to user pull. In an earlier paper I discussed this concept in more detail. Since that paper was written, the abilities of the technology have expanded to include not just information, but also logic. Ultimately the shift in communications may have a more profound effect on our personal attitudes than the shifts in either management or leadership.
Today most of us rely on information push in both our professional and personal lives. It is someone else's responsibility to get the information to us, be it another worker, another department, or the marketing department of the company whose products we buy. The problem is, there is too much information, so our decisions become capricious from an inability to process it all. This causes us to become stressed, always fearing that we have not heard about the latest development that might make our choices obsolete, our career paths unsuccessful or our lives unfulfilled.
The shift to a "user pull" paradigm involves more than a shift in responsibility for finding and retrieving information. It requires a shift in the way we relate to information, personally. Our only salvation may be to become comfortable making decisions based on patterns and trends, determining when and where specific detailed information is required and being able to find it quickly. Conversely, we must wean ourselves from the belief that we somehow need to know every bit of information out there, regardless of its impact on our current decisions or choices. In a fast-changing world, filled with more information than we can assimilate, making a reasonable decision and moving forward is more effective than agonizing over the best decision of the moment.
The paradigm shift in leadership is important because it plays a large role in determining how individuals will react to an Intranet implementation. There seem to be three basic types of resistance to Web adoption. The first is from those who do not understand the organizational and paradigm shift underway. The second is from those who fear losing power (either personal or market) in the shift to the new paradigm. The third is from those who recognize the shift as inevitable, but are trying to slow the progress to gain more time to reposition their products or power base.
In the end, the resistance is likely to be unsuccessful and may in fact be detrimental to those resisting. The underlying organizational requirements are fueling the move to the Web, not the technology per se. General Systems Theory predicts that those organizations that successfully decentralize decision making into self-regulating subsystems will become more stable and capable of managing today's increasingly complex environments than those that struggle to maintain a central decision making model. All three forms of resistance to Intranet implementations are more reactions to the organizational shift than the technology.
The Web provided a major advance in electronic communication by creating a standard for content. This not only allows the content to be used, unaltered, across diverse platforms, it also allows the content to be modified by any standard tool that edits that type of content. As an author, I do not need to be concerned with what specific brand of tool was used to create the content originally. Whatever brand of tool I am using at the moment will allow me to view and edit the content.
From a practical standpoint this leads to two conclusions. First, enforcing a single brand of tool across an entire organization is no longer necessary for compatibility. Second, if the organization is standardizing on a brand of authoring tool for contractual or support reasons, the decision is less monumental than in the past. It is easy to switch to some other brand later. In fact, a long period of transition with a mixed tool set does not cause problems with content sharing or updates.
From a business standpoint it is important to recognize that many vendors of established authoring tools are not thrilled with this transportability of content. From a short term perspective, they need time to transition their products. From a longer term perspective they would like some kind of barrier to insulate their business from constant competition. Fortunately for them, two short term phenomena are protecting them: the need for parallel paper formatted versions for those individuals not yet Web-enabled and the ability to easily incorporate viewers for their proprietary content protocols into web browsers as helper applications or plug-ins.
Using the need for non-Web versions of content as a hook, some vendors of authoring tools provide the ability to generate Web content as one form of output from their proprietary tool. Once generated, the Web version can be modified with any standard Web authoring tool, but the result does not go the other way, so only the Web version is modified. Many other vendors are just providing plug-in viewers for their proprietary formats. Both these approaches reinforce the old paradigm of content dependency on proprietary authoring tools.
When implementing Intranet policies this issue must be addressed for both the short and long term. In general, moving toward Web-standard content provides the most long-term flexibility for incorporating new functionality and integrating diverse content in the event of mergers or partnerships with other organizations. A tool that allows a document maintained in the Web-standard format to be printed in a predefined print template would offer a great strategic alternative to maintaining print-based tools that translate proprietary formats to the Web-based standards.
Web-publishing tools come from a wider variety of starting points than authoring tools. In addition to basic serving of the files, these tools provide two major functions. One is the ability to find content efficiently (structuring, indexing, searching); the other is management of the content (availability, update, integrity). In both functions we see the conflict between the central and distributed models.
Any product that requires Intranet content to go through a single point to be published, be it a single server or a single application, is forcing a central decision-making model on the organization. This is not to say that organizations should not have a comprehensive index of their Intranet content. The issue is the way in which such an index is created and maintained.
In the distributed Web model a searchable index resides at some location on the Intranet (it really doesn't matter where). The content in the index is maintained by an automated Web-Crawler or Spider that searches the Intranet links on a regular basis and creates a current map of key words and links to their occurrence. In this way individuals are not constrained from publishing by central bottlenecks, but a reasonably current consolidated view of all the content is available.
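A minimal sketch of this index-by-crawling model in Python may make it concrete. The URLs and page contents below are hypothetical stand-ins; a real spider would fetch pages over HTTP and follow their links.

```python
# Toy spider/indexer: visit a set of published pages and build an
# inverted index mapping each keyword to the pages where it occurs.
# (Illustrative only: `pages` stands in for real HTTP fetches.)
pages = {
    "http://intranet/hr/index.html": "vacation policy and benefits",
    "http://intranet/eng/index.html": "server deployment policy",
    "http://intranet/sales/index.html": "customer pricing and benefits",
}

def build_index(pages):
    index = {}
    for url, text in pages.items():
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(url)
    return index

index = build_index(pages)

# A search is just a lookup against the consolidated index. Publishers
# never pass through a central bottleneck in order to be found; the
# crawler discovers their content on its next pass.
print(sorted(index["policy"]))
```

The key property is that the index is rebuilt from whatever is published, wherever it is published, which is what keeps the consolidated view current without constraining the publishers.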
This model follows the rules of self-regulating subsystems. The brand of server (hardware or Webware) for individual Web servers in the Intranet is not important. Individuals and groups can self publish without running into procedural or resource bottlenecks. The index and agent applications are independent of the Intranet content. If a different brand of web crawler or indexer is desired, it can easily be substituted.
The publishing tools for content management are less generic than most of the other web alternatives. There are several good tools available today to help authors or publishers manage a complex of related pages, but they tend to be tightly tied to specific authoring tools and web servers. This is mainly due to the various wizards and "bots" that allow non-technical authors to create their own complex functionality. In a distributed decision-making organization, these tools are viewed as distributed aids under the control of the authors or publishers. There is no central command and control manager, nor is there a need for one. Each publisher can use a server with the package she prefers, and the output is standard regardless of the publisher package managing it. The communication and coordination function is handled by the web-crawler/index method described above.
Coordination, or workflow, tools are the newest of the web tools, although the Internet versions of the most commonly used functions in the proprietary workflow packages are actually older. For example, email, threaded conferences, searchable bulletin-boards, and newsgroups are all old Internet functions that actually contributed to the Web standards. In many cases the newer proprietary clones are less functional than the best of their Internet counterparts.
Until recently, it was the ability to track and manage processes that distinguished the workflow packages from the traditional Internet tools. This will be one of the most interesting areas to watch develop in the future. The reason is that Internet and Web implementations tend to coordinate activities via messaging approaches, while the traditional workflow packages are primarily database applications that use common variables in a database to coordinate activities. Both of these approaches have their own set of strengths and weaknesses, and applications can be built that mix the two approaches.
Since sharing common data is the essence of traditional workflow packages, a major issue has been the sharing of databases by geographically distributed groups and by mobile computers. The standard way of handling this has been replication of the databases, making multiple copies, then comparing and copying changed files when the opportunity arises. Initially this was accomplished through the proprietary database of the workflow vendor. More recently, application vendors in this market have begun to offer "synchronization" of client and server databases outside of the workflow vendors' packages. These tend to work with many common SQL databases, and the brand does not have to be the same on the client and the server. These same vendors are moving their workflow packages to the Web infrastructure.
What is most intriguing is the question of how message-passing technology that makes up much of the Intranet tool set might apply to the world of workflow. Are there opportunities to rethink the problems of workflow in the distributed paradigm, or is this aspect of coordinated behavior inherently dependent on centralized control and therefore best handled with centralized technologies? The interest in applying Intranets to this area is recent, and has not yet attracted the number of entrepreneurs that fueled earlier innovation in other areas of the Intranet. The early entrants are primarily building interfaces between existing data sharing models rather than exploring the extension of the distributed messaging paradigm to the fundamental problems that workflow packages must solve. This is perhaps the most promising area for the next wave of leapfrog applications.
One problem that must be solved is that of asynchronous clients. It is somewhat surprising that Web software has not addressed the issue of mobile users, since the basic Internet email technology has long supported mobile users through the caching and queuing of messages. Tools like WebWhacker and WebArranger are beginning to bring these capabilities to Web files, but remain view-only. When Web forms can be saved locally, filled out and queued off-line, then submitted when the user reconnects, a whole new set of message-based workflow tools will become available. The advent of Java and portable logic will encourage this process.
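The queue-and-forward behavior described above can be sketched in a few lines of Python. This is a hypothetical illustration of the pattern borrowed from email queuing, not an API of any of the tools mentioned.

```python
from collections import deque

class OfflineFormQueue:
    """Queue form submissions while disconnected; flush on reconnect.
    `submit` is any callable that posts a completed form to a server."""

    def __init__(self, submit):
        self.submit = submit
        self.pending = deque()
        self.online = False

    def post(self, form):
        if self.online:
            self.submit(form)           # connected: send immediately
        else:
            self.pending.append(form)   # off-line: cache locally, like queued email

    def reconnect(self):
        self.online = True
        while self.pending:             # drain the queue in original order
            self.submit(self.pending.popleft())

# Forms filled out off-line are delivered once the user reconnects.
sent = []
q = OfflineFormQueue(sent.append)
q.post({"expense": 42})
q.post({"expense": 7})
q.reconnect()
print(sent)
```

Once submissions are just queued messages, the same infrastructure that routes email can route workflow, which is the opening for the message-based tools the text anticipates.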
The proprietary workflow infrastructure vendors continue to try to sell central control as their value-add in the Intranet. Meanwhile, tools will emerge to support the distributed-control, central-coordination model. A candid question anyone implementing Intranet workflow tools should ask potential vendors concerns their commitment, plans and timetables for evolving to a distributed-control model. Those vendors who believe they can hold back the tide of distributed decision making (distributed publishing) and pull-versus-push information indefinitely will likely have a short life.
The strength of this new organizational model is its resilience and flexibility. Every part does not have to respond to an attack or opportunity, only the parts directly affected. Likewise, if one strategy fails, the effect on the whole organization is diluted not just by the limited area affected, but by the strength of resources and relationships of the parts not being challenged. There are more responses and more creative minds trying more things than any central organization could ever manage. And, those responding are the most sensitive to and knowledgeable about the problems they are trying to solve. This is the strength of what I call Adaptive Innovation.
Adaptive Innovation is why an implementation approach that focuses on creating the infrastructure and imparting the knowledge and skills to all the departments has the best chance of success.
One last point. Managing Adaptive Innovation as discussed here requires a different approach from that used in traditional central decision-making organizations. It cannot be planned and controlled in the traditional engineering sense. Specific outcomes can be unpredictable. Central responses must be systemic rather than direct. They must strengthen both the individual decision units and the coordination infrastructure rather than dictate processes and outcomes. As Moore says, managing these kinds of organizations is more like gardening than engineering. The study and development of new management practices and metrics will be a fertile new field as we move forward, because many of the old engineering models of management will not be sufficient to nurture the complex organic enterprises of the future.
Comments about this material may be sent to the author at firstname.lastname@example.org