
Chapter 6: Intranet Applications

Intranet Organization: Steven L. Telleen, Ph.D. 

Introduction

Intranets are the silicon of software. Solid-state technology revolutionized the development of electronic devices by moving the design considerations to a higher level of abstraction. The basic circuits became standard and replicable, allowing the solution to be viewed as functions rather than wiring. The communication standards of the Internet, which provide location transparency, and the content standards of the web, which provide client transparency, enable the same leap of abstraction in application design, allowing solutions to be viewed as functions rather than software code. This is not a new concept in software. Object programming has been pursuing this goal for some time. However, the pioneering efforts at applying object programming concepts were limited by an incompatible and complex infrastructure. By contrast, the Intranet provides both the ability and the incentive to move to an object approach. 

Developing Intranet applications is a layered process. First, the technical infrastructure must be created, a process many organizations were completing as this was written. Next, the point solutions that attract users to the technology must be deployed. Then the tools that make Intranet technology manageable and provide more efficient development environments need to mature. Only after the infrastructure and tools are built will this technology begin to show its true impact. This is the point where the radical transformations will begin to occur. In electronics, solid-state technologies first replaced functions that were implemented with earlier generation electronic technologies. Over time, solid-state electronics also began replacing functions that historically were implemented with mechanical solutions, for example, the ignition systems in automobiles. We should expect a similar pattern of evolution from traditional software with the implementation of Intranets. 

I have tried to avoid the FUD (Fear, Uncertainty and Doubt) approach to adopting Intranets in this book, but the natural progression of this technology, as outlined in the previous paragraph, makes it difficult to avoid here. The implications of Intranet applications for the organizations that participate in this evolution are going to be profound. It is difficult to see how those who do not lay this foundation and move forward today will be able to catch up and save themselves from the fate of the Swiss watchmakers, or of companies like Addressograph, when previously mechanical solutions were replaced by solid-state electronics. 

The shift in applications requires the evolution and maturity of key tools. The specific outcomes in many of these tool areas are still in flux and will be more affected by the power struggles and buying behaviors of the markets than by the technology. The advice in the 1960s Bob Dylan song certainly applies here: "...don't speak too soon for the wheel's still in spin, and there's no telling who that it's naming." But the power of the standards also provides clear trends. If those who are the winners now cannot give up the comfort of their proprietary locks and move forward, then a ripe opportunity exists for others to replace them. 

Chapter 2 introduced the concept of the four functional boxes in an Intranet environment: Standard Content, Creation Tools, Discovery Agents and Environment Managers. It now is time to revisit those boxes in more detail, look at the trends for the tools in each, and consider how previous solutions compare. We will begin with standard content, followed by creation tools, discovery agents and, finally, environment managers. We end with environment managers because they provide the springboard for creating the higher level business applications. By that point it should be apparent that Intranet applications are not the monolithic chunks of code we have come to know as software, but a distributed software environment that provides structure where necessary and also allows for user-driven customization. 

The Application Infrastructure

Standard Content

The act of standardizing on content (the output) rather than on the tools that create the content is what makes the Intranet work. If you hear someone agonizing over whether to standardize their company on a particular brand of Intranet browser or server, you can be certain they have not yet attained a fundamental understanding of what is important, and different, about the Intranet. The maintenance of vendor-independent content standards is the one sacred goal that all users must defend in the market if they don't want their benefits to vanish, and standard content is enforced by standard browsers. Any content that starts with "This page best viewed with (fill in the brand name browser)" is a step toward destroying the fundamental fabric of both the Intranet and the World-Wide Web. Standard content and vendor-independent browsers are synonymous, and just as videotape recorders, CD-ROMs and a myriad of other technologies could not develop without strict adherence to their standards, so Intranets (and the WWW) cannot develop without standard content and vendor-independent browsers. 

This standardization of content, along with the transparency of location, provides a significantly different option for supporting applications than the traditional MIS approaches. Not only does all the functionality no longer need to reside on a single general-purpose server, but a case can be made that the single-server approach often is not even desirable. A specialty server is less complex, since it doesn't have to solve the general problem; significantly smaller, since it doesn't have to integrate additional functionality; and more reliable and easier to maintain, because it is significantly simpler. By spreading smaller servers around the Intranet, there also is more opportunity to distribute the traffic on the network. What is important is that the server delivers standard content, not the brand of server hosting a specific function. 

A number of companies are coming out with very inexpensive turnkey web servers (hardware and software) that can be installed and operated by non-technical personnel (see: Cisco, Cobalt Microservers, Compact Devices, and Microtest). One can anticipate other specialty servers that support web-enabled databases, specific functions and vertical web application logic (see Encanto Networks). This is part of the Intranet trend toward modularization of applications into functions, which simplifies the creation, implementation, maintenance and use of each function. We will see both trends continue into other areas of the Intranet: the breaking of larger applications into simplified functions, and domain specialists managing more of their own information and process functions. 

Before finishing this discussion of standard content, it is important to note that standard content no longer refers to text and graphics only. Logic (methods and processes) also is being standardized. The winning standard appears to be logic conveyed in the Java language. However, there are other options for providing users with logical operations in the standard content environment of the Intranet. The logic can be processed on a specific server, and the user interfaces (forms and reports in database terminology) can be provided so they meet the content standards. In fact, this is the most widely used method for providing logic on Intranets today. This also is the way many application vendors are "web-enabling" their existing applications. We will look at more variations on this theme, and some possible trends and options, later in this chapter. 
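
To make this concrete, here is a minimal sketch of the server-side pattern, written in the CGI style in Java: the logic executes on the server, and the browser receives nothing but standard HTML. The "quantity" field and the discount rule are hypothetical, invented purely for illustration.

    // A minimal CGI-style sketch: logic runs on the server and only
    // standard HTML is returned to the browser. The "quantity" field
    // and the discount rule are hypothetical.
    public class DiscountReport {
        public static void main(String[] args) {
            // CGI passes GET form data in the QUERY_STRING variable,
            // e.g. "quantity=12".
            String query = System.getenv("QUERY_STRING");
            int quantity = 0;
            if (query != null && query.startsWith("quantity=")) {
                try {
                    quantity = Integer.parseInt(query.substring(9));
                } catch (NumberFormatException e) { /* leave at 0 */ }
            }
            // The business logic stays on the server...
            double discount = quantity >= 10 ? 0.15 : 0.0;
            // ...and the client sees only standard content.
            System.out.println("Content-type: text/html\n");
            System.out.println("<html><body>");
            System.out.println("<p>Quantity: " + quantity + "</p>");
            System.out.println("<p>Discount: " + (discount * 100) + "%</p>");
            System.out.println("</body></html>");
        }
    }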

Content Creation Tools

Standard content creates an interesting dilemma for vendors of content creation tools. After all, prior to standard content what kept you from using one brand of word processor one day and another brand the next? It was the proprietary content that each word processor created. You could not edit or view the output you created yesterday with a competitor's product. This was great for the winning software vendor, because entire organizations were forced to standardize on a single brand in order to share content. It was not great for users who needed to collaborate to get their work done. An Intranet changes that - if you insist on standard content. 

With standard content, any creation tool (editor) can read and change the standard content. Thus one can switch from one editor capable of handling standard text content (HTML) to another on a whim. In fact, the text in this book has been edited using at least five different editing tools crossing Macintosh and PC platforms. The images have been edited using three different editing tools, and some of these images also have been edited across both Macintosh and PC platforms. Since Java logic is coded in basic text, Java code, too, can be edited across different vendors' tools. While this was not the primary reason for developing the standard content, it is an unavoidable outcome. 

This is a positive development for consumers. To remain competitive, some vendors have begun to shift their R&D dollars from perpetuating proprietary locks to adding features that make content creation tools easier for non-technical users. This drives the enablement process. However, sometimes these tools encourage rather poor habits. For example, one major vendor has a very nice WYSIWYG editor, but when adding images, the default is to create a copy of the image in the same folder as the page. If the author does not explicitly uncheck the box, every time he edits the image link copies of the image proliferate across folders, and the author has just created an electronic version of the update nightmare frequently seen in both the world of paper and of client-server. Other packages provide easy features but in the process take over, and modify, the management of the links. This locks the user into the package for future updates and can even lock the user into publishing on a specific brand of server. 

There are many more examples, but the point for users and those who support them is to be alert for two things: defaults that encourage behaviors that fail to take maximum advantage of the Intranet's transparency of location, and packages that take over and modify the standard content so that in the future it can be edited and managed only by that package. This latter point will be discussed again with environment managers, because many vendors seem to be trying to maintain the lock their creation tools had with proprietary content by shifting the lock to proprietary environment managers. 

In addition to direct creation tools, there are indirect creation tools. These are tools that take existing proprietary content and convert it into standard content. Some tools do this as a discrete function, while others create a dynamic environment between the two formats. In both cases, indirect tools tend to be used where some reason exists to continue managing the information in the proprietary tool set for some time. 

When a discrete function tool does the conversion, the standard content version stands alone. If the original version changes, another discrete translation occurs, and the earlier standard version is replaced with the latest version. If the standard version is changed, there usually is no reverse translation process, other than explicitly making the change in both places. In this way, discrete function conversion tools tend to keep the development and maintenance environment locked in the proprietary toolset. Discrete function tool sets are tied to most of the proprietary document editors that existed before the advent of standard content (e.g. Adobe, Interleaf), or come as general-purpose converters such as Net-It Central. 

When this was first written, Microsoft's Internet Assistant was included in this section. However, with the release of Word-97, Word has become a standard content editor that not only saves edited content as HTML, but also reads, modifies and resaves HTML content created with other editors. For many companies this makes moving the corporate document standard to HTML a matter of policy and changing habits, rather than a wholesale conversion decision. 

Dynamic environments are common when the interaction is with information managed in a database. In these instances, the translation to standard content is made "on demand." This may be done through a forms interface interacting with CGI or Java scripts on the server, or through an automatically generated set of pages that let the user browse the database (e.g. Netscheme). Most of the tools for creating database links to standard content outputs are for SQL and Object Oriented databases, and most database vendors now provide a set of tools for their products. However, some tools also have been developed for MVS and VM environments (see: Simware, Amazon, Polaris, Idea). 

Discovery Agents

This leads us quite naturally to the area of discovery agents. The reason is that DBMSs are a highly structured case of a discovery agent. Traditional document management systems also fall into this category. The purpose of both of these tools (and their approaches) is to discover specific information on demand. The difference between these tools and the modern spider-based discovery agent is the amount of pre-structuring of the information that must take place before the discovery agent will work. And it is the pre-structuring that quickly becomes complex as the size of the database grows. 

A spider-based discovery agent takes advantage of the standard linking and location transparency of a web environment to find information. For this reason spider-based discovery agents are not limited to predetermined structures on specific servers. The early agents have tended to be general purpose: they catalog the entire web. See the Web Robots Database for a list of discovery agents. These general purpose discovery agents often are combined with parts of DBMS or Document Management technology. For example, most general web spiders use an indexing and search tool licensed from a document management vendor (many from Verity, originally a document management company). 
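
The core of a spider is simple enough to sketch in a few lines. The toy below, a hedged illustration rather than any vendor's actual agent, starts from one page, extracts the absolute links it finds with a crude pattern match, and visits them in turn; a real agent would also index the words on each page. The starting URL and the page limit are placeholders.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.Set;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // A toy spider: it starts from one page, follows the absolute links
    // it finds, and prints each page it visits. A real agent would also
    // index the words on each page. The URL and limit are placeholders.
    public class MiniSpider {
        public static void main(String[] args) throws Exception {
            Pattern href = Pattern.compile("href=\"(http[^\"]+)\"");
            Deque<String> frontier = new ArrayDeque<>();
            Set<String> visited = new HashSet<>();
            frontier.add("http://www.example.com/");

            while (!frontier.isEmpty() && visited.size() < 20) {
                String page = frontier.remove();
                if (!visited.add(page)) continue;   // already seen
                System.out.println("Visiting: " + page);

                // Read the page and queue every absolute link on it.
                StringBuilder html = new StringBuilder();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(new URL(page).openStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) html.append(line);
                } catch (Exception e) {
                    continue;   // unreachable page: skip it
                }
                Matcher m = href.matcher(html);
                while (m.find()) frontier.add(m.group(1));
            }
        }
    }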

In addition to general purpose spiders, specialty agents are emerging that provide more focused discovery for their clients. For example, discovery agents can be used to map web and document links and check for breaks in page links (BiggByte, Dr. Watson, Inspector Web, Linklint, lvrfy, Net Mechanic) or to monitor specific pages for content changes (Katipo and WebSeeker). Many specialty agents use database management technology to manage the discrete, highly-structured data about their tasks. However, rather than using general purpose database management software, we find the more progressive agent vendors custom-writing the database for specific agents. This is another example of the modularization discussed above. Over the past two decades we have increased our knowledge of effective database approaches and algorithms to the point where the option of writing a highly targeted function must be weighed against carrying the burden of database generality and full functionality that the agent will never require. The standard environment of the Intranet provides these special purpose agents with the linkage and consistent interface that used to be available only from a rigidly integrated structure within a full function database. 
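
A link-checking agent of the kind just listed can be sketched in the same spirit. This is a minimal, hypothetical version that verifies a fixed list of URLs; the real products also crawl pages to collect the links they verify, and the example URLs here are placeholders.

    import java.net.HttpURLConnection;
    import java.net.URL;

    // A minimal link-checking agent: given a list of URLs, it reports
    // which ones no longer respond successfully. The crawling step that
    // would collect these links is omitted to keep the sketch short.
    public class LinkCheck {
        public static void main(String[] args) throws Exception {
            String[] links = { "http://www.example.com/",
                               "http://www.example.com/missing.html" };
            for (String link : links) {
                HttpURLConnection conn =
                    (HttpURLConnection) new URL(link).openConnection();
                conn.setRequestMethod("HEAD");   // headers only, no body
                int status = conn.getResponseCode();
                if (status >= 400) {
                    System.out.println("BROKEN (" + status + "): " + link);
                } else {
                    System.out.println("ok     (" + status + "): " + link);
                }
                conn.disconnect();
            }
        }
    }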

Discovery agents promise to be one of the richest Intranet tool focus areas over the next few years. Agents can be either client or server based depending on the application. But regardless of where they reside, a discovery agent's power comes from its ability to enable each user to control her own information access and flow. We just now are beginning to scratch the surface of discovery agents. As we gain more experience with their application we should expect them to replace the discovery function in a number of the applications that today rely on databases for their discovery and coordination capabilities, just like solid-state ignitions eventually replaced mechanical rotors and points in engines. 

How soon will this happen? It already has started. SAIC developed and demonstrated a prototype application for the FBI that used AltaVista to spider and index the FBI's Oracle database of case files. Queries that previously took over 20 hours and required formal SQL statements were generated using the AltaVista keyword interface and returned results in 4 seconds. Add to this the interactive "Refine" feature that AltaVista provides, and the feature/function takes on a dimension previously unavailable to the non-technical members of the community. 

It is not hard to imagine creating a common search index across multiple, diverse databases using this same approach. The implications as an alternate way to solve at least some data warehouse requirements are astounding and could revolutionize the discipline. It also could replace the current approach of generating meta-dictionaries for other cross-database application development environments. 

Using a tool like Netscheme, mentioned above, rather than scripts to create a hyperlink schema into the database that the spider could search directly would revolutionize the process even further. These approaches all share a cooperative object model that uses the common content and navigation standards of the web for interoperation. Our ability to shed our old perspectives and assumptions is all that holds us back from seeing even more ways to simplify and enhance our digital environments. 

Environment Managers

Environment managers are a diverse lot. They often are sold as site managers, web managers or web development environments. Many are tightly tied to tools in one or more of the other functional areas. Their purpose is to provide an integrated view of the content and tools so new development can be accomplished efficiently. Examples of environment managers range from FrontPage (Microsoft) and SiteMill (Adobe) to NetObjects, Edge, HAHT, Wallop and Netcarta. It should be noted that most vendors have not viewed their products as environment managers. Many products that act as environment managers originally viewed their function as managing all the information in the company. More recently, the vendors have recognized that it is more realistic to think in terms of managing a related complex of pages. Each complex of pages is viewed as a web in its own right. 

The problem with most environment managers is their tight coupling with their own vendor-specific tools. What is required is an environment manager that allows each user to specify both the tools and the server of choice. For some projects (webs) being managed, it may be more efficient for users to use the tools they find most productive and work on their own servers rather than be forced to use the environment manager's tool set and server. This is possible using discovery agents to feed the environment manager, but few products support the concept today. As mentioned above, proprietary locks work by forcing the coupling of tools in one of the functional areas with those in another. Since Intranet content became standards-based, the only alternative creation tool vendors have for creating a proprietary lock is to couple their tools tightly with their own environment manager. 

Bucking this trend, a few companies have done a good job of separating their functional components and creating an open environment manager. One of the best is Interwoven's TeamSite. Another interesting new type of environment manager is MovieWorks. Available today only on Macintosh platforms, it manages a multimedia environment; in 1998 it is expected to become available for Windows platforms as well. One of the features that makes this product interesting is that it does not attempt to integrate the various editors, but rather the content the editors put out. The output can be viewed using any browser that supports QuickTime. 

Environment managers are discussed last not only because they integrate the other three functions, but because high level applications are environment managers. If you think back on the major functional areas, you should be able to see that this is the case. A high level application has a content base from which it draws, a method for discovering the relevant content, and tools for updating or adding to the content. If the process being supported is highly structured and well understood, then the environment manager is very focused and looks like a standard transaction-based application. If the process is only partially structured, then the environment manager looks more like a framework or more general purpose tool. If the process is unstructured, the environment manager looks like a custom development environment. 

For most of our experience, all four functions have run on a single computer (or complex of computers) and have been integrated because a single vendor dictated the specific vendor brands that could be used. The time has come to rethink our high level applications in the new paradigm, taking into account the location transparency, content standards and new capabilities. This is the future of Intranet applications. The content is no longer static or localized. Applications are the management and manipulation of the overall knowledge base to meet diverse requirements and goals. Specific tools come and go. 

Process Building Blocks

Before looking at how all this comes together in an application, it is worthwhile to review some of the basic process tools used and the functionality they provide. These are listed below with a short description of each. 

Mail

Electronic mail was one of the earliest functions on the Internet, and in many ways is the basis of much of the web technology. Mail and the web share content standards. Mail is an important tool for Intranet applications because it provides the major form of PUSH. 

Threaded discussion

Threaded discussions are an integration of email and web functionality. A threaded discussion organizes what amounts to emails around subjects and discussions. The discussion is accessed using the web browser, and the user generally starts by viewing an index of the contents in her web browser. Generally the index is organized by subject, with the primary statement listed first and the replies underneath organized by date and author. However, some threaded discussion managers allow the user to select a view of the content by date or author as well. Indentation is used to show the relationship of replies to each other. To view the content, the user selects the link. To add a response, a form is included with each message or the user selects a reply button and an in-browser form pops up for entry. This is a PULL medium. Newsgroups can be considered a user-initiated, push version of threaded discussion that uses normal email rather than a browser. 
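
Underneath, the structure is a simple tree. The sketch below is a hypothetical model of a threaded discussion: each message carries its subject, author, date and replies, and the index view is produced by walking the tree with increasing indentation. All of the names and messages are invented.

    import java.util.ArrayList;
    import java.util.List;

    // A sketch of the data structure behind a threaded discussion:
    // each message holds its replies, and the index is produced by
    // walking the tree, indenting each level of reply.
    public class DiscussionIndex {
        static class Message {
            String subject, author, date;
            List<Message> replies = new ArrayList<>();
            Message(String subject, String author, String date) {
                this.subject = subject; this.author = author; this.date = date;
            }
        }

        // Indentation shows the relationship of replies to each other.
        static void printIndex(Message m, int depth) {
            for (int i = 0; i < depth; i++) System.out.print("  ");
            System.out.println(m.subject + " (" + m.author + ", " + m.date + ")");
            for (Message reply : m.replies) printIndex(reply, depth + 1);
        }

        public static void main(String[] args) {
            Message root = new Message("Proposed logo change", "Pat", "11/03");
            Message r1 = new Message("Re: Proposed logo change", "Lee", "11/04");
            root.replies.add(r1);
            r1.replies.add(new Message("Re: Re: Proposed logo change", "Kim", "11/05"));
            printIndex(root, 0);
        }
    }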

Document to threaded discussion

This is an integration of threaded discussion with standard web documents, and is valuable for reviews, negotiation and collaboration. In one implementation, the tool takes a web document and tags each paragraph with an icon. By selecting the icon next to a specific paragraph, the browser brings up the discussion thread for that paragraph. This organizes the comments around the paragraphs, facilitates simultaneous discussion among multiple parties and provides documentation of the issues and resolutions. The threaded-discussion content also can be sorted to provide whole-document, date and reviewer views of the discussion. The SamePage product from WebFlow provides this functionality. However, the current version insists on changing and managing the document links. For documents without links it works fine, but if you have links, be prepared either to be locked into publishing through the WebFlow environment manager, or to change all your links back manually when the collaboration is complete. WebFlow intends to correct this problem in their next version, due out later this year. 

Forms to mail

Forms to mail provides an easy way to collect information that you might otherwise have requested in an email. The form is accessed and filled out using a standard web browser. Often the notification and request of the desired respondents is done using email. The advantage of forms is that a form is clearer than email questions, responding is easier, and the results arrive in a standard format. Forms to mail also can be used on a strictly pull page for feedback and for collecting information about the audience. 
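
A minimal forms-to-mail handler can be sketched in the CGI style: the browser POSTs the form fields, the script decodes them into a uniform message, and a real installation would hand the text to the local mailer. In this hypothetical version the delivery step is replaced by printing the message, and the field names are placeholders.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URLDecoder;

    // Forms-to-mail in the CGI style: decode the POSTed fields into a
    // uniform message. Printing the message stands in for handing it
    // to the mail system; the addressee and fields are placeholders.
    public class FormsToMail {
        public static void main(String[] args) throws Exception {
            // CGI supplies the POST body on standard input.
            int length = Integer.parseInt(System.getenv("CONTENT_LENGTH"));
            char[] buf = new char[length];
            new BufferedReader(new InputStreamReader(System.in)).read(buf, 0, length);

            StringBuilder mail =
                new StringBuilder("To: survey-owner\nSubject: Form response\n\n");
            for (String pair : new String(buf).split("&")) {
                String[] kv = pair.split("=", 2);
                mail.append(URLDecoder.decode(kv[0], "UTF-8")).append(": ")
                    .append(kv.length > 1 ? URLDecoder.decode(kv[1], "UTF-8") : "")
                    .append("\n");
            }
            System.out.println("Content-type: text/plain\n");
            System.out.print(mail);
            System.out.println("\nThank you - your response was recorded.");
        }
    }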

Forms to database

This works exactly like forms to mail, except the results are fed directly into a database manager. A CGI or Java script or an interface provided by the database vendor mediates the results. This can be used to populate or update a database or to query a database depending on the script. 
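
A hedged sketch of the mediating script, using JDBC: the decoded form fields go into a table instead of a mail message. The table name "responses", the column names and the connection URL are all assumptions; any JDBC-accessible database would do.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    // Forms-to-database: the same decoded form fields are inserted
    // into a table. Table, columns and connection URL are hypothetical.
    public class FormsToDatabase {
        public static void main(String[] args) throws Exception {
            String name = "Pat";           // decoded from the form, as above
            String comment = "Works well"; // decoded from the form, as above
            try (Connection db = DriverManager.getConnection(
                    "jdbc:postgresql://dbhost/intranet", "webform", "secret")) {
                PreparedStatement insert = db.prepareStatement(
                    "INSERT INTO responses (name, comment) VALUES (?, ?)");
                insert.setString(1, name);
                insert.setString(2, comment);
                insert.executeUpdate();
            }
        }
    }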

Database to HTML

This is basically a database report formatted in HTML. One use is to return the results from a form-to-database query. However, a more sophisticated use is to generate custom pages for a user based on the user's profile. The profile might be based on personal interests, a history of past accesses, or security clearance. The page generated then contains only the information of interest or only the information that user is allowed to see. 
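
A sketch of the same idea in the other direction: a query result rendered as a standard HTML report. The table, columns and connection details are hypothetical; adding a WHERE clause keyed to a user profile would turn the report into a custom page.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Database-to-HTML: a query result rendered as a standard HTML
    // report. The table and connection details are hypothetical.
    public class DatabaseToHtml {
        public static void main(String[] args) throws Exception {
            System.out.println("Content-type: text/html\n");
            System.out.println("<html><body><h1>Open Trouble Tickets</h1>");
            System.out.println("<table border=\"1\"><tr><th>Ticket</th><th>Status</th></tr>");
            try (Connection db = DriverManager.getConnection(
                     "jdbc:postgresql://dbhost/intranet", "reports", "secret");
                 Statement query = db.createStatement();
                 ResultSet rows = query.executeQuery(
                     "SELECT ticket_id, status FROM tickets WHERE status <> 'closed'")) {
                while (rows.next()) {
                    System.out.println("<tr><td>" + rows.getString("ticket_id")
                        + "</td><td>" + rows.getString("status") + "</td></tr>");
                }
            }
            System.out.println("</table></body></html>");
        }
    }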

Personal Agents

Personal agents were discussed in the previous section. They provide an intermediate option between complete push and complete pull, and might be viewed as user-initiated push. The key to a personal agent is that it is controlled by the user. The user can turn on specifically targeted push, turn it off, and direct what it is looking for. Agents can be used to monitor pull pages so the user knows when updates occur without continual checking (Katipo and WebSeeker). Agents can be used to screen, sort and even delete incoming mail. They can be used to search, monitor and screen the Intranet for specific content. The actual logic and processing may reside either on a server or on the client. For example, Amazon Books provides a server-side personal agent called "Eyes" that screens for books with specified characteristics. A company specializing in agents that provide options between complete pull and complete push is First Floor. 
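
The monitoring variety of personal agent is easy to sketch. The hypothetical version below watches a single pull page: it fetches the page, fingerprints the content, and compares the fingerprint against the one remembered from the last visit. The URL and the file used as the agent's memory are placeholders; it would typically be run from a scheduler.

    import java.io.InputStream;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    // A minimal personal agent that watches one pull page for changes:
    // fetch the page, fingerprint it, and compare with the fingerprint
    // saved on the last visit. URL and memory file are placeholders.
    public class PageWatch {
        public static void main(String[] args) throws Exception {
            String page = "http://www.example.com/status.html";
            Path memory = Paths.get("pagewatch.last");

            // Fingerprint the current content of the page.
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            try (InputStream in = new URL(page).openStream()) {
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) > 0) md5.update(buf, 0, n);
            }
            String current = new java.math.BigInteger(1, md5.digest()).toString(16);

            // Compare with what the agent remembered from last time.
            String previous = Files.exists(memory)
                ? new String(Files.readAllBytes(memory)) : "";
            if (!current.equals(previous)) {
                System.out.println("Changed since last visit: " + page);
                Files.write(memory, current.getBytes());
            }
        }
    }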

Standard Script Libraries

This is a relatively new class of Intranet functions that generally work with specific environment managers. The scripts are provided as "extensions" to the web server software. The environment manager has wizards that walk novice users through the process of developing otherwise complicated functions. For example, by responding to a wizard, a non-technical user is able to create her own threaded discussion or create a form that returns an email or feeds a database. The extension library consists of CGI or Java scripts, and the wizard creates the proper HTML tags to activate the scripts. Today these are all proprietary, but this is a ripe area for standardization. If the individual companies don't standardize on script libraries, some company is bound to create a generic library of functions with a call translator so that pages generated by any popular wizard can run off its script library. 
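
In miniature, such a wizard is just a generator: it takes a few answers from the user and emits the HTML tags that activate a script from the extension library. The sketch below is hypothetical; the script path /cgi-bin/discuss and the field names are invented.

    // A "wizard" in miniature: given the user's answers, emit the HTML
    // tags that hand new postings to a library script. The script path
    // and field names are hypothetical.
    public class DiscussionWizard {
        public static void main(String[] args) {
            String title = "Q3 Budget Questions";   // the user's answer
            System.out.println("<html><body>");
            System.out.println("<h1>" + title + "</h1>");
            // The generated form activates the extension-library script.
            System.out.println("<form method=\"POST\" action=\"/cgi-bin/discuss\">");
            System.out.println("<input type=\"hidden\" name=\"topic\" value=\"" + title + "\">");
            System.out.println("Your comment:<br>");
            System.out.println("<textarea name=\"comment\" rows=\"4\" cols=\"40\"></textarea><br>");
            System.out.println("<input type=\"submit\" value=\"Post\">");
            System.out.println("</form></body></html>");
        }
    }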

Shared White Boards

These are applications that facilitate Intranet collaboration. The same window, or page, appears on more than one computer. All of the participants can both make changes and see the changes as they are made. The use of this technology will increase as organizations gain experience with Intranet collaboration and incorporate it into their work cultures. 

Voice and Video Conferencing

Today these frequently are used in tandem with other Intranet technologies, over a separate infrastructure. The technology already exists to integrate both voice and video conferencing in the Intranet infrastructure (for example see VXTREME). As the bandwidth increases, and the technology matures, this integration will become more common. 

How Applications Work

To start this section, let us take a look at how traditional applications work. Generally the application is separated into three major functions: the user interface, the logic and the database. Additionally, the developer must decide which platforms the application will support. It should be apparent that from a software developer's perspective an Intranet application immediately eliminates some of the choices. The user interface already is defined as a standard, and the user already has obtained the software of her choice to run on her system of choice. This leaves the application developer to concentrate on the usability, logic, and content discovery and creation aspects of the application. 

Traditional end user applications generally have two major limitations. They limit the user's ability to reformulate or restructure the interaction, and they interact with data from a pre-structured, single database. For complex solutions, like Human Resources, Help Desk or Sales Force Automation, the data required by the user often resides in multiple legacy databases, and a large portion is semi-structured, idea-based information, not the discrete quantities and numbers that databases manage so well. 

For these reasons, most application packages in these markets focus on the logic that supports the structured part of the problem and on the user interface. The legacy information may be acknowledged, but is not the main thrust of the marketing and sales effort. The obligatory statement that, of course, the legacy databases can be integrated is made, and the focus returns to the structured process logic. And yet, in the implementation, the knowledge-base problem is the significant barrier to an effective application, not the structured process logic. 

True Intranet applications are still in their infancy, but one point should be clear. An Intranet application starts from the perspective of the knowledge base, not the perspective of the structured application logic. The application logic is one resource in the knowledge base that leverages the environment to perform specific functions. This is a significant difference that is possible only because of the vendor-independent communication and content standards of an Intranet. 

We can expect Intranet applications to evolve over time. The applications we see today tend to be traditional applications, modified to take advantage of the standard user interface. This type of application retains the tight integration of functionality in a proprietary implementation and the dependence on its database for information discovery and display. The next wave of Intranet applications will begin to integrate solutions that extend functionality beyond the structured and semi-structured processes of traditional applications. The use of spider-based discovery agents will begin to feed application logic, and the logic will begin to facilitate the interaction of higher level ideas over the raw manipulation of data that characterizes computer applications today. As our knowledge about discovery agents and their implementation improves, the application logic will begin to unbundle from today's structured database model. This will both increase the need for vendor-independent, object-interface standards and facilitate the unbundling of integrated logic into functional objects. See Corel's Java for Office as an example of early moves in the direction of unbundling logic into functional objects. 

So what will these applications look like? They will be built on the standard content and discovery agent model. They will focus on semi-structured and unstructured parts of the problem. They will facilitate self-customization by the user. Structured logic and processes will be developed and shared by anyone, and many will be single-use "throw-aways." 

The one characteristic of an Intranet that makes it different from all previous computer-based infrastructures is not the wealth of information available, but its ability to make everyone a publisher (and soon a programmer). This fact is often either overlooked or viewed as a problem that has to be managed. In fact, if a medium does not allow everyone to publish, outside the "fill-in-the-blank" structure of previous computer technology, then real communication cannot happen. The organizational interaction is limited to dictates and highly structured feedback. 

How can we move to the much desired "learning organizations" if we hide in structure and cannot embrace some chaos and inefficiency? If we already know how to structure the problem and the information, how can we learn? Learning is the process of discovering structure. If we can only manage the knowledge (the repository of our previous learning) that fits into our currently understood structures, how can we advance our knowledge as we learn radically new things? 

This is not to say there is no place for structured processes or broadcast information. However, the new infrastructure does allow us more options in the way we approach and define problems. The key considerations in Intranet application architecture will be twofold: the desirable level of structuring, and when content needs to be pushed rather than pulled. A new generation of "push" tools is emerging that gives users more control. See Marimba's Castanet product, the Pointcast Network product or FirstFloor's Smart Delivery product as examples. Complex Intranet applications will support a mixture of push and pull possibilities, applied to gain the best overall effectiveness. 

In their book, Decision Support Systems, Peter Keen and Michael Scott-Morton identified three classes of processes, which they called structured, semi-structured and unstructured. Prior to the time of their work, computer applications primarily focused on structured processes. These well-understood, repeatable processes fit the original "batch" mode of the technology quite well. The concept of Decision Support Systems opened the world of computer applications to the support of semi-structured processes. This was enabled by the advent of interactive computing from video terminals, which meant that users could interact with programs while they were running. 

The introduction of Intranet applications extends computer functionality to include support for unstructured processes. The user has the capability to scan (browse) and screen (search) unstructured information to help formulate more specific questions or to stimulate new ideas or approaches. The process building blocks in the previous section can be combined in various ways to support all three process classes. Additionally, the technology allows the application author to mix modes of support within a single solution. 

Building Intranet Applications

What will Intranet applications ultimately look like? Until we have more experience, it may be impossible to tell. However, the following is my attempt at five general rules meant to help sort through some of the issues involved in designing an Intranet application. 

#1: Think beyond traditional applications - think about the whole function.

Because Intranet applications are modular, we can support more complete functions. We do not have to pre-structure all the processes and information. We do not have to build all the pieces at once. Think about the objectives, the end result and alternate ways to get there. 

#2: Develop the process in terms of functional classes and how they relate to each other.

As complex, general-purpose applications begin to deconstruct into more simplified functional logic, we will see how functional classes can be shared across specialty domains. For example, most applications can be defined in terms of the four functional classes listed below. 
  • Tracking (customers, resources, trouble tickets, inventories)
  • Configuring (products, solutions, benefits, financial instruments)
  • Informing
    • Publication (pull)
    • Notification (push)
  • Exchanging
    • Negotiations (ideas)
    • Collaboration (ideas)
    • Transactions (money)
Whether the number of basic classes is more or less than four, and whether these are the "right" four, is not the point. The point is that classes such as these allow us to reuse our content and tools across a broader range of domains than the content and tools embedded in the current "silo" applications. As we gain more experience the most effective classes will emerge. 

#3: For each function within the solution identify whether the process being supported is structured, semi-structured or unstructured.

For each functional class ask what you expect of the user: repetition and standard behavior or thought and innovation? Recognize that some of each may be required in the process of reaching an end result. However, for a specific functional class in a specific process the degree of structure should be identifiable. Determining the degree of structure will help identify the tool or approach for implementing the functional class. When the processes in a functional class are well structured (e.g. money transactions, scheduling, tracking, user profiling) database technology is indicated. When processes are less structured (e.g. requests for others' experience, innovation, creation, negotiation, exception handling) the message-based technologies are more appropriate. 

#4: View each interaction with the user in terms of the appropriate degree of push and pull.

This concept has been presented in detail in previous chapters. When identifying push-pull characteristics for Intranet applications the following guidelines are useful: 
  • Push what is needed now that has a short life
    • One-time Notices and Requests
    • Personal Communication
  • Pull what will be referenced in the future
    • Anything printed for large numbers of employees
    • Recurring communication (e.g. standing meeting minutes)
  • If it is not obvious, consider subscription agents (list servers & personal agents)

#5: Support learning by individuals and the organization.

Unlike traditional applications, Intranet applications should be designed with adaptation and learning in mind. Individuals can learn in several ways. They can learn from the knowledge base, by retrieving what they and others have previously learned and captured. They can learn by recombining and extrapolating what they, or the knowledge base, previously learned. They can learn from experience, through random or systematic trials. For the organization, learning means not only solving the problem, but capturing the experience in the knowledge base. 

It should be obvious that learning can happen in both structured and unstructured environments. Any complex Intranet application should provide an area for unstructured activity, an area where new knowledge can be generated. These areas may take the form of discussion groups or chat rooms for asking questions or brainstorming ideas, or they may be tools that allow users to create what I earlier called throw-away applications. These are applications that are so quick and easy to create that non-technical users can quickly build and publish custom functionality. Examples that exist today are the wizards that let users create a threaded discussion group within minutes to discuss a single issue, or create a form to collect structured feedback. In the future, we will see Java applets that look like spreadsheets, can be configured for specific functions just like today's spreadsheets, and then be included in an HTML page. 

Summary

The final form of Intranet applications is impossible to predict from our current experience base. Early Intranet applications are primarily traditional applications with a web front-end. As we gain more experience, traditional, general-purpose applications will begin to deconstruct into more specific functional classes, enabled by the location transparency and content standards of the Intranet. The use of spider-based discovery agents will become more widespread, and will replace pre-structured databases in at least some applications. Functional classes will be applied across traditional application silos. Intranet applications will expand to support unstructured processes in addition to the traditional structured ones. Applications will no longer be viewed as "programs" but as learning systems that gain knowledge, both structured and unstructured. 



Original Version: February, 1997
Last Updated: November, 1997
Copyright 1997 - Steven L. Telleen, Ph.D.
info@iorg.com

