
Chapter 5: Security and Availability

Intranet Organization: Steven L. Telleen, Ph.D. 

Security

Introduction

The biggest concern most executives and managers have about implementing an Intranet is security. As with other parts of the Intranet implementation, the toughest issues around security are not technical, but organizational and strategic. But, before examining these issues, let’s begin with a look at the nature of security in general. Security is not limited to the world of electrons and networks. All of us already know a fair amount about security from the everyday securing of our physical valuables. 

Take, for example, the physical analogy of a painting. If it is very valuable, we might be inclined to store it in a vault. However, presumably we bought it not just for an investment, but to enjoy its beauty. Locking it in a vault provides safety, but not enjoyment. We might secure our house, be sure not to advertise our ownership and hang it on our wall where access is limited to us and our friends. If access to the house is adequately protected, this may be an appropriate compromise between functionality and safety. But, if this is a painting we want to share with the world, we might consider hanging it in a museum. The painting is still protected, but more people know about it and it may be more attractive to a thief. 

Each of these scenarios has different security challenges. But one thing they all share is that the protection is never absolute. In any one of these scenarios, if the value to the thief is great enough, a security hole will be sought and likely discovered. The more value perceived, the more effort the thief will put into finding a security hole. Security, whether physical or virtual, is a continually changing balance of value, risk and practicality. 

To understand security, we must understand the points of vulnerability. I find it useful to break security into three basic threat areas: storage, access and transfer. Once again we can use physical assets as an example. 

Storage Protection refers to protection of the assets when they are not in normal use. If you own a retail outlet, you may put a lot of effort into preventing shoplifting. But, no matter how secure you make your showroom floor, if you don’t protect your stockroom, someone will come in the back door and steal your goods before they ever get on display. The same is true of your electronic information. Basic computer and file security is required, including securing alternate access points! Since Intranet technology makes information location irrelevant to the logical display, you might consider storing truly sensitive content on a separately secured server with additional protection and special monitoring. We do the equivalent with our physical assets when we put them in a safe, or safe deposit box. 

Once we have secured the basic storage of our valuables, we need to consider how we allow access and to whom. Access security has improved dramatically over the past several years, driven in large part by the Internet commerce movement. In addition to the basic password methods, systems that require physical tokens, some with challenge/response mechanisms, have become practical. Many of the servers and browsers also have the ability to create an encrypted transaction automatically before the user even provides a password, so the passwords and keys are encrypted before log-in. If you use a Netscape browser, this is what the key in the lower left corner signifies. If the key is broken, the transaction is unencrypted. When the key is whole, the browser and server have negotiated an encryption key, and the transaction is encrypted. 

The most recent access control mechanisms are based on the ability of the web server to tailor pages for specific users. Once a user has been authenticated, all interactions are mediated through an object layer that dynamically generates pages showing the user only the choices for which she has access privileges. Because all interactions that the user initiates on the server are mediated by their object representation, the only behaviors available are those defined for that object and the objects it is authorized to access. 
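
To make the idea concrete, here is a minimal sketch in Python (a modern illustration on my part; the user names, privilege data and function names are all invented). After authentication, the page is generated from the user's privileges, so the only links that exist on the page are the ones she is allowed to follow.

    # Illustrative sketch: the server generates a page containing only the
    # choices this authenticated user is entitled to see.
    PRIVILEGES = {
        "jdoe":   {"benefits", "timecard"},
        "asmith": {"benefits", "timecard", "salary_planning"},
    }

    PAGES = {
        "benefits":        "/hr/benefits",
        "timecard":        "/hr/timecard",
        "salary_planning": "/hr/salary-planning",
    }

    def build_menu(user):
        """Return HTML links for only those pages this user may access."""
        allowed = PRIVILEGES.get(user, set())
        return "\n".join(
            f'<a href="{url}">{name}</a>'
            for name, url in sorted(PAGES.items())
            if name in allowed
        )

    print(build_menu("jdoe"))   # the salary_planning link never appears for jdoe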

The third area of potential threat is protecting information in transit. As any movie buff knows, from Robin Hood to The Great Train Robbery, valuables in transit make attractive targets for thieves. The same is true in the virtual world. Unless you are using a closed, secured network, your information can always be hijacked in transit. And, even in a closed network the information can be hijacked without extraordinary precautions. The major way to protect it is to encrypt the transmission or the information on the page. A note is in order here. Encryption/decryption algorithms are subject to U.S. government export restrictions based on national security claims. Until an effective international encryption standard is allowed by the U.S. government, international companies will have challenges using encryption to secure international transmissions, even for intra-company interactions. 

In addition to encryption techniques, some organizations have developed methods for strategically breaking content into anonymous chunks for transmission and presentation. This can be done at two levels. Since the user generally knows what they accessed, a page with sensitive information may be designed without any identifying contextual information on it. For example, benefits information for an individual would not include the individual's name, employee number or any other identifying information. If someone intercepts the message, they have lots of data, but no way to relate it to a specific individual. 

The second level is at the packet level. When information is sent over the Intranet, the content is broken into small packets, and the packets are reassembled at their destination. The information can be divided in such a way that no single packet contains enough data to derive the sensitive information. On a busy, diverse Intranet, finding enough of the right packets to reconstruct the message is like finding a needle in a haystack. If each packet is encrypted with a different key, the task becomes almost impossible. 
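
As a toy illustration (chunk size and message invented), the content can be broken into small, anonymous chunks that carry nothing but a sequence number; any single chunk reveals little, while the receiver that collects them all can reassemble the original. Encrypting each chunk, ideally with its own key, adds the second layer described above.

    # Toy sketch: split sensitive content into anonymous chunks and reassemble.
    def split_into_packets(data, size=8):
        return [(seq, data[i:i + size])
                for seq, i in enumerate(range(0, len(data), size))]

    def reassemble(packets):
        return b"".join(chunk for _, chunk in sorted(packets))

    message = b"Salary adjustment: employee 4711 receives a 12% increase"
    packets = split_into_packets(message)

    print(packets[2])                       # one chunk in isolation means little
    assert reassemble(packets) == message   # the full set restores the original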

The techniques above can be combined in different ways to make security better than that for most non-computerized information. 

The Basics of Security Techniques

Because Intranet security is of such interest to so many people and causes so much discussion, I have included this section that goes into more detail than some readers may prefer. If you are not interested in this tutorial, feel free to skip to the next section on Developing a Security Strategy. 

Encryption is perhaps the single most important technology for network security. It has uses beyond protecting information in transit. Many encryption algorithms can be used with other algorithms to insure the integrity of the electronic content, that is, to insure that someone has not changed information in contracts or other legal documents after the parties have reached agreement. Some encryption approaches require special hardware, some use tokens (disks or smart cards), others are strictly software. The intense debate over how and where to implement encryption standards encompasses conflicts over everything from national security to individual privacy. 

Encryption uses a mathematical formula to scramble the information. The users of the formula provide a key (a word or string of characters) that the formula uses to generate a unique encryption. There are two types of keys in use today. The first is called a symmetric key, because the same string of characters is used both to encrypt the information and to return the information to normal form. The second is called an asymmetric key, because the string of characters used to encrypt the information will not return it to normal form. A different string of characters is required to decrypt the information. 
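
The difference can be seen in a short sketch using the third-party Python cryptography package (a modern stand-in chosen purely for illustration, not a product the text refers to): a Fernet key is symmetric, while an RSA key pair is asymmetric.

    # Symmetric vs. asymmetric encryption, sketched with the "cryptography"
    # package (pip install cryptography). Illustrative only.
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # Symmetric: one key both encrypts and decrypts.
    secret_key = Fernet.generate_key()
    f = Fernet(secret_key)
    token = f.encrypt(b"quarterly results")
    assert f.decrypt(token) == b"quarterly results"

    # Asymmetric: the public key encrypts; only the private key decrypts.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = public_key.encrypt(b"quarterly results", oaep)
    assert private_key.decrypt(ciphertext, oaep) == b"quarterly results"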

The number of characters in a key is one factor in determining how easy it is to "guess" the key and decrypt the information. This is at the heart of the U.S. export regulations. Currently, the U.S. only allows encryption algorithms to be exported if they use relatively short keys. Inside the U.S. encryption algorithms with long encryption keys can be used and these are almost impossible to guess. 
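
A rough sense of the difference in scale (40 bits was a typical export-grade limit of the period; the exact figures varied):

    # Brute-force search space grows exponentially with key length.
    export_grade = 2 ** 40      # about a trillion possible keys
    domestic     = 2 ** 128     # astronomically more
    print("%.2e vs %.2e" % (export_grade, domestic))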

Asymmetric keys have some very pragmatic uses. For one thing, one of the keys can be made public while the other is held in private. This way if someone wants to send you an encrypted message, they can encrypt it using your public key knowing that only you can decrypt it, because you are the only one who knows your private key. This means you do not have to negotiate and remember unique keys for every person with whom you interact. 

The other use of asymmetric keys is for digital signatures. If you encrypt a message using your private key, only your public key will decrypt the message. If your public key decrypts the message it proves that your private key was used to encrypt it. Since, presumably, only you know your private key, this acts as a digital signature. 

In reality digital signatures involve a more complex process that provides even more protection from tampering than physical signatures. But first we must introduce the concept of message integrity. 

Integrity techniques are used to insure that the information received is the same as the information that was sent. This is important for several reasons. First, an error in transmission may have altered or dropped an important piece of information. Second, someone may have maliciously altered the information even though they could not decrypt it. Like encryption, a mathematical formula is involved. In this case it takes the entire set of information and reduces it to a unique numeric sequence. If one bit in the information changes, the resulting sequence will not be the same. The unique sequence, called an integrity check sum, is created and sent with the message. On the other end, a new check sum is calculated and compared to the original. If they match, the message is guaranteed to be the same as sent. The check sum is also called a message digest. 
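
In Python's standard library this is a one-line operation; the sketch below (document text invented) shows how even a one-character change produces a completely different digest.

    # A message digest (check sum) with the standard hashlib module.
    import hashlib

    original = b"Party A agrees to deliver 500 units by June 1."
    digest = hashlib.sha256(original).hexdigest()

    # Altering even one character changes the digest entirely, so the
    # receiver can detect any change made in transit.
    tampered = b"Party A agrees to deliver 900 units by June 1."
    assert hashlib.sha256(tampered).hexdigest() != digest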

In most instances encrypting and decrypting entire documents just to provide a digital signature is too resource intensive. Therefore, if the information itself is not sensitive, a digital signature is used by encrypting only the message digest (or check sum) of the document. If the decrypted message digest matches that of the current document, it insures that the person whose public key decrypted the document "signed" it, and it insures that the document being looked at has not been altered since it was signed. This is why even when an entire document is encrypted the digital signature still includes the message digest. It insures the document has not been altered after the signature. 
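
A sketch of signing and verification with the same cryptography package used above (key names and document text invented). The sign and verify calls hash the document internally, so in effect it is the message digest, not the whole document, that gets encrypted with the private key.

    # Digital signature over a document's digest ("cryptography" package).
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.exceptions import InvalidSignature

    signer_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    signer_public = signer_private.public_key()
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    document = b"Party A agrees to deliver 500 units by June 1."
    signature = signer_private.sign(document, pss, hashes.SHA256())

    # Verification succeeds only if the document is unchanged and the
    # signature came from the matching private key.
    try:
        signer_public.verify(signature, document, pss, hashes.SHA256())
        print("signature valid")
    except InvalidSignature:
        print("document altered or wrong signer")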

A word often mentioned alongside encryption is "certification." Because you cannot see the person or the premises at the other end of an electronic transaction, in transactions where something of value will change hands we would like to certify that the person or company on the other side is who they say they are. This is the problem certification attempts to solve. In the physical world, references and letters of credit serve the function of certification. In the world of commerce, organizations like the Better Business Bureau provide certification functions. And, in financial transactions, companies like TRW and Equifax provide certification services. Inside the enterprise, the corporate picture identification card, managed by HR, is a form of certification. 

Electronic certification uses multiple digital signatures to certify the authenticity of the parties. The originating party might provide a certificate with their information on it. Part of that certificate is a digital signature and certificate of the authorizing reference. The digital signature of the reference is based on the check sum of the public information including their certificate, thus assuring that none of the public information has been altered. The reference's public key insures that they really "signed" the certificate, their digital signature insures the integrity of the information and their certificate contains a reference certifying that they really are who they say they are. Their certificate works the same way, with their reference's certificate and signature included. 

The question is, how many levels of certifiers are required and who is the ultimate certifying authority? Where certificates are used today, most do not involve more than three levels of certificates. As to the ultimate authority, in the U.S. the Postal Service is attempting to take on that role. 

Putting all the pieces together, a message can be secured on multiple levels before it is sent over a network: encryption for confidentiality, a message digest for integrity, and a digital signature backed by certificates for authenticity. 
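
One common way the pieces fit together, sketched in Python with the same library as above (the exact sequence is my illustration, not a prescription): the sender signs the message, encrypts it with a one-time symmetric session key, and wraps that session key with the recipient's public key; the recipient reverses the steps and verifies the signature.

    # Illustrative end-to-end flow combining signature, encryption and key exchange.
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    message = b"Transfer 10,000 widgets to the Dayton warehouse."

    # Sender: sign, encrypt with a one-time session key, wrap the session key.
    signature = sender_key.sign(message, pss, hashes.SHA256())
    session_key = Fernet.generate_key()
    body = Fernet(session_key).encrypt(message + b"||" + signature)
    wrapped_key = recipient_key.public_key().encrypt(session_key, oaep)

    # Recipient: unwrap the session key, decrypt, verify the signature.
    unwrapped = recipient_key.decrypt(wrapped_key, oaep)
    plaintext, _, sig = Fernet(unwrapped).decrypt(body).partition(b"||")
    sender_key.public_key().verify(sig, plaintext, pss, hashes.SHA256())  # raises if altered
    print(plaintext.decode())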

As a final note, there are three kinds of activities organizations attempt to stop: 

  • unauthorized access to information
  • unauthorized changes to information
  • malicious destruction of information or processes (including introducing viruses)
The access and authentication processes above are methods for dealing with all three. A technique not mentioned above, since it is primarily used for information entering an Intranet from outside, is packet screening. This is software that looks inside each packet received before it is allowed inside the firewall. The packet is screened for information patterns that look like viruses or attempts to break security barriers. Suspicious packets are logged and kept from entering the Intranet. 
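
A toy version of the pattern-matching step might look like the sketch below (the signature list is invented; real screening products use far more sophisticated rules):

    # Toy packet screen: flag payloads containing byte patterns associated
    # with known attacks, then log and drop them.
    SUSPICIOUS_PATTERNS = [b"/etc/passwd", b"<script>", b"\x90" * 16]

    def screen_packet(payload):
        """Return True if the packet should be logged and kept out."""
        return any(pattern in payload for pattern in SUSPICIOUS_PATTERNS)

    incoming = [b"GET /products/index.html", b"GET /../../etc/passwd"]
    for payload in incoming:
        print(payload, "blocked" if screen_packet(payload) else "allowed")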

Developing a Security Strategy

Security strategies should not be based on current or future products or technology. They need to be based on the functional needs and risks of the organization. The toughest part of developing a security strategy is determining what needs to be secured, and from whom. Security is not free. Every time the security level is tightened, the organization pays in terms of increased complexity of access, increased response time and reduced communication. As stated above, security is a balance of value, risk and practicality. Before developing a strategy it is important to understand something about the concept of risk. 

What struck me when I read my first book on risk was that most of the book was not about cost-benefit analysis but about perception and psychology. This is because risk is not an objective phenomenon. Some of us otherwise rational people can look at the overwhelming statistical evidence on the safety of commercial airline flights versus personal automobiles and still "feel" less at risk behind the wheel of our cars. A number of psychological factors are involved, particularly our feelings about our personal level of control over the circumstances. 

When addressing the issue of security risks it is helpful to remember that risk consists both of objective data and subjective feelings based on personal psychology. Since quantitative data may be difficult to determine, individual experience and comfort levels often take the dominant role. During development of a security policy, it is important to continually remind ourselves that reducing the risk of a security breach comes at a cost, and that cost may be a higher level of risk in some other part of the organization not being considered, like competitiveness. 

In theory, the basic formula for considering objective factors is quite simple. One merely needs to determine what the information is worth to those denied access, what it will cost them to obtain it under the current security implementation, and the cost (consequences) to the enterprise if they do manage to obtain it. The objective then becomes to keep the cost of unauthorized access higher than the value to the potential thief, until the cost of security controls exceeds the cost of unauthorized access. Another way of saying this last point is: until the value to the thief is higher than the value to the enterprise of keeping the information private. 
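
In numbers, the trade-off might look like the sketch below (every figure is invented purely to make the logic concrete):

    # Invented figures illustrating the balance described above.
    value_to_thief   = 50_000    # what the information is worth to an outsider
    cost_to_breach   = 80_000    # what it costs them to get it past current controls
    loss_if_breached = 200_000   # consequences to the enterprise of a breach
    cost_of_controls = 30_000    # what the current controls cost to operate

    # Deterrence holds while breaking in costs more than the prize is worth...
    deterred = cost_to_breach > value_to_thief
    # ...and the controls remain worthwhile while they cost less than the loss they prevent.
    worthwhile = cost_of_controls < loss_if_breached
    print(deterred, worthwhile)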

In practice this formula becomes more complicated, because the "value" and "worth" of information are often subjective, and the value to the person initiating the breach may lie in something beyond the information itself, like excitement or revenge. It also is nearly impossible to predict all the groups that constitute a potential security risk, and the value to each group is a matter of their personal perception. 

As part of the objective evaluation, information also should be obtained on the procedures, risk and incidence of unauthorized access of information using current communication mechanisms. For example, many companies maintain security procedures to keep competitors from obtaining specific product or planning information. Yet the competitors usually get the information almost immediately upon disclosure to a customer. The point here is to bring some realism into the security process. You don’t want to make it easier for competitors to get the information, but you also don’t want to pay the cost of super-securing information that is going to find its way to the competitor through another route anyhow. 

Two subjective factors stand out in risk assessment: comfort levels and perceived threats to status. New technologies are always viewed with suspicion because we have no experience with them. We can intellectually step through any number of scenarios, but until we have run a production environment for some time, there is no way to tell what situations may have been overlooked. The comfort level of the person who will be held accountable if a breach occurs is the single most important factor in the level of security, or whether the information will be allowed on the new technology at all. 

In my first experience with Intranet development, the Human Resources Department at the company created a wonderful system, complete with access controls tied to the existing corporate information systems. However, the Vice President of Human Resources would not let the system go on line because he feared some hacker would crack the system and access someone else’s private information. Even after other companies began publicizing the availability of similar HR functionality on their Intranets he maintained his resistance. Pieces of the system have come on line, one at a time, as this Vice President develops increasing comfort with the risks and controls. No level of new technology would have changed this situation. Only continued exposure, and a program that provides staged experience can help. 

Concern over unauthorized access is not the only factor that can affect subjective risk levels. Some people have achieved (or at least perceive that they have achieved) their status and power in the enterprise by limiting access to information. Whether the motivation is control, hiding mistakes or just job security, the result will affect the creation and acceptance of a security policy. In many organizations this type of subjective personal risk will create the most complications in developing a pragmatic security policy. Most of us are aware of studies done in the U.S. federal government on unnecessary classification of information and the ensuing costs. Recent studies in industry show the same patterns. 

The most important, time consuming and contentious activity in implementing a security policy will be determining what information needs to be protected. No amount of technology can help with this process, because the factors are individual perceptions and comfort levels. The security policy sets the processes and ground rules for determining how information gets put into or taken out of restriction categories. Decisions on specific information will be made by the individuals who own the information. 

Setting a Security Policy

The first step in creating an Intranet security policy is developing a written charter. The charter consists of two parts: a goals statement and a responsibility statement. 

The goals you choose should give the reader an idea of where your enterprise stands on the balance of value versus cost, business requirements versus risk, openness versus gatekeeping, and what constitutes the optimum balance for your enterprise. Is your policy one of allowing access to everything unless it is specifically identified for denial, or one of denying access to everything unless it is specifically identified for access? These choices will have very different effects on the culture, productivity and innovation of your organization. 

The responsibility section provides a clear statement of how security will be administered within your enterprise including who (what organization and position) is responsible for maintaining and monitoring the corporate Intranet security strategy and policy and who reviews and approves that strategy and policy. It also includes a description of how this function and strategy fit with other security organizations in the enterprise, and what is expected of each organization. 

The second step in creating an Intranet security policy is creating a written process that describes how responsibility for Intranet security will be delegated, implemented and enforced. This includes a management section and an individual employee section. 

The management section contains a description of responsibilities at each organizational and management level. Security objectives and how they will be monitored are an important part of this section. The objectives should be consistent with the goals in the charter. Where appropriate, standards may be provided that help the manager make decisions consistent with the corporate goals and policies. Standards and security classifications can be particularly useful in helping managers determine when they need to classify (or should not classify) a specific type of information. 

A very clear statement of employee responsibilities, expectations and sanctions is required for an effective security implementation. However, the statement is not sufficient if employees are not aware of its existence. The statement should be followed by a well defined employee communication program. The program must address not only the initial introduction of expected responsibilities to each employee, but also an ongoing awareness and refresher program. This can be done in conjunction with other security awareness programs and with other Internet standards programs. 

The final required part of an Intranet security policy is the definition of an audit program to monitor and manage compliance and risk. Some aspects of the audit program will be discussed later in this chapter. The important point here is that the security policy should explicitly call for regular audits, both internally and by independent auditors, and define how they will happen and who in the enterprise will be apprised of the results. For servers with sensitive information (and this includes the firewall that protects the Intranet) a program of continuous logging, analysis and monitoring of activity for suspicious patterns is critical. A program of active intrusion testing, looking for vulnerabilities, is also a good idea. 

Developing Privilege Tables

Determining who gets access to what information is not a challenge created by Intranets. The issue is as old as information itself, and access control has been a concern since information first went up on computers. A popular method for documenting and implementing computer security is the use of "Privilege Tables." A privilege table lists all the unique security classes of information along one axis and all users with access to the system along the other. The cells in the resulting table record the access privileges of each user: in each cell a user either has access or does not. 

Privilege tables are popular because they provide a documentation format that can be easily implemented in an automated access control program. When a user logs on, the system authenticates her. When she requests access to particular information, the software looks at the privilege table to determine if she is authorized. This type of system not only simplifies the management of who gets access, but it simplifies access for the user. Because of the privilege table, the user only has to be authenticated once, rather than at each access. 
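
A minimal sketch of a privilege table and the lookup that consults it (user names, information classes and the authentication step are all invented placeholders):

    # Privilege table: one row per user, one column per information class.
    PRIVILEGE_TABLE = {
        "engineer": {"product_plans": True,  "hr_records": False, "press_releases": True},
        "hr_rep":   {"product_plans": False, "hr_records": True,  "press_releases": True},
    }

    AUTHENTICATED = set()   # users who have logged in during this session

    def log_in(user, password):
        # Real authentication omitted; the point is it happens once per session.
        AUTHENTICATED.add(user)

    def may_access(user, info_class):
        return user in AUTHENTICATED and PRIVILEGE_TABLE.get(user, {}).get(info_class, False)

    log_in("engineer", "secret")
    print(may_access("engineer", "product_plans"))   # True
    print(may_access("engineer", "hr_records"))      # False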

An Intranet does create a complication in that Intranet information usually resides on more than one computer system. As of this writing, there are no widely-trusted, commercially-available systems that allow single-point authentication across an Intranet in an acceptable way. Some companies have developed home-grown systems, and many of the web server vendors are getting ready to deliver these systems. Nevertheless, using a privilege table to develop and document user access privileges provides a valuable process aid. 

One of the major issues in developing an enterprise privilege table is determining the granularity of fields. From a process standpoint it is useful to lump users into specific user classes, and make decisions based on the user class rather than the individual. Likewise, it is more efficient to lump information into information classes and again make those decisions for the class rather than each piece of information. In theory, it would only be a matter of matching information classes with user classes, and the job would be done. 

In practice, we usually discover the organization has trouble identifying and agreeing on the classes and class definitions, let alone which specific items belong in each class and which user classes get access to which information classes. Perhaps there is an opportunity for software to assist in creating and managing access control at this level. It seems like a natural application for developing classes based on multivariate analysis then feeding them into an object management system. Individual users and information become objects that carry their class affiliations with them as attributes, and object rules can be used to determine access privileges. I do not know of any packages that do this today, but you can follow this logic to help you develop a privilege table manually. 
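
Following that logic by hand, a sketch might look like this: users and information items are objects that carry their class affiliations as attributes, and a small rule table grants access by class rather than by individual (all class names are invented).

    # Users and information items carry their class affiliations; access is
    # decided by matching user classes to information classes.
    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        user_class: str        # e.g. "engineering", "human_resources"

    @dataclass
    class InfoItem:
        title: str
        info_class: str        # e.g. "design_docs", "benefits_data"

    ACCESS_RULES = {
        "engineering":     {"design_docs", "public"},
        "human_resources": {"benefits_data", "public"},
    }

    def may_access(user, item):
        return item.info_class in ACCESS_RULES.get(user.user_class, set())

    print(may_access(User("jdoe", "engineering"), InfoItem("Router spec", "design_docs")))     # True
    print(may_access(User("jdoe", "engineering"), InfoItem("401k summary", "benefits_data")))  # False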

To be effective, development of the privilege table should involve the organizations responsible for creating the information and managing compliance. A method for assisting the process is to develop an initial format for creating information and user classes then have each organization create a set of information and user classes from their perspective. The initial information is consolidated and the process iterated until an acceptable class structure is reached. Putting the latest consolidated information on web pages and iterating through the issues using a threaded discussion group will help expedite this process. 

As part of the table development process, and part of the decision making process, look at who has access to this information on paper or through existing processes today. While the process may uncover previously "invisible" problem areas, it will give you a fairly accurate view of what your organization considers acceptable risk. 

Security is an Evolving Game

Once developed, a security policy needs regular review and update. The environment is ever changing and the technologies and strategies to breach, and to protect, information will change and coevolve. Three activities should take place at regular intervals in any Intranet implementation where information needs protection: Threat Identification, Active Penetration Testing and External Audits. 

Threat identification takes two forms, theory and practice. The theory is done via vulnerability assessment. This is where the information is assessed for value to the enterprise, value to potential security risks and consequences of a security breach. This is generally facilitated by assessing generic security risks and weaknesses for probable occurrence, then identifying additional points of threat. Identifying threats requires experience as well as knowledge, which is why external consultants frequently are retained to help with this process. 

The practice of threat identification is accomplished through continuous monitoring and auditing, often with the use of automated tools. The goal is to identify attempted security breaches while they are being attempted. The physical analogy is an alarm system with sensors and a security guard to follow up. However, good security requires more than technology and patrolling. Otherwise it becomes a game without consequences for attempting a breach. Behavioral elements need to be incorporated to make even attempts at security breaches less attractive. The three primary elements are: strict consequences, immediate follow-up and making highly visible examples of those caught. 
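
One small slice of what such monitoring can do, sketched in Python (log format and threshold invented): count failed log-ins per source address and raise an alert when a source crosses a threshold.

    # Toy monitor: flag any source address with too many failed log-ins.
    from collections import Counter

    FAILED_LOGIN_THRESHOLD = 3

    log_lines = [
        "10.1.2.3 FAILED user=admin",
        "10.1.2.3 FAILED user=root",
        "10.7.7.7 OK     user=jdoe",
        "10.1.2.3 FAILED user=admin",
        "10.1.2.3 FAILED user=guest",
    ]

    failures = Counter(line.split()[0] for line in log_lines if "FAILED" in line)
    for source, count in failures.items():
        if count >= FAILED_LOGIN_THRESHOLD:
            print("ALERT: %d failed log-ins from %s" % (count, source))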

Active penetration testing involves sanctioned attempts to actually penetrate the security, particularly at known vulnerability points. There are firms that specialize in this type of testing and reporting. A number of software tools also exist that duplicate known hacking techniques and apply them against the target system. Probably the best known of this class is a package called SATAN, which received a large dose of publicity when first released, because people feared it would be used by hackers to identify weaknesses of targeted systems. 
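
The simplest building block such tools share is probing a host for listening services. A deliberately tame sketch using only the standard library is shown below; run anything like it only against systems you are explicitly authorized to test.

    # Minimal port probe (standard library only). For authorized testing only.
    import socket

    def open_ports(host, ports, timeout=0.5):
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                    found.append(port)
        return found

    print(open_ports("127.0.0.1", [22, 25, 80, 443, 8080]))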

Finally, having your security system routinely audited by outsiders is a must. Internal monitoring and audits are important, but an external perspective is invaluable. Many auditing firms also conduct penetration testing as part of the audit and provide consulting on threat identification. Because they see a broader range of experiences and results in the course of their business than any one company is likely to encounter, this consulting can be very valuable. However, be cautious. Good security people are naturally paranoid, and they want you to be paranoid too. Don’t let them scare you into levels of security that are not good for your business. Know the risks, then make decisions based on the costs to your business of ameliorating those risks. 

Security Summary

The sections above can be summarized in six points: 
  1. View security pragmatically, based on the consequences and likelihood of a failure.
  2. Have an official policy and plan.
  3. Publicize expectations and sanctions.
  4. Monitor and audit continuously.
  5. Visibly prosecute violators.
  6. Use outside experts to provide a broader experience base and remove blinders.

Availability

As Intranets become the primary computing infrastructure in organizations, the issue of availability becomes increasingly important. In a very real sense, the network now is the computer, and all the requirements we had for single image systems apply to the distributed systems and network interconnections that make up the Intranet. If a part of the system between the user and the information the user needs is not working, the business function in which the user is engaged is in jeopardy. 

In many respects the network environment is more complex than the old mainframe/host environments. When something goes wrong in the distributed network world there are more places the problem could reside. However, as we learn more about architecting for the new requirements, and as our monitoring and intervention tools get better, we also may see the complexities of network computing as bringing many inherent strengths. I believe we have barely scratched the surface of understanding and capitalizing on the strengths of distributed computing. 

Network Strategies

There are two diametrically opposed strategies for managing network availability. One strategy is to simplify the network configuration to reduce the number of points of potential failure and to make finding points of failure easier when they do occur. When a system does fail, the problem can be quickly diagnosed and the information brought back on line. This has been the primary strategy of data centers, and hence most corporate networks today. 

The alternative is a strategy of optimized complexity. In this model, when a failure occurs, the system is complex enough to provide an alternate path to the information. Availability is as good as the system’s ability to reroute the information pathways. The time to find and fix a specific failure becomes less critical, because the user retains access to the information during the failure. With proper automation, the user would not even know that a pathway was down. The rerouting would happen in real time. 

This latter strategy was the basis for developing the Internet standards in the first place. The U.S. Department of Defense needed a way to send computerized information across diverse networks when the status of any given network could change during the course of the transaction. The characteristics of the Internet's built-in self-routing, which some point to as a weakness, also contain the capability to provide strengths in the areas of availability and scalability. 

What does not work is an automated self-routing approach implemented on a simplified network. When a point fails, the system has no alternate resource to provide a route. Yet, these simplified networks are the norm for the Intranets in most enterprises today. What is needed is a rethinking of the enterprise network, with attention paid to subnet optimization for availability. This focus has not received much attention in the context of Intranets. As Intranets become the primary computing infrastructure for most enterprises, architecting subnets for availability will take on increasing importance. 

Two basic architectures can be used to provide alternate pathways. The first is what I will call the triangle configuration, because a triangle is its simplest form. Think of each of the three points on a triangle as being a node on the network. Each point is connected to the other two points on the triangle. The most direct way to send a message from one point to another is directly, along the path that makes up the side of the triangle. But what if this path is broken? The message can still be sent by routing it through the other point and on to the destination. This isn’t quite as efficient as the direct path, but is certainly more efficient than having no path at all. Of course a complex set of links such as this can be quite slow. 
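
The rerouting idea in miniature (node names and links invented): if the direct link fails, a simple breadth-first search still finds the way around the triangle.

    # Triangle network: each node connects to the other two. If the direct
    # A-C link fails, traffic still reaches C by way of B.
    from collections import deque

    def find_route(links, start, goal):
        """Breadth-first search for any path from start to goal."""
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in links.get(path[-1], set()) - seen:
                seen.add(nxt)
                queue.append(path + [nxt])
        return None

    links = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}
    print(find_route(links, "A", "C"))                  # direct: ['A', 'C']

    links["A"].discard("C"); links["C"].discard("A")    # the A-C link goes down
    print(find_route(links, "A", "C"))                  # rerouted: ['A', 'B', 'C']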

The other basic architecture I call the concentric ring configuration. Think of two circles, one inside the other. Each is a high speed communication backbone. Between these two rings are the local clients or the Local Area Networks (in a triangle configuration). They are connected to both rings. This model provides the efficiency of a high speed backbone, with the redundancy required for automatic availability. 

It is worth mentioning that subnet diversity has the potential to not only improve availability, but also performance. By distributing the information across subnets, the network traffic also can be distributed. When information or routing is centralized, performance suffers much like rush hour traffic in an urban environment. Everyone is going to roughly the same location. By distributing the information and routing, load balancing becomes possible, and high traffic areas can be minimized. As subnet optimization becomes more common and better understood, a whole new set of methodologies and tools will begin to emerge to assist with subnet optimization and load balancing. 

Redundancy Strategies

The network optimization strategy discussed above is, in essence, a redundancy strategy. It provides for redundant network pathways. Redundancy strategies are not new to computing and are already applied to other resources in the network: redundant servers, routers and storage devices are quite common today. Most enterprise storage devices available are based on RAID technology (Redundant Arrays of Inexpensive Disks). Likewise, many servers have failover options that provide mirroring and substitution in case of a failure. These technologies can be used to architect highly available Intranets, although this is rarely done today. 

Another example of using redundancy to provide availability is the creation of redundant data. Making backup copies of data is a form of redundancy. So is replication for either performance improvements or to support failover processing. Data redundancy is also common for those of us who give electronic presentations and computer demonstrations. It is not uncommon for presenters to carry a copy of presentations or key demonstration web pages on their hard drive, even when the presentation is intended to be given on-line. If online availability becomes compromised, the presentation can be given from the redundant data on the hard drive, in some circumstances without the audience even being aware of the problem. In critical situations, like live presentations, using situational redundancy as a personal discipline is a pragmatic solution today. In some instances this same approach is used by placing the data on an alternate server and network rather than the presenter’s hard drive. 
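
A personal-discipline version of this fallback, sketched with the Python standard library (the URL and file name are placeholders): try the live copy first, and fall back to the local redundant copy if the network or server is unavailable.

    # Try the on-line copy; fall back to the local redundant copy on failure.
    from urllib.request import urlopen
    from urllib.error import URLError

    def load_slide(url="http://intranet.example.com/demo/slide1.html",
                   local_copy="slide1.html"):
        try:
            with urlopen(url, timeout=5) as response:
                return response.read()
        except (URLError, OSError):
            with open(local_copy, "rb") as backup:
                return backup.read()

    page = load_slide()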

Viewing the complexity and distributed nature of an Intranet’s technical infrastructure as a strength in providing availability and load balancing is not widespread. However, it is not difficult to see how the demand for this view is almost inescapable. As tools developed for the remote monitoring and managing of client server environments begin to incorporate Internet standards and adapt to these new demands, we can expect the rapid gains in this area that we have come to expect in other areas of the Intranet. 
 

Original Version: October, 1996
Last Updated: November, 1996
Copyright 1996 - Steven L. Telleen, Ph.D.