Chapter 5: Security and Availability
Steven L. Telleen, Ph.D.
The biggest concern most executives and managers have about
implementing an Intranet is security. As with other parts of the
Intranet implementation, the toughest issues around security are not
technical, but organizational and strategic. But, before examining
these issues, let’s begin with a look at the nature of security in
general. Security is not limited to the world of electrons and
networks. All of us already know a fair amount about security from the
everyday securing of our physical valuables.
Take, for example, the physical analogy of a valuable painting. If it is very valuable, we might be inclined to store it in a vault. But presumably we bought it not just as an investment, but to enjoy its beauty. Locking it in a vault provides safety, but not enjoyment. We might hang it in our house, being careful not to advertise our ownership, where access is limited to us and our friends. If access to the house is adequately protected, this may be an appropriate compromise between functionality and safety. But if this is a painting we want to share with the world, we might consider hanging it in a museum. The painting is still protected, but more people know about it, and it may be more attractive to a thief.
Each of these scenarios has different security tradeoffs. But one thing they all share is that the protection is never absolute. In any one of these scenarios, if the value to the thief is great enough, a security hole will be sought and likely discovered. The more value at stake, the more effort the thief will put into finding a security hole. Security, whether physical or virtual, is a continually changing balance of value, risk and practicality.
To understand security, we must understand the points of vulnerability. I find it useful to break security into three basic areas: storage, access and transfer. Once again we can use physical security as an example.
Storage protection refers to protection of the valuables when they are not in normal use. If you own a retail outlet, you may put a lot of effort into preventing shoplifting. But no matter how secure you make your showroom floor, if you don't protect your stockroom, someone can come in the back door and steal your goods before they ever get on display. The same is true of your electronic information. Basic computer and network security is required, including securing alternate access points! Since web technology makes information location irrelevant to the logical view, you might consider storing truly sensitive content on a separately managed server with additional protection and special monitoring. We do the equivalent with our physical assets when we put them in a safe, or a safe deposit box.
Once we have secured the basic storage of our information, we need to consider how we allow access, and to whom. Access security has improved dramatically over the past several years, driven in large part by the Internet commerce movement. In addition to basic password protection, systems that require physical tokens, some with challenge/response protocols, have become practical. Many servers and browsers also have the ability to create an encrypted session automatically before the user even provides a password, so the passwords and keys are encrypted before log-in. If you use a Netscape browser, this is what the key in the lower left corner signifies. If the key is broken, the transaction is unencrypted. When the key is whole, the browser and server have negotiated an encryption key, and the transaction is encrypted.
The most recent access control mechanisms are based on the ability of the web server to tailor pages for specific users. Once a user has been authenticated, all interactions are mediated through an object layer that dynamically generates pages showing the user only the information for which she has access privileges. Because all interactions that the user initiates on the server are mediated by their object representation, the only behaviors available are those defined for that object and the information it is authorized to access.
The third area of potential threat is protecting information in transit. As any movie buff knows, from Robin Hood to The Great Train Robbery, valuables in transit make attractive targets for thieves. The same is true in the virtual world. Unless you are using a closed, private network, your information can always be hijacked in transit. And even on a closed network, the information can be hijacked without extraordinary effort. The major way to protect it is to encrypt the transmission or the information on the page. A note is in order here. Encryption/decryption technologies are subject to U.S. government export restrictions based on national security claims. Until an effective international encryption standard is allowed by the U.S. government, international companies will face challenges trying to secure international transmissions, even for intra-company traffic.
In addition to encryption techniques, some organizations have developed methods for strategically breaking content into separate chunks for transmission and presentation. This can be done at two levels. Since the user generally knows what they accessed, a page with sensitive information may be designed without any identifying contextual information on it. For example, benefits information for an individual would not display the individual's name, employee number or any other identifying data. If someone intercepts the message, they have lots of data, but no way to relate it to a specific individual.
The second level is the packet level. When information is sent over the Intranet, the content is broken into small packets, and the packets are reassembled at their destination. The information can be divided in such a way that no single packet contains enough data to reveal the sensitive information. On a busy, diverse Intranet, finding enough of the right packets to reconstruct the message is like finding a needle in a haystack. If each packet is encrypted with a different key, the task becomes harder still.
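The packet-level idea can be sketched as follows. The round-robin split below is an invented illustration of "no single packet carries enough to reveal the message," not an actual network protocol; the function names and sample message are hypothetical:

```python
# Toy sketch: deal the characters of a message round-robin across several
# "packets" so that no single packet contains a contiguous run of the text.
def split_into_packets(message: str, n_packets: int) -> list:
    """Distribute characters round-robin across n_packets chunks."""
    packets = ["" for _ in range(n_packets)]
    for i, ch in enumerate(message):
        packets[i % n_packets] += ch
    return packets

def reassemble(packets: list) -> str:
    """Interleave the chunks back into the original message."""
    total = sum(len(p) for p in packets)
    return "".join(packets[i % len(packets)][i // len(packets)] for i in range(total))

packets = split_into_packets("salary: 92,000", 4)
# Each packet alone is gibberish, but together they restore the message.
assert reassemble(packets) == "salary: 92,000"
```

An interceptor holding only one of the four chunks sees scattered characters with no context; real schemes would additionally encrypt each chunk, as the text notes.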
The techniques above can be combined in different ways to make security better than that for most non-computerized information.

A Tutorial of Security Techniques
Because Intranet security is of such interest to so many people and causes so much discussion, I have included this section, which goes into more detail than some readers may prefer. If you are not interested in this tutorial, feel free to skip to the next section on Developing a Security Strategy.
Encryption is perhaps the single most important technique for network security. It has uses beyond protecting information in transit. Many encryption algorithms can be used with other algorithms to ensure the integrity of the electronic content, that is, to ensure that someone has not changed information in contracts or other legal documents after the parties have reached agreement. Some encryption approaches require special hardware, some use tokens (disks or smart cards), others are strictly software. The intense debate over how and where to implement encryption standards encompasses conflicts over everything from national security to individual privacy.
Encryption uses a mathematical formula to scramble the information. The users of the formula provide a key (a word or string of characters) that the formula uses to generate a unique encryption. There are two types of keys in use today. The first is called a symmetric key, because the same string of characters is used both to encrypt the information and to return the information to normal form. The second is called an asymmetric key, because the string of characters used to encrypt the information will not return it to normal form. A different string of characters is required to decrypt the information.
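The defining property of a symmetric key, that the same key both scrambles and restores the text, can be illustrated with a toy repeating-key XOR. This is a demonstration of the symmetric property only, not a cipher anyone should use; the key and message are invented:

```python
# Toy symmetric "cipher": XOR each byte against a repeating key.
# Applying the same key a second time restores the original bytes.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR data against the key repeated as often as needed."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"quarterly results"
key = b"s3cret"

ciphertext = xor_cipher(plaintext, key)
assert ciphertext != plaintext                      # scrambled
assert xor_cipher(ciphertext, key) == plaintext     # same key decrypts
```

With an asymmetric scheme, by contrast, the decryption step would require a second, different key, which is what makes the public/private split described below possible.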
The number of characters in a key is one factor in determining how easy it is to "guess" the key and decrypt the information. This is at the heart of the U.S. export regulations. Currently, the U.S. only allows encryption algorithms to be exported if they use relatively short keys. Within the U.S., encryption algorithms with long encryption keys can be used, and these are almost impossible to guess.
Asymmetric keys have some very pragmatic uses. For one thing, one of the keys can be made public while the other is held in secret. This way, if someone wants to send you an encrypted message, they can encrypt it using your public key, knowing that only you can decrypt it, because you are the only one who knows your private key. This means you do not need to negotiate and remember unique keys for every person with whom you correspond.
The other use of asymmetric keys is for digital signatures. If you encrypt a message using your private key, only your public key can decrypt the message. If your public key decrypts the message, it proves that your private key was used to encrypt it. Since, presumably, only you know your private key, this acts as a digital signature.
In reality, digital signatures involve a more elaborate process that provides even more protection from tampering than physical signatures. But first we must introduce the concept of message integrity.
Integrity techniques are used to ensure that the information received is the same as the information that was sent. This is important for several reasons. First, an error in transmission may have altered or dropped an important piece of information. Second, someone may have deliberately altered the information even though they could not decrypt it. As with encryption, a mathematical formula is involved. In this case it takes the entire set of information and reduces it to a unique numeric sequence. If one bit in the information changes, the resulting sequence will not be the same. The sequence, called an integrity check sum, is created and sent with the message. On the other end, a new check sum is calculated and compared to the original. If they match, the message is guaranteed to be the same as sent. The check sum is also called a message digest.
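This check-sum idea can be sketched with Python's standard hashlib. SHA-256 here is simply a modern stand-in for whatever digest algorithm the parties agree on, and the contract text is invented:

```python
# A message digest as an integrity check: recompute and compare.
import hashlib

message = b"The party of the first part agrees to pay $10,000."
digest = hashlib.sha256(message).hexdigest()   # sent along with the message

# Receiver recomputes the digest; a match means the message arrived intact.
assert hashlib.sha256(message).hexdigest() == digest

# Changing even one character produces a completely different digest.
tampered = b"The party of the first part agrees to pay $90,000."
assert hashlib.sha256(tampered).hexdigest() != digest
```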
In most instances, encrypting and decrypting entire documents just to provide a digital signature is too resource intensive. Therefore, if the information itself is not sensitive, a digital signature is created by encrypting only the message digest (or check sum) of the document. If the decrypted message digest matches that of the current document, it proves that the person whose public key decrypted the digest "signed" the document, and it ensures that the document being looked at has not been altered since it was signed. This is why, even when an entire document is encrypted, the digital signature still includes the message digest. It ensures the document has not been altered after the signature.
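As a sketch of signing only the digest, the toy example below uses textbook RSA numbers (n = 61 × 53 = 3233, e = 17, d = 2753). These values, the function names, and the document text are illustrative only; real systems use vetted cryptographic libraries with keys hundreds of digits long:

```python
# Toy RSA signature over a message digest (illustration only).
import hashlib

n, e, d = 3233, 17, 2753   # public modulus, public exponent, private exponent

def sign(message: bytes) -> int:
    """Sign only the digest of the message, not the whole message."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)   # "encrypt" the digest with the private key

def verify(message: bytes, signature: int) -> bool:
    """Decrypt the signature with the public key and compare digests."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

doc = b"I agree to the terms above."
sig = sign(doc)
assert verify(doc, sig)
# A tampered document would (with overwhelming probability) fail verification,
# because its digest no longer matches the one recovered from the signature.
```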
A word oftentimes mentioned alongside encryption is "certification." Because you cannot see the person or the premises at the other end of an electronic transaction, in transactions where something of value will change hands we would like to certify that the person or business on the other side is who they say they are. This is the problem certification attempts to solve. In the physical world, references and letters of credit serve the function of certification. In the world of commerce, organizations like the Better Business Bureau provide certification functions. And for financial transactions, companies like TRW and Equifax provide certification services. Inside the enterprise, the corporate picture identification badge, managed by HR, is a form of certification.
Electronic certification uses multiple digital signatures to certify the authenticity of the parties. The originating party might present a certificate with their information on it. Part of that certificate is a digital signature and certificate of the authorizing reference. The digital signature of the reference is based on the check sum of the originator's public information, including their certificate, thus assuring that none of the public information has been altered. The reference's public key ensures that they really "signed" the certificate, their digital signature ensures the integrity of the information, and their certificate contains a reference certifying that they really are who they say they are. The reference's certificate works the same way, with their own reference's certificate and signature included.
The question is, how many levels of certifiers are needed, and who is the ultimate certifying authority? Where certificates are used today, most do not involve more than three levels of certificates. As for the ultimate authority, in the U.S. the Postal Service is attempting to take on that role.
Putting all the pieces together, a message secured at multiple levels can be sent over a network by layering these techniques: the content is reduced to a digest, the digest is signed with the sender's private key, the message is encrypted for the recipient, and certificates accompany the signature to establish the identities of the parties.
As a final note, there are three kinds of threats organizations attempt to stop:

- unauthorized access to information
- unauthorized changes to information
- malicious destruction of information or of processes (including introducing viruses)

The access and authentication processes above are methods for dealing with all three. A technique not mentioned above, since it primarily concerns information entering an Intranet from outside, is packet screening. This is software that looks inside each packet received before it is allowed through the firewall. The packet is screened for information patterns that look like viruses or attempts to break security barriers. Suspicious packets are discarded and kept from entering the Intranet.
Developing a Security Strategy

Security strategies should not be based on current or future products or technology. They need to be based on the functional needs and risks of the organization. The toughest part of developing a security strategy is determining what needs to be secured, and from whom. Security is not free. Every time the security level is tightened, the organization pays in terms of increased complexity of access, increased response time and reduced functionality. As stated above, security is a balance of value, risk and practicality. In developing a strategy, it is important to understand something about the nature of risk.
What struck me when I read my first book on risk was that most of the book was not about cost-benefit analysis but about perception and psychology. This is because risk is not an objective phenomenon. Most of us otherwise rational people can look at the overwhelming statistical evidence on the safety of commercial airline flights versus personal automobiles and still "feel" less at risk behind the wheel of our cars. A number of psychological factors are involved, particularly our feelings about our personal level of control over the circumstances.
When addressing the issue of security risks, it is important to remember that risk consists both of objective data and of subjective feelings based on personal psychology. Since quantitative data may be difficult to determine, individual experience and comfort levels often take the dominant role. During development of a security policy, it is important to continually remind ourselves that reducing the risk of a security breach comes at a cost, and that cost may be a higher level of risk in some other part of the business not being considered, like competitiveness.
In theory, the basic formula for considering these factors is quite simple. One merely needs to determine what the information is worth to those denied access, what it will cost them to obtain it under the current security implementation, and the cost (consequences) to the enterprise if they do manage to obtain it. The objective then becomes to keep the cost of unauthorized access higher than the value to the potential thief, unless the cost of security controls exceeds the cost of unauthorized access. Another way of saying this last point: tighten security only up to the point where the controls cost more than the value to the enterprise of keeping the information protected.
In practice this formula becomes more complicated, because the "value" and "worth" of information are often subjective, and the value to the person initiating the breach may have little to do with the information itself, arising instead from motives like excitement or revenge. It also is nearly impossible to predict all the groups that constitute a potential security risk, and the value to each group is a matter of their personal perception.
As part of the objective evaluation, information should be obtained on the procedures, risk and incidence of unauthorized access to information through current communication mechanisms. For example, many companies maintain security procedures to keep competitors from obtaining specific product or planning information. Yet the competitors usually get the information almost immediately upon disclosure to a customer. The point here is to bring some realism into the security process. You don't want to make it easier for competitors to get the information, but you also don't want to pay the cost of super-securing information that is going to find its way to the competitor through another route anyhow.
Two subjective factors stand out in risk assessment: comfort levels and perceived threats to status. New technologies are suspect because we have no experience with them. We can intellectually step through any number of scenarios, but until we have run a production environment for some time, there is no way to tell what situations may have been missed. The comfort level of the person who will be held accountable if a breach occurs is the single most important factor in the level of security, or in whether the information will be allowed on the new technology at all.
In my first experience with Intranet development, the Human Resources Department at the company created a wonderful system, with access controls tied to the existing corporate information systems. However, the Vice President of Human Resources would not let the system go on line because he feared some hacker would crack the system and access someone else's private information. Even after other companies began announcing the availability of similar HR functionality on their Intranets, he maintained his resistance. Pieces of the system have come on line, one at a time, as this Vice President develops increasing comfort with the risks and rewards. No level of new technology would have changed this situation. Only time, exposure, and a program that provides staged experience can help.
Concern over unauthorized access is not the only factor that can affect subjective risk levels. Some people have achieved (or at least perceive they have achieved) their status and power in the enterprise by limiting access to information. Whether the motivation is control, hiding mistakes or job security, the result will affect the creation and acceptance of a security policy. In many organizations this type of subjective personal risk can create the most complications in developing a pragmatic security strategy. Most of us are aware of studies done in the U.S. federal government on unnecessary classification of information and the ensuing costs. Recent studies in industry show the same patterns.
The most important, time consuming and contentious task in implementing a security policy will be determining what information needs to be protected. No amount of technology can help with this process, because the factors are individual perceptions and comfort levels. The security policy sets the processes and ground rules for determining how information is put into or taken out of restriction categories. Decisions on specific information will be made by the individuals who own the information.
The first step in creating an Intranet security policy is developing a charter. The charter consists of two parts: a goals statement and a responsibility statement. The goals you choose should give the reader an understanding of where your enterprise stands on the balance of value versus cost, functional requirements versus risk, openness versus gatekeeping, and what constitutes the optimum balance for your enterprise. Is your policy one of allowing access to everything unless specifically identified for denial, or is it one of denying access to everything unless specifically identified for access? The two will have very different effects and impacts on the culture, communication and innovation of your organization.
The responsibility section provides a clear statement of how security will be administered within your enterprise, including who (what organization and position) is responsible for maintaining and enforcing the corporate Intranet security strategy and policy, and who reviews and approves that strategy and policy. It also includes a description of how this security function and strategy fit with other security organizations in the enterprise, and what is expected of each organization.
The second step in creating an Intranet security policy is creating a written process that describes how responsibility for Intranet security will be delegated, implemented and enforced. This includes a management section and an individual employee section.
The management section contains a description of responsibilities at each organizational and management level. Security objectives, and how they will be monitored, are an important part of this section. The security objectives should be consistent with the goals in the charter. Where possible, guidelines may be provided that help the manager make decisions consistent with the corporate goals and policies. Standards and security classifications can be particularly useful in helping managers determine when they need to classify (or should not classify) a specific type of information.
A very clear statement of employee responsibilities, expectations and sanctions is required for an effective security policy. However, the statement is not sufficient if employees are not aware of its existence. The statement should be followed by a well defined employee communication program. The program must address not only the initial introduction of expected responsibilities to each employee, it also must include an ongoing awareness and refresher program. This can be done in conjunction with other security awareness programs and with other Intranet standards programs.
The final required part of an Intranet security policy is the definition of an audit program to monitor and manage compliance and risk. Some aspects of the audit program will be discussed later in this chapter. The important point here is that the security policy should explicitly provide for regular audits, both internally and by independent auditors, and define how they will happen and who in the enterprise will be apprised of the results. For servers with sensitive information (and this includes the firewall that protects the Intranet), a program of continuous logging, analysis and monitoring of activity for suspicious patterns is critical. A program of active penetration testing, looking for vulnerabilities, is also a good idea.
Determining who gets access to what information is not a challenge created by Intranets. The issue is as old as information itself. Since information first went up on computers, access control has been an issue. A popular method for documenting and implementing computer security is the use of "Privilege Tables." A privilege table contains a row for each of the unique security classes of information and a column for each user with access to the system. The cells in the resulting table are used to record the access privileges of each user. In each cell, a user either has access or does not.
Privilege tables are popular because they provide a documentation format that can be easily implemented in an automated access control program. When a user logs on, the system authenticates her. When she requests access to particular information, the software looks at the privilege table to determine if she is authorized. This type of system not only simplifies the management of who gets access, it simplifies access for the user. Because of the privilege table, the user only has to be authenticated once, rather than at each access.
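A privilege table of this kind can be sketched as a simple lookup structure. The information classes and user names below are hypothetical, invented only to show the row/column/cell layout:

```python
# A privilege table: rows are information classes, columns are users,
# and each cell records whether that user has access.
PRIVILEGES = {
    "benefits":      {"alice": True,  "bob": False},
    "product_plans": {"alice": True,  "bob": True},
    "payroll":       {"alice": False, "bob": False},
}

def has_access(user: str, info_class: str) -> bool:
    """Look up the cell for this user and information class; deny by default."""
    return PRIVILEGES.get(info_class, {}).get(user, False)

assert has_access("alice", "benefits")
assert not has_access("bob", "benefits")
assert not has_access("carol", "payroll")   # unknown users are denied
```

Defaulting to denial for unknown users or classes mirrors the "deny unless specifically identified for access" policy choice discussed in the charter section above.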
An Intranet does create a complication in that the information usually resides on more than one computer system. As of this writing, there were no widely-trusted, commercially-available systems that allow single-point authentication across an Intranet in an acceptable manner. Some companies have developed home grown systems, and many of the web server vendors are getting ready to deliver such systems. Nevertheless, using a privilege table to develop and document user access privileges provides a valuable process aid.
One of the major issues in developing an Intranet privilege table is determining the granularity of the fields. From a management standpoint it is useful to lump users into specific user classes, and to make decisions based on the user class rather than the individual. Likewise, it is more efficient to lump information into information classes and to make those decisions for the class rather than for each piece of information. In theory, it would only be a matter of matching information classes with user classes, and the job would be done.
In practice, we usually discover the organization has trouble identifying and agreeing on the classes and class definitions, let alone which specific items belong in each class and which user classes have access to which information classes. Perhaps there is an opportunity for software to assist in creating and managing access control at this level. It seems like a natural application for developing classes based on object analysis, then feeding them into an object management system. Individual users and pieces of information become objects that carry their class affiliations with them as attributes, and object rules can be used to determine access privileges. I do not know of any packages that do this today, but you can follow the same logic to help you develop a privilege table manually.
To be effective, development of the privilege table should involve the organizations responsible for creating the information and managing compliance. A method for assisting the process is to define an initial format for creating information and user classes, then have each organization create a set of information and user classes from their perspective. The initial information is consolidated and the process iterated until an acceptable class structure is reached. Putting the latest consolidated information on web pages and working through the issues using a threaded discussion group will help expedite this process.
As part of the table development process, and as part of the decision making process, look at who has access to this information on paper or through existing processes today. While the process may uncover previously "invisible" problem areas, it will give you a fairly realistic view of what your organization considers acceptable risk.
Once developed, a security policy needs regular review and update. The world is ever changing, and the technologies and strategies to breach, and to protect, information will change and coevolve. Three activities should take place at regular intervals in any Intranet implementation where information needs protection: threat identification, active penetration testing and independent audits.
Threat identification takes two forms, theory and practice. The theory is done via vulnerability assessment. This is where the information is assessed for value to the enterprise, value to potential security risks, and consequences of a security breach. This is generally facilitated by assessing generic security risks and weaknesses for probable occurrence, then identifying additional points of threat. Identifying threats requires experience as well as knowledge, which is why external consultants frequently are retained to help with this process.
The practice of threat identification is carried out through continuous monitoring and auditing, often with the use of automated tools. The goal is to identify attempted security breaches while they are being attempted. The physical analogy is an alarm system with sensors and a security guard to follow up. However, good security requires more than technology and patrolling. Otherwise it becomes a game without consequences for attempting a breach. Behavioral elements need to be incorporated to make even attempts at security breaches less attractive. The three primary elements are: strict consequences, immediate follow-up and making highly visible examples of those caught.
Active penetration testing involves sanctioned attempts to actually penetrate the security, particularly at known vulnerability points. There are firms that specialize in this type of testing and reporting. A number of software tools also exist that duplicate known hacking techniques and apply them against the target system. Probably the best known of this class is a package called SATAN, which received a large dose of publicity when first released, because people feared it would be used by hackers to identify weaknesses of targeted systems.
Finally, having your security system routinely reviewed by outsiders is a must. Internal monitoring and audits are important, but an external perspective is invaluable. Many auditing firms also conduct penetration testing as part of the audit and provide consulting on threat identification. Because they see a broader range of experiences and results in the course of their business than any one company is likely to encounter, this consulting can be very valuable. However, be cautious. Good security people are naturally paranoid, and they want you to be paranoid too. Don't let them scare you into implementing levels of security that are not good for your business. Know the risks, and make decisions based on the costs to your business of ameliorating them.
The sections above can be summarized in six points:

- View security pragmatically, based on the consequences and likelihood of a failure.
- Have an official policy and plan.
- Publicize expectations and sanctions.
- Monitor and audit continuously.
- Visibly prosecute violators.
- Use outside experts to provide a broader experience base and remove internal blind spots.
Availability

As Intranets become the primary computing infrastructure in organizations, the issue of availability becomes increasingly important. In a very real sense, the network now is the computer, and all the requirements we had for mainframe/host systems apply to the distributed systems and network interconnections that make up the Intranet. If a part of the system between the user and the information the user needs is not working, the business function in which the user is engaged is in jeopardy.
In many respects the network environment is more complex than the old mainframe/host environments. When something goes wrong in the distributed network world, there are more places the problem could reside. However, as we learn more about architecting for the new requirements, and as our monitoring and intervention tools get better, we also may come to see the complexities of network computing as bringing many inherent strengths. I believe we have barely scratched the surface of understanding and capitalizing on the strengths of distributed computing.
There are two diametrically opposed strategies for managing network availability. One strategy is to simplify the network configuration, to reduce the number of points of potential failure and to make finding the points of failure easier when they do occur. When a system does fail, the problem can be quickly diagnosed and the information brought back on line. This has been the primary strategy of data centers, and hence of most corporate networks today.
The alternative is a strategy of optimized redundancy. In this model, when a failure occurs, the system is complex enough to find an alternate path to the information. Availability is as good as the system's ability to reroute the information pathways. The time to find and fix a specific failure becomes less critical, because the user retains access to the information during the failure. With proper automation, the user would not even know that a pathway was down. The rerouting would happen in real time.
This latter strategy was the basis for developing the Internet standards in the first place. The U.S. Department of Defense needed a way to send computerized information across diverse networks when the state of any given network could change during the course of the transaction. The characteristics of the Internet's built-in self-routing, which some point to as a weakness, also contain the capability to provide strengths in the areas of availability and scalability.
What does not work is an automated self-routing scheme implemented on a simplified network. When a point fails, the system has no alternate resource to provide a route. Yet these simplified networks are the norm for the Intranets in most enterprises today. What is needed is a rethinking of the enterprise network, with attention paid to subnet architecture for availability. This focus has not received much attention in the context of Intranets. As Intranets become the primary computing infrastructure in most enterprises, architecting subnets for availability will take on increasing importance.
Two basic architectures can be used to provide redundant pathways. The first is what I will call the triangle configuration, because a triangle is its simplest form. Think of each of the three points on a triangle as being a node on the network. Each point is connected to the other two points on the triangle. The most direct way to send a message from one point to another is directly, along the path that makes up the side of the triangle. But what if this path is broken? The message can still be sent by routing it through the other point and on to the destination. This isn't quite as efficient as the direct path, but it is certainly more efficient than having no path at all. Of course, a complex set of links such as this can be quite slow.
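The rerouting idea can be sketched with a small graph search. The node names and the breadth-first search below are illustrative, not a real routing protocol:

```python
# Triangle-style rerouting: when the direct link fails, a breadth-first
# search still finds a path through the remaining node.
from collections import deque

def find_path(links, start, goal):
    """Breadth-first search over undirected links; returns None if unreachable."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for link in links:
            if path[-1] in link:
                (nxt,) = link - {path[-1]}   # the other endpoint of the link
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
    return None

triangle = {frozenset(p) for p in [("A", "B"), ("B", "C"), ("A", "C")]}
assert find_path(triangle, "A", "C") == ["A", "C"]        # direct path works

degraded = triangle - {frozenset(("A", "C"))}             # direct link is down
assert find_path(degraded, "A", "C") == ["A", "B", "C"]   # reroute via B
```

With automation performing this search in real time, the user at node A never notices that the A-C link failed, which is exactly the optimized-redundancy behavior described above.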
The other basic architecture I call the concentric ring configuration. Think of two circles, one inside the other. Each is a high speed communication backbone. Between these two rings are the local subnets, or Local Area Networks (in a triangle configuration). They are connected to both rings. This model provides the efficiency of a high speed backbone with the redundancy required for automatic availability.
It is worth mentioning that subnet diversity has the potential to improve not only availability, but also performance. By distributing the information across subnets, the network traffic also can be distributed. When information or routing is centralized, performance suffers much like rush hour traffic in an urban environment: everyone is going to roughly the same location. By distributing the information and routing, load balancing becomes possible, and high traffic areas can be minimized. As subnet diversity becomes more common and better understood, a whole new set of techniques and tools will begin to emerge to assist with subnet optimization and management.
The network optimization strategy discussed above is, in essence, a redundancy strategy. It provides for redundant network pathways. Redundancy strategies are not new to computing. They are applied to many other resources in the network today. In fact, redundant servers, routers and storage devices are quite common today. Most enterprise storage devices available are based on RAID technology (Redundant Arrays of Inexpensive Disks). Likewise, many servers have failover options that provide mirroring and substitution in case of a failure. These technologies can be used to architect highly available Intranets, although this is rarely done today.
Another example of using redundancy to provide availability is the creation of redundant data. Making backup copies of data is a form of redundancy. So is replication, whether for performance improvements or to support failover processing. Data redundancy is also common for those of us who give electronic presentations and computer demonstrations. It is not uncommon for presenters to carry a copy of presentations or key web pages on their hard drive, even when the presentation is intended to be given on-line. If online availability becomes compromised, the presentation can be given from the redundant data on the hard drive, in some cases without the audience even being aware of the problem. In critical situations, like live presentations, using situational redundancy as a personal discipline is a pragmatic solution today. In some instances this same approach is used by placing the data on an alternate server and network rather than the local hard drive.
Viewing the complexity and distributed nature of the Intranet's technical infrastructure as a strength in providing availability and load balancing is not yet widespread. However, it is not difficult to see how the demand for this view is almost inescapable. As tools developed for the remote monitoring and managing of client-server environments begin to incorporate Internet standards and adapt to these new demands, we can expect the rapid gains in this area that we have come to expect in other areas of Intranet technology.
Original Version: October, 1996
Last Updated: November, 1996
Copyright 1996 - Steven L. Telleen