
Using I-net Agents
Creating Individual Views From Unstructured Content

Bart Meltzer and Steve Telleen
iorg.com


The world of human communication and information has long been too voluminous and complex for any one individual to monitor and track. To cope with the information overload, we have learned to create organizations that divide and delegate responsibilities for the creation, monitoring, management, organization and presentation of information. The bureaucratic organizational structures of most modern corporations and governments are the instantiation of how these information responsibilities have been delegated so that information flows efficiently into the creation of specific goods and services. We use the term bureaucratic to describe the rigid structures and processes that allow the delegated information to be efficiently and consistently reassembled to meet corporate goals.

Today, these traditional ways of dealing with information are again under stress. Organizations have grown too large, the environment too complex and the information too ubiquitous for the carefully pre-structured relationships of the past to keep pace. As executives and managers we are told to decentralize, distribute and empower all aspects of our business. While the benefits in flexibility and market responsiveness often are obvious, the process also amplifies the information explosion. To make matters worse, the organizational structures eliminated to create the business benefits are the same structures we relied on in the past to organize and share information. So how do we deal with what looks like information chaos? Fortunately, two complementary technologies have emerged that allow us to coordinate, communicate and even organize information without rigid, one-size-fits-all structures. The first is Internet/Web technology, which we will refer to as I-net technology; the second is the evolution of software agents.

Together, these technologies are the new-age building blocks for robust information architectures, designed to help information consumers find what they are looking for in the way that they want to find it. The web and software agents make it possible to build sophisticated, well-performing information brokers that deliver content from multiple sources to each individual, in the individual's specific context and under the individual's own control. This is more than a search engine. It is the ability to provide meaningful information to each individual based on her needs, and a way to improve the information supplier/consumer relationship by giving the information consumer more precise control over the interaction.

In our context, an agent is, simply put, one who takes action at the instigation of another. The concept of agents, in this sense, is not new or restricted to software. According to the Oxford English Dictionary, this meaning of the term in the English language dates at least as far back as 1593. In one of the authors' previous writings, the concept of an information broker is clearly an agent function. What is new is the ability to create extremely powerful, flexible and individualized software agents because of the I-net infrastructure. These software agents can be highly effective tools for individualizing the organization and management of distributed information.

The world of software agents remains a poorly understood yet extremely urgent area of I-net activity. It is urgent because a plethora of software products on the market today act as software agents, and yet there seems to be little understanding among software vendors or consumers of what an I-net software agent is, or could be. The much-hyped channel technology (often mislabeled as Push) is but one example of the confusion that results from not recognizing that these software products are agents that can be understood within a larger conceptual framework. The remainder of this article presents some key aspects of the software agent framework.

From the agent employer's perspective, the agent is a service whose location should be as transparent as the rest of the content on the I-net. Whether a specific agent's logic resides on the employer's local system or is a service on a remote system becomes an architectural decision based on system capabilities and load-balancing considerations, rather than on the agent service itself. Some basic collection services are best aggregated at logical concentration points to ease network traffic. For example, a primitive form of agent technology is the spider-based index of an intranet. There is no technical reason why each individual could not have a spider-based agent on her local system to search the intranet either generally or specifically. In practice, however, the network infrastructure would quickly become overwhelmed. It is architecturally more efficient to have the intranet searched regularly by a single spider-based agent that catalogs the content for use by other agents (say, a traditional search engine) employed by each individual. Again, the logic for a general-purpose search engine may be more appropriately placed as a shared service on a remote system, while more specialized and sophisticated agents may access the catalog from a remote system but themselves reside on a local system. Many web-based merchants already provide their patrons with site-specific agent services that tailor the content on their site to each individual's interests, or that track changes in specific content identified by the individual, displaying those changes on the individual's next visit or sending an email notification when the change occurs.
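
This division of labor is straightforward to sketch in software. The example below is a minimal Python sketch, not from the original paper: the seed URL, catalog structure and keyword search are illustrative assumptions. It shows a single spider-based scanning agent building a shared catalog that lighter-weight agents can then query without re-crawling the network.

```python
# A minimal sketch (not from the original paper) of a single spider-based
# scanning agent that builds a shared catalog of intranet pages, plus a
# simple screening pass over that catalog. Seed URL, catalog structure and
# keyword search are illustrative assumptions.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkAndTextParser(HTMLParser):
    """Collects hyperlinks and visible text from one HTML page."""

    def __init__(self):
        super().__init__()
        self.links = []
        self.text_parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.text_parts.append(data)


def build_catalog(seed_url, max_pages=50):
    """Crawl outward from seed_url; return {url: page_text} for other agents to use."""
    catalog, queue, seen = {}, [seed_url], set()
    while queue and len(catalog) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue  # unreachable or non-HTTP links are simply skipped
        parser = LinkAndTextParser()
        parser.feed(html)
        catalog[url] = " ".join(parser.text_parts)
        queue.extend(urljoin(url, link) for link in parser.links)
    return catalog


def search_catalog(catalog, keyword):
    """A simple screening agent: match pages in the shared catalog against a pattern."""
    return [url for url, text in catalog.items() if keyword.lower() in text.lower()]
```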

As we begin to develop more sophisticated information agents, a general classification of different types of agents becomes useful. At the highest level, there are two basic functions that information agents perform: sensory functions and action functions. Sensory agents discover, collect and organize information from the system at large. The spider-based agents and search engines discussed above are examples of sensory agents. Action agents cause changes in other parts of the system at large. Computer viruses are an example of action agents. This distinction between sensing and acting appears to have a basis in nature. Our own nervous systems are divided into two separate systems, one for collecting information, the other for acting on it. In biology, the terms afferent and efferent are used to differentiate these two nervous systems.
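
The sensory/action split can be captured directly in an agent framework's types. The sketch below is a hypothetical Python rendering of that split; the class and method names are our own assumptions, not part of this paper. Sensory agents only observe, action agents only act.

```python
# A hypothetical rendering of the sensory/action split as two base classes.
# The class and method names are illustrative assumptions, not from the paper.
from abc import ABC, abstractmethod


class SensoryAgent(ABC):
    """Discovers, collects and organizes information; never changes the system."""

    @abstractmethod
    def observe(self, source):
        """Return information gathered from the source."""


class ActionAgent(ABC):
    """Causes changes in other parts of the system, e.g. content or processes."""

    @abstractmethod
    def act(self, target):
        """Apply a change to the target."""
```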

Sensory agents can be further distinguished as scanning agents, screening agents or tracking agents. Scanning agents are the most general, in that they collect and organize information without focusing on a specific goal. Spiders and other general cataloging services are examples of scanning agents; general "browsing" of the web is a scanning activity. Screening agents essentially perform pattern-matching services: the agent screens the information and delivers only information from sources that match the requested pattern. Search engines are one example of a screening agent. Dynamic pages built from patron-provided profiles are another. Tracking agents are even more specific. The employer of the agent identifies a specific target to be tracked and specifies the changes or states in which she is interested. The agent then monitors the target on a regular basis and reports back only when the specified changes occur. Agents that allow an information consumer to target a specific page, or content on a page, and then notify the consumer when a change occurs are tracking agents. Agents that track each visitor's activities and report back only when encountering specific access patterns (for either security or marketing reasons) are another example.
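
Of the three, a tracking agent is the simplest to illustrate: it watches one specified target and reports back only when the specified change occurs. The following minimal Python sketch, in which the polling interval and hash-based change test are assumptions for illustration, monitors a single page and notifies its employer whenever the content changes.

```python
# A minimal sketch of a tracking agent: it watches one specified target and
# reports back only when the content changes. The polling interval and the
# hash-based change test are illustrative assumptions.
import hashlib
import time
from urllib.request import urlopen


def track_page(url, notify, interval_seconds=3600):
    """Poll url periodically and call notify(url) whenever its content changes."""
    last_digest = None
    while True:  # runs indefinitely, like a background agent
        content = urlopen(url, timeout=5).read()
        digest = hashlib.sha256(content).hexdigest()
        if last_digest is not None and digest != last_digest:
            notify(url)  # report back only when the tracked state changes
        last_digest = digest
        time.sleep(interval_seconds)


# Example employer of the agent (notification here is just a printed message):
# track_page("http://intranet.example.com/pricing.html",
#            notify=lambda changed_url: print(changed_url + " has changed"))
```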

We have identified two types of action agents: those that modify content and those that activate or deactivate processes. Action agents may operate by knowing how to activate (and repurpose) existing processes, or they may require the target to have specific logic installed that they can manipulate. A computer virus is an example of the former, while cookies, software-update agents and system-management agents generally are examples of the latter. Some uses of cookies shade into the former case, when sites stretch the cookie's original intent and collect other information about the visitor without the visitor's consent. Publishing agents can be either. Problems arise when desirable action agents require logic in the target that is not part of a community-owned standard. If one views channel technologies as action agents, the current vendor battles can easily be understood as the consequence of a lack of community-owned standards for the logic in the targets.
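
The second kind of action agent, one that requires specific logic installed in the target, can be sketched as follows. In this hypothetical Python example, where all class and method names are our own assumptions, a software-update agent can only act on targets that have already installed the expected hook, which is exactly where the need for community-owned standards arises.

```python
# A hypothetical sketch of an action agent that requires logic installed in the
# target: the update agent can only act on targets exposing the agreed-upon
# apply_update hook. All names are illustrative assumptions.
class ManagedTarget:
    """A system that has installed the logic an update agent expects to find."""

    def __init__(self, name, version):
        self.name = name
        self.version = version

    def apply_update(self, new_version):
        # The community-owned standard is, in effect, the contract for this hook.
        print(self.name + ": updating " + self.version + " -> " + new_version)
        self.version = new_version


class UpdateAgent:
    """An action agent that pushes a new version to every target exposing apply_update."""

    def __init__(self, new_version):
        self.new_version = new_version

    def act(self, targets):
        for target in targets:
            target.apply_update(self.new_version)


# UpdateAgent("2.1").act([ManagedTarget("mailserver", "2.0"), ManagedTarget("proxy", "1.9")])
```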

The term "Push" recently has been a hot marketing buzz-word. However, even a cursory analysis of products billed as "push" quickly indicate that the term is widely misused. Since push and pull are characteristics of agents, we propose the following three definitions to help clarify the concepts. 

  • Push is information that is:
    • Not requested
    • Delivered at the convenience of the publisher or agent
  • Pull is information that is:
    • Requested
    • Delivered at the convenience of the information consumer
  • Subscription is information that is:
    • Requested
    • Delivered at the convenience of the publisher or agent

Using these definitions we can begin to classify our requirements and intentions for specific actions and agents, and use the results to create more effective communication architectures. We also can use these definitions to untangle the products currently lumped together in the push category, separating them into those that are true push, those that are automated pull agents and those that are subscription agents. By more accurately defining our needs and our tools, we improve our effectiveness. As a general rule, true push agents should be tightly controlled and used sparingly in organizations to minimize information overload. Automated pull and subscription agents, which allow individual control by each information consumer, should be encouraged as a way to reduce information overload.
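
Because the three categories reduce to two properties of a delivery, whether it was requested and whose convenience governs its timing, the classification can be expressed mechanically. The following short Python sketch, in which the function and parameter names are illustrative assumptions, applies the definitions above to separate true push from automated pull and subscription.

```python
# A short sketch applying the three definitions above. The function and
# parameter names are illustrative assumptions.
def classify_delivery(requested: bool, delivered_at_consumer_convenience: bool) -> str:
    """Return 'push', 'pull' or 'subscription' per the definitions above."""
    if not requested:
        return "push"          # unrequested, delivered when the publisher or agent chooses
    if delivered_at_consumer_convenience:
        return "pull"          # requested and fetched when the consumer chooses
    return "subscription"      # requested once, then delivered on the publisher's schedule


# A channel a user signed up for, but that updates on the vendor's schedule,
# classifies as subscription rather than true push:
# classify_delivery(requested=True, delivered_at_consumer_convenience=False)  -> "subscription"
```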

It is easy to see that, before long, agent development will need to address how agents interact with each other rather than just with people. In the past we would have approached this problem by attempting to standardize processes across all agents. Our experience with I-nets suggests that a more productive approach may be to standardize content (as we did with HTML and HTTP) and use specialized agents to provide inter-agent interfaces. In this model, agents begin to look like informational versions of the hormones and enzymes in biological systems, rather than the highly structured parts of earlier machine models. One even begins to wonder whether the "one gene, one enzyme" rule of biological systems might be translated as "one agent, one function" for efficient I-net development.
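
Standardizing content rather than process might look something like the following Python sketch. The message fields and the translation step are purely illustrative assumptions, but they show the pattern: every agent agrees on one simple message format, and a small, specialized interface agent does nothing except translate a foreign format into the standard one.

```python
# A sketch of standardizing content rather than process. The message fields
# and the translation step are illustrative assumptions; the point is that
# agents agree on one simple format, the way browsers and servers agreed on
# HTML and HTTP.
import json
from datetime import datetime, timezone


def make_message(source_agent, subject, body):
    """Build the community-standard message that every agent agrees to emit."""
    return {
        "source": source_agent,
        "subject": subject,
        "body": body,
        "issued": datetime.now(timezone.utc).isoformat(),
    }


def to_wire(message):
    """Serialize the standard message for transport between agents."""
    return json.dumps(message)


def interface_agent(raw_tracking_report):
    """A specialized agent whose only function is to translate a foreign report
    into the standard message, so producer and consumer never need to know
    each other's internals."""
    return make_message(
        source_agent=raw_tracking_report["agent_id"],
        subject="change detected at " + raw_tracking_report["url"],
        body=raw_tracking_report["details"],
    )
```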

As we move to these new models, our goal-directed intranets and extranets will continue to require well-architected infrastructures to maintain flexibility and create efficiencies. This is true today because many tools are not based on community-owned standards, and most approaches are implicitly based on creating homogeneity rather than supporting diversity. As our infrastructures begin to incorporate community-owned standards and a preference for diversity, intranet and extranet architectures will remain important as evolving, competitive differentiators.

Summary and Conclusions

Agents and I-net standards are the building blocks that make individual customization of information possible in the unstructured environment of I-nets. Agents will begin to specialize and become much more than today's general-purpose search engines and "push" technologies. Successful application vendors will rethink their applications, replacing structured forms with increasingly specialized agents that support both sensory and action functions within specific, user-defined contexts. Well-developed architectures based on community-owned standards, and robust tools that support those standards, are critical to a successful implementation of agent technology. The new agents will move us from the "one-size-fits-all" approach of today's applications into a world that allows individual users to find, use and share what they want, the way that they want it.

Copyright 1997, Bart Meltzer and Steve Telleen
bart@cngroup.com - stevet@iorg.com
Last updated: June 24, 1997

