Techniques from the field of artificial intelligence, in particular so-called autonomous agents, can be used to implement a complementary style of interaction, which has been referred to as indirect management. Instead of user-initiated interaction via commands and direct manipulation, the user is engaged in a cooperative process in which both human and computer agents initiate communication, monitor events, and perform tasks. The agent is not necessarily an interface between the computer and the user; in fact, the most successful interface agents are those that do not prohibit the user from taking actions and fulfilling tasks personally. A novel approach to building interface agents is discussed, along with results from several prototype agents built using this approach, including agents that provide personalized assistance with meeting scheduling, e-mail handling, electronic news filtering, and selection of entertainment.
There are many factors that underlie the marked increase in machine intelligence quotient (MIQ). The most important factor is the use of what might be referred to as soft computing and, in particular, fuzzy logic, to mimic the ability of the human mind to effectively employ modes of reasoning that are approximate rather than exact. In 1990, high-MIQ consumer products employing fuzzy logic began to grow in number and visibility. Somewhat later, neural network techniques combined with fuzzy logic began to be employed in a wide variety of consumer products, endowing such products with the capability to adapt and learn from experience. Underlying this evolution of high-MIQ products and systems was an acceleration in the employment of soft computing, and especially fuzzy logic, in the conception and design of intelligent systems that can exploit the tolerance for imprecision and uncertainty, learn from experience, and adapt to changes in the operating conditions.
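To make the idea of approximate reasoning concrete, the following minimal sketch shows fuzzy inference of the kind the abstract alludes to; the membership functions, the two rules, and the fan-speed setting are hypothetical examples, not drawn from the article.

    # A minimal sketch of fuzzy inference (illustrative only; the membership
    # functions, rule base, and fan-speed example are hypothetical).

    def triangular(x, a, b, c):
        """Triangular membership function peaking at b, zero outside [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def fan_speed(temp_c):
        """Two fuzzy rules, defuzzified by a weighted average of their output levels."""
        warm = triangular(temp_c, 18, 25, 32)   # degree to which temp is "warm"
        hot  = triangular(temp_c, 28, 38, 48)   # degree to which temp is "hot"
        # Rule 1: IF temp is warm THEN speed is medium (output level 50%)
        # Rule 2: IF temp is hot  THEN speed is high   (output level 90%)
        num = warm * 50 + hot * 90
        den = warm + hot
        return num / den if den else 0.0        # fully "cold" inputs yield speed 0

    print(fan_speed(30))   # partially warm and partially hot -> a speed between 50 and 90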
While most software programs provide their users with significant value when used in isolation, there is increasing demand for programs that can interoperate, that is, exchange information and services with other programs and thereby solve problems that cannot be solved alone. Agent-based software engineering was invented to facilitate the creation of software able to interoperate in such settings. In this approach to software development, application programs are written as software agents, i.e., software components that communicate with their peers by exchanging messages in an expressive agent communication language. Agent-based software engineering is often compared to object-oriented programming. Three questions are raised about the concept of agent-based software engineering: 1. What is an appropriate agent communication language? 2. How are agents built that are capable of communicating in this language? 3. What communication architectures are conducive to cooperation?
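As a rough illustration of agents exchanging messages in a communication language, the sketch below passes KQML-style performatives (ask-one, tell) between two toy agents; the agent names, the fare-lookup task, and the content syntax are invented for illustration and do not reproduce any particular agent communication language specification.

    # A minimal sketch of KQML-style agent messages (only the performative names
    # follow KQML usage; agents and content language are invented).

    from dataclasses import dataclass

    @dataclass
    class Message:
        performative: str   # e.g. "ask-one", "tell"
        sender: str
        receiver: str
        content: str        # expression in some agreed content language

    class FlightAgent:
        """Answers fare queries; a stand-in for a real service agent."""
        fares = {("SFO", "JFK"): 420}

        def handle(self, msg):
            if msg.performative == "ask-one":
                origin, dest = msg.content.split("->")
                fare = self.fares.get((origin, dest))
                return Message("tell", msg.receiver, msg.sender,
                               f"fare({origin},{dest},{fare})")

    broker = FlightAgent()
    query = Message("ask-one", "planner-agent", "flight-agent", "SFO->JFK")
    print(broker.handle(query))   # a tell message carrying the fare back to the requester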
Many artificial intelligence (AI) researchers have long wished to build robots, and their cousins called agents, that seem to think, feel, and live. It can be argued that while scientists may have succeeded mainly in recreating scientists, it is the artists who have come closest to understanding, and perhaps capturing, the essence of humanity that AI researchers ultimately seek. If this is true, then the results of artistic inquiry, especially insights into character animation such as those expressed in Disney Animation: The Illusion of Life by F. Thomas and O. Johnston, may provide key information for building computational models of believable interactive characters. These constructions are called believable agents, and their development is one of the goals of the Oz project research group. Details are presented.
The World-Wide Web (W3) was developed to be a pool of human knowledge, which would allow collaborators in remote sites to share their ideas and all aspects of a common project. Physicists and engineers at CERN, the European Particle Physics Laboratory in Geneva, Switzerland, collaborate with many other institutes to build the software and hardware for high-energy physics research. The idea of W3 was prompted by positive experience of a small home-brew personal hypertext system used for keeping track of personal information on a distributed project. The Web has expanded rapidly from its origins at CERN across the Internet irrespective of boundaries of nations or disciplines. An explanation of what W3 is, where it fits in with other systems in the field, and what the future holds is presented.
The multicast backbone (MBone) is a virtual network on top of the Internet providing a multicasting facility to the Internet. It began in March 1992, when the first audiocast on the Internet took place from the Internet Engineering Task Force (IETF) meeting in San Diego. At that event, 20 sites listened to the audiocast. Two years later, at the IETF meeting in Seattle, about 567 hosts in 15 countries tuned in to the 2 parallel broadcasting channels and also joined the discussion. As soon as some crucial tools existed, usage exploded. At the Swedish Institute of Computer Science (SICS), the contribution to traffic on the Swedish University Network (SUNET) increased from 26 GB per month in February 1993 to 69 GB per month in March 1993, mainly due to multicast traffic. At that time, SICS was the major connection point between the US and Europe in the MBone. The MBone has also caused severe problems in the NSFnet backbone, such as saturation of major international links, rendering them useless.
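For readers unfamiliar with the underlying mechanism, the sketch below joins an IP multicast group using the standard socket API; the group address and port are arbitrary examples, and the MBone's tunnelling between islands of multicast-capable routers is not modeled here.

    # A minimal sketch of receiving IP multicast with the standard socket API.
    # The group address and port are arbitrary examples.

    import socket
    import struct

    GROUP, PORT = "239.1.2.3", 5004             # illustrative group and port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Ask the kernel to join the group on the default interface (0.0.0.0).
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    data, addr = sock.recvfrom(65536)           # blocks until a packet arrives on the group
    print(len(data), "bytes from", addr)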
The Dexter Hypertext Reference Model is an attempt to capture, both formally and informally, the important abstractions found in a wide range of existing and future hypertext systems. The goal of the model is to provide a principled basis for comparing systems as well as for developing interchange and interoperability standards. The model is divided into 3 layers. The storage layer describes the network of nodes and links that is the essence of hypertext. The run-time layer describes mechanisms supporting the user's interaction with the hypertext. The within-component layer covers the content and structures within hypertext nodes. The focus of the model is on the storage layer and on the mechanisms of anchoring and presentation specification that form the interfaces between the storage layer and the within-component and run-time layers, respectively.
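A minimal data-structure sketch of the storage layer may help fix the terminology; the field names below are a simplification of the model's components, anchors, and links, not a faithful rendering of its formal specification.

    # A minimal sketch of Dexter-style storage-layer structures (simplified field
    # names, not the model's formal specification).

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Anchor:
        anchor_id: str
        value: str              # interpreted only by the within-component layer

    @dataclass
    class Component:
        uid: str
        content: str            # opaque to the storage layer
        anchors: List[Anchor] = field(default_factory=list)

    @dataclass
    class Link:
        uid: str
        endpoints: List[tuple]  # (component uid, anchor id, direction) specifiers

    node_a = Component("c1", "Dexter overview", [Anchor("a1", "chars 0-6")])
    node_b = Component("c2", "Storage layer details", [Anchor("a1", "title")])
    link = Link("l1", [("c1", "a1", "FROM"), ("c2", "a1", "TO")])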
Just four years ago, the only widely reported commercial application of neural network technology outside the financial industry was the airport baggage explosive detection system developed at Science Applications International Corporation (SAIC). Since that time scores of industrial and commercial applications have come into use, but the details of most of these systems are considered corporate secrets and are shrouded in secrecy. This hastening trend is due in part to the availability of an increasingly wide array of dedicated neural network hardware.
The fisheye lens approach to viewing and browsing graphs is explored. This approach shows the area of interest large and in detail, and shows other areas successively smaller and in less detail. It achieves this smooth integration of local detail and global context by repositioning and resizing elements of the graph. Topics discussed concerning fisheye views include terminology, a formal framework, and an implementation strategy. A fisheye lens seems to offer the advantages of other approaches to viewing and browsing a graph without their drawbacks. It is concluded that a fisheye view is one of many ways to display and explore a graph or any other structure. Discovering and quantifying the strengths and weaknesses of fisheye views are challenges for the future.
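The fisheye idea is commonly formalized with Furnas's degree-of-interest function, DOI(x) = API(x) - D(focus, x), where API is a priori interest and D is distance from the current focus. The sketch below applies that formulation to a small tree; the tree, the a priori interest function, and the visibility threshold are invented for illustration.

    # A minimal sketch of a fisheye degree-of-interest computation on a tree,
    # following DOI(x) = API(x) - D(focus, x). Tree and threshold are invented.

    TREE = {                      # child -> parent; None marks the root
        "root": None, "a": "root", "b": "root",
        "a1": "a", "a2": "a", "b1": "b",
    }

    def path_to_root(node):
        path = [node]
        while TREE[node] is not None:
            node = TREE[node]
            path.append(node)
        return path

    def distance(x, y):
        """Tree distance between x and y via their lowest common ancestor."""
        px, py = path_to_root(x), path_to_root(y)
        ancestors = set(py)
        lca = next(n for n in px if n in ancestors)
        return px.index(lca) + py.index(lca)

    def doi(node, focus):
        api = -(len(path_to_root(node)) - 1)   # a priori interest: higher near the root
        return api - distance(focus, node)

    focus = "a1"
    visible = [n for n in TREE if doi(n, focus) >= -4]   # keep only interesting-enough nodes
    print(sorted(visible, key=lambda n: -doi(n, focus)))
    # ['root', 'a', 'a1', 'b', 'a2'] -- detail near the focus, coarse context elsewhere; 'b1' pruned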
Usability engineering aims at improving interactive systems and their user interfaces. Measurable usability parameters fall into 2 broad categories: 1. subjective user preference measures, assessing how much the users like the system, and 2. objective performance measures, assessing how well users can actually use the system. A positive correlation between subjective preference and objective performance should be expected, since people can be expected to prefer computers that help them rather than hinder them. However, most computer professionals probably know of cases where users did not prefer the system that would seem to be better based on the objective performance measures. A meta-analysis of published comparisons between systems in which both subjective preference and objective performance have been measured is presented. Meta-analysis is the analysis of analyses, in which the results from a large number of individual studies are integrated in order to arrive at an overall conclusion.
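As a toy illustration of the kind of cross-study association such a meta-analysis examines, the sketch below correlates subjective preference with objective performance over a handful of invented study outcomes; the numbers and the choice of Pearson's r are illustrative only and do not reproduce the published analysis.

    # A minimal sketch of a cross-study comparison. The study outcomes below are
    # invented; the published meta-analysis used its own corpus and statistics.

    from statistics import correlation   # Pearson's r (Python 3.10+)

    # For each hypothetical study: mean preference rating and mean performance advantage
    preference  = [4.1, 3.2, 4.6, 2.8, 3.9]
    performance = [0.7, 0.3, 0.9, 0.4, 0.5]   # e.g. relative speedup of the preferred system

    print(f"r = {correlation(preference, performance):.2f}")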
Interest in the study of neural networks has grown remarkably in the last several years. This effort has been characterized in a variety of ways: as the study of brain-style computation, connectionist architectures, parallel distributed-processing systems, neuromorphic computation, and artificial neural systems. The common theme of these efforts has been an interest in looking at the brain as a model of a parallel computational device very different from a traditional serial computer.
Personal software assistants that help users with tasks like finding information, scheduling calendars, or managing work flow will require significant customization to each individual user. The potential of machine-learning methods to automatically create and maintain such customized knowledge for personal software assistants is explored. The design of one particular learning assistant, a calendar manager called Calendar APprentice (CAP), that learns its users' scheduling preferences from experience is described. Results are summarized from approximately 5 user-years of experience, during which CAP has learned an evolving set of several thousand rules that characterize scheduling preferences for each of its users. Based on this experience, machine-learning methods may play an important role in future personal software assistants.
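The following sketch gives a flavor of learning scheduling preferences from past meetings by simple majority over one feature; CAP itself induced rules from many more meeting features with decision-tree-style learning, which this toy example does not attempt to reproduce.

    # A minimal sketch of inducing a scheduling-preference rule from past
    # meetings by majority vote over one feature (toy example only).

    from collections import Counter, defaultdict

    # (attendee group, meeting duration in minutes, start time chosen by the user)
    history = [
        ("research-group", 60, "15:00"),
        ("research-group", 60, "15:00"),
        ("research-group", 30, "13:30"),
        ("dept-faculty",   60, "10:00"),
        ("dept-faculty",   60, "10:00"),
    ]

    def learn_rules(examples):
        """For each attendee group, predict the start time the user most often chose."""
        by_group = defaultdict(Counter)
        for group, _duration, start in examples:
            by_group[group][start] += 1
        return {group: times.most_common(1)[0][0] for group, times in by_group.items()}

    rules = learn_rules(history)
    print(rules.get("research-group"))   # suggested default start time: "15:00"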
The Internet Softbot (software robot) is a fully implemented artificial intelligence agent developed at the University of Washington. It uses a Unix shell and the World-Wide Web to interact with a range of Internet resources. The softbot's added value is threefold. First, it provides an integrated and expressive interface to the Internet. Second, the softbot dynamically chooses which facilities to invoke and in what sequence. Third, the softbot fluidly backtracks from one facility to another based on information collected at run time. As a result, the softbot's behavior changes in response to transient system conditions. The ideas underlying the softbot-based interface are discussed.
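A rough sketch of the backtracking behavior described above: try one facility, and fall back to another when it fails or returns nothing. The facility names and the e-mail lookup task are invented; the actual softbot planned over declarative models of Unix and Internet commands rather than a hard-coded list.

    # A minimal sketch of trying alternative facilities and backtracking on failure.
    # The facilities are simulated stand-ins, not real services.

    def finger_lookup(name):
        raise ConnectionError("host unreachable")      # simulate a transient failure

    def ldap_lookup(name):
        return {"alice": "alice@cs.example.edu"}.get(name)

    def web_lookup(name):
        return None                                    # simulate "not found"

    FACILITIES = [finger_lookup, ldap_lookup, web_lookup]

    def find_email(name):
        """Try each facility in turn; move on when one fails or returns nothing."""
        for facility in FACILITIES:
            try:
                result = facility(name)
            except OSError:                            # ConnectionError is a subclass
                continue                               # backtrack to the next facility
            if result:
                return result, facility.__name__
        return None, None

    print(find_email("alice"))   # ('alice@cs.example.edu', 'ldap_lookup')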
The technique of building user interface prototypes on paper and testing them with real users is called low-fidelity (lo-fi) prototyping. Lo-fi prototyping is a simple and effective tool that has failed to come into general use in the software community. Paper prototyping is potentially a breakthrough idea for organizations that have never tried it, since it allows developers to demonstrate the behavior of an interface very early in development and to test designs with real users. If quality is partially a function of the number of iterations and refinements a design undergoes before it enters the market, lo-fi prototyping is a technique that can dramatically increase quality. It is fast, it brings results early in development, and it allows a team to try far more ideas than it could with high-fidelity prototypes. The steps for building a lo-fi prototype include: 1. Assemble a kit. 2. Set a deadline. 3. Construct models, not illustrations. Steps for preparing for and conducting a test of the prototype are also discussed.
When a person tries to develop an understanding of an unfamiliar computer program or portion of a program, the informal, human-oriented expression of computational intent must be created or reconstructed through a process of analysis, experimentation, guessing, and crossword-puzzle-like assembly. The problem of discovering these human-oriented concepts and assigning them to their realization within a specific program or its context is the concept assignment problem. Programming-oriented concepts are signaled by the formal features of the programming language, or by other features that can be deductively or algorithmically derived from them, whereas human concept recognition appears to additionally use informal, inherently ambiguous tokens, require plausible reasoning, and rely heavily on a priori knowledge of specific domains. Thus, concept assignment is more like a decryption problem than a parsing problem. An example of this paradigm shift, in which a priori knowledge is used to drive the assignment of human-oriented concepts, is given.
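As a small, hypothetical illustration of recognition driven by informal tokens and a priori domain knowledge rather than by parsing, the sketch below matches a hand-built domain vocabulary against identifiers and comments; the vocabulary and the analyzed code fragment are invented.

    # A minimal sketch of informal concept recognition: matching a priori domain
    # vocabulary against identifiers and comments (vocabulary and code invented).

    import re

    DOMAIN_CONCEPTS = {
        "reservation": {"reserve", "booking", "seat", "hold"},
        "payment":     {"invoice", "charge", "refund", "card"},
    }

    source = '''
        def hold_seat(flight, pax):      # place a booking hold
            charge_card(pax.card, fare)  # then bill the customer
    '''

    tokens = set(re.findall(r"[a-z]+", source.lower()))

    for concept, vocab in DOMAIN_CONCEPTS.items():
        hits = tokens & vocab
        if hits:
            print(f"evidence for '{concept}': {sorted(hits)}")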
Visualization includes the study both of images and of how they are understood. It spans many academic disciplines, scientific fields, and domains of inquiry. However, if visualization is to continue to advance as an interdisciplinary science, it must become more than a grab bag of techniques for displaying data. The classification of visual information is discussed. Classifications structure domains of systematic inquiry and provide concepts for developing theories to identify anomalies and to predict future research needs. Extant taxonomies of graphs and images can be characterized as either functional or structural. Functional taxonomies focus on the intended use and purpose of the graphic material. In contrast, structural categories focus on the form of the image rather than its content. Features that characterize high-level categories of visual representations are identified, and a classification of visual representations is constructed.
Cryptography was introduced to the commercial world from the military by designers of automatic teller machine (ATM) systems in the early 1970s. Since then, ATM security techniques have inspired many of the other systems, especially those in which customers initiate low-value transactions that must nonetheless be accounted for. There are 3 common problems with ATM security: 1. program bugs, 2. postal interception of cards, and 3. thefts by bank staff. Almost all security failures are in fact due to implementation and management errors. The bulk of the computer security research and development budget is expended on activities that are of marginal relevance to real needs. The real problem is how to build robust security systems, and a number of recent research ideas are providing insights into how this can be achieved.
The most flexible way to tailor a software entity is to program it. The problem is that programming is too difficult for most people today. Consider children trying to learn programming: it is hypothesized that fewer than 10% of the children who are taught programming in school continue to program after the class ends. As a step toward solving end-user programming problems, a prototype system designed to allow children to program agents in the context of simulated microworlds is discussed. The approach is to apply the good user interface principles developed during the 1980s for personal computer applications to the process of programming. The key idea is to combine 2 powerful techniques: graphical rewrite rules and programming by demonstration. The combination appears to provide a major improvement in end users' ability to program agents.
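To suggest what a graphical rewrite rule amounts to computationally, the sketch below applies a before/after picture pair to a one-dimensional grid world; the cell encoding and the movement rule are invented and are not the representation used by the prototype described in the paper.

    # A minimal sketch of a graphical rewrite rule in a one-dimensional grid world.
    # Encoding ('A' agent, '.' empty, '#' wall) and the rule are invented.

    BEFORE = ["A", "."]      # picture: agent with an empty cell to its right
    AFTER  = [".", "A"]      # picture: agent has moved into that cell

    def apply_rule(world):
        """Scan the world; wherever the before-picture matches, substitute the after-picture."""
        world = list(world)
        i = 0
        while i <= len(world) - len(BEFORE):
            if world[i:i + len(BEFORE)] == BEFORE:
                world[i:i + len(AFTER)] = AFTER
                i += len(AFTER)          # skip past the rewritten cells
            else:
                i += 1
        return world

    w = list("A..#A.")
    for _ in range(3):
        w = apply_rule(w)
        print("".join(w))
    # .A.#.A  ->  ..A#.A  ->  ..A#.A  (the wall and the edge block further movement)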
From Computing Reviews, by Jeanine Meyer: The purpose of this paper is to convince the reader of the need for a general model for hypermedia and to present the Amsterdam hypermedia model (AHM) as fulfilling that need. Hypertext, which is described by the authors as a 'relatively mature discipline,' has the Dexter model, but the authors show that enhancing that model for hypermedia is not a straightforward task. In particular, it requires attention to issues of synchronization. The AHM includes attention to timing and composite objects as well as implementation issues such as channels and having the sources of components reside over a distributed system. The paper features one example and also describes an authoring and presentation environment called CMIFed. It is generally well written. The paper can be understood even if one has not studied the Dexter hypertext reference model or the CMIF multimedia document model and, in fact, could serve as an introduction to the issues involved. Too much of the focus, however, is on other systems and not on what AHM actually is. The authors do not demonstrate the model by using it to express the featured example. Moreover, to really merit the term 'model,' AHM should be shown to serve a substantial role in describing and implementing applications in terms of two or more distinct authoring or runtime environments. This is not done, though it appears well within the experience and understanding of the authors.
Before there were computers, there was thinking about the mind as a computer - as a machine. In this way, computer science and engineering trace part of their roots to natural examples. Within these fields of endeavor, AI drew its initial inspiration from nature, and work on computer-simulated brains received the lion's share of the early attention. Nature has also inspired other computational methods: in particular, Darwinian evolution has spawned a family of techniques called genetic algorithms (GAs) or evolutionary algorithms (EAs).
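A minimal genetic-algorithm sketch on the standard "one-max" toy problem shows the ingredients such methods share: a population of candidate bitstrings, fitness evaluation, selection, crossover, and mutation. The parameter choices below are arbitrary.

    # A minimal genetic-algorithm sketch on the "one-max" problem (maximize the
    # number of 1 bits). Population size, rates, and generations are arbitrary.

    import random
    random.seed(0)

    LENGTH, POP, GENS, MUT = 20, 30, 40, 0.02

    def fitness(bits):
        return sum(bits)

    def tournament(pop):
        """Binary tournament selection: pick two at random, keep the fitter."""
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    def crossover(p1, p2):
        cut = random.randrange(1, LENGTH)
        return p1[:cut] + p2[cut:]

    def mutate(bits):
        return [1 - b if random.random() < MUT else b for b in bits]

    pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
    for _ in range(GENS):
        pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP)]

    best = max(pop, key=fitness)
    print(fitness(best), "of", LENGTH)   # typically close to LENGTH after a few dozen generations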